Secure Coding mailing list archives

Could I use Java or C#? [was: Re: re-writing college books]


From: everhart at gce.com (Glenn and Mary Everhart)
Date: Sun, 12 Nov 2006 21:54:32 -0500

Crispin Cowan wrote:
Al Eridani wrote:
On 11/9/06, Crispin Cowan <crispin at novell.com> wrote:
  
Prior to Java, resorting to compiling to byte code (e.g. P-code back in
the Pascal days) was considered a lame kludge because the language
developers couldn't be bothered to write a real compiler.
    
"Post-Java, resorting to compiling to machine code is considered a lame
kludge because the language developers cannot be bothered to write a
real optimizer."
  
I don't see what a bytecode intermediate stage has to do with "real
optimizer". Very sophisticated optimizers have existed for native code
generators for a very long time.

Bytecode interpreter performance blows goats, so I'm going to assume you
are referring to JIT. The first-order effect of JIT is slow startup
time, but that's not an advantage either. So you must be claiming that
dynamic profiling (using runtime behavior to optimize code) is a major
advantage. It had better be, because the time constraints of doing your
optimization at JIT time restrict the amount of optimization you can do
vs. with a native code generator that gets to run off-line for as long
as it needs to.
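
To make the trade-off concrete, here is a rough Java sketch (illustration
only; the workload and iteration counts are arbitrary) that times the same
method cold and then again after enough repeated calls for the JIT to notice
it is hot. On a typical HotSpot JVM the warm timing is usually far lower,
and running the same class with the interpreter-only -Xint flag shows what
bytecode costs with no JIT at all.

    // JitWarmup.java -- illustrative sketch of JIT warm-up behaviour.
    public class JitWarmup {

        // A small, hot method the JIT is likely to compile once it has
        // run often enough.
        static long sumOfSquares(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += (long) i * i;
            }
            return total;
        }

        // Time a single call, in nanoseconds.
        static long timeOnce(int n) {
            long start = System.nanoTime();
            long result = sumOfSquares(n);
            long elapsed = System.nanoTime() - start;
            if (result == 42) {
                // Consume the result so the work is not optimised away.
                System.out.println("unlikely");
            }
            return elapsed;
        }

        public static void main(String[] args) {
            int n = 5000000;
            // First call: typically interpreted, so comparatively slow.
            System.out.println("cold call: " + timeOnce(n) + " ns");
            // Let the runtime observe the method being hot and compile it.
            for (int i = 0; i < 50; i++) {
                timeOnce(n);
            }
            // Later calls: usually much faster once compiled.
            System.out.println("warm call: " + timeOnce(n) + " ns");
        }
    }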

But yes, dynamic profiling can be an advantage. However, its use is not
restricted to bytecode systems. VMware, the Transmeta CPU, and DEC's
FX!32 (virtual machine emulation to run x86 code on Alpha CPUs) use
dynamic translation to optimize performance. It works, in that those
systems all do gain performance from dynamic profiling, but note also
the reputation that they all have for speed: poor.

And then there's "write once, run anywhere." Yeah ... right. I've run
Java applets and JavaScript applets, and the latter are vastly superior
for performance; worse, all too often the Java applets do not "run
anywhere" at all, only on very specific JVM implementations.

There's the nice property that bytecode can be type safe. I really like
that. But the bytecode checker is slow; do people really run it
habitually? More importantly, is type safety a valuable property for
*untrusted code* that you are going to have to sandbox anyway?
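
As a minimal sketch of the "sandbox anyway" point (illustration only, using
the 2006-era SecurityManager machinery, which much newer JDKs have since
deprecated): verified or not, untrusted code ends up running under a
security manager, and without an explicit policy grant even a simple file
read is refused. The file path below is only an example; the verifier
itself is controlled separately via HotSpot's -Xverify option.

    // SandboxSketch.java -- untrusted code under the classic Java sandbox.
    import java.io.FileReader;

    public class SandboxSketch {
        public static void main(String[] args) {
            // Install the default security manager. With no policy file
            // granting extra permissions, code gets only a very limited set.
            System.setSecurityManager(new SecurityManager());
            try {
                // An action untrusted code might attempt: reading an
                // arbitrary file. The path is only an example.
                FileReader reader = new FileReader("/etc/passwd");
                reader.close();
                System.out.println("read allowed");
            } catch (SecurityException e) {
                // Without a FilePermission grant, the sandbox refuses the read.
                System.out.println("blocked by sandbox: " + e);
            } catch (Exception e) {
                // e.g. the file does not exist on this system.
                System.out.println("other failure: " + e);
            }
        }
    }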

So I give up; what is it that's so great about bytecode? It looks a
*lot* like the Emperor is not wearing clothes to me.

Crispin

Consider that on VMS, thanks to the use everywhere of a single calling
standard and a linker that understands it, you can link any language
with any other language and get things to run. A single app with some pieces
in Fortran, C, Pascal, BASIC, and assembly language, all in one program, is
perfectly feasible. I have worked with such. Everything is compiled to
optimised machine code, and no interpreter is needed.

So why does anyone consider that multilingual development requires some kind
of interpretive runtime? The old P-system was current a LONG time ago now, and
it always did have speed problems (some pretty decent apps went down with that
ship in its day because they couldn't run as fast as native-code ones).

If there is some construct that NEEDS to be interpreted to gain something, it
can be justified on that basis (one such construct is sketched below). Using
interpretive runtimes just to link languages, or just to achieve portability
when source code portability works pretty well, thanks, seems wasteful to me.
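
One sketch of the sort of construct in question (illustration only, with a
made-up class name): choosing and loading an implementation class by name at
run time, which is far more natural under a managed runtime than under a
conventional compile-and-link toolchain.

    // DynamicLoad.java -- loading an implementation chosen at run time.
    public class DynamicLoad {
        public static void main(String[] args) {
            // "com.example.SomePlugin" is a made-up placeholder; normally the
            // name would come from a config file or a plugin directory.
            String className = (args.length > 0) ? args[0]
                                                 : "com.example.SomePlugin";
            try {
                // The runtime locates, loads, verifies and links the class on
                // demand, which a purely static linker cannot do ahead of time.
                Class<?> plugin = Class.forName(className);
                Object obj = plugin.newInstance(); // needs a public no-arg constructor
                System.out.println("loaded: " + obj.getClass().getName());
            } catch (Exception e) {
                System.out.println("could not load " + className + ": " + e);
            }
        }
    }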

Anybody know why .net uses a runtime of the sort it does?

Glenn Everhart



