Dailydave mailing list archives

Re: Mutex's, sheesh


From: mikeiscool <michaelslists () gmail com>
Date: Wed, 27 Sep 2006 13:53:44 +1000

On 9/27/06, Dave Aitel <dave.aitel () gmail com> wrote:
From this weblog here:
http://www.knowing.net/CategoryView,category,Concurrency.aspx
"Software development industry analysis by Larry O'Brien, the former
editor of Software Development and Computer Language"
"""
The CLR & JVM are based on abstract hardware. The virtual machines
have some things which immediately jump out as, let's say, "tough" for
parallelizability -- both have a model whereby separate threads are
responsible for coordinating their own access to shared memory (i.e.,
fields in objects). On the other hand, they have at least one thing
which jumps out as potentially a "very good" thing for
parallelizability -- their stacks are conceptually separate from main
memory, which may make the threading models easier to evolve (in a
world without pointers, data in the stack is inherently local to the
current thread.) The "inherently parallelizable" aspect of functional
languages arises from their exclusive use of the stack for volatile
state, but with the way the stack is generally conceived (as, y'know,
a stack) requiring pushing and popping and copying variables from one
to another, problems arise when copying large datastructures; thus my
thought that maybe the abstraction of the stack could be a "win."
"""

My thinking lately is this: programming with types is a basically
dumb thing to do. I know a lot of other people disagree, but having
runtime-defined types (as Python etc. do) is a hundred times better
than having compile-time types. The people who disagree with this are
all people who write in J2EE or C# and are managing truly massive
projects. They have boring jobs, so good luck to them.
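For what it's worth, the runtime-typing style being argued for looks like this in Python (`head` and `Fake` are made-up names for illustration; the point is that anything with a `.read()` method works, with no declared types anywhere):

```python
import io

def head(source, n=3):
    # No type annotations: 'source' just has to respond to .read()
    return source.read()[:n]

print(head(io.StringIO("hello world")))  # 'hel'

class Fake:
    def read(self):
        return "binary-ish"

print(head(Fake()))  # 'bin'
```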

For any serious program ten years from now, Python isn't going to cut
it. Not because it's not scalable enough, but I think the very concept
of scalability is going to change. A language should really be
agnostic as to how many computers it's running on and storing its data
on, and it should be agnostic as to where those computers are on the
Internet. Any language with a threading model is broken by design.

It's like when the Python guys realized they didn't want to care how
many bits were in an Integer, so they just made it a big number by
default. You shouldn't care about mutexes and locking. It's just silly
and backwards.
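The big-number example, concretely: Python's `int` is arbitrary precision, so the programmer never declares a width and nothing overflows -- the runtime manages the representation.

```python
# No declared width, no overflow: 2**64 doesn't fit in a machine word,
# but Python silently promotes to a big integer.
x = 2 ** 64
print(x + 1)  # 18446744073709551617

# Arithmetic stays exact at any size.
print((x * x) % 97)
```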

http://scalability.org/?p=106#more-106 has a quick rundown of the
state of things in the parallel programming world. In general, pretty
hardcore on optimizing for CPU, which, history tells us, is the wrong
thing to do. We need to optimize for programmer hours instead.

...preparing to get flamed...

i don't see how types get in the way of your other points. and most of
your other comments don't even make sense...

how does a language care about how many computers it runs on? it
doesn't. a program does. same with multithreaded-ness: the language
doesn't care; its API, or programs, or VM do.

and what do you mean 'optimise by programmer hours'? that's what is
done already, mostly. and that's what causes problems ...

anyway, let's say we don't care about mutexes, locking, and
synchronisation. so, umm, who deals with this for us? 'cause it's not
just going to 'work' if you ignore it.
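one possible answer, sketched in Python: *something* still deals with it. e.g. `queue.Queue` does the locking internally, so the programmer never touches a mutex -- but the mutex hasn't vanished, it's just moved into the library:

```python
import queue
import threading

# The synchronization lives inside Queue, not in user code.
q = queue.Queue()

def producer():
    for i in range(5):
        q.put(i)
    q.put(None)          # sentinel: no more items

def consumer(results):
    while True:
        item = q.get()   # blocking get; Queue handles the locking
        if item is None:
            break
        results.append(item)

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 2, 3, 4]
```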

i'm happy to accept something new ... but really, what the hell are
you talking about?

-- mic
_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave