Dailydave mailing list archives
Re: Asynchronous
From: Ben Nagy <ben () iagu net>
Date: Fri, 7 Oct 2011 13:03:08 +0545
On Fri, Oct 7, 2011 at 3:39 AM, Hatta <tmdhat () gmail com> wrote:
> On Wed, Oct 5, 2011 at 1:38 PM, Dave Aitel <dave () immunityinc com> wrote:
>> Frankly, I'm not a huge fan of it, but Chris is, and he's a better
>> programmer than me, so we'll leave it at that. Largely, I think people
>> are fans of async because most languages and kernels are terrible at
>> threads. Python, for example, does not have threads. "No worky worky",
>> as we say around here.
Ruby shares this issue, and I'd say the reasoning is the same as here:
http://docs.python.org/faq/library#can-t-we-get-rid-of-the-global-interpreter-lock

Among several good reasons for keeping a GIL (especially for extension writers), I'd like to highlight:

"This doesn't mean that you can't make good use of Python on multi-CPU machines! You just have to be creative with dividing the work up between multiple processes rather than multiple threads."

and then this:

"It has been suggested that the GIL should be a per-interpreter-state lock rather than truly global; interpreters then wouldn't be able to share objects. Unfortunately, this isn't likely to happen either. [...long snip...] And finally, once you have multiple interpreters not sharing any state, what have you gained over running each interpreter in a separate process?"

Bam!
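To make the FAQ's "divide the work between processes" suggestion concrete, here's a minimal Python sketch (the function and numbers are mine, not from the thread): CPU-bound work farmed out with multiprocessing, where each worker process gets its own interpreter and its own GIL, so it actually runs in parallel on a multi-core box.

```python
# Divide CPU-bound work across processes instead of threads.
# Each worker process has its own interpreter and its own GIL,
# so the chunks genuinely run in parallel on a multi-core machine.
from multiprocessing import Pool

def count_primes(limit):
    """Naive, deliberately CPU-bound: count the primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Four chunks of work, one per worker process.
        results = pool.map(count_primes, [10000, 10000, 10000, 10000])
    print(results)
```

The thread-based equivalent (`threading.Thread` doing the same calls) would be serialised by the GIL and run no faster than a single thread - which is exactly the FAQ's point.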
> I believe we're moving towards the massive use of threads in the near
> future, because processors are likely to grow in cores, not in speed.
> With many cores, many threads become reasonable. I'm pretty sure many
> will disagree --- that's the fun part of discussion, after all.
So, in light of the above, yes, I'd definitely disagree with you. As Dave said right up front: "And then eventually you're like "Why on earth am I spending so much time worrying about how efficient an algorithm that runs on one machine is?" and you go off and build something that scales horizontally onto multiple machines." And that's where I think 'we' are moving.

I am not a very good programmer, but scaling stuff sideways onto multiple machines, and machines with many (48, currently) cores, is pretty much all I do - so there's some chance that I represent a fraction of the programming community in terms of ability and goals. I ditched a fairly monolithic evented approach when the callback singularity created by adding new components made my brain explode, and went back to the Unix way - small utilities and pipes. For me, it's not a question of 'is it harder to debug a complex threaded beast or a complex async beast?' - reject the initial premise! (of building a complex beast).

Anyway, it turns out that once you know how to communicate between your small processes on a single machine, it is "simple" [1] to just extend that to multiple machines, because all the message buses / serialisation / RPC frameworks / cloudy candy toys work just as well over a network as on a single box.

Overall, it's my Ruby mantra. I would much rather just add more tin and enjoy development than make my life miserable by optimising performance.

Cheers,

ben

[1] * Not simple

_______________________________________________
Dailydave mailing list
Dailydave () lists immunityinc com
https://lists.immunityinc.com/mailman/listinfo/dailydave
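[Editorial sketch of the "single box extends to many boxes" point above: the same message-passing code can speak over a local transport or a network one. This Python example (addresses and payload invented for illustration) uses the standard library's `multiprocessing.connection`, whose Listener/Client API is identical whether the address is a Unix socket path or a host/port pair.]

```python
# The same send/recv code works whether the two ends share a machine
# (Unix socket / localhost) or sit on different boxes (TCP): only the
# address changes, not the message-passing logic.
from multiprocessing.connection import Listener, Client
import threading

# Port 0 lets the OS pick a free port; a real remote host/port pair
# would work identically for the cross-machine case.
listener = Listener(("localhost", 0), authkey=b"not-a-real-secret")

def worker(address):
    # A "small utility" end of the pipe: connect, do the work, reply.
    with Client(address, authkey=b"not-a-real-secret") as conn:
        job = conn.recv()
        conn.send(sum(job))

t = threading.Thread(target=worker, args=(listener.address,))
t.start()

with listener.accept() as conn:
    conn.send([1, 2, 3, 4])   # objects are serialised (pickled) for you
    result = conn.recv()
listener.close()
t.join()
print(result)
```

Swap `("localhost", 0)` for another machine's host and port and nothing else changes - which is roughly why the message buses and RPC frameworks mentioned above extend so painlessly across machines.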
Current thread:
- Asynchronous Dave Aitel (Oct 05)
- Re: Asynchronous Thomas Ptacek (Oct 05)
- Re: Asynchronous Isaac Dawson (Oct 05)
- Re: Asynchronous Meta (Oct 06)
- Re: Asynchronous Hatta (Oct 06)
- Re: Asynchronous Kyle Creyts (Oct 07)
- Re: Asynchronous Adam Crosby (Oct 08)
- Re: Asynchronous Ben Nagy (Oct 07)
- Re: Asynchronous Dominique Brezinski (Oct 08)
- Re: Asynchronous Kyle Creyts (Oct 07)
- Re: Asynchronous Sebastian Krahmer (Oct 07)
- Re: Asynchronous Robert Graham (Oct 10)
- Re: Asynchronous greg hoglund (Oct 30)