Thursday, 7 December 2006

Stackless Python

Stackless Python is in a much better position than it used to be for someone approaching it for the first time. Between Grant Olson's tutorial and the wiki pages, like the ones for tasklets, the scheduler and the examples, there is something to work from. But it is still a long way from being approachable, and I think this shows in the amount of use it sees. I recently realised just how hard it is for a newcomer to get the benefits they should be getting out of Stackless.

Andrew Dalke was playing around with my Stackless-compatible socket module (which can be found on the examples page in the wiki) and tested it by having two pieces of Python code execute in parallel, doing socket operations on the same thread (you can see Andrew's code in this post). Andrew was quite impressed by this, and justifiably so. But the fact that this is not possible with Stackless out of the box is indicative of how Stackless could be more approachable. The problem, and the reason we need a replacement socket module, is that any blocking call (like a normal socket call) blocks the Python interpreter and therefore also blocks the Stackless scheduler.
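To see the problem concretely, here is a minimal plain-Python sketch (the short timeout is only there to keep the example from hanging): while a blocking recv waits for data, nothing else on that OS thread gets to run.

```python
import socket

# A connected pair of sockets; nothing has been sent yet.
s1, s2 = socket.socketpair()
s2.settimeout(0.1)  # without this, the recv below would block forever

blocked = False
try:
    s2.recv(16)  # blocks the whole OS thread until data arrives or it times out
except socket.timeout:
    blocked = True

s1.close()
s2.close()

# Under Stackless, every tasklet scheduled on this thread would have been
# stalled for the full duration of that blocking call.
```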

The chief benefit of Stackless is being able to write arbitrary code in a straightforward synchronous manner and run it in parallel with other similarly written code. But when the most common resources you will want to use (most often file or socket IO) cannot be touched without blocking all your other running code, your code might look better, but you are losing the benefits you should be getting naturally from running your code in parallel as tasklets.
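Stackless itself supplies the tasklets, but the scheduling idea can be sketched in plain Python, with generators standing in for tasklets and `yield` standing in for `stackless.schedule()`. The scheduler and worker here are illustrative stand-ins, not the Stackless API:

```python
from collections import deque

def run(tasks):
    """Round-robin cooperative scheduler over generator 'tasklets'."""
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)          # run the task until it cooperatively yields
            ready.append(task)  # still alive: put it back in the queue
        except StopIteration:
            pass                # task finished

def worker(name, log, n):
    for i in range(n):
        log.append((name, i))
        yield  # cooperative yield, like stackless.schedule()

log = []
run([worker("a", log, 2), worker("b", log, 2)])
# The two workers interleave: [("a", 0), ("b", 0), ("a", 1), ("b", 1)]
```

Each worker reads as ordinary sequential code; the interleaving comes entirely from the scheduler, which is the effect Stackless gives you without needing explicit `yield` points in every function.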

Ok. So you now know to use the "Stackless-compatible" socket module, and most of your code which uses the socket module works in parallel (I'll let the 'most' sit for now). But what about file IO? What about subprocess calls? And any other blocking calls? It is possible to do as I did: use asynchronous IO (e.g. the asyncore module for sockets), wrap it with Stackless channels and provide a replacement module. But not all blocking calls have equivalent forms usable in an asynchronous manner.
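The asynchronous form such a replacement module builds on is ordinary readiness-based IO. As a minimal sketch (using `select` directly, rather than asyncore or Stackless channels), two sockets can be serviced on one thread without either one blocking the other:

```python
import select
import socket

# Two independent connected socket pairs.
a1, a2 = socket.socketpair()
b1, b2 = socket.socketpair()
for s in (a2, b2):
    s.setblocking(False)  # never block on these; rely on select instead

a1.sendall(b"ping")
b1.sendall(b"pong")

results = {}
pending = {a2: "a", b2: "b"}
while pending:
    # Wait until at least one socket has data, then service only those.
    readable, _, _ = select.select(list(pending), [], [])
    for s in readable:
        results[pending.pop(s)] = s.recv(16)

for s in (a1, a2, b1, b2):
    s.close()
# results == {"a": b"ping", "b": b"pong"}
```

A Stackless-compatible module hides this event loop behind the normal blocking-looking API, blocking only the calling tasklet on a channel until its socket is ready.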

This is a problem I have always been blind to with Stackless. On one hand, I use Stackless Python every workday in my employment at CCP Games, and have done for the last five years. But we provide our own Stackless-compatible socket and file IO, and it has always just been there doing its thing. Then there are the personal programming projects I have used Stackless in; I eventually evolved some code which wrapped asyncore to do the socket IO asynchronously in the background and didn't think much about it. Eventually I cleaned up this code and released it as the Stackless-compatible socket module, but only to support my MUD example code.

It had not occurred to me that Stackless would be more approachable and more naturally usable if this module were built into it. Andrew suggested a solution where a module would monkeypatch the Python runtime, replacing all the blocking calls it could with Stackless-compatible versions that work asynchronously in the background. I've done a little work on a module like this, and while it is a start, it still needs a lot more work.
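The monkeypatching approach boils down to swapping the blocking entry points for cooperative replacements at import time. A minimal sketch of the pattern follows; `CooperativeSocket`, `install` and `uninstall` are hypothetical names, and the wrapper here just delegates rather than actually doing asynchronous IO:

```python
import socket

_real_socket = socket.socket  # keep a reference to the original

class CooperativeSocket:
    """Stand-in for a Stackless-compatible socket.

    A real implementation would perform the IO asynchronously and block
    only the calling tasklet on a channel; this sketch simply delegates
    every operation to the original socket object.
    """
    def __init__(self, *args, **kwargs):
        self._sock = _real_socket(*args, **kwargs)

    def __getattr__(self, name):
        return getattr(self._sock, name)

def install():
    # After this, code that calls socket.socket(...) unknowingly
    # gets the cooperative version.
    socket.socket = CooperativeSocket

def uninstall():
    socket.socket = _real_socket

install()
patched = socket.socket is CooperativeSocket
uninstall()
restored = socket.socket is _real_socket
```

The appeal of this pattern is that existing code needs no changes at all: it keeps importing and calling the standard socket module, and the cooperative behaviour is substituted underneath.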

One last thought. Where blocking calls have no asynchronous alternative exposed to the interpreter that we could use to replace them, I lean towards feeling that this is something which should be built into Python, as compared to using ctypes to access asynchronous file IO, which is what my current monkeypatching module does for Windows and its IO completion ports. I am thinking this because of the new generator coroutines which Python has acquired. Isn't this IO problem something generators share? I have never had a situation where I found a need to use them, so I don't know for sure.
