Serving a WSGI app, WebSockets and static files with Twisted

Long time no post! Let’s solve that now, shall we?

A few days ago I started playing a bit with Flask, since I’m considering it as the framework for building an API server. I have no web development experience, and Flask looks like a great project, so I went with that.

I started with a tiny little hello world, and then I wanted to add some websockets and some CSS. Oh, the trouble. When I started looking for how to combine a Flask app with WebSockets I mostly found references to gevent-socketio, but I rather wanted to use Twisted this time, so I kept looking. Soon enough I found AutoBahn, a great WebSocket implementation for Twisted, which can be combined with a WSGI app. Brilliant! After seeing how AutoBahn manages to add the websocket route to the WSGI app, adding support for static files was kind of trivial.

Here is the result of my experiments, a really simple web app which consists of a Flask WSGI app, a WebSocket server and some static files, all served by the same process running Twisted. You may not want to do this in a production environment, but hey, I’m just playing here 🙂

[gist]https://gist.github.com/saghul/5961882[/gist]
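In case the embedded gist does not render, here is a minimal sketch of the idea, not the exact contents of the gist: a Flask app served through Twisted’s WSGIResource, an Autobahn WebSocket endpoint and a static files resource, all hanging off the same Site. The module paths assume a recent Autobahn release, and the route names (ws, assets) are just my choice for the example.

from flask import Flask
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.static import File
from twisted.web.wsgi import WSGIResource
from autobahn.twisted.websocket import WebSocketServerFactory, WebSocketServerProtocol
from autobahn.twisted.resource import WebSocketResource, WSGIRootResource

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from Flask!'

class EchoProtocol(WebSocketServerProtocol):
    def onMessage(self, payload, isBinary):
        # Echo every message back to the client.
        self.sendMessage(payload, isBinary)

ws_factory = WebSocketServerFactory('ws://127.0.0.1:8080')
ws_factory.protocol = EchoProtocol

# The WSGI app handles everything except the explicitly registered children.
wsgi_resource = WSGIResource(reactor, reactor.getThreadPool(), app)
root = WSGIRootResource(wsgi_resource, {
    b'ws': WebSocketResource(ws_factory),
    b'assets': File('./templates/assets'),
})

reactor.listenTCP(8080, Site(root))
reactor.run()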

Since Gist does not currently allow folders, make sure you keep this layout after downloading the files:

├── app.py
├── settings.py
└── templates
    ├── assets
    │   └── style.css
    └── index.html

We’ll use the twistd command line tool to launch our application, since it can take care of logging, running as a daemon, etc. To run it in the foreground:

twistd -n -l - -y app.py

This will launch the application in non-daemon mode and log to standard output.

Hope this helps someone, all feedback is more than welcome 🙂

:wq

Evergreen 0.0.4 released!

It’s been a while since I last posted around here! I made a few evergreen releases which are probably worth mentioning. They are pretty minor; no big changes have happened. The module which got most of the work is the io module, which I expect to keep improving, as well as adding cooperative UDP, TLS and file I/O support.

In addition, I created a couple of packages extending evergreen’s functionality.

If you are using evergreen, let me know! Hopefully I can continue to make it better bit by bit.

:wq


Evergreen: cooperative multitasking and i/o for Python

I’ve been working on-and-off on this project for almost a year during my free time, and after meditating about it I thought: “fuck it, ship it”. Allow me to introduce Evergreen: cooperative multitasking and i/o for Python.

“So, another framework?” I hear you say. Yes, it’s another async framework. But it’s my async framework. I’ve used a number of frameworks for developing servers in Python, such as Twisted, Tornado, Eventlet, Gevent and lately Tulip, and all of them have great and not-so-great things, so I decided to blend the ideas I gathered from all of them, add some opinionated decisions and some Stackless flavour, and Evergreen was the result.

[image: standards]

Evergreen is a framework which allows developers to write synchronous-looking code which is executed asynchronously in a cooperative manner. Evergreen presents an API which looks like the one you would use to write concurrent programs using threads or futures from the Python standard library. The facilities provided by Evergreen are, however, cooperative: while a task is busy waiting for some i/o, other tasks will have their chance to run.

“Show me the code!” I hear you say. Sure, it’s up here on GitHub, released under the MIT license. Since the usual example is a web crawler, here you have one.

Did I mention it supports Python 2 and 3?

“Is it production ready?” I hear you say. It’s still at a very early stage, but I believe the foundation is solid. However, the APIs provided by Evergreen may change a bit until I feel comfortable with them. All feedback is welcome, so if you give it a try do let me know!

I’d like to thank all authors of similar libraries for releasing their work as Open Source, so I could look into it and learn from it.

I hope Evergreen can help you solve some problems and you enjoy using it as much as I do developing it.

:wq


pyuv 0.10.0 released!

Today I’m happy to announce that pyuv 0.10.0 has been released! Following libuv’s versioning, this is a stable release, that is, no API changes will occur during the 0.10.x branch cycle.

It has been a while since the last stable release and there have been many changes, even though not all of them are directly visible in the public API. Here is a short list of the most relevant changes for version 0.10.0 since the 0.8 series:

  • Added a true signal watcher
  • Added ability to handle uncaught exceptions (Loop.excepthook)
  • Added TCP.open and UDP.open methods
  • Added support for compilation with Visual Studio in Windows
  • Added thread module with several thread synchronization primitives
  • Added mode parameter to Loop.run (default, once or nowait)
  • Added fileno and get_timeout methods to Loop
  • Added ability to cancel threadpool, getaddrinfo and fs requests
  • Added ability to stop the event loop (Loop.stop)
  • Moved getaddrinfo to util module
  • Removed builtin c-ares resolver
  • Removed get/set process title functions
  • Fixed numerous refcounting issues
  • Multiple fixes for Windows
  • Multiple memory related internal optimizations

There are many more changes, all listed in the changelog file.
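To give a feel for a couple of the additions, here is a small sketch using the new signal watcher, Loop.stop and the mode parameter to Loop.run. It follows my reading of the 0.10 documentation, so treat the exact names as an assumption rather than gospel.

import signal
import pyuv

def on_timer(timer_handle):
    print("tick")

def on_signal(signal_handle, signum):
    # Loop.stop, also new in this series, makes run() return early.
    print("got SIGINT, stopping the loop")
    signal_handle.loop.stop()

loop = pyuv.Loop.default_loop()

timer = pyuv.Timer(loop)
timer.start(on_timer, 1.0, 1.0)        # first timeout after 1s, repeat every 1s

sig = pyuv.Signal(loop)
sig.start(on_signal, signal.SIGINT)    # the new signal watcher

loop.run(pyuv.UV_RUN_DEFAULT)          # UV_RUN_ONCE / UV_RUN_NOWAIT also accepted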

I’m glad to say that pyuv is now in better shape than it has ever been. Not only because I have learned many things along the way, but also because I got really good pull requests and help which enhanced pyuv in many different ways. I’m not a Windows guy and got invaluable help from people who made pyuv work properly on Windows. Releases 0.9.5-6 contain more commits from others than from myself, and I love that!

Lastly, I’d like to thank the libuv core team, more specifically Ben and Bert. They do a great job both coding libuv itself and helping others get involved in the project. This is one of the projects I’m really happy to contribute to. Oh, I also scored 4th in the libuv contributions (in lines of code) for Node 0.10.0!

You can get the source code at the usual place, and check the updated documentation here.

Rose: a PEP-3156 compatible event loop based on pyuv

For those who don’t know, PEP 3156 is a proposal for asynchronous I/O in Python, starting with Python 3.3. Until now each framework (Twisted, Tornado, …) has defined its own interface for protocols and transports. This makes it very difficult, if not impossible, to reuse a protocol implementation across frameworks. PEP 3156 tries to fix that, among other things.

The reference implementation is called Tulip and can be found here. It’s a fast moving target, but it already contains working event loops for Windows and Unix systems. It uses pollers available in the select module for the Unix side, and a C module wrapping Windows IOCP functionality for Windows.

I was really excited to see this come through, so I started playing with it by implementing a pyuv based event loop. I called it rose. It was a lot easier to implement than expected and it currently passes the entire test suite 🙂

Code can be found on GitHub.

Here is a quick example, the usual echo server, using rose and tulip:

[gist]https://gist.github.com/saghul/4718429[/gist]
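In case the gist does not render, this is the rough shape of a PEP 3156 echo server. I have written it with the asyncio names that eventually grew out of tulip; the tulip/rose API of the time was similar but not identical, so take it as an illustration rather than the exact gist.

import asyncio

class EchoProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        # Echo everything back to the peer.
        self.transport.write(data)

loop = asyncio.get_event_loop()
server = loop.run_until_complete(loop.create_server(EchoProtocol, '127.0.0.1', 8888))
try:
    loop.run_forever()
finally:
    server.close()
    loop.close()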

Come and join the discussion in the python-ideas mailing list!

:wq

How do event loops work in Python?

I had the pleasure to give a presentation at the first ever Python Devroom at FOSDEM. I talked about how event loops work internally and how pyuv can help by abstracting a lot of the problems with a pretty simple to use API. I also introduced rose, a pyuv based PEP-3156 event loop implementation, but I’ll write a followup post on that 🙂

Thanks a lot to everyone who attended the talk, and for those who couldn’t, here are the slides!

[slideshare id=16349302&doc=howdoeventloopswork-130204164956-phpapp01]

:wq

TLS connections with pyuv and pyOpenSSL

Those of you who have been following the pyuv and/or libuv libraries may have run into this at some point: “how do I use TLS with this?”. pyuv provides something similar to a socket with a completion-style interface, but it only does TCP. There is also the Poll handle, which allows using a regular Python socket with pyuv.

Of course, this second approach is the quickest/easiest way to get TLS working, because Python sockets already have TLS support thanks to the ssl module. I wanted to experiment with adding some sort of TLS handle, in the same fashion as the TCP handle, that is, not with regular Python sockets.

There are two main libraries providing TLS support (in general): OpenSSL and GnuTLS. What I basically wanted to do was encrypt/decrypt the data in memory and read/write it to a pyuv TCP handle. OpenSSL has this functionality through the BIO API, but I didn’t see anything similar in GnuTLS at first glance, so I went with OpenSSL.
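The core of the memory BIO idea, stripped of all the pyuv plumbing, looks roughly like this with pyOpenSSL. Take it as a sketch: tcp_handle stands for whatever pyuv TCP handle the real code writes to, and error handling is left out.

from OpenSSL import SSL

ctx = SSL.Context(SSL.TLSv1_METHOD)
conn = SSL.Connection(ctx, None)   # no socket attached: memory BIO mode
conn.set_connect_state()           # act as a TLS client

def flush_ciphertext(tcp_handle):
    # Drain pending ciphertext from the BIO and push it out over TCP.
    while True:
        try:
            data = conn.bio_read(65536)
        except SSL.WantReadError:
            break
        tcp_handle.write(data)

def on_tcp_read(tcp_handle, data, error):
    # Feed ciphertext received from the network into the BIO, then try to
    # read decrypted application data out of the connection.
    conn.bio_write(data)
    try:
        plaintext = conn.recv(65536)
        print(plaintext)
    except SSL.WantReadError:
        pass                       # handshake (or record) still incomplete
    flush_ciphertext(tcp_handle)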

I created a quick TLS handle with the ideas expressed above, it can be found in this gist.

It contains the TLS handle, an example echo server and client, and some sample certificates. Here is the client implementation sample; for the rest, check the full gist.

[gist]https://gist.github.com/4599831[/gist]

It’s pretty basic, but I hope it serves as a starting point for using pyuv with TLS. I plan to analyze the performance compared to regular Python sockets in another blog post.

:wq

greenlet local storage on greenlet 0.4.0

Greenlet 0.4.0 brought an interesting new feature: an instance dictionary on each greenlet object, which makes it a lot simpler to implement greenlet local storage. Here is how greenlet local storage is currently implemented in Eventlet and in Gevent.

As can be seen, the implementation is not particularly straightforward, mainly due to the fact that the actual information needs to be stored in a separate entity and mapped to each greenlet.

Thanks to the instance dictionary added in 0.4.0, we can use an attribute in it to keep the locally stored objects. The plan is to use a dictionary called __local_dict__ and store the greenlet local attributes there. Here is what it looks like:

[gist]https://gist.github.com/4151154[/gist]
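If the gist does not render, the idea is roughly the following (a simplified sketch, not the exact gist code): each local instance keeps its attributes in a dictionary stored on the current greenlet’s __local_dict__.

from greenlet import getcurrent

class local(object):
    def _get_dict(self):
        current = getcurrent()
        try:
            local_dict = current.__local_dict__
        except AttributeError:
            # First use on this greenlet: create the storage dictionary.
            local_dict = current.__local_dict__ = {}
        # Keep one sub-dictionary per local instance.
        return local_dict.setdefault(id(self), {})

    def __getattr__(self, name):
        try:
            return self._get_dict()[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self._get_dict()[name] = value

    def __delattr__(self, name):
        try:
            del self._get_dict()[name]
        except KeyError:
            raise AttributeError(name)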

Hope it’s of use.

:wq

Using functools.partial instead of saving arguments

I’m a big fan of functools.partial myself. It allows you to take a function, preset some of its arguments and return another callable which you can call with the remaining arguments. Strictly speaking this is partial application rather than currying, but I’ve heard people refer to functools.partial like that.
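For those who haven’t used it, a one-minute example (my own, just for illustration):

from functools import partial

def log(level, message):
    print('[%s] %s' % (level, message))

warn = partial(log, 'WARN')   # the level argument is preset
warn('disk almost full')      # prints: [WARN] disk almost full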

Today while browsing the source code for the futures module I came across this class:

[gist]https://gist.github.com/4048428[/gist]

The moment I saw that func, args and kwargs were saved as attributes in the instance I thought: “why not use partial and save a single attribute?”. Then I thought that maybe performance had something to do here, so I wrote a dead simple stupid test to check it out:

[gist]https://gist.github.com/4048405[/gist]
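If the gists do not render, the comparison was along these lines; this is my reconstruction, not the exact code from the gists:

import functools

class WorkItem(object):
    def __init__(self, func, *args, **kwargs):
        self.func = func
        self.args = args
        self.kwargs = kwargs

    def run(self):
        return self.func(*self.args, **self.kwargs)

class WorkItemPartial(object):
    def __init__(self, func, *args, **kwargs):
        self.func = functools.partial(func, *args, **kwargs)

    def run(self):
        return self.func()

def do_test(cls, n=100):
    for i in range(n):
        cls(lambda a, b: a + b, i, b=1).run()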

Here are the results using CPython 2.7.3:

In [3]: %timeit testpartial.do_test(testpartial.WorkItem)
100000 loops, best of 3: 5.26 us per loop

In [4]: %timeit testpartial.do_test(testpartial.WorkItemPartial)
100000 loops, best of 3: 2.65 us per loop

We are talking microseconds here, but still the version with partial is almost twice as fast.

Now, let’s see how PyPy performs:

In [9]: %timeit -n 10000 testpartial.do_test(testpartial.WorkItem)
10000 loops, best of 3: 154 ns per loop
Compiler time: 554050.78 s

In [10]: %timeit -n 10000 testpartial.do_test(testpartial.WorkItemPartial)
10000 loops, best of 3: 2.49 us per loop

Fun, the version without partial goes into the nanoseconds! And the one with partial doesn’t improve much compared to CPython. Interesting.

So what’s the takeaway here? Well, whenever I see fit I use partial; the code looks nicer and it’s apparently faster, so why not? 🙂

:wq

Fast(er) locks in Python?

While searching for some information on Python locks I recently ran across this great post by David Beazley. In it, he explains how the synchronization primitives in the Python standard library are implemented. Basically, Lock is implemented as a binary semaphore in C, and the rest are implemented in pure Python. Even though the post is from 2009, this is still the case. UPDATE: As Antoine Pitrou points out in the comments, starting with CPython 3.2, RLock is now implemented in C.

This got me thinking. As you know I’ve created pyuv, a Python wrapper for libuv, and libuv includes cross-platform implementations of mutexes, semaphores, conditions, rwlocks and barriers, which I never bothered to add to pyuv because I thought they didn’t add any value. After reading David’s article, though, I decided to do a quick test: implement wrappers for a mutex and a condition variable and use them in a Queue implementation, in order to see if there was any difference in performance. Not that I ever ran into performance issues related to that, but I was curious anyway 🙂

Someone may think: “oh, but given that Python has the GIL, why would using multiple threads and speeding up the locks matter?”. The GIL is released whenever an IO operation is performed, so if your Python application is multithreaded and it mainly deals with IO-bound tasks, the GIL is not that relevant. If your application is CPU bound, however, you’d better have a look at the multiprocessing module.

So, let’s get into the code! I implemented Barrier, Condition, Mutex, RWLock and Semaphore in this pyuv branch, which directly wrap their libuv counterparts. Then I copied the Queue implementation from the standard library and used the freshly wrapped synchronization primitives:

[gist]https://gist.github.com/3997233[/gist]
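For context, the relevant shape of that change is sketched below; this is not the gist itself. The stdlib Queue builds on a mutex plus condition variables, so swapping in different primitives is mostly a matter of changing what gets instantiated in __init__ (uvlocks here is a made-up name standing for the libuv-backed wrappers from my branch).

import collections
import threading   # the experiment replaces these primitives with the libuv-backed ones
# import uvlocks   # hypothetical module exposing the pyuv wrappers

class Queue(object):
    def __init__(self):
        self.queue = collections.deque()
        self.mutex = threading.Lock()                      # -> uvlocks.Mutex()
        self.not_empty = threading.Condition(self.mutex)   # -> uvlocks.Condition(self.mutex)

    def put(self, item):
        with self.mutex:
            self.queue.append(item)
            self.not_empty.notify()

    def get(self):
        with self.not_empty:
            while not self.queue:
                self.not_empty.wait()
            return self.queue.popleft()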

For the testing part, I used the timeit function from IPython with 5 runs. Not sure if it’s the best way, but results suggest it is a good way 🙂 Here is the test script:

[gist]https://gist.github.com/3997244[/gist]

Here are the results:

The tests were run with 2, 4 and 100 threads, and since I was testing performance, I added PyPy to the mix. Now, as you can see in the results, the custom Queue is about 33% faster than the one in the standard library, so I was pretty happy about that. Until I tested PyPy. It just beats the shit out of both, which is awesome 🙂

The performance increase on CPython is nice; there is a downside, however: libuv treats errors in these primitives a bit “abruptly”, it calls abort(). This means that if you use the locks incorrectly your program will core dump. I personally like it, because it helps you find and fix the problem right away, but not everyone may like it.

:wq