It’s very annoying when a program crashes, but it happens. When it does, if the system was configured to dump core, you’ll get a core file with hopefully enough information to debug the crash.
But we don’t know whether the system will be configured to dump core, so we may want to do it from our Python program, using the resource module in the standard library. The following example shows how to enable core dumps if the --dump-core option is specified:
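Something along these lines (a minimal sketch; the option parsing is omitted, and the function name is mine):

```python
import resource

def enable_core_dumps():
    """Raise the core file size soft limit as far as the hard limit allows."""
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    # A process may always raise its soft limit up to the hard limit.
    # If the hard limit is 0 (common for non-root users), cores stay
    # disabled, which would explain the PS below.
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
    return resource.getrlimit(resource.RLIMIT_CORE)

enable_core_dumps()
```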
PS: This worked just fine for the root user but didn’t seem to do the job for a normal user.
The registry is a quite common design pattern which I recently needed to implement, and since I found Python class decorators very useful for it, I thought I’d write about it. :-)
In the registry pattern we have a global object, called the registry (or registrar), which holds references to shared objects and is the only entity that can access them.
In my case, I had a plugin-style architecture in which different plugins might be dynamically added without any configuration. The first implementation I made used a metaclass to add each plugin class to the registry:
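It looked more or less like this (a sketch; the names are illustrative, and I’m using Python 3 metaclass syntax here):

```python
registry = {}

class PluginMeta(type):
    """Metaclass that registers every Plugin subclass on creation."""
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        if bases:  # skip the Plugin base class itself
            registry[name] = cls

class Plugin(metaclass=PluginMeta):
    pass

class EchoPlugin(Plugin):  # automatically ends up in the registry
    pass
```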
That worked, at least at the beginning. But later I found a limitation: plugins needed the Plugin metaclass, so I couldn’t use another metaclass on them if I needed to. And at some point I did need that: I wanted the plugins to use the Singleton metaclass (to implement the Singleton pattern), but then I couldn’t use the Plugin metaclass, because only one metaclass can be used per class (and I didn’t want to create a SingletonPlugin metaclass). Class decorators saved the day:
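A sketch of the decorator version (again, the names are mine):

```python
registry = {}

def register(cls):
    """Class decorator: add the class to the registry and return it unchanged."""
    registry[cls.__name__] = cls
    return cls

@register
class EchoPlugin:  # free to use whatever metaclass it wants now
    pass
```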
Class decorators are supported only in Python >= 2.6, but that wasn’t a problem for me. :-)
This past week I’ve been enhancing my Vim configuration, since it’s the editor I use all the time. While browsing for interesting plugins I ran across Ack, a plugin that promises to be an enhancement to Vim’s builtin grep capability.
The Vim plugin uses the ack command on your system, which is a Perl replacement for grep. After discovering it on http://betterthangrep.com I never looked back. Ack is so much better than grep for finding text in code!
One of the good things is that Ack will automagically ignore files belonging to well-known version control systems. According to the documentation:
Directories ignored by default:
autom4te.cache, blib, _build, .bzr, .cdv, cover_db, CVS, _darcs, ~.dep,
~.dot, .git, .hg, _MTN, ~.nib, .pc, ~.plst, RCS, SCCS, _sgbak and .svn
Let’s see a usage example: running something like `ack max_clients` from the top of a source tree searches it recursively, and we don’t need to explicitly exclude any of the directories listed above, because Ack will do it for us.
In Python we have a way of creating anonymous functions at runtime: lambda. With lambda we can create a function that will not be bound to any name. That is, we don’t need to def a function; lambda can be used instead.
A lambda can also be used to call another function with certain arguments fixed:
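For example (toy functions, just to illustrate):

```python
# An anonymous function; we only assign it to a name so we can call it.
square = lambda x: x * x
print(square(4))  # 16

# Calling another function with one argument fixed.
def power(base, exponent):
    return base ** exponent

cube = lambda x: power(x, 3)
print(cube(2))  # 8
```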
However, lambda functions use late binding for the variables they reference. That is, when you create a lambda function that uses a variable from the enclosing scope, the value isn’t copied immediately; a reference to the scope is kept, and the value is resolved when the function is called. Thus, if the value of that variable changes within the scope, the lambda function will see the changed value. Let’s see it in action:
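A minimal demonstration of the problem:

```python
funcs = []
for i in range(3):
    funcs.append(lambda: i)

# Every lambda resolves i when called, and by then the loop has finished.
print([f() for f in funcs])  # [2, 2, 2], not [0, 1, 2]
```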
Unexpected? It depends, but it was definitely unexpected when I first ran into it. The solution is to bind early, copying the value at the time the function is created. We can do this in two ways:
Using a default parameter value in the lambda function, or using functools.partial:
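Both fixes, sketched side by side:

```python
from functools import partial

# 1) Default parameter values are evaluated at definition time,
#    so each lambda keeps its own copy of i.
fixed = [lambda i=i: i for i in range(3)]
print([f() for f in fixed])  # [0, 1, 2]

# 2) functools.partial binds the argument explicitly at creation time.
identity = lambda x: x
bound = [partial(identity, i) for i in range(3)]
print([f() for f in bound])  # [0, 1, 2]
```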
I personally like the second approach, using partial, since it’s more explicit, and I really see the first one as a workaround for how lambda functions work.
Sometimes I find myself wanting to transfer one (or several) files to a different host or mobile device, and getting the USB cable or using SSH is not always possible. In those cases I just use this super-silly Python one-liner:
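It was most likely the standard library’s static file server (`python -m SimpleHTTPServer` on Python 2, `python -m http.server` on Python 3), which is roughly equivalent to:

```python
# Serve the current directory over HTTP so any device on the network
# can grab files with a browser (port 0 picks a free port; in practice
# you'd use a fixed one like 8000).
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("", 0), SimpleHTTPRequestHandler)
print("Serving on port", server.server_address[1])
# server.serve_forever()  # visit http://<your-ip>:<port>/ from the other device
```

Then just browse to the machine’s IP from the phone and download away.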
And there you go.
I usually need to browse through lots of RFCs, both for work and leisure. Reading them in the browser or even in a PDF reader just doesn’t feel right to me, and long ago I found the ultimate tool for reading RFCs on my computer: qRFCView.
However, one day I wanted to read RFC 6026 and suddenly realized that for some reason qRFCView wouldn’t let me choose an RFC number greater than 5000.
A quick search revealed that the number is hardcoded 🙁 It has been listed as an important bug in Debian since January 2009.
The fix is really easy: just get the Debian source for qrfcview, modify src/main.cpp to raise the number to 9999 or whatever you think is best, and run debuild again.
I’m a happy iPhone user. I was lucky to get the first iPhone soon after it was available, and I just bought a new iPhone 4. This is my first time having an unlimited data plan on a mobile phone 🙂 so the first thing I did was configure my email accounts, both personal and work.
It felt weird to configure a GMail account as Microsoft Exchange in order to sync contacts and get push notifications, but that did the trick. For my work email account, though, I didn’t have push notifications, and polling isn’t cool at all. I know very little about email itself, but I found that in order to have push email you need an IMAP IDLE capable server and client.
As you can see below, the server does support IMAP IDLE:
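You can check this yourself from Python: after connecting, imaplib exposes the server’s CAPABILITY response (the hostname below is a placeholder):

```python
import imaplib

def supports_idle(capabilities):
    """True if the IDLE extension shows up in a CAPABILITY response."""
    return any(cap.upper() == "IDLE" for cap in capabilities)

# Against a real server it would look like:
# conn = imaplib.IMAP4_SSL("mail.example.com")
# print(supports_idle(conn.capabilities))

print(supports_idle(("IMAP4REV1", "IDLE", "NAMESPACE")))  # True
```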
but it looks like the iPhone Mail app doesn’t 🙁
I started looking around and finally found the solution: use an application called Prowl (2.39€) which will push whatever notification it gets through a REST API. The idea is basically to create a console IMAP IDLE client and use the Prowl API to send a new notification. Sounds like fun!
It runs in the background, can connect to an arbitrary number of accounts (configured with an ini-style file) and sends push notifications on their behalf. It also logs to syslog when running in the background.
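The notification side is just an HTTP POST to Prowl’s public API; a sketch (the API key is a placeholder and the helper name is mine):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

PROWL_ADD_URL = "https://api.prowlapp.com/publicapi/add"

def prowl_body(apikey, application, event, description):
    """Build the POST body for a Prowl 'add' request."""
    return urlencode({
        "apikey": apikey,
        "application": application,
        "event": event,
        "description": description,
    }).encode()

# To actually push a notification:
# urlopen(PROWL_ADD_URL, data=prowl_body("MY-API-KEY", "imapidle",
#                                        "New mail", "1 new message"))
```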
This Prowl thing is just awesome; this is one example of what you can do with it, but anyone can write a really simple script to get instant notifications on their iPhone. Just give it a try!
In Python we have several types of objects for storing values in an array-like way: lists, tuples, dictionaries (which are really more like hash tables) and sets. Lists and sets might look alike, but they are different. Let’s do a quick test:
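Something like this, using timeit (your absolute numbers will differ from the ones quoted below):

```python
import timeit

setup = "l = list(range(1000)); s = set(l)"

# Iterating over every item: linear for both.
print(timeit.timeit("for x in l: pass", setup=setup, number=1000))
print(timeit.timeit("for x in s: pass", setup=setup, number=1000))

# Membership test: the list scans, the set hashes.
print(timeit.timeit("999 in l", setup=setup, number=10000))
print(timeit.timeit("999 in s", setup=setup, number=10000))
```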
In the above snippet, we can see that iterating over all items in a list takes just a little longer than over a set: 8.99µs vs 6.87µs; we are talking microseconds here!
Now, if we look at how long it takes to check whether the list or set contains an object, we can see the difference: 1.99µs vs 140ns. Sets are an order of magnitude faster!
Why is that? A Python list is implemented as a dynamic array of object references, so checking whether it contains a value means scanning the items one by one: the time complexity is O(n). Sets, on the other hand, are implemented as hash tables, so the key is hashed and found (or not) almost instantly, with an average time complexity of O(1).
For the curious, this is the list object declaration in Include/listobject.h
and this is the set object declaration, in Include/setobject.h
More on Python and hash tables here:
When I download a library for some API, I usually see that people tend to use different JSON libraries, and sometimes I just don’t have the chosen one installed, even though I have another one that could do the job just fine.
A while ago I ran across this while I was having a look at the Tropo WebAPI library (http://github.com/tropo/tropo-webapi-python) and ended up adding the following code:
try:
    import cjson as jsonlib
    jsonlib.dumps = jsonlib.encode
    jsonlib.loads = jsonlib.decode
except ImportError:
    try:
        from django.utils import simplejson as jsonlib
    except ImportError:
        try:
            import simplejson as jsonlib
        except ImportError:
            import json as jsonlib
The most-used JSON libraries are tried in rough order of speed. You might want to read a nice comparison between libraries here:
A while ago I was bored at home and decided to play a bit with the Twitter API. As a result I created a simple Twitter bot which retweeted everything with the *#asterisk* hashtag (http://twitter.com/asteriskbot).
Its functionality was very simple and I didn’t even touch the (crappy) code for months, but Twitter decided to drop basic authentication support in favor of OAuth, so it was the right time to update the bot.
Since the bot was created, lots of other bots have appeared, and it’s annoying to get the same tweet over and over again, so I thought I’d try to make the bot a bit *smarter*. The main goal was to detect whether a tweet had already been tweeted (or RTd), so as not to repeat things. AFAIK there are two types of retweets:
- “some text RT @user1 RT @user2 some super cool tweet here!”
- “some super cool tweet! (via @user)”
The current bot detects both formats by using regular expressions and extracts the tweet itself. However, I felt this might not be enough, because we could get the retweet *before* the real tweet, so it’d be great to save the entire tweet and then look for it.
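The detection could look something like this (the patterns are my reconstruction, not the bot’s actual code):

```python
import re

# Format 1: "some text RT @user ..." chains; format 2: "... (via @user)".
RT_RE = re.compile(r"RT @\w+:?\s*(?P<tweet>.+)$")
VIA_RE = re.compile(r"^(?P<tweet>.+?)\s*\(via @\w+\)$")

def extract_original(text):
    """Strip retweet decorations, recursing through nested RT chains."""
    m = RT_RE.search(text)
    if m:
        return extract_original(m.group("tweet"))
    m = VIA_RE.match(text)
    if m:
        return m.group("tweet")
    return text

print(extract_original("some text RT @user1 RT @user2 some super cool tweet here!"))
print(extract_original("some super cool tweet! (via @user)"))
```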
As the bot was using an SQLite database, I decided to use its *Full Text Search* (fts3) capability, inspired by a colleague. With FTS, searching for an entire string is amazingly faster than doing a regular SELECT with LIKE. Here is an example taken from the SQLite website http://www.sqlite.org/fts3.html
CREATE VIRTUAL TABLE enrondata1 USING fts3(content TEXT); /* FTS3 table */
CREATE TABLE enrondata2(content TEXT); /* Ordinary table */
SELECT count(*) FROM enrondata1 WHERE content MATCH 'linux'; /* 0.03 seconds */
SELECT count(*) FROM enrondata2 WHERE content LIKE '%linux%'; /* 22.5 seconds */
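From Python, the same idea with the sqlite3 module looks like this (assuming your SQLite build has FTS3 compiled in, as most do):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE tweets USING fts3(content TEXT)")
conn.execute("INSERT INTO tweets(content) VALUES (?)",
             ("some super cool tweet here!",))

# MATCH uses the full-text index instead of scanning every row.
count, = conn.execute(
    "SELECT count(*) FROM tweets WHERE content MATCH 'cool'").fetchone()
print(count)  # 1
```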
The (silly) code for this bot is available on GitHub: http://github.com/saghul/twitterbot