pycares 2.0.0 released!

Tonight I’m happy to announce that pycares (the Python bindings for c-ares, an asynchronous DNS resolver) has reached version 2.0.0.

This release contains a few important features:

  • CFFI port for PyPy (it can optionally also be used in CPython)
  • Python 3.5 support
  • c-ares updated to version 1.11.1

Plus some minor bugfixes. I’d like to thank Jesse (@boytm) for the CFFI patch; it was a massive contribution, thank you so much!

Binary wheels are available for Python 2.7, 3.3, 3.4 and 3.5 on Windows (both 32 and 64-bit); check out the PyPI page.

Enjoy!

libuv internals: the OSX select(2) trick

In case you didn’t know, libuv is an asynchronous platform abstraction library which you should totally check out.

libuv does a lot more than abstract epoll, kqueue and friends, and today we’re going to take a look at one of the many tricks libuv provides.

On OSX libuv uses the kqueue interface for polling sockets. This is currently the most efficient way to do so. For file i/o, however, libuv uses a thread pool, but that’s probably a topic for another post 🙂

Back to kqueue. While it works perfectly fine for sockets, we use it for other types of file descriptors too: pipes (with uv_pipe_t) and ttys (with uv_tty_t). On Unix systems, uv_pipe_t can also be used to open an arbitrary file descriptor and treat it as a libuv stream, through uv_pipe_open.
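
As a quick illustration, here is a minimal sketch using libuv’s public API (error checking omitted for brevity) which wraps an already-open file descriptor in a uv_pipe_t and reads from it like any other stream:

/* Minimal sketch: treat an existing fd (stdin here) as a libuv stream.
 * Works best when stdin is actually a pipe, e.g. `echo hi | ./demo`.
 * Build with: cc demo.c -luv */
#include <stdio.h>
#include <stdlib.h>
#include <uv.h>

static void alloc_cb(uv_handle_t *handle, size_t suggested_size, uv_buf_t *buf) {
    buf->base = malloc(suggested_size);
    buf->len = suggested_size;
}

static void read_cb(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf) {
    if (nread > 0)
        fwrite(buf->base, 1, nread, stdout);  /* echo whatever we read */
    else if (nread < 0)
        uv_close((uv_handle_t *) stream, NULL);  /* EOF or error */
    free(buf->base);
}

int main(void) {
    uv_loop_t *loop = uv_default_loop();
    uv_pipe_t pipe_handle;
    uv_pipe_init(loop, &pipe_handle, 0);
    uv_pipe_open(&pipe_handle, 0);  /* 0 == stdin; any fd would do */
    uv_read_start((uv_stream_t *) &pipe_handle, alloc_cb, read_cb);
    return uv_run(loop, UV_RUN_DEFAULT);
}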

When a tty is opened with uv_tty_init, libuv opens /dev/tty instead (it’s a bit more complicated nowadays, but let’s assume that for the sake of simplicity) in order to be able to put stdin/out/err in non-blocking mode without affecting other processes which share them. There is a problem, however: file descriptors pointing to /dev/tty don’t work with kqueue. Ouch. You can verify that with this little Python script, which is a port of the test libuv does:

import errno
import os
import select


def test_fd(fd):
    # Try to register the fd with kqueue: if kqueue rejects it, the
    # returned event is flagged KQ_EV_ERROR with data set to EINVAL.
    kqueue = select.kqueue()
    kevent = select.kevent(fd, filter=select.KQ_FILTER_READ, flags=select.KQ_EV_ADD | select.KQ_EV_ENABLE)
    events = kqueue.control([kevent], 1, 0.001)
    if not events:
        # No events within the timeout: kqueue accepted the fd.
        print("fd works with kqueue")
        return
    assert len(events) == 1
    event = events[0]
    if (event.flags & select.KQ_EV_ERROR) == 0 or event.data != errno.EINVAL:
        print("fd works with kqueue")
    else:
        print("fd does NOT work with kqueue")


tty_fd = os.open('/dev/tty', os.O_RDWR)
print("Testing if kqueue works with /dev/tty")
test_fd(tty_fd)

So, what do we do now? As it turns out, those file descriptors don’t work with poll(2) either… but they do work with select(2)! This means that while we use kqueue for most file descriptors, we can use select(2) when that doesn’t work.

Enter The OSX select(2) Trick (TM) by our resident OSX expert Fedor Indutny. The trick is to spawn an auxiliary thread which will use select(2) on a file descriptor which doesn’t work with kqueue and report POLLIN and POLLOUT events to the loop thread, where the read and write operations are performed. Have a look here for the first implementation.
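
Here is a much simplified sketch of the idea (this is not libuv’s actual implementation, just the general shape of it): a helper thread blocks in select(2) on the problematic fd and signals readiness over a pipe, which kqueue has no trouble watching from the loop thread:

/* Simplified sketch of the trick, not libuv's actual code: a helper
 * thread select()s on the fd that kqueue rejected and reports
 * readiness through a pipe, which kqueue *can* watch. */
#include <pthread.h>
#include <sys/select.h>
#include <unistd.h>

struct watcher {
    int fd;         /* fd that kqueue rejects, e.g. one open on /dev/tty */
    int wakeup[2];  /* pipe: helper thread writes, loop thread reads */
};

static void *select_thread(void *arg) {
    struct watcher *w = arg;
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(w->fd, &readfds);
        if (select(w->fd + 1, &readfds, NULL, NULL, NULL) > 0) {
            /* Tell the loop thread the fd is readable; the actual read()
             * happens over there. Real code must also wait until the loop
             * thread has drained the fd before selecting again, or this
             * loop would spin. */
            char c = 1;
            write(w->wakeup[1], &c, 1);
        }
    }
    return NULL;
}

static int start_watcher(struct watcher *w) {
    pthread_t tid;
    if (pipe(w->wakeup))
        return -1;
    /* The loop thread registers w->wakeup[0] with kqueue as usual. */
    return pthread_create(&tid, NULL, select_thread, w);
}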

The avid reader might be wondering: “what if we have more than 1024 file descriptors? select is not going to work!” You’re right! This was a problem, so let’s enter The OSX select(2) Trick II: _DARWIN_UNLIMITED_SELECT.

This little gem hidden in the manual page tells us that if _DARWIN_UNLIMITED_SELECT is defined at compilation time, we are allowed to go beyond the FD_SETSIZE limit! We cannot create the fd_set as usual nor use FD_ZERO; we need to allocate it manually and zero it with memset: 1, 2, 3.
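
A sketch of what that looks like (this assumes Darwin’s fd_set layout, where the set is a bit array chunked into 32-bit fd_mask words; it mirrors the approach, not libuv’s exact code):

/* Sketch: building an fd_set big enough for fds beyond FD_SETSIZE.
 * The macro must be defined before <sys/select.h> is pulled in. */
#define _DARWIN_UNLIMITED_SELECT 1
#include <sys/select.h>
#include <stdlib.h>
#include <string.h>

static fd_set *alloc_fd_set(int max_fd) {
    /* Round up to whole fd_mask words so fds 0..max_fd all fit. */
    size_t nwords = (max_fd + NFDBITS) / NFDBITS;
    fd_set *set = malloc(nwords * sizeof(fd_mask));
    if (set != NULL)
        memset(set, 0, nwords * sizeof(fd_mask));  /* instead of FD_ZERO */
    return set;
}

/* FD_SET/FD_ISSET work as usual on the oversized set; FD_ZERO would
 * only clear the first FD_SETSIZE bits, hence the memset above. */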

So, there you go, this is how libuv is able to seamlessly use file descriptors that don’t work with kqueue on OSX.

Liked the article? Want me to write more about libuv internals? Do let me know!

Skookum JS 0.2.0 released!

Hey there!

Skookum JS, the JavaScript runtime all your friends are talking about, just released its 0.2.0 version.

The initial release was a couple of weeks ago, but there are improvements all across the board:

  • Better CLI experience and ability to toggle strict mode
  • Multiple fixes to the build system (I’m still learning CMake)
  • Fix building proper stack traces
  • New modules: os and refactored io
  • Buffer support for i/o operations

These are just the tip of the iceberg; check the changelog for all the details.

For those wondering why sjs is so skookum, see the initial announcement and the design documentation.

In the next release I’ll be primarily focusing on child process support and maybe experimenting with multi-threading too. Stay tuned!

Introducing Skookum JS, a JavaScript runtime

Today I’m happy to announce the humble beginnings of a project I started a while ago: Skookum JS, a JavaScript runtime.

“A JavaScript runtime???” Yes, pretty much like Node, but with a different model. Skookum JS (sjs henceforth) uses the Duktape JavaScript engine to implement a runtime similar to Python’s CPython or Ruby’s MRI.

The runtime consists of a CLI utility called sjs which acts as the interpreter and will evaluate JavaScript code, and libsjs, the library which does the heavy lifting and can be embedded in other applications. Any project can add scripting support using JavaScript by linking with libsjs and using its C API.
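
To give a flavor of it, here is a purely hypothetical embedding sketch; the sjs_* names below are made up for illustration and are not the actual libsjs API (check the documentation for the real thing):

/* Hypothetical sketch only: these sjs_* names are illustrative and not
 * the real libsjs API; see the sjs documentation for the actual calls. */
typedef struct sjs_vm sjs_vm;        /* opaque runtime handle (assumed) */
extern sjs_vm *sjs_vm_create(void);  /* assumed constructor */
extern int sjs_vm_eval_code(sjs_vm *vm, const char *code);
extern void sjs_vm_destroy(sjs_vm *vm);

int main(void) {
    sjs_vm *vm = sjs_vm_create();
    sjs_vm_eval_code(vm, "print('hello from embedded JavaScript');");
    sjs_vm_destroy(vm);
    return 0;
}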

The runtime model is quite different from Node: there is no builtin event-driven execution, all APIs are (for the most part) object oriented versions of POSIX APIs. Let’s see how to write a socket client which connects to a server, sends ‘hello’, waits for a reply and closes the connection:

https://gist.github.com/2ce6dc09d6d0fb54ada095ac091a912b

I started this project to have some fun (for certain definitions of fun) and learn some more stuff along the way. Even though the project is only being open sourced now, the commit history shows its evolution, including all the mistakes and brainfarts. Have fun going through it!

My idea is to have a large standard library, including the kitchen sink. Or at least that’s how I feel today. This initial release contains the basics to get the project off the ground; expect to see improvements.

I’d like to finish this post by thanking the author of Duktape (the JavaScript engine used by sjs). It’s a really easy to use JavaScript engine, with outstanding documentation and great design choices; I couldn’t have done it without it. 10/10 would recommend.

Curious? Bored by Node because it just works? Head over to GitHub for the code, and here for the documentation.

From SIP to WebRTC and vice versa

A bit over a week ago I had the opportunity to present at the Real Time devroom as part of FOSDEM 2016. I gave a presentation titled “From SIP to WebRTC and vice versa”, where I explained how we built a WebRTC gateway to interact with traditional SIP endpoints and extend existing SIP infrastructure. Check it out!

[SlideShare embed: “From SIP to WebRTC and vice versa”, id 58040324]

FOSDEM aftermath

It’s been a week since FOSDEM, time to reflect.

It was my 6th FOSDEM, and I’ve loved it since the first time I attended. So many geeks per square metre, so many interesting talks, so much swag one can buy… what’s there not to like?!

This year was slightly different, however: I was part of the team who organized the Real Time devroom. That was a first, so I didn’t really know what to expect. I spent the entire day coordinating the devroom, making sure speakers had everything they needed and that talks were smooth and on schedule.

At the end of the day I was exhausted, but overjoyed that everything went well. We have room for improvement next year, but it won’t be our first time, so we’ll at least have that!

Since I was the one present in the devroom, many people gave me feedback on our work. All positive! While I was the one representing the organizers, it was a team effort: Daniel Pocock, Ralph Meijer and Iain Learmonth were also part of the team, huge shoutout to them!

After being involved in organizing a single devroom I can only imagine how complicated it must be to get the entire event going, so I’d like to thank everyone involved in making FOSDEM happen each and every year. So much love, see you next year!

OpenHRC 1.0.0 released!

Last weekend, while at FOSDEM, we released OpenHRC 1.0.0. It’s a nice and round number, but (hopefully) we’re just getting started.

The Open Home Router Contraption (OpenHRC) is an Ansible playbook to automate the configuration of an OpenBSD-based home router with the most commonly used services plus some extras:

  • DHCP
  • NTP
  • Local caching and validating DNS resolver
  • Authoritative DNS server for a configurable zone
  • Firewall
  • UPnP
  • DDNS

In this release we focused on getting all core services working together nicely. Next up is IPv6 support, top to bottom. Stay tuned!

OpenHRC is brought to you by ioc32 and yours truly.

Real Time Communications at FOSDEM 2016

It’s that time of the year again. It’s almost FOSDEM o’clock, ready fuels!

It has been 2 years without any devrooms representing Real Time Communications (in general) at FOSDEM. In 2013 we had the Jabber and Telephony devrooms, but neither of those made it in 2014 or 2015.

That is changing! As you probably know, there will be a Real Time Communications devroom happening at FOSDEM 2016. Yours truly is one of the organizers and I’m super-excited about it.

We had a bunch of excellent talk submissions, and some of the speakers who had also submitted talks to the Main track got them accepted, so RTC will also be represented there!

Interested in VoIP, instant messaging, WebRTC, SIP, XMPP, <insert your favorite RTC related Open Standard here>? There is a chance we have a nice talk for you. Check the schedule, and drop by on Saturday!

IMPORTANT NOTE: We are still looking for volunteers to help out in the devroom. If you want to help out, please reach out to me or any other organizer. Hint: volunteers get reserved seats!

See you on Saturday, in Real Time.

Running Alpine Linux containers on LXD

So, more LXD! Today we are going to run some Alpine Linux containers on LXD. Why? Alpine describes itself as a “security-oriented, lightweight Linux distribution based on musl libc and busybox”. What’s there not to like? It has become quite popular in the Docker world due to the small yet fully functional containers one can create.

Building an Alpine container on LXD is not that straightforward, however. There are no Alpine images on the official repo, so we will have to build our own. LXD does provide some documentation on what an image should look like, so let’s get to it!

I started by taking a look at the template for LXC. That was close enough, so I went ahead and modified it in order to create LXD images. That resulted in lxd-alpine-builder.

With that script, we can now create an image and then import it into LXD:

sudo ./build-alpine
lxc image import alpine-v3.3-x86_64-20160114_2308.tar.gz --alias alpine-v3.3

That image is 2.39MB, w00t! You can check it by listing the images:

saghul@lxd-test:~$ lxc image list
+----------------------------+--------------+--------+---------------------------------+--------+----------+-------------------------------+
|           ALIAS            | FINGERPRINT  | PUBLIC |           DESCRIPTION           |  ARCH  |   SIZE   |          UPLOAD DATE          |
+----------------------------+--------------+--------+---------------------------------+--------+----------+-------------------------------+
| alpine-v3.3                | 9888dd281789 | no     | alpine v3.3 (20160114_23:08)    | x86_64 | 2.39MB   | Jan 14, 2016 at 11:10pm (CET) |
| jessie-amd64               | 9f065ac6be10 | no     | Debian jessie (amd64)           | x86_64 | 102.66MB | Jan 12, 2016 at 4:17pm (CET)  |
| jessie-amd64-base          | b85a1bdb5057 | no     | Debian Jessie base              | x86_64 | 87.98MB  | Jan 12, 2016 at 9:56am (CET)  |
| jessie-amd64-base-sysvinit | 628b7f8470af | no     | Debian Jessie base (no systemd) | x86_64 | 82.91MB  | Jan 12, 2016 at 11:40am (CET) |
| jessie-i386                | 769f90666ea8 | no     | Debian jessie (i386)            | i686   | 100.23MB | Jan 15, 2016 at 9:54am (CET)  |
+----------------------------+--------------+--------+---------------------------------+--------+----------+-------------------------------+

Now we can launch a container and test it out!

lxc launch alpine-v3.3 alpinetest
lxc exec alpinetest /bin/ash
# cat /etc/alpine-release 
3.3.1

Happy containering!

Simple networking for your LXD containers

So, more LXD, here we go! Today we are going to see how to access our LXD containers from outside of the system running LXD itself.

If you are just trying out stuff (like I am), you probably installed some Ubuntu version on a VM in order to run LXD. This means that by default you have no access to your containers from your system, just from the system running LXD.

A simple solution is to add a route to the isolated network that the containers get, going through the host running LXD:

ip route add 10.0.3.0/24 via 192.168.99.28

Here we are telling our system that the 10.0.3.0/24 network is routable through 192.168.99.28, our LXD machine.

You probably don’t want to use something like this in production, but we are exploring here! 🙂