This release contains a few important features:
- CFFI port for PyPy (it can optionally also be used in CPython)
- Python 3.5 support
- c-ares updated to version 1.11.1
On OSX libuv uses the kqueue interface for polling sockets, which is currently the most efficient way to do so. For file I/O, however, libuv uses a thread pool, but that's probably a topic for another post 🙂
Back to kqueue. While it works perfectly fine for sockets, we use it for other types of file descriptors too: pipes (with uv_pipe_t) and ttys (with uv_tty_t). On Unix systems, uv_pipe_t can also be used to open an arbitrary file descriptor and treat it as a libuv stream, through uv_pipe_open.
When a tty is opened with uv_tty_init, libuv opens /dev/tty instead (it's a bit more complicated than that nowadays, but let's keep it simple) in order to be able to put stdin/out/err in non-blocking mode without affecting other processes which share them. There is a problem, however: file descriptors pointing to /dev/tty don't work with kqueue. Ouch. You can verify that with this little Python script, which is a port of the test libuv does:
```python
import errno
import os
import select

def test_fd(fd):
    kqueue = select.kqueue()
    kevent = select.kevent(fd,
                           filter=select.KQ_FILTER_READ,
                           flags=select.KQ_EV_ADD | select.KQ_EV_ENABLE)
    events = kqueue.control([kevent], 1, 0.001)
    if not events:
        print("fd works with kqueue")
        return
    assert len(events) == 1
    event = events[0]
    if (event.flags & select.KQ_EV_ERROR) == 0 or event.data != errno.EINVAL:
        print("fd works with kqueue")
    else:
        print("fd does NOT work with kqueue")

tty_fd = os.open('/dev/tty', os.O_RDWR)
print("Testing if kqueue works with /dev/tty")
test_fd(tty_fd)
```
So, what do we do now? As it turns out, those file descriptors don’t work with poll(2) either… but they do work with select(2)! This means that while we use kqueue for most file descriptors, we can use select(2) when that doesn’t work.
Enter The OSX select(2) Trick (TM) by our resident OSX expert Fedor Indutny. The trick is to spawn an auxiliary thread which uses select(2) on a file descriptor that doesn't work with kqueue and reports POLLIN and POLLOUT events to the loop thread, where the read and write operations are performed. Have a look here for the first implementation.
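The idea is easy to sketch in Python (a simplified model of the trick, not libuv's actual C implementation): an auxiliary thread blocks in select() on the troublesome fd and reports readiness back to the loop thread over a pipe, which kqueue *does* handle fine. The `R`/`W` bytes standing in for POLLIN/POLLOUT are my own convention for the demo:

```python
import os
import select
import threading

def watch_with_select(fd, notify_fd):
    """Auxiliary thread body: wait for `fd` readiness with select()
    and report it to the loop thread by writing a byte to `notify_fd`,
    which the loop can watch with its usual mechanism (e.g. kqueue)."""
    r, w, _ = select.select([fd], [fd], [])
    msg = b''
    if r:
        msg += b'R'  # stands in for POLLIN
    if w:
        msg += b'W'  # stands in for POLLOUT
    os.write(notify_fd, msg)

# Demo: a fresh pipe's write end is immediately writable, so the
# watcher thread reports a POLLOUT-style event right away.
rfd, wfd = os.pipe()            # fd we pretend doesn't work with kqueue
notify_r, notify_w = os.pipe()  # channel back to the loop thread
t = threading.Thread(target=watch_with_select, args=(wfd, notify_w))
t.start()
events = os.read(notify_r, 2)   # the "loop thread" picks up the report
t.join()
print(events)                   # b'W'
```

In libuv the loop thread then performs the actual read or write itself; the auxiliary thread only signals readiness.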
The avid reader might be wondering: “what if we have more than 1024 file descriptors? select is not going to work!” You’re right! This was a problem, so let’s enter The OSX select(2) Trick II: _DARWIN_UNLIMITED_SELECT.
This little gem hidden in the manual page tells us that if _DARWIN_UNLIMITED_SELECT is defined at compilation time, we are allowed to go beyond the FD_SETSIZE limit! We cannot create the fd_set as usual or use FD_ZERO, though; we need to allocate it manually and zero it with memset: 1, 2, 3.
So, there you go, this is how libuv is able to seamlessly use file descriptors that don't work with kqueue on OSX.
Liked the article? Want me to write more about libuv internals? Do let me know!
The initial release was a couple of weeks ago, but there are improvements all across the board:
These are just the tip of the iceberg, check the changelog for all details.
In the next release I’ll be primarily focusing on child process support and maybe experimenting with multi-threading too. Stay tuned!
The runtime model is quite different from Node: there is no builtin event-driven execution; all APIs are (for the most part) object-oriented versions of POSIX APIs. Let's see how to write a socket client which connects to a server, sends 'hello', waits for a reply and closes the connection:
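In plain Python sockets (an illustration of the flow, not this project's actual API) the same sequence — connect, send, wait for the reply, close — looks like this; the echo server thread is only there to make the example self-contained:

```python
import socket
import threading

def echo_server(srv):
    # Accept one connection and echo back whatever arrives.
    conn, _ = srv.accept()
    data = conn.recv(1024)
    conn.sendall(data)
    conn.close()

# Self-contained stand-in for "a server": echo on an ephemeral port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,)).start()

# The client side: connect, send 'hello', wait for the reply, close.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(srv.getsockname())
client.sendall(b'hello')
reply = client.recv(1024)
client.close()
srv.close()
print(reply)  # b'hello'
```

Every step is a blocking call on an object wrapping a POSIX primitive — the same style of API the runtime exposes, just without Node's event-driven execution model.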
I started this project to have some fun (for certain definitions of fun) and learn some more stuff along the way. Even if the project is being open sourced now, the commit history shows its evolution, including all the mistakes and brainfarts, have fun going through it!
My idea is to have a large standard library, including the kitchen sink. Or at least that's how I feel today. This initial release contains the basics to get the project off the ground, expect to see improvements.
A bit over a week ago I had the opportunity to present at the Real Time devroom as part of FOSDEM 2016. I gave a presentation titled “From SIP to WebRTC and vice versa”, where I explained how we built a WebRTC gateway to interact with traditional SIP endpoints and extend existing SIP infrastructure. Check it out!
It’s been a week since FOSDEM, time to reflect.
It was my 6th FOSDEM, and I loved it since the first time I attended. So many geeks per square metre, so many interesting talks, so much swag one can buy, … what’s there not to like?!
This year was slightly different, however: I was part of the team who organized the Real Time devroom. That was a first, so I didn't really know what to expect. I spent the entire day coordinating the devroom, making sure speakers had everything they needed and that talks were smooth and on schedule.
At the end of the day I was exhausted, but overjoyed that everything went well. We have room for improvement next year, but it won’t be our first, so we’ll at least have that!
Since I was the one present in the devroom, many people gave me feedback on our work. All positive! While I was the one representing the organizers, it was a team effort: Daniel Pocock, Ralph Meijer and Iain Learmonth were also part of the team, huge shoutout to them!
After being involved in organizing a single devroom I can only imagine how complicated it must be to get the entire event going, so I’d like to thank everyone involved in making FOSDEM happen each and every year. So much love, see you next year!
Last weekend, while at FOSDEM, we released OpenHRC 1.0.0. It’s a nice and round number, but (hopefully) we’re just getting started.
On this release we focused on getting all core services working together nicely. Next up is IPv6 support top to bottom. Stay tuned!
It’s that time of the year again. It’s almost FOSDEM o’clock, ready fuels!
That is changing! As you probably know, there will be a Real Time Communications devroom happening at FOSDEM 2016. Yours truly is one of the organizers and I'm super-excited about it.
We had a bunch of excellent talk submissions, and some of the speakers who had also submitted talks to the Main track got them accepted, so RTC will also be represented there!
Interested in VoIP, instant messaging, WebRTC, SIP, XMPP, <insert your favorite RTC related Open Standard here>? There is a chance we have a nice talk for you. Check the schedule, and drop by on Saturday!
IMPORTANT NOTE: We are still looking for volunteers to help out in the devroom. If you want to help out, please reach out to me or any other organizer. Hint: volunteers get reserved seats!
See you on Saturday, in Real Time.
So, more LXD! Today we are going to run some Alpine Linux containers on LXD. Why? Alpine describes itself as a "security-oriented, lightweight Linux distribution based on musl libc and busybox" — what's there not to like? It has become quite popular in the Docker world due to the small yet fully functional containers one can create.
Building an Alpine container on LXD is not that straightforward, however. There are no Alpine images on the official repo, so we will have to build our own. LXD does provide some documentation on what an image should look like, so let's get to it!
With that script, we can now create an image and then import it into LXD:
```
sudo ./build-alpine
lxc image import alpine-v3.3-x86_64-20160114_2308.tar.gz --alias alpine-v3.3
```
That image is 2.39MB, w00t! You can check it by listing the images:
```
saghul@lxd-test:~$ lxc image list
+----------------------------+--------------+--------+---------------------------------+--------+----------+-------------------------------+
| ALIAS                      | FINGERPRINT  | PUBLIC | DESCRIPTION                     | ARCH   | SIZE     | UPLOAD DATE                   |
+----------------------------+--------------+--------+---------------------------------+--------+----------+-------------------------------+
| alpine-v3.3                | 9888dd281789 | no     | alpine v3.3 (20160114_23:08)    | x86_64 | 2.39MB   | Jan 14, 2016 at 11:10pm (CET) |
| jessie-amd64               | 9f065ac6be10 | no     | Debian jessie (amd64)           | x86_64 | 102.66MB | Jan 12, 2016 at 4:17pm (CET)  |
| jessie-amd64-base          | b85a1bdb5057 | no     | Debian Jessie base              | x86_64 | 87.98MB  | Jan 12, 2016 at 9:56am (CET)  |
| jessie-amd64-base-sysvinit | 628b7f8470af | no     | Debian Jessie base (no systemd) | x86_64 | 82.91MB  | Jan 12, 2016 at 11:40am (CET) |
| jessie-i386                | 769f90666ea8 | no     | Debian jessie (i386)            | i686   | 100.23MB | Jan 15, 2016 at 9:54am (CET)  |
+----------------------------+--------------+--------+---------------------------------+--------+----------+-------------------------------+
```
Now we can launch a container and test it out!
```
lxc launch alpine-v3.3 alpinetest
lxc exec alpinetest /bin/ash
# cat /etc/alpine-release
3.3.1
```
So, more LXD, here we go! Today we are going to see how to access our LXD containers from outside of the system running LXD itself.
If you are just trying out stuff (like I am), you probably installed some Ubuntu version on a VM in order to run LXD. This means that by default you have no access to your containers from your system, just from the system running LXD.
A simple solution is to add a route to the isolated network that the containers get, going through the host running LXD:
```
sudo ip route add 10.0.3.0/24 via 192.168.99.28
```
Here we are telling our system that the 10.0.3.0/24 network is routable through 192.168.99.28, our LXD machine.
You probably don't want to use something like this in production, but we are exploring here! 🙂