From SIP to WebRTC and vice versa

A bit over a week ago I had the opportunity to present at the Real Time devroom as part of FOSDEM 2016. I gave a presentation titled “From SIP to WebRTC and vice versa”, where I explained how we built a WebRTC gateway to interact with traditional SIP endpoints and extend existing SIP infrastructure. Check it out!

[Slides on SlideShare, id 58040324]


FOSDEM aftermath

It’s been a week since FOSDEM, time to reflect.

It was my 6th FOSDEM, and I’ve loved it since the first time I attended. So many geeks per square metre, so many interesting talks, so much swag to buy… what’s there not to like?!

This year was slightly different, however: I was part of the team that organized the Real Time devroom. That was a first, so I didn’t really know what to expect. I spent the entire day coordinating the devroom, making sure speakers had everything they needed and that talks ran smoothly and on schedule.

At the end of the day I was exhausted, but overjoyed that everything went well. There’s room for improvement for next year, but at least it won’t be our first time anymore, so we’ll have that going for us!

Since I was the one present in the devroom, many people gave me feedback on our work, all of it positive! And while I was the one representing the organizers, it was a team effort: Daniel Pocock, Ralph Meijer and Iain Learmonth were also part of the team. Huge shoutout to them!

After being involved in organizing a single devroom I can only imagine how complicated it must be to get the entire event going, so I’d like to thank everyone involved in making FOSDEM happen each and every year. So much love, see you next year!


OpenHRC 1.0.0 released!

Last weekend, while at FOSDEM, we released OpenHRC 1.0.0. It’s a nice and round number, but (hopefully) we’re just getting started.

The Open Home Router Contraption (OpenHRC) is an Ansible playbook that automates the configuration of an OpenBSD-based home router with the most commonly used services, plus some extras:

  • DHCP
  • NTP
  • Local caching and validating DNS resolver
  • Authoritative DNS server for a configurable zone
  • Firewall
  • UPnP
  • DDNS
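
OpenHRC being an Ansible playbook, running it should look like any other playbook run. As a rough sketch (the inventory and playbook file names below are my assumptions, check the repo for the actual layout):

ansible-playbook -i inventory site.yml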

In this release we focused on getting all the core services working together nicely. Next up is IPv6 support, top to bottom. Stay tuned!

OpenHRC is brought to you by ioc32 and yours truly.

Real Time Communications at FOSDEM 2016

It’s that time of the year again. It’s almost FOSDEM o’clock, so fuel up!

It had been 2 years without any devroom representing Real Time Communications (in general) at FOSDEM: in 2013 we had the Jabber and Telephony devrooms, but neither made it to 2014 or 2015.

That is changing! As you probably know, there will be a Real Time Communications devroom at FOSDEM 2016. Yours truly is one of the organizers, and I’m super-excited about it.

We had a bunch of excellent talk submissions, and some of the speakers who had also submitted talks to the Main track got them accepted, so RTC will also be represented there!

Interested in VoIP, instant messaging, WebRTC, SIP, XMPP, <insert your favorite RTC related Open Standard here>? There is a chance we have a nice talk for you. Check the schedule, and drop by on Saturday!

IMPORTANT NOTE: We are still looking for volunteers to help out in the devroom. If you want to help out, please reach out to me or any other organizer. Hint: volunteers get reserved seats!

See you on Saturday, in Real Time.

Running Alpine Linux containers on LXD

So, more LXD! Today we are going to run some Alpine Linux containers on LXD. Why? Alpine describes itself as a “security-oriented, lightweight Linux distribution based on musl libc and busybox”. What’s there not to like? It has become quite popular in the Docker world due to the small yet fully functional containers one can create.

Building an Alpine container on LXD is not that straightforward, however. There are no Alpine images on the official repo, so we will have to build our own. LXD does provide some documentation on what an image should look like, so let’s get to it!

I started by taking a look at the template for LXC. That was close enough, so I went ahead and modified it in order to create LXD images. That resulted in lxd-alpine-builder.

With that script, we can now create an image and then import it into LXD:

sudo ./build-alpine
lxc image import alpine-v3.3-x86_64-20160114_2308.tar.gz --alias alpine-v3.3

That image is 2.39MB, w00t! You can check it by listing the images:

saghul@lxd-test:~$ lxc image list
+----------------------------+--------------+--------+---------------------------------+--------+----------+-------------------------------+
|           ALIAS            | FINGERPRINT  | PUBLIC |           DESCRIPTION           |  ARCH  |   SIZE   |          UPLOAD DATE          |
+----------------------------+--------------+--------+---------------------------------+--------+----------+-------------------------------+
| alpine-v3.3                | 9888dd281789 | no     | alpine v3.3 (20160114_23:08)    | x86_64 | 2.39MB   | Jan 14, 2016 at 11:10pm (CET) |
| jessie-amd64               | 9f065ac6be10 | no     | Debian jessie (amd64)           | x86_64 | 102.66MB | Jan 12, 2016 at 4:17pm (CET)  |
| jessie-amd64-base          | b85a1bdb5057 | no     | Debian Jessie base              | x86_64 | 87.98MB  | Jan 12, 2016 at 9:56am (CET)  |
| jessie-amd64-base-sysvinit | 628b7f8470af | no     | Debian Jessie base (no systemd) | x86_64 | 82.91MB  | Jan 12, 2016 at 11:40am (CET) |
| jessie-i386                | 769f90666ea8 | no     | Debian jessie (i386)            | i686   | 100.23MB | Jan 15, 2016 at 9:54am (CET)  |
+----------------------------+--------------+--------+---------------------------------+--------+----------+-------------------------------+

Now we can launch a container and test it out!

lxc launch alpine-v3.3 alpinetest
lxc exec alpinetest /bin/ash
# cat /etc/alpine-release 
3.3.1
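
From here on it behaves like any other Alpine system; for instance, you can install packages inside the container with apk (curl below is just an example package):

apk update
apk add curl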

Happy containering!

Simple networking for your LXD containers

So, more LXD, here we go! Today we are going to see how to access our LXD containers from outside of the system running LXD itself.

If you are just trying out stuff (like I am), you probably installed some Ubuntu version on a VM in order to run LXD. This means that by default you have no access to your containers from your own system, only from the VM running LXD.

A simple solution is to add a route to the isolated network that the containers get, going through the host running LXD:

ip route add 10.0.3.0/24 via 192.168.99.28

Here we are telling our system that the 10.0.3.0/24 network is routable through 192.168.99.28, our LXD machine.
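
To verify it works, check the routing table and try to reach one of the containers (the container IP below is made up, use whatever lxc list shows for yours):

ip route show 10.0.3.0/24
ping 10.0.3.15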

You probably don’t want to use something like this in production, but we are exploring here! 🙂

LXD, Debian containers and systemd

I have been playing around with LXD the past few nights, and so far I really like it. It’s like a VM, but as a container, in contrast with Docker, which is designed around running a single application per container.

In order to try LXD out I installed an Ubuntu 15.10 VM and added the LXD stable PPA. Then it was time to launch some containers!

lxc remote add images images.linuxcontainers.org
lxc image copy images:debian/jessie/amd64 local: --alias jessie-amd64
lxc launch jessie-amd64 jessie-test

Shortly after I hit a problem: I could not stop the container I just created! It would just hang there, so I had to stop it forcefully:

lxc stop --force jessie-test

That doesn’t look good at all. Digging around I found the issue on GitHub, which basically concludes that it’s a systemd issue, because it doesn’t seem to handle SIGPWR correctly. Oh boy. The systemd issue is still open on Launchpad, so what do we do then? Well, we get rid of systemd. Let’s prepare a base Debian Jessie image with good old SysV init, shall we?

# install SysV init inside the container
lxc exec jessie-test /bin/bash
apt-get update && apt-get install sysvinit-core
exit
# restart the container so it boots with SysV init (forcing the stop one last time)
lxc stop jessie-test --force
lxc start jessie-test
# remove systemd and clean up to keep the image small
lxc exec jessie-test /bin/bash
apt-get remove --purge --auto-remove systemd
rm -rf /var/lib/apt/lists/*
rm -rf /var/cache/apt/archives/*
exit
# stop (gracefully this time!) and publish the result as a new image
lxc stop jessie-test
lxc publish jessie-test --alias jessie-amd64-base-sysvinit

Now all containers we create with our new and shiny image will stop gracefully.

Downloading AppVeyor artifacts with a little bit of Python

I have recently released new versions of 3 of my Python modules (pyuv, pycares and python-fibers), which happen to be Python C extensions.

While preparing these releases, I decided to give AppVeyor a try, since it can be used for both integration testing on Windows and Python Wheels generation. I managed to do so following these instructions and checking this project example, and I was (almost) all set.

The missing part was to download all those built artifacts (the Python wheels) stored in AppVeyor and upload them to PyPI when I decided to make a release. Uploading the wheels can be easily done using twine, and for downloading the last built artifacts for a given project I created the following simple Python script using requests:

https://gist.github.com/saghul/08ef2f7495c0cf481e3b

Using it is simple:

appveyor-download --api-token 1234 --user saghul --project pyuv
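
Once the wheels are downloaded, uploading them to PyPI with twine is a one-liner (assuming the script leaves the wheels in the current directory):

twine upload *.whl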

I hope you find it useful!

:wq


pyuv 1.2.0 released

Quick heads up: I just released pyuv 1.2.0. pyuv is a Python wrapper for libuv. This time around, pyuv implements all the functionality up to libuv 1.7.3.

This release was focused on 2 things: adjusting to new APIs and changes in libuv, and improving the testing, specifically on Windows.

As of this writing, pyuv is automatically tested on Linux thanks to Travis CI and on Windows thanks to AppVeyor, which also allows me to provide Python wheels for pyuv. That’s great because, frankly, compiling it on Windows is kind of a pain.

See the ChangeLog for a detailed outline of the changes, check the documentation, and fetch the code on GitHub. Packages have also been uploaded to PyPI.

python-fibers 1.0.0 released

Yeah, it must be 1.0.0 release week!

I’m happy to announce python-fibers 1.0.0! Fibers are cooperative microthreads for Python, a project I started about two years ago. Head here for the initial project announcement and rationale. (Yes, I can hear you thinking “why didn’t he use greenlet?!”)

There are no API changes in this release, and since it has been stable so far I thought it’s fair to call it a 1.0.

This release steps up the CI game by adding AppVeyor integration, and thanks to it we have binary Python wheels for Python 2.7, 3.3 and 3.4 on PyPI!

As usual, the code is available on GitHub and documentation on RTD.


:wq