Net change

Recently the FCC voted to repeal its existing net neutrality rules. I think this is a bad decision by the FCC, but I don't think it will result in the amount of chaos that some people are suggesting. I thought I'd write about how I see the net changing, for better or worse, with these regulations removed.

If we think about how the Internet works today, basically everyone pays individually to access the network: both the groups that want to host information and the people who want to access those sites. Everyone pays a fee for 'their connection', which goes to the companies that create the backbone and connect it together. An Internet connection by itself has very little value, but it is the definition of a "network effect": because everyone is on the Internet, it has value for you to connect there as well. Some services you connect to use a lot of your home Internet connection, and some of them charge you different rates. Independent of how much they use or charge, your ISP isn't involved in any meaningful way. The key change here is that now your ISP will be associated with the services that you use.

Let's talk about a theoretical video streaming service that charges for its video service. Before, it would charge something like $10 a month to cover licensing and hosting costs. Now it's going to end up paying an access fee to reach consumers' Internet connections, so its pricing will change: it ends up charging $20 a month and giving $10 of that to its customers' ISPs. In the end consumers will pay just as much for their Internet connection, but it will be bundled into the other services they buy on the Internet. ISPs love this because suddenly they're not the ones charging too much; they're out of the billing picture. They could even charge less (free?) for home Internet access, since it would be subsidized by the services you use.

Better connections

I think it is quite possible that this could result in better Internet connections for a large number of households. Today those households have mediocre connectivity, and they can complain about it, but for the most part ISPs don't care about a few individuals' complaints. What could change is that when a large company paying millions of dollars in access fees complains, ISPs might start listening.

The ISPs are supporting the removal of net neutrality regulations in order to get money from the services on the Internet. I don't think they realize that with that money will come an obligation to perform to those services' requirements. Most of those services are more customer-focused than ISPs are, which is likely to cause a culture shock once those customers hold weight with ISP management. I think it is likely ISPs will come to regret not supporting net neutrality.

Expensive hosting for independent and smaller providers

It is possible for large services on the Internet to negotiate contracts with large ISPs and make everything generally work out so that most consumers don't notice. There is then a reasonable question of how providers that are too small to negotiate a contract fare in this environment. I think it is likely that hosting providers will fill this gap with different plans matching levels of connectivity. You'll end up with more versions of that "small" instance, some with consumer bandwidth built into the cost and others without. There may also be mirroring services like CDNs that have group-negotiated rates with various ISPs. The end result is that hosting will get more expensive for small businesses.

The bundling of bandwidth is also likely to shake up the cloud hosting business. While folks like Amazon and Google have been able to dominate costs through massive datacenter buys, suddenly that isn’t the only factor. It seems likely the large ISPs will build public clouds of their own as they can compete by playing funny-money with the bandwidth charges.

Increased hosting costs will hurt large non-profits the most, folks like Wikipedia and The Internet Archive. They already have a large amount of their budget tied up in hosting and increasing that is going to make their finances difficult. Ideally ISPs and other Internet companies would help by donating to these amazing projects, but that's probably too optimistic. We'll need individuals to make up this gap. These organizations could be the real victims of not having net neutrality.

Digital Divide

A potential gain is that, if ISPs are getting most of their money from services, the actual connections could become very cheap. That would create the potential for more lower-income families to get access to the Internet. While this is possible, the likelihood is that it would only help families in regions with customers the end-services themselves want to reach. It will help those who are near an affluent area, not everyone. There is some potential for gain here, but I don't believe it will end up having a large impact.

What can I do?

If you're a consumer, there's probably not a lot you can do; you're along for the ride. You can contact your representatives, and if this is a world you don't like the sound of, ask them to change it. Laws are a social contract for how our society works; make sure they're a contract you want to be part of.

As a developer of a web service, you can make sure that your deployment works in multi-cloud setups. You're probably going to end up going from multi-cloud to a whole-lotta-cloud as each provider signs bandwidth deals your business is interested in. Also, make sure you can isolate which parts of your service need the bandwidth and which don't, as that may become more important going forward.

posted Dec 19, 2017 | permanent link

Replacing Docker Hub and Github with Gitlab

I've been working on making the Inkscape CI performant on Gitlab, because if you aren't paying developers you want to make developing fun. I started by implementing ccache, which got us a 4x build-time improvement. The next piece of low-hanging fruit seemed to be the installation of dependencies, which rarely change but were getting installed on each build and test run. The Gitlab CI runners use Docker, so I set out to turn those dependencies into a Docker layer.

The well-worn path for creating a Docker layer is to make a branch on Github and then add an automated build on Docker Hub. That leaves you with a Docker repository containing your layer. I did this for the Inkscape dependencies with this fairly simple Dockerfile:

FROM ubuntu:16.04
RUN apt-get update -yqq 
RUN apt-get install -y -qq <long package list>

For Inkscape, though, we'd really like to not set up another service with more accounts and permissions, which led me to Gitlab's Container Registry feature. I took the same Git branch and added a fairly generic .gitlab-ci.yml file that looks like this:


build:
  image: docker:latest
  services:
    - docker:dind
  variables:
    IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  stage: build
  script:
    - docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}
    - docker build --pull -t ${IMAGE_TAG} .
    - docker push ${IMAGE_TAG}

That tells the Gitlab CI system to build a Docker layer with the same name as the Git branch and put it in the project's container registry. For Inkscape you can see the results here:

We then just need to change our CI configuration for the Inkscape CI builds so that it uses our new image:
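The exact snippet didn't survive the formatting here, but the general shape of the change is just swapping the job's base image for the registry path of the new layer (the path below is a placeholder, not Inkscape's actual one):

```yaml
# Use the pre-built dependency layer instead of bare ubuntu:16.04.
# <group>/<project>:<branch> is a placeholder for your registry path.
image: registry.gitlab.com/<group>/<project>:<branch>
```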


Overall the results saved approximately one to two minutes per build. Not the drastic improvement I was hoping for, but that is likely because the builders are more IO-constrained than CPU-constrained, so uncompressing the layer costs roughly the same as installing the packages. It still results in a 10% savings in total pipeline time. The bigger, unexpected benefit is that it has cleaned up the CI build logs, so the first page now starts the actual Inkscape build instead of requiring you to scroll through pages of dependency installation (old vs. new).

posted Jun 15, 2017 | permanent link

ccache for Gitlab CI

When we migrated Inkscape to Gitlab we were excited about setting up the CI tools that they have. I was able to get the build going and Mc got the tests running. We're off! The problem is that the Inkscape build was taking about 80 minutes, plus the tests. That's really no fun for anyone, as it's a walk-away-from-the-computer amount of time.

Gitlab has a caching feature in their CI runners that allows you to move data from one build to another. While it can be tricky to manage a cache between builds, ccache will do it for you on C/C++ projects. This took the rebuild time for a branch down to a more reasonable 20 minutes.

I couldn't find any tutorial or example on this, so I thought I'd write up what I did to enable ccache for people who aren't as familiar with it. Starting out our .gitlab-ci.yml looked (simplified) like this:

image: ubuntu:16.04

before_script:
  - apt-get update -yqq
  - apt-get install -y -qq # List truncated for web

build:
  stage: build
  script:
    - mkdir -p build
    - cd build
    - cmake ..
    - make

First you need to add ccache to the list of packages you install, and set up the environment for ccache in your before_script:

before_script:
  - apt-get update -yqq
  - apt-get install -y -qq ccache # List truncated for web
  # CCache Config
  - mkdir -p ccache
  - export CCACHE_BASEDIR=${PWD}
  - export CCACHE_DIR=${PWD}/ccache
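Optionally, you can also cap the cache size in the same before_script so the saved cache stays small (a hypothetical addition; the 2G limit is an arbitrary choice):

```yaml
  # Keep the cache artifact from growing without bound
  - ccache --max-size=2G
```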

You then need to tell the Gitlab CI infrastructure to save the ccache directory:

cache:
  paths:
    - ccache/

And lastly, tell your build system to use the ccache compiler; for us with CMake that means using the COMPILER_LAUNCHER defines:

build:
  stage: build
  script:
    - mkdir -p build
    - cd build
    - cmake .. -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
    - make

Final simplified .gitlab-ci.yml file:

image: ubuntu:16.04

before_script:
  - apt-get update -yqq
  - apt-get install -y -qq ccache # List truncated for web
  # CCache Config
  - mkdir -p ccache
  - export CCACHE_BASEDIR=${PWD}
  - export CCACHE_DIR=${PWD}/ccache

cache:
  paths:
    - ccache/

build:
  stage: build
  script:
    - mkdir -p build
    - cd build
    - cmake .. -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
    - make

If you'd like to see the full version at the time of writing, it is there. Also, assuming you are reading this in the future, you might be interested in the current Gitlab CI config for Inkscape.
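If you want to confirm the cache is actually being hit, a hypothetical addition to the end of the build script prints ccache's statistics for each run:

```yaml
    - make
    - ccache --show-stats   # prints cache hit/miss counts for this job
```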

posted Jun 10, 2017 | permanent link

Migrating Inkscape to Gitlab

As a project we decided that Inkscape needed to move to Git. Bazaar has been a great tool for us, but since it is no longer being developed we felt the need to find a new solution. We also see that the majority of developers are learning Git for other projects, which turned Bazaar into a skill they had to learn just to contribute to Inkscape, and we always want to make contributing as easy as possible. So after the 0.92 release we started getting ready to move.

There are several Git code-hosting platforms available, but as a project we found that Gitlab is the best fit for us. We're an Open Source project and we value using and promoting Open Source tools and solutions. We're excited about diving in and using Gitlab's amazing CI infrastructure to better our testing. And we've got some plans to use subteams and other mechanisms to try to recognize and enable contributors of all types. We're pretty excited to get our project on Gitlab.

Getting there

Inkscape is a well-established project with a long version history. While the conversion between Bazaar and Git isn't complex, that history made the migration more complicated. There are also several branches that we didn't want to lose if we didn't have to. This led us to the lp2gh tool, which is designed for migrations to Github. While we're not going to Github, we were able to use its version-control conversion script to get all the branches out of Launchpad. For others who may follow in our footsteps, here is a small shell script showing what we did.

# Make a directory and ensure the tools we need are
# in our path

mkdir export
export PATH=$PATH:`pwd`/lp2gh/bin
cd export

# Do a massive export of everything

lp2gh-export-branches inkscape

# Delete the Bazaar branches in the repo from git-bzr

cd inkscape
git branch | grep -v master | grep bzr/ | xargs -n 1 git branch -D

# We had some inconsistent names and e-mail addresses
# in our revision history, so we're fixing them now.
# (`./fix-author.sh` stands in for the mapping helper we
# used; it echoes corrected GIT_AUTHOR_* assignments.)

git filter-branch \
  --commit-filter 'OUT=`./fix-author.sh "$GIT_AUTHOR_NAME" "$GIT_AUTHOR_EMAIL"`; eval $OUT; git commit-tree "$@"' \
  -- --all

# Send up to GitLab

git remote add origin [email protected]:inkscape/inkscape.git
git push --all origin
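The helper invoked from the commit-filter didn't survive the formatting of this post, so here is a hypothetical sketch of what such a script might contain; the identities in the case statement are placeholders, not Inkscape's real contributors:

```shell
#!/bin/sh
# Hypothetical author-fixing helper for the commit-filter above.
# It maps known-bad identities to canonical ones and echoes shell
# assignments, which the commit-filter then evals before committing.
fix_author() {
    name="$1"; email="$2"
    case "$email" in
        # One line per known-bad identity in the history
        jdoe@localhost) name="Jane Doe"; email="jane.doe@example.com" ;;
    esac
    printf "export GIT_AUTHOR_NAME='%s' GIT_AUTHOR_EMAIL='%s'\n" "$name" "$email"
}

fix_author "$1" "$2"
```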

For us, running on my home desktop, the migration took about 24 hours to complete. That's rather long, and it seemed to be mostly blocked on the fast export of the Bazaar branches. We applied a small patch by Eduard Braun that includes the Bazaar revision number in the Git commit message, which will make it possible to find which revisions certain bugs referenced. This patch caused the export to take longer than it would for projects not using it.

Some Thanks

We want to thank all of the developers on Bazaar and Launchpad for all the work that they've done in giving us a great code hosting solution for the last 10 years.

posted Jun 9, 2017 | permanent link

SSH to RaspPi from anywhere

Probably like most of you, I have a Raspberry Pi 2 sitting around not doing a lot. A project I wanted to use mine for is setting up reliable access to my home network when I'm away. I'm a geek, so network access for me means SSH. The problem with a lot of solutions out there is that ISPs change home IP addresses, routers have funky port configurations, and a host of other annoyances make setting up access unreliable. That's where Pagekite comes in.

Pagekite is a service, based in Iceland, that tunnels various protocols, including SSH. It gives you a DNS name at one end of the tunnel and allows connecting from anywhere. They run on Open Source software and their libraries are all Open Source. They charge a small fee, which I think is reasonable, but they also provide a free trial account that I used to set this up and test it. You'll need to sign up for Pagekite to get the name and secret to fill in below.

The first thing I did was set up Ubuntu Core on my Pi and get it booting and configured. Using the built-in configuration tool it grabs my SSH keys already, so I don't need to do any additional SSH configuration. (You should always use key-based login when you can.) Then I SSH'd in on the local network to install and set up a small Pagekite snap I made, like this:

# Install the snap
sudo snap install pagekite-ssh

# Configure the snap
snap set pagekite-ssh kitename=<your name> kitesecret=<a bunch of hex>

# Restart the service to pickup the new config
sudo systemctl restart snap.pagekite-ssh.pagekite-ssh.service 

# Look at the logs to make sure there are no errors
journalctl --unit snap.pagekite-ssh.pagekite-ssh.service 

I then configured SSH to connect through Pagekite by editing my .ssh/config:

Host *.pagekite.me
    User <U1 name>
    IdentityFile ~/.ssh/id_launchpad
    CheckHostIP no
    ProxyCommand /bin/nc -X connect -x %h:443 %h %p

Now I can SSH into my Raspberry Pi from anywhere on the Internet! You could also install this on any other board Ubuntu Core supports, or anywhere snapd runs.

What is novel to me is that I now have a small low-power board that I can plug into any network, where it will grab an IP address and set up a tunnel to a known address so I can access it. It will also update itself without me interacting with it at all. I'm considering putting one at my Dad's house as well, to help with his network issues when the need arises. Make sure to only put these on networks where you have permission, though!

posted Apr 17, 2017 | permanent link

X11 apps on Ubuntu Personal

Snaps first launched with the ability to ship desktop apps on Ubuntu 16.04, which is an X11-based platform. It was noted that, while secure and containerized, the fact that many snaps were using X11 made them less secure than they could be. That was a reality of shipping snaps for 16.04, but something we definitely want to fix for 18.04 using Unity8 and the Mir graphics stack. We can't just ignore all the apps that folks have built for 16.04, though, so we need a way to run X11 applications on Unity8 securely.

To accomplish this we give each X11 application its own instance of the XMir server. This means that even evil X applications that use insecure features of (or find vulnerabilities in) the Xorg server can only compromise their individual instance of the Xserver, and are unable to affect other applications. Sounds simple, right? Unfortunately there is a lot more to making an application experience seamless than just handling the graphics buffers and making sure it can display on screen.

The Mir server is designed to handle graphics buffers and their positions on the screen; it doesn't handle complexities like cut-and-paste and window menus. To support X11 apps that use these features we're using some pieces of the libertine project, which runs X11 apps in LXD containers. It includes a set of helpers, like pasted, that handle these additional protocols. pasted watches the selected window and the X11 clip buffers to connect into Unity8's cut-and-paste mechanisms, which behave very differently. For instance, Unity8 doesn't allow snooping on clip buffers to steal passwords.

It is also important to note that in Ubuntu Personal we aren't just snapping up applications, we are snapping everything. We expect to have snaps of Unity8, snaps of Network Manager, and a snap of XMir. This means that XMir isn't even running in the same security context as Unity8. A vulnerability in XMir only compromises XMir and the files it has access to. So a bug in an X11 application would have to get into XMir and then attack the Mir protocol itself before getting to other applications or user session resources.

The final user experience? We hope that no one notices whether their applications are X11 applications or Mir applications; users shouldn't have to care about display servers. What we've tried to create is a way for them to keep their favorite X11 applications, hopefully as they transition away from X11, while still getting the security benefits of a Mir-based desktop.

posted Apr 5, 2017 | permanent link

Applications under systemd

When we started looking at how to confine applications enough to allow an appstore where anyone can upload applications, we knew that AppArmor could handle the filesystem and IPC restrictions, but we needed something to manage the processes. There are kernel features that work well for this, but we didn't want to reinvent their management, and we realized that Upstart already did this for other services in the system. That drove us to use Upstart for managing application processes as well. To have a higher-level management and abstraction interface we started a small library called upstart-app-launch, and we were off. Times change and so do init daemons, so we renamed the project ubuntu-app-launch, expecting to move it to systemd eventually.

Now we've finally fully realized that transition and ubuntu-app-launch runs all applications and untrusted helpers as systemd services.

bye, bye, Upstart

For the most part, no one should notice anything different. Applications will start and stop in the same way. Even users of ubuntu-app-launch shouldn't notice a large difference in how the library works. People tinkering with the system will notice a few things, though. Probably the most obvious is that application log files are no longer in ~/.cache/upstart. The log files for applications are now managed by journald which, as we get all the desktop services ported to systemd, will mean you can see integrated events from multiple processes. So if Unity8 is rejecting your connection, you'll be able to see that next to the error from your application. This should make debugging your applications easier. You'll also be able to stream messages off a device in real time, which will help when debugging your application on a phone or tablet.

For those who are more interested in the details, we're using systemd's transient unit feature. This allows us to create the unit on the fly, with multiple instances of each application. Under Upstart we used a job with instances for each application, but now that we're taking on more typical desktop-style applications we needed to support multi-instance applications, which would have been hard to manage with that approach. We're generating the service name using this pattern:

ubuntu-app-launch--$(application type)--$(application id)--$(time stamp).service

The time stamp is used to make a unique name for applications that are multi-instance. For applications that ask us to maintain a single instance for them the time stamp is not included.
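As a quick illustration of that naming rule (a hypothetical sketch, not ubuntu-app-launch's actual code; the "legacy" application type used below is just an example):

```shell
# Build a unit name following the pattern above. The time stamp is
# appended only for multi-instance applications, so single-instance
# apps always map to the same unit name.
unit_name() {
    # $1 = application type, $2 = application id,
    # $3 = "single" for single-instance applications
    if [ "$3" = "single" ]; then
        echo "ubuntu-app-launch--$1--$2.service"
    else
        echo "ubuntu-app-launch--$1--$2--$(date +%s).service"
    fi
}
```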

Hopefully that's enough information to get you started playing around with applications running under systemd. And if you don't care to, you shouldn't even notice this transition.

posted Mar 23, 2017 | permanent link

Presentations Updated

This post is mostly a mea culpa to all the folks who asked me after a presentation: "And those slides will be online?" The answer is generally "yes," but they were in a tweet or something equally hard to find. Now I've finally gotten around to making an updated presentations page that is actually useful. Hopefully you can find the slides you are looking for there. More importantly, you can use them as a basis for your own talk to a local group in your town.

As I was redoing this I thought it was a bit interesting how my title pages seem to alternate every couple of years between complex and simple. And I think I have a candidate for worst theme (though there was a close second). Also a favorite theme along with a reminder of all the fun it is to make a presentation with JessyInk.

I think that there are a couple missing that I can't find, and also video links out on the Internet somewhere. Please drop me a line if you have any ideas, suggestions or I sent you files that I've now lost. Hopefully this is easier to maintain now so there won't be the same delay.

posted Jan 16, 2017 | permanent link

The Case for Ubuntu Phone

There are times in standard social interactions when people ask what you do professionally, which means I end up talking about Ubuntu and specifically Ubuntu Phone. Many times that comes down to the seemingly simple question: "Why would I want an Ubuntu phone?" I've tried the answer "because I'm a thought leader and you should want to be like me," but sadly that gets little traction outside of Silicon Valley. Another good answer is all the benefits of Free Software, but many of those are benefits the general public doesn't yet realize they need.

Ubuntu Phone

The biggest strength and weakness of Ubuntu Phone is that it's a device without an intrinsic set of services. If you buy an Android device you get Google services. If you buy an iPhone you get Apple services. While these can be strengths (at least in Google's case), they are effectively a lock-in to services that may or may not meet your requirements. You can certainly get Telegram or Signal for either of those, but they're never going to be as integrated as Hangouts or iMessage. This goes throughout the device, including things like music and storage as well. Ubuntu and Canonical don't provide those services, but instead provide integration points for any of them (including Apple's and Google's, if they wanted) to work inside an Ubuntu Phone. This means that as a user you can use the services you want on your device; if you love Hangouts and Apple Maps, Ubuntu Phone is happy to be a freak with you.

Carriers are also interested in this flexibility. They're trying to put together packages of data and services that will sell and fetch a premium price (effectively bundling). Some they may provide themselves and some may come from well-known providers; but by not being able to select options for those base services, they have less flexibility in what they can do. Sure, Google and Apple could give them a great price or bundle, but they both realize that they don't have to. That effectively makes it difficult for the carriers, as well as alternate service providers (e.g. Dropbox, Spotify, etc.), to compete.

What I find most interesting about this discussion is that it is the original reason Google bought Android. They were concerned that with Apple controlling the smartphone market, Apple would be in a position to damage Google's ability to compete in services. They were right. But instead of opening Android up to competition (a competition that, certainly at the time and even today, they'd likely win) they decided to lock it down with their own services. So now we see that in places like China, where Google services are limited, there is no way for Android itself to win, only forks that use a different set of integrations. One has to wonder whether Google would have bought Android if Ubuntu Phone had existed earlier; while Ubuntu Phone competes with Android, it doesn't pose any threat to Google's core businesses.

It is always a failure to try to convince people to change their patterns and devices just for the sake of change. Early adopters enjoy that, but the majority of people don't. This means we need to be an order of magnitude better, which is a pretty high bar to set, but one I enjoy working towards. I think Ubuntu Phone has the fundamental DNA to win this race.

posted Jan 14, 2017 | permanent link

Snapping Unity8

For the last little while we've been working to snap up Unity8. This is all part of the conversion from a system image based device to one that is entirely based on snaps. For the Ubuntu Phones we basically had a package layout with a system image and then Click packages on top of it.

System Image with Clicks

Where snaps change things is that now all the various pieces of the core operation of the device can be confined and managed as individual bits. They can also be upgraded, and rolled back on failure, independently. To get the benefits of that system we need to get Unity8 into a snap. After all the time we spent harassing application developers about transforming their apps to work within the constraints of a Click, now we have to do that work ourselves. It is definitely non-trivial, but we're getting there.

Snaps for everything

All of the snaps needed to build a full system aren't yet available, so we created a small Debian package that integrates the Unity8 snap into a standard Ubuntu system. This package puts in place the configuration files for LightDM so that when a user selects the Unity8 Snap Session it calls into the unity8-session snap directly. This way Unity8 itself stays together in the snap, and we don't have to worry about the other pieces of the core system that are being developed at the same time. The package is available in the Stable Phone Overlay for Ubuntu 16.04 or in the Zesty archive, and can be installed with apt install unity8-session-snap. It includes a small helper script, unity8-snap-install, that will install the snaps you need.

Right now we have the basic functionality mostly working. But there are a lot of warts. We're working to remove all of them and turn the Unity8 snap into the best way to get the latest features of Unity8 in a safe manner.

Unity8 Snap Demo r309

I plan to post a bit more about what we're doing and problems we're solving along the way, even if I have some catching up to do. Trying to keep the posts small so I get them done.

posted Jan 5, 2017 | permanent link

All the older posts...