Building Portable Binaries

Attempting to self-contain application dependencies

Written by Alex Coomans.

Deploying your code is the last major hurdle in getting shiny new features or important bug fixes out to users. However, making sure your application has everything it needs can be a chore. Three typical approaches for preparing your application for deployment are:

  1. Installing dependencies system-wide alongside the runtime

  2. Shipping the dependencies with the app and relying on the system runtime

  3. Shipping both the runtime and dependencies with the app

This post will talk about approach number three. But first, I’ll point out some of the problems we had with approaches one and two, and how they drove us to the third.

With options one and two, your app is relying on the system to provide something. Normally that is perfectly fine, but imagine if you happen to upgrade your OS and it includes an update to your runtime. For example, Python is present on many Linux boxes. You may have just broken your app and caused downtime. With option one you may have also lost all system dependencies. Even if you try to reinstall them, they may not work with the newer runtime.

We’ve seen this happen before when an OS upgrade broke a handful of services relying on a system binary. These services were using an older MySQL shared object that disappeared after the OS upgrade. So, we looked for a better solution: option 3.

Why not use X?

When looking for solutions, we didn’t find anything that completely isolated both the runtime and dependencies in the way that we needed. Instead, we turned to two well-known — if somewhat obscure — pieces of the Linux stack: shebangs and rpaths.

A shebang is the first line of a script that begins with #! — like #!/bin/bash or #!/usr/bin/env ruby. The kernel reads the #! and executes the script with the interpreter named after it.
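
A quick illustration (the script name here is just an example):

$ cat hello
#!/usr/bin/env python
print("hello from python")
$ chmod +x hello && ./hello
hello from python

Behind the scenes, the kernel turned ./hello into /usr/bin/env python ./hello.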

The rpath is the runtime search path for shared libraries and is hard coded into the header of the compiled binary. This allows the binary to find other parts of itself, or core pieces like libc, which provides the C standard library.

All this work started with a simple need: fix an application that relied on the system to provide all of its dependencies and broke when deployed to new machines. Virtualenv came up as a possible solution, but it doesn’t solve the problem of shipping Python itself. Once I had a relocatable Python, I installed the dependencies into it like normal, so Virtualenv wouldn’t have added anything extra.

Shebangs

The biggest drawback with shebangs is that they don’t support relative paths with respect to the binary location. You need to either hard code a system path or use a path that is relative to your current working directory when executed. The first option doesn’t work when relocating binaries, and the second option doesn’t work because it requires you to cd into the correct directory.
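
To make the second option concrete, a shebang like #!./python resolves against whatever directory you happen to be in when you run the script (the paths here are illustrative):

$ head -1 /home/vagrant/python/bin/pip
#!./python
$ cd / && /home/vagrant/python/bin/pip --version
bash: /home/vagrant/python/bin/pip: ./python: bad interpreter: No such file or directory
$ cd /home/vagrant/python/bin && ./pip --version   # only works from inside bin/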

Here’s an example pip shim from a version of Python that I compiled:

#!/home/vagrant/python/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==1.3.1','console_scripts','pip'
__requires__ = 'pip==1.3.1'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(
        load_entry_point('pip==1.3.1', 'console_scripts', 'pip')()
    )

You’ll notice the #!/home/vagrant/python/bin/python shebang, which fails if I move the Python binary elsewhere.
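
For example, renaming or relocating the install directory produces the classic bad-interpreter failure (the new path here is made up):

$ mv /home/vagrant/python /home/vagrant/python-moved
$ /home/vagrant/python-moved/bin/pip
bash: /home/vagrant/python-moved/bin/pip: /home/vagrant/python/bin/python: bad interpreter: No such file or directory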

Certain languages have long supported tricks that make this problem easier to solve. For example, a long time ago kernels didn’t support shebangs, so scripts had to make sure for themselves that they were running under the right interpreter. Inspired by this workaround (and after much trial and error), I came up with the following:

#!/bin/bash -e
"eval" '$(cd `dirname $0`; pwd)/python $(cd `dirname $0`; pwd)/$(basename $0) "$@" && exit 0'
# python code

When Python executes this file, the two adjacent string literals on the second line are simply concatenated, making the line a no-op expression. When Bash executes it, however, that same line runs eval with the whole single-quoted piece as its argument, which is exactly what we needed: it re-runs the script under the bundled python, using paths worked out from the script’s own location, and then exits.

I had to use eval and exit 0 paired with the -e flag, rather than a plain exec, because of how the quoted command string gets handled: without eval re-parsing it, you’d end up trying to run a single command literally named:

"/home/vagrant/python/bin/python /home/vagrant/python/bin/pip"

The tricky part is that the space is treated as a literal character inside that one command name, yet when printed it looks no different from the correct pair of arguments:

"/home/vagrant/python/bin/python" "/home/vagrant/python/bin/pip"

The single quotes were the final piece that took me a while to get right, so that $@ (i.e. the arguments to the script) was passed along correctly.
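
Putting it all together, the pip shim from earlier would look like this once the relocatable header is swapped in (assuming, as in my build, that the shim lives next to the python binary in the same bin directory):

#!/bin/bash -e
"eval" '$(cd `dirname $0`; pwd)/python $(cd `dirname $0`; pwd)/$(basename $0) "$@" && exit 0'
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==1.3.1','console_scripts','pip'
__requires__ = 'pip==1.3.1'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(
        load_entry_point('pip==1.3.1', 'console_scripts', 'pip')()
    )

Bash never gets past the second line: the eval re-runs the script under the bundled python and exits with its status, while Python treats that line as a harmless string expression and carries on with the rest of the file.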

RPATH

An rpath for an executable is a header in a program that helps the linker find the needed shared objects at runtime. A shared object is a set of compiled code that is intended to be shared amongst a bunch of different binaries. Libc is the best example of a shared object as it is required by just about everything. Here are some examples of the headers for /bin/bash:

$ readelf -d /bin/bash
Dynamic section at offset 0xd3738 contains 26 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libtinfo.so.5]
 0x0000000000000001 (NEEDED)             Shared library: [libdl.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
(other header information)

You’ll notice bash requires libc, libdl, and libtinfo. Running ldd on that binary gives you the exact locations of the dependency resolution the linker has done:

$ ldd /bin/bash
    linux-vdso.so.1 =>  (0x00007fff463ff000)
    libtinfo.so.5 => /lib64/libtinfo.so.5 (0x0000003415400000)
    libdl.so.2 => /lib64/libdl.so.2 (0x0000003412c00000)
    libc.so.6 => /lib64/libc.so.6 (0x0000003413000000)
    /lib64/ld-linux-x86-64.so.2 (0x0000003412800000)

ldd will report the locations of the shared objects it finds; it doesn’t print the rpath entry itself, but it does honor the rpath (if present) while resolving. bash doesn’t use an rpath, but here’s an example of the Python we’ll be compiling soon to help reach option three (i.e. shipping both the runtime and dependencies with the app):

$ readelf -d /example/python/bin/python
Dynamic section at offset 0x8f0 contains 26 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libpython2.7.so.1.0]
 0x0000000000000001 (NEEDED)             Shared library: [libpthread.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libdl.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [libutil.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libm.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
 0x000000000000000f (RPATH)              Library rpath: [$ORIGIN/../lib]

Notice the RPATH line? That’s the magic that allows us to relocate binaries. It’s especially important for this Python, because the binary itself depends on libpython as a shared object. The $ORIGIN variable is also super important, as it makes the search path relative to the location of the binary: in this case, Python will look in /example/python/lib for shared objects, along with the default linker locations. The man page for ld.so has more details.
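
As an aside, if you ever need to add or fix an rpath on a binary that has already been compiled, the patchelf tool can rewrite the header after the fact. This isn’t part of the build described below, just a handy alternative:

$ patchelf --set-rpath '$ORIGIN/../lib' /example/python/bin/python   # single quotes keep $ORIGIN literal
$ readelf -d /example/python/bin/python | grep -iE 'rpath|runpath'   # shows RPATH or RUNPATH depending on the patchelf version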

While that magic is awesome, actually getting it from source code into the compiled binary is somewhat tricky. Because shell variables take the form $var, nested shell invocations can interpolate or mangle the variable along the way. (Note: for this rpath to work, the literal string $ORIGIN needs to end up in the binary’s header.)
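
A quick way to see one layer of the problem, assuming no ORIGIN variable is set in your shell:

$ echo "-Wl,-rpath=$ORIGIN/../lib"    # double quotes: the shell expands the (empty) variable
-Wl,-rpath=/../lib
$ echo '-Wl,-rpath=$ORIGIN/../lib'    # single quotes preserve the literal string
-Wl,-rpath=$ORIGIN/../lib

Single quotes survive the interactive shell, but make performs another round of expansion when it runs the link command, which is exactly what bites us later on.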

I know, this is a cliffhanger. Why did I choose option three? Hold tight. You first need some background on why I was working with rpaths. I was using Python and planned to compile mod_wsgi as my application server, but it requires Python to be available as a shared object. As a result, when compiling Python you can give it the --enable-shared flag to have it build the shared object. The only downside is that the Python interpreter now needs to be able to find that shared object. After my first attempt at compiling a default Python build, I ended up with this error:

$ /example/python/bin/python
/home/vagrant/python/bin/python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory

Well, that wasn’t what I wanted. I needed to do one of the following:

  1. Set the LD_LIBRARY_PATH at runtime (a quick example follows this list).

  2. Set the RPATH at compile time so it references the correct libpython.so when executed.
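
For reference, option one looks roughly like this (using the install prefix from this build):

# works, but only as long as every caller remembers to set the variable
$ LD_LIBRARY_PATH=/example/python/lib /example/python/bin/python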

The biggest downside with LD_LIBRARY_PATH is that it needs to be set as an environment variable, which is a bit of a pain to remember every time you want to use Python. Instead, I decided to go with option number two (RPATH), which led me to this first attempt at compiling with an rpath:

LDFLAGS='-Wl,-rpath=$ORIGIN/../lib' ./configure --prefix=/example/python --enable-shared

The configure runs fine, so make should be a piece of cake:

$ make
...
gcc -pthread -shared -Wl,-rpath=RIGIN/../lib -Wl,-hlibpython2.7.so.1.0 -o libpython2.7.so.1.0 Modules/getbuildinfo.o Parser/bitset.o Parser/metagrammar.o Parser/firstsets.o ...  Modules/pwdmodule.o  Modules/_sre.o  Modules/_codecsmodule.o -lpthread -ldl  -lutil  -lm ; \
        ln -f libpython2.7.so.1.0 libpython2.7.so;
...

Notice the RIGIN/../lib missing the $O portion? make performed its own variable expansion on the link command and swallowed $O (which it reads as an empty make variable), leaving just RIGIN. The variable needs to be passed all the way to the linker as the literal $ORIGIN/../lib for this trick to work. After a fair amount of trial and error, the final command to get the variable passed to the linker correctly was:

LDFLAGS='-Wl,-rpath=\$$ORIGIN/../lib' ./configure --prefix=/example/python --enable-shared

The $$ gets a literal dollar sign past make’s expansion, and the backslash keeps the shell that make spawns from expanding it, so the linker finally receives $ORIGIN intact. I was able to verify the header using readelf as shown above, and the final confirmation came by running:

$ /example/python/bin/python
Python 2.7.8 (default, Aug 26 2014, 06:15:54)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

If you’d like to triple check that it’s loading the correct library (i.e. your compiled one and not the system one, if that happens to exist), you can use strace:

$ strace -e open,stat python/bin/python
...
open("/example/python/bin/../lib/tls/x86_64/libpython2.7.so.1.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat("/example/python/bin/../lib/tls/x86_64", 0x7fffa5a4d520) = -1 ENOENT (No such file or directory)
open("/example/python/bin/../lib/tls/libpython2.7.so.1.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat("/example/python/bin/../lib/tls", 0x7fffa5a4d520) = -1 ENOENT (No such file or directory)
open("/example/python/bin/../lib/x86_64/libpython2.7.so.1.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat("/example/python/bin/../lib/x86_64", 0x7fffa5a4d520) = -1 ENOENT (No such file or directory)
open("/example/python/bin/../lib/libpython2.7.so.1.0", O_RDONLY) = 3
...

You can see that it eventually finds the library in the /example/python/lib directory.
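
A lighter-weight check is to let ldd resolve the library for you; since it honors the rpath, libpython should point back into the bundled lib directory (output abbreviated):

$ ldd /example/python/bin/python | grep libpython
    libpython2.7.so.1.0 => /example/python/lib/libpython2.7.so.1.0 (0x...)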

Conclusion

Obviously, choosing option three wasn’t the easiest or most obvious approach — compiling relocatable binaries can be hard (and sometimes impossible). However, using relative shebangs and rpaths helps make it easier and allows more flexibility in your application deployment, while still being reliable.
