- PX4 Docker Containers
- Prerequisites
- Container Hierarchy
- Use the Docker Container
- Virtual Machine Support
- Running GUI Apps in Docker
PX4 Docker Containers
Docker containers are provided for the complete PX4 development toolchain, including NuttX- and Linux-based hardware builds, Gazebo simulation, and ROS.
This topic shows how to use the available Docker containers to access the build environment on a local Linux computer.
:::note Dockerfiles and README can be found on GitHub here. They are built automatically on Docker Hub. :::
Prerequisites
:::note
PX4 containers are currently only supported on Linux (if you don’t have Linux you can run the container inside a virtual machine). Do not use boot2docker
with the default Linux image because it contains no X-Server.
:::
Install Docker for your Linux computer, preferably using one of the Docker-maintained package repositories to get the latest stable version. You can use either the Enterprise Edition or (free) Community Edition.
For local installation of non-production setups on Ubuntu, the quickest and easiest way to install Docker is to use the convenience script as shown below (alternative installation methods are found on the same page):
curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh
The default installation requires that you invoke Docker as the root user (i.e. using sudo). However, for building the PX4 firmware we recommend using Docker as a non-root user. That way your build folder won't be owned by root after using Docker.
# Create docker group (may not be required)
sudo groupadd docker
# Add your user to the docker group.
sudo usermod -aG docker $USER
# Log in/out again before using docker!
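After logging back in, you can confirm that Docker works without sudo by starting a small throwaway container (this pulls the standard hello-world test image from Docker Hub):
# Verify that docker runs without sudo (downloads the hello-world test image)
docker run --rm hello-world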
Container Hierarchy
The available containers are on Github here.
These allow testing of various build targets and configurations (the included tools can be inferred from their names).
The containers are hierarchical, such that child containers have the functionality of their parents.
For example, below you can see that the docker container with NuttX build tools (px4-dev-nuttx-focal) does not include ROS 2, while the simulation containers do:
- px4io/px4-dev-base-focal
  - px4io/px4-dev-nuttx-focal
  - px4io/px4-dev-simulation-focal
    - px4io/px4-dev-ros-noetic
      - px4io/px4-dev-ros2-foxy
The most recent version can be accessed using the latest tag: px4io/px4-dev-nuttx-bionic:latest (available tags are listed for each container on hub.docker.com. For example, the px4io/px4-dev-nuttx-bionic tags can be found here).
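For example, that image can be pulled directly (any tag listed on hub.docker.com can be used in place of latest):
# Fetch the most recent NuttX development container
docker pull px4io/px4-dev-nuttx-bionic:latest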
:::tip
Typically you should use a recent container, but not necessarily the latest
(as this changes too often).
:::
Use the Docker Container
The following instructions show how to build PX4 source code on the host computer using a toolchain running in a docker container. The information assumes that you have already downloaded the PX4 source code to src/PX4-Autopilot, as shown:
mkdir src
cd src
git clone https://github.com/PX4/PX4-Autopilot.git
cd PX4-Autopilot
Helper Script (docker_run.sh)
The easiest way to use the containers is via the docker_run.sh helper script.
This script takes a PX4 build command as an argument (e.g. make tests). It starts Docker with a recent (hard-coded) version of the appropriate container and sensible environment settings.
For example, to build SITL you would call (from within the /PX4-Autopilot directory):
./Tools/docker_run.sh 'make px4_sitl_default'
Or to start a bash session using the NuttX toolchain:
./Tools/docker_run.sh 'bash'
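Any other PX4 build command can be passed in the same way; for example, the make tests command mentioned above would be run as:
./Tools/docker_run.sh 'make tests'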
:::tip The script is convenient because you don't need to know much about Docker or think about which container to use. However, it is not particularly robust! The manual approach discussed in the section below is more flexible and should be used if you have any problems with the script. :::
Calling Docker Manually
The syntax of a typical command is shown below.
This runs a Docker container that has support for X forwarding (which makes the simulation GUI available from inside the container).
It maps the directory <host_src> from your computer to <container_src> inside the container and forwards the UDP port needed to connect QGroundControl.
With the --privileged option it will automatically have access to the devices on your host (e.g. a joystick and GPU). If you connect/disconnect a device you have to restart the container.
# enable access to xhost from the container
xhost +
# Run docker
docker run -it --privileged \
--env=LOCAL_USER_ID="$(id -u)" \
-v <host_src>:<container_src>:rw \
-v /tmp/.X11-unix:/tmp/.X11-unix:ro \
-e DISPLAY=:0 \
-p 14570:14570/udp \
--name=<local_container_name> <container>:<tag> <build_command>
Where:
- <host_src>: The host computer directory to be mapped to <container_src> in the container. This should normally be the PX4-Autopilot directory.
- <container_src>: The location of the shared (source) directory when inside the container.
- <local_container_name>: A name for the docker container being created. This can later be used if we need to reference the container again.
- <container>:<tag>: The container with version tag to start - e.g.: px4io/px4-dev-ros:2017-10-23.
- <build_command>: The command to invoke on the new container. E.g. bash is used to open a bash shell in the container.
The concrete example below shows how to open a bash shell and share the directory ~/src/PX4-Autopilot on the host computer.
# enable access to xhost from the container
xhost +
# Run docker and open bash shell
docker run -it --privileged \
--env=LOCAL_USER_ID="$(id -u)" \
-v ~/src/PX4-Autopilot:/src/PX4-Autopilot/:rw \
-v /tmp/.X11-unix:/tmp/.X11-unix:ro \
-e DISPLAY=:0 \
-p 14570:14570/udp \
--name=mycontainer px4io/px4-dev-ros:2017-10-23 bash
If everything went well you should now be in a new bash shell. Verify that everything works by running, for example, SITL:
cd src/PX4-Autopilot #This is <container_src>
make px4_sitl_default gazebo
Re-enter the Container
The docker run command can only be used to create a new container. To get back into this container (which will retain your changes) simply do:
# start the container
docker start container_name
# open a new bash shell in this container
docker exec -it container_name bash
If you need multiple shells connected to the container, just open a new shell and execute that last command again.
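For example, to re-enter the mycontainer container created in the earlier example:
# start the container created above
docker start mycontainer
# open a bash shell inside it
docker exec -it mycontainer bash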
Clearing the Container
Sometimes you may need to clear a container altogether. You can do so using its name:
docker rm mycontainer
If you can’t remember the name, then you can list inactive container ids and then delete them, as shown below:
docker ps -a -q
45eeb98f1dd9
docker rm 45eeb98f1dd9
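If you want to remove all stopped containers in one go, recent Docker versions also provide a prune command (optional, shown here as a convenience):
# Remove all stopped containers (asks for confirmation)
docker container prune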
QGroundControl
When running a simulation instance e.g. SITL inside the docker container and controlling it via QGroundControl from the host, the communication link has to be set up manually. The autoconnect feature of QGroundControl does not work here.
In QGroundControl, navigate to Settings and select Comm Links. Create a new link that uses the UDP protocol. The port depends on the configuration used, e.g. port 14570 for the SITL config. The IP address is that of your docker container, usually within 172.17.0.0/16 when using the default bridge network. The IP address of the docker container can be found with the following command (assuming the container name is mycontainer):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer
Troubleshooting
Permission Errors
The container creates files as needed with a default user - typically “root”. This can lead to permission errors where the user on the host computer is not able to access files created by the container.
The example above uses the line --env=LOCAL_USER_ID="$(id -u)" to create a user in the container with the same UID as the user on the host. This ensures that all files created within the container will be accessible on the host.
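If an earlier run has already left root-owned files behind, one way to reclaim them on the host is shown below (a sketch; adjust the path to your actual source directory):
# Reclaim ownership of the source tree on the host
sudo chown -R $USER:$USER ~/src/PX4-Autopilot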
Graphics Driver Issues
It's possible that running Gazebo will result in an error message similar to the following:
libGL error: failed to load driver: swrast
In that case the native graphics driver for your host system must be installed. Download the right driver and install it inside the container. For Nvidia drivers the following command should be used (otherwise the installer will see the loaded modules from the host and refuse to proceed):
./NVIDIA-DRIVER.run -a -N --ui=none --no-kernel-module
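To check whether the container is now using the correct renderer you can query OpenGL from inside the container (assuming glxinfo, from the mesa-utils package, is available there):
# Show which OpenGL renderer is active inside the container
glxinfo | grep "OpenGL renderer"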
More information on this can be found here.
Virtual Machine Support
Any recent Linux distribution should work.
The following configuration is tested:
- OS X with VMWare Fusion and Ubuntu 14.04 (a Docker container with GUI support on Parallels makes the X-Server crash).
Memory
Use at least 4GB memory for the virtual machine.
Compilation problems
If compilation fails with errors like this:
The bug is not reproducible, so it is likely a hardware or OS problem.
c++: internal compiler error: Killed (program cc1plus)
Try disabling parallel builds.
Allow Docker Control from the VM Host
Edit /etc/defaults/docker and add this line:
DOCKER_OPTS="${DOCKER_OPTS} -H unix:///var/run/docker.sock -H 0.0.0.0:2375"
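After saving the file, restart the Docker daemon inside the VM so the new options take effect (assuming a systemd-based distribution):
sudo systemctl restart docker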
You can then control docker from your host OS:
export DOCKER_HOST=tcp://<ip of your VM>:2375
# run some docker command to see if it works, e.g. ps
docker ps
Running GUI Apps in Docker
I've been doing all of my real (paid) work on VMs / containers for a while now, but when it comes to writing Java code for some university projects I still need to move away from vim and install a full-blown IDE in order to be productive. This has been bothering me for quite some time, but this week I was finally able to put the pieces together to run NetBeans in a Docker container, so that I can avoid installing a lot of Java stuff on my machine that I don't use on a daily basis.
There are a few different options to run GUI applications inside a Docker container like using SSH with X11 forwarding, or VNC but the simplest one that I figured out was to share my X11 socket with the container and use it directly.
The idea is pretty simple and you can easily give it a try by running a Firefox container, using the following Dockerfile as a starting point:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y firefox
# Replace 1000 with your user / group id
RUN export uid=1000 gid=1000 && \
mkdir -p /home/developer && \
echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
echo "developer:x:${uid}:" >> /etc/group && \
echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
chmod 0440 /etc/sudoers.d/developer && \
chown ${uid}:${gid} -R /home/developer
USER developer
ENV HOME /home/developer
CMD /usr/bin/firefox
Build the image with:
docker build -t firefox .
Then run the container with:
docker run -ti --rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
firefox
If all goes well you should see Firefox running from within a Docker container.
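If the application cannot connect to the display, you may also need to allow local containers to talk to your X server, similar to the xhost + step used earlier (this variant is slightly more restrictive than a bare xhost +):
# Allow local (non-network) clients such as containers to use the X server
xhost +local: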
Getting a NetBeans container up and running
Preparing a NetBeans base image was not that straightforward, since I needed to install some additional dependencies (namely the libxext-dev, libxrender-dev and libxtst-dev packages) to get it to connect to the X11 socket properly. I also had trouble using OpenJDK and had to switch to Oracle's Java for it to work.
After lots of trial and error, I was finally able to make it work, and the result is a base image available on Docker Hub with sources on GitHub at https://github.com/fgrehm/docker-netbeans.
Here's a quick demo of it in action.