Running a Modified Copy of the Docker Image

Hello,
I am completely new to docker, and I have been trying to run a slightly modified copy of the docker image running on the za6. I think I have successfully made a copy of the image, but I am not sure how to plug it into the script so that my copy runs instead of the default image. None of my modifications to the launch-pathpilot-launcher script have worked.

Are there any guides available to show me how to run a custom docker image on the za6?

Or is there any more documentation on what is happening in the launch-pathpilot-launcher script and subsequent scripts it calls?

Hi,

First, I wanted to check whether you are still stuck on this, or whether you managed to get what you needed and start the container you wanted.

If you are still having an issue, can you give me any additional details on what you are trying to accomplish with your customized docker container, and perhaps a list of the type of changes you are trying to make?

For example: Are you trying to make filesystem changes, or expose a new port, or mount a new directory, or something slightly more complicated like modify the entrypoint or have an extra daemon start?

Hi Ryan, thanks for getting back to me. Yes, I would still very much like to know how to build and run a modified docker container. I haven’t made any progress except creating what looks to be an image of my modified container with docker commit.

I have a related question on the forum where I asked something similar, with the main goal of connecting the arm to another computer running ROS. There, I did manage to connect the arm to another computer by changing the /etc/hosts file on the other computer, but in the past I’ve always set the ROS_IP environment variable on the machine so that it didn’t matter what name other computers used for it. Adding that environment variable was my first reason to modify the image.
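For reference, the ROS_IP approach mentioned above usually looks something like this on a plain ROS machine (the addresses here are placeholders, not real za6 values):

```shell
# Placeholder addresses; substitute your own network. ROS_IP tells other ROS nodes
# how to reach this machine directly, so hostname resolution via /etc/hosts on the
# peers no longer matters.
export ROS_IP=192.168.1.50                          # this machine's reachable address
export ROS_MASTER_URI=http://192.168.1.10:11311     # machine running roscore
```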

It would be nice to be able to add some of our custom nodes to launch with the arm, at least during testing, in order to eliminate network issues as a potential problem.

It would also be nice to have nano in the container so I can edit files. I know it exists on the host, so maybe there is a way to use the host’s nano from inside the container, but I don’t know how. In fact, the only modification I tried to make to the image was installing nano in the container.

We also plan to connect the arm to a ROS vision system and eventually other machinery. That’s still further out, but I envision needs for changes no matter what route we take. It may make sense to run the core on one of those machines (thus needing the ROS_MASTER_URI changed on the arm) or it may make sense to do a multi-master system (needing our multimaster node on the arm), or it may make sense to remote launch the other nodes from the arm (needing to edit the launch files).

We will eventually want the non-simulation config’s roslaunch to start on boot. On a normal Linux system, I’d just add a systemd unit. I imagine I could still start the docker container on boot with systemd, but I’d have to bypass the PathPilot script and all the user options it goes through. I can’t enter the container from the terminal until I choose one of the three configs, so I am guessing the container actually gets started after that choice is made.

Your last comment will be the one I address first, because it relates to everything else.

Yes, the Launcher runs first, you select a configuration to run, and then it starts the Robot container which will also start the RobotUI.

You can launch the Robot container directly via the docker CLI (which also answers your question about launching the Robot container at system boot; you could drop that into systemd like you mentioned). The drawback is that the exact docker run command you’d use is specific to each version (and possibly to each run). The Launcher is responsible for building the parameters that are passed to the underlying docker run command, while also handling things like downloading new updates, etc.
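For the boot question specifically, a systemd unit along these lines could bring the previously-run container back up at startup. This is a sketch, not something we ship: the docker path and the container name (taken from the docker restart approach discussed later in this thread) are assumptions you’d adjust for your system.

```ini
# /etc/systemd/system/robotui.service (hypothetical name and paths)
[Unit]
Description=Restart the PathPilot RobotUI container on boot
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker restart ros-None-ui
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable robotui.service. Note this bypasses the Launcher entirely, so you give up update checks, as discussed below.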

With that background, here are two options to consider.

Option A. The easy-but-limited way. Currently, as of v3.2.2, we do not remove the RobotUI container when it is stopped. This means that, until the next run of the Launcher, you can always restart the last-run config.

For example:

  1. Start Launcher, select a config. I used Sim to test this as I replied to you.
  2. After RobotUI comes up, connect to a terminal inside the container.
  3. Customize the container to your heart’s content. I tested by installing nano inside the container and confirming it was still there after a restart, but you could also likely install some custom ROS packages, add a script that starts the nodes in those packages, etc.
  4. Exit the RobotUI (lower left).
  5. Restart your customized container (even after a reboot) with docker restart ros-None-ui.

To start over with a “fresh” Tormach container because you hosed something, just rerun the Launcher.
If you put all your customizations into a script, you can also copy the script in with docker cp.
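Steps 2–5 above can be sketched as a script. The container name comes from step 5; the package list and customize.sh are assumptions for illustration. As written it defaults to a dry run that only prints the commands, so you can review them before touching the real container:

```shell
# DRY_RUN=1 (the default here) prints each command instead of executing it.
# Set DRY_RUN=0 to actually run them against the container.
CONTAINER="ros-None-ui"
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run docker exec "$CONTAINER" bash -c "apt-get update && apt-get install -y nano"
run docker cp ./customize.sh "$CONTAINER":/root/customize.sh   # hypothetical setup script
run docker exec "$CONTAINER" bash /root/customize.sh
run docker restart "$CONTAINER"
```

Remember these changes live only in that container instance; the next Launcher run replaces it, which is why keeping everything in one script you can docker cp back in is handy.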

Option B. If you feel you are a docker expert, there are some additional things you can do. A Google search for “reverse engineer docker run from docker inspect” will turn up results you can use to essentially rebuild the docker run command that our Launcher builds.

Having done that, you can use the Dockerfile format to customize the image as much as you want, and then use that docker run command to start your very custom image correctly.
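As a rough starting point, these are the kinds of inspect queries involved; each one surfaces a piece of what you’d need to reassemble into a docker run by hand. The container name is the one from Option A above; the script only prints the commands so you can run them yourself against your real container:

```shell
# Sketch only: each printed command extracts one piece of the original run config
# (image, -e flags, -v mounts, -p ports, entrypoint). Drop the surrounding echo
# to execute the queries directly.
C="ros-None-ui"
for q in \
  '{{.Config.Image}}' \
  '{{json .Config.Env}}' \
  '{{json .Mounts}}' \
  '{{json .HostConfig.PortBindings}}' \
  '{{json .Config.Entrypoint}}'
do
  echo "docker inspect --format '$q' $C"
done
```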

I’m not giving full examples here because it’s a bit more advanced. If you know how to use Dockerfiles, how to create a custom image based on a previous image, and your way around docker inspect well enough to craft a docker run by hand (or with a tool you trust), then you don’t need me to hold your hand through those steps.

If you aren’t familiar with those steps, hopefully this points your Google searches in the right direction to learn more about Docker, and I wish you happy hacking if you head that way!

Thank you so much! That definitely helps. I’ve also been delving a little deeper into the system and understanding more of the setup you have going. I think I have almost everything I need, but I do have a few more questions that might keep me from going around in circles or missing obvious things. I didn’t realize at the beginning that there were two containers running.

In the supervisord.conf file that lists the programs to run from the entrypoint, I saw a roslaunch as one of the programs, and I thought that was the launch I was seeing. But now I think it never actually runs from there. Rather, a second container is spun up by the first container’s launcher_ui, and that second container (made from the same image) is what actually runs the roslaunch.

But why are there two containers running from the same image? Do I need both of the containers if I know which launch file and config I want to run and am not worried about updates for the time being?

It sounds like you are saying I only need to run the second container if I can craft the correct docker run command. And indeed it looks like the first container only runs a few programs and runs the launcher_ui with the specific purpose of crafting the docker run command to run the second container with the roslaunch in it.

Thanks again!

You are correct: we spin up two containers from the same image. One is the launcher/updater, and then, with the options you choose, we launch the second container (the RobotUI) from the first one.

You only need the second container to actually control the Robot.

Either use docker restart to restart the second container after a shutdown, or build the docker run command that starts the RobotUI container from the image, like you suspected.

But as I mentioned above, you will miss out on any updates if you skip the first container’s launch. That might be OK if you are customizing the image enough that you don’t want anything from upstream, but I figured it was worth mentioning again.