Three weeks ago, I got infected with this contagious virus whose main symptom is that you try to containerize everything :) Sure, I had heard about it for quite some time, but I had never gotten involved with it myself.

The source of my infection was Brian Christner, who recently became a Docker Captain. During the weekend my wife and I spent in Switzerland, I got a lot of information and asked even more questions, which Brian was happy to answer :)

Both Brian and the Docker docs are quite clear that Docker data volumes are the way forward, so any data I want to store persistently, I store in named data volumes. At work I am used to having a complete DTAP (Development, Testing, Acceptance and Production) environment, and with Docker I can easily replicate that setup for my blog.

For this to work well, however, I want a quick way of copying the named data volume of the production instance to a second named data volume for the development or testing instance. This ensures I can test my new developments (Ghost updates, new themes, etc.) directly against the latest data from my production blog.

I know you can simply copy the files under /var/lib/docker/volumes/, but that doesn't seem like the cleanest way to me, as it bypasses the Docker daemon completely.

So I created a small bash script that takes two arguments (the names of the source and destination volumes) and copies the data from the source volume to the destination volume, using the alpine image.
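The full script lives on GitHub, but the core idea can be sketched roughly like this (the function names, messages and exact checks below are illustrative, not necessarily what the real script uses):

```shell
#!/usr/bin/env bash
# Sketch of a volume-clone script: copy the contents of one named Docker
# volume into another by mounting both in a throwaway alpine container.

usage() {
  echo "Usage: $(basename "$0") <source-volume> <destination-volume>" >&2
}

clone_volume() {
  local src="$1" dst="$2"

  # Sanity check: the source volume must already exist
  if ! docker volume inspect "$src" >/dev/null 2>&1; then
    echo "Error: source volume '$src' does not exist" >&2
    return 1
  fi

  # Create the destination volume (docker volume create is a no-op
  # if a volume with that name already exists)
  docker volume create "$dst" >/dev/null

  # Mount both volumes in a temporary alpine container and copy
  # everything across, preserving ownership and permissions (-a)
  docker run --rm \
    -v "$src":/from \
    -v "$dst":/to \
    alpine sh -c 'cd /from && cp -a . /to'
}

main() {
  if [ "$#" -ne 2 ]; then
    usage
    return 1
  fi
  clone_volume "$1" "$2"
}

main "$@"
```

The trick is the last docker run: alpine is tiny, the container is removed as soon as the copy finishes (--rm), and no data ever has to leave the Docker daemon's control.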

By calling the script with the arguments www-data www-dev-data, you copy all data contained in the named volume www-data to www-dev-data (after some sanity checking, of course), creating a clone. Now I can use the named volume www-dev-data for the development version of my blog, ensuring I have the latest posts/data available when testing and developing.

You can find the latest version of the script on my repository on GitHub.

If you have any comments or remarks, or other great convenience scripts or functions of your own, please share them in a comment!