<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.3">Jekyll</generator><link href="https://blog.sunekeller.dk/feed.xml" rel="self" type="application/atom+xml" /><link href="https://blog.sunekeller.dk/" rel="alternate" type="text/html" /><updated>2023-04-04T19:29:37+02:00</updated><id>https://blog.sunekeller.dk/feed.xml</id><title type="html">Sune Keller’s blog</title><subtitle></subtitle><author><name>Sune Keller</name></author><entry><title type="html">Vault + Swarm Docker secrets plugin (proof of concept)</title><link href="https://blog.sunekeller.dk/2019/04/vault-swarm-plugin-poc/" rel="alternate" type="text/html" title="Vault + Swarm Docker secrets plugin (proof of concept)" /><published>2019-04-05T00:00:00+02:00</published><updated>2019-04-05T00:00:00+02:00</updated><id>https://blog.sunekeller.dk/2019/04/vault-swarm-plugin-poc</id><content type="html" xml:base="https://blog.sunekeller.dk/2019/04/vault-swarm-plugin-poc/">&lt;h1 id=&quot;background&quot;&gt;Background&lt;/h1&gt;

&lt;p&gt;Secrets have been part of Swarm Mode since its inception, making it trivial to provide generic, static secrets to your distributed services. However, not all secrets are equal, and some use cases call for a more dynamic approach. Docker Engine allows installing a plugin and using it as a driver when creating secrets, letting the value of the secret be determined at runtime, thus enabling dynamic use cases. My &lt;a href=&quot;https://dockercon19.smarteventscloud.com/connect/sessionDetail.ww?SESSION_ID=282000&quot;&gt;talk at DockerCon 2019 in San Francisco&lt;/a&gt; will cover how to write a secrets plugin that fetches dynamic secret values from &lt;a href=&quot;https://www.vaultproject.io&quot;&gt;HashiCorp Vault&lt;/a&gt;, and how to deploy it as a Swarm service.&lt;/p&gt;

&lt;h2 id=&quot;static-vs-dynamic-secrets&quot;&gt;Static vs. dynamic secrets&lt;/h2&gt;

&lt;p&gt;Generic, static secrets will only get you so far. Once you reach a large enough number of secrets, you’ll either need a very good naming convention or have to label secrets very carefully. Even then, they can become cumbersome to manage, leaving you to choose between overly broad policies and drowning in bureaucracy. Other secret management solutions exist, and in this post I will discuss a specific use case with HashiCorp Vault.&lt;/p&gt;

&lt;h2 id=&quot;basic-example-of-static-secrets&quot;&gt;Basic example of &lt;em&gt;static&lt;/em&gt; secrets&lt;/h2&gt;

&lt;p&gt;Here’s a basic but complete example (adapted from the &lt;a href=&quot;https://docs.docker.com/engine/swarm/secrets/#use-secrets-in-compose&quot;&gt;official documentation&lt;/a&gt;) of using the built-in secrets feature:&lt;/p&gt;

&lt;p&gt;First, create passwords for a database:&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; /dev/urandom | &lt;span class=&quot;nb&quot;&gt;tr&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-dc&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'0-9a-zA-Z!@#$%^&amp;amp;*_+-'&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;head&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 15 | docker secret create db_password -
&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; /dev/urandom | &lt;span class=&quot;nb&quot;&gt;tr&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-dc&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'0-9a-zA-Z!@#$%^&amp;amp;*_+-'&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;head&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 15 | docker secret create db_root_password -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then write a Docker Compose file:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;version&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;3.7&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;services&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;db&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;mysql:latest&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;command&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;--default-authentication-plugin=mysql_native_password&quot;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# See https://github.com/docker-library/wordpress/issues/313#issuecomment-400836783&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;volumes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;db_data:/var/lib/mysql&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;environment&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;MYSQL_ROOT_PASSWORD_FILE&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/run/secrets/db_root_password&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;wordpress&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;MYSQL_USER&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;wordpress&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;MYSQL_PASSWORD_FILE&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/run/secrets/db_password&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;db_root_password&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;db_password&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;wordpress&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;depends_on&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;db&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;wordpress:latest&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;published&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;8000&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;target&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;80&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;environment&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;WORDPRESS_DB_HOST&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;db:3306&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;WORDPRESS_DB_USER&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;wordpress&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;WORDPRESS_DB_PASSWORD_FILE&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/run/secrets/db_password&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;db_password&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;db_password&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;external&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;db_root_password&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;external&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;volumes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;db_data&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Finally, deploy the stack:&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;docker stack deploy &lt;span class=&quot;nt&quot;&gt;--compose-file&lt;/span&gt; docker-compose.yml example1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Wait a while for the database and blog app to start up, and you’ll be able to visit &lt;a href=&quot;http://localhost:8000&quot;&gt;http://localhost:8000&lt;/a&gt; and see the working WordPress site, all without ever having seen the password yourself.&lt;/p&gt;

&lt;p&gt;As you can see in the Docker Compose file, both MySQL and WordPress are instructed to read the database passwords from files inside the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/run/secrets&lt;/code&gt; directory, which is where Swarm delivers secrets to containers.&lt;/p&gt;

&lt;h2 id=&quot;how-static-secrets-work-inside-swarm&quot;&gt;How static secrets work inside Swarm&lt;/h2&gt;

&lt;p&gt;When you use Swarm secrets &lt;em&gt;without&lt;/em&gt; a plugin, secret data and metadata are saved in the Raft store of the Swarm managers. By design, secret data cannot be updated, and the Docker CLI offers no commands to update secret metadata (i.e. labels). You &lt;em&gt;can&lt;/em&gt;, however, update secret &lt;em&gt;labels&lt;/em&gt; through the &lt;a href=&quot;https://docs.docker.com/engine/api/v1.39/#operation/SecretUpdate&quot;&gt;Docker Engine API&lt;/a&gt;.&lt;/p&gt;
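&lt;p&gt;As a sketch (assuming Engine API v1.39 on the default Unix socket, and an existing secret named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;db_password&lt;/code&gt;), a label update through the API could look like this:&lt;/p&gt;

```shell
# Sketch only: assumes a secret named "db_password" already exists and
# the Engine API (v1.39 here) is reachable on the default Unix socket.
SECRET_NAME=db_password

# The update endpoint needs the secret ID and its current version index.
SECRET_ID=$(docker secret inspect --format '{{.ID}}' "$SECRET_NAME" 2>/dev/null || true)
VERSION=$(docker secret inspect --format '{{.Version.Index}}' "$SECRET_NAME" 2>/dev/null || true)

# The spec is replaced wholesale, so the name must be resent along with
# the new labels; the secret data itself cannot be changed this way.
PAYLOAD=$(printf '{"Name": "%s", "Labels": {"rotation-due": "%s"}}' "$SECRET_NAME" "2019-10-01")

curl --unix-socket /var/run/docker.sock \
  -X POST "http://localhost/v1.39/secrets/$SECRET_ID/update?version=$VERSION" \
  -H 'Content-Type: application/json' \
  -d "$PAYLOAD" 2>/dev/null || true
```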

&lt;p&gt;Once you create a service with a secret attached, the secret values are placed as files on a private tmpfs (i.e. an in-memory file-system) mounted inside the container, rather than in environment variables, which are &lt;a href=&quot;https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/&quot;&gt;too easily divulged to unconcerned parties&lt;/a&gt;.&lt;/p&gt;
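&lt;p&gt;You can see this for yourself once the example stack above is running (the container name filter below assumes the default &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;example1_db&lt;/code&gt; naming):&lt;/p&gt;

```shell
# Find a running task container of the db service and show that
# /run/secrets is a tmpfs mount containing the secret files.
CTR=$(docker ps -q -f name=example1_db 2>/dev/null || true)
docker exec "$CTR" sh -c 'mount | grep /run/secrets' 2>/dev/null || true
docker exec "$CTR" ls -l /run/secrets 2>/dev/null || true
```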

&lt;h1 id=&quot;the-problem&quot;&gt;The problem&lt;/h1&gt;

&lt;p&gt;You can configure Vault with e.g. your database’s sysadmin credentials, and then use a combination of policies and authentication mechanisms to have Vault dynamically create time-limited user accounts with role-based grants. The most basic authentication method Vault offers is based on &lt;a href=&quot;https://www.vaultproject.io/docs/concepts/tokens.html&quot;&gt;opaque tokens&lt;/a&gt;. Tokens can have policies attached to them, indicating which areas in Vault they give access to. But how do you get them to your containers?&lt;/p&gt;
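&lt;p&gt;As a sketch, minting such a token with the Vault CLI could look like this (the policy name &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;myapp&lt;/code&gt; and the limits are placeholders):&lt;/p&gt;

```shell
# Create an opaque token tied to the "myapp" policy, with a short TTL
# and a use limit so a leaked token is both bounded and detectable.
# Assumes VAULT_ADDR and a sufficiently privileged VAULT_TOKEN are set.
SERVICE_NAME=myapp
vault token create \
  -policy="$SERVICE_NAME" \
  -ttl=1h \
  -use-limit=4 2>/dev/null || true
```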

&lt;p&gt;You &lt;em&gt;could&lt;/em&gt; of course create a static, long-lived token with the right policies attached, and then type that in as a static secret in Swarm. Then you could attach it to the relevant service, and the service could then communicate directly with Vault to read the secret data, e.g. database credentials. But then you get the problem of rotating the token you typed into Swarm, which either becomes a bureaucratic, repetitive task, or else you risk having to put that secret into your CI/CD system, which can lead to &lt;a href=&quot;https://www.hashicorp.com/resources/what-is-secret-sprawl-why-is-it-harmful&quot;&gt;secret sprawl&lt;/a&gt;. If you &lt;em&gt;don’t&lt;/em&gt; rotate the token, you instead run the risk of the token some day being intercepted, and then you &lt;em&gt;have&lt;/em&gt; to rotate it - if you notice, that is.&lt;/p&gt;

&lt;p&gt;Ideally, you would give a new token to every instance of your service, and use Vault’s use-limit feature to make sure you can detect interception and ensure a stolen token cannot be reused. You can read more about this concept in Vault’s documentation on &lt;a href=&quot;https://www.vaultproject.io/docs/concepts/response-wrapping.html#overview&quot;&gt;Response Wrapping&lt;/a&gt;. However, with static Swarm secrets, there is no way to make use of response wrapping: if you typed a response-wrapped token into Swarm and attached it to a service, only the first task to unwrap it would succeed, which is by design.&lt;/p&gt;
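&lt;p&gt;The mechanics can be sketched with the Vault CLI (a running Vault and a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;myapp&lt;/code&gt; policy are assumed; the names are placeholders):&lt;/p&gt;

```shell
# Mint a token for the "myapp" policy, but have Vault hand back only a
# single-use wrapping token instead of the real one.
WRAPPED=$(vault token create -policy=myapp -wrap-ttl=60s -field=wrapping_token 2>/dev/null || true)

# The first unwrap yields the real token; a second unwrap of the same
# wrapping token is refused, which is how interception is detected.
vault unwrap "$WRAPPED" 2>/dev/null || true
MSG=$(vault unwrap "$WRAPPED" 2>/dev/null || echo "wrapping token already used")
echo "$MSG"
```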

&lt;h1 id=&quot;a-solution&quot;&gt;A solution&lt;/h1&gt;

&lt;p&gt;In order to solve this challenge in a satisfying way, you’ll need to use one of the several extension points of Docker Swarm.&lt;/p&gt;

&lt;h2 id=&quot;introduction-to-the-pluggable-secrets-backend&quot;&gt;Introduction to the pluggable secrets backend&lt;/h2&gt;

&lt;p&gt;The &lt;em&gt;pluggable secrets backend&lt;/em&gt; allows you to specify a “driver” when creating a secret, e.g. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker secret create --driver &amp;lt;driver_name&amp;gt; &amp;lt;name&amp;gt; &amp;lt;file|-&amp;gt;&lt;/code&gt;. The plugin must advertise that it implements the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;secretprovider&quot;&lt;/code&gt; interface, and Docker provides a &lt;a href=&quot;https://github.com/docker/go-plugins-helpers/blob/master/secrets/api.go&quot;&gt;helpful repository&lt;/a&gt; for getting started with writing such a plugin in Go.&lt;/p&gt;
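&lt;p&gt;In shell terms, using a secrets plugin has this generic shape (the plugin name here is the POC’s; the value passed on stdin is just a placeholder, since the plugin supplies the real value later):&lt;/p&gt;

```shell
# Install the secrets plugin, then create a secret backed by it; the
# plugin resolves the actual value when a task is started, not now.
PLUGIN=sirlatrom/docker-secretprovider-plugin-vault
docker plugin install --grant-all-permissions "$PLUGIN" 2>/dev/null || true
echo "placeholder" | docker secret create --driver "$PLUGIN" my_dynamic_secret - 2>/dev/null || true
```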

&lt;p&gt;When a driver is chosen for a secret, the Swarm manager still looks up the &lt;em&gt;metadata&lt;/em&gt; in the raft store, but will request the &lt;em&gt;data&lt;/em&gt; from the plugin with the given driver name. The corresponding plugin &lt;em&gt;must&lt;/em&gt; be installed on the &lt;em&gt;managers&lt;/em&gt; in the Swarm.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/liron-l&quot;&gt;Liron Levin&lt;/a&gt; from TwistLock contributed the pluggable secrets backend back in &lt;a href=&quot;https://github.com/docker/swarmkit/pull/2239&quot;&gt;2017&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;a-proof-of-concept-plugin&quot;&gt;A proof-of-concept plugin&lt;/h2&gt;

&lt;p&gt;My idea was to write a plugin that calls out to Vault to deliver secret values to Swarm service tasks. One of the requirements was that it should support response wrapping. It was not hard to write, given the &lt;a href=&quot;https://github.com/docker/go-plugins-helpers/blob/master/secrets/api.go&quot;&gt;go-plugins-helpers&lt;/a&gt; repo and the excellent official &lt;a href=&quot;https://www.vaultproject.io/api/libraries.html#go&quot;&gt;Vault Go client&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The plugin works like this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Receive a request including the secret name and labels; the service name, ID, and labels; and the task name and ID&lt;/li&gt;
  &lt;li&gt;Based on the secret labels, create a token on behalf of the service task, with a Vault policy attached to the token bearing the same name as the service, and then, &lt;em&gt;optionally&lt;/em&gt;:
    &lt;ol class=&quot;lower_alpha_list&quot;&gt;
      &lt;li&gt;Use that token to read a &lt;a href=&quot;https://www.vaultproject.io/api/secret/kv/kv-v2.html&quot;&gt;generic key/value secret&lt;/a&gt; from a specified path, and &lt;em&gt;optionally&lt;/em&gt;:
        &lt;ol class=&quot;lower_roman_list&quot;&gt;
          &lt;li&gt;Return a specific field inside that path&lt;/li&gt;
          &lt;li&gt;JSON-encode the returned value&lt;/li&gt;
        &lt;/ol&gt;
      &lt;/li&gt;
      &lt;li&gt;Optionally use response wrapping to deliver the returned value&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The source code for the proof-of-concept plugin is available on &lt;a href=&quot;https://gitlab.com/sirlatrom/docker-secretprovider-plugin-vault/&quot;&gt;GitLab&lt;/a&gt;.&lt;/p&gt;
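&lt;p&gt;The steps above are selected via secret labels. Using the POC’s own label names, creating a response-wrapped token secret could look like this:&lt;/p&gt;

```shell
# Create a plugin-backed secret whose labels tell the POC plugin to
# return a Vault token and to deliver it response-wrapped.
SECRET=wrapped_vault_token
printf '' | docker secret create \
  --driver sirlatrom/docker-secretprovider-plugin-vault \
  --label dk.almbrand.docker.plugin.secretprovider.vault.type=vault_token \
  --label dk.almbrand.docker.plugin.secretprovider.vault.wrap=true \
  "$SECRET" - 2>/dev/null || true
```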

&lt;h3 id=&quot;a-complication&quot;&gt;A complication&lt;/h3&gt;

&lt;p&gt;Now, when I first tried out the plugin, it worked as intended. However, when I scaled up the service to 2 replicas, I noticed two things:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;When setting the secret label that indicated that a generic token should be returned, sometimes the two replicas got the same value, and&lt;/li&gt;
  &lt;li&gt;When setting the label to use response wrapping, sometimes only one of the tasks would succeed, whereas the other would be told by Vault that the response wrapping token had already been used.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When investigating this puzzling finding, I read through some of the code that assigns secrets and other resources to Swarm nodes. The code sensibly caches secret values so that, on each node, any given secret is only requested once, whether from the raft store or from the secret plugin. However, that caching defeats plugins that are supposed to return an individual value for each task, e.g. when using response wrapping.&lt;/p&gt;

&lt;p&gt;I set about writing the necessary changes as PR &lt;a href=&quot;https://github.com/docker/swarmkit/pull/2735/&quot;&gt;#2735&lt;/a&gt; for docker/swarmkit. The changes are currently merged/vendored in &lt;a href=&quot;https://github.com/moby/moby/pull/38123&quot;&gt;moby/moby&lt;/a&gt;’s master branch, and &lt;em&gt;should&lt;/em&gt; ship with Docker 19.03.&lt;/p&gt;

&lt;h3 id=&quot;limitations&quot;&gt;Limitations&lt;/h3&gt;

&lt;p&gt;Until Docker 19.03 is released, you cannot use response wrapping with the plugin. It goes without saying that I do &lt;em&gt;not&lt;/em&gt; recommend using this plugin anywhere near production, or even in daily use. Rather, see it as an example of what &lt;em&gt;can&lt;/em&gt; be done with Swarm and its extension points.&lt;/p&gt;

&lt;p&gt;Also, the plugin currently relies on getting its &lt;em&gt;own&lt;/em&gt; access to Vault (a more privileged token that can perform the plugin’s functions) through suboptimal means. Because plugins in Docker, even if run as containers, are currently very different from regular containers, there are several features you cannot make use of. Namely, even though you can have a special type of Swarm service to install the plugin in your Swarm, there is currently no way for you to attach Swarm secrets to such a service, static or otherwise. This leaves you with the problem of safely bootstrapping the plugin itself. The only method I could think of, which really is half-baked, but works in this POC, is to give the plugin access to the Docker socket of the manager node it is installed on, and use a helper service to hold the bootstrapping token. The plugin then uses the Docker API to find the helper service’s container and reads the bootstrapping token from there.&lt;/p&gt;
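&lt;p&gt;One way that half-baked bootstrap could look in practice (heavily hedged; the service name, image, and variable name below are invented for illustration):&lt;/p&gt;

```shell
# A helper service pinned to managers holds the privileged bootstrap
# token in its environment. REPLACE_ME stands in for a real token.
docker service create --name vault-bootstrap-helper \
  --constraint node.role==manager \
  --env VAULT_BOOTSTRAP_TOKEN=REPLACE_ME \
  alpine:3.9 sleep 86400 2>/dev/null || true

# The plugin, having access to the manager's Docker socket, can then
# locate the helper's task container and read the token back out.
CTR=$(docker ps -q -f label=com.docker.swarm.service.name=vault-bootstrap-helper 2>/dev/null || true)
docker exec "$CTR" printenv VAULT_BOOTSTRAP_TOKEN 2>/dev/null || true
```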

&lt;h3 id=&quot;usage&quot;&gt;Usage&lt;/h3&gt;

&lt;p&gt;See the &lt;a href=&quot;https://gitlab.com/sirlatrom/docker-secretprovider-plugin-vault/blob/master/README.md&quot;&gt;README&lt;/a&gt; in the repo for instructions; here is what it says:&lt;/p&gt;

&lt;p&gt;Run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;./rebuild.sh&lt;/code&gt;, and you should get output like the following. Note the different tokens in the two task instances of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snitch&lt;/code&gt; service:&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;./rebuild.sh
&lt;span class=&quot;c&quot;&gt;...
&lt;/span&gt;&lt;span class=&quot;go&quot;&gt;Success! Uploaded policy: snitch
Key              Value
---              -----
created_time     2018-08-30T02:21:27.980476389Z
deletion_time    n/a
destroyed        false
version          1
Key              Value
---              -----
created_time     2018-08-30T02:21:28.088984314Z
deletion_time    n/a
destroyed        false
version          1
fiqw1xaqjqofvinflvmnzo83t
zj2hvlev230x0s1ei9t25ft9m
overall progress: 1 out of 1 tasks
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;ij2r01ffy6ak: running   [==================================================&amp;gt;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;go&quot;&gt;verify: Service converged
sirlatrom/docker-secretprovider-plugin-vault
5ioiauam5n9nms9neb6szbwxj
n5j855gu0i460bno1aaw3neq9
t99bbs7y5c1y61tyxxhd3msoj
i9sbmcaqc46v5a44u00fkfxfv
snitch.2.y42sqzj8524y@redacted_host    | secret:              this_was_not_wrapped
snitch.2.y42sqzj8524y@redacted_host    | wrapped_secret:      1afd51f9-c1a2-d4ec-8ceb-8e043b77b53a
snitch.1.gpy8rj3oxz0n@redacted_host    | secret:              this_was_not_wrapped
snitch.1.gpy8rj3oxz0n@redacted_host    | wrapped_secret:      6567b96c-338e-cd3b-e9bc-67c65597fd0f
snitch.2.y42sqzj8524y@redacted_host    | unwrapped_secret:    this_was_once_wrapped
snitch.2.y42sqzj8524y@redacted_host    | generic_vault_token: ddef57f5-a235-923c-4e7c-0a519d307f10
snitch.1.gpy8rj3oxz0n@redacted_host    | unwrapped_secret:    this_was_once_wrapped
snitch.1.gpy8rj3oxz0n@redacted_host    | generic_vault_token: b7b27691-1776-ae52-ffc3-b6a59152d12f
snitch.2.lepelzpcjscj@redacted_host    | secret:              this_was_not_wrapped
snitch.2.lepelzpcjscj@redacted_host    | wrapped_secret:      84df01da-11f9-acba-0373-89bd1f161798
snitch.2.lepelzpcjscj@redacted_host    | unwrapped_secret:    this_was_once_wrapped
snitch.2.lepelzpcjscj@redacted_host    | generic_vault_token: 9214b53f-027c-6552-a0a9-1b18783550d1
snitch.1.edwdvkmvpbke@redacted_host    | secret:              this_was_not_wrapped
snitch.1.edwdvkmvpbke@redacted_host    | wrapped_secret:      df1714e1-a1b6-07eb-10c1-7e1ba4e73022
snitch.1.edwdvkmvpbke@redacted_host    | unwrapped_secret:    this_was_once_wrapped
snitch.1.edwdvkmvpbke@redacted_host    | generic_vault_token: bedf29d5-fbdd-6085-7809-f113078c66b1
&lt;/span&gt;&lt;span class=&quot;c&quot;&gt;...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h1 id=&quot;future-work&quot;&gt;Future work&lt;/h1&gt;

&lt;p&gt;Configs can be created with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--template-driver&lt;/code&gt; option, allowing you to insert placeholders for secrets (as described &lt;a href=&quot;/2018/04/docker-18-03-config-and-secret-templating/&quot;&gt;here&lt;/a&gt;) in your config file and have them resolved each time a task (container) for a service is created. There will eventually be a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;template_driver&lt;/code&gt; equivalent in the Docker Compose file format (see &lt;a href=&quot;https://github.com/docker/cli/pull/1746&quot;&gt;docker/cli#1746&lt;/a&gt; and &lt;a href=&quot;https://github.com/docker/compose/issues/6530&quot;&gt;docker/compose#6530&lt;/a&gt;). Once that is in place (tentatively set for Compose file format version 3.8), you’ll be able to combine configs &lt;em&gt;and&lt;/em&gt; secrets &lt;em&gt;and&lt;/em&gt; secrets plugins to build a powerful and expressive config management solution, while keeping the concerns of the systems involved neatly separated.&lt;/p&gt;

&lt;p&gt;My dream is to be able to write a docker-compose file like this (obviously made-up and not realistic):&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;version&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;3.8&quot;&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;services&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;...&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;configs&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;source&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;config.yml&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;target&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/app/config.yml&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;vault_token&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;configs&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;config.yml&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;template_driver&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;golang&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;file&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;config.yml.tmpl&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;vault_token&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;vault_token&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;driver&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;sirlatrom/docker-secretprovider-plugin-vault&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;labels&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;dk.almbrand.docker.plugin.secretprovider.vault.type&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;vault_token&quot;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Secret will contain a Vault token&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;dk.almbrand.docker.plugin.secretprovider.vault.wrap&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;true&quot;&lt;/span&gt;        &lt;span class=&quot;c1&quot;&gt;# Enable response wrapping&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;and a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;config.yml.tmpl&lt;/code&gt; template like this:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;vault_addr&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;https://vault.example.com:8200&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;vault_token&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;secret &quot;password&quot;&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;and the rendered config file would contain a response-wrapped, long-lived token that the app could then use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wouldn’t that be great?&lt;/strong&gt;&lt;/p&gt;</content><author><name>Sune Keller</name></author><category term="docker" /><category term="stack" /><category term="deploy" /><category term="swarm" /><category term="configs" /><category term="secrets" /><category term="vault" /><summary type="html">In this post, I show how to write and deploy a secrets plugin for Docker Swarm that will fetch its values from HashiCorp Vault.</summary></entry><entry><title type="html">Docker stack deploy: update configs and secrets</title><link href="https://blog.sunekeller.dk/2019/01/docker-stack-deploy-update-configs/" rel="alternate" type="text/html" title="Docker stack deploy: update configs and secrets" /><published>2019-01-31T00:00:00+01:00</published><updated>2019-01-31T00:00:00+01:00</updated><id>https://blog.sunekeller.dk/2019/01/docker-stack-deploy-update-configs</id><content type="html" xml:base="https://blog.sunekeller.dk/2019/01/docker-stack-deploy-update-configs/">&lt;h1 id=&quot;background&quot;&gt;Background&lt;/h1&gt;

&lt;p&gt;If you’ve ever deployed a Docker stack using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker stack deploy --compose-file docker-compose.yml &amp;lt;stack_name&amp;gt;&lt;/code&gt;, and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker-compose.yml&lt;/code&gt; file has configs in it referring to files in your working directory, you may have hit a snag when you wanted to update the contents of those configs.&lt;/p&gt;

&lt;h1 id=&quot;the-problem&quot;&gt;The problem&lt;/h1&gt;

&lt;p&gt;In Swarm Mode, configs and secrets are immutable objects with unique names, and there is no way to mutate their contents. You &lt;em&gt;can&lt;/em&gt; update a &lt;em&gt;service&lt;/em&gt;, though, to make it refer to a different config or secret.&lt;/p&gt;
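&lt;p&gt;Updating a service to point at a new config looks like this (the config, file, and service names are placeholders):&lt;/p&gt;

```shell
# Configs are immutable, so rotation means: create a new config object,
# then point the service at it while removing the old reference. The
# target path inside the container stays the same.
NEW_CONFIG=my_config.v2
docker config create "$NEW_CONFIG" ./my_config.new 2>/dev/null || true
docker service update \
  --config-rm my_config.v1 \
  --config-add source="$NEW_CONFIG",target=/etc/my_config \
  my_service 2>/dev/null || true
```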

&lt;p&gt;Say you have this docker-compose.yml file:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;version&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;3.6'&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;services&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;configs&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;source&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx.conf&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;target&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/nginx/nginx.conf&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert.pem&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;configs&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;nginx.conf&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;file&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx.conf&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;cert.pem&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;file&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert.pem&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you have the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nginx.conf&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cert.pem&lt;/code&gt; files in your working directory, Docker will read them, create the config and secret from their contents, and add references to both in the spec of your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;app&lt;/code&gt; service.&lt;/p&gt;

&lt;p&gt;However, if you change the contents of either file and try to run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker stack deploy ...&lt;/code&gt; again, you will get an error message saying the config or secret cannot be created because it already exists.&lt;/p&gt;

&lt;h1 id=&quot;a-solution&quot;&gt;A solution&lt;/h1&gt;

&lt;p&gt;If you expand the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;configs:&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;secrets:&lt;/code&gt; top-level sections by adding a name to each entry, and include an appropriate environment variable as part of that name, each deployment can create a fresh config or secret object instead of failing on the existing one.&lt;/p&gt;

&lt;p&gt;If you’re working interactively, you can use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;${LINENO}&lt;/code&gt; shell variable, which increments with each command you enter.&lt;/p&gt;
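&lt;p&gt;A minimal sketch of why this works (the variable names here are illustrative, and no deployment is performed):&lt;/p&gt;

```shell
# ${LINENO} expands to the current line number, so two references on
# different lines (or on successive interactive commands) yield different
# values -- and therefore different config/secret names on each deploy.
first=$LINENO
second=$LINENO
echo "candidate suffixes: $first $second"
```
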

&lt;p&gt;If on the other hand you’re in a CI/CD type of situation, you can choose one of several methods:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Use a job variable such as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;${CI_JOB_ID}&lt;/code&gt; in GitLab or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;${BUILD_NUMBER}&lt;/code&gt; in Jenkins&lt;/li&gt;
  &lt;li&gt;Calculate a digest of each referenced file and use that to determine whether a config or secret should be updated&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The result of method 1 would look like:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;version&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;3.6'&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;services&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;configs&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;source&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx.conf&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;target&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/nginx/nginx.conf&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert.pem&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;configs&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;nginx.conf&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx.conf-${CI_JOB_ID}&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;file&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx.conf&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;cert.pem&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert.pem-${CI_JOB_ID}&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;file&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert.pem&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The downside of using the job number is that you’ll update the configs and secrets &lt;em&gt;and&lt;/em&gt; their dependent services &lt;em&gt;every&lt;/em&gt; time you run the job, which is not what you’d expect if you only edited one of the files.&lt;/p&gt;

&lt;p&gt;As for method 2, to use digests, you could decide on a certain convention for the variable names like this:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;version&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;3.6'&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;services&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;configs&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;source&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx.conf&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;target&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/nginx/nginx.conf&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert.pem&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;configs&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;nginx.conf&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx.conf-${nginx_conf_DIGEST}&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;file&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;nginx.conf&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;secrets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;cert.pem&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert.pem-${cert_pem_DIGEST}&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;file&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;cert.pem&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then you’d have to run a script along the lines of the following to generate the digests and deploy the stack. Notice that configs and secrets cannot have names exceeding 64 characters, which complicates the script a little. You’ll also need the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;json_xs&lt;/code&gt; tool (found in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;libjson-xs-perl&lt;/code&gt; package in Ubuntu) and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;jq&lt;/code&gt; tool (in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;jq&lt;/code&gt; package).&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Get a list of the keys, names and files of configs and secrets&lt;/span&gt;
json_xs &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; yaml &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; json &amp;lt; docker-compose.yml | jq &lt;span class=&quot;nt&quot;&gt;--raw-output&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'(.configs,.secrets) | to_entries | map(select(.value | has(&quot;file&quot;)) | .key, .value.name, .value.file)[]'&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; configs_and_secrets.txt
&lt;span class=&quot;c&quot;&gt;# Iterate over each three-tuple&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;while &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;read &lt;/span&gt;entry&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;read &lt;/span&gt;name&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;read &lt;/span&gt;file
&lt;span class=&quot;k&quot;&gt;do&lt;/span&gt;
  &lt;span class=&quot;c&quot;&gt;# Sanitize the variable name for the digest&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;sanitized_filename&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$file&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;sed&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'s/[./ ]/_/g'&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
  &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$entry&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;.name: &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$sanitized_filename&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
  &lt;span class=&quot;c&quot;&gt;# Get the part of the name without any variable references&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;name_without_references&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; envsubst &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;remainder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$((&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;64&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${#&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;name_without_references&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;))&lt;/span&gt;
  &lt;span class=&quot;c&quot;&gt;# Export a variable with the digest, truncate to a total of 64 characters&lt;/span&gt;
  &lt;span class=&quot;nb&quot;&gt;export&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;sanitized_filename&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;_DIGEST&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sha512sum&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$file&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;awk&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'{print $1}'&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;cut&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; -&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;remainder&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
  &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Use variable &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;sanitized_filename&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;_DIGEST for &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$entry&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt; &amp;lt; configs_and_secrets.txt
&lt;span class=&quot;c&quot;&gt;# Deploy the stack&lt;/span&gt;
docker stack deploy &lt;span class=&quot;nt&quot;&gt;--prune&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; docker-compose.yml stack
&lt;span class=&quot;c&quot;&gt;# Clean up&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;rm &lt;/span&gt;configs_and_secrets.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
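&lt;p&gt;To make the truncation logic concrete, here is a minimal, self-contained sketch for a single secret named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cert.pem-${cert_pem_DIGEST}&lt;/code&gt; (the dummy file contents are illustrative):&lt;/p&gt;

```shell
# Compute a truncated digest for one file so that the final object name
# stays within Swarm's 64-character limit.
printf 'dummy certificate data\n' > cert.pem
# The literal part of the name, i.e. with the variable reference removed
name_without_references="cert.pem-"
remainder=$(( 64 - ${#name_without_references} ))
cert_pem_DIGEST="$(sha512sum cert.pem | awk '{print $1}' | cut -c -${remainder})"
echo "cert.pem-${cert_pem_DIGEST}"
rm cert.pem
```

&lt;p&gt;With a 9-character literal prefix, the 128-character SHA-512 hex digest is cut to 55 characters, for exactly 64 in total.&lt;/p&gt;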

&lt;p&gt;Notice the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--prune&lt;/code&gt; flag; it &lt;em&gt;should&lt;/em&gt; take care of removing the configs and secrets from the previous deployment &lt;em&gt;next&lt;/em&gt; time you deploy, since they’ll no longer be referenced by any services.&lt;/p&gt;</content><author><name>Sune Keller</name></author><category term="docker" /><category term="stack" /><category term="deploy" /><category term="swarm" /><category term="configs" /><category term="secrets" /><category term="ci" /><category term="cd" /><summary type="html">In this post, I explain how you can use CI variables or file digests to update configs and secrets during docker stack deploy.</summary></entry><entry><title type="html">Deep Dive: Using Packer and Ansible to create a golden VMware image for Docker Enterprise - part 1</title><link href="https://blog.sunekeller.dk/2018/12/declarative-docker-enterprise-deep-dive-packer/" rel="alternate" type="text/html" title="Deep Dive: Using Packer and Ansible to create a golden VMware image for Docker Enterprise - part 1" /><published>2018-12-12T00:00:00+01:00</published><updated>2018-12-12T00:00:00+01:00</updated><id>https://blog.sunekeller.dk/2018/12/declarative-docker-enterprise-deep-dive-packer</id><content type="html" xml:base="https://blog.sunekeller.dk/2018/12/declarative-docker-enterprise-deep-dive-packer/">&lt;h1 id=&quot;background&quot;&gt;Background&lt;/h1&gt;

&lt;p&gt;In &lt;a href=&quot;/2018/08/declarative-docker-enterprise-part-1/#creating-a-golden-image&quot;&gt;part 1&lt;/a&gt; of my series on Declarative Docker Enterprise, I describe how we used Packer to create a golden image for the VMs that will later make up our Docker Enterprise cluster. This post will detail how that is done.&lt;/p&gt;

&lt;p class=&quot;notice--info&quot;&gt;&lt;strong&gt;Note:&lt;/strong&gt; I will use the terms “Golden Image” and “VM template” interchangeably throughout this post.&lt;/p&gt;

&lt;h1 id=&quot;the-problem&quot;&gt;The problem&lt;/h1&gt;

&lt;p&gt;In order to spin up VMs for a Docker cluster, you can either:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;boot and install each of them individually, with the very likely risk of making manual mistakes along the way, or&lt;/li&gt;
  &lt;li&gt;create a VM template first, then clone that whenever you need a new VM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Obviously, spinning up and provisioning each VM individually has several drawbacks:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The risk of package versions being different between the VMs&lt;/li&gt;
  &lt;li&gt;The risk of manual mistakes along the way, selecting wrong packages or installer options&lt;/li&gt;
  &lt;li&gt;Wasting precious time on repetitive tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If, on the other hand, you create a single VM &lt;em&gt;template&lt;/em&gt; that contains all the packages you need, you only need to clone that template whenever you need a new VM.&lt;sup id=&quot;fnref:tf-plug&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:tf-plug&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Assuming you regularly create an upgraded golden image, all you need to do to &lt;em&gt;upgrade&lt;/em&gt; your cluster is to replace each VM with a new one cloned from an upgraded VM template. As it turns out, this requires some amount of automation work, but yields the benefits of avoiding the drawbacks listed above, and more.&lt;/p&gt;

&lt;h1 id=&quot;creating-the-vm-template&quot;&gt;Creating the VM template&lt;/h1&gt;

&lt;p&gt;Packer’s involvement in our pipeline consists of two phases:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Create a plain Ubuntu 16.04 installation&lt;/li&gt;
  &lt;li&gt;Build on top of the first phase to add Docker related packages and configuration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The process of creating the VM template is split in two to shorten iteration times when making changes to phase 2. More than likely, when you first work on defining what packages go into your Docker VM template, you’ll find you need a few more, and running the full Ubuntu installer on every iteration is really not worth it.&lt;/p&gt;

&lt;p&gt;Packer comes with a VMware plugin, but using it requires a host running VMware Fusion (on OS X), VMware Workstation (on Linux and Windows), or VMware Player (on Linux). We’d rather use the vSphere API, and it turns out JetBrains have created a &lt;a href=&quot;https://github.com/jetbrains-infra/packer-builder-vsphere&quot;&gt;Packer plugin for vSphere&lt;/a&gt; that supports both creating a new VM from an ISO and cloning an existing VM, provisioning either with e.g. Ansible.&lt;/p&gt;

&lt;h2 id=&quot;creating-the-base-vm-template&quot;&gt;Creating the base VM template&lt;/h2&gt;

&lt;p&gt;Using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;vsphere-iso&lt;/code&gt; value for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;type&lt;/code&gt; option in the Packer config allows you to point to an ISO file stored in your vCenter and create a VM with that ISO mounted in the virtual optical drive of the VM. Now, you’re likely to want to automate that install, and for that you can use a Kickstart script to answer the Ubuntu installer’s questions. Using a Kickstart script will also allow you to install a few generic packages that are not specific to Docker, but can help in troubleshooting the VM template itself until you get it right.&lt;/p&gt;

&lt;h3 id=&quot;providing-the-iso&quot;&gt;Providing the ISO&lt;/h3&gt;

&lt;p&gt;To boot a VM from an ISO file, you must first upload it to a datastore in your vCenter cluster. We’ve opted for hand-crafting the ISO due to rigid networking requirements, and as such point to a static location in our config.&lt;/p&gt;

&lt;p&gt;We strive to keep static/hardcoded/one-off components in our pipeline to a minimum, so another option would be to compose the ISO file from a stock Ubuntu netboot installer image, adding any relevant customizations as part of our pipeline. To get the resulting ISO file uploaded, we could use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;govc&lt;/code&gt; tool (maintained by VMware in the &lt;a href=&quot;https://github.com/vmware/govmomi/tree/master/govc&quot;&gt;govmomi&lt;/a&gt; repo) and its &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;datastore.upload&lt;/code&gt; command once we’ve crafted our ISO file. That would certainly make upgrading between major Ubuntu releases easier.&lt;/p&gt;
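&lt;p&gt;For illustration, the upload step could look roughly like this (the datastore name, file names, and paths are placeholders; the command is only printed here, not executed, since a real run also needs &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;GOVC_URL&lt;/code&gt; and credentials in the environment):&lt;/p&gt;

```shell
# Sketch: build the govc command that would upload a freshly composed ISO
# to a vCenter datastore. All values below are placeholders.
ISO_DATASTORE="datastore1"
ISO_PATH="isos/ubuntu-16.04-custom.iso"
upload_cmd="govc datastore.upload -ds ${ISO_DATASTORE} ubuntu-16.04-custom.iso ${ISO_PATH}"
echo "${upload_cmd}"
```
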

&lt;p&gt;The config for pointing to an ISO file is as follows, assuming you parameterize the location in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ISO_DATASTORE&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ISO_PATH&lt;/code&gt; environment variables:&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;builders&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;...&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;iso_paths&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;[{{ user `iso_datastore` }}] {{ user `iso_path` }}&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;...&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;],&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;variables&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;...&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;iso_datastore&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;{{ env `ISO_DATASTORE` }}&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;iso_path&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;{{ env `ISO_PATH` }}&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;...&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;kickstart&quot;&gt;Kickstart&lt;/h3&gt;

&lt;h4 id=&quot;configuring-your-install-using-a-kickstart-script&quot;&gt;Configuring your install using a Kickstart script&lt;/h4&gt;

&lt;p&gt;The detailed list of things we configure in our Kickstart script is as follows:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;System language: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;lang en_US&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Language modules to install (in our case: Danish): &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;langsupport da_DK&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;System keyboard (Danish layout): &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;keyboard dk&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Timezone (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Europe/Copenhagen&lt;/code&gt;): &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;timezone --utc Europe/Copenhagen&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Disable password login for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;root&lt;/code&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;rootpw --disabled&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Initial username and encrypted password (provided as an environment variable by GitLab, substituted in before the Kickstart config file is put on the virtual floppy disk): &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;user automationuser --fullname &quot;Automation user&quot; --iscrypted --password ${AUTOMATION_USER_CRYPTED_PASSWORD}&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Install instead of upgrade: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;install&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Use text mode install: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;text&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Reboot after install: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;reboot&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Install from a network mirror: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;url --url http://dk.archive.ubuntu.com/ubuntu/&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Set hardware clock to UTC: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;preseed clock-setup/utc boolean true&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Set preseed time zone: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;preseed time/zone string Europe/Copenhagen&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Set NTP server: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;preseed clock-setup/ntp-server &amp;lt;redacted&amp;gt;&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Use MBR bootloader: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bootloader --location=mbr&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Zero out the MBR: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;zerombr yes&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Several options for partitioning:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;preseed partman-auto/disk string /dev/sda
preseed partman-auto/method string lvm
preseed partman-lvm/device_remove_lvm boolean true
preseed partman-md/device_remove_md boolean true
preseed partman-lvm/confirm boolean true
preseed partman-lvm/confirm_nooverwrite boolean true
preseed partman-auto/choose_recipe select atomic
preseed partman-partitioning/confirm_write_new_label boolean true
preseed partman/choose_partition select finish
preseed partman/confirm boolean true
preseed partman/confirm_nooverwrite boolean true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ul&gt;
  &lt;li&gt;Enable shadow file and password hashing: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;auth --useshadow --enablemd5&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Disable firewall for Docker compatibility and because we run an external one: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;firewall --disabled&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Skip configuring the X Window System: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;skipx&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
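&lt;p&gt;Put together, the directives above form a fragment along these lines (a sketch of the shape, not our complete file):&lt;/p&gt;

```
lang en_US
langsupport da_DK
keyboard dk
timezone --utc Europe/Copenhagen
rootpw --disabled
user automationuser --fullname "Automation user" --iscrypted --password ${AUTOMATION_USER_CRYPTED_PASSWORD}
install
text
reboot
url --url http://dk.archive.ubuntu.com/ubuntu/
bootloader --location=mbr
zerombr yes
auth --useshadow --enablemd5
firewall --disabled
skipx
```
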

&lt;p&gt;Then follows the list of packages we install initially (some are redacted out):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;%packages
build-essential
curl
dnsmasq
dnsmasq-base
dnsmasq-utils
dnsutils
htop
man
nfs-common
ntp
open-vm-tools
rng-tools
software-properties-common
ssh
unzip
vim
wget
ca-certificates
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;the-kickstart-post-script&quot;&gt;The Kickstart post script&lt;/h4&gt;

&lt;p&gt;The last touch is the post script, which will run after the installation is complete, but notably before the automation user has been added. That’s why we create the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;rc.install&lt;/code&gt; script, which will run once the installation is done and the VM has rebooted. It copies the SSH public key of the automation user such that tools in the later phases (both Packer and Ansible) can SSH in as the automation user.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;%post &lt;span class=&quot;nt&quot;&gt;--nochroot&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;touch&lt;/span&gt; /target/etc/installdate

&lt;span class=&quot;nb&quot;&gt;umask &lt;/span&gt;077
&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; /media
mount /dev/fd0 /media
&lt;span class=&quot;nb&quot;&gt;cp&lt;/span&gt; /media/files/ntp.conf /target/etc/ntp.conf
&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; /target/usr/local/share/ca-certificates/
&lt;span class=&quot;nb&quot;&gt;cp&lt;/span&gt; /media/files/almbrand-corporate-pki-root-ca.crt /target/usr/local/share/ca-certificates/almbrand-corporate-pki-root-ca.crt

&lt;span class=&quot;c&quot;&gt;#### Configure sudoers&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /target/etc/sudoers.d/defaults &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
Defaults  secure_path=&quot;/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin&quot;
Defaults        env_keep=&quot;http_proxy https_proxy&quot;
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;gt; /target/etc/sudoers.d/automationuser_nopasswd
automationuser ALL=(ALL) NOPASSWD:ALL
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF

&lt;/span&gt;&lt;span class=&quot;c&quot;&gt;#### Keep a safe copy /etc/rc.local for later&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;mv&lt;/span&gt; /target/etc/rc.local /target/etc/rc.local.dist

&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /target/etc/rc.local &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will &quot;exit 0&quot; on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

if [ -x /etc/rc.install ]
then
    /etc/rc.install &amp;amp;&amp;amp; mv /etc/rc.install /etc/rc.install.1
else
    echo /etc/rc.install does not exist &amp;gt;&amp;gt; /root/rc.install.log
fi

exit 0
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF

&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x /target/etc/rc.local

&lt;span class=&quot;c&quot;&gt;#### Create /etc/rc.install ####&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /target/etc/rc.install &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
#!/bin/bash

# rc.install
#

#### automationuser SSH public key
mkdir -p /home/automationuser/.ssh
chmod 0700 /home/automationuser/.ssh
mount /dev/fd0 /media
cp /media/files/authorized_keys /home/automationuser/.ssh/authorized_keys
umount /media
chmod 0600 /home/automationuser/.ssh/authorized_keys
chown -R automationuser. /home/automationuser/.ssh

#### Corporate PKI Root CA certificate
(
  update-ca-certificates
) 2&amp;gt;&amp;amp;1 | tee /tmp/certinst.log
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF

&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x /target/etc/rc.install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;making-a-kickstart-script-available-to-the-installer&quot;&gt;Making a Kickstart script available to the installer&lt;/h4&gt;

&lt;p&gt;You have two options:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Bake the kickstart script into the ISO file, or&lt;/li&gt;
  &lt;li&gt;Use a neat trick&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s the neat trick: JetBrains’ Packer plugin for vSphere allows you to create and provision a &lt;em&gt;floppy disk&lt;/em&gt; with select files and attach it to your VM such that the installer can reach the files on it. This way, we’re able to keep our Kickstart script in the same repo as our Packer config and avoid having to run a file server to host any files we need during provisioning, thus reducing external dependencies for a successful VM template build.&lt;/p&gt;

&lt;p&gt;Specifically, you add these keys in your Packer template file, assuming your Kickstart script file is called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ks.cfg&lt;/code&gt; and your auxiliary files are in a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;files/&lt;/code&gt; subdirectory relative to your Packer template file:&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;builders&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;...&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;floppy_dirs&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;{{ template_dir }}/files&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;],&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;floppy_files&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;{{ template_dir }}/ks.cfg&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;],&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;...&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;speeding-up-iterations&quot;&gt;Speeding up iterations&lt;/h3&gt;

&lt;p&gt;Set the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;create_snapshot&lt;/code&gt; option to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;true&lt;/code&gt; to have a snapshot created after the VM has been successfully installed. This will allow you to use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;linked_clone&lt;/code&gt; option in the next phase, which leads to &lt;em&gt;much&lt;/em&gt; faster cloning of the VM and thus increases your iteration speed.&lt;/p&gt;

&lt;p&gt;Another aspect of keeping iterations fast is to avoid installing &lt;em&gt;too&lt;/em&gt; many packages in this first phase, since the cost in terms of waiting time is larger than in the next phase.&lt;/p&gt;
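&lt;p&gt;As a rough sketch of what the next phase’s config might then contain (an illustrative fragment, not our actual file; the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;vsphere-clone&lt;/code&gt; builder comes from the same JetBrains plugin):&lt;/p&gt;

```json
{
  "builders": [
    {
      "type": "vsphere-clone",
      "template": "base-vm-template",
      "linked_clone": true,
      "convert_to_template": true
    }
  ]
}
```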

&lt;h2 id=&quot;ssh-access-to-the-vm-template-during-provisioning&quot;&gt;SSH access to the VM template during provisioning&lt;/h2&gt;

&lt;p&gt;This step is not strictly needed as long as you’re &lt;em&gt;only&lt;/em&gt; using Kickstart for provisioning, but you may easily end up in a situation where more advanced provisioning requires a tool such as Ansible, or where you need to execute commands on the created VM template before it is considered done. For such purposes (and, indeed, for the configuration to be accepted by the vSphere builder plugin), you have to specify which SSH user and private key file will be used to access the VM once provisioned. In our case, we make sure to put that user’s &lt;em&gt;public&lt;/em&gt; key file on the VM as described in &lt;a href=&quot;#the-kickstart-post-script&quot;&gt;The Kickstart post script&lt;/a&gt; section above, specifically these lines:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;mount /dev/fd0 /media
&lt;span class=&quot;nb&quot;&gt;cp&lt;/span&gt; /media/files/authorized_keys /home/automationuser/.ssh/authorized_keys
umount /media
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You’ll want to set the following options and make sure your SSH private key is accessible at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.ssh/id_rsa&lt;/code&gt; inside your pipeline’s working directory (or, if you’re running this from a laptop, whichever directory you run it from). Modify the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;ssh_private_key_file&quot;&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;ssh_username&quot;&lt;/code&gt; values to suit your needs.&lt;/p&gt;

&lt;div class=&quot;language-jinja highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;{
  &quot;builders&quot;: [
    {
      ...
      &quot;ssh_private_key_file&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;template_dir&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;/.ssh/id_rsa&quot;,
      &quot;ssh_username&quot;: &quot;automationuser&quot;,
      ...
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
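&lt;p&gt;If you need to create such a keypair from scratch, it can be generated right next to the Packer template (a sketch; the paths match the layout assumed above, and the &lt;em&gt;public&lt;/em&gt; half lands in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;files/authorized_keys&lt;/code&gt; so the Kickstart post script can install it):&lt;/p&gt;

```shell
# Create the directories assumed by the Packer template and floppy config
mkdir -p .ssh files

# Generate a passphrase-less RSA keypair for the automation user
ssh-keygen -q -t rsa -b 4096 -N "" -C "automationuser" -f .ssh/id_rsa

# The public half goes onto the floppy for the installer to pick up
cp .ssh/id_rsa.pub files/authorized_keys
```

&lt;p&gt;Remember to keep the private key itself out of version control.&lt;/p&gt;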

&lt;h2 id=&quot;the-final-config&quot;&gt;The final config&lt;/h2&gt;

&lt;p&gt;As you can see from the config below, we parameterize almost all the values. This is because we have multiple vCenters, and we need to build the templates in all of them. In our repo, this file is saved as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;from-iso.json&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&quot;language-jinja highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;{
  &quot;builders&quot;: [
    {
      &quot;CPU_limit&quot;: -1,
      &quot;CPU_reservation&quot;: 0,
      &quot;CPUs&quot;: &quot;2&quot;,
      &quot;RAM&quot;: &quot;4096&quot;,
      &quot;boot_wait&quot;: &quot;2s&quot;,
      &quot;cluster&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;cluster&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
      &quot;convert_to_template&quot;: true,
      &quot;create_snapshot&quot;: true,
      &quot;datacenter&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;datacenter&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
      &quot;datastore&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;datastore&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
      &quot;disk_size&quot;: 51200,
      &quot;disk_thin_provisioned&quot;: true,
      &quot;floppy_dirs&quot;: [
        &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;template_dir&lt;/span&gt;&lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;/files&quot;
      ],
      &quot;floppy_files&quot;: [
        &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;template_dir&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;/ks.cfg&quot;
      ],
      &quot;folder&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;folder&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
      &quot;guest_os_type&quot;: &quot;ubuntu64Guest&quot;,
      &quot;iso_paths&quot;: [
        &quot;[&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;iso_datastore&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;] &lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;iso_path&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;
      ],
      &quot;network&quot;: &quot;/&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;datacenter&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;/network/&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;network&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
      &quot;network_card&quot;: &quot;vmxnet3&quot;,
      &quot;password&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;vsphere_password&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
      &quot;resource_pool&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;resource_pool&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
      &quot;ssh_private_key_file&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;template_dir&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;/.ssh/id_rsa&quot;,
      &quot;ssh_username&quot;: &quot;automationuser&quot;,
      &quot;type&quot;: &quot;vsphere-iso&quot;,
      &quot;username&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;vsphere_user&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
      &quot;vcenter_server&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;vsphere_server&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
      &quot;vm_name&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;vm_name&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;
    }
  ],
  &quot;post-processors&quot;: [],
  &quot;variables&quot;: {
    &quot;cluster&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;CLUSTER&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;datacenter&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DATACENTER&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;datastore&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DATASTORE&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;folder&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;FOLDER&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;iso_datastore&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ISO_DATASTORE&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;iso_path&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ISO_PATH&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;network&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;NETWORK&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;resource_pool&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;RESOURCE_POOL&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;vm_name&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;BASE_VM_TEMPLATE_NAME&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;vsphere_password&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;VSPHERE_PASSWORD&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;vsphere_server&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;VSPHERE_SERVER&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;,
    &quot;vsphere_user&quot;: &quot;&lt;span class=&quot;cp&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;VSPHERE_USER&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;`&lt;/span&gt; &lt;span class=&quot;cp&quot;&gt;}}&lt;/span&gt;&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h1 id=&quot;creating-the-docker-vm-template&quot;&gt;Creating the Docker VM template&lt;/h1&gt;

&lt;p&gt;We perform the VM template creation from within our GitLab CI pipeline, but there isn’t much to it beyond setting the relevant environment variables (all listed under the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;variables&quot;&lt;/code&gt; key in the config above). To avoid having to install Packer on our GitLab runner, we run Packer from a Docker image like this:&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;docker run &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--volume&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;PWD&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;:/data &lt;span class=&quot;nt&quot;&gt;--workdir&lt;/span&gt; /data &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tmpfs&lt;/span&gt; /tmp &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; BASE_VM_TEMPLATE_NAME &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; CLUSTER &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; DATACENTER &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; DATASTORE &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; FOLDER &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; ISO_DATASTORE &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; ISO_PATH &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; NETWORK &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; VSPHERE_SERVER &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; VSPHERE_USER &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--env&lt;/span&gt; VSPHERE_PASSWORD &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;$packer_image&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  build from-iso.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;As you can see, the variables are all passed by name only; Docker takes their values from the surrounding environment and passes them on to the container.&lt;/p&gt;

&lt;p&gt;Additionally, we make sure to make the working directory available to Packer by mounting the current working directory on &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/data&lt;/code&gt; inside the container and set it as the working directory of the container with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--workdir /data&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Another important detail is the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$packer_image&lt;/code&gt; bit: we build our own Packer image in order to add a few more tools, namely &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;govc&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;jq&lt;/code&gt;, as well as the two Packer vSphere builders released by JetBrains. In fact, since the next phase will use Ansible to provision Docker Enterprise on the next template, we base our final image on &lt;a href=&quot;https://github.com/William-Yeh&quot;&gt;William Yeh&lt;/a&gt;’s &lt;a href=&quot;https://hub.docker.com/r/williamyeh/ansible/&quot;&gt;Ansible image on Docker Hub&lt;/a&gt;. The Dockerfile for that image looks like this:&lt;/p&gt;

&lt;div class=&quot;language-dockerfile highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;ARG&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;         packer_version=1.3.3@sha256:e65fb210abc027b7d66187d34eb095fffa2fd8401e7032196f760d7866c6484c&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;        hashicorp/packer:${packer_version} AS packer&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;        williamyeh/ansible:alpine3@sha256:8072eb5536523728d4e4adc5e75af314c5dc3989e3160ec4f347fc0155175ddf&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Copy in corporate certificates&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;COPY&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;        *.crt /usr/local/share/ca-certificates/&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;RUN         &lt;/span&gt;update-ca-certificates

&lt;span class=&quot;c&quot;&gt;# Add utilities used inside later Packer builds&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;RUN         &lt;/span&gt;apk add &lt;span class=&quot;nt&quot;&gt;--update&lt;/span&gt; bash jq wget curl
&lt;span class=&quot;k&quot;&gt;COPY&lt;/span&gt;&lt;span class=&quot;s&quot;&gt; --from=packer /bin/packer /bin/packer&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Download the Packer vSphere builders from the JetBrains infra repo&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;ARG&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;         vsphere_packer_builders_version=2.1&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;ADD&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;         https://github.com/jetbrains-infra/packer-builder-vsphere/releases/download/${vsphere_packer_builders_version}/packer-builder-vsphere-clone.linux /bin/packer-builder-vsphere-clone.linux&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;ADD&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;         https://github.com/jetbrains-infra/packer-builder-vsphere/releases/download/${vsphere_packer_builders_version}/packer-builder-vsphere-iso.linux /bin/packer-builder-vsphere-iso.linux&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Install GOVC&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;ARG&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;         govmomi_version=v0.19.0&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;ADD&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;         https://github.com/vmware/govmomi/releases/download/${govmomi_version}/govc_linux_386.gz /bin/govc.gz&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;RUN         &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /bin &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;gunzip &lt;/span&gt;govc.gz &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x /bin/packer-builder-vsphere-iso.linux /bin/packer-builder-vsphere-clone.linux govc

&lt;span class=&quot;c&quot;&gt;# Add a default ansible.cfg&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;COPY&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;        ansible.cfg /etc/ansible/ansible.cfg&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Add an automation user&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;RUN         &lt;/span&gt;adduser &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt; 1000 &lt;span class=&quot;nt&quot;&gt;-D&lt;/span&gt; automationuser
&lt;span class=&quot;k&quot;&gt;USER&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;        automationuser&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Set the entrypoint to run Packer by default&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;ENTRYPOINT&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;  [ &quot;/bin/packer&quot; ]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You then build and push that image to your registry (Docker Hub, DTR or whichever you prefer) and use that to run Packer.&lt;/p&gt;
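&lt;p&gt;The GitLab CI job that ties this together is then little more than the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run&lt;/code&gt; command shown earlier. A sketch (job and stage names are illustrative; the environment variables themselves live in the project’s CI/CD settings):&lt;/p&gt;

```yaml
build base template:
  stage: build
  script:
    - docker run --rm
        --volume ${PWD}:/data --workdir /data
        --tmpfs /tmp
        --env BASE_VM_TEMPLATE_NAME --env CLUSTER --env DATACENTER --env DATASTORE
        --env FOLDER --env ISO_DATASTORE --env ISO_PATH --env NETWORK
        --env VSPHERE_SERVER --env VSPHERE_USER --env VSPHERE_PASSWORD
        ${packer_image} build from-iso.json
```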

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;Building a golden image VM template with Packer and vSphere is perfectly doable: the tools are all there, readily available and well documented.&lt;/p&gt;

&lt;p&gt;The next part of this deep dive will deal with actually installing Docker Enterprise in a subsequent VM template.&lt;/p&gt;
&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:tf-plug&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Obviously you’ll still need to specify the unique config (e.g. hostname, VM name, possibly IP address) of each VM, but that’s what Terraform is for. Read about that in an upcoming blog post. &lt;a href=&quot;#fnref:tf-plug&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;</content><author><name>Sune Keller</name></author><category term="docker" /><category term="docker enterprise" /><category term="packer" /><category term="vmware" /><category term="vsphere" /><category term="ansible" /><category term="gitlab" /><category term="series" /><category term="enterprise" /><category term="deep dive" /><summary type="html">In this post, I go into detail about how we build the VM template that is the basis of our Docker cluster.</summary></entry><entry><title type="html">Declarative Docker Enterprise with Packer, Terraform, Ansible and GitLab - part 2</title><link href="https://blog.sunekeller.dk/2018/08/declarative-docker-enterprise-part-2/" rel="alternate" type="text/html" title="Declarative Docker Enterprise with Packer, Terraform, Ansible and GitLab - part 2" /><published>2018-08-21T00:00:00+02:00</published><updated>2018-08-21T00:00:00+02:00</updated><id>https://blog.sunekeller.dk/2018/08/declarative-docker-enterprise-part-2</id><content type="html" xml:base="https://blog.sunekeller.dk/2018/08/declarative-docker-enterprise-part-2/">&lt;p&gt;This is the second part in a series about building and upgrading Docker EE clusters while striving for a declarative approach. See &lt;a href=&quot;/2018/08/declarative-docker-enterprise-part-1/#background&quot;&gt;part 1&lt;/a&gt; for more background.&lt;/p&gt;

&lt;p class=&quot;notice--info&quot;&gt;&lt;strong&gt;Post updated Aug 22 2018:&lt;/strong&gt; Added &lt;a href=&quot;#conclusion&quot;&gt;conclusion&lt;/a&gt; section.&lt;/p&gt;

&lt;h1 id=&quot;creation&quot;&gt;Creation&lt;/h1&gt;

&lt;p&gt;The first time a cluster is to be created, things are a little different. There are no existing VMs, and thus no services running on them. This makes things simpler in terms of how we apply the planned changes using Terraform.&lt;/p&gt;

&lt;h2 id=&quot;terraform-config&quot;&gt;Terraform config&lt;/h2&gt;

&lt;p&gt;In broad terms, the nodes that will make up the cluster are divided into three groups, which is reflected in our Terraform config:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;UCP Controllers (named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;managers&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;UCP Workers (named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;workers&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;DTR replicas (named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dtrs&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s a list of variables that we define for &lt;em&gt;every&lt;/em&gt; VM (using a map from VM name to value):&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;MAC address&lt;/li&gt;
  &lt;li&gt;Deployment stage&lt;/li&gt;
  &lt;li&gt;vSphere Resource Pool&lt;/li&gt;
  &lt;li&gt;vSphere host &lt;sup id=&quot;fnref:host-drs&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:host-drs&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;vCenter folder path&lt;/li&gt;
  &lt;li&gt;Number of vCPUs&lt;/li&gt;
  &lt;li&gt;Memory size in MB&lt;/li&gt;
  &lt;li&gt;Disk size &lt;sup id=&quot;fnref:linked-clone-requirements&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:linked-clone-requirements&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Datacenter name (e.g. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dc1&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dc2&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;Primary network name &lt;sup id=&quot;fnref:indirect-tf-config&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:indirect-tf-config&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Storage network name &lt;sup id=&quot;fnref:indirect-tf-config:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:indirect-tf-config&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; &lt;sup id=&quot;fnref:netapp-dvp-note&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:netapp-dvp-note&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Storage IP address &lt;sup id=&quot;fnref:netapp-dvp-note:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:netapp-dvp-note&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Storage policy name &lt;sup id=&quot;fnref:netapp-dvp-note:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:netapp-dvp-note&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s a &lt;strong&gt;lot&lt;/strong&gt;! Additionally, we have to ask the network team to create DHCP reservations and VIP addresses, ask our storage vendor to add the storage IP addresses to the whitelist for the given storage policy, and request TLS certificates for UCP and DTR from our internal PKI. Those tasks are all ripe for automation, but we haven’t gotten there yet.&lt;/p&gt;
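&lt;p&gt;As an illustration, these per-VM values live in Terraform map variables keyed by VM name. Hypothetically, a few of them could look like this (the variable names and values below are made up):&lt;/p&gt;

&lt;div class=&quot;language-hcl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;mac_addresses = {
    &quot;ucp1&quot;    = &quot;00:50:56:aa:bb:01&quot;
    &quot;worker1&quot; = &quot;00:50:56:aa:bb:02&quot;
}

num_cpus = {
    &quot;ucp1&quot;    = 4
    &quot;worker1&quot; = 8
}

memory_mb = {
    &quot;ucp1&quot;    = 16384
    &quot;worker1&quot; = 32768
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;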

&lt;p&gt;But once all that is defined, we’re good to go and can run our pipeline in GitLab.&lt;sup id=&quot;fnref:gitlab-variables-note&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:gitlab-variables-note&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;gitlab-pipeline&quot;&gt;GitLab pipeline&lt;/h2&gt;

&lt;p&gt;The GitLab pipeline is configured as a number of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.gitlab-ci.yml&lt;/code&gt; files. Since we’re using GitLab Enterprise, we can include other YAML files from our main &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.gitlab-ci.yml&lt;/code&gt; file, which makes for a slightly neater decomposition. We specify the same CI jobs for every cluster:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Pre*&lt;/li&gt;
  &lt;li&gt;Plan&lt;/li&gt;
  &lt;li&gt;Apply&lt;/li&gt;
  &lt;li&gt;Upgrade&lt;/li&gt;
  &lt;li&gt;Re-run Ansible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;* The “Pre” job is a single job that is always run before the plan phase.&lt;/p&gt;
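&lt;p&gt;A rough sketch of the decomposition (the file names are hypothetical, and the exact &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;include&lt;/code&gt; syntax depends on the GitLab version in use):&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# .gitlab-ci.yml: one included file per cluster
include:
  - '/ci/cluster1.gitlab-ci.yml'
  - '/ci/cluster2.gitlab-ci.yml'

stages:
  - pre
  - plan
  - apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;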

&lt;p&gt;&lt;img src=&quot;/assets/images/2018-08-16-ucp-provisioner-pipeline-1.png&quot; alt=&quot;A screenshot of our provisioning pipeline&quot; /&gt;&lt;/p&gt;

&lt;p&gt;As can be seen in the above screenshot, we divide our pipeline into three phases (the “Upstream” phase comes from the job being triggered by the previous repo’s CI pipeline), namely:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Pre&lt;/li&gt;
  &lt;li&gt;Plan&lt;/li&gt;
  &lt;li&gt;Apply&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;preparing-essential-docker-images&quot;&gt;Preparing essential Docker images&lt;/h3&gt;

&lt;p&gt;The “Pre” phase has one proper job: pull any &lt;em&gt;essential&lt;/em&gt; Docker images from a registry that already exists (a catch-22 we still need to solve), and save them in a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.tar.gz&lt;/code&gt; file for later use.&lt;/p&gt;
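&lt;p&gt;Sketched as a CI job, this could look roughly like the following (the job name, image names and versions are placeholders):&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;pre:save-images:
  stage: pre
  script:
    - docker pull docker/ucp:3.0.2
    - docker pull docker/dtr:2.5.3
    # Save the images for later use without a registry
    - docker save docker/ucp:3.0.2 docker/dtr:2.5.3 | gzip &amp;gt; essential-images.tar.gz
  artifacts:
    paths:
      - essential-images.tar.gz
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;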

&lt;h3 id=&quot;planning&quot;&gt;Planning&lt;/h3&gt;

&lt;p&gt;The “Plan” phase has a job for each cluster. The job passes the cluster’s Terraform variables file to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-var-file&lt;/code&gt; option of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;terraform plan&lt;/code&gt; command and saves the output in a file that is archived in GitLab. The job output log will show what actions Terraform will perform if the plan is applied. For a new cluster, this will include a number of VMs in each of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;managers&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;workers&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dtrs&lt;/code&gt; resource groups, as well as a number of so-called &lt;a href=&quot;https://www.terraform.io/docs/providers/null/resource.html&quot;&gt;null resources&lt;/a&gt; which are used, among other things, to insert custom scripts at different points in the lifecycle of applying the plan. In particular, this is how we run Ansible after Terraform has created the VMs. Terraform keeps track of the null resources, but they don’t represent an object in any other system.&lt;/p&gt;
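&lt;p&gt;As an illustration, a null resource that runs Ansible once the manager VMs exist might look roughly like this (Terraform 0.11-era syntax; the resource, inventory and playbook names are hypothetical):&lt;/p&gt;

&lt;div class=&quot;language-hcl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;resource &quot;null_resource&quot; &quot;ansible_managers&quot; {
  # Re-run Ansible whenever the set of manager VMs changes
  triggers = {
    manager_ids = &quot;${join(&quot;,&quot;, vsphere_virtual_machine.manager.*.id)}&quot;
  }

  provisioner &quot;local-exec&quot; {
    command = &quot;ansible-playbook -i inventory/managers.ini playbooks/managers.yml&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;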

&lt;h3 id=&quot;applying&quot;&gt;Applying&lt;/h3&gt;

&lt;p&gt;Hang on, this is where it gets complicated!&lt;/p&gt;

&lt;p&gt;The “Apply” phase’s main job is the “Apply” job. It fetches the plan from the “Plan” job and applies it by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;terraform apply &amp;lt;plan.out&amp;gt;&lt;/code&gt;. For a new cluster, the following things will happen in sequence:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Terraform creates the UCP Controller VMs (managers) by cloning the Docker + UCP VM template from &lt;a href=&quot;/2018/08/declarative-docker-enterprise-part-1/#creating-a-golden-image&quot;&gt;part 1&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Terraform creates an Ansible inventory file only including the manager nodes in a group called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp&lt;/code&gt; (sorry about the naming inconsistencies)&lt;/li&gt;
  &lt;li&gt;Terraform runs the script in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ansible_managers&lt;/code&gt; null resource
    &lt;ol class=&quot;lower_alpha_list&quot;&gt;
      &lt;li&gt;Ansible runs a playbook that consists of multiple nested playbooks that contain the tasks for setting up the Docker Swarm cluster and installing UCP
        &lt;ol class=&quot;lower_roman_list&quot;&gt;
          &lt;li&gt;The current Swarm cluster status of each manager node is collected&lt;/li&gt;
          &lt;li&gt;If there are no managers in any existing cluster (always the case first time around), the first manager will initialize a new Swarm cluster&lt;/li&gt;
          &lt;li&gt;The UCP configuration file from our repo is copied into the VM and created as a Swarm config using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker config create&lt;/code&gt; command&lt;/li&gt;
          &lt;li&gt;The TLS certificates that we inject into the pipeline using a GitLab secret variable are copied into the VM and copied to a newly created &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp-controller-server-certs&lt;/code&gt; Docker volume (instructions &lt;a href=&quot;https://success.docker.com/article/how-do-i-replace-the-tls-certificates-for-ucp&quot;&gt;here&lt;/a&gt;)&lt;/li&gt;
          &lt;li&gt;The configured version of UCP is installed using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run ... docker/ucp:&amp;lt;version&amp;gt; install&lt;/code&gt; (see the &lt;a href=&quot;https://docs.docker.com/datacenter/ucp/2.2/guides/admin/install/#step-4-install-ucp&quot;&gt;install docs&lt;/a&gt;)&lt;/li&gt;
          &lt;li&gt;We wait for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp-reconcile&lt;/code&gt; container on the first manager node to exit successfully (see the &lt;a href=&quot;https://docs.docker.com/ee/ucp/ucp-architecture/#under-the-hood&quot;&gt;UCP architecture&lt;/a&gt; page), indicating that UCP is up and running&lt;/li&gt;
          &lt;li&gt;We get the join token for Swarm managers from the first manager node by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker swarm join-token manager&lt;/code&gt;&lt;/li&gt;
          &lt;li&gt;We collect a list of remaining manager nodes to join to the cluster&lt;/li&gt;
          &lt;li&gt;For each of the remaining manager nodes, one at a time, we do the following:
            &lt;ol&gt;
              &lt;li&gt;Run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker swarm join --token &amp;lt;token&amp;gt; &amp;lt;first-node-addr&amp;gt;:2377&lt;/code&gt; to join the Swarm cluster&lt;/li&gt;
              &lt;li&gt;Wait for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp-reconcile&lt;/code&gt; container on the first manager node to exit successfully (see the &lt;a href=&quot;https://docs.docker.com/ee/ucp/ucp-architecture/#under-the-hood&quot;&gt;UCP architecture&lt;/a&gt; page), indicating that UCP is up and running&lt;/li&gt;
            &lt;/ol&gt;
          &lt;/li&gt;
        &lt;/ol&gt;
      &lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
&lt;/ol&gt;
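&lt;p&gt;The recurring “wait for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp-reconcile&lt;/code&gt;” step can be sketched as an Ansible task along these lines (a simplified, hypothetical version of what we actually run):&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# {% raw %} keeps Jinja from parsing the Go template braces
- name: Wait for ucp-reconcile to exit successfully
  command: docker inspect -f '{% raw %}{{ .State.ExitCode }}{% endraw %}' ucp-reconcile
  register: reconcile
  until: reconcile.stdout == &quot;0&quot;
  retries: 30
  delay: 10
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;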

&lt;p&gt;Once the manager nodes have been joined into a working Swarm/UCP cluster, we need to perform additional configuration of UCP for it to suit our needs. This is also done as part of the pipeline. Our Terraform config has a couple of additional variables:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Teams (a mapping from team name to LDAP filter)&lt;/li&gt;
  &lt;li&gt;Grants (a mapping from collection to a tuple of a team name and a role)&lt;/li&gt;
&lt;/ul&gt;
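&lt;p&gt;Hypothetically, those two variables could look something like this (the team names, LDAP filter, collection path and role format are all illustrative):&lt;/p&gt;

&lt;div class=&quot;language-hcl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;teams = {
    &quot;payments-dev&quot; = &quot;(&amp;amp;(objectClass=user)(memberOf=CN=payments-dev,OU=Teams,DC=example,DC=com))&quot;
}

grants = {
    &quot;/Shared/prod/payments&quot; = &quot;payments-dev:Restricted Control&quot;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;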

&lt;p&gt;Using the UCP HTTP API, we create the teams, checking for each one that it does not already exist, and configure its LDAP sync settings. We then create all the collections from the “Grants” variable. Some collections run several levels deep, and for those, we add a dummy entry to create their parent collections first. Once the collections have been created, we create grants for the teams with the specified roles. All this required some extra Ansible-fu: the &lt;a href=&quot;https://docs.ansible.com/ansible/2.4/include_tasks_module.html&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;include_tasks&lt;/code&gt; action&lt;/a&gt; comes in handy, as it allows running a sequence of tasks for each element in a given map/dict or list variable.&lt;/p&gt;
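&lt;p&gt;For instance, team creation can be driven by looping a task file over the variable (the file and variable names are hypothetical):&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# include_tasks is dynamic, so it can be looped over a dict
- name: Create each UCP team and configure its LDAP sync settings
  include_tasks: create-ucp-team.yml
  with_dict: &quot;{{ teams }}&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;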

&lt;ol start=&quot;4&quot;&gt;
  &lt;li&gt;Terraform then creates the Worker and DTR nodes&lt;/li&gt;
  &lt;li&gt;Terraform creates an Ansible inventory file including the manager nodes in a group called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp&lt;/code&gt; (sorry about the naming inconsistencies), the worker nodes in a group called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;worker&lt;/code&gt; and the DTR nodes in a group called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dtr&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Terraform then runs the script in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ansible_all&lt;/code&gt; null resource&lt;/p&gt;

    &lt;ol class=&quot;lower_alpha_list&quot;&gt;
      &lt;li&gt;Ansible runs a playbook that consists of multiple nested playbooks that contain the tasks for joining the worker nodes, install DTR and join DTR replicas
        &lt;ol class=&quot;lower_roman_list&quot;&gt;
          &lt;li&gt;We get the join token for Swarm workers from the first manager node by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker swarm join-token worker&lt;/code&gt;&lt;/li&gt;
          &lt;li&gt;The current Swarm cluster status of each non-manager node is collected&lt;/li&gt;
          &lt;li&gt;If the node is not already in the cluster, we join it by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker swarm join --token &amp;lt;token&amp;gt; &amp;lt;first-node-addr&amp;gt;:2377&lt;/code&gt;&lt;/li&gt;
          &lt;li&gt;We add a Swarm node label to every node based on its value from the Terraform config’s deployment stage variable (this is later used for constraining Swarm services to specific stages when the user has access to multiple stages), and one that signifies what datacenter the node is in&lt;/li&gt;
          &lt;li&gt;We also add a Swarm node label to assign each node to a UCP collection by setting the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;com.docker.ucp.access.label&lt;/code&gt; label, e.g. to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;prod&lt;/code&gt;, which will restrict which UCP users will be able to see and interact with/schedule services on the node&lt;/li&gt;
          &lt;li&gt;We wait for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp-reconcile&lt;/code&gt; container on the first manager node to exit successfully (see the &lt;a href=&quot;https://docs.docker.com/ee/ucp/ucp-architecture/#under-the-hood&quot;&gt;UCP architecture&lt;/a&gt; page), indicating that the UCP worker components are up and running&lt;/li&gt;
          &lt;li&gt;When all worker and DTR nodes are correctly configured as plain workers, we need to install DTR:
            &lt;ol&gt;
              &lt;li&gt;If there are no existing DTR containers on any DTR node (we check for the presence of a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dtr-api-*&lt;/code&gt; container), we install DTR using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run ... docker/dtr install&lt;/code&gt; command, referring to the load-balanced VIP address of our UCP cluster (using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--ucp-url&lt;/code&gt; option), and provide the NFS address of our externalized storage (using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--nfs-storage-url&lt;/code&gt; option) as well as the TLS certificates (using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--dtr-{ca,cert,key}&lt;/code&gt; options)&lt;/li&gt;
              &lt;li&gt;For each of the remaining DTR nodes, we run the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run ... docker/dtr join&lt;/code&gt; command, which replicates the DTR database to the joining replica nodes and makes them part of the DTR cluster&lt;/li&gt;
            &lt;/ol&gt;
          &lt;/li&gt;
        &lt;/ol&gt;
      &lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
&lt;/ol&gt;
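&lt;p&gt;The idempotent worker join above can be sketched as an Ansible task (the variable names are hypothetical):&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;- name: Join node to the Swarm as a worker
  command: &quot;docker swarm join --token {{ worker_join_token }} {{ first_manager_addr }}:2377&quot;
  # Only join if the node is not already part of a cluster
  when: swarm_state.stdout != &quot;active&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;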

&lt;p&gt;Phew, that was a lot, and that was even skipping over a few details!&lt;/p&gt;

&lt;h1 id=&quot;upgrading&quot;&gt;Upgrading&lt;/h1&gt;

&lt;p&gt;The upgrade starts from a list of the template each VM currently in the cluster was cloned from, then swaps out one VM at a time, re-cloning it from the most recent template.&lt;/p&gt;

&lt;p&gt;E.g., if the template list looked like this:&lt;/p&gt;

&lt;div class=&quot;language-hcl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nx&quot;&gt;templates&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;ucp1&quot;&lt;/span&gt;    &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;ucp2&quot;&lt;/span&gt;    &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;ucp3&quot;&lt;/span&gt;    &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;worker1&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;worker2&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;worker3&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;… we first look up the deployment stage for each node. We process the nodes in this order: UCP, DTR, workers (each in increasing deployment stage order, i.e. dev before prod). We then swap in the new template name for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp1&lt;/code&gt; node, and the template list will look like this instead:&lt;/p&gt;

&lt;div class=&quot;language-hcl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nx&quot;&gt;templates&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;ucp1&quot;&lt;/span&gt;    &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180820-022846-master-63776-pipeline&quot;&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;ucp2&quot;&lt;/span&gt;    &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;ucp3&quot;&lt;/span&gt;    &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;worker1&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;worker2&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;worker3&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&quot;&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You’ll notice the template for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp1&lt;/code&gt; having changed from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Ubuntu1604DockerTemplate-20180604-154739-master-50798-push&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Ubuntu1604DockerTemplate-20180820-022846-master-63776-pipeline&lt;/code&gt;. A useful tool called &lt;a href=&quot;https://github.com/kvz/json2hcl&quot;&gt;json2hcl&lt;/a&gt; (&lt;a href=&quot;https://hub.docker.com/r/fxinnovation/json2hcl/&quot;&gt;Docker image&lt;/a&gt;) helps us convert between HashiCorp’s HCL configuration format and JSON, allowing us to manipulate the config with &lt;a href=&quot;https://stedolan.github.io/jq/&quot;&gt;jq&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We then run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;terraform plan&lt;/code&gt; followed by &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;terraform apply&lt;/code&gt;, and Terraform should then show the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp1&lt;/code&gt; VM as &lt;a href=&quot;https://www.terraform.io/docs/commands/taint.html&quot;&gt;&lt;em&gt;tainted&lt;/em&gt;&lt;/a&gt; since its template VM name has changed, forcing the VM to be destroyed and a new one created in its place. Thankfully, Terraform allows us to hook into the destroy process, running scripts &lt;em&gt;before&lt;/em&gt; the VM is destroyed. Furthermore, we can abort the destruction of the VM if our script errors out, allowing us to stop the process and carry out manual intervention when facing specific failure conditions. Terraform remembers its progress and will re-attempt to replace the VM next time we run the pipeline.&lt;/p&gt;
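&lt;p&gt;The destroy hook is a destroy-time provisioner. In the Terraform syntax of the time, it might look roughly like this (the resource and playbook names are hypothetical):&lt;/p&gt;

&lt;div class=&quot;language-hcl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;resource &quot;vsphere_virtual_machine&quot; &quot;manager&quot; {
  # ... clone settings, networking, etc. omitted ...

  # Runs before the VM is destroyed; a non-zero exit code
  # aborts the destroy so we can intervene manually
  provisioner &quot;local-exec&quot; {
    when    = &quot;destroy&quot;
    command = &quot;ansible-playbook playbooks/drain-and-leave.yml --limit ${self.name}&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;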

&lt;p&gt;Here’s the general process for upgrades:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Retrieve list of templates for current nodes from Terraform&lt;/li&gt;
  &lt;li&gt;For each stage (usually in this order: UCP, DTR, other workers):
    &lt;ol class=&quot;lower_alpha_list&quot;&gt;
      &lt;li&gt;For each VM in that stage:
        &lt;ol class=&quot;lower_roman_list&quot;&gt;
          &lt;li&gt;Update the template name to the most recent from the VMware vCenter catalog (the selection mechanism is likely to be improved to be more granular)&lt;/li&gt;
          &lt;li&gt;Run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;terraform plan&lt;/code&gt; followed by &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;terraform apply&lt;/code&gt;, resulting in:
            &lt;ol&gt;
              &lt;li&gt;Terraform running a pre-destroy script that runs an Ansible playbook:
                &lt;ol&gt;
                  &lt;li&gt;Draining the node from running tasks&lt;sup id=&quot;fnref:drain-issues&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:drain-issues&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
                  &lt;li&gt;Performing additional tasks for UCP* or DTR** nodes&lt;/li&gt;
                  &lt;li&gt;Having the node leave the swarm and remove it from the node list&lt;/li&gt;
                &lt;/ol&gt;
              &lt;/li&gt;
              &lt;li&gt;Terraform destroying the existing VM in vSphere&lt;/li&gt;
              &lt;li&gt;Terraform creating a new VM in vSphere cloned from the new template&lt;/li&gt;
              &lt;li&gt;Terraform running the scripts of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ansible_managers&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ansible_all&lt;/code&gt; null resources as described in the &lt;a href=&quot;#apply&quot;&gt;Apply&lt;/a&gt; section:
                &lt;ol&gt;
                  &lt;li&gt;Joining the new VM to the Swarm cluster, wait for it to be marked as healthy&lt;/li&gt;
                  &lt;li&gt;Putting the VM in the right UCP collection (using node labels) so tasks can be scheduled on it&lt;/li&gt;
                  &lt;li&gt;Performing additional tasks for UCP* or DTR** nodes&lt;/li&gt;
                  &lt;li&gt;When the &lt;em&gt;last&lt;/em&gt; VM has been successfully upgraded to the same Docker Engine version, UCP and DTR are upgraded if the desired version is newer than what’s currently installed:
                    &lt;ol&gt;
                      &lt;li&gt;We take a backup of UCP using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run ... docker/ucp backup&lt;/code&gt; command, and copy the resulting archive to the GitLab runner for archival.&lt;/li&gt;
                      &lt;li&gt;UCP is upgraded by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run ... docker/ucp upgrade&lt;/code&gt; command, and the command runs synchronously until the upgrade completes.&lt;/li&gt;
                      &lt;li&gt;We take a backup of DTR’s metadata (not its image data, which resides on an NFS share), which takes a considerable time, and copy the resulting archive to the GitLab runner for archival. Note that we have to run the output of the backup command through &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gzip&lt;/code&gt; for the resulting file size to be small enough that Ansible’s expensive &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fetch&lt;/code&gt; action doesn’t &lt;a href=&quot;https://docs.ansible.com/ansible/latest/modules/fetch_module.html#notes&quot;&gt;consume all available memory&lt;/a&gt; on our GitLab runner VM.&lt;/li&gt;
                      &lt;li&gt;The process is similar for DTR, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run ... docker/dtr upgrade&lt;/code&gt;, which upgrades each component one at a time on each DTR replica.&lt;/li&gt;
                    &lt;/ol&gt;
                  &lt;/li&gt;
                &lt;/ol&gt;
              &lt;/li&gt;
            &lt;/ol&gt;
          &lt;/li&gt;
        &lt;/ol&gt;
      &lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;* If the node being upgraded is one of the UCP controllers, it is demoted before it is destroyed, so that the remaining controllers learn about the (temporarily) reduced number of controllers.&lt;/p&gt;

&lt;p&gt;** If the node being upgraded is one of the DTR replicas, it is removed from the DTR cluster before it is destroyed, so that the remaining replicas learn about the (temporarily) reduced number of replicas. Once its replacement has joined the cluster as a worker node, it is additionally joined to the DTR cluster to become one of the available replicas.&lt;/p&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;Using this method, we can take the same approach whether we’re upgrading Docker Engine, adding new Ubuntu packages or applying a security patch: we simply make the change once in our VM template and roll the upgrade out across our cluster. There’s no ambiguity about whether a node is up to date: either it is based on the latest VM template, or it isn’t.&lt;/p&gt;

&lt;p&gt;Once we’ve gone through a few more supervised rolling upgrades, we want to turn this into a regularly scheduled job (&lt;a href=&quot;https://docs.gitlab.com/ce/user/project/pipelines/schedules.html&quot;&gt;GitLab&lt;/a&gt; can help with visibility there, compared with an old-fashioned &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cron&lt;/code&gt; job), e.g. running weekly.&lt;/p&gt;

&lt;p&gt;In an upcoming post, I will provide a deep dive into the configuration of the various tools involved, namely Packer, Terraform, Ansible and GitLab, and any quirks that may be useful to other organizations.&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:host-drs&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The direct host assignment will be replaced with &lt;a href=&quot;https://www.terraform.io/docs/providers/vsphere/r/compute_cluster_vm_host_rule.html#example-usage&quot;&gt;DRS group membership&lt;/a&gt;, the management of which was introduced in the &lt;a href=&quot;https://github.com/terraform-providers/terraform-provider-vsphere/blob/master/CHANGELOG.md#150-may-11-2018&quot;&gt;Terraform vSphere Provider v1.5.0&lt;/a&gt;. &lt;a href=&quot;#fnref:host-drs&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:linked-clone-requirements&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Since the disk size must be identical to that of the VM template when using &lt;a href=&quot;https://www.terraform.io/docs/providers/vsphere/r/virtual_machine.html#linked_clone&quot;&gt;linked clones&lt;/a&gt;, this is practically always the same for all VMs. &lt;a href=&quot;#fnref:linked-clone-requirements&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:indirect-tf-config&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;These are made as indirect lookups to prevent data duplication. &lt;a href=&quot;#fnref:indirect-tf-config&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:indirect-tf-config:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:netapp-dvp-note&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;These relate to our use of the &lt;a href=&quot;https://netapp-trident.readthedocs.io/en/stable-v18.07/&quot;&gt;Netapp Docker Volume Plugin&lt;/a&gt;. &lt;a href=&quot;#fnref:netapp-dvp-note&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:netapp-dvp-note:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:netapp-dvp-note:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:gitlab-variables-note&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Naturally, we have to define numerous variables in GitLab itself, too, in order to be able to externalize some configuration and to provide secret variables. &lt;a href=&quot;#fnref:gitlab-variables-note&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:drain-issues&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;When a Swarm node is put in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;drain&lt;/code&gt; availability status, it currently doesn’t wait for tasks to exit gracefully – rather, they are stopped shortly afterwards, potentially causing downtime if the service has no replicas on other nodes in the stage. &lt;a href=&quot;#fnref:drain-issues&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;</content><author><name>Sune Keller</name></author><category term="docker" /><category term="docker enterprise" /><category term="packer" /><category term="terraform" /><category term="ansible" /><category term="gitlab" /><category term="series" /><category term="enterprise" /><summary type="html">This part describes how we use Terraform, Ansible and GitLab for creating and upgrading clusters in an automated fashion.</summary></entry><entry><title type="html">Declarative Docker Enterprise with Packer, Terraform, Ansible and GitLab - part 1</title><link href="https://blog.sunekeller.dk/2018/08/declarative-docker-enterprise-part-1/" rel="alternate" type="text/html" title="Declarative Docker Enterprise with Packer, Terraform, Ansible and GitLab - part 1" /><published>2018-08-17T00:00:00+02:00</published><updated>2018-08-17T00:00:00+02:00</updated><id>https://blog.sunekeller.dk/2018/08/declarative-docker-enterprise-part-1</id><content type="html" xml:base="https://blog.sunekeller.dk/2018/08/declarative-docker-enterprise-part-1/">&lt;h1 id=&quot;background&quot;&gt;Background&lt;/h1&gt;

&lt;p&gt;At &lt;a href=&quot;https://www.almbrand.dk&quot;&gt;Alm. Brand&lt;/a&gt;, we’ve been running Docker in production since the first beta of Docker Universal Control Plane (UCP). This is the story about how we moved on to a more automated and declarative approach.&lt;/p&gt;

&lt;p&gt;We started with greenfield services, a simple service discovery, config management and dynamic load balancing setup and accelerated quickly. After having proven the new Docker based platform, interest grew in Dockerizing and migrating the legacy apps running on our organically grown application server infrastructure, which had become painful to manage, and required daily firefighting.&lt;/p&gt;

&lt;p&gt;In our &lt;a href=&quot;https://www.youtube.com/watch?v=nI9WhhtFmFs&quot;&gt;DockerCon Europe 2017 talk&lt;/a&gt;, we describe our process and the gains from our migration journey, so I will not dive further into that in this post. Rather, I will describe how we became victims of our own success, and what we did to better the situation.&lt;/p&gt;

&lt;h1 id=&quot;the-problem&quot;&gt;The problem&lt;/h1&gt;

&lt;p&gt;In the summer of 2017, it became clear that co-locating our license constrained legacy apps and our open-source based greenfield services would prevent us from scaling our infrastructure to match the growing number of new services and migrated apps. As we’re an Enterprise™, it would become the end of 2017 before we were able to start working on a solution, since, finally, the resources in the cluster were becoming exhausted, causing outages and painful firefighting in both dev/test and production.&lt;/p&gt;

&lt;p&gt;The most straightforward solution would be to build a new Docker cluster and migrate the greenfield services to it, thus freeing up resources on the first cluster. Pulling that off as fast as possible would of course require several key employees to be available at short notice to modify external load balancers, create DHCP reservations, open firewall ports, create and provision VMs, etc.&lt;/p&gt;

&lt;p&gt;However, we were determined to make better use of our learnings from building and running the first cluster. We temporarily scaled up the first cluster, the bulk of the cost being additional licenses for the proprietary legacy software, and thus bought ourselves time to create a better solution.&lt;/p&gt;

&lt;h1 id=&quot;our-solution&quot;&gt;Our solution&lt;/h1&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/2018-08-16-ucp-provisioner-pipeline-1.png&quot; alt=&quot;A screenshot of our provisioning pipeline&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Some of our learnings from the first two years of running a Docker cluster were:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Keeping OS packages on VMs up to date, even with config management tools, does not completely prevent configuration drift&lt;/li&gt;
  &lt;li&gt;Manual work leads to human error, some of which will only be discovered further down the line&lt;/li&gt;
  &lt;li&gt;If only a few people work daily with a system, it will become less approachable to newcomers, unless an effort is made to codify and automate it&lt;/li&gt;
  &lt;li&gt;As much as possible of the data and process involved in creating or modifying a cluster should be versioned and executable as a pipeline&lt;/li&gt;
  &lt;li&gt;When the processes are different for changing different components, it requires more knowledge and context switching. Simplifying processes can save time and reduce errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;creating-a-golden-image&quot;&gt;Creating a golden image&lt;/h2&gt;

&lt;p&gt;Since we run our own VMware based infrastructure, we have to create VMs from scratch. Fortunately, &lt;a href=&quot;https://www.packer.io/&quot;&gt;HashiCorp Packer&lt;/a&gt; is a very useful tool in that regard. JetBrains have made a &lt;a href=&quot;https://github.com/jetbrains-infra/packer-builder-vsphere&quot;&gt;VMware vSphere plugin for Packer&lt;/a&gt; that helps create a new VM or VM template from a given ISO file. The plugin also supports creating a virtual floppy drive to be made available to the installer, which we make use of to supply a Kickstart script. The Kickstart script configures the system account that we use in the following stages, and installs basic packages and configuration such as NTP, internal CA certificates, corporate proxy environment variables, timezone, etc.&lt;/p&gt;

&lt;p&gt;This process runs in a GitLab pipeline for a repo aptly named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;golden-image-base&lt;/code&gt;. In order to keep iteration times short, we save the resulting VM as a template at this stage and hand off to another repo’s pipeline, named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;golden-image-docker-ucp&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the Docker + UCP repo’s pipeline, we clone the base VM template and use Packer’s Ansible provisioner to provision it further. Among other things, we use &lt;a href=&quot;https://github.com/vmware/govmomi/blob/master/govc/README.md&quot;&gt;govc&lt;/a&gt; to add a separate disk for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/var/lib/docker&lt;/code&gt; partition as recommended in the &lt;a href=&quot;https://success.docker.com/api/asset/.%2Frefarch%2Fsecurity-best-practices%2FCIS_Docker_Community_Edition_Benchmark_v1.1.0.pdf&quot;&gt;CIS CE Benchmark v1.1.0&lt;/a&gt; section 1.1. We also install a configurable version of Docker Engine and pre-pull the UCP and DTR images from Docker Hub. To get the list of images to pull for UCP, you can run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run --rm docker/ucp images --list&lt;/code&gt;. Beware that the output contains carriage returns (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;\r&lt;/code&gt;), which can interfere with automation, so pipe it through &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;tr -d '\r'&lt;/code&gt;. For DTR, the command is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run --rm docker/dtr images&lt;/code&gt;, and it &lt;em&gt;also&lt;/em&gt; yields carriage returns in the output.&lt;/p&gt;
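
&lt;p&gt;As a minimal sketch of the carriage-return pitfall (the image name below is made up for the example):&lt;/p&gt;

```shell
# Simulate one line of `docker/ucp images --list` output, which ends in "\r\n",
# and strip the carriage return so the name is safe to pass on to e.g. `docker pull`
printf 'docker/ucp-agent:3.0.0\r\n' | tr -d '\r'
# prints: docker/ucp-agent:3.0.0
```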

&lt;p&gt;The final VM template is now ready to be used by the pipeline of the next repo in the process, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ucp-provisioner&lt;/code&gt;. That will be described in the next part of the series.&lt;/p&gt;</content><author><name>Sune Keller</name></author><category term="docker" /><category term="docker enterprise" /><category term="packer" /><category term="terraform" /><category term="ansible" /><category term="gitlab" /><category term="series" /><category term="enterprise" /><summary type="html">At [Alm. Brand](https://www.almbrand.dk), we've been running Docker in production since the first beta of Docker Universal Control Plane (UCP). This is the story about how we moved on to a more automated and declarative approach.</summary></entry><entry><title type="html">Using the new config and secret templating in Docker CE 18.03</title><link href="https://blog.sunekeller.dk/2018/04/docker-18-03-config-and-secret-templating/" rel="alternate" type="text/html" title="Using the new config and secret templating in Docker CE 18.03" /><published>2018-04-01T00:00:00+02:00</published><updated>2018-04-01T00:00:00+02:00</updated><id>https://blog.sunekeller.dk/2018/04/docker-18-03-config-and-secret-templating</id><content type="html" xml:base="https://blog.sunekeller.dk/2018/04/docker-18-03-config-and-secret-templating/">&lt;h1 id=&quot;background&quot;&gt;Background&lt;/h1&gt;

&lt;p&gt;A really helpful new feature just landed in Docker CE 18.03. Since &lt;a href=&quot;https://docs.docker.com/engine/swarm/secrets/&quot;&gt;Swarm Secrets&lt;/a&gt; were introduced in Docker 1.13 (January 2017) and &lt;a href=&quot;https://docs.docker.com/engine/swarm/configs/&quot;&gt;Swarm Configs&lt;/a&gt; came around in 17.06, getting sensitive data and configuration files securely distributed to your Swarm service containers is now more or less a solved problem. Using Swarm configs, you no longer have to bake configuration files into a custom image, or use external means to distribute the configuration files to all nodes in your Swarm cluster.&lt;/p&gt;

&lt;h1 id=&quot;the-problem&quot;&gt;The problem&lt;/h1&gt;

&lt;p&gt;However, what are your options if a large config file also contains secret data? Putting the configuration file in a Swarm secret makes it cumbersome for operators to inspect the non-sensitive parts of the configuration, while putting it in a Swarm config makes it too easy to reveal the secrets as part of daily operations.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://store.docker.com/images/mariadb&quot;&gt;Some&lt;/a&gt; &lt;a href=&quot;https://store.docker.com/images/wordpress&quot;&gt;of&lt;/a&gt; &lt;a href=&quot;https://store.docker.com/images/postgres&quot;&gt;the&lt;/a&gt; official images on Docker Store have built-in entrypoint scripts to help specify the file containing e.g. the MariaDB root password, by pointing an environment variable to a file, which could very well be a Swarm secret located in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/run/secrets/&lt;/code&gt; directory. However, most images have not been adapted to support Swarm secrets natively.&lt;/p&gt;

&lt;p&gt;A simple example is &lt;a href=&quot;https://store.docker.com/images/redis&quot;&gt;the official Redis image&lt;/a&gt;. If you wanted to set configuration options other than the password, you would have to do one of the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Expose the password to operators:&lt;/strong&gt; Create a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;redis.conf&lt;/code&gt; Swarm config including the password in plaintext, visible to everyone who can inspect the config through the Docker API or who looks over the shoulders of such a person.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Hide all config from operators:&lt;/strong&gt; Put the entire &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;redis.conf&lt;/code&gt; file inside a Swarm secret, making it hard for operators to see the non-sensitive contents of the Redis configuration. If they copy out the contents from the container, they will still have unwittingly exfiltrated the secret password and risked its exposure.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Hack together some custom templating:&lt;/strong&gt;
    &lt;ol&gt;
      &lt;li&gt;Create a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;redis.conf&lt;/code&gt; Swarm config with a placeholder for the password,&lt;/li&gt;
      &lt;li&gt;Create a Swarm secret with the password,&lt;/li&gt;
      &lt;li&gt;Add a custom entrypoint script to replace the placeholder with the contents of the correct secret file inside &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/run/secrets/&lt;/code&gt;,&lt;/li&gt;
      &lt;li&gt;Write a Dockerfile which includes the entrypoint script, distribute the resulting image to your Swarm through a public or private registry and keep track of official image updates forever, repeating this step for every important feature or security update,&lt;/li&gt;
      &lt;li&gt;Become miserable because of point 4.&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see, the choices are less than ideal.&lt;/p&gt;

&lt;h1 id=&quot;the-improvement&quot;&gt;The improvement&lt;/h1&gt;

&lt;p&gt;Since &lt;a href=&quot;https://docs.docker.com/release-notes/docker-ce/#18030-ce-2018-03-21&quot;&gt;Docker CE 18.03&lt;/a&gt;, however, you can have the best of both worlds. Using the new &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--template-driver&lt;/code&gt; option to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker config create&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker secret create&lt;/code&gt;, you can now use the Golang templating language to insert secret references as well as other templating placeholders directly in your Swarm configs and have them rendered only when each service task is created.&lt;/p&gt;

&lt;p&gt;The possibilities listed in the relevant pull request on the &lt;a href=&quot;https://github.com/moby/moby/pull/33702&quot;&gt;Moby project&lt;/a&gt; are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;{{ env &quot;VAR&quot; }}&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;{{ .Task.ID }}&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;{{ secret &quot;sometarget&quot; }}&lt;/code&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ve discovered that you can also use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;{{ config &quot;sometarget&quot; }}&lt;/code&gt;, though I’d be careful with introducing obscure dependencies between different Swarm configs.&lt;/p&gt;
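
&lt;p&gt;Put together, a templated &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;redis.conf&lt;/code&gt; could mix both kinds of references; this is a sketch, and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;redis_shared.conf&lt;/code&gt; config name is made up for the example:&lt;/p&gt;

```
# rendered per task when the config was created with --template-driver golang
requirepass {{ secret "redis_pw" }}
# splice in non-sensitive settings from another Swarm config attached to the
# service; note this creates a dependency between the two configs
{{ config "redis_shared.conf" }}
```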

&lt;p&gt;Adding documentation is tracked in &lt;a href=&quot;https://github.com/docker/docker.github.io/issues/6207&quot;&gt;this GitHub issue&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;example&quot;&gt;Example&lt;/h1&gt;

&lt;p&gt;Here’s a step-by-step example to illustrate the feature:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Create a Swarm secret with the intended password for Redis:&lt;/p&gt;

    &lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;openssl rand &lt;span class=&quot;nt&quot;&gt;-hex&lt;/span&gt; 32 | &lt;span class=&quot;nb&quot;&gt;tr&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'\n'&lt;/span&gt; | docker secret create redis_pw -
&lt;span class=&quot;go&quot;&gt;s3agzeg68rvf2nhimk20mjm05
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Create a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;redis.conf&lt;/code&gt; file e.g. using the &lt;a href=&quot;http://download.redis.io/redis-stable/redis.conf&quot;&gt;default Redis config&lt;/a&gt;,&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Change the TCP port in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;redis.conf&lt;/code&gt; to show we’re changing the defaults, something we’d like to be able to see as an operator:&lt;/p&gt;

    &lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;port 2100
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Set the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;requirepass&lt;/code&gt; option in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;redis.conf&lt;/code&gt; as a &lt;em&gt;templated secret reference&lt;/em&gt;, the value of which we’d like to &lt;em&gt;avoid&lt;/em&gt; being readable during daily operations:&lt;/p&gt;

    &lt;div class=&quot;language-go highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;requirepass&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{{&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;secret&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&quot;redis_pw&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Create a Swarm config for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;redis.conf&lt;/code&gt; using the new &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--template-driver&lt;/code&gt; option, and inspect its contents to show that the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;redis_pw&lt;/code&gt; secret is not revealed:&lt;/p&gt;

    &lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;docker config create &lt;span class=&quot;nt&quot;&gt;--template-driver&lt;/span&gt; golang redis.conf ./redis.conf
&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;docker config inspect &lt;span class=&quot;nt&quot;&gt;--pretty&lt;/span&gt; redis.conf | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'^requirepass'&lt;/span&gt;
&lt;span class=&quot;go&quot;&gt;requirepass {{ secret &quot;redis_pw&quot; }}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Create an attachable overlay network and a Swarm service using the official Redis image:&lt;/p&gt;

    &lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;docker network create &lt;span class=&quot;nt&quot;&gt;--driver&lt;/span&gt; overlay &lt;span class=&quot;nt&quot;&gt;--attachable&lt;/span&gt; redis_network
&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;docker service create &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; redis &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--secret&lt;/span&gt; redis_pw &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; redis_network &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;source&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;redis.conf,target&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/usr/local/etc/redis/redis.conf &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  redis:alpine &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  redis-server /usr/local/etc/redis/redis.conf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Try running a Redis CLI against the Redis service, and you should be rejected:&lt;/p&gt;

    &lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;docker run &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; redis_network redis:alpine redis-cli &lt;span class=&quot;nt&quot;&gt;-h&lt;/span&gt; redis &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; 2100 &lt;span class=&quot;nb&quot;&gt;set &lt;/span&gt;x &lt;span class=&quot;s2&quot;&gt;&quot;I'm in&quot;&lt;/span&gt;
&lt;span class=&quot;go&quot;&gt;NOAUTH Authentication required.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Try running a CLI in a service with access to the secret:&lt;/p&gt;

    &lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;docker service create &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--detach&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; redis-cli-test &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--secret&lt;/span&gt; redis_pw &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; redis_network &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--restart-condition&lt;/span&gt; on-failure &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  redis:alpine &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  sh &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'redis-cli -h redis -p 2100 -a &quot;$(cat /run/secrets/redis_pw)&quot; set x &quot;I'&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;'&quot;&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'m in&quot;; redis-cli -h redis -p 2100 -a &quot;$(cat /run/secrets/redis_pw)&quot; get x'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Wait a couple of seconds. Currently we have to specify &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--detach&lt;/code&gt; with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--restart-condition on-failure&lt;/code&gt; since the synchronous &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker service create&lt;/code&gt; command (without &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--detach&lt;/code&gt;) interprets a task in state &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Completed&lt;/code&gt; as a failure, and will not return you to the command line.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Inspect the logs to see it working, and you should get the following result:&lt;/p&gt;

    &lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;docker service logs &lt;span class=&quot;nt&quot;&gt;--raw&lt;/span&gt; redis-cli-test
&lt;span class=&quot;go&quot;&gt;OK
I'm in
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The example says &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;sometarget&quot;&lt;/code&gt; because it is possible to specify a source and a target for a secret when adding it to a Swarm service like so: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--secret source=prod_redis_pw,target=redis_pw&lt;/code&gt;, and you have to use the &lt;em&gt;target&lt;/em&gt; name rather than the &lt;em&gt;source&lt;/em&gt; name in the templated reference in your Swarm config. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Note to self: I should create an issue in &lt;a href=&quot;https://github.com/docker/cli&quot;&gt;docker/cli&lt;/a&gt; for this. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;</content><author><name>Sune Keller</name></author><category term="docker" /><category term="swarm" /><category term="configs" /><category term="secrets" /><category term="intermediate" /><category term="features" /><category term="what's new" /><category term="docker ce" /><summary type="html">Since Docker CE 18.03, you can have the best of both worlds. Using the new --template-driver option to docker config create and docker secret create, you can insert secret references as well as other templating placeholders directly in your Swarm configs, evaluated at task creation time.</summary></entry></feed>