.. _operating_swarm:

Setting up Docker Swarm
=======================

Prerequisites
-------------

Docker
^^^^^^

Installation of VS on Docker Swarm requires only the Docker Engine to be installed. It has been successfully tested with versions 19 and 20.

Operating System
^^^^^^^^^^^^^^^^

VS should be deployable on any reasonably new Unix-based system and has been successfully tested to be deployed via Docker Swarm on the following systems:

- Red Hat Enterprise Linux 7.9
- Red Hat Enterprise Linux 8.6
- Ubuntu 18.04
- Ubuntu 20.04

Python and Helm
^^^^^^^^^^^^^^^

Generating the configuration files requires ``Python 3`` and ``Helm``, but neither needs to be installed on the target deployment system.

.. _initswarm:

Swarm Initialization
--------------------

In order to set up an instance of the View Server (VS) for Docker Swarm, the separate ``vs_starter`` utility is recommended; it requires at least ``Python 3.8``. Its objective is to create a set of static docker compose configuration files from the rendered helm templates, which in turn use a set of previously created ``values.yaml`` files. See :ref:`operating_k8s` for more information about ``values.yaml`` files, how to create them and the meaning of the individual values.

First ensure that you have the `Helm <https://helm.sh>`_ software installed in order to generate the helm templates - instructions can be found on the `Helm install page <https://helm.sh/docs/intro/install/>`_.

The ``vs_starter`` utility is distributed as a Python package and easily installed via ``pip``:

.. code-block:: bash

    pip3 install git+https://gitlab.eox.at/vs/vs-starter.git

Configuration files for a new VS Swarm collection named ``test`` and deployment ``staging`` with an additional set of PRISM specific configuration values can be created by:

.. code-block:: bash

    # render helm templates from two sets of values.yaml files
    # (generic ones and deployment specific ones);
    # <chart> stands for the VS helm chart reference to render
    helm template test-staging <chart> --output-dir ./rendered-template --values ./test/values.yaml --values ./test/values.staging.yaml

    # convert the templates content to docker compose swarm deployment files
    vs_starter rendered-template/vs/templates --slug test --environment staging -o $PWD/test/docker-compose.shared.yml -o $PWD/test/docker-compose.instance.yml

The templates support the following ``environment`` values: ``dev``, ``staging`` and ``ops``. The collection name passed as the ``--slug`` parameter should contain only lowercase letters and underscores. The optional, repeatable ``-o`` parameter takes absolute paths to additional override templates that are rendered together with the default ones from ``vs-starter``.

For a more detailed usage guide, continue to the README and the sample templates in the `vs-starter repository <https://gitlab.eox.at/vs/vs-starter>`_.
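To sanity-check the generated files before deploying them, the compose configuration can be parsed locally. This is a minimal sketch, assuming ``docker-compose`` is available on the rendering machine and that the files were written to the ``test`` directory as in the example above; references to not-yet-created external secrets or networks may still produce warnings:

.. code-block:: bash

    # list the rendered helm templates and the generated compose files
    ls rendered-template/vs/templates test/

    # parse and validate the generated compose files without deploying anything
    docker-compose -f test/docker-compose.shared.yml \
                   -f test/docker-compose.instance.yml \
                   config > /dev/null && echo "compose files OK"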
Once the initialization is finished, the next step is to deploy the Docker Swarm stack.

.. _setup_swarm:

Setup Docker Swarm
------------------

In this chapter the setup of a new VS stack is detailed. Before this step can be done, the configuration and environment files need to be present. These files can be added manually or be created as described in the :ref:`initswarm` step.

Docker
^^^^^^

In order to deploy the Docker Swarm stack to the target machine, Docker and its facilities need to be installed. This step depends on the system's architecture. On a Debian-based system it may look like this:

.. code-block:: bash

    sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg-agent \
        software-properties-common

    # add Docker's official GPG key
    curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

    # add the apt repository
    sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/debian \
        $(lsb_release -cs) \
        stable"

    # fetch the package index and install Docker
    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io

Docker Swarm setup
^^^^^^^^^^^^^^^^^^

Now that Docker is installed, the machine can either create a new swarm or join an existing one.

To create a new swarm, the following command is used:

.. code-block:: bash

    docker swarm init --advertise-address <ip>

where ``<ip>`` is the IP address of the manager node, under which it will be reachable by the worker nodes. For a single node setup (dev), ``--advertise-address`` is not needed.

Alternatively, an existing swarm can be joined by a worker. The easiest way to do this is to obtain a ``join-token``. On an existing swarm manager (where a swarm was initialized or already joined as manager) run this command:

.. code-block:: bash

    docker swarm join-token worker

This prints out a command that can be run on a machine to join the swarm:

.. code-block:: bash

    docker swarm join --token <token> <manager-ip>:<port>

It is possible to dedicate certain workers exclusively to ingestion, while others only take care of rendering. This setup is beneficial when a mix of nodes with different hardware parameters is available. In order to mark a node, for example, as ``external`` so that it contributes to rendering only, one can simply run:

.. code-block:: bash

    docker node update --label-add type=external <node-id>

Additionally, the ``placement`` parameter needs to be set in the docker compose file. Note that the default ``vs-starter`` templates do not apply any external/internal label placement constraints.

.. code-block:: yaml

    renderer:
      deploy:
        placement:
          constraints:
            - node.labels.type == external

Additional information on swarm management can be found in the official `Docker Swarm documentation <https://docs.docker.com/engine/swarm/>`_.

Optional Logging setup
^^^^^^^^^^^^^^^^^^^^^^

For the ``staging`` and ``ops`` environments, the services in the sample compose files already reference the ``fluentd`` logging driver, so no manual change is necessary.

Alternatively, the default logging driver can be configured on the docker daemon level to be ``fluentd`` by creating the file ``/etc/docker/daemon.json`` with the following content:

.. code-block:: json

    {
        "log-driver": "fluentd"
    }

and afterwards restarting the docker daemon via:

.. code-block:: bash

    systemctl restart docker

For the ``dev`` environment, the compose files configure the ``json-file`` logging driver for each service.

.. include:: configuration_swarm.rst

Stack Deployment
^^^^^^^^^^^^^^^^

Before the stack deployment step, the environment variables and configurations that are considered sensitive (``SECRETS``) have to be created; refer to the :ref:`swarm_sensitive-vars` section. An illustrative sketch is shown below.
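As an illustration, the secrets and configs can be created with the standard Docker CLI before the first deployment. The names and paths used here (``DB_PASSWORD``, ``sftp_users_conf``) are hypothetical placeholders; the actual identifiers expected by the stack are listed in the :ref:`swarm_sensitive-vars` section:

.. code-block:: bash

    # create a secret from stdin (DB_PASSWORD is a placeholder name)
    printf 'changeme' | docker secret create DB_PASSWORD -

    # create a non-sensitive config from a local file (name and path are illustrative)
    docker config create sftp_users_conf ./config/sftp_users.conf

    # list the secrets and configs registered in the swarm
    docker secret ls
    docker config ls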
Now that a Docker Swarm is established and the docker secrets and configs are created, it is time to deploy the VS as a stack. This is done using the created Docker Compose configuration files.

The deployment of the created stack compose files should be performed in the following order:

1) base stack - with updated ``extnet`` networks for each collection
2) logging stack (references the ``logging-extnet`` network from base)
3) individual ``{slug}`` collection stacks (each references the ``extnet`` network managed by the base stack)

Logging and base stacks
~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

    docker stack deploy -c docker-compose.base.yml base
    docker stack deploy -c docker-compose.logging.yml logging

.. _redeploystack:

Stack redeployment
~~~~~~~~~~~~~~~~~~

For a redeployment of a single stack one would do the following:

.. code-block:: bash

    collection_env="test-staging"

    # remove the stack first, but only if it was previously running
    docker stack rm "$collection_env"

    docker stack deploy \
        -c "$collection_env"/docker-compose.yml \
        -c "$collection_env"/docker-compose.shared.yml \
        -c "$collection_env"/docker-compose.instance.yml \
        "$collection_env"

(Replace ``test-staging`` with the actual `slug-env` identifier; this assumes that ``vs_starter`` wrote its output templates to the ``"$collection_env"`` directory.)

These commands perform a set of tasks. First, all necessary docker images are obtained. When all relevant images are pulled from their respective repositories, the services of the stack are initialized. When starting for the first time, the startup procedure takes some time, as everything needs to be initialized. This includes the creation of the database, the database user, the required tables and the Django instance.

This process can be supervised using the ``docker service ls`` command, which lists all available services and their respective status.

If a service is not starting or is stuck in a ``0/x`` state, inspect its status or logs via:

.. code-block:: bash

    docker service ps --no-trunc <service-name>
    docker service logs <service-name>

The above mentioned process necessarily involves a certain service downtime between the shutdown of the running stack and the new deployment.

.. include:: management_swarm.rst

.. include:: access_swarm.rst