Operations and management

This chapter lists common management and operations commands and procedures, with references to other parts of the documentation where the underlying concepts are described in more detail.

Generate VS configurations for one stack

The following two commands generate a set of docker-compose and other configuration files for one VS stack on Docker Swarm.

# render helm templates from two sets of values.yaml files (generic ones and deployment-specific).
helm template test-staging <chart-location> --output-dir ./rendered-template --values ./test/values.yaml --values ./test/values.staging.yaml
# convert the templates content to docker-compose swarm deployment files
vs_starter rendered-template/vs/templates <OUTPUT_PATH> --slug test --environment staging -o $PWD/test/docker-compose.shared.yml -o $PWD/test/docker-compose.instance.yml

For more information, see Initialization Swarm.

Starting/Stopping the View Server

# stop the stack if it was previously running
docker stack rm <stack-name>
# deploy stack anew
collection_env="test-staging" && docker stack deploy -c "$collection_env"/docker-compose.yml -c "$collection_env"/docker-compose.shared.yml -c "$collection_env"/docker-compose.instance.yml <stack-name>

For more information, see stack redeployment.

Starting/Stopping individual services

Starting or stopping services is done by setting the number of running container replicas to 0. For more information, see Service Management or Scaling.
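As a sketch, stopping and later restarting a single service by adjusting its replica count could look as follows (<stack_service> is a placeholder for the actual service name in your deployment):

```shell
# stop a service by scaling it down to zero replicas
docker service scale <stack_service>=0
# start it again with a single replica
docker service scale <stack_service>=1
```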

Restarting a running service (or updating its environment variables) can be done via the following commands:

# restart service
docker service update --force <stack_service>
# update environment variable of service, changing the service logging to DEBUG mode
docker service update --env-add DEBUG=true <stack_service>
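To revert such a change, the environment variable can be removed again; a sketch using the same placeholder service name:

```shell
# remove the environment variable again, reverting the service to its default logging
docker service update --env-rm DEBUG <stack_service>
```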

Deleting VS volumes and images

To delete an individual Docker volume or image belonging to an obsolete collection or stack, first make sure the respective collection stack has been stopped, then run:

# delete docker volume
docker volume rm <stack_volume>
# delete the docker image
docker image rm -f <image:tag>
# command to delete all unused docker images and volumes from a node
docker system prune -f --all --volumes

Harvesting new collection

To initiate harvesting using the harvester configuration named <harvester-name>, two different ways are possible:

# inside harvester container
harvester harvest --config-file /config.yaml <harvester-name>
# OR on the main node via redis queue
docker exec -it $(docker ps -qf "name=<stack_redis>") redis-cli lpush harvester_queue '{"name":"<harvester-name>"}'

For more information, see Harvesting.

Register a STAC item

To manually register a single STAC item file, two different ways are possible:

# inside registrar container
registrar --config-file /config.yaml register items "$(cat json-file-containing-stac-item)"
# OR on the main node via redis queue
docker exec -it $(docker ps -qf "name=<stack_redis>") redis-cli lpush register_queue "$(cat json-file-containing-stac-item)"

Preprocess a STAC item

To manually preprocess a single STAC item, two different ways are possible:

# inside preprocessor container
preprocessor preprocess --config-file /config.yaml "$(cat json-file-containing-stac-item)"
# OR on the main node via redis queue
docker exec -it $(docker ps -qf "name=<stack_redis>") redis-cli lpush preprocess_queue "$(cat json-file-containing-stac-item)"

Stop ongoing ingestion by deleting all redis queues

# delete all used ingestion queues
docker exec -i $(docker ps -qf "name=^<stack>_redis") redis-cli del preprocess_queue register_queue harvester_queue
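Before deleting the queues, it can be useful to check how many items are still pending in each of them; a sketch using the same redis container:

```shell
# show the number of pending items in each ingestion queue
for queue in preprocess_queue register_queue harvester_queue
do
    docker exec -i $(docker ps -qf "name=^<stack>_redis") redis-cli llen "$queue"
done
```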

Verify ingestion into the database

All commands in this subsection are run inside the renderer or registrar container.

Checking product/collection status

python3 $INSTANCE_DIR/manage.py id check <id>

<id> can be either a collection name or a product identifier; the possible outputs are:

{"event": "The identifier 'urn:eop:SUPERVIEW-2:MULTISPECTRAL_2m:SV-2_20210913_L1B0000147080_1012101725100008-MUX_5532' is already in use by a 'Product'."}
{"event": "The identifier 'VHR_IMAGE_2021' is already in use by a 'Collection'."}
{"event": "The identifier 'test' is currently not in use."}

List products

List all existing products, optionally constrained to those that are part of <collection-name>.

python3 $INSTANCE_DIR/manage.py id list -c <collection-name> --suppress-type

Get total sum of products

Check number of products currently ingested:

python3 $INSTANCE_DIR/manage.py shell -c 'from eoxserver.resources.coverages.models import Product;print(Product.objects.count());'

Count products being part of a collection or product type

Check the number of products currently ingested that belong to the ProductType with name <product-type-name>:

python3 $INSTANCE_DIR/manage.py shell -c 'from eoxserver.resources.coverages.models import ProductType;pt=ProductType.objects.get(name="<product-type-name>");print(pt.products.count())'

Count products belonging to a collection:

python3 $INSTANCE_DIR/manage.py shell -c 'from eoxserver.resources.coverages.models import Collection;c=Collection.objects.get(identifier="Emergency");print(c.products.count())'

All collections are openly available via the OpenSearch interface on the /opensearch endpoint. An example call to get the number of products in <collection-name> as an ATOM response would be: <service-url>/opensearch/collections/<collection-name>/atom/?count=0
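The product count is reported in the os:totalResults element of the ATOM response. As an illustration, assuming the response has already been fetched (e.g. with curl) into a shell variable, the count can be extracted like this:

```shell
# a minimal sample of the ATOM response returned by the /opensearch endpoint
response='<feed xmlns:os="http://a9.com/-/spec/opensearch/1.1/"><os:totalResults>1234</os:totalResults></feed>'
# extract the product count from the os:totalResults element
count=$(printf '%s' "$response" | sed -n 's/.*<os:totalResults>\([0-9]*\)<\/os:totalResults>.*/\1/p')
echo "$count"
```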

Vacuum database tables

After the ingestion campaign for a collection finishes, it is suggested to manually vacuum the database by running the following command inside the database container:

vacuumdb -f -d $DB_NAME -U $DB_USER -h $DB_HOST --analyze

Export all metadata, data and browse paths

To export a list of all data and metadata files referenced by all registered products, run the following Python commands in the Django instance shell (registrar or renderer container).

python3 $INSTANCE_DIR/manage.py shell
from eoxserver.resources.coverages.models import (Product, Coverage, MetaDataItem, ArrayDataItem, Browse)
# print all metadata paths and all files to a file
with open("/registered_paths.txt", "w") as ff:
    prods = Product.objects.all()
    for prod in prods:
        covs = Coverage.objects.filter(parent_product_id=prod.id)
        metadata_items = MetaDataItem.objects.filter(eo_object_id=prod.id)
        browses = Browse.objects.filter(product_id=prod.id)
        for cov in covs:
            data_items = ArrayDataItem.objects.filter(coverage_id=cov.id)
            for it in data_items:
                print(it.location, file=ff)
        for md in metadata_items:
            print(md.location, file=ff)
        for b in browses:
            print(b.location, file=ff)

The file will be saved to the path /registered_paths.txt inside the container.
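To work with the exported list on the host, the file can be copied out of the running container; a sketch assuming the registrar service container:

```shell
# copy the exported paths file from the registrar container to the current directory
docker cp $(docker ps -qf "name=<stack_registrar>"):/registered_paths.txt ./registered_paths.txt
```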

Deregister a product

To completely deregister a single product with a given identifier, you can do:

python3 $INSTANCE_DIR/manage.py product deregister <identifier>

To deregister all products from a certain collection a combination of two commands can be used:

for item in $(python3 $INSTANCE_DIR/manage.py id list -c <collection-name> -s); do python3 $INSTANCE_DIR/manage.py product deregister $item; done

To completely remove a collection:

python3 $INSTANCE_DIR/manage.py collection delete <collection-name>

Adding a new configuration to a running stack

If the operator wants to add a new collection, product type, coverage type, or browse type (an instance of any of the database models), a few manual steps are necessary after generating the new configurations.

At the moment there is no auto-update (create/delete/update) of the database structure on change/update of values.

It is necessary to invoke the relevant parts of the init_db.sh script containing the python3 manage.py CLI commands to manually add the models, in either the renderer or registrar container.

An example of adding a single collection SAR_IMP_1P with a default RGBA rendering and a product type of the same name:

python3 manage.py producttype create "SAR_IMP_1P" \
      --coverage-type "RGBA"
python3 manage.py browsetype create "SAR_IMP_1P"  \
      --red "red" \
      --green "green" \
      --blue "blue" \
      --red-range 0 255 \
      --green-range 0 255 \
      --blue-range 0 255 \
      --red-nodata 0 \
      --green-nodata 0 \
      --blue-nodata 0 \
      --alpha "alpha" \
      --alpha-range 0 255
python3 manage.py collectiontype create "SAR_IMP_1P_type" \
      --coverage-type "RGBA" \
      --product-type "SAR_IMP_1P"
python3 manage.py collection create "SAR_IMP_1P" \
      --type "SAR_IMP_1P_type"

Troubleshooting – accessing logs

See the more detailed instructions at Inspecting logs in development.

Restart services/memory clean

To clean up memory used by external services as a one-off operation, run the following:

for stack in 'bs1' 'bs2'
do
    for service in 'renderer' 'cache'
    do
          docker service update --force "$stack"_"$service"
    done
done

Cache seeding operations

If the seeder and registrar services are configured to be linked via input and output queues, a successful registration will automatically trigger seeding.

To manually trigger the seeding of a certain product, run the following command in the seeder container:

# seed a single product for a set of pre-configured layers
python3 seeder/seeder.py --mode standard --product-to-seed <product-id> --config-file /seeder-config.yaml -v 4

Optionally, add the --leave_existing parameter to keep existing tiles and only fill in the missing ones.

For seeding a set of configured layers for a whole collection, export the list of products and seed them one by one. Bulk-seeding of a whole layer is not yet available in the seeder.

# export all products from all collections for a single stack to a text file on the node
docker exec -it $(docker ps -qf "name=^<stack>_registrar") bash -c 'python3 $INSTANCE_DIR/manage.py id list --suppress-type' > export-product-<stack>.txt
# stream a list of products into the seed command in a single seeder container
while read product_id;do docker exec -i $(docker ps -qf "name=^<stack>_seeder") python3 seeder/seeder.py --config-file /seeder-config.yaml --product-to-seed "$product_id";done<export-product-<stack>.txt