Container Orchestration

Introduction

Several container orchestration tools are suitable for Infinite Scale, e.g. Docker Compose, Docker Swarm and Kubernetes.

Container orchestration tools are necessary to meet the requirements described in Availability and Scalability.

The pages
Docker Swarm vs Kubernetes: how to choose a container orchestration tool and
Kubernetes Vs. Docker Swarm: A Comparison of Containerization Platforms
give a brief overview of the purpose, advantages and disadvantages of both tools.

In this section, we provide guidelines for using Infinite Scale with Docker Compose and Kubernetes. For Kubernetes, Helm charts are already available that can be used and adjusted.

Docker Compose

Instead of using docker run and handing over command-line parameters for a single container, you can define a docker-compose.yml file that declares the environment variables and settings for each container in one place. This is the natural next step for multi-container environments.
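
As a minimal sketch, a compose file for a single Infinite Scale container could look like the following. The environment variables shown are examples only; see the Infinite Scale configuration documentation for the settings your deployment actually needs:

services:
  ocis:
    image: owncloud/ocis              # Infinite Scale container image
    ports:
      - "9200:9200"                   # expose the default web/API port to the host
    environment:
      OCIS_URL: https://localhost:9200
      OCIS_INSECURE: "true"           # example only: accept self-signed certificates for local testing
    volumes:
      - ocis-data:/var/lib/ocis       # persist data outside the container

volumes:
  ocis-data: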

Check if the package docker-compose is installed (in addition to docker):

which docker-compose

If Docker Compose is installed, the command prints the path to the binary. If not, you may get no output at all or a message that it could not be found. In that case, you need to install Docker Compose first. On most Linux distributions, you can simply use the package manager to do so. Note that in many cases this installs a 1.x version (Python-based), while a 2.x version (Go-based) is preferred. To check which version the package manager would install, type:

apt-cache policy docker-compose

If the output shows a 2.x version, you can use the package manager to install docker-compose, for example with the following command on Debian-based distributions:

sudo apt install docker-compose

If the output shows a 1.x version such as 1.25.0-1, follow the Install Docker Compose guide to install a 2.x version instead.
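
In either case, you can verify the installed version afterwards:

docker-compose version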

When done, create a project directory such as ocis-compose in your home directory as a common location for your Infinite Scale compose files.
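
For example:

mkdir ~/ocis-compose
cd ~/ocis-compose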

Kubernetes and Helm Charts

Kubernetes (K8s) is an open-source platform for managing containers that run applications. It helps avoid downtime, makes optimal use of resources and offers a framework for running distributed systems. Infinite Scale was designed with Kubernetes in mind. Therefore, we provide Helm charts for a convenient deployment of Infinite Scale on a Kubernetes cluster.

For more information on Kubernetes (K8s) features, check out Why you need Kubernetes and what it can do. If that is too abstract, there is an ELI5 writeup.

We also recommend Marcel Wunderlich's four-part article series on Kubernetes, which clarifies its declarative nature and deep-dives into ingress networking, storage and monitoring.

Also see the Deployment Evolution in our Availability and Scalability guide.

Infinite Scale follows the Twelve-Factor App principles regarding configuration, which means almost every aspect of Infinite Scale can be modified via environment variables. This comes in handy when you take a look at a Helm chart's list of values.
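
As an illustration, a chart value typically gets rendered into an environment variable on the container, such as OCIS_LOG_LEVEL. The values key below is hypothetical; check the chart's values.yaml for the actual structure:

# hypothetical excerpt from a chart's values.yaml
logging:
  level: debug    # would end up in the container environment, e.g. as OCIS_LOG_LEVEL=debug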

Requirements

For efficient use of Infinite Scale with Kubernetes, you need either a full Kubernetes installation or minikube. Since a full-fledged Kubernetes setup is far beyond the scope of this documentation, we assume you either already have a Kubernetes cluster running or want to experiment with minikube.

minikube

minikube lets you run a single-node Kubernetes cluster locally. It is the most approachable way to test a deployment. It requires no cloud platform or extra configuration, as everything runs on your local machine.

kubectl

kubectl is the command-line tool for Kubernetes. It allows users to run commands against a K8s cluster and supports multiple contexts for as many clusters as you have access to. minikube also provides kubectl, wrapped as minikube kubectl.
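
For example, to list the contexts kubectl knows about and switch to the one created by minikube:

kubectl config get-contexts
kubectl config use-context minikube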

Helm Charts

Helm is the equivalent of a package manager for Kubernetes. It can be described as a layer of abstraction on top of raw Kubernetes resource declarations such as pods and deployments.
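
To see what this abstraction expands to, helm template renders a chart into the raw Kubernetes manifests a release would create, without installing anything (my-release and the chart path are placeholders):

helm template my-release ./path/to/chart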

Deploying minikube

Follow the official instructions on how to set up minikube for your specific OS.

Verify your installation is correct:

minikube status
minikube
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

Next, start the cluster:

minikube start
minikube v1.25.2 on Ubuntu 20.04
Using the docker driver based on existing profile
Starting control plane node minikube in cluster minikube
Pulling base image ...
Restarting existing docker container for "minikube" ...
Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
▪ kubelet.housekeeping-interval=5m
Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: storage-provisioner, default-storageclass
kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Using Helm Charts with Infinite Scale

Installing Helm

To install Helm, refer to the official guide Installing Helm.
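
Afterwards, verify that the Helm CLI is available:

helm version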

The easiest way to run the entire Infinite Scale package is by using the available charts. It is not the purpose of this guide to explain the inner workings of Kubernetes or its resources; Helm builds an abstraction on top of them, letting you interact with a refined interface that roughly translates to "helm install" and "helm uninstall".
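
In their generic form (the release name and chart are placeholders):

helm install <release-name> <chart>       # deploy a chart as a named release
helm uninstall <release-name>             # remove all resources that release created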

To host charts, one can create a charts repository, but this is also beyond the scope of this documentation.

Requirements

  • minikube up and running.

  • kubectl installed. minikube wraps kubectl as minikube kubectl. By default you should be able to access the minikube cluster.

  • The Helm CLI installed.

  • git installed.

Setup

  1. Clone the charts:

    git clone https://github.com/owncloud/ocis-charts.git /var/tmp/ocis-charts
  2. Change directory into the charts' root:

    cd /var/tmp/ocis-charts/charts/ocis
  3. Install the package:

    helm install ocis .
  4. Verify the application is running in the cluster with:

    minikube kubectl -- get pods
    NAME                                     READY   STATUS    RESTARTS        AGE
    glauth-55d7b5878c-25qnt                  1/1     Running   1 (2d23h ago)   2d23h
    graph-859855c94d-l5xgt                   1/1     Running   2 (9m21s ago)   2d23h
    idp-7759f4c6b9-l25t4                     1/1     Running   1 (2d23h ago)   2d23h
    nats-6857bc5f8f-5s597                    1/1     Running   1 (2d23h ago)   2d23h
    ocs-8454747c4b-wxwms                     1/1     Running   2 (9m21s ago)   2d23h
    proxy-79df886fb4-njr9p                   1/1     Running   2 (9m23s ago)   2d23h
    settings-79597cb89d-ttvmm                1/1     Running   2 (9m23s ago)   2d23h
    storage-authbasic-6c4ccd4dc6-rwlhx       1/1     Running   1 (2d23h ago)   2d23h
    storage-authbearer-6f79cd5cc6-ldz7h      1/1     Running   1 (2d23h ago)   2d23h
    storage-authmachine-7cf95d8d89-qsxnj     1/1     Running   1 (2d23h ago)   2d23h
    storage-frontend-64d44f8f66-vnndm        1/1     Running   1 (2d23h ago)   2d23h
    storage-gateway-668b47f76f-2tvj2         1/1     Running   1 (2d23h ago)   2d23h
    storage-groupprovider-7475b4dddf-wj2g7   1/1     Running   1 (2d23h ago)   2d23h
    storage-metadata-74f6b5f489-rbsp4        1/1     Running   2 (9m19s ago)   2d23h
    storage-publiclink-f497dd5dd-flrw5       1/1     Running   1 (2d23h ago)   2d23h
    storage-shares-69d8b67d6b-rhq98          1/1     Running   1 (2d23h ago)   2d23h
    storage-sharing-5567d9b7f-978bf          1/1     Running   1 (2d23h ago)   2d23h
    storage-userprovider-59d87db58f-h7lpd    1/1     Running   1 (2d23h ago)   2d23h
    storage-users-7989b5df8-78hwc            1/1     Running   1 (2d23h ago)   2d23h
    store-6b878df78c-7cdlb                   1/1     Running   1 (2d23h ago)   2d23h
    thumbnails-7d5799b64b-wj9dx              1/1     Running   1 (2d23h ago)   2d23h
    web-967b76f6c-rgq9h                      1/1     Running   1 (2d23h ago)   2d23h
    webdav-9c494b5c-6r8r6                    1/1     Running   2 (9m21s ago)   2d23h
  5. Expose the proxy as a service to the host:

    minikube service proxy-service --url
     Starting tunnel for service proxy-service.
    |-----------|---------------|-------------|------------------------|
    | NAMESPACE |     NAME      | TARGET PORT |          URL           |
    |-----------|---------------|-------------|------------------------|
    | default   | proxy-service |             | http://127.0.0.1:63633 |
    |-----------|---------------|-------------|------------------------|
  6. Attempt a PROPFIND WebDAV request to the storage. Note that this example uses one of the demo users, as described in Create Demo Users and Groups:

    curl -v -k -u einstein:relativity -H "depth: 0" -X \
        PROPFIND https://127.0.0.1:63633/remote.php/dav/files/ | \
        xmllint --format -

    If everything is set up correctly, you should get a response like the following:

    <?xml version="1.0" encoding="utf-8"?>
    <d:multistatus xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns" xmlns:oc="http://owncloud.org/ns">
      <d:response>
        <d:href>/remote.php/dav/files/einstein/</d:href>
        <d:propstat>
          <d:prop>
            <oc:id>MTI4NGQyMzgtYWE5Mi00MmNlLWJkYzQtMGIwMDAwMDA5MTU3OjZlMWIyMjdmLWZmYTQtNDU4Ny1iNjQ5LWE1YjBlYzFkMTNmYw==</oc:id>
            <oc:fileid>MTI4NGQyMzgtYWE5Mi00MmNlLWJkYzQtMGIwMDAwMDA5MTU3OjZlMWIyMjdmLWZmYTQtNDU4Ny1iNjQ5LWE1YjBlYzFkMTNmYw==</oc:fileid>
            <d:getetag>"92cc7f069c8496ee2ce33ad4f29de763"</d:getetag>
            <oc:permissions>WCKDNVR</oc:permissions>
            <d:resourcetype>
              <d:collection/>
            </d:resourcetype>
            <d:getcontenttype>httpd/unix-directory</d:getcontenttype>
            <oc:size>4096</oc:size>
            <d:getlastmodified>Tue, 14 Sep 2021 12:45:29 +0000</d:getlastmodified>
            <oc:favorite>0</oc:favorite>
          </d:prop>
          <d:status>HTTP/1.1 200 OK</d:status>
        </d:propstat>
      </d:response>
    </d:multistatus>

    The above setup works because the proxy is configured to use basic authentication. To access the web UI, you need an external identity provider.

The command minikube dashboard opens the monitoring dashboard for your cluster in a browser, and minikube stop shuts down the minikube node.
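
Both commands at a glance:

minikube dashboard    # open the cluster dashboard in a browser
minikube stop         # shut down the minikube node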