General Information

Introduction

This document covers general aspects of Infinite Scale, such as start modes, services and the important minimum configuration, to establish a common understanding.

We highly recommend reading this document before you start setting up your system. Many obstacles can be avoided by knowing the basic concepts. Although it is tempting to just give things a try, which is perfectly fine, you may quickly find that you have to start again from scratch before the setup meets your requirements.

The example commands shown need to be adjusted depending on whether you are using the Binary Setup or a Container Setup.

Global options on startup can always be set either as command line options or as environment variables. Run ocis help and see Starting Infinite Scale With Environment Variables for details.

Embedded Supervisor (Runtime)

Infinite Scale has an embedded supervisor for managing the runtime and reducing the memory footprint. In addition, this supervisor ensures that a service is restarted automatically if it fails. When using an external supervisor like systemd, Kubernetes or others, the embedded supervisor is not needed; the services are managed by the external supervisor.

Deciding the Startup Mode

The following two mode types do not predefine a particular installation method like the manual or container setup. When using Kubernetes, the embedded supervisor is not necessary; the supervisor of the underlying system is used.

Starting Infinite Scale using the embedded supervisor

This mode can be used when scaling is not the primary focus, for example for smaller setups running on a single server.

Starting Infinite Scale in unsupervised mode

This mode is used when availability, scaling and the adjustment to dynamically changing requirements have a high priority. In this case, an external supervisor like Kubernetes is used to deploy and run Infinite Scale with its services.

Managing Services

Services are built as microservices which can be started, stopped or instantiated individually. The Services Rules documentation explains some of the background; read it carefully to avoid unwanted behavior. For details of each service, see the Services documentation.

List Available Services

Just type ocis to get a list of commands and available services.

When typing ocis <service-name> --help, you will get detailed help regarding the specified service.

Manage Instances of Services

Infinite Scale Supervised Services

In supervised mode, all services are started with one command, as shown in the example below for the binary setup. Note that the services started with the runtime share the same PID.

Start the Infinite Scale Runtime
ocis server
List running services
ocis list
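
To verify that the supervised services share one process, you can check the runtime's PID with standard Linux tools. This is only a sketch, assuming the binary is named ocis as in the examples above:

pidof ocis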
Stopping the Infinite Scale Runtime

In supervised mode, you have to stop the ocis server which also stops all services. See Stopping Infinite Scale for more details.

Unsupervised Services

At any time, you can create unsupervised instances of a service with ocis [service-name] server, for example ocis proxy server. These services are independent of services running in supervised mode and have their own PID. The instances are managed with classic OS methods or, for example, via Kubernetes.

Note that when you scale, you need to configure the service instances and provide access to them, for example via a load balancer.
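
For example, a standalone proxy instance can be started unsupervised with its own environment. This is only a sketch; the address shown is the proxy default used in the examples of this document and must be adjusted to your setup:

PROXY_HTTP_ADDR=0.0.0.0:9200 \
ocis proxy server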

Configuring Services

To configure services, see the Services section in the Deployment documentation.

Configuration Rules

Administrators must be aware of the configuration sources, their location and the order in which they are applied (the configuration file arithmetics). Mismanaging them can be a source of confusion, leading to undesired results in the final configuration created and applied.

Infinite Scale uses a hierarchical structure for its configuration, where each element overwrites the one before it. The sources are:

    1. Environment variables

    2. Service configuration files

    3. The Infinite Scale configuration file (ocis.yaml)

Configuration File Naming

The configuration files for Infinite Scale are YAML-based (a human-friendly data serialization language).

The file name for defining a config follows this naming scheme:

ocis.yaml
 or
[service name].yaml

When configuring a service in ocis.yaml, the topic (the top-level key) for the service configuration must be the service name.

You can list the possible service names by typing:

ocis list
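
For example, the log settings of the proxy service (see Globally Shared Logging Values below for the log keys) can be defined either in the global file under the proxy topic or in a dedicated service file. This is only a sketch:

ocis.yaml
proxy:
  log:
    level: error

or

proxy.yaml
log:
  level: error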

Default Paths

As you will read below, the config directory and the base directory for storing metadata must be located on POSIX filesystems. For ease of backup and restore, consider keeping both directories on the same filesystem.

Note that the term blob is used for the file data a user uploads, while metadata refers to all data that describes the blob.

Environment variable      Description

OCIS_CONFIG_DIR           Path to config files.

OCIS_BASE_DATA_PATH       Path to system relevant data.

STORAGE_USERS_OCIS_ROOT   Path to blobs and metadata if POSIX is used.
                          Derives from OCIS_BASE_DATA_PATH if not set otherwise.
                          Used if STORAGE_USERS_DRIVER is set to ocis.

STORAGE_USERS_S3NG_ROOT   Path to metadata if S3 is used.
                          Derives from OCIS_BASE_DATA_PATH if not set otherwise.
                          Used if STORAGE_USERS_DRIVER is set to s3ng.
                          See Using S3 for Blobs for the S3 configuration.

Configuration Directory

The default locations for config files (no need to set them explicitly), which must reside on POSIX storage, are:

  • For container images (inside the container)
    /etc/ocis/

  • For binary releases
    $HOME/.ocis/config/

    You can deviate from the default location and define a custom configuration file location on startup using the environment variable OCIS_CONFIG_DIR.
    When using a system user for the runtime, which has no login and therefore no home directory, like in the scenario Setting up systemd for Infinite Scale, you must specify a configuration file location.
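
    For example, to start the runtime with a custom config location, you could use the following sketch; the directory is an example and must exist and be readable by the user running ocis:

    OCIS_CONFIG_DIR=/etc/ocis ocis server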

Base Data Directory

Because Infinite Scale does not use a database for storing information like users, groups, spaces, internal data, etc., it saves all this data to a permanent file location. Depending on the system setup, the base directory contains not only the metadata but also blobs. See Filesystems and Shared Storage for more details.

  • When only using a supported POSIX filesystem, blobs and metadata are stored on POSIX.

  • When using S3 and POSIX, blobs are stored on S3, while metadata is stored on POSIX. Also see Using S3 for Blobs.

Path for System Data

The base directory is the root for many services, which automatically add a subdirectory to that root for storing their data. Some services can define that path individually if necessary. Defining the paths independently can be required when using a Container Orchestration setup and is recommended when using the search service.

Default Location

The base path has default locations (see below) if not manually defined by the environment variable OCIS_BASE_DATA_PATH.

Services That Can Deviate from the Base Data Path

The environment variable OCIS_BASE_DATA_PATH, if manually defined, sets the base path in a generic way. Individual services can overwrite the base path for their own data if necessary.

The default locations for the base directory are:

  • For container images (inside the container)
    /var/lib/ocis

  • For binary releases
    $HOME/.ocis/

    You can deviate from the default location and define a custom base directory location on startup using the environment variable OCIS_BASE_DATA_PATH. When setting the base directory manually, it is used automatically by the services described above, unless they are configured otherwise.
    When using a system user for the runtime, which has no login and therefore no home directory, like in the generic binary setup scenario Setting up systemd for Infinite Scale or in the deployment example Small-Scale with systemd, you must specify a base directory location.
    The location must be used by Infinite Scale exclusively. Writing into this location by any means other than Infinite Scale is discouraged to avoid unexpected behavior.

    Consider using a separate partition or an external filesystem like NFS for the data path. If you only have one partition for your OS, Infinite Scale and your data, filling up the filesystem with user data can make your system unresponsive. This can easily happen under the following conditions:

    • The total storage space consumed by all spaces exceeds the available disk space, even if individual quotas are set for the spaces.

    • When multiple users upload big files concurrently, those partly uploaded files do not count against the target space quota. They are temporarily located in the upload folder in the data path and are moved to the target space when finished, provided the target space quota is not exceeded.

    • Expired uploads that have not been cleaned up (see Manage Unfinished Uploads) can consume storage unnecessarily and can be a hidden cause of exceeding the available storage space.

    • The index data stored for the search service is located on the same root path and can fill up the filesystem.
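
    To reduce this risk, you can point the base data directory to a dedicated mount on startup. This is only a sketch; the mount point is an example:

    OCIS_BASE_DATA_PATH=/mnt/ocis-data ocis server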

Using S3 for Blobs

When using S3 for storing user data, metadata must reside on POSIX using the base directory as path. For more details see the section Base Data Directory above.

Configuring the Storage-Users service is necessary to define the usage of POSIX and S3 storage for Infinite Scale.

The following environment variables need to be configured for use with S3 (see the Storage-Users service for details):

# activate s3ng storage driver
STORAGE_USERS_DRIVER: s3ng
# Path to metadata stored on POSIX
STORAGE_USERS_S3NG_ROOT:
# keep system data on ocis storage
STORAGE_SYSTEM_DRIVER: ocis

# s3ng specific settings
STORAGE_USERS_S3NG_ENDPOINT:
STORAGE_USERS_S3NG_REGION:
STORAGE_USERS_S3NG_ACCESS_KEY:
STORAGE_USERS_S3NG_SECRET_KEY:
STORAGE_USERS_S3NG_BUCKET:
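
A filled-in sketch with purely hypothetical values; the endpoint, region, credentials and bucket must be replaced with the values of your S3 system. STORAGE_USERS_S3NG_ROOT is omitted here because it derives from OCIS_BASE_DATA_PATH if not set otherwise:

STORAGE_USERS_DRIVER: s3ng
STORAGE_SYSTEM_DRIVER: ocis
STORAGE_USERS_S3NG_ENDPOINT: https://s3.example.net
STORAGE_USERS_S3NG_REGION: default
STORAGE_USERS_S3NG_ACCESS_KEY: ocis-access-key
STORAGE_USERS_S3NG_SECRET_KEY: ocis-secret-key
STORAGE_USERS_S3NG_BUCKET: ocis-bucket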

Also see the Docker Compose Examples for more details.

Initialize Infinite Scale

Infinite Scale can be run by manually defining the environment, as you do when using Container Orchestration. When using the Binary Setup or the Container Setup, you can prepare Infinite Scale for further configuration and recurring starts. After reading The ocis init Command for important details, start the initialization. To do so, run:

ocis init

You can add command line parameters. To see which ones are available, type:

ocis init --help

Command line parameters are beneficial if you, for example, want to hand over all necessary parameters without going through the questionnaire, or if you want to define the admin password yourself instead of getting a random one assigned.

The command line option --force-overwrite is only intended for developer usage. If you set this option, your config will be overwritten. Your data, if any is present, will persist but will no longer be accessible. This is, among other things, because the issuer (the iss claim in OpenID Connect) will be overwritten.

To reinitialize Infinite Scale, you have to delete your config and your data and start from scratch.
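
For the binary setup with default paths (see Default Paths), a reset could look like the following sketch. Stop Infinite Scale first, and adjust the paths if you changed OCIS_CONFIG_DIR or OCIS_BASE_DATA_PATH:

# removes the configuration and ALL data of this instance
rm -rf $HOME/.ocis/
ocis init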

Start Infinite Scale With All Predefined Services

When you type ocis server, the embedded supervisor is used and automatically starts all available predefined services. By default, the supervisor listens on port 9250 for commands regarding the lifecycle of the supervised services.

To list the started predefined services, type:

ocis list

This will print an output like:

+--------------------+
|      SERVICE       |
+--------------------+
| app-provider       |
| app-registry       |
| auth-basic         |
| auth-bearer        |
| auth-machine       |
| frontend           |
| gateway            |
| graph              |
| groups             |
| idm                |
| idp                |
| nats               |
| notifications      |
| ocdav              |
| ocs                |
| proxy              |
| search             |
| settings           |
| sharing            |
| storage-publiclink |
| storage-shares     |
| storage-system     |
| storage-users      |
| store              |
| thumbnails         |
| users              |
| web                |
| webdav             |
+--------------------+

Starting Infinite Scale With Environment Variables

You can use environment variables to define or overwrite config parameters which are used when starting Infinite Scale, for example:

PROXY_HTTP_ADDR=0.0.0.0:5555 ocis server

or when using multiple environment variables like:

PROXY_HTTP_ADDR=0.0.0.0:5555 \
PROXY_DEBUG_ADDR=0.0.0.0:6666 \
ocis server

Globally Shared Logging Values

When running in supervised mode (ocis server), it is beneficial to have common values for logging so that the log output is formatted consistently or everything is piped to the same file, without duplicating config keys and values all over the place. This is possible using the global log config key, as in the following example:

ocis.yaml
log:
  level: error
  color: true
  pretty: true
  file: /var/tmp/ocis_output.log
If a service overwrites the shared logging config it receives from the main ocis.yaml file, you must specify all log values for that service.
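
For example, if only the proxy service should log at debug level into its own file, the service configuration file must repeat all log keys. This is only a sketch using the [service name].yaml naming convention described above:

proxy.yaml
log:
  level: debug
  color: false
  pretty: false
  file: /var/tmp/ocis_proxy.log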

Log Config Keys

These are the necessary log keys and the available values:

log:
  level: [ error | warning | info | debug ]
  color: [ true | false ]
  pretty: [ true | false ]
  file: [ path/to/log/file ] # MUST not be used with pretty = true

Configurations to Access the Web UI

You can easily access Infinite Scale via ownCloud Web with minimal configuration. Without going into too much detail, you need to provide the following two environment variables. See also the sections about Handling Certificates and Demo Users and Groups.

OCIS_URL

Expects a URL including protocol, host and optionally a port, to simplify configuring all the different services. Other service environment variables that also use a URL take precedence if set, but fall back to this URL if not set.

If you need to access Infinite Scale running in a container, on a VM or on a remote machine via a host name other than localhost, you must configure that host name with OCIS_URL. The same applies if you are using an IP address (e.g. 192.168.178.25) instead of a host name.
By default, Infinite Scale enforces https for web and client access. If necessary, this can be changed in particular setups to http, which is not recommended for production. For details see TLS for the HTTP Frontend and Proxy Service Configuration.
If you want to reuse an already configured minimized setup for any other address than https://localhost, you must use a reverse proxy. When Infinite Scale is accessed, it forwards requests to the embedded IDP service which requires a secure connection. See the Bare Metal Deployment with systemd for more details on using a reverse proxy setup.
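
For a freshly initialized instance that should be reachable under a dedicated host name, the startup could look like the following sketch. The host name is an example, and OCIS_INSECURE=true assumes self-signed certificates, see Handling Certificates below:

OCIS_URL=https://ocis.example.net:9200 \
PROXY_HTTP_ADDR=0.0.0.0:9200 \
OCIS_INSECURE=true \
ocis server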
PROXY_HTTP_ADDR

When using 0.0.0.0:9200, the proxy listens on all available interfaces. If you want or need to change that based on your requirements, you can use a different address, e.g. to bind the proxy to a specific interface.

The bind address for PROXY_HTTP_ADDR must be on the same interface where the configured URL from the OCIS_URL environment variable is reachable.
Common reasons for binding to a particular IP address

  • Multiple network interfaces configured for specific tasks like web, storage, administration.

  • Binding SSL certificates to IP addresses.

  • …

Examples

  • PROXY_HTTP_ADDR=127.0.0.1:9200
    This causes Infinite Scale to bind only to the local loopback interface.

  • PROXY_HTTP_ADDR=0.0.0.0:9200
    This tells Infinite Scale to bind to all available network interfaces.
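
  • PROXY_HTTP_ADDR=192.168.178.25:9200
    A sketch for binding to one dedicated interface; the IP address is an example taken from above. Note that OCIS_URL must then point to an address reachable via this interface.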

Also see Using the Embedded IDP Service for configuration notes.

Handling Certificates

Certificates are necessary to secure browser access. Infinite Scale can run either with embedded self-signed certificates, mainly used for testing purposes, or with signed certificates provided by the admin. To tell Infinite Scale which kind of certificates you are using, set the environment variable OCIS_INSECURE.

Embedded Self-Signed Certificates

In order to run Infinite Scale with automatically generated and self-signed certificates, set OCIS_INSECURE=true.

OCIS_INSECURE=true \
PROXY_HTTP_ADDR=0.0.0.0:9200 \
OCIS_URL=https://localhost:9200 \
ocis server

Provided Signed Certificates

Self-Signed Certificates

If your certificates are self-signed, set OCIS_INSECURE=true like in the example of embedded self-signed certificates above.

Certificates Signed by a Trusted CA

If you have your own certificates already in place, make Infinite Scale use them by adding the following environment variables to the command. Replace the certificate path and file names according to your needs:

OCIS_INSECURE=false \
PROXY_HTTP_ADDR=0.0.0.0:9200 \
OCIS_URL=https://localhost:9200 \
PROXY_TRANSPORT_TLS_KEY=./certs/your-host.key \
PROXY_TRANSPORT_TLS_CERT=./certs/your-host.crt \
ocis server

Default Users and Groups

Default users and groups are only created when initializing Infinite Scale, which is the first task of a setup. The same is true for demo users and groups, which additionally require an environment variable to be set when initializing Infinite Scale.

If you have not enabled demo user creation during initialization, the only option for the time being is to empty the Base Data Directory and remove the ocis.yaml file, which resets the system. Then you can start from scratch and enable demo user creation.

Admin User

An admin user will be created when running the ocis init command with the following credentials:

Admin user and group created on first ocis start

Username   Password                             Email               Role    Group
admin      Printed by the output of ocis init   admin@example.org   admin   users

Log in to the web interface with this admin user and change relevant data according to your needs or create new users. To reach the web interface, use for example https://localhost:9200.

Password Reset for the Admin User

If you have forgotten the password for the admin user or want to change it, run the following command. Note that the admin user must already exist:

ocis idm resetpassword

After running this command and entering a new password, the admin can log in again using the new password.

The password is written into the ocis.yaml file in section idm:.

Demo Users and Groups

Create Demo Users and Groups

You can let Infinite Scale create demo users and groups for testing purposes. Because these demo users and groups can be a significant security issue, you should remove them before going into production or before your system is exposed to the outside world. For details, see Securing Infinite Scale.

To let Infinite Scale create these demo users and groups for you, initialize Infinite Scale with:

IDM_CREATE_DEMO_USERS=true \
ocis init

List of available demo users and groups

Username    Password        Email                   Role          Groups
einstein    relativity      einstein@example.org    user          users, philosophy-haters, physics-lovers, sailing-lovers, violin-haters
marie       radioactivity   marie@example.org       user          users, physics-lovers, polonium-lovers, radium-lovers
moss        vista           moss@example.org        space admin   users
richard     superfluidity   richard@example.org     user          users, philosophy-haters, physics-lovers, quantum-lovers
katherine   gemini          katherine@example.org   space admin   users, sailing-lovers, physics-lovers, quantum-lovers

You can now log in with one of the created demo users by opening the OCIS_URL in your browser, e.g. https://localhost:9200.

Manage Users and Groups

If you have enabled demo users and groups and you want to manage or delete them, use the web UI, e.g. https://localhost:9200.

Default Ports

See Used Port Ranges at the Services description for details.

Logging

See Logging at the Services description for details.

Using the Embedded IDP Service

See the Special Settings section in the Proxy service for configuration details when using the Infinite Scale IDP service instead of an external IDP.

Using a Reverse Proxy

If you are using a reverse proxy like Traefik and the reverse proxy manages the certificates to secure the client access, you can use extra certificates between the proxy and Infinite Scale, although this is not mandatory. See the section Handling Certificates for more details.

If you want to reuse an already configured minimized setup for any other address than https://localhost, you must use a reverse proxy. When Infinite Scale is accessed, it forwards requests to the embedded IDP service which requires a secure connection. See the Bare Metal Deployment with systemd for more details on using a reverse proxy setup.

Maintenance Commands

There are multiple commands available to maintain the Infinite Scale instance. See the Maintenance Commands document for more details.

S3 Bucket Policy

With S3 bucket policies, you can configure and secure access to objects in your buckets, see Using bucket policies. The following S3 bucket policy is required when connecting to an S3 bucket; replace the bucket name accordingly. When using an S3 bucket, you have to set the storage driver to s3ng. Also see the Deploy the Chart section in the Container Orchestration documentation and the STORAGE_USERS_DRIVER configuration value.

# The S3NG driver needs an existing S3 bucket with the following permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::bucket-name"]
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": ["arn:aws:s3:::bucket-name/*"]
        }
    ]
}
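
As a sketch, assuming the AWS CLI is installed and the policy above is saved as policy.json, the policy can be applied to the bucket as follows; replace the bucket name accordingly:

aws s3api put-bucket-policy --bucket bucket-name --policy file://policy.json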