Search Service Configuration
Introduction
The Infinite Scale Search service is responsible for metadata and content extraction. It stores the extracted data as a search index and makes it searchable. The following clarifies the extraction terms metadata and content:
- Metadata: all data that describes the file, like Name, Size, MimeType, Tags and Mtime.
- Content: all data that relates to the content of the file, like words, geo data, exif data etc.
General Considerations
- To use the search service, an event system needs to be configured for all services, like NATS, which is shipped and preconfigured.
- The search service consumes events and does not block other tasks.
- For content extraction, Apache Tika - a content analysis toolkit - can be used, but it needs to be installed separately.
Extractions are stored as an index by the search service. Consider that indexing requires adequate storage capacity and that the space requirement will grow over time. To avoid filling up the filesystem with the index and rendering Infinite Scale unusable, the index should reside on its own filesystem.
You can change the path to where search maintains its data in case the filesystem gets close to full and you need to relocate the data. Stop the service, move the data, reconfigure the path in the environment variable and restart the service.
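A minimal sketch of such a relocation, assuming a systemd-managed deployment and assuming the index location is controlled by the SEARCH_ENGINE_BLEVE_DATA_PATH environment variable (the name is derived from the engine.bleve.data_path key in the YAML example at the end of this page; verify it against your version). The paths, unit name and environment file below are examples:

```bash
# Stop the service so the index is not written to while it is moved
# (the unit name is an example for a systemd-managed deployment).
systemctl stop ocis

# Move the existing index data to the new filesystem.
mv /var/lib/ocis/search /mnt/search-index

# Point the search service to the new location via its environment file
# (variable name and file path are assumptions; adapt to your setup),
# then restart the service.
echo 'SEARCH_ENGINE_BLEVE_DATA_PATH=/mnt/search-index' >> /etc/ocis/ocis.env
systemctl start ocis
```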
When using content extraction, more resources and time are needed because the content of each file has to be analyzed. This is especially true for large files and for many files processed concurrently.
The search service runs out of the box with the shipped default basic configuration. No further configuration is needed, except when using content extraction.
Note that, as of now, the search service cannot be scaled. Consider using dedicated hardware for this service if more resources are needed.
Search Engines
By default, the search service is shipped with bleve as its primary search engine.
Extraction Engines
The search service provides the following extraction engines; their results are used as the index for searching:
- The embedded basic configuration provides metadata extraction, which is always on.
- The tika configuration additionally provides content extraction, if installed and configured.
Content Extraction
The search service is able to manage and retrieve many types of information. For this purpose, the following content extractors are included:
Basic Extractor
This extractor is the simplest one and only uses the resource information provided by Infinite Scale. It does not do any further analysis. The following fields are included in the index: Name, Size, MimeType, Tags, Mtime.
Tika Extractor
This extractor is more advanced than the Basic extractor. The main difference is that it can provide file contents for the index.
Though you can compile Tika manually on your system by following the Getting Started with Apache Tika guide (newer Tika versions may be available) or download a precompiled Tika server, you can also run Tika using a Tika container. See the Tika container usage document for a quickstart. Note that at the time of writing, containers are only available for the amd64 platform.
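For a quick local setup, the Tika server can be started as a container. A minimal sketch, assuming Docker is available and using the apache/tika image from Docker Hub (image tag and port mapping are illustrative; check the Tika container usage document for current options):

```bash
# Start a Tika server container and expose its default port 9998 on the host.
docker run -d --name tika -p 9998:9998 apache/tika:latest
```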
As soon as Tika is installed and accessible, the search service must be configured for use with Tika. The following settings must be set:
- SEARCH_EXTRACTOR_TYPE=tika
- SEARCH_EXTRACTOR_TIKA_TIKA_URL=http://YOUR-TIKA.URL
When the search service can reach Tika, it begins extracting content on demand. Note that files must be downloaded during this process, which can lead to delays with larger documents.
Content extraction and handling the extracted content can be very resource intensive. Content extraction is therefore limited to files with a certain file size. The default limit is 20MB and can be configured using the SEARCH_CONTENT_EXTRACTION_SIZE_LIMIT variable.
When using the Tika container and docker-compose, consider the following:
- See the Docker Compose Examples (ocis_wopi) for details.
- Containers for the linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified.
When using the Tika extractor, make sure to also set FRONTEND_FULL_TEXT_SEARCH_ENABLED in the frontend service to true. This will tell the web client that full-text search has been enabled.
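Putting these settings together, a hedged example of the environment variables involved when enabling content extraction with Tika. The Tika URL is an illustrative value (for example the service name in a docker-compose setup); the size limit shows the default:

```bash
# Search service: use the Tika extractor and point it to the Tika server.
SEARCH_EXTRACTOR_TYPE=tika
SEARCH_EXTRACTOR_TIKA_TIKA_URL=http://tika:9998

# Optional: adjust the maximum file size for content extraction.
# The default is 20971520 bytes (20 x 1024 x 1024 = 20 MB).
SEARCH_CONTENT_EXTRACTION_SIZE_LIMIT=20971520

# Frontend service: tell the web client that full-text search is enabled.
FRONTEND_FULL_TEXT_SEARCH_ENABLED=true
```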
Search Functionality
The search service consists of two main parts, which are file indexing and file searching.
Indexing
Every time a resource changes its state, a corresponding event is triggered. Based on the event, the search service processes the file and adds the result to its index. There are a few more steps between accepting the file and updating the index.
State Changes which Trigger Indexing
The following state changes in the life cycle of a file can trigger the creation of an index or an update:
Resource Trashed
The service checks its index to see if the file has been processed. If an index entry exists, it will be marked as deleted. In consequence, the file won’t appear in search requests anymore. The index entry stays intact and can be restored via Resource Restored.
Resource Deleted
The service checks its index to see if the file has been processed. If an index entry exists, the entry will be permanently deleted. In consequence, the file won’t appear in search requests anymore.
Resource Restored
This step is the counterpart of Resource Trashed. When a file is trashed, it isn’t removed from the index; instead, the service just marks it as deleted. This mark is removed when the file is restored, and it shows up in search results again.
Resource Moved
This comes into play whenever a file or folder is renamed or moved. The search service then updates the resource location path in the index for all affected items, or starts indexing them if no index entry has been created so far. See Notes for an example.
Folder Created
The creation of a folder always triggers indexing. The search service extracts all necessary information and stores it in the search index.
File Created
This case is similar to Folder Created, with the difference that a file can contain far more valuable information. This gets interesting but time-consuming when the file content needs to be analyzed and indexed. Content extraction is part of the search service if configured.
File Version Restored
Since Infinite Scale is capable of storing multiple versions of the same file, the search service also needs to take care of those versions. When a file version is restored, the service starts to extract all needed information, creates the index and makes the file discoverable.
Resource Tag Added
Whenever a resource gets a new tag, the service takes care of it and makes that resource discoverable by the tag.
Resource Tag Removed
This is the counterpart of Resource Tag Added. It ensures that the tag gets unassigned from the referenced resource.
File Uploaded - Synchronous
This case only triggers indexing if async post processing is disabled. In that case, the service starts to extract all needed file information, stores it in the index and makes it discoverable.
File Uploaded - Asynchronous
This is exactly the same as File Uploaded - Synchronous, with the only difference that it is used for asynchronous uploads.
Manually Trigger Re-Indexing a Space
The service includes a command-line interface to trigger re-indexing a space:
ocis search index --space $SPACE_ID --user $USER_ID
Note that IDs are required, not names, and that the specified user ID needs access to the space to be indexed.
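A hypothetical invocation with placeholder IDs (the values below are made up; use the actual space ID and the ID of a user who has access to that space):

```bash
# Both IDs are placeholders, not real values.
ocis search index \
  --space 7b6bcd0d-7b23-4a2c-8f3a-4c510ada0001 \
  --user 4f9e2b1a-8c3d-4e5f-9a6b-0d1c2e3f4a5b
```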
Notes
The indexing process tries to be self-healing in some situations.
In the following example, let’s assume a file tree foo/bar/baz exists. If the folder bar gets renamed to new-bar, the path to baz is no longer foo/bar/baz but foo/new-bar/baz.
The search service checks the change and either just updates the path in the index or creates a new index for all items affected if none was present.
Configuration
Environment Variables
The search service is configured via the following environment variables. Read the Environment Variable Types documentation for important details.
| Name | Type | Default Value | Description |
|---|---|---|---|
|  | bool | false | Activates tracing. |
|  | string |  | The type of tracing. Defaults to '', which is the same as 'jaeger'. Allowed tracing types are 'jaeger' and '' as of now. |
|  | string |  | The endpoint of the tracing agent. |
|  | string |  | The HTTP endpoint for sending spans directly to a collector, i.e. http://jaeger-collector:14268/api/traces. Only used if the tracing endpoint is unset. |
|  | string |  | The log level. Valid values are: 'panic', 'fatal', 'error', 'warn', 'info', 'debug', 'trace'. |
|  | bool | false | Activates pretty log output. |
|  | bool | false | Activates colorized log output. |
|  | string |  | The path to the log file. Activates logging to this file if set. |
|  | string | 127.0.0.1:9224 | Bind address of the debug server, where metrics, health, config and debug endpoints will be exposed. |
|  | string |  | Token to secure the metrics endpoint. |
|  | bool | false | Enables pprof, which can be used for profiling. |
|  | bool | false | Enables zpages, which can be used for collecting and viewing in-memory traces. |
|  | string | 127.0.0.1:9220 | The bind address of the GRPC service. |
|  | string |  | The secret to mint and validate jwt tokens. |
|  | string | com.owncloud.api.gateway | The CS3 gateway endpoint. |
|  | string |  | TLS mode for grpc connection to the go-micro based grpc services. Possible values are 'off', 'insecure' and 'on'. 'off': disables transport security for the clients. 'insecure' allows using transport security, but disables certificate verification (to be used with the autogenerated self-signed certificates). 'on' enables transport security, including server certificate verification. |
|  | string |  | Path/File name for the root CA certificate (in PEM format) used to validate TLS server certificates of the go-micro based grpc services. |
|  | string | 127.0.0.1:9233 | The address of the event system. The event system is the message queuing service. It is used as message broker for the microservice architecture. |
|  | string | ocis-cluster | The clusterID of the event system. The event system is the message queuing service. It is used as message broker for the microservice architecture. Mandatory when using NATS as event system. |
|  | bool | false | Enable asynchronous file uploads. |
|  | int | 0 | The amount of concurrent event consumers to start. Event consumers are used for searching files. Multiple consumers increase parallelisation, but will also increase CPU and memory demands. The default value is 0. |
|  | int | 1000 | The duration in milliseconds the reindex debouncer waits before triggering a reindex of a space that was modified. |
| SEARCH_EVENTS_TLS_INSECURE | bool | false | Whether to verify the server TLS certificates. |
|  | string |  | The root CA certificate used to validate the server’s TLS certificate. If provided SEARCH_EVENTS_TLS_INSECURE will be seen as false. |
|  | bool | false | Enable TLS for the connection to the events broker. The events broker is the ocis service which receives and delivers events between the services. |
|  | string | bleve | Defines which search engine to use. Defaults to 'bleve'. Supported values are: 'bleve'. |
|  | string | ~/.ocis/search | The directory where the filesystem will store search data. If not defined, the root directory derives from $OCIS_BASE_DATA_PATH:/search. |
| SEARCH_EXTRACTOR_TYPE | string | basic | Defines the content extraction engine. Defaults to 'basic'. Supported values are: 'basic' and 'tika'. |
|  | bool | false | Ignore untrusted SSL certificates when connecting to the CS3 source. |
| SEARCH_EXTRACTOR_TIKA_TIKA_URL | string | http://127.0.0.1:9998 | URL of the tika server. |
| SEARCH_CONTENT_EXTRACTION_SIZE_LIMIT | uint64 | 20971520 | Maximum file size in bytes that is allowed for content extraction. |
|  | string |  | Machine auth API key used to validate internal requests necessary for the access to resources from other services. |
YAML Example
Note that the filename shown below has been chosen on purpose. See the Configuration File Naming for details when setting up your own configuration.
```yaml
# Autogenerated
# Filename: search-config-example.yaml

tracing:
  enabled: false
  type: ""
  endpoint: ""
  collector: ""
log:
  level: ""
  pretty: false
  color: false
  file: ""
debug:
  addr: 127.0.0.1:9224
  token: ""
  pprof: false
  zpages: false
grpc:
  addr: 127.0.0.1:9220
  tls: null
token_manager:
  jwt_secret: ""
reva:
  address: com.owncloud.api.gateway
  tls:
    mode: ""
    cacert: ""
grpc_client_tls: null
events:
  endpoint: 127.0.0.1:9233
  cluster: ocis-cluster
  async_uploads: false
  num_consumers: 0
  debounce_duration: 1000
  tls_insecure: false
  tls_root_ca_certificate: ""
  enable_tls: false
engine:
  type: bleve
  bleve:
    data_path: ~/.ocis/search
extractor:
  type: basic
  cs3_allow_insecure: false
  tika:
    tika_url: http://127.0.0.1:9998
content_extraction_size_limit: 20971520
machine_auth_api_key: ""
```