Acceptance Tests

The Test Directory Structure

This is the structure of the acceptance directory inside the core repository’s tests directory:

tests
├── acceptance
│   ├── config
│   │   └── behat.yml
│   ├── features
│   │   ├── apiTags (example suite of API tests)
│   │   │   └── feature files (behat gherkin files)
│   │   ├── bootstrap
│   │   │   └── Contexts and traits (php files)
│   │   ├── cliProvisioning (example suite of CLI tests)
│   │   │   └── feature files (behat gherkin files)
│   │   ├── lib
│   │   │   └── Page objects for webUI tests (php files)
│   │   └── webUILogin (example suite of webUI tests)
│   │       └── feature files (behat gherkin files)
│   ├── filesForUpload
│   └── run.sh

Here’s a short description of each component of the directory.

config/

This directory contains behat.yml, which configures the acceptance tests. In this file you can add new suites and define the contexts needed by each suite. Here’s an example configuration:

default:
  autoload:
    '': '%paths.base%/../features/bootstrap'
  suites:
    apiMain:
      paths:
        - '%paths.base%/../features/apiMain'
      contexts:
        - FeatureContext: &common_feature_context_params
            baseUrl:  http://localhost:8080
            adminUsername: admin
            adminPassword: admin
            regularUserPassword: 123456
            ocPath: apps/testing/api/v1/occ
        - AppManagementContext:
        - CalDavContext:
        - CardDavContext:

    apiCapabilities:
      paths:
        - '%paths.base%/../features/apiCapabilities'
      contexts:
        - FeatureContext: *common_feature_context_params
        - CapabilitiesContext:

features/

This directory contains sub-directories for each of the test suites.

features/suiteName

This directory stores Behat’s feature files for the test suite. These contain Behat’s test cases, called scenarios, which use the Gherkin language.

features/bootstrap

This folder contains all the Behat contexts. Contexts contain the PHP code required to run Behat’s scenarios. Every suite has to have one or more contexts associated with it. The contexts define the test steps used by the scenarios in the feature files of the test suite.

filesForUpload/

This folder contains files that tests can use when they need something to upload.

run.sh

This script runs the test suites. It is called by the make commands that are used to run acceptance tests.

The Testing App

The testing app provides an API that allows the acceptance tests to set up the environment of the system-under-test, for example, by running occ commands to set system and app config settings. The testing app must be installed and enabled on the system-under-test.
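
For example, the app that ships with core can be enabled with occ. This is a minimal sketch, run from the core root; the webserver user www-data is an assumption, adjust it to your system:

# Enable the testing app on the system-under-test
sudo -u www-data ./occ app:enable testing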

The testing app also provides skeleton folders that the tests can use as the default set of files for new users.

apps/testing/data/tinySkeleton/

This folder stores just a single file. It is useful when you want to test with a skeleton but need no more than one file.

apps/testing/data/smallSkeleton/

This folder stores a small set of initial files to be loaded for a new user.

apps/testing/data/largeSkeleton/

This folder stores a larger set of initial files to be loaded for a new user. These can be convenient when a longer list of files is needed, e.g., in a UI test that scrolls a file list.
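
The test runner normally selects the skeleton it needs by itself. Purely as an illustration of how a skeleton directory is chosen, it can also be set manually with occ; this sketch assumes it is run from the core root as the webserver user:

# Use the testing app's small skeleton as the default set of files for new users
sudo -u www-data ./occ config:system:set skeletondirectory \
  --value="$PWD/apps/testing/data/smallSkeleton"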

Running Acceptance Tests

Preparing to Run Acceptance Tests

This is a concise guide to running acceptance tests on ownCloud 10. Before you can do so, you need a few prerequisites; these are

  • ownCloud

  • Composer

  • MySQL

In php.ini on your system, set opcache.revalidate_freq=0 so that changes made to ownCloud’s config.php by test scenarios take effect immediately.
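
A quick way to check the effective value is shown below. Note that the CLI and webserver SAPIs may load different php.ini files, so make sure the one used by your webserver is updated too:

# List the loaded php.ini files
php --ini
# Print the effective value; false means the opcache extension is not loaded for the CLI
php -r 'var_dump(ini_get("opcache.revalidate_freq"));'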

After cloning core, run make as your webserver’s user in the root directory of the project.
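
For example, assuming the clone lives in $installation_path and the webserver user is www-data (adjust both for your system):

cd $installation_path
# Build and install the dependencies as the webserver's user
sudo -u www-data make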

Now that the prerequisites are satisfied, and assuming that $installation_path is the location where you cloned the ownCloud/core repository, the following commands will prepare the installation for running the acceptance tests.

# Remove current configuration (if existing)
sudo rm -rf $installation_path/data/*
sudo rm -rf $installation_path/config/*

# Remove existing 'owncloud' database
mysql -u root -h localhost -e "drop database owncloud"
mysql -u root -h localhost -e "drop user oc_admin"
mysql -u root -h localhost -e "drop user oc_admin@localhost"

# Install ownCloud server with the command-line
sudo -u www-data ./occ maintenance:install \
  --database='mysql' --database-name='owncloud' --database-user='root' \
  --database-pass='mysqlrootpassword' --admin-user='admin' --admin-pass='admin'

Types of Acceptance Tests

There are three types of acceptance tests: API, CLI and webUI.

  • API tests test the ownCloud public APIs.

  • CLI tests test the occ command-line commands.

  • webUI tests test the browser-based user interface.

webUI tests require an additional environment to be set up; see the UI testing documentation for more information. API and CLI tests are run using the test-acceptance-api and test-acceptance-cli make commands.

Test Server Environments

To run acceptance tests, specify the server URLs through the following environment variables.

Environment Variable    Default                  Description
TEST_SERVER_URL         http://localhost:8080    OC server URL to be used in tests.
TEST_SERVER_FED_URL     http://localhost:8180    OC federated server URL to be used in tests.
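
For example, to point the tests at a server listening on a non-default port (the port used here is only an assumption for illustration):

TEST_SERVER_URL=http://localhost:8081 make test-acceptance-api BEHAT_SUITE=apiTags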

Running Acceptance Tests for a Suite

Run a command like the following:

make test-acceptance-api BEHAT_SUITE=apiTags
make test-acceptance-cli BEHAT_SUITE=cliProvisioning

Running Acceptance Tests for a Feature

Run a command like the following:

make test-acceptance-api BEHAT_FEATURE=tests/acceptance/features/apiTags/createTags.feature
make test-acceptance-cli BEHAT_FEATURE=tests/acceptance/features/cliProvisioning/addUser.feature

Running Acceptance Tests for a Tag

Some test scenarios are tagged. For example, tests that are known to fail and are awaiting fixes are tagged @skip. To run test scenarios with a particular tag:

make test-acceptance-api BEHAT_SUITE=apiTags BEHAT_FILTER_TAGS=@skip
make test-acceptance-cli BEHAT_SUITE=cliProvisioning BEHAT_FILTER_TAGS=@skip
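
Tags can also be negated. Assuming BEHAT_FILTER_TAGS is passed through to Behat’s --tags option, which supports the ~ prefix, something like the following should run the suite while excluding the @skip scenarios:

# Run the apiTags suite but leave out scenarios tagged @skip
make test-acceptance-api BEHAT_SUITE=apiTags BEHAT_FILTER_TAGS='~@skip'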

Running Acceptance Tests for Different User Names and User Attributes

The user names and user attributes in test scenarios can be replaced at run-time. This allows running the acceptance test suites with unusual user names, display names, email addresses and passwords, which can be useful for finding values that cause problems.

The replacement values are defined in tests/acceptance/usernames.json. Edit that file and specify the values to be used at run time. For example:

{
  "Alice": {
    "username": "000",
    "displayname": "0.0",
    "email": "zero@example.org",
    "password": "0123"
  },
  "Brian": {
    "username": "1.1",
    "displayname": "नेपाली name",
    "email": "nepal@example.org",
    "password": "नेपाल"
  },
  "Carol": {
    "username": "12E3",
    "displayname": "12 thousand",
    "email": "twelve-thousand@example.org",
    "password": "random12000"
  },
  "David": {
    "username": "123@someone",
    "displayname": "321@nobody",
    "email": "someone@example.org",
    "password": "some123one"
  },
  "Emily": {
    "username": "e+f",
    "displayname": "a+b-c*d",
    "email": "emily+fred@example.org",
    "password": "notsorandom"
  }
}

If you are running tests locally, set the environment variable REPLACE_USERNAMES to true:

export REPLACE_USERNAMES=true
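
Then run a suite as usual; the values from tests/acceptance/usernames.json will be used in place of the default user names and attributes:

make test-acceptance-api BEHAT_SUITE=apiTags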

You can also run the acceptance tests in CI with the replaced user attributes. This is useful if you want to run many acceptance tests with an unusual combination of user names, display names, email addresses and passwords without taking up many hours on a local machine. A PR is necessary for this, but it is not to be merged; it is just a way to get test results. In the acceptance section of .drone.star, switch on the replaceUsernames setting. Commit the changes to .drone.star and tests/acceptance/usernames.json, push to GitHub, and make a draft PR:

Example Changes

'acceptance': {
  'api': {
    'suites': [
      'apiAuth',
      'apiAuthOcs',
      'apiAuthWebDav',
      # and so on...
    ],
    'replaceUsernames': True,
  },
},

When the acceptance tests are run, the user names and attributes will be replaced. When you are finished running the tests, remember to close the PR and leave a comment describing the outcome of your testing.

Running Acceptance Tests Using the Part System

The part system allows us to divide the test run without knowing how many suites are available or what they are called. Filter tags can also be used to run targeted scenarios after the division. Multiple test suites are grouped together in each part. For example, if there are 26 test suites to be grouped into 10 parts, then 2 or 3 test suites will be run in each part. This functionality is most useful in CI, where the test suites can be split into as many pipelines as is appropriate without needing to know the actual names of the test suites.

The following two methods can be used to achieve this:

1. Environment Variables

Environment Variable     Description
DIVIDE_INTO_NUM_PARTS    The number of parts the test suites will be divided into
RUN_PART                 The part number to test

Execute the tests by setting the above environment variables and running the usual make command.

RUN_PART=1 DIVIDE_INTO_NUM_PARTS=5 make test-acceptance-api

2. part argument

If run.sh is used directly to run acceptance tests, the part system can be used via the --part flag. The command below divides the test suites into five parts and runs just the first one.

./run.sh --part 1 5

With this method, it is also possible to use environment variables.

RUN_PART=1 DIVIDE_INTO_NUM_PARTS=5 ./run.sh

Displaying the ownCloud Log

It can be useful to see the tail of the ownCloud log when the test run ends. To do that, specify SHOW_OC_LOGS=true:

make test-acceptance-api BEHAT_SUITE=apiTags SHOW_OC_LOGS=true

Step Through Each Step of a Scenario

When developing tests, or investigating problems with a test or with the system-under-test, it is useful to be able to pause the test at each step to see what is happening. Setting STEP_THROUGH=true will cause the test runner to pause after each step. Press enter to resume the test and execute the next test step.

make test-acceptance-api STEP_THROUGH=true BEHAT_FEATURE=tests/acceptance/features/apiComments/createComments.feature:35
...
  Scenario: sharee comments on a group shared file
    Given group "grp1" has been created
  [Paused after "group "grp1" has been created" - press enter to continue]
    And user "Brian" has been added to group "grp1"
  [Paused after "user "Brian" has been added to group "grp1"" - press enter to continue]
    And user "Alice" has uploaded file "filesForUpload/textfile.txt" to "/myFileToComment.txt"
  [Paused after "user "Alice" has uploaded file "filesForUpload/textfile.txt" to "/myFileToComment.txt"" - press enter to continue]
...

Get Detailed Information About API Requests

If you set any of these environment variables, then the test runner will display details of each request to and response from the API. This generates a large amount of output, but can be useful to understand exactly what a test is doing and why it fails.

Environment Variable          Description
DEBUG_ACCEPTANCE_REQUESTS     Output the details of each API request.
DEBUG_ACCEPTANCE_RESPONSES    Output the details of each API response.
DEBUG_ACCEPTANCE_API_CALLS    Output the details of each API request and response.

make test-acceptance-api DEBUG_ACCEPTANCE_API_CALLS=true BEHAT_SUITE=apiTags

Optional Environment Variables

If you define SEND_SCENARIO_LINE_REFERENCES, then the API tests will send an extra X-Request-Id header in each request to the API. The value sent is a string that indicates the test suite, feature, scenario and line number of the step. For example, apiComments/editComments.feature:26-28 indicates the apiComments test suite, the editComments feature, the scenario at line 26 and the test step at line 28. A system-under-test could write that string into log entries, or report it in some other way that makes it easier to correlate the test runner’s API requests with the events in the system-under-test.

make test-acceptance-api BEHAT_SUITE=apiTags SEND_SCENARIO_LINE_REFERENCES=true

If you want to use an alternative user home naming scheme, add OC_TEST_ALT_HOME=1 to the command, as in the following example:

make test-acceptance-api BEHAT_SUITE=apiTags OC_TEST_ALT_HOME=1

If you want encryption enabled, add OC_TEST_ENCRYPTION_ENABLED=1, as in the following example:

make test-acceptance-api BEHAT_SUITE=apiTags OC_TEST_ENCRYPTION_ENABLED=1

How to Write Acceptance Tests

Each acceptance test is a scenario in a feature file in a test suite.

Feature Files

Each feature file describes and tests a particular feature of the software. The feature file starts with the Feature: keyword followed by a sentence describing the feature. This is followed by more detail explaining who uses the feature and why, in the format:

  As a [role]
  I want [feature]
  So that [benefit]

For example:

Feature: upload file using the WebDav API
  As a user
  I want to be able to upload files
  So that I can store and share files between multiple client systems

This detail is free-text and has no effect on the running of automated tests.

The rest of a feature file contains the test scenarios.

Make small feature files for individual features. For example "the Provisioning API" is too big to be a single feature. Split it into the functional things that it allows a client to do. For example:

  • addGroup.feature

  • addUser.feature

  • addToGroup.feature

  • deleteGroup.feature

  • deleteUser.feature

  • disableUser.feature

  • editUser.feature

  • enableUser.feature

  • removeFromGroup.feature

Test Scenarios

A feature file should have up to 10 or 20 scenarios that test the feature. If you need more scenarios than that, then perhaps there really are multiple features and you should make multiple feature files.

Each scenario starts with the Scenario: keyword followed by a description of the scenario. Then the steps to execute for that scenario are listed.

There are three types of test steps:

  • Given steps that get the system into the desired state to start the test (e.g., create users and groups, share some files)

  • When steps that perform the action under test (e.g., upload a file to a share)

  • Then steps that verify that the action was successful (e.g., check the HTTP status code, check that other users can access the uploaded file)

A single scenario should test a single action or logical sequence of actions. So the Given, When and Then steps should come in that order.

If there are multiple Given or When steps, then steps after the first start with the keyword And.

If there are multiple Then steps, then steps after the first start with the keyword And or But.

Writing a Given Step

Given steps are written in the present-perfect tense. They specify things that "have been done". For example:

  Scenario: delete files in a sub-folder
    Given user "Alice" has been created
    And user "Alice" has moved file "/welcome.txt" to "/FOLDER/welcome.txt"
    And user "Alice" has created a folder "/FOLDER/SUBFOLDER"
    And user "Alice" has copied file "/textfile0.txt" to "/FOLDER/SUBFOLDER/testfile0.txt"

Given steps do not mention how the action is done. They mention the actor that performs the step only when that matters. For example, creating a user must be done by something with enough admin privilege, so there is no need to mention "the administrator". But creating a file must be done in the context of some user, so the user must be mentioned.

The test code is free to achieve the desired system state however it likes. For example, by using an available API, by running a suitable occ command on the system-under-test, or by doing it with the webUI. Typically the test code for Given steps will use an API, because that is usually the most efficient.

Writing a When Step

When steps are written in the simple present tense. They specify the action that is being tested. Continuing the example above:

  Scenario: delete all files in a sub-folder
    Given user "Alice" has been created
    And user "Alice" has moved file "/welcome.txt" to "/FOLDER/welcome.txt"
    And user "Alice" has created a folder "/FOLDER/SUBFOLDER"
    And user "Alice" has copied file "/textfile0.txt" to "/FOLDER/SUBFOLDER/testfile0.txt"
    When user "Alice" deletes everything from folder "/FOLDER/" using the WebDAV API

In ownCloud there are usually two or three interfaces that can implement an action. For example, a user can be created using an occ command, the Provisioning API or the webUI. Files can be managed using the WebDAV API or the webUI. File shares can be managed using the Sharing API or the webUI. So When steps should end with a phrase specifying the interface to be tested, such as:

  • using the occ command

  • using the Sharing API

  • using the Provisioning API

  • using the WebDAV API

  • using the webUI

If a When step takes an action that is not expected to succeed, then the step can use the phrase "tries to". This makes it clear to the reader that the action is not expected to succeed in the normal way. It also allows the test code to act differently when handling the step. For example, it could ignore the fact that some element that is usually on the UI is missing, or it could expect that a different UI page will be displayed next.

  Scenario: admin login with invalid password
    Given the user has browsed to the login page
    When the administrator tries to login with an invalid password "wrongPassword" using the webUI
    ...

Write When steps that state what the user wants to achieve. This helps the test to remain focused on the business need rather than the implementation detail.

Sometimes there is a workflow on the UI that takes a few UI actions to achieve the result. For example, when the user moves or copies a file they go through a few actions. Normally write a single When step. But sometimes there are points in the workflow where the user has the option to take a different path. For example, there is a cancel button available at each step of the workflow. In order to test the cancel button, write smaller When steps to describe exactly how the user progresses through the workflow.

  Scenario: cancel copying a file
    Given user "Alice" has logged in using the webUI
    And the user has browsed to the files page
    When the user opens the file action menu of folder "data.zip" using the webUI
    And the user selects the copy action for folder "data.zip" using the webUI
    And the user selects the folder "simple-empty-folder" as a place to copy the file using the webUI
    And the user cancels the attempt to copy the file into folder "simple-empty-folder" using the webUI
    Then file "data.zip" should be listed on the webUI
    But file "data.zip" should not be listed in the folder "simple-empty-folder" on the webUI

Writing a Then Step

Then steps describe what should be the case if the When step(s) happened successfully. They should contain the word should somewhere in the step text.

  Scenario: delete all files in a sub-folder
    Given user "Alice" has been created
    And user "Alice" has moved file "/welcome.txt" to "/FOLDER/welcome.txt"
    And user "Alice" has created a folder "/FOLDER/SUBFOLDER"
    And user "Alice" has copied file "/textfile0.txt" to "/FOLDER/SUBFOLDER/testfile0.txt"
    When user "Alice" deletes everything from folder "/FOLDER/" using the WebDAV API
    Then user "Alice" should see the following elements
      | /FOLDER/           |
      | /PARENT/           |
      | /PARENT/parent.txt |
      | /textfile0.txt     |
      | /textfile1.txt     |
      | /textfile2.txt     |
      | /textfile3.txt     |
      | /textfile4.txt     |
    But user "Alice" should not see the following elements
      | /FOLDER/SUBFOLDER/              |
      | /FOLDER/welcome.txt             |
      | /FOLDER/SUBFOLDER/testfile0.txt |

Note that there are often multiple things that should or should not be the case after the When action. For example, in the above scenario, various files and folders (that are part of the skeleton) should still be there. But other files and folders under FOLDER should have been deleted.

Where it makes the scenario read more easily, use the But as well as And keywords in the Then section.

Then steps should test an appropriate range of evidence that the When action did happen. For example:

  Scenario: admin creates a user
    Given user "brand-new-user" has been deleted
    When the administrator sends a user creation request for user "brand-new-user" password "%alt1%" using the provisioning API
    Then the OCS status code should be "100"
    And the HTTP status code should be "200"
    And user "brand-new-user" should exist
    And user "brand-new-user" should be able to access a skeleton file

In this scenario we check that the OCS and HTTP status codes of the API request are good. But it is possible that the server lies, and returns HTTP status 200 for every request, even if the server did not create the user. So we check that the user exists. However, maybe the user exists according to some API that can query for valid user names or IDs, but the user account is not really valid and working. So we also check that the user can do something, in this case that they can access one of their skeleton files.

Specifying the Actor

Test steps often need to specify the actor that does the action or check. For example, the user.

Use realistic user names, display names and email addresses when writing scenarios. This helps real humans to more easily understand scenarios. There are five user names that are typically used in the test scenarios. Use those user names unless there is some special reason not to. That makes it possible to automatically replace user names and run all the test scenarios with different, unusual user names. The acceptance test code has defaults for the display name and email address of these "known" users, so you can just create these users in Given steps and they get the corresponding display name and email address.

User Name   Display Name   Email Address       Description
Alice       Alice Hansen   alice@example.org   The primary actor in a scenario, e.g., the one doing the sharing
Brian       Brian Murphy   brian@example.org   The second actor, e.g., the one receiving a share
Carol       Carol King     carol@example.org   The third actor, e.g., might be a member of a group
David       David Lopez    david@example.org   Another actor, when needed
Emily       Emily Wagner   emily@example.org   Another actor, when needed

The acceptance test code can remember the "current" user with a step like:

    Given as user "Alice"
    And the user has uploaded file "abc.txt"
    When the user deletes file "abc.txt"
    ...

Later steps can then just refer to "the user".

Or you can mention the user in each step:

    Given user "Alice" has uploaded file "abc.txt"
    When user "Alice" deletes file "abc.txt"
    ...

Either form is acceptable. Longer tests with a single user read well with the first form. Shorter tests, or sharing tests that mix actions of multiple users, read well with the second form.

When the actor is the administrator (a user with administrative privileges), use the administrator in the step text. Do not write When user "admin" does something. The user name of the user with administrator privilege on the system-under-test might not be admin. The user name of the administrator needs to be determined at run-time, not hard-coded in the scenario.

Referring to Named Entities

When referring to specific named entities on the system, such as a user, group, file, folder or tag, do not put the word the in front, but do include the name of the entity. For example:

    Given user "Alice" has been added to group "grp1"
    And user "Alice" has uploaded file "abc.txt" into folder "folder1"
    And user "Alice" has added tag "aTag" to file "folder1/abc.txt"
    When user "Alice" shares folder "folder1" with user "Brian"
    ...

This makes it clearer to understand which entity is required in which position of the sentence. For example:

    And "Alice" has uploaded "abc.txt" into "folder1"
    ...

would make it less clear that the required entities for this step are a user, a file and a folder.

Scenario Background

If all the scenarios in a feature start with a common set of Given steps, then put them into a Background: section. For example:

  Background:
    Given user "Alice" has been created
    And user "Brian" has been created
    And user "Alice" has uploaded file "abc.txt"

  Scenario: share a file with another user
    When user "Alice" shares file "abc.txt" with user "Brian" using the sharing API
    Then the HTTP status code should be "200"
    And user "Brian" should be able to download file "abc.txt"

  Scenario: share a file with a group
    Given group "grp1" has been created
    And "Brian" has been added to group "grp1"
    When user "Alice" shares file "abc.txt" with user "Brian" using the sharing API
    Then the HTTP status code should be "200"
    And user "Brian" should be able to download file "abc.txt"

This reduces some duplication in feature files.

Controlling Running Test Scenarios In Different Environments

A feature or test scenario might only be relevant to run on a system-under-test that has a particular environment, for example, a particular app enabled.

To allow the test runner script to run only the features and scenarios relevant to the system-under-test, the feature file or individual scenarios are tagged. The test runner script can then filter by tags to select the relevant features or scenarios.

For general information on tagging features and scenarios see the Behat tags documentation.

Tagging Features By API, CLI and webUI

Tag every feature with its major acceptance test type (api, cli or webUI), as in the following examples. Doing so allows the tests of a particular major type to be quickly run or skipped.

@api

@api
Feature: add groups
  As an admin
  I want to be able to add groups
  So that I can more easily manage access to resources by groups rather than individual users

@cli

@cli
Feature: add group
  As an admin
  I want to be able to add groups
  So that I can more easily manage access to resources by groups rather than individual users

@webUI

@webUI
Feature: login users
  As a user
  I want to be able to log into my account
  So that I have access to my files

Tagging Scenarios That Require An App

When a feature or scenario requires a core app to be enabled then tag it like:

@comments-app-required
@federation-app-required
@files_trashbin-app-required
@files_versions-app-required
@notifications-app-required
@provisioning-app-required
@systemtags-app-required

The above apps might be disabled on a system-under-test. Tagging the feature or scenario allows all tests for the app to be quickly run or skipped.
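
As a sketch, assuming the tag filter is passed straight through to Behat (which allows combining tags with &&), a run that skips scenarios needing the notifications app could look like this:

# Skip scenarios that need the notifications app, as well as @skip scenarios
make test-acceptance-api BEHAT_FILTER_TAGS='~@notifications-app-required&&~@skip'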

For tests in an app repository, do not tag them with the app name (e.g., files_texteditor-app-required). It is already a given that the app in the repository is required for running the tests!

Tagging Scenarios That Need to Be Skipped

Skip UI Tests On A Particular Browser

Some browsers have difficulty with some automated test actions. To skip scenarios for a particular browser, tag them with the relevant tag:

@skipOnCHROME
@skipOnFIREFOX
@skipOnINTERNETEXPLORER
@skipOnMICROSOFTEDGE

Skip Tests On A Particular Version Of ownCloud

The acceptance test suite is sometimes run against a system-under-test that has an older version of ownCloud. When writing new test scenarios for a new or changed feature, tag them to be skipped on the most recent previous release of ownCloud. Use tag formats like the following to skip on a particular major, minor or patch version.

@skipOnOcV10
@skipOnOcV10.4
@skipOnOcV10.5.0

The acceptance test suite has scenarios that test federated sharing. Those scenarios are run against federated servers running older versions of ownCloud, to ensure that federated sharing can work between different server versions. When writing scenarios for new or fixed federation features that are not expected to work with older versions, skip those scenarios using tags like:

@skipOnFedOcV10
@skipOnFedOcV10.4
@skipOnFedOcV10.5.0

If there are significant changes for a new release, and many test scenarios have to be modified and skipped on older ownCloud versions, then the old scenarios can be left in the feature files for when the test suite is used against an older version of ownCloud. Tag the older scenarios like the following, and add logic to tests/acceptance/run.sh when you need to add new tags for newer versions.

@skipOnAllVersionsGreaterThanOcV10.8.0

Skip Tests In Other Environments

@skipOnDockerContainerTesting
  Skip the scenario if the test is running against the ownCloud docker container. Some settings are preset in the docker container and the tests cannot change those, so the related test scenarios must be skipped.

@skipOnLDAP
  Skip the scenario if the test is running with the LDAP backend. For example, some user provisioning features may not be relevant when LDAP is the backend for authentication.

@skipOnStorage:ceph
  Skip the scenario if the test is running with ceph backend storage.

@skipOnStorage:scality
  Skip the scenario if the test is running with scality backend storage.

@skipOnEncryption
  Skip the scenario if the test is running with encryption enabled.

@skipOnEncryptionType:masterkey
  Skip the scenario if the test is running with masterkey encryption enabled.

@skipOnEncryptionType:user-keys
  Skip the scenario if the test is running with user-keys encryption enabled.

@notToImplementOnOCIS
  The scenario is not relevant on an OCIS system. OCIS CI and developers can skip these scenarios.

Tags For Tests To Run In Special Environments

@smokeTest
  This scenario has been selected as part of a base set of smoke tests.

@TestAlsoOnExternalUserBackend
  This scenario is selected as part of a base set of tests to run when a special user backend is in place (e.g., LDAP).

@local_storage
  This scenario requires and tests the local storage feature.

@mailhog
  This scenario requires an email server to be running.

Special Tags for UI Tests

@insulated
  This makes the browser driver restart the browser session between each scenario. It helps isolate the browser state. When the browser session is being recorded, there is a separate video for each scenario. Use this tag on all UI scenarios.

@disablePreviews
  Generating previews/thumbnails takes time. Use this tag on UI test scenarios that do not need to test thumbnail behavior.

Running Tests Using Release Tarballs in CI

If you want to run the tests in CI against a system installed from one of the release tarballs, you can use the testAgainstCoreTarball setting in the config section of .drone.star. You can use the coreTarball option to specify which release tarball to install from. If no tarball version is specified, then daily-master-qa will be used. Only the acceptance tests will use the release tarball; the unit and integration tests will still run against the git branch.

config = {
  'acceptance': {
    'api': {
      'suites': [
        'apiAuth',
        'apiAuthOcs',
        'apiAuthWebDav',
        # and so on...
      ],
      'testAgainstCoreTarball': True,
      'coreTarball': '10.7.0',
    },
  },
}

Writing Scenarios For Bugs

If you are developing a new feature, and the scenarios that you have written do not pass, or existing scenarios are failing, then fix the code so that they pass.

If you are writing scenarios to cover features that are not currently covered by acceptance tests, then you may find existing bugs.

If the bug is easy to fix, then provide the bugfix and the new acceptance test scenario(s) in the same pull request.

If the bug is not easy to fix, then:

  • create an issue describing the bug.

  • write a scenario that demonstrates the existing wrong behavior.

  • include commented-out steps in the scenario to document what is the expected correct behavior.

  • write the scenario so that it will fail when the bug is fixed.

  • tag the scenario with the issue number.

  @issue-32385
  Scenario: Change email address
    When the user changes the email address to "new-address@owncloud.com" using the webUI
    # When the issue is fixed, remove the following step and replace with the commented-out step
    Then the email address "new-address@owncloud.com" should not have received an email
    #And the user follows the email change confirmation link received by "new-address@owncloud.com" using the webUI
    Then the attributes of user "Brian" returned by the API should include
      | email | new-address@owncloud.com |

The above scenario is an example of this. When the bug is fixed, the step asserting that the email address should not have received an email will fail. CI will fail, and so the developer will notice this scenario and will have to correct it.

How to Add New Test Steps

See the Behat User Guide for information about writing test step code.

In addition to that, follow these guidelines.

Given Steps

The code of a Given step should achieve the desired system state by whatever means is quick to execute. Typically use a public API if available, rather than running an occ command via the testing app or entering data in the webUI.

If there is a simple way to gain confidence that the Given step was successful, then do it. Typically this will check a status code returned in the API response. Doing simple confidence checks in Given steps makes it easier to catch unexpected problems during the Given section of a scenario.

Here’s example code for a Given step:

/**
 * @Given the administrator has changed the password of user :user to :password
 *
 * @param string $user
 * @param string $password
 *
 * @return void
 * @throws \Exception
 */
public function adminHasChangedPasswordOfUserTo(
    $user, $password
) {
    $this->adminChangesPasswordOfUserToUsingTheProvisioningApi(
        $user, $password
    );
    $this->theHTTPStatusCodeShouldBe(
        200,
        "could not change password of user $user"
    );
}

The code calls the method for the When step and then checks the HTTP status code.

When Steps

The code of a When step should perform the action but not check its result. A When step should not ordinarily fail. Often a When step will save the response. It is the responsibility of later Then steps to decide if the scenario passed or failed.

Here’s example code for a When step:

/**
 * @When the administrator changes the password of user :user to :password using the provisioning API
 *
 * @param string $user
 * @param string $password
 *
 * @return void
 * @throws \Exception
 */
public function adminChangesPasswordOfUserToUsingTheProvisioningApi(
    $user, $password
) {
    $this->response = UserHelper::editUser(
        $this->getBaseUrl(),
        $user,
        'password',
        $password,
        $this->getAdminUsername(),
        $this->getAdminPassword()
    );
}

The code saves the response so that later Then steps can examine it.

Then Steps

The code of a Then step should check some result of the When action. Often it will find information in the saved response and assert something.

Here’s example code for a Then step:

/**
 * @Then /^the groups returned by the API should include "([^"]*)"$/
 *
 * @param string $group
 *
 * @return void
 */
public function theGroupsReturnedByTheApiShouldInclude($group) {
    $respondedArray = $this->getArrayOfGroupsResponded($this->response);
    PHPUnit\Framework\Assert::assertContains($group, $respondedArray);
}

However, a Then step may need to do actions of its own to retrieve more information about the state of the system. For example, after changing a user password we could check that the user can still access some file:

/**
 * @Then /^as "([^"]*)" (file|folder|entry) "([^"]*)" should exist$/
 *
 * @param string $user
 * @param string $entry
 * @param string $path
 *
 * @return void
 * @throws \Exception
 */
public function asFileOrFolderShouldExist($user, $entry, $path) {
    $path = $this->substituteInLineCodes($path);
    $this->responseXmlObject = $this->listFolder($user, $path, 0);
    PHPUnit\Framework\Assert::assertTrue(
        $this->isEtagValid(),
        "$entry '$path' expected to exist but not found"
    );
}

In the above example, listFolder makes an API call to access the file, and the step then asserts that the response has a valid ETag.

References

For more information on Behat, and how to write acceptance tests using it, see the Behat documentation. For background information on Behaviour-Driven Development (BDD), see Dan North’s resources.

Skipping and Debugging Test Suites in CI

Skip Pipelines

For various purposes, you may want to skip one or more CI pipelines. Use skip in the drone config to skip the test pipelines. skip is available for javascript, phpunit and acceptance tests.

Usage:

...
'phpunit': {
  'allDatabases': {
    'phpVersions': [
      '7.3',
    ],
    'skip': True
  },
}
...
'acceptance': {
  'api': {
    'suites': [
      'apiAuth',
      'apiAuthOcs',
      'apiAuthWebDav',
      'apiCapabilities',
      'apiComments'
    ],
    'skip': True
  },
}
...

Debug Specific Test Suites

In CI, you may want to run only one or a few specific test suites for debugging purposes. To do so, you can use debugSuites in the drone config, which takes a list of suite names. If debugSuites is included with one or more test suites in it, then only those suites will run in CI. (Note: remember to set 'skip': False if you are using skip.)

Usage:

...
'acceptance': {
  'api': {
    'suites': [
      'apiAuth',
      'apiAuthOcs',
      'apiAuthWebDav',
      'apiCapabilities',
      'apiComments'
    ],
    'debugSuites': ['apiAuth']
  }
}
...

Similarly, in the case of test suites that run in parts, you can use skipExceptParts to specify which part(s) you want to run.

Usage:

...
'acceptance': {
  'apiProxy': {
    'suites': {
      'apiProxySmoketest': 'apiProxySmoke',
    },
    'numberOfParts': 8,
    'skipExceptParts': [3, 7]
  }
}
...

Organizing Feature Files

Feature files can become difficult to maintain, or even to understand, as they grow larger. Now that we have a general background on the scenarios and steps in a feature file, the following things should be kept in mind while writing one:

Adding Comments

Feature files are themselves documentation of the feature being tested. Adding comments is good practice in code, but in feature files the feature and scenario descriptions should do most of the explaining. However, there are some cases where comments are necessary. One example is adding a comment to a scenario to explain the step that demonstrates a bug, and to specify the replacement step to use once the bug is fixed.

Adding Tags

A scenario can have multiple tags. Tags not only help to run a subset of scenarios, but also document the scenario. For example, a scenario tagged with @notToImplementOnOCIS can be used to track the scenarios that are not to be implemented for the oCIS server. A scenario can also be tagged with a related issue if it describes or refers to a bug. This makes it easier to track bugs and their fixes. For example, a scenario tagged with @issue-123 refers to the issue with id 123 in the respective repository.

The following two factors might complicate the process of assigning/removing tags:

  • Should we leave closed issue tags in the scenarios? - Yes, we should. This is purely for documentation purposes. The same issue, or something similar, may reappear in the future, so it is better to keep the tags in the scenarios.

  • Which issue to tag in case of multiple issues? - Where a scenario pertains to multiple issues, tag it with the most relevant issue, i.e., the one with the clearest description or the most relevant discussion.

Inter-scenario Spacing

Since the expected-failures files refer to scenarios by their line numbers, the scenarios in the feature files should be kept as stable as possible. This means that scenarios should not be moved around unnecessarily, and the gaps between them should be kept consistent. For this, the following things should be kept in mind:

  • Between two scenarios, one blank line should be left.

  • One line below the gap is reserved for the tags.

If a scenario has no assigned tags, there will be a two-line gap before it.

Usage:

...
                                    # inter-scenario gap
@issue-123                          # reserved tag line
Scenario: Scenario 1
  Given ...
  When ...
  Then ...
                                    # inter-scenario gap
                                    # reserved tag line
Scenario: Scenario 2
  Given ...
  When ...
  Then ...
...