Installation Steps

  1. Install dependencies

  2. Build or fetch images

  3. Configure parts

  4. Run each part

Dependencies

List of dependencies for this project:

Please consult your distribution's documentation for information on how to install these on your machine.

For help regarding Docker, please visit the Docker documentation.

Each module has multiple dependencies; these are handled by the Dockerfiles when the images are built. No user interaction is required to install them. To see what those dependencies are, please check each module's Dockerfile.

Running Docker As A Service

It is recommended to run the Docker service at boot time; otherwise, you will be responsible for starting the Docker service manually before running this project.

Most modern Linux distributions use systemd as a service manager, so the following commands apply. If your system uses SysV init, please look here for a brief tutorial.

Start the Docker service manually now

sudo systemctl start docker

Start Docker at boot

sudo systemctl enable docker

To check the status of the Docker service

sudo systemctl status docker

Building images

Image names

The following names are useful if you wish to handle image building and container creation manually, or when using the deprecated scripts:

  Module        Image name
  ------------  -----------------------
  priv-libs     cybexp-priv-libs
  collector     cybexp-collector
  query         cybexp-priv-query
  KMS           cybexp-kms-priv
  backend API   cybexp-priv-backend-api

All modules depend on the priv-libs image. Please build this image first before building any other image.

Since this project runs on Docker, images can also be exported and imported to new machines without running the building procedure at each machine.
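For example, moving an already-built image to another machine can be sketched with docker save and docker load (the image name cybexp-priv-libs is taken from the table above):

```shell
# On the build machine: export the image to a compressed tarball
docker save cybexp-priv-libs | gzip > cybexp-priv-libs.tar.gz

# Copy the tarball to the new machine (scp, USB drive, etc.), then import it there:
gunzip -c cybexp-priv-libs.tar.gz | docker load
```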

Wrapper scripts are provided to handle environment variable assignment, image building, persistent storage mounting, port mapping, and container deployment. It is highly recommended to use these scripts.

Alternatively, one can build all these images manually with Docker Compose (docker-compose). If building manually, assign environment variables as specified in the docker-compose.yaml files. It is also necessary to mount volumes or bind-mount a filesystem for data persistence, and to bind ports, when running manually (without the wrapper scripts).
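A minimal sketch of that manual Compose workflow for one module might look like the following (the variable name is a placeholder; use the variables actually specified in that module's docker-compose.yaml):

```shell
cd <module-kms>                  # root of the module, where docker-compose.yaml lives
export SOME_REQUIRED_VAR=value   # placeholder: set the variables the compose file expects
docker-compose build             # build the module's image(s)
docker-compose up -d             # create and start containers with the volumes/ports from the file
```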

Please build before running for the first time, or whenever the code has been modified. Rebuilding is not necessary if the project has already been built or imported. Building is also unnecessary when changing configuration files, as these are mounted at run time (also handled by the wrapper scripts).

Build base image

To build the base image (cybexp-priv-libs):

cd <project-priv-libs>
sh ./build.sh

Build Collector, KMS, Backend, Query-client

It is necessary to build each module before running it for the first time. Use the wrapper script provided in each module's folder to build.

Build the collector:

cd <module-collector>
bash collector.sh --build --build-only

Build the KMS:

cd <module-kms>
bash kms.sh --build --build-only

Build the backend:

cd <module-backend>
bash backend.sh --build --build-only

Build the query client:

cd <module-query>
bash query.sh --build --build-only

Important Information About Data Persistence

The secrets folder located at the root of each module is mounted at runtime and holds the keys; each module has its own. Keys are fetched from the KMS server only if they are not found in this folder. One can mount an encrypted LUKS filesystem at the secrets folder before running the module to store these keys securely on the underlying storage medium.
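A minimal LUKS sketch, assuming a dedicated partition /dev/sdb1 and a module checked out at <module-kms> (both placeholders):

```shell
# One-time setup: format the partition as LUKS (this DESTROYS its contents)
sudo cryptsetup luksFormat /dev/sdb1
sudo cryptsetup open /dev/sdb1 kms-secrets
sudo mkfs.ext4 /dev/mapper/kms-secrets      # one-time: create a filesystem inside

# Before each run: unlock and mount over the module's secrets folder
sudo cryptsetup open /dev/sdb1 kms-secrets
sudo mount /dev/mapper/kms-secrets <module-kms>/secrets

# After stopping the module: unmount and lock again
sudo umount <module-kms>/secrets
sudo cryptsetup close kms-secrets
```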

Secrets and configurations are never included in the images; they are mounted into the container at runtime. This prevents accidentally baking secrets and configurations into images, and allows configurations to be modified without rebuilding. One can modify a module's configuration and restart its container for the changes to take effect.

The modules that use a MongoDB instance (created automatically by the wrapper script) have its storage mounted under the db directory at runtime for persistence. This directory is located at the root of the module and can be configured in the docker-compose.yaml file.

Both the secrets and db folders are mounted at runtime. It is important to note that these contain sensitive information and all the data that should persist. Removing a container with the docker rm command deletes all the data inside it, but not the persistent storage, since that is part of the host system (unless configured otherwise). Recreating containers with the same mount points (the default if unchanged; configurable in docker-compose.yaml) creates new containers that mount the same folders/data as the previous ones. This is useful when updating code without deleting data: stop and rename the old container, create a new container with the same data, and, if the update went well, delete the old container with the old code. For more information about data persistence, see the Docker storage documentation, Docker bind mount documentation, and Docker volumes documentation.
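The update flow described above can be sketched as follows (the container name cybexp-kms-priv is illustrative, and this assumes kms.sh --build rebuilds and redeploys; the wrapper scripts determine the actual names and flags):

```shell
docker stop cybexp-kms-priv                         # stop the old container
docker rename cybexp-kms-priv cybexp-kms-priv-old   # keep it around under a new name
bash kms.sh --build                                 # rebuild and deploy with the same mounts
# ...verify the new container works against the same secrets/ and db/ data...
docker rm cybexp-kms-priv-old                       # only once you are happy with the update
```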

Bind interfaces and port

KMS and backend

To change the bind interface and port, edit docker-compose.yml for each corresponding module, under ports. The mappings end in :5000; this is required, as they bind to the container's internal port 5000. For more information on binding correctly, refer to Docker's networking documentation; for information on editing the Compose file, refer to the Docker Compose ports documentation. The correct notation is IPADDR:HOSTPORT:CONTAINERPORT.

localhost does not work

localhost is not a valid interface; please use 127.0.0.1 if you want to bind locally only. A local bind does not allow access from outside the machine, which is ideal when setting up a proxy with TLS and basic authentication in front of it.

Collector

The collector’s bind interface is controlled by the ./collector.sh script. By default it binds to all network interfaces (0.0.0.0) on port 6000. If this is not the desired behaviour, please change it, as it could be a security risk because the data is not yet encrypted at this point. Please refer to the help output of ./collector.sh --help. When specifying, use <interface>:<port>, e.g. 127.0.0.1:6000.

localhost does not work

localhost is not a valid interface; please use 127.0.0.1 if you want to bind locally only. A local bind does not allow access from outside the machine, which is ideal when setting up a proxy with TLS and basic authentication in front of it.

TLS/SSL

TLS/SSL provides an extra layer of transport security. HTTPS is not enabled by default for this project. Please add a reverse proxy in front of the KMS and backend servers to enable HTTPS. All modules support the use of HTTPS.

To enable HTTPS with automatic certificate renewal, please consult your certificate authority for instructions on obtaining certificates.

In a nutshell, to enable HTTPS we change the server bindings to a local-only interface and then configure the reverse proxy with HTTPS to point to that local interface.

To enable HTTPS (follow these instructions for the KMS and backend servers):

  1. add a reverse proxy (e.g. Nginx or Apache) to the server

  2. Make the KMS and backend bind to a local interface (for safety), so the HTTP services are exposed and available only to the proxy.

    • edit docker-compose.yml file in root of respective module

    • for the KMS and backend, look under ports: (not under the MongoDB service's ports) for the line that looks like - "<INTERFACE_IP>:50XX:50YY" and change it to - "127.0.0.1:50XX:50YY". In - "<INTERFACE_IP>:<bind_port>:<docker_port>", the interface is the bind location for the service, the bind port is the port on which the service will be made available, and the docker port is the internal container port the service originates from; the docker port should remain unchanged. Note that the interface cannot be the hostname localhost (it must be the IP of the localhost interface, e.g. 127.0.0.1). For more information see the Docker port documentation and Docker Compose documentation.

    • For example:

      # Change this in docker-compose.yaml (not under the DB section):
      ports:
        - "192.168.1.101:5002:5000"

      # To this:
      ports:
        - "127.0.0.1:5012:5000"

      # Use 5012 in the next step
      
  3. point the reverse proxy to each respective service in the NGINX configuration:

    server {
       location / {
           proxy_pass http://127.0.0.1:50XX; #<-- change port to match bind port in docker-compose.yml
       }
    }
    
  4. modify the reverse proxy configuration to use certificates

    • If using Let’s Encrypt: run the certbot tool. Link below

  5. configure autorenewal of certificates

    • if using Let’s Encrypt and certbot, a systemd timer is added to the system for auto-renewal. Please follow the instructions on certbot’s instructions page
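With Let's Encrypt and certbot on a systemd machine, the renewal setup can be verified like this:

```shell
# Simulate a renewal without touching the real certificates
sudo certbot renew --dry-run

# Confirm the systemd timer that drives auto-renewal is present and active
systemctl list-timers 'certbot*'
```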

More information:

Securing Services from unauthorized access

Each module supports basic authentication as an extra layer of security. This will act as a service gatekeeper.

To enable:

  1. In each server (KMS and backend), modify the reverse proxy configuration to enable basic authentication. More info here

    • add a user to the password file (typically .htpasswd)

    • add the exact same user with the same password to both KMS and Backend

  2. Configure basic authentication credentials on all the modules (except the KMS and backend) by editing their configuration files. The basic authentication stanzas can be found in the example configuration file; add them to your current configuration, uncomment them, and fill in the credentials created in the previous step

  3. If a module is already running, send it a SIGINT signal and wait for the service to exit gracefully (to avoid data loss). Then start it again with the new configuration
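For step 1, a basic-auth entry can be generated with openssl (the username johndoe and password are examples; point your reverse proxy's auth_basic_user_file / AuthUserFile at wherever you install the resulting file):

```shell
# Generate an htpasswd-style entry (apr1/MD5 scheme, understood by both Nginx and Apache)
entry="johndoe:$(openssl passwd -apr1 'S3cretPass')"
echo "$entry" >> htpasswd   # install this file wherever your reverse proxy expects it
```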

Warning

The backend server API has no authentication of its own, so it is very important to add this layer of security (unless deploying for testing/development purposes).

KMS and Backend basic authentication consistency

When configuring basic authentication on the KMS and backend, make sure each user has the same username and password on both servers; the user will then set up basic authentication in their configuration file, and the software will use the same credentials for the KMS and backend. In other words, a user's basic authentication credentials must be consistent across servers.

Must configure modules after enabling

After enabling basic authentication (step 1), everyone using the system must follow steps 2 and 3; otherwise modules will exit because they cannot connect to the servers.

Check server accessibility

To check whether the KMS or backend is properly configured (reverse proxy, HTTPS, basic auth, and bind interface/port), make a GET request to its root (/) endpoint. You should get an "up" response from each server. If you get "bad gateway" or "connection timeout," the module may be down, or the configuration or binding locations may be misconfigured. We can use curl (or any other API test utility) to test them. The syntax is as follows:

# without basic auth
curl <protocol>://<host>:<port>
# with basic auth
curl <protocol>://<user>:<password>@<host>:<port>

for example:

curl https://johndoe:AKk2345rtfxde5@cybexp-priv.example.com:5001

KMS Administration account

The KMS is managed by administrator accounts. KMS administrator accounts allow the creation of users.

One can create an administrator account in the KMS container with the following steps:

  1. cd into

  2. run ./add_admin.sh

  3. enter administrator username to create

  4. enter password

  • An administrator can also log in and retrieve their token via the KMS API in the event that it is misplaced after creation

  5. If successful, you will be presented with the account username, token, and level of permissions (in this case admin)

Note

When querying the API endpoints for keys, an administrator account will only be able to retrieve public keys, as no private key is generated for this user.

Warning

Physical access to the secrets folder provides full access to all the keys. Keep this folder secret. If possible, mount an encrypted filesystem prior to running the KMS, as this will protect the keys at rest.

Warning

Do not use an administrator API token for anything other than user creation.

KMS JWT Secret

This random value is used for bearer token creation. It must be kept secret, as it can be used to forge access to the system. Please generate it randomly when configuring the KMS for the first time, using a proven cryptographically secure random number generator. One can do so as follows:

python -c 'import os; print(os.urandom(16).hex())'

Place the output of the above command in the FLASK_JWT_SECRET_KEY field of the KMS configuration.

KMS Secret Key

Changing the value of FLASK_JWT_SECRET_KEY in the configuration effectively revokes all active tokens. To get new tokens use the login endpoint in the KMS.

KMS User Creation

User creation must be done via the KMS API (Please refer to KMS API section for endpoint details). This will also automatically create user keys with their respective attributes.

  1. Refer to API reference

  2. Use the administrator API key in the X-Authorization header

  3. Call user creation endpoint

  4. Hand the API keys to the user, or discard them, as the user can log in and retrieve their own API keys
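The steps above can be sketched with curl (the host, port, endpoint path /create_user, and request body are hypothetical; use the ones from the KMS API reference):

```shell
# Create a user with the administrator token in the X-Authorization header
curl -X POST "https://kms.example.com:5001/create_user" \
     -H "X-Authorization: <admin-token>" \
     -H "Content-Type: application/json" \
     -d '{"username": "alice", "password": "alice-password"}'
```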

Server authentication and authorization

The backend API and KMS API use two types of authentication: Basic Authentication and Bearer Token Authentication. Basic Authentication is used to prevent unauthenticated access to any resource on the systems. The bearer token is used for authentication and authorization for key distribution. Basic authentication uses the Authorization header, while the bearer token uses the X-Authorization header.
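With both layers enabled, a single request therefore carries two credentials (host, port, and token are placeholders):

```shell
# -u fills the standard Authorization header (basic auth checked by the reverse proxy);
# X-Authorization carries the KMS/backend bearer token
curl -u johndoe:S3cretPass \
     -H "X-Authorization: <token>" \
     "https://cybexp-priv.example.com:5001/"
```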

Registered Username

Usernames registered with the KMS may differ from the provided username; please check the response. This is because the KMS normalizes usernames before using them, for compatibility purposes. The registered username is the one returned in the response to the registration API call.

KMS System keys

The KMS automatically checks for keys in the mounted secrets folder; if they are not found, it will generate them.