This page is a draft and may contain incomplete or inaccurate information
This page is intended for ITCs and hosting sites selected to participate in the USxS Redesign Pilot project. Other ITCs are welcome to use these instructions and install the Pilot Release for preview and demonstration purposes.
The SSDT has issued a "Pilot Release" of the USxS Redesign application. The primary purpose of this release is to allow selected districts and ITCs to perform a full field test of the Redesign software and architecture prior to production releases.
This page and sub-pages contain technical information for deploying the Pilot Release. The instructions here assume a basic familiarity with Docker. You should at least read the Getting Started With Docker pages.
For the Pilot Release, no ITC is expected to need more than a single docker host. As we approach full production releases, it is more likely that we will need to implement a production environment based on Docker Swarm. For now, therefore, we recommend a simple deployment on a single host.
You may wish to think of the pilot instances, and even the docker host, as disposable. It is quite likely, based on what we learn from the Pilot Release, that the production environment will be significantly different. It is quite possible that we will recommend deleting your docker host and creating a new production environment.
In order to successfully deploy the PR release, you will need:
- Minimum Server (VM) requirements:
- Memory: 3GB per district + 1GB for docker and OS overhead
- Disk space: 2GB per district. This allows for database, backups, logs, etc.
- CPU: 4 cores (8 cores preferred)
- Host running:
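The memory and disk guidance above can be sketched as a quick sizing calculation; the district count here is only an example:

```shell
# Rough pilot-host sizing; 'districts' is an example value, adjust for your ITC.
districts=10
mem_gb=$((3 * districts + 1))   # 3GB per district + 1GB docker/OS overhead
disk_gb=$((2 * districts))      # 2GB per district for database, backups, logs
echo "${districts} districts: ${mem_gb}GB RAM, ${disk_gb}GB disk"
```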
USxS Architecture as Containers
The USxS Redesign systems are typical two-tier, single-tenant applications. Each school district will have its own instance, including a Tomcat deployment and a Postgres database server. The Docker convention is to have each "container" model a single process. Therefore, there will be at least two docker containers: one running Tomcat with the USxS application and a second running the Postgres database server. These two containers must be orchestrated to start and stop together.
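As a rough sketch, one district's container pair might be modeled in a compose file like the following. The service names, image references, and Postgres details here are illustrative assumptions, not the SSDT's actual published names:

```yaml
# Illustrative only: one Tomcat application container plus one Postgres
# container per district, started and stopped together by docker-compose.
version: "2"
services:
  usas-app:                    # hypothetical service name
    image: ssdt/usas-app:pilot # hypothetical image reference
    depends_on:
      - usas-db
  usas-db:
    image: postgres            # the SSDT publishes its own custom postgres image
    volumes:
      - usasdata:/var/lib/postgresql/data
volumes:
  usasdata:
```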
The remainder of these pages will show how to use `docker-compose` to configure and run these containers.
The SSDT hosts a registry containing images published by the SSDT. This includes application images as well as custom images for postgres and utilities. The registry will only contain images which have been published for release. Applications released by the SSDT will be tagged to indicate the current release status. For example, the USAS Pilot release will be tagged as:
Therefore, the application can be updated simply by pulling the latest pilot image.
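Under that assumption, an update can be as simple as the following commands, run from the district's project directory (a sketch; the `/data/pilot/sampletown` path is an example):

```shell
cd /data/pilot/sampletown   # the district's project directory
docker-compose pull         # fetch the latest tagged images from the registry
docker-compose up -d        # recreate containers from the new images
```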
The SSDT has prepared a "utils" image which contains (or will contain):
- template compose files for SSDT applications
- useful scripts for maintaining services and containers running SSDT applications
- Dockerfiles for building the
Examples in this wiki will assume you have the SSDT-Utils installed on your docker host. See Install and Update SSDT Utils package for details.
Host Directory Structure
On your docker host, you will need to establish a directory structure for the individual district configurations. Each district will need a separate directory.
For the pilot release, we recommend you place all pilot instances under a unique directory structure, such as `/data/pilot/`. The pilot instances can be thought of as "disposable": when we move toward production releases, it is likely that the pilot instances will be deleted and new production instances established.
For example, here is how the structure might look for some of NWOCA's districts:
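As a sketch, a layout with one project directory per district under `/data/pilot/` could look like the output of the following (the district names are placeholders):

```shell
# Print a hypothetical project-directory layout, one directory per district.
base=/data/pilot
for district in sampletown anytown hilltop; do
  printf '%s/%s\n' "$base" "$district"
done
```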
Each district directory will contain:
- `docker-compose.yml` file(s) defining the services (USAS, USPS or both) for the district
- a hidden `.env` file containing environment variables
- database backups
The directory containing the `docker-compose.yml` is referred to as the "project" directory. The name of this directory is important because it affects the default names of the containers created by the project.
Because these directories contain important configuration and database backup files, you should include them in the host's nightly backup.
See the specific application section below for how to configure each district's
Setting up a Pilot Instance
The SSDT Utils package contains a directory called `/ssdt/pilot` which contains template docker files and a `setup.sh` script to assist in configuring the docker compose configuration files. The `setup.sh` script performs the following:
- creates a `docker-compose.yml` containing:
  - Service configurations for USAS, USPS or both
  - Networking between both applications and the databases
  - Volume definitions for each database
  - API Keys for USAS ↔ USPS integration
  - Unique passwords for each district database
- creates a `.env` file to store passwords and API keys
- creates a `.docker-compose.md5` file to allow verification that the `docker-compose.yml` has not been modified
Each time the `setup.sh` script is executed, it creates a new `docker-compose.yml` file containing the latest mandatory configuration from the SSDT. Therefore, the ITC should never edit this file. Instead, the ITC should use a `docker-compose.override.yml` to provide additional configuration values or override SSDT-provided values.
The `.env` file contains generated keys and passwords that will be unique for each district. It is very important that this file be preserved and the values not changed after the applications are launched. For example, when the databases are launched for the first time, the database password will be set to the value in the `.env` file. If the database password is lost, the applications will lose access to the database (requiring a manual reset of the database password).
However, the ITC may edit the `.env` file to add additional environment variables to override the SSDT default values.
Run the `setup.sh` script to create a new district instance as follows:
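An invocation might look like this. The exact arguments and prompts of `setup.sh` are not documented here, so treat this as a sketch; the district path is an example:

```shell
mkdir -p /data/pilot/sampletown     # create the district's project directory
cd /data/pilot/sampletown
/ssdt/pilot/setup.sh                # answer the prompts; the defaults are recommended
```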
Please note the following:
- The project name defaults to the name of the current directory. The project name affects the names of volumes and networks when the project is started.
- By default, the script will automatically create API keys needed by the USAS/USPS Integration modules. We recommend you take the default.
- You may execute the script multiple times. It will recreate the `docker-compose.yml` and leave any existing `.env` file unchanged.
Generated Sampletown Configuration
Below are the files created by the setup script for Sampletown:
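The generated files themselves come from the SSDT templates; as an illustration only, the Sampletown `docker-compose.yml` might be shaped roughly like this (all service, image, and variable names below are assumptions):

```yaml
# Illustrative shape only; the actual content is generated by setup.sh.
version: "2"
services:
  usas-app:
    image: ssdt/usas-app:pilot            # hypothetical image name
    environment:
      - DB_PASSWORD=${USAS_DB_PASSWORD}   # generated value stored in .env (name assumed)
  usas-db:
    image: ssdt/postgres                  # hypothetical SSDT postgres image
    volumes:
      - usasdata:/var/lib/postgresql/data
  usps-app:
    image: ssdt/usps-app:pilot
  usps-db:
    image: ssdt/postgres
    volumes:
      - uspsdata:/var/lib/postgresql/data
volumes:
  usasdata:
  uspsdata:
```

This shape matches the description below: four containers on one default network, with one data volume per database.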
The generated configuration leverages docker-compose's default behavior to create valid instances of both applications with persistent database storage and inter-application communication. By default, the configuration defines the following for the Sampletown instance:
- Unique database passwords for each database
- Unique API keys for integration between USAS and USPS. These will be configured automatically in each application when the integration modules are enabled.
- On the docker engine (when the project is first deployed):
  - a network named `sampletown_default`. This network will contain all four containers and allows the applications to communicate with the databases and each other.
  - two volumes named `sampletown_usasdata` and `sampletown_uspsdata`. These will contain the database files for each database. They will use the default 'local' driver (i.e. mounted on the host's file system).
The configuration is structured to permit flexible customization by the ITC. For example, the ITC may wish to use a different volume driver to store database data on a remote file share.
As mentioned above, ITCs should not modify the generated `docker-compose.yml` file directly. The SSDT may release improvements to the setup scripts which re-create the file. Instead, the ITC should create a `docker-compose.override.yml` and place all district customizations there.
The generated configuration is sufficient to deploy and execute the USAS and USPS applications. However, each district will need some customization. At a minimum, the applications' HTTP ports must be made accessible.
The generated configuration does not expose any ports and the applications do not provide encrypted ports. Each hosting site must decide how to expose the ports and provide an encrypted connection (HTTPS). The two general solutions are:
- Expose ports on the docker engine and use an external reverse-proxy
- Use an nginx-proxy container to automatically reverse proxy the containers on the host
Using an External Reverse Proxy
If the ITC uses a standalone system for reverse proxying (e.g. a KEMP appliance), then the application ports can be exposed as unique ports on the docker engine and those ports configured in the reverse proxy. In this configuration, the ITC would create a `docker-compose.override.yml` like this to expose the application ports:
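A sketch of such an override follows. The service names (`usas-app`, `usps-app`) and the assumption that each application listens on Tomcat's default port 8080 inside the container are illustrative; the host ports match the text below:

```yaml
# docker-compose.override.yml: publish each application's container port
# on a unique host port for the external reverse proxy to target.
version: "2"
services:
  usas-app:              # hypothetical service name
    ports:
      - "8100:8080"      # host port 8100 -> assumed Tomcat port 8080
  usps-app:
    ports:
      - "8101:8080"
```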
In the above example, USAS would be exposed on the docker host's port 8100 and USPS on port 8101. It would be the ITC's responsibility to assign a unique port to each district and application.
The ITC would then use the reverse proxy (likely with a wildcard certificate) to reverse proxy a domain name to these ports.
In this configuration, it is important not to expose the docker engine ports outside the firewall to the public internet. Users should only be able to access the applications through the proxy's HTTPS port.
Auto Proxying with NGINX
See Using nginx-proxy for instructions on configuring an nginx web server with HTTPS to provide a reverse proxy on the docker engine. Once the proxy is established, each district needs to be configured with environment variables to define the application's host name. This example configures sampletown to use nginx-proxy with LetsEncrypt for the certificate:
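A sketch of such an override is below. `VIRTUAL_HOST` and `LETSENCRYPT_HOST` are the variables nginx-proxy and its LetsEncrypt companion watch for; the service names and the `example.org` domain are assumptions:

```yaml
# docker-compose.override.yml: join the applications to the nginx-proxy
# network and declare the host names the proxy should route to them.
version: "2"
services:
  usas-app:              # hypothetical service name
    environment:
      - VIRTUAL_HOST=${COMPOSE_PROJECT_NAME}-usas.example.org
      - LETSENCRYPT_HOST=${COMPOSE_PROJECT_NAME}-usas.example.org
    networks:
      - proxy
  usps-app:
    environment:
      - VIRTUAL_HOST=${COMPOSE_PROJECT_NAME}-usps.example.org
      - LETSENCRYPT_HOST=${COMPOSE_PROJECT_NAME}-usps.example.org
    networks:
      - proxy
networks:
  proxy:
    external:
      name: proxy_default   # the network created by the nginx-proxy project
```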
This configuration does several things:
- It joins the applications to the "proxy" network. This allows the nginx container to access the application ports.
- Defines the environment variables for nginx-proxy and LetsEncrypt (omit the LetsEncrypt variables if supplying your own certificate)
- Defines the "proxy" network as an external network (defined outside this project) as being the "proxy_default" network. The "proxy_default" is the network defined by the nginx-proxy project.
Notice that the file uses the COMPOSE_PROJECT_NAME environment variable to define the host names. If you follow the convention of using the project name in the hostname, then you can use this file as a template for each new district.