Step 3: Running the DAECOS Predictor

The DAECOS Predictor ships as a single container that bundles the UI, API, and database servers. For resiliency, we recommend keeping the database data folders outside the container by setting the DB_DIR environment variable.

Configuration

Environment Variables

The application needs to be configured using the following environment variables:

| Variable | Description | Required |
|----------|-------------|----------|
| EPIC_CLIENT_ID | EPIC client ID | Yes |
| EPIC_BULK_GROUP_ID | Bulk group ID for the background job | Yes |
| EPIC_TOKEN_URL | EPIC token endpoint | Yes |
| EPIC_FHIR_URL | FHIR endpoint | Yes |
| EPIC_CRON_EXPRESSION | Frequency of the background job | Yes |
| DB_DIR | Host folder to mount for DB files | Yes |
| WATCH_DIR | Host folder to mount as the watch folder | No |
| LOG_DIR | Host folder to mount for log files | Yes |
| KEYS_FILE | Path to the JWKS keys.json file on the host | Yes |
| HOST_UID | UID of the user that created the host folders | Yes |
| HOST_GID | GID of the group that created the host folders | Yes |
| PORT | Port of the local app | Yes |

Using Environment Files

Create a .env file on the host OS. Here’s a sample file to get you started:

# EPIC settings
EPIC_CLIENT_ID='epic-client-id'
EPIC_BULK_GROUP_ID='epic-bulk-group-id'
EPIC_TOKEN_URL='https://fhir.epic.com/interconnect-fhir-oauth/oauth2/token'
EPIC_FHIR_URL='https://fhir.epic.com/interconnect-fhir-oauth/api/FHIR'

# Background cron job time
EPIC_CRON_EXPRESSION="0 */6 * * *"  # Runs every 6 hours, at minute 0

# Docker host settings
DB_DIR=./daecos-db-dir
WATCH_DIR=./daecos-watch-dir
LOG_DIR=./daecos-log-dir
KEYS_FILE=./keys.json
PORT=3211

# User and Group ID used to create the Host Folders
HOST_UID=1001
HOST_GID=1001

# Your organization name in lowercase
ORG_NAME=orgname 
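Before bringing the stack up, it can help to fail fast on missing configuration. Here is a minimal sketch of such a check; the `check_required` helper name is ours, not part of the product, and WATCH_DIR is omitted because it is optional:

```shell
# Hypothetical pre-flight check: verify every required variable is set.
check_required() {
  missing=0
  for var in EPIC_CLIENT_ID EPIC_BULK_GROUP_ID EPIC_TOKEN_URL EPIC_FHIR_URL \
             EPIC_CRON_EXPRESSION DB_DIR LOG_DIR KEYS_FILE HOST_UID HOST_GID PORT; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "Missing required variable: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Usage: load the .env file, then check it before starting the stack:
#   set -a; . ./.env; set +a
#   check_required && docker-compose up -d
```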

DB_DIR permissions

The Daecos database is powered by Postgres, and we recommend placing the DB_DIR folder on the host machine. Note: the host folder must be owned by the same user and group as the mapped folder inside the container. For this to work correctly, set HOST_UID and HOST_GID in the .env file.
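As a sketch, using the sample paths and the 1001:1001 ownership from the example .env above, the host folders could be prepared like this (adjust paths and IDs to your own values):

```shell
# Create the host folders referenced by the sample .env
mkdir -p ./daecos-db-dir ./daecos-watch-dir ./daecos-log-dir

# Align ownership with HOST_UID/HOST_GID (1001:1001 in the sample).
# This usually needs root; when run unprivileged it is skipped harmlessly.
chown -R 1001:1001 ./daecos-db-dir ./daecos-watch-dir ./daecos-log-dir 2>/dev/null \
  || echo "chown skipped: re-run as root to set ownership"
```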

Identifying the HOST_UID and HOST_GID on the host machine

Run the following commands to identify the user ID and group ID of the host user:

# Run ls where you created the DB_DIR, WATCH_DIR and LOG_DIR folders and
# identify the user and group that own them
ls -lah

# Get the numeric user ID, e.g. for my_user
id -u my_user  # replace my_user with the owner you identified with ls

# Get the numeric group ID, e.g. for my_group
id -g my_group  # replace my_group with the group you identified with ls

# Assign the values to HOST_UID and HOST_GID respectively
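If the mounted folders are owned by the user you are currently logged in as, a shortcut is to take the IDs from `id` directly and append them to the .env file:

```shell
# Append the current user's numeric UID/GID to the .env file.
# Only valid if this user owns (or will create) the mounted folders.
echo "HOST_UID=$(id -u)" >> .env
echo "HOST_GID=$(id -g)" >> .env
```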

Make sure you run your docker compose commands (below) from the same location as the .env file.

Docker Compose Configuration

For easier management, you can use Docker Compose. Create the following docker-compose.yml file:

version: '3.8'

services:
  daecos-app:
    image: 713377909722.dkr.ecr.us-east-1.amazonaws.com/daecos-predictor/${ORG_NAME}:latest 
    env_file: .env       # the .env file you created above
    container_name: ${CONTAINER_NAME:-daecos-predictor}
    user: "${HOST_UID}:${HOST_GID}"
    volumes:
      - ${DB_DIR}:/var/lib/postgresql/data     # Bind mount for PostgreSQL data
      - ${WATCH_DIR}:/tmp/watch-dir            # Bind mount for the watch folder
      - ${LOG_DIR}:/daecos-log-dir             # Bind mount for log files
      - ${KEYS_FILE}:/app/epic-backend-app/keys.json   # keys.json file for epic-backend-app
    ports:
      - "${PORT}:3210"  # Map host port to container port 3210

    healthcheck:
      test: ["CMD-SHELL", "nc -z localhost 3210 || exit 1"]  
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "2"
    
    restart: unless-stopped
    
    deploy:
      resources:
        limits:
          cpus:  ${CPU_LIMIT:-2}
          memory:  ${MEM_LIMIT:-2G}
        reservations:
          memory: ${MEM_RESERVATION:-1G}

Then run the following:

# Start the application, the -d flag starts it as a background process
docker-compose up -d

Verification

After starting the application, verify everything is working:

  1. Check container status

    docker-compose ps
  2. View logs

    docker-compose logs -f daecos-app
  3. Test the health endpoint

    curl http://localhost:3211/health  # use the PORT value from your .env

    Expected response:

    {
      "status": "healthy",
      "timestamp": "2025-08-10T12:00:00Z",
      "version": "1.0.0"
    }
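The health check in step 3 can be scripted as a readiness wait. Below is a sketch; the `wait_for_health` name is ours, and it assumes the /health endpoint and the PORT value from the sample .env:

```shell
# Poll the /health endpoint until it answers, or give up after ~30s.
wait_for_health() {
  port="${1:-3211}"          # default to the sample PORT from the .env above
  for _ in $(seq 1 10); do
    if curl -fsS "http://localhost:${port}/health" >/dev/null 2>&1; then
      echo "Service on port ${port} is healthy."
      return 0
    fi
    sleep 3
  done
  echo "Service on port ${port} did not become healthy in time." >&2
  return 1
}

# Usage (after docker-compose up -d):
#   wait_for_health 3211
```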

Common Commands

# Stop the application
docker-compose down

# View real-time logs
docker-compose logs -f

# Execute commands in the running container (use the service name, not the container name)
docker-compose exec daecos-app bash

# Restart the service
docker-compose restart daecos-app