# ContextDX Backup & Restore Guide

This guide covers how to back up and restore your ContextDX data, including the database and application data.
## Table of Contents
- Overview
- Data Locations
- Backup Database
- Backup Application Data
- Restore Database
- Restore Application Data
- Automated Backups
- Best Practices
## Overview
ContextDX stores data in two main locations:
- PostgreSQL database - User data, configurations, sessions
- Data volume - Cache, entitlements, uploaded files
Regular backups are essential for disaster recovery and data protection.
## Data Locations
| Volume | Contents | Priority |
|---|---|---|
| contextdx-data | Sessions, cache, entitlements | Medium |
| postgres-data | Database (user data, configurations) | High |
To see your volumes:
```bash
docker volume ls | grep contextdx
```
## Backup Database

### Quick Backup
```bash
# Create backup directory
mkdir -p backups

# Backup database to SQL file (-T disables TTY allocation so the dump is not corrupted)
docker compose -f docker-compose.production.yml exec -T postgres \
  pg_dump -U contextdx contextdx > backups/backup-$(date +%Y%m%d).sql
```
### Compressed Backup
For larger databases, compress the backup:
```bash
docker compose -f docker-compose.production.yml exec -T postgres \
  pg_dump -U contextdx contextdx | gzip > backups/backup-$(date +%Y%m%d).sql.gz
```
### Backup with Timestamp
```bash
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
docker compose -f docker-compose.production.yml exec -T postgres \
  pg_dump -U contextdx contextdx > backups/backup-${TIMESTAMP}.sql
echo "Backup created: backups/backup-${TIMESTAMP}.sql"
```
### Verify Backup
Check that the backup file is valid:
```bash
# Check file size (should not be empty)
ls -lh backups/backup-*.sql

# Check file content (should start with PostgreSQL dump header)
head -20 backups/backup-$(date +%Y%m%d).sql
```
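A compressed backup can be checked without extracting it. A minimal sketch, assuming the `backup-YYYYMMDD.sql.gz` naming used in the examples above:

```bash
# Verify a compressed backup in place
BACKUP="backups/backup-$(date +%Y%m%d).sql.gz"

# gzip -t tests archive integrity; zcat + grep checks for the dump header
gzip -t "$BACKUP" && zcat "$BACKUP" | head -5 | grep -q "PostgreSQL database dump" \
  && echo "Backup looks valid: $BACKUP" \
  || echo "Backup verification FAILED: $BACKUP" >&2
```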
## Backup Application Data

### Backup Data Volume
```bash
# Backup the data volume to a tar archive
docker run --rm \
  -v contextdx-data:/data \
  -v $(pwd)/backups:/backup \
  alpine tar czf /backup/data-$(date +%Y%m%d).tar.gz /data
```
### Windows PowerShell
```powershell
# Backup data volume
docker run --rm `
  -v contextdx-data:/data `
  -v ${PWD}/backups:/backup `
  alpine tar czf /backup/data-$(Get-Date -Format "yyyyMMdd").tar.gz /data
```
## Restore Database

### Standard Restore
```bash
# Stop the server first (prevents data inconsistency)
docker compose -f docker-compose.production.yml stop server

# Restore from backup
cat backups/backup-20240115.sql | docker compose -f docker-compose.production.yml exec -T postgres \
  psql -U contextdx contextdx

# Start server
docker compose -f docker-compose.production.yml start server
```
### Restore Compressed Backup
```bash
# Stop the server
docker compose -f docker-compose.production.yml stop server

# Restore from compressed backup
gunzip -c backups/backup-20240115.sql.gz | docker compose -f docker-compose.production.yml exec -T postgres \
  psql -U contextdx contextdx

# Start server
docker compose -f docker-compose.production.yml start server
```
### Full Database Reset and Restore
If you need to completely reset the database:
```bash
# Stop all services
docker compose -f docker-compose.production.yml down

# Remove the database volume
docker volume rm contextdx_postgres-data

# Start postgres only
docker compose -f docker-compose.production.yml up -d postgres

# Wait for postgres to be ready
sleep 10

# Restore from backup
cat backups/backup-20240115.sql | docker compose -f docker-compose.production.yml exec -T postgres \
  psql -U contextdx contextdx

# Start all services
docker compose -f docker-compose.production.yml up -d
```
## Restore Application Data

### Restore Data Volume
```bash
# Stop services
docker compose -f docker-compose.production.yml down

# Remove existing volume (if needed)
docker volume rm contextdx-data

# Recreate volume and restore
docker run --rm \
  -v contextdx-data:/data \
  -v $(pwd)/backups:/backup \
  alpine sh -c "cd /data && tar xzf /backup/data-20240115.tar.gz --strip-components 1"

# Start services
docker compose -f docker-compose.production.yml up -d
```
## Automated Backups

### Linux/macOS (Cron)
Set up daily automated backups:
```bash
# Edit crontab
crontab -e

# Add this line (runs at 2 AM daily)
0 2 * * * cd /path/to/contextdx && docker compose -f docker-compose.production.yml exec -T postgres pg_dump -U contextdx contextdx > backups/backup-$(date +\%Y\%m\%d).sql
```
### Backup Script
Create a comprehensive backup script:
```bash
#!/bin/bash
# backup.sh - ContextDX backup script

BACKUP_DIR="/path/to/contextdx/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Backup database
echo "Backing up database..."
docker compose -f docker-compose.production.yml exec -T postgres \
  pg_dump -U contextdx contextdx | gzip > "$BACKUP_DIR/db-${TIMESTAMP}.sql.gz"

# Backup data volume
echo "Backing up data volume..."
docker run --rm \
  -v contextdx-data:/data \
  -v "$BACKUP_DIR":/backup \
  alpine tar czf /backup/data-${TIMESTAMP}.tar.gz /data

# Remove old backups
echo "Cleaning up old backups..."
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete

echo "Backup complete: ${TIMESTAMP}"
```
Make it executable and schedule:
```bash
chmod +x backup.sh

# Add to crontab (runs at 2 AM daily)
crontab -e
0 2 * * * /path/to/contextdx/backup.sh >> /path/to/contextdx/backup.log 2>&1
```
### Windows (Task Scheduler)
Create a PowerShell backup script:
```powershell
# backup.ps1 - ContextDX backup script
$BackupDir = "C:\contextdx\backups"
$Timestamp = Get-Date -Format "yyyyMMdd_HHmmss"
$RetentionDays = 30

# Create backup directory
New-Item -ItemType Directory -Force -Path $BackupDir

# Backup database (utf8 avoids Out-File's UTF-16 default, which psql cannot read back)
Write-Host "Backing up database..."
docker compose -f docker-compose.production.yml exec -T postgres `
  pg_dump -U contextdx contextdx | Out-File "$BackupDir\db-$Timestamp.sql" -Encoding utf8

# Compress
Compress-Archive -Path "$BackupDir\db-$Timestamp.sql" -DestinationPath "$BackupDir\db-$Timestamp.sql.zip"
Remove-Item "$BackupDir\db-$Timestamp.sql"

# Remove old backups
Get-ChildItem $BackupDir -Filter "*.zip" |
  Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-$RetentionDays) } |
  Remove-Item

Write-Host "Backup complete: $Timestamp"
```
Schedule using Task Scheduler to run daily.
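As a sketch, the task itself can be registered from an elevated PowerShell prompt using the built-in ScheduledTasks cmdlets (the script path here is an assumption; adjust it to wherever `backup.ps1` lives):

```powershell
# Register a daily 2 AM run of the backup script (hypothetical path)
$Action  = New-ScheduledTaskAction -Execute "powershell.exe" `
  -Argument "-ExecutionPolicy Bypass -File C:\contextdx\backup.ps1"
$Trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "ContextDX Backup" -Action $Action -Trigger $Trigger
```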
## Best Practices

### Backup Frequency
| Data Type | Recommended Frequency |
|---|---|
| Database | Daily minimum, hourly for critical data |
| Data volume | Weekly or after significant changes |
### Storage Recommendations
- Offsite backups - Store backups in a different location
  - Cloud storage (S3, Azure Blob, Google Cloud Storage)
  - Different physical server
  - Network-attached storage (NAS)
- Encryption - Encrypt backups containing sensitive data

  ```bash
  # Encrypt backup with GPG
  gpg --symmetric --cipher-algo AES256 backups/backup-20240115.sql
  ```

- Retention policy - Keep multiple backup generations
  - Daily backups: 7 days
  - Weekly backups: 4 weeks
  - Monthly backups: 12 months
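A tiered retention policy like the one above can be sketched with `find` and per-tier directories. The `daily/`, `weekly/`, and `monthly/` layout is an assumption; adapt the paths to wherever your backup script writes its output:

```bash
#!/bin/bash
# prune-backups.sh - tiered retention sketch (assumes backups are sorted
# into daily/, weekly/, and monthly/ subdirectories)
BACKUP_DIR="${BACKUP_DIR:-backups}"
mkdir -p "$BACKUP_DIR"/daily "$BACKUP_DIR"/weekly "$BACKUP_DIR"/monthly

find "$BACKUP_DIR/daily"   -name "*.sql.gz" -mtime +7   -delete  # keep 7 days
find "$BACKUP_DIR/weekly"  -name "*.sql.gz" -mtime +28  -delete  # keep 4 weeks
find "$BACKUP_DIR/monthly" -name "*.sql.gz" -mtime +365 -delete  # keep 12 months
```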
### Testing Restores
Regularly test your backup restoration process:
- Monthly - Verify backup files are not corrupted
- Quarterly - Perform full restore test in staging environment
- Document - Keep restore procedures documented and up-to-date
### Monitoring
Set up alerts for backup failures:
```bash
# Example: Send email on backup failure
./backup.sh || echo "Backup failed on $(date)" | mail -s "ContextDX Backup Failed" admin@example.com
```
## Quick Reference
| Task | Command |
|---|---|
| Quick database backup | `docker compose -f docker-compose.production.yml exec -T postgres pg_dump -U contextdx contextdx > backup.sql` |
| Compressed backup | `docker compose -f docker-compose.production.yml exec -T postgres pg_dump -U contextdx contextdx \| gzip > backup.sql.gz` |
| Restore database | `cat backup.sql \| docker compose -f docker-compose.production.yml exec -T postgres psql -U contextdx contextdx` |
| Backup data volume | `docker run --rm -v contextdx-data:/data -v $(pwd):/backup alpine tar czf /backup/data.tar.gz /data` |
| List backups | `ls -lh backups/` |