Dump and Restore Procedures Guide¶
Version: 3.1.0 Status: Production-Ready Last Updated: 2025-12-08
Table of Contents¶
- Overview
- Dump Operations
- Restore Operations
- Disaster Recovery
- Automated Backup Scheduling
- Storage Planning
- Performance Optimization
- Verification and Validation
- Troubleshooting
- Real-World Examples
Overview¶
HeliosDB-Lite's dump/restore system provides user-controlled persistence for in-memory databases. This feature enables:
- Manual data snapshots: Dump database state to portable files
- Incremental backups: Append-only dumps for changed data
- Point-in-time recovery: Restore to specific dump snapshots
- Disaster recovery: Automated backup and restoration workflows
- Cross-environment migration: Move data between instances
Dump File Format¶
HeliosDB dumps use the .heliodump format with the following characteristics:
- Version-stamped: Forward-compatible with future releases
- Compressed: Optional zstd or lz4 compression (50-80% reduction)
- Portable: Platform-independent binary format
- Schema-aware: Includes table definitions, indexes, and constraints
- Incremental-capable: Tracks LSN markers for delta dumps
File Structure:
┌─────────────────────────┐
│ Magic: "HELIODMP" │
│ Version: 3.1.0 │
├─────────────────────────┤
│ Metadata Header │
│ - dump_id (UUID) │
│ - created_at │
│ - mode (full/inc) │
│ - last_lsn │
│ - compression type │
├─────────────────────────┤
│ Schema Definitions │
│ - Tables │
│ - Indexes │
│ - Constraints │
├─────────────────────────┤
│ Data Batches │
│ - Row batches │
│ - Compressed chunks │
├─────────────────────────┤
│ Footer │
│ - Checksum (CRC32) │
│ - Statistics │
└─────────────────────────┘
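The layout above can be modeled with a short parser sketch. The byte-level encoding here (fixed 8-byte magic followed by length-prefixed JSON metadata) is an illustrative assumption, not the actual on-disk format:

```python
import json
import struct

MAGIC = b"HELIODMP"

def write_header(metadata: dict) -> bytes:
    """Serialize a hypothetical dump header: magic, then length-prefixed JSON metadata."""
    meta = json.dumps(metadata).encode()
    return MAGIC + struct.pack(">I", len(meta)) + meta

def read_header(blob: bytes) -> dict:
    """Parse the header back; raise ValueError on a bad magic string."""
    if blob[:8] != MAGIC:
        raise ValueError("not a heliodump file")
    (meta_len,) = struct.unpack(">I", blob[8:12])
    return json.loads(blob[12:12 + meta_len])

header = write_header({"version": "3.1.0", "mode": "full", "last_lsn": 42})
assert read_header(header)["mode"] == "full"
```

Checking the magic string first is what lets tools like `heliosdb-lite verify` reject non-dump files immediately, before reading any data batches.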
Dump Operations¶
Basic Dump Command¶
# Full database dump
heliosdb-lite dump --output /backups/mydb.heliodump
# Dump with compression (recommended)
heliosdb-lite dump --output /backups/mydb.heliodump --compress zstd
# Dump specific tables
heliosdb-lite dump --output /backups/users-only.heliodump --tables users,sessions
# Dump with custom compression level
heliosdb-lite dump --output /backups/mydb.heliodump --compress zstd --compression-level 9
Full vs Incremental Dumps¶
Full Dump¶
Exports entire database state:
Characteristics:
- Contains all tables, rows, and indexes
- Self-contained (no dependencies)
- Larger file size; slower to create
- Use for: weekly/monthly backups, archival, migrations
Incremental Dump (Append Mode)¶
Exports only changes since last dump:
# First dump (full)
heliosdb-lite dump --output /backups/base.heliodump
# Subsequent dumps (incremental)
heliosdb-lite dump --output /backups/base.heliodump --append
# Incremental dumps track LSN automatically
Characteristics:
- Only changed/new data
- Smaller and faster to create
- Requires the base dump to restore
- Use for: hourly/daily backups, continuous archival
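The automatic LSN tracking can be sketched as a simple filter: each row carries the LSN (log sequence number) at which it was last modified, and an incremental dump exports only rows newer than the LSN recorded by the previous dump. The row representation here is a minimal illustration, not HeliosDB's internal format:

```python
# Each row records the LSN at which it was last modified.
rows = [
    {"id": 1, "lsn": 100},
    {"id": 2, "lsn": 250},
    {"id": 3, "lsn": 310},
]

def incremental_rows(rows, last_dumped_lsn):
    """Select only rows modified after the previous dump's LSN marker."""
    return [r for r in rows if r["lsn"] > last_dumped_lsn]

assert [r["id"] for r in incremental_rows(rows, 0)] == [1, 2, 3]    # full dump
assert [r["id"] for r in incremental_rows(rows, 250)] == [3]        # delta only
```

This is why incremental dumps are fast: the selection is a comparison against a single stored marker, with no diffing of table contents.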
Compression Options¶
HeliosDB supports multiple compression algorithms:
| Algorithm | Ratio | Speed | CPU | Use Case |
|---|---|---|---|---|
| none | 1.0x | Fastest | Minimal | Fast backups, pre-compressed data |
| lz4 | 2.0-3.0x | Fast | Low | Frequent backups, balanced |
| zstd (default) | 3.0-5.0x | Medium | Medium | Archival, storage-constrained |
| zstd -9 | 4.0-7.0x | Slow | High | Long-term archival, rarely accessed |
Example:
# Fast compression (LZ4)
heliosdb-lite dump --output fast.heliodump --compress lz4
# Balanced compression (zstd level 3, default)
heliosdb-lite dump --output balanced.heliodump --compress zstd
# Maximum compression (zstd level 19)
heliosdb-lite dump --output archive.heliodump --compress zstd --compression-level 19
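The level/ratio tradeoff in the table can be demonstrated with the standard library. Since zstd and lz4 are not in Python's stdlib, this sketch uses `zlib` as a stand-in; the exact ratios differ, but the shape of the tradeoff (higher level, smaller output, more CPU) is the same:

```python
import zlib

# Repetitive sample payload (compresses well, like row data with shared prefixes)
data = b"user_id,email,created_at\n" * 10_000

fast = zlib.compress(data, level=1)   # analogous to lz4 / low levels: fast, larger
best = zlib.compress(data, level=9)   # analogous to zstd -9+: slow, smallest

assert len(fast) < len(data)          # any level beats no compression here
assert len(best) <= len(fast)         # higher level never produces larger output
assert zlib.decompress(best) == data  # lossless round-trip
```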
Selective Table Dumps¶
# Dump single table
heliosdb-lite dump --output users.heliodump --tables users
# Dump multiple tables (comma-separated)
heliosdb-lite dump --output critical.heliodump --tables users,orders,payments
# Dump all except specific tables (exclusion requires scripting)
# List all tables, filter, then dump
TABLES=$(heliosdb-lite tables | grep -v 'logs\|temp' | paste -sd, -)
heliosdb-lite dump --output prod.heliodump --tables "$TABLES"
Monitoring Dump Progress¶
# Run dump with verbose output
heliosdb-lite dump --output backup.heliodump --verbose
# Output example:
# [2025-12-08 10:00:00] Starting dump...
# [2025-12-08 10:00:01] Dumping table 'users' (1/5)
# [2025-12-08 10:00:02] Exported 10,000 rows (1.2 MB)
# [2025-12-08 10:00:03] Dumping table 'orders' (2/5)
# [2025-12-08 10:00:05] Exported 50,000 rows (8.5 MB)
# ...
# [2025-12-08 10:00:30] Dump complete: 5 tables, 250,000 rows, 45.2 MB (compressed: 12.1 MB)
Checking Dirty State¶
Before dumping, check if there are uncommitted changes:
-- Check if database has unsaved changes
SELECT pg_stat_get_dirty_bytes() as dirty_bytes,
pg_stat_get_dirty_bytes() > 0 as needs_dump;
-- View dirty state by table
SELECT table_name,
pg_table_dirty_bytes(table_name) as dirty_bytes
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY dirty_bytes DESC;
Restore Operations¶
Basic Restore Command¶
# Restore full dump
heliosdb-lite restore --input /backups/mydb.heliodump
# Restore specific tables
heliosdb-lite restore --input /backups/mydb.heliodump --tables users,sessions
# Restore to running instance (requires connection params)
heliosdb-lite restore --input /backups/mydb.heliodump --host localhost --port 5432
Restore Modes¶
1. Clean Restore (Default)¶
Drops existing data and restores from the dump.
Warning: This will delete all existing data in the target instance!
2. Append Restore¶
Adds dump data to the existing database without dropping tables. Primary key conflicts will cause errors unless --on-conflict is specified:
# Skip conflicting rows
heliosdb-lite restore --input backup.heliodump --mode append --on-conflict skip
# Update conflicting rows (upsert)
heliosdb-lite restore --input backup.heliodump --mode append --on-conflict update
3. Incremental Restore¶
Restore base dump + incremental dumps in sequence:
# Restore base
heliosdb-lite restore --input /backups/base.heliodump
# Apply incremental dumps in order
heliosdb-lite restore --input /backups/base.heliodump.inc1 --mode append
heliosdb-lite restore --input /backups/base.heliodump.inc2 --mode append
heliosdb-lite restore --input /backups/base.heliodump.inc3 --mode append
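Why order matters in the sequence above can be modeled as a merge: later dumps win on key conflicts, so applying deltas strictly in sequence reproduces the final state. The tombstone encoding (`None` marks a deleted row) is an assumption made for this illustration:

```python
# Base state plus two incremental deltas, applied in order.
base = {"u1": "alice", "u2": "bob"}
inc1 = {"u2": "bob@new", "u3": "carol"}   # u2 updated, u3 inserted
inc2 = {"u1": None}                        # u1 deleted (tombstone)

def apply(state, delta):
    """Apply one incremental delta; later values overwrite earlier ones."""
    out = dict(state)
    for key, value in delta.items():
        if value is None:
            out.pop(key, None)   # tombstone removes the row
        else:
            out[key] = value
    return out

state = base
for delta in [inc1, inc2]:
    state = apply(state, delta)

assert state == {"u2": "bob@new", "u3": "carol"}
```

Applying the deltas out of order (or skipping one) would yield a different, inconsistent state, which is why the incremental files must be restored in sequence.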
Restore Validation¶
# Dry-run mode (verify without restoring)
heliosdb-lite restore --input backup.heliodump --dry-run
# Output:
# Dump file: backup.heliodump
# Version: 3.1.0 (compatible)
# Created: 2025-12-08 10:00:00
# Tables: users, orders, payments, sessions, logs
# Total rows: 250,000
# Compressed size: 12.1 MB
# Uncompressed size: 45.2 MB
# Estimated restore time: ~30 seconds
# Validation: PASSED
Disaster Recovery¶
Automated Backup Strategy¶
Recommended 3-2-1 Backup Strategy:
- 3 copies: production plus two backups
- 2 media types: local disk plus cloud storage
- 1 offsite: cloud or a remote datacenter
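The 3-2-1 rule can be checked mechanically against an inventory of backup copies. This is a generic sketch (the copy-inventory format is invented for the example), useful as a sanity check in a backup audit script:

```python
def satisfies_3_2_1(copies):
    """copies: list of (location, media_type, offsite) tuples."""
    return (
        len(copies) >= 3                               # 3 copies total
        and len({media for _, media, _ in copies}) >= 2  # 2 distinct media types
        and any(offsite for _, _, offsite in copies)     # 1 offsite copy
    )

copies = [
    ("prod-server",   "local-disk", False),
    ("backup-nas",    "local-disk", False),
    ("s3://my-bucket", "cloud",     True),
]
assert satisfies_3_2_1(copies)
assert not satisfies_3_2_1(copies[:2])  # two local copies alone do not qualify
```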
Example DR Workflow¶
Daily Backup Automation¶
#!/bin/bash
# /usr/local/bin/heliosdb-daily-backup.sh
set -e
BACKUP_DIR="/var/backups/heliosdb"
DATE=$(date +%Y%m%d-%H%M%S)
RETENTION_DAYS=30
# Create backup directory
mkdir -p "$BACKUP_DIR/daily"
# Dump database
heliosdb-lite dump \
--output "$BACKUP_DIR/daily/heliosdb-$DATE.heliodump" \
--compress zstd \
--compression-level 3 \
--verbose
# Upload to S3 (optional)
aws s3 cp "$BACKUP_DIR/daily/heliosdb-$DATE.heliodump" \
s3://my-bucket/heliosdb-backups/daily/
# Cleanup old backups
find "$BACKUP_DIR/daily" -name "*.heliodump" -mtime +$RETENTION_DAYS -delete
# Log success
echo "[$DATE] Backup completed successfully" >> "$BACKUP_DIR/backup.log"
Hourly Incremental Backups¶
#!/bin/bash
# /usr/local/bin/heliosdb-hourly-incremental.sh
set -e
BACKUP_DIR="/var/backups/heliosdb/incremental"
DATE=$(date +%Y%m%d-%H%M%S)
BASE_DUMP="$BACKUP_DIR/base.heliodump"
# Create base dump if missing
if [ ! -f "$BASE_DUMP" ]; then
heliosdb-lite dump --output "$BASE_DUMP" --compress zstd
echo "[$DATE] Created base dump" >> "$BACKUP_DIR/backup.log"
else
# Append incremental changes
heliosdb-lite dump --output "$BASE_DUMP" --append --compress zstd
echo "[$DATE] Incremental backup appended" >> "$BACKUP_DIR/backup.log"
fi
# Rotate base dump weekly
if [ "$(date +%u)" -eq 1 ] && [ "$(date +%H)" -eq 0 ]; then
mv "$BASE_DUMP" "$BACKUP_DIR/base-$(date +%Y%m%d).heliodump"
echo "[$DATE] Rotated base dump" >> "$BACKUP_DIR/backup.log"
fi
Disaster Recovery Procedure¶
Scenario: Production server crashes, need to restore from backup
# Step 1: Provision new server
# (Manual step: Create new VM/container)
# Step 2: Install HeliosDB-Lite
wget https://releases.heliosdb.com/v3.1.0/heliosdb-lite-linux-amd64.tar.gz
tar xzf heliosdb-lite-linux-amd64.tar.gz
sudo mv heliosdb-lite /usr/local/bin/
# Step 3: Download latest backup
aws s3 cp s3://my-bucket/heliosdb-backups/daily/heliosdb-latest.heliodump ./
# Step 4: Start in-memory instance
heliosdb-lite start --memory --port 5432 &
DB_PID=$!
sleep 5 # Wait for startup
# Step 5: Restore from backup
heliosdb-lite restore --input heliosdb-latest.heliodump --host localhost --port 5432
# Step 6: Verify restoration
psql -h localhost -p 5432 -c "SELECT COUNT(*) FROM users;"
psql -h localhost -p 5432 -c "SELECT MAX(created_at) FROM orders;"
# Step 7: (Optional) Dump to persistent storage if switching modes
heliosdb-lite dump --output /data/heliosdb-restored.heliodump
# Step 8: Update application connection strings
# (Manual step: Update config/environment variables)
echo "Disaster recovery completed at $(date)"
Testing DR Procedures¶
Monthly DR Drill:
#!/bin/bash
# /usr/local/bin/heliosdb-dr-test.sh
set -e
TEST_DIR="/tmp/heliosdb-dr-test-$(date +%s)"
mkdir -p "$TEST_DIR"
echo "Starting DR test..."
# 1. Download latest backup
aws s3 cp s3://my-bucket/heliosdb-backups/daily/heliosdb-latest.heliodump "$TEST_DIR/"
# 2. Start test instance
heliosdb-lite start --memory --port 15432 --data-dir "$TEST_DIR/db" &
TEST_PID=$!
sleep 5
# 3. Restore backup
heliosdb-lite restore --input "$TEST_DIR/heliosdb-latest.heliodump" --host localhost --port 15432
# 4. Run validation queries (-t -A for bare, unformatted values)
psql -h localhost -p 15432 -t -A -c "SELECT COUNT(*) FROM users;" > "$TEST_DIR/user_count.txt"
psql -h localhost -p 15432 -t -A -c "SELECT COUNT(*) FROM orders;" > "$TEST_DIR/order_count.txt"
# 5. Compare with production counts (fetch via monitoring API)
PROD_USERS=$(curl -s https://monitoring.example.com/metrics/user_count)
TEST_USERS=$(cat "$TEST_DIR/user_count.txt")
if [ "$PROD_USERS" -eq "$TEST_USERS" ]; then
echo "DR Test PASSED: User counts match"
else
echo "DR Test FAILED: User count mismatch (prod: $PROD_USERS, test: $TEST_USERS)"
exit 1
fi
# 6. Cleanup
kill $TEST_PID
rm -rf "$TEST_DIR"
echo "DR test completed successfully"
Automated Backup Scheduling¶
Using Cron¶
# Edit crontab
crontab -e
# Add backup jobs:
# Daily full backup at 2 AM
0 2 * * * /usr/local/bin/heliosdb-daily-backup.sh >> /var/log/heliosdb-backup.log 2>&1
# Hourly incremental backup
0 * * * * /usr/local/bin/heliosdb-hourly-incremental.sh >> /var/log/heliosdb-incremental.log 2>&1
# Weekly DR test on Sundays at 3 AM
0 3 * * 0 /usr/local/bin/heliosdb-dr-test.sh >> /var/log/heliosdb-dr-test.log 2>&1
Using systemd Timers¶
Create /etc/systemd/system/heliosdb-backup.service:
[Unit]
Description=HeliosDB Daily Backup
After=network.target
[Service]
Type=oneshot
User=heliosdb
ExecStart=/usr/local/bin/heliosdb-daily-backup.sh
StandardOutput=journal
StandardError=journal
Create /etc/systemd/system/heliosdb-backup.timer:
[Unit]
Description=HeliosDB Daily Backup Timer
Requires=heliosdb-backup.service
[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
[Install]
WantedBy=timers.target
Enable and start:
sudo systemctl enable heliosdb-backup.timer
sudo systemctl start heliosdb-backup.timer
sudo systemctl list-timers # Verify
Configuration-Based Scheduling¶
Add to config.toml:
[dump]
# Auto-dump schedule (cron syntax)
schedule = "0 */6 * * *" # Every 6 hours
# Auto-dump when WAL size exceeds threshold (1GB)
wal_size_threshold = 1073741824
# Default compression
compression = "zstd"
compression_level = 3
# Backup directory
backup_dir = "/var/backups/heliosdb"
# Retention policy
retention_days = 30
# Notification on failure
notify_email = "ops@example.com"
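The `wal_size_threshold` setting implies a simple trigger: a dump is due when either the schedule fires or the WAL grows past the configured size. A sketch of that decision (the function name and shape are invented for illustration):

```python
# Matches wal_size_threshold = 1073741824 in config.toml (exactly 1 GiB)
WAL_SIZE_THRESHOLD = 1 << 30

def dump_due(wal_bytes, schedule_fired):
    """Auto-dump fires on the cron schedule OR when the WAL exceeds the threshold."""
    return schedule_fired or wal_bytes >= WAL_SIZE_THRESHOLD

assert WAL_SIZE_THRESHOLD == 1073741824
assert dump_due(2 * 1024**3, schedule_fired=False)        # WAL over threshold
assert not dump_due(100 * 1024**2, schedule_fired=False)  # small WAL, no schedule
```

The size trigger bounds how much WAL replay would be needed after a crash, independent of how busy the database is between scheduled dumps.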
Storage Planning¶
Size Estimation¶
Rule of Thumb:
- Uncompressed dump: ~90% of in-memory size
- Compressed dump (zstd): 20-40% of uncompressed
- Incremental dump: 5-15% of full dump (per day)
Example Calculation:
In-memory database: 10 GB
Uncompressed dump: 9 GB
Compressed dump (zstd): 2.7 GB
Daily incremental: ~270 MB
Weekly full: 2.7 GB
Monthly storage (1 weekly + 28 daily): 2.7 GB + (28 * 270 MB) = 10.3 GB
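The calculation above follows directly from the rule-of-thumb ratios and can be packaged as a small estimator (the default ratios below are the midpoints assumed in the worked example, not measured values):

```python
def monthly_storage_gb(in_memory_gb, dump_ratio=0.9, zstd_ratio=0.3,
                       daily_inc_ratio=0.10, weekly_fulls=1, daily_incs=28):
    """Estimate monthly backup storage from the rule-of-thumb ratios."""
    full_gb = in_memory_gb * dump_ratio * zstd_ratio  # compressed full dump
    inc_gb = full_gb * daily_inc_ratio                # one daily incremental
    return weekly_fulls * full_gb + daily_incs * inc_gb

# 10 GB in memory -> 2.7 GB full + 28 * 0.27 GB incrementals ~= 10.3 GB/month
assert round(monthly_storage_gb(10), 1) == 10.3
```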
Storage Requirements Table¶
| Database Size | Compressed Dump | Daily Inc | Monthly Total |
|---|---|---|---|
| 100 MB | 30 MB | 3 MB | 120 MB |
| 1 GB | 300 MB | 30 MB | 1.2 GB |
| 10 GB | 3 GB | 300 MB | 12 GB |
| 100 GB | 30 GB | 3 GB | 120 GB |
| 1 TB | 300 GB | 30 GB | 1.2 TB |
Measuring Actual Dump Size¶
# Dry-run to estimate size
heliosdb-lite dump --output /dev/null --dry-run --compress zstd
# Output:
# Estimated dump size: 2.7 GB (compressed)
# Estimated time: 120 seconds
# Actual dump with size tracking
heliosdb-lite dump --output backup.heliodump --compress zstd --verbose | \
tee >(grep "Dump complete" | awk '{print $NF}')
Performance Optimization¶
Dump Performance Tips¶
- Use compression for I/O-bound systems: compressing before writing reduces disk I/O, which usually wins when the disk, not the CPU, is the bottleneck
- Parallel table dumps: dump independent tables concurrently to overlap I/O and CPU work
- Tune the batch size (config.toml): larger row batches reduce per-batch overhead at the cost of memory
Restore Performance Tips¶
- Disable indexes during bulk restore: rebuilding indexes once after the data loads is faster than maintaining them row by row
- Use uncompressed dumps for repeated restores: skipping decompression speeds up restores that run frequently (e.g., test fixtures)
- Tune the batch insert configuration: larger insert batches amortize per-statement overhead
Benchmarks¶
| Operation | Size | Time | Throughput |
|---|---|---|---|
| Dump (no compression) | 1 GB | 15s | 66 MB/s |
| Dump (lz4) | 1 GB → 400 MB | 22s | 45 MB/s |
| Dump (zstd) | 1 GB → 300 MB | 35s | 28 MB/s |
| Restore (no compression) | 1 GB | 25s | 40 MB/s |
| Restore (lz4) | 400 MB → 1 GB | 30s | 33 MB/s |
| Restore (zstd) | 300 MB → 1 GB | 40s | 25 MB/s |
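The Throughput column in the table is simply the uncompressed size divided by the elapsed time, with MB taken as decimal (1 GB = 1000 MB) and the result truncated. A quick check of that relationship:

```python
def throughput_mb_s(size_mb, seconds):
    """Throughput as reported in the benchmark table: truncated MB/s."""
    return int(size_mb / seconds)

benchmarks = {
    "dump-none":    (1000, 15),  # 1 GB in 15 s
    "dump-lz4":     (1000, 22),
    "dump-zstd":    (1000, 35),
    "restore-none": (1000, 25),
}
assert throughput_mb_s(*benchmarks["dump-none"]) == 66   # matches the table
assert throughput_mb_s(*benchmarks["dump-zstd"]) == 28
```

Note that throughput is measured against the uncompressed size even for compressed dumps, which is why compressed dumps show lower MB/s despite producing much smaller files.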
Verification and Validation¶
Checksum Verification¶
Dump files include CRC32 checksums for integrity:
# Verify dump file integrity
heliosdb-lite verify --input backup.heliodump
# Output:
# Verifying backup.heliodump...
# Magic: OK
# Version: 3.1.0 (compatible)
# Metadata checksum: OK
# Data checksum: OK (CRC32: 0xA1B2C3D4)
# Verification: PASSED
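What the data-checksum step does can be sketched with the standard library's CRC32. The exact byte range HeliosDB covers with the footer checksum is an assumption here; the verification principle (recompute and compare against the stored value) is the same:

```python
import zlib

def footer_crc(data: bytes) -> int:
    """CRC32 over a data section, masked to an unsigned 32-bit value."""
    return zlib.crc32(data) & 0xFFFFFFFF

payload = b"row batch bytes..."
stored = footer_crc(payload)          # value as it would be written to the footer

# Verification recomputes the checksum and compares it with the stored value.
assert footer_crc(payload) == stored
assert footer_crc(payload + b"corruption") != stored  # any bit flip is detected
```

CRC32 detects accidental corruption (disk errors, truncated transfers) but is not cryptographic; it offers no protection against deliberate tampering.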
Data Validation¶
Compare row counts after restore:
#!/bin/bash
# validate-restore.sh
TABLES=("users" "orders" "products" "sessions")
for table in "${TABLES[@]}"; do
# Count in dump
DUMP_COUNT=$(heliosdb-lite inspect --input backup.heliodump --table "$table" | grep "Row count" | awk '{print $3}')
# Count in restored database
RESTORE_COUNT=$(psql -h localhost -p 5432 -t -c "SELECT COUNT(*) FROM $table")
if [ "$DUMP_COUNT" -eq "$RESTORE_COUNT" ]; then
echo "✓ $table: $RESTORE_COUNT rows (match)"
else
echo "✗ $table: MISMATCH (dump: $DUMP_COUNT, restore: $RESTORE_COUNT)"
exit 1
fi
done
echo "All tables validated successfully"
Schema Validation¶
-- Compare schema after restore
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
-- Verify indexes
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public'
ORDER BY tablename, indexname;
Troubleshooting¶
Issue: Dump Fails with "Out of Memory"¶
Symptoms: the dump process is killed or exits with an out-of-memory error partway through a large table.
Solutions:
- Reduce the batch size so fewer rows are buffered at once
- Dump large tables individually with --tables
- Use streaming mode to avoid buffering entire tables in memory
Issue: Restore Fails with "Version Incompatible"¶
Symptoms: restore aborts immediately with a version-incompatibility error after reading the dump header.
Solutions:
- Upgrade the dump to the current format using a newer heliosdb-lite release
- Restore with a heliosdb-lite version matching the version stamped in the dump header
Issue: Corrupted Dump File¶
Symptoms: verification fails with a checksum mismatch, or restore aborts mid-file with a read error.
Solutions:
- Verify file integrity with heliosdb-lite verify to locate the failing section
- Attempt partial recovery by restoring only the tables that verify cleanly
- Fall back to the previous known-good backup
Real-World Examples¶
Example 1: E-commerce Site Backup¶
Requirements:
- 24/7 uptime
- Point-in-time recovery within 1 hour
- 30-day retention
Solution:
# Daily full backup (2 AM, low traffic)
0 2 * * * heliosdb-lite dump --output /backups/daily/full-$(date +\%Y\%m\%d).heliodump --compress zstd
# Hourly incremental
0 * * * * heliosdb-lite dump --output /backups/hourly/inc-$(date +\%Y\%m\%d-\%H).heliodump --append --compress lz4
# Upload to S3 (real-time)
*/5 * * * * aws s3 sync /backups/hourly s3://backups/hourly/
# Cleanup old backups
0 3 * * * find /backups/daily -mtime +30 -delete
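The retention step (`find -mtime +30 -delete`) keeps backups whose age is within the window and deletes the rest. The same policy in a testable sketch (file list and names are invented for the example):

```python
from datetime import datetime, timedelta

def expired(files, now, retention_days=30):
    """Return backup names older than the retention window,
    i.e. roughly what `find -mtime +30 -delete` would remove."""
    cutoff = now - timedelta(days=retention_days)
    return [name for name, mtime in files if mtime < cutoff]

now = datetime(2025, 12, 8)
files = [
    ("full-20251207.heliodump", datetime(2025, 12, 7)),   # 1 day old: keep
    ("full-20251101.heliodump", datetime(2025, 11, 1)),   # 37 days old: delete
]
assert expired(files, now) == ["full-20251101.heliodump"]
```

Note that `find -mtime` counts in whole 24-hour periods, so the shell version's boundary behavior differs slightly from a naive date comparison.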
Example 2: Development Environment¶
Requirements:
- Fast reset to a clean state
- Minimal storage
Solution:
# Create seed data dump once
heliosdb-lite dump --output /fixtures/seed-data.heliodump --compress zstd
# Reset before each test run
heliosdb-lite restore --input /fixtures/seed-data.heliodump --mode clean
Example 3: Analytics Pipeline¶
Requirements:
- Export processed data daily
- Retain for 90 days
Solution:
#!/bin/bash
# export-analytics.sh
DATE=$(date +%Y%m%d)
# Export analytics tables
heliosdb-lite dump \
--output "/analytics/exports/analytics-$DATE.heliodump" \
--tables "daily_metrics,user_aggregates,revenue_summary" \
--compress zstd \
--compression-level 9
# Upload to data warehouse
rclone copy "/analytics/exports/analytics-$DATE.heliodump" "s3:data-warehouse/analytics/"
# Cleanup
find /analytics/exports -mtime +90 -delete
See Also¶
- In-Memory Mode Guide - In-memory database operations
- CLI Reference - Complete command reference
- Configuration Reference - Dump/restore settings
- Disaster Recovery Guide - DR best practices
- Performance Tuning - Optimization techniques
Version: 3.1.0 Last Updated: 2025-12-08 Maintained by: HeliosDB Team