WhatsApp CRM SaaS — Production Deployment

Operational documentation for the WhatsApp CRM SaaS platform running on OVH VPS at app.queunir.com


Table of Contents

  1. Server Access
  2. Server Specifications
  3. Architecture Overview
  4. Gateway Service
  5. Nginx Configuration
  6. Database
  7. Docker Slices
  8. File System Layout
  9. API Reference
  10. Frontend
  11. Monitoring & Telemetry
  12. Provisioning & Orchestration
  13. Billing (Stripe)
  14. Admin Dashboard
  15. Deployment & Updates
  16. Troubleshooting
  17. Common Commands
  18. Per-Slice Database Schema
  19. Complete Account & Data Wipe

Server Access

Item Details
Provider OVH Cloud (VPS-5)
IP 192.99.145.61
Location Canada East (Beauharnois)
SSH alias ssh ovh
SSH user ubuntu
SSH key ~/.ssh/ovh_vps
SSH command ssh -i ~/.ssh/ovh_vps ubuntu@192.99.145.61
Public URL https://app.queunir.com
OS Ubuntu 25.04

Server Specifications

Resource Value
CPU 16 vCores
RAM 61 GB
Swap 31 GB
Disk 339 GB SSD NVMe
Bandwidth 2.5 Gbps unlimited
Cost $40.40/month

Capacity Estimates

Metric Value
Per-slice memory budget ~400-500 MB
Shared services overhead ~2 GB
Comfortable max slices 60-80
Aggressive max slices ~130
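
The capacity rows follow from the server specs: subtract the shared-services overhead from total RAM and divide by the per-slice budget. A quick sanity check, assuming the 500 MB upper end of the per-slice budget (figures taken from the tables above):

```shell
# Aggressive ceiling: (total RAM - shared overhead) / per-slice budget
RAM_MB=$((61 * 1024))       # 61 GB total RAM
OVERHEAD_MB=$((2 * 1024))   # ~2 GB shared services
SLICE_MB=500                # upper end of the per-slice budget
echo $(( (RAM_MB - OVERHEAD_MB) / SLICE_MB ))   # prints 120
```

This lands near the ~130 aggressive figure; the comfortable 60-80 range leaves headroom for Chromium memory spikes and swap pressure.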

Architecture Overview

                     Internet
                        |
                  ┌─────┴─────┐
                  │   nginx   │  (SSL termination, static files, proxying)
                  │ :443/:80  │
                  └─────┬─────┘
                        |
         ┌──────────────┼──────────────┐
         |              |              |
   Static Frontend   Gateway API    SSE Events
   /var/www/app/     port 3000      /api/events
         |              |              |
         |         ┌────┴────┐         |
         |         │ Gateway │─────────┘
         |         │ Service │
         |         │         │
         |         │ - Auth  │
         |         │ - Proxy │
         |         │ - Admin │
         |         │ - Billing│
         |         │ - Monitor│
         |         └────┬────┘
         |              |
         |    ┌─────────┼─────────┐
         |    |         |         |
         | ┌──┴──┐  ┌──┴──┐  ┌──┴──┐
         | │Slice│  │Slice│  │Slice│  ...
         | │  1  │  │  2  │  │  N  │
         | │:5001│  │:5003│  │:500X│
         | └──┬──┘  └──┬──┘  └──┬──┘
         |    |         |         |
         |    └─────────┼─────────┘
         |              |
         |    ┌─────────┴─────────┐
         |    │   PostgreSQL      │
         |    │   (shared)        │
         |    │                   │
         |    │ wank_saas (gateway)│
         |    │ wa_slice_1        │
         |    │ wa_slice_2        │
         |    │ wa_slice_N ...    │
         |    └───────────────────┘
         |
   User's browser loads static
   frontend, all /api/* and /auth/*
   calls go through gateway

Request Flow

  1. User visits https://app.queunir.com
  2. Nginx serves the static React SPA from /var/www/app/
  3. Frontend calls GET /auth/me to check session
  4. If authenticated, all /api/* requests are proxied: nginx -> gateway (port 3000) -> user’s slice (port 500X)
  5. Real-time events use SSE (GET /api/events) — plain HTTP, no WebSocket upgrade needed. The gateway connects to each slice’s Socket.io server as a client on localhost, then re-emits events to the browser as an SSE stream.

Gateway Service

The gateway is the central Node.js/Express service that handles authentication, API proxying, billing, monitoring, and administration.

Service Configuration

Item Details
Process Node.js (ES2020/CommonJS)
Systemd unit wank-gateway.service
Listen 127.0.0.1:3000 (localhost only)
Working directory /home/ubuntu/gateway/
Source code /home/ubuntu/gateway/src/
Compiled output /home/ubuntu/gateway/dist/
Environment file /home/ubuntu/gateway/.env

Systemd Unit File

# /etc/systemd/system/wank-gateway.service
[Unit]
Description=WANK SaaS Gateway
After=network.target postgresql.service

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/gateway
ExecStart=/usr/bin/node --env-file=.env dist/index.js
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Service Management

# Status
sudo systemctl status wank-gateway

# Restart
sudo systemctl restart wank-gateway

# View logs
sudo journalctl -u wank-gateway -f
sudo journalctl -u wank-gateway --since "1 hour ago"

# Rebuild after code changes
cd /home/ubuntu/gateway && npx tsc && sudo systemctl restart wank-gateway

Environment Variables (.env)

Variable Purpose
NODE_ENV production
PORT 3000
DB_HOST localhost
DB_PORT 5432
DB_NAME wank_saas
DB_USER wank_gateway
DB_PASSWORD Gateway database password
ADMIN_KEY Admin API key for CLI/orchestrator access
SLICE_DB_PASSWORD Password for per-slice database access
STRIPE_SECRET_KEY Stripe API secret (optional — billing disabled if empty)
STRIPE_WEBHOOK_SECRET Stripe webhook signing secret
STRIPE_PRICE_ID Stripe price ID for the subscription
APP_URL https://app.queunir.com
CORS_ORIGIN Allowed CORS origin

Source Files

File Purpose
src/index.ts Express app setup, middleware, route mounting
src/sse.ts SSE endpoint, SliceConnector (socket.io-client → SSE bridge)
src/auth.ts Registration, login, logout, session management (/auth/*)
src/proxy.ts Session validation and API request proxying to correct slice
src/admin.ts Admin dashboard API, slice management, telemetry endpoints (/admin/*)
src/billing.ts Stripe checkout, portal, webhook handling (/billing/*)
src/orchestrator.ts Slice provisioning and destruction API (/orchestrator/*)
src/provisioner.ts Core provisioning logic — creates DB, schema, Docker container, assigns slices
src/monitor.ts Health monitoring loop (60s interval) — per-slice and server telemetry
src/health.ts Gateway health check endpoint (/health)
src/database.ts PostgreSQL connection pool configuration

Dependencies

Express 4.x, cors, helmet, morgan, cookie-parser, http-proxy-middleware, pg, bcryptjs, stripe


Nginx Configuration

File: /etc/nginx/sites-available/wank-saas

Key Routing Rules

Path Destination Purpose
/ /var/www/app/ (static) React SPA
/api/events http://127.0.0.1:3000 SSE stream (real-time events)
/api/* http://127.0.0.1:3000 API proxy (gateway -> slice)
/auth/* http://127.0.0.1:3000 Authentication endpoints
/admin/* http://127.0.0.1:3000 Admin dashboard API
/billing/* http://127.0.0.1:3000 Stripe billing endpoints
/orchestrator/* http://127.0.0.1:3000 Orchestrator API
/health http://127.0.0.1:3000 Health check

SSL

  • Managed by Let’s Encrypt (certbot)
  • Auto-renewal via systemd timer or cron

Editing Nginx

sudo nano /etc/nginx/sites-available/wank-saas
sudo nginx -t                    # Test config
sudo systemctl reload nginx      # Apply changes

Database

PostgreSQL Instance

Item Details
Port 5432 (standard)
Gateway database wank_saas
Gateway user wank_gateway
Gateway schema gateway
Slice databases wa_slice_1, wa_slice_2, … (one per slice)
Slice user wa_slice

Gateway Schema Tables

gateway.users

Column Type Notes
id SERIAL PK
email TEXT UNIQUE Lowercase
password_hash TEXT bcrypt (cost 12)
slice_id INTEGER FK References slices(id)
stripe_customer TEXT Stripe customer ID
subscription_status TEXT trial, active, past_due, cancelled
subscription_id TEXT Stripe subscription ID
subscription_ends_at TIMESTAMPTZ
is_admin BOOLEAN Admin dashboard access
created_at TIMESTAMPTZ
last_login_at TIMESTAMPTZ

gateway.slices

Column Type Notes
id SERIAL PK
port INTEGER UNIQUE Backend port (5001, 5003, …)
status TEXT available, assigned, suspended, destroying
user_id INTEGER FK References users(id)
wa_connected BOOLEAN WhatsApp session active
wa_phone TEXT Connected phone number
last_health_at TIMESTAMPTZ Last successful health check
created_at TIMESTAMPTZ
storage_bytes BIGINT Disk usage

gateway.sessions

Column Type Notes
id TEXT PK 256-bit random hex token
user_id INTEGER FK References users(id)
expires_at TIMESTAMPTZ 30-day rolling expiry
created_at TIMESTAMPTZ

gateway.telemetry (per-slice, time-series)

Column Type Notes
id BIGSERIAL PK
slice_id INTEGER FK References slices(id)
timestamp TIMESTAMPTZ
rss_bytes BIGINT Container memory usage
wa_state TEXT WhatsApp connection state
cpu_percent NUMERIC Container CPU usage
disk_bytes BIGINT Slice disk usage

Index: idx_telemetry_slice_time on (slice_id, timestamp DESC)

gateway.server_telemetry (server-level, time-series)

Column Type Notes
id BIGSERIAL PK
timestamp TIMESTAMPTZ
ram_used_bytes BIGINT
ram_total_bytes BIGINT
swap_used_bytes BIGINT
swap_total_bytes BIGINT
cpu_percent REAL
disk_used_bytes BIGINT
disk_total_bytes BIGINT
active_slices INTEGER
total_slices INTEGER

Index: idx_server_telemetry_time on (timestamp DESC)

Database Commands

# Connect to gateway database
sudo -u postgres psql -d wank_saas

# Query users
sudo -u postgres psql -d wank_saas -c "SELECT id, email, slice_id, subscription_status, is_admin FROM gateway.users;"

# Query slices
sudo -u postgres psql -d wank_saas -c "SELECT id, port, status, user_id, wa_connected, wa_phone FROM gateway.slices ORDER BY id;"

# Query sessions
sudo -u postgres psql -d wank_saas -c "SELECT id, user_id, expires_at FROM gateway.sessions;"

# Connect to a slice database
sudo -u postgres psql -d wa_slice_1

# Count slice data
sudo -u postgres psql -d wa_slice_1 -c "SELECT (SELECT COUNT(*) FROM contacts) as contacts, (SELECT COUNT(*) FROM chats) as chats, (SELECT COUNT(*) FROM messages) as messages;"

Schema File

The gateway schema SQL is at: /home/dev/code/WhatsApp/gateway/schema.sql (on the local dev machine)

The per-slice schema SQL is at: /home/ubuntu/whatsapp/deploy/saas/slice-schema.sql (on the OVH server)


Docker Slices

Each user gets their own Docker container running the WhatsApp CRM backend + Chromium.

Container Naming

Convention Example
Container name wank-slice-{N}
Database name wa_slice_{N}
Data directory /data/slices/{N}/
Port mapping 127.0.0.1:{5001 + (N-1)*2}:3101

Port Allocation

Slice ports use odd numbers starting at 5001, incrementing by 2:

  • Slice 1: port 5001
  • Slice 2: port 5003
  • Slice 3: port 5005

Formula: port = 5001 + (sliceNum - 1) * 2
Reverse: sliceNum = floor((port - 5001) / 2) + 1
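
The two formulas as shell helpers (function names are illustrative, not part of the codebase):

```shell
# Forward: slice number -> host port (odd numbers starting at 5001)
slice_port() { echo $((5001 + ($1 - 1) * 2)); }
# Reverse: host port -> slice number
port_slice() { echo $(( ($1 - 5001) / 2 + 1 )); }

slice_port 3     # prints 5005
port_slice 5003  # prints 2
```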

Container Resource Limits

Resource Limit
Memory 512 MB
CPU 0.5 cores
Shared memory 256 MB
Restart policy unless-stopped

Docker Image

Item Details
Image name wank-slice:latest
Internal port 3101
Session storage /app/.wwebjs_auth (volume mount)
Media storage /app/media (volume mount)

Docker Commands

# List running slice containers
docker ps --filter "name=wank-slice"

# View container stats (RAM, CPU)
docker stats --no-stream --filter "name=wank-slice"

# View logs for a specific slice
docker logs wank-slice-1 --tail 100
docker logs wank-slice-1 -f    # Follow

# Restart a slice
docker restart wank-slice-1

# Stop/start a slice
docker stop wank-slice-1
docker start wank-slice-1

# Inspect container details
docker inspect wank-slice-1

# Health check a slice directly
curl -s http://127.0.0.1:5001/api/health
curl -s http://127.0.0.1:5001/api/status/state

File System Layout

On OVH Server (192.99.145.61)

/home/ubuntu/
├── gateway/                    # Gateway service
│   ├── src/                    # TypeScript source
│   ├── dist/                   # Compiled JavaScript (runs from here)
│   ├── node_modules/
│   ├── package.json
│   ├── tsconfig.json
│   └── .env                    # Environment variables
│
├── whatsapp/
│   └── deploy/
│       └── saas/
│           └── slice-schema.sql  # Per-slice database schema
│
/var/www/
├── app/                        # Frontend static build
│   ├── index.html
│   └── assets/                 # JS, CSS bundles
│
/data/
├── slices/                     # Per-slice persistent data
│   ├── 1/
│   │   ├── media/              # Downloaded WhatsApp media
│   │   └── session/            # WhatsApp Web session auth
│   ├── 2/
│   │   ├── media/
│   │   └── session/
│   └── ...
│
/etc/nginx/
├── sites-available/
│   └── wank-saas               # Nginx site config
├── sites-enabled/
│   └── wank-saas -> ../sites-available/wank-saas
│
/etc/systemd/system/
└── wank-gateway.service        # Gateway systemd unit

On Local Development Machine

/home/dev/code/WhatsApp/
├── gateway/                    # Gateway source (mirrors OVH)
│   ├── src/
│   ├── dist/
│   ├── package.json
│   ├── tsconfig.json
│   └── schema.sql              # Gateway database schema
│
├── frontend/                   # React frontend
│   ├── src/
│   │   ├── SaasApp.tsx         # SaaS entry point
│   │   ├── pages/
│   │   │   ├── AdminDashboard.tsx
│   │   │   └── SaasLoginPage.tsx
│   │   └── contexts/
│   │       └── SaasAuthContext.tsx
│   ├── dist/                   # Build output → deploy to /var/www/app/
│   └── package.json
│
├── docs/
│   ├── saas-deployment.md      # This file
│   ├── SAAS-PLAN.md            # Original architecture plan
│   ├── dom-only-rule.md
│   └── ...
│
└── backend/                    # Original slice backend source

API Reference

Authentication (/auth/*)

Endpoint Method Auth Purpose
/auth/register POST None Create account. Body: { email, password }
/auth/login POST None Authenticate. Body: { email, password }. Sets session cookie.
/auth/logout POST Session Clear session.
/auth/me GET Session Return current user + slice info. Rolling session extension.

Session Details

  • Cookie name: session
  • Cookie flags: httpOnly, secure (production), sameSite: strict
  • Duration: 30 days, rolling (extended on each /auth/me call)
  • Token: 256-bit random hex (64 characters)
  • Password hashing: bcrypt with cost factor 12
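
For reference, a 256-bit random hex token of the same shape as the session ID can be produced with openssl (illustrative only; the gateway generates its tokens in src/auth.ts, presumably via Node's crypto module):

```shell
# 32 random bytes -> 64 hex characters (256 bits)
TOKEN=$(openssl rand -hex 32)
echo "${#TOKEN}"   # prints 64
```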

API Proxy (/api/*)

All requests to /api/* are authenticated via session cookie, then proxied to the user’s assigned slice backend port. The gateway:

  1. Reads the session cookie
  2. Looks up user -> slice_id -> slice port
  3. Proxies the request to http://127.0.0.1:{port}/api/*

SSE Events (/api/events)

Real-time events (QR codes, connection state, messages) are delivered via Server-Sent Events (SSE) — plain HTTP, no WebSocket upgrade required.

The gateway maintains a SliceConnector per active slice. Each connector:

  1. Connects to the slice’s Socket.io server as a socket.io-client on localhost
  2. Caches the latest state (connectionState, QR, phoneNumber, etc.)
  3. On initial connect, fetches real state from slice HTTP API (/api/status/state)
  4. Fans out all wa:* events to connected SSE browser streams
  5. Sends a wa:catchup event with full cached state when a new SSE client connects
  6. Sends :keepalive comments every 15 seconds to prevent proxy timeouts

Idle connectors (no SSE clients) are cleaned up every 60 seconds.

Key file: gateway/src/sse.ts

Health (/health)

Endpoint Method Auth Purpose
/health GET None Gateway health + DB status + slice/user counts
/health/slices GET None Detailed status of all slices

Admin (/admin/*)

Admin endpoints accept either session-based auth (browser, must be admin user) or X-Admin-Key header (API/CLI).

Endpoint Method Auth Purpose
/admin/dashboard GET Session/Key Full dashboard overview (server stats, slices, users, metrics)
/admin/slice/:id GET Session/Key Detailed drill-down for a single slice
/admin/telemetry/server?hours=24 GET Session/Key Server telemetry time-series (max 168h)
/admin/telemetry/slices?hours=24 GET Session/Key Per-slice telemetry time-series (max 168h)
/admin/users GET Key only List all users
/admin/slices POST Key only Create slice entry. Body: { port }
/admin/slices/:id DELETE Key only Delete a slice
/admin/slices/:id/status POST Key only Update slice status. Body: { status }

Orchestrator (/orchestrator/*)

All orchestrator endpoints require X-Admin-Key header.

Endpoint Method Purpose
/orchestrator/provision POST Provision a new slice (create DB, container, register)
/orchestrator/destroy/:sliceId POST Destroy a slice (stop container, drop DB, delete data)
/orchestrator/status GET Overview of all slices + containers + memory

Billing (/billing/*)

Endpoint Method Auth Purpose
/billing/create-checkout POST Session Generate Stripe Checkout session URL
/billing/portal GET Session Get Stripe Customer Portal URL
/billing/status GET Session Current subscription status
/billing/webhook POST Stripe signature Receive Stripe webhook events

Frontend

Build & Deployment

The frontend is a React SPA built with Vite. It is built once and served as static files to all users.

# Local development
cd /home/dev/code/WhatsApp/frontend
npm run dev

# Build for production
npm run build

# Deploy to server
rsync -avz --delete -e "ssh -i ~/.ssh/ovh_vps" \
  dist/ ubuntu@192.99.145.61:/var/www/app/

Entry Point

The SaaS frontend uses SaasApp.tsx as its entry point (not App.tsx). The auth flow:

  1. Loading: Show spinner while checking /auth/me
  2. Not authenticated: Show SaasLoginPage
  3. Authenticated + admin + path /admin: Show AdminDashboard
  4. Authenticated + needs payment: Show PaymentPending (Stripe checkout)
  5. Authenticated + no slice assigned: Show ProvisioningWait (polls every 3s)
  6. Authenticated + has slice: Render the full CRM App

Monitoring & Telemetry

Monitor Loop

The monitor runs every 60 seconds (started 30s after gateway boot) and performs:

  1. Server telemetry: Collects RAM, swap, CPU, disk usage and stores in gateway.server_telemetry
  2. Slice pool management: Assigns available slices to waiting users, provisions new slices if pool is below minimum (1)
  3. Per-slice health checks: For each assigned/available slice:
    • Checks if Docker container is running (auto-restarts if not)
    • Calls /api/health and /api/status/state on each slice
    • Collects container RAM and CPU from docker stats --no-stream
    • Collects per-slice disk usage from du -sb /data/slices/{N}/
    • Stores telemetry in gateway.telemetry
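
The docker stats step implies some unit conversion: `docker stats` reports MemUsage as e.g. "213.4MiB / 512MiB", while gateway.telemetry.rss_bytes wants a byte count. A hypothetical helper showing that conversion (the monitor's actual parsing in src/monitor.ts may differ):

```shell
# Convert a docker stats memory value ("213.4MiB", "1GiB", ...) to bytes
mem_to_bytes() {
  local v=${1%%[A-Za-z]*}   # numeric part ("213.4")
  local u=${1#"$v"}         # unit part ("MiB")
  case $u in
    KiB) awk -v n="$v" 'BEGIN { printf "%d", n * 1024 }' ;;
    MiB) awk -v n="$v" 'BEGIN { printf "%d", n * 1048576 }' ;;
    GiB) awk -v n="$v" 'BEGIN { printf "%d", n * 1073741824 }' ;;
    *)   printf '%d' "${v%.*}" ;;   # assume plain bytes
  esac
}

mem_to_bytes 512MiB   # prints 536870912
```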

Telemetry Retention

  • Data is pruned every 1 hour
  • Retention period: 7 days
  • Both gateway.telemetry and gateway.server_telemetry are pruned

Auto-Recovery

  • If a container is not running, the monitor attempts docker start wank-slice-{N}
  • If a user has no slice assigned and one becomes available, it is automatically assigned
  • If available slices drop below the minimum (1), a new slice is automatically provisioned

Provisioning & Orchestration

Automatic Provisioning

The provisioner ensures at least 1 warm (available) slice exists at all times. When a user registers and no payment is required (Stripe disabled), they are immediately assigned an available slice. When available slices run out, a new one is provisioned automatically within ~60 seconds.

Provisioning Steps

  1. Calculate next available port (5001, 5003, 5005, …)
  2. Create PostgreSQL database: wa_slice_{N}
  3. Apply slice schema from /home/ubuntu/whatsapp/deploy/saas/slice-schema.sql
  4. Grant permissions to wa_slice user
  5. Create data directories: /data/slices/{N}/media/ and /data/slices/{N}/session/
  6. Start Docker container with resource limits, volume mounts, and database URL
  7. Register in gateway.slices table
  8. Wait for health check (up to 60 seconds, polling every 2s)
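
Every identifier the steps above touch is derived from the slice number alone. A small sketch of those derivations (names follow the conventions documented in Docker Slices):

```shell
# Derive everything provisioning touches from a single slice number
N=4
PORT=$((5001 + (N - 1) * 2))
DB="wa_slice_$N"
CONTAINER="wank-slice-$N"
DATA_DIR="/data/slices/$N"
echo "$PORT $DB $CONTAINER $DATA_DIR"   # prints: 5007 wa_slice_4 wank-slice-4 /data/slices/4
```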

Destruction Steps

  1. Stop Docker container
  2. Remove Docker container
  3. Drop slice database
  4. Delete data directory
  5. Unlink user from slice (if assigned)
  6. Delete from gateway.slices table

Slice Lifecycle

[Provisioned/Available] ──signup──> [Assigned] ──cancel──> [Destroying]
                                        |                       |
                                        | payment fail          v
                                        v                  [Destroyed]
                                   [Suspended]
                                        |
                                        | payment restored
                                        v
                                   [Assigned]

Billing (Stripe)

Configuration

Stripe billing is optional. If STRIPE_SECRET_KEY is not set in .env, billing is disabled and users get slices immediately on registration (trial mode).

Subscription Flow

  1. User registers at /auth/register
  2. If Stripe enabled: frontend shows payment page, redirects to Stripe Checkout
  3. Stripe processes payment, sends checkout.session.completed webhook
  4. Gateway updates user to subscription_status: 'active'
  5. Gateway assigns an available slice to the user

Webhook Events Handled

Event Action
checkout.session.completed Set status to active, assign slice
invoice.payment_succeeded Update subscription end date, confirm active
invoice.payment_failed Set status to past_due
customer.subscription.deleted Set status to cancelled, 7-day grace period
customer.subscription.updated Sync status (active/past_due/cancelled/trial)

Pricing

Item Price
Standard $9.00/month
Launch discount $4.50/month (50%)

Admin Dashboard

Access

The dashboard is served at https://app.queunir.com/admin and requires a logged-in session for a user with is_admin = true (see "Making a User Admin" below). The underlying /admin/* API endpoints also accept the X-Admin-Key header for CLI use.

Dashboard Panels

  1. Stats Bar: Total users, active subscriptions, trial users, assigned/available/total slices, total contacts, total messages
  2. Server Resources: RAM, swap, CPU, disk — progress bars with percentages and load averages
  3. Capacity Planning: Estimated max slices, headroom indicator based on current resource usage
  4. Slice Grid: Expandable cards showing per-slice status, user, phone number, RAM/CPU/disk, contacts/chats/messages, with drill-down for 24h telemetry sparklines
  5. Users Table: All users with email, subscription status, slice assignment, phone number, last login, Stripe customer link

Making a User Admin

sudo -u postgres psql -d wank_saas -c \
  "UPDATE gateway.users SET is_admin = true WHERE email = 'user@example.com';"

Deployment & Updates

Deploying Gateway Changes

# 1. Edit source locally
cd /home/dev/code/WhatsApp/gateway/src/

# 2. Sync to server
rsync -avz --exclude='node_modules' --exclude='dist' --exclude='.env' \
  -e "ssh -i ~/.ssh/ovh_vps" \
  /home/dev/code/WhatsApp/gateway/src/ \
  ubuntu@192.99.145.61:/home/ubuntu/gateway/src/

# 3. SSH and rebuild
ssh ovh
cd /home/ubuntu/gateway && npx tsc

# 4. Restart gateway
sudo systemctl restart wank-gateway

# 5. Verify
sudo systemctl status wank-gateway
curl -s http://127.0.0.1:3000/health | python3 -m json.tool

Deploying Frontend Changes

# 1. Build locally
cd /home/dev/code/WhatsApp/frontend
npm run build

# 2. Deploy to server
rsync -avz --delete -e "ssh -i ~/.ssh/ovh_vps" \
  dist/ ubuntu@192.99.145.61:/var/www/app/

Deploying Slice Image Updates

# On the OVH server
# 1. Build new image (from the WhatsApp backend source)
docker build -t wank-slice:latest .

# 2. Rolling restart of all slices
for container in $(docker ps --filter "name=wank-slice" --format "{{.Names}}"); do
  echo "Restarting $container..."
  docker stop $container
  docker rm $container
  # Re-create with the same config (via the orchestrator; the monitor's
  # auto-restart only runs `docker start` and cannot revive a removed container)
done

Troubleshooting

Gateway Won’t Start

# Check logs
sudo journalctl -u wank-gateway --since "5 min ago"

# Common issues:
# - TypeScript not compiled: cd /home/ubuntu/gateway && npx tsc
# - Missing .env: check /home/ubuntu/gateway/.env exists
# - DB connection: sudo -u postgres psql -d wank_saas (verify DB exists)
# - Port conflict: lsof -i :3000

Slice Container Not Running

# Check container status
docker ps -a --filter "name=wank-slice-1"

# View logs
docker logs wank-slice-1 --tail 50

# Manual restart
docker start wank-slice-1

# If container doesn't exist, provision via API:
curl -X POST http://127.0.0.1:3000/orchestrator/provision \
  -H "X-Admin-Key: YOUR_ADMIN_KEY"

WhatsApp Disconnected

# Check state via slice API
curl -s http://127.0.0.1:5001/api/status/state

# Check in gateway database
sudo -u postgres psql -d wank_saas -c \
  "SELECT id, port, wa_connected, wa_phone, last_health_at FROM gateway.slices;"

# User needs to re-scan QR code at app.queunir.com

Database Connection Issues

# Check PostgreSQL is running
sudo systemctl status postgresql

# Test gateway DB connection
sudo -u postgres psql -d wank_saas -c "SELECT 1;"

# Test slice DB connection
sudo -u postgres psql -d wa_slice_1 -c "SELECT 1;"

# Check connections
sudo -u postgres psql -c "SELECT count(*) FROM pg_stat_activity;"

High Memory Usage

# Check server memory
free -h

# Check per-container memory
docker stats --no-stream --filter "name=wank-slice"

# Check gateway telemetry for trends
sudo -u postgres psql -d wank_saas -c \
  "SELECT timestamp, ram_used_bytes/1073741824.0 as ram_gb, swap_used_bytes/1073741824.0 as swap_gb
   FROM gateway.server_telemetry ORDER BY timestamp DESC LIMIT 10;"

Common Commands

Quick Reference

# SSH to server
ssh ovh

# Gateway service
sudo systemctl status wank-gateway
sudo systemctl restart wank-gateway
sudo journalctl -u wank-gateway -f

# Docker
docker ps --filter "name=wank-slice"
docker stats --no-stream --filter "name=wank-slice"
docker logs wank-slice-1 -f

# Database
sudo -u postgres psql -d wank_saas
sudo -u postgres psql -d wank_saas -c "SELECT * FROM gateway.users;"
sudo -u postgres psql -d wank_saas -c "SELECT * FROM gateway.slices;"

# Nginx
sudo nginx -t
sudo systemctl reload nginx

# Server resources
free -h
df -h
top -bn1 | head -5

# Health checks
curl -s http://127.0.0.1:3000/health | python3 -m json.tool
curl -s http://127.0.0.1:5001/api/health | python3 -m json.tool

# Admin API (requires ADMIN_KEY)
curl -s http://127.0.0.1:3000/admin/dashboard \
  -H "X-Admin-Key: YOUR_KEY" | python3 -m json.tool
curl -s http://127.0.0.1:3000/orchestrator/status \
  -H "X-Admin-Key: YOUR_KEY" | python3 -m json.tool

Per-Slice Database Schema

Each slice gets its own PostgreSQL database (wa_slice_{N}). These are the tables:

Core CRM Data

Table Purpose Key Columns
contacts All WhatsApp contacts wa_id, phone_number, push_name, saved_name, full_name, company, email, notes, location_met, tags[], instagram, tiktok, telegram, birthday, anniversary, detail_tokens (JSONB), media_folder
chats Conversation threads wa_id, contact_id (FK), name, last_message_text, last_message_at, is_pinned, is_archived
messages Full message history wa_id, chat_wa_id, from_wa_id, from_me, body, message_type, has_media, media_url, ack, is_ai_reply, timestamp, reactions (JSONB), search_vector (tsvector)
media_files Downloaded media metadata message_wa_id, chat_wa_id, original_path, thumbnail_path, mime_type, file_size_bytes, storage_state

AI & Automation

Table Purpose Key Columns
ai_config LLM provider settings (single row) provider_name, api_key, base_url, model, max_tokens, temperature
personalities AI personality templates name, emoji, system_prompt, is_default
contact_personalities Per-contact AI assignment contact_wa_id, personality_id (FK), auto_reply_mode, delay_preset, is_paused, custom_traits, known_facts
auto_reply_log AI response history contact_wa_id, personality_id, incoming_message, ai_response, model_used, tokens_used

User Config & Features

Table Purpose Key Columns
connection_state WhatsApp session status (single row) state, phone_number, last_connected_at
user_settings Key-value config key, value (e.g. away_mode, auto_reply_enabled)
reminders Contact follow-up reminders contact_wa_id, reminder_text, due_date, completed, snoozed_until
quick_phrases Message templates phrase_text, sort_order
photo_library User-uploaded photos filename, thumbnail_name, md5_hash, caption
photo_library_sends Photo send audit log library_photo_id, message_wa_id, chat_wa_id, sent_at
chat_imports Chat import history chat_wa_id, filename, messages_imported, batch_id

Schema Files

  • Init schema: database/init/01-schema.sql
  • Migrations: database/migrations/001-*.sql through 016-*.sql
  • Combined for new slices: deploy/saas/slice-schema.sql (on OVH at /home/ubuntu/whatsapp/deploy/saas/slice-schema.sql)

Complete Account & Data Wipe

A user’s data is spread across three server-side layers, plus an optional fourth in Stripe. All three server-side layers must be cleaned for a true wipe.

Layer 1: Gateway Database (wank_saas)

Table Data Action
gateway.sessions Session tokens DELETE WHERE user_id = X
gateway.telemetry Slice metrics DELETE WHERE slice_id = Y
gateway.events Slice events DELETE WHERE slice_id = Y
gateway.slices Slice assignment UPDATE SET status='available', user_id=NULL, wa_connected=false, wa_phone=NULL
gateway.users Account record DELETE WHERE id = X

Layer 2: Slice Database (wa_slice_{N})

Option A — Nuke entire database (if slice will be re-provisioned):

DROP DATABASE wa_slice_{N};

Option B — Wipe all data but keep schema (if slice will be reused):

-- Order matters for foreign keys
DELETE FROM photo_library_sends;
DELETE FROM chat_imports;
DELETE FROM media_files;
DELETE FROM messages;
DELETE FROM auto_reply_log;
DELETE FROM contact_personalities;
DELETE FROM reminders;
DELETE FROM quick_phrases;
DELETE FROM photo_library;
DELETE FROM chats;
DELETE FROM contacts;
UPDATE connection_state SET state = 'disconnected', phone_number = NULL, last_connected_at = NULL WHERE id = 1;
DELETE FROM user_settings;
-- Optionally reset AI config:
UPDATE ai_config SET api_key = NULL WHERE id = 1;

Layer 3: Filesystem (/data/slices/{N}/)

Path Contents Action
/data/slices/{N}/session/ WhatsApp Web Chromium session (auth tokens, cookies, profile) rm -rf /data/slices/{N}/session/*
/data/slices/{N}/media/ All downloaded media files organized by phone number (+PHONE/photos/, /videos/, /audio/, /docs/, /thumbs/) + avatars/ rm -rf /data/slices/{N}/media/*

Layer 4 (optional): Stripe

If user had a Stripe subscription, the customer record persists in Stripe. Delete via Stripe dashboard or API.

Complete Wipe Script (run on OVH)

# Variables
USER_ID=20          # gateway.users.id
SLICE_ID=39         # gateway.slices.id
SLICE_NUM=1         # container number (derived from the port)
DB_NAME=wa_slice_1  # slice database name

# 1. Stop the WhatsApp session inside the container
PORT=$((5001 + (SLICE_NUM - 1) * 2))
curl -s -X POST http://127.0.0.1:$PORT/api/status/logout

# 2. Wipe slice database
sudo -u postgres psql -d $DB_NAME -c "
  DELETE FROM photo_library_sends;
  DELETE FROM chat_imports;
  DELETE FROM media_files;
  DELETE FROM messages;
  DELETE FROM auto_reply_log;
  DELETE FROM contact_personalities;
  DELETE FROM reminders;
  DELETE FROM quick_phrases;
  DELETE FROM photo_library;
  DELETE FROM chats;
  DELETE FROM contacts;
  UPDATE connection_state SET state='disconnected', phone_number=NULL, last_connected_at=NULL WHERE id=1;
  DELETE FROM user_settings;
"

# 3. Wipe filesystem data
sudo rm -rf /data/slices/$SLICE_NUM/session/*
sudo rm -rf /data/slices/$SLICE_NUM/media/*

# 4. Restart container (fresh state)
docker restart wank-slice-$SLICE_NUM

# 5. Wipe gateway records
sudo -u postgres psql -d wank_saas -c "
  DELETE FROM gateway.sessions WHERE user_id = $USER_ID;
  DELETE FROM gateway.telemetry WHERE slice_id = $SLICE_ID;
  DELETE FROM gateway.events WHERE slice_id = $SLICE_ID;
  UPDATE gateway.slices SET user_id = NULL, status = 'available',
    wa_connected = false, wa_phone = NULL WHERE id = $SLICE_ID;
  DELETE FROM gateway.users WHERE id = $USER_ID;
"

Existing POST /api/status/destroy Endpoint (Incomplete)

The slice backend has a destroyAll endpoint in statusController.ts that attempts to wipe data, but it’s missing several tables:

Wiped Missing
media_files reminders
messages quick_phrases
contact_personalities photo_library
chats photo_library_sends
contacts chat_imports
auto_reply_log user_settings
connection_state (reset) ai_config (API key still there!)
media folder (filesystem) session files (/app/.wwebjs_auth/)

This endpoint also does NOT touch the gateway database at all. It should NOT be relied on for a full wipe.

What Happens If You Only Wipe Gateway Users

This is the mistake to avoid. If you only delete from gateway.users:

  • User can re-register with the same email
  • They get assigned to the SAME slice (if it’s still marked available with their old data)
  • The slice still has their old WhatsApp session → auto-connects to the old phone
  • All their old contacts, messages, media are still there
  • It looks like nothing was wiped

You MUST wipe all three layers for a clean account deletion.