Docker Deployment Guide
ArchiTools internal reference -- containerized deployment on the on-premise Ubuntu server.
Overview
ArchiTools runs as a single Docker container behind Nginx Proxy Manager on the internal network. The deployment pipeline is:
```
Developer pushes to Gitea
  --> Portainer webhook triggers stack redeploy (or Watchtower detects image change)
  --> Docker builds multi-stage image
  --> Container starts on port 3000
  --> Nginx Proxy Manager routes tools.internal --> localhost:3000
  --> Users access via browser
```
The container runs a standalone Next.js production server. No Node.js process manager (PM2, forever) is needed -- the container runtime handles restarts via `restart: unless-stopped`.
Dockerfile
Multi-stage build that produces a minimal production image.
```dockerfile
# Stage 1: Dependencies
# Note: a full `npm ci` (not --only=production) is needed here, because the
# builder stage reuses this node_modules and `npm run build` requires dev
# dependencies (TypeScript, type packages). The final image stays minimal
# regardless, since the runner copies only the traced standalone output.
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Stage 3: Runner
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
CMD ["node", "server.js"]
```
Stage Breakdown
| Stage | Base | Purpose | Output |
|---|---|---|---|
| `deps` | `node:20-alpine` | Install npm dependencies | `node_modules/` |
| `builder` | `node:20-alpine` | Compile TypeScript, build Next.js bundle | `.next/standalone/`, `.next/static/`, `public/` |
| `runner` | `node:20-alpine` | Minimal runtime image with non-root user | Final image (~120 MB) |
Why Multi-Stage
- The `deps` stage caches `node_modules` independently of source code changes. If only application code changes, Docker reuses the cached dependency layer.
- The `builder` stage contains all dev dependencies and source files but is discarded after the build.
- The `runner` stage contains only the standalone server output, static assets, and public files. No `node_modules` directory, no source code, no dev tooling.
Security Notes
- The `nextjs` user (UID 1001) is a non-root system user. The container never runs as root.
- Alpine Linux has a minimal attack surface. No shell utilities beyond BusyBox basics.
- The `NODE_ENV=production` flag disables React development warnings, enables Next.js production optimizations, and prevents accidental dev-mode behavior.
next.config.ts Requirements
The standalone output mode is mandatory for the Docker deployment. Without it, Next.js expects the full `node_modules` directory at runtime.

```ts
// next.config.ts
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  output: 'standalone',
  // Required for Docker: trust the reverse proxy headers
  // so that Next.js resolves the correct protocol and host
  experimental: {
    // If needed in future Next.js versions
  },
};

export default nextConfig;
```
What output: 'standalone' Does
- Traces all required Node.js dependencies at build time.
- Copies only the needed files into `.next/standalone/`.
- Generates a self-contained `server.js` that starts a production HTTP server.
- Eliminates the need for `node_modules` in the runtime image.
The standalone output does not include the `public/` or `.next/static/` directories. These must be copied explicitly, as the Dockerfile above does.
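As a quick pre-build sanity check, the expected standalone artifacts can be verified from the repository root after `npm run build`. This is a minimal sketch (the `check` helper is hypothetical); the paths mirror the `COPY` lines in the Dockerfile above:

```shell
# Sanity-check the Next.js standalone output before building the image.
# Run from the repository root after `npm run build`.
check() {
  if [ -e "$1" ]; then echo "ok: $1"; else echo "MISSING: $1"; fi
}
check .next/standalone/server.js
check .next/static
check public
```

If any line reports `MISSING`, the Docker build would produce a broken image (typically the `output: 'standalone'` setting is absent).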
docker-compose.yml
```yaml
version: '3.8'

services:
  architools:
    build: .
    container_name: architools
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - NEXT_PUBLIC_APP_URL=${APP_URL:-http://localhost:3000}
    env_file:
      - .env
    volumes:
      - architools-data:/app/data
    networks:
      - proxy-network
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

volumes:
  architools-data:

networks:
  proxy-network:
    external: true
```
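The `${APP_URL:-http://localhost:3000}` substitution in the compose file uses shell-style parameter defaulting: Compose falls back to the literal after `:-` when the variable is unset or empty. The same mechanism can be sketched in plain shell:

```shell
# ${VAR:-default} expands to the default when VAR is unset or empty.
unset APP_URL
echo "${APP_URL:-http://localhost:3000}"   # -> http://localhost:3000

APP_URL=https://tools.internal
echo "${APP_URL:-http://localhost:3000}"   # -> https://tools.internal
```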
Field Reference
| Field | Purpose |
|---|---|
| `build: .` | Build from the Dockerfile in the repository root. |
| `container_name: architools` | Fixed name for predictable Portainer/Dozzle references. |
| `restart: unless-stopped` | Auto-restart on crash or server reboot. Only stops if explicitly stopped. |
| `ports: "3000:3000"` | Map container port 3000 to host port 3000. Nginx Proxy Manager connects here. |
| `env_file: .env` | Load environment variables from `.env`. Never committed to Gitea. |
| `volumes: architools-data:/app/data` | Persistent volume for future server-side data. Not used in the localStorage phase. |
| `networks: proxy-network` | Shared Docker network with Nginx Proxy Manager and other services. |
| `labels: watchtower.enable=true` | Opt in to Watchtower automatic image updates. |
The proxy-network Network
All services that Nginx Proxy Manager routes to must be on the same Docker network. This network is created once and shared across all stacks:
```bash
docker network create proxy-network
```
If the network already exists (it should -- other services like Authentik, MinIO, and N8N use it), the `external: true` declaration tells Docker Compose not to create it.
Environment Configuration
.env File
```bash
# ──────────────────────────────────────────
# Application
# ──────────────────────────────────────────
NODE_ENV=production
NEXT_PUBLIC_APP_URL=https://tools.internal
NEXT_PUBLIC_APP_ENV=production

# ──────────────────────────────────────────
# Feature Flags (override defaults from src/config/flags.ts)
# ──────────────────────────────────────────
NEXT_PUBLIC_FLAG_MODULE_REGISTRATURA=true
NEXT_PUBLIC_FLAG_MODULE_PROMPT_GENERATOR=true
NEXT_PUBLIC_FLAG_MODULE_EMAIL_SIGNATURE=true
NEXT_PUBLIC_FLAG_MODULE_AI_CHAT=false

# ──────────────────────────────────────────
# Storage
# ──────────────────────────────────────────
NEXT_PUBLIC_STORAGE_ADAPTER=localStorage

# Future: API backend
# STORAGE_API_URL=http://localhost:4000/api/storage

# Future: MinIO
# MINIO_ENDPOINT=minio.internal
# MINIO_ACCESS_KEY=architools
# MINIO_SECRET_KEY=<secret>
# MINIO_BUCKET=architools

# ──────────────────────────────────────────
# Authentication (Authentik SSO)
# ──────────────────────────────────────────
# AUTHENTIK_ISSUER=https://auth.internal
# AUTHENTIK_CLIENT_ID=architools
# AUTHENTIK_CLIENT_SECRET=<secret>

# ──────────────────────────────────────────
# Email Notifications (Brevo SMTP)
# ──────────────────────────────────────────
BREVO_SMTP_HOST=smtp-relay.brevo.com
BREVO_SMTP_PORT=587
BREVO_SMTP_USER=<brevo-login>
BREVO_SMTP_PASS=<brevo-smtp-key>
NOTIFICATION_FROM_EMAIL=noreply@beletage.ro
NOTIFICATION_FROM_NAME=Alerte Termene
NOTIFICATION_CRON_SECRET=<random-bearer-token>
```
N8N cron setup: Create a workflow with a Cron node (`0 8 * * 1-5`) and an HTTP Request node (POST `https://tools.beletage.ro/api/notifications/digest`, header `Authorization: Bearer <NOTIFICATION_CRON_SECRET>`). The endpoint returns `{ success, totalEmails, errors, companySummary }`. Add the `?test=true` query param to send a test digest with sample data.
Variable Scoping Rules
| Prefix | Available In | Notes |
|---|---|---|
| `NEXT_PUBLIC_*` | Client + server | Inlined into the JavaScript bundle at build time. Visible to users in browser DevTools. Never put secrets here. |
| No prefix | Server only | Available in API routes, middleware, and server components. Used for secrets, credentials, internal URLs. |
Build-Time vs. Runtime
`NEXT_PUBLIC_*` variables are baked into the bundle during `npm run build`. Changing them requires a rebuild. Non-prefixed variables are read at runtime and can be changed by restarting the container.
For Docker, this means:
- `NEXT_PUBLIC_*` changes require rebuilding the image.
- Server-only variables can be changed via the Portainer environment editor and restarting the container.
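The distinction can be illustrated with a toy "build" step in shell (a sketch only -- the generated `artifact.sh` stands in for the compiled Next.js bundle): a value expanded at build time is frozen into the artifact, while a runtime variable is read fresh on every start.

```shell
# "Build": the NEXT_PUBLIC_-style value is expanded NOW and written into
# the artifact; the runtime variable stays an unexpanded reference.
NEXT_PUBLIC_APP_ENV=production
echo "echo baked=$NEXT_PUBLIC_APP_ENV; echo runtime=\$APP_RUNTIME" > artifact.sh

# "Run": only the runtime value responds to the environment.
APP_RUNTIME=first  sh artifact.sh   # baked=production, runtime=first
APP_RUNTIME=second sh artifact.sh   # baked=production, runtime=second
```

Changing `NEXT_PUBLIC_APP_ENV` after the "build" has no effect on `artifact.sh` -- exactly why `NEXT_PUBLIC_*` changes require rebuilding the image.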
Nginx Proxy Manager Setup
Proxy Host Configuration
| Field | Value |
|---|---|
| Domain Names | tools.internal (or tools.beletage.internal, etc.) |
| Scheme | http |
| Forward Hostname / IP | architools (Docker container name, resolved via proxy-network) |
| Forward Port | 3000 |
| Block Common Exploits | Enabled |
| Websockets Support | Enabled (for HMR in dev; harmless in production) |
SSL Configuration
Internal access (self-signed or internal CA):
- In Nginx Proxy Manager, go to SSL Certificates > Add SSL Certificate > Custom.
- Upload the internal CA certificate and key.
- Assign to the `tools.internal` proxy host.
- Browsers on internal machines must trust the internal CA (deployed via group policy or manual install).
External access (Let's Encrypt):
- When the domain becomes publicly resolvable (e.g., `tools.beletage.ro`), switch to Let's Encrypt.
- In Nginx Proxy Manager, go to SSL Certificates > Add SSL Certificate > Let's Encrypt.
- Enter the domain, email, and agree to ToS.
- Nginx Proxy Manager handles renewal automatically.
Security Headers
Add the following in the proxy host's Advanced tab (Custom Nginx Configuration):
```nginx
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

# Content Security Policy
add_header Content-Security-Policy "
  default-src 'self';
  script-src 'self' 'unsafe-inline' 'unsafe-eval';
  style-src 'self' 'unsafe-inline';
  img-src 'self' data: blob:;
  font-src 'self' data:;
  connect-src 'self' https://api.openai.com https://api.anthropic.com;
  frame-ancestors 'self';
" always;
```
Notes on CSP:
- `'unsafe-inline'` and `'unsafe-eval'` are required by Next.js in production. Tighten with nonces if migrating to a stricter CSP in the future.
- `connect-src` includes AI provider API domains for the AI Chat and Prompt Generator modules. Adjust as providers are added or removed.
- `frame-ancestors 'self'` prevents clickjacking (equivalent to `X-Frame-Options: SAMEORIGIN`).
Portainer Deployment
Stack Deployment from Gitea
- In Portainer, go to Stacks > Add Stack.
- Select Repository as the build method.
- Configure:
| Field | Value |
|---|---|
| Name | architools |
| Repository URL | https://gitea.internal/beletage/architools.git |
| Repository reference | refs/heads/main |
| Compose path | docker-compose.yml |
| Authentication | Gitea access token or SSH key |
- Under Environment variables, add all variables from the `.env` file. Portainer stores these securely and injects them at deploy time.
- Enable Auto update with a webhook if desired.
Environment Variable Management
Portainer provides a UI for managing environment variables per stack. Use this for:
- Toggling feature flags without touching the repository.
- Updating server-side secrets (MinIO keys, Authentik credentials) without rebuilding.
- Switching `NEXT_PUBLIC_*` values (requires a stack redeploy to rebuild the image).
Important: `NEXT_PUBLIC_*` variables are build-time constants. Changing them in Portainer requires redeploying the stack (which triggers a rebuild), not just restarting the container.
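One way to confirm which value actually made it into a build is to grep the emitted static output for it. A hedged sketch -- the bundle file below is simulated for illustration, but the same `grep` works against a real `.next/static` tree:

```shell
# Simulate a built chunk containing an inlined NEXT_PUBLIC_ value,
# then locate it the way you would in a real build output.
mkdir -p .next/static/chunks
echo 'var appUrl="https://tools.internal";' > .next/static/chunks/main.js

grep -r "tools.internal" .next/static
```

If the grep finds the old value after a Portainer change, the stack was restarted but never rebuilt.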
Container Monitoring
Portainer provides:
- Container status: running, stopped, restarting.
- Resource usage: CPU, memory, network I/O.
- Logs: stdout/stderr output (same as Dozzle, but accessible from the Portainer UI).
- Console: exec into the container for debugging (use sparingly; the container has minimal tooling).
- Restart/Stop/Remove: Manual container lifecycle controls.
Watchtower Integration
Watchtower monitors Docker containers and automatically updates them when a new image is available.
How It Works with ArchiTools
- The `docker-compose.yml` includes the label `com.centurylinklabs.watchtower.enable=true`.
- Watchtower periodically checks (default: every 24 hours, configurable) whether the image has changed.
- If a new image is detected, Watchtower:
- Pulls the new image.
- Stops the running container.
- Creates a new container with the same configuration.
- Starts the new container.
- Removes the old image (if configured).
Triggering Updates
Automatic (Watchtower polling): Watchtower polls at a configured interval. Suitable for non-urgent updates.
Manual (Portainer): Redeploy the stack from Portainer. This pulls the latest code from Gitea, rebuilds the image, and restarts the container.
Webhook (Portainer): Configure a Portainer webhook URL. Add it as a webhook in Gitea (triggered on push to main). Gitea pushes, Portainer receives the webhook, and redeploys.
Recommended Flow
For ArchiTools, the primary deployment trigger is the Portainer webhook from Gitea:
```
git push origin main
  --> Gitea fires webhook to Portainer
  --> Portainer redeploys the architools stack
  --> Docker rebuilds the image (multi-stage build)
  --> New container starts
  --> Old container removed
```
Watchtower serves as a safety net for cases where the webhook fails, or for updating the base `node:20-alpine` image.
Health Check Endpoint
The application exposes a health check endpoint at `/api/health`.
```ts
// src/app/api/health/route.ts
import { NextResponse } from 'next/server';

export async function GET() {
  return NextResponse.json(
    {
      status: 'healthy',
      timestamp: new Date().toISOString(),
      version: process.env.npm_package_version ?? 'unknown',
      environment: process.env.NODE_ENV ?? 'unknown',
    },
    { status: 200 }
  );
}
```
Usage
- Uptime Kuma: Add a monitor with type HTTP(s), URL `http://architools:3000/api/health`, expected status code `200`. Monitor interval: 60 seconds.
- Docker health check (optional): Add to `docker-compose.yml`:
```yaml
services:
  architools:
    # ... existing config ...
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```
The `start_period` gives Next.js time to start before Docker begins health checking.
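To see what a monitor actually evaluates, the JSON body from `/api/health` can be probed with plain shell. A sketch with no network required -- `is_healthy` is a hypothetical helper and the sample payload mirrors the route above:

```shell
# A monitor passes when the body reports status "healthy".
is_healthy() {
  printf '%s' "$1" | grep -q '"status": *"healthy"'
}

sample='{"status":"healthy","timestamp":"2024-01-01T08:00:00.000Z","version":"unknown","environment":"production"}'
if is_healthy "$sample"; then echo "UP"; else echo "DOWN"; fi
```

In practice Uptime Kuma only needs the `200` status code; inspecting the body this way is mainly useful when debugging from the CLI.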
Logging
Strategy
ArchiTools logs to stdout and stderr. No file-based logging, no log rotation configuration inside the container. Docker captures all stdout/stderr output and makes it available via:
- Dozzle: Real-time log viewer. Access at `dozzle.internal`. Filter by container name `architools`.
- Portainer: Logs tab on the container detail page.
- CLI: `docker logs architools` or `docker logs -f architools` for a live tail.
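The stdout/stderr split is what lets Docker (and viewers like Dozzle) distinguish normal output from errors. A minimal sketch of the convention, with hypothetical `log`/`fail` helpers:

```shell
# Lines on stdout become regular log entries; lines on stderr are
# captured separately and typically highlighted by log viewers.
log()  { echo "info: $*"; }          # -> stdout
fail() { echo "error: $*" >&2; }     # -> stderr

log "digest sent to 4 recipients"
fail "SMTP connection timeout"
```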
Log Levels
| Source | Output | Captured By |
|---|---|---|
| Next.js server | Request logs, compilation warnings | stdout |
| Application `console.log` | Debug information, state changes | stdout |
| Application `console.error` | Errors, stack traces | stderr |
| Unhandled exceptions | Crash traces | stderr |
Structured Logging (Future)
When the application grows beyond simple console output, adopt a structured JSON logger (e.g., `pino`). This enables Dozzle or a future log aggregator to parse, filter, and search log entries by level, module, and context.
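A structured log entry is simply one JSON object per line (approximately pino's output shape). A hedged sketch of what that looks like, with a hypothetical `jlog` helper:

```shell
# One JSON object per line; viewers can then filter by level or module.
jlog() {
  printf '{"level":"%s","module":"%s","msg":"%s"}\n' "$1" "$2" "$3"
}

jlog info  notifications "digest sent"
jlog error smtp          "connection timeout"
```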
Data Persistence Strategy
Current Phase: localStorage
In the current phase, all module data lives in the browser's `localStorage`. The Docker container is stateless -- no server-side data storage. This means:
- No data loss on container restart. Data is in the browser, not the container.
- No backup needed for the container. The volume mount (`architools-data:/app/data`) is provisioned but empty.
- No multi-user data sharing. Each browser has its own isolated data set.
- Export/import is the backup mechanism. Modules provide export buttons that download JSON files.
Future Phase: Server-Side Storage
When the storage adapter switches to `api` or a database backend:
| Concern | Implementation |
|---|---|
| Database | PostgreSQL container on the same Docker network. Volume-mounted for persistence. |
| File storage | MinIO (already running). ArchiTools stores file references in the database, binary objects in MinIO buckets. |
| Backup | Database dumps + MinIO bucket sync. Scheduled via N8N or cron. |
| Volume mount | architools-data:/app/data used for SQLite (if chosen as interim DB) or temp files. |
Volume Mount
The `architools-data` volume is defined in `docker-compose.yml` and mounted at `/app/data`. It persists across container restarts and image rebuilds. Currently unused but ready for:
- SQLite database file (interim before PostgreSQL).
- Temporary file processing (document generation, PDF manipulation).
- Cache files if needed.
Build and Deploy Workflow
Full Lifecycle
```
1. Developer pushes to Gitea (main branch)
   |
2. Gitea fires webhook to Portainer
   |
3. Portainer pulls latest code from Gitea repository
   |
4. Docker builds multi-stage image:
   a. Stage 1 (deps): npm ci
   b. Stage 2 (builder): npm run build (Next.js standalone)
   c. Stage 3 (runner): minimal image with server.js
   |
5. Portainer stops the running container
   |
6. Portainer starts a new container from the fresh image
   |
7. Health check passes (GET /api/health returns 200)
   |
8. Nginx Proxy Manager routes traffic to the new container
   |
9. Uptime Kuma confirms service is up
   |
10. Old image is cleaned up (Watchtower or manual docker image prune)
```
Build Time Expectations
| Stage | Typical Duration | Notes |
|---|---|---|
| `deps` (cached) | <5 seconds | Only re-runs if `package.json` or `package-lock.json` changes. |
| `deps` (fresh) | 30--60 seconds | Full `npm ci` with all dependencies. |
| `builder` | 30--90 seconds | Next.js build. Depends on module count and TypeScript compilation. |
| `runner` | <5 seconds | Just file copies. |
| Total (cached deps) | ~1--2 minutes | Typical deployment time. |
| Total (fresh) | ~2--3 minutes | After dependency changes. |
Rollback
If a deployment introduces a bug:
- In Portainer, stop the current container.
- Redeploy the stack pointing to the previous Gitea commit (change the repository reference to a specific commit SHA or tag).
- Alternatively, if the previous Docker image is still cached locally, restart the container from that image.
Tagging releases in Gitea (`v1.0.0`, `v1.1.0`) makes rollback straightforward.
Development vs. Production Configuration
Comparison
| Aspect | Development | Production |
|---|---|---|
| Command | `npm run dev` | `node server.js` (standalone) |
| Hot reload | Yes (Fast Refresh) | No |
| Source maps | Full | Minimal (production build) |
| `NODE_ENV` | `development` | `production` |
| Storage adapter | `localStorage` | `localStorage` (current), `api` (future) |
| Feature flags | All enabled for testing | Selective per `.env` |
| Error display | Full stack traces in browser | Generic error page |
| CSP headers | None (permissive) | Strict (via Nginx Proxy Manager) |
| SSL | None (`http://localhost:3000`) | Terminated at Nginx Proxy Manager |
| Docker | Not used (direct `npm run dev`) | Multi-stage build, containerized |
| Port | 3000 (direct) | 3000 (container) --> 443 (Nginx) |
Running Development Locally
```bash
# Install dependencies
npm install

# Start dev server
npm run dev

# Access at http://localhost:3000
```
No Docker, no Nginx, no SSL. Just the Next.js dev server.
Testing Production Build Locally
```bash
# Build the production bundle
npm run build

# Start the production server
npm start

# Or test the Docker build
docker build -t architools:local .
docker run -p 3000:3000 --env-file .env architools:local
```
Troubleshooting
Container Fails to Start
Symptom: Container status shows Restarting in Portainer, or `docker ps` shows a restart loop.
Diagnosis:
```bash
docker logs architools
```
Common causes:
| Error | Cause | Fix |
|---|---|---|
| `Error: Cannot find module './server.js'` | `output: 'standalone'` missing from `next.config.ts` | Add `output: 'standalone'` and rebuild. |
| `EACCES: permission denied` | File ownership mismatch | Verify the Dockerfile copies files before switching to `USER nextjs`. |
| `EADDRINUSE: port 3000` | Another container is using port 3000 | Change the host port mapping in `docker-compose.yml` (e.g., `"3001:3000"`). |
| `MODULE_NOT_FOUND` | Dependency not in production deps | Move the dependency from `devDependencies` to `dependencies` in `package.json`. |
Build Fails at npm run build
Symptom: Docker build exits at the builder stage.
Common causes:
| Error | Cause | Fix |
|---|---|---|
| TypeScript errors | Type mismatches in code | Fix TypeScript errors locally before pushing. |
| `ENOMEM` | Not enough memory for the build | Increase the Docker memory limit (a Next.js build can use 1--2 GB). |
| Missing environment variables | `NEXT_PUBLIC_*` required at build time | Pass build args or set defaults in `next.config.ts`. |
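For the `ENOMEM` case, the Node heap ceiling can also be raised for the build step via the standard `NODE_OPTIONS` variable, which every Node process (including `npm run build`) inherits. A sketch -- 4096 MB is an example value, not a recommendation for this server:

```shell
# NODE_OPTIONS is inherited by child Node processes such as `npm run build`.
export NODE_OPTIONS="--max-old-space-size=4096"
echo "NODE_OPTIONS=$NODE_OPTIONS"
```

In the Dockerfile this would be an `ENV NODE_OPTIONS=...` line in the builder stage; the container's memory limit must still exceed the heap size.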
Application Returns 502 via Nginx
Symptom: Browser shows 502 Bad Gateway.
Checklist:
- Is the container running? `docker ps | grep architools`
- Is the container healthy? `docker inspect architools | grep Health`
- Can Nginx reach the container? Both must be on `proxy-network`.
- Is the forward port correct (3000)?
- Is the scheme `http` (not `https` -- SSL terminates at Nginx)?
Static Assets Not Loading (CSS, JS, Images)
Symptom: Page loads but unstyled, or the browser console shows 404 for `/_next/static/*`.
Cause: Missing `COPY --from=builder /app/.next/static ./.next/static` in the Dockerfile.
Fix: Verify both `public/` and `.next/static/` are copied in the runner stage.
Environment Variables Not Taking Effect
Symptom: Feature flag change in Portainer does not change behavior.
Diagnosis:
- If the variable starts with `NEXT_PUBLIC_`: it is baked in at build time. You must redeploy (rebuild the image), not just restart.
- If the variable has no prefix: restart the container. The value is read at runtime.
High Memory Usage
Symptom: Container uses more than expected memory (check Portainer or Netdata).
Typical usage: 100--200 MB for a standalone Next.js server with moderate traffic.
If higher:
- Check for memory leaks in server-side code (API routes, middleware).
- Set a memory limit in `docker-compose.yml`:
```yaml
services:
  architools:
    # ... existing config ...
    deploy:
      resources:
        limits:
          memory: 512M
```
Logs Not Appearing in Dozzle
Symptom: Dozzle shows the container but no log output.
Checklist:
- Is the container actually running (not in a restart loop)?
- Is the application writing to stdout/stderr (not to a file)?
- Is Dozzle configured to monitor all containers on the Docker socket?
Container Networking Issues
Symptom: Container cannot reach other services (MinIO, Authentik, N8N).
Checklist:
- All services must be on the same Docker network (`proxy-network`).
- Use container names as hostnames (e.g., `http://minio:9000`), not `localhost`.
- Verify DNS resolution: `docker exec architools wget -q -O- http://minio:9000/minio/health/live`
Quick Reference
Commands
```bash
# Build image
docker build -t architools .

# Run container
docker run -d --name architools -p 3000:3000 --env-file .env architools

# View logs
docker logs -f architools

# Exec into container
docker exec -it architools sh

# Rebuild and restart (compose)
docker compose down && docker compose up -d --build

# Check health
curl http://localhost:3000/api/health

# Prune old images
docker image prune -f
```
File Checklist
| File | Required | Purpose |
|---|---|---|
| `Dockerfile` | Yes | Multi-stage build definition. |
| `docker-compose.yml` | Yes | Service orchestration, networking, volumes. |
| `.env` | Yes (not committed) | Environment variables. |
| `.dockerignore` | Recommended | Exclude `node_modules`, `.git`, `.next` from the build context. |
| `next.config.ts` | Yes | Must include `output: 'standalone'`. |
| `src/app/api/health/route.ts` | Yes | Health check endpoint. |
.dockerignore
```
node_modules
.next
.git
.gitignore
*.md
docs/
.env
.env.*
```
This reduces the Docker build context size and prevents leaking sensitive files into the image.
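A quick way to reason about what the patterns cover, sketched as a plain-shell matcher. `ignored` is a hypothetical helper, not a Docker feature -- real `.dockerignore` matching is done by the build-context walker, and these globs only approximate it:

```shell
# Approximate the .dockerignore patterns above with shell case globs
# (docs/ listed without the trailing slash so the glob matches).
ignored() {
  for pat in node_modules .next .git .gitignore '*.md' docs .env '.env.*'; do
    case "$1" in $pat) return 0 ;; esac
  done
  return 1
}

ignored .env        && echo ".env is excluded from the build context"
ignored src/app.ts  || echo "application source is included"
```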