Bearer token auth (ADDRESSBOOK_API_KEY) for external tools like avizare.
Supports GET (list/search/filter/by-id), POST (create), PUT (update), DELETE.
Middleware exclusion so it bypasses NextAuth session requirement.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
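A minimal sketch of the bearer check; only the ADDRESSBOOK_API_KEY env var is from the commit, the helper name and error shape are illustrative:

```ts
import { NextRequest, NextResponse } from "next/server";

// Returns a 401 response when the bearer token is missing or wrong,
// or null when the caller may proceed. (Helper name hypothetical.)
export function requireApiKey(req: NextRequest): NextResponse | null {
  const header = req.headers.get("authorization") ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  if (!token || token !== process.env.ADDRESSBOOK_API_KEY) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }
  return null;
}
```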
Filenames with Romanian characters (Ș, Ț, etc.) caused ByteString errors.
Also pass original filename through to extreme mode response.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Next.js middleware buffers the entire request body (10MB default limit)
before the route handler runs. middlewareClientMaxBodySize experimental
flag doesn't work reliably with standalone output.
Solution: exclude api/compress-pdf from middleware matcher so the body
streams directly to the route handler. Auth check moved to a shared
helper (auth-check.ts) called at the start of each route.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
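A sketch of the matcher exclusion, using the standard Next.js negative-lookahead pattern; the repo's exact matcher and middleware body may differ:

```ts
// middleware.ts — run on everything EXCEPT api/compress-pdf (so its body
// streams straight to the route handler) and static assets.
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(req: NextRequest) {
  // ...existing NextAuth/SSO redirect logic...
  return NextResponse.next();
}

export const config = {
  matcher: ["/((?!api/compress-pdf|_next/static|_next/image|favicon.ico).*)"],
};
```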
Previous approach loaded entire raw body (287MB) into RAM via readFile,
then extracted PDF (another 287MB), then read output (287MB) = ~860MB peak.
Docker container was silently OOM-killed -> 500.
New approach:
- parse-upload.ts: scan raw file on disk using 64KB buffer reads (findInFile),
then stream-copy just the PDF portion. Peak memory: ~64KB.
- extreme/route.ts: stream qpdf output directly from disk via Readable.toWeb.
Never loads result into memory.
Total peak memory: ~64KB + qpdf process memory.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
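A sketch of the 64KB buffered scan half of this change; names are illustrative, and the overlap handling is an assumption about how findInFile avoids missing a needle that spans two reads:

```ts
import { open } from "node:fs/promises";

// Scan a file on disk in 64KB reads and return the absolute offset of the
// first occurrence of `needle`, or -1. Keeps needle.length - 1 bytes of
// overlap between reads so a match spanning two chunks is not missed.
export async function findInFile(path: string, needle: Buffer): Promise<number> {
  const fh = await open(path, "r");
  try {
    const chunk = Buffer.alloc(64 * 1024);
    let carry = Buffer.alloc(0); // tail carried over from the previous window
    let carryPos = 0;            // absolute file offset where `carry` starts
    for (;;) {
      const { bytesRead } = await fh.read(chunk, 0, chunk.length);
      if (bytesRead === 0) return -1;
      const window = Buffer.concat([carry, chunk.subarray(0, bytesRead)]);
      const hit = window.indexOf(needle);
      if (hit !== -1) return carryPos + hit;
      const keep = Math.min(needle.length - 1, window.length);
      carryPos += window.length - keep;
      carry = Buffer.from(window.subarray(window.length - keep));
    }
  } finally {
    await fh.close();
  }
}
```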
Busboy's file event never fires in Next.js Turbopack despite the
stream being read correctly (CJS/ESM interop issue). Replace with
manual boundary parsing: stream body to disk chunk-by-chunk, then
extract the PDF part using simple boundary scanning. Tested working
with 1MB+ payloads — streams to disk so memory usage stays constant
regardless of file size.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
req.arrayBuffer() fails with 502 on files >100MB because it tries to
buffer the entire body in memory before the route handler runs.
New approach: busboy streams the multipart body directly to a temp file
on disk — never buffers the whole request in memory. Works for any size.
Shared helper: parse-upload.ts (busboy streaming, 500MB limit, fields).
Both local (qpdf) and cloud (iLovePDF) routes refactored to use it.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
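A sketch of the busboy streaming approach, assuming busboy 1.x's file-event API; note the newer commit above replaced this with manual boundary parsing after the file event failed under Turbopack:

```ts
import busboy from "busboy";
import { createWriteStream } from "node:fs";
import { Readable } from "node:stream";

// Stream the multipart body to a temp file without buffering it in memory.
// Resolves with the uploaded filename once the file is fully on disk.
export function streamUploadToDisk(req: Request, tmpPath: string): Promise<string> {
  return new Promise((resolve, reject) => {
    if (!req.body) return reject(new Error("empty body"));
    const bb = busboy({
      headers: Object.fromEntries(req.headers), // busboy needs content-type
      limits: { fileSize: 500 * 1024 * 1024 },  // 500MB, per the commit
    });
    bb.on("file", (_name, file, info) => {
      const out = createWriteStream(tmpPath);
      file.pipe(out);
      out.on("close", () => resolve(info.filename));
      out.on("error", reject);
    });
    bb.on("error", reject);
    // Bridge the web ReadableStream to a Node stream for busboy.
    Readable.fromWeb(req.body as import("node:stream/web").ReadableStream).pipe(bb);
  });
}
```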
Ghostscript -sDEVICE=pdfwrite fundamentally re-encodes fonts, causing
garbled text regardless of parameters. This cannot be fixed.
New approach:
- Local: qpdf-only lossless structural optimization (5-30% savings,
zero corruption risk — fonts and images completely untouched)
- Cloud: iLovePDF API integration (auth → start → upload → process →
download) with 3 levels (recommended/extreme/low), proper image
recompression without font corruption
Frontend: 3 modes (cloud recommended, cloud extreme, local lossless).
Docker: ILOVEPDF_PUBLIC_KEY env var added.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
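A sketch of the lossless qpdf pass; these are standard qpdf flags, but the repo's exact invocation is an assumption:

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Lossless structural pass: recompress streams and pack objects into object
// streams. Fonts and image data are left byte-for-byte untouched.
export async function qpdfOptimize(input: string, output: string): Promise<void> {
  await run("qpdf", [
    "--object-streams=generate", // consolidate objects into compressed streams
    "--compress-streams=y",
    "--recompress-flate",
    input,
    output,
  ]);
}
```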
The -dPDFSETTINGS=/screen GS preset overwrites font encoding tables,
producing garbled text in output PDFs. Replace with individual params
that ONLY compress images while preserving fonts intact.
Three quality levels via GS (no Stirling dependency):
- extreme: 100 DPI, QFactor 1.2 (~quality 35)
- high: 150 DPI, QFactor 0.76 (~quality 50)
- balanced: 200 DPI, QFactor 0.4 (~quality 70)
Route all UI modes through the GS endpoint with level parameter.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
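A sketch of the image-only argument builder using the DPI/QFactor table above; QFactor has to go through setdistillerparams, and this flag set is an assumption, not the repo's verified command line:

```ts
// Per-level image parameters from the table above.
const LEVELS = {
  extreme:  { dpi: 100, qFactor: 1.2 },
  high:     { dpi: 150, qFactor: 0.76 },
  balanced: { dpi: 200, qFactor: 0.4 },
} as const;

// Build Ghostscript args that downsample/re-encode images only, with no
// -dPDFSETTINGS preset so font encoding tables are never rewritten.
export function gsArgs(level: keyof typeof LEVELS, input: string, output: string): string[] {
  const { dpi, qFactor } = LEVELS[level];
  return [
    "-sDEVICE=pdfwrite", "-dNOPAUSE", "-dBATCH", "-dQUIET",
    `-sOutputFile=${output}`,
    "-dDownsampleColorImages=true", `-dColorImageResolution=${dpi}`,
    "-dDownsampleGrayImages=true", `-dGrayImageResolution=${dpi}`,
    "-dAutoFilterColorImages=false", "-dColorImageFilter=/DCTEncode",
    // JPEG quality (QFactor) is a distiller parameter, set via PostScript:
    "-c",
    `<< /ColorImageDict << /QFactor ${qFactor} /HSamples [2 1 1 2] /VSamples [2 1 1 2] >> >> setdistillerparams`,
    "-f", input,
  ];
}
```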
formData() fails with "Failed to parse body as FormData" on large PDFs
in Next.js route handlers. Switch to req.arrayBuffer() which reliably
reads the full body, then manually extract the PDF from multipart.
Extreme mode: arrayBuffer + multipart extraction + GS + qpdf pipeline.
Stirling mode: arrayBuffer forwarding to Stirling with proper headers.
Revert serverActions.bodySizeLimit (doesn't apply to route handlers).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Extreme mode: replace fragile manual multipart boundary parsing (which
extracted only a fraction of large files, producing empty PDFs) with
standard req.formData(). Add GS output validation + stderr capture.
Stirling mode: parse formData first then build fresh FormData for
Stirling instead of raw body passthrough (which lost data on large
files). Add 5min timeout + original/compressed size headers.
next.config: add 250MB body size limit for server actions.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Only the last entry in a company+year sequence can be deleted. Trying
to delete an earlier number (e.g. #2 when #3 exists) returns a 409
error with a Romanian message explaining the restriction.
Also routes UI deletes through the API (like create/update) so they
get proper audit logging and sequence recalculation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
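A minimal sketch of the guard, assuming the caller has already computed the highest allocated sequence for the company+year; the Romanian message text is illustrative:

```ts
// Throw a 409 when the entry being deleted is not the last in its sequence.
// `maxSeq` is the highest allocated number for the entry's company+year.
export function assertLastInSequence(seq: number, maxSeq: number): void {
  if (seq !== maxSeq) {
    throw Object.assign(
      new Error(`Doar ultimul număr din serie (#${maxSeq}) poate fi șters.`),
      { status: 409 },
    );
  }
}
```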
PostgreSQL JSONB value::text serializes JSON with spaces after colons
("number": "B-2026-00001") but all LIKE patterns searched for the
no-space format ("number":"B-2026-00001"), causing zero matches and
every new entry getting sequence #1.
Fixed in allocateSequenceNumber, recalculateSequence, and debug-sequences.
Added PATCH handler to migrate old-format entries (BTG/SDT/USW/GRP)
to new single-letter format (B/S/U/G).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
PostgreSQL POSIX regex on the server does not support \d shorthand,
causing SUBSTRING to return NULL and every entry to get sequence 1.
Replaced all \d with [0-9] in:
- allocateSequenceNumber (new + old format queries)
- recalculateSequence (new + old format queries)
- debug-sequences endpoint (GET + POST queries)
Also added samples field to debug GET for raw number diagnostics,
and POST now handles old-format entries (BTG→B mapping) with
ON CONFLICT GREATEST for proper counter merging.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
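A sketch of the corrected query reflecting both fixes (the space after the colon from the JSONB ::text commit above, and [0-9] instead of \d); the table and JSON field names are assumptions:

```ts
import { PrismaClient } from "@prisma/client";
const prisma = new PrismaClient();

// Highest allocated sequence for a prefix+year, read from the entries
// themselves. Note the space after the colon (jsonb ::text output) and
// [0-9] instead of \d. prefix/year are internal constants, not user input.
async function maxSequence(prefix: string, year: number): Promise<number> {
  const rows = await prisma.$queryRawUnsafe<{ max: number | null }[]>(
    `SELECT MAX(CAST(SUBSTRING(value::text FROM '"number": "${prefix}-${year}-([0-9]{5})"') AS int)) AS max
     FROM "KeyValueStore"`,
  );
  return rows[0]?.max ?? 0;
}
```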
New format: single-letter prefix + year + 5-digit sequence.
No direction code (IN/OUT) in the number — shown via arrow icon.
Sequence is shared across directions within the same company+year.
Changes:
- REGISTRY_COMPANY_PREFIX: BTG→B, USW→U, SDT→S, GRP→G
- OLD_COMPANY_PREFIX map for backward compat with existing entries
- allocateSequenceNumber: searches both old and new format entries
to find the actual max sequence (backward compat)
- recalculateSequence: same dual-format search
- parseRegistryNumber: supports 3 formats (current, v1, legacy)
- isNewFormat: updated regex for B-2026-00001
- CompactNumber: already used single-letter badges, just updated comment
- debug-sequences endpoint: updated for new format
- Notification test data: updated to new format
- RegistrySequence.type: now "SEQ" (shared) instead of "IN"/"OUT"
After deploy: POST /api/registratura/debug-sequences to clean up
old counters, then recreate test entries.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
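A sketch of the three-format parser; the current and legacy shapes follow examples appearing in these commits (B-2026-00001, BTG-2026-OUT-00004), while the v1 shape is an assumption:

```ts
type ParsedNumber = { company: string; year: number; seq: number; direction?: string };

const CURRENT = /^([BSUG])-(\d{4})-(\d{5})$/;                   // B-2026-00001
const V1      = /^([BSUG])-(\d{4})-(IN|OUT)-(\d{5})$/;          // assumed v1 shape
const LEGACY  = /^(BTG|USW|SDT|GRP)-(\d{4})-(IN|OUT)-(\d{5})$/; // BTG-2026-OUT-00004

export function parseRegistryNumber(num: string): ParsedNumber | null {
  let m = num.match(CURRENT);
  if (m) return { company: m[1], year: +m[2], seq: +m[3] };
  m = num.match(V1) ?? num.match(LEGACY);
  if (m) return { company: m[1], year: +m[2], direction: m[3], seq: +m[4] };
  return null;
}
```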
Prisma tagged template literals were mangling regex backslashes.
Switch to $queryRawUnsafe for the complex regex queries.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The previous fix still used MAX(actualMax, counterVal) which meant a
stale counter (from entries deleted before the fix was deployed) would
override the actual entry count. Changed to use ONLY actualMax + 1.
The RegistrySequence counter is now just a cache that gets synced —
it never overrides the actual entries count.
Also added /api/registratura/debug-sequences endpoint:
- GET: shows all counters vs actual entry max (for diagnostics)
- POST: resets all counters to match actual entries (one-time fix)
After deploy, call POST /api/registratura/debug-sequences to reset
the stale counters, then delete the BTG-2026-OUT-00004 entry and
recreate it — it will get 00001.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The old allocateSequenceNumber blindly incremented a counter in
RegistrySequence, which drifted out of sync when entries were deleted
or moved between companies — producing wrong numbers (e.g., #6 for
the first entry of a company).
New approach:
- Uses pg_advisory_xact_lock inside a Prisma interactive transaction
to serialize concurrent allocations
- Always queries the actual MAX sequence from KeyValueStore entries
(the source of truth) before allocating the next number
- Takes MAX(actual entries, counter) + 1 so the counter can never
produce a stale/duplicate number
- Upserts the counter to the new value for consistency
- Also adds recalculateSequence to DELETE handler so the counter
stays in sync after entry deletions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
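A sketch of the allocation pattern under this design, with an illustrative lock-key hash and entry query; the counter upsert is omitted since the later commit above demotes it to a synced cache:

```ts
import { PrismaClient } from "@prisma/client";
const prisma = new PrismaClient();

// Stable 32-bit lock key for a company+year pair (any stable hash works).
function lockKeyFor(company: string, year: number): number {
  let h = year;
  for (const ch of company) h = (h * 31 + ch.charCodeAt(0)) | 0;
  return h;
}

export async function allocateSequenceNumber(company: string, year: number): Promise<number> {
  return prisma.$transaction(async (tx) => {
    // Serialize concurrent allocations; released at commit/rollback.
    await tx.$queryRawUnsafe(`SELECT pg_advisory_xact_lock(${lockKeyFor(company, year)})`);
    // Entries are the source of truth; the counter is only a synced cache.
    const rows = await tx.$queryRawUnsafe<{ max: number | null }[]>(
      `SELECT MAX(CAST(SUBSTRING(value::text FROM '"number": "${company}-${year}-([0-9]{5})"') AS int)) AS max
       FROM "KeyValueStore"`,
    );
    return (rows[0]?.max ?? 0) + 1; // counter upsert omitted here
  });
}
```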
The multipart body parser was using the first \r\n\r\n as the file
content start, but this could miss the actual file part. Now properly
iterates through parts to find the one with filename= header, and
uses lastIndexOf for the closing boundary to avoid false matches
inside PDF binary data.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add ExternalStatusTracking types + ExternalDocStatus semantic states
- Authority catalog with Primaria Cluj-Napoca (POST scraper + HTML parser)
- Status check service: batch + single entry, change detection via hash
- API routes: cron-triggered batch (/api/registratura/status-check) +
user-triggered single (/api/registratura/status-check/single)
- Add "status-change" notification type with instant email on change
- Table badge: Radio icon color-coded by status (amber/blue/green/red)
- Detail panel: full monitoring section with status, history, manual check
- Auto-detection: prompt when recipient matches known authority
- Activation dialog: configure petitioner name + confirm registration data
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
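A minimal sketch of the hash-based change detection, with illustrative field names:

```ts
import { createHash } from "node:crypto";

// Hash the normalized status fields; a notification fires when the fresh
// hash differs from the one stored on the entry.
export function statusHash(status: { state: string; detail: string; updatedAt?: string }): string {
  return createHash("sha256")
    .update(JSON.stringify([status.state, status.detail, status.updatedAt ?? ""]))
    .digest("hex");
}

// const changed = statusHash(fresh) !== entry.lastStatusHash;
```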
- Registratura: re-allocate number when company/direction changes on update,
recalculate old company's sequence counter from actual entries
- Extreme PDF: stream body to temp file instead of req.formData() to support large files
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Auth:
- Add middleware.ts that redirects unauthenticated users to Authentik SSO
- Extract authOptions to shared auth-options.ts
- Add getAuthSession() helper for API route protection
- Add loading spinner during session validation
- Dev mode bypasses auth (stub user still works)
ManicTime:
- Fix hardcoded companyId="beletage" — now uses group context from Tags.txt
- Fix extended project format label parsing (extracts name after year)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
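A sketch of the getAuthSession() helper, assuming NextAuth v4's getServerSession and the dev stub-user behavior described above:

```ts
import { getServerSession } from "next-auth";
import { authOptions } from "./auth-options";

// Session helper for API route protection; dev mode bypasses auth.
export async function getAuthSession() {
  if (process.env.NODE_ENV === "development") {
    return { user: { name: "dev", email: "dev@localhost" } }; // stub user
  }
  return getServerSession(authOptions);
}
```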
Add dedicated dwg2dxf container (Debian slim + libredwg-tools + Flask)
instead of modifying the Alpine base image. The ArchiTools API route
proxies to the sidecar over Docker internal network.
- dwg2dxf-api/: Dockerfile + Flask app (POST /convert, GET /health)
- docker-compose.yml: dwg2dxf service, healthcheck, depends_on
- route.ts: rewritten from local exec to HTTP proxy
- .dockerignore: exclude sidecar from main build context
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
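A sketch of the proxy route, assuming the sidecar answers on the Docker-internal hostname dwg2dxf (the port and env var name are assumptions):

```ts
// route.ts (path illustrative) — HTTP proxy to the dwg2dxf sidecar.
export async function POST(req: Request): Promise<Response> {
  const base = process.env.DWG2DXF_URL ?? "http://dwg2dxf:5000"; // assumed
  const upstream = await fetch(`${base}/convert`, {
    method: "POST",
    headers: { "content-type": req.headers.get("content-type") ?? "" },
    body: req.body,
    // Node's fetch requires half-duplex when streaming a request body.
    duplex: "half",
  } as RequestInit);
  if (!upstream.ok) {
    return new Response(await upstream.text(), { status: upstream.status });
  }
  return new Response(upstream.body, {
    headers: { "content-type": "application/octet-stream" }, // DXF payload
  });
}
```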
- Extreme PDF compression via direct Ghostscript + qpdf pipeline
(PassThroughJPEGImages=false, QFactor 1.5, 72 DPI downsample)
- DWG→DXF converter via libredwg (Docker only)
- PDF unlock in-app via Stirling PDF proxy
- Removed PDF/A tab (unused)
- Paste (Ctrl+V) on all file drop zones
- Mouse drag-drop reordering on thermal layers
- Tabs reorganized into 2 visual rows
- Dockerfile: added ghostscript, qpdf, libredwg
- New eterra-health.ts service: pings eTerra periodically (3min),
detects maintenance (503, keywords), tracks consecutive failures
- New /api/eterra/health endpoint for explicit health queries
- Session route blocks login when eTerra is in maintenance (503 response)
- GET /api/eterra/session now includes eterraAvailable/eterraMaintenance
- ConnectionPill shows amber 'Mentenanță' state with AlertTriangle icon
instead of confusing red error when eTerra is down
- Auto-connect skips when maintenance detected, retries when back online
- 30s session poll auto-detects recovery and re-enables auto-connect
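A sketch of the health loop, with illustrative keywords and module-level state:

```ts
// Ping every 3 minutes; flag maintenance on 503 or known keywords in the
// body, and count consecutive failures. URL and keywords are illustrative.
const MAINTENANCE_KEYWORDS = ["mentenanta", "maintenance"];

export const eterraHealth = { available: true, maintenance: false, failures: 0 };

async function pingOnce(url: string): Promise<void> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(15_000) });
    const body = res.ok ? (await res.text()).toLowerCase() : "";
    eterraHealth.maintenance =
      res.status === 503 || MAINTENANCE_KEYWORDS.some((k) => body.includes(k));
    eterraHealth.failures = res.ok && !eterraHealth.maintenance ? 0 : eterraHealth.failures + 1;
  } catch {
    eterraHealth.failures += 1;
  }
  eterraHealth.available = eterraHealth.failures === 0;
}

export function startHealthLoop(url: string): NodeJS.Timeout {
  void pingOnce(url);
  return setInterval(() => void pingOnce(url), 3 * 60_000);
}
```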
- New API route /api/eterra/uat-dashboard with SQL aggregates
(area stats, intravilan/extravilan split, land use, top owners, fun facts)
- CSS-only dashboard component: KPI cards, donut ring, bar charts
- Dashboard button on each UAT card in DB tab, expands panel below
- New search mode toggle: Nr. Cadastral / Proprietar
- Owner search queries:
1. Local DB first (enrichment PROPRIETARI/PROPRIETARI_VECHI ILIKE)
2. eTerra API fallback (tries personName/titularName/ownerName filter keys)
- DB search works offline (no eTerra connection needed) — uses enriched data
- New API route: POST /api/eterra/search-owner
- New eterra-client method: searchImmovableByOwnerName()
- Owner results show source badge (DB local / eTerra online)
- Results can be added to saved list and exported as CSV
- Relaxed search tab guard: only requires UAT selection (not eTerra connection)
- Cadastral search still requires eTerra connection (shows hint when offline)
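A sketch of the filter-key fallback behind the eTerra owner search; the client call shape is illustrative:

```ts
// The owner-name filter key is not documented, so try candidates in order
// until one returns results (per the commit: personName/titularName/ownerName).
const OWNER_FILTER_KEYS = ["personName", "titularName", "ownerName"] as const;

export async function searchOwnerRemote(
  query: (filter: Record<string, string>) => Promise<unknown[]>,
  name: string,
): Promise<unknown[]> {
  for (const key of OWNER_FILTER_KEYS) {
    const results = await query({ [key]: name });
    if (results.length > 0) return results;
  }
  return [];
}
```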
- Server: Promise.race with 120s timeout on no-geom-scan API route
- Client: AbortController with 120s timeout on scan fetch
- UI: show 'max 2 min' during scanning + hint that buttons work without scan
- UI: timeout state shows retry button + explains no-geom won't be available
- Prevents an indefinitely stuck 'Se scanează...' state on slow eTerra responses
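A sketch of both timeouts; the endpoint path is from these commits, the rest is illustrative:

```ts
// Server side: cap the scan with Promise.race.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("scan timeout")), ms);
  });
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer));
}

// Client side: abort the fetch after the same window.
// const ctrl = new AbortController();
// const t = setTimeout(() => ctrl.abort(), 120_000);
// const res = await fetch("/api/eterra/no-geom-scan", { signal: ctrl.signal });
// clearTimeout(t);
```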
- New POST /api/eterra/sync-background: fire-and-forget server-side processing
Starts sync + optional enrichment in background, returns 202 immediately.
Progress tracked via existing /api/eterra/progress polling.
Work continues in Node.js event loop even if browser is closed.
Progress persists 1 hour for background jobs (vs 60s for normal).
- Enhanced POST /api/eterra/export-local: base/magic mode support
mode=base: ZIP with terenuri.gpkg + cladiri.gpkg from local DB
mode=magic: adds terenuri_magic.gpkg (enrichment merged, includes no-geom),
terenuri_complet.csv, raport_calitate.txt, export_report.json
All from PostgreSQL — zero eTerra API calls, instant download.
- UI: background sync section in Export tab
'Sync fundal Baza/Magic' buttons: start background processing
'Descarcă din DB Baza/Magic' buttons: instant download from local DB
Background job progress card with indigo theme (distinct from export)
localStorage job recovery: resume polling after page refresh
'Descarcă din DB' button shown on completion
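A sketch of the fire-and-forget route; runSyncAndEnrich and the request shape are illustrative:

```ts
import { NextResponse } from "next/server";

// Start the work WITHOUT awaiting it and return 202 immediately. The job
// keeps running on the Node.js event loop after the response is sent, even
// if the browser closes; progress goes to the existing progress store.
export async function POST(req: Request): Promise<NextResponse> {
  const { uat, enrich } = await req.json();
  const jobId = crypto.randomUUID();
  void runSyncAndEnrich(jobId, uat, enrich).catch((err) =>
    console.error(`background job ${jobId} failed:`, err),
  );
  return NextResponse.json({ jobId }, { status: 202 });
}

async function runSyncAndEnrich(jobId: string, uat: string, enrich: boolean): Promise<void> {
  // ...sync + optional enrichment, writing progress for /api/eterra/progress...
}
```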
ROOT CAUSE: The cross-reference between immovable list and GIS layer
produces wildly different matchedCount on each scan (320, 430, 629, 433)
because the eTerra immovable/list API with inscrisCF=-1 returns
inconsistent results across calls. The GIS layer count (505) is stable.
SCAN DISPLAY — now uses only stable numbers:
- Header shows 'Layer GIS: 505 terenuri + X cladiri' (stable ArcGIS count)
- Shows 'Lista imobile: 2.717 (estimat ~2.212 fara geometrie)' using
simple subtraction totalImmovables - remoteGisCount
- Cross-ref matchedCount kept internally for import logic, but NOT shown
as the primary number — eliminates visual instability
- hasNoGeomParcels now uses estimated count (stable)
WORKFLOW PREVIEW — now accurate:
- Step 1: 'Sync GIS — descarca 505 terenuri + X cladiri' (separate counts)
or 'skip (date proaspete in DB)' when fresh
- Step 2 (enrichment): Fixed 'deja imbogatite' bug when DB is empty.
Now correctly computes what WILL be in DB after sync completes:
geoAfterSync + noGeomAfterImport - localDbEnrichedComplete
- Steps 3-4 unchanged
CLADIRI COUNT:
- Scan now also fetches CLADIRI_ACTIVE layer count (lightweight, OBJECTID only)
- New field remoteCladiriCount in NoGeomScanResult
- Displayed in header and workflow step 1
- Non-fatal: if CLADIRI fetch fails, just shows 0
SCAN DISPLAY:
- Use matchedCount (withGeometry) for 'cu geometrie' — ALWAYS adds up
with noGeomCount to equal totalImmovables (ground truth arithmetic)
- Show remoteGisCount separately as 'Layer GIS: N features (se descarca toate)'
- When remoteGisCount != matchedCount, show matching detail with breakdown
(X potrivite + cadRef/ID split) so mismatches are transparent
- Workflow preview step 1 still uses remoteGisCount (correct: all GIS
features get downloaded regardless of matching)
MATCH QUALITY TRACKING:
- New fields: matchedByRef, matchedById in NoGeomScanResult
- Track how many immovables matched by cadastral ref vs by IMMOVABLE_ID
- Console log match quality for server-side debugging
- scannedAt timestamp for audit trail
PIPELINE AUDIT (export report):
- New 'pipeline' section in export_report.json with full trace:
syncedGis, noGeometry (imported/cleaned/skipped), enriched, finalDb
- raport_calitate.txt now has PIPELINE section before quality analysis
showing exactly what happened at each step
- Capture noGeomCleaned + noGeomSkipped in addition to noGeomImported
- UI: scan card now shows remoteGisCount instead of matchedCount (withGeometry)
as the primary 'cu geometrie' number — this is the true GIS layer feature count
- UI: workflow preview step 1 shows remoteGisCount for download count
- UI: mismatch note reworded as secondary detail about cross-reference matching
- Import: automatic cleanup step at start of syncNoGeometryParcels
- Builds valid immovablePk set from fresh list (active + identification/area)
- Deletes stale NO_GEOMETRY records not in the valid set
- Reports cleaned count in result + progress note
- NoGeomSyncResult type: added 'cleaned' field
- Gitignore: temp-db-check.cjs
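A sketch of the cleanup step; GisFeature, geometrySource, and the negative-objectId encoding appear in these commits, while uatId and the exact field names are assumptions:

```ts
import { PrismaClient } from "@prisma/client";
const prisma = new PrismaClient();

// Build the set of currently-valid immovable PKs from the fresh list, then
// delete NO_GEOMETRY rows that fall outside it. Returns the cleaned count.
export async function cleanStaleNoGeom(uatId: string, validPks: number[]): Promise<number> {
  const { count } = await prisma.gisFeature.deleteMany({
    where: {
      uatId,
      geometrySource: "NO_GEOMETRY",
      // negative objectIds encode immovablePk, per the earlier commit
      objectId: { notIn: validPks.map((pk) => -pk) },
    },
  });
  return count; // reported as `cleaned` in NoGeomSyncResult
}
```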
- Magic GPKG (terenuri_magic.gpkg) now contains ALL records:
rows with geometry render as polygons, rows without have null geom
but still carry all attribute/enrichment data (QGIS shows them fine)
- Added HAS_GEOMETRY column to Magic GPKG fields (0 or 1)
- GPKG builder now supports includeNullGeometry option: splits features
into spatial-first (creates table), then appends null-geom rows
- Base terenuri.gpkg / cladiri.gpkg unchanged (spatial only)
- CSV still has all records as before
- GeoJsonFeature type now allows null geometry
- Reproject: null geometry guard added
- UI text updated: no longer says 'Nu apar in GPKG'
- scanNoGeometryParcels now fetches TERENURI_ACTIVE features from remote
ArcGIS (lightweight, no geometry) to cross-reference with eTerra immovable list
- Cross-references by both NATIONAL_CADASTRAL_REFERENCE and IMMOVABLE_ID
- Works correctly regardless of whether user has synced to local DB
- Renamed totalInDb -> withGeometry in NoGeomScanResult, UI, and API
- Extended fetchAllLayer() to forward outFields/returnGeometry options
- resolveWorkspacePk chain: explicit param -> GisUat DB -> ArcGIS layer query
- UI passes workspacePk from UAT selection to scan API
- Fixes: FELEACU (Cluj, workspace!=65) returning 0 immovables
- Better messaging: shows X total, Y with geometry, Z without
- Shows warning when 0 immovables found (workspace resolution failed)
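A sketch of the dual-key cross-reference with the match-quality counters from the commit above; attribute shapes are illustrative:

```ts
type GisAttrs = { NATIONAL_CADASTRAL_REFERENCE?: string; IMMOVABLE_ID?: number };
type Immovable = { pk: number; cadRef?: string };

// Match each eTerra immovable against remote GIS features by cadastral
// reference first, then by IMMOVABLE_ID; the rest have no geometry.
export function crossReference(immovables: Immovable[], gis: GisAttrs[]) {
  const byRef = new Set(gis.map((g) => g.NATIONAL_CADASTRAL_REFERENCE).filter(Boolean));
  const byId = new Set(gis.map((g) => g.IMMOVABLE_ID).filter(Boolean));
  let matchedByRef = 0, matchedById = 0;
  const noGeometry: Immovable[] = [];
  for (const im of immovables) {
    if (im.cadRef && byRef.has(im.cadRef)) matchedByRef++;
    else if (byId.has(im.pk)) matchedById++;
    else noGeometry.push(im);
  }
  return { matchedByRef, matchedById, noGeometry };
}
```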
- Add geometrySource field to GisFeature (NO_GEOMETRY marker)
- New no-geom-sync service: scan + import parcels missing from GIS layer
- Uses negative immovablePk as objectId to avoid @@unique collision
- New /api/eterra/no-geom-scan endpoint for counting
- Export-bundle: includeNoGeometry flag, imports before enrich
- CSV export: new HAS_GEOMETRY column (0/1)
- GPKG: still geometry-only (unchanged)
- UI: checkbox + scan button on Export tab
- Baza de Date tab: shows no-geometry counts per UAT
- db-summary API: includes noGeomCount per layer
3 bugs fixed:
- syncLayer was called without jobId -> user saw no progress during sync
- syncLayer set status:'done' prematurely -> client stopped polling before GPKG phase
- syncLayer errors were silently ignored -> confusing 'no features in DB' error
Added isSubStep option to syncLayer: when true, keeps status as 'running'
and doesn't schedule clearProgress. Export routes now pass jobId + isSubStep
so the real sync progress (Descărcare features 50/200) is visible in the UI.
- Rewrite export-bundle to sync-first: check freshness -> sync layers -> enrich (magic) -> build GPKG/CSV from local DB
- Rewrite export-layer-gpkg to sync-first: sync if stale -> export from DB
- Create enrich-service.ts: extracted magic enrichment logic (CF, owners, addresses) with DB storage
- Add enrichment + enrichedAt columns to GisFeature schema
- Update PostGIS views to include enrichment data
- UI: update button labels for sync-first semantics, refresh sync status after exports
- Smart caching: skip sync if data is fresh (168h / 1 week default)
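A minimal sketch of the freshness check; the 168h default is from the commit, the names are illustrative:

```ts
// Skip the eTerra sync when local data is newer than the freshness window.
const DEFAULT_MAX_AGE_H = 168; // 1 week

export function isFresh(lastSyncAt: Date | null, maxAgeHours = DEFAULT_MAX_AGE_H): boolean {
  if (!lastSyncAt) return false;
  return Date.now() - lastSyncAt.getTime() < maxAgeHours * 3_600_000;
}
```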
Layer catalog now has 3 actions per layer:
- Sync: downloads from eTerra, stores in PostgreSQL (GisFeature table),
incremental — only new OBJECTIDs fetched, removed ones deleted
- GPKG: direct download from eTerra (existing behavior)
- Local export: generates GPKG from local DB (no eTerra needed)
New features:
- /api/eterra/export-local endpoint — builds GPKG from DB, ZIP for multi-layer
- /api/eterra/sync now uses session-based auth (no credentials in request)
- Category headers show both remote + local feature counts
- Each layer shows local DB count (violet badge) + last sync timestamp
- 'Export local' button in action bar when any layer has local data
- Sync progress message with auto-dismiss
DB schema already had GisFeature + GisSyncRun tables from prior work.
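A sketch of the diff at the core of the incremental sync; fetching and deletion plumbing omitted:

```ts
// Compute which OBJECTIDs to fetch (new on remote) and which local rows to
// delete (gone from the remote layer).
export function diffObjectIds(remoteIds: number[], localIds: number[]) {
  const remote = new Set(remoteIds);
  const local = new Set(localIds);
  return {
    toFetch: remoteIds.filter((id) => !local.has(id)),
    toDelete: localIds.filter((id) => !remote.has(id)),
  };
}
```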