
Deployment Architecture

How nano deploys across tiers — from single-node Docker Compose to multi-pool Kubernetes with ClickHouse Cloud


nano scales from a single VPS to a multi-pool Kubernetes cluster depending on your tier. This page covers what gets deployed at each level, how the services fit together, and how the architecture scales.

Architecture Overview

Every nano deployment runs the same core services — the difference across tiers is how many replicas, how storage is managed, and whether databases run in-cluster or as managed services.

Tiers

| Tier | GB/Day | EPS | Replicas | HA | Cloud Providers |
| --- | --- | --- | --- | --- | --- |
| Hobby | 2 | 30 | 1 | No | Hetzner |
| Startup | 5 | 75 | 1 | No | Hetzner |
| Growth | 10 | 155 | 1 | No | Hetzner |
| Team | 15 | 230 | 2 | Yes | Civo, Vultr, GCP, AWS |
| Business | 25 | 385 | 2 | Yes | Civo, Vultr, GCP, AWS |
| Pro | 100 | 1,550 | 3 | Yes | GCP, AWS |
| Enterprise | Custom | Unlimited | 4 | Yes | GCP, AWS |
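The GB/Day and EPS columns track each other closely; they are consistent with an average event size of roughly 750 bytes, which is our back-of-the-envelope estimate, not an official figure. A quick conversion:

```python
# Convert a tier's daily ingest volume to sustained events/second,
# assuming an average event size of ~750 bytes (our estimate, not
# a documented nano figure).
AVG_EVENT_BYTES = 750
SECONDS_PER_DAY = 86_400

def gb_per_day_to_eps(gb_per_day: float) -> float:
    """Sustained events/second implied by a daily ingest volume."""
    return gb_per_day * 1e9 / AVG_EVENT_BYTES / SECONDS_PER_DAY

for tier, gb in [("Hobby", 2), ("Growth", 10), ("Pro", 100)]:
    print(f"{tier}: {gb} GB/day is about {gb_per_day_to_eps(gb):.0f} EPS")
```

The results land within a few percent of the table's EPS figures for every tier, which suggests the published numbers were derived the same way.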

Hobby & Startup: Docker Compose

The smallest tiers run as a single-node Docker Compose deployment on a Hetzner VPS. All services share one machine — no Kubernetes overhead.

| Aspect | Hobby | Startup |
| --- | --- | --- |
| Machine | 2 CPU / 8 GB | 4 CPU / 16 GB |
| Storage | 50 GB | 50 GB |
| ClickHouse | Single node, no replication | Single node, no replication |
| PostgreSQL | Single instance | Single instance |
| Replicas | 1 of everything | 1 of everything |
| Retention | 90 days warm, then rolloff | 90 days warm, then rolloff |

Growth: Single-Pool Kubernetes

Growth moves to Kubernetes but keeps everything in one node pool. ClickHouse runs via the ClickHouse Operator, PostgreSQL via CloudNativePG.

| Aspect | Growth |
| --- | --- |
| Nodes | 1 x 4 CPU / 8 GB |
| ClickHouse | Operator-managed, 1 shard x 2 replicas |
| PostgreSQL | CNPG, 1 instance |
| Keeper | 3-node quorum (lightweight) |
| App replicas | 1 (API, Search, Web, Vector) |
| Storage | 100 GB hot |
| Retention | 90 days warm, then rolloff |
| Providers | Hetzner |

Team & Business: HA Kubernetes

Team and Business add high availability — multiple replicas, database replication, and failover.

| Aspect | Team | Business |
| --- | --- | --- |
| Nodes | 3 x 4 CPU / 8 GB | 3 x 4 CPU / 8 GB |
| App replicas | 2 | 2 |
| ClickHouse | 1 shard x 2 replicas | 1 shard x 2 replicas |
| PostgreSQL | 2 instances (primary + replica) | 2 instances (primary + replica) |
| Storage | 120 GB hot | 300 GB hot |
| Warm storage | Wasabi S3 | Wasabi S3 |
| Retention | 365 days | 365 days |
| Providers | Civo, Vultr, GCP, AWS | Civo, Vultr, GCP, AWS |

Pro: Multi-Pool Kubernetes

Pro separates workloads into dedicated node pools — stateless app services on spot instances, ClickHouse on dedicated on-demand nodes, and Vector on its own spot pool.

| Pool | Nodes | Machine | Scheduling | Purpose |
| --- | --- | --- | --- | --- |
| App | 3 | 2 CPU / 8 GB | Spot | API, Search, Frontend, Prometheus |
| ClickHouse | 4 | 4 CPU / 16 GB | On-demand | ClickHouse (2 shards x 2 replicas) |
| Vector | 3 | 2 CPU / 8 GB | Spot | Log ingestion and parsing |
| Aspect | Pro |
| --- | --- |
| App replicas | 3 (API, Search, Web) |
| ClickHouse | 2 shards x 2 replicas (4 pods) |
| PostgreSQL | 3 instances (primary + 2 replicas) |
| Storage | 1 TB hot (250 GB per CH pod) |
| Warm storage | Wasabi S3 |
| Retention | 365 days |
| Providers | GCP, AWS |

Enterprise: Managed Databases

Enterprise removes databases from the Kubernetes cluster entirely. ClickHouse runs on ClickHouse Cloud, PostgreSQL on Cloud SQL (GCP) or RDS (AWS). The K8s cluster only runs application services and Vector.

| Volume | App Pool | Vector Pool |
| --- | --- | --- |
| Up to 100 GB/day | 3 x 4 CPU / 16 GB | 3 x 4 CPU / 16 GB |
| 100 GB - 1 TB/day | 3 x 4 CPU / 16 GB | 4 x 8 CPU / 32 GB |
| 1-10 TB/day | 3 x 8 CPU / 32 GB | 6 x 16 CPU / 64 GB |
| 10-50+ TB/day | 6 x 8 CPU / 32 GB | 12+ x 16 CPU / 64 GB |
  • ClickHouse Cloud handles storage, replication, and scaling transparently — no local disk management
  • PrivateLink connects the K8s cluster to ClickHouse Cloud for network isolation
  • 4 replicas of all application services (API, Search, Frontend)
  • Providers: GCP, AWS only

Services

API Service

All replicas serve HTTP requests simultaneously behind the load balancer. A PostgreSQL advisory lock elects one replica as the scheduler leader. Only the leader runs background tasks:

  • Detection rule execution (cron-based scheduled queries)
  • Signal processing (materialized view alerts)
  • Enrichment auto-sync (IPinfo, threat feeds)
  • Auto-tuning (baseline collection, threshold detection)

If the leader pod dies, another replica acquires the lock within ~15 seconds. HTTP serving is uninterrupted.
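The takeover behavior can be illustrated with a small simulation. This sketch stands in a process-local try-lock for PostgreSQL's session-level `pg_try_advisory_lock` (which exactly one session can hold at a time); the replica names and poll interval are illustrative, not nano's actual implementation.

```python
import threading
import time

# Stand-in for PostgreSQL's session-level advisory lock: with
# pg_try_advisory_lock(key), exactly one session holds the lock at a time.
scheduler_lock = threading.Lock()
leaders = []  # order in which replicas became scheduler leader

def replica(name: str, stop: threading.Event, poll_s: float = 0.01) -> None:
    """An API replica: serves HTTP regardless; polls the lock to run the scheduler."""
    while not stop.is_set():
        if scheduler_lock.acquire(blocking=False):  # try-lock: never blocks
            leaders.append(name)
            try:
                while not stop.is_set():
                    time.sleep(poll_s)  # leader work: cron rules, signals, enrichment
            finally:
                scheduler_lock.release()  # session dies -> lock is freed
            return
        time.sleep(poll_s)  # not leader: retry on the next poll

stop_a, stop_b = threading.Event(), threading.Event()
a = threading.Thread(target=replica, args=("api-0", stop_a))
b = threading.Thread(target=replica, args=("api-1", stop_b))
a.start(); time.sleep(0.05)   # api-0 grabs the lock first
b.start(); time.sleep(0.05)   # api-1 polls but stays a follower
stop_a.set(); a.join()        # the leader "pod" dies, releasing the lock
time.sleep(0.1)               # api-1 acquires it on a subsequent poll
stop_b.set(); b.join()
```

In the real deployment the lock is tied to a database connection, so a crashed pod releases it when its session drops, which is what bounds failover to roughly one poll interval.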

Search Service — Active/Active

The Search service is stateless — all replicas serve queries simultaneously behind the load balancer. Query state, results, and search history are stored in ClickHouse and PostgreSQL, so any replica can handle any request.

On HA tiers (Team+), multiple Search replicas run with pod anti-affinity to spread across nodes. If a pod dies, the remaining replicas continue serving traffic with no interruption.
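In Kubernetes terms, that spreading is typically a pod anti-affinity rule on the Deployment; the snippet below is an illustrative sketch with a hypothetical `app: search` label, not nano's actual manifest:

```yaml
# Illustrative: schedule Search replicas on different nodes.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: search          # hypothetical label
        topologyKey: kubernetes.io/hostname
```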

Search connects to a dedicated clickhouse-search service that targets a specific ClickHouse replica. This isolates read queries from Vector's ingestion writes, preventing heavy searches from competing with data ingestion on the same node.
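One common way to pin a Service to a single StatefulSet pod is the `statefulset.kubernetes.io/pod-name` label that Kubernetes sets on every StatefulSet pod; a sketch of what such a `clickhouse-search` Service could look like, with a hypothetical pod name:

```yaml
# Illustrative Service pinning read traffic to one ClickHouse replica.
# The pod name below is hypothetical, not nano's actual naming scheme.
apiVersion: v1
kind: Service
metadata:
  name: clickhouse-search
spec:
  selector:
    statefulset.kubernetes.io/pod-name: chi-nano-0-1-0
  ports:
    - name: http
      port: 8123
      targetPort: 8123
```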

Vector — Log Ingestion

Vector accepts logs on multiple protocols:

| Port | Protocol | Auth | Use Case |
| --- | --- | --- | --- |
| 8080 | HTTP | Bearer token | Primary ingestion: applications, webhooks, log shippers |
| 8088 | HEC | HEC token | Splunk HTTP Event Collector-compatible forwarders |
| 6000 | Vector native | mTLS (client certs) | On-prem Vector aggregators |
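A minimal client for the port-8080 HTTP path might look like the following. The host, URL path, and payload shape are assumptions for illustration; only the port and bearer-token auth come from the table above.

```python
import json
import urllib.request

# Hypothetical ingestion endpoint and token -- substitute your own.
INGEST_URL = "https://ingest.example.com:8080/"
TOKEN = "nano-ingest-token"

# Example log event; the field names are illustrative.
event = {
    "timestamp": "2024-01-01T00:00:00Z",
    "source": "auth-service",
    "message": "login failed for user alice",
}

req = urllib.request.Request(
    INGEST_URL,
    data=json.dumps(event).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # not executed here; requires a live endpoint
```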

Vector is stateless and spot-tolerant. On HA tiers, multiple replicas run with pod anti-affinity to spread across nodes.

ClickHouse

ClickHouse handles all log storage and query execution. Configuration varies by tier:

| Tier | Topology | Keeper | Storage |
| --- | --- | --- | --- |
| Hobby/Startup | Single node | None | Local disk |
| Growth | 1 shard x 2 replicas | 3-node quorum | Local disk |
| Team/Business | 1 shard x 2 replicas | 3-node quorum | Local + Wasabi warm |
| Pro | 2 shards x 2 replicas | 3-node quorum | Local + Wasabi warm |
| Enterprise | ClickHouse Cloud | Managed | Managed (hot/warm/cold) |

Storage tiers:

  • Hot — local SSD on ClickHouse pods. Recent data lives here.
  • Warm — Wasabi S3 (or your own S3/GCS bucket). Extends retention to 365 days on Team+.
  • Cold — optional backup to object storage.

ClickHouse compresses data at roughly 8-10x, so 100 GB/day of raw logs uses approximately 10-12 GB/day of storage.
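That arithmetic, spelled out (the 8-10x ratio is the figure stated above; the 90-day window matches the hot-tier retention on lower tiers):

```python
# On-disk footprint at the stated 8-10x compression ratio.
raw_gb_per_day = 100

def stored_gb_per_day(raw_gb: float, ratio: float) -> float:
    """Daily storage consumed after columnar compression."""
    return raw_gb / ratio

best = stored_gb_per_day(raw_gb_per_day, 10)   # 10x compression
worst = stored_gb_per_day(raw_gb_per_day, 8)   # 8x compression
print(f"{raw_gb_per_day} GB/day raw -> {best:.0f}-{worst:.1f} GB/day on disk")

# Worst-case hot-tier footprint over a 90-day window:
print(f"90-day footprint: up to {worst * 90:.0f} GB")
```

At 8x the daily figure is 12.5 GB rather than 12, so treat the 10-12 GB/day in the text as a round number.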

PostgreSQL

PostgreSQL stores metadata — rules, alerts, users, dashboards, credentials, and search history. Deployed via the CloudNativePG operator with streaming replication and automatic failover.

| Tier | Instances | HA |
| --- | --- | --- |
| Hobby/Startup | 1 | No |
| Growth | 1 | No |
| Team/Business | 2 (primary + replica) | Yes |
| Pro | 3 (primary + 2 replicas) | Yes |
| Enterprise | Cloud SQL / RDS | Managed |

Deployment Modes

| Mode | Description |
| --- | --- |
| Managed | nano provisions and manages the infrastructure on supported cloud providers |
| BYOC (Bring Your Own Cloud) | nano deploys into your cloud account using your credentials |

Both modes use the same architecture. Managed deployments handle provisioning, upgrades, and monitoring automatically. BYOC gives you full control over the cloud account and network configuration.

Monitoring

Prometheus scrapes all services at 10-15 second intervals:

| Endpoint | Metrics |
| --- | --- |
| api:3000/metrics | API latency, request counts, scheduler health |
| search:3002/metrics | Query performance, admission control |
| vector:9598/metrics | Ingestion throughput, buffer sizes, parse errors |
| postgres-exporter:9187 | Database health, replication lag |
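A Prometheus scrape configuration for these endpoints would look roughly like this; the job names are our own and `/metrics` is Prometheus's default `metrics_path`, so only the targets come from the table:

```yaml
# Illustrative scrape config for two of the endpoints above.
scrape_configs:
  - job_name: nano-api
    scrape_interval: 15s
    static_configs:
      - targets: ['api:3000']
  - job_name: nano-vector
    scrape_interval: 10s
    static_configs:
      - targets: ['vector:9598']
```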

