Deployment Architecture
How nano deploys across tiers — from single-node Docker Compose to multi-pool Kubernetes with ClickHouse Cloud
nano scales from a single VPS to a multi-pool Kubernetes cluster depending on your tier. This page covers what gets deployed at each level, how the services fit together, and how the architecture scales.
Architecture Overview
Every nano deployment runs the same core services — the difference across tiers is how many replicas, how storage is managed, and whether databases run in-cluster or as managed services.
Tiers
| Tier | GB/Day | EPS | Replicas | HA | Cloud Providers |
|---|---|---|---|---|---|
| Hobby | 2 | 30 | 1 | No | Hetzner |
| Startup | 5 | 75 | 1 | No | Hetzner |
| Growth | 10 | 155 | 1 | No | Hetzner |
| Team | 15 | 230 | 2 | Yes | Civo, Vultr, GCP, AWS |
| Business | 25 | 385 | 2 | Yes | Civo, Vultr, GCP, AWS |
| Pro | 100 | 1,550 | 3 | Yes | GCP, AWS |
| Enterprise | Custom | Unlimited | 4 | Yes | GCP, AWS |
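The EPS and GB/day caps track each other closely: assuming ingestion is spread evenly across the day, every tier implies an average event size of roughly 750-770 bytes. A quick sanity check in Python (the even-spread assumption is ours, not a guarantee of the platform):

```python
def implied_event_size(gb_per_day: float, eps: float) -> float:
    """Average event size (bytes) implied by a tier's GB/day cap and EPS cap,
    assuming ingestion is spread evenly across the day."""
    bytes_per_day = gb_per_day * 1e9
    events_per_day = eps * 86_400  # seconds per day
    return bytes_per_day / events_per_day

for tier, gb, eps in [("Hobby", 2, 30), ("Growth", 10, 155), ("Pro", 100, 1550)]:
    print(f"{tier}: ~{implied_event_size(gb, eps):.0f} bytes/event")
```

If your logs are much larger than ~770 bytes per event, you will hit the GB/day cap before the EPS cap, and vice versa for very small events.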
Hobby & Startup: Docker Compose
The smallest tiers run as a single-node Docker Compose deployment on a Hetzner VPS. All services share one machine — no Kubernetes overhead.
| | Hobby | Startup |
|---|---|---|
| Machine | 2 CPU / 8 GB | 4 CPU / 16 GB |
| Storage | 50 GB | 50 GB |
| ClickHouse | Single node, no replication | Single node, no replication |
| PostgreSQL | Single instance | Single instance |
| Replicas | 1 of everything | 1 of everything |
| Retention | 90 days warm, then rolloff | 90 days warm, then rolloff |
Growth: Single-Pool Kubernetes
Growth moves to Kubernetes but keeps everything in one node pool. ClickHouse runs via the ClickHouse Operator, PostgreSQL via CloudNativePG.
| Aspect | Growth |
|---|---|
| Nodes | 1 x 4 CPU / 8 GB |
| ClickHouse | Operator-managed, 1 shard x 2 replicas |
| PostgreSQL | CNPG, 1 instance |
| Keeper | 3-node quorum (lightweight) |
| App replicas | 1 (API, Search, Web, Vector) |
| Storage | 100 GB hot |
| Retention | 90 days warm, then rolloff |
| Providers | Hetzner |
Team & Business: HA Kubernetes
Team and Business add high availability — multiple replicas, database replication, and failover.
| | Team | Business |
|---|---|---|
| Nodes | 3 x 4 CPU / 8 GB | 3 x 4 CPU / 8 GB |
| App replicas | 2 | 2 |
| ClickHouse | 1 shard x 2 replicas | 1 shard x 2 replicas |
| PostgreSQL | 2 instances (primary + replica) | 2 instances (primary + replica) |
| Storage | 120 GB hot | 300 GB hot |
| Warm storage | Wasabi S3 | Wasabi S3 |
| Retention | 365 days | 365 days |
| Providers | Civo, Vultr, GCP, AWS | Civo, Vultr, GCP, AWS |
Pro: Multi-Pool Kubernetes
Pro separates workloads into dedicated node pools — stateless app services on spot instances, ClickHouse on dedicated on-demand nodes, and Vector on its own spot pool.
| Pool | Nodes | Machine | Scheduling | Purpose |
|---|---|---|---|---|
| App | 3 | 2 CPU / 8 GB | Spot | API, Search, Frontend, Prometheus |
| ClickHouse | 4 | 4 CPU / 16 GB | On-demand | ClickHouse (2 shards x 2 replicas) |
| Vector | 3 | 2 CPU / 8 GB | Spot | Log ingestion and parsing |
| Aspect | Pro |
|---|---|
| App replicas | 3 (API, Search, Web) |
| ClickHouse | 2 shards x 2 replicas (4 pods) |
| PostgreSQL | 3 instances (primary + 2 replicas) |
| Storage | 1 TB hot (250 GB per CH pod) |
| Warm storage | Wasabi S3 |
| Retention | 365 days |
| Providers | GCP, AWS |
Enterprise: Managed Databases
Enterprise removes databases from the Kubernetes cluster entirely. ClickHouse runs on ClickHouse Cloud, PostgreSQL on Cloud SQL (GCP) or RDS (AWS). The K8s cluster only runs application services and Vector.
| Volume | App Pool | Vector Pool |
|---|---|---|
| Up to 100 GB/day | 3 x 4 CPU / 16 GB | 3 x 4 CPU / 16 GB |
| 100 GB - 1 TB/day | 3 x 4 CPU / 16 GB | 4 x 8 CPU / 32 GB |
| 1-10 TB/day | 3 x 8 CPU / 32 GB | 6 x 16 CPU / 64 GB |
| 10-50+ TB/day | 6 x 8 CPU / 32 GB | 12+ x 16 CPU / 64 GB |
- ClickHouse Cloud handles storage, replication, and scaling transparently — no local disk management
- PrivateLink connects the K8s cluster to ClickHouse Cloud for network isolation
- 4 replicas of all application services (API, Search, Frontend)
- Providers: GCP, AWS only
Services
API Service
All replicas serve HTTP requests simultaneously behind the load balancer. A PostgreSQL advisory lock elects one replica as the scheduler leader. Only the leader runs background tasks:
- Detection rule execution (cron-based scheduled queries)
- Signal processing (materialized view alerts)
- Enrichment auto-sync (IPinfo, threat feeds)
- Auto-tuning (baseline collection, threshold detection)
If the leader pod dies, another replica acquires the lock within ~15 seconds. HTTP serving is uninterrupted.
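The leader-election pattern above can be sketched in-process. This is a minimal simulation, not nano's implementation: the `AdvisoryLock` class stands in for PostgreSQL's `pg_try_advisory_lock` semantics (at most one holder per key, re-acquisition by others fails until release), and the lock key and replica names are illustrative.

```python
import threading

class AdvisoryLock:
    """In-process stand-in for pg_try_advisory_lock: at most one holder
    per key; acquisition by a different holder fails until released."""
    def __init__(self):
        self._held = {}          # key -> current holder
        self._mu = threading.Lock()

    def try_acquire(self, key, holder) -> bool:
        with self._mu:
            if key in self._held:
                return self._held[key] == holder  # idempotent for the holder
            self._held[key] = holder
            return True

    def release(self, key, holder) -> None:
        with self._mu:
            if self._held.get(key) == holder:
                del self._held[key]

SCHEDULER_LOCK = 42  # hypothetical advisory-lock key

lock = AdvisoryLock()
replicas = ["api-0", "api-1", "api-2"]

# Every replica tries the lock; exactly one wins and becomes scheduler leader.
leaders = [r for r in replicas if lock.try_acquire(SCHEDULER_LOCK, r)]
print(leaders)  # ['api-0'] -- the rest keep serving HTTP only

# When the leader dies, its session lock is released and a survivor takes over.
lock.release(SCHEDULER_LOCK, "api-0")
assert lock.try_acquire(SCHEDULER_LOCK, "api-1")
```

In PostgreSQL the lock is tied to the database session, so a crashed leader releases it automatically when its connection drops, which is what bounds failover to roughly the connection-timeout window.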
Search Service — Active/Active
The Search service is stateless — all replicas serve queries simultaneously behind the load balancer. Query state, results, and search history are stored in ClickHouse and PostgreSQL, so any replica can handle any request.
On HA tiers (Team+), multiple Search replicas run with pod anti-affinity to spread across nodes. If a pod dies, the remaining replicas continue serving traffic with no interruption.
Search connects to a dedicated clickhouse-search service that targets a specific ClickHouse replica. This isolates read queries from Vector's ingestion writes, preventing heavy searches from competing with data ingestion on the same node.
Vector — Log Ingestion
Vector accepts logs on multiple protocols:
| Port | Protocol | Auth | Use Case |
|---|---|---|---|
| 8080 | HTTP | Bearer token | Primary ingestion — applications, webhooks, log shippers |
| 8088 | HEC | HEC token | Splunk HTTP Event Collector (HEC)-compatible forwarders |
| 6000 | Vector native | mTLS (client certs) | On-prem Vector aggregators |
Vector is stateless and spot-tolerant. On HA tiers, multiple replicas run with pod anti-affinity to spread across nodes.
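A minimal sketch of primary HTTP ingestion on port 8080. The endpoint host, request path, token, and event shape below are placeholders; substitute your deployment's values, and check your feed configuration for the exact payload format Vector expects.

```python
import json
import urllib.request

ENDPOINT = "https://ingest.example.com:8080/"  # hypothetical host and path
TOKEN = "nano_example_token"                   # hypothetical bearer token

# A batch of events as a JSON array (illustrative schema).
events = [{"message": "user login", "level": "info", "service": "auth"}]
body = json.dumps(events).encode()

req = urllib.request.Request(
    ENDPOINT,
    data=body,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send
```

Any log shipper that can POST JSON with a bearer token works the same way; the HEC port exists specifically so Splunk-style forwarders need no reconfiguration beyond the endpoint and token.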
ClickHouse
ClickHouse handles all log storage and query execution. Configuration varies by tier:
| Tier | Topology | Keeper | Storage |
|---|---|---|---|
| Hobby/Startup | Single node | None | Local disk |
| Growth | 1 shard x 2 replicas | 3-node quorum | Local disk |
| Team/Business | 1 shard x 2 replicas | 3-node quorum | Local + Wasabi warm |
| Pro | 2 shards x 2 replicas | 3-node quorum | Local + Wasabi warm |
| Enterprise | ClickHouse Cloud | Managed | Managed (hot/warm/cold) |
Storage tiers:
- Hot — local SSD on ClickHouse pods. Recent data lives here.
- Warm — Wasabi S3 (or your own S3/GCS bucket). Extends retention to 365 days on Team+.
- Cold — optional backup to object storage.
ClickHouse compresses data at roughly 8-10x, so 100 GB/day of raw logs uses approximately 10-12 GB/day of storage.
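Given that compression ratio, hot-storage sizing is simple arithmetic. A sketch, taking 9x as the midpoint of the 8-10x range (an assumption; your ratio depends on log shape and cardinality):

```python
def daily_disk_gb(raw_gb_per_day: float, compression: float = 9.0) -> float:
    """On-disk growth per day after ClickHouse compression."""
    return raw_gb_per_day / compression

def hot_window_days(hot_gb: float, raw_gb_per_day: float,
                    compression: float = 9.0) -> float:
    """Days of data that fit in hot storage before tiering to warm."""
    return hot_gb / daily_disk_gb(raw_gb_per_day, compression)

# Pro tier: 100 GB/day raw against 1 TB of hot storage.
print(f"{daily_disk_gb(100):.1f} GB/day on disk")   # ~11.1
print(f"{hot_window_days(1000, 100):.0f} days hot")  # ~90
```

At the midpoint ratio, the Pro tier's 1 TB of hot storage holds roughly 90 days of data before older parts move to warm storage.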
PostgreSQL
PostgreSQL stores metadata — rules, alerts, users, dashboards, credentials, and search history. Deployed via the CloudNativePG operator with streaming replication and automatic failover.
| Tier | Instances | HA |
|---|---|---|
| Hobby/Startup | 1 | No |
| Growth | 1 | No |
| Team/Business | 2 (primary + replica) | Yes |
| Pro | 3 (primary + 2 replicas) | Yes |
| Enterprise | Cloud SQL / RDS | Managed |
Deployment Modes
| Mode | Description |
|---|---|
| Managed | nano provisions and manages the infrastructure on supported cloud providers |
| BYOC (Bring Your Own Cloud) | nano deploys into your cloud account using your credentials |
Both modes use the same architecture. Managed deployments handle provisioning, upgrades, and monitoring automatically. BYOC gives you full control over the cloud account and network configuration.
Monitoring
Prometheus scrapes all services at 10-15 second intervals:
| Endpoint | Metrics |
|---|---|
| api:3000/metrics | API latency, request counts, scheduler health |
| search:3002/metrics | Query performance, admission control |
| vector:9598/metrics | Ingestion throughput, buffer sizes, parse errors |
| postgres-exporter:9187 | Database health, replication lag |
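These endpoints serve the Prometheus text exposition format, which is line-oriented and easy to inspect by hand. A minimal parser sketch for untyped, unlabeled use (ignores HELP/TYPE comments and keeps labels as part of the key; the sample metric names are illustrative, not necessarily nano's actual ones):

```python
def parse_metrics(text: str) -> dict[str, float]:
    """Parse Prometheus text exposition into {metric-with-labels: value},
    skipping blank lines and # comments."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")  # value is the last field
        out[name] = float(value)
    return out

# Hypothetical scrape output from vector:9598/metrics.
sample = """\
# HELP vector_events_in_total Events received.
# TYPE vector_events_in_total counter
vector_events_in_total{component="http_in"} 12845
vector_buffer_byte_size{component="clickhouse_out"} 524288
"""
metrics = parse_metrics(sample)
print(metrics['vector_events_in_total{component="http_in"}'])  # 12845.0
```

In practice you would alert on these through Prometheus itself; this kind of ad-hoc parse is mainly useful for curl-and-grep debugging of a single pod.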
Next Steps
- Set Up Your First Feed — Onboard your first log source
- On-Premise Collection — Deploy Vector agents and aggregators
- Search & Query — Learn the query language