pivt AI Settings
Configure nano's AI assistant — providers, models, per-agent settings, usage credits, and organizational guidance
pivt is nano's built-in AI assistant, powered by Claude and routed through Cloudflare AI Gateway. It handles parser generation, query building, detection engineering, search summarization, investigation timelines, dashboards, and interactive chat.
How It Works
All AI requests route through Cloudflare AI Gateway, which provides:
- Multi-provider support — Anthropic (Claude), OpenAI, Google Gemini, Azure OpenAI, and AWS Bedrock
- Usage analytics — per-deployment request tracking and cost monitoring
- Rate limiting — protects against runaway usage
- Failover — switch providers without code changes
On managed deployments, the AI gateway and provider credentials are pre-configured — you don't need to set up anything. On BYOC/self-hosted deployments, you configure your own provider credentials.
AI Credits
Every AI request consumes credits from your monthly allowance. Credit costs depend on the model used:
| Model Class | Examples | Cost per Request |
|---|---|---|
| Lite | Claude Haiku, Gemini Flash Lite | 2 credits |
| Full | Claude Sonnet, Claude Opus, GPT-4 | 10 credits |
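The credit arithmetic in the table above can be sketched as a small helper. This is an illustrative sketch only, not a nano API; the names are hypothetical:

```python
# Hypothetical sketch of the credit math above; not a real nano API.
CREDIT_COST = {
    "lite": 2,   # Claude Haiku, Gemini Flash Lite
    "full": 10,  # Claude Sonnet, Claude Opus, GPT-4
}

def credits_for(requests: dict) -> int:
    """Total credits for a batch of requests, keyed by model class."""
    return sum(CREDIT_COST[cls] * n for cls, n in requests.items())

# e.g. 100 lite summarizations plus 30 full parser generations:
# 100 * 2 + 30 * 10 = 500 credits
```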
Per-Tier Limits
| Tier | Monthly Credits | Model Tier | Equivalent Full Requests |
|---|---|---|---|
| Hobby | 6,000 | Economy (Lite models only) | 600 |
| Startup | 20,000 | Standard (Lite + Sonnet) | 2,000 |
| Growth | 50,000 | Full (all models) | 5,000 |
| Team | 100,000 | Full | 10,000 |
| Business | 300,000 | Full | 30,000 |
| Pro | Unlimited | Full | Unlimited |
| Enterprise | Unlimited | Full | Unlimited |
Economy tier restricts available models to Haiku and Flash Lite. Standard adds Sonnet and Pro models. Full unlocks all models including Opus. Upgrade your tier to access more capable models.
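The tier-to-model mapping above can be expressed as a simple lookup. Tier names are from the table; the model identifiers are illustrative stand-ins (in particular, treating "Pro models" as Gemini Pro is an assumption), not nano's real catalog keys:

```python
# Hypothetical mapping of nano pricing tiers to model availability,
# transcribed from the tables above. Model identifiers are illustrative.
MODEL_TIER = {
    "Hobby": "economy",
    "Startup": "standard",
    "Growth": "full", "Team": "full", "Business": "full",
    "Pro": "full", "Enterprise": "full",
}

ALLOWED_MODELS = {
    "economy": {"claude-haiku", "gemini-flash-lite"},
    "standard": {"claude-haiku", "gemini-flash-lite",
                 "claude-sonnet", "gemini-pro"},
    "full": {"claude-haiku", "gemini-flash-lite", "claude-sonnet",
             "gemini-pro", "claude-opus", "gpt-4"},
}

def can_use(tier: str, model: str) -> bool:
    """True if the given pricing tier may call the given model."""
    return model in ALLOWED_MODELS[MODEL_TIER[tier]]
```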
Monitoring Usage
Navigate to Settings > pivt > Usage tab to see:
- Credits consumed this month vs. your limit
- Breakdown by model class (lite vs. full)
- Monthly reset date
Settings Tabs
The pivt settings page is at Settings > pivt and has six tabs:
Providers
Configure which AI providers nano can use. Each provider needs an API key.
| Provider | Models Available | Notes |
|---|---|---|
| Anthropic | Claude Haiku, Sonnet, Opus | Recommended — best for security analysis |
| OpenAI | GPT-4o, GPT-4, GPT-3.5 | Good alternative |
| Google | Gemini Pro, Flash | Cost-effective for high volume |
| Azure OpenAI | GPT-4, GPT-3.5 (your deployment) | For Azure-committed orgs |
| AWS Bedrock | Claude (via Bedrock) | For AWS-committed orgs |
For each provider you can:
- Add or update the API key
- Enable or disable the provider
- Validate the connection
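The add-key / enable / validate flow could be modeled like this. A minimal sketch under assumed names; nano's actual storage and validation logic are internal, and a real connectivity check would call the provider's cheapest endpoint to confirm the key is accepted:

```python
from dataclasses import dataclass

# Hypothetical model of the per-provider settings described above.
@dataclass
class ProviderConfig:
    name: str
    api_key: str = ""
    enabled: bool = False

    def validate(self) -> bool:
        """Stand-in connectivity check: a provider is usable only if it
        is enabled and has a key configured."""
        return self.enabled and bool(self.api_key)

anthropic = ProviderConfig("Anthropic", api_key="sk-ant-placeholder",
                           enabled=True)
```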
Managed deployments don't show the Providers tab — provider credentials are managed by the platform.
Models
View and manage the available model catalog. Models are synced from configured providers. You can see which models are available, their provider, and deprecation status.
Agent Models
Override which model each AI agent uses. By default, all agents use the same model, but you can assign different models per agent for cost optimization:
| Agent | Default Use Case | Recommendation |
|---|---|---|
| Parser | Generate VRL parsers from log samples | Sonnet (accuracy matters) |
| Query | Natural language to nPL queries | Sonnet |
| Detection | Create and tune detection rules | Sonnet |
| Summarize | Analyze search results | Haiku (cost-effective) |
| Timeline | Investigation timelines | Haiku |
| Dashboard | Generate dashboards | Sonnet |
| Notebook Chat | Interactive multi-turn analysis | Sonnet |
| Query Correction | Fix failed queries | Haiku |
| Query Best Practices | Optimize query performance | Haiku |
Per-agent settings include temperature, max tokens, and timeout overrides.
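Conceptually, per-agent overrides merge onto shared defaults. The sketch below mirrors the recommendations in the table above; the keys, default values, and agent names are illustrative, not nano's actual schema:

```python
# Hypothetical per-agent override table (illustrative schema only).
DEFAULT = {"model": "claude-sonnet", "temperature": 0.2,
           "max_tokens": 4096, "timeout_s": 60}

OVERRIDES = {
    "summarize":        {"model": "claude-haiku"},
    "timeline":         {"model": "claude-haiku"},
    "query_correction": {"model": "claude-haiku"},
}

def agent_settings(agent: str) -> dict:
    """Merge agent-specific overrides onto the shared defaults."""
    return {**DEFAULT, **OVERRIDES.get(agent, {})}
```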
Guidance
Provide organizational context so pivt gives more relevant responses:
- Priority Threats — tell pivt what your org cares about most (e.g., "ransomware, insider threats, cloud misconfigurations")
- Custom Instructions — organization-specific guidance (e.g., "our domain is corp.local, always check for lateral movement to 10.0.0.0/8")
- Agent toggles — apply guidance to specific agents: Chat, Query, Detection, Parser, Dashboard
This context is injected into every AI request for the selected agents, so pivt understands your environment without you repeating it.
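The injection step might look roughly like the following. This is a sketch of the idea only: pivt's actual prompt format is internal, and the field names here are assumptions:

```python
# Hypothetical sketch of guidance injection; the real prompt format
# is internal to pivt.
GUIDANCE = {
    "priority_threats": "ransomware, insider threats, cloud misconfigurations",
    "custom_instructions": "our domain is corp.local; "
                           "check lateral movement to 10.0.0.0/8",
    "agents": {"chat", "query", "detection"},  # agents receiving guidance
}

def build_prompt(agent: str, base_prompt: str) -> str:
    """Append org guidance to the base prompt for selected agents only."""
    if agent not in GUIDANCE["agents"]:
        return base_prompt
    return (f"{base_prompt}\n\n"
            f"Priority threats: {GUIDANCE['priority_threats']}\n"
            f"Org instructions: {GUIDANCE['custom_instructions']}")
```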
Usage
Monitor your AI credit consumption:
- Credits used this month — current usage vs. tier limit
- Model tier — which model classes are available on your tier
- Monthly reset — credits reset at the start of each calendar month
Monitoring
Health monitoring for configured AI providers:
- Health check toggle — tests provider connectivity every 5 minutes
- Status alerts — notifies admins when a provider goes down
- Connection history — see recent health check results
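The check-alert-record behavior described above amounts to a periodic probe loop. A minimal sketch, assuming a 300-second interval and caller-supplied probe and alert callables (none of these names come from nano):

```python
import time

# Hypothetical health-check pass matching the behavior described above.
CHECK_INTERVAL_S = 300  # 5-minute probe interval

def check_provider(probe) -> dict:
    """Run one connectivity probe and record the result."""
    try:
        ok = bool(probe())
    except Exception:
        ok = False
    return {"healthy": ok, "checked_at": time.time()}

def run_checks(probes: dict, alert) -> list:
    """Probe every provider once; alert admins for any that are down."""
    results = []
    for name, probe in probes.items():
        result = {"provider": name, **check_provider(probe)}
        if not result["healthy"]:
            alert(name)  # notify admins the provider is down
        results.append(result)
    return results
```

A scheduler would call `run_checks` every `CHECK_INTERVAL_S` seconds and append the results to the connection history.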
AI Agents
pivt includes specialized agents, each tuned for a specific task:
| Agent | What It Does |
|---|---|
| Parser | Generates VRL parsers from sample logs with UDM field mapping |
| Parser Edit | Iteratively refines parsers based on feedback |
| Query | Converts natural language to nPL queries |
| Query Correction | Fixes failed or invalid queries |
| Query Best Practices | Reviews and optimizes query performance |
| Detection | Creates detection rules with MITRE ATT&CK mapping |
| Tuning | Reduces false positives by analyzing historical matches |
| Summarize | Analyzes search results and generates narrative summaries |
| Timeline | Creates chronological investigation timelines |
| Dashboard | Generates dashboards from descriptions |
| Notebook Chat | Multi-turn interactive analysis in notebooks |
| Shadow Investigation | Autonomous threat hunting on case creation |
Managed vs. BYOC
| Aspect | Managed | BYOC / Self-Hosted |
|---|---|---|
| Provider config | Pre-configured, locked | You configure providers and API keys |
| Model selection | Platform-managed | You select per-agent |
| Credits | Per your tier | Per your tier |
| Settings access | Guidance + Usage tabs only | All 6 tabs |
| Gateway | Platform Cloudflare gateway | Your own gateway or direct provider access |
Security
- Credentials — provider API keys are encrypted at rest using AES-256-GCM
- Data in transit — all AI gateway communication uses TLS 1.2+
- Data retention — Cloudflare AI Gateway and upstream providers do not retain request data after processing
- Permissions — AI features are RBAC-controlled (settings:ai, melod:chat, melod:query, melod:detection, melod:parser, melod:summarize)
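A permission gate over those RBAC names could be sketched as below. The permission strings are copied from the list above; the function and its shape are hypothetical, not nano's authorization code:

```python
# Hypothetical RBAC gate using the AI permission names listed above.
AI_PERMISSIONS = {"settings:ai", "melod:chat", "melod:query",
                  "melod:detection", "melod:parser", "melod:summarize"}

def may_use(user_perms: set, action: str) -> bool:
    """True if the action is an AI permission the user actually holds."""
    return action in AI_PERMISSIONS and action in user_perms
```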
Next Steps
- Set Up Your First Feed — pivt generates parsers from sample logs
- Detection Rules — use pivt to create and tune rules
- Search & Query — ask pivt to build queries in natural language
- Notebooks — chat with pivt about your data