nano SIEM
User Guide

Alerts & Incident Response

Alerts in nano are security events that require attention, generated when detection rules match suspicious activity in your logs. The alert system provides comprehensive incident response capabilities, from initial triage to final resolution.

Overview

The alert system transforms detection rule matches into actionable security incidents with:

  • Automated Detection - Rules continuously monitor logs and generate alerts
  • Intelligent Deduplication - Prevents alert fatigue from duplicate events
  • Workflow Management - Structured process from detection to resolution
  • Risk Scoring - Prioritizes alerts based on threat severity and context
  • Audit Trail - Complete timeline of investigation and response actions

Alert Lifecycle

1. Alert Generation

Alerts are automatically created when detection rules match events:

Detection Rule → Event Match → Alert Created → Analyst Notification

Key Properties:

  • Severity: Critical, High, Medium, Low, Info (inherited from detection rule)
  • Status: Open → Acknowledged → Closed
  • Matched Events: The specific log events that triggered the alert
  • Detection Latency: Time between event occurrence and alert generation
  • Alert Mode: Rules with alert_mode: per_event produce one alert per source event, with matched_events always containing exactly one element. This is used for vendor pass-through detections (e.g., CrowdStrike, SentinelOne) where each vendor detection maps 1:1 to a nano alert. The default grouped mode bundles all matches from a single execution into one alert.
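The two alert modes above can be sketched as follows; `Alert` and `build_alerts` are illustrative names for this guide, not nano's actual implementation:

```python
from dataclasses import dataclass


@dataclass
class Alert:
    rule_name: str
    severity: str
    matched_events: list


def build_alerts(rule_name, severity, matches, alert_mode="grouped"):
    """Turn one rule execution's matches into alerts.

    per_event: one alert per source event, so matched_events always
               contains exactly one element (vendor pass-through case).
    grouped:   the default; all matches from this execution are bundled
               into a single alert.
    """
    if not matches:
        return []
    if alert_mode == "per_event":
        return [Alert(rule_name, severity, [m]) for m in matches]
    return [Alert(rule_name, severity, list(matches))]
```

With three vendor detections, `per_event` yields three alerts of one event each, while the default `grouped` mode yields a single alert containing all three.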

2. Alert States

Open (New)

  • Freshly generated alerts requiring initial review
  • Highest priority for SOC analysts
  • Automatically assigned based on severity and routing rules

Acknowledged

  • Analyst has reviewed and is actively investigating
  • Prevents duplicate work by other team members
  • Requires disposition before closure

Closed

  • Investigation complete with final disposition
  • Includes resolution notes and lessons learned
  • Archived for reporting and trend analysis
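The Open → Acknowledged → Closed lifecycle can be modeled as a small state machine. This is a minimal sketch with illustrative class and method names; the guard in `close` mirrors the rule that acknowledged alerts require a disposition before closure:

```python
class AlertWorkflow:
    """Minimal sketch of the Open -> Acknowledged -> Closed lifecycle."""

    DISPOSITIONS = {"true_positive", "false_positive", "benign_positive"}

    def __init__(self):
        self.status = "open"
        self.analyst = None
        self.disposition = None
        self.notes = None

    def acknowledge(self, analyst):
        if self.status != "open":
            raise ValueError("only open alerts can be acknowledged")
        self.status = "acknowledged"
        self.analyst = analyst  # recorded for the audit trail

    def close(self, disposition, notes):
        if self.status != "acknowledged":
            raise ValueError("acknowledge the alert before closing it")
        if disposition not in self.DISPOSITIONS:
            raise ValueError(f"unknown disposition: {disposition}")
        self.status = "closed"
        self.disposition = disposition
        self.notes = notes  # resolution notes for reporting
```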

Alert Management

Viewing Alerts

Navigate to Alerts in the main menu to see:

Alert Dashboard

  • Real-time alert counts by status and severity
  • 7-day activity trends and patterns
  • Severity distribution charts
  • Critical alert indicators

Alert List

  • Sortable table with key alert information
  • Filtering by status, severity, and rule name
  • Bulk operations for efficient triage
  • Quick actions for common workflows

Alert Details

Click any alert to view comprehensive details:

Overview Section

  • Alert metadata (severity, status, timestamps)
  • Detection rule information and query
  • Risk score and threat indicators
  • Investigation timeline

Matched Events Tab

  • Complete event data that triggered the alert
  • Detection latency indicators for each event
  • Expandable event details with all fields
  • Pagination for large event sets

Detection Rule Tab

  • Full rule definition and query syntax
  • MITRE ATT&CK technique mappings
  • Rule metadata (author, schedule, references)
  • Narrative description of the threat

Timeline Tab

  • Chronological investigation history
  • Status changes and analyst actions
  • Audit trail for compliance reporting

Detection Latency

nano tracks how quickly alerts are generated after events occur:

Latency Levels:

  • 🟢 Fast (≤5 minutes) - Optimal real-time detection
  • 🟡 Moderate (≤15 minutes) - Acceptable for most use cases
  • 🔴 Slow (>15 minutes) - May indicate performance issues
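The latency buckets above reduce to a simple threshold check (boundary values are assumed inclusive, matching the ≤ signs in the list):

```python
def classify_latency(minutes: float) -> str:
    """Bucket detection latency: fast (<=5 min), moderate (<=15 min), slow (>15 min)."""
    if minutes <= 5:
        return "fast"
    if minutes <= 15:
        return "moderate"
    return "slow"
```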

Factors Affecting Latency:

  • Log ingestion delays
  • Detection rule complexity
  • System resource availability
  • Data processing pipeline health

Alert Operations

Acknowledging Alerts

Purpose: Claim ownership and begin investigation

Process:

  1. Review alert details and matched events
  2. Click "Acknowledge" button
  3. Alert status changes to "Acknowledged"
  4. Your username is recorded as the analyst

Best Practices:

  • Acknowledge alerts promptly to prevent duplicate work
  • Use bulk acknowledge for related alerts
  • Add investigation notes as you progress

Closing Alerts

Purpose: Complete investigation with final disposition

Required Information:

  • Disposition: Classification of the alert outcome
  • Notes: Investigation summary and actions taken

Disposition Types:

True Positive

  • Confirmed security incident requiring response
  • Threat was real and actionable
  • May trigger incident response procedures

False Positive

  • Alert triggered incorrectly on benign activity
  • Detection rule needs tuning or refinement
  • Common during initial rule deployment

Benign Positive

  • Alert correctly identified the activity
  • Activity is authorized or expected
  • No security response required
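The three dispositions imply different follow-up actions, which a triage playbook might encode as a simple lookup. The mapping below is a sketch of the guidance in this section, not a nano feature:

```python
# Follow-up implied by each disposition (descriptions paraphrase this guide).
FOLLOW_UP = {
    "true_positive": "trigger incident response procedures",
    "false_positive": "queue the detection rule for tuning or refinement",
    "benign_positive": "no security response; consider baseline refinement",
}


def follow_up_for(disposition: str) -> str:
    """Return the recommended follow-up for a closing disposition."""
    try:
        return FOLLOW_UP[disposition]
    except KeyError:
        raise ValueError(f"unknown disposition: {disposition}") from None
```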

Bulk Operations

Efficiently manage multiple alerts simultaneously:

Bulk Acknowledge

  • Select multiple open alerts
  • Acknowledge all at once
  • Useful for related alerts from the same incident

Bulk Close

  • Close multiple acknowledged alerts
  • Apply same disposition to similar alerts
  • Streamline routine alert processing

Selection Methods:

  • Individual checkboxes for specific alerts
  • "Select All" for current page
  • Filter-based selection for targeted operations

Status Filters

  • All Status - View alerts in any state
  • Open - New alerts requiring attention
  • Acknowledged - Alerts under investigation
  • Closed - Completed investigations

Severity Filters

  • All Severity - View alerts of any severity
  • Critical - Immediate response required
  • High - Urgent investigation needed
  • Medium - Standard priority review
  • Low - Routine monitoring alerts

Search Capabilities

Text Search

  • Search alert rule names
  • Filter by severity or status keywords
  • Case-insensitive matching

Advanced Filtering

  • Combine multiple filter criteria
  • Time range selection
  • Rule-specific alert views

Alert Metrics & Analytics

Key Performance Indicators

Mean Time to Acknowledge (MTTA)

  • Average time from alert creation to acknowledgment
  • Measures SOC responsiveness
  • Target: <15 minutes for critical alerts

Mean Time to Resolution (MTTR)

  • Average time from alert creation to closure
  • Measures investigation efficiency
  • Varies by severity and complexity
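Assuming each alert record carries `created_at`, `acknowledged_at`, and `closed_at` timestamps (field names are illustrative), MTTA and MTTR reduce to simple averages over alerts that have reached the relevant state:

```python
from datetime import datetime, timedelta


def mean_minutes(alerts, start_field, end_field):
    """Average elapsed minutes between two timestamps, skipping alerts
    that have not reached the end state yet."""
    deltas = [
        (a[end_field] - a[start_field]).total_seconds() / 60
        for a in alerts
        if a.get(end_field) is not None
    ]
    return sum(deltas) / len(deltas) if deltas else None


def mtta(alerts):
    """Mean Time to Acknowledge, in minutes."""
    return mean_minutes(alerts, "created_at", "acknowledged_at")


def mttr(alerts):
    """Mean Time to Resolution, in minutes."""
    return mean_minutes(alerts, "created_at", "closed_at")
```

Comparing `mtta` against the <15-minute target for critical alerts is a natural health check for SOC responsiveness.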

Alert Volume Trends

  • Daily/weekly alert generation patterns
  • Seasonal variations and anomalies
  • Capacity planning indicators

Alert Quality Metrics

False Positive Rate

  • Percentage of alerts marked as false positives
  • Indicates detection rule accuracy
  • Target: <10% for mature rules

True Positive Rate

  • Percentage of alerts confirmed as real threats
  • Measures detection effectiveness
  • Higher rates indicate better rule quality
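Both quality metrics fall out of the same calculation: the fraction of closed alerts carrying each disposition. A minimal sketch, assuming closed alerts expose a `disposition` field:

```python
def disposition_rates(closed_alerts):
    """Fraction of closed alerts per disposition; fractions sum to 1.0."""
    if not closed_alerts:
        return {}
    total = len(closed_alerts)
    counts = {}
    for a in closed_alerts:
        d = a["disposition"]
        counts[d] = counts.get(d, 0) + 1
    return {d: n / total for d, n in counts.items()}
```

A mature rule should keep its `false_positive` fraction under the 0.10 target noted above.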

Integration with Detection Rules

Rule-Based Alert Generation

Alerts inherit properties from their detection rules:

  • Severity Level - Determines alert priority
  • MITRE ATT&CK Mapping - Provides threat context
  • Narrative Description - Explains the threat scenario
  • Reference Links - Additional threat intelligence

Alert Feedback Loop

Alert dispositions improve detection rules:

  • False Positives → Rule tuning opportunities
  • True Positives → Validation of rule effectiveness
  • Benign Positives → Baseline refinement needs

Detection Rule Metrics

View rule performance from alert outcomes:

  • Alert generation frequency
  • False positive rates per rule
  • Investigation time by rule type
  • Analyst feedback and recommendations

Notebook Integration

Automatic Investigation Capture

When notebooks are active, alert investigations are automatically documented:

  • Alert View Capture - Records when alerts are opened
  • Action Logging - Tracks acknowledgments and closures
  • Query Suggestions - AI-generated investigation queries
  • Evidence Collection - Preserves investigation artifacts

Linked Investigations

Alerts can reference notebook investigations:

  • Bidirectional Links - Connect alerts to detailed analysis
  • Investigation History - Track related research and findings
  • Collaboration - Share investigation notes with team members

Best Practices

Alert Triage

  1. Prioritize by Severity

    • Critical alerts require immediate attention
    • Use severity-based escalation procedures
    • Balance urgency with investigation quality
  2. Context Gathering

    • Review matched events thoroughly
    • Check for related alerts or patterns
    • Consult threat intelligence sources
  3. Efficient Workflows

    • Use bulk operations for similar alerts
    • Develop standard investigation procedures
    • Document common false positive patterns

Investigation Process

  1. Initial Assessment (5-10 minutes)

    • Review alert details and matched events
    • Determine if immediate escalation is needed
    • Acknowledge alert to claim ownership
  2. Detailed Analysis (15-60 minutes)

    • Examine all matched events and context
    • Search for related activity or indicators
    • Consult external threat intelligence
  3. Response Actions (varies)

    • Implement containment measures if needed
    • Coordinate with incident response team
    • Document findings and recommendations
  4. Alert Closure (5 minutes)

    • Select appropriate disposition
    • Add comprehensive investigation notes
    • Update detection rules if needed

Alert Hygiene

Regular Review

  • Monitor alert volume and trends
  • Identify rules generating excessive false positives
  • Adjust detection thresholds based on feedback

Documentation Standards

  • Use consistent disposition criteria
  • Include actionable investigation notes
  • Reference external tickets or systems

Performance Monitoring

  • Track MTTA and MTTR metrics
  • Monitor detection latency indicators
  • Review analyst workload distribution

Troubleshooting

Common Issues

High False Positive Rate

  • Review detection rule logic and thresholds
  • Analyze environmental factors and baselines
  • Consider rule tuning or suppression lists

Alert Fatigue

  • Implement intelligent alert grouping
  • Adjust severity levels for routine events
  • Use prevalence-based filtering

Slow Detection Latency

  • Check log ingestion pipeline health
  • Monitor system resource utilization
  • Optimize detection rule queries

Missing Alerts

  • Verify detection rule schedules
  • Check log source connectivity
  • Review rule query syntax and filters

Performance Optimization

Query Efficiency

  • Use indexed fields in detection rules
  • Limit time ranges for complex queries
  • Implement proper field filtering

System Resources

  • Monitor CPU and memory usage
  • Scale detection infrastructure as needed
  • Optimize database performance

Alert Processing

  • Batch similar alerts when possible
  • Implement intelligent deduplication
  • Use asynchronous processing for large volumes
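One common way to implement the deduplication mentioned above is to fingerprint each alert by its rule and key entity, then suppress repeats seen within a time window. The fingerprint fields and window default here are assumptions for illustration, not nano's actual scheme:

```python
import hashlib
import time


class Deduplicator:
    """Suppress alerts whose fingerprint was emitted within window_seconds."""

    def __init__(self, window_seconds=900):
        self.window = window_seconds
        self.seen = {}  # fingerprint -> timestamp of last emitted alert

    def fingerprint(self, alert):
        # Assumed fields: rule_id plus the affected entity (host, user, etc.).
        key = f"{alert['rule_id']}|{alert.get('entity', '')}"
        return hashlib.sha256(key.encode()).hexdigest()

    def should_emit(self, alert, now=None):
        now = time.time() if now is None else now
        fp = self.fingerprint(alert)
        last = self.seen.get(fp)
        if last is not None and now - last < self.window:
            return False  # duplicate within the window: suppress
        self.seen[fp] = now
        return True
```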

Compliance & Reporting

Audit Requirements

nano maintains complete audit trails for:

  • Alert generation and modification
  • Analyst actions and timestamps
  • Investigation notes and dispositions
  • System configuration changes

Reporting Capabilities

Executive Dashboards

  • High-level security metrics
  • Trend analysis and comparisons
  • Risk posture indicators

Operational Reports

  • Alert volume and response times
  • Analyst performance metrics
  • Detection rule effectiveness

Compliance Reports

  • Incident response documentation
  • Audit trail exports
  • Regulatory requirement mapping

The alert system in nano provides comprehensive incident response capabilities while maintaining the flexibility to adapt to your organization's specific security operations workflows and requirements.
