nano SIEM
User Guide

Risk-Based Alerting

Risk-based alerting is nano's advanced scoring system that assigns numerical risk values to security events and entities. By quantifying threat levels, security teams can prioritize responses, track cumulative risk over time, and create sophisticated meta-detections that trigger when entities exceed risk thresholds.

Overview

The risk system operates on a 0-100 scale where higher scores indicate greater security risk:

  • 0-30: Low risk (blue)
  • 31-50: Medium risk (yellow)
  • 51-70: High risk (orange)
  • 71-100: Critical risk (red)

Risk scores are calculated using:

  1. In-query | risk command with static or dynamic scores
  2. Conditional expressions for context-aware scoring (if(), arithmetic)
  3. Global weight multiplier for system-wide tuning (can be overridden per-query)
  4. Entity extraction for tracking risk per IP, user, hostname, etc.

How Risk Scoring Works

Risk Scores in Queries

Risk scores are set directly in detection queries using the | risk command:

// Static score
| risk score=50 entity=user factor="Suspicious activity"

// Dynamic score based on event count
| stats count() as hits | risk score=hits*5 entity=src_ip

// Conditional score
| risk score=if(is_admin, 80, 40) entity=user factor="Admin access"

// With weight override
| risk score=70 entity=user weight=0.5

Severity-Based Default Scores

When using severity to guide risk scores, use this scale as a reference:

  Severity        Suggested Score
  Critical        90
  High            70
  Medium          50
  Low             30
  Informational   10

Dynamic Risk Scoring

Risk scores can be dynamically calculated based on event conditions using the | risk command with expressions:

// Failed Login Attempts - score scales with attempt count
event_type=authentication AND status=failure
| stats count() as attempts by src_ip
| risk score=if(attempts > 50, 90, if(attempts > 10, 70, 40)) entity=src_ip factor="Failed logins"

Supported Score Expressions

The score= parameter supports full eval expressions:

  • Literal values: score=50
  • Field references: score=severity_level
  • Arithmetic: score=attempts*5, score=count/10
  • Conditionals: score=if(count > 10, 70, 40)
  • Functions: score=min(attempts*2, 100)

All dynamic scores are automatically clamped to the 0-100 range.
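As a rough illustration, the clamping behavior can be sketched in Python (the `clamp_score` helper is hypothetical, not part of the product):

```python
# Hypothetical helper illustrating how a dynamic score expression's
# result might be clamped; not the product's actual implementation.

def clamp_score(raw):
    """Clamp a computed risk score into the 0-100 range."""
    return int(min(max(raw, 0), 100))

print(clamp_score(30 * 5))                 # score=attempts*5, attempts=30 -> 100
print(clamp_score(70 if 4 > 10 else 40))   # score=if(count > 10, 70, 40), count=4 -> 40
print(clamp_score(min(12 * 2, 100)))       # score=min(attempts*2, 100), attempts=12 -> 24
```

Note that `score=min(attempts*2, 100)` already caps itself; the final clamp simply guarantees the range even when an expression does not.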

Entity Extraction

Risk scores are associated with entities (the "who" or "what" being scored). The system automatically extracts entities using:

  1. Explicit field: If risk_entity_field is specified, extract from that field
  2. Automatic detection: Try common fields in priority order:
    • IP addresses: src_ip, dest_ip, dvc_ip
    • Hostnames: src_host, dest_host, hostname
    • Users: src_user, dest_user, user
    • File hashes: file_hash, process_hash, service_hash
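The priority order above can be sketched as follows; the `extract_entity` helper and the event shape are illustrative assumptions, not the product's internals:

```python
# Illustrative sketch (not the product's internals) of the
# priority-order entity extraction described above.

ENTITY_FIELDS = [
    ("ip",       ["src_ip", "dest_ip", "dvc_ip"]),
    ("hostname", ["src_host", "dest_host", "hostname"]),
    ("user",     ["src_user", "dest_user", "user"]),
    ("hash",     ["file_hash", "process_hash", "service_hash"]),
]

def extract_entity(event, explicit_field=None):
    """Return (entity_type, value) for an event, or None if no entity field is set."""
    if explicit_field and event.get(explicit_field):
        return (explicit_field, event[explicit_field])
    for entity_type, fields in ENTITY_FIELDS:
        for field in fields:
            if event.get(field):
                return (entity_type, event[field])
    return None

# IP fields are checked before user fields, so src_ip wins here
print(extract_entity({"src_ip": "203.0.113.7", "user": "alice"}))   # ('ip', '203.0.113.7')
```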

Global Weight Multiplier

The global weight (0.0-1.0) acts as a system-wide risk multiplier:

  • 1.0: Full risk scoring (default)
  • 0.5: Conservative scoring (50% of calculated risk)
  • 0.0: Risk scoring disabled

Final score = round(raw_score × global_weight)
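A minimal sketch of this formula, including the per-query `weight=` override described earlier (the function name is hypothetical):

```python
# Minimal sketch of the final-score formula above; a per-query
# weight override takes precedence over the global weight.

def final_score(raw_score, global_weight, query_weight=None):
    """Apply the (possibly overridden) weight and round to an integer."""
    weight = query_weight if query_weight is not None else global_weight
    return round(raw_score * weight)

print(final_score(70, 1.0))        # 70
print(final_score(70, 0.5))        # 35 (conservative global weight)
print(final_score(70, 1.0, 0.5))   # 35 (query-level weight=0.5 override)
```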

Using Risk in Detection Rules

Risk scores are set directly in the query using the | risk command. This approach is flexible and allows dynamic scoring based on event data.

Basic Risk Configuration

// Suspicious Process Execution
event_type=process_creation AND process_name=powershell.exe
| risk score=75 entity=src_host factor="PowerShell execution"

Dynamic Risk Scoring

Score based on aggregated counts or field values:

// Network Anomaly - score increases with connection count
event_type=network_connection
| stats count() as connection_count by src_ip
| risk score=if(connection_count > 1000, 85, if(connection_count > 100, 60, 30)) entity=src_ip factor="Connection volume"

Conditional Scoring

Use if() expressions for context-aware scoring:

// Higher score for external admin access
event_type=authentication AND user_type=admin
| risk score=if(src_ip=/^(10\.|192\.168\.)/, 40, 80) entity=user factor="Admin login"

Arithmetic Score Expressions

// Score based on multiple factors
event_type=network
| eval risk_points = if(bytes_out > 100000000, 30, 0) +
                     if(dest_port != 80 AND dest_port != 443, 20, 0) +
                     if(hour < 6 OR hour > 22, 25, 0)
| risk score=risk_points entity=src_ip factor="Data transfer risk"

Weight Override

Override the global risk weight for specific rules:

// Reduce impact of noisy detection during tuning
event_type=file_access
| risk score=30 entity=user factor="File access" weight=0.5

Risk Commands in Queries

The risk command allows dynamic risk scoring within search queries:

Basic Risk Command

-- Assign risk score of 50 to all matching events
event_type="malware_detected" | risk score=50

Risk with Entity Specification

-- Score by source IP with custom factor description
suspicious_activity=true 
| risk score=75 entity=src_ip factor="Suspicious behavior detected"

Additive Risk Scoring

Multiple risk commands in a query are additive (capped at 100). Here are practical examples:

1. Suspicious Login Activity

-- Comprehensive login risk assessment
event_type="authentication"
| risk score=10 entity=user factor="Login attempt"
| where auth_result="failure"
| risk score=20 factor="Failed login"
| where src_ip != /^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)/
| risk score=25 factor="External IP"
| eval hour = tonumber(strftime(timestamp, "%H"))
| where hour < 6 OR hour > 22
| risk score=20 factor="Off-hours access"
| where risk_score >= 50
| table timestamp, user, src_ip, auth_result, risk_score, risk_factors

2. Rare Process with Network Activity

-- Detect uncommon processes making network connections
event_type="process_start" dest_port > 0
| risk score=15 entity=dest_host factor="Process with network"
| prevalence hash_prevalence < 3
| risk score=35 factor="Rare process hash"
| where dest_port != 80 AND dest_port != 443
| risk score=20 factor="Non-standard port"
| where risk_score >= 50
| table timestamp, dest_host, process_name, file_hash, dest_ip, dest_port, risk_score

3. Data Exfiltration Indicators

-- Identify potential data exfiltration patterns
event_type="network" bytes_out > 10000000
| risk score=20 entity=src_ip factor="Large outbound transfer"
| where dest_port != 80 AND dest_port != 443
| risk score=15 factor="Non-web port"
| where dest_ip != /^(10\.|192\.168\.)/
| risk score=20 factor="External destination"
| eval hour = tonumber(strftime(timestamp, "%H"))
| where hour < 6 OR hour > 20
| risk score=25 factor="Off-hours transfer"
| where risk_score >= 60
| table timestamp, src_ip, dest_ip, dest_port, bytes_out, risk_score, risk_factors

4. Suspicious Script Execution

-- Detect potentially malicious script execution
event_type="process_start"
| risk score=10 entity=user factor="Process execution"
| where process_name="wscript.exe" OR process_name="cscript.exe" OR process_name="mshta.exe"
| risk score=30 factor="Script interpreter"
| where command_line = /http:|https:|ftp:/
| risk score=35 factor="Remote script reference"
| where risk_score >= 50
| table timestamp, user, dest_host, process_name, command_line, risk_score
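The additive behavior used throughout the examples above can be sketched as follows (the `apply_risk` helper is an illustrative assumption, not the product's code):

```python
# Hypothetical sketch of additive scoring: each | risk command adds its
# score to the event's running total, capped at 100, and records the factor.

def apply_risk(event, score, factor):
    event["risk_score"] = min(event.get("risk_score", 0) + score, 100)
    event.setdefault("risk_factors", []).append(factor)
    return event

# Walking through example 1's scores for an event that matches every stage
event = {"user": "alice"}
apply_risk(event, 10, "Login attempt")
apply_risk(event, 20, "Failed login")
apply_risk(event, 25, "External IP")
apply_risk(event, 20, "Off-hours access")
print(event["risk_score"])   # 75 (passes the final risk_score >= 50 filter)
```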

Cumulative Risk Detection

Meta-Detection Rules

Create detection rules that trigger when an entity's cumulative risk exceeds thresholds:

-- Alert when any entity accumulates >100 risk in 1 hour
source_type=findings 
| bin span=1h 
| stats sum(risk_score) as total_risk by risk_entity 
| where total_risk > 100

-- Alert for users with >150 risk over 24 hours
source_type=findings 
| where entity_type="user"
| bin span=24h
| stats sum(risk_score) as total_risk, count() as signal_count by risk_entity
| where total_risk > 150
| eval risk_per_signal = total_risk / signal_count
| where risk_per_signal > 20  -- Ensure consistent high-risk activity

Time Window Variations

-- Short-term spike detection (15 minutes)
source_type=findings 
| bin span=15m
| stats sum(risk_score) as total_risk by risk_entity
| where total_risk > 200  -- High threshold for short window

-- Long-term accumulation (7 days)  
source_type=findings
| bin span=7d
| stats sum(risk_score) as total_risk, 
        avg(risk_score) as avg_risk,
        count() as signal_count by risk_entity
| where total_risk > 500 AND avg_risk > 30

Risk Analytics Dashboard

Overview Metrics

The Risk Analytics page provides comprehensive visibility:

  • Total Entities: Count of entities with risk scores
  • Risk Level Distribution: Breakdown by critical/high/medium/low
  • Average Risk Score: Mean risk across all entities
  • Finding Volume: Total number of risk-generating detections

Entity Risk Tracking

View detailed entity risk information:

  • Risk Score: Current cumulative score
  • Finding Count: Number of contributing detections
  • Last Detection: Most recent rule that fired
  • Last Seen: Timestamp of latest activity
  • Risk Level: Color-coded classification

Time Window Analysis

Compare risk patterns across different time periods:

  • 24-hour view: Recent high-priority threats
  • 7-day view: Persistent or escalating risks
  • Entity type filtering: Focus on IPs, users, or hosts

TTL Decay (Time-Based Scoring)

nano implements Google SecOps-style TTL (Time-To-Live) decay for entity risk scores. This ensures that recent security findings contribute more to an entity's risk score than older findings, prioritizing current threats over stale activity.

How TTL Decay Works

Risk scores are calculated from findings stored in the last 7 days. Each finding's contribution to the total score is multiplied by a decay factor based on its age:

  Finding Age   Decay Factor   Effect
  0-24 hours    1.0 (100%)     Full weight - recent findings are most important
  1-3 days      0.7 (70%)      Slight decay - still highly relevant
  3-5 days      0.4 (40%)      Moderate decay - reduced contribution
  5-7 days      0.2 (20%)      Heavy decay - minimal contribution
  >7 days       0.0 (0%)       Excluded - findings too old to be actionable

Decay Calculation Example

Consider an entity with the following findings:

  Finding     Age            Raw Score   Decay Factor   Decayed Score
  Finding A   2 hours ago    50          1.0            50
  Finding B   12 hours ago   30          1.0            30
  Finding C   2 days ago     80          0.7            56
  Finding D   4 days ago     60          0.4            24
  Finding E   6 days ago     40          0.2            8

24h Window Results:

  • Raw Score: 50 + 30 = 80
  • Decayed Score: 50 + 30 = 80 (same, all findings have 1.0 factor)
  • Finding Count: 2

7d Window Results:

  • Raw Score: 50 + 30 + 80 + 60 + 40 = 260
  • Decayed Score: 50 + 30 + 56 + 24 + 8 = 168
  • Finding Count: 5
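The 7d calculation above can be reproduced with a small sketch using the default decay buckets (the bucket boundaries follow the decay table; the code itself is illustrative, not the product's implementation):

```python
# Reproducing the worked example above with the default decay factors.
# Bucket boundaries follow the decay table; this code is illustrative.

def decay_factor(age_hours):
    """Return the decay multiplier for a finding of the given age."""
    if age_hours <= 24:
        return 1.0       # 0-24 hours: full weight
    if age_hours <= 72:
        return 0.7       # 1-3 days
    if age_hours <= 120:
        return 0.4       # 3-5 days
    if age_hours <= 168:
        return 0.2       # 5-7 days
    return 0.0           # >7 days: excluded

# (age in hours, raw score) for Findings A-E
findings = [(2, 50), (12, 30), (48, 80), (96, 60), (144, 40)]
raw = sum(score for _, score in findings)
decayed = sum(score * decay_factor(age) for age, score in findings)
print(raw, round(decayed))   # 260 168
```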

Understanding the Risk Table

The Risk Analytics table displays both 24-hour and 7-day metrics:

  Column           Description
  Entity           The IP, user, hostname, or other entity being tracked
  Type             Entity type (ip, user, hostname, email)
  24h Score        Decayed risk score from findings in the last 24 hours
  7d Score         Decayed risk score from findings in the last 7 days
  Findings         Finding counts in "24h / 7d" format
  Last Detection   Most recent detection rule that fired
  Last Seen        Timestamp of the most recent finding

Important Notes:

  • If 24h Score = 0 but 7d Score > 0, it means all findings are older than 24 hours
  • The 7d Score will always be ≥ the 24h Score (7d includes 24h findings plus older ones)
  • Findings older than 7 days are automatically excluded from all calculations

Why Use TTL Decay?

  1. Prioritize Active Threats: Entities with recent activity appear at the top
  2. Automatic Aging: Old incidents don't permanently inflate risk scores
  3. Reduce False Positives: Stale findings have less impact on current risk assessment
  4. No Manual Cleanup: Scores naturally decay without intervention

Configuring Decay Factors

Decay factors are configurable in Settings > Risk Scoring > TTL Decay:

  1. Navigate to the Risk Scoring settings page
  2. Adjust the four decay sliders:
    • 0-24 hours: Default 1.0 (full weight for recent findings)
    • 1-3 days: Default 0.7 (slight decay)
    • 3-5 days: Default 0.4 (moderate decay)
    • 5-7 days: Default 0.2 (heavy decay)
  3. Click "Save Decay Settings"

Tuning Recommendations:

  Environment          Decay Strategy
  High-velocity SOC    Aggressive decay (lower values) to focus on real-time threats
  Compliance-focused   Conservative decay (higher values) to maintain longer visibility
  Default              Balanced settings work well for most environments

API Endpoints for Decay

# Get current decay configuration
curl -X GET "/api/settings/risk-decay"

# Update decay configuration
curl -X PUT "/api/settings/risk-decay" \
  -H "Content-Type: application/json" \
  -d '{
    "decay_0_24h": 1.0,
    "decay_1_3d": 0.7,
    "decay_3_5d": 0.4,
    "decay_5_7d": 0.2
  }'

# Get time-windowed risk scores with decay applied
curl -X GET "/api/risk/time-windowed?limit=100"

Decay vs. Raw Scores

The system tracks both decayed and raw scores:

  • Decayed Scores (primary): Used for sorting, display, and alerting
  • Raw Scores (available via tooltip): Sum of all finding scores without decay applied

Hover over any score in the Risk table to see both values in the tooltip.

Configuration and Settings

Global Risk Weight

Adjust the global risk multiplier in Settings > Risk Scoring:

  • 1.0: Full risk scoring (aggressive)
  • 0.75: Moderate risk scoring
  • 0.5: Conservative risk scoring
  • 0.25: Minimal risk scoring
  • 0.0: Risk scoring disabled

Risk Score Validation

All risk scores are validated to ensure data integrity:

  • Range: Must be 0-100 (inclusive)
  • Type: Must be an integer
  • Modifiers: Conditions must use valid operators
  • Weights: Global weight must be 0.0-1.0
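These rules can be sketched as a small validator (illustrative only; the product's actual checks may differ):

```python
# Illustrative validator mirroring the rules above; the product's
# actual validation logic may differ.

def validate_risk_score(score):
    if isinstance(score, bool) or not isinstance(score, int):
        raise TypeError("risk score must be an integer")
    if not 0 <= score <= 100:
        raise ValueError("risk score must be in the 0-100 range")

def validate_weight(weight):
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be in the 0.0-1.0 range")

validate_risk_score(75)   # passes silently
validate_weight(0.5)      # passes silently
try:
    validate_risk_score(150)
except ValueError as err:
    print(err)            # risk score must be in the 0-100 range
```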

API Integration

Risk Entities Endpoint

Query risky entities programmatically:

# Get top 50 risky entities in last 24h
curl -X GET "/api/risk/entities?window=24h&limit=50"

# Filter by entity type and minimum score
curl -X GET "/api/risk/entities?entity_type=src_ip&min_score=60"

Risk Overview Endpoint

Get risk analytics summary:

# Get overview statistics
curl -X GET "/api/risk/overview"

Clear Risk Scores

Reset entity risk scores when appropriate:

# Clear specific entity
curl -X POST "/api/risk/clear" \
  -H "Content-Type: application/json" \
  -d '{"entity": "192.168.1.100", "entity_type": "src_ip", "reason": "False positive"}'

# Clear all risk scores (admin only)
curl -X POST "/api/risk/clear-all"

Time-Windowed Risk Queries

Get accurate time-based risk calculations:

# Get entities with high 24h or 7d risk
curl -X GET "/api/risk/time-windowed?min_score_24h=80&min_score_7d=100"

# Get entities exceeding thresholds (for alerting)
curl -X GET "/api/risk/thresholds?threshold_24h=100&threshold_7d=150"

Best Practices

Risk Score Design

  1. Start Conservative: Begin with lower base scores and adjust based on false positive rates
  2. Use Modifiers Strategically: Focus on conditions that genuinely increase threat likelihood
  3. Consider Context: Account for normal business operations and expected behaviors
  4. Test Thoroughly: Validate risk logic with historical data before production deployment

Entity Selection

  1. Choose Meaningful Entities: Select fields that represent actionable security contexts
  2. Prioritize by Impact: IP addresses for network threats, users for insider threats, hosts for endpoint security
  3. Avoid Over-Granularity: Don't track risk for fields that change frequently (like timestamps)

Threshold Tuning

  1. Monitor Alert Volume: Adjust thresholds to maintain manageable alert rates
  2. Account for Environment Size: Larger environments may need higher thresholds
  3. Consider Time Windows: Shorter windows need higher thresholds to avoid noise
  4. Review Regularly: Risk patterns change as threats evolve

Operational Workflow

  1. Daily Review: Check high-risk entities each day
  2. Weekly Analysis: Look for trending risks and persistent threats
  3. Monthly Tuning: Adjust thresholds and modifiers based on observed patterns
  4. Incident Integration: Use risk scores to prioritize incident response

Advanced Use Cases

Risk-Based SOAR Integration

Integrate risk scores with Security Orchestration platforms:

# Example: Escalate high-risk entities to SOAR
def check_risk_escalation():
    high_risk_entities = api.get_risk_entities(min_score=80, window="24h")
    for entity in high_risk_entities:
        if entity.risk_score > 90:
            soar.create_high_priority_case(entity)
        elif entity.risk_score > 80:
            soar.create_medium_priority_case(entity)

Dynamic Blocking

Automatically block high-risk entities:

-- Detection rule for automatic blocking
source_type=findings 
| bin span=5m
| stats sum(risk_score) as total_risk by risk_entity
| where total_risk > 150
| eval action="block"
| output firewall_blocks

Risk Trend Analysis

Track risk evolution over time:

-- Weekly risk trend analysis
source_type=findings
| bin span=1d
| stats sum(risk_score) as daily_risk by risk_entity, _time
| sort risk_entity, _time
| eval risk_trend = daily_risk - lag(daily_risk, 1)
| where risk_trend > 50  -- Entities with increasing risk

Compliance Reporting

Generate risk-based compliance reports:

-- Monthly risk summary for compliance
source_type=findings
| where _time >= relative_time(now(), "-30d")
| stats sum(risk_score) as total_risk,
        avg(risk_score) as avg_risk,
        max(risk_score) as peak_risk,
        count() as incident_count by risk_entity
| eval risk_category = case(
    total_risk > 500, "High Risk",
    total_risk > 200, "Medium Risk", 
    total_risk > 50, "Low Risk",
    1=1, "Minimal Risk"
)
| stats count() by risk_category

Risk-based alerting transforms raw security events into actionable intelligence by quantifying threat levels and tracking cumulative risk over time. This enables security teams to focus on the highest-priority threats while maintaining comprehensive visibility across their environment.
