

One of the most insidious security risks isn't a sophisticated attack; it's the endpoint that stops reporting. A sensor that appears enrolled but hasn't sent telemetry in hours or days represents a critical blind spot. Whether due to network issues, system shutdown, agent crash, or intentional tampering, these silent sensors deserve immediate attention.
This post walks through a practical approach to identifying sensors that have gone quiet, using LimaCharlie's API to surface endpoints missing from your telemetry stream.
Security teams often focus on whether sensors are "online" or "offline," but this binary view misses an important middle ground: sensors that maintain connectivity but stop sending meaningful telemetry. This can happen for several reasons:
Resource constraints: The agent is throttled or starved of CPU/memory
Configuration drift: Telemetry collection was inadvertently disabled
Network filtering: Outbound telemetry is blocked while heartbeats succeed
Adversary action: An attacker has disabled or tampered with the agent
System state: The host is hibernating, suspended, or in a degraded state
Regardless of cause, the result is the same: you have no visibility into what's happening on that endpoint.
LimaCharlie tracks when each sensor last communicated with the cloud. By querying all sensors in an organization and filtering by their last_seen timestamp, we can identify those that haven't reported within a specified window.
The workflow is straightforward:
List all sensors in the organization using list_sensors
Extract the last_seen timestamp for each sensor
Filter for sensors where last_seen exceeds your threshold (e.g., 12 hours)
Scope the results to a relevant time window (e.g., past 30 days to exclude long-decommissioned hosts); a minimal sketch of this filtering logic follows.
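Stripped to its essence, the filter is just two timestamp comparisons. Here's a minimal sketch using only the standard library; the epoch-seconds input is an assumption about how your sensor listing exposes last_seen, so adapt it to whatever your client actually returns:

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
threshold = now - timedelta(hours=12)    # "silent" if last seen before this
window_start = now - timedelta(days=30)  # ignore hosts gone longer than this

def is_silent(last_seen_epoch: float) -> bool:
    """True if the sensor last reported between 30 days and 12 hours ago."""
    last_seen = datetime.fromtimestamp(last_seen_epoch, tz=timezone.utc)
    return window_start < last_seen < threshold
```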
Here's what this looks like in practice against a demo organization with 813 enrolled sensors:
Query parameters:
- Stale threshold: >12 hours without telemetry
- Time window: last 30 days

Result: 34 sensors identified as silent (a sample of the results is shown below).
| Hostname | Last Event (UTC) | Hours Silent |
| --- | --- | --- |
| hello.world.acme | 2026-01-21 21:40:21 | 51 |
| bender-template | 2026-01-15 19:55:42 | 196 |
| bender-dev | 2026-01-15 18:05:29 | 198 |
| gcp-audit-logs | 2026-01-14 00:46:48 | 240 |
| austin | 2026-01-01 18:31:17 | 534 |
| desktop-14dh692.lan | 2025-12-25 15:50:23 | 705 |
The results immediately surface patterns worth investigating:
Test/ephemeral hosts: Multiple hello.world.acme and bender-template entries suggest CI/CD or test environments that spin up temporarily. Their silence may be expected behavior.
Cloud adapter silence: The gcp-audit-logs sensor stopping on January 14 indicates a cloud integration issue. Credentials may have expired or the adapter configuration changed.
Workstation gaps: Individual workstations like austin and desktop-14dh692.lan being offline for weeks could indicate employee departures or hardware changes.
Not every silent sensor is a problem. Context matters:
Expected silence:
Development/test VMs that are spun down after use
Seasonal or project-based systems
Decommissioned hardware pending sensor cleanup
Unexpected silence (investigate immediately):
Production servers that should run 24/7
Cloud adapters and log integrations
Executive or high-value target workstations
Any system that was recently active then went dark
The 12-hour threshold used here is aggressive—suitable for environments expecting continuous telemetry. Adjust based on your operational rhythm. A 24-hour or 48-hour threshold may be more appropriate for organizations with varied work schedules or global distribution.
There are several ways to identify silent sensors in LimaCharlie, from simple UI checks to programmatic queries.
The simplest approach is the LimaCharlie web console:
Navigate to Sensors in your organization
Sort by the Last Seen column (ascending)
Sensors at the top with the oldest timestamps are your silent endpoints
This works well for small fleets or quick spot-checks but doesn't scale for automated monitoring.
For programmatic access, the LimaCharlie Python SDK provides direct sensor enumeration:
```python
import limacharlie
from datetime import datetime, timedelta, timezone

org = limacharlie.Manager(oid='your-org-id-here', secret_api_key='your-api-key-here')

now = datetime.now(timezone.utc)
threshold = now - timedelta(hours=12)    # anything older than this is "silent"
window_start = now - timedelta(days=30)  # ignore long-decommissioned hosts

silent_sensors = []
for sensor in org.sensors():
    last_seen = datetime.fromtimestamp(sensor.last_seen, tz=timezone.utc)
    # Keep sensors that last reported between 30 days and 12 hours ago.
    if window_start < last_seen < threshold:
        silent_sensors.append({
            'sid': sensor.sid,
            'hostname': sensor.hostname,
            'last_seen': last_seen
        })

# Most recently silent first.
for s in sorted(silent_sensors, key=lambda x: x['last_seen'], reverse=True):
    print(f"{s['hostname']}: {s['last_seen']}")
```
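If you want an artifact to attach to a ticket or audit, the same list writes straight out to CSV with the standard library. This continues from the snippet above; the filename and column order are arbitrary choices, not anything the SDK requires:

```python
import csv

# Persist the silent-sensor list for review or ticketing.
with open('silent_sensors.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['sid', 'hostname', 'last_seen'])
    writer.writeheader()
    for s in sorted(silent_sensors, key=lambda x: x['last_seen']):
        writer.writerow(s)
```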
Manual queries are useful for point-in-time audits, but this check should run continuously. LimaCharlie's D&R (Detection & Response) rules can automate silent sensor detection using a scheduled trigger and a Python playbook.
First, create a Python playbook that checks all sensors and generates detections for those exceeding the silence threshold. Navigate to Automation > Playbooks and create a new playbook named check-stale-sensors:
```python
import time

def playbook(sdk, data):
    STALE_THRESHOLD_SECONDS = 3600      # 1 hour
    MAX_AGE_SECONDS = 30 * 24 * 3600    # 30 days

    now = time.time()
    stale_sensors = []

    # Iterate all sensors in the organization
    for sensor in sdk.sensors():
        silence_duration = now - sensor.last_seen

        # Skip sensors that reported recently
        if silence_duration <= STALE_THRESHOLD_SECONDS:
            continue

        # Skip sensors offline for more than 30 days (likely decommissioned)
        if silence_duration > MAX_AGE_SECONDS:
            continue

        hours_silent = round(silence_duration / 3600, 1)
        stale_sensors.append({
            'sid': sensor.sid,
            'hostname': sensor.hostname,
            'hours_silent': hours_silent,
            'last_seen': sensor.last_seen
        })

    # Return a detection if stale sensors were found
    if stale_sensors:
        return {
            'detection': {
                'stale_sensors': stale_sensors,
                'stale_count': len(stale_sensors),
                'threshold_hours': STALE_THRESHOLD_SECONDS / 3600,
                'check_time': now
            },
            'cat': 'stale-sensor-alert',
            'data': {
                'stale_count': len(stale_sensors),
                'sensors': stale_sensors
            }
        }

    return {'data': {'stale_count': 0, 'message': 'All sensors reporting normally'}}
```
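Before wiring the playbook into a schedule, it's worth smoke-testing the logic locally. The sketch below stubs out the sdk argument with fake sensor records; FakeSDK and the sample hostnames are hypothetical stand-ins, and the only contract assumed is the one the playbook above already relies on, namely that sdk.sensors() yields objects exposing sid, hostname, and last_seen (epoch seconds). It runs in the same file as the playbook definition:

```python
import time
from types import SimpleNamespace

class FakeSDK:
    """Hypothetical stand-in for the SDK object handed to the playbook."""
    def sensors(self):
        now = time.time()
        return [
            SimpleNamespace(sid='aaa', hostname='prod-web-01', last_seen=now - 5 * 3600),
            SimpleNamespace(sid='bbb', hostname='laptop-42', last_seen=now - 120),
        ]

result = playbook(FakeSDK(), {})
print(result)  # expect one stale entry for prod-web-01 (~5 hours silent)
```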
Create a D&R rule that triggers the playbook on an hourly schedule. Navigate to Automation > D&R Rules and add a new rule named stale-sensor-monitor:
Detection:
```yaml
event: __schedule
op: is
path: routing/schedule_name
value: hourly
```
Response:
```yaml
- action: extension request
  extension name: ext-playbook
  extension action: run_playbook
  extension request:
    name: check-stale-sensors
```
💡 Note: The organization must be subscribed to the ext-playbook extension. Navigate to Add-ons > Extensions to verify or subscribe.
Every hour, LimaCharlie generates an internal __schedule event with routing/schedule_name: hourly
The D&R rule matches this event and invokes the ext-playbook extension
The extension runs the check-stale-sensors playbook
The playbook iterates all sensors, checking their last_seen timestamp
For each sensor silent longer than the threshold (1 hour), the playbook adds it to the stale list
If stale sensors are found, the playbook returns a detection with category stale-sensor-alert
The detection appears in your detection feed and can trigger outputs (Slack, email, SIEM, etc.)
Adjust the threshold: Modify STALE_THRESHOLD_SECONDS in the playbook (3600 = 1 hour, 86400 = 24 hours).
Filter by tag: Add logic to only check sensors with specific tags:
```python
# Only alert on sensors tagged as expected to report continuously.
if 'always-on' not in sensor.tags:
    continue
```
Change detection category by duration: Escalate based on how long the sensor has been silent:
```python
# Determine severity based on silence duration
max_hours = max(s['hours_silent'] for s in stale_sensors)

if max_hours > 24:
    category = 'stale-sensor-critical'
elif max_hours > 4:
    category = 'stale-sensor-warning'
else:
    category = 'stale-sensor-info'

return {
    'detection': {...},
    'cat': category,
    ...
}
```
Tagging strategy:
Tag sensors by expected reporting frequency (always-on, business-hours, ephemeral). Your playbook can then apply different thresholds based on tags, reducing false positives from known-intermittent systems.
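As a sketch of what that could look like inside the playbook, the mapping below keys thresholds off tags. The tag names and the assumption that sensor objects expose a tags collection are illustrative choices to adapt to your own tagging scheme:

```python
# Hours of silence tolerated per tag; the first matching tag wins.
THRESHOLDS_BY_TAG = {
    'always-on': 1,
    'business-hours': 24,
    'ephemeral': 7 * 24,
}
DEFAULT_THRESHOLD_HOURS = 12

def threshold_for(sensor) -> float:
    """Return the allowed silence window, in seconds, for a given sensor."""
    for tag, hours in THRESHOLDS_BY_TAG.items():
        if tag in getattr(sensor, 'tags', []):
            return hours * 3600
    return DEFAULT_THRESHOLD_HOURS * 3600
```

Inside the playbook loop, compare silence_duration against threshold_for(sensor) instead of the fixed STALE_THRESHOLD_SECONDS constant.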
Integration with asset inventory:
Cross-reference silent sensors against your CMDB or asset inventory. A sensor that's silent because the laptop was returned to IT is very different from one that went dark unexpectedly.
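A lightweight version of that cross-check can be as simple as comparing hostnames against an export of known-retired assets. Everything here is hypothetical: the decommissioned.csv file, its hostname column, and the assumption that your CMDB can produce such an export. It continues from the silent_sensors list built earlier:

```python
import csv

# Hostnames your CMDB already marks as returned or retired (hypothetical export).
with open('decommissioned.csv', newline='') as f:
    retired = {row['hostname'].lower() for row in csv.DictReader(f)}

# Split the silent sensors into "explained" and "needs investigation".
explained = [s for s in silent_sensors if s['hostname'].lower() in retired]
suspect = [s for s in silent_sensors if s['hostname'].lower() not in retired]
```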
| Sensor Type | Alert Threshold | Escalation |
| --- | --- | --- |
| Production servers | 1 hour | Immediate |
| Cloud adapters | 4 hours | High priority |
| Standard workstations | 24 hours | Normal priority |
| Test/dev systems | 7 days | Low priority / cleanup |
Visibility gaps are security gaps. An endpoint that isn't reporting telemetry provides zero defensive value—you're paying for coverage you're not receiving. Regular audits of sensor health, combined with automated alerting on silence thresholds, ensure your security investment delivers actual protection.
The query demonstrated here took seconds to run and immediately surfaced 34 sensors warranting review. In a real incident, any one of those silent endpoints could be the compromised host you're searching for, invisible precisely because it stopped talking.
Build this check into your operational rhythm. Your future incident responders will thank you.