IDS Reality Check: What Most Deployments Actually Detect Versus What Attackers Are Actually Doing

By IPThreat Team May 9, 2026

The Gap Nobody Wants to Admit

Most organizations running intrusion detection systems believe they have meaningful visibility into their network. The reality, confirmed by breach after breach, is that attackers frequently operate inside monitored environments for days or weeks before any alert fires. The problem rarely comes down to the technology itself. It comes down to how the technology is deployed, tuned, and integrated into actual operational workflows.

Cybersecurity professionals and IT administrators often inherit IDS deployments that were configured for a different threat landscape, monitored by teams with different staffing levels, and never properly tested against the attacks that are actually occurring. This article addresses that gap directly, with phased recommendations you can act on today, this week, and this quarter.

Why Modern Attacks Slip Through Standard IDS Configurations

The DFIR Report's recent analysis of SystemBC deployments illustrates the core problem clearly. SystemBC operates as a proxy and tunneling tool, and its traffic patterns are designed to blend into normal HTTPS flows. Standard IDS rulesets built around known-bad signatures will miss it unless the rules are specifically written to detect behavioral indicators: persistent outbound connections to unusual ASNs, small regular beacon intervals, or encrypted sessions to newly registered infrastructure.

The May 2025 reporting on Russia-linked actors harvesting Microsoft Office tokens through compromised routers points to the same structural issue. The initial compromise happened at the router layer, a point many IDS deployments treat as infrastructure rather than a monitored asset. By the time traffic entered the monitored network segment, the malicious activity looked like legitimate credential usage. Signature-based detection had nothing to fire on.

The Q1 2026 vulnerability summary highlights another pressure point. Local privilege escalation vulnerabilities like the recently disclosed Dirty Frag affect Linux systems widely. Once an attacker has a foothold and escalates privileges locally, the activity generating network-visible traffic often looks indistinguishable from authorized administrator actions. An IDS watching for inbound attack patterns will miss post-exploitation behavior that is entirely internal and lateral.

Today: Audit What Your IDS Actually Covers

Before tuning rules or adding sensors, map what your current deployment actually monitors. This sounds obvious, but most teams have not done it systematically. Pull your sensor placement diagram and answer three questions concretely.

  • Which network segments generate traffic that passes through a sensor?
  • Which segments, including cloud workloads, IoT subnets, OT networks, and router management interfaces, do not?
  • What percentage of your east-west (lateral) traffic is visible to any sensor?

In most environments, east-west coverage is the critical blind spot. Perimeter sensors capture ingress and egress, but attackers who have established a foothold move laterally through internal segments that are never inspected. The SystemBC reporting specifically identifies lateral proxy hopping as a post-exploitation technique that evades perimeter-focused detection.

Alongside the coverage audit, pull your top alert categories from the last 30 days and identify what percentage of those alerts were investigated versus auto-suppressed or ignored due to alert fatigue. If more than 60 percent of your daily alert volume is being dismissed without investigation, your detection signal is buried in noise and your team has effectively lost confidence in the system.

Sensor Placement and Traffic Visibility

Effective IDS deployment requires sensors at multiple choke points, not just the perimeter. For on-premises environments, this means placing sensors or span ports at:

  1. The perimeter ingress and egress points for external threat detection
  2. Core distribution switches to capture east-west traffic between internal segments
  3. DMZ segments where web-facing applications communicate with backend systems
  4. Management network segments where administrative credentials and privileged access tools generate traffic

For cloud environments, native traffic mirroring tools like AWS VPC Traffic Mirroring, Azure Network Watcher, and GCP Packet Mirroring can feed IDS sensors. Many teams enable these tools and never confirm the traffic volume and fidelity they actually receive. Sampling rates, MTU fragmentation, and protocol handling differences between cloud and physical environments all affect detection fidelity.

The watering hole attacks pushing the ScanBox keylogger represent a case where endpoint-visible activity, specifically JavaScript execution in a browser, generates minimal network-layer indicators. IDS sensors at the perimeter may capture the initial connection to the compromised site but will not see the keylogger's data exfiltration if it uses HTTPS to a legitimate-looking domain. DNS-layer monitoring and TLS certificate inspection become necessary complements to packet inspection in these scenarios.

This Week: Build Behavioral Rules Alongside Signature Rules

Signature-based rules are necessary but not sufficient. Behavioral rules detect activity patterns rather than specific payload content. The following behavioral detections address attack techniques documented in recent threat intelligence and are implementable in most IDS platforms, including Suricata, Snort, and Zeek.

Beacon Detection

Malware communicating with command-and-control infrastructure establishes regular check-in intervals. Configure your IDS or SIEM to flag internal hosts making outbound connections to the same external IP or domain at regular intervals, particularly if those intervals fall between 30 seconds and 10 minutes, the most common beacon frequencies in current malware families including SystemBC variants.

The detection query should look for connections where the standard deviation of the interval between sessions is low, indicating machine-generated regularity rather than human browsing behavior. Zeek's conn.log with statistical analysis provides this capability without requiring commercial tooling.
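As a concrete illustration, the interval analysis described above can be sketched in a few lines of Python. This assumes you have already extracted sorted connection start times per (source, destination) pair, for example from Zeek's conn.log; the 0.1 coefficient-of-variation cutoff and the function names are illustrative starting points, not tuned values.

```python
from statistics import mean, stdev

def beacon_score(timestamps, min_interval=30, max_interval=600):
    """Score one (src, dst) pair for beacon-like regularity.

    timestamps: sorted connection start times in seconds for the pair.
    Returns the coefficient of variation of the inter-connection
    intervals, or None if there are too few samples or the mean
    interval falls outside the 30 s - 10 min window cited above.
    """
    if len(timestamps) < 5:  # need enough samples for statistics
        return None
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(intervals)
    if not (min_interval <= m <= max_interval):
        return None
    # Low coefficient of variation = machine-generated regularity,
    # not human browsing behavior.
    return stdev(intervals) / m

def is_beacon(timestamps, cv_threshold=0.1):
    score = beacon_score(timestamps)
    return score is not None and score < cv_threshold
```

In practice you would feed this from a scheduled query over the last 24 hours of connection logs and tune the threshold against known-good automated traffic such as NTP or update checks.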

Unusual Protocol on Standard Ports

Attackers tunnel non-standard protocols through common ports to evade port-based filtering. Configure rules to detect traffic on port 80 or 443 that does not match expected HTTP or TLS handshake patterns. This catches tools like Cobalt Strike operating in malleable C2 mode as well as SSH tunneled through HTTP proxies.
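One way to prototype this check outside the IDS itself is a byte-level heuristic on the first payload of each flow: a TLS session should open with a handshake record (content type 0x16, legacy version 3.x). The sketch below assumes you can capture the first bytes of flows to port 443; the function names are illustrative, and a production rule would also validate HTTP request lines on port 80.

```python
def looks_like_tls_client_hello(payload: bytes) -> bool:
    """Heuristic check on the first bytes of a flow to port 443.

    A TLS record begins with content type 0x16 (handshake) followed
    by a legacy record version of 3.1 through 3.4. Anything else on
    443 deserves a closer look (SSH tunnels, raw C2 protocols, etc.).
    """
    if len(payload) < 3:
        return False
    content_type, major, minor = payload[0], payload[1], payload[2]
    return content_type == 0x16 and major == 0x03 and minor in (1, 2, 3, 4)

def flag_non_tls_on_443(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port, first_bytes)."""
    return [(src, dst) for src, dst, port, data in flows
            if port == 443 and not looks_like_tls_client_hello(data)]
```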

SMB Lateral Movement Patterns

Lateral movement using SMB follows detectable patterns: a single source host authenticating to multiple destination hosts over a short time window, particularly if those destinations have not communicated previously. This pattern correlates with techniques observed across multiple intrusion sets documented in Q1 2026 threat reporting.
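The fan-out pattern can be expressed as a sliding-window count of distinct destinations per source. This is a minimal sketch assuming SMB authentication events sorted by time; the 5-destination / 10-minute thresholds are illustrative and should be tuned against legitimate admin and scanner activity in your environment.

```python
from collections import defaultdict

def smb_fanout_alerts(events, window=600, threshold=5):
    """events: (timestamp, src_ip, dst_ip) SMB authentications, sorted
    by time. Flags any source that authenticates to `threshold` or more
    distinct destinations within `window` seconds -- the one-to-many
    fan-out typical of lateral movement."""
    by_src = defaultdict(list)
    for ts, src, dst in events:
        by_src[src].append((ts, dst))
    alerts = set()
    for src, hits in by_src.items():
        for i, (ts, _) in enumerate(hits):
            # distinct destinations reached within the window starting here
            in_window = {d for t, d in hits[i:] if t - ts <= window}
            if len(in_window) >= threshold:
                alerts.add(src)
                break
    return alerts
```

Whitelisting known vulnerability scanners and patch-management servers before alerting keeps this rule's false positive rate manageable.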

DNS Tunneling Indicators

DNS tunneling for data exfiltration or C2 generates DNS queries with unusually long subdomains, high entropy strings, or abnormally high query volumes to a single authoritative domain. Rules detecting these patterns catch a class of exfiltration that packet inspection frequently misses when traffic is encrypted at the application layer.
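The long-subdomain and high-entropy indicators translate directly into a Shannon entropy check over query names. This sketch assumes you have the query name and the registered domain separated already; the 52-character label limit and 3.5-bit entropy threshold are illustrative starting values.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per character of a string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def dns_tunnel_suspect(qname, base_domain, max_label=52,
                       entropy_threshold=3.5):
    """Flag queries whose subdomain portion is unusually long or
    high-entropy relative to human-chosen hostnames."""
    if not qname.endswith("." + base_domain):
        return False
    sub = qname[: -(len(base_domain) + 1)]
    labels = sub.split(".")
    return (any(len(label) > max_label for label in labels)
            or shannon_entropy(sub.replace(".", "")) > entropy_threshold)
```

Pairing this per-query check with the volume indicator (total queries per authoritative domain per hour) catches tunnels that keep individual labels short.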

Handling the Alert Fatigue Problem Practically

Alert fatigue is the mechanism through which technically capable IDS deployments fail operationally. Analysts who receive hundreds of low-confidence alerts per day begin applying mental suppression filters that approximate the automatic suppression configured in the platform. The result is that high-confidence alerts generated by genuine intrusions get processed with the same delayed attention as the noise.

The practical fix operates at two levels. First, establish a tiered alert taxonomy with defined response SLAs. A Tier 1 alert requires investigation within 15 minutes. A Tier 2 alert requires investigation within 4 hours. A Tier 3 alert feeds into a daily review queue. Categorize your existing rules by tier and measure whether your team is actually meeting those SLAs.

Second, implement correlation logic that upgrades alert severity when multiple lower-confidence signals occur on the same host within a defined time window. A single failed authentication attempt is noise. Fifteen failed authentication attempts from the same source to five different internal hosts within 10 minutes is a Tier 1 alert regardless of the individual event confidence scores.
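The failed-authentication example above can be sketched as a small sliding-window correlator. The class name and tier labels are illustrative; in a SIEM this would typically be a correlation search rather than custom code, but the logic is the same: count weak signals per host in a window and upgrade when the count crosses the threshold.

```python
from collections import defaultdict, deque

class WeakSignalCorrelator:
    """Upgrade severity when low-confidence events cluster on one host.

    Mirrors the rule in the text: 15 failed authentications from one
    source inside 10 minutes become a Tier 1 alert regardless of the
    individual event confidence scores."""

    def __init__(self, window=600, threshold=15):
        self.window = window
        self.threshold = threshold
        self._events = defaultdict(deque)

    def observe(self, host, timestamp):
        q = self._events[host]
        q.append(timestamp)
        # drop events that have aged out of the window
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return "tier1" if len(q) >= self.threshold else "tier3"
```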

The NVIDIA GeForce NOW breach affecting Armenian users and the Zara breach exposing 197,000 records both involved data exposure at scale. Post-incident analysis in cases like these consistently shows that individual alerts were present in logs but were not correlated or escalated. Correlation rules that aggregate weak signals into actionable alerts address this operationally.

This Quarter: Build Detection Coverage for Post-Exploitation Activity

Most IDS deployments are optimized to detect initial access. Attackers know this and invest significant effort in making initial access appear benign, then executing their objectives during the post-exploitation phase. Building detection coverage for post-exploitation requires a different rule philosophy.

Living-off-the-Land Detection

Post-exploitation frequently relies on tools already present on the compromised system: PowerShell, WMI, certutil, mshta, and similar Windows utilities. At the network layer, these generate traffic signatures that differ from attacker-controlled tooling. PowerShell remoting over WinRM produces distinct connection patterns. WMI remote execution uses DCOM on port 135 followed by connections on dynamic high ports.

Write rules detecting these protocols originating from workstations or servers that have not historically used them. Baseline normal WinRM usage in your environment for two weeks, then alert on deviations from that baseline.
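The baseline-then-deviate approach reduces to a set difference over flow records. This sketch assumes flows as (source host, destination port) pairs; ports 5985 and 5986 are WinRM's standard HTTP and HTTPS ports, and the function names are illustrative.

```python
WINRM_PORTS = {5985, 5986}  # WinRM over HTTP / HTTPS

def winrm_baseline(flows):
    """Hosts seen initiating WinRM during the two-week baseline period.

    flows: iterable of (src_host, dst_port) tuples."""
    return {src for src, dport in flows if dport in WINRM_PORTS}

def winrm_deviations(baseline, flows):
    """Hosts initiating WinRM that never did so during baselining."""
    current = {src for src, dport in flows if dport in WINRM_PORTS}
    return sorted(current - baseline)
```

The same two functions work unchanged for any protocol you want to baseline (WMI's port 135, RDP's 3389) by swapping the port set.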

Credential Harvesting Network Indicators

Tools like Mimikatz do not generate network traffic directly, but the subsequent use of harvested credentials does. Detect pass-the-hash and pass-the-ticket attacks by looking for Kerberos ticket requests using RC4 encryption in environments configured for AES, authentication to multiple services within a short window using the same credential, and NTLM authentication crossing network segment boundaries where Kerberos should be used.
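The RC4-in-an-AES-environment check is a straightforward filter on ticket encryption types. The etype numbers below come from the Kerberos encryption type registry (23 is rc4-hmac, shown as 0x17 in Windows 4769 events; 17 and 18 are the AES types); the event tuple shape is an assumption about how you parse Kerberos traffic or domain controller logs.

```python
RC4_HMAC = 23           # rc4-hmac; 0x17 in Windows event 4769
AES_ETYPES = {17, 18}   # aes128/aes256-cts-hmac-sha1-96

def rc4_ticket_alerts(ticket_events):
    """ticket_events: (timestamp, account, source_ip, etype) tuples from
    TGS requests. In a domain configured for AES, any RC4 service-ticket
    request is a strong pass-the-hash / Kerberoasting indicator."""
    return [e for e in ticket_events if e[3] == RC4_HMAC]
```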

Data Staging and Exfiltration Detection

Before exfiltration, attackers stage data internally. Network indicators include large file transfers between internal hosts that have no established transfer relationship, access to file shares from hosts that have not accessed them previously, and compression tool usage detected through SMB or HTTP traffic patterns.

Exfiltration detection should look for large outbound transfers to cloud storage providers during off-hours, DNS query volumes significantly above baseline for a host, and outbound connections to IP addresses with no historical relationship to your environment. The "Essential Data Sources for Detection Beyond the Endpoint" reporting emphasizes that combining network telemetry with DNS and cloud access logs substantially improves exfiltration detection accuracy compared to network data alone.
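The above-baseline volume check can be implemented as a per-host z-score over daily outbound byte counts. This sketch assumes you aggregate egress bytes per host per day; the 3-sigma threshold and 50 MB absolute floor are illustrative values meant to suppress noise from normally quiet hosts.

```python
from statistics import mean, stdev

def exfil_volume_alerts(history, today, z_threshold=3.0,
                        min_bytes=50_000_000):
    """history: {host: [daily outbound byte counts]}
    today:   {host: today's outbound bytes}

    Flags hosts whose outbound volume is both large in absolute terms
    and far above their own baseline (z-score against their history)."""
    alerts = []
    for host, bytes_out in today.items():
        base = history.get(host, [])
        if len(base) < 7 or bytes_out < min_bytes:
            continue  # not enough baseline, or too small to matter
        m, s = mean(base), stdev(base)
        z = (bytes_out - m) / s if s else float("inf")
        if z > z_threshold:
            alerts.append((host, round(z, 1)))
    return alerts
```

Scoring each host against its own history, rather than a global threshold, is what lets this catch a workstation that suddenly uploads gigabytes without drowning in alerts from backup servers.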

Integrating Threat Intelligence Into IDS Operations

Threat intelligence feeds improve IDS detection when integrated operationally rather than passively. Passive integration means importing an IP blocklist and blocking matches. Operational integration means using threat intelligence to contextualize alerts, prioritize investigations, and proactively hunt for indicators before they generate alerts.

Configure your IDS to query threat intelligence APIs in near-real-time for IPs generating alerts. A connection that triggers a low-confidence behavioral rule becomes a high-priority investigation if that IP appears on multiple threat intelligence feeds with recent activity. This approach avoids the false confidence that comes from treating any single feed as authoritative while still benefiting from community-sourced intelligence.
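A minimal sketch of that enrichment step is below. Because feed APIs vary by vendor, the lookups are injected as callables rather than hard-coded HTTP calls; the feed names, function name, and two-feed upgrade rule are all illustrative.

```python
def enrich_alert(alert_ip, feeds):
    """feeds: mapping of feed name -> callable(ip) -> bool (hit or not).

    Multiple independent feed hits upgrade a low-confidence behavioral
    alert to a high-priority investigation; a single hit adds context
    without treating any one feed as authoritative."""
    hits = [name for name, lookup in feeds.items() if lookup(alert_ip)]
    if len(hits) >= 2:
        priority = "high"
    elif hits:
        priority = "medium"
    else:
        priority = "unchanged"
    return {"ip": alert_ip, "feed_hits": hits, "priority": priority}
```

In production the callables would wrap cached API clients with rate limiting, so enrichment does not become a bottleneck in the alert pipeline.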

The router compromise campaign harvesting Office tokens demonstrates why operational threat intelligence matters. The compromised router IPs were present in threat intelligence feeds before the campaign was widely reported. Organizations running those IPs against their egress logs proactively would have identified affected systems before credentials were exploited.

Testing Your IDS Before Attackers Do

IDS deployments that are never tested against real attack techniques develop blind spots that expand over time. Scheduled purple team exercises where your red team uses current attacker tooling against your monitored environment while the blue team attempts detection provide the most accurate measurement of real-world detection capability.

At minimum, run atomic detection tests using frameworks like Atomic Red Team on a quarterly basis. Each test executes a specific attack technique and verifies whether the corresponding IDS rule fires. Document which techniques generate alerts, which generate logs but no alerts, and which generate neither. Use those results to prioritize rule development and sensor placement improvements.
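The three-way documentation the text calls for (alerted, logged but not alerted, invisible) is easy to keep machine-readable. This sketch assumes you record, per ATT&CK technique ID, whether the atomic test produced an alert and whether it produced any log entry; the input shape and function name are assumptions.

```python
def coverage_report(test_results):
    """test_results: {technique_id: {"alert": bool, "log": bool}} from a
    quarterly atomic test run. Buckets each technique into the three
    outcomes that should drive rule development priorities."""
    report = {"alerted": [], "logged_only": [], "invisible": []}
    for tid, r in sorted(test_results.items()):
        if r["alert"]:
            report["alerted"].append(tid)
        elif r["log"]:
            report["logged_only"].append(tid)   # data exists, rule missing
        else:
            report["invisible"].append(tid)     # sensor/placement gap
    return report
```

Tracking these buckets quarter over quarter gives you a trend line for detection coverage, which is far more persuasive in budget discussions than raw alert counts.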

The Dirty Frag local privilege escalation vulnerability disclosure is a practical example. After a vulnerability like this is disclosed, run the exploit technique in your test environment and verify whether your IDS or EDR generates any network-visible indicator. If it does not, build a compensating detection rule targeting the behavior that follows successful exploitation rather than the exploit itself.

Documentation and Runbook Requirements

Detection without response is incomplete. For every Tier 1 detection rule in your IDS, maintain a runbook that specifies the investigation steps an analyst should take when that alert fires. The runbook should identify which additional data sources to query, what constitutes confirmation of a true positive versus a false positive, and what the escalation path is if confirmation occurs.

Runbooks serve two functions. They ensure consistent investigation quality regardless of analyst experience level, and they create a feedback mechanism for tuning. When analysts complete investigations and document outcomes, patterns emerge showing which rules generate actionable true positives at acceptable false positive rates and which rules require refinement.

Phased Action Summary

Improving IDS effectiveness is not a single project. It is an ongoing operational discipline. The phased approach below gives cybersecurity professionals and IT administrators a concrete sequence.

Today:
  • Complete a sensor placement audit and identify unmonitored segments
  • Review your top 10 alert types by volume and assess the true positive rate
  • Identify any cloud workloads not sending traffic to IDS sensors

This week:
  • Implement beacon detection rules if not already present
  • Configure correlated alert logic to reduce noise and surface high-confidence events
  • Establish tiered alert response SLAs and measure current compliance

This quarter:
  • Run atomic detection tests against your current rule set and document coverage gaps
  • Build behavioral detection rules for post-exploitation techniques specific to your environment
  • Integrate operational threat intelligence to contextualize alerts rather than passively blocking IPs
  • Complete one purple team exercise focused on techniques documented in current threat intelligence reporting

Closing Perspective

Intrusion detection systems are most valuable when they are treated as active investigative tools rather than passive monitoring infrastructure. The organizations that detect breaches early share a common characteristic: their IDS deployments are regularly tested, continuously tuned, and tightly integrated with human investigation workflows. The technology is necessary but the operational discipline surrounding it determines whether the alerts that matter get acted on before damage is done.
