The Problem Nobody Wants to Admit in Their Post-Incident Report
Most intrusion detection systems are configured to catch the attacker who behaves like an attacker. Loud scanning, known-bad IPs, repeated authentication failures, malware signatures in cleartext traffic. That profile described maybe 40% of threat actors five years ago. Today it describes a shrinking minority.
The DFIR Report's recent breakdown of SystemBC activity illustrates the gap clearly. Attackers using proxy-based C2 infrastructure are deliberately blending into traffic patterns that look like normal enterprise behavior. Encrypted tunnels, legitimate cloud provider egress IPs, and staged payload delivery mean your IDS sees something that resembles business-as-usual right up until lateral movement begins. By then, the attacker is already comfortable inside the environment.
Russia's router-hijacking campaign targeting Microsoft Office tokens is another example. The attackers weren't triggering traditional IDS signatures because they were living inside legitimate authentication flows. They didn't need to pop a shell. They harvested tokens through compromised routing infrastructure and operated through channels that most enterprise IDS configurations treat as trusted.
This article is for security operations teams and IT administrators who have an IDS deployed, know it's generating alerts, and suspect those alerts aren't catching what's actually happening on the network. The goal is practical reconfiguration, not a rip-and-replace conversation.
Where Most IDS Deployments Are Positioned Wrong
The traditional placement model puts sensors at the network perimeter, watching ingress and egress traffic. That made sense when the perimeter was a meaningful boundary. In environments with cloud workloads, remote workforce VPN connections, SaaS dependencies, and managed service provider access, the perimeter is more of a conceptual zone than a chokepoint.
The watering hole attacks pushing ScanBox keylogger demonstrate why perimeter-only detection fails. Users browse to a legitimate but compromised site. The malicious JavaScript executes in the browser context. Traffic to the watering hole looks like normal web browsing. The keylogger phone-home might use HTTPS to a domain that was registered weeks ago, aged deliberately to avoid fresh-domain heuristics. A perimeter sensor looking at that traffic stream sees nothing alarming.
Internal network segmentation points are where IDS sensors generate far more actionable signal. Once a host is compromised, the attacker needs to move. They need to enumerate, reach a domain controller, access file shares, pivot to additional systems. That lateral movement crosses internal segment boundaries that a perimeter-only deployment never sees.
Sensor Placement Strategy
- East-west traffic between VLANs: Place sensors or span ports at layer 3 switches that route between workstation subnets, server subnets, and management networks. Lateral movement almost always crosses these boundaries.
- Domain controller adjacency: Monitor traffic destined for or originating from domain controllers. Kerberoasting, pass-the-hash, DCSync attacks, and LDAP enumeration all generate detectable traffic patterns at this choke point.
- Management network ingress: Any access to out-of-band management interfaces, iDRAC, iLO, or network device management planes should be watched closely. Attackers who gain access to management networks often operate there for extended periods before triggering endpoint alerts.
- Cloud egress inspection: For cloud-native or hybrid environments, route traffic through an inspection layer before it leaves your VPC or vNet. Encrypted C2 channels, data exfiltration, and beaconing are detectable at this layer through behavioral analysis even when payload content is encrypted.
Signature Tuning in a World Where Attackers Know Your Rules
Commodity IDS signatures are public. SNORT rules, Suricata rule sets, and Sigma rules are available to anyone who wants to test their tooling against them before an operation. Sophisticated threat actors do exactly that. They run their malware and C2 infrastructure against common detection rule sets in lab environments and modify behavior until alerts stop firing.
The Dirty Frag Linux privilege escalation vulnerability reported in May 2025 is a useful case study. A new LPE vulnerability means new post-exploitation activity: attackers will attempt to escalate privileges on compromised Linux hosts. Generic signature coverage for this kind of exploit often lags by days or weeks after public disclosure. The teams that caught exploitation attempts earliest weren't relying on vendor-pushed signatures. They were monitoring for the behavioral indicators that any LPE attempt produces: unexpected setuid execution, unusual process lineage, kernel module loading from non-standard paths.
Behavioral detection fills the gap that signature-based rules leave open. Here's how to structure the layering:
Signature Layer
Keep vendor signatures updated on an automated cadence, but treat them as a floor rather than a ceiling. Configure your IDS to pull rule updates daily, prioritize high-confidence rules over noisy low-confidence ones, and suppress or rate-limit signatures that consistently fire on legitimate traffic in your environment without ever producing an actionable alert. A signature that only generates noise isn't catching anything. A well-tuned signature catching real events is worth far more than 500 untouched defaults generating alert fatigue.
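As a concrete example of suppressing narrowly rather than disabling wholesale, Suricata's threshold.config supports per-source suppression and rate limiting. The sig_ids and network ranges below are placeholders, not recommendations:

```
# Suppress a rule only for a known-noisy internal range (placeholder sig_id)
suppress gen_id 1, sig_id 2019401, track by_src, ip 10.1.0.0/16

# Rate-limit a chatty rule to one alert per source per hour
# instead of turning it off entirely (placeholder sig_id)
threshold gen_id 1, sig_id 2210051, type limit, track by_src, count 1, seconds 3600
```

The rate-limited rule still fires once per source per hour, so you keep the detection while cutting the volume.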
Behavioral Layer
Build detection logic around what attackers need to do, not what specific tools they're currently using. Attackers change tools. Their operational requirements stay relatively constant. They need to establish persistence, enumerate the environment, move laterally, and exfiltrate data or deploy final-stage payloads. Each of those phases has behavioral fingerprints.
- Persistence attempts generate registry modifications, scheduled task creation, or service installation events. These should trigger medium-confidence alerts when they occur on hosts that haven't exhibited this behavior previously.
- Enumeration generates LDAP queries, NetBIOS lookups, port scan patterns, and DNS lookups for internal hostnames. A workstation that suddenly starts querying hundreds of internal hostnames over a short period is exhibiting anomalous behavior regardless of what tool is running.
- Lateral movement generates authentication attempts across multiple hosts from a single source, SMB connections to non-standard targets, remote service execution, and WMI activity. Threshold-based detection for these patterns catches a wide range of tooling.
- Exfiltration generates outbound data volume anomalies, connections to cloud storage services that aren't part of normal business operations, and DNS tunneling artifacts. Baseline your normal egress patterns and alert on deviation.
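The enumeration pattern described above, a workstation suddenly resolving hundreds of internal names, reduces to a simple threshold check. This Python sketch assumes time-sorted (timestamp, source, hostname) tuples pulled from your DNS logs; the window and threshold values are starting points to tune against your own baseline:

```python
from collections import defaultdict

def flag_enumeration(events, window_seconds=300, threshold=100):
    """Flag sources that query many distinct internal hostnames in a short window.

    events: time-sorted iterable of (epoch_seconds, src_ip, hostname).
    Returns the set of source IPs that exceed `threshold` distinct lookups
    within any `window_seconds` span.
    """
    flagged = set()
    per_src = defaultdict(list)  # src_ip -> [(ts, hostname), ...] inside window
    for ts, src, host in events:
        bucket = per_src[src]
        bucket.append((ts, host))
        # Drop entries that have aged out of the sliding window.
        while bucket and bucket[0][0] < ts - window_seconds:
            bucket.pop(0)
        if len({h for _, h in bucket}) > threshold:
            flagged.add(src)
    return flagged
```

Note that this keys on distinct hostnames, not query volume, so a workstation hammering one file server stays quiet while a host sweeping the namespace gets flagged.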
The Encrypted Traffic Problem and What You Can Actually Do About It
TLS inspection is the technically correct answer to encrypted C2 traffic. It's also operationally complex, introduces latency, creates certificate trust issues, and breaks certain applications that pin certificates or use mutual TLS. Most organizations end up with partial TLS inspection coverage, which creates predictable blind spots that sophisticated attackers will find and use.
The good news is that encrypted traffic still produces metadata that carries detection signal. You don't need to see inside the envelope to notice that a host is sending 500-byte packets to an external IP every 30 seconds with remarkable clockwork regularity. Beaconing analysis operates on connection metadata: timing intervals, packet size distributions, connection duration, and destination reputation.
Implementing Beaconing Detection
Beaconing detection requires a baseline window and an analysis window. Collect connection logs for a 24-hour baseline period for each host. Calculate the standard deviation of inter-connection timing to any single external destination. Connections with a timing standard deviation below a threshold (commonly 5-10 seconds for a 30-second beacon interval) warrant investigation regardless of whether you can see the payload.
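A minimal sketch of the timing analysis just described, assuming you can extract sorted connection timestamps per (host, destination) pair from your flow logs. The stdev threshold of 10 seconds mirrors the 5-10 second guidance above and should be tuned to your environment:

```python
import statistics

def beacon_score(timestamps, min_connections=10):
    """Standard deviation of inter-connection intervals for one
    (host, destination) pair. Low values suggest machine-like regularity.

    timestamps: sorted epoch seconds of connections to a single destination.
    Returns None when there are too few connections to judge.
    """
    if len(timestamps) < min_connections:
        return None
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(intervals)

def is_beacon_candidate(timestamps, stdev_threshold=10.0, min_connections=10):
    """True when the connection timing is regular enough to warrant review."""
    score = beacon_score(timestamps, min_connections)
    return score is not None and score <= stdev_threshold
```

Real implants add jitter, so in practice you would also check whether the interval distribution clusters tightly around a few values rather than relying on raw standard deviation alone.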
Suricata's flowbits (for chaining related events) and its JA3/JA3S TLS fingerprinting keywords give you additional signal on encrypted connections without decryption. JA3 fingerprints the TLS client hello parameters. Known malware families have documented JA3 fingerprints because they use consistent TLS library configurations. Zeek (formerly Bro) can extract these fingerprints through the community ja3 package and feed them into your detection pipeline alongside connection metadata.
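A sketch of matching Zeek ssl.log rows against a JA3 blocklist. This assumes the ja3 Zeek package is loaded so each row carries a `ja3` field, and the hash in the known-bad set is a placeholder; real values come from your threat intelligence feeds:

```python
# Placeholder hash for illustration; populate from your intel feeds.
KNOWN_BAD_JA3 = {"6734f37431670b3ab4292b8f60f29984"}

def match_ja3(ssl_log_lines, bad_hashes=KNOWN_BAD_JA3):
    """Scan Zeek ssl.log rows (tab-separated, '#'-prefixed metadata lines)
    for client JA3 fingerprints that appear in a known-bad set.

    Column layout is taken from the '#fields' header line, so this works
    regardless of where the ja3 column lands in your Zeek configuration.
    """
    hits = []
    fields = []
    for line in ssl_log_lines:
        if line.startswith("#fields"):
            fields = line.rstrip("\n").split("\t")[1:]
            continue
        if line.startswith("#") or not line.strip():
            continue
        row = dict(zip(fields, line.rstrip("\n").split("\t")))
        if row.get("ja3") in bad_hashes:
            hits.append((row.get("id.orig_h"), row.get("server_name"), row["ja3"]))
    return hits
```

Because the match is on connection metadata, this works identically whether or not you have TLS inspection in place.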
DNS as a Detection Surface
DNS remains one of the richest detection surfaces available. The Zara breach affecting nearly 200,000 users, if it followed patterns common in retail data exfiltration, likely involved attacker infrastructure that made DNS queries throughout the intrusion lifecycle. DNS detection catches several classes of attacker behavior:
- Domain generation algorithm (DGA) traffic: Algorithmically generated domains have high entropy and unusual character distributions. Tools like freq.py score domain name randomness. High-entropy domains queried by internal hosts are worth investigating.
- DNS tunneling: Large TXT record responses, unusually long query names, and high query volumes to a single authoritative nameserver are all indicators of DNS tunneling.
- Newly registered domain queries: Integrate WHOIS age data with your DNS monitoring. Queries to domains registered within the last 30 days should trigger elevated scrutiny, particularly if those domains have no established reputation.
- Internal DNS reconnaissance: Zone transfer attempts, reverse DNS sweeps, and queries for non-existent internal hostnames (generating NXDOMAIN responses at high volume) indicate enumeration activity.
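The entropy scoring mentioned in the DGA bullet can be sketched in a few lines. The 3.5-bit threshold and 10-character minimum are assumed starting points to tune against your own DNS logs, not established cutoffs:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in a string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_dga(domain, entropy_threshold=3.5, min_length=10):
    """Crude DGA heuristic: score only the leftmost label, ignoring the TLD.

    Short labels are skipped outright because entropy estimates on a handful
    of characters are meaningless.
    """
    label = domain.lower().split(".")[0]
    return len(label) >= min_length and shannon_entropy(label) >= entropy_threshold
```

Entropy alone produces false positives on CDN hostnames and legitimate randomized subdomains, so treat a hit as a hunting lead to enrich with domain age and reputation, not as a standalone alert.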
Phased Implementation: What to Prioritize and When
Today: Fix What's Breaking Your Existing Coverage
Before adding new detection capability, audit what your current IDS is actually doing. Pull the last 30 days of alerts and categorize them: how many were actionable, how many were suppressed without investigation, and how many were dismissed as false positives without verification that they were actually false. Most teams that complete this audit find a significant percentage of dismissed alerts that contained real signal buried in noise.
Configure your highest-value detection rules to alert with full packet capture for the first five packets of any matching flow. This gives analysts something to work with rather than a bare metadata record. Most IDS platforms support selective packet capture triggered by rule match. Enable this for your top 20 rules by alert volume and your top 10 rules by analyst-assessed severity.
Update your blocklists and threat intelligence feeds if they haven't been refreshed recently. Threat intelligence about the Russia router-hijacking campaign, for example, includes specific IOCs around IP ranges and token-harvesting infrastructure. Feed this into your IDS as blocking or alerting rules. The CISA advisories and DFIR community sources publish this material. It should be in your detection pipeline within 24 hours of publication, not 24 days.
This Week: Extend Coverage to Blind Spots
Map your current sensor placement against your network topology. Identify any segment-to-segment paths that aren't monitored. Prioritize segments that host sensitive data, authentication infrastructure, or production systems. You don't need sensors everywhere. You need sensors at the chokepoints that attackers must cross to reach high-value targets.
Enable Zeek logging if it isn't already running. Zeek runs alongside your IDS and generates structured connection logs, DNS logs, HTTP logs, SSL logs, and file extraction records without the alert overhead of signature matching. This log data feeds threat hunting, provides context for alerts, and serves as the behavioral baseline source for anomaly detection. Many organizations running Suricata or SNORT haven't enabled the companion Zeek instance that would dramatically increase their detection capability at minimal additional cost.
Integrate your IDS alert stream into your SIEM with enrichment. Raw IDS alerts without context drive analyst fatigue. Enrichment that adds asset classification (is this a server or a workstation, is it in production or development), user context (what account is associated with this IP), and threat intelligence correlation (has this destination IP appeared in any recent threat reports) makes each alert faster to triage and reduces the time analysts spend on lookups.
This Quarter: Build for Persistence and Lateral Movement Detection
Lateral movement detection requires correlation across multiple data sources. A single authentication event from one host to another is normal. The same authentication pattern repeated across 20 hosts in 10 minutes is reconnaissance or lateral movement. This kind of correlation requires your IDS events, Windows Security Event Log data, and network flow records to be in the same analysis pipeline.
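The "20 hosts in 10 minutes" correlation described above can be sketched directly. This assumes you have joined your auth events (for example, Windows 4624 logons plus flow records) into time-sorted (timestamp, source, destination) tuples; the window and host threshold are illustrative values:

```python
from collections import defaultdict, deque

def detect_auth_spray(auth_events, window_seconds=600, host_threshold=20):
    """Flag sources that authenticate to many distinct hosts in a short window.

    auth_events: time-sorted iterable of (epoch_seconds, source, destination).
    Returns the set of sources touching `host_threshold` or more distinct
    destinations within any `window_seconds` span.
    """
    flagged = set()
    recent = defaultdict(deque)  # source -> deque of (ts, destination) in window
    for ts, src, dst in auth_events:
        q = recent[src]
        q.append((ts, dst))
        while q and q[0][0] < ts - window_seconds:
            q.popleft()
        if len({d for _, d in q}) >= host_threshold:
            flagged.add(src)
    return flagged
```

A backup server or vulnerability scanner will trip this constantly, so pair the detector with an allowlist of hosts whose job is to touch everything.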
Implement network traffic analysis (NTA) as a complement to signature-based IDS. NTA platforms analyze flow records and packet metadata at scale, building behavioral baselines and flagging deviations. They're particularly effective at catching encrypted C2 channels, beaconing, and data staging activity that signature-based rules miss. Platforms like Darktrace, ExtraHop, or open-source options like Zeek with a Kafka pipeline into ELK or Splunk provide this capability at different cost points.
Develop custom detection rules for your specific environment. Generic rule sets cover generic attacker behavior. Your organization has specific applications, specific user behavior patterns, and specific network topology characteristics. A rule that fires when any host in your developer VLAN makes an outbound connection on port 445 is specific to your environment and high-confidence. Generic rules can't be written at that specificity. Your team can.
Handling Alert Fatigue Without Reducing Coverage
Alert fatigue is the enemy of effective IDS operation. When analysts are processing hundreds of alerts per shift, they develop habits that trade accuracy for speed. Alerts get dismissed based on surface characteristics rather than investigation. The SystemBC-style attacks that The DFIR Report documented are specifically designed to look like noise. An analyst working through a queue of 400 alerts in an eight-hour shift is going to misclassify some of them.
Risk-score your alerts rather than treating them as binary alert/no-alert. Assign base scores to rule matches and adjust those scores based on asset criticality, time of day, user behavior history, and threat intelligence correlation. A medium-confidence rule match on a server that holds PII, outside business hours, from a source IP that appeared in a threat report last week, scores much higher than the same rule match on a dev workstation during working hours from an internal IP with no threat association. Route the high-score alerts to immediate analyst attention and batch the lower-score alerts for periodic review.
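The scoring logic above can be expressed as a small multiplier chain. The weights here are illustrative starting points, not calibrated values; tune them against your own incident history:

```python
def score_alert(base_score, asset_criticality, off_hours, intel_hit, user_anomaly):
    """Adjust a rule's base score with the contextual factors described above.

    asset_criticality: "low", "medium", or "high" from your asset inventory.
    off_hours / intel_hit / user_anomaly: booleans from enrichment.
    """
    score = base_score
    score *= {"low": 0.5, "medium": 1.0, "high": 2.0}[asset_criticality]
    if off_hours:
        score *= 1.5   # activity outside business hours
    if intel_hit:
        score *= 2.0   # source or destination appeared in recent threat intel
    if user_anomaly:
        score *= 1.5   # account behaving outside its established history
    return round(score, 1)
```

With these weights, the PII-server example scores six times higher than the same rule match on a low-value workstation, which is exactly the separation you need to route one to an analyst immediately and batch the other.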
Implement a feedback loop between analysts and detection engineering. When an analyst dismisses an alert as a false positive, that determination should feed back into rule tuning. When an alert leads to a confirmed incident, that rule should be reviewed for whether earlier alerts in the same sequence were dismissed. Retrospective analysis of confirmed incidents almost always reveals earlier detection opportunities that were missed due to tuning gaps or analyst decisions made under time pressure.
Data Sources Beyond Endpoint and Perimeter
Effective intrusion detection increasingly depends on data sources that most teams treat as secondary. The recent industry conversation around essential data sources for detection beyond the endpoint points to network infrastructure logs, cloud control plane activity, and identity provider logs as critical gaps in many organizations' detection coverage.
Cloud control plane activity deserves particular attention. Actions taken through cloud provider APIs (IAM role assumption, instance creation, security group modification, S3 bucket access) all happen outside the visibility of traditional network IDS. These events appear in CloudTrail (AWS), Activity Logs (Azure), or Cloud Audit Logs (GCP). Feeding these into your detection pipeline adds coverage for attacks that operate entirely within cloud APIs without generating network traffic that a sensor would ever see.
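A sketch of the simplest useful control plane check: pull watched CloudTrail events whose source IP falls outside your known egress ranges. The Records, eventName, and sourceIPAddress field names match the CloudTrail record format; the trusted network and watched event list are placeholders to replace with your own:

```python
import json
import ipaddress

TRUSTED_NETS = [ipaddress.ip_network("203.0.113.0/24")]  # placeholder corp egress
WATCHED_EVENTS = {"AssumeRole", "ConsoleLogin", "AuthorizeSecurityGroupIngress"}

def suspicious_control_plane_events(cloudtrail_json):
    """Return (eventName, sourceIP) pairs for watched CloudTrail events
    originating outside the trusted ranges."""
    hits = []
    for rec in json.loads(cloudtrail_json).get("Records", []):
        if rec.get("eventName") not in WATCHED_EVENTS:
            continue
        src = rec.get("sourceIPAddress", "")
        try:
            ip = ipaddress.ip_address(src)
        except ValueError:
            continue  # AWS service principals appear as DNS names; skip them
        if not any(ip in net for net in TRUSTED_NETS):
            hits.append((rec["eventName"], src))
    return hits
```

In production you would consume these events from an EventBridge or S3 delivery pipeline rather than raw JSON files, but the filtering logic is the same.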
Identity provider logs from Active Directory, Azure AD, or Okta surface authentication anomalies that network-layer detection misses. Token theft attacks like the router-based Office token harvesting campaign are often most visible in identity provider logs: authentication from unusual locations, token refresh patterns that don't match established user behavior, or impossible travel indicators where the same credential authenticates from geographically distant locations within a short window.
Operationalizing Threat Intelligence in Your IDS
Threat intelligence feeds have the highest value when they're integrated into detection workflow rather than consumed as reports. The Q1 2026 vulnerability and exploit data, the NVIDIA breach details, and the campaign-specific IOCs from events like the watering hole ScanBox campaign all contain indicators that can be operationalized directly.
Structure your threat intelligence integration in tiers. Tier one includes high-confidence, high-severity IOCs that should generate immediate alerts: known C2 infrastructure IPs, malware distribution domains, and attacker-controlled infrastructure from active campaigns. These feed into blocking rules with alert generation. Tier two includes medium-confidence indicators that should generate alerts for analyst review without automatic blocking. Tier three includes contextual intelligence, TTP descriptions, campaign context, and infrastructure patterns that inform hunting queries rather than automated detection.
Automate the ingestion of STIX/TAXII feeds from your threat intelligence providers. Manual IOC import creates lag between intelligence publication and detection coverage. That lag is the window where attackers operating with known infrastructure can move through your environment undetected. Automated ingestion with a verification step (confirming the indicator hasn't expired and still meets confidence thresholds) closes that window without requiring analyst time for every update.
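The verification step described above can be sketched as a filter over parsed STIX 2.1 indicator objects (plain dicts here, as a TAXII client would return after JSON parsing). The confidence floor of 70 is an assumed tier-one cutoff to adjust per feed:

```python
from datetime import datetime, timezone

def ingest_indicators(stix_objects, min_confidence=70, now=None):
    """Filter STIX 2.1 indicator objects down to those worth pushing into
    IDS rules: indicator type only, unexpired, above a confidence floor.

    Returns the accepted STIX patterns for translation into detection rules.
    """
    now = now or datetime.now(timezone.utc)
    accepted = []
    for obj in stix_objects:
        if obj.get("type") != "indicator":
            continue
        if obj.get("confidence", 0) < min_confidence:
            continue
        valid_until = obj.get("valid_until")
        if valid_until:
            expiry = datetime.fromisoformat(valid_until.replace("Z", "+00:00"))
            if expiry <= now:
                continue  # expired indicators generate stale alerts
        accepted.append(obj["pattern"])
    return accepted
```

Rejected indicators should still land in a hunting dataset; an expired C2 IP is useless for blocking but valuable for retrospective log searches.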
Testing Your Detection Before an Attacker Does
Purple team exercises validate whether your IDS configuration actually catches the techniques that matter. Run these exercises quarterly at minimum. Give your red team a specific scenario (for example, initial access via a watering hole attack, proxy-based C2 using SystemBC-like infrastructure, and lateral movement toward a domain controller) and measure how many of their actions generated alerts, how quickly those alerts were triaged, and whether the analyst response would have contained the simulated attack.
Atomic tests using the Atomic Red Team framework let you validate specific detection coverage for individual techniques without running a full exercise. Test whether your IDS fires on a DCSync attempt. Test whether beaconing detection catches a simulated beacon at your threshold. Test whether your DNS monitoring catches a DGA-like query pattern. This kind of targeted validation identifies gaps in specific coverage areas and gives your detection engineering team a prioritized backlog of improvements to work through.
Document your detection coverage against MITRE ATT&CK explicitly. Map each of your active detection rules to the techniques they cover. Identify techniques in the tactics most relevant to your threat model (initial access, execution, persistence, lateral movement, exfiltration) that have no coverage. Those gaps are where your next detection development effort should focus.
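The gap computation is trivial once the mapping exists, which is an argument for maintaining it as structured data rather than a wiki page. The rule names below are hypothetical; the technique IDs are real ATT&CK identifiers:

```python
def coverage_gaps(rule_map, required_techniques):
    """Given detection rule name -> set of ATT&CK technique IDs it covers,
    return the required techniques with no coverage at all, sorted for
    stable reporting."""
    covered = set().union(*rule_map.values()) if rule_map else set()
    return sorted(required_techniques - covered)
```

Running this per tactic against the techniques in your threat model turns "where should detection engineering work next" into a generated report instead of a recurring debate.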
Practical Takeaways for the Next 72 Hours
The most important thing your team can do right now is complete a coverage audit. Pull your alert data, map your sensor placement against your network topology, and identify the three biggest gaps between where attackers are operating and where your detection is looking. For most organizations, those gaps are east-west traffic monitoring, cloud control plane visibility, and behavioral detection for encrypted C2 traffic.
After the audit, pick one gap and fix it this week. A new sensor on the internal routing boundary, Zeek logging enabled on an existing sensor, or a threat intelligence feed integrated into your SIEM. One concrete improvement implemented beats three improvements planned. The attacker who is already inside your environment isn't waiting for your quarterly roadmap review.