A Breach That Looked Like Noise
A mid-sized financial services firm running a well-resourced security operations center discovered a compromise six weeks after initial access. The attacker had moved through the network methodically, harvesting credentials, staging data, and eventually exfiltrating client records. The intrusion detection system had fired alerts throughout the six-week window. Every single one was triaged, reviewed, and closed as a false positive or low-priority event. The alerts were technically correct. The rules were working. The deployment, however, was not.
This scenario is not unusual. The gap between an IDS that generates alerts and one that actually supports detection of real threats is where most organizations find themselves. Closing that gap requires looking beyond signature coverage and into the operational architecture of how detection systems are deployed, maintained, and acted upon.
Why Coverage Assumptions Betray Defenders
Most IDS deployments are built around the assumption that threats arrive at the network perimeter. Sensors get placed at ingress and egress points, rules get loaded from a commercial or open-source feed, and teams move on. The problem is that contemporary attackers rarely behave in ways that trigger perimeter-focused detection.
The Russia-linked operation that abused Microsoft Office authentication tokens by compromising router infrastructure is a clear example of this dynamic. The attacker did not crash through the front door. They manipulated trusted infrastructure to harvest tokens quietly, then operated inside environments that treated their activity as legitimate. A perimeter-focused IDS watching for inbound exploit traffic would have contributed very little to detecting that campaign.
Ivanti's recent disclosure of a new EPMM flaw exploited in zero-day attacks follows a similar pattern. Attackers targeting mobile management infrastructure are operating inside trust boundaries that most IDS deployments treat as safe zones. When the exploitation surface is a trusted management platform, your sensor placement and rule focus determine whether you see anything meaningful at all.
The Visibility Problem Starts With Sensor Placement
Effective IDS coverage requires sensors positioned to observe east-west traffic, not just north-south flows. Lateral movement, credential abuse, and data staging all happen inside the network. If your sensors cannot see host-to-host communication across segments, you are watching the edges of a fire while the interior burns.
Practical placement guidance for network-based IDS sensors:
- Place sensors at each network segment boundary, not just at the perimeter firewall.
- Monitor traffic between your server farms and your endpoint subnets separately from your internet-facing zones.
- Ensure sensors have visibility into management network traffic, where tools like Ivanti EPMM, remote access platforms, and authentication infrastructure live.
- Deploy host-based IDS or EDR on critical servers to complement network visibility, particularly for encrypted east-west traffic that network traffic analysis (NTA) cannot inspect without SSL decryption infrastructure.
SSL/TLS decryption is a point of real operational tradeoff. Decrypting internal east-west traffic allows your IDS to inspect payloads that would otherwise be opaque, but it introduces latency, certificate management complexity, and potential compliance considerations depending on your industry. The decision should be deliberate and documented rather than deferred indefinitely.
Rule Quality Versus Rule Quantity
Loading every available signature from every available feed produces high-volume alert streams that exhaust analyst capacity and train teams to ignore noise. This is not a theoretical risk. The financial services case at the start of this article is representative of what happens when signal-to-noise ratios collapse.
A more productive approach treats rule management as an active, continuous process rather than a one-time configuration task. Start by categorizing your existing rules by confidence and relevance to your actual environment. Rules that have never fired in twelve months in a production environment with meaningful traffic deserve scrutiny. They may be relevant signatures waiting for an attack that has not come, or they may be rules that will never fire because the traffic they detect does not exist in your network topology.
Rules that fire constantly and produce near-zero confirmed true positives are the more urgent problem. These rules train analysts to suppress alerts and undermine confidence in the system as a detection tool. Suppressing or retiring these rules is not a reduction in coverage; it is a restoration of detection capacity.
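The triage described above can be sketched as a simple bucketing pass over per-rule statistics. The `RuleStats` shape, the example rule IDs, and the thresholds below are illustrative assumptions, not properties of any particular SIEM export:

```python
from collections import namedtuple

# Hypothetical per-rule statistics, e.g. exported from a SIEM over a
# twelve-month review window. Field names are illustrative assumptions.
RuleStats = namedtuple("RuleStats", "rule_id fires true_positives")

def triage_rules(stats, noisy_fires=1000, min_tp_rate=0.01):
    """Bucket rules into review queues by fire count and confirmed-TP rate."""
    never_fired, noisy, healthy = [], [], []
    for s in stats:
        if s.fires == 0:
            never_fired.append(s.rule_id)   # verify relevance to your topology
        elif s.fires >= noisy_fires and s.true_positives / s.fires < min_tp_rate:
            noisy.append(s.rule_id)         # candidates for tuning or retirement
        else:
            healthy.append(s.rule_id)
    return never_fired, noisy, healthy

stats = [
    RuleStats("ET-1001", 0, 0),        # never fired: scrutinize relevance
    RuleStats("ET-1002", 45000, 2),    # constant noise, near-zero TP rate
    RuleStats("ET-1003", 120, 6),      # reasonable signal
]
never, noisy, healthy = triage_rules(stats)
```

The thresholds deserve the same scrutiny as the rules: a `min_tp_rate` that is appropriate for a commodity-exploit signature may be far too strict for a rare but high-severity behavioral rule.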
Building Rules Around Observable Attacker Behaviors
The most durable detection logic targets attacker behaviors rather than specific exploit payloads or file hashes. The PCPJack campaign that followed the TeamPCP malware operation demonstrates why. When attackers modify tooling or swap implants, signature-based detection on specific byte sequences fails. Detection logic that targets the behavior of stealing cloud credentials, exfiltrating to external storage APIs, or establishing persistence through scheduled tasks persists across tooling changes.
MITRE ATT&CK provides a practical framework for building behavior-based detection rules. Map your current rule coverage against the tactics and techniques relevant to your threat model. Identify gaps, particularly in the persistence, credential access, and lateral movement categories, which tend to be underrepresented in commercial signature feeds relative to initial access and execution.
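A minimal version of that gap analysis is a set difference between the techniques your rules claim to cover and the techniques in your threat model. The technique IDs below are real ATT&CK identifiers, but the rule-to-technique mapping is a hypothetical example:

```python
# Hypothetical mapping of active IDS rules to the ATT&CK technique IDs
# they are intended to detect.
rule_coverage = {
    "sig-4012": {"T1110"},         # Brute Force
    "sig-5120": {"T1021.002"},     # SMB/Windows Admin Shares
    "sig-7301": {"T1071.004"},     # DNS application-layer C2
}

# Techniques relevant to this (illustrative) threat model.
threat_model = {
    "T1110",        # Brute Force (credential access)
    "T1021.002",    # SMB lateral movement
    "T1053.005",    # Scheduled Task (persistence)
    "T1071.004",    # DNS-based C2
    "T1567.002",    # Exfiltration to Cloud Storage
}

covered = set().union(*rule_coverage.values())
gaps = sorted(threat_model - covered)
# 'gaps' lists techniques with no active detection logic at all;
# here persistence and cloud exfiltration are uncovered.
```

Even this crude diff tends to confirm the pattern noted above: persistence, credential access, and lateral movement are where the uncovered techniques cluster.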
For network-based detection, behavioral rules worth implementing include:
- Anomalous DNS query volume from individual hosts, which can indicate beaconing or data exfiltration over DNS.
- Internal hosts communicating with cloud storage APIs outside of approved application workflows, a pattern relevant to cloud secret theft campaigns like PCPJack.
- SMB lateral movement patterns, including rapid sequential authentication attempts across multiple hosts within short time windows.
- Token replay indicators, such as authentication events from geographic locations or ASNs inconsistent with a user's established baseline.
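The first bullet, anomalous DNS query volume per host, can be prototyped as a z-score check against the fleet. The z-threshold and the toy event shapes are assumptions; production logic needs per-host historical baselines rather than a single fleet snapshot:

```python
from collections import Counter
import statistics

def dns_volume_outliers(events, z_threshold=3.0):
    """Flag hosts whose DNS query count deviates sharply from the fleet.

    `events` is an iterable of (host, query) pairs; the z-score threshold
    is a tunable assumption, not a universal constant.
    """
    counts = Counter(host for host, _ in events)
    values = list(counts.values())
    if len(values) < 2:
        return []
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [h for h, c in counts.items() if (c - mean) / stdev > z_threshold]

# Twenty quiet workstations, plus one host whose query volume is
# consistent with beaconing or DNS tunnelling.
events = [(f"ws-{i:02d}", "corp.example") for i in range(20) for _ in range(10)]
events += [("ws-66", f"x{i}.bad.example") for i in range(500)]
suspects = dns_volume_outliers(events)
```

A snapshot comparison like this is deliberately simple; hosts with legitimately heavy DNS use (mail relays, proxies) need their own baselines or an allowlist.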
The Alert Fatigue Cycle and How to Break It
Alert fatigue is a process problem, not a staffing problem. Hiring more analysts to close more tickets in the same broken workflow produces the same outcome faster. The fix is architectural.
Tiered alerting is the most effective structural change most teams can make. Not every detection event warrants a paged alert requiring immediate human triage. Build a priority classification that distinguishes between high-confidence, high-severity detections that require immediate action and lower-confidence signals that should be correlated, aggregated, and reviewed on a scheduled cadence rather than as individual tickets.
Correlation rules that group related low-confidence events into a single investigation case reduce analyst workload while preserving detection coverage. A single authentication failure from an unusual IP is low-priority noise. Five authentication failures across three different service accounts from the same source over fifteen minutes, combined with an outbound DNS query to a newly registered domain, is an investigation case. Your SIEM or SOAR platform should be configured to surface the latter as a unified event rather than five separate low-priority tickets.
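The authentication-failure example above can be sketched as a correlation pass that groups events by source and promotes a cluster to a case only when the combined conditions hold. Event field names and thresholds are illustrative assumptions, not a SOAR platform's schema:

```python
from datetime import datetime, timedelta

def correlate(events, window=timedelta(minutes=15), min_auth_failures=5):
    """Group low-confidence events per source into investigation cases."""
    by_source = {}
    for e in sorted(events, key=lambda e: e["time"]):
        by_source.setdefault(e["src"], []).append(e)
    cases = []
    for src, evts in by_source.items():
        start = evts[0]["time"]
        in_window = [e for e in evts if e["time"] - start <= window]
        failures = [e for e in in_window if e["type"] == "auth_failure"]
        accounts = {e["account"] for e in failures}
        has_nrd = any(e["type"] == "dns_nrd" for e in in_window)
        # Promote to a case only when failures span multiple service
        # accounts AND a newly-registered-domain lookup co-occurs.
        if len(failures) >= min_auth_failures and len(accounts) >= 3 and has_nrd:
            cases.append({"src": src, "event_count": len(in_window)})
    return cases

t0 = datetime(2024, 5, 1, 9, 0)
events = [
    {"time": t0 + timedelta(minutes=i), "src": "10.9.8.7",
     "type": "auth_failure", "account": f"svc-{i % 3}"}
    for i in range(5)
]
events.append({"time": t0 + timedelta(minutes=6), "src": "10.9.8.7",
               "type": "dns_nrd", "account": None})
cases = correlate(events)
```

The point is the shape of the logic, not the numbers: individually sub-threshold events surface as one unified case instead of six separate low-priority tickets.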
Feedback Loops That Keep Tuning Current
Alert tuning without feedback loops produces temporary improvement that degrades over time as the threat landscape changes. Build a formal process for closing the loop between analyst decisions and rule adjustments.
When an analyst closes an alert as a false positive, that decision should feed directly into a review queue for the rule owner. If the same rule produces ten false-positive closures in a week, it warrants immediate review. When an alert is confirmed as a true positive and leads to an incident response engagement, the detection logic that fired should be reviewed for strengthening and the detection gap that allowed the attacker to operate prior to detection should be documented and addressed.
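The ten-closures-in-a-week trigger described above is easy to automate against a ticketing export. The tuple shape and disposition strings are assumed, not any particular platform's API:

```python
from collections import Counter
from datetime import datetime, timedelta

def rules_needing_review(closures, now, window=timedelta(days=7), threshold=10):
    """Return rule IDs with >= threshold false-positive closures in the window.

    `closures` is an iterable of (rule_id, closed_at, disposition) tuples,
    an assumed export format from a ticketing system.
    """
    recent_fps = Counter(
        rule_id
        for rule_id, closed_at, disposition in closures
        if disposition == "false_positive" and now - closed_at <= window
    )
    return sorted(r for r, n in recent_fps.items() if n >= threshold)

now = datetime(2024, 5, 8)
closures = [("sig-221", now - timedelta(days=1), "false_positive")] * 10
closures += [("sig-300", now - timedelta(days=2), "false_positive")] * 3
closures += [("sig-221", now - timedelta(days=30), "false_positive")] * 5  # outside window
flagged = rules_needing_review(closures, now)
```

Running this on a schedule and routing the output to the rule owner's queue is the feedback loop in its simplest form.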
Red team exercises and purple team engagements are the most direct way to validate that your IDS is actually detecting what you believe it detects. Running attacker simulations against production detection infrastructure, with visibility into what fires and what does not, exposes gaps that log review and theoretical analysis cannot surface. Schedule these engagements at least quarterly and ensure results feed back into your rule tuning backlog.
Encrypted Traffic, AI Extensions, and New Blind Spots
The browser extension threat that surfaced recently around AI writing tools illustrates a detection challenge that will only grow. Extensions operating inside authenticated browser sessions can read email content, form data, and session tokens without generating network traffic that looks anomalous to a perimeter-focused IDS. The traffic is HTTPS, the destination may be a legitimate cloud service, and the volume is consistent with normal user activity.
Detecting this class of threat requires moving detection logic to the endpoint. Host-based monitoring of browser extension activity, combined with behavioral analytics on outbound traffic patterns from individual workstations, provides coverage that network-based IDS alone cannot.
ClickFix-style attacks pushing Vidar Stealer, which Australia's cyber security authority recently warned about, follow a similar pattern. The initial delivery mechanism abuses legitimate user interaction, bypassing perimeter controls. The IDS value in these scenarios comes from detecting post-compromise behaviors: credential access patterns, data staging in user-accessible paths, and communication with command and control infrastructure.
Practical controls to address encrypted and endpoint-resident threats:
- Deploy endpoint detection and response platforms alongside network-based IDS, treating them as complementary rather than redundant.
- Implement DNS filtering with logging and alerting on newly registered or low-reputation domains, which provides detection coverage for C2 communication that HTTPS inspection cannot see.
- Monitor for browser extension installations and removals as part of endpoint event logging, flagging extensions from sources outside approved repositories.
- Establish outbound traffic baselines per user segment and alert on deviations in upload volume, particularly to cloud storage and communication platforms.
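The last control, baselining outbound volume per segment, can be sketched as a comparison against a historical median. The segment names, units, and the 3x-median multiplier are illustrative assumptions to be tuned per environment:

```python
import statistics

def upload_deviations(baseline_mb, today_mb, multiplier=3.0):
    """Flag segments whose upload volume far exceeds their historical median.

    `baseline_mb` maps segment -> list of prior daily upload totals (MB);
    the 3x-median multiplier is an illustrative threshold, not a standard.
    """
    alerts = []
    for segment, history in baseline_mb.items():
        median = statistics.median(history)
        observed = today_mb.get(segment, 0)
        if median > 0 and observed > multiplier * median:
            alerts.append((segment, observed, median))
    return alerts

baseline = {"finance": [120, 135, 110, 128, 140], "eng": [900, 950, 870]}
today = {"finance": 2200, "eng": 1000}   # finance spikes; eng is within range
alerts = upload_deviations(baseline, today)
```

A median baseline is more robust to one-off spikes than a mean, but segments with bursty legitimate workloads (backups, release pushes) still need scheduled-exception handling.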
Malware That Spreads Through Trusted Channels
The TCLBanker malware spreading through WhatsApp and Outlook represents a detection problem that sits at the intersection of endpoint and network visibility. Self-propagating malware using legitimate communication platforms generates traffic that looks indistinguishable from normal user activity at the network layer. Detection depends on endpoint-side behavioral analysis and, critically, on IDS rules that watch for the post-infection behaviors rather than the propagation mechanism itself.
When malware spreads through trusted applications, the network-based IDS opportunity lies downstream of infection. Watch for behavioral anomalies on hosts that have received messages from internal sources: unexpected process launches, credential access events, and outbound connections to infrastructure not present in historical baselines. The propagation event you cannot detect is less important than the subsequent behaviors you can.
Connecting Threat Intelligence to Active Detection
Threat intelligence feeds provide value proportional to how quickly and accurately they are translated into active detection logic. A feed that generates IP reputation data or domain indicators has limited value if those indicators sit in a threat intelligence platform and never reach your IDS rule set or SIEM correlation logic.
Build an operational pipeline that moves actionable indicators from intelligence sources into detection infrastructure within defined time windows. High-confidence indicators tied to active campaigns should reach your IDS within hours. Lower-confidence indicators can follow a longer validation and testing process before production deployment.
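The tiered pipeline above amounts to routing each indicator to a queue with a per-tier deployment deadline. Tier names, queue names, and SLA values below are assumptions to be set per organization:

```python
from datetime import timedelta

# Per-confidence deployment SLAs: assumed values, not a standard.
SLA = {
    "high": timedelta(hours=4),    # active-campaign IOCs: straight to IDS
    "medium": timedelta(days=2),   # validate against recent traffic first
    "low": timedelta(days=7),      # lab testing before production
}

def route_indicator(ioc, confidence):
    """Assign an indicator to a deployment queue with a deadline."""
    deadline = SLA[confidence]
    queue = "ids_production" if confidence == "high" else "staging_validation"
    return {"ioc": ioc, "queue": queue, "deploy_within": deadline}

r = route_indicator("badcdn.example", "high")
```

The useful discipline is not the routing itself but measuring it: track how often indicators actually reach the IDS within their SLA, and treat misses as pipeline defects.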
For indicators derived from current campaigns, recent reporting on ransomware operators' patience before detonation is particularly relevant. Attackers in pre-ransomware stages operate quietly for days or weeks, staging, testing, and validating their access. The IDS opportunity in these scenarios is detecting the staging and reconnaissance activity, which is subtle but observable if you are watching the right traffic with tuned rules and sufficient historical context for anomaly detection.
Operational Metrics That Actually Reflect Detection Health
Measuring IDS health by alert volume or rule count tells you very little about actual detection effectiveness. More useful metrics include mean time to detect confirmed intrusions, false positive rate by rule category, percentage of ATT&CK technique coverage with active detection logic, and the frequency with which red team simulation events are detected versus missed.
Track these metrics over time and review them in regular operational cadences. An increase in mean time to detect may indicate analyst capacity problems, rule degradation, or attacker adaptation to your detection patterns. A sustained high false positive rate in a specific rule category indicates tuning debt that is degrading your overall detection capacity.
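Mean time to detect, the first metric named above, is straightforward to compute once incident timelines are established. The pair-of-timestamps shape is a simplification; real incidents require forensically established compromise times, which are often estimates:

```python
from datetime import datetime, timedelta
from statistics import mean

def mean_time_to_detect(incidents):
    """Mean delay between initial compromise and first confirmed detection.

    `incidents` is a list of (compromise_time, detection_time) pairs,
    a simplified shape for illustration.
    """
    deltas = [detected - compromised for compromised, detected in incidents]
    return timedelta(seconds=mean(d.total_seconds() for d in deltas))

incidents = [
    (datetime(2024, 1, 1), datetime(2024, 1, 4)),   # detected after 3 days
    (datetime(2024, 2, 1), datetime(2024, 2, 8)),   # detected after 7 days
]
mttd = mean_time_to_detect(incidents)
```

Because compromise times are estimates, trend direction across quarters is usually more meaningful than the absolute number.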
Document your detection gaps explicitly. Every organization has them. The teams that manage detection gaps effectively know where they are, have compensating controls in place, and have a remediation roadmap with defined timelines. The teams that manage poorly operate with gaps they do not know exist until an attacker exploits them.
Making the IDS Work for the Team Using It
An intrusion detection system is only as effective as the team operating it, and teams operate effectively when they have clear workflows, manageable alert volumes, and detection logic they trust. The technical configuration of your IDS matters less than whether your analysts believe the alerts it generates are worth investigating.
Invest in analyst training that goes beyond platform operation. Analysts who understand attacker behaviors, know what techniques are active in current campaigns, and can read network traffic contextually make better triage decisions than analysts who follow decision trees against alert descriptions. The npm threat landscape reporting, the Ivanti zero-day disclosure, and the router compromise campaign attributed to Russian state actors are all training materials for analysts who need to understand what active threats look like in practice.
The organizations that detect intrusions quickly share a common characteristic. They have built detection systems their teams trust, with workflows that surface real threats without burying them in noise, and they maintain those systems as living infrastructure rather than point-in-time deployments. That is the standard worth building toward.