What Does Your IDS Actually See When Attackers Operate Below the Alert Threshold?

By IPThreat Team | May 8, 2026

When Silence in the Console Means Something Went Wrong

In early 2025, a mid-sized financial services firm discovered that an attacker had maintained persistent access to their environment for eleven days before a single high-severity IDS alert fired. The intrusion detection system was running, signatures were current, and the security team was monitoring dashboards daily. The attacker had simply stayed below the noise floor: low-frequency port scans spread across four-hour windows, lateral movement disguised as routine SMB traffic, and credential harvesting timed to overlap with morning login surges.

The IDS logged fragments of each activity. None of them individually crossed the configured thresholds. Nobody correlated them. By the time ransomware deployed, the attacker had already exfiltrated roughly 40GB of data through what the system had categorized as routine HTTPS outbound traffic.

This scenario is not unusual. It reflects a structural gap that exists in most IDS deployments: the system is configured to detect known, high-confidence attack patterns at volume, but attackers in 2026 operate with explicit awareness of those thresholds. The threat intelligence community has documented this pattern repeatedly, and recent headlines reinforce it. Russia's campaign targeting routers to steal Microsoft Office tokens relied on low-and-slow techniques that generated minimal IDS noise. The PCPJack threat actor stealing cloud credentials moved laterally using methods that looked like administrative activity to most signature-based sensors. The calm before a ransomware deployment, as multiple incident reports have described it, often looks like nothing at all in the alert console.

This article addresses the structural problems that allow attackers to operate below detection thresholds, and what cybersecurity professionals and IT administrators can do practically to close that gap.

Understanding the Alert Threshold Problem

Most IDS platforms, whether network-based (NIDS) or host-based (HIDS), rely on two detection mechanisms: signature matching and anomaly scoring. Signature matching compares traffic or behavior against known-bad patterns. Anomaly scoring compares current activity against a learned baseline and fires when deviations exceed a configured threshold.

Both mechanisms share a common vulnerability: they are point-in-time assessments. A single connection attempt, a single process execution, a single failed authentication — evaluated in isolation, these events rarely breach alert thresholds. Attackers who understand this operate in what defenders often call the sub-threshold space, keeping each individual action below the detection line while accumulating meaningful progress over time.

The Ivanti EPMM zero-day exploitation disclosed in recent weeks illustrates this precisely. Attackers exploiting that vulnerability in the wild did not trigger mass alerts. They sent carefully crafted requests that looked, to many IDS configurations, like slightly malformed but plausible API calls. Organizations running Ivanti without detection logic tuned to that specific request pattern saw nothing until the exploitation was complete.

The ClickFix campaign pushing Vidar Stealer, flagged by Australian cybersecurity authorities, used a similar technique at the endpoint level: execution chains that individually matched legitimate Windows behaviors, but collectively represented data theft. Host-based IDS solutions configured with default rules largely missed the early stages.

The Four Structural Gaps That Allow Sub-Threshold Operations

1. Threshold Calibration Based on Volume Rather Than Context

Most IDS deployments set alert thresholds based on avoiding alert fatigue. If a rule fires hundreds of times per day on benign traffic, administrators raise the threshold or disable the rule. This is rational from an operational standpoint but creates a systematic blind spot for low-frequency malicious activity.

An attacker conducting reconnaissance with one port probe every fifteen minutes generates exactly the kind of traffic that gets filtered out during threshold tuning. The same is true for credential stuffing attempts spread across a large IP space, lateral movement using legitimate administrative protocols, and staged data exfiltration through encrypted channels.

The practical fix here is not to lower thresholds universally — that re-introduces alert fatigue — but to implement tiered thresholds based on asset criticality and traffic context. A single authentication failure from an external IP against a domain controller should carry different weight than the same event against a shared workstation. Most platforms support this through rule prioritization and asset classification, but few organizations configure it systematically.
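The tiered-threshold idea can be sketched as a simple scoring layer. This is a hypothetical illustration, not any platform's actual API: the asset classes, weights, and threshold value are all assumptions that would come from your own inventory and tuning.

```python
# Hypothetical sketch: weight IDS events by asset criticality so a single
# event against a domain controller can cross the alert line while the
# same event against a workstation stays below it. All values illustrative.
ASSET_CRITICALITY = {          # assumed tags from an asset inventory
    "domain_controller": 10,
    "app_server": 5,
    "workstation": 1,
}

ALERT_THRESHOLD = 10           # score at which an event escalates to an alert

def score_event(base_severity: int, asset_class: str) -> int:
    """Multiply a rule's base severity by the asset's criticality weight."""
    return base_severity * ASSET_CRITICALITY.get(asset_class, 1)

def should_alert(base_severity: int, asset_class: str) -> bool:
    return score_event(base_severity, asset_class) >= ALERT_THRESHOLD

# One failed external authentication (base severity 2) stays sub-threshold
# on a workstation (score 2) but escalates on a domain controller (score 20).
```

In a real deployment this weighting lives in the SIEM or IDS rule priorities rather than application code, but the contrast it produces is the point: identical events, different asset context, different outcome.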

2. Insufficient Correlation Across Time Windows

Standard IDS correlation rules typically operate over short time windows, often five to fifteen minutes. This is appropriate for detecting brute force attacks and flood-based exploits, but it misses activity that is deliberately spread over hours or days.

The TCLBanker malware spreading through WhatsApp and Outlook demonstrates how attackers think about temporal distribution. Initial infection vectors, lateral movement, and data staging happened across distinct time periods, with each phase designed to look routine within its own observation window. An IDS looking at fifteen-minute windows saw nothing remarkable in any of them.

Extending correlation windows is computationally expensive and increases false positive rates if done without careful rule design. The practical approach is to identify specific high-value attack sequences and write long-window correlation rules for those patterns rather than applying extended windows globally. One example is authentication failures, then a successful login, then large file access, all within a 48-hour period. Another is outbound connection attempts to a new external IP followed by increased DNS query volume six hours later.

3. Blind Spots in Encrypted Traffic

A significant portion of modern attack traffic is encrypted, and most IDS deployments have limited visibility into it. Network-based IDS sensors operating on raw traffic cannot inspect TLS payloads without SSL inspection infrastructure. Many organizations have deployed SSL inspection at the perimeter but not internally, leaving east-west encrypted traffic largely opaque.

The npm threat landscape update published this week highlighted how malicious packages communicate with command-and-control infrastructure exclusively over HTTPS with valid certificates, specifically to defeat perimeter IDS inspection. The AI browser extension data exfiltration problem, where extensions read email content and transmit it outbound, follows the same pattern: all traffic is HTTPS, all certificates are valid, and the payload inspection that would reveal the theft never happens.

Organizations need to evaluate where SSL inspection is feasible and prioritize deployment at choke points with the highest exposure. Internal traffic between sensitive segments is often overlooked. Certificate transparency monitoring and JA3/JA4 TLS fingerprinting provide partial visibility into encrypted channels without full payload decryption and are underutilized in most environments.

4. Missing Behavioral Baselines for Privileged Accounts

Attackers who gain access to privileged credentials, an increasingly common scenario given the volume of credential theft through browser extensions, phishing, and malware, generate activity that looks legitimate to signature-based IDS. A compromised domain admin account accessing file servers looks identical to a legitimate domain admin performing the same action.

Behavioral baselining for privileged accounts is one of the most effective controls against this class of attack and one of the most commonly absent. The Russia-linked router compromise campaign targeting Microsoft Office tokens succeeded in part because the stolen tokens allowed attackers to impersonate legitimate users, and most of the subsequent activity matched expected behavior patterns for those users.

Implementing user and entity behavior analytics (UEBA) as a complement to signature-based IDS addresses this gap. UEBA systems establish per-account behavioral baselines and flag deviations: a user account that normally authenticates from one geographic region suddenly accessing systems from another, or an account that never runs PowerShell suddenly executing encoded scripts. This requires integration between your IDS, identity provider, and endpoint telemetry, which adds architectural complexity, but the detection value for credential abuse scenarios is substantial.
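The core of the baselining idea can be reduced to a small sketch: record the values an account has historically exhibited per feature, then flag first-seen deviations once the learning period ends. Real UEBA products use statistical models rather than set membership; this simplified version is only meant to show the shape of the mechanism.

```python
from collections import defaultdict

# Hypothetical per-account baseline: track (feature, value) pairs the
# account has exhibited and flag first-seen values after the learning phase.
class AccountBaseline:
    def __init__(self):
        # e.g. seen["geo"] = {"DE"}, seen["process"] = {"excel.exe"}
        self.seen = defaultdict(set)
        self.learning = True     # suppress alerts during the baseline period

    def observe(self, feature: str, value: str) -> bool:
        """Record an observation; return True if it deviates from baseline."""
        novel = value not in self.seen[feature]
        self.seen[feature].add(value)
        return novel and not self.learning
```

The examples in the text map directly onto this: a new authentication region or a first-ever PowerShell execution is a novel `(feature, value)` pair for that account, regardless of whether any signature matches.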

Practical IDS Configuration Improvements

Asset Tagging and Zone-Based Rule Application

The single most impactful configuration change most organizations can make is systematic asset tagging and zone-based rule application. Rather than applying a uniform ruleset across all monitored traffic, define network zones based on asset sensitivity and apply progressively more sensitive detection rules to higher-value zones.

Domain controllers, certificate authorities, and privileged access workstations should have the most sensitive detection profiles, including rules that fire on single anomalous events rather than requiring volume thresholds. Production application servers handling customer data warrant a second tier. Standard user workstations can operate with higher thresholds where false positive management is more important.

Most commercial IDS platforms support this through variable sets or asset profiles. Snort and Suricata both support variable-based rule customization that allows zone-specific threshold configurations. Implementing this requires an accurate, actively maintained asset inventory, a prerequisite that many organizations underinvest in.
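As an illustration of the zone-variable approach, the fragment below sketches how Suricata address-group variables can back rules with different thresholds per zone. The address ranges, SIDs, ports, and threshold values are illustrative placeholders, not recommended settings.

```
# suricata.yaml (excerpt): zone variables -- address ranges are illustrative
vars:
  address-groups:
    DC_SERVERS: "[10.10.1.0/24]"
    USER_NET:   "[10.20.0.0/16]"

# rules file (excerpt): a single inbound SMB connection attempt against the
# DC zone alerts immediately, while the same traffic against the user zone
# must reach 50 attempts from one source in 5 minutes. SIDs illustrative.
alert tcp $EXTERNAL_NET any -> $DC_SERVERS 445 (msg:"SMB connection attempt to DC zone"; flags:S; sid:1000001; rev:1;)
alert tcp $EXTERNAL_NET any -> $USER_NET 445 (msg:"SMB connection sweep in user zone"; flags:S; threshold: type threshold, track by_src, count 50, seconds 300; sid:1000002; rev:1;)
```

The same pair of rules against uniform variables would force a choice between noise on workstations and blindness on domain controllers; splitting the zones removes that trade-off.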

Long-Window Correlation Rules for High-Value Attack Sequences

Write specific long-window correlation rules for the attack sequences most relevant to your threat model. For most organizations in 2026, these should include:

  • Reconnaissance followed by exploitation attempts against internet-facing services over a 24-hour window, specifically targeting device types mentioned in recent advisories such as Ivanti EPMM and network edge appliances
  • Successful external authentication followed by access to sensitive internal resources within 48 hours, with particular attention to cloud management consoles and secrets stores given the PCPJack campaign pattern
  • Staged data aggregation: file copy operations that individually are small but cumulatively represent large volumes over multiple days
  • DNS query patterns that suggest C2 beaconing: regular intervals, consistent payload sizes, or queries to recently registered domains

SIEM platforms with IDS integration, including Splunk, Microsoft Sentinel, and Elastic Security, support long-window correlation through scheduled searches and alert chaining. The computational cost is manageable when rules are specific rather than broad.
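The DNS beaconing pattern in the list above lends itself to a compact heuristic: command-and-control beacons tend to fire at near-constant intervals, so a low coefficient of variation across inter-query gaps is suspicious. The thresholds below are illustrative starting points, not tuned values, and real detections would also weigh jitter, payload sizes, and domain age.

```python
import statistics

# Hypothetical beacon heuristic over DNS query timestamps for one domain.
def looks_like_beacon(timestamps, max_cv=0.1, min_queries=10):
    """timestamps: sorted epoch seconds of DNS queries to a single domain."""
    if len(timestamps) < min_queries:
        return False                      # too little data to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return False
    cv = statistics.pstdev(gaps) / mean   # coefficient of variation
    return cv <= max_cv                   # near-constant interval = suspicious
```

Many malware families add deliberate jitter to defeat exactly this check, which is why interval regularity should be one signal among several rather than a standalone rule.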

Deploying Deception Assets as IDS Supplements

Honeypots and deception-based detection assets have a significant advantage over threshold-based IDS: any interaction with them is definitionally anomalous. A legitimate internal user or process has no reason to connect to a honeypot share, authenticate against a fake domain controller, or query a deception DNS record.

Deploying deception assets in high-value network segments provides detection coverage for the sub-threshold activity that IDS misses. An attacker conducting slow reconnaissance across your internal network will eventually interact with a deception asset. When they do, you get a high-fidelity alert with full context rather than a volume-based trigger that requires correlation.

The implementation cost is relatively low compared to the detection value. A small set of well-placed honeypot systems, fake DNS records for internal services, and canary credentials in plausible locations (password manager entries that trigger alerts when used, for example) can catch lateral movement that generates no traditional IDS alerts.
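A minimal canary service illustrates why the fidelity is so high: the listener below accepts connections on a port nothing legitimate should touch and treats every hit as an alert. This is a bare sketch with an illustrative port number, not a hardened deception product.

```python
import socket
import threading   # used by callers that run the listener in the background
import time        # used by callers that need to wait for startup
from datetime import datetime, timezone

# Hypothetical canary listener: any connection is anomalous by definition.
def canary_listener(host="0.0.0.0", port=4445, on_touch=print):
    """Accept connections on a deception port and report every touch."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        # No threshold needed: a single touch is a high-fidelity signal.
        on_touch(f"{datetime.now(timezone.utc).isoformat()} "
                 f"canary touched by {addr[0]}:{addr[1]}")
        conn.close()
```

In practice `on_touch` would post to your SIEM or paging system rather than print, and the canary's IP and service banner would be chosen to look like a plausible internal asset.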

JA3/JA4 Fingerprinting for Encrypted Traffic Visibility

JA3 and the newer JA4 fingerprinting techniques allow network sensors to fingerprint TLS client behavior without decrypting traffic. Specific malware families, remote access tools, and exploitation frameworks have recognizable JA3/JA4 signatures because they implement TLS in characteristic ways.

Adding JA4 fingerprint matching to your NIDS configuration provides a detection layer for encrypted malicious traffic that does not require SSL inspection infrastructure. Suricata supports JA4 natively in recent versions. Commercial platforms including Darktrace and ExtraHop have integrated JA4 correlation into their detection pipelines.

The caveat is that sophisticated attackers aware of JA fingerprinting will use legitimate TLS libraries to normalize their fingerprints. JA4 is not a complete solution for encrypted traffic, but it catches a meaningful portion of commodity malware and exploitation frameworks that use custom or identifiable TLS implementations.
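Operationally, fingerprint matching reduces to comparing hashes extracted from flow logs (Suricata's EVE output includes JA3/JA4 fields when enabled) against a known-bad set from a threat feed. The sketch below assumes that normalized log shape; the fingerprint values are placeholders, not real indicators.

```python
# Hypothetical matcher: compare TLS fingerprints from flow logs against a
# known-bad set sourced from a threat feed. Hash values are placeholders.
KNOWN_BAD_FINGERPRINTS = {
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",   # placeholder JA3 MD5
    "t13d1516h2_placeholder_example",     # placeholder JA4 string
}

def flag_flows(flows):
    """flows: iterable of dicts with optional 'ja3' / 'ja4' keys."""
    hits = []
    for flow in flows:
        for key in ("ja3", "ja4"):
            fp = flow.get(key)
            if fp in KNOWN_BAD_FINGERPRINTS:
                hits.append((flow.get("src_ip"), key, fp))
    return hits
```

Because the match runs on metadata rather than payload, it works identically on fully encrypted traffic, which is the whole appeal of the technique.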

Integration Points That Most Deployments Miss

Connecting IDS to Vulnerability Management

IDS rules are most effective when tuned against the actual vulnerability state of your environment. An organization running unpatched Ivanti EPMM instances should have active detection rules for known exploitation patterns against that specific product. An organization that has fully patched those systems can deprioritize those rules and invest detection capacity elsewhere.

Most organizations run their IDS and vulnerability management programs in parallel without systematic integration. Creating a workflow that automatically elevates detection sensitivity for newly disclosed vulnerabilities affecting your installed software, at least until patches are deployed, significantly improves detection relevance. This requires inventory data that maps software versions to network segments, which feeds directly into the asset tagging recommendation above.
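The elevation workflow can be sketched as a priority function over rules tagged with the product they detect against, fed by a scanner export of unpatched products. Product names, priority values, and the rule schema here are all illustrative assumptions.

```python
# Hypothetical workflow: boost the priority of IDS rules that target
# products the vulnerability scanner reports as unpatched. Illustrative.
def rule_priority(rule, unpatched_products, base=3):
    """Return 1 (highest) for rules covering unpatched products, else base."""
    if rule.get("product") in unpatched_products:
        return 1          # keep elevated until the patch lands
    return base

rules = [
    {"sid": 1000001, "product": "ivanti-epmm"},   # placeholder rule tags
    {"sid": 1000002, "product": "apache-httpd"},
]
unpatched = {"ivanti-epmm"}                        # from the scanner export
```

Running this mapping on every scanner cycle keeps detection sensitivity synchronized with patch state instead of drifting independently of it.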

Threat Intelligence Feeds and Rule Prioritization

Commercial and open-source threat intelligence feeds provide indicators of compromise (IOCs) that can be integrated directly into IDS rulesets: known malicious IP addresses, C2 domain patterns, file hashes for known malware. The value of these feeds depends heavily on how quickly they are operationalized.

An IOC that takes 72 hours to move from a threat feed into an active IDS rule provides minimal protection against campaigns that complete their initial objectives within hours. The new TCLBanker malware spreading through WhatsApp and Outlook had a detection window of hours before widespread deployment. Organizations that automated feed ingestion into their IDS platforms had a meaningful advantage over those with manual update processes.

Automating threat intelligence integration through TAXII/STIX feeds connected to your IDS management platform reduces the operationalization lag. Most enterprise IDS platforms support this natively. For organizations running open-source tooling, the MISP threat intelligence platform integrates with both Suricata and Snort through automated rule generation.
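At its simplest, the automated rule-generation step MISP performs amounts to templating feed indicators into IDS rule syntax. The sketch below turns a list of malicious IPs into Suricata-style alert rules; the SID range and message text are illustrative, and a real pipeline would also handle expiry and deduplication.

```python
# Hypothetical feed-to-rule generator: emit one Suricata-style alert rule
# per feed-listed IP. SID base and message wording are illustrative.
def rules_from_feed(bad_ips, base_sid=9000000):
    rules = []
    for offset, ip in enumerate(sorted(bad_ips)):
        rules.append(
            f'alert ip $HOME_NET any -> {ip} any '
            f'(msg:"Outbound connection to feed-listed IP {ip}"; '
            f'sid:{base_sid + offset}; rev:1;)'
        )
    return rules
```

Wiring this into a scheduled job that reloads the sensor's ruleset is what closes the 72-hour operationalization gap described above.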

Endpoint Telemetry as IDS Context

Network-based IDS operates without context about what is happening on the endpoint generating the traffic. A connection from a workstation to an unusual external IP looks different if that workstation is also running an encoded PowerShell process than if it is running normal user applications.

Integrating endpoint detection and response (EDR) telemetry with your NIDS alerts allows you to correlate network anomalies with process execution, file system changes, and registry modifications. This correlation is where many sophisticated attacks become visible: the network behavior alone might not cross alert thresholds, but the combination of network behavior and endpoint behavior does.

Implementation requires a shared correlation layer, typically the SIEM, that ingests both NIDS alerts and EDR telemetry and applies correlation rules across both data sources. The architecture is more complex than standalone IDS, but the detection improvement for sophisticated intrusions is significant.

Operational Practices That Matter as Much as Configuration

Regular Adversarial Testing of Detection Coverage

IDS configurations degrade over time as environments change, new traffic patterns emerge, and attack techniques evolve. Rules tuned six months ago against the threat landscape of six months ago may be substantially less effective against current techniques.

Regular adversarial testing, either through internal red team exercises or contracted penetration testing, should explicitly scope to test IDS detection coverage rather than just exploitation success. Ask your red team to operate sub-threshold and report what activities generated no alerts. Those gaps are your actual detection coverage holes.

Purple team exercises, where red and blue team operators work together to test and tune detection rules, are particularly effective for IDS improvement. The red team executes specific techniques from frameworks like MITRE ATT&CK, and the blue team validates whether existing rules caught them. Gaps get addressed in real time rather than discovered during an incident.

Alert Triage Discipline and Escalation Criteria

The most common operational failure mode in IDS deployments is alert fatigue leading to inadequate triage of lower-severity alerts. High-severity alerts get investigated; medium and low-severity alerts accumulate in queues and get closed without meaningful review.

Attackers who understand this, and experienced threat actors do, deliberately keep their activity in the medium and low-severity tiers. The incident at the beginning of this article is a direct illustration: multiple medium-severity alerts existed in the queue for days before anyone correlated them.

Establishing clear triage criteria for low and medium alerts, particularly around the high-value attack sequences described earlier, changes this dynamic. A medium-severity external reconnaissance alert against a server running a recently patched critical vulnerability should receive the same triage priority as a high-severity alert. Building these escalation criteria into your SOC playbooks ensures that sub-threshold activity in critical contexts gets appropriate attention.

Measuring Whether Your IDS is Actually Working

Most IDS effectiveness measurement focuses on alert volume and false positive rates. These metrics tell you about IDS activity but not about detection coverage. An IDS generating thousands of alerts per day may be missing the specific techniques attackers are using against your environment.

More useful metrics for assessing IDS effectiveness include:

  • Mean time to detect (MTTD) for specific attack categories, measured through periodic simulation exercises
  • Coverage percentage of MITRE ATT&CK techniques relevant to your threat model, assessed through adversarial testing
  • Rule staleness: the percentage of active rules that have not fired on any traffic in the past 90 days, which often indicates rules that are either too restrictive or covering techniques no longer used against your environment
  • Correlation rule hit rates for long-window rules, which validate whether your environment is generating the expected event sequences when simulated attacks are run

Building these metrics into a quarterly IDS review process provides the visibility needed to identify degrading coverage before attackers exploit the gaps rather than after.

Where This Leads in Practice

The structural gap between what most IDS deployments detect and what sophisticated attackers actually do is not a product failure. It reflects the fundamental challenge of detecting adversaries who study defensive tooling before conducting operations. The campaigns making headlines today, from cloud credential theft to router compromises to malware spreading through trusted communication channels, succeed against organizations with operational IDS deployments because they are designed to operate below detection thresholds.

Closing that gap requires treating IDS as a continuously tuned detection capability rather than a deployed product. Asset-aware rule application, long-window correlation for realistic attack sequences, deception-based detection supplements, encrypted traffic fingerprinting, and integrated endpoint telemetry each address specific structural weaknesses. None of them alone is sufficient, and all of them require ongoing operational investment rather than one-time configuration.

The organizations that catch attackers operating below standard thresholds are the ones that have done the architectural work to make sub-threshold activity visible, tested their assumptions with adversarial exercises, and built operational disciplines around the alert categories that commodity IDS tends to deprioritize. The technical capability exists in current tooling. The gap is almost always in how that tooling is configured, integrated, and operated.
