If P < D + R — Protection expires before detection and response can complete. The attacker achieves their goal uncontested. Security is impossible. ✗
The clock starts at t=0, the moment the attack begins: P counts down while D, and then R, must complete inside the protection window.
Where This Comes From
The time-based security (TBS) model was originally formalized by security researcher Winn Schwartau and has since become a foundational concept in SANS SEC530 — the course focused on Defensible Security Architecture and Network Security Monitoring. The elegance of the model is that it replaces vague security posture discussions with a concrete, measurable question: does your protection last long enough for your team to detect and respond to an attack?
Most organizations think about security in binary terms — either you are breached or you are not. Time-based security reframes the problem as a race against the clock. Your preventive controls are running a timer. The moment an attacker starts working on your system, that timer begins. The question is not whether the attacker can eventually succeed — given unlimited time, any protection can be broken. The question is whether your detection and response capabilities can fire before that timer runs out.
Breaking Down P, D, and R
P — Protection Time
Protection time is how long your security controls can withstand an active, targeted attack before the attacker reaches their objective. It is not a binary state. It is a duration — and every layer of your defense adds to it.
Protection comes from multiple stacked mechanisms: network segmentation slows lateral movement, strong authentication delays credential abuse, encryption makes exfiltrated data unusable, endpoint hardening raises the cost of privilege escalation, and rate limiting buys time against brute-force attacks. No single control provides infinite P. A firewall has a finite P against a determined attacker who will simply tunnel through port 443. An encrypted database has a finite P against someone who steals the server and takes it home. The goal of defense-in-depth is to maximize the total P by stacking layers such that an attacker must defeat each sequentially — each layer adding its own clock time to the attacker's problem.
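The stacking logic above can be sketched numerically. This is a minimal model, assuming illustrative per-layer hour estimates (the numbers here are placeholders you would replace with your own red-team measurements) and the simplification that an attacker defeats layers strictly sequentially:

```python
# Illustrative per-layer delay estimates in hours — assumptions, not benchmarks.
LAYER_P_HOURS = {
    "network_segmentation": 6.0,    # time to pivot across a segment boundary
    "mfa_on_admin_accounts": 12.0,  # time to defeat or bypass MFA
    "endpoint_hardening": 8.0,      # time to escalate privileges post-compromise
    "encryption_at_rest": 48.0,     # time to make exfiltrated data usable
}

def total_protection_hours(layers: dict[str, float]) -> float:
    """Layers an attacker must defeat sequentially each add their clock time."""
    return sum(layers.values())

def tbs_satisfied(p_hours: float, mttd_hours: float, mttr_hours: float) -> bool:
    """The time-based security condition: P > D + R."""
    return p_hours > mttd_hours + mttr_hours

P = total_protection_hours(LAYER_P_HOURS)
print(f"Total P: {P} hours")                       # 74.0
print(tbs_satisfied(P, 24, 12))                    # True:  74 > 36
print(tbs_satisfied(P, 70, 10))                    # False: 74 < 80
```

In reality a skilled attacker may bypass rather than defeat a layer, so the sequential sum is an upper bound on P — which is exactly why empirical red-team measurement matters more than the arithmetic.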
D — Detection Time
Detection time is the interval between the moment an attacker begins operating inside your environment and the moment your security team becomes aware of it. This is often measured as Mean Time to Detect (MTTD) or, in the breach world, as dwell time.
Industry data makes for sobering reading. IBM's annual Cost of a Data Breach Report consistently shows a mean time to identify a breach of 194–207 days. Mandiant's M-Trends report has tracked median attacker dwell time historically in the range of 16–21 days for detected breaches — a dramatic improvement from the 200+ days of a decade ago, but still representing weeks of undetected attacker activity. In practice, detection time is the most dangerous variable in the TBS equation because organizations often dramatically underestimate it. Your SIEM might fire alerts, but if those alerts sit in a queue for 72 hours before a human reviews them, your effective D is 72 hours regardless of how fast the detection rule fired.
Detection time is reduced by: a mature Security Operations Center (SOC) with 24/7 coverage, high-fidelity SIEM correlation rules tuned to your environment, Network Detection and Response (NDR) tools watching east-west traffic, EDR with behavioral analytics on endpoints, User and Entity Behavior Analytics (UEBA) catching anomalous access patterns, and deception technologies (honeypots, honeytokens) that fire the moment an attacker touches them.
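The queue-lag point above is worth making concrete: effective D runs from attack start until a human (or automation) acts on the alert, not until the rule fires. A minimal sketch, with illustrative timestamps:

```python
from datetime import datetime, timedelta

def effective_detection_hours(attack_start: datetime,
                              alert_fired: datetime,
                              human_reviewed: datetime) -> float:
    """Effective D ends when someone acts on the alert, not when the rule fires."""
    return (human_reviewed - attack_start).total_seconds() / 3600

start = datetime(2024, 1, 1, 0, 0)
fired = start + timedelta(minutes=5)     # the detection rule fired almost instantly...
reviewed = fired + timedelta(hours=72)   # ...but the alert sat in the queue 72 hours

print(round(effective_detection_hours(start, fired, reviewed), 2))  # 72.08
```

A fast rule feeding a slow triage queue still yields a multi-day effective D, which is the trap the paragraph above describes.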
R — Response Time
Response time is the interval between detection and the moment the threat is contained — the attacker is ejected, their access revoked, and the integrity of the affected systems restored. This is measured as Mean Time to Respond (MTTR) or Mean Time to Contain (MTTC).
A common mistake is treating detection as the finish line. Detection without rapid response does not satisfy the TBS equation. If your SOC detects an attack at 11 PM on a Friday and your incident response team is not available until Monday morning, your effective R is approximately 60 hours regardless of how sophisticated your detection tooling is. Response time is reduced by: documented and rehearsed incident response playbooks, Security Orchestration, Automation, and Response (SOAR) platforms that execute containment actions automatically (isolate host, revoke token, block IP), pre-negotiated authority for the SOC to take containment actions without management approval chains, and regular tabletop exercises and purple team exercises that ensure the team can execute under pressure.
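The pre-authorization idea can be sketched as a tiny dispatcher: actions on an approved list execute immediately, everything else is escalated. The action names and alert shape here are hypothetical placeholders, not a real SOAR platform's API:

```python
# Actions the SOC may take without an approval chain — a policy decision,
# made in advance, for specific well-defined scenarios.
PRE_AUTHORIZED_ACTIONS = {"isolate_host", "revoke_token", "block_ip"}

def contain(alert: dict, actions_taken: list[str]) -> list[str]:
    """Execute pre-authorized actions immediately; queue the rest for approval,
    so the approval chain never blocks time-critical containment."""
    for action in alert["recommended_actions"]:
        if action in PRE_AUTHORIZED_ACTIONS:
            actions_taken.append(action)                 # executes in seconds
        else:
            actions_taken.append(f"escalate:{action}")   # needs human sign-off
    return actions_taken

alert = {"recommended_actions": ["isolate_host", "wipe_host", "block_ip"]}
print(contain(alert, []))   # ['isolate_host', 'escalate:wipe_host', 'block_ip']
```

The design choice is that destructive or irreversible actions (like wiping a host) stay behind approval, while reversible containment runs at machine speed — driving R down without removing human judgment entirely.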
The Safe Analogy: Why Unlimited Time Always Wins
The clearest illustration of what happens when P < D + R is the safe analogy.
Imagine your most sensitive data — customer PII, source code, financial records — is locked inside a high-security safe. The safe is rated by the manufacturer to resist attack for four hours against a professional safecracker. That is your P: four hours of protection time.
Now consider your detection and response posture. Your monitoring tools might detect anomalous access to the room where the safe sits, but only after a two-hour lag as log data is correlated and alert thresholds are crossed. Once an alert fires, it routes to an on-call analyst who takes 30 minutes to assess and escalate it; the incident response team then engages and dispatches the physical security team. Total response time: three hours.
In this scenario, D + R = 5 hours. Your protection is only 4 hours. P < D + R.
A sophisticated attacker does not need to crack the safe in the building. They take the safe home. They load it into a truck, drive it to their own facility, and work on it in an environment they fully control, with no time pressure, no surveillance, and no risk of interruption. Your four-hour protection rating becomes completely irrelevant. With unlimited time, a determined attacker with the right tools will always eventually win. The moment the asset leaves your detection boundary, P effectively drops to zero and the attacker has all the R they will ever need.
This is not a hypothetical scenario. This is what happens in every major data exfiltration attack. The attacker does not need to decrypt your data in your environment. They exfiltrate the encrypted database, move it to their own infrastructure, and crack it at leisure. The moment the data crosses out of your environment, detection can never fire again — it is already too late.
IBM's 2024 Cost of a Data Breach Report places the global average mean time to identify a breach at 194 days and mean time to contain at 64 days. That means the industry average D + R = 258 days. For a system to achieve P > D + R against an average-case attacker, protection controls would need to hold for over 8 months. No perimeter, no firewall, no access control can maintain P for 8 months against a targeted, motivated adversary. The conclusion is not that we should abandon protection — it is that we must radically reduce D and R.
Why Prevention Alone Cannot Work
For decades, the dominant security paradigm was perimeter defense: build a high enough wall, and the attackers cannot get in. Firewalls, intrusion prevention systems, and endpoint antivirus were the primary investments. Detection and response were afterthoughts — largely because detection was assumed to be unnecessary if prevention worked.
The problem is that every prevention control has a finite P, and the economics of offense versus defense are deeply asymmetric. An attacker needs to find one vulnerability. A defender needs every vulnerability to be covered. A sophisticated threat actor with zero-day exploits, social engineering capability, and months of preparation will defeat any prevention-only architecture. When prevention eventually fails — and it will — an organization that has not invested in detection and response has no fallback. The attacker has unlimited time inside the environment.
The TBS model does not argue against prevention. Prevention is essential — maximizing P raises the cost of attack and buys time for detection and response. But prevention must be understood as a time-buying mechanism, not an absolute guarantee. The honest security architecture acknowledges: "our controls will eventually be bypassed; our job is to make sure D + R is smaller than the time it takes."
This is precisely why Zero Trust Architecture begins with the principle of assume breach. Not because we expect to be breached, but because designing a system that can detect, contain, and recover from a breach — rather than a system that only tries to prevent one — produces fundamentally better security outcomes. An organization that has invested in detection and response is resilient. An organization that has invested only in prevention is fragile.
Architecting for P > D + R
Satisfying the TBS equation requires deliberate investment across all three variables. The goal is not to maximize P alone — it is to ensure that P consistently exceeds D + R. This means raising P, lowering D, and lowering R simultaneously.
Maximizing P: Layered Protection That Stacks
Effective protection architecture does not depend on any single control. It stacks controls such that an attacker must defeat each one sequentially, with each layer consuming more of the attacker's time. Key architectural principles:
- Network segmentation and micro-segmentation: Lateral movement is the attacker's greatest multiplier. A flat network means that compromising one host effectively compromises all hosts — P collapses. Micro-segmented networks force the attacker to re-exploit each segment boundary, adding P at every step.
- Strong identity and access control: Multi-factor authentication, privileged access workstations, just-in-time access, and session recording all raise the cost of credential abuse. A stolen password alone is not enough — the attacker must also defeat MFA, which adds P.
- Encryption at rest and in transit: Even if data is exfiltrated, strong encryption extends P by ensuring the data remains unusable until the encryption is broken. FIPS 140-3 validated AES-256 provides meaningful P against offline cracking.
- Endpoint hardening: Application allowlisting, PowerShell constrained language mode, LSASS protection, credential guard, and attack surface reduction rules all raise the floor of what an attacker needs to accomplish post-compromise, adding to P on the endpoint.
- Immutable logging: Storing logs in a write-once, external system the attacker cannot touch ensures that even if they compromise the primary environment, they cannot erase evidence — protecting the integrity of your D capability.
Minimizing D: Detection Velocity
Every hour shaved from detection time is an hour of effective protection you gain without adding a single new preventive control. Detection investment often has a better security return than additional prevention at the margin.
NETWORK LAYER
- NDR (Network Detection & Response) → east-west lateral movement, C2 beacons
- DNS logging + sinkhole analysis → C2 domain resolution
- NetFlow / PCAP sampling → data staging, exfiltration anomalies

ENDPOINT LAYER
- EDR with behavioral analytics → process injection, credential dumping
- UEBA → anomalous user access patterns
- File integrity monitoring → unauthorized binary drops, config changes

IDENTITY LAYER
- Failed auth spike detection → brute-force, credential stuffing
- Impossible travel / impossible MFA → account takeover
- Privileged account activity baselining → lateral movement via admin accounts

APPLICATION LAYER
- WAF + bot detection → injection, abuse of APIs
- Application logging to immutable SIEM → auth events, data access patterns

DECEPTION LAYER
- Honeytokens (fake AWS keys, etc.) → high-confidence alerts: D ≈ 0 on trigger
- Honeypots in internal segments → instant lateral movement detection
The deception layer deserves special mention in the context of TBS. A honeytoken — a fake credential, API key, or document — that is never legitimately accessed provides a near-zero D detection mechanism. The moment it fires, you know with near certainty that an attacker has accessed it, because no legitimate user would ever touch it. This is perhaps the most favorable contribution to the TBS equation: a control that provides essentially no P (honeytokens are not preventive) but drives D toward zero, making P > D + R trivially satisfiable for any non-zero P.
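The honeytoken mechanism is simple enough to sketch directly. This is a toy monitor assuming a planted fake AWS-style key ID and a stream of auth-log lines; a real deployment would watch CloudTrail or SIEM events rather than raw strings, and the key shown is obviously fabricated:

```python
# A planted token that is never legitimately used anywhere.
HONEYTOKEN = "AKIAFAKEHONEYTOKEN1"

def scan_for_honeytoken(log_lines: list[str]) -> list[str]:
    """Any hit is a high-confidence detection: no legitimate user ever touches
    the token, so effective D collapses to the log-delivery latency."""
    return [line for line in log_lines if HONEYTOKEN in line]

logs = [
    "2024-05-01T03:12:09Z GetCallerIdentity key=AKIAFAKEHONEYTOKEN1 src=203.0.113.7",
    "2024-05-01T03:12:11Z ListBuckets key=AKIA...REDACTED src=10.0.4.22",
]
hits = scan_for_honeytoken(logs)
print(f"{len(hits)} honeytoken hit(s) -> page the SOC immediately")  # 1 hit
```

Because the false-positive rate is essentially zero by construction, this alert can safely trigger automated containment directly — the detection and the response can be wired together.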
Minimizing R: Response Automation and Rehearsal
Detection that does not result in fast containment is detection that does not satisfy the TBS equation. Response time is often the most neglected variable — organizations invest heavily in prevention and increasingly in detection, but leave response as a manual, improvised process.
- SOAR platforms: Automated playbooks that trigger on SIEM alerts can isolate an infected endpoint, revoke a compromised OAuth token, block a malicious IP at the firewall, and page the on-call analyst — all within seconds of a detection event. Automated containment can reduce R from hours to minutes.
- Pre-authorized containment actions: One of the largest contributors to slow R is the need to obtain approval before acting. Pre-authorizing the SOC to isolate endpoints, disable accounts, and block network flows without management approval for specific, well-defined scenarios removes hours from R.
- Documented, rehearsed playbooks: Every reasonably foreseeable attack scenario — ransomware, credential theft, insider data exfiltration, supply chain compromise — should have a written response playbook. Tabletop exercises and purple team simulations rehearse these under realistic conditions so the team can execute without hesitation during an actual incident.
- Network isolation capability: The ability to instantly quarantine a network segment — cutting off east-west traffic without taking the whole network down — is one of the highest-value response capabilities. Software-defined networking and micro-segmentation make this possible in seconds via API call.
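The segment-quarantine capability in the last bullet can be sketched as a policy object: cut east-west peers while leaving the management path up. `QuarantinePolicy` and its fields are illustrative, not any real SDN controller's API:

```python
from dataclasses import dataclass, field

@dataclass
class QuarantinePolicy:
    segment: str
    allow_north_south: bool = True              # keep management/monitoring paths up
    blocked_peers: set[str] = field(default_factory=set)

def quarantine_segment(segment: str, all_segments: list[str]) -> QuarantinePolicy:
    """Cut east-west traffic from one segment without downing the network."""
    peers = {s for s in all_segments if s != segment}
    return QuarantinePolicy(segment=segment, blocked_peers=peers)

policy = quarantine_segment("app-tier-2", ["web", "app-tier-1", "app-tier-2", "db"])
print(sorted(policy.blocked_peers))   # ['app-tier-1', 'db', 'web']
```

In a micro-segmented environment this is one API call per boundary — which is what makes "seconds, not hours" achievable for R.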
Real Attacks Through the TBS Lens
Ransomware
P = time before ransomware encrypts enough critical systems to cause operational impact. Modern ransomware operators dwell for weeks before detonating encryption, deliberately staging data exfiltration first. If your D is measured in weeks, P < D + R even if encryption itself only takes hours — because the attacker has already achieved their secondary objective (exfiltration) before the encryption event triggers detection. Effective TBS against ransomware means detecting the pre-encryption behavior: anomalous SMB traversal, large-scale file enumeration, shadow copy deletion, and data staging to unusual external destinations.
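The pre-encryption behaviors listed above map naturally onto simple detection rules. This is an illustrative sketch — the command strings are real ransomware tradecraft, but the event shape and the enumeration threshold are assumptions, not tuned production rules:

```python
# Known shadow-copy / backup tampering commands seen before encryption.
SUSPICIOUS_COMMANDS = ("vssadmin delete shadows", "wbadmin delete catalog")
ENUMERATION_THRESHOLD = 10_000   # files touched per process per minute (assumed)

def flag_pre_encryption(events: list[dict]) -> list[str]:
    """Flag shadow-copy tampering and mass file enumeration — the pre-encryption
    behaviors that can compress D from weeks to minutes."""
    findings = []
    for e in events:
        cmd = e.get("cmdline", "").lower()
        if any(s in cmd for s in SUSPICIOUS_COMMANDS):
            findings.append(f"shadow-copy tampering: {e['process']}")
        if e.get("files_enumerated_per_min", 0) > ENUMERATION_THRESHOLD:
            findings.append(f"mass file enumeration: {e['process']}")
    return findings

events = [
    {"process": "svchost.exe", "files_enumerated_per_min": 40},
    {"process": "updater.exe", "cmdline": "vssadmin delete shadows /all /quiet"},
    {"process": "stager.exe", "files_enumerated_per_min": 52_000},
]
print(flag_pre_encryption(events))
```

Catching the staging phase matters more than catching the encryption itself: by detonation time, the exfiltration objective has usually already been achieved.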
Supply Chain Compromise
The SolarWinds attack is the canonical example of P < D + R at scale. The threat actor inserted malicious code into a software build pipeline in October 2019. The code was signed, distributed, and installed by 18,000+ organizations. Detection by FireEye did not occur until December 2020 — over 14 months later. D alone was over 400 days. No reasonable P could have held for 400 days. The lesson: supply chain attacks are designed to collapse P by entering via a trusted channel, making prevention nearly impossible. The only viable defense is reducing D through behavioral anomaly detection post-installation.
Insider Threat
Insider threats represent a case where P is dramatically reduced because the attacker starts inside the perimeter with legitimate credentials. Many of your perimeter controls contribute zero P against an insider. This makes minimizing D critical. UEBA that baselines normal data access and flags anomalous bulk downloads, after-hours access to sensitive repositories, or transfers to personal cloud storage can compress D from months to hours even when P is minimal.
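The UEBA baselining idea reduces to a statistical test against the user's own history. A minimal sketch, where the z-score cutoff of 3 is an assumed threshold rather than a recommended tuning:

```python
import statistics

def is_anomalous(history_mb: list[float], today_mb: float,
                 z_cutoff: float = 3.0) -> bool:
    """Flag when today's download volume exceeds the user's own baseline
    by more than z_cutoff standard deviations."""
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb) or 1.0   # guard against zero variance
    return (today_mb - mean) / stdev > z_cutoff

baseline = [120.0, 95.0, 140.0, 110.0, 130.0]   # typical daily MB for this user
print(is_anomalous(baseline, 135.0))    # False: within normal variation
print(is_anomalous(baseline, 9500.0))   # True: bulk exfiltration pattern
```

The key property for the insider case is that the baseline is per-entity: a volume that is normal for a build server is wildly anomalous for an HR analyst, and that contrast is what compresses D when P is minimal.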
Measuring Your Own P, D, and R
TBS is only useful if you can actually measure your values. Abstract assertions that "our protection is strong" or "we have good detection" do not satisfy the equation. Here is how security architects operationalize these measurements:
# Measuring P — Time how long it takes a red team to move from initial
# access to objective with your controls active. That elapsed time is your
# empirical P.
Metric: Mean time for red team to achieve objective (hours)
Tool: MITRE ATT&CK adversary emulation, VECTR for tracking

# Measuring D — Run purple team exercises with the blue team blind.
# Red team executes TTPs; measure time from first technique to SOC alert.
Metric: MTTD — Mean Time to Detect (hours)
Tool: SIEM alert timestamps vs. red team execution log

# Measuring R — Time tabletop and live incident response exercises,
# from confirmed alert to host isolated, account disabled, IOCs blocked.
Metric: MTTR — Mean Time to Respond / Contain (hours)
Tool: SOAR playbook execution logs, incident tracking platform

# The verdict — plug in your real numbers
if P_hours > (MTTD_hours + MTTR_hours):
    print("Effective security is achievable in this system.")
else:
    print("WARNING: P < D + R. Security is not achievable without remediation.")
    print("Priority: invest in detection speed and response automation.")
Purple team exercises are the most direct method of empirically measuring all three variables. A red team with defined objectives that executes known TTPs while the blue team operates normally yields precise P, D, and R measurements. This should be a recurring exercise — at minimum annually, ideally quarterly for high-maturity security programs — because all three variables drift over time as the environment, tooling, and threat landscape evolve.
The TBS Model in a Zero Trust World
Zero Trust Architecture and the TBS model are philosophically aligned in a fundamental way: both reject the assumption that prevention alone is sufficient.
Zero Trust's core principle — never trust, always verify — is a direct acknowledgment that P is finite. If every access request, internal or external, must be continuously authenticated and authorized, an attacker who defeats one authentication event must defeat the next one too, and the one after that. This keeps P from being a single-point-of-failure and distributes it across the entire session lifetime. Meanwhile, Zero Trust's continuous monitoring and validation requirements directly reduce D — anomalous behavior during an otherwise authenticated session is visible and actionable.
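The per-request re-verification idea can be sketched as a toy authorizer — `RequestContext` and its fields are illustrative, not any real Zero Trust product's API:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    token_valid: bool
    device_compliant: bool
    behavior_score: float   # 0.0 (normal) .. 1.0 (highly anomalous)

def authorize(ctx: RequestContext, risk_cutoff: float = 0.7) -> bool:
    """Every request must pass every check, every time — defeating one
    authentication event does not grant a persistent session."""
    return ctx.token_valid and ctx.device_compliant and ctx.behavior_score < risk_cutoff

print(authorize(RequestContext(True, True, 0.1)))   # True: normal session
print(authorize(RequestContext(True, True, 0.9)))   # False: cut mid-session
```

A stolen token alone is not enough: the anomalous behavior it produces mid-session is re-evaluated on the very next request, which is how continuous verification distributes P across the session lifetime while simultaneously shrinking D.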
NIST SP 800-207 (Zero Trust Architecture) can be read as a practical implementation guide for maximizing P while enabling the rapid detection and response that the TBS equation demands. Micro-segmentation adds P. Identity-aware proxies generate rich telemetry that compresses D. Automated policy enforcement enables automated containment that minimizes R.
□ Have you measured your actual P through red/purple team exercises?
□ Do you know your current MTTD from your SIEM and SOC data?
□ Do you know your current MTTR from your incident response records?
□ Is your detection stack covering all five layers: Network, Endpoint, Identity, Application, Deception?
□ Are you monitoring east-west (lateral movement) traffic, not just north-south?
□ Do you have automated containment playbooks that execute in minutes, not hours?
□ Is your SOC authorized to take containment actions without approval chains?
□ Are your logs immutable and external to the environment an attacker could compromise?
□ Have you deployed deception technologies that drive D toward zero for lateral movement?
□ Do you run tabletop exercises at least annually to keep R low through rehearsal?
Conclusion
The time-based security equation — P > D + R — is one of the most clarifying frameworks in security architecture because it forces honesty. It replaces "how secure are we?" with a measurable question: "does our protection last long enough for our team to respond?" When P < D + R, no amount of additional prevention spending will fix the problem. The attacker simply takes the safe home and works on it at their leisure.
Satisfying the equation requires treating detection and response as first-class security investments — not afterthoughts to a prevention-first strategy. Every second shaved from D and R is a second of protection you effectively gain for free. A security architecture that detects in minutes and responds in minutes can be effective even with moderate protection, while a prevention-only architecture with slow detection is fundamentally broken no matter how many layers it stacks.
The goal is not to make breaches impossible. The goal is to make breaches survivable — to ensure that when an attacker does find their way in, your team catches them and ejects them before they can accomplish anything meaningful. That is what P > D + R means in practice: building a system where the defenders win the race.