In September 2024, we leaked dozens of real API keys, tokens, and credentials across public platforms as part of our research. Almost every secret was exploited within minutes - some in under 40 seconds. Watching it happen live was unsettling, even though we expected it.
This wasn’t a unique event. It’s just the reality we live in: secrets leak, secrets get exploited - Every. Single. Day.
And despite improvements in secret scanning tools, the situation isn’t getting better.
The uncomfortable truth is that our systems and our security mindsets haven’t kept pace with this new reality.

Since things are not getting any better, it might be time for an evolution in our thinking. We’ve evolved our security mindsets before: from believing in hard outer defenses to adopting "assume breach" when we realized perimeter-based security - the classic eggshell approach - was no longer enough. In response, we developed defense in depth, a layered architecture that assumes attackers are already inside. Zero Trust later emerged as the architectural embodiment of this shift, eliminating implicit trust and requiring continuous validation of every access request.

Today, it’s time for another evolution.
We must assume leak.

The Dangerous Illusion of Secret Control

As an industry, we still treat secrets - API keys, tokens, service accounts - as if they’re private by default - and as if storage alone, or the occasional rotation, guarantees safety.

Here are some of the comforting myths we keep telling ourselves:

  • "We rotate secrets, so we're covered."
  • "That token is scoped, it’s fine."
  • "Only production secrets really matter."

But the reality is harsher:

  • Secrets are sprawled across developer laptops, CI/CD pipelines, code repositories, third-party integrations, scripts and configurations, collaboration tools, and now even AI agents.
  • Secrets leak unintentionally through code commits, misconfigured systems, Slack threads, and support tickets.
  • Security teams rarely provision these secrets, but they are ultimately accountable for protecting them.

To combat secret sprawl, vaults were created. They were a critical step forward: a necessary response to the era when secrets were hardcoded into source code and config files. They centralized storage and enabled better auditing. But vaults alone aren’t enough. They mitigate storage risk - they don’t solve leak risk. Secrets still need to move to be used. Every time a secret leaves the vault, it re-enters a world full of vulnerabilities.

Vaults created safer homes for secrets. They didn’t eliminate the dangers of traveling secrets.

Why Assume Leak, and Why Now?

1. Non-Human Identities Are Exploding

Non-human identities (NHIs) now outnumber human users by orders of magnitude - and the NHI Index shows it's accelerating.

Every SaaS integration, every cloud workload, and every AI workflow spawns new credentials. Most of these secrets are created not by security teams, but by developers, analysts, operations, and even business users.

2. Secret Sprawl Is Fragmented and Unseen

Secrets now live across:

  • Local dev machines
  • CI/CD pipelines
  • Code repositories
  • SaaS applications and metadata services
  • AI agents and no-code tools

No single control plane sees it all. Shadow tokens abound.

3. Default Configurations Aren’t Secure

Most cloud and SaaS platforms prioritize velocity over security:

  • Long-lived tokens
  • Broad privileges
  • No TTLs, no binding to IPs or environments
  • Poor or optional usage logging

4. Secrets Leak Fast, Get Exploited Faster

Our Secret Rotation Debunking research shows secrets exposed on public platforms (GitHub, NPM, PyPI) are often exploited within 2 minutes. That’s faster than most detection systems can even fire an alert.

So even if you rotate secrets monthly, a leak today can be catastrophic before the day ends.

5. Rotation is Not a Safety Net

Even if rotation worked as intended, it still wouldn’t be enough:

  • Many secrets aren't rotated properly.
  • Some can’t be rotated without major downtime.
  • And crucially, a single use before rotation is all an attacker needs.

Our research proved it: rotation is reactive, and far too slow to outpace exploitation.

6. We Treat Secrets Like Configs, Not Credentials

Most secrets aren’t treated with the gravity of user credentials. Developers think of them as harmless configuration variables, tucked away in a .env file or a Terraform script.

But secrets are active actors. They authenticate. They authorize. They move. They must be treated like credentials, because they are.

Welcome to Assume Leak

Assume leak means operating as if every secret in your environment has already leaked.

It’s not about paranoia; it’s about survival. As a thought exercise: if your most sensitive secret leaked right now, how bad would the damage be? The goal should not be to hide secrets better, but to build systems that don’t collapse when a secret leaks.

Here’s what that looks like:

1. No Single Point of Failure

A leaked secret shouldn't be a master key.

Tactics:

  • Break monolithic tokens into narrowly scoped credentials.
  • Segment secrets by environment, use case, and privilege level.
  • Eliminate "admin everything" API keys.

2. Secrets Must Be Ephemeral by Default

Secrets should decay faster than they can be exploited.

Tactics:

  • Use short-lived credentials (AWS STS, GCP Workload Identity Federation, Azure Managed Identities).
  • Prefer dynamic generation over static storage.
  • Avoid long-lived PATs or service account keys.
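The platforms named above handle ephemerality for you; as a minimal local sketch of the underlying idea - assuming a hypothetical in-process issuer, not any real SDK - the mechanics look like this:

```python
# Sketch: ephemeral credentials with a hard TTL, mimicking what AWS STS or
# GCP Workload Identity Federation does for real. Names are illustrative.
import secrets
import time

TTL_SECONDS = 900  # 15 minutes, an STS-style short lifetime

def issue_token(now=None):
    """Mint a random credential that carries its own expiry."""
    now = time.time() if now is None else now
    return {"value": secrets.token_urlsafe(32), "expires_at": now + TTL_SECONDS}

def is_valid(token, now=None):
    """A token past its expiry is dead, leaked or not."""
    now = time.time() if now is None else now
    return now < token["expires_at"]

tok = issue_token(now=0)
assert is_valid(tok, now=60)        # usable right after issuance
assert not is_valid(tok, now=1000)  # worthless after 15 minutes
```

A secret that decays in minutes shrinks the exploitation window from "until someone notices" to "shorter than most attackers’ reaction time."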

3. Full Observability into Secrets Usage

You can't stop what you can't see. However, observability must go beyond basic logging - it must include real-time behavioral validation.

Tactics:

  • Log every secret issuance, usage, and modification event, and inspect real-time actions performed by secrets.
  • Monitor by IP, geo, device, and service context.
  • Alert on anomalies: unexpected API calls, excessive permissions usage, unusual access patterns, or abnormal timeframes.
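As a sketch of what behavioral validation means in practice - with hypothetical event fields and baselines, not a real monitoring product - a usage event can be checked against what is known about the secret:

```python
# Sketch of behavioral validation: flag a secret-usage event when it deviates
# from the baseline recorded for that secret. All fields are hypothetical.
baseline = {
    "tok-reports": {"ips": {"10.0.0.5"}, "actions": {"billing:read"}},
}

def anomalies(event):
    """Return the reasons this usage event looks suspicious (empty if none)."""
    known = baseline.get(event["secret_id"])
    if known is None:
        return ["unknown secret"]
    reasons = []
    if event["ip"] not in known["ips"]:
        reasons.append("new source IP")
    if event["action"] not in known["actions"]:
        reasons.append("unexpected API call")
    return reasons

ok = {"secret_id": "tok-reports", "ip": "10.0.0.5", "action": "billing:read"}
bad = {"secret_id": "tok-reports", "ip": "203.0.113.9", "action": "iam:CreateUser"}
assert anomalies(ok) == []
assert anomalies(bad) == ["new source IP", "unexpected API call"]
```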

4. Access Must Be Contextual, Not Just Credentialed

Possessing a secret should not be enough.

Tactics:

  • Bind secrets to specific IPs, trusted devices, or known service identities.
  • Verify contextual signals - environment type, service name, deployment metadata - before allowing secret usage.
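A minimal sketch of context binding, assuming a hypothetical binding store and request-context shape:

```python
# Sketch: possession of the secret is necessary but not sufficient - the
# request context must match the binding recorded at issuance.
# The binding store and context fields are hypothetical.
BINDINGS = {
    "tok-ci": {"ip": "10.1.2.3", "service": "ci-runner", "environment": "staging"},
}

def context_allows(secret_id: str, context: dict) -> bool:
    """Permit use only when every bound signal matches the live context."""
    binding = BINDINGS.get(secret_id)
    return binding is not None and all(
        context.get(key) == value for key, value in binding.items()
    )

# The legitimate CI runner, from its known IP and environment: allowed.
assert context_allows(
    "tok-ci", {"ip": "10.1.2.3", "service": "ci-runner", "environment": "staging"}
)
# The same valid secret, stolen and replayed from a laptop elsewhere: denied.
assert not context_allows(
    "tok-ci", {"ip": "198.51.100.7", "service": "laptop", "environment": "staging"}
)
```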

5. Contain the Blast Radius

Leaks will happen. Damage should be limited.

Tactics:

  • Enforce minimal privilege and strict TTLs.
  • Ensure fast, tested revocation pipelines and workflows.
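The revocation tactic reduces to a simple invariant - every use consults a fast revocation set, so one call kills a leaked credential instantly. A toy sketch, with hypothetical names and an in-memory set standing in for a shared store:

```python
# Sketch: a revocation check on every use, so a leaked credential can be
# killed in one operation instead of waiting for the next rotation cycle.
revoked: set[str] = set()

def revoke(secret_id: str) -> None:
    """Incident response: one call, immediate effect."""
    revoked.add(secret_id)  # in practice: push to a fast shared store

def may_use(secret_id: str) -> bool:
    """Consulted on every use of the secret, not just at issuance."""
    return secret_id not in revoked

assert may_use("tok-payments")
revoke("tok-payments")
assert not may_use("tok-payments")
```

Combined with strict TTLs, this bounds the blast radius in both directions: revocation caps it the moment you know, expiry caps it even when you don’t.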

6. Secrets Are Identities - Treat Them That Way

Secrets aren’t inert. They are actors.

Tactics:

  • Track purpose, ownership, and lifecycle.
  • Include secrets in IAM reviews.
  • Kill secrets automatically when services are retired.
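A sketch of what lifecycle tracking buys you - the inventory schema and names here are hypothetical:

```python
# Sketch of lifecycle tracking: every secret has an owner, a purpose, and a
# backing service; retiring the service automatically retires its secrets.
inventory = [
    {"id": "tok-etl", "owner": "data-team", "service": "etl-batch", "active": True},
    {"id": "tok-web", "owner": "platform", "service": "webapp", "active": True},
]

def retire_service(service: str) -> list:
    """Deactivate every secret tied to a retired service; return their ids."""
    killed = []
    for secret in inventory:
        if secret["service"] == service and secret["active"]:
            secret["active"] = False
            killed.append(secret["id"])
    return killed

assert retire_service("etl-batch") == ["tok-etl"]
assert [s["id"] for s in inventory if s["active"]] == ["tok-web"]
```

Without this link from secret to service, retired systems leave live credentials behind - exactly the shadow tokens no one is watching.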

7. Leak Secrets Deliberately to Test Detection

If you leak a honeypot key and nothing happens, you’re flying blind.

Tactics:

  • Deploy honeypot secrets intentionally.
  • Monitor attacker behavior, detection latency, and response effectiveness.
  • Continuously refine based on real-world exploitation patterns.
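The mechanics of a canary are simple: record when the honeypot secret was planted, and treat any use as a detection event whose timestamp gives you your real-world latency. A sketch with illustrative names and a fake key value:

```python
# Sketch: a honeypot ("canary") secret that should never be used legitimately.
# Any use is a detection event; the gap since planting is your detection latency.
canaries = {}  # secret value -> planting timestamp

def plant_canary(value: str, now: float) -> None:
    """Deliberately expose this value (repo, paste, config) and remember when."""
    canaries[value] = now

def on_secret_used(value: str, now: float):
    """Return seconds from planting to observed use, or None for non-canaries."""
    planted = canaries.get(value)
    return None if planted is None else now - planted

plant_canary("AKIA-FAKE-CANARY", now=0.0)
assert on_secret_used("real-prod-token", now=90.0) is None   # not a canary
assert on_secret_used("AKIA-FAKE-CANARY", now=90.0) == 90.0  # exploited in 90s
```

If that latency comes back in minutes - as our own leaked-key experiments suggest it will - you finally have a number to hold your detection pipeline against.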

8. Build Toward Secrets-Less Architectures

The real endgame isn’t secret hygiene. It’s fewer secrets.

Tactics:

  • Prefer federated identity over static secrets.
  • Adopt ephemeral, environment-bound authentication.
  • Move toward service mesh security and trusted runtime attestations.
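The federated pattern behind these tactics can be sketched in miniature: the workload holds no static secret at all; it presents a platform-issued identity assertion, which a broker verifies and exchanges for a short-lived credential. The signing key, claim names, and helpers below are all hypothetical stand-ins for what OIDC federation and workload identity do for real:

```python
# Sketch of secrets-less auth via federation: attestation in, ephemeral
# credential out. All names and the HMAC scheme are illustrative only.
import hashlib
import hmac
import json

PLATFORM_KEY = b"platform-signing-key"  # held by the platform, never the workload

def platform_assert(service: str, now: float) -> str:
    """The runtime platform attests to the workload's identity."""
    claims = json.dumps({"service": service, "iat": now}, sort_keys=True)
    sig = hmac.new(PLATFORM_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return f"{claims}|{sig}"

def exchange(assertion: str, now: float, max_age: float = 300.0):
    """Broker: verify the assertion, then mint a 15-minute credential."""
    claims_raw, sig = assertion.rsplit("|", 1)
    expected = hmac.new(PLATFORM_KEY, claims_raw.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered assertion
    claims = json.loads(claims_raw)
    if now - claims["iat"] > max_age:
        return None  # stale assertion, refuse to exchange
    return {"service": claims["service"], "expires_at": now + 900}

cred = exchange(platform_assert("etl-batch", now=0.0), now=10.0)
assert cred is not None and cred["service"] == "etl-batch"
assert exchange("forged-claims|deadbeef", now=10.0) is None
```

Nothing long-lived ever exists to leak: the attestation is minutes old and the credential it buys expires on its own.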

The Mindset Shift That Saves Us

We stopped trusting perimeter firewalls. We stopped assuming devices are clean. Zero Trust taught us to validate every access request instead of relying on network location.

Why do we still assume secrets are secret?

Assume leak is realism.

Vaults are good. Secret scanning is good. But neither is enough on its own.

We must architect systems that expect leaks. That survive them. That contain damage. That make secrets - even if exposed - not a catastrophe.

Because if your most sensitive token leaked right now, and the answer to "what would happen?" isn't "not much," then it's probably time to evolve.