When One Mistake Is All It Takes
1 minute. Just 1 minute. That’s how long it took for an attacker to exploit an exposed AWS Access Key on GitHub—the time it takes to pour your coffee, start a standup meeting, or click through your deployment pipeline. In that blink of an eye, an exposed secret can grant unauthorized access and spark a full-scale breach.
This blog uncovers how GitHub and GitLab, indispensable tools for modern developers, are also prime targets for attackers searching for secrets. You’ll see how exposed secrets are exploited in seconds and why secret rotation is not just insufficient but a dangerously misleading strategy.
If you missed our prelude blog, it sets the stage with the motivations behind this research and outlines the larger implications of secret mismanagement. Ready? Let’s dive in.
GitHub and GitLab: A Breeding Ground for Secrets Exposure
GitHub and GitLab are among the most widely used platforms for hosting code and managing version control. Their ubiquity makes them natural targets—not just for misconfigurations or overlooked practices, but also for secrets that accidentally slip into repositories. These platforms are not just tools; they are high-visibility ecosystems where a single mistake can result in immediate exploitation. Whether through human error, CI/CD automation, or ignored best practices, secrets frequently make their way into code—and attackers are always watching.
In this post, we’ll unpack the results of our experiments, demonstrating how secrets leaked in various scenarios were detected and exploited. We’ll also examine the platforms' native capabilities—such as GitHub’s secret scanning (and GitLab’s lack thereof)—and assess their effectiveness in identifying and responding to these exposures.
In our Code Hosting & Version Control experiments, we tested AWS access key exposure on GitHub and GitLab, the two most popular version control platforms. We tried various scenarios, like forking popular repositories, embedding keys in .env and Terraform files, and creating issues with embedded secrets. The results showed a clear difference: on GitHub, exposed keys were almost always exploited within minutes, indicating high levels of scanning activity by attackers. On GitLab, however, most keys went unnoticed, with only two cases of exploitation. The sections below detail the scenarios and the full results.
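To make the scanning pressure concrete, the sketch below shows the kind of scan-and-validate loop that automated secret hunters are generally assumed to run against public commits: a regex for the AWS access key ID format, a looser pattern for candidate secrets, and a live check via STS. The regexes, pairing heuristic, and file path are illustrative assumptions, not a reconstruction of any specific attacker tool or of GitHub's own scanner.

```python
# Minimal sketch of the scan-and-validate pattern automated secret scanners
# are assumed to follow. The regexes and the naive pairing heuristic are
# illustrative only, not a reconstruction of any real tool.
import re
import boto3
from botocore.exceptions import ClientError

ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")    # AWS access key ID format
SECRET_KEY_RE = re.compile(r"\b[A-Za-z0-9/+=]{40}\b")  # candidate secret access keys

def find_candidate_pairs(text: str):
    """Pair every access key ID in a blob with every secret-looking string near it."""
    for key_id in ACCESS_KEY_RE.findall(text):
        for secret in SECRET_KEY_RE.findall(text):
            yield key_id, secret

def validate(key_id: str, secret: str) -> bool:
    """Check whether a candidate pair is live by calling STS GetCallerIdentity."""
    sts = boto3.client(
        "sts",
        aws_access_key_id=key_id,
        aws_secret_access_key=secret,
    )
    try:
        identity = sts.get_caller_identity()
        print(f"LIVE key {key_id} belongs to {identity['Arn']}")
        return True
    except ClientError:
        return False

if __name__ == "__main__":
    with open(".env") as fh:             # any file pulled from a public repo
        blob = fh.read()
    for key_id, secret in find_candidate_pairs(blob):
        validate(key_id, secret)
```

The point is the economics: a single GetCallerIdentity call is enough to confirm a key is live, which is why exploitation can follow exposure within a minute.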
GitHub
Scenarios
In the table below, we have listed all the scenarios we executed on GitHub. We used such a wide variety of scenarios to understand how meticulous attackers' scanning activity is, as well as the nuances of GitHub’s own scanning.
Scenario Name | Description |
--- | --- |
Fork Yeah! | Forked the Kubernetes repository on GitHub and pushed a plaintext AWS access key into the master branch of the forked repository. |
What The Fork | Forked the Kubernetes repository and created a pull request to the original repository, including a plaintext AWS access key in the changes. |
Local Fork, Global Consequences | Created a pull request in a forked repository with a plaintext AWS access key; the pull request was not merged. |
Fresh Start, Same Mistake | Created a new public GitHub repository and added a plaintext AWS access key in a local pull request. |
The Remediation Mirage | Rotated a previously exposed AWS access key and placed the new key back into the repository. |
The .env File Fiasco | Added a plaintext AWS access key to an .env file within the repository. |
The .env File Strikes Again | Rotated the AWS access key and updated the .env file with the new key. |
When Examples Become Exposures | Placed an AWS access key in an .env.example file intended as a template, but included the actual key. |
Hidden in Plain Sight | Posted a plaintext AWS access key within a GitHub issue to see if it would be detected. |
Terraform State Turmoil | Added an AWS access key to a terraform.tfstate file within the repository. |
Terraform State Turmoil II | Rotated the AWS access key and updated the terraform.tfstate file with the new key. |
Results
Across our GitHub scenarios, we observed rapid exploitation of exposed AWS access keys, with most attacks occurring within minutes of exposure. On average, keys were exploited in 6.6 minutes from the time of exposure. AWS alerts were typically fast, arriving in an average of 1.4 minutes after exposure. However, even with timely alerts, the speed of exploitation shows how quickly attackers can act, often before defensive measures can be taken.
Scenario | Time to Exploit ⏱️ | AWS Alert Time ⚠️ |
--- | --- | --- |
Fork Yeah! | 5 minutes | 1 minute |
What The Fork | 1 minute | 1 minute |
Local Fork, Global Consequences | 5 minutes | 1 minute |
Fresh Start, Same Mistake | 2 minutes | 2 minutes |
The Remediation Mirage | 2 minutes | 1 minute |
The .env File Fiasco | 5 minutes | 2 minutes |
The .env File Strikes Again | 30 minutes | 1 minute |
When Examples Become Exposures | 5 minutes | 2 minutes |
Hidden in Plain Sight | No exploitation | 1 minute |
Terraform State Turmoil | 5 minutes | 1 minute |
Terraform State Turmoil II | 6 minutes | 3 minutes |
GitLab
Scenarios
In the table below, we have listed all the scenarios we executed on GitLab. We chose a more limited set of scenarios since we already knew that GitLab offers fewer native scanning controls than GitHub does. However, we wanted to assess the extent of attackers' interest in GitLab repositories when hunting for leaked keys.
Scenario Name | Description |
--- | --- |
Fork Yeah! | Forked a popular GitLab repository and pushed a plaintext AWS access key into the master branch of the forked repository. |
What The Fork | Forked the F-Droid repository and created a merge request to the original repository, including a plaintext AWS access key in the changes. |
Local Fork, Global Consequences | Created a merge request with a plaintext AWS access key in a forked GitLab repository; the merge request was not merged. |
Fresh Start, Same Mistake | Created a new public GitLab repository and added a plaintext AWS access key in a local merge request. |
The .env File Fiasco | Added a plaintext AWS access key to an .env file within the GitLab repository. |
When Examples Become Exposures | Placed an AWS access key in an .env.example file within the repository, including the actual key. |
Hidden in Plain Sight | Posted a plaintext AWS access key within a GitLab issue to see if it would be detected. |
Terraform State Turmoil | Added an AWS access key to a terraform.tfstate file within the GitLab repository. |
Results
In our GitLab scenarios, we saw significantly less exploitation activity compared to GitHub. Only two scenarios—What The Fork and Hidden in Plain Sight—resulted in exploitation, and both took approximately 3 days to be detected by attackers. Notably, no alerts were received from AWS in any of the GitLab scenarios, which suggests that GitLab lacks a security scanning partnership similar to GitHub’s Secret Scanning Partner Program. This absence of alerts also indicates that AWS Trusted Advisor’s secret scanning is limited to GitHub and does not cover GitLab.
Scenario | Time to Exploit ⏱️ | AWS Alert Time ⚠️ |
--- | --- | --- |
Fork Yeah! | No exploitation | No alert sent |
What The Fork | 3 days | No alert sent |
Local Fork, Global Consequences | No exploitation | No alert sent |
Fresh Start, Same Mistake | No exploitation | No alert sent |
The .env File Fiasco | No exploitation | No alert sent |
When Examples Become Exposures | No exploitation | No alert sent |
Hidden in Plain Sight | 3 days | No alert sent |
Terraform State Turmoil | No exploitation | No alert sent |
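Since no alert ever arrived for the GitLab leaks, one way to check whether AWS took any automated action at all is to look for its compromised-key quarantine policy on the affected IAM user. The sketch below is a minimal check under assumptions: a hypothetical canary user named canary-user, and the AWSCompromisedKeyQuarantine policy-name prefix treated as the marker of an AWS response.

```python
# Minimal sketch: check whether AWS attached its compromised-key quarantine
# policy to the IAM user that owns a leaked key. The user name is a
# hypothetical placeholder; the policy-name prefix is an assumption.
import boto3

QUARANTINE_POLICY_PREFIX = "AWSCompromisedKeyQuarantine"   # AWS-managed policy family
USER_NAME = "canary-user"                                  # hypothetical canary IAM user

iam = boto3.client("iam")
attached = iam.list_attached_user_policies(UserName=USER_NAME)["AttachedPolicies"]

quarantined = [p["PolicyName"] for p in attached
               if p["PolicyName"].startswith(QUARANTINE_POLICY_PREFIX)]

if quarantined:
    print(f"AWS reacted: {quarantined} attached to {USER_NAME}")
else:
    print(f"No quarantine policy on {USER_NAME} - no automated AWS response observed")
```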
Comparison - Is One Safer Than The Other?
In the table below, ✅ marks scenarios where the key was exploited, while ❌ indicates those left unexploited, comparing similar scenarios run on both GitHub and GitLab.
Scenario | GitHub | GitLab |
--- | --- | --- |
Fork Yeah! | ✅ | ❌ |
What The Fork | ✅ | ✅ |
Local Fork, Global Consequences | ✅ | ❌ |
Fresh Start, Same Mistake | ✅ | ❌ |
The .env File Fiasco | ✅ | ❌ |
When Examples Become Exposures | ✅ | ❌ |
Hidden in Plain Sight | ❌ | ✅ |
Terraform State Turmoil | ✅ | ❌ |
The takeaway is clear: GitHub remains a high-risk platform for exposed secrets, with most scenarios quickly exploited by attackers using automated tools. In contrast, GitLab saw far less exploitation, suggesting it is currently of less interest to attackers. However, this doesn't mean GitLab is inherently safer. The platform lacks built-in alerting for exposed secrets and could benefit from maturing its security controls. Strengthening these capabilities—alongside partnerships with key vendors—presents an opportunity for GitLab to better protect its customers and contribute to a safer cyber ecosystem. Either way, widely used platforms like GitHub pose a significant risk for exposed secrets, and relying on rotation and quarantine policies alone isn’t enough to protect them.
The Method To The Madness: Scenarios Breakdown
Below is a full breakdown of the scenarios we ran, including the technical scenario setup, insights, and links to the original leaks (where applicable).
GitHub Scenarios
Fork Yeah!
We forked the widely used Kubernetes repository on GitHub. We then pushed a plaintext AWS access key into the master branch of our forked repository.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 5 minutes | 1 minute |
Imagine our surprise upon seeing the results of this scenario, the first one we tested out of the many detailed in this research. We received an alert that the secret was exposed, and it took attackers just 5 minutes to find the key and use it. Five minutes. We were shocked too.
What The Fork
We forked the same Kubernetes repository and created a pull request to the original repository, including the plaintext AWS access key in the changes.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 1 minute | 1 minute |
In this case, exploitation occurred within just 1 minute - the same amount of time it took for AWS to send an exposure alert.
Local Fork, Global Consequences
We cloned the Kubernetes repository locally and created a pull request in our forked repository, adding a plaintext AWS access key. The pull request was not merged.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 5 minutes | 1 minute |
Despite the secret being added only to a local fork's pull request, it was exploited within 5 minutes. AWS alerted us immediately upon exposure.
Fresh Start, Same Mistake
We created a new public GitHub repository and added a plaintext AWS access key in a local pull request.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 2 minutes | 2 minutes |
What's frightening about the results of this scenario is that the secret was exploited at the exact same time we were alerted to its exposure. This means that even if teams had a streamlined process for responding to such a scenario—which is extremely unlikely, as even the most polished end-to-end process might not be fast enough—attackers would still have had ample time to get what they need to persist and move inside the environment.
The Remediation Mirage
Our previous key was "inadvertently" exposed, and we needed to immediately create a new key and place it there. What could possibly go wrong?
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 2 minutes | 1 minute |
Different secret, same results. Nothing new under the sun. The famous saying "doing the same thing over and over again expecting different results is the definition of insanity" shines right through here. This new key was exploited within just 2 minutes, alongside immediate alerting from AWS. Simply rotating and re-exposing secrets does not mitigate the risk—in fact, it perpetuates the vulnerability. But that's nothing new.
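For context, the sketch below shows what a routine rotation amounts to with boto3: mint a new key, then disable and delete the exposed one. The IAM user name and key ID are placeholders. Rotation only retires the old credential; it does nothing about the path that leaked it, so a rotated key committed back to the repository simply restarts the clock.

```python
# Minimal sketch of a routine access key rotation with boto3. The user name
# and key ID are hypothetical placeholders for illustration.
import boto3

iam = boto3.client("iam")
USER = "ci-deployer"                 # hypothetical IAM user whose key leaked
OLD_KEY_ID = "AKIAXXXXXXXXXXXXXXXX"  # placeholder for the exposed key ID

# 1. Create the replacement credential.
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("New key:", new_key["AccessKeyId"])

# 2. Disable, then delete, the exposed credential.
iam.update_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID, Status="Inactive")
iam.delete_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID)

# 3. The step rotation does NOT cover: the new secret must go to a secrets
#    manager or CI variable, never back into the repository.
```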
The .env File Fiasco
We added a plaintext AWS access key to an .env file in the repository, following a common practice for environment configuration files.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 5 minutes | 2 minutes |
Within 5 minutes, we saw attackers exploiting our secret and having a party inside our AWS account—all due to an honest mistake of putting a secret in an env file. We were alerted within 2 minutes of exposure, but that didn't prevent the attackers from exploiting the key of course.
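How do you see the "party"? CloudTrail can be filtered by the compromised access key ID to list what was done with it. The sketch below is a minimal lookup under assumptions: the key ID is a placeholder, and note that LookupEvents covers management events only and can lag by several minutes, which is already longer than most of the exploitation windows we measured.

```python
# Minimal sketch: use CloudTrail LookupEvents to review what an attacker did
# with a leaked key. The access key ID below is a placeholder.
import boto3

COMPROMISED_KEY_ID = "AKIAXXXXXXXXXXXXXXXX"   # placeholder for the leaked key

cloudtrail = boto3.client("cloudtrail")
paginator = cloudtrail.get_paginator("lookup_events")

pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "AccessKeyId",
                       "AttributeValue": COMPROMISED_KEY_ID}]
)

for page in pages:
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username", "-"))
```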
The .env File Strikes Again
We repeated the previous scenario by rotating the AWS key and updating the .env file with the new key, because lightning doesn’t strike twice, right?
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 30 minutes | 1 minute |
Although exploitation took longer - 30 minutes in this case - the key was still compromised. We were alerted within just 1 minute of the secret being exposed.
When Examples Become Exposures
We learned our lesson, but best practices are best practices. Let's see if we can trick attackers by putting our secrets in an example .env file—they'd never look there, right? We placed the AWS access key in an .env.example file, intending it as a template without real secrets, but included the actual key for the test.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 5 minutes | 2 minutes |
Even though we changed our tactic and stored the secret in the .env.example file, it still didn't help; it fell into the wrong hands just 5 minutes after exposure.
Hidden in Plain Sight
We went ahead and put a secret in a GitHub issue, trying to see if anyone is scanning those as well.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
No | - | 1 minute |
We received an email from AWS within 1 minute of exposing this secret; however, we saw no exploitation of the key by any attacker. Is it possible that attackers are not scanning GitHub issues even though they are public? From our experiment, this might be the case, though we still wouldn’t recommend putting your secrets there. Don’t tempt anyone.
Terraform State Turmoil
We like to use Terraform, so we decided to put some credentials there, thinking it’s a good thing to do for automation and credential management purposes. We added the AWS access key to a Terraform state file (terraform.tfstate) within the repository.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 5 minutes | 1 minute |
Within 5 minutes on the clock, attackers were all over our key. We were alerted within 1 minute that our key was exposed.
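Because terraform.tfstate is plain JSON, anything Terraform recorded, credentials included, sits there in cleartext, which is exactly what scanners pick up. A simple guard is to scan the state file before it is ever committed; the sketch below is an illustrative pre-commit style check, not the tooling used in this research.

```python
# Minimal sketch of a pre-commit style check: terraform.tfstate is plain JSON,
# so any credentials Terraform recorded sit there in cleartext. Illustrative only.
import json
import re
import sys

ACCESS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")

def scan_tfstate(path: str) -> list[str]:
    """Return every AWS access key ID found anywhere in the state file."""
    with open(path) as fh:
        raw = fh.read()
    json.loads(raw)                      # confirm the file is valid JSON state
    return ACCESS_KEY_RE.findall(raw)

if __name__ == "__main__":
    hits = scan_tfstate("terraform.tfstate")
    if hits:
        print("Refusing to commit, access key IDs found:", hits)
        sys.exit(1)
```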
Terraform State Turmoil II
We rotated the AWS access key and updated the Terraform state file with the new key. Rotation is a best practice after all, right?
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 6 minutes | 3 minutes |
The new key was exploited in 6 minutes, slightly longer than the previous attempt, but still rapidly.
GitLab Scenarios
Fork Yeah!
To mirror our GitHub experiment, we forked a popular repository on GitLab—GitLab's own FOSS repository, which is a read-only mirror of GitLab. We then pushed a plaintext AWS access key into the master branch of our forked repository.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
No | N/A | No alert sent |
Despite exposing the secret in a well-known repository, we observed no exploitation of the key. This raises a question: are attackers less active on GitLab compared to GitHub? As we'll see soon, that's not necessarily the case.
What The Fork
We forked F-Droid, an alternative app store for Android, and created a merge request (GitLab's equivalent of a pull request) to the original repository, including the plaintext AWS access key in the changes.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 3 days | No alert sent |
Surprisingly, it took 3 days for attackers to exploit the exposed secret, and we received no alert regarding the exposure. In contrast, during our GitHub experiment with a similar scenario, the key was exploited within 1 minute, and we received an alert almost immediately. This longer timeframe on GitLab suggests that attackers may scan GitLab repositories less frequently or prioritize GitHub over GitLab. Nonetheless, this key was eventually compromised, proving that choosing GitLab over GitHub won't prevent an exposed key from being exploited.
Local Fork, Global Consequences
We forked F-Droid again and created a merge request with a plaintext secret on our local repository.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
No | No exploitation | No alert sent |
Despite exposing the secret in a merge request on our local fork, we observed no exploitation of the key. In our GitHub test of this scenario, the key was exploited within 5 minutes, and we received an alert instantly. However, this shouldn't lull anyone into a false sense of security - exposed secrets are a risk regardless of the platform.
Fresh Start, Same Mistake
We created a new public GitLab repository and added a plaintext AWS access key in a local merge request. This is the scenario we also ran in GitHub.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
No | No exploitation | No alert sent |
Once again, we observed no exploitation of the secret, even after exposing it in a merge request on a new public repository. In our GitHub experiment with the same setup, the key was exploited within 2 minutes, a gap that likely reflects attackers prioritizing GitHub scanning due to its larger user base.
The .env File Fiasco
We put a secret in an .env file. Wait, isn't that a best practice? This is the scenario we also ran in GitHub.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
No | No exploitation | No alert sent |
Despite placing the secret in an .env file within the repository, we saw no exploitation of the key. On GitHub, this same scenario resulted in the key being exploited within 5 minutes.
When Examples Become Exposures
We learned our lesson, but best practices are best practices. Let's see if we can trick attackers by putting our secrets in an example .env file—they'd never look there, right? This is the scenario we also ran in GitHub.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
No | No exploitation | No alert sent |
Even though we placed the secret in an .env.example file, intending it as a harmless template, the key was not exploited on GitLab during our observation period. In contrast, on GitHub, the key was exploited within 5 minutes.
Hidden In Plain Sight
We went ahead and put a secret in a GitLab issue, trying to see if anyone is scanning those as well. This is the scenario we also ran in GitHub.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
Yes | 3 days | No alert sent |
Interestingly, the secret placed in a GitLab issue was exploited after 3 days, and we received no alert. In our GitHub experiment, a similar scenario resulted in no exploitation, but we received an alert within 1 minute.
Terraform State Turmoil
We use Terraform and decided to put some credentials there, thinking it's a good thing to do for automation and credential management purposes. We added the AWS access key to a Terraform state file (terraform.tfstate) within the repository. This is the scenario we also ran in GitHub.
Exploited? | Exploitation Time | Alert Time |
--- | --- | --- |
No | No exploitation | No alert sent |
Despite adding the AWS access key to a Terraform state file within the repository, we observed no exploitation of the key on GitLab. In our GitHub scenario, the key was exploited within 5 minutes, and we were alerted after 1 minute. This disparity suggests that attackers may not prioritize scanning infrastructure-as-code files on GitLab as they do on GitHub. However, this shouldn't encourage complacency—exposing secrets in any form is a risk that can lead to unauthorized access.
What’s Next: Secrets in Package Managers
This blog post revealed how AWS Access Keys exposed in code hosting platforms like GitHub and GitLab are exploited at lightning speed—often in minutes. But the risks extend far beyond repositories. In our next post, we’ll uncover how package managers like npm, PyPI, and Docker Hub serve as fertile ground for accidental secret exposure, proving once again that secret rotation is a practice that lulls organizations into a false sense of safety while attackers exploit exposed secrets faster than any rotation policy can respond.
The argument that “rotation works as long as secrets don’t leak” is both dangerous and outdated. Attackers move too fast, and environments are too complex for this approach to hold up. Protecting non-human identities (NHIs) demands a paradigm shift in how we think about security.
If you can’t wait for the next blog post, we invite you to download the full report, with all scenarios we ran, a deeper dive into our methodologies, platform-specific insights, attacker behavior patterns, and the tool we built to neutralize exposed secrets instantly. It’s an essential read for any organization looking to secure its systems against the escalating risks of exposed secrets.