
Cybersecurity Firm AppSecure Identifies Critical Flaw in Meta.AI Leaking Users’ AI Prompts and Responses, Rewarded $10,000

AppSecure, a cybersecurity firm specializing in penetration testing and red teaming, has discovered a critical vulnerability in Meta.AI, Meta’s generative AI chatbot platform. Left unaddressed, the flaw could have exposed other users’ private AI interactions and data.

Sandeep Hodkasia, CEO and Founder of AppSecure Security, identified the issue during a security research exercise. His investigation revealed that Meta.AI’s GraphQL API was unintentionally exposing prompts and outputs generated by other users. This oversight posed a risk of unauthorized access to personal and potentially sensitive conversations within the platform.

The flaw originated from a missing authorization check in Meta.AI’s GraphQL API, specifically in the useAbraImagineReimagineMutation operation. The system used a media_set_id parameter to track user interactions, but it never validated whether the person making the request actually owned that ID. As a result, any logged-in user could alter the media_set_id parameter and retrieve prompts and AI-generated content created by other users. Fortunately, no evidence of misuse or exploitation was found.
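
To make the class of bug concrete, the sketch below models it as an insecure direct object reference (IDOR) in a simple server-side resolver. Every name in it (the functions, the media-set store, the user IDs) is an illustrative assumption rather than Meta’s actual code: the vulnerable version returns whatever media_set_id the client supplies, while the fixed version verifies ownership first.

# Minimal sketch of the flaw described above: an insecure direct object
# reference (IDOR). All identifiers here are hypothetical, not Meta's code.

class AuthorizationError(Exception):
    pass

# Hypothetical store mapping media_set_id -> owner and generated content.
MEDIA_SETS = {
    "ms_1001": {"owner_id": "user_a", "prompt": "a castle at dusk"},
    "ms_1002": {"owner_id": "user_b", "prompt": "a private family photo edit"},
}

def get_media_set_vulnerable(requester_id: str, media_set_id: str) -> dict:
    """Vulnerable resolver: trusts the client-supplied media_set_id and never
    checks ownership, so any logged-in user can read any record."""
    return MEDIA_SETS[media_set_id]

def get_media_set_fixed(requester_id: str, media_set_id: str) -> dict:
    """Fixed resolver: rejects the request unless the requester owns the
    media set (the authorization check the vulnerable API was missing)."""
    record = MEDIA_SETS[media_set_id]
    if record["owner_id"] != requester_id:
        raise AuthorizationError("requester does not own this media set")
    return record

# user_b tampers with the ID: the vulnerable resolver leaks user_a's prompt,
# while the fixed resolver raises AuthorizationError instead.
print(get_media_set_vulnerable("user_b", "ms_1001")["prompt"])  # leaked
# get_media_set_fixed("user_b", "ms_1001")  # raises AuthorizationError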

AppSecure reported the vulnerability to Meta on December 26, 2024. Meta investigated the issue and rolled out a temporary fix on January 24, 2025, then resolved it permanently on April 24, 2025.

In their official response, Meta said: “You demonstrated an issue where a malicious actor could access users’ prompts and AI-generated media via a certain GraphQL query, potentially allowing an attacker to access users’ private media. We mitigated this and found no evidence of abuse.” Recognizing the significance of the finding, Meta awarded $10,000 for the key vulnerability and an additional $4,550 for related issues identified during the same investigation.

“This wasn’t about chasing a bounty — it was about securing a system millions are starting to trust,” says Sandeep. “If a platform as robust as Meta.AI can have such loopholes, it’s a clear signal that other AI-first companies must proactively test their platforms before users’ data is put at risk.”

As more companies rapidly deploy generative AI models, the attack surface continues to grow. AppSecure’s findings highlight the need for a proactive approach to security, especially in systems that handle user-generated content, prompt history, or model outputs.

AppSecure has built a reputation for responsibly uncovering high-impact security vulnerabilities, and many AI-focused companies trust it to help protect their systems. The company actively tests how users interact with AI platforms and examines the back-end processes behind them to find hidden flaws, helping businesses fix issues before they become serious threats.

“Security is not just about fixing problems after they appear; it’s about anticipating risks and acting before damage occurs,” adds Sandeep. “That’s why leading companies work with us to identify real-world risks early and build AI platforms that stay secure and reliable from the very beginning.”

About AppSecure Security

AppSecure Security is a CREST-accredited penetration testing firm that identifies and addresses critical vulnerabilities through real-world attack simulations. Its experienced team focuses on testing web applications, APIs, and networks to expose hidden risks before threats can cause harm. By following industry standards and taking a proactive approach, AppSecure helps businesses strengthen their defenses and stay ahead of evolving cyber challenges, making it a trusted partner for comprehensive security solutions.
