

Since its emergence in May 2024, LLMjacking has evolved from a novel security concern into an industrialized cybercrime marketplace. When this new class of cloud-focused AI attack was first reported, researchers predicted that motivated actors would commercialize the practice. Additional investigations now confirm those predictions: an underground marketplace is actively monetizing unauthorized AI access at scale. LLMjacking is the new cryptomining.
A quick recap: What is LLMjacking?
LLMjacking is the unauthorized use of cloud-hosted LLM resources via compromised credentials, APIs, or exposed endpoints. The term was coined by analogy with other resource-hijacking attacks that share the same goal, such as proxyjacking and cryptojacking.
Unlike traditional API abusers, LLMjackers don’t just steal data; they steal compute cycles, inference capacity, and access rights, leaving victims with inflated cloud bills and potentially exposed model capabilities.
Evidence from early LLMjacking attacks
When the first documented LLMjacking attacks were published in mid-2024, researchers identified a clear pattern: threat actors exfiltrated cloud credentials from compromised environments, verified which LLM services were enabled, and attempted to invoke the models hosted in the victim’s account.
It was clear after only a few months that LLMjacking was not a one-off campaign but a growing market. Attackers were not only adapting rapidly, but treating LLMjacking as a source of income. Some introduced novel techniques:
- Leveraging reverse proxies to centralize access to multiple compromised accounts while hiding the underlying compromised credentials.
- Adding to the list of models being targeted, like DeepSeek-V3 within days of its release.
- Shifting from abusing models already available in the victim environment to attempting to enable models that had not yet been activated in the account.
- Using cloud intrusion techniques optimized with AI to achieve administrative access in minutes before shifting to LLMjacking.
LLMjacking turns commercial
By early 2026, independent research made it clear that the once-theoretical 2024 prediction had become a stark reality: LLMjacking has been commercialized.
Dubbed Operation Bizarre Bazaar, the campaign represents the first LLMjacking ecosystem with clear marketplace monetization and attribution. Researchers also detailed:
- LLMjacking attacks targeting Model Context Protocol (MCP) server endpoints.
- Automated scanning using Shodan and Censys to locate exploitable endpoints like unauthenticated APIs, default ports, and exposed development servers.
- Validation of the quality of access gained in each victim environment, along with its LLMjacking capabilities and limitations.
- Resale of LLMjacked compute cycles and API access via the underground marketplace silver.inc on Telegram and Discord, in exchange for PayPal and cryptocurrency payments.
AI system risks are compounding
While fraudulent cloud costs remain a significant impact of LLMjacking, the operational security risks are also expanding. MCP servers bridge AI systems with internal infrastructure like file systems and databases, so the compromise of an MCP server could enable lateral movement into critical assets beyond LLMs, or the exploitation of sensitive data. And if credentials or APIs are stolen and resold following an LLMjacking attack, buyers may have other plans for the victim environment, opening the door to a cascade of follow-on breaches.
What LLMjacking means for security leaders
LLMjacking’s evolution shouldn’t surprise anyone who has watched how attackers industrialize new resources. We’ve seen it with the cloud, cryptomining, and credential abuse; AI is simply the next substrate. Attackers now treat LLMs, AI integrations, and associated APIs as first-class assets because of the treasure trove of credentials, access, and data behind them. That has a few implications for security leaders and their teams:
- Credentials are the new attacker currency: Credential hygiene is a foundational cloud practice. Credentials, API keys, and tokens need shorter lifetimes, tighter IAM scoping, and continuous monitoring for unusual usage patterns to reduce the risk of a breach.
- Maintain an assume-breach mindset: If it’s exposed to the internet, it will be scanned and tested, often within hours. APIs, MCP servers, chatbot backends, and model endpoints cannot be thought of as “experimental” or “internal tooling”. Any intentionally publicly exposed asset must be inventoried, authenticated, rate-limited, and monitored.
- Cost anomalies are not just a finance problem: Unexpected spikes in LLM usage, inference calls, or cloud spend are indicators that need to be reviewed by both security and finance.
- Integrations expand the blast radius: MCP servers and AI integrations leave many doors potentially open to attackers. Treat them like privileged middleware: enforce authentication and authorization, and monitor for and respond to reconnaissance activity before an attack takes hold.
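The cost-anomaly point above can be operationalized even with very simple tooling. A minimal sketch that flags days whose invocation counts spike far above the recent baseline (the counts and threshold are illustrative; a production pipeline would draw on billing exports or CloudTrail logs and a more robust detector):

```python
# Flag days whose LLM invocation count deviates sharply from the
# window's baseline, using a simple z-score.
from statistics import mean, stdev


def anomalous_days(daily_counts, threshold=2.0):
    """Return indices of days whose count exceeds mean + threshold * stdev.

    Note: with sample stdev, the largest possible z-score in a window of
    n points is (n - 1) / sqrt(n), so for short windows a threshold near
    3 can never fire; 2.0 is a pragmatic default for small windows.
    """
    if len(daily_counts) < 2:
        return []
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    if sigma == 0:
        return []  # perfectly flat usage: nothing to flag
    return [i for i, count in enumerate(daily_counts)
            if (count - mu) / sigma > threshold]


if __name__ == "__main__":
    # A quiet account that suddenly serves thousands of inference calls.
    counts = [120, 135, 110, 128, 140, 9500]
    print(anomalous_days(counts))  # the spike on the last day is flagged
```

Feeding the same signal to both the security and finance teams, as the bullet above suggests, turns a billing surprise into an early indicator of compromise.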
LLMjacking sits at the intersection of cloud security, identity security, and AI risk. Its progression from isolated incident to commercialized marketplace is a familiar one, but it happened faster than expected. That acceleration matters because commercialization lowers the barrier to entry: compromising AI infrastructure no longer requires technical skill when access can simply be bought.
