
Falco Feeds extends the power of Falco by giving open source-focused companies access to expert-written rules that are continuously updated as new threats are discovered.

AI issues are stealing the spotlight
AI didn’t just make headlines in February; it swallowed them whole. From viral agentic platforms to breached agents and AI-powered attacks, artificial intelligence dominated the global conversation. Meanwhile, agentic AI “communities” like Moltbook and Moltroad have taken hold, much as Myspace and Facebook did in their glory days, but with far less HTML.
But while AI issues kept us occupied, attackers kept themselves busy exploiting vulnerabilities and weak credentials. So yes, we will talk a lot about AI, but it’s not the only threat on the block worth noting.
Feb 6: BeyondTrust vulnerability and the speed of modern exploit weaponization
- CVE-2026-1731 is a critical pre-authorization remote code execution vulnerability in the BeyondTrust Remote Support product.
- Proofs of concept and active exploitation were observed within hours of the researchers’ disclosure.
- One week later, the vulnerability was reclassified as a zero-day; BeyondTrust stated that exploitation began on January 31.
- At the same time, CISA gave federal agencies only three days to patch the flaw.
- On February 20, the vulnerability was reported as being used in ransomware attacks.
Feb 17: AI recommendation poisoning
- Microsoft published research and coined the new term AI Recommendation Poisoning.
- The AI hijacking technique, used by otherwise legitimate organizations, mirrors SEO poisoning.
- Instead of manipulating search engine rankings, businesses plant hidden instructions that bias AI-generated summaries in their favor.
- This raises concerns about the integrity of training data for AI systems and erodes transparency and trust.
Feb: All about AI
- Supply chain compromise: A compromised token led to an unauthorized update of Cline CLI, an open source, AI-powered coding assistant. On February 17, the Cline CLI v2.3.0 release quietly installed OpenClaw on developer systems for roughly eight hours before it was pulled. The project developers said the installation was neither authorized nor intended.
- Infostealer targeting agents: On February 16, a variant of the infostealer malware Vidar was reported as exfiltrating configuration files, tokens, and API keys from a victim’s OpenClaw deployment. This is a stark reminder to secure AI identities.
- ClawHub marketplace houses malicious skills: Just as malicious packages turn up on Docker Hub or GitHub, in early February researchers found that 8–12% of audited ClawHub skills included backdoors and credential stealers. Trust, but verify before grabbing an open source capability.
- OpenClaw high-severity RCE vulnerability: On February 1, researchers reported that CVE-2026-25253 allows remote code execution (RCE) within seconds of a victim clicking a malicious link. The victim’s token is sent to an attacker-owned server, letting the attacker log in to the victim’s OpenClaw instance, access data, and perform actions. Be careful what you click, and always rotate keys and tokens.
- Large-scale AI-driven attack: According to AWS, on February 23, more than 600 Fortinet FortiGate devices were compromised due to exposed management ports, weak credentials, and poor identity management practices. The attacker used AI to plan, manage, and scale the campaign across misconfigured devices in 55 countries. AI continues to lower the barrier for opportunistic campaigns.
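Several of the incidents above come down to the same runtime signal: an unexpected process reading AI agent credential material. As an illustrative sketch, not an actual Falco Feeds rule, a Falco rule along these lines could flag that pattern. The file paths and process names here are hypothetical placeholders you would adapt to your own deployment, and `open_read` is a macro from Falco’s default ruleset.

```yaml
# Hypothetical locations where an agentic AI tool might store config and
# tokens; substitute the real paths from your deployment.
- list: ai_agent_secret_files
  items: ["/root/.openclaw/config.json", "/root/.openclaw/credentials.json"]

# Placeholder names for processes expected to read those files legitimately.
- list: allowed_agent_binaries
  items: [openclaw, node]

- rule: Unexpected Read of AI Agent Credentials
  desc: >
    Detect a process outside the expected set reading AI agent configuration
    or token files, a pattern seen in infostealer campaigns targeting agents.
  condition: >
    open_read
    and fd.name in (ai_agent_secret_files)
    and not proc.name in (allowed_agent_binaries)
  output: >
    AI agent credential file opened by unexpected process
    (file=%fd.name process=%proc.name user=%user.name container=%container.name)
  priority: WARNING
  tags: [credential_access, ai]
```

Pairing a rule like this with token rotation shortens the window in which any credentials that do leak remain useful.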
Additional Sysdig TRT findings
The Sysdig Threat Research Team (TRT) published a blog on February 11 that demonstrates how AI can dramatically compress an attack kill chain. In eight minutes, an attacker went from initial access to admin privileges. With access, the attacker was able to exfiltrate sensitive data and launch an LLMjacking attack. The blog includes several mitigation techniques for the TTPs used in the attack, and lists rules Sysdig Secure customers should use to safeguard against this type of threat.
Also in the news
The European Commission responds to an attack the right way: The Commission published a press release on February 5 about an attack on a portion of its infrastructure. A cyberattack that could have exposed staff names and cell numbers was thwarted; rapid response contained the incident and cleared the system within nine hours. Other EU-based organizations were very likely hit by the same campaign, including the Dutch Data Protection Authority, the Council for the Judiciary, and an agency supporting Finland’s Ministry of Finance.
Stolen credentials lead to a massive breach in France: The French Ministry of Finance published a press release on February 18 regarding the loss of sensitive banking data from potentially 1.2 million accounts (1.5% of total records) in the national bank account registry (FICOBA). At the end of January, a threat actor used a privileged employee’s credentials to access a file listing all bank accounts opened in French banks. FICOBA remained offline as of February 27, and the investigation is ongoing.
Anthropic champions using AI for security: On February 20, Anthropic launched Claude Code Security to promote AI-powered vulnerability remediation. What it really did, though, was tank a bunch of tech stocks. But in all seriousness, to keep up with the speed of AI-enabled exploitation, this is the way forward in cyber defense. Sysdig Sage™ does this, too.
Closing thoughts
AI is accelerating attacks, compressing kill chains, and lowering the barrier to entry, but it isn’t replacing security fundamentals. The same weaknesses still lead to compromised organizations. Don’t let the spotlight blind you: Minimize exposure, work on credential management, improve identity hygiene, rotate tokens, and apply patches or other mitigations. All the security best practices we’ve known for years still apply, and our actions still decide the outcome — even in an AI-powered era.
CISO corner
By: Conor Sherman, Sysdig CISO in Residence
Shadow AI is more potent than Shadow IT
Shadow IT moved data outside sanctioned environments. Shadow AI extends privileged access outside them. Agent platforms on employee laptops hold API keys, tokens, and live connections to production systems. Historically, those privileged actions were confined to sanctioned SaaS apps and production environments your team could monitor and control. That boundary is gone. The attack surface now includes every laptop where a curious employee has installed an agentic AI tool.
As of this month, the risk is no longer hypothetical. A Vidar variant harvested tokens and API keys directly from an OpenClaw deployment. Sysdig TRT showed that, with those credentials in hand, an attacker can reach admin access in eight minutes. And 8–12% of audited ClawHub skills contained backdoors or credential stealers. That's not a tail risk; it's a baseline contamination rate in a supply chain from which your teams are actively pulling.
If AI tooling isn't in your asset inventory, it's in your blind spot.
The exploitation window is now hours. Patch cycles are not enough
The BeyondTrust vulnerability (CVE-2026-1731) was under active exploitation within 24 hours of the public PoC release, as confirmed independently by GreyNoise, watchTowr, and Arctic Wolf. Ransomware use followed within three weeks. Threat actors are using AI to scan for and weaponize disclosures faster than vulnerability management programs were designed to respond.
Patch when you can, as fast as you can. But the right question isn't "how fast do we patch?" It's "what's our posture when we haven't patched in time?" Real-time detection is the difference between catching exploitation in progress and reading about it in a disclosure.
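To make “posture when we haven’t patched” concrete: runtime rules can catch post-exploitation behavior even before a CVE is known, because a freshly exploited service still has to do something anomalous, such as spawn a shell. The sketch below uses Falco’s rule syntax with hypothetical process names (not the actual BeyondTrust binaries); `spawned_process` and `shell_binaries` come from Falco’s default ruleset.

```yaml
# Placeholder names for the remote-support service processes; substitute
# the real binary names observed in your environment.
- list: remote_support_binaries
  items: [bt_support_svc, bt_appliance]

- rule: Shell Spawned by Remote Support Service
  desc: >
    A remote-support service process spawning a shell is a common
    post-exploitation signal for pre-auth RCE bugs like CVE-2026-1731,
    and fires whether or not the underlying CVE has been disclosed.
  condition: >
    spawned_process
    and proc.pname in (remote_support_binaries)
    and proc.name in (shell_binaries)
  output: >
    Shell spawned by remote support service
    (parent=%proc.pname shell=%proc.name cmdline=%proc.cmdline user=%user.name)
  priority: CRITICAL
  tags: [initial_access, rce]
```

A behavioral rule like this is what closes the gap between “exploitation started” and “patch applied.”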
The fundamentals haven't changed, but the stakes have:
