
CVE-2026-44338: PraisonAI authentication bypass in under 4 hours and the growing trend of rapid exploitation

Published by: Michael Clark, Director of Threat Research
Published: May 12, 2026

On May 11, 2026, GitHub published advisory GHSA-6rmh-7xcm-cpxj, tracked as CVE-2026-44338, for PraisonAI, an open-source multi-agent orchestration framework with roughly 7,100 GitHub stars. The legacy api_server.py entrypoint shipped with authentication disabled by default, exposing two endpoints, GET /agents and POST /chat, to any caller.

Within three hours and 44 minutes of the advisory becoming public, a scanner identifying itself as CVE-Detector/1.0 was probing the exact vulnerable endpoint on internet-exposed instances. The advisory was published at 13:56 UTC. The first targeted request landed at 17:40 UTC the same day. 

The Sysdig Threat Research Team (TRT) operates a fleet of early warning systems across multiple cloud providers to measure advisory-to-exploitation latency. The rapid exploitation of PraisonAI following the publication of its vulnerability advisory is the latest example of a broader trend: Over the past several months, the Sysdig TRT has observed an increasing number of CVEs exploited within hours of disclosure. Similar patterns were observed with the recent Marimo, LMDeploy, and Langflow CVEs.

As illustrated by the Zero Day Clock, AI now enables attackers to reverse-engineer patches, identify the vulnerabilities they address, and generate functional exploits within minutes. The evidence suggests that rapid exploitation is no longer an outlier, but an emerging norm. 

The research below examines the Sysdig TRT’s observations on the exploitation of CVE-2026-44338, along with related detections and recommended actions.

Timeline

| Time (UTC) | Event |
| --- | --- |
| May 11, 2026, 13:56:16 | GHSA-6rmh-7xcm-cpxj published, CVE-2026-44338 assigned |
| May 11, 2026, 17:32:50 | First contact from 146.190.133.49: generic recon (/, /.env, /admin) |
| May 11, 2026, 17:40:53 | Source IP pivots to PraisonAI-specific endpoints (/praisonai/version.txt, /docs, /api/agents/config, /api/agents) |
| May 11, 2026, 17:40:55 | GET /agents from 146.190.133.49, User-Agent CVE-Detector/1.0 |
| May 11, 2026, 17:41:32 | Second GET /agents probe |

The time from advisory publication to the first targeted request against the documented vulnerable path was three hours, 44 minutes, and 39 seconds.

The PraisonAI vulnerability

PraisonAI ships a legacy Flask-based API server, src/praisonai/api_server.py, that hard-codes AUTH_ENABLED = False and AUTH_TOKEN = None. The check_auth() helper returns True whenever authentication is disabled, so the two "protected" routes fail open by design:

  • GET /agents returns the configured agent metadata, including the agent definition file name and the list of agents.
  • POST /chat accepts any JSON body containing a message key and executes PraisonAI(agent_file="agents.yaml").run(). The submitted message value is ignored; the configured workflow runs regardless of what commands the caller sends.
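The fail-open behavior is easy to illustrate. The sketch below is reconstructed from the advisory's description, not copied from PraisonAI source; it shows why every caller passes the check when the hard-coded defaults are in place:

```python
# Hard-coded defaults from the legacy entrypoint: auth is off and no token exists.
AUTH_ENABLED = False
AUTH_TOKEN = None


def check_auth(headers: dict) -> bool:
    """Fail-open: when authentication is disabled, every caller is 'authorized'.

    Illustrative reconstruction of the pattern the advisory describes.
    """
    if not AUTH_ENABLED:
        # This branch is always taken with the shipped defaults, so the
        # "protected" routes never inspect credentials at all.
        return True
    token = headers.get("Authorization", "")
    return AUTH_TOKEN is not None and token == f"Bearer {AUTH_TOKEN}"


# Any request, with or without credentials, passes the check:
check_auth({})                                # True
check_auth({"Authorization": "Bearer x"})     # True
```

Because the routes only gate on this helper, there is no code path that returns 401 while the defaults are unchanged.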

Affected versions

| Range | Status |
| --- | --- |
| >= 2.5.6, <= 4.6.33 | Vulnerable |
| 4.6.34 | Fixed |

At the time of publication, the latest PyPI release was 4.6.33, meaning every then-current installation was vulnerable.

What the Sysdig TRT observed

The activity from 146.190.133.49 (a DigitalOcean US IP address) followed a packaged-scanner profile, not interactive exploration. Two passes ran eight minutes apart, each pushing approximately 70 requests in roughly 50 seconds. The first pass swept generic disclosure paths (/.env, /admin, /users/sign_in, /eval, /calculate, /Gemfile.lock). The second pass narrowed to AI-agent surfaces, specifically:

| Path class | Sample paths |
| --- | --- |
| FastAPI fingerprint | GET /docs, GET /openapi.json, GET /swagger.json |
| PraisonAI version fingerprint | GET /praisonai/version.txt, GET /pyproject.toml, GET /poetry.lock |
| API-server endpoint enumeration (CVE-2026-44338) | /api/agents/config, /api/agents, /api/v1/agents, /api/tasks, /api/tools, /agents |
| MCP-server endpoint enumeration | GET + POST on /api/mcp/config, /api/mcp/servers, /api/mcp/list, /api/mcp/status, /mcp/config, /api/v1/mcp, /api/tools/config |

The probe that matched CVE-2026-44338 directly was a single GET /agents with no Authorization header and User-Agent CVE-Detector/1.0. That request returns 200 OK with body {"agent_file":"agents.yaml","agents":[...]}, confirming the bypass was successful.
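Defenders can replicate the scanner's single validation probe against their own hosts. The helper below is a hypothetical sketch (the host, port, and function names are assumptions): it sends the same unauthenticated GET /agents and checks for the bypass signature described above.

```python
import json
import urllib.request


def classify_response(status: int, body: bytes) -> bool:
    """Return True if a response matches the CVE-2026-44338 bypass signature:
    200 OK with an agent_file key, which is exactly what the scanner's
    single probe confirms."""
    if status != 200:
        return False
    try:
        payload = json.loads(body)
    except ValueError:
        return False
    return "agent_file" in payload and "agents" in payload


def probe(host: str = "http://127.0.0.1:8080") -> bool:
    """Send the same unauthenticated GET /agents the scanner used.

    Hypothetical self-check helper; point it at your own instance only.
    """
    req = urllib.request.Request(f"{host}/agents")  # deliberately no Authorization header
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return classify_response(resp.status, resp.read())
    except OSError:
        return False
```

A True result from probe() means the instance answers the documented vulnerable path without credentials and should be taken offline or upgraded.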

The scanner did not send POST /chat during either pass. This pattern is consistent with a validation step: enumerate the agent list, confirm the auth bypass works, log the host as exploitable, and move on. Follow-on tooling is typically separate.

Next steps

This is not a typical remote code execution (RCE) scenario: a POST /chat against a successfully fingerprinted instance is a single request, and the impact depends entirely on what the operator's agents.yaml is scripted to do.

The handler does not parse the submitted message; it simply calls PraisonAI(agent_file="agents.yaml").run(). Whatever that workflow is configured to do, an unauthenticated caller can trigger it at will. In typical production deployments, the exposure falls into three classes:

| Impact | Outcome |
| --- | --- |
| Model API quota burn | The configured workflow makes calls to OpenAI, Anthropic, Bedrock, or another LLM provider on the operator's account. An attacker who loops POST /chat from a botnet exhausts the operator's quota and bills them for it. Lowest-effort, highest-cost outcome. |
| Agent tool execution | PraisonAI workflows commonly grant agents code_interpreter, file I/O, web fetch, shell, or HTTP request tools. Each /chat call invokes the full agent graph, including any side-effect-producing tools the operator wired up, and can write files, exfiltrate internal data, send Slack messages, or trigger downstream workflows. |
| Configuration disclosure | GET /agents returns the agent file name and agent list. Combined with any debug logging or error responses produced by the workflow, this lets an attacker map the operator's agent design, prompts, and tool wiring. |

The bypass itself is not arbitrary code execution. But because it removes authentication from a workflow trigger that an operator deliberately exposed to do something useful, the impact ceiling is whatever that workflow is allowed to do.

Indicators of compromise

| Field | Value |
| --- | --- |
| Source IP | 146.190.133.49 (AS14061, DigitalOcean, LLC, US) |
| User-Agent | CVE-Detector/1.0 (no other UA observed from this source) |

The CVE-Detector/1.0 string is the operationally useful tell for defenders. The same User-Agent appearing in any log on any host should be treated as a known-CVE-targeting scanner regardless of which endpoint it hit.

Detection

Until an upgrade is possible, network-layer monitoring catches this class of traffic cleanly because the bypass leaves no missing-auth signal in the application logs. Successful unauthenticated requests look identical to legitimate ones. Detection must happen at the perimeter:

  • Set up WAF or log detection rules for GET /agents and POST /chat requests that lack an Authorization header, and for any request carrying the User-Agent CVE-Detector/1.0.
  • Watch for access to Python-package fingerprint paths (/pyproject.toml, /poetry.lock, /praisonai/version.txt, /requirements.txt).
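The log-side portion of these checks can be sketched as a simple line classifier. This is an illustrative helper, not a shipped rule; the endpoint regex and fingerprint list come from the observations above, and since bypassed requests are indistinguishable from legitimate ones in application logs, every hit on the vulnerable endpoints is flagged for review (the missing-Authorization check belongs at the WAF layer, where headers are visible):

```python
import re

# Vulnerable endpoints and the scanner fingerprints seen in this campaign.
VULN_PATHS = re.compile(r'"(?:GET /agents|POST /chat)[ "]')
SCANNER_UA = "CVE-Detector/1.0"
FINGERPRINT_PATHS = (
    "/pyproject.toml",
    "/poetry.lock",
    "/praisonai/version.txt",
    "/requirements.txt",
)


def suspicious(log_line: str) -> bool:
    """Flag an access-log line that matches the CVE-2026-44338 scanning pattern."""
    return (
        SCANNER_UA in log_line
        or VULN_PATHS.search(log_line) is not None
        or any(path in log_line for path in FINGERPRINT_PATHS)
    )
```

Run it over existing access logs back to May 11, 2026, to determine whether a host was already swept.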

For Falco-instrumented hosts running PraisonAI, the post-exploitation footprint of a /chat-triggered workflow is whatever the configured tools run: Python spawning a subprocess, network egress from the agent process, and file writes outside the working directory. Existing rules for unexpected child processes and outbound network from interpreter processes will catch the second-stage activity, but not the bypass itself.
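For that second-stage activity, a rule along the following lines would surface a Python agent process spawning a shell. This is an illustrative sketch, not a production Falco Feeds rule; it assumes the default Falco rule set's spawned_process macro and standard process-tree fields:

```yaml
- rule: Python Agent Process Spawns Shell
  desc: >
    Detect a Python-based agent process (e.g., a PraisonAI workflow triggered
    via POST /chat) launching a shell, typical second-stage activity after a
    CVE-2026-44338 bypass. Illustrative sketch; tune process names per deployment.
  condition: >
    spawned_process and proc.pname in (python, python3)
    and proc.name in (sh, bash, dash)
  output: >
    Shell spawned by Python agent process (parent=%proc.pname
    command=%proc.cmdline container=%container.name)
  priority: WARNING
  tags: [ai, praisonai, cve-2026-44338]
```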

Recommendations

  • Upgrade to PraisonAI 4.6.34 or later.
  • Migrate off the legacy api_server.py entrypoint. The newer PraisonAI binds to 127.0.0.1 by default and supports --api-key.
  • Audit existing deployments. Any instance launched from the sample API deployment YAML inherits host: 0.0.0.0 with auth_enabled: false. The generated configuration does not warn the operator.
  • Bind to loopback or a private network if a token-less API is genuinely required for development; never expose :8080 on a multi-agent framework to the internet.
  • Audit your model-provider billing for May 11, 2026 and later.
  • Rotate any credentials referenced in agents.yaml.

Conclusion

CVE-2026-44338 itself is serious but bounded: authentication disabled by default in a development-grade API server is a known anti-pattern, and the impact ceiling is set by what the configured workflow is allowed to do.

However, the broader concern is the operational reality of advisory-to-exploitation latency in 2026. A tool labeled CVE-Detector/1.0 was probing internet-facing AI-agent infrastructure for this specific CVE within three hours and 44 minutes of the advisory becoming public, on a project of roughly 7,100 stars.

Adversary tooling has scaled to the entire AI and agent ecosystem — no matter the size, and not just the household names — and the operating assumption for any project that ships an unauthenticated default must be that the window between disclosure and active exploitation is measured in single-digit hours. Runtime security remains the most effective approach for defenders to detect and respond to rapid vulnerability exploitation in real time.
