Microsoft recently introduced NLWeb, its bold attempt to create an ‘HTML for the Agentic Web’, which promises to bring ChatGPT-style interactivity to websites and apps. But just weeks after its Build 2025 debut, security researchers uncovered a major vulnerability.
The flaw, a classic path traversal bug, could allow remote attackers to read sensitive files such as .env configuration files, which often hold system settings and API keys for services like OpenAI or Gemini. The issue was so basic that it could be exploited simply by requesting a malformed URL.
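The bug class itself is easy to picture. The sketch below is a hypothetical illustration of how this kind of flaw typically arises in a Python file-serving handler, not NLWeb's actual code: a user-supplied filename is joined to a static directory without validation, so a few ../ segments in the URL walk out of that directory and into files like .env.

```python
# Hypothetical illustration of the path traversal bug class; not NLWeb's actual code.
from pathlib import Path

STATIC_DIR = Path("/srv/app/static").resolve()

def serve_file_vulnerable(filename: str) -> bytes:
    # Vulnerable: the user-supplied filename is joined without validation,
    # so a request like "?file=../../.env" escapes STATIC_DIR and reads the
    # secrets file two directories up.
    return (STATIC_DIR / filename).read_bytes()

def serve_file_safe(filename: str) -> bytes:
    # Safer: resolve the final path and confirm it still sits inside STATIC_DIR
    # before serving it.
    target = (STATIC_DIR / filename).resolve()
    if not target.is_relative_to(STATIC_DIR):
        raise PermissionError("path traversal attempt blocked")
    return target.read_bytes()
```

Because .env files conventionally sit a directory or two above the web root, a handful of ../ segments in a crafted URL is often all it takes to reach them.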
Researchers sound alarm: AI systems may be at greater risk
Security engineers Aonan Guan and Lei Wang responsibly reported the flaw to Microsoft on May 28, prompting the company to release a fix on July 1. However, Microsoft has not assigned the vulnerability a CVE (Common Vulnerabilities and Exposures) ID, which is the global standard for tracking such flaws. The researchers believe this limits awareness and transparency.
Guan emphasised that the risk is far greater in the AI space. “These API keys are like the brain of an AI agent. If stolen, an attacker could create a malicious clone of the AI or rack up huge API bills,” he warned.
Microsoft claims the fix is live, but manual action is needed
In response, Microsoft stated: “This issue was responsibly reported and we have updated the open-source repository… Microsoft does not use the impacted code in any of our products.”
Despite the fix, NLWeb users still need to manually update and deploy the patched version. Without this step, any public-facing NLWeb implementation remains vulnerable.
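One rough way to confirm that a public deployment has actually picked up the patch is to send a traversal payload to its file-serving route and check that the server rejects it. The sketch below is a hypothetical probe: the base URL, route, and query parameter are placeholders rather than NLWeb's actual API, and should be adjusted to match the deployment being tested.

```python
# Hypothetical post-patch check; the route and parameter are placeholders,
# not NLWeb's actual API. Adapt them to your own deployment before running.
import urllib.request
import urllib.error

BASE_URL = "http://localhost:8000"       # placeholder: your NLWeb instance
PROBE = "/static?file=..%2F..%2F.env"    # encoded "../../.env" traversal payload

def looks_patched(base_url: str) -> bool:
    try:
        with urllib.request.urlopen(base_url + PROBE, timeout=5) as resp:
            body = resp.read()
    except urllib.error.HTTPError:
        # An error response suggests the traversal attempt was rejected.
        return True
    # A successful response whose body looks like KEY=VALUE lines means the
    # instance is still serving files from outside its static directory.
    return not any(b"=" in line for line in body.splitlines())

if __name__ == "__main__":
    print("patched" if looks_patched(BASE_URL) else "still vulnerable")
```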
What this means for the future of AI on the web
Microsoft is also moving ahead with support for the Model Context Protocol (MCP), aiming to bring native AI agent support to Windows. But researchers warn that rushing AI deployment without stringent security checks could lead to even bigger risks.
This incident is a wake-up call for tech giants and developers alike: AI security must not lag behind innovation, especially when AI agents act as digital brains with access to sensitive data and the ability to take actions.
