Microsoft has raised concerns related to security risks from AI agents in its latest Cyber Pulse Report. The company has pointed out that when these tools have too much access, they can end up working against you, just like ‘double agents’ inside your own organisation.
Currently, AI agents are everywhere – you can see them popping up in offices all over the world, especially in India’s booming tech and startup scene.
Microsoft’s message is clear – businesses need to get serious about protecting their sensitive data from these new risks.
So, what’s a “double agent” in this context?
Well, Microsoft says plenty of AI agents today have access to all sorts of internal data and systems, but no real security guardrails. That’s a problem. Hackers can pull off prompt injection or manipulation attacks—basically, tricking the AI into doing things it’s not supposed to. If an AI agent has too many permissions, it’s a bit like a staff member with master keys and no supervision. All it takes is one clever attacker, and suddenly your AI is working for the wrong side.
This is not a fringe issue, either.
More than 80 per cent of Fortune 500 firms are using AI Agents
Microsoft’s report says over 80 per cent of Fortune 500 companies are already using AI agents, often built with easy-to-use low-code or no-code tools. Sure, this speeds up innovation, but moving too fast without thinking about security leaves big holes for attackers to slip through. For Indian companies jumping on the AI bandwagon, Microsoft’s advice is simple: don’t ignore cybersecurity in the rush to innovate.
The problem gets worse when you look at ‘Shadow AI’.
Shadow AI use is growing faster than expected
Microsoft’s survey of more than 1,700 data security professionals has reported that 29 per cent of employees are using AI agents for tasks without IT’s approval. When people go rogue with AI, you end up with more data leaks, compliance headaches, and a bigger target for cyberattacks.
Zero Trust and Governance Are Key
To tackle these risks, Microsoft recommends:
- Strong governance frameworks
- Improved monitoring and observability
- Adoption of Zero Trust security principles
Zero Trust follows the principle of “never trust, always verify”, which means you trust nothing and nobody—not users, not devices, and definitely not AI—without constant verification, no matter where they are.
Risk gets bigger with AI advancements
As AI is becoming an integral part of businesses, the risks keep growing faster than expected. Microsoft’s warning is not just a theory; it is a wake-up call, especially for Indian companies that have been racing to weave AI into everything they do.
If you want the benefits of AI without any risk, then you need to make security the priority – and it has to be part of the plan from the beginning of any new project.
