How attacks on MCP servers could stymie AI rollouts

27 August 2025

Mohammad Ismail, VP of EMEA, Cequence Security

Many organisations have already devoted significant time and money to projects intended to reap the rewards of artificial intelligence and agentic AI, only to find they have precious little to show for it. Dogged by issues such as authentication, authorisation and security, they have struggled to build an underlying infrastructure that allows applications to communicate successfully with AI agents, and so frequently end up with little more than a proof-of-concept solution. These deployments typically lack the controls needed to keep those applications secure, risking data leakage, misuse and non-compliance.

Connecting AI agents to internal and external applications is a problem the AI industry quickly realised it needed to solve, which is why Anthropic devised a standardised method for doing so in the form of the Model Context Protocol (MCP) back in November 2024. Much as Application Programming Interfaces (APIs) facilitate application-to-application processes, MCP eliminates the need for bespoke code to be developed for each and every data repository, tool, or service an AI agent wishes to connect to.

The process works by having the AI agent route queries, via the LLM and an MCP client, to an MCP server, which then connects to the data source and relays information back. MCP effectively acts as a common language that allows the AI to communicate with the API, and it has been a real breakthrough, allowing what would have taken months of backend integration to happen in minutes.
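The flow above can be sketched in miniature. MCP messages are JSON-RPC 2.0 envelopes, and the snippet below simulates a client wrapping a tool invocation and a server relaying data from a backing store. The tool name, the customer record, and both handler functions are illustrative inventions, not part of any real MCP SDK; this is a toy sketch of the message shape, not a working MCP implementation.

```python
import json

# Hypothetical in-memory "data source" the toy MCP server wraps.
CUSTOMER_DB = {"42": {"name": "Acme Ltd", "status": "active"}}

def mcp_server_handle(request: dict) -> dict:
    """Toy MCP server: dispatch a JSON-RPC 2.0 'tools/call' request
    to the underlying data source and relay the result."""
    assert request["jsonrpc"] == "2.0" and request["method"] == "tools/call"
    args = request["params"]["arguments"]
    record = CUSTOMER_DB.get(args["customer_id"], {})
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": {"content": [{"type": "text", "text": json.dumps(record)}]}}

def mcp_client_call(tool: str, arguments: dict) -> dict:
    """Toy MCP client: wrap a tool invocation in the JSON-RPC envelope
    and forward it to the server."""
    request = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
               "params": {"name": tool, "arguments": arguments}}
    return mcp_server_handle(request)

# The agent (acting on the LLM's tool choice) issues one call and gets
# structured data it can feed back into the model's context.
response = mcp_client_call("lookup_customer", {"customer_id": "42"})
print(response["result"]["content"][0]["text"])
```

The point of the standard envelope is that the agent side never needs bespoke integration code: any data source fronted by a server speaking this shape becomes reachable the same way.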

History repeating itself

But, as always, ease of access can bring a corresponding increase in risk, and MCP is no exception. As we saw with the rollout of APIs, which can be subverted and their functionality turned against them, MCP servers can reveal more than is desirable, leading some to describe them as the newest vector for business logic abuse. Indeed, vulnerabilities are already emerging that suggest MCP servers may become a prime target for attacks.

Back in June, Asana alerted users to a logic flaw that could lead to data being exposed to other MCP users and, separately, security researchers discovered more than 7,000 MCP servers exposed to the internet. Hundreds of these were found to be susceptible to the 'NeighborJack' vulnerability, which exposed users on the same local network, and 70 of the servers were found to have severe flaws ranging from unchecked input handling to excessive permissions.

This creates a similar situation to that faced by API-first businesses years ago, when the dilemma was how to roll out the technology without exposing applications and data on the backend. Now, as back then, organisations want the competitive advantages associated with agentic AI. Using the technology across their marketing, sales, customer experience, and ecommerce departments will enable them to drive automation, boost productivity, and deliver more intuitive user experiences. But they first need to connect those MCP servers securely.

Using an AI gateway to govern and secure

One way of de-risking this process is to implement an AI gateway. The technology can help avoid the problems of one-off prototypes and the associated time and costs of upskilling developers, generating code, carrying out quality assurance, or integrating systems. It transforms any application or API into an MCP-compatible endpoint and can even create an MCP server for this purpose, in addition to applying real-time enforcement policies to secure these exchanges. So, what should businesses look for when selecting a solution?

An AI gateway should integrate into the existing network infrastructure and provide identity-based access to systems and data, authenticating AI agents so that only authorised agents can reach applications and data. It should continuously monitor and log all AI interactions, observing which applications are being accessed and which API calls are being made by specific agents. Monitoring agent and user behaviour in this way ensures that any anomalous or suspicious activity which could indicate business logic abuse or another form of attack can be detected. Finally, it is also important that the gateway complies with regulations and guidelines such as the EU AI Act and Anthropic's AI Safety Levels (ASL), as well as futureproofing the organisation against new versions of the protocol.
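The first two requirements, identity-based access and continuous monitoring, can be illustrated with a minimal sketch. The agent registry, token values, rate threshold, and `gateway_check` function below are all hypothetical, invented for illustration; a real gateway product would enforce far richer policies, but the admit-log-flag loop is the same in principle.

```python
import time
from collections import defaultdict

# Hypothetical registry of agents the gateway has issued credentials to.
AUTHORISED_AGENTS = {"token-abc123": "sales-assistant"}
RATE_LIMIT = 5  # calls per agent per 60-second window (illustrative threshold)

call_log = []                   # audit trail of every AI interaction
call_times = defaultdict(list)  # per-agent timestamps for anomaly checks

def gateway_check(token: str, api_call: str) -> bool:
    """Admit an agent's API call only if it authenticates, and flag
    bursts of activity that could indicate business logic abuse."""
    agent = AUTHORISED_AGENTS.get(token)
    if agent is None:
        call_log.append(("DENIED", token, api_call))
        return False
    now = time.monotonic()
    window = [t for t in call_times[agent] if now - t < 60]
    call_times[agent] = window + [now]
    if len(window) >= RATE_LIMIT:
        call_log.append(("ANOMALY", agent, api_call))
        return False
    call_log.append(("ALLOWED", agent, api_call))
    return True

print(gateway_check("token-abc123", "GET /crm/customers"))  # authenticated agent
print(gateway_check("bad-token", "GET /crm/customers"))     # unknown token, denied
```

Because every decision lands in the audit trail, the same log that enforces policy also provides the compliance evidence regulations increasingly demand.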

Early adopters of agentic AI are already beginning to see the benefits of using an AI gateway. In one instance, where a company was attempting to create a complex, customer-facing agentic application experience, the process had been underway for many months but went from 'stalled' to 'operational' in under 48 hours with the help of an AI gateway. Now, the business reports that customers can ask natural language questions and get real-time answers, reducing costly support interactions, conserving resources and improving customer satisfaction in a secure and compliant set-up.

However, agentic AI is such a nascent technology that many teams are still feeling their way with it. The danger is that these prototype projects, coupled with the exploitation of MCP server vulnerabilities, could create a perfect storm that leaves a raft of enterprises dangerously exposed. That in turn could endanger uptake of the technology and lead to a spate of attacks. AI gateways can prevent that chain of events by providing a simple, scalable way to create or utilise MCP servers and securely connect agentic AI to internal and external applications.