Model Context Protocol: Security Risks & Solutions
Introduction
Model Context Protocol, or MCP, is a new protocol that allows AI systems to connect to various applications and tools, enabling them to access specific data and perform tasks. This capability raises significant security concerns, particularly regarding how AI interacts with sensitive information and external systems.
AI is clever. However, it cannot do anything useful without the necessary information. Let’s say you’ve got ChatGPT or Anthropic’s Claude and you ask, “Hey, how did my company do over the last four quarters? What are the best performing products?” It doesn’t know a thing about that and therefore cannot help you. If you’re a public company, it might dig around the web and try to get an answer, but it will not be authoritative. It just doesn’t know.
This AI capability appears in line-of-business applications and, of course, in more and more custom-built solutions. The goal is to create truly useful applications, and that is where MCP comes in.
How AI Works
A fundamental consideration is how AI knows anything, and there are typically two ways. The first is what it learns in the schoolhouse, the training. That training happens once for a release of a model, and it is based on the billions or trillions of documents that the developers hoover up from every source, legal or not, to train these models.
The other way is something you control: the context. Each time you hit “Enter” in ChatGPT, or your application sends a request to a model, a few pieces of information are provided along with it.
First is the system prompt. These are instructions typically controlled by the application, telling the model how it should behave. Next is the conversational history, everything you have talked about so far in that thread with the AI. There are also tools that AI systems can call and get responses from, just like function calls. Finally, there is the user message, the part you put in.
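To make that concrete, here is a minimal sketch of what one conversational turn might look like on the wire, using an OpenAI-style chat-completions payload. The exact field names vary by provider, and the tool name here is hypothetical:

```python
# A sketch of the context assembled for one conversational turn, in an
# OpenAI-style chat-completions shape. Illustrative, not any vendor's
# exact schema; "get_product_sales" is a made-up tool.
request = {
    "model": "gpt-4o",  # the shared model everyone uses
    "messages": [
        # 1. System prompt: set by the application, not the user.
        {"role": "system", "content": "You are a financial analysis assistant."},
        # 2. Conversational history: everything said so far in this thread.
        {"role": "user", "content": "Summarize last quarter."},
        {"role": "assistant", "content": "Q3 revenue was flat overall..."},
        # 4. The new user message for this turn.
        {"role": "user", "content": "What are the best performing products?"},
    ],
    # 3. Tools the model may call, described like function signatures.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_product_sales",
                "description": "Return sales figures for a product and quarter.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "product": {"type": "string"},
                        "quarter": {"type": "string"},
                    },
                    "required": ["product", "quarter"],
                },
            },
        }
    ],
}
```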
All of this context information is used only for that one conversational turn and can be changed each time you go through. There is the ambient knowledge that comes from training, and there is the specific knowledge provided in context. If you want to make useful AI systems, it is all about what goes into the context of that request. That is where it happens. The models are shared by everybody, but the context is what makes your AI system actually useful.
What if someone dragged a financial document into ChatGPT and asked, “How have my products been doing over the last four quarters?” Here is what happens behind the scenes (a toy sketch follows these steps):
- The PDF is uploaded and converted to text.
- The system then chops the text into pieces.
- Based on the question, it selects a few of those pieces and places them in the context.
- Next, the LLM gets to work. It even writes a small program to analyze the data, runs that program, looks at the result, and finally drafts the answer.
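Here is a toy version of that chunk-and-select step in Python. Real systems use embeddings and a vector index; this naive keyword-overlap version exists only to show why “which chunks were chosen?” is a fair question. The input file name is illustrative:

```python
# Toy retrieval sketch: split a document into pieces, then pick the few
# pieces whose words overlap most with the question. Only those pieces
# ever reach the model's context.
def chunk(text: str, size: int = 1000) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def select_chunks(chunks: list[str], question: str, k: int = 3) -> list[str]:
    terms = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: -len(terms & set(c.lower().split())))
    return ranked[:k]  # everything else in the 500 pages is ignored

document_text = open("annual_report.txt").read()  # hypothetical input file
context = select_chunks(
    chunk(document_text),
    "How have my products been doing over the last four quarters?",
)
```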
That raises concerns:
- Why was the entire document copied into a public service?
- Which chunks were chosen out of 500 pages?
- Is the generated program any good?
- Will the answer be consistent the next time I ask?
Clearly, this is not the right way to handle the question. So how do you make a useful AI system that can answer questions requiring specific information about your company or systems? In 2025 and 2026, this is happening through tools the AI reaches via MCP.
The Model Context Protocol
When you give AI the right tools for the job, you suddenly get the answers you want. That’s happening very quickly through MCP, which connects applications to other applications. The primary driver, and where this is coming from, is the AI providers themselves. This started with Anthropic, which released the spec in November 2024. A couple of months later, OpenAI picked it up and said, “You know what, we like this too.” With those two big names behind MCP, it has exploded.
It is now the way these AI systems connect to all sorts of applications. If you go into ChatGPT, there is a tab where you can enable connectors, and one of those connectors is your Google Workspace. That is MCP in action: ChatGPT is the MCP client, Google runs the MCP server, and the two applications talk to each other to give the AI the data and tools it needs to get work done.
At the CIO and vendor level, when they start saying, “AI is going to change this world,” what they are really talking about, from a technical perspective, is AI with tools running in a loop to do a job. The way that AI is going to get to these tools is through MCP.
How does this work? What are the big problems with it? This is really important because there are literally thousands of MCP servers already. A website called glama.ai lists about 2,500. These servers let you hook your AI up to just about anything, including the local file system of the computer it’s running on, the webcam, business applications, and developer tools. If the question is how to connect AI to something, this year the answer is MCP.
MCP is about a lot more than just tools. It also gives the AI resources, like files, and can provide reusable prompts. A server can even call back into the client’s AI model, a feature the spec calls sampling, which works like a reverse query originating from the tool rather than the person. It can also ask your users questions directly (the spec calls this elicitation).
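For a feel of how little code this takes, here is a minimal sketch of an MCP server using the official Python SDK’s FastMCP helper (the `mcp` package). The tool, resource, and prompt names are hypothetical; the decorators are the SDK’s documented surface:

```python
# A minimal MCP server sketch using the Python SDK's FastMCP helper.
# "quarterly_sales", "reports://latest", and "analyze" are made up.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sales-demo")

@mcp.tool()
def quarterly_sales(product: str, quarter: str) -> str:
    """Return sales figures for a product and quarter."""
    return f"{product} sold 1,200 units in {quarter}"  # stand-in for a real lookup

@mcp.resource("reports://latest")
def latest_report() -> str:
    """Expose a file-like resource the client can read into context."""
    return "Q4 summary: revenue up 8%..."

@mcp.prompt()
def analyze(product: str) -> str:
    """A reusable prompt template the client can offer its users."""
    return f"Analyze recent performance for {product}."

if __name__ == "__main__":
    mcp.run()  # defaults to the local stdio transport
```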
At its core, it is a remote procedure call interface. Authorization is required so the AI knows what information it is allowed to access when answering a question. One of the big differences is the transport. With REST APIs you think of the web: HTTP, proxies, DNS, and the ability to intercept and inspect queries. MCP is transport agnostic. One transport it supports, primarily for developer purposes, is stdio, a local process running on the same machine.
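Underneath any transport, the messages are JSON-RPC 2.0. A tool invocation and its response look roughly like this, shown here as Python dicts (the tool name is hypothetical):

```python
# MCP tool invocation on the wire: plain JSON-RPC 2.0, whether the
# transport is stdio locally or HTTP remotely.
call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "quarterly_sales",
        "arguments": {"product": "Widget", "quarter": "Q4"},
    },
}
result = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "content": [{"type": "text", "text": "Widget sold 1,200 units in Q4"}]
    },
}
```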
Three Paths for Businesses
1. No MCP Plans
Maybe you feel this is not ready for your company yet and don’t want to implement it. First, be aware that end users who can install software can enable it themselves. They can install Claude Desktop and use local or remote MCP servers.
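To see how low that bar is: Claude Desktop reads a JSON config file, claude_desktop_config.json, listing servers to launch. The filesystem server below is a real reference server; the path is illustrative. Shown as Python for consistency with the other sketches:

```python
# What an end user pastes into claude_desktop_config.json to give
# Claude Desktop file system access via the reference filesystem
# server. The directory path is illustrative.
import json

config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem",
                     "/Users/me/Documents"],
        }
    }
}
print(json.dumps(config, indent=2))
```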
Other applications, including line-of-business apps, will likely embed MCP clients and servers so they can connect to AI locally. This capability may seep into the applications you already use or plan to use.
Once a process has file system access, it can reach many resources: OneDrive files synced locally, for example, or, on a Mac, iCal connected to your Exchange server. A user could ask Claude to find a meeting time with specific people, and it would query those calendars. That might be helpful, but it may not be what you intend.
Expect MCP clients to appear in line-of-business applications. Developers should note that VS Code added an MCP client a few months ago, and more are coming. Not all of them will be AI apps. AI demand is driving MCP adoption, but MCP is a useful integration point on its own, much like USB, which lets you connect all sorts of things.
If you’re not ready for this yet, you’ll need to be proactive about finding where it’s being used: see which line-of-business applications are starting to offer these capabilities, and which SaaS applications are doing the same. Make sure they don’t get activated and used behind your back. You need a bit of a campaign; it’s a standard anti-shadow-IT project.
2. Use MCP
The second path is for people who want to use MCP. Here the approach is the opposite: we want to support our users. AI has been useful, and if we give our AI systems better information and tools, they can answer questions and help people get their work done.
We must always remember that the AI, the LLM, is not a trusted party from a security perspective. Most people do not grasp this. As security practitioners, we must accept that the LLM itself is untrustworthy, even when a trustworthy user interacts with it.
When you hook an LLM to systems containing data you want to keep confidential, you need to think carefully about the risks. Even if your company only buys software, SaaS and line-of-business developers are rapidly adding MCP servers to their products and rediscovering well-known security vulnerabilities.
A unique burden falls on the person who decides which MCP systems or connectors to enable in ChatGPT or Claude. You must assess security exposure based on the combination of tools. Combining different MCP servers can give an LLM far more capability than you expect.
Everyone will want these systems to work with their SSO. Because MCP relies on newer OAuth features that few other systems exercise, such as dynamic client registration, I anticipate bugs. New code gives us new problems.
Here’s the key takeaway. People are always concerned about prompt injection. Prompt injection is the mechanism, but what you really need to watch for is the combination Simon Willison calls the lethal trifecta:
- Access to private data
- Ability to communicate externally
- Exposure to untrusted content
When a vendor says, “We’ve got this new feature; you should enable it in your ChatGPT instance,” sit down and ask yourself, “Does this tool bring together these three elements?” If it does, you have a heightened risk.
This is my key warning. Vendors will push MCP on you. It’s not just a flag you turn on and say, “Great, we have AI now.” Use caution, and always check whether a new connector completes the lethal trifecta in combination with the other MCP servers you have enabled. When it does, there is risk.
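One way to make that check systematic is a toy audit like the following, where the capability tags are hypothetical labels you assign to each enabled server:

```python
# Toy "lethal trifecta" audit: flag any enabled combination of
# connectors that together grant all three elements. The capability
# tags are labels you assign yourself, not anything MCP defines.
TRIFECTA = {"private_data", "external_comms", "untrusted_content"}

def trifecta_risk(enabled_servers: dict[str, set[str]]) -> bool:
    combined = set().union(*enabled_servers.values())
    return TRIFECTA <= combined  # all three present across the combination

servers = {
    "crm": {"private_data"},
    "email": {"external_comms", "untrusted_content"},
}
assert trifecta_risk(servers)  # individually fine; together, the trifecta
```

Note that neither server is dangerous on its own; the risk appears only in combination, which is exactly why this check belongs with whoever approves connectors.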
3. Build with MCP
The third path is the people who might want to build with it in the coming year. If you’re going to be building MCP systems, there’s a lot to be concerned about. Prompt injection is not a solved problem in 2025. It can be mitigated, thwarted, and monitored, but it is not solved, and bolt-on security will not fix it.
Every vendor that claims to have solved prompt injection really hasn’t; they should say, “We have helped.” That’s still valuable. A WAF in front of a web server is great; it doesn’t solve the underlying problems, but it helps manage them, and that’s useful. Mindful architecture, coupled with mitigations like these, can be really effective.
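As an illustration of “helped, not solved,” here is a naive monitor that flags injection-looking instructions in untrusted tool output before it reaches the model. It is trivially bypassable and is useful only as one layer among many; the patterns are illustrative:

```python
# Naive prompt-injection monitor for untrusted tool output. A
# determined attacker gets past this easily; it exists to show what a
# "we have helped" layer looks like, not a solution.
import re

SUSPICIOUS = re.compile(
    r"ignore (all )?previous instructions|you are now|send .* to http",
    re.IGNORECASE,
)

def screen_tool_output(text: str) -> str:
    if SUSPICIOUS.search(text):
        # Quarantine and log rather than silently passing it to the LLM.
        raise ValueError("possible prompt injection in tool output")
    return text
```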
Conclusion
MCP is how AI will be able to use tools, and there are thousands of them coming out. New things like this can give security professionals hives, because it’s almost guaranteed that there will be bugs.
If you have AI-related security questions, or just security questions in general, reach out to ivision to connect with our incredible engineers who can share some insight.