MCP and the Rise of AI Agents
Introduction
Watch any software company's annual conference, and you'd be surprised if the words "Artificial Intelligence" are not used at least 15 times. We are well into the AI era, with LLM applications such as ChatGPT seeing deep adoption across all sorts of use cases. In the context of a CMS, Large Language Models can act as good companions, helping write first drafts and summarize research. However, the true potential of AI lies in deep integrations with your publishing stack, with a reliable way to pass context back and forth. LLMs like ChatGPT are powerful, but they're not connected: you can't ask them to fetch CMS content, run analytics, or understand your brand tone unless you set it all up manually. That's where MCP and AI agents come in.
MCP - a quick refresher
MCP, or the Model Context Protocol, is an emerging standard that dictates how LLM clients "talk" to other applications. The official MCP documentation describes the practical usage well: "Think of MCP like a USB-C port for AI applications." An MCP client is the LLM application (like Claude or GPT-4), and an MCP server is the system exposing your internal tools (like Storyblok or Google Analytics) via a standard protocol.
Storyblok is a headless CMS, which means your content is independently stored and managed. We have Content Delivery and Management APIs available, which means you have first-party programmatic access to read and write content and metadata stored on Storyblok. Through an MCP server, an LLM client has access to your data in the Storyblok ecosystem, which enables the LLM to better assist your content management needs.
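To make that programmatic access concrete, here is a minimal sketch of the kind of call an MCP server tool could make against the Content Delivery API. The endpoint is Storyblok's public CDN stories endpoint; the helper name, its parameters, and the token value are my own placeholders, not Storyblok-supplied code:

```javascript
// Hypothetical helper: builds a Content Delivery API request URL.
// The base endpoint follows Storyblok's public CDN API; the token is a placeholder.
function buildStoryblokUrl(token, { startsWith = '', perPage = 25 } = {}) {
  const url = new URL('https://api.storyblok.com/v2/cdn/stories');
  url.searchParams.set('token', token);
  if (startsWith) url.searchParams.set('starts_with', startsWith);
  url.searchParams.set('per_page', String(perPage));
  return url.toString();
}

// An MCP tool handler could then expose the stories to the LLM:
// const res = await fetch(buildStoryblokUrl(process.env.STORYBLOK_TOKEN, { startsWith: 'blog/' }));
// const { stories } = await res.json();
```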
Thus, MCPs can be very beneficial for use cases involving content creation and management. However, continuously prompting an LLM to perform operations can be cumbersome, even with the additional benefits that MCPs bring. AI agents aim to solve this problem.
AI agents
AI Agents are autonomous software systems that can make decisions and “learn” from experience. At their core, AI Agents are a next-step evolution of automation combined with AI capabilities.
Unlike traditional chat-based UIs, AI Agents have a certain degree of autonomy and do not continuously rely on user prompting. While LLMs follow a task-based approach, AI Agents are empowered to follow a plan-based approach. Another advantage of AI Agents is that they can retain context and memory across multiple runs.
Let’s now consider how an AI agent using MCP could be helpful in the following scenario:
We are a travel publication website. We want to research content ideas and write first drafts based on what our readers would like to read. We want our decision-making to be data-driven, so our research should include publicly available data about trending destinations and content analytics for the previous year. Our content cycle is seasonal, so we want weather data to be included in the decision-making as well.
Through standardization by way of MCP, our AI agent only needs to speak the MCP client protocol to collect and collate information from the various data sources. For research on travel trends, a flight information MCP server can be used; for content analytics, a combination of the Storyblok MCP and an analytics MCP can be used; and so on. The AI agent acts as an orchestrator, with MCP servers supplying it with context and also performing actions to write data externally.
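In the Vercel AI SDK, each MCP client exposes its tools as a plain object keyed by tool name, so an orchestrating agent can merge several servers into a single tool set. A small sketch of that pattern (the `combineTools` helper and its collision warning are my own, not part of the SDK):

```javascript
// Hypothetical helper: merge tool sets from several MCP clients into the
// single `tools` object an agent call expects. Later servers win on name
// collisions, which we surface with a warning.
function combineTools(...toolSets) {
  const combined = {};
  for (const tools of toolSets) {
    for (const [name, tool] of Object.entries(tools)) {
      if (name in combined) console.warn(`Tool name collision: ${name}`);
      combined[name] = tool;
    }
  }
  return combined;
}

// Usage with clients created via experimental_createMCPClient:
// const tools = combineTools(await storyblok.tools(), await analytics.tools(), await flights.tools());
```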

AI Agent orchestrating contextual content creation
Why Headless CMS workflows are a sweet spot
Your AI agent workflows are only as powerful as the contextual data you can supply to them. Having all your data in a headless CMS like Storyblok allows you to feed all relevant data into the AI agent system. This means the AI agent generating and managing your content can already be aware of your brand guidelines and the voice and tone of your pre-existing content. Storyblok is enterprise-ready, which means you can integrate AI workflows confidently: Storyblok workflows support a detailed review process and audit trails for compliance.
Real-World Implementation in Storyblok
The AI agent ecosystem is rapidly expanding, with new tooling available to simplify the process of coding AI agents. The Vercel AI SDK supports multiple AI providers and MCP tooling. With a well-engineered prompt and access to a Storyblok MCP server, we can create an AI agent that uses sequential thinking to perform operations and analysis on our Storyblok content.
Here’s an example of how a content freshness checker AI Agent can be coded:
import { openai } from '@ai-sdk/openai';
import { streamText, experimental_createMCPClient } from 'ai';
import { Experimental_StdioMCPTransport as StdioMCPTransport } from 'ai/mcp-stdio';
import dotenv from 'dotenv';

dotenv.config();

// 1. Connect to the Storyblok stdio MCP server
const mcpClient = await experimental_createMCPClient({
  transport: new StdioMCPTransport({
    command: 'node',
    args: ['/Users/arpit/Documents/mcp-skeleton/build/index.js'],
  }),
});

// 2. Define tools exposed by the Storyblok server
const tools = await mcpClient.tools();

// 3. Define the agent call
const result = await streamText({
  model: openai('gpt-4o'),
  tools,
  prompt: `Audit Storyblok content for factual accuracy and freshness.`,
  maxSteps: 5, // Allow tool use + response generation
  onFinish({ text }) {
    console.log('\n\n🧠 Final Answer:\n' + text);
  },
  onError({ error }) {
    console.error('❌ Agent Error:', error);
  },
});

// 4. Optionally stream the result to stdout
for await (const part of result.fullStream) {
  if (part.type === 'text-delta') {
    process.stdout.write(part.textDelta);
  }
}

// 5. Close MCP client after stream
await mcpClient.close();
The above code uses an MCP server to fetch story data from a Storyblok space; the server already exposes tools for fetching stories. Learn how to code an MCP server for Storyblok in this article.
The two most important pieces of logic in the above code are the prompt itself and the maxSteps parameter. Notice how the prompt "Audit Storyblok content for factual accuracy and freshness" defines a goal instead of step-by-step instructions. The maxSteps argument is what makes the AI agent work: at each step, the LLM decides on the next action to take and executes it, repeating until it produces a final answer or the maxSteps limit is reached. Since we have also included the Storyblok MCP server, the LLM knows how to fetch the Storyblok content. Because the LLM determines each next step on its own, the agent is autonomous.
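The step loop can be pictured roughly like this. This is a simplified sketch with a stubbed model interface, not the SDK's actual implementation:

```javascript
// Simplified agent loop: at each step the model either requests a tool call
// or returns a final answer; we stop on a final answer or when maxSteps is
// exhausted. `model` here is a stand-in for a real LLM call.
async function runAgent({ model, tools, prompt, maxSteps }) {
  const messages = [{ role: 'user', content: prompt }];
  for (let step = 0; step < maxSteps; step++) {
    const decision = await model(messages);
    if (decision.type === 'final') return decision.text;
    // Execute the requested tool and feed the result back as context
    const result = await tools[decision.tool](decision.args);
    messages.push({ role: 'tool', content: JSON.stringify(result) });
  }
  return '(maxSteps reached without a final answer)';
}
```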
Caveats
An AI agent reads and writes data across many sources. Therefore, thorough risk assessment and threat modelling are required to ensure data stays within trusted boundaries. Also, if you're using LLMs to draft content, ensure that a human author reviews all content, as LLMs can hallucinate information.
Conclusion
MCP and AI agents are constantly evolving, but in the context of content management, pairing Storyblok's headless and enterprise capabilities with AI agents can speed up your workflows many times over. Go on and experiment!