Welcome to No Dumb Questions, a series of Q&As between Stack Overflow's least technical writer, Phoebe Sajor, and members of our technical staff, where she asks the simple, basic tech questions that most people are afraid to ask. This first interview is with Ben Marconi, Stack Overflow's Director of Ecosystem Strategy, whose work on the Solutions Engineering team has been integral to implementing features like the MCP server in Stack Internal.
Phoebe Sajor: Hi Ben, thanks so much for joining me. I think the first question I definitely need to ask is: what is an MCP server?

Ben Marconi: Yeah, that's a good question. I think a lot of people in the market—even those who use these tools—are asking, “What is this new thing and is it going to stick around forever?”
MCP is an acronym that stands for Model Context Protocol. Think of a protocol as a standard. It came from Anthropic in November 2024, and that's worth calling out because it's not been around forever, right? This is a relatively new development, and the idea is that it lets LLMs connect securely to external data sources. So think of it as a standardized bridge that connects new, cutting-edge AI functionality to all the other tools that exist in the software world. And by standardizing with a protocol, we can more efficiently build connectors to and from these new AI systems.
PS: When you talk about it as a connector, is the reason we need it that we're starting to build agentic workflows that need to connect multiple things? I also don't really know what an API is, but does MCP, for instance, connect to some sort of API or some sort of database? Is it allowing these tools that usually live in the chatbot box to connect to those kinds of things?
BM: I know there's an alphabet soup of acronyms here, but for a long time, up until Anthropic released the MCP standard in November 2024, almost all the connections made from software products to other tools went through an API, which is another acronym, standing for Application Programming Interface. We're now connecting products to these agents and AI tools, but historically any real connection between two products or two platforms was made with an API. You can think of an API as the window between the restaurant dining room and the kitchen. You pass information through the window in the form of code, so the exchange can be structured and repeated over and over again once the two systems are configured to talk to each other.
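To make the window analogy concrete, here's a minimal sketch of an API call in Python. The endpoint, parameters, and response shape are all hypothetical; every real API documents its own.

```python
import requests

# Hypothetical endpoint and response shape, for illustration only.
# A structured request goes through the "window" and structured
# data comes back, repeatably.
response = requests.get(
    "https://api.example.com/v1/questions",
    params={"tag": "python", "sort": "votes"},
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
)
response.raise_for_status()

for question in response.json()["items"]:
    print(question["title"], question["score"])
```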
But it's a good question, because here's the problem: let's say we've got software product A, software product B, and software product C. These products are from different companies in this example, and they probably all have their own APIs. They each have an application programming interface that allows a person to interact with that system's data through a programming language. The problem is that each of those three APIs is probably configured to work a little bit differently.
And so let's say we want to connect product A, product B, and product C to a fourth tool that sits above them, and I want to plumb all of them into one location. Historically that's been totally possible, but the challenge is that it often takes a lot of custom configuration to understand product A's API, product B's API, and product C's API. So when you're building these large interconnected systems, it starts to get pretty difficult and complex. You have to understand how each of these programming interfaces works and how to write the code that structures those communications back and forth.
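As a made-up illustration of that pain, suppose all three products can report "who created this record and when," but each one shapes it differently. The fourth tool needs a hand-written adapter per product before it can treat them uniformly:

```python
# Hypothetical payloads: three products expose the "same" record
# in three different shapes.
product_a = {"userName": "ada", "createdAt": "2025-01-15T09:30:00Z"}
product_b = {"user": {"name": "ada"}, "created": 1736933400}
product_c = {"author": "ada", "date": "15/01/2025"}

# One custom adapter per API, just to normalize the field names.
def adapt_a(rec):
    return {"user": rec["userName"], "created": rec["createdAt"]}

def adapt_b(rec):
    return {"user": rec["user"]["name"], "created": rec["created"]}

def adapt_c(rec):
    return {"user": rec["author"], "created": rec["date"]}

# And this is before reconciling the three date formats above, or
# each product's auth, pagination, and error conventions.
for raw, adapt in [(product_a, adapt_a), (product_b, adapt_b), (product_c, adapt_c)]:
    print(adapt(raw))
```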
So Anthropic came out with a reasonably elegant solution and said, "Hey, everyone's got their own APIs, their own kinds of access tools. We're going to build a tool that sits right above that." We still use all those programming interfaces, but this layer sits above them and standardizes the exchange, so that when information flows up to the consolidated tool—in this case a large language model or an AI tool—the model or agent effectively knows exactly what's going to show up. It knows the structure of the data, it understands the fields, and so on. And that allows us to connect things way more quickly and easily than we've ever been able to in the past.
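To show what that standardized layer looks like in practice, here is a minimal sketch of an MCP server built with the official Python SDK (the `mcp` package). The server name, tool, and canned result are invented; the point is that the typed signature and docstring are how any MCP-compatible client knows exactly what's going to show up.

```python
from mcp.server.fastmcp import FastMCP  # pip install "mcp[cli]"

mcp = FastMCP("internal-docs")  # illustrative server name

@mcp.tool()
def search_docs(query: str, limit: int = 5) -> list[dict]:
    """Search internal documentation and return the top matches."""
    # A real server would call product A/B/C's APIs here; a fixed
    # result keeps the sketch runnable.
    results = [
        {"title": f"Example doc about {query}",
         "url": "https://example.com/doc",
         "score": 1.0},
    ]
    return results[:limit]

if __name__ == "__main__":
    mcp.run()  # speaks the Model Context Protocol over stdio by default
```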
PS: That actually made me understand what the context part of Model Context Protocol means. I'm learning so much. I love that. Okay, so you talked about these connectors, this standardized layer that lets APIs plug into these tools. Are there other things people have tried? I know there's Agent2Agent and some other acronyms, like we said. What's special about MCP? Why has the industry started to push more toward MCP? Is it just because Anthropic made it? Is it because it's open source? What's the push toward MCP that's making it so exciting?
BM: I think everything you said is partially true. Anthropic has obviously been a thought leader in the AI and agentic AI space with their Claude models and Claude Code. It makes sense that Anthropic wanted to develop a protocol, or standard, to allow better system connectivity, because it also lets companies and even end users connect their data to a model like Claude. So there's value in this for the LLM builders as well. But the big-picture answer is that we're evolving into a world where far more data is being passed from system to system. We've transitioned away from a purely API-based connection world, where you had to do more work and more custom configuration to connect tool A to tool B. Now we're connecting lots of tools, maybe not to each other directly, but up to the LLM or the agent layer.
You referred to another thing called A2A, the Agent2Agent protocol, which came out of Google. What you see now is that the number of systems that have to talk to each other is just ballooning. If you say, "Hey Ben, can you build a connector to make tool one talk to tool two?" then in theory there's really only one stream of conversation that has to happen through the code. But if I'm connecting ten tools to one agent, that becomes a lot more work. And what if there are ten agents, or 10,000 agents out there? So the world effectively needs a more efficient means of supplying context, the C in the MCP acronym that you called out, to the agent layer, and potentially even allowing those agents to share information with each other, so we can offload some of the custom configuration work that human knowledge workers would have had to do in the past. The whole idea is that it's a more scalable future for using these new tools.
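The back-of-the-envelope math makes the scaling argument plain: point-to-point wiring grows multiplicatively, while a shared protocol grows additively, because each side only has to implement MCP once.

```python
tools, agents = 10, 10

point_to_point = tools * agents  # every tool wired to every agent
with_protocol = tools + agents   # each side implements the protocol once

print(point_to_point)  # 100 custom integrations
print(with_protocol)   # 20 protocol implementations
```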
PS: That word, “context,” why is that so important to agents? Why is it important that these agents have that data or connection to APIs or are able to pull something from Stack Internal in the first place? Why can't the agents just live in their own box and not do any connecting?
BM: We have an interesting perspective here at Stack Overflow, because through partnerships with the frontier labs we've been a big part of optimizing the large language models, and the agents built on top of them, so they perform great. And even though that work has been super rewarding for the company and for those models, the reality is that agents don't have access to all of your private internal data, so realistically they can't get the work done that's required to help an employee be as productive as they can be. What an LLM can do out of the box has improved markedly over the last few years. But the context piece, especially secure access to enterprise context, is super important, because the bottom line is that the large language models and agents we've helped build just don't have access to my data: my company data, my Google Drive, my M365 data, my email, my notes, things like that. If we can build a secure connection and provide secure access to enterprise data, we can make our agents way more productive. They can do the things we already do as employees in our secure work environments and daily workflows. That's the key to being able to directly supply great, high-quality data to the agents people are working with through the Stack Internal MCP server.

PS: I know one thing people have been talking about, with training data and all of these agents connecting to all of these different systems, is security. Where MCP is at now, are there questions about security and privacy? Are security and privacy built into the protocol itself? What can we expect as MCP continues to grow? Are there going to be more questions about its security?
BM: That's one of the biggest questions a lot of big companies have right now about using MCP and letting connectivity across systems explode. We use the chain example: a chain is only as strong as its weakest link. From an enterprise security perspective, if you have all these systems talking to one another and there's a data breach, or there's PII (personally identifiable information) contained in one of these agents, and that's able to propagate across the system through communication structured around the Model Context Protocol or the A2A framework, that causes big, big problems for a company. So the answer is yes, there are big concerns over whether we can allow the adoption of all of these new agents connecting to all the software and all the data that lives below them. Can we let that happen?
And we've had to be very thoughtful about this at our company with Stack Internal's MCP server. The way that we've solved this—and other people are doing similar work—is that every individual user, like an engineer writing code, has to authenticate when they want to use our server. They'll probably access the server through their IDE (integrated development environment), where they're writing code, and effectively log in to their account with us. That then allows a secure connection using MCP to exist between the user's coding session and the underlying platform, our software product. We use what's called OAuth2, effectively the industry-standard authorization framework, which makes sure we maintain security and user attribution as these data packets pass back and forth. And the whole purpose of this is so they can access great context, great verified enterprise data, directly where they're working. They don't have to move all over the place switching tabs and windows; they stay productive and get the answers they need right where they're working.
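As a rough sketch of that login dance, here is the standard OAuth2 authorization-code flow in Python using the `requests-oauthlib` library. Every URL and the client ID are stand-ins, and the manual paste step is only for the sketch; a real IDE integration opens the browser and catches the redirect automatically.

```python
from requests_oauthlib import OAuth2Session  # pip install requests-oauthlib

# Hypothetical client registration and endpoints.
oauth = OAuth2Session(
    client_id="my-ide-client",
    redirect_uri="http://localhost:8765/callback",
)

# 1. Send the user to the provider's login page.
auth_url, state = oauth.authorization_url("https://auth.example.com/authorize")
print("Log in at:", auth_url)

# 2. After login, the provider redirects back with a one-time code,
#    which the client exchanges for an access token.
token = oauth.fetch_token(
    "https://auth.example.com/token",
    authorization_response=input("Paste the full redirect URL: "),
    client_secret="...",  # public clients would use PKCE instead
)

# 3. Every request now carries the token, preserving security and
#    per-user attribution.
resp = oauth.get("https://mcp.example.com/search", params={"q": "rate limiting"})
```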
PS: What is special about Stack Internal’s MCP server? I've heard the word “bidirectional” getting thrown around and knowledge ingestion…all this stuff that's happening with our latest update to the enterprise tool. Basically, I would like to know what it means.
BM: There are a lot of MCP servers out there in the world. I do want to be clear that it's very possible for companies, even customers of ours, to build their own MCP servers on top of the API we talked about earlier in this conversation. But as a company, we have produced our own MCP server that our customers can leverage. The reason we would say, “Hey Customer A, you should try our MCP server,” is that within Stack Internal's data repository you've got all these questions and all these answers, and there's a lot of community engagement with that data: users who have used the platform for years, who have upvoted and downvoted content, who have added comments to clarify answers or flag changes as packages evolve, things of that sort. So when our MCP server accesses the data—when we pull the context from Stack Internal into a tool like Cursor or GitHub Copilot—the search functionality built into the MCP server is optimized to consider all of the heuristics that make one piece of data great.
An example that's actually quite hard to figure out: let's say you've got content that has tons of engagement, lots of upvotes, but it's super old. Is that more or less reliable than something that's pretty new but has less engagement? How do you weight those signals against each other to perform a great search? How do you retrieve that context and keep it accurate? Because we built it and we're experts in dealing with our own data as a company, you're going to get great search performance when you retrieve.
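To illustrate the trade-off (with a toy heuristic, to be clear, not Stack's actual ranking), imagine scoring each post by its net votes, decayed by how stale it is:

```python
import math
from datetime import datetime, timezone

def score(upvotes: int, downvotes: int, last_active: datetime) -> float:
    """Toy relevance score: net engagement decayed by age."""
    engagement = upvotes - downvotes
    age_days = (datetime.now(timezone.utc) - last_active).days
    freshness = math.exp(-age_days / 365.0)  # roughly one e-fold per year
    return engagement * freshness

# A heavily upvoted but stale answer can lose to a newer, lightly
# upvoted one once the decay outweighs the vote gap.
print(score(80, 5, datetime(2020, 6, 1, tzinfo=timezone.utc)))
print(score(12, 0, datetime(2025, 11, 1, tzinfo=timezone.utc)))
```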
Our MCP server also does the exact opposite of what I just described. Some MCP servers out there do this, but as of February 2026 most of them don't. One of the things that we feel strongly about here at Stack is that it's really important to keep these knowledge bases evergreen. Retrieval, pulling context from the underlying data and supplying it to the LLM, is very common, but knowledge bases need updated content to stay healthy. Our MCP server writes back to the database, Stack Internal, to make sure that content is always being kept up to date. So if a user is engaging with a coding agent or working in the IDE with a tool like GitHub Copilot, for example, and they come to a great solution that they want to push back to the database so the knowledge is stored and can be reused, they can use the MCP server to post directly back to the underlying database, Stack Internal, where that information lives. They never have to leave the interface they're working in. This consolidates the entire user experience into one place: they get the best of both worlds, accessing and writing back, and they never have to context switch. It keeps them in flow more effectively.
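From the agent's side, "save this solution" is just another tool call. Here's a sketch using the MCP Python SDK's client; the server command and the `post_article` tool name are hypothetical stand-ins for whatever write-back tool a server actually exposes.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical local server command; a real setup would point at the
# Stack Internal MCP server and authenticate first.
server = StdioServerParameters(command="python", args=["stack_internal_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Push the solution back to the knowledge base without the
            # user ever leaving the IDE.
            await session.call_tool(
                "post_article",  # hypothetical tool name
                arguments={
                    "title": "Fixing the N+1 query in our ORM",
                    "body": "Use select_related() on the parent queryset...",
                },
            )

asyncio.run(main())
```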
PS: If I, for instance, as the Worst Coder in the World, wanted to connect our Stack Internal instance to some dumb, funny robot that I'm making on the side, how would I do it?
BM: It's not very difficult. The only real requirement is that you're connecting our MCP server to a tool that is MCP compatible, and realistically most AI platforms and agents today are MCP equipped, for reasons that are probably obvious after this conversation: it's a big value add for those tools to be able to grab data from different resources. On our side, we've configured what we would call a “one-click install” from a landing page, where you can connect to some of the most common tools developers are using. You can literally click one button on our MCP site.
Otherwise, the setup is super simple. It basically requires a few lines of configuration: we give the user a JSON packet, they install the MCP server, and they can access the data once they log in using that OAuth2 flow we discussed earlier.
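To give a feel for what that packet looks like, here's an invented example, written out from Python so you can see the shape. The `mcpServers` key is a convention several MCP-aware clients share, but the exact file path, schema, and of course the real server URL vary by tool and come from your own instance.

```python
import json
import pathlib

# Placeholder values only; the real URL ships with your instance.
config = {
    "mcpServers": {
        "stack-internal": {
            "url": "https://your-instance.example.com/mcp",
        }
    }
}

# Most MCP-aware clients read a small JSON file shaped like this;
# check your tool's docs for where it expects to find it.
pathlib.Path("mcp.json").write_text(json.dumps(config, indent=2))
```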
PS: If you were to make a cool agent for yourself using our MCP server, what would be one that you would create?
BM: That's a super good question. There's a lot of focus on automation right now, so I would be interested in connecting a few of my own internal data sources, maybe my Slack messages and my Google Drive documents, for example, and then posting them as an article within Stack Internal, so everything's consolidated in one place. And although it's not a custom build I've created using the MCP server, I do have it hooked up to Cursor, which is where I do a lot of my coding experimentation, UI mockups, and things like that, and it's super awesome. I actually use the write-back functionality as much as I use the read functionality, because I don't want to have to type updates into documentation and then share them. I don't want to use the word lazy, but…maybe we'll say that as a busy person, a lot of times that update piece doesn't happen. I think that's true of busy people across all organizations. If you've got thousands of employees who are all doing less documentation than they should because they just don't have the time—and frankly, it's not the most fun activity—then using the MCP server to have an agent write documentation or a README file and post it back to your Stack Internal repository is a huge game changer. I use it probably every day.
PS: Hey, in an agentic world, we're all lazy, busy people.
BM: That's right. We're more productive while being even lazier than before. Something like that.
PS: Exactly.