MCPJam 2025 Rewind - Our stories and learnings
2025 has been a great year for the MCP community and for us at MCPJam. I wanted to share our story, celebrate our accomplishments this year, and reflect on what we learned.

MCPJam started as a gateway, not an inspector
Marcelo and I started building MCPJam back in March. We had been co-workers at our previous job, and Streamable HTTP wasn't a thing yet. This was at the height of MCP's early adoption hype, and we were on board with the value MCP brought to LLM tool use.
We didn't start by building an inspector. We actually started out building an MCP gateway of sorts. Our pitch: "One MCP server for all MCPs - like Zapier for MCP". We believed this gateway would make it easier for people to install MCP servers and handle OAuth. We applied to YC in May and got rejected; the feedback was earnest. We pushed on for a couple more weeks, but not much progress was made.
Learning: building integrations didn't scale the way we wanted it to; app approvals take too long. It took weeks just to get our Google integration approved. We were not going to catch up to existing integration companies like Zapier, which had years of a head start on us.
MCPJam inspector
We scrapped the gateway after a couple more weeks. Up until that point in our MCP journey, we had been using the official MCP inspector, and we were frustrated. It was frequently broken, weeks behind the MCP spec, and the user experience was miserable.
This pain was the inspiration behind what the MCPJam inspector is today. We forked the MCP inspector and started brainstorming. This is what the early days of MCPJam looked like in June / July.

We added the following improvements:
- Connections to multiple servers with OAuth and support for STDIO, SSE, and Streamable HTTP
- A more elegant and intuitive user interface, with saved states.
- The first built-in LLM playground - chat with your MCP servers using any LLM.
We realized the LLM playground really resonated with developers. They wanted to observe how their MCP servers would behave in production environments without having to leave the inspector. The playground remains the most-used feature to this day.
The project gained a ton of early traction: we grew to a couple hundred GitHub stars, our Discord community started to grow, and most importantly, the community was contributing to the project! Our team learned everything about building MCP clients and has a lot to share.
Learning: We went through many tech stack iterations to build our MCP client. We tried Vite+Express+Mastra, Next.js+Mastra, and Vite+Hono+AI-SDK. If you're building a client, go with Vite+Hono+AI-SDK. It's lightweight, Vercel's AI-SDK has great docs, and everything works like a charm.
Learning: A strong community is the single most important success indicator of an open source project. When you have a community, users report bugs, submit feature suggestions, and contribute to the project. I'm very grateful for everyone in our community; you've all made the product immeasurably better.
Inspector V2, LLM playground, MCPClientManager, and OAuth Debugger
We left our jobs to pursue MCPJam full time and raised a pre-seed. The original inspector's design limited our ability to build extended features such as the LLM playground and evals, which in turn limited what we could do with our fork. The next step was to rebuild MCPJam from the ground up, no longer as a fork of the original inspector.

Inspector V2 is built with our preferred stack: Vite+Hono+AI-SDK. Since users launch MCPJam via npx, this stack lets us stay lightweight while still providing all of those extended features. Vite powers the front end, Hono handles the MCP client connections, and AI-SDK supports our LLM playground and evals.
At the heart of MCPJam is our custom MCPClientManager class. The client manager is a utility that manages many MCPClient instances (from the TypeScript SDK). Whereas an MCPClient can only handle a single MCP connection, the client manager lets us maintain multiple MCP client connections at once. We also published MCPClientManager as a separate package on npm, enabling developers to build real-world MCP clients.
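The core idea behind a client manager can be sketched in a few lines. This is an illustrative sketch only, not the real MCPClientManager API: the `ManagedClient` interface below is a hypothetical stand-in for the single-connection client class in the TypeScript SDK, and the method names are assumptions.

```typescript
// Hypothetical stand-in for a single-connection MCP client,
// like the Client class in the official TypeScript SDK.
interface ManagedClient {
  serverName: string;
  listTools(): Promise<string[]>;
  close(): Promise<void>;
}

// Sketch of a manager that holds one client per connected server.
class MCPClientManager {
  private clients = new Map<string, ManagedClient>();

  // Register a connected client under a unique server name.
  add(client: ManagedClient): void {
    if (this.clients.has(client.serverName)) {
      throw new Error(`already connected: ${client.serverName}`);
    }
    this.clients.set(client.serverName, client);
  }

  // Aggregate tools across every connected server, prefixing each
  // tool with its server name to avoid collisions.
  async listAllTools(): Promise<string[]> {
    const all: string[] = [];
    for (const [name, client] of this.clients) {
      const tools = await client.listTools();
      all.push(...tools.map((t) => `${name}/${t}`));
    }
    return all;
  }

  // Tear down every connection, e.g. when the inspector exits.
  async closeAll(): Promise<void> {
    await Promise.all([...this.clients.values()].map((c) => c.close()));
    this.clients.clear();
  }
}
```

The key design point is that aggregation (tool listing, routing a tool call to the right server) lives in the manager, so an LLM playground on top of it can treat many servers as one tool surface.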
Learning: The MCP client ecosystem is very underdeveloped. From a server dev's perspective, it's currently not worth building any features besides MCP tools. Clients are the single greatest bottleneck to MCP feature adoption.
We also shipped the LLM playground and OAuth debugger, two of our most heavily used features. The LLM playground lets server devs test their server against any LLM, simulating a production environment.
We found people had the most difficulty implementing and understanding MCP OAuth. Our OAuth debugger brought visual guidance to debugging MCP OAuth.
At the core of MCP's request and response system is JSON-RPC. We show every JSON-RPC message sent back and forth to help you debug at the lowest level.
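For a sense of what those messages look like, here is a JSON-RPC 2.0 request/response pair for an MCP `tools/call`, written as TypeScript objects. The `get_forecast` tool and its arguments are made up for illustration; the envelope fields (`jsonrpc`, `id`, `method`, `params`, `result`) follow the JSON-RPC 2.0 format MCP uses.

```typescript
// A tools/call request from the client...
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_forecast",         // hypothetical tool name
    arguments: { city: "Tokyo" }, // tool-specific arguments
  },
};

// ...and the server's response, correlated back to the request by id.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "Sunny, 24°C" }],
  },
};
```

Seeing these raw frames side by side is often the fastest way to spot a malformed `params` object or a response that never arrives.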

ChatGPT apps and MCP apps
When ChatGPT apps came out, we saw their potential to become the next iPhone app moment. ChatGPT's 800M weekly active users present huge distribution potential and incentive for app developers. UI felt like the natural next enhancement within a chatbot experience; people process information faster through UI.
Building a ChatGPT app is pretty painful. The only way to test one is to open an ngrok tunnel to your local MCP server and connect it remotely in ChatGPT. You also need a ChatGPT Plus subscription to access dev mode.
We brought the community its first local emulator for ChatGPT apps, right in MCPJam. We're proud to have helped hundreds of ChatGPT app developers iterate quickly and develop locally first, and we've helped teams ship their apps on the ChatGPT store.
"MCPJam's helped save a ton of time for UI related changes. No longer have to do a 2 minute deploy for every UI change, or wait for an LLM response. Also removed a lot of friction for testing in mobile." - Michael Chu, Asana
ChatGPT apps and MCP apps are currently supported in the apps builder and the LLM playground.
Learning: We realized many other devs resonated with the pain of local development. Look at any dev ecosystem: all development happens at the local level. I think we're on a great mission to fill this gap.
What's next for us
My favorite thing about our team is how aggressively we ship. We're always listening to what our community wants and delivering fast.
With the growing excitement around ChatGPT apps and MCP apps, we're making it our top priority to maintain our position as the best (currently only) local ChatGPT apps and MCP apps emulator in the community. This means consistently maintaining 1:1 parity with the production behavior of ChatGPT and Claude MCP Apps.
We'll also continue to build testing and debugging features that support app building, such as CSP policies, viewing tool metadata, viewing window.openai messages, and more. I'll be releasing a technical roadmap in our README shortly.
Lastly, I want to thank the MCPJam community: for all of the amazing discussions we have on Discord, for the countless issues reported on our repo that have made the product immeasurably better, and for the devs who have taken time out of their day to ship code in MCPJam. Happy new year!
Learning: listen to users and keep on building.