A critical look at MCP (raz.sh)
577 points by ablekh 1 day ago | 315 comments
lolinder 1 day ago [-]
> the documentation is poorly written (all LLM vendors seem to have an internal competition in writing confusing documentation).

This is almost certainly because they're all using LLMs to write the documentation, which is still a very bad idea. The MCP spec [0] has LLM fingerprints all over it.

In fact, misusing LLMs to build a spec is much worse than misusing them to avoid writing good docs because when it comes to specifications and RFCs the process of writing the spec is half the point. You're not just trying to get a reasonable output document at the end (which they didn't get anyway—just try reading it!), you're trying to figure out all the ways your current thinking is flawed, inadequate, and incomplete. You're reading it critically and identifying edge cases and massaging the spec until it answers every question that the humans designing the spec and the community surrounding it have.

Which means in the end the biggest tell that the MCP spec is the product of LLMs isn't that it's somewhat incoherent or that it's composed entirely of bullet lists or that it has that uniquely bland style: it's that it shows every sign of having had very little human thought put into it relative to what we'd expect from a major specification.

[0] https://modelcontextprotocol.io/specification/2025-03-26

ComplexSystems 24 hours ago [-]
DeepSeek's documentation has a different problem, which is that there are spelling errors and weird grammatical constructions everywhere:

"DeepSeek API does NOT constrain user's rate limit. We will try out best to serve every request. However, please note that when our servers are under high traffic pressure, your requests may take some time to receive a response from the server. During this period, your HTTP request will remain connected, and you may continuously receive contents in the following formats..."

The documentation is still mostly easy to read, so it doesn't *really* matter, but I always thought this was bizarre. I mean, I get the language barrier reading manuals from Chinese products off of Amazon or whatever, but this is a company that does nothing but work with language all day long, and even at one point had the world's leading English-speaking language model. Shouldn't they be able to produce professional-looking documentation without spelling and grammatical errors?

comradesmith 22 hours ago [-]
What’s the problem? Can you point out a specific thing you would change from that quote?
lolinder 22 hours ago [-]
Are you a native English speaker?

"does NOT constrain user's rate limit" should be "does NOT rate limit incoming requests" or similar.

"We will try out best" should be "our best".

"when our servers are under high traffic pressure" is at least grammatical, but it's awkward. Normally you'd say "when our servers are dealing with high load" or something similar.

"your requests may take some time to receive a response from the server" is again grammatical but also awkward. "Our response times may be slower" would be more natural.

The last sentence is also awkward but the whole thing would need to be restructured, which is too much for an HN comment.

Basically: everything about this screams English as a second language. Which does mean that it's unlikely to have been LLM generated, because from what I've seen DeepSeek itself does a pretty good job with English!

taylorius 16 hours ago [-]
I'm a native English speaker, and I partially disagree with your claims of awkwardness.

"when our servers are under high traffic pressure" - this is a bit awkward I agree, but only the last three words.

If we rearrange it to "when our servers are under pressure from high traffic", I think it sounds good. It's using a metaphor, and I think that should be encouraged. It's interesting. And the phrase "high traffic" conveys some drama.

"your requests may take some time to receive a response from the server" - I think that's fine, to be honest. I like it.

I think you are conflating "awkwardness" with linguistic flair. Technical documentation English has become standardised to a large degree, which of course is useful, and efficient. But it is also a narrow usage of English, and breaking out of its straitjacket does not make language awkward.

ricardobeat 15 hours ago [-]
That’s a very generous interpretation. I don’t know Mandarin, but these are likely a transfer of grammar constructs from the primary language to English, in the same way the Dutch will say “make a picture” or “the house of my parents”, which can be justly classified as awkward rather than as linguistic flair.

If someone was editing my writing, it would feel a bit patronizing if they said grammar mistakes (many of which come from my mother tongue Portuguese) are “adding flair”, as they are not a stylistic choice.

taylorius 13 hours ago [-]
I'm not claiming it was intentional on their part. My point was solely one of language, so how the sentence came to be written that way is out of scope. And given the word swap I suggested, I don't think it is awkward at all (unlike your examples from Dutch, which definitely are).

As for it being patronising, why is telling a non-native speaker their sentence is interesting unacceptable, but telling them it's awkward is ok? (Assuming both are genuinely held opinions).

I'll reiterate my point that common English usage (non-awkward?) has narrowed enormously in the last 50 years. I think that this is a bad thing.

collingreen 7 hours ago [-]
Your point about how the social norms for English have changed over the last 50 years could be interesting, but what does it have to do with the parent's point that "these docs seem human-written and not spell-checked, which is very different from the other AI companies AND which is weird anyway for a megacompany with AI tools that write English well"?
ComplexSystems 7 hours ago [-]
Which part is it that has linguistic flair? Is it "The prices listed below are in unites of per 1M tokens", or "The expense = number of tokens × price"? Or maybe "you may continuously receive contents in the following formats"?
reliabilityguy 21 hours ago [-]
> "Our response times may be slower" would be more natural.

How can the time be slower? Response times may be longer, but not slower

lolinder 21 hours ago [-]
In colloquial English my construction is just fine, but sure, you'd be welcome to pick longer too.

Some examples of my usage in the wild ("response times may be slower" is present verbatim on each page):

https://github.com/aquasecurity/trivy/discussions/8133

https://www.ameristarstaffingny.com/the-negative-effects-of-...

https://oci.wi.gov/Pages/Regulation/Bulletin20200320Regulato...

https://playrix.helpshift.com/hc/en/27-questbound/faq/13930-...

brabel 17 hours ago [-]
This sentence is a good example of one where the native speaker's version is worse (in this case because it's just nonsense, as the parent commenter already pointed out).
lolinder 9 hours ago [-]
Sounds like you're the kind of person who will insist to Spanish speakers that a double negative is logically incoherent. Good luck with that approach to language!
reliabilityguy 10 hours ago [-]
> In colloquial English my construction is just fine,

Maybe. However, in my opinion, it’s better to write in such a way that leaves zero chance for misunderstanding.

lolinder 9 hours ago [-]
No real human being would misunderstand because, as you note, time can't go slower. This is just an excuse for pedantry.
fnord123 13 hours ago [-]
When you remark on changes, up is generally better and down is generally worse. So saying "response times will be higher" gives an immediate sentiment of improvement. But, obviously, a moment's thinking helps you re-orient and realize it's actually worse. This is why plots often have "lower is better" in the legend, to help readers understand.

I often use 'slower' and 'faster' as a native speaker to help reinforce the meaning of the direction.

reliabilityguy 10 hours ago [-]
> "response times will be higher" gives an immediate sentiment of improvement.

Higher as opposed to lower? It makes no sense to me.

lolinder 9 hours ago [-]
Exactly.

"Response times will be higher" sounds very confusing as a way of saying we'll take less time to respond, right? So why should "response times will be lower" mean we'll take more time if the opposite construct is confusing?

Far better to just use the comparative forms that we already have for time specifically to make it perfectly clear.

lolinder 9 hours ago [-]
Yes, this is a good explanation for the phenomenon! Thanks.
numpad0 5 hours ago [-]
If your problem is that the texts you quoted were not written by someone with English as their first language, I'll tell you this: English is not the framework of human civilization; civilization just sometimes uses English for data quantization and message passing.

A lot of English native speakers have assumptions such as:

- any academic topics are universally discussed in English/Latin and so every highly educated person shall speak good English,
- language is like a thin wrapper over a to-be-converted-to-YAML common intermediate language (Universal Grammar theory),
- anything should translate into fluid English with intent completely intact,
- but WWW is >90% English anyway,
- etc.

None of these are true, and it's just not realistic for a well-educated East Asian - a common theme of East Asian languages is that they're all custom implementations with minimal sharing with their neighbors, let alone with English - to "just" pick up natural English. I suppose you're looking for something like the following:

"At DeepSeek, we strive to serve every request to our customers with best of our effort, and we do not impose a rate limit for our APIs. However, do note that due to finite nature of our computing resources, API responses might become delayed in cases when our backend is experiencing high load. Under such circumstances, the HTTP sessions will be kept alive, and response will be served in following formats..."

... Isn't this a $1m/yr skill on its own? Have you seen a great Far East engineer write like this - I mean, how often do you come across a Far Eastern translator that can casually do this?

ComplexSystems 4 hours ago [-]
I don't really get the point of your post.

The goal is to pretend that DeepSeek doesn't have access to good English translators? Or good English translation capabilities?

Why don't we just not pretend this instead?

numpad0 30 minutes ago [-]
Then why don't we also stop pretending that foreign-language technical ghostwriting is a solved problem! You guys are asking for complete rewrites of all documentation by someone who is explicitly NOT a native Chinese speaker. At some point it's just an unreasonable ask.

A lot of HNers put blind trust in Universal Grammar theory and downplay languages as all-but-obsolete human output packing formats that differ by no more than their headers, and those assumptions are just wrong. Languages are at least CODECs. And if you go back to the original topic from there, I don't think it will sound so unreasonable that translating between different CODECs will induce losses and artifacts.

rrr_oh_man 21 hours ago [-]
I'd just shorten it:

  DeepSeek API does NOT have rate limits. 
  However, when our servers are under high traffic, 
  your requests may take some time. During this period, 
  you will continuously receive the following responses:
albert_e 22 hours ago [-]
Maybe the first sentence? I am guessing they meant "DeepSeek API does not enforce any rate limit on users." would be more appropriate.

_Constraining the rate 'limit'_ seems like incorrect usage - but it is an easy mistake to make in a first draft. Review should have caught it.

meindnoch 13 hours ago [-]
That's just standard Chinglish.
fakedang 21 hours ago [-]
I've seen documents that were applications by CCP-affiliated provincial government bodies, things like detailed studies for loan applications to international banks, etc. and trust me, the Deepseek documentation is miles ahead of that. These are official government documents from one government agency to some international agency.
ComplexSystems 19 hours ago [-]
This has fascinated me for years. I'll just re-link this comment of mine from a few years ago: https://news.ycombinator.com/item?id=37544019#37548278.

This was about Amazon products rather than government documentation, but the point is the same. I'll just quote the relevant part:

> The people who make these products have to spend millions and millions of dollars setting up factories, hiring people, putting things into production, etc. But somehow they don't have a budget for a bilingual college student intern to translate a bunch of copy to English better than "using this product will bring a great joy." Why?

> I will make a super strong claim: ChatGPT can now do nearly perfect mass translations of this stuff for free, in theory simultaneously increasing translation quality and reducing costs. Despite this, for whatever reason, I predict that the average translation quality on Amazon won't improve within the next few years.

My super strong claim has so far been correct. Just go on Amazon.com and click just about anything. For instance, here's a random blanket: https://www.amazon.com/dp/B07MR4FSPT

"OPTIMUM GIFT: All people can use this flannel fleece blanket in Coach、Office、Bed、Study, etc. Reversible softness offers all seasons warmth. INTIMATE SERVICE: If you have any questions, please contact us. it is our pleasure to serve you."

How does a human being in this situation somehow invent the phrase "OPTIMUM GIFT?" "Optimum" is a fairly advanced English word. Maybe you'd expect, I dunno, "GREAT GIFT" or "BEST GIFT"? And "INTIMATE SERVICE?"

And once again, we now have magic English-speaking computers that can do this all for us - for free - and China has unanimously decided "nah, screw that. We'd rather go with INTIMATE SERVICE."

gyomu 19 hours ago [-]
I live in Japan, and when you read English texts here (it doesn’t really matter if it’s a restaurant menu, a pamphlet at a touristic area, a flyer for local government services…) the same English word will often be written differently within the same document (eg for a recent one I saw: “curbside” was spelled “crubside” and “carbside”).

I always wonder how that happens, because the documents themselves often smell strongly of machine translation - but if they’re machine translated, how would those mistakes get in? My best guess is that there’s a human manually typing out a machine translation output, which kind of boggles the mind.

I think us computer nerds who are used to using computers to do work efficiently have a hard time imagining all the weird ways in which non-computer nerds actually use computers.

numpad0 7 hours ago [-]
Sometimes they are machine translated by someone who doesn't understand the Ctrl+C shortcut if the text is longer than 10 words, but equally often they're just hand-kneaded. Japanese English education is effectively machine translation with human brains as the machines - we're not actually taught English at all[0], just memorized technical rulesets that yield predictable garbage. A lot of weird "Engrish" text is likely the result of that.

0: That's supposed to be drastically changing; we'll see if it does. English skills are still a resume stuffer in Japan.

delian66 18 hours ago [-]
It may have been A/B tested, and people do prefer the "INTIMATE SERVICE" version more...
DonHopkins 14 hours ago [-]
Because A/B testing always has a happy ending.
collingreen 7 hours ago [-]
This is a top-tier joke about "INTIMATE SERVICE" and I wanted you to know it was appreciated.
yard2010 12 hours ago [-]
One of my guilty pleasures is to read random Chinese product pages and try not to laugh.
fakedang 18 hours ago [-]
I mean, it's not so easy getting a perfectly knowledgeable English speaker in China. Heck, if you see some of the interviews of the insanely viral Tony from LC Signs, his English outside of his skits is actually very Chinese-flavored. One of my friend's exes who was Chinese and studied at Oxbridge had a very similar strong Chinese twang, and often made grammatical mistakes while writing, even though the British would easily call her fluent.
lolinder 18 hours ago [-]
DeepSeek is a perfectly knowledgeable English speaker in China.
numpad0 5 hours ago [-]

  [me]
  > Translate to palatable startup-style English:
  >> DeepSeek API 不限制用户的访问速率。我们会尽力满足每个请求。但是,请注意,当我们的服务器流量压力较大时,您的请求可能需要一些时间才能收到服务器的响应。在此期间,您的 HTTP 请求将保持连接状态,您可能会持续收到以下格式的内容……

  [LLM]
  > Here's a more palatable, startup-style translation of your message:
  > DeepSeek API has no hard rate limits. We strive to process every request as quickly as possible. However, during peak traffic, responses may take slightly longer. Rest assured, your HTTP connection will remain active, and you may continue to receive real-time updates in the following format…
I asked it to turn it "corpospeak like":

  > At DeepSeek, we prioritize accessibility and scalability—which is why we enforce no strict rate limits on API usage. Our systems are designed to handle all requests with high availability, though during peak operational loads, response times may experience nominal delays. Rest assured, your connection will remain active, and responses will continue streaming in real time with the following structure:
... This is Google Translate from GP -> DeepSeek Web. I don't think DeepSeek is a perfectly knowledgeable English speaker in China. "However, during peak traffic," is basically a word substitution on "但是,当服务器流量压力大时", if my Han Script reading is right. Parts of the corpo version like "response times may experience nominal delays." still shows Chinese accent, assuming that's the part you think must be thoroughly washed off.

What you're asking for needs an English-first bilingual person who can be trusted and has a tech background. That's quite a tall order.

k__ 13 hours ago [-]
That text would be cut to at least half its length by an editor.
827a 5 hours ago [-]
Similarly strange and incorrect grammatical constructions are found in the English translations for Game Science’s hit game Black Myth: Wukong. My expectations for, say, the construction manual for a bookshelf are pretty different from those for a game or an AI model & service costing tens of millions of dollars in development (or more).

Heck, they could literally pay any native English speaker to take their English-ish translations and regionalize them; you don’t even need to know Chinese to fix those paragraphs. Why is this such a common problem with the English that China exports? Is it cultural? Are they so disconnected from the West that they don’t realize?

A great counter-example is NetEase’s Marvel Rivals; their English translations are fantastic, and even the dev interviews with their Chinese development team are fantastically regionalized. They make a real effort to appeal to English audiences.

ljm 24 hours ago [-]
Sometimes I wonder if I have ADHD or if it's induced by the content, because I can spend hours soaking up interesting literature and putting my weird thoughts down onto paper but I can barely make it a few words through LLM-driven drivel.

It's crazy seeing bots posting AITA rage bait on Reddit that always follows the same pattern: some inter-personal conflict that escalates to a wider group: "I told my husband I wasn't into face-sitting and now all my colleagues are saying I should sit on his face to keep the peace."

That is one thing but using the same LLM to drive your tech specs, knowing it can say a whole lot of shit the 'author' isn't aware of, because they're illiterate and that is fucking normal... is worrying.

stuaxo 15 hours ago [-]
Yeah it's unreadable for me.

There's been a trend of posting LLM slop about tech subjects, and it angers me - I don't know why someone would want to waste people's time like that.

Even worse - I've come across an AI slop site that masquerades as dev information, with just plain wrong information.

DonHopkins 14 hours ago [-]
I let the domain "micropolisonline.com" expire, which I was using for the old OpenLaszlo/Flash Python/SWIG/C++ client/server based version of open source SimCity, and somebody took it over and replaced it with AI-generated claptrap, stealing a lot of my own and others' images without any credit. It even promises the source code, but doesn't actually link to it; it just has promises and placeholders.

It totally misrepresents what Micropolis is, which was based on the original SimCity classic, and confuses it with all the subsequent versions of SimCity and other made-up stuff. And it never mentions the GPL-3 license, EA's license and restrictions on the use of their SimCity trademark, or Micropolis's license to use their trademark. I have no idea what the point of it is.

https://micropolisonline.com/

https://micropolisonline.com/source-code/

>How to Access the Source Code: For those eager to explore the Micropolis Online Source Code, it is available on our dedicated GitHub repository. Visit [Link] to access the repository, where you can browse the code, contribute to ongoing projects, or initiate your own.

The source code is actually not at [Link] but at:

https://github.com/SimHacker/MicropolisCore

Not even so much as a link to my demo!

https://www.youtube.com/watch?v=8snnqQSI0GE

They could be in some legal jeopardy since they didn't mention or link to the Micropolis GPL License or the Micropolis Public Name License, which they may be violating.

https://github.com/SimHacker/MicropolisCore/blob/main/Microp...

https://github.com/SimHacker/MicropolisCore/blob/main/Microp...

They have a "Meet the Team" page that mentions nobody, just hand-waves about "we" and the community. They couldn't even bother to generate generic-looking fake profiles of non-existent people. Suffice it to say I never heard back from anyone after using the "Contact Us" page.

They even have a cute little Terms and Conditions page with their very own license, which doesn't allow anyone to do to them what they did to me, and is not particularly GPL-v3 compatible:

https://micropolisonline.com/terms-conditions/

>License to Use Micropolis Online

>Unless otherwise stated, Micropolis Online and/or its licensors own the intellectual property rights for all material on Micropolis Online. All intellectual property rights are reserved. You may view and/or print pages from micropolisonline.com for your own personal use subject to restrictions set in these terms and conditions.

>You must not:

>Republish material from micropolisonline.com Sell, rent, or sub-license material from micropolisonline.com Reproduce, duplicate, or copy material from micropolisonline.com

They also claim all rights to all user created content:

>By displaying Your Content, you grant Micropolis Online a non-exclusive, worldwide irrevocable, sub-licensable license to use, reproduce, adapt, publish, translate, and distribute it in any and all media.

Kind of ironic for an LLM to go around stealing people's content, then telling them that not only can't anyone copy it back, but it owns the rights to everything anyone else may contribute in the future.

glimps 23 hours ago [-]
I get the distinct feeling the spec was created by an LLM too. As with the docs, all the evidence hints at it.

Makes for a great IPO story to tell investors that most of your product was already created by averaging out the most likely outcome.

clbrmbr 1 day ago [-]
Certainly a shame if true; there are some really sharp folks at Anthropic, and this is an important building block in the emerging ecosystem.
jes5199 21 hours ago [-]
someone is going to write an MCP adaptor that lets Claude use OpenAPI and then we can forget that MCP was a thing
cruffle_duffle 19 hours ago [-]
How would that even work?
DonHopkins 14 hours ago [-]
South Park explored that question:

https://www.youtube.com/watch?v=sbCj0i8WQA0

jes5199 7 hours ago [-]
new MCP tool: make-curl-request, headers, payload
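A sketch of what such a generic tool might look like - the tool name, schema, and helper below are all hypothetical, not from any real MCP server:

```python
import urllib.request

# jes5199's hypothetical "make-curl-request" tool: one generic MCP tool
# whose arguments the model fills in from an OpenAPI description in its
# context, instead of a bespoke MCP server per service.

MAKE_CURL_REQUEST = {
    "name": "make_curl_request",
    "description": "Perform an arbitrary HTTP request and return the response.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "method": {"type": "string", "enum": ["GET", "POST", "PUT", "DELETE"]},
            "url": {"type": "string"},
            "headers": {"type": "object"},
            "payload": {"type": "string"},
        },
        "required": ["method", "url"],
    },
}

def make_curl_request(method: str, url: str, headers=None, payload=None) -> dict:
    """What the server would run when the model calls the tool."""
    req = urllib.request.Request(
        url,
        data=payload.encode() if payload else None,
        headers=headers or {},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:  # no auth/retries; sketch only
        return {"status": resp.status, "body": resp.read().decode()}
```

Whether a model can reliably fill in raw HTTP requests from an OpenAPI spec is exactly the open question in this subthread.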
otabdeveloper4 20 hours ago [-]
This. Endure a couple months and this madness ends.
teaearlgraycold 24 hours ago [-]
In my experience AI startups are AI maximalists. They use AI for everything they can. AI meeting summarizations, AI search (Perplexity), AI to write code and contracts, AI to perform SEO, AI to recruit candidates, etc. So I 100% believe they would use AI to write specs.
runlaszlorun 49 minutes ago [-]
Seems like many are dreading our near future. Not I, I can't wait to see how this all plays out...
whatever1 23 hours ago [-]
So many bullet points in the documentation!
meander_water 23 hours ago [-]
I can't say whether the original spec was written with AI assistance, but having a cursory look through the commit history [0] it doesn't look like they're just blatantly auto-generating the docs. The git history indicates that they do think about the spec and manually update the docs as the spec changes.

[0] https://github.com/modelcontextprotocol/modelcontextprotocol...

never_inline 14 hours ago [-]
I don't write perfect English. Far from it. But I'd prefer broken English any day over default LLM verbiage. It seems so unnatural and factitious. I always have this in my prompts: "Be succinct and use simple English sentences".
benatkin 1 day ago [-]
The DeepSeek documentation seems to be better. It looks to be quickly thrown together but not bad. I’m not sure what that says about LLMs writing documentation.
jerf 24 hours ago [-]
It had not occurred to me that the AI coding vendors are actively motivated to produce code that is not documented. They want code that is comprehensible to AIs but actively not comprehensible to humans. Then you need their AIs to manipulate it.

AI code as the biggest "lock you in the box" in programming history. That takes rather a lot of the luster out of it....

They'd better be right that they can get to the point that they can fully replace programmers in about two years, otherwise following this siren song will, well, demonstrate why I chose "siren song" as my metaphor. If AI code produces big piles of code that are simply incomprehensible to humans, but then the AIs can't handle it either, they'll crash out their own market by the rather disgusting mechanism of killing all their customers, precisely because the customers consumed their service.

never_inline 14 hours ago [-]
To be honest I don't think they have any plans either.
walterbell 22 hours ago [-]
Self Alignment™
hirsin 1 day ago [-]
In the same way that crypto folks speedran "why we have finance regulations and standards", LLM folks are now speedrunning "how to build software paradigms".

The concept they're trying to accomplish (expose possibly remote functions to a caller in an interrogable manner) has plenty of existing examples in DLLs, gRPC, SOAP, IDL, DCOM, etc., but they don't seem to have learned from any of them, let alone be aware that they exist.

Give it more than a couple months though and I think we'll see it mature some more. We just got their auth patterns to use existing rails and concepts, just have to eat the rest of the camel.

ethbr1 1 day ago [-]
> Give it more than a couple months though and I think we'll see it mature some more.

Or like the early Python ecosystem, mistakes will become ossified at the bottom layers of the stack, as people rapidly build higher level tools that depend on them.

Except unlike early Python, the AI ecosystem community has no excuse, BECAUSE THERE ARE ALREADY HISTORICAL EXAMPLES OF THE EXACT MISTAKES THEY'RE MAKING.

volemo 1 day ago [-]
Could you throw on a couple of examples of calcified early mistakes of Python? GIL is/was one, I presume?
achierius 1 day ago [-]
CPython in particular exposes so much detail about its internal implementation that other implementations essentially have to choose between compatibility and performance. Contrast this with, say, JavaScript, which is implemented according to a language standard and which, despite the many issues with the language, is still implemented by three distinct groups, all reasonably performant, yet all by and large compatible.
Timwi 21 hours ago [-]
Static functions (len, map/filter,...) that should have been methods on objects.
aerhardt 5 hours ago [-]
What’s the root cause?

It certainly makes functional semantics in Python suck. Comprehensions don’t make up for it.

azeirah 14 hours ago [-]
This one frustrates me so, so much.
Doxin 1 day ago [-]
Possibly packaging too? though lately that has improved to the point where I'd not really consider it ossified at all.
fullstackchris 1 day ago [-]
> early mistakes of Python?

Python.

worldsayshi 1 day ago [-]
I guess there's an incentive to quickly get a first version out the door so people will start building around your products rather than your competitors.

And now you will outsource part of the thinking process. Everyone will show you examples when it doesn't work.

FridgeSeal 21 hours ago [-]
Hey there, expecting basic literacy or comprehension out of a sub-industry seemingly dedicated to minimising human understanding and involvement is a bridge too far.

Clearly if these things are problems, AI will simply solve them, duhhh.

/s

brabel 16 hours ago [-]
You joke, but with the right prompt, I am almost certain that an LLM would've written a better spec than MCP. Like others said, there are many protocols that can be used as inspiration for what MCP tries to achieve, so LLMs should "know" how it should be done... which is definitely NOT by using SSE and a freaking separate "write" endpoint.
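For context on the complaint: in the spec's HTTP+SSE transport of that era, the server streams JSON-RPC messages over a long-lived SSE GET, and the client sends its own messages to a separate POST endpoint announced in the first SSE event. A minimal sketch, assuming that wire shape (the session URL is invented):

```python
def parse_sse_event(raw: str) -> dict:
    """Parse a single SSE event block into its field/value pairs."""
    fields = {}
    for line in raw.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

# First thing the server sends on the SSE GET: where to POST your messages.
first_event = "event: endpoint\ndata: /messages?session=abc123\n\n"
evt = parse_sse_event(first_event)
# evt == {"event": "endpoint", "data": "/messages?session=abc123"}
```

It's this split - reads on one long-lived connection, writes on another endpoint - that the comment is objecting to.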
ethbr1 9 hours ago [-]
> with the right prompt

That's LLM in a nutshell though. A naive prompt + taking the first output = high probability of garbage. A thoughtful prompt informed by subject matter expertise + evaluating, considering, and iterating on output = better than human-alone.

But the pre-knowledge and creative curation are key components of reliable utility.

TheOtherHobbes 16 hours ago [-]
It's a classic Worse is Better situation.

Most users don't care about the implementation. They care about the way that MCP makes it easier to Do Cool Stuff by gluing little boxes of code together with minimal effort.

So this will run ahead because it catches developer imagination and lowers cost of entry.

The implementation could certainly be improved. I'm not convinced websockets are a better option because they're notorious for firewall issues, which can be showstoppers for this kind of work.

If the docs are improved there's no reason a custom implementation in Go or Arm assembler or whatever else takes your fancy shouldn't be possible.

Don't forget you can ask an LLM to do this for you. God only knows what you'll get with the current state of the art, but we are getting to the point where this kind of information can be explored interactively with questions and AI codegen, instead of being kept in a fixed document that has to be updated manually (and usually isn't anyway) and hand coded.

wunderwuzzi23 1 day ago [-]
Your comment reminds me that when I first wrote about MCP it reminded me of COM/DCOM and how this was a bit of a nightmare, and we ended up with the infamous "DLL Hell"...

Let's see how MCP will go.

https://embracethered.com/blog/posts/2025/model-context-prot...

baxtr 1 day ago [-]
To this date I have not found a good explanation of what MCP is.

What is it in old dev language?

mondrian 1 day ago [-]
It's a read/write protocol for making external data/services available to a LLM. You can write a tool/endpoint to the MCP protocol and plug it into Claude Desktop, for example. Claude Desktop has MCP support built-in and automatically queries your MCP endpoint to discover its functionality, and makes those functions available to Claude by including their descriptions in the prompt. Claude can then instruct Claude Desktop to call those functions as it sees fit. Claude Desktop will call the functions and then include the results in the prompt, allowing Claude to generate with relevant data in context.

Since Claude Desktop has MCP support built-in, you can just plug off the shelf MCP endpoints into it. Like you could plug your Gmail account, and your Discord, and your Reddit into Claude Desktop provided that MCP integrations exist for those services. So you can tell Claude "look up my recent activity on reddit and send a summary email to my friend Bob about it" or whatever, and Claude will accomplish that task using the available MCPs. There's like a proliferation of MCP tools and marketplaces being built.
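The discovery-then-invoke handshake described above boils down to a couple of JSON-RPC exchanges. A sketch using MCP's `tools/list` and `tools/call` method names (the weather tool itself is invented for illustration):

```python
# Client asks the server what tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A server might answer with a catalog of tool descriptions; the LLM
# sees the "description" fields in its prompt. "get_weather" is made up.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Return current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# When the model decides to use the tool, the client sends tools/call.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}
```

The "plug anything in" property comes from the fact that the client never needs to know the tool names ahead of time; it only needs to relay the descriptions into the prompt.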

fendy3002 1 days ago [-]
If you know JSON-RPC: it's a JSON-RPC wrapper exposed for AI use and discovery.

If you know REST / http request:

it's a single-endpoint API, partitioned/routed by a single "type" or "method" parameter, with its own specification, for AI.

krackers 21 hours ago [-]
Wasn't the point of REST supposed to be runtime discoverability though? Of course REST in practice just seems to be json-rpc without the easy discoverability which seems to have been bolted on with Swagger or whatnot. But what does MCP do that (properly implemented) REST can't?
brabel 16 hours ago [-]
> Of course REST in practice just seems to be json-rpc

That's so wrong. REST in practice is more like HTTP with JSON payloads. If you find anything similar to json-rpc calling itself REST just please ask them politely to stop doing that.

anon7000 18 hours ago [-]
Half the point of MCP is just making it easy for an LLM to use some language in a standard way to talk to some other tool. I mean MCP is partly a standard schema for tools to interact with and discover each other, and part of it is just allowing non-webserver-based communication (like stdio piping, which is especially useful since the initial use case is running local scripts).
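A sketch of what that stdio transport amounts to: newline-delimited JSON-RPC over the child process's pipes. This toy loop just echoes the method name back; a real server would dispatch on it:

```python
import json
import sys

def serve(inp=sys.stdin, out=sys.stdout):
    """Read one JSON-RPC request per line, write one response per line."""
    for line in inp:
        line = line.strip()
        if not line:
            continue
        request = json.loads(line)
        # A real MCP server would dispatch on request["method"];
        # here we just echo the method name back as a result.
        response = {
            "jsonrpc": "2.0",
            "id": request.get("id"),
            "result": {"echo": request.get("method")},
        }
        out.write(json.dumps(response) + "\n")
        out.flush()
```

The client is whatever spawned the process; it writes requests to the child's stdin and reads responses from its stdout, no sockets or firewalls involved.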
kaoD 1 days ago [-]
In a nutshell: RPC with builtin discoverability for LLMs.
jimmySixDOF 1 days ago [-]
old dev language is deterministic, llm in the loop now so the language is stochastic.
jgalt212 1 days ago [-]
it is amazing we used to prize determinism, but now it's like determinism is slowing me down. I mean, how do you even write test cases for LLM agents? Do you have another LLM judge the results as close enough, or not close enough?
jdlshore 24 hours ago [-]
Yes, and you have to do it in a loop on every request. Not joking. It’s called “LLM as judge.”
collingreen 7 hours ago [-]
What an amazing business to convince people to use. Making people pay to use LLMs to supervise the LLMs they pay for in order to get decent results is diabolically genius.

At the risk of offending some folks it feels like the genius of the Mormon church making its "customers" pay the church AND work for it for free AND market it for free in person AND shame anyone who wants to leave. Why have cost centers if you don't have to!

It's a business model I wasn't smart or audacious enough to even come up with.

jgalt212 7 hours ago [-]
It's a bit like paying more for AWS logging services than for the AWS services that provide the services your customers actually consume.
_raz 1 days ago [-]
A RPC standard that plays nicely with LLMs?
victorbjorklund 1 days ago [-]
self documenting API
matchagaucho 1 days ago [-]
Also missing in these strict, declarative protocols is a reliance on latent space, and the semantic strengths of LLMs.

Is it sufficient to put an agents.json file in the root of the /.well-known web folder and let agents just "figure it out" through semantic dialogue?

This forces the default use of HTTP as Agent stdio.

northern-lights 1 days ago [-]
also called Vibe Designing.
DonHopkins 13 hours ago [-]
I agree they should learn from DLLs, gRPC, SOAP, IDL, dCOM, etc.

But they should also learn from how NeWS was better than X-Windows because instead of a fixed protocol, it allowed you to send executable PostScript code that runs locally next to the graphics hardware and input devices, interprets efficient custom network protocols, responds to local input events instantly, implements a responsive user interface while minimizing network traffic.

For the same reason the client-side Google Maps via AJAX of 20 years ago was better than the server-side Xerox PARC Map Viewer via http of 32 years ago.

I felt compelled to write "The X-Windows Disaster" comparing X-Windows and NeWS, and I would hate if 37 years from now, when MCP is as old as X11, I had to write about "The MCP-Token-Windows Disaster", comparing it to a more efficient, elegant, underdog solution that got out worse-is-bettered. It doesn't have to be that way!

https://donhopkins.medium.com/the-x-windows-disaster-128d398...

It would be "The World's Second Fully Modular Software Disaster" if we were stuck with MCP for the next 37 years, like we still are to this day with X-Windows.

And you know what they say about X-Windows:

>Even your dog won’t like it. Complex non-solutions to simple non-problems. Garbage at your fingertips. Artificial Ignorance is our most important resource. Don’t get frustrated without it. A mistake carried out to perfection. Dissatisfaction guaranteed. It could be worse, but it’ll take time. Let it get in your way. Power tools for power fools. Putting new limits on productivity. Simplicity made complex. The cutting edge of obsolescence. You’ll envy the dead. [...]

Instead, how about running and exposing sandboxed JavaScript/WASM engines on the GPU servers themselves, that can instantly submit and respond to tokens, cache and procedurally render prompts, and intelligently guide the completion in real time, and orchestrate between multiple models, with no network traffic or latency?

They're probably already doing that anyway, just not exposing Turing-complete extensibility for public consumption.

Ok, so maybe Adobe's compute farm runs PostScript by the GPU instead of JavaScript. I'd be fine with that, I love writing PostScript! ;) And there's a great WASM based Forth called WAForth, too.

https://news.ycombinator.com/item?id=34374057

It really doesn't matter how bad the language is, just look at the success and perseverance of TCL/Tk! It just needs to be extensible at runtime.

NeWS applications were much more responsive than X11 applications, since you download PostScript code into the window server to locally handle input events, provide immediate feedback, translate them to higher level events or even completely handle them locally, using a user interface toolkit that runs in the server, and only sends high level events over the network, using optimized application specific protocols.

You know, just what all web browsers have been doing for decades with JavaScript and calling it AJAX?

Now it's all about rendering and responding to tokens instead of pixels and mouse clicks.

Protocols that fix the shape of interaction (like X11 or MCP) can become ossified, limiting innovation. Extensible, programmable environments allow evolution and responsiveness.

Speed run that!

snthpy 1 days ago [-]
Bravo. Agree with both of your examples.
cmrdporcupine 12 hours ago [-]
It reminds me a bit of LSP, which feels to me like a similar speed-run and a pile of assumptions baked in which were more parochial aspects of the original application... now shipped as a standard.

And yeah, sounds like it's explicitly a choice to follow that model.

MuffinFlavored 1 days ago [-]
Isn't MCP based on JSON-RPC?
_raz 1 days ago [-]
Yes, the protocol seems fine to me in and of itself. It's the transport portion that seems to be a dumpster fire on the HTTP side of things.
koakuma-chan 1 days ago [-]
This article feels like an old timer who knows WebSockets just doesn't want to learn what SSE is. I support the decision to ditch WebSockets because WebSockets would only add extra bloat and complexity to your server, whereas SSE is just HTTP. I don't understand though why have "stdio" transport if you could just run an HTTP server locally.
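For reference, the "SSE is just HTTP" point is concrete: an event stream is a long-lived HTTP response with simple line-based framing. A minimal parser for that framing, handling only `data:` lines (real SSE also has `event:`, `id:` and `retry:` fields):

```python
def parse_sse(stream_text):
    """Split a text/event-stream body into events.

    Each event is the concatenation of its "data:" lines; events are
    separated by blank lines.
    """
    events, data_lines = [], []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            events.append("\n".join(data_lines))
            data_lines = []
    if data_lines:  # flush a final event with no trailing blank line
        events.append("\n".join(data_lines))
    return events
```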
saurik 1 days ago [-]
I'm confused by the "old timer" comment, as SSE not only predates WebSockets, but the techniques surrounding its usage go really far back (I was doing SSE-like things--using script blocks to get incrementally-parsed data--back around 1999). If anything, I could see the opposite issue, where someone could argue that the spec was written by someone who just doesn't want to learn how WebSockets works, and is stuck in a world where SSE is the only viable means to implement this? And like, in addition to the complaints from this author, I'll note the session resume feature clearly doesn't work in all cases (as you can't get the session resume token until after you successfully get responses).

That all said, the real underlying mistake here isn't the choice of SSE... it is trying to use JSON-RPC -- a protocol which very explicitly and very proudly is supposed to be stateless -- and to then use it in a way that is stateful in a ton of ways that aren't ignorable, which in turn causes all of this other insanity. If they had correctly factored out the state and not incorrectly attempted to pretend JSON-RPC was capable of state (which might have been more obvious if they used an off-the-shelf JSON-RPC library in their initial implementation, which clearly isn't ever going to be possible with what they threw together), they wouldn't have caused any of this mess, and the question about the transport wouldn't even be an issue.

koakuma-chan 1 days ago [-]
> I'm confused by the "old timer" comment

SSE only gained traction after HTTP/2 came around with multiplexing.

fendy3002 1 days ago [-]
HTTP calls may be blocked by firewalls even internally, and it's overkill to force stdio apps to expose HTTP endpoints for this case alone.

As in, how can an MCP client access the `git` command without stdio? You'd have to run a wrapper server for that, or just use stdio.

koakuma-chan 1 days ago [-]
> As in, how MCP client can access `git` command without stdio?

MCP clients don't access any commands. MCP clients access tools that MCP servers expose.

fullstackchris 13 hours ago [-]
Here's a custom MCP tool I use to run commands and parse stdout / stderr all the time:

    import { exec } from "node:child_process";
    import { promisify } from "node:util";

    const execPromise = promisify(exec);

    async function runCommand(command: string) {
        try {
            const { stdout, stderr } = await execPromise(command);
            // Anything on stderr is surfaced to the model as an error result
            if (stderr) {
                return {
                    content: [{
                        type: "text",
                        text: `Error: ${stderr}`
                    }],
                    isError: true
                };
            }
            return {
                content: [{
                    type: "text",
                    text: stdout
                }],
                isError: false
            };
        } catch (error: any) {
            return {
                content: [{
                    type: "text",
                    text: `Error executing command: ${error.message}`
                }],
                isError: true
            };
        }
    }
Yeah, if you want to be super technical, it's Node that does the actual command running, but in my opinion, that's as good as saying the MCP client is...
koakuma-chan 12 hours ago [-]
The point is that MCP servers expose tools that can do whatever MCP servers want them to do, and it doesn’t have to have anything to do with stdio.
foobarian 1 days ago [-]
Must have used GraphQL as a role model no doubt
LegNeato 11 hours ago [-]
GraphQL is transport agnostic
hirsin 1 days ago [-]
Indeed! But seemingly only for the actual object representation - it's a start, and I wonder if JSON is uniquely suited to LLMs because it's so text-first.
neuroelectron 1 days ago [-]
I think JSON is preferred because it adds more complexity.
sroussey 1 days ago [-]
I think it works because json is verbose and reinforces what everything is in each record.
visarga 1 days ago [-]
From this point of view XML offers all that and named brackets.
sroussey 1 days ago [-]
True. Even better with inline attributes.
DonHopkins 9 hours ago [-]
I Wanna Be <![CDATA[ Sung to the tune of “I Wanna Be Sedated”, with apologies to The Ramones. ]]>

https://donhopkins.medium.com/i-wanna-be-cdata-3406e14d4f21

sitkack 1 days ago [-]
Hey, at least they didn't use yaml-rpc.
_raz 1 days ago [-]
toml-rpc anyone? :)
immibis 1 days ago [-]
I understand those with experience have found that XML works better because it's more redundant.
wisemang 1 days ago [-]
Is it the redundancy? Or is it because markup is a more natural way to annotate language, which obviously is what LLMs are all about?

Genuinely curious, I don’t know the answer. But intuitively JSON is nice for easy to read payloads for transport but to be able to provide rich context around specific parts of text seems right up XML’s alley?

solidasparagus 1 days ago [-]
Or is it because most text in existence is XML - or more specifically HTML?
giantrobot 1 days ago [-]
The lack of inline context is a failing of JSON and a very useful feature of XML.

Two simple but useful examples would be inline markup to define a series of numbers as a date or telephone number or a particular phrase tagged as being a different language from the main document. Inline semantic tags would let LLMs better understand the context of those tokens. JSON can't really do that while it's a native aspect of XML.
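A sketch of that inline-annotation point, with invented tag names (not any standard schema): the same sentence carries machine-readable context that a flat JSON string cannot.

```python
import xml.etree.ElementTree as ET

# A sentence with inline semantic annotations: a phone number, a date,
# and a French fragment marked as such via xml:lang.
doc = ET.fromstring(
    "<p>Call <phone>555 0123</phone> before "
    '<date format="iso">2025-03-26</date>, '
    '<span xml:lang="fr">merci</span>.</p>'
)

phone = doc.find("phone").text
date_format = doc.find("date").get("format")
full_text = "".join(doc.itertext())  # the plain sentence survives intact
```

In JSON you would have to invent a side-channel (parallel arrays of spans and labels, say) to convey the same thing.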

_QrE 1 days ago [-]
Agreed with basically the entire article. Also happy to hear that someone else was as bewildered as me when they visited the MCP site and they found nothing of substance. RFCs can be a pain to read, but they're much better than 'please just use our SDK library'.
dlandis 1 days ago [-]
Agree... this is an important blog post. People need to press pause on MCP in terms of adoption... it was simply not designed with a solid enough technical foundation to make it suitable as an industry standard. People are hyped about it, kind of like they were for LangChain and many other projects, but people are going to gradually realize (after diving into implementations) that it's not actually what they were looking for. It's basically a hack thrown together by a few people, and there are tons of questionable decisions, with websockets being just one example of a big miss.
__loam 1 days ago [-]
The Langchain repo is actually hilariously bad if you ever go read the source. I can't believe they raised money with that crap. Right place right time I guess.
jtms 1 hours ago [-]
Yeah agree. I spent a few hours looking at the langchain repo when it first hit the scene and could not for the life of me understand what value it actually provided. It (at least at the time) was just a series of wrappers and a few poorly thought through data structures. I could find almost no actual business logic.
stuaxo 15 hours ago [-]
My first surprise on it:

I made an error trying AWS Bedrock, where I used "bedrock" instead of "bedrock-runtime".

The native library will give you an error back.

Langchain didn't try and do anything, just kept parsing the json and gave me a KeyError.

I was able to get a small fix, but was surprised they have no error like ConfigurationError that goes across all their backends at all.

The best I could get them to add was ValueError and worked with the devs to make the text somewhat useful.

But was pretty surprised, I'd expect a badly configured endpoint to be the kind of thing that happens when setting stuff up for the first time, relatively often.

worldsayshi 1 days ago [-]
Isn't that what a lot of this is about? It's a blue ocean and everyone is full of FOMO.
__loam 19 hours ago [-]
Software quality be damned!
1 days ago [-]
oxidant 1 days ago [-]
I wish there was a clear spec on the site but there isn't https://modelcontextprotocol.io/specification/2025-03-26

It seems like half of it is Sonnet output and it doesn't describe how the protocol actually works.

For all its warts, the GraphQL spec is very well written https://spec.graphql.org/October2021/

9dev 1 days ago [-]
I didn’t believe you before clicking the link, but hot damn. That reads like the ideas I scribbled down in school about all the cool projects I could build. There is literally zero substance in there. Amazing.
svachalek 6 hours ago [-]
I thought it was just me. When I first saw all the hype around MCP I went to go read this mess and still have no idea what MCP even is.
1 days ago [-]
Spivak 21 hours ago [-]
And then you read the SDK code, and the bewilderment doesn't stop at the code quality: the organization, the complete lack of use of existing tools to solve their problems. It's an absolute mess for a spec that's like 5 JSON schemas in a trench coat.
_raz 1 days ago [-]
Glad to hear, I also thought I was alone :)
keithwhor 1 days ago [-]
On MCP's Streamable HTTP launch I posted an issue asking if we should simplify everything for remote MCP servers to just be HTTP requests.

https://github.com/modelcontextprotocol/modelcontextprotocol...

MCP as a spec is really promising; a universal way to connect LLMs to tools. But in practice you hit a lot of edge cases really quickly. To name a few; auth, streaming of tool responses, custom instructions per tool, verifying tool authenticity (is the server I'm using trustworthy?). It's still not entirely clear (*for remote servers*) to me what you can do with MCP that you can't do with just a REST API, the latter being a much more straightforward integration path.

If other vendors do adopt MCP (OpenAI and Gemini have promised to) the problem they're going to run into very quickly is that they want to do things (provide UI elements, interaction layers) that go beyond the MCP spec. And a huge amount of MCP server integrations will just be lackluster at best; perhaps I'm wrong -- but if I'm { OpenAI, Anthropic, Google } I don't want a consumer installing Bob's Homegrown Stripe Integration from a link they found on 10 Best MCP Integrations, sharing their secret key, and getting (A) a broken experience that doesn't match the brand or worse yet, (B) credentials stolen.

keithwhor 1 days ago [-]
Quick follow up:

I anticipate alignment issues as well. Anthropic is building MCP to make the Anthropic experience great. But Anthropic's traffic is fractional compared to ChatGPT - 20M monthly vs 400M weekly. Gemini claims 350M monthly. The incentive structure is all out of whack; how long are OpenAI and Google going to let an Anthropic team (or even a committee?) drive an integration spec?

Consumers have barely interacted with these things yet. They did once, with ChatGPT Plugins, and it failed. It doesn't entirely make sense to me that OpenAI is okay doing this again but letting another company lead the charge and define the limitations of the end user experience (because that's what the spec ultimately does: dictates how prompts and function responses are transported), when the issue wasn't the engineering effort (ChatGPT's integration model was objectively more elegant) but a consumer experience issue.

The optimistic take on this is the community is strong and motivated enough to solve these problems as an independent group, and the traction is certainly there. I am interested to see how it all plays out!

Scotrix 1 days ago [-]
OpenAI takes the back seat and waits until something stable/usable comes out of it and gains traction, then takes it over. Classic old playbook: let others make the mistakes and profit from it…
rco8786 11 hours ago [-]
> It's still not entirely clear (for remote servers) to me what you can do with MCP that you can't do with just a REST API,

Nothing, as far as I can tell.

> the latter being a much more straightforward integration path.

The (very) important difference is that the MCP protocol has built in method discovery. You don't have to 'teach' your LLM about what REST endpoints are available and what they do. It's built into the protocol. You write code, then the LLM automatically knows what it does and how to work with it, because you followed the MCP protocol. It's quite powerful in that regard.

But otherwise, yea it's not anything particularly special. In the same way that all of the API design formats prior to REST could do everything a REST API can do.

angusturner 11 hours ago [-]
I’m really glad to see people converging on this view because I feel a bit insane for not understanding all the hype.

Like, yeah, we need a standard way to connect LLMs with tools etc, but MCP in its current state is not a solution.

_raz 1 days ago [-]
After publishing the blog post, I ended up doing a similar thing the other day. https://github.com/modelcontextprotocol/modelcontextprotocol...

From reading your issue, I'm not holding my breath.

It all kind of seems too important to fuck up

keithwhor 1 days ago [-]
In the grand scheme of things I think we are still very early. MCP might be the thing which is why I'd rather try and contribute if I can; it does have a grassroots movement I haven't seen in a while. But the wonderful thing about the market is that incentives, e.g. good customer experiences that people pay for, will probably win. This means that MCP, if it remains the focal point for this sort of work, will become a lot better regardless of whether or not early pokes and prods by folks like us are successful or not. :)
jes5199 21 hours ago [-]
I recently wrote an MCP server, in node, after trying and failing to get the official javascript SDK to work. I agree with the criticisms — this is a stunningly bad specification, perhaps the worst I have seen in my career. I don’t think the authors have actually tried to use it.

Trying to fix “you must hold a single connection open to receive all responses and notifications” by replacing it with “you must hold open as many connections as you have long-running requests, plus one more for notifications” is downright unhinged, and from reading the spec I’m not even sure they know that’s what they are asking clients to do

mattw1810 1 days ago [-]
MCP should just have been stateless HTTP to begin with. There is no good reason for almost any of the servers I have seen to be stateful at the request/session level —- either the server carries the state globally or it works fine with a session identifier of some sort.
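A sketch of that stateless shape, with the state keyed off a session header rather than a held-open connection. The handler logic is invented; the header name mirrors the `Mcp-Session-Id` header from the newer Streamable HTTP drafts:

```python
import uuid

# In-memory session store; a real deployment would use Redis or similar
# so any replica behind a load balancer can serve any request.
SESSIONS = {}

def handle_request(headers, body):
    """Framework-agnostic sketch of handling one stateless HTTP request."""
    session_id = headers.get("Mcp-Session-Id")
    if session_id is None or session_id not in SESSIONS:
        session_id = str(uuid.uuid4())
        SESSIONS[session_id] = {"calls": 0}
    state = SESSIONS[session_id]
    state["calls"] += 1
    # ... dispatch body["method"] against the session state here ...
    return {"headers": {"Mcp-Session-Id": session_id},
            "body": {"calls_so_far": state["calls"]}}
```

The client just echoes the session id back on every request, exactly like a cookie; no connection affinity is needed.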
taocoyote 1 days ago [-]
I don't understand the logistics of MCP interactions. Can anyone explain why they aren't stateless? Why does a connection need to be held open?
mattw1810 1 days ago [-]
I think some of the advanced features around sampling from the calling LLM could theoretically benefit from a bidirectional stream.

In practice, nobody uses those parts of the protocol (it was overdesigned and hardly any clients support it). The key thing MCP brings right now is a standardized way to discover & invoke tools. This would’ve worked equally well as a plain HTTP-based protocol (certainly for a v1) and it’d have made it 10x easier to implement.

brumar 16 hours ago [-]
Sampling is to my eyes a very promising aspect of the protocol. Maybe its implementation is lagging behind because it's too far from the previous mental model of tool use. I am also fine if the burden is on the client side if it enables a good DX on the server side. In practice, there would be many more servers than clients.
brabel 16 hours ago [-]
> This would’ve worked equally well as a plain HTTP-based protocol

With plain HTTP you can quite easily "stream" both the request's and the response's body: that's an HTTP/1.1 feature called chunked transfer encoding (the message body isn't just one byte array; it's split into "chunks" that are received in sequence). I really don't get why people think you need WS (or ffs SSE) for "streaming". I've implemented a chat using just good old HTTP/1.1 with chunking. It's actually a perfect use case, so it suits LLMs quite well.
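For illustration, the chunked framing is simple enough to produce by hand: each chunk is its size in hex, CRLF, the payload, CRLF, and a zero-size chunk terminates the body.

```python
def encode_chunk(data: bytes) -> bytes:
    """Frame one chunk of an HTTP/1.1 chunked-encoded body."""
    return f"{len(data):x}".encode() + b"\r\n" + data + b"\r\n"

def encode_chunked(chunks) -> bytes:
    """Frame a whole body, ending with the zero-size terminator."""
    return b"".join(encode_chunk(c) for c in chunks) + b"0\r\n\r\n"
```

A server streaming LLM tokens would just write one chunk per token batch with `Transfer-Encoding: chunked` set, and any HTTP client can read it incrementally.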

0x457 22 hours ago [-]
Well, the point is to provide context, and that's easier to do if the server has state.

For example, you have an MCP client (let's say it's Amazon Q CLI), and you have an MCP server for executing commands over SSH. If the connection is maintained between MCP client and server, then the MCP server can keep the SSH connection alive.

Replace the SSH server with anything else that has state - a browser, for example (now your AI assistant can also have 500 open tabs)

lo0dot0 1 days ago [-]
I don't claim to have a lot of experience on this but my intuition tells me that a connection that ends after the request needs to be reopened for the next request. What is more efficient, keeping the session open or closing it, depends on the usage pattern, how much memory does the session consume, etc. etc.
mattw1810 1 days ago [-]
This is no different from a web app though, there’s no obvious need to reinvent the wheel. We know how to do this very very well: the underlying TCP connection remains active, we multiplex requests, and cookies bridge the gap for multi-request context. Every language has great client & server support for that.

Instead we ended up with a protocol that fights with load balancers and can in most cases not just be chucked into say an existing Express/FastAPI app.

That makes everything harder (& cynically, it creates room for providers like Cloudflare to create black box tooling & advertise it as _the_ way to deploy a remote MCP server)

ycombinatrix 18 hours ago [-]
That's not "stateful" for the purposes of correctness. Reusing a tcp stream doesn't make a protocol stateful.
mrcsharp 22 hours ago [-]
> "In HTTP+SSE mode, to achieve full duplex, the client sets up an SSE session to (e.g.) GET /sse for reads. The first read provides a URL where writes can be posted. The client then proceeds to use the given endpoint for writes, e.g., a request to POST /a-endpoint?session-id=1234. The server returns a 202 Accepted with no body, and the response to the request should be read from the pre-existing open SSE connection on /sse."

This just seems needlessly complicated. Performing writes on one endpoint and reading the response on another just seems so wrong to me. An alternative could be that the "client" generates a session id at the start of the chat and makes HTTP calls to the server, passing that ID in a query string or header. Then the response is sent back normally instead of just sending 202.

What benefit is SSE providing here? Let the client decide when a session starts/ends by generating IDs and let the server maintain that session internally.
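For illustration, the flow quoted above can be modeled in-process, with a queue standing in for the open SSE connection (the endpoint paths and session id follow the quoted example; everything else is a toy):

```python
import json
import queue

class ToyMcpServer:
    """Toy model of the HTTP+SSE transport: writes go to one endpoint,
    all responses come back over a single SSE stream."""

    def __init__(self):
        self.sse = queue.Queue()  # stands in for the open GET /sse stream

    def get_sse(self):
        # The first SSE event tells the client where to POST.
        self.sse.put({"event": "endpoint",
                      "data": "/a-endpoint?session-id=1234"})
        return self.sse

    def post_message(self, request):
        # The POST returns 202 with no body; the actual JSON-RPC
        # response is pushed onto the SSE stream instead.
        self.sse.put({"event": "message",
                      "data": json.dumps({"jsonrpc": "2.0",
                                          "id": request["id"],
                                          "result": "ok"})})
        return 202
```

The client has to correlate each 202-accepted POST with a response arriving on a different connection, matching by JSON-RPC id, which is exactly the complexity being objected to here.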

nitely 21 hours ago [-]
> What benefit is SSE providing here? Let the client decide when a session starts/ends by generating IDs and let the server maintain that session internally.

The response is generated asynchronously, instead of within the HTTP request/response cycle, and sent over SSE later. But emulating WS with HTTP requests+SSE seems very iffy, indeed.

mrcsharp 19 hours ago [-]
Well, with SSE the server and client are both holding an HTTP connection open for a relatively long period of time. If the server is written in a language that supports async paradigms, then an HTTP request that needs async IO will use about the same amount of resources anyway. And when the response is finished, that connection is closed and its resources are freed, whereas SSE will keep them for much longer.
nitely 17 hours ago [-]
Yes, and the client may do multiple requests, and if they all take long to process you may end up with a lot of open connections at the same time (at least on HTTP/1), so there is a point to fast HTTP requests + SSE instead of slow requests (and no SSE). Granted, if the server speaks HTTP/2 the requests can share the same connection, but then it'd be similar to just using WS for this usage. Also, this allows queuing the work and processing it either sequentially or concurrently.

By async I meant a process that may take longer than you are willing to do within the request/response cycle, not necessarily async IO.

mrcsharp 12 hours ago [-]
You make a good point about queuing up the work. You do get more control over resource management in this case.
Kiro 7 hours ago [-]
What benefit are WebSockets providing? It's the same. You send something to one endpoint and need to listen for a response in another handler.
aristofun 1 days ago [-]
This is a part of the bigger problem. Nearly all of AI is done by mathematicians, (data) scientists, students and amateur enthusiasts. Not by professional software engineers.

This is why nearly everything looks like a one weekend pet project by the standards of software engineering.

doug_durham 1 days ago [-]
Speak for yourself. I see the majority of work being done by professional software engineers.
aristofun 1 days ago [-]
Any popular examples to support your claim?

My claim is supported by the post article and many points there, for example. Another example is my own experience working with python ecosystem and ai/ml libraries in particular. With rare exceptions (like pandas) it is mostly garbage from DevX perspective (in comparison of course).

But I admit my exposure is very limited. I don't work in the AI area professionally (which is another example of my point btw, lol)

fleischhauf 1 days ago [-]
pytorch, tensorflow, numpy: there are quite a few examples. AI/ML has been steadily commoditized, so it's far from only being developed by mathematicians. Hence every high school student and his mother has an AI startup now. (And I'm not even mad, it's actually very exciting to see what people come up with nowadays)
lolinder 1 days ago [-]
Unfortunately when someone says "AI" these days they're not talking about pytorch, tensorflow, or numpy. They're talking specifically about LLMs, which are built on top of those tools but which do show the tendency that OP is identifying to generally appear to be vibe-coded over a weekend rather than designed by a rigorous engineering process like what we've come to expect from foundational tech like web browsers or operating systems (or, yes, pytorch or numpy).
acchow 21 hours ago [-]
Which LLMs seem to be vibe-coded over a weekend?

Do you perhaps mean small language models?

I doubt Llama or Deepseek were vibe coded..

lolinder 20 hours ago [-]
Sorry, I see that was confusing. I meant tooling for LLMs. Things like Langchain come to mind.
aristofun 1 days ago [-]
> pytorch, tensorflow, numpy

I would use those as examples of an exception from my generalized point.

Anything else? Just a handful of tools you can call professional, out of the thousands and thousands used every day?

lispisok 22 hours ago [-]
"professional software engineer" is a meaningless title because the industry has no professional standards.
aristofun 20 hours ago [-]
That is a perfect example of the kind of mentality that seems prevalent among AI developers - they often are not even aware of the problem.
tdullien 16 hours ago [-]
As a trained mathematician with 20+ years shipping software products, I object to this.

A lot of AI work is done by people that are "dash-shaped" -- broad, but with no depth anywhere.

Then there's a few I-shaped people that drive research progress, and a few T-shaped people that work on the infrastructure that allows the training runs to go through.

But something like a protocol will certainly be designed by a dash, not an I or a T, because those are needed to keep the matrices multiplying.

jacob019 8 hours ago [-]
The consensus around here seems to be that the protocol itself is fine, but the transport is controversial.

Personally, even the stdio transport feels suboptimal. I mostly write python and startup time for a new process is nontrivial. Starting a new process for each request doesn't feel right. It works ok, and I'll admit that there's a certain elegance to it. It would be more practical if I were using a statically compiled language.

As far as the SSE / "Streamable HTTP" / websockets discussion, I think it's funny that there is all this controversy over how to implement sockets. I get that this is where we are, because the modern internet only supports a few protocols, but at the network level you can literally just open up a socket and send newline-delimited JSON-RPC messages in both directions at full duplex. So simple and no one even thinks about it. Why not support the lowest-level primitive first? There are many battle-tested solutions for exposing sockets over higher-level protocols, websockets being one of them. I like the Unix Philosophy.

Thinking further, the main issue with just using TCP is the namespace. It's similar to when you have a bunch of webservers and nginx or whatever takes care of the routing. I use domain sockets for that. People often just pick a random port number, which works fine too as long as you register it with the gateway. This is all really new, and I'm glad that the creators, David and Justin, had the foresight to have a clean separation between transport and protocol. We'll figure this out.

ximus 7 hours ago [-]
> Starting a new process for each request doesn't feel right.

I think there is a misunderstanding of how stdio works. The process can be long running and receive requests via stdio at any time. No need to start one for each request.
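To illustrate (a hand-rolled sketch, not the SDK's actual loop — only "ping" here is a real MCP method):

```python
import json
import sys

def handle(request: dict) -> dict:
    # Dispatch a single JSON-RPC request to a handler
    if request.get("method") == "ping":
        return {"jsonrpc": "2.0", "id": request["id"], "result": {}}
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "error": {"code": -32601, "message": "Method not found"}}

def serve_forever() -> None:
    # One long-lived process serves many requests:
    # read a line from stdin, write a line to stdout, repeat until EOF
    for line in sys.stdin:
        if line.strip():
            sys.stdout.write(json.dumps(handle(json.loads(line))) + "\n")
            sys.stdout.flush()

print(handle({"jsonrpc": "2.0", "id": 1, "method": "ping"}))
```

The process starts once and stays up; the per-request cost is a line read, not a Python interpreter startup.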

jacob019 7 hours ago [-]
Sure, but that is not what I'm seeing in FastMCP proxy mode, and it can only talk with the parent process. You make a good point though, stdio is similar to tcp sockets and cross-platform without namespace issues, but can only support one client per process. I could use socat if I want it to talk sockets.
dend 1 days ago [-]
Just to add one piece of clarification - the comment around authorization is a bit out-of-date. We've worked closely with Anthropic and the broader security community to update that part of MCP and implement a proper separation between resource server (RS) and authorization server (AS) when it comes to roles. You can see this spec in draft[1] (it will be there until a new protocol version is ratified).

[1]: https://modelcontextprotocol.io/specification/draft/basic/au...

lolinder 1 days ago [-]
What percentage of the MCP spec is (was?) LLM output?

It's setting off all kinds of alarm bells for me, and I'm wondering if I'm on to something or if my LLM-detector alarms are miscalibrated.

dend 19 hours ago [-]
Can only speak for the authorization spec, where I am actively participating - zero. The entire spec was written, reviewed, re-written, and edited by real people, with real security backgrounds, without leaning into LLM-based generation.
_raz 1 days ago [-]
Idk, I'm kind of agnostic and ended up throwing it in there.

Regurgitating the OAuth draft doesn't seem that useful imho, and why am I forced into it if I'm using HTTP? There are plenty of use cases where unattended things would like to interact over HTTP, where we usually use other things besides OAuth.

It all probably could have been replaced by

- The Client shall implement OAuth2

- The Server may implement OAuth2

dend 1 days ago [-]
For local servers this doesn't matter as much. For remote servers - you won't really have any serious MCP servers without auth, and you want to have some level setting done between client and servers. OAuth 2.1 is a good middle ground.

That's also where, with the new spec, you don't actually need to implement anything from scratch. Server issues a 401 with WWW-Authenticate, pointing to metadata for authorization server locations. Client takes that and does discovery, followed by OAuth flow (clients can use many libraries for that). You don't need to implement your own OAuth server.
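Roughly, the client's first step is just header parsing (the header value below is my own example following the draft's use of protected resource metadata — check the draft for the authoritative parameter names):

```python
import re
from typing import Optional

def resource_metadata_url(www_authenticate: str) -> Optional[str]:
    # Pull the resource_metadata URL out of a 401's WWW-Authenticate header
    m = re.search(r'resource_metadata="([^"]+)"', www_authenticate)
    return m.group(1) if m else None

# Hypothetical 401 response header from an MCP server
header = 'Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"'
print(resource_metadata_url(header))
```

From there the client fetches the metadata document, discovers the authorization server, and runs a standard OAuth flow with any off-the-shelf library.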

vlovich123 1 days ago [-]
Bearer tokens work elsewhere and imho are drastically simpler than oauth
dend 19 hours ago [-]
But where would you get bearer tokens? How would you manage consent and scopes? What about revocation? OAuth is essentially the "engine" that gives you the bearer tokens you need for authorization.
18 hours ago [-]
_kidlike 18 hours ago [-]
I know it's not auth-related, but the main MCP "spec" says that it was inspired by LSP (language server protocol). Wouldn't something like HATEOAS be more apt?
punkpeye 1 days ago [-]
I am the founder of one of the MCP registries (https://glama.ai/mcp/servers).

I somewhat agree with author’s comments, but also want to note that the protocol is in the extremely early stages of development, and it will likely evolve a lot over the next year.

I think that no one (including me) anticipated just how much attention this would get straight out the door. When I started working on the registry, there were fewer than a few dozen servers. Then suddenly a few weeks later there were a thousand, and the numbers just kept growing.

However, lots and lots of those servers do not work. Majority of my time has gone into trying to identify servers that work (using various automated tests). All of this is in large part because MCP got picked up by the mainstream AI audience before the protocol reached any maturity.

Things are starting to look better now though. We have a few frameworks that abstract the hard parts of the protocol. We have a few registries that do a decent job surfacing servers that work vs those that do not. We have a dozen or so clients that support MCPs, etc. All of this in less than half a year is unheard of.

So yes, while it is easy to find flaws in MCP, we have to acknowledge that all of it happened in a super short amount of time – I cannot even think of comparisons to make. If the velocity remains the same, MCP's future is very bright.

For those getting started, I maintain a few resources that could be valuable:

* https://github.com/punkpeye/awesome-mcp-servers/

* https://github.com/punkpeye/awesome-mcp-devtools/

* https://github.com/punkpeye/awesome-mcp-clients/

ethical_source 1 days ago [-]
> I somewhat agree with author’s comments, but also want to note that the protocol is in the extremely early stages of development, and it will likely evolve a lot over the next year.

And that's why it's so important to spec with humility. When you make mistakes early in protocol design, you live with them FOREVER. Do you really want to live with a SSE Rube Goldberg machine forever? Who the hell does? Do you think you can YOLO a breaking change to the protocol? That might work in NPM but enterprise customers will scream like banshees if you do, so in practice, you're stuck with your mistakes.

jes5199 21 hours ago [-]
they already did though. the late-2024 version and the early-2025 version have completely incompatible SSE rube goldberg machines
punkpeye 1 days ago [-]
Just focusing on worst-case scenarios tends to spread more FUD than move things forward. If you have specific proposals for how the protocol could be designed differently, I’m sure the community would love to hear them – https://github.com/orgs/modelcontextprotocol/discussions
ethical_source 1 days ago [-]
The worst case scenario being, what, someone implementing the spec instead of using the SDK and doing it in a way you didn't anticipate? Security and interoperability will not yield to concerns about generating FUD. These concerns are important whether you like them or not. You might as well be whispering that ill news is an ill guest.

At the least, MCP needs to clarify things like "SHOULD rate limit" in more precise terms. Imagine someone who is NOT YOU, someone who doesn't go to your offsites, someone who doesn't give a fuck about your CoC, implementing your spec TO THE LETTER in a way you didn't anticipate. You going to sit there and complain that you obviously didn't intend to do the things that weird but compliant server is doing? You don't have a recourse.

The recent MCP annotations work is especially garbage. What the fuck is "read only"? What's "destructive"? With respect to what? And hoo boy, "open world". What the fuck? You expect people to read your mind?

What would be the point of creating GH issues to discuss these problems? The kind of mind that writes things like this isn't the kind of mind that will understand why they need fixing.

rco8786 1 days ago [-]
Agree with basically all of this.

The actual protocol of MCP is…whatever. I’m sure it will continue to evolve and mature. It was never going to be perfect out of the gate, because what is?

But the standardization of agentic tooling APIs is mind bogglingly powerful, regardless of what the standard itself actually looks like.

I can write and deploy code and then the AI just... immediately knows how to use it. It's something you have to experience yourself to really get.

punkpeye 1 days ago [-]
Yup. It's easy to focus on what’s missing or broken in early-stage tech, but I’m more excited about where this kind of standardization could take us. Sometimes you need to look beyond imperfections and see the possibilities ahead.
delian66 17 hours ago [-]
What are the possibilities that you see?
_raz 1 days ago [-]
Kind of my fear exactly. We are moving so fast that MCP might create and cement a transport protocol that could take years or decades to get rid of for something better.

Kind of reminds me of the browser wars during the 90s, where everyone tried to move the fastest and created splits in standards and browsers that we didn't really get rid of for a good 20 years or more. IE11 was around for far too long.

punkpeye 1 days ago [-]
I think that transport is a non-issue.

Whatever the transport evolves to, it is easy to create proxies that convert from one transport to another, e.g. https://github.com/punkpeye/mcp-proxy

As an example, every server that you see on Glama MCP registry today is hosted using stdio. However, the proxy makes them available over SSE, and could theoretically make them available over WS, 'streamable HTTP', etc

Glama is just one example of doing this, but I think that other registries/tools will emerge that will effectively make the transport the server chooses to implement irrelevant.

nylonstrung 22 hours ago [-]
Do you think WebTransport and HTTP3 could provide better alternatives for transport?
1 days ago [-]
bongodongobob 20 hours ago [-]
Then have a convo with all your devs to stop spamming the glory of MCP all over the damn place. Have some patience and finish writing and testing it first.
justanotheratom 1 days ago [-]
It is indeed quite baffling why MCP is taking off, but facts are facts. I would love to be enlightened how MCP is better than an OpenAPI spec of an existing server.
simonw 1 days ago [-]
My theory is that a lot of the buzz around MCP is actually buzz around the fact that LLM tool usage works pretty well now.

OpenAI plugins flopped back in 2023 because the LLMs at the time weren't reliable enough for tool usage to be anything more than interesting-but-flawed.

MCP's timing was much better.

fhd2 1 days ago [-]
I'm still having relatively disastrous results compared to just sending pre curated context (i.e. calling tools deterministically upfront) to the model.

Doesn't cover all the use cases, but for information retrieval stuff, the difference is night and day. Not to mention the deterministic context management approach is quite a bit cheaper in terms of tokens.

visarga 1 days ago [-]
I find letting the agent iterate search leads to better results. It can direct the search dynamically.
runekaagaard 1 days ago [-]
I thinks a lot is timing and also that it's a pretty low bar to write your first mcp server:

    from mcp.server.fastmcp import FastMCP
    mcp = FastMCP("Basic Math Server")

    @mcp.tool()
    def multiply(a: int, b: int) -> int:
        return a * b

    mcp.run()
If you have a large MCP server with many tools the amount of text sent to the LLM can be significant too. I've found that Claude works great with an OpenAPI spec if you provide it with a way to look up details for individual paths and a custom message that explains the basics. For instance https://github.com/runekaagaard/mcp-redmine
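A sketch of that lookup pattern (toy spec structure; mcp-redmine's actual code may differ): one cheap tool lists paths, another returns the schema for a single path, so the full spec never enters the context.

```python
# Toy OpenAPI spec; a real one would be loaded from a YAML/JSON file
SPEC = {
    "paths": {
        "/issues": {"get": {"summary": "List issues",
                            "parameters": [{"name": "project_id", "in": "query"}]}},
        "/issues/{id}": {"get": {"summary": "Show one issue"}},
    }
}

def list_paths() -> list:
    # Cheap overview tool: a few dozen tokens instead of the whole spec
    return sorted(SPEC["paths"])

def path_details(path: str) -> dict:
    # Detail tool: the LLM asks for one path only when it needs it
    return SPEC["paths"].get(path, {})

print(list_paths())  # ['/issues', '/issues/{id}']
```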
_raz 1 days ago [-]
That's kind of my point: the protocol's complexity is hidden in the Python SDK, making it feel easy... but you're taking on a lot of tech debt.
practal 1 days ago [-]
The difficult part is figuring out what kind of abstractions we need MCP servers / clients to support. The transport layer is really not important, so until that is settled, just use the Python / TypeScript SDK.
mmcnl 1 days ago [-]
But the spec is on the transport level. So for the specification, the transport layer is very important.
practal 1 days ago [-]
This is the spec that counts: https://github.com/modelcontextprotocol/modelcontextprotocol...

How exactly those messages get transported is not really relevant for implementing an mcp server, and easy to switch, as long as there is some standard.

vlovich123 1 days ago [-]
You’re ignoring the power of network effects - “shitty but widely adopted” makes “better but just starting adoption” harder to grow. Think about how long it takes to create a new HTTP standard - we’ve had 3 HTTP standards in the past 30 years, the first 19 years of which saw no major changes. HTTP/2 kind of saw no adoption in practice and HTTP/3 is still a mixed bag. In fact, most servers just speak HTTP/1 with layers in front converting the protocols.

Underestimate network effects and ossification at your own peril.

pixl97 1 days ago [-]
I mean isn't this the point of a lot of, if not most successful software? Abstracting away the complexity making it feel easy, where most users of the software have no clue what kind of technical debt they are adopting?

Just think of something like microsoft word/excel for most of its existence. Seems easy to the end user, but attempting to move away from it was complex, the format had binary objects that were hard to unwind, and interactions that were huge security risks.

Scotrix 1 days ago [-]
yeah, but it doesn’t need to be that way. It can be simple, which makes it easier to adopt. Why over-engineer and reinvent the wheel, ignoring at least 2 decades of experience and better practices?
pixl97 1 days ago [-]
Simple software is the most difficult software to make.

Historically stated as

>I apologize for such a long letter - I didn't have time to write a short one.

hirsin 1 days ago [-]
This is one of the few places I think it's obvious why MCP provides value - an OpenAPI document is static and does no lifting for the LLM, forcing the LLM to handle all of the call construction and correctness on its own. MCP servers reduce LLM load by providing abstractions over concepts, with basically the same benefits we get by not having to write assembly by hand.

In a literal sense it's easier, safer, faster, etc for an LLM to remember "use server Foo to do X" than "I read a document that talks about calling api z with token q to get data b, and I can combine three or four api calls using this http library to...."

acchow 1 days ago [-]
I believe gp is saying the MCP’s “tool/list” endpoint should return dynamic, but OpenAPI-format, content.

Not that the list of tools and their behavior should be static (which would be much less capable)

tedivm 1 days ago [-]
I'm not saying MCP is perfect, but it's better than OpenAPI for LLMs for a few reasons.

* MCP tools can be described simply and without a lot of text. OpenAPI specs are often huge. This is important because the more context you provide an LLM the more expensive it is to run, and the larger model you need to use to be effective. If you provide a lot of tools then using OpenAPI specs could take up way too much for context, while the same tools for MCP will use much less.

* LLMs aren't actually making the calls, it's the engine driving it. What happens when an LLM wants to make a call is it responds directly with a block of text that the engine catches and uses to run the command. This allows LLMs to work like they're used to: figuring out text to output. This has a lot of benefits: less tokens to output than a big JSON blob is going to be cheaper.

* OpenAPI specs are static, but MCP allows for more dynamic tool usage. This can mean that different clients can get different specs, or that tools can be added after the client has connected (possibly in response to something the client sent). OpenAPI specs aren't nearly that flexible.

This isn't to say there aren't problems. I think the transport layer could use some work, as OP said, but if you play around in their repo you can see websocket examples, so I wouldn't be surprised if that was coming. Also, the idea that "interns" are the ones making the libraries is an absolute joke, as the FastMCP implementation (which was turned into the official spec) is pretty solid. The mixture of hyperbole with some reasonable points really ruins this article.

smartvlad 1 days ago [-]
If you look at the actual raw output of tools/list call you may find it surprisingly similar to the OpenAPI spec for the same interface. In fact they are trivially convertible to each other.

Personally I find OpenAPI spec being more practical since it includes not just endpoints with params, but also outputs and authentication.

Know all that from my own experience plugging dozens of APIs to both MCP/Claude and ChatGPT.
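For instance, the mapping is more or less mechanical (simplified sketch; a real conversion also needs outputs and auth, as noted):

```python
def tool_to_openapi_op(tool: dict) -> dict:
    # Map one MCP tools/list entry onto an OpenAPI-style operation.
    # The MCP inputSchema is already JSON Schema, so it drops straight in.
    return {
        "post": {
            "operationId": tool["name"],
            "description": tool.get("description", ""),
            "requestBody": {"content": {"application/json": {
                "schema": tool["inputSchema"]}}},
        }
    }

tool = {"name": "multiply", "description": "Multiply two numbers",
        "inputSchema": {"type": "object",
                        "properties": {"a": {"type": "integer"},
                                       "b": {"type": "integer"}},
                        "required": ["a", "b"]}}
op = tool_to_openapi_op(tool)
print(op["post"]["operationId"])  # multiply
```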

9dev 1 days ago [-]
> OpenAPI specs are static, but MCP allows for more dynamic tool usage.

This is repeated everywhere, but I don’t get it. OpenAPI specs are served from an HTTP endpoint, there’s nothing stopping you from serving a dynamically rendered spec depending on the client or the rest of the world?

armdave 1 days ago [-]
What does it mean that "different clients can get different specs"? Different in what dimension? I could imagine this makes creating repeatable and reliable workflows problematic.
tedivm 1 days ago [-]
Using MCP you can send "notifications" to the server, and the server can send back notifications including the availability of new tools.

So this isn't the same as saying "this user agent gets X, this gets Y". It's more like "this client requested access to X set of tools, so we sent back a notification with the list of those additional tools".

This is why I do think websockets make more sense in a lot of ways here, as there's a lot more two way communication here than you'd expect in a typically API. This communication also is very session based, which is another thing that doesn't make sense for most OpenAPI specs which assume a more REST-like stateless setup.
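Concretely, the exchange looks something like this (message shapes simplified from the spec):

```python
# Server tells the client the tool set changed; notifications carry no id
notification = {"jsonrpc": "2.0",
                "method": "notifications/tools/list_changed"}

# The client then re-fetches the tool list with an ordinary request
refetch = {"jsonrpc": "2.0", "id": 7, "method": "tools/list"}

print(notification["method"])
```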

Brainlag 23 hours ago [-]
Why is it baffling? Worse is better! Look at PHP - why did anyone ever take that prank of a programming language seriously?
bsenftner 13 hours ago [-]
I'll say it: MCP is immature trash, and will be replaced with... nothing. It is not needed, nor necessary if one has any functional engineering experience. It's yet another poorly considered distraction created by the software industry that cannot tie its own shoes.
NicuCalcea 9 hours ago [-]
> It is not needed, nor necessary if one has any functional engineering experience

There indeed are people without functional engineering experience. I wrote some software meant to be used by journalists. MCP is a great fit for it, it allows the tool to be expanded and adapted to their needs without having to code the whole thing themselves.

Sammi 13 hours ago [-]
MCP is a poor way of doing function calls or web requests, when you could just do function calls or web requests.

I'd love to be wrong, but the more I learn about MCP the more I fear that I'm right.

somnium_sn 7 hours ago [-]
Hey, I am one of the MCP authors.

We appreciate the criticism and take it very seriously. We know things are not perfect and there is lots of room for improvement. We are trying to balance the needs of the fast paced AI world, and the careful, time consuming needs of writing a spec. We’d love to improve the spec and the language, and would of course appreciate help here. We also work with an increasingly larger community that help us get this right. The most recent Authorization specification changes are just one example.

Similarly we are working on the SDKs and other parts of MCP to improve the ecosystem. Again, it’s all very early and we appreciate help from the community.

pelagicAustral 6 hours ago [-]
AI post
somnium_sn 6 hours ago [-]
Literally hand written. Maybe I just sound like an AI. Who knows
Scotrix 1 days ago [-]
Couldn’t agree more, played the whole day today trying to get a HTTP MCP server with Claude running.

Absolutely terrible, no clear spec, absolute useless errors and/or just broken behaviour without telling what’s wrong. Reference implementations and frameworks are not working either, so only reverse engineering + trial & error until it runs, yaaay.

Feels like the early 2000 over and over again, trying to make something work.

owebmaster 1 days ago [-]
> Feels like the early 2000 over and over again

Exciting, right? Technology is unpredictable and fun again :)

baalimago 17 hours ago [-]
Personally I don't get why they didn't extend the function calling system [1].

This enables practically the same functionality, only with less fuss. Long term memory can then instead be implemented via RAG, exposed as function calls, instead of keeping it in the context of the MCP.

An "agent" is a pre-prompted server which receives external requests, by any API (the AI interface is not exposed, since there's no need for it to be). The server then performs the query, announcing which tools the LLM should use via function calling + conversation flow.

The only downside of this approach is that you can't have a MCP "marketplace" (but it's perfectly possible to expose standardized structs for different tools [2], which ultimately achieves the same thing).

[1]: https://platform.openai.com/docs/guides/function-calling?api... [2]: https://github.com/baalimago/clai/blob/main/internal/tools/b...
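For comparison, the function calling shape I'm referring to (OpenAI-style tool schema; the weather tool is just a placeholder example):

```python
# An OpenAI-style function definition; note how close it is to an MCP
# tool entry -- the JSON Schema "parameters" block is the same idea
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch(name: str, args: dict) -> str:
    # The application, not the model, executes the call (placeholder logic)
    if name == "get_weather":
        return f"Sunny in {args['city']}"
    raise ValueError(f"unknown tool: {name}")

print(dispatch("get_weather", {"city": "Oslo"}))  # Sunny in Oslo
```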

fendy3002 1 days ago [-]
Opinion aside (still reading),

> Simply put, it is a JSON-RPC protocol with predefined methods/endpoints designed to be used in conjunction with an LLM.

is a spot-on, simplest-possible explanation of MCP. I wonder why nobody uses that, instead of insisting that it's "USB-C for AI" in their tutorials! Seeing this early would have let me understand MCP in 5 minutes.

bn-l 1 days ago [-]
LLMs are bad at summaries. So if you vibe-spec and vibe-doc, it makes things nice for you but frustrating for any poor schmuck who has to work with it, because it's inexplicably now the flavour of the month among vibe coders.
fendy3002 22 hours ago [-]
Yeah, one main thing I really hate about AI in general: it's overhyped and targeted at non-technical people, and everyone involved wants to sell shovels in a gold rush. It's frustrating trying to find good technical material, or good tips and practices.
schappim 24 hours ago [-]
I’m building an MCP service in Ruby on Rails called ninja.ai [1], which functions as an app store offering one-click installation of MCP servers. Ninja installs Model Context Protocol servers on client devices using Tauri [2], a lightweight framework for building cross-platform desktop apps.

I’m also using Rails to host MCP servers in the cloud.

I share the criticism of HTTP+SSE and the recent “Streamable HTTP” feature—WebSockets would have been a more appropriate choice for interactive, bidirectional communication. Rails’ native support for SSE via ActionController::Live is limited (blocking) and has led to significant scalability challenges, prompting me to migrate these endpoints to the Falcon web server, which is better suited to concurrent streaming workloads.

When I reviewed the pull request for “Streamable HTTP” (which allows streaming a single controller response via server-sent events), I noticed it was largely driven by engineers at Shopify. I’m curious why they opted for this approach instead of WebSockets, especially given that Rails already includes ActionCable for WebSocket support. My assumption is that their choice was informed by specific infrastructure or deployment constraints, possibly related to simplicity or compatibility with existing HTTP/2 tooling.

It’s worth noting that the transport layer in the Model Context Protocol is intentionally abstracted. Future implementations could leverage WebSockets or even WebRTC, depending on the needs of the host environment or client capabilities.

[1] https://ninja.ai

[2] https://v2.tauri.app

angusturner 12 hours ago [-]
I think this article is too generous about the use of stdio - I have found this extremely buggy so far, especially in the Python SDK.

Also if you want to wrap any existing code that logs or prints to stdout then it causes heaps of ugly messages and warnings as it interferes with the comms between client and server.

I just want a way to integrate tools with Claude Desktop that doesn’t make a tonne of convoluted and weird design choices.
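One workaround that helps (plain Python logging, nothing SDK-specific): route every log to stderr, since only stdout carries protocol frames over the stdio transport.

```python
import logging
import sys

# stdout is reserved for JSON-RPC frames; send all logging to stderr.
# force=True replaces any handlers a library may have installed already.
logging.basicConfig(stream=sys.stderr, level=logging.INFO, force=True,
                    format="%(levelname)s %(name)s %(message)s")

log = logging.getLogger("my-mcp-server")
log.info("this goes to stderr, not into the protocol stream")
```

Wrapped code that calls print() directly is still a problem, but library logging at least stops corrupting the stream.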

thatxliner 18 hours ago [-]
> Be honest... when was the last time you ran pip install and didn't end up in dependency hell?

Never? If you use proper encapsulation (e.g. tools such as pipx or using virtual environments), that's a non-issue. It only gets bad when there's Python version incompatibilities

22 hours ago [-]
hrpnk 1 days ago [-]
This critical look focuses just on the protocol. The fun starts with the actual MCP server implementations... Seems that providing an MCP server is the to be or not to be for all sorts of vendors. All REST APIs get wrapped into an MCP to make products LLM-compatible and tick checkmarks on newly extended checklists.

Many pass REST responses directly to LLMs, which quickly leads to token burn. Wish providers took a closer look at the actual engineering practices for the servers.

Has someone seen a good implementation of an MCP server with a comprehensive test suite?

1 days ago [-]
_pdp_ 17 hours ago [-]
We also had to reverse engineer the SDKs because we couldn't find any good source on what Streamable HTTP is really supposed to do, and our big takeaway from this experiment was that MCP is an experiment.

Why anyone would want to write a non-scalable wrapper around a service that is already well documented using OpenAPI is beyond me.

Anyway, we ended up implementing it just because but I already know it is a mistake and a potential source of many hours wasted by our engineering team.

maCDzP 9 hours ago [-]
The comments are mostly negative, so I’ll add my experience as a non coder.

I wanted to let Claude search an open data source. It's my country's version of the Library of Congress.

So I pointed Claude to the MCP docs and the API spec for the open data. 5 minutes later I had a working MCP client so I can connect Claude to my data set.

Building that would have taken me days, now I can just start searching for the data that I want.

Sure, I have to proofread everything that the LLM turns out. But I believe that's better than reading and searching through the library.

lolinder 9 hours ago [-]
I don't think any of the negativity is about whether MCP works. It's just about whether MCP is a horribly ill-planned design that could have been much better if they'd taken the time to learn from the 50 years of experience we have as an industry in building protocols.

That it works is in some ways worse because it means we'll be stuck with it. If it didn't work we'd be more likely to be able to throw it away and start over.

stuaxo 15 hours ago [-]
There is so much stuff like this right now:

I have a similar feeling looking at the Converse API.

Why do we have a thing that keeps returning all the text, when something on the other end is just appending to it?

Then there is the code inside langchain, some of which feels rushed.

I'm unconvinced of the abstraction that everything is a "Document". In an app I'm working on, once we switched to PGVector in Django the need for a lot of things went away.

What is great with langchain is the support for lots of things.. but not everything.

So, wanting to always use a thin abstraction over native libraries we find ourself using litellm, which covers some bits and langchain for the others (though the code for both of those is not much).

And then there's models: we can't even agree on standard names for the same models.

And this makes it painful when you support different backends, of course if Bedrock is a backend they have their own models you can't use anywhere else.

kgeist 13 hours ago [-]
Why can't it just be an OpenAPI spec and standard HTTP requests? Can't AI already figure out which endpoints to call on its own? The whole Streamable HTTP approach feels like premature optimization. Is there something else to it?
practal 1 days ago [-]
There isn't much detailed technical spec on MCP on the spec site, but they have a link to a schema [1]. You can add that schema to a Claude project, and then examine it. That's very helpful, although you will quickly run into unsupported things, for example embedded resources in tool call responses in Claude Desktop.

I think MCP will be a huge deal for Practal. Implementing Practal as an MCP server, I basically don't need a frontend.

[1] https://github.com/modelcontextprotocol/modelcontextprotocol...

yawnxyz 1 days ago [-]
I’ve been toying with building remote MCPs on Cloudflare Workers. I came in with the idea that “you could probably use REST APIs for everything”, then implemented both REST and MCP side by side. Also built an SSE and a “streamable HTTP” version.

For building apps that call the server, using the APIs was way easier.

For building an LLM system for figure out what API tool calls to make, it’s quite a bit of work to recreate what the MCP folks did.

I think MCPs are a huge time saver for wrapping AI around a bag of tools, without having to hard code every API call and interaction.

If anything, I think using MCPs is a massive convenience and time saver for building and prototyping LLM + tool calling apps.

Also for SSE vs Streamable HTTP, I don’t think “streamable” uses SSE at all? I think the problem they were solving for was the annoying long lived SSE connections — you can definitely see the difference on Workers though, switching away from SSE makes the workers way faster for multiple connections

Edit: the experience of building on Cloudflare and Claude is extremely frustrating; Claude is unable to output errors properly, can’t edit config inside the app, and has to be restarted constantly. Cloudflare stops working properly on SSE connections randomly, and throws bizarre errors every now and then.

benpacker 22 hours ago [-]
Cloudflare + SSE with any substantial delay is a nightmare
notepad0x90 1 days ago [-]
What are arguments for involving HTTP/streaming at all in the context of MCP?

As I understand, the agent sdk/ADK can simply start a child process and use STDIO to execute MCP commands. Why not always use that approach? if some REST api needs to be queried, the child process can abstract that interaction and expose it by the same consistent MCP interface. That way, devs are free to use REST,SOAP,*RPC,etc.. or whatever they want.
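The child-process pattern is only a few lines — a sketch with a trivial inline echo server standing in for a real MCP server:

```python
import json
import subprocess
import sys

# A stand-in "server": reads JSON-RPC lines from stdin, answers on stdout
server_code = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    print(json.dumps({'jsonrpc': '2.0', 'id': req['id'], 'result': {}}))\n"
    "    sys.stdout.flush()\n"
)
proc = subprocess.Popen([sys.executable, "-c", server_code],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# Send one request over the child's stdio and read the reply
proc.stdin.write(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "ping"}) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
print(response)
```

Whatever the child does behind the scenes (REST, SOAP, gRPC) is its own business; the host only ever sees stdio.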

cheriot 18 hours ago [-]
The hub bub about MCP buries the lede. The amazing thing is how effectively LLMs use tools.

Who cares which client side protocol turns a structured message into a function call? There will be as many of them as there are RPC protocols because that's effectively what it is.

_heimdall 1 days ago [-]
I haven't dug too deeply into MCP yet so I may very well be wrong here, but it feels like yet another attempt to paper over the fact that we abandoned REST APIs nearly 20 years ago.

XML is ugly and building APIs that describe both the data and available actions is tough.

Instead we picked JSON RPCs that we still call REST, and we inevitably run into situations like Alexa or LLMs where we want a machine to understand what actions are supported in an API and what the data schema is.

ethan_smith 13 hours ago [-]
OpenAPI/Swagger already solves the machine-discoverable API problem MCP is attempting to address, with years of maturity and tooling support. The real innovation needed isn't yet another protocol but better LLM understanding of existing API description formats.
noor_z 13 hours ago [-]
One advantage of MCP being "inspired by" LSP, at least to people like myself who work on LSP tools, is that a decent chunk of existing LSP code can be reused and repurposed. For example, most editors already ship with code for managing local server sidecars over stdio. There are a few annoying differences though like the lack of a header section in MCP.
grogenaut 1 days ago [-]
I find the discussion of the quality of the MCP protocol funny. This space is constantly evolving very quickly. I consider MCP completely throwaway, and I'm willing to deal with it as unimportant in that rapidly evolving space. I'm sure there will be a different implementation and approach in 6-12 months, or not. From the speed I'm able to turn on MCP integrations, I don't think it'll take that long to swap to another protocol. We're in the stone age here. MCP may be crappy, but it's flint and steel versus a bowstring. Eventually we'll get to central or zoned heating.

I'm using MCP locally on my laptop, the security requirements are different there than on a server. Logging can be done at the actual integration with external api level if you have standard clients and logging, which I do and push for.

To me what is important right now is to glue my apis, data sources, and tools to the AI tools my people are using. MCP seems to do that easily. Honestly I don't care about the protocol; at the end of the day, protocols are just ways for things to talk to each other. If they're interesting in and of themselves to you, you're focusing on different things than I am. My goal is delivering power with the integration.

MCP may be messy, but the AI tools I'm using it with seem just fine dealing with that mess to help me build more power into the AI tools. That, at the end of the day, is what I care about: can I get the info and power into the tools so that my employees can do stuff they couldn't do before? MCP seems to do that just fine. If we move to some other protocol in 6 months, I'm assuming I can make that switch with AI tools on a pretty quick basis, as fast as I'm building it right now.

jsight 17 hours ago [-]
Wow, this is a really incredible and timely analysis. I was just looking at the MCP "spec" the other day. I didn't really understand it, and assumed that I must have been missing something.

I tried asking some LLMs for from-scratch implementations of MCP hosts and clients, and they did a terrible job of it. This seemed odd to me.

It turns out that both of these problems likely have the same cause. The spec (if you can even call it that) really is horrendous. It doesn't really spell out the protocol properly at all!

1 days ago [-]
theturtle32 1 days ago [-]
Regarding the WebSocket critiques specifically, as the author of https://www.npmjs.com/package/websocket, and having participated in the IETF working group that defined the WebSocket protocol, I completely agree with this blog post's author.

The WebSocket protocol is the most ideal choice for a bi-directional streaming communication channel, and the arguments listed in https://github.com/modelcontextprotocol/modelcontextprotocol... for "Why Not WebSockets" are honestly bewildering. They are at best thin, irrelevant and misleading. It seems as though they were written by people who don't really understand the WebSocket protocol, and have never actually used it.

The comment farther down the PR makes a solid rebuttal. https://github.com/modelcontextprotocol/modelcontextprotocol...

Here are the stated arguments against using the WebSocket protocol, and my responses.

---

Argument 1: Wanting to use MCP in an "RPC-like" way (e.g., a stateless MCP server that just exposes basic tools) would incur a lot of unnecessary operational and network overhead if a WebSocket is required for each call.

Response 1: There are multiple better ways to address this.

Option A.) Define a plain HTTP, non-streaming request/response transport for these basic use cases. That would be both DRAMATICALLY simpler than the "Streaming HTTP" HTTP+SSE transport they did actually define, while not clouding the waters around streaming responses and bi-directional communications.

Option B.) Just leave the WebSocket connection open for the duration of the session instead of tearing it down and re-connecting it for every request. Conceptualizing a WebSocket connection as an ephemeral resource that needs to be torn down and reconstructed for every request is wrong.

---

Argument 2: From a browser, there is no way to attach headers (like Authorization), and unlike SSE, third-party libraries cannot reimplement WebSocket from scratch in the browser.

Response 2: The assertion is true. You cannot attach arbitrary headers to the initial HTTP GET request that initiates a WebSocket connection, not because of the WebSocket protocol's design, but because the design of the browser API doesn't expose the capability. However, such a limitation is totally irrelevant, as there are plenty of other ways that you could decide to convey that information from client to server:

- You can pass arbitrary values via standard HTTP GET query parameters to be interpreted during the WebSocket handshake. Since we're initiating a WebSocket connection and not actually performing a GET operation on an HTTP resource, this does not create issues with caching infrastructure, and does not violate standard HTTP GET semantics. The HTTP GET that initiates a WebSocket connection is HTTP GET in name only, as the response in a successful WebSocket handshake is to switch protocols and no longer speak HTTP for the remainder of the connection's lifetime.

- Cookies are automatically sent just as with any other HTTP request. This is the standard web primitive for correlating session state across connections. I'll grant, however, that it may be a less relevant mechanism if we're talking about cross-origin connections.

- Your subprotocol definition (what messages are sent and received over the WebSocket connection) could simply require that the client sends any such headers, e.g. Authorization, as part of the first message it sends to the server once the underlying WebSocket connection is established. If this is sent pipelined along with the first normal message over the connection, it wouldn't even introduce an additional round-trip and therefore would have no impact on connection setup time or latency.

These are not strange, onerous workarounds.
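As a concrete sketch of the query-parameter and pipelined-first-message options above (the endpoint, token, and message shapes are illustrative, not taken from the MCP spec):

```python
import json
from urllib.parse import urlencode

def handshake_url(base: str, token: str) -> str:
    """Option 1: carry the credential as a query parameter on the
    HTTP GET that bootstraps the WebSocket connection."""
    return f"{base}?{urlencode({'access_token': token})}"

def pipelined_first_frames(token: str, first_request: dict) -> list[str]:
    """Option 3: send an auth message as the first frame, pipelined with
    the first real request so no extra round-trip is added."""
    auth = {"type": "auth", "authorization": f"Bearer {token}"}
    return [json.dumps(auth), json.dumps(first_request)]

# Illustrative usage: both frames go out back-to-back on connection open.
url = handshake_url("wss://example.com/mcp", "secret-token")
frames = pipelined_first_frames(
    "secret-token", {"jsonrpc": "2.0", "id": 1, "method": "initialize"}
)
first_frame = json.loads(frames[0])
```

Because the auth frame is pipelined with the first request, the server simply validates frame one before processing frame two; latency is unchanged versus a header.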

---

Argument 3: Only GET requests can be transparently upgraded to WebSocket (other HTTP methods are not supported for upgrading), meaning that some kind of two-step upgrade process would be required on a POST endpoint, introducing complexity and latency.

Response 3: Unless I'm missing something, this argument seems totally bewildering, nonsensical, and irrelevant. It suggests a lack of familiarity with what the WebSocket protocol is for. The semantics of a WebSocket connection are orthogonal to the semantics of HTTP GET or HTTP POST. There is no logical concept of upgrading a POST request to a WebSocket connection, nor is there a need for such a concept. MCP is a new protocol that can function however it needs to. There is no benefit to trying to constrain your conceptualization of its theoretical use of WebSockets to fit within the semantics of any other HTTP verbs. In fact, the only relationship between WebSockets and HTTP is that WebSockets utilizes standard HTTP only to bootstrap a connection, after which point it stops speaking HTTP over the wire and starts speaking a totally distinct binary protocol instead. It should be conceptualized as more analogous to a TCP connection than an HTTP connection. If you are thinking of WebSockets in terms of REST semantics, you have not properly understood how WebSockets differs, nor how to utilize it architecturally.

Since the logical semantics of communication over a WebSocket connection in an MCP server are functionally identical to how the MCP protocol would function over STDIN/STDOUT, the assertion that you would need some kind of two-step upgrade process on a POST endpoint is just false, because there would not exist any POST endpoint for you to have interacted with in the first place, and if one did exist, it would serve some other purpose unrelated to the actual WebSocket connection.

---

In my view, the right way to conceptualize WebSocket in MCP is as a drop-in, mostly transparent alternative to STDIO. Once the WebSocket connection is established, the MCP client/server should be able to speak literally EXACTLY the same protocol with each other as they do over STDIO.
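That drop-in equivalence can be sketched in a few lines: the JSON-RPC payload encoding is identical for both transports, and only the byte carrier (newline-delimited stdin/stdout vs. WebSocket text frames) changes. A minimal Python sketch of the shared framing:

```python
import json

def encode_message(msg: dict) -> str:
    """Serialize a JSON-RPC message. Over stdio this string is written
    followed by a newline; over WebSocket it is sent as one text frame.
    The payload is byte-for-byte the same either way."""
    return json.dumps(msg, separators=(",", ":"))

def decode_message(raw: str) -> dict:
    """Parse a received line (stdio) or text frame (WebSocket)."""
    return json.loads(raw)

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
wire = encode_message(request)
```

A transport layer built this way lets the MCP client/server logic stay completely unaware of which carrier is underneath.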

_raz 1 days ago [-]
Thanks, very nice! A very explanatory write-up
nyclounge 13 hours ago [-]
> “RPC-like use of WebSocket adds overhead” – Actually, WebSocket reduces overhead in high-frequency RPC-like interactions by keeping a persistent connection.

Is this really true? Thought the whole reason to use SSE is that it is more lightweight than WebSocket?

ghoshbishakh 19 hours ago [-]
The "Why not WebSocket?" arguments in the PR ar hilarious. Also, these arguments look AI generated which tells something.
shivawu 1 days ago [-]
Obviously the article is making valid points. But a recent epiphany I had is, things by default are just mediocre but working. Of course the first shot at this problem is not going to be very good, very much like how the first version of JavaScript was a shitshow, and we'll spend years paying down the technical debt. In order to force a beautiful creation, significant effort and willpower need to be put in place. So I'd say I'm not surprised at all; this is just how the world works, in most cases.
keithwhor 1 days ago [-]
I think this is a cop out. OpenAI literally published a better integration spec two years ago, stored on `/.well-known/ai-plugin.json`. It just gave a summary of an OpenAPI spec, which ChatGPT could consume and then run your functions.

It was simple and elegant, the timing was just off. So the first shot at this problem actually looked quite good, and we're currently in a regression.

traviscline 1 days ago [-]
Thanks for this, I’ve been feeling similarly.

I’m working on some Go programs/tools with the explicit goal of describing existing servers in a language neutral manner to try to get some sanity into the mix.

I was reenergized to pick this back up because Google is working on a version so I want to get these tools ready.

Open to ideas and input, have been noodling on it for a bit now, lots not in form to share but figured I’d share early:

https://github.com/tmc/mcp

traviscline 1 days ago [-]
In the current state you can insert “mcpspy” in front of a server and it intercepts and streams out a plain text format that’s nice for humans and machines. There’s also a replay tool that emulates previous traffic, including in mock client and server modes, and a diffing program that is mcp protocol aware.

Oh, and most importantly, a vim syntax plugin for the .mcp file format.

traviscline 1 days ago [-]
https://github.com/tmc/mcp/blob/next/cmd/mcpdiff/testdata/sc...

This is what the tests look like, for both the tools and to validate the servers.

_raz 1 days ago [-]
I had to take a break from extending, our go LLM wrapper, https://github.com/modfin/bellman with mcp to write the blog entry. So some sort of server-like-thing will be added soon
DGAP 1 days ago [-]
I assume the docs, code, and design were done mostly by AI.
1 days ago [-]
sensanaty 23 hours ago [-]
Man I thought I was crazy for a bit there. At work I've had some people hyping up their MCP stuff, and after looking at the docs I was bewildered by what I was reading, it was clear AI slop with 0 thought put to it.

Glad to see the sentiment isn't as rare as I thought.

punnerud 1 days ago [-]
I don’t agree with MCP being a bad standard, remember it’s supposed to be as simple and easy as possible to not take up a lot of tokens and for the LLM to use.

More complex stuff you can build on the “outside”. So keeping it local seems ok, because it’s just the LLM facing part.

quasarj 1 days ago [-]
> Be honest... when was the last time you ran pip install and didn't end up in dependency hell?

Hasn't been an issue in at least 5 years. Maybe 10. Doubly so now that we're all using uv. You _are_ using uv, right?

mountainriver 1 days ago [-]
MCP is in a race to be valuable at all. Smarter agents will have no use for it
foobahhhhh 1 days ago [-]
Because they can design a protocol in the blink of an eye? They can read docs, play with an API and then figure out how to call it?
mountainriver 2 hours ago [-]
Not sure about designing a protocol, but they will be able to use things like we do
1 days ago [-]
dvorka 8 hours ago [-]
"In the good old days, it was a good practice to run a new protocol proposal through some standards bodies like W3C or OASIS, which was mostly a useful exercise. Is the world somewhere else already, or would it be a waste of time?"
ethical_source 1 days ago [-]
You must understand that when you deal with AI people, you deal with children. Imagine the author of the spec you're trying to implement is a new grad in San Francisco (Mission, not Mission Street, thanks).

He feels infallible because he's smart enough to get into a hot AI startup and hasn't ever failed. He's read RFC 793 and 822 and 2126 and admired the rigor but can't tell you why we have SYN and Message-ID or what the world might have been had alternatives won.

He has strong opinions about package managers for the world's most important programming languages. (Both of them.) But he doesn't understand that implementation is incidental. He's the sort of person to stick "built in MyFavoriteFramework" above the food on his B2B SaaS burrito site. He doesn't appreciate that he's not the customer and customers don't give a fuck. Maybe he doesn't care, because he's never had to turn a real profit in his life.

This is the sort of person building perhaps the most important human infrastructure since the power grid and the Internet itself. You can't argue with them in the way the author of the MCP evaluation article does. They don't comprehend. They CANNOT comprehend. Their brains do not have a theory of mind sufficient for writing a spec robust to implementation by other minds.

That's why they ship SDKs. It's the only thing they can. Their specs might as well be "Servers SHOULD do the right thing. They MUST have good vibes." Pathetic.

God help us.

phillipcarter 1 days ago [-]
…what? Literally nothing you wrote is accurate.
ethical_source 1 days ago [-]
You'll come around to my perspective in time. Don't take it personally. This generation isn't any worse than prior ones. We go through this shit every time the tech industry turns over.
phillipcarter 1 days ago [-]
The people who built MCP are seasoned software engineers, as are most folks who work for these labs. What are you even on about?
owebmaster 1 days ago [-]
The people building MCP are in the same cohort as the Doge "hackers". And we can see the result.
phillipcarter 8 hours ago [-]
Not even a remotely accurate statement.
ethical_source 1 days ago [-]
LOL
phillipcarter 1 days ago [-]
Okay, well, clearly you have some funny beliefs, and I won’t try to convince you otherwise. Just think first before posting weird screeds with no basis in reality next time.
ethical_source 1 days ago [-]
> weird

Adj, something the speaker wants the audience to dislike without the speaker being on the hook for explaining why.

> screed

Noun, document the speaker doesn't like but can't rebut.

It's funny how people in the blue tribe milieu use the same effete vocabulary. I'll continue writing at the object level instead of affecting seven zillion layers of affected fake kindness, thanks.

phillipcarter 8 hours ago [-]
You must be a wonderful person to work with.
auggierose 1 days ago [-]
I think you are wrong, but I upvoted anyway because it is so funny :-)
TZubiri 1 days ago [-]
"Why do I need to implement OAuth2 if I'm using HTTP as transport, while an API key is enough for stdio?"

Because one is made for local and the other for connecting through the internet.

Seattle3503 1 days ago [-]
You need something like OAuth because you don't want your end users generating API keys for every service they want to use via LLM.
deadbabe 1 days ago [-]
Maybe we should though
foobahhhhh 1 days ago [-]
I hate API keys. Get a horrible feeling when I see one. When will this have to expire and how do I remember to recycle it. And if it doesn't expire that is an issue too.
vmaurin 1 days ago [-]
It will probably never work. Companies have spent probably the last decade(s?) closing everything on the Internet:

* no more RSS feeds

* paywalls

* the need to have an "app" to access a service

* killing open protocols

And all of a sudden, everyone will expose their data through simple API calls?

Seattle3503 1 days ago [-]
Indeed I think a lot of companies will hate the idea of losing their analytics and app mediated control over their users.

I see it working in a B2B context where customers demand that their knowledge management systems (ticketing, docs, etc...) have an MCP interface.

auggierose 1 days ago [-]
I think it is a huge opportunity to dethrone established players that don't want to open up that way. Users will want it, and whoever gives it to them, wins.
neuroelectron 1 days ago [-]
MCP is the moat to keep small players outside of the AI market. Not only does implementing it require a team, it is a tarpit of sabotage, where logging and state are almost impossible to track.
triyambakam 1 days ago [-]
Have you tried it though? There are sdks where you can set up logging and MCP server or client in a few lines. Pydantic AI and Logfire as one example
neuroelectron 1 days ago [-]
Yes, there are SDKs that abstract away some of the setup. But what exactly is being logged? Where is the data going? How tamper-proof is that logging? How is the network communication implemented? How do you check those logs? What exactly is being sent through the line? It’s hard to audit, especially without deep visibility into the underlying layers which include binary blobs and their tokens for trust. How do you model internal state? How do you write regression tests?
lelanthran 1 days ago [-]
SDKs aren't a spec.

A spec is what I use to write an SDK.

jes5199 21 hours ago [-]
I had the idea that maybe it was actually a flytrap for large companies! force them to waste cycles chasing a moving target so they don’t even notice they’re being leapfrogged
esafak 1 days ago [-]
MCP is simple, as protocols go.
qldp 14 hours ago [-]
first off, very well done. this article captures exactly the uneasiness i've felt as an mcp implementer.

doing seemingly in-the-box, mundane things like asking the server to dynamically register a new resource after a tool call yields new local files is met with surprising errors like "resources cannot be registered after transport connection."

i reached for the official kotlin sdk, found it did not work (mcp clients refused stdio comms), looked at the source and saw that the transport layer had recently been converted to a proprietary implementation, leaving commented-out code in-place that showed the previous transport details. reimplementing the former, assumedly-functional interface yielded the same broken stdio behavior, and everything being package-private meant i couldn't easily modify sdk components to fiddle with a solution without rewriting the entire transport mechanism. i wasn't willing to do this on a lark and hope the rest of the sdk behaved as promised, so my team is now stuck with a typescript mcp server that no one is comfortable maintaining.

what's really concerning is that openai threw in the towel on a competing standard, so we're now being corralled into accepting mcp as the only reliable tool interface, because every llm (frontier, and derivatives trained on these models) will end up conforming to mcp whether they intend to or not.

i haven't yet mentioned that there's an implicit hierarchy in the three capabilities exposed to mcp clients/servers -- tools above resources and resources above prompts, the latter of which is flat-out ignored by clients. the instructions aspect of server initialization was the only reliable way to bootstrap context with actionable exemplars, and that's just a big, inlined markdown document.

all of that said, the mcp contract is not pretty, but it works. in ~200 lines of code, i can spin up a wrapper over existing APIs (web and/or local binaries) and provide a workable plugin that adds real value to my team's day-to-day activities. mcp hasn't really promised more than that, and what it's promised, it's delivered.
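for a sense of scale, here's a rough sketch of what the core of such a wrapper boils down to (method names mirror the mcp spec; the tool list and everything else is a placeholder, not the official sdk):

```python
import json

# Placeholder tool registry -- in a real wrapper these would delegate to
# existing web APIs or local binaries.
TOOLS = [{"name": "echo", "description": "Echo back the input text"}]

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to a result or error response."""
    method = request.get("method")
    rid = request.get("id")
    if method == "initialize":
        result = {"protocolVersion": "2025-03-26",
                  "capabilities": {"tools": {}},
                  "serverInfo": {"name": "demo", "version": "0.0.1"}}
    elif method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        args = request.get("params", {}).get("arguments", {})
        result = {"content": [{"type": "text", "text": args.get("text", "")}]}
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}

# The stdio server loop is then just: read a line from stdin,
# json.loads it, call handle(), json.dumps the reply to stdout.
resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
err = handle({"jsonrpc": "2.0", "id": 2, "method": "bogus/method"})
```

everything beyond this -- capability negotiation, notifications, resources -- is layered on the same dispatch shape.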

bosky101 1 days ago [-]
I drew a cartoon/satire on mcp titled zombie prompting. You can find the original here https://x.com/0xBosky/status/1906356379486679521

Tldr; it's a json array passed to llms. Swagger would have sufficed. How and why you come up with the array shouldn't matter. We shouldn't need 10000 redundant servers.

Aperocky 1 days ago [-]
MCP is a microcosm of LLM.

Everything looks great works snappy and fast, until you look deeper inside or try to get it to do more complex stuff.

fullstackchris 13 hours ago [-]
Well... if the author wants websockets, why doesn't the author just do it? They state clearly in the pull request that the community is totally free to make their own.

My guess is one will crop up within the next few months...

Speaking from the TypeScript side of things...

I will say the documentation is indeed garbage - it still includes code snippets of APIs / source code examples that don't exist anymore. Also, the choice to use zod types is, in my opinion, way over the top... the fact that I need to import a third party library to write an MCP server is wild - when you're already writing in a typed language (TypeScript)... (and yes, I know the other advantages zod provides)

Otherwise it's simple enough to get started, if just tinkering around.

dboreham 1 days ago [-]
People love to invent new stuff, even when said stuff already exists. Other people love to embrace said unnecessary new stuff without question.
DanHulton 8 hours ago [-]
This is honestly kind of hilarious. I fell briefly in love with SSE last summer and started to write a chat/game web server using them, only to step on every rake the author describes and attempt nearly every band-aid solution the MCP protocol implements, only to throw my hands up at the end, disgusted with the mess I made, and embrace WebSockets instead. After about a day's refactor, I had about three-quarters less code that was _much_ easier to reason about, and didn't have several key limitations I was uneasy about in my SSE implementation.
quantadev 1 days ago [-]
MCP was invented by some very young LLM experts probably with limited experience in "protocol" design. They'll probably see this article you wrote criticizing it and realize they made a mistake. I bet there's a way to wrap that stdio stuff with WebSockets, like was recommended in the blog/article.

Frankly I'm not sure why an ordinary REST service (just HTTP posts) wasn't considered ok, but I haven't used MCP yet myself.

What MCP got right was very powerful of course which I'd summarize as giving all AI-related software the ability to call functions that reside on other servers during inference (i.e. tool calls), or get documents and prompt templates in a more organized way where said docs are specifically intended for consumption by AIs residing anywhere in the world (i.e. on other servers). I see MCP as sort of a 'function call' version of the internet where AIs are doing the calling. So MCP is truly like "The Internet for AIs". So it's huge.

But just like JavaScript sucks bad, yet we run the entire web on it, it won't be that bad if the MCP protocol is jank, as long as it works. Sure would better to have a clean protocol tho, so I agree with the article.

huqedato 1 days ago [-]
MCP is an emerging technology. The mess is unavoidable for at least a year or so.
rvz 1 days ago [-]
> However, I'm astonished by the apparent lack of mature engineering practices.

Exactly.

MCP is one of the worst 'standards' that I have seen come out from anywhere since JSON Web Tokens (JWTs) and the author rightfully points out the lack of engineering practices of a 'standard' that is to be widely used like any properly designed standard with industry-wide input.

> Increased Attack Surface: The multiple entry points for session creation and SSE connections expand the attack surface. Each entry point represents a potential vulnerability that an attacker could exploit.

JWTs have this same issue with multiple algorithms to use, including the horrific 'none' algorithm. Now we have a similar issue with MCP, with multiple entry points to choose from, which means more ways to attack the protocol.

This one is the most damning.

> Python and JavaScript are probably one of the worst choices of languages for something you want to work on anyone else's computer. The authors seem to realize this since all examples are available as Docker containers.

Another precise point and I have to say that our industry is once again embracing the worst technologies to design immature standards like this.

The MCP spec appears to be designed without consideration for security or with any input from external companies like a normal RFC proposal should and is quite frankly repeating the same issues like JWTs.

neuroelectron 1 days ago [-]
I think it's clear that they want a proprietary solution that takes a year or more for others to copy. That gives them another year head start on the competition.
danielbln 1 days ago [-]
Who is "they"?
neuroelectron 1 days ago [-]
The AI houses buying up the entire market of GPUs. Have you heard about them?
pixl97 1 days ago [-]
This is paranoid drivel....

Tell me which is more likely.

1. There is a cabal of companies painstakingly working together to make the most convoluted software possible from scratch so they can dominate the market.

or

2. A few people threw together a bit of code to attempt to get something working without any deep engineering or systematic view of what they were trying to accomplish, getting something to work well enough that it took off quickly in a time where everyone wants to have tool use on LLMs.

I've been on the internet a long time and number 2 is a common software paradigm on things that are 'somewhat' open and fast moving. Number 1 does happen but it either is started and kept close by a single company, or you have a Microsoft "embrace, extend, extinguish" which isn't going on here.

const_cast 20 hours ago [-]
2 works when you're making software, but not when you're making a specification.

The entire point of a specification is that it's well thought out. You SHOULD be considering ways it can be misused, vulnerabilities that might sneak into implementations.

neuroelectron 1 days ago [-]
3. The internal & enterprise models are better and not based on Python.
owebmaster 1 days ago [-]
there is a 1.5 option: a VC-funded company decides what they want to achieve and inexperienced engineers come up with a bugged implementation (that still focuses on what the VC-funded company wanted, in this case more LLM calls with bloated context)
1 days ago [-]
artursapek 21 hours ago [-]
It sounds like they vibe coded their protocol
nicomt 1 days ago [-]
The post misses the mark on why a stateless protocol like MCP actually makes sense today. Most modern devs aren’t spinning up custom servers or fiddling with sticky sessions—they’re using serverless platforms like AWS Lambda or Cloudflare Workers because they’re cheaper, easier to scale, and less of a headache to manage. MCP’s statelessness fits right into that model and makes life simpler, not harder.

Sure, if you’re running your own infrastructure, you’ve got other problems to worry about—and MCP won’t be the thing holding you back. Complaining that it doesn’t cater to old-school setups kind of misses the point. It’s built for the way things work now, not the way they used to.

progbits 1 days ago [-]
It's not really stateless. How do you want to support SSE or "Streamable HTTP" on your lambda? Each request will hit a new random worker, but your response is supposed to go on some other long-running SSE stream.

The protocol is absolute mess both for clients and servers. The whole thing could have been avoided if they picked any sane bidirectional transport, even websocket.

halter73 1 days ago [-]
> Each request will hit a new random worker, but your response is supposed to go on some other long-running SSE stream.

It seems your knowledge is a little out of date. The big difference between the older SSE transport and the new "Streamable HTTP" transport is that the JSON-RPC response is supposed to be in the HTTP response body for the POST request containing the JSON-RPC request, not "some other long-running SSE stream". The response to the POST can be a text/event-stream if you want to send things like progress notifications before the final JSON-RPC response, or it can be a plain application/json response with a single JSON-RPC response message.

If you search the web for "MCP Streamable HTTP Lambda", you'll find plenty of working examples. I'm a little sympathetic to the argument that MCP is currently underspecified in some ways. For example, the spec doesn't currently mandate that the server MUST include the JSON-RPC response directly in the HTTP response body to the initiating POST request. Instead, it's something the spec says the server SHOULD do.

Currently, for my client-side Streamable implementation in the MCP C# SDK, we consider it an error if the response body ends without a JSON-RPC response we're expecting, and we haven't gotten complaints yet, but it's still very early. For now, it seems better to raise what's likely to be an error rather than wait for a timeout. However, we might change the behavior if and when we add resumability/redelivery support.

I think a lot of people in the comments are complaining about the Streamable HTTP transport without reading it [1]. I'm not saying it's perfect. It's still undergoing active development. Just on the Streamable HTTP front, we've removed batching support [2], because it added a fair amount of additional complexity without much additional value, and I'm sure we'll make plenty more changes.

As someone who's implemented a production HTTP/1, HTTP/2 and HTTP/3 server that implements [3], and also helped implement automatic OpenAPI Document generation [4], no protocol is perfect. The HTTP spec misspells "referrer" and it has a race condition when a client tries to send a request over an idle "keep-alive" connection at the same time the server tries to close it. The HTTP/2 spec lets the client just open and RST streams without the server having any way to apply backpressure on new requests. I don't have big complaints about HTTP/3 yet (and I'm sure part of that is that a lot of the complexity in HTTP/2 was properly handled by the transport layer, which for Kestrel means msquic), but give it more time and usage and I'm sure I'll have some. That's okay though, real artists ship.

1: https://modelcontextprotocol.io/specification/2025-03-26/bas...

2: https://github.com/modelcontextprotocol/modelcontextprotocol...

3: https://learn.microsoft.com/aspnet/core/fundamentals/servers...

4: https://learn.microsoft.com/aspnet/core/fundamentals/openapi...
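To make the Streamable HTTP shape described above concrete, here's a minimal sketch of the client-side step: the POST response body may be text/event-stream, and the JSON-RPC response for the posted request is one of its events. The body below is illustrative, not taken from the spec:

```python
import json

def jsonrpc_messages_from_sse(body: str):
    """Yield JSON-RPC messages from an SSE body (data: lines only).
    Multi-line data fields within one event are joined per SSE rules."""
    for event in body.split("\n\n"):
        data_lines = [line[5:].lstrip() for line in event.splitlines()
                      if line.startswith("data:")]
        if data_lines:
            yield json.loads("\n".join(data_lines))

# Illustrative stream: a progress notification, then the JSON-RPC
# response to the initiating POST (the message carrying an "id").
sse_body = (
    'data: {"jsonrpc":"2.0","method":"notifications/progress",'
    '"params":{"progress":1}}\n\n'
    'data: {"jsonrpc":"2.0","id":1,"result":{"ok":true}}\n\n'
)
messages = list(jsonrpc_messages_from_sse(sse_body))
response = next(m for m in messages if "id" in m)
```

A client that treats the stream ending without such an `id`-bearing message as an error matches the behavior described above.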

progbits 17 hours ago [-]
Client is allowed to start a new request providing sessionid which should maintain the state from previous request.

Where do you store this state?

stevev 1 days ago [-]
One major obstacle to grasping high-level abstractions and their implementations lies in poorly designed systems or language limitations. At this stage, any human-produced effort—be it documentation, explanations, or naming—should be reviewed by AI. Language models often excel at crafting clearer analogies and selecting more meaningful or intuitive naming conventions than we do. In short, let LLMs handle the documentation.

— written by ai

stalfosknight 1 days ago [-]
I thought this was about the Master Control Program at first.
homarp 1 days ago [-]
https://tron.fandom.com/wiki/Master_Control_Program
1 days ago [-]
moralestapia 1 days ago [-]
Context is stdin and stdout.

"It kind of breaks the Unix/Linux piping paradigm using these streams for bidirectional communication."

Uhm ... no? They were meant for that.

But the rest of the critique is well founded. "Streamable HTTP" is quite an amateurish move.

kelnos 1 days ago [-]
> They were meant for that.

No they weren't. If we look at it from the perspective of pipelines (and not interactive programs that take input directly from the user and display output on the screen), stdin is for receiving data from the program in the pipeline before you, and stdout is for sending data to the thing in the pipeline after you. That's not bidirectional, that's a unidirectional flow.

moralestapia 1 days ago [-]
You are wrong.

STDIN means Standard INPUT.

STDOUT means Standard OUTPUT.

There is no design/hardware/software limitation to reading and writing to them at the same time. That's your bidirectional channel with that one process.
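For illustration, here's a minimal sketch of that bidirectional channel over a child process's stdin/stdout (the child here is a hypothetical one-line JSON echo server, the same shape as MCP's stdio transport):

```python
import json
import subprocess
import sys

# Hypothetical child process: reads one JSON object per line from stdin,
# writes one JSON reply per line to stdout.
SERVER = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    print(json.dumps({'echo': req['msg']}), flush=True)\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# Write a request to the child's stdin and read its reply from stdout:
# both directions at once, over the same pair of pipes, no socket needed.
proc.stdin.write(json.dumps({"msg": "hello"}) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
print(reply)  # {'echo': 'hello'}

proc.stdin.close()
proc.wait()
```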

>stdin is for receiving data from the program in the pipeline before you, and stdout is for sending data to the thing in the pipeline after you

Yes, and you took that from my comment here: https://news.ycombinator.com/item?id=43947777

Did you just want to ratify my argument, or is there something else you want to add?

OJFord 1 days ago [-]
I think 'bidirectional' is unclear there, they really mean a back and forth dialogue, interactive bidirectional communication. Which, yeah, a socket (as they said) seems a better choice.
moralestapia 1 days ago [-]
I think stdin and stdout are meant to always be piped forward, and a program further down the pipe cannot modify a tool earlier in the pipeline; maybe that's what he's trying to convey with "bidirectional"?
OJFord 22 hours ago [-]
I don't think so, because it's a description of a system which does work.

But it's more like an HTTP API: writing or requesting some data, getting a response, doing something else with that, and so on. Whereas typically with stdin/stdout you're doing something more like `generate-data | transform | transform2 | store`, manipulating an initial input, not making decisions and providing further input based on the output of earlier input. Not to say you can't, but it does seem a bit weird to me too.

(To be fair I suppose a shell is an obvious counter-example. Or anything that launches an interactive prompt or interpreter.)

elesbao 1 days ago [-]
my dude really got angry but forgot that almost all cloud message queue offerings over HTTP work like this (minus SSE). Eventually MCP will take the same route as WS, which started clunky (the HTTP upgrade handshake was standard but not often used) and evolved. Then it will migrate to some other way of doing remote interfaces, such as gRPC, REST, and so on.
zbentley 1 days ago [-]
I mean…the cloud message queues that use HTTP are not good examples of quality software. They all end up being mediocre to poor on every axis. They're not generalizable enough to be high-quality low-level components in complex data routing (e.g. SQS's design basically precludes rapid redelivery on client failure, and it resists heterogeneous workloads by requiring an up-front redelivery/dead-letter timeout). Simultaneously, HTTP's statelessness at the client makes extremely basic use cases flaky: consumer acknowledgment/"pop" failures are hard to attribute to server-side issues, incorrect client behavior, or partitions in the "conceptual" consumer connection…"conceptual" because that connection doesn't actually exist, which is the root of all these problems. Transactionality between stream operations, too, is either a hell-no or a hella-slow (requiring all the hoop-jumping mentioned in TFA for clients' operations to find the server session that "owns" the transaction's pseudo-connection) if built on top of HTTP.

In other words, you can’t emulate a stateful connection on top of stateless RPC—well, you can, but nobody does because it’d be slow and require complicated clients. Instead, they staple a few headers on top of RPC and assert that it’s just as good as a socket. Dear reader: it is not.

This isn’t an endorsement of AMQP 0.9 and the like or anything. The true messaging/streaming protocols have plenty of their own issues. But at least they don’t build on a completely self-sabotaged foundation.

Like, I get it. HTTP is popular and a lot of client ecosystems balk at more complex protocols. But in the case of stateful duplex communication (of which queueing is a subset), you don’t save on complexity by building on HTTP. You just move the complexity into the reliability domain rather than the implementation domain.
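The "pseudo connection" described above can be sketched as a server-side session table keyed by an ID the client must echo back on every request (the names here are hypothetical; this is the shape of the problem, not any particular product's API):

```python
import uuid

# In-memory session table. Every "stateful" operation over stateless HTTP
# has to round-trip through a lookup like this on each request.
sessions = {}

def handle_request(session_id, payload):
    """Simulate one stateless HTTP request against server-side session state."""
    if session_id is None or session_id not in sessions:
        # There is no live connection to lean on: mint a session and hope
        # the client echoes the ID back. If it doesn't, the state is orphaned.
        session_id = uuid.uuid4().hex
        sessions[session_id] = {"history": []}
    state = sessions[session_id]
    state["history"].append(payload)
    return session_id, {"seen_so_far": len(state["history"])}

# Two requests "on the same connection" -- really two independent requests
# stitched together only by the echoed session ID.
sid, resp1 = handle_request(None, {"msg": "hello"})
_, resp2 = handle_request(sid, {"msg": "again"})
print(resp1, resp2)  # {'seen_so_far': 1} {'seen_so_far': 2}
```

Everything a TCP connection gives you for free (liveness, ordering, teardown) becomes the application's problem: the complexity moves into the reliability domain rather than the implementation domain.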

petesergeant 1 days ago [-]
This all sounds valid, but it’s also the least interesting part of the whole thing. As a developer I’m expecting to be able to reach for a framework that’ll just abstract away all the weird design decisions that he mentions.
kelnos 1 days ago [-]
You're not the intended audience for this blog post. The people who care about this are the kinds of people who have to implement the protocol on either end, and deal with all of those complexities.

You won't be able to fully insulate yourself from those complexities, though. Unnecessary complexity causes user-visible bugs and incompatibilities. You want to reach for a framework that will abstract all this stuff away, but because of poorly-designed protocols like MCP, those frameworks will end up being more unreliable than they need to be, in ways that will leak out to you.

_raz 1 days ago [-]
Well put
quantadev 1 days ago [-]
Just like I saw with protocols like ActivityPub and IPFS, what happens is the developers of these protocols do a couple of reference implementations in their favorite languages, which work, but then when they write the actual "spec" of what they think they did in the code, they never get it fully correct, or even if they do it's messy, incomplete, or not kept up to date.

So as long as you're a developer working in one of those two languages, you just take their code and run it, and all is fine. However, when someone comes along and tries to implement the protocol in a brand-new language, they discover that the spec is insufficient and horrible, and attempts to build from it are therefore doomed to fail.

I'm not saying MCP has already reached this level of chaos, but I'm just saying this is the failure pattern that's fairly common.

doug_durham 1 days ago [-]
Meanwhile I'm happily writing MCP servers that meet my personal needs and installing complex packages over pip. The author seems to have a very narrow concept of what makes acceptable software.
vlovich123 1 days ago [-]
I too love to close out bugs with “works for me on my machine”