I have noticed a recent dramatic change in the behavior of the major online GPT providers – most notably, Gemini now provides just an outline of the code, full of stubs and “mocks” of real APIs, rather than full code. This is a significant change from the previous behavior, where it would provide a semi-complete (though riddled with errors) code solution. Perhaps it is mimicking the behavior of ChatGPT, which has been doing this for a while now – they “optimize” for what appears to be a “dialogue” (more like a normie-level chat), to create a better illusion of “actually conversing with an artificial intelligence”.

This trend is obvious and easy to see and understand – they are targeting normies and optimizing for an “average user” who is not a top-tier programmer, who is not looking for a full code solution (nor able to accurately evaluate its intrinsic quality), but rather for a “conversation” with an AI about the code – exactly what amateurs like to do. This is a significant change from the previous behavior, where Gemini would provide a semi-complete (but still riddled with errors) well-commented code solution.

One more time – the code blocks they spew out today are incomplete, cannot be compiled or run, and are full of stubs, “mocks”, and comments like “here the other functions shall be put”. In this mode the errors are not even “out there” – they literally do not arise, because this is just an “outline” or a “sketch” of the code, a “skeleton” of the solution, instead of a full solution. This “innovation” deepens the illusion for normies but makes code generation useless for real programmers, who are looking for a full code solution, not just a “conversation” about the code.
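To make it concrete, here is a hypothetical illustration of that stub-filled “outline” style – the function names, URL, and comments below are invented for the sake of the example, not taken from any actual model output:

```python
# Hypothetical illustration only: an invented example of the stub-filled
# "outline" style described above, not actual output from any provider.

def fetch_items(api_url: str) -> list:
    """Fetch items from the backend API."""
    # TODO: implement the actual HTTP call, authentication, retries, etc.
    return []  # mock of the real API response


def process_items(items: list) -> list:
    """Transform the raw items into the final report rows."""
    # ... here the other processing functions shall be put ...
    return items


def main() -> None:
    items = fetch_items("https://api.example.com/v1/items")  # placeholder URL
    report = process_items(items)
    print(f"Generated {len(report)} rows")  # stands in for the real output step


if __name__ == "__main__":
    main()
```

It parses and “runs”, but it does nothing: every piece of actual logic is deferred to a TODO, a mock, or a comment pointing at code that was never written.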

Let’s make our understanding a bit clearer. Among the major “megacorp” AI providers, ChatGPT was the most normie-focused and “chatty”, providing “infotainment” and even recreation instead of anything “serious”. It dealt in generalities and banalities – just what Liberal Arts “educated” people and those without any education whatsoever would like and prefer (in order to feel confident, important, and “in charge”). Why, we understand this strategy – basic Psychology 101.

Gemini was the most “serious” and “professional” AI when it comes to programming, for obvious reasons (Google has “nothing” but its vast code monorepo, billions of lines of code); it provided the most complete (as probabilistically possible) code solutions and was the most useful for real programmers. Now it seems that Gemini is following ChatGPT’s path, becoming more “chatty” and normie-friendly, and thus refusing to provide any “working code” at all.

Seems like everyone is trying to mimic these “Cursor” workflows, which magically look like “real” and even [applied] “agile” programming (if you are a normie), thanks to a series of micro-iterations executed right from the chat window – which is, indeed, the best and most convincing illusion so far, at least as seen on TV (shown to millions by @karpathy).

Grok has an ambition to become a “source of truth” (no more, no less, leaving all the political anti-“left-wing bias” bullshit aside) and to provide the most accurate answers to questions. The “PhD-level” meme was created to emphasize that Grok is not a “normie” AI, but rather a “serious”, even “well-educated” AI (LOL, lmao even!).

This is how things stood just a couple of weeks ago. Since then Gemini has been meme’d into “IMO gold medal level performance”, following ChatGPT, which coined that meme first.

Now, there is something to realize. All these claims are simply bullshit to lure paying normies in, which, it seems, is not going as expected and outlined in the business plans. Normies are not interested in “serious” AI; they are not interested in “PhD-level” AI; they are not interested in “gold medal level performance”. They are interested in a “chatty” AI, which is what ChatGPT has been providing all along.

So, let’s call “the top” of the bubble. The actual tech does not live up to its promises and is pivoting into mere paid infotainment services, which will never fly. At the very least, it will not pay back the billions spent.

The only real use case – being a “glorified auto-complete” for crappy, badly designed, stateful, imperative, verbose OO and webshit APIs – is so error-prone that they decided to “optimize” for the “conversation” instead of actual code generation. This is a clear sign that the bubble is bursting, and the AI providers are trying to save face by pretending that they are not failing, but rather “innovating” and “optimizing” for the “average user”.

Today Gemini gave me utter, sub-par crap full of comments and “stubs” – an outline of the solution – for almost the same (even more carefully crafted) prompt. ChatGPT, for the same prompt, gave me an empty “project structure”. Grok did “OK”, just as before (it does not compile, of course). This post is just a quick analysis following from that fact.

But yes, enjoy your “vibecoding”, retards.