Vibecoding explained

https://karpathy.bearblog.dev/year-in-review-2025/ In this episode @karpathy blessed us all with another blogpost. While his wording is much more careful and even nuanced, there is still a lot of bullshit in it. It is way less outrageous bullshit than in the Friedman poocast and around that time, but still. Here are some excerpts: With vibe coding, programming is not strictly reserved for highly trained professionals, it is something anyone can do. … But not only does vibe coding empower regular people to approach programming, it empowers trained professionals to write a lot more (vibe coded) software that would otherwise never be written. ...

December 21, 2025 · lngnmn2@yahoo.com

LLMs: The "Good" Parts

Okay, let’s look at the “better side” of things. The good thing about using LLMs is that you do not have to deal with Google Search and any fucking Social Media. Imagine a painfully typical scenario – you want to clarify or better understand something you already vaguely knew or were at least aware of. You type a query into Google Search, and you get… a fucking SEO-fucked-up list of Ad-infested links to various web pages – either the largest social media containment boards (StackOverflow, Reddit, Medium) or some SEO’d blogs, where you are either greeted with a wall of text (usually pasted directly from tutorials and docs), ads, pop-ups, and other distractions, or some narcissistic asshole’s low-effort, over-verbose crappy verbiage about “how fucking smart he is”. ...

December 16, 2025 · lngnmn2@yahoo.com

And this is exactly how

Just like a spontaneous, “natural and organic” continuation of the previous post (which implicitly confirms that it has properly captured at least some aspect of reality [as it is]). I have had to delete some nice movies in order to download and try that over-hyped “5M downloads” Nvidia meme-model Nemotron-3-Nano-30B. I have a small set of highly sophisticated prompts which I use to measure the apparent quality of the generated slop from the 4 major data-center-sized LLMs. ...

December 16, 2025 · lngnmn2@yahoo.com

Idiots, Idiots Everywhere.jpg

Everything is broken and idiots are everywhere. There is a clown who is attention whoring, sorry, publicly arguing (and gaining a lot of unwarranted attention) that one shall vapecode in C. https://news.ycombinator.com/item?id=46207505 Basically, making such a claim is idiotic on so many levels that it is hard to know where to start. Almost the whole of classic non-bullshit programming language theory research is about how to correctly address C’s shortcomings and semantic issues, and how to avoid the problems inherent in the design of the language (and the ABI). ...

December 10, 2025 · lngnmn2@yahoo.com

Some Final Words

So, it seems like this is the time to somehow sum up the current AI hype (way through the roof) and its immediate and long-term consequences. First of all, a proper education – studying the fundamental underlying principles instead of particulars – which used to be an unofficial mantra of MIT, pays off again. One just sets particular constraints on a coding LLM and uses it as a whole-data-center-powerful constraint satisfaction engine that spews out a slop, which can then be used for rapid prototyping and minimum viable products. The properly constrained slop can even be used as the basis of a project, which then undergoes proper continuous improvement and refinement by a human expert (who knows the whys). ...
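
To make the “constraints” point concrete, here is a minimal sketch (mine, not the author’s setup: the OpenAI Python client, the model name and the prompt wording are all assumptions) of pinning a coding LLM down with explicit constraints before letting it spew:

```python
# Minimal sketch: use the LLM as a constraint-satisfaction engine by spelling
# out hard constraints up front. The client, model name and prompts are assumed
# placeholders, not anything from the post.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

constraints = """You are generating a throwaway prototype, nothing more.
Hard constraints:
- Python 3.12, standard library only, no external dependencies.
- Pure functions only; no global mutable state.
- Every public function gets a docstring and a doctest.
Violating any constraint makes the output useless."""

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": constraints},
        {"role": "user", "content": "Write a minimal CSV-to-JSON converter."},
    ],
)
print(resp.choices[0].message.content)
```

The output is still slop; the constraints merely narrow the space it gets spewed into, which is exactly the rapid-prototyping value described above.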

December 9, 2025 · lngnmn2@yahoo.com

The new Brahmanas

I think I have seen this before. Once in Varanasi, wandering around book stalls (most titles already “tourist books” – oversimplified and westernized “tantric” bullshit), I found a whole book by some local publisher which describes in minute detail one single Brahmanic ritual (an elaborate sacrifice) which lasts almost a whole day. Hundreds of ingredients are burned in a precise sequence, or rather a symphony of chants, motions, gestures (mudras) and many other elaborate details. The priests (brahmans) definitely knew what they were doing and why exactly this way is the only proper way. ...

December 3, 2025 · lngnmn2@yahoo.com

Aaand boom!

The thing I hate the most is when some of these fucking YouTube content “creators”, who decide to monetize AI coding clickbait with low-effort, subpar videos, say “aaand boom!” when another chunk of a slop has been spewed out by an AI. This “boom!” is an insult to the last 60 years of programming language research (including the math-based theory) and to the “old sages” who crafted their languages and standard libraries in the best possible, “just right”, perfect way (in the sense of “nothing more to take away”) by surveying all the available literature and non-bullshit papers and spending months in anguish and self-doubt. ...

November 12, 2025 · lngnmn2@yahoo.com

Bullshit, bullshit, bullshit

So, things begin to move a lot faster and get much bigger, and there is something to realize about this unprecedented AI bubble. We will consider only the underlying fundamental principles, not the particular implementation details, “architectures” and whatnot. There are four major aspects to any LLM – the training process, the “architecture” (the structural shape) of a model, the “post-training tuning” (lobotomy) of the model, and the inference process. ...

November 3, 2025 · lngnmn2@yahoo.com

My First LLM Experience

Today I am sentimental, so let’s reminisce a little about my first experience with LLMs. I found some early article about people using something called llama.cpp to run models locally on their machines. Some overconfident retard in another blogpost wrote that the “best model”, and “by far”, is Mistral “from Nvidia”, and it is supposed to be the best because it is allegedly from Nvidia (they have some partnership, investment, I suppose). So I compiled the code (old habits) and downloaded the model from Hugging Face. ...
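
(The post above used the compiled llama.cpp binary directly; purely as a hedged sketch of the same workflow, the llama-cpp-python bindings do the equivalent. The GGUF file name and parameters below are placeholders, not what was actually run back then.)

```python
# Sketch of the llama.cpp workflow via the Python bindings (llama-cpp-python).
# The model file, context size and prompt are placeholders; the original post
# compiled the C++ code and ran the binary directly.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # a quantized GGUF pulled from Hugging Face
    n_ctx=2048,
)

out = llm(
    "Q: What does llama.cpp actually do?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```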

October 10, 2025 · lngnmn2@yahoo.com

LLMs and AI so far

Let’s summarize the current state of Large Language Models (LLMs) and so-called “Artificial Intelligence” (AI) as of October 2025. They are all still just [estimated] probabilities of the next token, given the “context”. This implies no “knowledge” or “understanding” [of any kind] whatsoever. Nothing can be taken as “true” or even “correct” or “accurate”. All the talk about “knowledge in the weights” or “knowledge encoded within the network” is just bullshit. ...
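
A toy illustration of that claim (the vocabulary and the numbers are invented for the example): the model emits raw scores, softmax turns them into a probability distribution over the next token, and that distribution is all there is.

```python
import numpy as np

# Invented toy logits a model might emit for the context "the cat sat on the".
vocab = ["mat", "dog", "moon", "table"]
logits = np.array([3.2, 0.1, -1.5, 1.0])

# Softmax: raw scores become a probability distribution over the next token.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for token, p in zip(vocab, probs):
    print(f"P({token!r} | context) = {p:.3f}")

# Inference just samples (or takes the argmax) from this distribution, token after token.
next_token = np.random.choice(vocab, p=probs)
print("next token:", next_token)
```

No “knowledge”, no “understanding”, just a distribution to sample from.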

October 4, 2025 · lngnmn2@yahoo.com