LLMs: The "Good" Parts

Okay, let's look at the “better side” of things. The good thing about using LLMs is that you do not have to deal with Google Search or any fucking social media. Imagine a painfully typical scenario – you want to clarify or better understand something you already vaguely knew, or were at least aware of. You type a query into Google Search, and you get… a fucked-up, SEO-gamed list of ad-infested links to various web pages – either the largest social-media containment boards (StackOverflow, Reddit, Medium) or some SEO’d blogs, where you are either greeted with a wall of text (usually pasted directly from tutorials and docs), ads, pop-ups, and other distractions, or with some narcissistic asshole’s low-effort, over-verbose, crappy verbiage about “how fucking smart he is”. ...

December 16, 2025 · lngnmn2@yahoo.com

Idiots, Idiots Everywhere.jpg

Everything is broken and idiots are everywhere. There is a clown who is attention-whoring, sorry, publicly arguing (and gaining a lot of unwarranted attention) that one shall vibe-code in C. https://news.ycombinator.com/item?id=46207505 Basically, making such a claim is idiotic on so many levels that it is hard to know where to start. Almost the whole of classic non-bullshit programming-language theory research is about how to correctly address C’s shortcomings and semantic issues, and how to avoid the problems inherent in the design of the language (and its ABI). ...

December 10, 2025 · lngnmn2@yahoo.com

Some Final Words

So, it seems like this is the time to somehow sum up the current AI hype (way through the roof) and its immediate and long-term consequences. First of all, a proper education – studying the fundamental underlying principles instead of particulars, which used to be an unofficial mantra of MIT – pays off again. One just sets particular constraints on a coding LLM and uses it as a whole-data-center-powerful constraint-satisfaction engine that spews out slop, which can then be used for rapid prototyping and minimum-viable products. The properly constrained slop can even be used as the basis of a project, which then undergoes proper continuous improvement and refinement by a human expert (who knows the whys). ...

December 9, 2025 · lngnmn2@yahoo.com

The new Brahmanas

I think I have seen this before. Once in Varanasi, wandering around book stalls (most titles already “tourist books” – oversimplified and westernized “tantric” bullshit), I found a whole book by some local publisher which describes in minute detail one single Brahmanic ritual (an elaborate sacrifice) which lasts almost a whole day. Hundreds of ingredients are burned in a precise sequence, or rather a symphony of chants, motions, gestures (mudras) and many other elaborate details. The priests (brahmans) definitely knew what they were doing and why exactly this way is the only proper way. ...

December 3, 2025 · lngnmn2@yahoo.com

Aaand boom!

The thing I hate the most is when some of these fucking YouTube content “creators”, who decide to monetize AI coding clickbait with low-effort, subpar videos, say “aaand boom!” when another chunk of slop has been spewed out by an AI. This “boom!” is an insult to the last 60 years of programming-language research (including the math-based theory) and to the “old sages” who crafted their languages and standard libraries in the best possible way – “just right”, perfect in the sense of “nothing more to take away” – by surveying all the available literature and non-bullshit papers, and spending months of anguish and self-doubt. ...

November 12, 2025 · lngnmn2@yahoo.com

Bullshit, bullshit, bullshit

So, things begin to move a lot faster and get much bigger, and there is something to realize about this unprecedented AI bubble. We will consider only the underlying fundamental principles, not the particular implementation details, “architectures” and whatnot. There are four major aspects to any LLM model – the training process, the “architecture” (the structural shape) of a model, the “post-training tuning” (lobotomy) of the model, and the inference process. ...

November 3, 2025 · lngnmn2@yahoo.com

My First LLM Experience

Today I am sentimental, so let's reminisce a little about my first experience with LLMs. I found some early article about people using something called llama.cpp to run models locally on their machines. Some overconfident retard in another blog post wrote that the “best model”, and “by far”, is Mistral “from Nvidia”, and it is supposed to be the best because it is allegedly from Nvidia (they have some partnership or investment, I suppose). So I compiled the code (old habits) and downloaded the model from Hugging Face. ...

October 10, 2025 · lngnmn2@yahoo.com

LLMs and AI so far

Let’s summarize the current state of Large Language Models (LLMs) and so-called “Artificial Intelligence” (AI) as of October 2025. They are all still just [estimated] probabilities of the next token, given the “context”. This implies no “knowledge” or “understanding” [of any kind] whatsoever. Nothing can be taken as “true” or even “correct” or “accurate”. All the talk about “knowledge in the weights” or “knowledge encoded within the network” is just bullshit. ...
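The “just probabilities of the next token” claim above can be made concrete with a minimal sketch. At inference time a model maps the context to a vector of raw scores (logits) over its vocabulary, normalizes them with a softmax, and samples one token. The logit values and the three-token “vocabulary” below are made-up toy numbers for illustration only; real models emit on the order of 10^5 logits per step.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Draw one token id from the categorical distribution softmax(logits/T).

    This is the whole of 'generation': scores in, one sampled token out,
    repeated autoregressively with the new token appended to the context.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]        # softmax: sums to 1.0
    r = random.random()
    acc = 0.0
    for token_id, p in enumerate(probs):     # inverse-CDF sampling
        acc += p
        if r < acc:
            return token_id
    return len(probs) - 1                    # guard against float round-off

# Toy logits for a 3-token vocabulary (hypothetical numbers).
logits = [2.0, 0.5, -1.0]
token = sample_next_token(logits)
assert token in (0, 1, 2)
```

Note that nothing in this loop checks the sampled token against the world; “truth” never enters the computation, only relative scores.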

October 4, 2025 · lngnmn2@yahoo.com

Probabilistic bullshit

Look, ma, a new episode just dropped! This one is full of shit to the brim. Even more so than prof. Ellen Langer, who cannot stay within a context and claimed that 1+1 = 10 because in binary notation it is written as what looks like 10 in decimal… anyway, whatever. https://www.youtube.com/watch?v=MlmFj1-mOtg No, the brain ain’t computing any hecking probabilities. It is not a Bayesian machine. It is not a prediction machine. It is not a simulator. It is not a statistical engine. ...

September 30, 2025 · lngnmn2@yahoo.com

Prompt engineers, lmao

Time waits for no one, the race to the bottom accelerates faster than ever, and the “future” is now. Competition is severe and mostly meaningless, as in some third-world, criminal-infested ghetto. This is what LLMs turned our world into. So, let's “pee on” so-called “prompt engineers” – in the 4chan parlance, of course. Here is my benchmark prompt to evaluate the performance of LLMs. All the “simple” offline models fail miserably, and only Grok and Gemini can produce something adequate. Claude is also good, but it is supposed to be the best, being trained especially for code generation. ...

September 26, 2025 · lngnmn2@yahoo.com