Let the bubble burst, for Christ's sake!

I have noticed a recent, dramatic change in the behavior of the major online GPT providers – most notably, Gemini now provides just an outline of the code, full of stubs and “mocks” of real APIs, instead of the full code. This is a significant change from the previous behavior, where they would provide a semi-complete (but riddled with errors) code solution. Perhaps they are mimicking the behavior of ChatGPT, which has been doing this for a while now – “optimizing” for what appears to be more of a “dialogue” (more like a normie-level chat), to create a better illusion of “actually conversing with an artificial intelligence”. ...

August 3, 2025 · <lngnmn2@yahoo.com>

The Knowledge Work Bubble

We are living through a paradigmatic shift, the one described by Thomas Kuhn in “The Structure of Scientific Revolutions”. As I have mentioned many times, texts and even crappy code have become very, very cheap, just like processed junk food or low-effort street-food slop. This is the “shift” and the end of so-called “knowledge work” as we know it. At least it is the end of the pretentious “knowledge work”, where one merely pretends to be an expert in social settings, using very straightforward verbal and non-verbal cues to signal one's “knowledge” and “expertise”, just as a priest would have done in the not-so-distant past. ...

August 2, 2025 · <lngnmn2@yahoo.com>

Now What?

I understand a lot of complex things, maybe because I have spent my whole life trying to understand and explain the things around me, ever since I was 4 years old, when I used to name every single car on the road in the small Ukrainian steel-and-mining town where I was born. Understanding cannot be “outsourced” or even safely “delegated”. One will always end up with a sort of “tragedy of the commons”, like when sterilization and modern food technologies produced packaged foods which slowly but surely kill you. This is what happens when you “delegate” your own understanding. ...

July 13, 2025 · <lngnmn2@yahoo.com>

Software In The Era of AI

https://www.youtube.com/watch?v=LCEmiRjPEtQ and, of course, the No.1 spot on the Chuddie safe space https://news.ycombinator.com/item?id=44314423. Karpathy is shilling “Cursor” and other cloud-based metered AI services (which have to pay back their debts). He probably has an interest in it and in some other meme AI startups. Nothing to see here. Someday we should learn which marketing “genius” came up with this “winning strategy” – to meter every single token (byte) and try to sell this to corporations. Corporations do not want to be metered like that; they want to meter normies, the way cellular operators do, and they never use any normie plans themselves. ...

June 19, 2025 · <lngnmn2@yahoo.com>

Yes, it is time to scream and panic

There is something that has to be realized as soon as possible (which I recently experienced directly) – the conditional-probability-based “glorified autocomplete” can generate (with intelligent prompting) small pieces of code that are simply above, in intrinsic quality, 90% of all the related crap that can be found on Github, which easily extrapolates to 90% of the people in existence who identify themselves as “programmers”. Period. The code will be better than what high-dopamine, “laser-focused” (on Adderall or just amphetamines) but not-so-well-educated-on-fundamentals zealots could produce. Better than what academics, inexperienced in the realities of actual open-source software ecosystems, could even hope to write, no matter how good their theoretical understanding is. With very specific and properly constrained prompting, which takes into account the major results of the last 50 or 60 years of PL and CS research, especially in the field of math-based Functional Programming and Immutable Data Structures, one can get almost optimal (approaching perfection) small pieces of code, provided you understand what this theoretically-limited perfection actually looks like in this particular case, and why. ...
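
As a minimal illustration of what such an “approaching perfection” small piece might look like (my own sketch, assuming Haskell as the FP language; it is not code from this post or from any model): a total, pure function over immutable data, where the type states exactly what can happen and there is nothing left to take away.

```haskell
-- Hypothetical, minimal example (illustration only): a total, pure function
-- over immutable values. The Maybe type makes the single failure case
-- explicit, so nothing can crash and nothing can be removed.
safeDiv :: Integer -> Integer -> Maybe Integer
safeDiv _ 0 = Nothing            -- the only failure case, stated in the type
safeDiv n d = Just (n `div` d)   -- total everywhere else

main :: IO ()
main = do
  print (safeDiv 10 2)  -- Just 5
  print (safeDiv 10 0)  -- Nothing
```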

June 12, 2025 · <lngnmn2@yahoo.com>

Vibe coding explained

Reading less social media bullshit and using one's own well-trained mind sometimes helps to clarify complex and noisy topics. Here is why, in principle, any LLM coding (I cannot call this crap “programming” because there is no understanding involved) will always yield sub-par bullshit. LLMs operate on tokens, which are abstract numbers, if you will. Parts of words of a human language are associated with distinct tokens, and then a probabilistic graph-like structure is built over these tokens, trained with the fundamental back-propagation algorithm. ...
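
To make this concrete, here is a deliberately tiny sketch (my own illustration, not the post's code and not how a real LLM is implemented – the learned neural-network weights are replaced by a hard-coded table): word fragments are mapped to abstract integer ids, the “model” is nothing but conditional probabilities over the next id, and “prediction” is a lookup plus a choice, with no understanding anywhere.

```haskell
-- A toy, hypothetical "language model" over integer tokens (illustration only).
import           Data.List       (maximumBy)
import qualified Data.Map.Strict as M
import           Data.Ord        (comparing)

type Token = Int

-- A fixed vocabulary: each word fragment gets an abstract number.
vocab :: M.Map String Token
vocab = M.fromList [("the", 0), ("cat", 1), ("sat", 2), ("mat", 3)]

-- The "model": P(next token | previous token), with made-up weights.
nextDist :: M.Map Token [(Token, Double)]
nextDist = M.fromList
  [ (0, [(1, 0.6), (3, 0.4)])  -- after "the": probably "cat", maybe "mat"
  , (1, [(2, 1.0)])            -- after "cat": "sat"
  , (2, [(0, 1.0)])            -- after "sat": "the"
  ]

-- "Prediction" is a table lookup followed by picking the most probable id.
predict :: Token -> Maybe Token
predict t = fst . maximumBy (comparing snd) <$> M.lookup t nextDist

main :: IO ()
main = print (predict =<< M.lookup "the" vocab)  -- Just 1, i.e. "cat"
```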

June 6, 2025 · <lngnmn2@yahoo.com>

Enshittification Of Knowledge

There are some philosophical “ideals” which have been identified since antiquity, and whose attainment (or approximation) people have been striving for ever since. To see things as they really are. To do things in just the right way. To find an optimum, or “perfection”. Perfection has been famously defined as “when there is nothing else (more) to take away (to remove)”. The modern meme-based, socially-constructed (by the retarded majority) consensus frowns upon “perfectionism” and sees it as an inhibition to “getting shit done”. They are not wrong, though. Approaching perfection (finding a local optimum) is a very different process from just putting together some slop. Yes, indeed, “perfection is the enemy of good enough”. ...

June 3, 2025 · <lngnmn2@yahoo.com>

Bullshit Bullshit Everywhere

“The Darwin Gödel Machine: AI that improves itself by rewriting its own code” https://sakana.ai/dgm/ Here is what is actually going on. A model trained on a large amount of code is, in principle, no different from any other LLM – it is just a statistical model that predicts the next token based on the previous ones. It does not understand the code it spews out; it does not “know” what it is doing. These are just mathematical procedures (not even functions) – given an input encoded in a particular way, they produce an output, not even the same one for the same input. ...
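
A short sketch of that last point (again my own illustration, with made-up numbers): the model only turns internal scores into a probability distribution via softmax, and the output token is then sampled from that distribution, which is why the same input does not have to produce the same output; the “temperature” setting merely reshapes the distribution.

```haskell
-- Illustration only: softmax over hypothetical next-token scores ("logits").
-- The model yields a distribution to sample from, not a single fixed answer.
softmax :: Double -> [Double] -> [Double]
softmax temperature logits = map (/ total) exps
  where
    exps  = map (\x -> exp (x / temperature)) logits
    total = sum exps

main :: IO ()
main = do
  let logits = [2.0, 1.0, 0.5]   -- made-up scores for three candidate tokens
  print (softmax 1.0 logits)     -- a peaked distribution
  print (softmax 2.0 logits)     -- higher temperature: flatter, more "random"
```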

May 31, 2025 · <lngnmn2@yahoo.com>

Carmack On Ai

https://twitter.com/ID_AA_Carmack/status/1925710474366034326 I have read the notes. They are a mess. For me, Carmack, aside from being a legend, is a sort of Goggins of imperative, procedural programming, who learned everything by doing, without studying the theories first. His ultimate strength, it seems, is in focused doing, ploughing through a problem, if you will, without being exceedingly dramatic. Learning from experience (actual trials and errors, with quick feedback loops) and gradually improving his own “emergent” intuitive understanding – one's own mental model of how things should be done. ...

May 24, 2025 · <lngnmn2@yahoo.com>

Reasoning Models Don't Always Say What They Think

Another fresh piece of utter bullshit. https://www.anthropic.com/research/reasoning-models-dont-say-think Please, for fuck's sake, cut it out already. There are even those narcissistic @karpathy videos which show in great detail (we have to admit, he is really good at explaining things) that there is no “thinking” or “saying” or “reasoning”, just probability distributions encoded as a huge graph. At least be fucking consistent among yourselves (“I really understand AI and you don't” talking heads). ...

April 4, 2025 · <lngnmn2@yahoo.com>