DeepMind and OpenAI win Gold

Oh, look: DeepMind and OpenAI win gold at ICPC (codeforces.com). So, a model memorized a lot more convoluted stuff and was able to spit out coherent code. This is absolutely amazing, considering that it works at the level of syntax, that its “representation” captures only “possible relations among tokens”, and that it possesses no “understanding” or thinking, let alone any reasoning capacity whatsoever. It just picks one of the few next most probable tokens, given the previous “context” (the sequence of tokens “so far”). ...
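That “pick one of the few next most probable tokens” step can be sketched in a few lines. This is a toy illustration only: the vocabulary and the scores (logits) below are invented, standing in for what a real model would assign given a context.

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities (numerically stable form)."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

def next_token(logits, k=3):
    """Restrict to the top-k most probable tokens, then greedily take the best."""
    probs = softmax(logits)
    top_k = sorted(probs, key=probs.get, reverse=True)[:k]
    return top_k[0]

# Hypothetical scores for the context "def add(a, b): return a + "
logits = {"b": 4.0, "1": 1.5, "a": 1.0, "self": -2.0}
print(next_token(logits))  # "b"
```

No understanding is involved anywhere in this loop: the whole “decision” is an argmax over a probability table.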

September 18, 2025 · lngnmn2@yahoo.com

Defeating Nondeterminism, my ass

And while passing by… Yet another “look, look at us, we are soooo smart and clever, give us much more money just because we are so cool” article dropped: https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/. The lack of exact precision is not the fundamental issue here. Even if one manages to overcome the “numerical instability issues” and is able to always reproduce the same structural output from the same linguistic (or otherwise structured) input, the “hallucinations” and “subtle bullshitting” won’t go away in principle. ...
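For context, the nondeterminism the linked post worries about comes largely from the fact that floating-point addition is not associative, so summing the same numbers in a different order (as happens across GPU kernels and batch sizes) gives bit-different results. A minimal sketch of the effect:

```python
# Floating-point addition is not associative: the same mathematical sum,
# reduced in a different order, can give a different result.
a = (0.1 + 1e16) - 1e16   # 0.1 is lost to rounding against 1e16
b = 0.1 + (1e16 - 1e16)   # 0.1 survives
print(a, b)  # 0.0 0.1
```

Which, of course, only explains why outputs wobble; making them reproducible does nothing about whether they are true.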

September 11, 2025 · lngnmn2@yahoo.com

the LLM upanishad

To understand what non-deterministic, “syntax level” probabilistic models are actually producing (an illusion), we have to understand how the Mind (of an external observer) works, and how it produces Maya (which has been intuitively understood since the early Upanishads) – the ultimate illusion created by the Mind itself – an inner representation of the “outside” world, which the mind (and body) uses for “decision making”. The “outside” world is inherently complex, non-deterministic and “concurrent” at the level of “compositions” and, at the same “time”, deterministic enough at the level of the “most basic building blocks” (of biology, let’s say). “Simple” molecular structures (“small” molecules) are exactly the same, exact copies or clones of each other (otherwise everything would break), while larger molecular structures (like whole proteins and their compositions) may have “flaws” or “mutations” or just simple “kinks” – a slightly different shape or a resulting form. ...

September 11, 2025 · lngnmn2@yahoo.com

Let the bubble burst, for Christ's sake!

I have noticed a recent dramatic change in the behavior of the major online GPT providers – most notably, Gemini now provides just an outline of the code, full of stubs and “mocks” of real APIs, instead of the full code. This is a significant change from the previous behavior, where it would provide a semi-complete (but error-riddled) code solution. Perhaps it is mimicking ChatGPT, which has been doing this for a while now – they “optimize” for what appears to be more of a “dialogue” (more like a normie-level chat), to create a better illusion of “actually conversing with an artificial intelligence”. ...

August 3, 2025 · <lngnmn2@yahoo.com>

The Knowledge Work Bubble

We are living through a paradigm shift, the kind described by Thomas Kuhn in The Structure of Scientific Revolutions. As I have mentioned many times, texts and even crappy code have become very, very cheap, just like processed junk food or low-effort street-food slop. This is the “shift” and the end of so-called “knowledge work” as we know it. At least it is the end of the pretentious “knowledge work”, where one just pretends to be an expert in social settings, using very straightforward verbal and non-verbal cues to signal “knowledge” and “expertise”, just as a priest would do in the not-so-distant past. ...

August 2, 2025 · <lngnmn2@yahoo.com>

Now What?

I understand a lot of complex things, maybe because I have spent my whole life trying to understand and explain the things around me, ever since I was four years old, when I used to name every single car on the road in the small Ukrainian steel and mining town where I was born. Understanding cannot be “outsourced” or even safely “delegated”. One will always end up with a sort of “tragedy of the commons”, the way sterilization and modern technologies produced packaged foods which slowly but surely kill you. This is what happens when you “delegate” your own understanding. ...

July 13, 2025 · <lngnmn2@yahoo.com>

Software In The Era of AI

https://www.youtube.com/watch?v=LCEmiRjPEtQ and, of course, the No.1 spot on the Chuddie safe space: https://news.ycombinator.com/item?id=44314423. Karpathy is shilling “Cursor” and other cloud-based, metered AI services (which have to pay back their debts). He probably has an interest in it and in some other meme AI startups. Nothing to see here. Someday we should find out which marketing “genius” came up with this “winning strategy” – to meter every single token (byte) and try to sell that to corporations. Corporations do not want to be metered like that; they want to meter normies, the way cellular operators do, and they never use any normie plans themselves. ...

June 19, 2025 · <lngnmn2@yahoo.com>

Yes, it is time to scream and panic

There is something that has to be realized as soon as possible (which I recently experienced directly) – the conditional-probability-based “glorified autocomplete” can generate some small pieces of code (with intelligent prompting) that are simply above (in intrinsic quality) 90% of all the related crap that can be found on GitHub, which easily extrapolates to 90% of the people in existence who identify themselves as “programmers”. Period. The code will be better than what high-dopamine, “laser-focused” (on Adderall or just amphetamines) but not-so-well-educated-on-fundamentals zealots could produce. Better than what academics, inexperienced in the realities of actual open-source software ecosystems, could even hope to write, no matter how good their theoretical understanding is. With very specific and properly constrained prompting, which takes into account the major results of the last 50 or 60 years of PL and CS research, especially in the field of math-based Functional Programming and Immutable Data Structures, one can get almost optimal (approaching perfection) small pieces of code, provided you understand what theoretically-limited perfection actually looks like in this particular case, and why. ...
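The Immutable Data Structures mentioned above can be illustrated with a minimal sketch of a persistent singly-linked list with structural sharing: “adding” an element builds a new cell that points at the old list, which is never modified. The names (`Cons`, `push`) are mine, chosen for illustration.

```python
from typing import NamedTuple, Optional

class Cons(NamedTuple):
    """One immutable cell of a persistent singly-linked list."""
    head: int
    tail: Optional["Cons"]

def push(lst, value):
    # The old list is shared, not copied; nothing is ever mutated.
    return Cons(value, lst)

def to_list(lst):
    """Walk the cells into an ordinary Python list, newest element first."""
    out = []
    while lst is not None:
        out.append(lst.head)
        lst = lst.tail
    return out

xs = push(push(push(None, 1), 2), 3)
ys = push(xs, 4)            # xs is untouched; ys shares all of xs
print(to_list(xs))  # [3, 2, 1]
print(to_list(ys))  # [4, 3, 2, 1]
```

This is the kind of small, theoretically clean piece where “approaching perfection” has a precise meaning: there is essentially one right shape for it.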

June 12, 2025 · <lngnmn2@yahoo.com>

Vibe coding explained

Reading less social-media bullshit and using one’s own well-trained mind sometimes helps to clarify complex and noisy topics. Here is why, in principle, any LLM coding (I cannot call this crap “programming” because there is no understanding involved) will always yield sub-par bullshit. LLMs operate on tokens, which are abstract numbers, if you will. Parts of words of a human language are each associated with a distinct token, and then a probabilistic graph-like structure is built out of these tokens, using the fundamental back-propagation algorithm. ...
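The first step above, mapping pieces of words to abstract numbers, looks roughly like this. The vocabulary here is invented for illustration; real tokenizers (BPE and friends) learn their word pieces from data, but the point stands: the model only ever sees the integers, never the words.

```python
# A toy vocabulary: each word piece gets an arbitrary integer id (a token).
vocab = {"un": 0, "der": 1, "stand": 2, "ing": 3, "code": 4}

def tokenize(pieces):
    """Replace human-readable word pieces with their abstract token ids."""
    return [vocab[p] for p in pieces]

ids = tokenize(["un", "der", "stand", "ing"])
print(ids)  # [0, 1, 2, 3]
```

Everything downstream operates on sequences like `[0, 1, 2, 3]`; whatever meaning “understanding” had is gone before the model even starts.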

June 6, 2025 · <lngnmn2@yahoo.com>

Enshittification Of Knowledge

There are some philosophical “ideals” which have been identified since antiquity, and toward the attainment (or approximation) of which people have been striving ever since. To see things as they really are. To do things in just the right way. To find an optimum, or “perfection”. Perfection has been famously defined as “when there is nothing else (more) to take away (to remove)”. The modern, meme-based, socially-constructed (by the retarded majority) consensus frowns upon “perfectionism” and sees it as an inhibition to “getting shit done”. They are not wrong, though. Approaching perfection (finding a local optimum) is a very different process from just putting together some slop. Yes, indeed, “perfection is the enemy of good enough”. ...

June 3, 2025 · <lngnmn2@yahoo.com>