Fuck This Shit

No, really. A fresh torrent of abstract “normie-friendly” bullshit is trending on tech social media. https://www.anthropic.com/research/tracing-thoughts-language-model “Claude sometimes thinks in a conceptual space that is shared between languages…” No, it does not think. Period. There are paths which emerge from the training process, consisting of the “thickness” of weights (if you will), or more precisely – the paths are the emergent structures which result from selecting the highest probabilities. This is not thinking....
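To be concrete about “selecting the highest probabilities”: at inference time the model produces one score per vocabulary token, and the simplest decoder just picks the maximum at every step. A minimal greedy-decoding sketch (the toy vocabulary and the stand-in `next_token_logits` function are made up for illustration, not taken from any real model):

```python
import numpy as np

# Toy vocabulary; a real tokenizer has tens of thousands of entries.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

def next_token_logits(context):
    # Stand-in for a real model: one score (logit) per vocabulary entry,
    # deterministic pseudo-random here just so the example runs.
    rng = np.random.default_rng(len(context))
    return rng.standard_normal(len(VOCAB))

def greedy_decode(prompt, steps=5):
    tokens = list(prompt)
    for _ in range(steps):
        logits = next_token_logits(tokens)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                         # softmax: logits -> probabilities
        tokens.append(VOCAB[int(np.argmax(probs))])  # always take the most probable token
    return tokens

print(greedy_decode(["the"]))
```

Sampling schemes (temperature, top-k, top-p) only perturb which of the high-probability tokens gets picked; the “path” is still traced through the weights laid down in training.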

March 28, 2025 · <lngnmn2@yahoo.com>

Large Ladyboy Models

Classy Andrej is making shilling videos from Thailand (he leaked his location in the video), targeting normies (the previous set of videos was partially filmed in Japan; Andrej is living a true digital nomad’s life). https://www.youtube.com/watch?v=EWvNQjAaOHw Why would he shill? Well, he and guys like him made a lot of promises, not to us (who tf cares), but to the money guys: that this particular technology will completely transform the world, and that they are the very top guys in the field, so money shall be given to them (to the affiliated companies and entities)....

March 9, 2025 · <lngnmn2@yahoo.com>

Coding with LLMs

Idiots, idiots everywhere. Now I can accurately summarize what coding with LLMs actually is in just a few sentences. Recall how people usually describe a code maintenance job: we have this code to run, while the original developers are gone and have left us no design documentation. This hypothetical situation is exactly what you get when an LLM has finished spewing out the slop: you now have some code, very cheap, even for free, but it is not yours, the underlying understanding (of the whys) is not in your head, and the original developer is already gone....

February 24, 2025 · <lngnmn2@yahoo.com>

Grok3

Well, I’ve watched it. There are a few things to realize. The code it generated for the simulation task ran without an issue. It is, however, incomprehensible without an understanding of all the details (like any other code). This is probably because they fed in a lot of very similar internal code during the training phase. The gibberish from the “thinking” phase might be helpful, or it may be equally cryptic....

February 18, 2025 · <lngnmn2@yahoo.com>

AI Slop

slop (noun). Cambridge Dictionary: food that is more liquid than it should be and is therefore unpleasant; liquid or wet food waste, especially when it is fed to animals. Oxford Learner’s Dictionary: waste food, sometimes fed to animals; liquid or partly liquid waste, for example urine or dirty water from baths. There is also a very related term, “goyslop”, from internet sewers (losers are always looking for someone to blame and hate [instead of themselves])....

February 16, 2025 · <lngnmn2@yahoo.com>

Deepseek In Action

Let’s do it again, because why tf not, especially given the magnitude of the current mass hysteria about this AI meme (it is literally everywhere; even on Slashdot, the last bastion of sanity, there are 4 articles in a row with “AI” in the title). “What are the roles of type-classes in Haskell and traits in other languages?” This is the supposedly naive and uninformed question I asked Deepseek R1 14b....

February 15, 2025 · <lngnmn2@yahoo.com>

Reasoning LLMs

When I was a kid they told me not to stare at the sun. I had this vision that brain structures are sort of like trees, while the “branches” are just like the paths through our yard after fresh snow. Some of them remain thin, as if someone just walked across absentmindedly; some get broadened by heavy re-use. Who would plow through fresh snow when one could just follow the path that is already there....

February 11, 2025 · Ln Gnmn

Deepseek R1

Memes and mirrors. Nowadays things are moving way too fast. It is not just controlled trial and error; it is literally throwing everything at the wall (to see what sticks). It started with that meme “Attention Is All You Need”, when they just came up with an “architecture” that stuck. That “attention” and “multi-head attention” turned out to be just a few additional layers of a particular kind. No one can explain the actual mechanism of how exactly, or even why, the layers are as they are (abstract bullshit aside)....
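For the record, the “scaled dot-product attention” from that paper really does reduce to a couple of matrix products plus a softmax; a minimal NumPy sketch (the shapes and random toy inputs are mine, for illustration only):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) matrices, each produced by an ordinary linear layer.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise token similarities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted average of the value vectors

# Toy shapes: 4 tokens, 8-dimensional projections.
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

“Multi-head attention” is this same computation run several times over independent linear projections of the input, with the outputs concatenated – which is the sense in which it is “just a few additional layers of a particular kind”.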

January 26, 2025 · <lngnmn2@yahoo.com>

LLMs und AI

Let’s write a few paragraphs which will destroy the current LLM narrative (naive bullshit), while neither any single piece nor the whole article can be refuted. This is high-level proper (classic Eastern) philosophy, which is many levels away from simple logical forms, but it can still be reduced to these, if one wants to. The Proper Philosophy isn’t dead; it isn’t even dying. It cannot, lmao....

November 20, 2024 · <lngnmn2@yahoo.com>

Attention Is All Bullshit

Once again I tried to get through this meme video. Once again, with the same results. I have seen these social patterns many times before – when people begin to use ill-defined, anthropomorphic and purely abstract concepts to construct familiar analogies and to invoke intuitions, so that everything seems “right” and “logical”. Abhidharma uses abstract terminology to produce a seemingly coherent system. It started from the very reasonable abstractions of The Buddha, who illustrated his ideas with notions like peeling off the layers of an onion (to find literally nothing at the center) or a mixture of spices used for cooking, but very quickly it all went to pure abstract bullshit....

June 4, 2024 · <lngnmn2@yahoo.com>