Grok3

Well, I’ve watched it. There are a few things to realize. The code it generated for the simulation task ran without an issue. It is, however, incomprehensible without understanding all the details (like any other code). This is probably because they fed in a lot of very similar internal code during the training phase. The gibberish from the “thinking” phase might be helpful, or it may be equally cryptic....

February 18, 2025 · <lngnmn2@yahoo.com>

AI Slop

slop, noun. Cambridge Dictionary: food that is more liquid than it should be and is therefore unpleasant; liquid or wet food waste, especially when it is fed to animals. Oxford Learner’s Dictionary: waste food, sometimes fed to animals; liquid or partly liquid waste, for example urine or dirty water from baths. There is also a closely related term, “goyslop”, from the internet sewers (losers are always looking for someone to blame and hate [instead of themselves])....

February 16, 2025 · <lngnmn2@yahoo.com>

Deepseek In Action

Let’s do it again, because why tf not, especially given the magnitude of the current mass hysteria about this AI meme (it is literally everywhere; even on Slashdot, the last bastion of sanity, there are 4 articles in a row with “AI” in the title). What are the roles of type-classes in Haskell and traits in other languages? This is the supposedly naive and uninformed question I asked Deepseek R1 14b....
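The question quoted in the excerpt can be illustrated with a small sketch of my own (not Deepseek’s answer, and not code from the post). Python has neither type-classes nor traits, but `typing.Protocol` plays a loosely similar role: it names a set of operations a value must support, checked structurally. The `Describe` protocol, `Point` class, and `show_it` function below are hypothetical names used only for illustration.

```python
# A loose Python analogue of a Haskell type-class / Rust trait:
# a Protocol naming an operation that many unrelated types can support.
from typing import Protocol


class Describe(Protocol):
    def describe(self) -> str: ...


class Point:
    def __init__(self, x: int, y: int) -> None:
        self.x, self.y = x, y

    # Structural conformance: Point satisfies Describe simply by having
    # the method, roughly like `instance Describe Point` in Haskell.
    def describe(self) -> str:
        return f"Point({self.x}, {self.y})"


# Roughly `show_it :: Describe a => a -> String` in Haskell terms:
# the caller only needs *some* type that implements Describe.
def show_it(v: Describe) -> str:
    return v.describe()


print(show_it(Point(1, 2)))  # Point(1, 2)
```

The key difference, which the protocol analogy glosses over, is that Haskell type-classes and Rust traits are resolved statically by the compiler, while Python checks nothing until the method is actually called.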

February 15, 2025 · <lngnmn2@yahoo.com>

Reasoning LLMs

When I was a kid they told me not to stare at the sun. I had this vision that brain structures are sort of like trees, while the “branches” are like paths through our yard after fresh snow. Some of them remain thin, where someone just walked across absentmindedly; some get broadened by heavy re-use. Who would plow through fresh snow when one could follow the path that is already there....

February 11, 2025 · Ln Gnmn

Deepseek R1

Memes and mirrors. Nowadays things are moving way too fast. It is not just controlled trial-and-error; it is literally throwing everything at the wall (to see what sticks). It started with that meme “Attention Is All You Need”, when they just came up with an “architecture” that stuck. That “attention” and “multi-head attention” turned out to be just a few additional layers of a particular kind. No one can explain the actual mechanisms of how exactly, or even why, the layers are as they are (abstract bullshit aside)....
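Whatever one makes of the explanations around it, the “few additional layers of a particular kind” can be written down mechanically. A minimal single-head scaled dot-product attention in plain NumPy (my sketch, not code from the post or the paper; the shapes are arbitrary):

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
# One head, no masking, no learned projections -- just the core operation.
import numpy as np


def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    return softmax(scores) @ V        # weighted average of the values


rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, dimension 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Each output row is just a convex combination of the rows of V, with weights given by the softmax; “multi-head” runs several of these in parallel on projected inputs and concatenates the results.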

January 26, 2025 · <lngnmn2@yahoo.com>

LLMs und AI

Let’s write a few paragraphs which will destroy the current LLM narrative (naive bullshit), while neither any single piece nor the article as a whole can be refuted. This is high-level proper (classic Eastern) philosophy, which is many levels away from simple logical forms, but it can still be reduced to them, if one wants to. The Proper Philosophy isn’t dead; it is not even dying. It cannot die, lmao....

November 20, 2024 · <lngnmn2@yahoo.com>

Attention Is All bullshit.

Once again I tried to get through this meme video. Once again, with the same results. I have seen these social patterns many times before: people begin to use ill-defined, anthropomorphic and purely abstract concepts to construct familiar analogies and invoke intuitions, so everything seems “right” and “logical”. Abhidharma uses abstract terminology to produce a seemingly coherent system. It started from the very reasonable abstractions of The Buddha, who illustrated his ideas with notions like peeling off the layers of an onion (to find literally nothing in the center) or a mixture of spices used for cooking, but it very quickly descended into pure abstract bullshit....

June 4, 2024 · <lngnmn2@yahoo.com>

LLM Philosophy 101

The LLM mania is still going on, with no sign of the bubble bursting. This will be (already is) far larger than even the DotCom bubble. Grab your popcorn. I already wrote this on the old site, and, of course, because I didn’t follow the rules I got “canceled”, as they do nowadays with anyone who disagrees with their current set of beliefs. Let’s talk about it again, even with millions of views behind every Karpathy or Fridman video....

May 21, 2024 · <lngnmn2@yahoo.com>

LLM predictions

Social media make us stupid. To be precise, they encourage the production and emission of useless verbiage as a form of virtue signaling. The cultural change is that being “wrong” is ok for some talking heads, and nowadays it is even possible to argue that “there is no wrong”, just “imperfect information”, you know. The older cultures were better. They had the common-sense notion of “you have no idea what you are talking about”....

November 8, 2023 · <lngnmn2@yahoo.com>

Transformers bullshit everywhere

There is another meme “scientific” paper (well, it is a “research paper”, which does not have to be correct, lmao) about trying to interpret what transformers actually do. When the hype was at its peak, I wrote an article about “handwaving with too-abstract math”, or “sweeping the meaning under the rug”. I had a very strong intuition that I had seen this before, and now I will show it. Where have we all seen this kind of sophisticated bullshitting with abstract entities taken out of context (from other, highly remote and ephemeral levels of abstraction) being used to explain natural phenomena?...

October 6, 2023 · <lngnmn2@yahoo.com>