Grok3

Well, I’ve watched it. There are a few things to realize. The code it generated ran without an issue on the simulation task. It is, however, incomprehensible without an understanding of all the details (like any other code). This is probably because they fed in a lot of very similar internal code during the training phase. The gibberish from the “thinking” phase might be helpful, or it may be equally cryptic....

February 18, 2025 · <lngnmn2@yahoo.com>

AI Slop

slop, noun. Cambridge Dictionary: food that is more liquid than it should be and is therefore unpleasant; liquid or wet food waste, especially when it is fed to animals. Oxford Learner’s Dictionary: waste food, sometimes fed to animals; liquid or partly liquid waste, for example urine or dirty water from baths. There is also a closely related term “goyslop” from internet sewers (losers are always looking for someone to blame and hate [instead of themselves])....

February 16, 2025 · <lngnmn2@yahoo.com>

Deepseek In Action

Let’s do it again, because why tf not, especially given the magnitude of the current mass hysteria about this AI meme (it is literally everywhere; even on Slashdot, which is the last bastion of sanity, there are 4 articles in a row with “AI” in the title). “What are the roles of type-classes in Haskell and traits in other languages?” This is a supposedly naive and uninformed question I asked Deepseek R1 14b....
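For context, a minimal Haskell sketch of the idea (the Describable class and the announce function below are invented for illustration, not taken from the post): a type-class names an interface, instances supply per-type implementations, and the compiler resolves them statically, which is roughly the role a trait plays in Rust.

```haskell
-- A type-class declares an interface: any type 'a' that claims to be
-- 'Describable' must provide 'describe'.
class Describable a where
  describe :: a -> String

-- Instances supply the implementation per type; the compiler resolves
-- them at compile time, much like a trait impl in Rust.
instance Describable Bool where
  describe True  = "yes"
  describe False = "no"

instance Describable Int where
  describe n = "the number " ++ show n

-- A constrained polymorphic function: usable with any Describable type.
announce :: Describable a => a -> String
announce x = "got: " ++ describe x

main :: IO ()
main = do
  putStrLn (announce True)        -- got: yes
  putStrLn (announce (42 :: Int)) -- got: the number 42
```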

February 15, 2025 · <lngnmn2@yahoo.com>

Reasoning LLMs

AUTHOR: <lngnmn2@yahoo.com> When I was a kid they told me not to stare at the sun. I had this vision that brain structures are sort of like trees, while the “branches” are just like paths through our yard after a fresh snow. Some of them remain thin, as if someone just walked across them absentmindedly; some get broadened by heavy re-use. Who would plow through fresh snow when one could follow the path that is already there....

February 11, 2025 · Ln Gnmn

OpenAI vs. Deepseek

The shitstorm is of the highest category and pleasing to watch. The main issue is that such models can be produced (trained) at a fraction of the cost by just good and hardworking graduate students who did their homework. It reminds me of the legendary Andrew Ng, who was way above everyone else just by being smart, hardworking and systematic in building everything (deriving all the math) from scratch. The Deepseek success has the same vibes....

January 30, 2025 · <lngnmn2@yahoo.com>

Deepseek R1

DESCRIPTION: Memes and mirrors. Nowadays things are moving way too fast. It is not just controlled trial-and-error; it is literally throwing everything at the wall (to see what sticks). It started with that meme “Attention Is All You Need”, when they just came up with an “architecture” that stuck. That “attention” and “multi-head attention” turned out to be just a few additional layers of a particular kind. No one can explain the actual mechanism of how exactly, or even why, the layers are the way they are (abstract bullshit aside)....
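For reference, what that paper calls scaled dot-product attention and multi-head attention reduces to the following (notation as in the paper; d_k is the key dimension and the W matrices are learned projections):

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
$$

$$
\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)\, W^{O},
\qquad \mathrm{head}_i = \mathrm{Attention}(Q W_i^{Q},\, K W_i^{K},\, V W_i^{V})
$$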

January 26, 2025 · <lngnmn2@yahoo.com>

Haskell and a LLM

This is the difference between an LLM and an expert. An LLM spews out “propaganda” from the web.

### what are the unique properties of a Haskell program

1. **Statically Typed**: Haskell has a strong, static type system that ensures type safety at compile time.
2. **Purely Functional**: Haskell programs are expressions, and functions have no side effects. This leads to code that’s easy to reason about, test, and debug....
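To make those two points concrete, a minimal Haskell sketch (the area function is invented for illustration): the signature is checked at compile time, the pure function cannot perform I/O or mutate anything, and effects stay confined to IO.

```haskell
-- Statically typed: this signature is checked at compile time;
-- calling 'area' with a String simply will not compile.
area :: Double -> Double
area r = pi * r * r

-- Purely functional: 'area' has no side effects, so the same input
-- always gives the same output and it can be tested in isolation.

-- Effects are confined to IO; the types keep the pure and the
-- impure parts of the program apart.
main :: IO ()
main = print (area 2.0)
```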

January 26, 2025 · <lngnmn2@yahoo.com>

L. Tao on LLMs

I am being systematically cancelled on HN (as if I am a fucking Russian – I am NOT – or something), but sometimes it feels ridiculous. LLMs don’t “do maths” by definition (which in this context is the code). An LLM “predicts” the next symbol after “2+2=” and, separately and independently (in principle), predicts token by token an explanation of why that is true. There is absolutely no “reasoning” or “understanding” of any kind whatsoever, just as a parrot would do....
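A toy sketch of that point, assuming the “model” is nothing but a table of next-token frequencies (the table and function names below are invented for illustration): the continuation of “2+2=” is whatever token most often followed that prefix in the data, not the result of any addition.

```haskell
import qualified Data.Map.Strict as M
import Data.List (maximumBy)
import Data.Ord (comparing)

-- A toy "language model": counts of which token followed which prefix
-- in some corpus. No arithmetic is performed anywhere below.
counts :: M.Map String [(String, Int)]
counts = M.fromList
  [ ("2+2=", [("4", 97), ("5", 2), ("22", 1)]) ]

-- "Prediction" is just picking the most frequent continuation.
nextToken :: String -> Maybe String
nextToken prefix = fst . maximumBy (comparing snd) <$> M.lookup prefix counts

main :: IO ()
main = print (nextToken "2+2=")   -- Just "4", by frequency, not by addition
```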

June 13, 2024 · <lngnmn2@yahoo.com>

Self-awareness

Another day – more bullshit from some Chud. https://thewaltersfile.substack.com/p/bootstrapping-self-awareness-in-gpt Self-awareness, and awareness in general, is not at the language level (or information level). Animals obviously have awareness, but not a “language-level awareness”, because their brains lack any language areas. The series of mutations and subsequent developments that led to human language is unique to humans. All other animals use just “voices” – pitch, volume, distinct cries, etc....

November 20, 2023 · <lngnmn2@yahoo.com>

LLM predictions

Social media make us stupid. To be precise, they encourage the production and emission of useless verbiage as a form of virtue signaling. The cultural change is that being “wrong” is OK for some talking heads, and nowadays it is even possible to argue that “there is no wrong”, just “imperfect information”, you know. The older cultures were better. They had a common-sense notion of “you have no idea what you are talking about”....

November 8, 2023 · <lngnmn2@yahoo.com>