Reasoning LLMs

When I was a kid they told me not to stare at the sun. I had this vision that brain structures are sort of like trees, while the “branches” are just like paths through our yard after fresh snow. Some of them remain thin, as if someone walked across absentmindedly; some get broadened by heavy re-use. Who would plow through fresh snow when one could follow the path that is already there? ...

February 11, 2025 · Ln Gnmn

OpenAI vs. Deepseek

The shitstorm is of the highest category and pleasing to watch. The main issue is that models can be produced (trained) at a fraction of the cost by good and hardworking graduate students who did their homework. It reminds me of the legendary Andrew Ng, who was way above everyone else just by being smart, hardworking, and systematic in building everything (deriving all the math) from scratch. The Deepseek success has the same vibes. They just did it systematically. ...

January 30, 2025 · <lngnmn2@yahoo.com>

Deepseek R1

Memes and mirrors. Nowadays things are moving way too fast. It is not just controlled trial and error; it is literally throwing everything at the wall (to see what sticks). It started with that meme “Attention Is All You Need”, when they just came up with an “architecture” that stuck. That “attention” and “multi-head attention” turned out to be just a few additional layers of a particular kind. No one can explain the actual mechanism of how exactly, or even why, the layers are as they are (abstract bullshit aside). While the general idea was to mimic some specialized brain centers (the key to understanding how it works), the actual code was merely “buffers”. ...

January 26, 2025 · <lngnmn2@yahoo.com>

Haskell and a LLM

This is the difference between an LLM and an expert. An LLM spews out “propaganda” from the web.

### What are the unique properties of a Haskell program?

1. **Statically Typed**: Haskell has a strong, static type system that ensures type safety at compile time.
2. **Purely Functional**: Haskell programs are expressions, and functions have no side effects. This leads to code that’s easy to reason about, test, and debug.
3. **Lazy Evaluation**: Haskell uses lazy evaluation, meaning it only evaluates expressions when their results are needed. This can lead to more efficient programs and allows for elegant solutions like infinite data structures.
4. **Higher-Order Functions**: Functions in Haskell are first-class citizens, meaning they can be passed as arguments, returned as results, and stored in data structures.
5. **Type Classes**: Haskell’s type classes (similar to interfaces in other languages) enable ad-hoc polymorphism, allowing related types to be treated uniformly.
6. **Module System**: Haskell has a rich module system that supports abstract data types, hiding implementation details, and organizing code into reusable components.
7. **Concurrency and Parallelism**: Haskell provides lightweight threads and a high-level concurrency abstraction called Software Transactional Memory (STM), along with support for parallel programming using the Glasgow Haskell Compiler (GHC).
8. **Rich Ecosystem**: Haskell has a vibrant community and a large ecosystem of libraries (via Hackage) for various domains, making it practical for real-world applications.

### Here is what an expert answer could be: ...
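Three of the properties the LLM lists can be made concrete with a small sketch. This is my own minimal example (not from the original post), assuming GHC: an infinite list demonstrating lazy evaluation, a higher-order function, and a hand-rolled type class for ad-hoc polymorphism.

```haskell
-- Lazy evaluation: an infinite list is fine as long as we only
-- ever demand a finite prefix of it.
naturals :: [Integer]
naturals = [0 ..]

-- Higher-order function: takes a function as an argument.
applyTwice :: (a -> a) -> a -> a
applyTwice f = f . f

-- Type class: ad-hoc polymorphism over "describable" types
-- (an illustrative class, not a standard library one).
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe b = "a Bool: " ++ show b

instance Describe Integer where
  describe n = "an Integer: " ++ show n

main :: IO ()
main = do
  print (take 5 naturals)          -- [0,1,2,3,4]
  print (applyTwice (+ 3) (10 :: Int))  -- 16
  putStrLn (describe True)
  putStrLn (describe (42 :: Integer))
```

Note that `naturals` would diverge under strict evaluation; laziness is what lets `take 5` terminate against an infinite structure.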

January 26, 2025 · <lngnmn2@yahoo.com>

L. Tao on LLMs

I am being systematically cancelled on HN (as if I am a fucking russian – I am NOT – or something), but sometimes it feels ridiculous. LLMs don’t “do maths” by definition (which in this context is the code). An LLM “predicts” the next symbol after “2+2=” and, separately and independently (in principle), predicts token-by-token why that is true. There is absolutely no “reasoning” or “understanding” of any kind whatsoever, just as a parrot would do. ...

June 13, 2024 · <lngnmn2@yahoo.com>

Self-awareness

Another day – another bullshit from some Chud. https://thewaltersfile.substack.com/p/bootstrapping-self-awareness-in-gpt Self-awareness, and awareness in general, is not at a language level (or information level). Animals obviously have awareness, but not a “language-level awareness”, due to the fact that their brains lack any language areas. The series of mutations and subsequent developments that led to a human language is unique to humans. All other animals use just “voices” – pitch, volume, distinct cries, etc. ...

November 20, 2023 · <lngnmn2@yahoo.com>

LLM predictions

Social media make us stupid. To be precise, they encourage the production and emission of useless verbiage as a form of virtue signaling. The cultural change is that being “wrong” is OK for some talking heads, and nowadays it is even possible to argue that “there is no wrong”, just “imperfect information”, you know. The older cultures were better. They had a common-sense notion of “you have no idea what you are talking about”. Yeah, this is OK for a bar-room conversation, but something is wrong within the systems when “people of knowledge” produce utter parrot-the-bird-like bullshit. ...

November 8, 2023 · <lngnmn2@yahoo.com>

LLMs For Coding

Today https://news.ycombinator.com/ is glowing bright with AI memes and buzzwords like a Christmas tree. Everyone is there, including billion-dollar corporations announcing a “CodeLama-34b” which is “designed for general code synthesis and understanding.” First of all, I personally do not want to rely on any part of my life on any “synthesized” (and “understood”) software, and demand an explicit opt-out. Yes, yes, I know. If I have any understanding of these subjects at all, this is a bubble and irrational exuberance. Let’s try to unpack “the whys”. ...

August 26, 2023 · <lngnmn2@yahoo.com>