I am a well-read Humbert. I even remember reading, back then, Norvig’s essay about how all the “stupid” OO “design patterns” can be just higher-order functions (which are, in turn, just instances of Barbara Liskov’s Abstraction by Parameterization principle). Yes, I have read her books too, and Jackson’s, and Richard Bird’s, and whatnot. Like Jelal in Orhan Pamuk’s “The Black Book”, I have carefully selected and cultivated my “garden of memory” (or was it “memery”?), which is now beginning to wither and fade away. Was it all in vain? Well, everything is in vain in the long-enough run, but not like this. ...
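To make that Norvig point concrete, here is a minimal sketch of my own (not taken from his slides): the Strategy “pattern” collapses into passing a function as a parameter, which is exactly abstraction by parameterization.

```python
# The Strategy "pattern": a class hierarchy built just to vary one step.
class Strategy:
    def cost(self, order):
        raise NotImplementedError

class FlatShipping(Strategy):
    def cost(self, order):
        return 5.0

class FreeOverFifty(Strategy):
    def cost(self, order):
        return 0.0 if order >= 50 else 5.0

def checkout_oo(order, strategy):
    return order + strategy.cost(order)

# The same thing as a higher-order function: the varying step is just a parameter.
def checkout(order, shipping):
    return order + shipping(order)

flat = lambda order: 5.0
free_over_fifty = lambda order: 0.0 if order >= 50 else 5.0

assert checkout_oo(60, FreeOverFifty()) == checkout(60, free_over_fifty) == 60.0
```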
A few steps closer to Vedanta
Once upon a time I found it difficult to read the lecture notes at https://www.cl.cam.ac.uk/teaching/1718/L28/materials.html because of ESL and math “fear”. Nowadays I could probably explain the fundamentals more clearly in just a few pages. The “End Of Knowledge” (of seeking to Understand) can be achieved by just a few simple realizations, which means attaining one’s own “Right Understanding” through direct experience and one’s own “a-ha moments”. Arguably, the “Upanishads” of Programming began with this book: Abstraction and Specification in Program Development by Barbara Liskov and John V. Guttag. No, a few years earlier Michael A. Jackson wrote Principles of Program Design, which emphasized the fundamental principle that one’s program structure should reflect (actually, be in one-to-one correspondence with) the problem structure (which can be defined as the inherent layered structure of complexity of the domain). But Liskov and Guttag took it to a whole new level. ...
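A toy illustration of the Jackson principle, with made-up data (my own example, not one from his book): the input is a run of (account, amount) records grouped by account, and the program’s nested loops are in one-to-one correspondence with that grouping.

```python
from itertools import groupby
from operator import itemgetter

# Data structure: a sequence of records, grouped by account (made-up numbers).
records = [
    ("acc-1", 10), ("acc-1", 25),
    ("acc-2", 5),
    ("acc-3", 7), ("acc-3", 3), ("acc-3", 40),
]

# Program structure mirrors the data structure:
#   process sequence -> for each account group -> for each record in the group
def report(records):
    lines = []
    for account, group in groupby(records, key=itemgetter(0)):  # the "group" level
        total = 0
        for _, amount in group:                                  # the "record" level
            total += amount
        lines.append(f"{account}: {total}")
    return lines

print("\n".join(report(records)))
# acc-1: 35
# acc-2: 5
# acc-3: 50
```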
Aaand boom!
The thing I hate the most is when some of these fucking YouTube content “creators”, who decided to monetize AI-coding clickbait with low-effort subpar videos, say “aaand boom!” as another chunk of slop gets spewed out by an AI. This “boom!” is an insult to the last 60 years of programming-language research (including the math-based theory) and to the “old sages” who crafted their languages and standard libraries to be “just right”, perfect in the sense of “nothing more to take away”, by surveying all the available literature and non-bullshit papers and spending months of anguish and self-doubt. ...
Bullshit, bullshit, bullshit
So, things have begun to move a lot faster and get much bigger, and there is something to realize about this unprecedented AI bubble. We will consider only the underlying fundamental principles, not the particular implementation details, “architectures” and whatnot. There are four major aspects to any LLM: the training process, the “architecture” (the structural shape) of a model, the “post-training tuning” (lobotomy) of the model, and the inference process. ...
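Just to pin down the first of those aspects (the training process) with no “architecture” at all, here is a conceptual toy with made-up numbers: whatever the shape of the network, it is fit by minimizing the cross-entropy of the observed next token given the preceding context.

```python
import numpy as np

vocab = ["the", "cat", "sat", "mat"]

def cross_entropy_next_token(logits, target_index):
    # softmax over the vocabulary -> probability assigned to the observed next token
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return -np.log(probs[target_index])

# Pretend the model, given the context "the cat", produced these raw scores
# (made-up numbers; a real model computes them from its weights):
logits_for_next = np.array([0.1, 0.2, 2.0, -1.0])  # scores for "the", "cat", "sat", "mat"
observed_next = vocab.index("sat")

loss = cross_entropy_next_token(logits_for_next, observed_next)
print(f"loss = {loss:.3f}")  # training is just nudging the weights to push this down
```

The “post-training tuning” is the same objective run again over curated or preference-shaped data, and inference is nothing but sampling from the resulting distributions.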
Everything Is Fucked Up
So I compiled that zed “editor” thing everyone is talking about. And I watched the shilling video carefully. Conclusion? Everything is fucked up beyond repair. Why, yes, I understand: a project trying to compete with Cursor, but with their own half-assed “vscode” written in Rust slop (as a wrapper around webrtc, lmao). It compiles hundreds of crates into something that looks like vscode, sacrificing any safety concerns (simply because the underlying C++ code is inherently unsafe and imperative, even with -fno-exceptions -fno-rtti). “Written in Rust” is thus just a “marketing” meme. ...
16 Billion Liquidated
So what happened on October 10, 2025? This is called a “liquidation cascade”: all the degens rushed to the same side of the boat (over-leveraged longs at an obvious top). The exact mechanics cannot be known in principle, since everything happens at the level of the exchanges’ matching engines, which may even glitch under such high loads. The facts are that market orders are “by design” executed at the worst possible price (depending on the direction) to maximize profits for the exchange, since clients “don’t care”, and limit orders almost always end up “not filled” (not our problem!), to maximize the losses of our “valued customers” and hence the exchange’s profits. ...
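The one part that can be stated mechanically is how a market order walks the book. A toy sketch with made-up numbers (no real exchange semantics): a forced-liquidation sell is matched level by level against the bids, so a large one fills at progressively worse prices, and the resulting lower print trips the next layer of liquidation triggers.

```python
# Toy order book: bid side as (price, size), best bid first. Made-up numbers.
bids = [(100.0, 2.0), (99.5, 1.5), (98.0, 3.0), (95.0, 10.0)]

def market_sell(book, qty):
    """Match a market sell against the bids, level by level, until filled."""
    fills = []
    for price, size in book:
        if qty <= 0:
            break
        take = min(qty, size)
        fills.append((price, take))
        qty -= take
    return fills

# A forced liquidation of 6.0 units eats through three levels of the book:
fills = market_sell(bids, 6.0)
avg = sum(p * q for p, q in fills) / sum(q for _, q in fills)
print(fills)                                          # [(100.0, 2.0), (99.5, 1.5), (98.0, 2.5)]
print(f"avg fill: {avg:.2f}, last fill: {fills[-1][0]}")  # well below the top of the book
```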
My First LLM Experience
Today I am sentimental, so let’s reminisce a little about my first experience with LLMs. I found some early article about people using something called llama.cpp to run models locally on their machines. Some overconfident retard in another blogpost wrote that the “best model”, “by far”, is Mistral “from Nvidia”, and it is supposed to be the best because it is allegedly from Nvidia (they have some partnership or investment, I suppose). So I compiled the code (old habits) and downloaded the model from huggingface. ...
LLMs and AI so far
Let’s summarize the current state of Large Language Models (LLMs) and so-called “Artificial Intelligence” (AI) as of October 2025. They are all still just [estimated] probabilities of the next token, given the “context”. This implies no “knowledge” or “understanding” [of any kind] whatsoever. Nothing can be taken as “true” or even “correct” or “accurate”. All the talk about “knowledge in the weights” or “knowledge encoded within the network” is just bullshit. ...
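Stated as code, the whole claim fits in a dozen lines (a conceptual sketch, not any real model’s API): an LLM is a function from a context to a probability distribution over the next token, and “generation” is nothing but sampling from it in a loop.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_probs(context):
    # Stand-in for the trained network: context in, a distribution over the vocab out.
    logits = rng.normal(size=len(vocab))   # a real model computes these from its weights
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def generate(context, n_tokens):
    out = list(context)
    for _ in range(n_tokens):
        probs = next_token_probs(out)
        out.append(vocab[rng.choice(len(vocab), p=probs)])  # just sampling, nothing else
    return out

print(generate(["the"], 5))
```

Everything else, “reasoning” included, sits on top of this single sampling loop.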
Probabilistic bullshit
Look, ma, a new episode just dropped! This one is full of shit to the brim. Even more so than prof. Ellen Langer, who cannot stay within a context and claimed that 1+1 = 10, because in binary notation the sum is written as 10 (which is still just two; it only looks like ten to a decimal reader)… anyway, whatever. https://www.youtube.com/watch?v=MlmFj1-mOtg No, the brain ain’t computing any hecking probabilities. It is not a Bayesian machine. It is not a prediction machine. It is not a simulator. It is not a statistical engine. ...
Prompt engineers, lmao
Time waits for no one, the race to the bottom accelerates faster than ever, and the “future” is now. Competition is severe and mostly meaningless, as in some third-world, crime-infested ghetto. This is what LLMs have turned our world into. So, let’s “pee on” so-called “prompt engineers” (in the 4chan parlance, of course). Here is my benchmark prompt to evaluate the performance of LLMs. All the “simple” offline models fail miserably, and only Grok and Gemini can produce something adequate. Claude is also good, but then it is supposed to be the best, being trained especially for code generation. ...