Bullshit, bullshit, bullshit

So, things begin to move a lot faster and get much bigger, and there is something to realize about this unprecedented AI bubble. We will consider only the underlying fundamental principles, not the particular implementation details, “architectures” and whatnot. There are four major aspects to any LLM: the training process, the “architecture” (the structural shape) of a model, the “post-training tuning” (lobotomy) of the model, and the inference process. ...

November 3, 2025 · lngnmn2@yahoo.com

Everything Is Fucked Up

So I compiled that zed “editor” thing everyone is talking about. And I watched the shilling video carefully. Conclusion? Everything is fucked up beyond repair. Why, yes, I understand: a project trying to compete with Cursor, but with their own half-assed “vscode” written in Rust slop (as a wrapper around WebRTC, lmao). It compiles hundreds of crates into something that looks like vscode, sacrificing any safety guarantees (simply because the underlying C++ code is inherently unsafe and imperative, even with -fno-exceptions -fno-rtti). “Written in Rust” is thus just a “marketing” meme. ...

October 25, 2025 · lngnmn2@yahoo.com

16 Billion Liquidated

So what happened on October 10, 2025? This is called a “liquidation cascade”: all the degens rushed to the same side of the boat (over-leveraged longs at an obvious top). The exact mechanics cannot be known in principle, since everything happens at the level of the exchanges’ matching engines, which may even glitch under such high loads. The facts are that market orders are “by design” executed at the worst possible price (depending on the direction) to maximize profits for the exchange, since clients “don’t care”, and limit orders almost always end up “not filled” (not our problem!), to maximize the losses of our “valued customers” and hence the exchange’s profits. ...
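A minimal sketch of that “worst possible price” mechanic, assuming a toy order book (made-up levels, nothing like a real matching engine): a market sell simply walks down the bid side, consuming levels until it is filled.

```python
# Toy order book: why a forced market sell fills at the "worst" price.
# Levels are hypothetical; real matching engines are far more involved.

bids = [(117_000, 2.0), (116_500, 1.5), (112_000, 3.0)]  # (price, size), best bid first

def market_sell(qty, book):
    """Walk the bid side, consuming levels until qty is filled."""
    fills = []
    for price, size in book:
        if qty <= 0:
            break
        take = min(qty, size)
        fills.append((price, take))
        qty -= take
    return fills

# A 5-coin liquidation eats through all three levels:
for price, amount in market_sell(5.0, bids):
    print(f"sold {amount} @ {price}")
# The average fill lands far below the pre-cascade top of book.
```

Each such fill pushes the price into the next batch of stop-outs, and that feedback loop is the “cascade” part.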

October 12, 2025 · lngnmn2@yahoo.com

My First LLM Experience

Today I am sentimental, so let’s reminisce a little about my first experience with LLMs. I found some early article about people using something called llama.cpp to run models locally on their machines. Some overconfident retard in another blog post wrote that the “best model”, and “by far”, is Mistral “from Nvidia”, and that it is supposed to be the best because it is allegedly from Nvidia (they have some partnership or investment, I suppose). So I compiled the code (old habits) and downloaded the model from Hugging Face. ...
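(For reference, the minimal local-inference loop looks something like the sketch below, via the llama-cpp-python bindings rather than the raw llama.cpp binary; the model filename is a placeholder for whatever GGUF file you pull from Hugging Face.)

```python
# Minimal local inference through the llama-cpp-python bindings.
# The model path is a placeholder; substitute any GGUF file
# downloaded from Hugging Face.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=2048,      # context window, in tokens
    verbose=False,
)

out = llm(
    "Q: What does llama.cpp do? A:",
    max_tokens=64,
    temperature=0.7,
    stop=["Q:"],     # stop before the model invents the next question
)
print(out["choices"][0]["text"])
```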

October 10, 2025 · lngnmn2@yahoo.com

LLMs and AI so far

Let’s summarize the current state of Large Language Models (LLMs) and so-called “Artificial Intelligence” (AI) as of October 2025. They are all still just [estimated] probabilities of the next token, given the “context”. This implies no “knowledge” or “understanding” [of any kind] whatsoever. Nothing can be taken as “true”, or even “correct” or “accurate”. All the talk about “knowledge in the weights” or “knowledge encoded within the network” is just bullshit. ...
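Stripped of everything, the mechanism is literally this (a toy sketch with made-up logits; in a real model they come out of a forward pass over billions of weights):

```python
# The whole mechanism in miniature: a probability distribution over
# the next token, conditioned on the context. Logits are invented here;
# in a real model they come from a forward pass.
import math
import random

vocab  = ["the", "cat", "sat", "on", "mat"]
logits = [2.1, 0.3, 1.7, 0.2, 0.9]    # hypothetical scores for some context

def softmax(xs):
    m = max(xs)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs)[0]
print({t: round(p, 3) for t, p in zip(vocab, probs)}, "->", next_token)
# Nothing in this loop "knows" or "understands" anything; it samples.
```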

October 4, 2025 · lngnmn2@yahoo.com

Probabilistic bullshit

Look, ma, a new episode just dropped! This one is full of shit to the brim. Even more so than prof. Ellen Langer, who cannot stay within a context and claimed that 1+1 = 10, because in binary the result is written “10”, which she then reads as a decimal ten… anyway, whatever. https://www.youtube.com/watch?v=MlmFj1-mOtg No, the brain ain’t computing any hecking probabilities. It is not a Bayesian machine. It is not a prediction machine. It is not a simulator. It is not a statistical engine. ...

September 30, 2025 · lngnmn2@yahoo.com

Prompt engineers, lmao

Time waits for no one, the race to the bottom accelerates faster than ever, and the “future” is now. Competition is severe and mostly meaningless, as in some third-world, crime-infested ghetto. This is what LLMs have turned our world into. So, let’s “pee on” the so-called “prompt engineers”, in the 4chan parlance, of course. Here is my benchmark prompt for evaluating the performance of LLMs. All the “simple” offline models fail miserably, and only Grok and Gemini can produce something adequate. Claude is also good, but then, it is supposed to be the best, being trained especially for code generation. ...

September 26, 2025 · lngnmn2@yahoo.com

Fuck you, Gemini

FUCK THIS SHIT! No, really. It just hallucinated a non-existent package, and with such confidence, lmao. Yes, yes, I know, it cannot know anything about “existence”, but at least these dorks do not “deserve” the exorbitant money they get paid. Behold: “Uniform Code Block Rendering in Eww and Shr. To get syntax highlighting in eww and shr, you need to use a package that intercepts <code> and <pre> tags and applies Emacs’s built-in font-locking. The eww-code-blocks-mode is a good choice for this.” ...

September 25, 2025 · lngnmn2@yahoo.com

DeepMind and OpenAI win Gold

Oh, look: DeepMind and OpenAI win gold at the ICPC (codeforces.com). So, a model memorized a lot more convoluted stuff and was able to spit out coherent code. This is absolutely amazing, considering that it works at the level of syntax, and that its “representation” captures only “possible relations among tokens” and possesses no “understanding” or thinking, let alone reasoning capacities, whatsoever. It just picks one of the few most probable next tokens, given the previous “context” (the sequence of tokens “so far”). ...
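“Picks one of the few most probable next tokens” is, literally, top-k sampling. A toy sketch (the distribution and the choice of k are made up for illustration):

```python
# "One of the few most probable next tokens": top-k sampling in miniature.
# The probabilities are invented for illustration.
import random

probs = {"}": 0.40, "return": 0.25, ";": 0.20, "while": 0.10, "goto": 0.05}

def top_k_sample(dist, k=3):
    """Keep the k most probable tokens, renormalize, and sample one."""
    top = sorted(dist.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    tokens = [t for t, _ in top]
    weights = [p / total for _, p in top]
    return random.choices(tokens, weights=weights)[0]

print(top_k_sample(probs))  # always one of "}", "return", ";"
```

Memorize enough competitive-programming corpora, and this procedure will, often enough, emit a correct solution, no “reasoning” required.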

September 18, 2025 · lngnmn2@yahoo.com

Let me show you something

Have you ever seen the real complexity of the outside world, a glimpse of how overwhelmingly complex everything is? Let me show you something. Recently, I stumbled upon yet another post on /biz/ by some guy bragging about having “finally made it” with a shitcoin. The attached screenshots show the chart of the HiFi shitcoin (or whatever it was) going “vertical” for a while. And, as usual, in retrospect everything was “simple” and “obvious”: the guy was “early” on a shitcoin after enough tries. A classic story, in some sense. ...

September 16, 2025 · lngnmn2@yahoo.com