My First LLM Experience

Today I am sentimental, so let’s reminisce a little about my first experience with LLMs. I found an early article about people using something called llama.cpp to run models locally on their machines. Some overconfident idiot in another blog post wrote that the “best model”, “by far”, is Mistral “from Nvidia”, and that it is supposed to be the best precisely because it is allegedly from Nvidia (they have some partnership or investment, I suppose). So I compiled the code (old habits) and downloaded the model from Hugging Face. ...

October 10, 2025 · lngnmn2@yahoo.com

LLMs and AI so far

Let’s summarize the current state of Large Language Models (LLMs) and so-called “Artificial Intelligence” (AI) as of October 2025. They are all still just [estimated] probabilities of the next token, given the “context”. This implies no “knowledge” or “understanding” [of any kind] whatsoever. Nothing can be taken as “true”, or even “correct” or “accurate”. All the talk about “knowledge in the weights” or “knowledge encoded within the network” is just bullshit. ...
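To make the claim concrete, here is a minimal sketch (toy numbers, a hypothetical four-word vocabulary) of what “probabilities of the next token, given the context” means mechanically: the model emits one score (logit) per vocabulary item, a softmax turns the scores into a distribution, and the next token is sampled from it. There is no “knowledge” anywhere in this loop; it is a weighted dice roll.

```python
import numpy as np

# Toy vocabulary and logits; in a real LLM the logits come out of the network,
# conditioned on the whole token sequence ("context") seen so far.
vocab = ["Paris", "London", "banana", "the"]
logits = np.array([3.1, 2.4, -1.0, 0.2])  # made-up scores, for illustration only

# Softmax: turn arbitrary scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# "Generation" is nothing more than sampling from this distribution.
rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)

print({t: round(float(p), 3) for t, p in zip(vocab, probs)}, "->", next_token)
```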

October 4, 2025 · lngnmn2@yahoo.com

Probabilistic bullshit

Look, ma, a new episode just dropped! This one is full of shit to the brim. Even more so than Prof. Ellen Langer, who cannot stay within a context and claimed that 1 + 1 = 10 because that is how it is written in binary notation… anyway, whatever. https://www.youtube.com/watch?v=MlmFj1-mOtg No, the brain ain’t computing any hecking probabilities. It is not a Bayesian machine. It is not a prediction machine. It is not a simulator. It is not a statistical engine. ...
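For the record, the arithmetic being mangled above is trivial: “10” read as a base-2 numeral is just another name for the number two, so 1 + 1 = 10 only as a change of notation, not as a new fact about addition.

```python
# "10" interpreted as a base-2 numeral denotes the number two.
assert int("10", 2) == 2
assert 1 + 1 == 0b10   # same number, different notation
print(bin(1 + 1))      # -> 0b10
```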

September 30, 2025 · lngnmn2@yahoo.com

Prompt engineers, lmao

Time waits for no one, the race to the bottom accelerates faster than ever, and the “future” is now. Competition is severe and mostly meaningless, as in some third-world, criminal-infested ghetto. This is what LLMs have turned our world into. So, let’s “pee on” the so-called “prompt engineers”, in 4chan parlance, of course. Here is my benchmark prompt to evaluate the performance of LLMs. All the “simple” offline models fail miserably, and only Grok and Gemini can produce something adequate. Claude is also good, but then it is supposed to be the best, being trained specifically for code generation. ...
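For what it is worth, here is a minimal sketch of the kind of harness this implies: feed the same benchmark prompt to several models and dump the outputs side by side for manual comparison. The `query_model` function and the model names in `MODELS` are hypothetical placeholders, not real APIs; wire them to whatever provider or local runner you actually use, and put the real benchmark prompt in place of the "...".

```python
# Minimal "same prompt, many models" comparison harness (sketch only).
# query_model() is a hypothetical stand-in, not a real API.

BENCHMARK_PROMPT = "..."  # the actual benchmark prompt goes here

MODELS = ["gemini", "grok", "claude", "some-local-model"]  # illustrative names

def query_model(model: str, prompt: str) -> str:
    """Hypothetical: call the provider's API or a local runner and return its text."""
    raise NotImplementedError(f"wire up {model} here")

def run_benchmark() -> None:
    for model in MODELS:
        try:
            answer = query_model(model, BENCHMARK_PROMPT)
        except NotImplementedError as exc:
            answer = f"<not wired up: {exc}>"
        print(f"=== {model} ===\n{answer}\n")

if __name__ == "__main__":
    run_benchmark()
```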

September 26, 2025 · lngnmn2@yahoo.com

Fuck you, Gemini

FUCK THIS SHIT! No, really. It just hallucinated a non-existent package, with such confidence, lmao. Yes, yes, I know, it cannot know anything about “existence”, but at the very least these dorks do not “deserve” the exorbitant money they got paid. Here is what it told me: “Uniform Code Block Rendering in Eww and Shr — To get syntax highlighting in eww and shr, you need to use a package that intercepts <code> and <pre> tags and applies Emacs’s built-in font-locking. The eww-code-blocks-mode is a good choice for this.” ...

September 25, 2025 · lngnmn2@yahoo.com

DeepMind and OpenAI win Gold

Oh, look: DeepMind and OpenAI win gold at the ICPC (codeforces.com). So, a model memorized a lot more convoluted stuff and was able to spit out coherent code. This is absolutely amazing, considering that it works at the level of syntax, and that its “representation” captures only “possible relations among tokens” and possesses no “understanding” or thinking, let alone reasoning, capacities whatsoever. It just picks one of the few most probable next tokens, given the previous “context” (the sequence of tokens “so far”). ...
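As an illustration of “picks one of the few most probable next tokens”, here is a minimal top-k sampling sketch (toy logits over a hypothetical six-token vocabulary): everything outside the k highest-scoring tokens is discarded, the rest is renormalized, and the next token is drawn from what is left.

```python
import numpy as np

def top_k_sample(logits: np.ndarray, k: int, rng: np.random.Generator) -> int:
    """Keep only the k highest-scoring tokens, renormalize, and sample one index."""
    top = np.argsort(logits)[-k:]                    # indices of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())  # softmax over the survivors
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

# Toy example: made-up scores for a six-token "vocabulary".
logits = np.array([0.1, 2.5, -1.0, 1.9, 0.0, 2.2])
rng = np.random.default_rng(42)
print(top_k_sample(logits, k=3, rng=rng))            # one of the indices 1, 3, or 5
```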

September 18, 2025 · lngnmn2@yahoo.com

Defeating Nondeterminism, my ass

And while passing by… Yet another “look, look at us, we are soooo smart and clever, give us much more money just because we are so cool” article dropped. https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/ The lack of exact precision is not the fundamental issue here. Even if one manages to overcome the “numerical instability issues” and is able to always reproduce the same structural output from the same linguistic (or otherwise structured) input, the “hallucinations” and “subtle bullshitting” won’t go away, in principle. ...
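For context, the “numerical instability” that article is about boils down to floating-point arithmetic not being associative: the same reduction computed in a different order (different batch sizes, different kernel schedules) can give bitwise-different results. A trivial demonstration:

```python
# Floating-point addition is not associative: reordering a sum can change
# the result, which is one source of run-to-run nondeterminism in parallel
# inference kernels.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0 -- the 1.0 is absorbed by the large magnitude of b
```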

September 11, 2025 · lngnmn2@yahoo.com

the LLM upanishad

To understand what non-deterministic, “syntax level” probabilistic models are actually producing (an illusion), we have to understand how the Mind (of an external observer) works, and how it produces Maya (which has been intuitively understood since the early Upanishads) – the ultimate illusion created by the Mind itself – an inner representation of the “outside” world, which the mind (and body) uses for “decision making”. The “outside” world is inherently complex, non-deterministic and “concurrent” at the level of “compositions”, and, at the same “time”, deterministic enough at the level of the “most basic building blocks” (of biology, let’s say): “simple” molecular structures (“small” molecules) are exactly the same, exact copies or clones of each other (otherwise everything would break), while larger molecular structures (like whole proteins and their compositions) may have “flaws” or “mutations” or just simple “kinks” – a slightly different shape or resulting form. ...

September 11, 2025 · lngnmn2@yahoo.com

Let the bubble burst, for Christ's sake!

I have noticed a recent dramatic change in the behavior of the major GPT online providers – most notably, Gemini now provides just an outline of the code, full of stubs and “mocks” of real APIs, rather than the full code. This is a significant change from the previous behavior, where they would provide a semi-complete (but error-ridden) code solution. Perhaps they are mimicking the behavior of ChatGPT, which has been doing this for a while now – they “optimize” for what looks more like a “dialogue” (more like a normie-level chat), to create a better illusion of “actually conversing with an artificial intelligence”. ...

August 3, 2025 · lngnmn2@yahoo.com

The Knowledge Work Bubble

We are living through a paradigm shift, the kind described by Thomas Kuhn in “The Structure of Scientific Revolutions”. As I have mentioned many times, texts and even crappy code have become very, very cheap, just like processed junk food or low-effort street-food slop. This is the “shift”, and the end of so-called “knowledge work” as we know it. At least it is the end of the pretentious “knowledge work”, where one just pretends to be an expert in social settings, using very straightforward verbal and non-verbal cues to signal one’s “knowledge” and “expertise”, just as a priest would do in the not-so-distant past. ...

August 2, 2025 · lngnmn2@yahoo.com