LLM predictions

Social media make us stupid. To be precise - they encourage the production and emission of useless verbiage as a form of virtue signaling. The cultural change is that being “wrong” is now ok for some talking heads, and nowadays it is even possible to argue that “there is no wrong”, just “imperfect information”, you know. The older cultures were better. They had a common-sense notion of “you have no idea what you are talking about”....

November 8, 2023 · <lngnmn2@yahoo.com>

Transformers bullshit everywhere

There is another meme “scientific” paper (well, it is a “research paper”, which does not have to be correct lmao) about trying to interpret what transformers actually do. When the hype was at its peak, I wrote an article about “handwaving with too abstract math” or “sweeping the meaning under the rug”. I had a very strong intuition that I had seen this before, and now I will show it. Where have we all seen this kind of sophisticated bullshitting with abstract entities taken out of context (from another, highly remote and ephemeral level of abstraction) being used to explain a natural phenomenon?...

October 6, 2023 · <lngnmn2@yahoo.com>

LLMs For Coding

Today https://news.ycombinator.com/ is glowing bright with AI memes and buzzwords like a Christmas tree. Everyone is there, including billion-dollar corporations announcing a “CodeLama-34b” which is “designed for general code synthesis and understanding.” First of all, I personally do not want to rely, in any part of my life, on any “synthesized” (and “understood”) software, and demand an explicit opt-out. Yes, yes, I know. If I have any understanding of these subjects at all, this is a bubble and irrational exuberance....

August 26, 2023 · <lngnmn2@yahoo.com>