It is that simple. The momentum, which is partially due to the unprecedented AI bubble, is such that it has actually become “too big to fail” and too important (for more than one industry) not to be “done right”. 3.14 is getting proper tail calls in the interpreter, and 3.13 got initial support for native compilation. It will only continue to get polished by literally millions. The last fundamentally right addition was the support for proper sum types (tagged unions) as dataclasses and the related pattern-matching syntax....
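A minimal sketch of what that last addition gives you (Python 3.10+; the class and function names here are illustrative, not from any particular codebase):

```python
# A tagged union (sum type) built from frozen dataclasses, consumed
# with structural pattern matching (PEP 634, Python 3.10+).
from dataclasses import dataclass

@dataclass(frozen=True)
class Circle:
    radius: float

@dataclass(frozen=True)
class Rect:
    width: float
    height: float

Shape = Circle | Rect  # the union of the two tags

def area(s: Shape) -> float:
    match s:
        case Circle(radius=r):
            return 3.141592653589793 * r * r
        case Rect(width=w, height=h):
            return w * h

print(area(Circle(1.0)))     # 3.141592653589793
print(area(Rect(2.0, 3.0)))  # 6.0
```

Each case names its tag and destructures its payload, which is exactly the shape the classic ML languages have had all along.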
Coding with LLMs
Idiots, idiots everywhere. Now I can accurately summarize what coding with LLMs actually is in just a few sentences. Recall how people usually describe a code-maintenance job: we have this code to run, while the original developers are gone and have left us no design documentation. This hypothetical situation is exactly what you get when an LLM has finished spewing out its slop: you now have some code, very cheap or even free, but it is not yours, the underlying understanding (of the whys) is not in your head, and the original developer is already gone....
How To Program 3
Here is Dan Grossman’s Caml tutorial (refresher) for OCaml. The cool thing about it is that it shows how little we all need. https://homes.cs.washington.edu/~djg/teachingMaterials/gpl/lectures/camlTutorial.pdf He is actually a very cool guy who teaches the principles (and precise semantics) of programming using the classic languages – SML and OCaml, which were carefully designed by talented math majors to build theorem provers and proof assistants. These languages (and Erlang) ought to be “all you need”, but the world is what it is (Pootin, Trump and what not), so we have Java or C++ or, if unlucky – PHP or JavaScript....
Grok3
Well, I’ve watched it. There are a few things to realize. The code it generated for the simulation task ran without an issue. It is, however, incomprehensible without an understanding of all the details (like any other code). This is probably because they fed a lot of very similar internal code into the training phase. The gibberish from the “thinking” phase might be helpful, or it may be equally cryptic....
One More Time
This is just an unfinished draft. How could one finish something like this? I asked GPT about the Curry-Howard isomorphism and it spewed out vague, over-generalized crap (see below). The fundamental significance of the Curry-Howard isomorphism is that it is “all you need” (really) – just a few properly generalized and captured patterns (from What Is). The notions of \(\rightarrow\), \(\land\), \(\lor\), \(\forall\) and \(\exists\) are even more “real” and fundamental than their corresponding captures in mathematics (and mathematical logic, where they are way too abstract and too general, throwing causality itself out of the window)....
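A minimal illustration of the isomorphism (a sketch in Lean; the theorem names are illustrative): each typed term below is simultaneously a program and a proof, with \(\rightarrow\) as the function type, \(\land\) as the pair, and \(\lor\) as the tagged union.

```lean
-- Curry-Howard in action: every term is both a program and a proof.
-- →  : function type / implication
-- ∧  : product (pair) / conjunction
-- ∨  : sum (tagged union) / disjunction
theorem modus_ponens (P Q : Prop) : (P → Q) → P → Q :=
  fun f p => f p          -- function application *is* modus ponens

theorem and_swap (P Q : Prop) : P ∧ Q → Q ∧ P :=
  fun ⟨p, q⟩ => ⟨q, p⟩    -- swapping a pair proves commutativity of ∧

theorem or_swap (P Q : Prop) : P ∨ Q → Q ∨ P :=
  fun h => match h with   -- pattern matching on the tag proves ∨-comm
    | Or.inl p => Or.inr p
    | Or.inr q => Or.inl q
```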
AI Slop
slop, noun

Cambridge Dictionary:
- food that is more liquid than it should be and is therefore unpleasant
- liquid or wet food waste, especially when it is fed to animals

Oxford Learner’s Dictionary:
- waste food, sometimes fed to animals
- liquid or partly liquid waste, for example urine or dirty water from baths

There is also a very related term “goyslop” from internet sewers (losers are always looking for someone to blame and hate [instead of themselves])....
Deepseek In Action
Let’s do it again, because why tf not, especially given the magnitude of the current mass hysteria about this AI meme (it is literally everywhere; even on Slashdot, which is the last bastion of sanity, there are 4 articles in a row with “AI” in the title). “What are the roles of type-classes in Haskell and traits in other languages?” This is a supposedly naive and uninformed question I asked Deepseek R1 14b....
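For what it is worth, the role both mechanisms play can be sketched even in Python (a loose analogy, not Haskell; functools.singledispatch stands in for a single-parameter type-class method, and all the names are illustrative):

```python
# A shared "interface" whose implementation is selected by the type of
# the value -- the ad-hoc polymorphism that type-classes (Haskell) and
# traits (Rust, Scala) provide, minus the static checking.
from functools import singledispatch

@singledispatch
def pretty(x) -> str:
    # No "instance" registered for this type.
    raise NotImplementedError(f"no instance for {type(x).__name__}")

@pretty.register
def _(x: int) -> str:          # the "instance" for int
    return f"Int {x}"

@pretty.register
def _(x: list) -> str:         # the "instance" for lists, reusing others
    return "[" + ", ".join(pretty(e) for e in x) + "]"

print(pretty([1, 2, 3]))       # [Int 1, Int 2, Int 3]
```

The crucial difference, of course, is that Haskell resolves the instance at compile time and rejects a missing one statically, while this sketch fails only at runtime.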
Reasoning LLMs
When I was a kid they told me not to stare at the sun. I had this vision that brain structures are sort of like trees, while the “branches” are just like paths through our yard after fresh snow. Some of them remain thin, where someone just walked across absentmindedly; some get broadened by heavy re-use. Who would plow through fresh snow when one could simply follow the path that is already there....
OpenAI vs. Deepseek
The shitstorm is of the highest category and pleasing to watch. The main issue is that models can be produced (trained) at a fraction of the cost by just good and hardworking graduate students who did their homework. It reminds me of the legendary Andrew Ng, who was way above everyone else just by being smart, hardworking and systematic in building everything (deriving all the math) from scratch. The Deepseek success has the same vibes....
Deepseek R1
Memes and mirrors. Nowadays things are moving way too fast. It is not just controlled trial-and-error, it is literally throwing everything at the wall (to see what sticks). It started with that meme “Attention Is All You Need”, when they just came up with an “architecture” that stuck. That “attention” and “multi-head attention” turned out to be just a few additional layers of a particular kind. No one can explain the actual mechanisms of how exactly, or even why, the layers are as they are (abstract bullshit aside)....
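For reference, the core computation of such a layer is small enough to fit on a screen – a minimal NumPy sketch (shapes and names are illustrative) of single-head scaled dot-product attention:

```python
# softmax(Q K^T / sqrt(d_k)) V -- a data-dependent weighted average of V.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # query-key similarities
    return softmax(scores) @ V                      # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 tokens, model dimension 8
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```

“Multi-head” attention is just this computation run several times with independent learned projections of Q, K and V, concatenated and projected back – a few additional layers of a particular kind, exactly as said above.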