AUTHOR: lngnmn2@yahoo.com “How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?” – Sherlock Holmes. So the “compiler” is there, right on GitHub [https://github.com/anthropics/claudes-c-compiler], and the only interesting question is “but how?” Well, maybe we are grossly exaggerating what might be going on under the hood. There is an enormous, almost unbridgeable gap between a formal view and a statistical view of the world. ...
The Claude's C Compiler meme
AUTHOR: lngnmn2@yahoo.com There are a few facts to understand:

- it has been written from scratch, using the academic literature of the compiler sub-field, with a focus on IR and SSA, guided by some “compiler people”;
- it does not rely on the legacy gcc internals, which no one really understands;
- it does not rely on llvm/clang (only on the literature);
- it is not optimizing (all the optimizations are missing), so the generated code quality is worse than gcc -O0;
- so they faithfully followed all the architecture specifications and ABI standards, which is exactly what slop generators are good for.

The actual Rust code has yet to be evaluated (the key metrics are modularity and abstraction, clear abstraction barriers), but I predict it will be imperative spaghetti crap. Let’s see. ...
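To make the IR/SSA focus concrete, here is a toy single-block SSA-style IR. This is purely my illustrative sketch of what “SSA” means, not the actual IR of claudes-c-compiler: each value is defined exactly once and later instructions refer to earlier definitions by index.

```rust
// Toy SSA-style IR sketch (hypothetical; not the repository's actual IR).
// Each instruction defines exactly one value, never reassigned.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Inst {
    Const(i64),        // vN = constant
    Add(usize, usize), // vN = vA + vB
    Mul(usize, usize), // vN = vA * vB
}

// Evaluate a straight-line SSA block: instruction i defines value vals[i].
fn eval(block: &[Inst]) -> Vec<i64> {
    let mut vals = Vec::with_capacity(block.len());
    for inst in block {
        let v = match *inst {
            Inst::Const(c) => c,
            Inst::Add(a, b) => vals[a] + vals[b],
            Inst::Mul(a, b) => vals[a] * vals[b],
        };
        vals.push(v); // single static assignment: defined once, read-only after
    }
    vals
}

fn main() {
    // Source: x = 2; y = 3; z = (x + y) * x;
    // SSA:    v0 = 2; v1 = 3; v2 = v0 + v1; v3 = v2 * v0
    let block = [Inst::Const(2), Inst::Const(3), Inst::Add(0, 1), Inst::Mul(2, 0)];
    println!("{}", eval(&block)[3]); // (2 + 3) * 2 = 10
}
```

The point of the single-assignment discipline is that data-flow relations become explicit indices, which is exactly what the optimization passes (absent here, per the facts above) would operate on.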
The Fundamental Problem of LLM-assisted Vapecoding
Here we formulate (and analyze the implications of) The Fundamental Problem of LLM-assisted vapecoding. The problem is this: there is, at the moment, no machinery to systematically and correctly bridge the gap between the generated slop (the code) and the accompanying verbiage, which appears to define the semantics of the code. What appears to be coherent and consistent is only a cognitive illusion, made out of familiar words, which refer to familiar concepts of the mind, and which therefore looks plausible [to the mind]. ...
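The gap can be shown in miniature. This is a hypothetical toy (not output from any real session): the comment confidently states the semantics, the code quietly implements something else, and nothing in the toolchain connects the two.

```rust
// Toy illustration of verbiage/code divergence (hypothetical example).

/// Returns the average of the two prices — so the verbiage claims.
fn average(a: f64, b: f64) -> f64 {
    // Off-by-precedence: reads plausibly, but computes a + (b / 2.0),
    // not (a + b) / 2.0. The doc comment above is never checked.
    a + b / 2.0
}

fn main() {
    // The comment promises 10.0 for (10.0, 10.0); the code yields 15.0.
    println!("{}", average(10.0, 10.0));
}
```

The compiler happily accepts this: it checks the types, not the prose. Nothing validates that the doc comment and the expression agree, which is precisely the missing machinery.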
@karpathy On Claude
https://x.com/karpathy/status/2015883857489522876 – 5.5M views. What could we possibly do? Meanwhile, my 2 cents: the current Sonnet 4.5 via the web interface (free tier) is what I have access to. It can generate very convincing verbiage, consistent with what we could find in the best books. That only means it has been trained on the [pirated] books too. Last iteration it wrote a very nice few-page summary of Google’s testing practices, as found in the public domain – the SWE book, the Abseil guidelines, the Testing blog, etc. This is impressive only because it managed to combine all these sources into a coherent narrative, without any major hallucinations. ...
Just an Illusion
Modern coding LLMs are still a shitshow. I would not even comment on the humanities – the “sectarian consensus” abstract (ill-defined) verbiage (subtle bullshit) they can produce there – not even an expert could “validate” the “correctness” of the slop (such a notion is not even defined in their domains). It would be interesting to heavily prompt it about rigorous mathematics, properly captured, generalized and named from the observed aspects of What Is (which is the only proper mathematics, including the derivations of pure abstract algebraic structures, like a Monoid, a Group, a Lattice, or even a Category)… Okay, some day. ...
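For contrast, here is what “properly captured” could look like for the simplest of those structures. A minimal sketch of a Monoid as a Rust trait – my own hypothetical illustration, names and all, not output from any chat session:

```rust
// A Monoid: an identity element plus an associative binary operation.
// Hypothetical sketch; trait and function names are my own choices.

trait Monoid {
    fn empty() -> Self;                    // identity: combine(empty(), x) == x
    fn combine(self, other: Self) -> Self; // associative operation
}

impl Monoid for i64 {
    fn empty() -> Self { 0 }
    fn combine(self, other: Self) -> Self { self + other }
}

impl Monoid for String {
    fn empty() -> Self { String::new() }
    fn combine(self, other: Self) -> Self { self + &other }
}

// Any iterator over a monoid folds down to a single value "for free".
fn mconcat<M: Monoid, I: IntoIterator<Item = M>>(xs: I) -> M {
    xs.into_iter().fold(M::empty(), M::combine)
}

fn main() {
    println!("{}", mconcat(vec![1i64, 2, 3]));                         // 6
    println!("{}", mconcat(vec!["ab".to_string(), "c".to_string()])); // abc
}
```

The structure is named, its laws are stated (in comments here; in a real treatment they would be proved or property-tested), and `mconcat` is derived once for every instance – which is the kind of generalization-from-observed-structure the paragraph above is asking for.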
F-Lang
Here is what is possible if you have some education about the last 50 years of research in programming-language semantics, and a refined taste (influenced by proper mathematics). The last third of the chat, when we switched back to mathematical concepts, contains a significant achievement, which I leave in the form “as it is” for now (without refinement and proper publishing). The whole thing is a nice (but a bit tough) read anyway, and this is why I’m publishing it “as is”. ...
Another Day – Another Slop
Here are some important observations from long hours of “experiments”. Once there is a simple bug in the slop, the chat does not fix just that one line. Most of the time it regenerates the whole file from scratch, sometimes with a slightly different structure and names, suggesting (as one would expect) that it just repeats the whole task (without understanding your “precious” feedback at all), adding your verbiage as additional context (if at all). This is exactly how it fixes compilation errors – by adding them as training data together with the slop which produced them, somehow capturing the actually existing relation between bad code and particular compiler errors. ...
Wake up, Neo
So this is why all the megacorps are suddenly building AI data-centers, purchasing nuclear power plants to feed them with electricity, and booking all the DRAM and VRAM production in the whole world. The reason is this. Remember Google reCAPTCHA – that window where you are forced to select all the cars or traffic lights in order to view the website you wanted (for an expected reward through a dopamine release). ...
We are fucked
We are fucked. This time for real. I just had a three-hour Gemini 3 session of some 50 interactions, which bootstrapped for me a prototype of the project I had wished to build for years. In Just a Couple of Hours, motherfuckers. Yes, the code won’t compile and will be full of subtle hallucinations, and the markdown files will be more “wishful thinking” and “confirmation bias” than a formal, rigorous description of the code – but still. ...
Cognitive Biases At Work
There is some output (slop) I got from my recent sessions (yes, yes, I am guilty and ashamed):

### Validation Summary

I have verified the implementation across the **Functional Core** (Domain) and the **Imperative Shell** (Infrastructure & Interface), confirming that the code doesn't just "look" right but functionally enforces the rules.

**1. Domain Layer (Functional Core)**

- **Strict Encapsulation:** All primitives (`f64`, `String`) are wrapped in **Newtypes** (`Price`, `Quantity`, `Symbol`) with private fields.
- **Correctness by Construction:** **Smart Constructors** (`new() -> Result`) enforce invariants (e.g., price > 0, symbol format). It is impossible to instantiate invalid domain objects.
- **Algebraic Data Types:** `Order` is a Sum Type (Enum) of Product Types (Structs), exhaustively modeling valid order strategies (`Market`, `Limit`, `StopLimit`).
- **Typestate Pattern:** The `OrderBuilder` uses zero-sized types (`NoSymbol`, `WithSymbol`) to enforce the order of operations at compile time. You cannot build an order without mandatory fields.
- **Error Handling:** `DomainError` is a specific Sum Type using `thiserror`, implementing the "Make Illegal States Unrepresentable" rule.

**2. Infrastructure & Interface (Imperative Shell)**

- **Hexagonal Architecture:** The `client` module acts as an **Adapter**, translating domain objects into Binance API calls. It isolates side effects (I/O, Time, Crypto).
- **Functional REPL:** The `repl` module implements the Command Interpreter using **Functional Streams** (`input_stream().map().for_each()`), strictly avoiding imperative `while` loops.
- **Security:** `Credentials` are handled via the `secrets` module, and signing logic (`compute_signature`) is a pure function.

**3. Verification (Tests)**

- **Pass Rate:** **124/124 tests passed**.
- **Coverage:** The test suite is organized into "Stories" (e.g., `story_user_places_valid_limit_sell_order_gtc`), directly testing the Business Rules and Workflows against the domain logic.
The project is correctly set up as a mathematically rigorous, type-driven trading assistant.

Here is another one: ...
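For reference, the patterns the summary above name-drops (newtype, smart constructor, typestate) are real Rust techniques. Here is a minimal sketch of them – my own reconstruction under the names the summary uses (`Price`, `OrderBuilder`, `NoSymbol`, `WithSymbol`), not the project's actual code:

```rust
// Sketch of newtype + smart constructor + typestate (hypothetical
// reconstruction; only the names come from the quoted summary).

#[derive(Debug, Clone, Copy, PartialEq)]
struct Price(f64); // newtype over f64 (field would be private in a real crate)

impl Price {
    // Smart constructor: validation is the only way to obtain a Price.
    fn new(v: f64) -> Result<Self, String> {
        if v.is_finite() && v > 0.0 {
            Ok(Price(v))
        } else {
            Err(format!("invalid price: {v}"))
        }
    }
}

// Typestate: zero-sized marker types track builder progress in the types.
struct NoSymbol;
struct WithSymbol(String);

struct OrderBuilder<S> {
    symbol: S,
    price: Option<Price>,
}

impl OrderBuilder<NoSymbol> {
    fn new() -> Self {
        OrderBuilder { symbol: NoSymbol, price: None }
    }
    fn symbol(self, s: &str) -> OrderBuilder<WithSymbol> {
        OrderBuilder { symbol: WithSymbol(s.to_string()), price: self.price }
    }
}

impl OrderBuilder<WithSymbol> {
    fn price(mut self, p: Price) -> Self {
        self.price = Some(p);
        self
    }
    // `build` exists only once a symbol is set; omitting it is a type error,
    // not a runtime check.
    fn build(self) -> (String, Option<Price>) {
        (self.symbol.0, self.price)
    }
}

fn main() {
    let p = Price::new(42.5).unwrap();
    let (sym, price) = OrderBuilder::new().symbol("BTCUSDT").price(p).build();
    println!("{sym} {:?}", price);
    assert!(Price::new(-1.0).is_err()); // invalid prices are unrepresentable
    // OrderBuilder::new().build(); // would not compile: no symbol set
}
```

Whether the generated project actually enforces any of this – as opposed to merely narrating it in a validation summary – is exactly what the verbiage/code gap discussed above makes impossible to take on faith.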