Bullshit Bullshit Everywhere
“The Darwin Gödel Machine: AI that improves itself by rewriting its own code” https://sakana.ai/dgm/ Here is what is actually going on. A model trained on a large amount of code is, in principle, no different from any other LLM – it is just a statistical model that predicts the next token based on the previous ones. It does not understand the code it spews out; it does not “know” what it is doing. These are just mathematical procedures (not even functions in the mathematical sense) – given an input encoded in a particular way, they produce an output, and since the output is sampled, it is not even the same for the same input. ...
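To make that last point concrete, here is a toy sketch in plain Python – no real model, the logits are made up – of what “predicts the next token” amounts to: the model outputs a score distribution and a token is sampled from it, which is exactly why the same prompt can produce different output every time.

```python
# A minimal sketch (not any particular model's API) of why "same input,
# different output": generation is sampling from a probability distribution
# over next tokens, not evaluating a deterministic function.
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Pick a next-token id by sampling from softmax(logits / temperature)."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                          # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()   # probability distribution
    return rng.choice(len(probs), p=probs)

# Toy "model": fixed, made-up scores standing in for whatever the network
# computes for one particular prompt. The prompt never changes, yet the
# sampled continuation differs from run to run.
toy_logits = [2.0, 1.5, 0.3, -1.0]   # scores for 4 hypothetical tokens
print([sample_next_token(toy_logits) for _ in range(10)])
```

Run it a few times: same input, different token sequence each time – a procedure, not a function.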