Well, I will say this: I'm not doubting that it's happening, the GitHub repo is sitting right there for anyone to see, but however they are programming with it, it's not a simple process of sticking ideas into GPT-4. Just for the last couple of days I've been trying to get it to create a Richter scale visualizer, which I thought would be a simple little thing, and it can't do it; it's literally like trying to get a baby to help you code. Every single iteration is some crazy misinterpretation or misconception of what you're asking for. Even if you just cut and paste and say "redo this code, it doesn't work," and then re-paste your entire idea, it gives you some bizarre permutation. It becomes performance art, really, where you're just thinking, okay, how is it going to screw it up this time? It's absolutely crazy to think that these guys are getting these kinds of results out of the AI. Obviously there's something that maybe I don't understand or could do better. But still, this is pretty amazing, yes, though it's still very much a hacker thing. The true innovation is that the value payload has moved upward from the execution layer to the innovation layer, to the idea itself, which I think is really the interesting thing here. I don't know how to code, and I often have ideas that require coding that I can't execute on, but this would change that, potentially.
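For context, the task really is small. Here's a minimal sketch of what a "Richter scale visualizer" could look like (this is my own illustrative version with matplotlib, not the spec I actually gave GPT-4): it just plots how relative ground-motion amplitude and radiated energy blow up per magnitude unit, on a log axis.

```python
# Minimal Richter scale visualizer sketch (illustrative, not GPT-4 output).
# Amplitude grows 10x per magnitude unit; radiated energy grows ~10^1.5 (~31.6x).
import numpy as np
import matplotlib.pyplot as plt

magnitudes = np.arange(1, 10)           # Richter magnitudes 1 through 9
amplitude = 10.0 ** magnitudes          # relative ground-motion amplitude
energy = 10.0 ** (1.5 * magnitudes)     # relative radiated energy

fig, ax = plt.subplots()
ax.bar(magnitudes, amplitude, color="steelblue", label="relative amplitude")
ax.plot(magnitudes, energy, "r--", label="relative energy (~31.6x per unit)")
ax.set_yscale("log")                    # log axis makes the scale legible
ax.set_xlabel("Richter magnitude")
ax.set_ylabel("relative scale (log)")
ax.legend()
plt.show()
```

Something on this order is what I kept expecting back, which is why each mangled iteration was so baffling.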
This little thing reminds me of a Terence McKenna quote:
"We are creating a technological entity, and the more we immerse ourselves in its configurations, the more it will become like us, and we like it, until the boundary between us is blurred and the entity is born... And like the fetus—or, yes, poised at the head of the birth canal—we don't know where we're going."
Save your money. Only 3.5 can really code, and it does a pretty poor job. I do think that if they can get recursive versions looking at each other's code then maybe they could do something, but right now the amount of quality control you have to do on it, compared with the amount of redoing and the obvious stupid errors, is just too much to make it worth it.
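The "recursive versions looking at each other's code" idea is easy enough to prototype. A rough sketch of that loop, using the pre-1.0 openai Python client that was current at the time (the prompts, the model choice, and the three-round count are all my own assumptions, not a tested pipeline):

```python
# Sketch of a generate -> review -> revise loop between model calls.
# Assumes openai<1.0 and that openai.api_key is already set.
import openai

def ask(messages):
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message.content

spec = "Write a Python function converting a Richter magnitude to relative energy."
code = ask([{"role": "user", "content": spec}])

for _ in range(3):  # a few review/revise rounds; more rounds cost more
    review = ask([{"role": "user",
                   "content": f"Review this code for bugs and spec violations:\n{code}"}])
    code = ask([{"role": "user",
                 "content": f"Spec: {spec}\nCode:\n{code}\nReview:\n{review}\n"
                            "Rewrite the code fixing every issue the review raises."}])
print(code)
```

Whether the reviewer pass actually catches the "obvious stupid errors" is the open question; in my experience the same model happily approves its own mistakes.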