Well, I'm not doubting that this is happening (the GitHub repo is sitting right there for anyone to see), but however they're programming, it's not a simple process of just sticking the ideas into GPT-4. For the last couple of days I've been trying to get it to create a Richter scale visualizer, which I thought would be a simple little thing, and it can't do it. It's literally like trying to get a baby to help you code. Every single iteration is some crazy misinterpretation or misconception of what you're asking it to do. Even if you just cut and paste and say "redo this code, it doesn't work," and then repaste your entire idea, it gives you some bizarre permutation. It becomes performance art, really, where you're just thinking: okay, how is it going to screw it up this time? It's absolutely crazy to think these guys are getting these kinds of results out of the AI. Obviously there may be something I don't understand or could be doing better. Still, this is pretty amazing, even if it's very much a hacker thing for now. The true innovation is that the value payload has moved upward from the execution layer to the innovation layer, to the idea itself, which is really the interesting thing here, I think. I don't know how to code, and I often have ideas that require coding that I can't execute on; this could change that, potentially.
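For reference, here's a minimal sketch of the kind of visualizer I was asking for, written by hand rather than by GPT-4 (the names and chart layout are my own assumptions): a log-scale bar chart showing how shaking amplitude grows tenfold with each magnitude step.

```python
# A simple Richter scale visualizer: relative shaking amplitude per
# magnitude, plotted on a log axis since the scale itself is logarithmic.
import matplotlib.pyplot as plt
import numpy as np

magnitudes = np.arange(1, 10)        # Richter magnitudes 1 through 9
amplitude = 10.0 ** magnitudes       # amplitude grows 10x per unit of magnitude

fig, ax = plt.subplots()
ax.bar(magnitudes, amplitude, color="tab:red")
ax.set_yscale("log")                 # log axis so each bar step looks uniform
ax.set_xlabel("Richter magnitude")
ax.set_ylabel("Relative shaking amplitude")
ax.set_title("Richter scale: each step is 10x the amplitude")
plt.show()
```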
This little thing reminds me of a Terence McKenna quote:
"We are creating a technological entity, and the more we immerse ourselves in its configurations, the more it will become like us, and we like it, until the boundary between us is blurred and the entity is born... And like the fetus—or, yes, poised at the head of the birth canal—we don't know where we're going."
I use it a lot as a tool to generate code, but not a full program. Basically, it lets me be an engineer, while it goes and fills the role of a code monkey. It's not perfect, but it saves a lot of time and toil.
GPT-4 is not good for code; GPT-3.5 is.
Also, remember that if it runs out of room mid-response, you can tell it "continue previous response".