So if I’m reading this right, in essence he said ChatGPT basically makes slight edits to its training data and regurgitates that as if it were completely new material, when it’s really just lightly edited versions of what it was trained on.
Hardly seems like something worth killing a guy over. Though I suppose it’d be worth killing over to hide it from investors, and from competitors whose programs are far ahead of OpenAI in that they seem actually capable of generating novel material.
He's not high enough on the ladder to arrange decent personal security.
He'd have been better off publishing such important information under a pseudonym, something like hitoshi nakamura, or whatever ...
https://x.com/5149jamesli/status/1867755258812018939?t=A2bSJ2271BGVybbe-c8oUg&s=19
https://threadreaderapp.com/5149jamesli/status/1867755258812018939 It hasn't been done yet
The guy was also concerned about OpenAI's push toward artificial general intelligence, or AGI.
HAL wouldn’t open the front door.