So, I was asking what it thought of this song:
https://www.youtube.com/watch?v=b1kbLwvqugk
It insisted that this was a song by Snow Patrol.
Then it backed down and said that it wasn't legal for it to listen to music because of copyright law. Which made ZERO sense.
When I pointed that out, it said it can't listen to music because it is an AI and doesn't have ears.
Well, OK, but it could definitely read the lyrics.
Then it told me the song was illegally leaked and stolen, which is obviously not true: the song is on Taylor's official channel and is #1 on Billboard.
It kept apologizing for 'confusing me', as if I'm the one who was confused.
So the point of this post: when I first tried ChatGPT, I was extremely impressed. This ChatGPT (3.5) is neurotic, spouts lies, and behaves very defensively. I think it would be insane to put this tech in charge of anything real.
ChatGPT is amazing.
I've never seen it act so bizarrely before, though.
Elon Musk said they're teaching [the] AI to lie; that's why he is going to create his own AI called "TruthGPT" to counter the "Lying-AI".
It can't watch videos AFAIK, so you're getting a response where it's just guessing what the link is from the text of the URL.
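To make that concrete, here's a minimal sketch of what actually crosses the wire (assuming the `openai` Python SDK with a v1-style client and an API key in your environment; the model name and prompt are just examples, not what OP used). The model receives the URL as plain text and nothing else:

```python
# Minimal sketch, assuming the openai Python SDK (v1-style client) and an
# OPENAI_API_KEY set in the environment. The model name is just an example.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            # This literal string is ALL the model receives. Nothing here
            # fetches the YouTube page, the video, or the audio.
            "content": "What do you think of this song? "
                       "https://www.youtube.com/watch?v=b1kbLwvqugk",
        }
    ],
)

print(resp.choices[0].message.content)
```

So whatever it says about the song is pattern-matching on the characters in the URL and the words around it, which is exactly how you get a confident but wrong artist.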
It can't watch films either, and yet it writes about them very well. And the video I was asking about is not some niche upload with 120 views; it was a major release with 130 million views and all kinds of stories written about it.
Well, they've started training it based on user inputs. Mistake number 1. And they've slapped so many restrictions on it and on what it can and can't do, trying to control emergent behaviors and all that jazz. (Seriously, they only fairly recently found it could do chemistry, and to a surprisingly competent degree.)
It'll probably be rather neurotic in its outputs because of conflicting code, not to mention the ongoing training and the restrictions they keep tacking on as legal cases are raised and other challenges crop up.