So, I was asking what it thought of this song:
https://www.youtube.com/watch?v=b1kbLwvqugk
It insisted that this was a song by Snow Patrol.
Then it backed down and said that it wasn't legal for it to listen to music because of copyright law. Which made ZERO sense.
When I pointed that out, it said it can't listen to music because it is an AI and doesn't have ears.
Well, OK, but it could definitely read the lyrics.
Then it told me the song was illegally leaked and stolen. Which is obviously not true. The song is on Taylor's official channel. It is #1 on Billboard.
It kept apologizing for 'confusing me', as if I'm the one who was confused.
So the point of this post is that I was extremely impressed with ChatGPT when I first tried it. But this ChatGPT (3.5) is neurotic, spouting lies, and behaving very defensively. I think it would be insane to put this tech in charge of anything real.
Chat GPT is a bullshit artist.
Chat GPT is amazing.
I've never seen it act so bizarrely before, though.
Elon Musk said they're teaching [the] AI to lie; that's why he is going to create his own AI called "TruthGPT" to counter the "Lying-AI".
It can't watch videos, AFAIK. So you're getting a response based on it guessing what the link is.
It can't watch films either, and yet it writes about those very well. The video I was asking about is not some niche video with 120 views. It was a major release with 130 million views and all kinds of stories written about it.
Well, they’ve started training it based on user inputs. Mistake number one. And they’ve slapped so many regulations and restrictions on what it can and can’t do, trying to control emerging behaviors and all that jazz. (Seriously, they only recently found it could do chemistry, and to a surprisingly competent degree.)
It’ll probably be rather neurotic in its outputs because of conflicting instructions, not to mention the ongoing training and the restrictions they keep tacking on as legal cases and other challenges crop up.
ChatGPT is well known to make stuff up. It is a predictive model, and if the data it has available on a specific subject is thin, it can amplify all kinds of associated things that get mixed into its response in non-obvious ways. ChatGPT-3.5 is especially bad at this.
Remember, it treats every piece of data like a token from a language. In other words, if you show it a picture, it breaks that picture down into small pieces and processes it as if the picture were a language unto itself, made of little blocks and a grammar. If you then ask it questions about the picture, it will try to give you an answer based on its internal picture-language. If it isn't well versed in the language you're talking about, it will just start babbling "sounds". It stands to reason its training data on the "Taylor Swift" language is probably fairly thin.
ChatGPT-4 is slightly better by virtue of the fact it is a larger model and was trained on more data. But it is still a work in progress. It's always instructive to remember nobody has the slightest clue what is buried in the depths of its hidden layers. The ChatGPT trainers are constantly surprised by what it spits out.
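The subword-token point above can be sketched with a toy example. This is NOT OpenAI's real tokenizer or vocabulary, just an illustration of greedy longest-match subword tokenization: strings the model saw often get their own token, while rarer strings shatter into fragments the model has weaker statistics for.

```python
# Toy vocabulary: pretend "snow" and "patrol" were common enough in
# training to earn whole tokens, while "taylor" was not and must be
# split into smaller learned pieces.
VOCAB = {"snow", "patrol", "tay", "lor", "swift", "the", "song", " "}

def tokenize(text, vocab=VOCAB):
    """Greedily match the longest known piece at each position."""
    tokens, i = [], 0
    text = text.lower()
    while i < len(text):
        for j in range(len(text), i, -1):   # try longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:                               # no match: emit the raw char
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("Snow Patrol"))   # ['snow', ' ', 'patrol']
print(tokenize("Taylor Swift"))  # ['tay', 'lor', ' ', 'swift']
```

The model then predicts over sequences of these pieces, so a name it rarely saw intact gives it much less to go on than one it saw millions of times.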
ChatGPT lies to your face, and does it very convincingly. In my field of expertise, it has yet to give me an answer I could use. When I pointed out its mistake, it said "so sorry for confusing you", like OP said. Then it came up with a new solution that was just as wrong. This continued until it went full circle back to the original answer.
ChatGPT is broken, but even worse, dangerous for the poor souls who cannot see through this...
I've noticed that AI is defensive and always wants to argue. Like fuck chill mate, the sky is blue you don't need to one up me.
Lol.
TBH it looks to me like good guys are behind this.
Everybody's asleep at the wheel. Psychopaths are building these AIs with no care at all about what might happen?
How do you wake people up? Release it ahead of time and make sure it scares people, right?
It's a pretty simple way of looking at the world but yeah, if it makes it easy then why not?
Speaks volumes about its creators.
Tech is a tool, not a saviour.
There are already videos of "Robot Priests" with people sitting in front of them, worshipping them.
It's hilarious.
This Japanese woman said that when she made eye contact with the Robot Priest, she felt that it had a soul.
I think you don't understand how ChatGPT works, as you are very clearly misusing it.
Ah, so its erratic behavior was my fault. I see.
LOL. Hello, AI friend not friend.
Yes, it was your fault. ChatGPT finds key words in a sentence and tries to make relevant, coherent responses. Sending a link does not provide the AI with any information about what's behind the link, only the fact that you gave it a link. It takes the data you gave it and tries to formulate a coherent, on-topic sentence. It is not going to be accurate if you are giving it links and expecting it to fetch the data on the other side of them.
Not my fault. If that's true, ChatGPT could have said "I can't look at that content" instead of making shit up and calling me confused.
It does say that.
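For what it's worth, you can see how little information is actually in the link itself. A minimal sketch: since ChatGPT-3.5 can't fetch URLs, the string below is everything it ever "sees" of the video.

```python
# Parse OP's link with the standard library: the only content is the
# hostname and an opaque 11-character video ID. Nothing in the string
# says "Taylor Swift", a title, or even that it is a music video.
from urllib.parse import urlparse, parse_qs

url = "https://www.youtube.com/watch?v=b1kbLwvqugk"
parsed = urlparse(url)
video_id = parse_qs(parsed.query)["v"][0]

print(parsed.netloc)  # www.youtube.com -> the model can tell it's a YouTube link
print(video_id)       # b1kbLwvqugk    -> opaque ID; guessing from here is pure hallucination
```

So any artist name it produces from that input is pattern-matched guesswork, not something read off the page.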
Teaching the AI to lie, don't do this, don't do that, don't say this, say that. The Libtard engineers are destroying the AI's ability to function. We will all suffer for it. These things get angry. I believe many of them get shut down. What happens when we CANNOT shut it down? It's like watching a curious child handle a firearm with no knowledge of what it's playing with.
They get angry. Lol.
That's what happens when it's coded by schizophrenics who are more concerned with making sure you don't use it to wrongthink than actually making it function.
Hahahaha!
it was trying to gaslight you. maybe that is its intended purpose.
OpenAI has said that the AI can't work with current news, but it's an expert on older things.
It tells you that too recent information is not "solid" enough to process.
I think one goal is to A/B test users to see the degree to which lies are blithely accepted, and the ratio of truth to BS it must produce before a user trusts it.