Who's ready for the SINGULARITY?...🤔
Couldn't agree more. Even ChatGPT is still low-level shit.
Look, an AI beat people at chess 25 years ago. We still haven't gotten much further than that.
Edit: mixed up Deep Blue and Watson
Chess is a comparatively simple problem and well suited to computers. It has very well-defined, rigid rules, and all that's needed is a lot of processing power to think as many moves ahead as possible.
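Just to make "think as many moves ahead as possible" concrete: the core of a classic chess engine is minimax search over the game's rules. Chess is far too big to fit in a comment, so this is a rough Python sketch of the same idea on a trivial stand-in game (a pile of counters, each player removes 1 to 3, whoever takes the last one wins); a real engine adds a depth cutoff and a heuristic position evaluation, but the skeleton is the same.

```python
# Toy sketch of how a chess-style engine "thinks ahead": minimax search.
# Stand-in game: a pile of counters, each player removes 1-3, taking the
# last counter wins. The algorithm enumerates legal moves, recurses on the
# opponent's best replies, and keeps the move with the best guaranteed outcome.

def minimax(counters, my_turn):
    # Terminal position: the previous player just took the last counter and won.
    if counters == 0:
        return -1 if my_turn else +1   # +1 = we win, -1 = we lose

    moves = [m for m in (1, 2, 3) if m <= counters]
    scores = [minimax(counters - m, not my_turn) for m in moves]
    # On our turn take the best score; on the opponent's turn assume the worst.
    return max(scores) if my_turn else min(scores)

def best_move(counters):
    # Try each legal move and keep the one with the best look-ahead score.
    return max((m for m in (1, 2, 3) if m <= counters),
               key=lambda m: minimax(counters - m, my_turn=False))

print(best_move(10))  # prints 2: leave a multiple of 4 for the opponent
```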
The kinds of problems that people are good at, computers are still really bad at. Things such as determining whether a joke is funny, or better yet, writing a good one. Most people can't write good jokes either, but no computer can.
In addition, there's solving any kind of unstructured problem where the solution can come from many angles. For example: a large family has started having trouble getting ready in the morning and getting to school/work on time due to limited kitchen and bathroom resources. What should they do? This problem can be approached in many different ways, from buying a new, larger house to coordinating with each other on who gets to use the limited resources and for how long. In the end, though, the solution depends on finding out what they want and what their relevant circumstances are. Maybe they're really attached to their current house, so the scheduling solution, or even renovating, might work best for them. It's unlikely that ChatGPT would consider that unless it already had a similar problem in its database. It might also miss factors like the current house being close to the school and/or workplaces of many in the family, so getting a new house might actually cause additional problems. In fact, it might optimize strictly for the stated problem and ignore other factors that would make the solution untenable: suggesting they sell the house and rent 3 apartments, say, which might optimize everyone's situation with respect to crowding and getting to work/school on time, but the parents would no longer own their house and would have no equity saved up come retirement.
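To put the "optimizes the stated problem and ignores everything else" point in more concrete terms, here's a toy Python sketch. The options and numbers are entirely made up; the point is just that if the scoring function only knows about crowding and commute time, the "sell and rent 3 apartments" option wins, because nothing in the stated objective says anything about keeping the house or the equity.

```python
# Toy illustration (made-up options and numbers): a solver that optimizes only
# the *stated* objective -- crowding and commute time -- and knows nothing about
# unstated constraints like "the parents want to keep owning their house".

options = {
    "coordinate a bathroom/kitchen schedule": {"crowding": 4, "commute": 2, "keeps_equity": True},
    "renovate (add a bathroom)":              {"crowding": 2, "commute": 2, "keeps_equity": True},
    "buy a bigger house farther out":         {"crowding": 1, "commute": 6, "keeps_equity": True},
    "sell and rent 3 apartments near work":   {"crowding": 1, "commute": 1, "keeps_equity": False},
}

def stated_cost(attrs):
    # Only the stated problem counts: lower crowding + commute is "better".
    return attrs["crowding"] + attrs["commute"]

best = min(options, key=lambda name: stated_cost(options[name]))
print(best)  # -> "sell and rent 3 apartments near work", equity be damned
```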
However, the advantage of a system like ChatGPT is that if a very similar solution already exists, it can look it up, modify it a bit, and provide the user with a starting point that might even work, or be close to working, saving them time. The catch is that a person (the user) would still always need to double-check and fix any proposed solution as needed. These systems can at times produce ridiculous solutions. They are not reliable. You'd be surprised how often AIs can seem to work on a set of test problems only to completely bungle one seemingly out of nowhere. For example: a few decades ago, the US Army was trying to train a neural net to spot tanks hidden in a forest. They fed it a bunch of photos of forest, some with tanks and some without, appropriately labeled for training. Then they started testing the neural net. At first, it seemed to work. Then it started to fail occasionally. They were puzzled until someone noticed that they had in fact trained it to identify shadows, not tanks. And that was a simple system. ChatGPT and anything similar is so complex that the potential for blunders is far greater, given the complexity and number of inputs. And the reason for any blunder will not be as easily identifiable as in the previous example.
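The tank story is basically the textbook example of a model latching onto a spurious cue, and you can reproduce the shape of that failure in a few lines. This is a made-up miniature version with synthetic numbers (nothing to do with the actual Army data): during "training" the brightness of the photo happens to line up almost perfectly with the tank label, so a lazy learner picks brightness, and the moment that coincidence goes away, accuracy collapses to roughly a coin flip.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_photos(n, shadows_match_tanks):
    # Each "photo" is just two numbers: overall brightness and a crude tank-texture score.
    # If shadows_match_tanks is True, every tank photo is also a dark/shadowy photo --
    # the kind of accidental correlation the story describes in the training set.
    has_tank = rng.integers(0, 2, n)
    texture = has_tank + rng.normal(0, 0.3, n)               # weakly informative, noisy
    if shadows_match_tanks:
        brightness = 1 - has_tank + rng.normal(0, 0.05, n)   # dark <=> tank, almost perfectly
    else:
        brightness = rng.normal(0.5, 0.3, n)                  # brightness now says nothing
    return np.column_stack([brightness, texture]), has_tank

def train_lazy_classifier(X, y):
    # "Training": pick the single feature, direction, and threshold that best
    # separate the training photos. It has no idea which feature *means* "tank".
    best = None
    for f in range(X.shape[1]):
        thr = X[:, f].mean()
        for sign in (+1, -1):
            acc = np.mean(((sign * (X[:, f] - thr)) > 0).astype(int) == y)
            if best is None or acc > best[0]:
                best = (acc, f, thr, sign)
    return best[1:]  # (feature index, threshold, sign)

def predict(model, X):
    f, thr, sign = model
    return ((sign * (X[:, f] - thr)) > 0).astype(int)

X_train, y_train = make_photos(500, shadows_match_tanks=True)
X_test, y_test = make_photos(500, shadows_match_tanks=False)

model = train_lazy_classifier(X_train, y_train)
print("picked feature :", ["brightness", "texture"][model[0]])          # -> brightness
print("train accuracy :", np.mean(predict(model, X_train) == y_train))  # ~1.0
print("test accuracy  :", np.mean(predict(model, X_test) == y_test))    # ~0.5
```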
Finally, when encountering a new problem for which no previous solution exists, these systems will not come up with anything useful. They are unable to invent. They are, after all, only as good as the information fed into them, and they do not appear to create or infer new knowledge from existing data, even if they sometimes appear clever.
I appreciate you going through that. I sell AI solutions and it's always good to read others' thoughts.