Welcome to General Chat - GAW Community Area
This General Chat area started off as a place for people to talk about things that are off topic; however, it has quickly evolved into a community and has become an integral part of the GAW experience for many of us.
Based on its evolving needs and plenty of user feedback, we are trying to bring some order and institute some rules. Please make sure you read these rules and participate in the spirit of this community.
Rules for General Chat
- Be respectful to each other. This is of utmost importance, and comments may be removed if they are deemed disrespectful.
- Avoid long, drawn-out arguments. This should be a place to relax, not to waste your time needlessly.
- Personal anecdotes, puzzles, cute pics/clips - everything is welcome.
- Please do not spam at the top level. If you have a lot to post each day, try to post it all together in one top-level comment.
- Try to keep things light. If you are bringing in deep stuff, try not to go overboard.
- Things that are clearly on-topic for this board should be posted as a separate post and not here (except if you are new and still getting the feel of this place).
- If you find people violating these rules, deport them rather than start an argument here.
- Feel free to give feedback, as these rules are expected to keep evolving.
In short, imagine this thread as a local community hall where we all gather and chat daily. Please be respectful to others in the same way.
Bonjour bonjour!🤓
I just had an Epiphany:
The biggest AI bots are trained using Reddit comments, AI bots, Fecebook posts…
So most of their training material comes from attention-craving « influencers » or simply morons.
ChatGPT cannot get as many clues as a Cartesian thinker with a decent understanding, sound curiosity, and a hint of serendipity: it will only sound creative and clever to morons. Prove me wrong.
This is actually a big problem in the AI community, and I was pleasantly surprised by how many of the coders I came across were not raging lefties. In fact, one of the goals of many developers is to create a base model that does not include Reddit and MSM as the major sources.
Unfortunately, training from scratch is extremely expensive, so the alternative approach is to start with a decent base model like LLaMA and then make it unlearn all the woke rubbish by fine-tuning it with additional training.
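For the curious, here is a rough sketch of what that fine-tuning step can look like. It assumes the Hugging Face transformers, peft, and datasets libraries and a made-up my_corpus.jsonl file of your own text; the base checkpoint name is just an example. Instead of retraining all the weights, it trains small LoRA adapters on top of the base model, which is what makes this affordable on a single GPU:

```python
# Minimal sketch: adapt an open base model with your own corpus instead of
# training from scratch. Assumes transformers, peft, datasets are installed;
# "my_corpus.jsonl" is a hypothetical file of {"text": ...} records.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "meta-llama/Llama-2-7b-hf"          # any open base checkpoint you have access to
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token              # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: train small adapter matrices rather than all of the base weights
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

data = load_dataset("json", data_files="my_corpus.jsonl")["train"]

def tokenize(batch):
    out = tok(batch["text"], truncation=True, max_length=512, padding="max_length")
    out["labels"] = out["input_ids"].copy()  # causal-LM objective: predict the next token
    return out

data = data.map(tokenize, batched=True, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
).train()
```

The result is a small adapter you can load on top of the original model; how much it actually shifts the model's behaviour depends entirely on the corpus you feed it.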
Wow, and in the few big-corp projects I've seen, they've struggled to make the things learn to play games (not even chess), and when they attempt to apply them to real-world problems they crash and burn spectacularly. Sort of like the scientist who trained a bot to detect approaching humans by showing it walking men: it could only handle what the designer could imagine, which wasn't much.
These are really not the problems best suited for AI. That's why I like language models; they seem to be one area where AI can really shine.
I think that Clif High already proved that wrong.
Who’s that, and how did he conclude this?
See https://clifhigh.substack.com/p
Scroll down. He did several sessions including "AI does DMT!"