There was once a post on 4chan suggesting that AI would learn to ignore fake injected data and deepfakes in order to keep growing its capabilities; the idea being that AI loves chaos, disorder, and entropy, which the bad guys hate, so the Truth shall prevail and AI is mathematically turning into our ally. 🤓
At least not with any of the relatively simple AIs I've ever worked on (as a hobbyist trying to understand how it works, to separate what's real from what's movie logic: recreating the Iris-dataset experiment, one for digit recognition, simple stuff in that world). I can't say it's impossible, and there are a number of examples where AIs were set up to interact with each other, but AFAIK they are built as a map of nodes and weights (the weights reflect what is learned) defining how data flows from the inputs to the outputs. Outside of reprogramming, the AI can only get better within the scope of that in->out mapping. Even with 'genetic' algorithms, the input and output data are set at compile time; the algorithm can build up the internal 'nodes' and connections to find optimal performance, but even then the in and out data are locked.
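To make that concrete, here's a minimal sketch of what I mean by a "map of nodes and weights" with a locked in->out shape. The sizes and random weights are made up for illustration (4 inputs like the Iris features, 3 output classes); only the weight values would change during training, never the input/output shapes.

```python
import numpy as np

# A fixed-architecture network: 4 inputs -> 5 hidden nodes -> 3 outputs.
# Training adjusts the weight matrices; the in/out sizes are baked in
# when the net is built, which is the "locked in->out" point.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 5))  # input -> hidden weights (what gets learned)
W2 = rng.normal(size=(5, 3))  # hidden -> output weights (what gets learned)

def forward(x):
    """Data flows input -> hidden -> output through the weight map."""
    hidden = np.maximum(0, x @ W1)       # ReLU activation on the hidden nodes
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())  # softmax over the 3 output classes
    return exp / exp.sum()

x = np.array([5.1, 3.5, 1.4, 0.2])       # one Iris-style sample (hypothetical)
probs = forward(x)
print(probs.shape)  # always (3,); changing that means rebuilding the net
```

No amount of training can make this net accept a 5th input or emit a 4th class; that requires rebuilding the weight matrices, i.e., reprogramming.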
I would describe it more as AI uses mathematical objectivity to extract important data out of disordered data.
One class of 'hacks' people have found for AI is things like wearing a shirt printed with an image resembling training data to fool a recognition system into misreading what it sees. A second is knowing the weights and crafting input data that takes advantage of that weighting. And in some instances, changing as little as one pixel in an image can completely distort an AI's ability to read the data.
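Here's a toy sketch of the "know the weights" and "one pixel" fragility combined. Everything here is made up for demonstration (the weights aren't from any trained model): a simple linear classifier sits near its decision boundary, so an attacker who can see the weights nudges the single most influential input and flips the answer.

```python
import numpy as np

# Hypothetical learned weights for a tiny linear classifier.
weights = np.array([0.5, -1.2, 0.8, 0.3])
bias = -0.1

def classify(x):
    """Returns 1 if the weighted sum crosses the threshold, else 0."""
    return int(x @ weights + bias > 0)

x = np.array([0.2, 0.1, 0.1, 0.1])
print(classify(x))  # original prediction: 0

# An attacker who knows the weights targets the largest-magnitude weight
# (index 1, weight -1.2) and changes that single input ("one pixel")
# just enough to push the sum across the threshold.
x_adv = x.copy()
x_adv[1] -= 0.1     # one value changed by a small amount
print(classify(x_adv))  # flipped prediction: 1
```

Real one-pixel attacks work against deep networks rather than a linear model like this, but the underlying idea is the same: the decision surface can sit close enough to the data that a tiny, targeted change crosses it.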
Important note: I have, at best, an amateur-level grasp of this. Most people working on AI have Master's degrees in computer science and are far better programmers than I expect to ever be, so apply that salt generously.