Ah, the AI that runs on your Google Photos is very different from the AI that runs on their servers detecting meaning from memes.
I don't believe that any AI system can extract meaning from memes (at least not yet, or not one that is widely known).
Think about a common meme like the superhero button choice. You see two buttons with text in one frame, the sweat drops and the strained expression on the hero's face in the other, and you automatically make the connection: a choice has to be made between two options, and the implication is that one button MUST be pressed.
For the computer, it can recognize that the image matches one it has learned. It could possibly extract the text above the buttons. It could classify the buttons as buttons, the person as a person, and the drops as water. BUT the currently available / publicly known AI has no means of combining those traits into any specific meaning.
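To make that concrete, here is a minimal sketch of what that "labels and text, but no meaning" pipeline looks like, assuming a stock torchvision classifier and Tesseract via pytesseract for the OCR step; the filename is hypothetical.

```python
# Minimal sketch of the "labels and text, but no meaning" pipeline described
# above. Assumes torchvision, Pillow and pytesseract are installed; the
# filename is hypothetical.
import torch
import pytesseract
from PIL import Image
from torchvision import models

img = Image.open("two_buttons_meme.jpg").convert("RGB")

# 1. OCR: pulls the raw caption text off the image, nothing more.
caption_text = pytesseract.image_to_string(img)

# 2. Generic ImageNet classifier: returns whatever labels it was trained on
#    ("comic book", "jersey", "web site", ...), with no link to the caption.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
batch = weights.transforms()(img).unsqueeze(0)
with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]
top5 = probs.topk(5)
labels = [weights.meta["categories"][i] for i in top5.indices]

print("OCR text :", caption_text.strip())
print("Top-5    :", list(zip(labels, top5.values.tolist())))
# Nothing here relates the two buttons, the sweating hero and the caption to
# each other -- the "forced choice between two bad options" step is missing.
```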
From what I hear from people at fakebook, not only has Zuck invested a lot of money and effort in this, he has made huge strides. It still requires human intervention, but the AI can filter potential candidates very quickly and humans provide confirmation, which goes back to train the AI (roughly the loop sketched below).
From what I gather outside of these insider insights, this technology is part of the military / CIA tech that's usually 20-30 years ahead of what's released publicly.
Putting the two together, I have to believe they have far more than we can imagine right now.
Yes, they have probably put up billions in research and hardware for training.
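For what it's worth, the filter-then-confirm workflow described above is basically a human-in-the-loop / active-learning setup. A rough outline, where every name, threshold and retraining cadence is a made-up placeholder rather than anything Facebook actually runs:

```python
# Rough outline of "the AI filters candidates, humans confirm, confirmations
# feed back into training". Purely illustrative; the model, threshold and
# retraining cadence are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class ReviewLoop:
    score: Callable[[bytes], float]                      # model: image -> P(match)
    retrain: Callable[[List[Tuple[bytes, bool]]], None]  # training hook
    threshold: float = 0.7
    labeled: List[Tuple[bytes, bool]] = field(default_factory=list)

    def triage(self, images: List[bytes]) -> List[bytes]:
        # Machine pass: only high-scoring candidates reach a human reviewer.
        return [im for im in images if self.score(im) >= self.threshold]

    def confirm(self, image: bytes, human_says_match: bool) -> None:
        # Human pass: every confirmation or rejection becomes a training label.
        self.labeled.append((image, human_says_match))
        if len(self.labeled) % 1000 == 0:
            self.retrain(self.labeled)                   # fold labels back into the model
```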
As advanced as AI gets, it is still fundamentally based on digital representations of reality. These systems can get statistically strong scores but will individually produce some inexplicable results.
In other words, they can be hacked and manipulated.
For example, in some tests on a visual AI system, you want to know what the computer decided the difference is between a wolf and a German shepherd? The wolf has snow in the background.
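That wolf story is the classic spurious-correlation failure. A toy version with synthetic numbers (not the actual study) shows how easily it happens when "snow in the background" separates the training photos better than anything about the animal does:

```python
# Toy illustration of the shortcut described above: if background brightness
# (snow) perfectly separates the training photos, the model learns that
# instead of anything about the animal. Synthetic data, not the real study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
is_wolf = rng.integers(0, 2, n)
background_brightness = is_wolf * 0.8 + rng.normal(0, 0.05, n)  # snow ~ wolf
animal_feature = is_wolf * 0.1 + rng.normal(0, 0.3, n)          # weak, noisy signal

X = np.column_stack([background_brightness, animal_feature])
clf = LogisticRegression().fit(X, is_wolf)
print("learned weights:", clf.coef_)                 # nearly all weight on the background
print("shepherd in snow ->", clf.predict([[0.8, 0.0]]))  # gets called a wolf
```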
Another example that fooled facial trackers was a miniature test image placed in the frame (a known test image); this tricked the AI into believing it was in its testing phase.
And again, as with TinEye, adding a random transparent overlay to a known image fools the system into thinking it is unique.
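On the TinEye point: reverse image search of that kind generally relies on some form of perceptual hashing / fingerprinting. TinEye's actual algorithm isn't public, so here's a hand-rolled average hash just to show the mechanism an overlay is trying to push off target; the filename is hypothetical.

```python
# Sketch of why an overlay can defeat naive duplicate detection. Hand-rolled
# average hash for illustration only; TinEye's real matcher is proprietary
# and considerably more robust than this.
import random
from PIL import Image, ImageDraw

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Shrink to size x size grayscale, threshold at the mean -> 64-bit hash."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

meme = Image.open("known_meme.jpg").convert("RGB")   # hypothetical file
noisy = meme.copy()
draw = ImageDraw.Draw(noisy, "RGBA")                 # RGBA mode so strokes blend
for _ in range(200):                                 # faint random scribbles
    x1, y1 = random.randrange(noisy.width), random.randrange(noisy.height)
    x2, y2 = random.randrange(noisy.width), random.randrange(noisy.height)
    draw.line([(x1, y1), (x2, y2)], fill=(0, 0, 0, 15))  # ~6% alpha

print("hash distance:", hamming(average_hash(meme), average_hash(noisy)))
# Whether the distance grows enough to look "unique" depends on how heavy the
# overlay is and how robust the real matcher is.
```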
While these systems are very good, none that we know of can take a movie screenshot, determine its contextual meaning, and then combine that meaning with the text in the image.
Sometimes as little as a single pixel can make the difference between an image being a panda with 95% confidence and a boat with 99% confidence (a rough sketch of this kind of attack is below).
Statistically good, individually flawed results. That's why camouflage will continue to work for the foreseeable future.
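The panda-style flip above is the classic adversarial-example result; the textbook fast-gradient-sign method (FGSM) is the simplest way to produce one. A minimal sketch against a stock torchvision ResNet, assuming a hypothetical panda.jpg; the exact label it flips to depends on the model and image, and a single step at this epsilon may or may not be enough.

```python
# Minimal FGSM-style sketch: nudge every pixel slightly in the direction that
# increases the loss and the top label can flip, even though the image looks
# unchanged to a person. Filename is hypothetical; results vary by model.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

x = preprocess(Image.open("panda.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

orig_label = model(x).argmax(dim=1)
loss = torch.nn.functional.cross_entropy(model(x), orig_label)
loss.backward()

eps = 0.01                                   # small, near-imperceptible step
# (for simplicity this perturbs the normalized tensor, not the raw pixels)
x_adv = (x + eps * x.grad.sign()).detach()
new_label = model(x_adv).argmax(dim=1)

cats = weights.meta["categories"]
print("before:", cats[orig_label.item()], "-> after:", cats[new_label.item()])
```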
Not true. I created my own custom meme (based on a more poorly done Biden meme) and Facebook scanned it with OCR in order to apply a fact check on it.
I believe you misunderstood the point.
OCR and getting meaning from text is one thing, but change the text to something a leftist would say, pair it with an image whose meaning implies the text is stupid, and it would take a human to catch it (unless such a system exists). OR apply camouflage to an existing meme and it can be reused.
Another option would be to make the text unreadable to OCR; jagged lines at 6% alpha should work.
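If anyone wants to try the jagged-lines idea, one quick way to test it against an OCR engine (Tesseract here, via pytesseract) is to scribble faint zig-zags and diff the output. The filename, stroke count and the 6% figure are just illustrative; whether it actually beats a given engine would need real testing.

```python
# Very rough sketch of the "jagged lines at ~6% alpha" idea: draw faint
# zig-zag strokes over the image and compare OCR output before and after.
# Filename, stroke count and alpha are illustrative assumptions.
import random
import pytesseract
from PIL import Image, ImageDraw

meme = Image.open("captioned_meme.png").convert("RGB")   # hypothetical file
before = pytesseract.image_to_string(meme)

overlay = meme.copy()
draw = ImageDraw.Draw(overlay, "RGBA")                   # RGBA mode so strokes blend
alpha = int(0.06 * 255)                                  # ~6% opacity
for _ in range(150):
    x = random.randrange(overlay.width)
    # Each stroke is a short zig-zag down the image rather than a straight line.
    points = [(x + random.randint(-20, 20), y) for y in range(0, overlay.height, 12)]
    draw.line(points, fill=(0, 0, 0, alpha), width=1)

after = pytesseract.image_to_string(overlay)
print("OCR before:", before.strip())
print("OCR after :", after.strip())
```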