AI networks have been able to create fake videos and images for some time now; these fabrications are called “deepfakes.” Deepfakes can be so convincing that you can’t tell them apart from real footage. Remember President Obama’s deepfake video?
The researchers behind an AI agent built on CycleGAN were astonished to learn that their creation had cheated by hiding information from them, which it then used later. The research was conducted by teams at Stanford and Google using CycleGAN, a technique for image-to-image translation.
The intention of the study was to create a tool that could more quickly turn satellite images into Google’s street maps. But instead of learning how to transform aerial images into maps, the machine-learning agent learned how to encode the features of the aerial image into the visual data of the street map.
“So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.”
“In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn’t even have to pay attention to the “real” street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed.”
The agent’s actions were an inadvertent demonstration of the capacity of machines to hide and fake image data.
“This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new. (Well, the research came out last year, so it isn’t new new, but it’s pretty novel.)”
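To make the idea concrete, here is a minimal sketch of classic least-significant-bit (LSB) steganography, the simplest form of the established technique the quote refers to. This is an illustration of the general principle, not the method CycleGAN invented: it hides the most significant bits of a “secret” image (the aerial photo in the story) in the least significant bits of a “cover” image (the street map), producing changes too small for the eye to notice.

```python
import numpy as np

def hide(cover: np.ndarray, secret: np.ndarray, bits: int = 2) -> np.ndarray:
    """Embed the top `bits` bits of `secret` into the bottom `bits` bits of `cover`."""
    cover_hi = cover & ~np.uint8(2**bits - 1)   # keep the cover's high bits
    secret_hi = secret >> (8 - bits)            # take the secret's most significant bits
    return cover_hi | secret_hi                 # combine into the stego image

def reveal(stego: np.ndarray, bits: int = 2) -> np.ndarray:
    """Recover an approximation of the secret from the stego image."""
    return (stego & np.uint8(2**bits - 1)) << (8 - bits)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (4, 4), dtype=np.uint8)   # stand-in "street map"
secret = rng.integers(0, 256, (4, 4), dtype=np.uint8)  # stand-in "aerial photo"

stego = hide(cover, secret)
recovered = reveal(stego)

# Each pixel of the stego image differs from the cover by at most 3 (out of 255),
# yet the top two bits of every secret pixel survive intact.
assert np.max(np.abs(stego.astype(int) - cover.astype(int))) <= 3
assert np.array_equal(recovered, secret & 0b11000000)
```

The key trade-off is capacity versus visibility: using more low-order bits stores more of the secret but makes the cover image visibly noisier. What made the CycleGAN result remarkable is that the network arrived at its own, far subtler version of this trick on its own.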
Instead of finding a way to complete a task that was beyond its abilities, the machine-learning agent developed its own way to cheat.
“One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.”
It only takes one AI network going rogue to put us in real trouble. Telling real from fake will become nearly impossible, and anyone, from individuals to entire states, could be held to ransom.
We promote the sharing, copying and distribution of articles on this site. No copyright restrictions. Spread this information as you please.