What does “deep learning” have to do with “deep fakes”? According to researcher Julian Wörmann, “a whole lot.” In an interview with journalist Sandra Andrés, the fortiss machine learning expert explains how the strengths of deep learning technology are being exploited to manipulate image and video data.
Deep fake is a portmanteau of the terms deep learning and fake. Deep learning is a sub-field of machine learning that has rapidly gained popularity over the past few years. Large volumes of data form the foundation of the training. Once the learning phase is complete, a self-learning system can generalize from the acquired data and evaluate previously unseen data. Systems of this kind are used for automated diagnostics, as well as for speech and text recognition.
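To make that training-then-generalization loop concrete, here is a minimal sketch in Python with PyTorch. The tiny network, the random placeholder data and the ten-class recognition task are illustrative assumptions, not any specific fortiss system.

```python
import torch
import torch.nn as nn

# A tiny fully connected network; real systems are far deeper.
model = nn.Sequential(
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),                 # e.g. 10 classes for digit recognition
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training phase: large volumes of labeled data drive the learning.
x_train = torch.randn(512, 28 * 28)     # random stand-in "images"
y_train = torch.randint(0, 10, (512,))  # random stand-in labels
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

# After the learning phase, the system is applied to data it has never seen.
x_new = torch.randn(1, 28 * 28)
with torch.no_grad():
    prediction = model(x_new).argmax(dim=1)
```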
The technology has a dark side, however, since it can also be employed to fake or manipulate text messages, images and videos. “The biggest difference is that with image processing programs like Photoshop, someone has to sit down and manually edit the images. With deep fakes, the system learns the data structure from a wealth of examples. The knowledge acquired through the training data is then used to artificially create images or videos,” says Wörmann.
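The generative principle Wörmann describes is often realized with adversarial training, where one network learns to tell real examples from generated ones while a second network learns to produce samples that fool it. The following is a hedged sketch of that idea only; the architectures, the flat 28x28 image size and the random stand-in data are assumptions for illustration, not a real deepfake pipeline.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to a flat 28x28 "image" (784 values).
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
# Discriminator: outputs one logit, real vs. generated.
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.randn(128, 784)  # stand-in for a wealth of real examples
for _ in range(100):
    # Discriminator step: learn to separate real data from generated data.
    fake = G(torch.randn(128, 64)).detach()
    d_loss = (bce(D(real_images), torch.ones(128, 1))
              + bce(D(fake), torch.zeros(128, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: produce samples the discriminator accepts as real.
    fake = G(torch.randn(128, 64))
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, the generator turns noise into images that mimic
# the structure of the training data.
new_image = G(torch.randn(1, 64))
```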
Artificially generated facial expressions
Apart from the training data, creating a fake data set requires extensive processing power. For instance, once the self-learning system has captured the structure of various facial expressions, it can use that knowledge to generate an entirely artificial facial expression.
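One plausible way to picture this is a trained autoencoder whose compact latent codes capture expression structure: blending two codes yields an expression that was never photographed. Everything in the sketch below, from the layer sizes to the placeholder face tensors, is a hypothetical illustration of that principle.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(784, 32)                             # face image -> expression code
decoder = nn.Sequential(nn.Linear(32, 784), nn.Tanh())   # expression code -> face image

# Hypothetical: assume encoder and decoder were already trained on many faces.
neutral_face = torch.randn(1, 784)  # placeholder input images
smiling_face = torch.randn(1, 784)

z_neutral = encoder(neutral_face)
z_smiling = encoder(smiling_face)

# Blending the two learned codes produces an expression that never
# existed: a half-neutral, half-smiling face.
z_mix = 0.5 * z_neutral + 0.5 * z_smiling
generated_expression = decoder(z_mix)
```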
For now, such fakes circulate mainly on social media, and their quality is still limited. “AI technologies are constantly improving. The more data users place on the Internet, the better the quality of the deep fakes,” predicts the researcher. The risk is that filter bubbles will be reinforced or that companies will be able to influence stock markets. The public could begin to doubt the authenticity of the news, and the media could lose credibility. And what value would video evidence from (alleged) witnesses to a crime have if the recordings might be fake?
Digital skills
“One possibility would be to use deep learning to detect deep fakes, although that would create a vicious cycle. As researchers, we can’t reveal our methods because they would otherwise be integrated back into the manipulation process,” adds Wörmann.
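In principle, such a detector is just another deep-learning classifier trained to separate authentic material from generated material. The sketch below shows only that principle; the flat 784-pixel inputs, the random labels and the toy network are assumptions, and real detectors, whose details researchers like Wörmann keep private, are far more sophisticated.

```python
import torch
import torch.nn as nn

# One logit per image: authentic (1) or fake (0).
detector = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder training set: authentic images labeled 1, deep fakes labeled 0.
images = torch.randn(256, 784)
labels = torch.randint(0, 2, (256, 1)).float()
for _ in range(20):
    optimizer.zero_grad()
    loss = loss_fn(detector(images), labels)
    loss.backward()
    optimizer.step()

# Score a new image: estimated probability that it is authentic.
with torch.no_grad():
    p_real = torch.sigmoid(detector(torch.randn(1, 784)))
```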
Society could take matters into its own hands, however. The engineer has some advice: “For me the priority is to impart digital expertise: be skeptical, rely on a healthy dose of common sense and ask, okay, is it real or could it be artificially generated? I believe that’s the most important step that each of us can take at the moment.”