(ANTIMEDIA) — A new artificial intelligence (AI) algorithm is capable of manufacturing simulated video imagery that is indiscernible from reality, say researchers at Nvidia, a California-based tech company. AI developers at the company have released details of a new project that allows its AI to generate fake videos using only minimal raw input data. The technology can render a flawlessly realistic sequence showing what a sunny street looks like when it’s raining, for example, as well as what a cat or dog looks like as a different breed or even a person’s face with a different facial expression. And this is video — not photo.
For their work, researchers tweaked a well-known technique, the generative adversarial network (GAN), to allow their AI to create fresh visual data. The approach pits two neural networks against each other, one generating images and the other judging whether they look real, but Nvidia’s new program requires far less input and no labeled datasets. In other words, AI is getting much, much better at mimicking reality.
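The adversarial setup described above can be sketched in a few lines. This is a toy illustration, not Nvidia’s model: a linear generator learns to map noise toward a one-dimensional target distribution while a logistic-regression discriminator tries to tell real samples from generated ones. All parameter names, learning rates, and the target distribution are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: affine map z -> a*z + b (tries to match the real distribution).
g = {"a": 1.0, "b": 0.0}
# Discriminator: logistic regression on a scalar sample (real vs. fake).
d = {"w": 0.1, "c": 0.0}

def generate(z):
    return g["a"] * z + g["b"]

def discriminate(x):
    return sigmoid(d["w"] * x + d["c"])

lr = 0.05
for step in range(2000):
    real = rng.normal(3.0, 0.5, size=32)   # "real" data: Gaussian at mean 3
    z = rng.normal(size=32)                # noise fed to the generator
    fake = generate(z)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = discriminate(x)
        grad = p - label                   # dLoss/dlogit for cross-entropy
        d["w"] -= lr * np.mean(grad * x)
        d["c"] -= lr * np.mean(grad)

    # Generator update: push D(G(z)) toward 1, i.e. fool the discriminator.
    p = discriminate(generate(z))
    grad = (p - 1.0) * d["w"]              # chain rule through D's logit
    g["a"] -= lr * np.mean(grad * z)
    g["b"] -= lr * np.mean(grad)

# After training, the generator's offset b has drifted toward the real
# mean (3.0): neither network "wins", they drag each other toward realism.
print(g["b"])
```

The same two-player dynamic, scaled up to deep convolutional networks and images instead of scalars, is what lets the system produce a rainy version of a sunny street without ever being shown a labeled pair.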
Nvidia researcher Ming-Yu Liu says it would normally require paired datasets, matched examples of the same scene in each condition, for an ‘image translation’ AI to learn this kind of mapping. The new iteration of GAN is a massive improvement because it learns the translation without that supervision.
“[And] there are many applications,” says Liu. “For example, it rarely rains in California, but we’d like our self-driving cars to operate properly when it rains. We can use our method to translate sunny California driving sequences to rainy ones to train our self-driving cars.”
The researchers note that in addition to uses in self-driving cars, realtors could also use the technology to show prospective homebuyers what properties might look like in different seasons. One can imagine a myriad of similar applications that could be integrated into existing industries or spawn entirely new services.
Of course, there are also fears that the technology portends a dystopian future in which mega-corporations or governments can manipulate news media, eliminate or alter visual evidence of crimes, or even manufacture events that never happened. We are already past the point of asking whether a photo was photoshopped. Adding yet another wrinkle to the era of “fake news,” we may soon have to wonder whether video clips are AI-generated.