“We have gone from the information age into the disinformation age” — Scott Pelley
Artificial Intelligence has scary implications for the world at large. AI is not what it is portrayed to be in movies like The Terminator or Wall-E, and those fictional scenarios are not the real reasons you should be wary of it. From social media bots to audio and video deepfakes that will make you question reality, let’s look at the more sinister ways that Artificial Intelligence is being used.
Collins Dictionary defines a deepfake as “a way of adding a digital image or video over another image or video, so that it appears to be part of the original”. Deepfakes are audio or video recordings doctored to make a person appear to say something they never said, or to do something they never did. Two types of deepfakes can be created: audio and video.
Audio Deepfakes: A tech startup called Lyrebird has had significant success with an algorithm that can replicate a person’s voice with just a few phrases as input. Here is a video from Bloomberg Quicktake about the process of cloning a voice.
Video Deepfakes: Open-source software has made deepfaking incredibly easy by reducing the cost of creating doctored content. The two most popular open-source tools for deepfaking are Faceswap and DeepFaceLab. This video from Cinecom shows how deepfaking works in DeepFaceLab.
What is a GAN and why is it relevant?
Deepfake software utilizes GANs to make the content it creates more realistic. GAN stands for Generative Adversarial Network. A GAN is essentially two neural networks in one. The first neural network is called the Generator, and it creates output data: for our purposes, fake audio clips or images. The second neural network is called the Discriminator. The Discriminator takes both real samples and the Generator’s output and classifies which is real and which has been fabricated. The Generator’s job is to fool the Discriminator into thinking a generated sample is real, and as the two networks compete, both improve. GANs are used to train machines to perform creative tasks that a human would normally have to do. This can be creating music, art, or, in this case, a deepfake.
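The two-network setup can be sketched in a few lines of NumPy. This is a simplified, illustrative skeleton rather than a real deepfake pipeline: a linear “Generator” maps random noise to fake 1-D samples, a logistic “Discriminator” scores how likely a sample is to be real, and the two standard GAN losses are computed. All parameter values and shapes here are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # Maps random noise z to fake "data" (here, simple 1-D samples).
    return z * w[0] + w[1]

def discriminator(x, v):
    # Logistic score: estimated probability that x is a real sample.
    return 1.0 / (1.0 + np.exp(-(x * v[0] + v[1])))

# "Real" data: samples from a normal distribution centered at 4.
real = rng.normal(4.0, 1.0, size=64)
w = np.array([1.0, 0.0])   # generator parameters (toy values)
v = np.array([1.0, 0.0])   # discriminator parameters (toy values)

z = rng.normal(size=64)
fake = generator(z, w)

# Discriminator loss: reward classifying real as 1 and fake as 0.
d_loss = -np.mean(np.log(discriminator(real, v)) +
                  np.log(1.0 - discriminator(fake, v)))
# Generator loss: reward fooling the discriminator into scoring fakes as real.
g_loss = -np.mean(np.log(discriminator(fake, v)))
```

In a real GAN, both sets of parameters would be updated by gradient descent on these losses, alternating between the two networks until the fakes become hard to distinguish from real data.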
The Problem with Recommendations
Social media is designed to exploit its users’ brain chemistry in order to keep them engaged with the content they are being shown. This is done in part by using deep learning to predict which content a user will engage with most. The reason these companies care so much about engagement is that they make money from advertisements, which in turn only make money when they are seen or clicked. One of the big problems with this is that users become less concerned with the world around them and more concerned with how many likes their latest post has.
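At its core, this kind of feed ranking reduces to a scoring-and-sorting loop. The sketch below is a hypothetical stand-in for the large learned models real platforms use: it scores each post by predicted engagement signals and serves the highest scorers first. The field names and weights are illustrative assumptions, not any platform’s actual formula.

```python
# Toy engagement-driven feed ranker (illustrative only; real platforms
# use large learned models, not hand-picked weights like these).
posts = [
    {"id": 1, "predicted_clicks": 0.02, "predicted_watch_secs": 5.0},
    {"id": 2, "predicted_clicks": 0.08, "predicted_watch_secs": 30.0},
    {"id": 3, "predicted_clicks": 0.05, "predicted_watch_secs": 12.0},
]

def engagement_score(post):
    # Weighted blend of predicted engagement signals (weights assumed).
    return 10.0 * post["predicted_clicks"] + 0.1 * post["predicted_watch_secs"]

# The feed shows the posts most likely to keep the user engaged first,
# regardless of whether that content is accurate or healthy to consume.
feed = sorted(posts, key=engagement_score, reverse=True)
```

Note that nothing in this loop measures truthfulness or well-being: whatever maximizes the engagement score rises to the top, which is exactly the problem the article describes.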
According to DevOps, bots account for 50% of all web traffic. A chatbot might help you find exactly what you are looking for on a website, or a copyright bot may help protect your intellectual property. These are two of the many ways that bots can be helpful additions to the internet. Unfortunately, alongside the good bots there are bad ones, which tend to be far more sinister and secretive about their actions. These include certain social media bots, such as the Russian bots that meddled in the 2016 American election using advertisements and other posts. These bots were made to look like average people, but they were being used to spread fake news and divide voters.
The question is, how can we fix these problems? While there is no easy solution, laws can be implemented to regulate deepfake software. Companies should be allowed to develop deepfake software, but they should also be required to create software that identifies whether a video, image, or voice recording has been deepfaked. Some companies have already implemented such solutions. For instance, Adobe, the developer of Photoshop, has created software that can determine whether an image has been photoshopped. This should be mandatory for all developers of audio, video, or image alteration software.
The solution to the problem of social media addiction and the perpetuation of fake news is for users to have the choice to only receive content they subscribe to, rather than receiving what the algorithm recommends for them.
The problem of malicious bots on the internet can be addressed by social media companies showing users whether a post was generated by a bot or by a human. Similar to the verified symbol, which shows users that an account is managed by a real person using their real identity, a bot symbol should show users which accounts are managed by bots. In addition, legislation could require social media companies to vet and identify the users of their advertising networks.
AI is not all bad
Technology can be used for good and for bad. For example, deepfakes can be used positively to bring deceased actors and musicians back to life for music videos (with consent, of course). Or, in the case of DC Comics, it can be used to remove a rogue moustache from Superman (more on that here). Also, as I said before, chatbots and copyright bots can be extremely valuable and have a very positive impact on people’s livelihoods. Just because a technology can be used for bad things doesn’t mean it should be suppressed; it means there should be better regulations so that it does not become uncontrollable.
If you have been paying attention thus far, you may have a dystopian, Fahrenheit 451-esque idea of our modern age. That being said, it is critical to remember that AI is only as evil as the people creating or using it for nefarious purposes. AI is, and will be, a very useful and amazing technology. This article was not written to convince you that AI is scary or evil. In fact, I hope I have reinforced the idea that with proper regulation, AI can become humanity’s greatest tool.