Fearing 2020 ‘Deepfakes,’ Facebook Will Launch Industry AI ‘Challenge’

from Fast Company

Facebook wants to be ready for a deepfake outbreak on its social network. So the company has started an industry group to foster the development of new detection tools to spot the fraudulent videos.

A deepfake video presents a realistic AI-generated image of a real person saying or doing fictional things. Perhaps the most famous such video to date portrayed Barack Obama calling Donald Trump a “dipshit.”

Facebook is creating a “Deepfake Detection Challenge,” which will offer grants and awards in excess of $10 million to people developing promising detection tools. The social network is teaming up with Microsoft and the Partnership on AI (which includes Amazon, Google, DeepMind, and IBM), as well as academics from MIT, Oxford, Cornell Tech, UC Berkeley, and others on the effort. The tech companies will contribute cash and technology and will help with judging detection tools, a Facebook spokesperson told me.

Importantly, the group will create a benchmark that developers of deepfake detection tools can use to measure the effectiveness of their technology: a scoring system will reflect each tool's accuracy, and the best scores will be ranked on a leaderboard. Facebook also says it will hire actors to create "thousands" of deepfake videos, which will serve as the test material for detection tools.
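The mechanics of a benchmark like this are easy to sketch. The snippet below is a minimal illustration, not Facebook's actual system (whose internals were not public at the time): each entrant's tool labels every clip in a hidden test set as real or fake, the benchmark scores the predictions, and a leaderboard ranks entrants by accuracy. All names and data here are hypothetical.

```python
# Illustrative sketch of a detection benchmark: score each team's
# predictions against a hidden labeled test set (1 = fake, 0 = real)
# and rank teams by accuracy. Names and data are made up.

def score(predictions, ground_truth):
    """Fraction of clips the tool labeled correctly."""
    if len(predictions) != len(ground_truth):
        raise ValueError("must predict a label for every test clip")
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

def leaderboard(entries):
    """Rank (team, accuracy) pairs from best to worst."""
    return sorted(entries, key=lambda e: e[1], reverse=True)

# Hypothetical test set mixing actor-generated fakes with real clips.
truth = [1, 0, 1, 1, 0, 0, 1, 0]

entries = [
    ("team_a", score([1, 0, 1, 0, 0, 0, 1, 0], truth)),  # 7/8 correct
    ("team_b", score([1, 1, 1, 1, 0, 0, 1, 1], truth)),  # 6/8 correct
]
for rank, (team, acc) in enumerate(leaderboard(entries), start=1):
    print(f"{rank}. {team}: {acc:.3f}")
```

The key design point is that the ground-truth labels stay hidden from entrants, so tools are judged on held-out material rather than data they could tune against.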

More here.



  1. Facebook’s attack on deepfakes is surprisingly proactive. The company seldom admits a problem even exists before its consequences arrive. Not only did it wait to acknowledge that misleading information had spread on its site during the 2016 election cycle, it also allowed Cambridge Analytica and Vote Leave to target citizens of the UK with purposefully incorrect propaganda. More than that, it refused to release information about who posted these misleading campaigns and wouldn’t even admit they were there until someone caught one of the ads. These new anticipatory measures might help people trust Facebook a little more, and if they work, Americans will be able to have more faith in the 2020 election than they had in 2016. It should also be noted that this technology will more than likely become standard, at least for a while. Since the term ‘deepfake’ emerged around the end of 2017, amateur computer scientists have been experimenting with the technology, creating videos of popular actors and politicians that viewers couldn’t tell were fake. As Facebook noticed, this could have a huge impact on how people view politics, especially the less informed of America. I think Facebook doesn’t want to be the target of criticism this time around as it has been in years past. However, its lax ad system, which allows most companies to post ads containing misinformation, is still in question. While Facebook may say it will pay more attention to the problem, we won’t know whether it has until after the consequences arise, as in times previous. Similarly, the detection technology the challenge calls for may not be good enough to catch most deepfakes by December. Even the article admits that, if the challenge is successful, people will simply have to pay more for their deepfakes to pass Facebook’s new system; that doesn’t mean they won’t still show up on Facebook.
    Not to mention, as technology advances, this new system will eventually become obsolete. Facebook must therefore be willing to keep pace with advances in deepfakes permanently. We can only assume this problem will keep resurfacing even after Facebook institutes the new system, because computer technology keeps changing. Hackers have always been willing to adapt, and a challenge like this will probably entice them to work even harder. It would be great if DARPA or DHS took the lead on these efforts and set up a task force to continually create better, harder-to-crack anti-deepfake technology. We cannot rely on Facebook and Google to keep up with advancements in technology and hacking capabilities indefinitely. For now, though, I do hope that Facebook’s challenge produces a technology that protects the 2020 election from propaganda.

  2. With the rapid growth of technologies such as AI, I am not surprised that companies like Facebook are trying to prevent the abuse of such programs in society. So many incidents of AI abuse have already occurred, especially on Facebook’s platform, that it is a necessity to get a hold of the problem before it spreads and worsens. These “deepfakes” are remarkably realistic, and an average individual such as myself would assume the videos are authentic. As stated in the article, these AI creations are becoming easier and cheaper to make. More alarming still, the systems to recognize and defend against deepfakes are being developed at a much slower rate than the deepfakes themselves. That is why a competition such as Facebook’s “Deepfake Detection Challenge” is an interesting way to motivate people to quickly build the programs necessary to monitor these deepfakes. With ten million dollars at stake, I am certain that within the year someone, whether students at a university or employees at a company, will have built a program successful enough to win the large award being offered.

    But even if we have the appropriate systems to detect deepfakes, what are the next steps after that? Will companies such as Facebook, Instagram, and YouTube be allowed to delete these videos? As the article asks, would political videos be the only ones allowed to be deleted from such platforms? These are questions we need to account for legally and ethically, because if one type of deepfake is not allowed on social platforms, what justifies keeping another?

    A major concern of mine with any detection program is its ability to truly detect deepfakes. There will be a very strict judging process, but if deepfake videos are constantly improving at an exponential rate, who is to guarantee that the defending system(s) will protect against the newly improved fakes? I do not believe this deepfake battle will be won. I truly think it will be a game of tag through the years until deepfake videos become so undetectable that such protective systems are useless. The battle for control over deepfakes should have begun earlier, when it would have been much more manageable to contain them with protection programs. My opinions aside, the overall concept of a competition is one I fully support: there are so many individuals who want to help and better society, and the chance to win a lot of money and be recognized around the world may change someone’s life. Even if the battle for control over deepfakes is never won, this competition will help kickstart someone’s career or better their life.

    I believe that all deepfakes, whether major political ones or your average community drama, should be deleted from social media platforms. Even if the damage is already done, such fake videos should not continue to circulate throughout the world. But on the legality of deleting deepfakes: under current law, a deepfake is not illegal unless it is a pornographic fake, since defamation could be claimed in that situation. Thus, creating an AI rendering of a celebrity or political figure in a bad light is perfectly legal. This creates an issue when platforms try to delete such content, because deepfakes are not considered illegal and companies have to come up with some legal justification for taking them off the internet. Until society can define a law specifically tailored to deepfake content, and can create successful programs to detect and prevent such videos, the fight for control over deepfakes will be long and arduous.

  3. Facebook’s initiatives against deepfakes reflect a coming age (if it is not upon us already) where finding the truth will be harder than ever. In prior times we could at least say that “seeing is believing”: despite the wide array of conflicting information online, we could still trust video footage and clips. With the rapid growth of technology, things we would once have thought impossible are now possible, and we have to second-guess what our eyes are telling us.

    Video and audio footage can now be completely faked to mislead people, and Facebook’s challenge gives tech-savvy people a good incentive to figure out a way to weed out these videos. The possible problems: what if the AI program used to spot deepfakes lets some slip through the cracks? And what if the methods used to separate deepfakes from legitimate videos mistakenly flag a legitimate video as a deepfake and cause it to be removed?

    At the end of the day, people will still need to be vigilant; AI software is not going to remove all of these videos, so people should be willing and ready to investigate anything they see that seems so outlandish and far-fetched that it is probably not “real.”

    We can certainly pursue these methods of preventing and removing instances of deepfakes when they are found, but people should be more aware that the battle of information online includes bad actors who wish to mislead and manipulate, whether that be through flat out fakery and forgery, or stretching the truth.

    In my opinion, social media networks should not be the arbiters of truth, and they should not be removing any kind of content, even content they deem misleading. However, I do find it reasonable for these companies to remove content that is objectively, undeniably fake and was doctored and spread with the sole purpose of misleading people, such as these deepfake videos.

    Unfortunately even if those companies operated under such a stringent rule, they would still end up overreaching and removing people and content that probably shouldn’t be removed, because many people (especially these Silicon Valley types) cannot differentiate between what is a “fringe, unpopular opinion” and what is genuinely misleading. An example would be the “flat earth” community, asserting what they believe to be the truth based on evidence they have come across. Their evidence may be misleading or incorrect, but social media companies bear no responsibility to make that determination.

  4. With the Internet and social media becoming a growing source of news for the younger generation, verification of information is more important than ever. A major downfall of the newer generations is that they will believe most of what they read on the internet. Create a simple website, post a “news” article, and boom: you can convince half the population that your made-up story is true. Facebook finds the posting of fake news especially concerning during this upcoming election year, and the company is searching for a way to detect what it calls deepfakes: AI-generated videos that depict real people but portray a false message. The example of President Obama calling Donald Trump a “dipshit” is one that went viral, and deepfakes like it have become a political concern since the last Presidential election. Facebook trying to address this issue now, before the next election, is very socially responsible of the company. Allowing false information about the election onto its platform could be detrimental; if the information is strong enough to sway opinions, Facebook could be responsible for influencing the outcome of the election.
    The question raised in the article, whether to prevent all of these videos or just political ones, is very important. Personally, I believe the company should first target politically aimed videos and, if time and resources permit, extend its work to all fake news. While any fake news can have a spiraling effect if it goes viral, fake news involving politics has the greatest impact on society. The company should focus mainly on screening all news involving potential Presidential candidates; to avoid being responsible for altering views and changing election outcomes, it needs to ensure that all news regarding these candidates is factual and supported by sound evidence. Once the election is over, the company should extend its censorship to all news areas. Checking the sources and information in all news-related posts will help the company earn the trust of the general public.

  5. As a general rule I am against social media companies removing posts, as I am a firm supporter of free speech, but “deepfakes” are where I draw the line. These computer-generated videos border on slanderous. I fully expect that, if deepfakes become more prevalent and convincing, slander will come to encompass not just lies spoken about an individual but also the use of a person’s likeness, without consent, to deliver a message that person would otherwise not support. Beyond potentially giving rise to lawsuits, this phenomenon could be dangerous to the rights of US citizens, especially if groups use deepfakes to try to influence elections for nefarious purposes. On another note, the challenge created by Facebook et al. is a surprisingly smart way to find a fix for the problem. Enlisting top-tier talent from some of the nation’s best schools is a sure-fire method of achieving the goal of detecting deepfakes, especially when there is an incentive in the form of monetary compensation. Professors and students at these universities also have research grants and scholarships that would give them adequate resources to win, even without backing from the Partnership on AI, Microsoft, and Facebook. I do think it’s curious, though, that they are able to create a benchmarking tool to test the tool that will ultimately detect deepfakes, but not create the detection tool itself. This leads me to believe that Facebook execs might not have only the public good at the forefront of their minds; this entire contest could simply be a method of locating and recruiting the best talent. This being a very cost-intensive undertaking, it’s worth questioning Facebook’s proactivity.
    I think it is probable that Facebook is taking this so seriously because Mark Zuckerberg is tired of testifying before congressional committees and explaining why things have gone wrong on his platform, and on the internet in general. On the other hand, maybe the company has finally started to take security more seriously. After all, facing an antitrust probe might make any company move a bit more carefully and do more for public welfare.
    Another interesting question to ponder is whether or not all deepfakes are inherently bad. Here I point to the famous deepfake referenced in this very article, in which a well-known comedian, Jordan Peele, voiced over a rendering of Barack Obama for comedic effect. I personally do not see anything wrong with that particular deepfake, and I think others of the same nature should be allowed on social media platforms. Of course, there would have to be stipulations attached to this allowance. I think that, once detected by the screening system, all deepfakes that are not political in nature ought to be posted with some kind of marking that clearly indicates the video is not real. The trouble with this, though, is setting objective criteria for what is and is not acceptable. I can easily see how the advent of deepfakes could lead to some very interesting creative projects, or to some very misleading faked interviews. Whatever happens, this technology will only become more advanced and will probably have a great impact on the media industry. I am just glad that the major corporations listed in the article recognize the gravity of the situation.
