Fearing 2020 ‘Deepfakes,’ Facebook Will Launch Industry AI ‘Challenge’

from Fast Company

Facebook wants to be ready for a deepfake outbreak on its social network. So the company has started an industry group to foster the development of new detection tools to spot the fraudulent videos.

A deepfake video presents a realistic AI-generated image of a real person saying or doing fictional things. Perhaps the most famous such video to date portrayed Barack Obama calling Donald Trump a “dipshit.”

Facebook is creating a “Deepfake Detection Challenge,” which will offer grants and awards in excess of $10 million to people developing promising detection tools. The social network is teaming up with Microsoft and the Partnership on AI (which includes Amazon, Google, DeepMind, and IBM), as well as academics from MIT, Oxford, Cornell Tech, UC Berkeley, and others on the effort. The tech companies will contribute cash and technology and will help with judging detection tools, a Facebook spokesperson told me.

Importantly, the group will create a benchmark tool that can be used by people developing deepfake detection tools to measure the effectiveness of their technology. The best accuracy scores will be ranked on a leaderboard. The benchmark will include a scoring system to reflect the accuracy of tools. Facebook also says it will hire actors to create “thousands” of deepfake videos, which will be used as the test material for detection tools.
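The scoring-and-leaderboard idea behind such a benchmark can be sketched roughly as follows. This is a minimal illustration only: the function names, the simple accuracy metric, and the toy data are assumptions, not details of Facebook's actual benchmark, which had not been published at the time.

```python
# Hypothetical sketch of how a detection benchmark could score entries.
# Function names, the accuracy metric, and the toy data are illustrative
# assumptions, not details of the actual Deepfake Detection Challenge.

def score_detector(predictions, labels):
    """Fraction of test videos classified correctly (1 = deepfake, 0 = real)."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

def rank_leaderboard(submissions, labels):
    """Rank submissions (name -> predictions) by accuracy, best first."""
    scored = {name: score_detector(preds, labels)
              for name, preds in submissions.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Ground-truth labels for a tiny set of benchmark videos (1 = deepfake).
labels = [1, 0, 1, 1, 0]
submissions = {
    "team_a": [1, 0, 1, 0, 0],  # 4 of 5 correct -> accuracy 0.8
    "team_b": [1, 1, 1, 0, 1],  # 2 of 5 correct -> accuracy 0.4
}
print(rank_leaderboard(submissions, labels))
```

A real benchmark would score over thousands of held-out videos (the actor-performed deepfakes Facebook describes) rather than a toy list, but the shape is the same: a fixed labeled test set, a scoring function, and a ranking over submissions.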

More here.


7 Responses to Fearing 2020 ‘Deepfakes,’ Facebook Will Launch Industry AI ‘Challenge’

  1. Kathleen Watts September 13, 2019 at 12:08 pm #

    Facebook’s attack on deepfakes is surprisingly proactive. Seldom does the company even admit there is a problem before its consequences arise. Not only did they wait before admitting that misleading information was spread on their site during the 2016 election cycle, they also allowed Cambridge Analytica and Vote Leave to target citizens of the UK and expose them to purposefully incorrect propaganda. More than that, they refused to release information about who posted these misleading campaigns and wouldn’t even admit the campaigns existed until someone caught one of the ads. These new anticipatory measures might help people trust Facebook a little more. Additionally, if this works, Americans will be able to have more faith in the 2020 election than they did in the 2016 election. It should also be noted that this new technology will more than likely become standard, at least for a while. Since the term ‘deepfake’ emerged around the end of 2017, amateur computer scientists have been playing around with the technology, creating videos of popular actors and politicians that people couldn’t tell were fake. This, as Facebook noticed, could have a huge impact on how people view politics, and especially on how the less educated people of America view politics. I think Facebook doesn’t want to be the target of criticism this time around as it has been in years past. However, their lax ad system, which allows most companies to post ads that include misinformation, is still in question. While they may have said they will pay more attention to it, we won’t know if they have until after the consequences arise, as in times past. Similarly, the technology they are challenging people to make may not even be good enough to catch most deepfakes by December. Even the article admits that, if this challenge is successful, people will have to pay more for their deepfakes to pass Facebook’s new system, but that doesn’t mean deepfakes won’t still show up on Facebook.
Not to mention, as technology advances, this new system will eventually become obsolete. In that case, Facebook must be willing to keep up with advances in deepfakes permanently. We can only assume this problem will continue to show up even after Facebook institutes the new system, as computer technology changes. Hackers have always been willing to adapt to changes, and a challenge like this will probably entice them to work even harder. What would be great is if DARPA or DHS took the lead on these efforts and set up a task force or team to continually create better, harder-to-crack anti-deepfake technology. We cannot rely on Facebook and Google to continually keep up with advancements in technology and hacking capabilities in this case. For now, though, I do hope that Facebook’s challenge is able to produce a technology that protects the 2020 election from propaganda.

  2. Nicole Shubaderov September 13, 2019 at 5:18 pm #

    With the rapid growth of technological developments such as AI, I am not surprised that companies like Facebook are trying to prevent the abuse of such programs in society. So many incidents involving the abuse of AI have already occurred, especially on Facebook’s platform, that it is a necessity to get a hold of the problem before it spreads and worsens. These “deepfakes” are remarkably realistic, and an average individual such as myself would assume the videos are authentic. As stated in the article, these AI creations are becoming easier and cheaper to make. What is more alarming is that the systems to detect and defend against deepfakes are being created at a much slower rate than the deepfakes themselves. That is why a competition such as Facebook’s “Deepfake Detection Challenge” is an interesting way to motivate people to quickly create the programs necessary to monitor these deepfakes. With ten million dollars at stake, I am certain that within the year someone, whether students at a university or employees at a company, will have made a successful program to win the money being offered as an award.

    But even if we have the appropriate systems to detect deepfakes, what are the next steps after that? Will companies such as Facebook, Instagram, and YouTube be allowed to delete these videos? As the article expressed, would political videos be the only videos allowed to be deleted from such platforms? These are questions we need to account for legally and ethically, because if one type of deepfake is not allowed on social platforms, what justifies keeping another type?

    A major concern of mine with the creation of a detection program is its ability to truly detect deepfakes. Although there will be a very strict judging process, if deepfake videos are constantly being improved at an exponential rate, who is to guarantee that the defending system(s) created will protect against newly improved deepfake videos? I do not believe this deepfake battle will be won. I truly find that it will be a game of tag throughout the years, until deepfake videos become so undetectable that such detection systems will become useless. The battle for control over deepfake videos should have been started at an earlier point in time, when it would have been much more manageable to gain control over them with protection programs. My opinions aside, the overall concept of a competition is one I fully support, because there are so many individuals who want to help and better society, and being given the opportunity to win a lot of money and be recognized around the world may change someone’s life. Even if the battle for control over deepfakes may not be won, this competition will help kickstart someone’s career or help better their life.

    I believe that all deepfakes, whether major political ones or average community drama, should be deleted from social media platforms. Even if the damage is already done, I feel such fake videos should not continue to circulate throughout the world. But on the topic of the legality of deleting deepfake videos from social media platforms, the law generally treats deepfakes as legal unless the fake is pornographic, since defamation could be claimed in that situation. Thus, creating an AI rendering of a celebrity or political figure in a bad light is perfectly legal. This creates an issue when platforms try to delete such content, because deepfakes are not considered illegal and the companies have to come up with some legal reason to take them off the internet. Until society can define a law specifically tailored to deepfake content and create successful programs to detect and prevent such videos, the fight for control over deepfakes will be long and arduous.

  3. Joseph Antonucci September 14, 2019 at 9:25 pm #

    Facebook’s initiatives against deepfakes reflect a coming age (if it is not upon us already) where finding the truth will be harder than ever. In prior times we could at least say that “seeing is believing”: despite the wide array of conflicting information online, we could at least trust video footage and video clips. With the rapid growth of technology, things we would have thought impossible before are now possible, and we have to second-guess what our eyes are telling us when we see something.

    Video and audio footage can now be completely faked to mislead people, and Facebook’s challenge provides a good incentive for tech-savvy people to figure out a way to weed out these types of videos. The only possible problems are: what if the AI program used to spot deepfakes lets some slip through the cracks? And further, what if the methods used to separate deepfakes from legitimate videos mistakenly flag a legitimate video as a deepfake and cause it to be removed?

    At the end of the day, people will still need to be vigilant; AI software is not going to remove all of these videos, so people should be willing and ready to investigate anything they see that seems so outlandish and far-fetched that it is probably not even “real.”

    We can certainly pursue these methods of preventing and removing deepfakes when they are found, but people should be more aware that the battle of information online includes bad actors who wish to mislead and manipulate, whether through flat-out fakery and forgery or through stretching the truth.

    In my opinion, social media networks should not be the arbiters of truth, and they should not be removing any kind of content, even content they deem misleading. However, I do find it reasonable for these companies to remove content that is objectively, undeniably fake and was doctored and spread with the sole purpose of misleading people, such as these deepfake videos.

    Unfortunately, even if those companies operated under such a stringent rule, they would still end up overreaching and removing people and content that probably shouldn’t be removed, because many people (especially these Silicon Valley types) cannot differentiate between a fringe, unpopular opinion and something genuinely misleading. An example would be the “flat earth” community, asserting what they believe to be the truth based on evidence they have come across. Their evidence may be misleading or incorrect, but social media companies bear no responsibility to make that determination.

  4. Victoria Balka September 19, 2019 at 6:43 pm #

    While Facebook is trying to find a way to prevent deepfakes from being a problem on its site, the company is not working as hard as it should be. Facebook knows that deepfakes on its website impacted the last presidential election, and with another election slightly more than a year away, it should be putting a lot of effort into stopping this issue from occurring again. Creating the Deepfake Detection Challenge is a good way for Facebook to find talented technical people to work on preventing this from becoming an issue again; however, relying on people outside the company to create a good system to identify deepfakes is risky and may take longer than needed to make sure they do not affect the 2020 election. To show that it is actively working on deepfake detection, Facebook should also have its own internal team trying to find a solution. Facebook joining the Partnership on AI with companies such as Microsoft, Google, and Amazon shows that they do not want deepfakes affecting their site and are willing to find a way to stop them from appearing. The ads and fake news that deepfakes spread all over Facebook cause many effects, such as disrupting a presidential election and spreading false information that people believe because the videos look real.
    Although they are working on finding a way to detect and remove deepfakes from their site, they do not know whether they will remove all deepfakes or only the ones that spread false political information. I believe they should not just remove the deepfakes that spread political disinformation; all deepfakes should be removed, since they can still cause harm even when they have nothing to do with politics. Deepfakes spread lies and rumors about people and things, and therefore, in my opinion, they should be removed from all social media platforms. If they will be spending the money on creating a way to detect and remove deepfakes from their site, they should just remove all of them instead of going through them and deleting only the ones about political topics. Until there is a way for websites to detect and remove deepfakes, they should make sure their visitors are aware that there may be things on the page that look real but are 100% fake. This article has really shown me how deepfakes can look real and cause harm to people’s lives. From now on, whenever I am on a website and see a video, I will remember that it may be fake and trying to spread false information.

  5. Lisa Tier September 27, 2019 at 1:23 pm #

    With the Internet and social media becoming a growing source of news for the younger generation, verification of information is more important than ever. A large downfall of the newer generations is that they will believe most of what they read on the internet. Create a simple website, post a “news” article, and boom: you can convince half the population that your made-up story is true. Facebook finds the posting of fake news especially crucial to address during this upcoming election year. The company is searching for a way to detect what it calls deepfakes: videos posted on the platform that depict real people but portray a false message. The example of President Obama calling Donald Trump a “dipshit” is one that went viral. Deepfakes like this became a political concern during the last presidential election. Facebook trying to address this issue now, before the next election, is very socially responsible of the company. Allowing false information about the election to be released on its platform could be detrimental. If the information is strong enough to sway opinions, Facebook could be responsible for influencing the outcome of the election.
    The point raised in the article, of preventing all these videos or just political ones, is very important. Personally, I believe the company should first target politically aimed videos and, if time and resources permit, extend its work to all fake news. While any type of fake news can have a spiraling effect if it goes viral, fake news involving politics can have the greatest impact on society. The company should mainly focus on censoring all news involving potential presidential candidates. To avoid being responsible for altering views and changing election outcomes, it needs to ensure that all news regarding these candidates is factual and supported by sound evidence. Once the election is over, the company should extend its censorship to all news areas. Checking the sources and information in all news-related posts will help the company become trusted by the general public.

  6. Walter Dingwall September 27, 2019 at 7:19 pm #

    Misinformation has been a hot topic in recent years. So often is the mantra “Fake News” heard regarding any incriminating or unflattering claim about someone or something. Packaged with the spread of disinformation come exponentially more outlets from which to draw it. This deadly combination makes it all the harder for the consumer to find the truth, and gives people less reason to believe they understand and are in control of their environment.
    With the development of deepfake videos, false information is even harder to identify as false. Why would you question a video of Meryl Streep saying the film industry is a con? People have gotten used to filtering written news on the internet for credibility, and the phone industry has reached high standards of recognizing telemarketers and robocallers as scams. Deepfakes are just the newest form of persuading the untrained consumer to do something they otherwise may not have thought of doing.
    Mark Sullivan’s example of the Russian interference in the 2016 election is exactly why tech companies and security agencies must work quickly to develop preventative technology to uphold fair play (to the degree that America upholds it on its own). With ever more convincing software, hackers, both domestic and foreign, may gain the power to direct America’s narrative in their own interests.
    It is a positive step for Facebook (especially) and other tech companies to hold competitions to test the security of these sites, as hackers will think of many things the site developers did not. This gets demonstrated every year at the DEF CON convention, where the U.S. military hosts hacking competitions to test its own systems’ safety and learn how to prevent hackers from controlling military weaponry or finding classified files. In 2019, the military allowed competitors to challenge the security of an F-15 fighter jet. The jet’s security walls were easily breached in many cases, which is both fantastic and horrifying. Sure, the military was able to find the cracks in the system by calling in hackers and giving them a timer in a competitive hack-off, but these people were under pressure and could not possibly have found every flaw in the jet’s security. What happens when the U.S. sets up a military base with static planes for days at a time? At some point something is going to get through from foreign hackers. All the jet makers can do is try harder and beef up security, but it is likely that nothing will be completely resistant.
    Just as the hackers could still find weak defenses, the disinformation from Russia or any corrupting nation will find its way to America. This is the new fear. This is the new plague. And U.S. civilians may be none the wiser.

  7. Nicholas A.P. October 4, 2019 at 9:37 pm #

    As a general rule, I am against social media companies removing posts, as I am a firm supporter of free speech, but “deepfakes” are where I draw the line. These computer-generated videos are bordering on slanderous. I fully expect that, if deepfakes become more prevalent and convincing, slander will encompass not just lies spoken about an individual but also the use of an individual’s likeness, without consent, to deliver a message that person would not otherwise support. Other than potentially giving rise to lawsuits, this phenomenon could be dangerous to the rights of US citizens, especially if groups use deepfakes to try to influence elections for nefarious purposes. On another note, the challenge created by Facebook et al. is a surprisingly smart way to find a fix for the problem. Enlisting the help of top-tier talent from some of the nation’s best schools is a sure-fire method to achieve the goal of detecting deepfakes, especially when there is an incentive in the form of monetary compensation. Professors and students at these universities also have research grants and scholarships that would give them adequate resources to win, even if they were not being backed by the Partnership on AI, Microsoft, and Facebook. Although, I do think it is curious that they are able to create a benchmarking tool to test the tool that will ultimately detect deepfakes, but not create the detection tool itself. This leads me to believe that Facebook execs might not have just the public good at the forefront of their minds; this entire contest could simply be a method employed to locate and recruit the best talent. Given how cost-intensive this undertaking is, it is worth questioning Facebook’s proactivity.
I think the reason Facebook is taking this so seriously is that Mark Zuckerberg is probably tired of testifying before congressional committees and explaining why things have gone wrong on his platform, and on the internet in general. On the other hand, maybe they have finally started to take their security more seriously. After all, facing an antitrust probe might make any company move a bit more carefully and do more for public welfare.
    Another interesting question to ponder is whether all deepfakes are inherently bad. Here I point to the famous deepfake referenced in this very article, in which a well-known comedian, Jordan Peele, voiced over a rendering of Barack Obama for comedic effect. I personally do not see anything wrong with that particular deepfake, and I think others of the same nature should be allowed on social media platforms. Of course, there would have to be a set of stipulations to go along with this allowance. I think that once detected by the screening tool, all deepfakes that are not political in nature ought to be posted with some kind of marking that clearly indicates the video is not real. The trouble with doing this, though, is setting objective criteria for what is or is not okay. I can easily see how the advent of deepfakes could lead to the creation of some very interesting, creative projects, or some very misleading faked interviews. Whatever happens, this technology will only become more advanced and will probably have a great impact on the media industry. I am just glad that the major corporations listed in the article recognize the gravity of the situation.
