Fearing 2020 ‘Deepfakes,’ Facebook Will Launch Industry AI ‘Challenge’

from Fast Company

Facebook wants to be ready for a deepfake outbreak on its social network. So the company has started an industry group to foster the development of new detection tools to spot the fraudulent videos.

A deepfake video presents a realistic AI-generated image of a real person saying or doing fictional things. Perhaps the most famous such video to date portrayed Barack Obama calling Donald Trump a “dipshit.”

Facebook is creating a “Deepfake Detection Challenge,” which will offer grants and awards in excess of $10 million to people developing promising detection tools. The social network is teaming up with Microsoft and the Partnership on AI (which includes Amazon, Google, DeepMind, and IBM), as well as academics from MIT, Oxford, Cornell Tech, UC Berkeley, and others on the effort. The tech companies will contribute cash and technology and will help with judging detection tools, a Facebook spokesperson told me.

Importantly, the group will create a benchmark that developers of deepfake detection tools can use to measure the effectiveness of their technology. The benchmark will include a scoring system to reflect each tool’s accuracy, and the best scores will be ranked on a leaderboard. Facebook also says it will hire actors to create “thousands” of deepfake videos, which will serve as the test material for detection tools.
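In outline, a benchmark like the one described scores each detector by how accurately it classifies a fixed set of labeled clips. Here is a minimal sketch of that scoring loop; the function and data names are hypothetical, since the article does not describe the challenge’s actual API or metric:

```python
def score_detector(detector, benchmark):
    """Return the fraction of labeled clips the detector classifies correctly.

    `benchmark` is a list of (video, is_deepfake) pairs; `detector` is any
    callable that returns True when it judges a clip to be a deepfake.
    (Both names are illustrative, not part of any published challenge API.)
    """
    correct = sum(1 for video, is_deepfake in benchmark
                  if detector(video) == is_deepfake)
    return correct / len(benchmark)

# Toy usage: a trivial "detector" and a two-clip labeled benchmark.
toy_benchmark = [("clip_a", True), ("clip_b", False)]
naive_detector = lambda video: video == "clip_a"
print(score_detector(naive_detector, toy_benchmark))  # prints 1.0
```

A leaderboard would then simply rank submitted detectors by this score on the held-out set of acted deepfake videos.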

More here.


3 Responses to Fearing 2020 ‘Deepfakes,’ Facebook Will Launch Industry AI ‘Challenge’

  1. Kathleen Watts September 13, 2019 at 12:08 pm #

    Facebook’s attack on deepfakes is surprisingly proactive. Seldom do they even admit there is a problem before its consequences arise. Not only did they wait before admitting that misleading information was spread on their site during the 2016 election cycle, they also allowed Cambridge Analytica and Vote Leave to target citizens of the UK and expose them to purposefully incorrect propaganda. More than that, they refused to release information about who posted these misleading campaigns and wouldn’t even admit they existed until someone caught one of the ads. These new anticipatory measures might help people trust Facebook a little more, and if this works, Americans will be able to have more faith in the 2020 election than they did in the 2016 election.

    It should also be noted that this new technology will more than likely become the standard, at least for a while. Since the term ‘deepfake’ emerged around the end of 2017, amateur computer scientists have been playing around with the technology, creating videos of popular actors and politicians that people couldn’t tell were fake. This, as Facebook has noticed, could have a huge impact on how people view politics, and especially how less-informed Americans view politics. I think Facebook doesn’t want to be the target of criticism this time around as they have been in years past. However, their lax ad system, which allows most companies to post ads that include misinformation, is still in question. While they may have said they will pay more attention to it, we won’t know if they have until after the consequences arise, as in times past. Similarly, the technology they are challenging people to make may not even be good enough to catch most deepfakes by December. Even the article admits that, if this challenge is successful, people will have to pay more for their deepfakes to pass Facebook’s new system, but that doesn’t mean they won’t still show up on Facebook.

    Not to mention, as technology advances, this new system will eventually become obsolete, so Facebook must be willing to keep up with advances in deepfakes permanently. We can only assume this problem will continue to show up even after Facebook institutes the new system, as computer technology changes. Hackers have always been willing to adapt, and a challenge like this will probably entice them to work even harder. What would be great is if DARPA or DHS took the lead on these efforts and set up a task force or team to continually create better and harder-to-crack anti-deepfake technology. We cannot rely on Facebook and Google to continually keep up with the advancements in technology and hacking capabilities in this case. For now, though, I do hope that Facebook’s challenge is able to create a technology that protects the 2020 election from propaganda.

  2. Nicole Shubaderov September 13, 2019 at 5:18 pm #

    With the rapid growth of technological systems and developments such as AI, I am not surprised that companies like Facebook are trying to prevent the abuse of such programs in society. So many previous incidents have occurred with the abuse of AI, especially on Facebook’s platform, that it is a necessity to get a hold of the problem before it spreads and worsens. These “deepfakes” are remarkably realistic, and an average individual such as myself would assume that the videos are authentic. As stated in the article, these AI creations are becoming easier and cheaper to make. What is more alarming is that systems to detect deepfakes are being developed at a much slower rate than the deepfakes themselves. That is why a competition such as Facebook’s “Deepfake Detection Challenge” is an interesting way to motivate people to quickly create the programs necessary to monitor these deepfakes. With ten million dollars at stake, I am certain that within the year someone, whether it be students at a university or employees at a company, will have built a successful program to win the large sum of money being offered as an award.

    But even if we have the appropriate systems to detect deepfakes, what are the next steps after that? Are companies such as Facebook, Instagram, and YouTube going to be allowed to delete these videos? As the article expressed, would political videos be the only videos allowed to be deleted off such platforms? These are questions we need to account for legally and ethically, because if one type of deepfake is not allowed on social platforms, what would justify keeping another type?

    A major concern of mine with the creation of a detection program is its ability to truly detect deepfakes. Although there will be a very strict judging process, if deepfake videos are constantly being improved at an exponential rate, who is to guarantee that the detection system(s) created will protect against the newly improved videos? I do not believe that this deepfake battle will be won. I truly find that it will be a game of tag throughout the years until deepfake videos become so undetectable that such detecting/protective systems will become useless. The battle for control over deepfake videos should have been started at an earlier point in time, when it would have been much more manageable to gain control with protection programs. My opinions aside, the overall concept of creating a competition is one that I fully support, because there are so many individuals who want to help and better society, and the opportunity to win a lot of money and be recognized around the world may change someone’s life. Even if the battle for control over deepfakes may not be won, this competition will help kickstart someone’s career or help better their life.

    I believe that all deepfakes, whether major political ones or your average community drama, should be deleted off social media platforms. Even if the damage is already done, I feel that such fake videos should not continue to circulate throughout the world. But on the topic of the legality of deleting deepfake videos off social media platforms, the law currently holds that deepfakes are not illegal unless they are pornographic fakes, since defamation could be claimed in that situation. Thus, creating a deepfake of a celebrity or political figure in a bad light is perfectly legal. This creates an issue when platforms try to delete such content, because deepfakes are not considered illegal and the companies have to come up with some legal reason to take them off the internet. Until society can define a law specifically tailored to such deepfake content and create successful programs to detect and prevent such videos, the fight for control over deepfakes will be long and arduous.

  3. Joseph Antonucci September 14, 2019 at 9:25 pm #

    Facebook’s initiatives against deepfakes reflect a coming age (if it is not upon us already) in which finding the truth will be harder than ever. In prior times we could at least say that “seeing is believing”: despite the wide array of conflicting information online, we could at least trust video footage and clips. With the rapid growth of technology, things we would have thought impossible before are now possible, and we have to second-guess what our eyes are telling us when we see something.

    Video and audio footage can now be completely faked to mislead people, and Facebook’s challenge provides a good incentive for tech-savvy people to figure out a way to weed out these types of videos. The only possible problems are: what if the AI program used to spot deepfakes lets some slip through the cracks? And further, what if the methods used to separate deepfakes from legitimate videos mistakenly flag a legitimate video as a deepfake and cause it to be removed?

    At the end of the day, people will still need to be vigilant; AI software is not going to remove all of these videos, so people should be willing and ready to investigate anything they see that seems so outlandish and far-fetched that it is probably not even “real.”

    We can certainly pursue these methods of preventing and removing instances of deepfakes when they are found, but people should be more aware that the battle of information online includes bad actors who wish to mislead and manipulate, whether that be through flat out fakery and forgery, or stretching the truth.

    In my opinion, social media networks should not be the arbiters of truth, and they should not be removing any kind of content merely because they deem it misleading. However, I do find it reasonable for these companies to remove content that is objectively, undeniably fake and was doctored and spread with the sole purpose of misleading people, such as these deepfake videos.

    Unfortunately even if those companies operated under such a stringent rule, they would still end up overreaching and removing people and content that probably shouldn’t be removed, because many people (especially these Silicon Valley types) cannot differentiate between what is a “fringe, unpopular opinion” and what is genuinely misleading. An example would be the “flat earth” community, asserting what they believe to be the truth based on evidence they have come across. Their evidence may be misleading or incorrect, but social media companies bear no responsibility to make that determination.
