‘Fake News’ Is Sparking an AI Arms Race

from Popular Mechanics

In 2018 the California-based company FireEye tipped Facebook and Google off to a network of fake social media accounts from Iran that was conducting campaigns to influence people in the United States.

In response, Google and Facebook, using backend data to determine that a branch of the Iranian government was responsible, removed dozens of YouTube channels, a score of Google+ accounts and a handful of blogs.

Lee Foster, manager of information operations at FireEye, was at the forefront of the firm’s investigation. “Right now, you know something’s automated just by the sheer volume of content pushing out,” he says. “It’s not possible for a human to do this, so it’s clearly not organically created. Often you’ll see automated retweeting by some list of accounts that exist just to boost out a message.”
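
To make Foster’s heuristic concrete, here is a minimal sketch of what volume-based flagging might look like; the thresholds and post fields are illustrative assumptions, not anything FireEye or the platforms have described.

```python
# Rough sketch of the volume heuristic Foster describes: flag accounts whose
# posting rate or retweet ratio looks implausible for a human.
# The thresholds and the post fields ("author", "is_retweet") are assumed
# for illustration only.

from collections import defaultdict

MAX_HUMAN_POSTS_PER_DAY = 150   # assumed ceiling for organic activity
MAX_RETWEET_RATIO = 0.9         # accounts that only amplify others look automated

def flag_suspicious_accounts(posts, days_observed):
    """posts: iterable of dicts like {"author": str, "is_retweet": bool}."""
    totals = defaultdict(int)
    retweets = defaultdict(int)
    for post in posts:
        totals[post["author"]] += 1
        if post["is_retweet"]:
            retweets[post["author"]] += 1

    flagged = set()
    for author, count in totals.items():
        rate = count / days_observed
        retweet_ratio = retweets[author] / count
        if rate > MAX_HUMAN_POSTS_PER_DAY or retweet_ratio > MAX_RETWEET_RATIO:
            flagged.add(author)
    return flagged
```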

But the landscape is about to change, he says, as artificial intelligence comes online that can mask its automated roots.

“Imagine having a capability out there that can automate the organic creation of original content effectively enough that it looks real, but you don’t even have to have it operate or touch it,” Foster says.

His fears are shared by other analysts. A recent Brookings Institution report outlined some of the changes that are in store. “In the very near term, the evolution of AI and machine learning, combined with the increasing availability of big data, will begin to transform human communication and interaction in the digital space,” the report, The Future of Political Warfare, predicts. “It will become more difficult for humans and social media platforms themselves to detect automated and fake accounts, which will become increasingly sophisticated at mimicking human behavior.”

The days of AI catfishing are fast approaching. A sophisticated AI could gather information about people, determine who is susceptible to a particular message, and tailor the interaction as if the AI were a person. Brookings says AI will “micro-target citizens with deeply personalized messaging. They will be able to exploit human emotions to elicit specific responses. They will be able to do this faster and more effectively than any human actor.”

So what’s the solution? Artificial intelligence that can keep pace with the volume of manipulated photos, articles, and social media messages and do the analysis needed to detect them. It will take an AI to catch an AI, the two dueling to determine what’s real.
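
As a rough illustration of the “AI to catch an AI” idea, the sketch below trains a simple text classifier on posts already labeled authentic or machine-generated and scores new content. Real detection systems are far more sophisticated, and the labeled training data here is assumed to exist purely for the example.

```python
# Toy sketch of AI-vs-AI detection: fit a classifier on labeled examples,
# then estimate how likely new posts are to be machine-generated.
# Purely illustrative; not any platform's or FireEye's actual method.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_detector(texts, labels):
    """labels: 1 for suspected machine-generated text, 0 for authentic."""
    detector = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    detector.fit(texts, labels)
    return detector

def score_posts(detector, new_texts):
    # Probability that each post is machine-generated, per the learned model.
    return detector.predict_proba(new_texts)[:, 1]
```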

More here.

Posted in Ideas, Social Media, Technology.

4 Comments

  1. Fake news is such a hot topic because in the past few years we have seen dangerous consequences of this propaganda. When there are so many sources to get news from, including social media, there will always be a lot of opinions instead of news. But people influence people…and people are gullible. Most citizens are too busy or too lazy to check more than one source for their news. So it becomes very dangerous when accounts are made specifically to spread propaganda and influence people’s thoughts. The simple fact that people are so easily influenced is scary enough, but to allow your vote, your voice in this country, to be influenced by fake accounts with a specific political agenda is to undermine democracy. So I agree with the need to put a stop to this, because if an AI can confuse even the people who are aware of this problem and take preventative measures to avoid it, then it will be hard to trust any news. Using AI to fight AI is very smart, but it is also important that it all be overseen by people, because ultimately we are still socially smarter than AI and can hopefully spot a bot if we know what to look for. So overall I agree with this approach, but I also believe that we should be teaching kids in class how to distinguish real sources, fake sources, and biased sources on the internet. This is a skill I did not learn until college, and it is very helpful not only for writing papers but for sourcing any information on the internet. If we teach them now, by the time they reach voting age they will be able to research candidates and policies without outside influence. I believe AI detection and teaching source evaluation together will help us as a society figure out how to navigate all this fake news.

  2. Fake news is one of the biggest issues we face in our world today. It allows outside actors to invade our country and learn everything about us, which is highly dangerous. Terrorism is a huge deal today, and it has become much easier than in the past for terrorists to plan their attacks on us. I do not believe our internet is protected enough, simply because of the reasons stated above about fake accounts being spread to conduct campaigns. Things that are in the United States should only be available to the people who live here. The internet should simply be separated: one specifically for America and then others for various countries. Being too involved with the outside world isn’t really the best for us, unless it is mainly for protection. Creating an AI to help detect terrorist threats and scams is an excellent idea, but only if it can be effective. An idea like this would be very hard to pull off and very expensive, which would require a lot of funding. The question is where we would get this funding, and especially the employees to do the work. There are ways to prevent negative situations from occurring; however, it will not be an easy task.

  3. All of us have heard the phrase “fake news” used at some point recently; I’d bet that most of us have heard it within the past couple of weeks. At first, I thought that the concept of spreading false information was nothing new; all types of propaganda have existed as long as politicians have. However, as this article explains, in this new age of social media, “fake news” can be spread at never-before-seen rates. Artificial intelligence has made it so that bots can send millions of copies of the same message at once. This is concerning, as whoever controls those bots can begin shaping what people see on social media, and in turn the way they think.
    I use Twitter all the time, and I tend to follow people who would be considered more left-leaning politically. Sometimes I think this may be an issue for me, since I am trapping myself inside a thought echo chamber where I rarely see opinions that vary from my own. AI can create a targeted echo chamber, where ideas are shown to the people most vulnerable to believing them. Since they are bombarded with these ideas, they do not get a chance to hear other perspectives, and they are trapped. Another issue is that since AI is advancing daily, there can be little to no difference between a legitimate news source and an account designed to further a certain idea, using fact or fiction, whatever works best. The scariest part of this situation is that, as the article says, it is too fast and too large to be handled by humans. This means we need to rely on “good” AI to fight and weed out the “bad” AI. Feels a little dystopian to me.
    The spread of misinformation on social media means that we as users must be diligent in knowing what sources produce the posts and advertisements we see. I know that diligent is not a word people think of when talking about how they use social media; it is usually a place for meaningless scrolling. But if we are not careful, “fake news” may take over our lives.

  4. Honestly, with how prevalent the internet, and social media in particular, have become in the general public’s lives, it was inevitable that the legitimacy of information would become questionable. The rise of artificial intelligence throughout the world at the same time complicates things even further. Using A.I. to counter the spread of ‘fake news’ sent by other A.I. is a logical step, but it won’t solve everything. People tend to be drawn to sensationalized headlines, and the truth is not always sensational.
    The true challenge, in my opinion, is figuring out how to make the public aware of these false sources. Many articles, internet personalities, and even politicians have been called out for misinformation but continue on as though nothing ever happened. Raising that awareness will most likely be the hardest part. However, building A.I. as a sort of filter or alarm system is a great start and should still prove beneficial in the long term.
