An Artificial-Intelligence First: Voice-Mimicking Software Reportedly Used In A Major Theft

from WaPo

Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said, in a remarkable case that some researchers are calling one of the world’s first publicly reported artificial-intelligence heists.

The managing director of a British energy company, believing his boss was on the phone, followed orders one Friday afternoon in March to wire more than $240,000 to an account in Hungary, said representatives from the French insurance giant Euler Hermes, which declined to name the company.

The request was “rather strange,” the director noted later in an email, but the voice was so lifelike that he felt he had no choice but to comply. The insurer, whose case was first reported by the Wall Street Journal, provided new details on the theft to The Washington Post on Wednesday, including an email from the employee tricked by what the insurer is referring to internally as “the false Johannes.”

More here.


8 Responses to An Artificial-Intelligence First: Voice-Mimicking Software Reportedly Used In A Major Theft

  1. Nicole Shubaderov September 9, 2019 at 10:26 pm #

    Knowing how much technological progress has occurred throughout the 21st century, I am not surprised that people are starting to use these advanced tools to scam other people. To be completely honest, I am very worried about where these new advances will lead us. My worries stem from the fact that digital platforms are creating realistic-looking AI personas that both resemble humans in appearance and mimic human speech. Currently, on Instagram, there are a few AI personas with millions of followers that work as models/influencers for companies such as Prada and Balenciaga. It is striking how realistic these AI models are and how convincingly they can be made to seem as if they live in the real world. What is even crazier is that an AI model, specifically one named Miquela, can earn an income by modeling clothing and promoting fashion on her Instagram account. If something like this is possible, then scamming someone with a program that mimics a voice, a conference call, or some past televised event is not far-fetched.

    Recently, “deepfakes,” AI-generated videos with synthetic audio, have become increasingly popular in the hacking community. These videos have caused panic in several situations and led to disruptions in everyday life. An example would be propaganda or political deepfakes released onto the internet, where everyday people such as myself may encounter them and be presented with fake news. One instance from the recent past was the deepfake video of Obama giving a public service announcement. Although fake, the program used to generate it made it truly seem as if Obama were speaking and delivering that speech to the people of the U.S. Another major incident was a video that circulated on social media of Mark Zuckerberg announcing that Facebook owns its users. Although these deepfakes seem very real, experts and ordinary viewers alike have been able to catch flaws in them, but that does not take away from the danger these videos pose to society. If a simple app can create such a realistic fake video, the limits are endless for hackers to create far more realistic and convincing deepfakes.

    But not all AI systems are bad. Many people are trying to better the lives of those who cannot speak through these systems, and the more human-like the voices of these systems sound, the better those users’ quality of life will be. Even if these tools could better the lives of others, though, they are far from perfect. Releasing such systems is risky, which is why researchers are warned to be extra cautious when releasing AI programs that may cause further issues in the future. I don’t know what to think of AI. On the one hand, I find it helpful for recognizing specific individuals, such as criminals, in a crowd, or for giving a voice to those without one. On the other hand, those benefits would be worth little if the programs are mainly used to harm people. Even with the many programs made to detect fraudulent audio and video, AI generation is developing much faster, and it is hard to keep up with its progress. I would prefer to limit the use of such systems until researchers have found that they are safe. There is always a risk when using technology, but large-scale hacks and panic spreading around the world are not events I would want to see occur. Additionally, since these programs have already been created and shared, it is practically impossible to fully prevent the further spread of AI; it is hard to monitor everything that happens online, especially when hackers get involved. So I understand the impracticality of trying to keep AI out of society. But I do hope for further improvements in detection systems so that flaws and false content can be spotted easily. That would greatly benefit our society and would reinforce safety in an area of technology that is still something of a gray zone.

  2. Corinne Roonan September 12, 2019 at 10:24 am #

    Honestly, this article does not shock me. The fact that criminals are using technology to hack and steal money from people is as unsurprising as the sky appearing blue on a clear day. In every facet of our world, criminals find ways to cleverly use technology to manipulate others. With technology continuously advancing at its current rate, there will always be new tools that criminals can use. The focus should not be on fighting the technology itself, but on building defenses against its criminal use.
    The article drones on about how realistic the voices made by this AI software are, which is believable. Technology, as is clear in the modern day, is boundless. It seems implausible that no technology could recognize the difference between AI-generated voices and real human voices, whether by tracing the origin of phone calls or by detecting slight differences that are inaudible to the human ear. Creating barriers before a phone call happens, or installing post-call procedures in businesses, is crucial to defending against huge losses of money to tech criminals.
    The issue, though, is the fast pace of technological advancement. As soon as businesses institute protocols to guard against tech criminals, newer technology will appear that circumvents whatever safeguards were put in place. The most effective plan is to establish protocols and then continually add to and change them to ensure the ongoing safety of business assets.
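    To make the idea of a post-call procedure concrete, here is a minimal sketch of the kind of safeguard a business could layer onto its payment workflow: any wire request received by voice above a set amount is blocked until it has been confirmed through a second, independent channel. Every name, threshold, and helper below is hypothetical and only illustrates the concept.

        # Hypothetical post-call safeguard: a voice request alone is never
        # enough to move money above a set threshold.
        from dataclasses import dataclass

        CALLBACK_THRESHOLD_EUR = 10_000  # illustrative limit

        @dataclass
        class PaymentRequest:
            requested_by: str       # who the caller claims to be
            amount_eur: float
            destination_iban: str
            confirmed: bool = False

        def callback_confirm(request: PaymentRequest, directory: dict) -> bool:
            """Call the requester back on the number held in the company
            directory, never on a number the caller supplied."""
            official_number = directory.get(request.requested_by)
            if official_number is None:
                return False  # unknown requester: never release funds
            # Stand-in for the real out-of-band step (a return call,
            # a signed ticket, or an in-person confirmation).
            print(f"Call {official_number} to confirm {request.amount_eur:.2f} EUR "
                  f"to {request.destination_iban}")
            return request.confirmed  # stays False until a person confirms

        def release_funds(request: PaymentRequest, directory: dict) -> bool:
            if request.amount_eur >= CALLBACK_THRESHOLD_EUR:
                return callback_confirm(request, directory)
            return True  # small amounts follow the normal process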
    This is not a threat to businesses alone; for ordinary people it can become an issue as well. It may be on a much smaller scale (or at least it seems so), but vulnerable groups such as the elderly are always easy targets, especially when it comes to technology. Elderly people commonly have phones to stay in contact with family, but their skills often do not go much beyond the ability to make a call. If businesses start putting up defenses against tech criminals, those criminals will move on to the next vulnerable group to get their money.
    With the advancement of technology, there is never going to be a sure-fire way to protect yourself or others from fraudulent scams. No matter how many protocols are put in place, criminals will always find a way around them. As scary as that may be, the advantages we receive from this technology tend to outweigh the damage it causes.

  3. Samuel Kihuguru September 12, 2019 at 11:13 pm #

    Technology has changed how the world works, influencing almost every aspect of modern life. But while modern technology undeniably brings a number of advantages across multiple sectors, it also has its share of downsides. The interconnectivity that ties all devices and systems to the internet has invited malicious forces into the mix, exposing users and businesses to a wide range of threats. The reported use of voice-mimicking software to dupe a British energy company into wiring more than $240,000 to an unverified account in Hungary, a loss now handled by its insurer, the French giant Euler Hermes, is just one of many examples of how advancements in technology have been used to undermine physical safety measures and existing digital security systems. I am reminded of Harrods Ltd. v. Sixty Internet Domain Names in 2002, decided just as the internet was becoming the new playing field for business commerce. In that case, Harrods was authorized under the Anticybersquatting Consumer Protection Act to bring the case under in rem jurisdiction in the federal district court in Virginia. In this age of information and technology, however, the speed at which companies like Lyrebird advance makes it increasingly hard to craft policies and regulations capable of mitigating the consequences. I do not think Lyrebird should have been permitted to sell the “most realistic artificial voice-mimicking software in the world” as a commercial product. Allowing anyone to create a voice-mimicking “vocal avatar” by uploading as little as a minute of real-world speech fails to account for how easily this form of technology can be abused, as this case shows.

    Closely tied to these implications is the fact that the company’s defense of its product also fails to address its insecurities, relying instead on its subtle shortcomings: “some of the faked voices won’t fool a listener in a calm and collected environment.” But in some cases, thieves have employed methods to explain those quirks away, saying the fake audio’s background noises, glitchy sounds, or delayed responses are the result of the speaker being in an elevator or a car, or in a rush to catch a flight. It is my strong belief, therefore, that this software should have been used and tested by capable federal authorities, such as the military, before it became so accessible. While it is true that these changes are inevitable, and that the benefits the technology provides for people who cannot speak are legitimate, we must be far better informed about the potential risks of voice-mimicking software and other products on the business technology frontier.

  4. Victoria Balka September 13, 2019 at 10:58 am #

    The technology discussed in this article is extremely frightening. As technology expands every day, programs like these voice-mimicking ones are getting more advanced and easier to use. While there are many advantages to technology and its advancements, there are also flaws, such as voice-mimicking features that allow criminals to impersonate a person’s voice through a free application. Even though I find the idea of these applications scary, I agree with Lyrebird’s decision to offer it as a free app. Because it is free, it raises awareness of this technology; if it were not, there is a good chance someone else would have developed the same capability and sold it on the black market. The article mentions how realistic these voices are after very little work to capture an accurate voice recording, and a simple YouTube search confirms how alarming that realism is. The article also described an employee who called his CEO back after getting strange calls from criminals pretending to be him; if the employee had simply believed the criminals were the CEO, his company would have lost thousands of dollars. To keep this from becoming a much bigger problem, companies need to start working on technology that can detect when an AI-generated voice is on the other end of a phone call. Such technology would help companies avoid losing large sums of money based on a real-sounding phone call.
    This new AI technology is a threat to everyone, since criminals can replicate anyone’s voice as long as they have about a minute of it recorded. I am sure criminals will try to get money from whomever they can using this technology, and they will probably aim to steal from the more naïve. As mentioned in the article, people who are not aware of the technology, or who are caught in a chaotic moment, are more willing to hand over money, so those are the groups criminals will most likely target. One way to keep this from becoming a massive problem is to educate people about the technology and to encourage them, if they are not sure about a person asking for money over the phone, to confirm the request in person. With the expansion of technology and the improvements being made to AI, it will only get harder to tell whether the person on the other end of the line is a computer or a real human. I am interested to see how this technology develops and what is created to keep it from becoming a major issue.
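    To illustrate what one small building block of such detection software might look like, the sketch below extracts spectral features from labeled recordings and trains a simple classifier to separate real speech from synthesized speech. It assumes a directory of labeled clips that would have to be collected first; the file layout, feature choice, and model are illustrative assumptions only, and real detection systems are far more sophisticated.

        # Illustrative sketch: classify clips as real vs. synthesized speech
        # from simple MFCC summary features. Paths and layout are hypothetical.
        from pathlib import Path

        import numpy as np
        import librosa
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        def clip_features(path: str) -> np.ndarray:
            """Summarize a clip as the mean and std of its MFCCs."""
            audio, sr = librosa.load(path, sr=16_000)
            mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

        def load_dataset(root: str):
            """Expects root/real/*.wav and root/fake/*.wav (assumed layout)."""
            X, y = [], []
            for label, name in [(0, "real"), (1, "fake")]:
                for wav in Path(root, name).glob("*.wav"):
                    X.append(clip_features(str(wav)))
                    y.append(label)
            return np.array(X), np.array(y)

        if __name__ == "__main__":
            X, y = load_dataset("voice_clips")  # hypothetical directory
            X_train, X_test, y_train, y_test = train_test_split(
                X, y, test_size=0.2, random_state=0)
            model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
            print("held-out accuracy:", model.score(X_test, y_test))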

  5. Nicholas Hicks September 13, 2019 at 5:32 pm #

    The technologies discussed in this article would have seemed like science fiction a few years ago but are now obviously very real capabilities. The possibility that criminals could use programs produced by major tech companies to reproduce my or my loved ones’ voices well enough to impersonate us is extremely alarming. While reading this I was reminded of a previous article I posted about on this blog, which discussed how tech companies record an obscene amount of personal information and activity from our public accounts on the internet. I realize this could lead to people’s voices and speech being used by major companies like Google to fine-tune the very software that could one day mimic our voices. It makes you wonder why there is no legislation restricting the manner in which these programs are trained, or at least requiring a clearly presented way for users to opt out of aiding their development. This situation, with vocal AI being trained by unknowing consumers, is very similar to the way companies train image-recognition programs. If you have ever completed a Captcha to access a site, one where you must “click the pictures with bikes” or something along those lines, you have almost certainly been helping to train image-recognition software. In my opinion, these methods of quietly training AI programs that could end up being used for nefarious purposes are a disappointment, and our legislators have failed to regulate the practice in any way.

  6. Walter Dingwall September 13, 2019 at 7:08 pm #

    In many things, it may be said that the ability to do something does not necessarily make the action ethical or reasonable. For instance, in the years leading up to the Second World War, German scientists were making discoveries in nuclear fission, some of the basic mechanics used to create the nuclear bomb. To be clear, Germany was on the threshold of delivering the greatest demonstration of mass destruction humanity would ever have seen, with no redeeming reasons to support it.
    The modern tech world does not stray far from similarly treacherous and devastating possibilities opened by new advancements in humanity’s reach. The growth of the internet, and the personal information constantly being handed over to websites, is dangerous enough in theory: with that information comes far more opportunity for personal data breaches and identity theft. For this reason, there must be regulations and company policies that assure the users of data-collecting sites either that their data is safe, or that there are remedies they can call upon if a problem arises involving the release or theft of that data.
    Now, with the rise of “deepfakes,” via digital facial modifications or audio simulations and manipulations of a person’s voice, more and more kinds of personal data can be put to malevolent use. In the case of the unnamed company in Drew Harwell’s article, “voice-mimicking” tech was used to scam a company into delivering a great sum of money to a fraudulent caller posing as a person of importance (a boss, a CEO).
    The invention of the telephone and the spread of telephone lines have always allowed scams to be carried out over the phone against unsuspecting people. That did not stop the production and use of phones; it simply took time for people to catch on to scam calls and for agencies motivated to cut down on these callers to develop.
    With the relatively new concept of voice-mimicking technology, one could argue that there is no reason for it at all, given the crimes it enables and its seemingly negligible uses. But those developing the technology, and those who stand to make money on it, believe the short-term risk will be well surpassed by the benefits later.
    There are people without the ability to speak who would love to talk like everyone else. Wouldn’t it have been great to hear Stephen Hawking with a natural voice? Wouldn’t that raise approachability and connection? Many systems already use automated computer voices, like those in the mall or at the bank. Wouldn’t it make sense to lessen the noticeable presence of artificiality and cold automation? This is what those in control of the advancement of this type of technology believe.
    Humans love to be more comfortable; most things they do serve that goal somewhere down the road. Even the stressful and difficult things, the long hours of schoolwork or the mountainous excursions on foot, lead to well-paying jobs or to a sense of accomplishment and confidence. More comfortable, more comfortable: that is humanity’s mantra.
    The first and second times the nuclear bomb was used outside of a test were also the last. Why? The world worked out regulations and has stayed in agreement not to use that kind of mass destruction again, however close it may have come since WWII. The same will happen with voice-mimicking technology, as it has with so many potentially dangerous technologies. The benefits will surpass the dangers.

  7. Mia Ferrante September 13, 2019 at 7:12 pm #

    It was only a matter of time before a scam this big took place, and I’m shocked it didn’t happen sooner. I know that phone scammers are common in the United States, but I was not aware of how common they are in countries like the United Kingdom and Hungary. Still, it is not shocking to me. Criminals will take any measures to scam people into giving away money or personal information such as card numbers, Social Security numbers, or insurance details, so people have to be very careful about what they give out over the phone. The number of calls I receive daily from automated voice systems claiming someone has stolen my identity or my Social Security number is staggering. They all sound similar, and they are likely using voice-mimicking software of the kind discussed in the article. Personally, because I know phone scams are common, I either don’t answer a suspicious call or hang up as soon as I hear the voice. It can be difficult, because some scam operations use numbers that look similar or identical to those where you live; when I get a call with my own area code I always answer in case it is family or someone I know, but most of the time it turns out to be a scam. According to Robokiller.com, 34.8 billion illegal spam calls flood United States phone lines year after year. That number is insanely high, and the number of scams that take place over the phone among everyday people is unfathomable. Especially now that AI can replicate any voice picked up on the phone from about a minute of conversation, everyone is at risk. I think it is crucial for businesses and even everyday people to have software that protects them against phone scammers. Even when these scammers sound like someone calling about a bank account or a Social Security number, people have no reason to suspect criminal activity, so they give out their personal information. By simply giving out a few numbers, one’s life can change dramatically through scams, identity theft, or fraud. I am hopeful that this topic will get more attention and that new technology will come out to stop phone scammers from attacking innocent people.
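    As a small illustration of how the look-alike-number trick works, and how crudely it can be flagged, the snippet below marks a call as suspicious when the number shares the owner’s area code and exchange but is not in their contacts. This is roughly the “neighbor spoofing” heuristic that call-blocking apps rely on; the numbers and contacts are made up for illustration.

        # Toy "neighbor spoofing" check: same area code and exchange as the
        # owner's number, but not a known contact. All data is made up.
        MY_NUMBER = "+19735551234"                      # hypothetical owner number
        CONTACTS = {"+19735550111", "+12015550199"}     # hypothetical contacts

        def looks_like_neighbor_spoof(incoming: str) -> bool:
            if incoming in CONTACTS:
                return False
            same_area_code = incoming[:5] == MY_NUMBER[:5]   # "+1" plus area code
            same_exchange = incoming[:8] == MY_NUMBER[:8]    # area code + exchange
            return same_area_code and same_exchange

        print(looks_like_neighbor_spoof("+19735559876"))  # True: looks local, unknown caller
        print(looks_like_neighbor_spoof("+12125550000"))  # False: different area code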

  8. Javier Tovar September 13, 2019 at 8:32 pm #

    Imagine picking up the phone to answer a call from a loved one, only to find out they never called you in the first place. The caller, pretending to be your loved one, told you to meet them at a certain location with money in your wallet. This article is very concerning to me. When people first think of technology as a threat, most imagine AI taking over and eliminating humans like in the Terminator films. After reading this article, it is clear that terminators are the least of our worries. With the invention of this new voice-mimicking software, no one is safe from being deceived by criminals. When I think of criminals, I think of robbers on the street holding people against their will until they give up money or other valuables. In this new age of technology, there is a whole new wave of criminals entering the world like we have never seen before. Just thinking about the technology criminals can now utilize for their activities gives me a very bad feeling.
    It is very shocking that thieves were able to steal $240,000 from a very large company. Just imagine what thieves could get away with once they begin to target the average person. Many unnerving scenarios come to mind. For example, they could use this voice-mimicking technology to lure people to certain locations: a criminal could pretend to be a loved one and tell you to meet them anywhere they choose, and when you get there they could kidnap you, steal from you, or simply assault you. I would be scared if I knew that someone who disliked me had this type of software and could use it against me in any way. The software can also be used to get information out of people, which can expose them in many ways.
    Almost anything you can think of could be done against you using this technology. Every time you pick up the phone, you will have to question whether the person you are talking to is real, even if it’s your mother asking what you want for dinner on Sunday.
