An Artificial-Intelligence First: Voice-Mimicking Software Reportedly Used In A Major Theft

from WaPo

Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said, in a remarkable case that some researchers are calling one of the world’s first publicly reported artificial-intelligence heists.

The managing director of a British energy company, believing his boss was on the phone, followed orders one Friday afternoon in March to wire more than $240,000 to an account in Hungary, said representatives from the French insurance giant Euler Hermes, which declined to name the company.

The request was “rather strange,” the director noted later in an email, but the voice was so lifelike that he felt he had no choice but to comply. The insurer, whose case was first reported by the Wall Street Journal, provided new details on the theft to The Washington Post on Wednesday, including an email from the employee tricked by what the insurer is referring to internally as “the false Johannes.”

More here.


20 Responses to An Artificial-Intelligence First: Voice-Mimicking Software Reportedly Used In A Major Theft

  1. Nicole Shubaderov September 9, 2019 at 10:26 pm #

    Knowing the amount of technological progress that has occurred throughout the 21st century, I am not surprised that people are starting to use these advanced technologies to hack other people. To be completely honest, I am very worried about where these new advances will lead us. My worries stem from the fact that digital platforms are creating realistic-looking AI personas that both resemble humans in appearance and mimic human-like speech. Currently, on Instagram, there are a few AI personas that have millions of followers and work as models/influencers for companies such as Prada and Balenciaga. It is insane how realistic these AI models can be made, and how their creators can make them seem to be living in the real world. What is even crazier is that one AI model in particular, named Miquela, earns an income by modeling clothing and promoting fashion on her Instagram account. If something like this is possible, then hacking someone with a program that mimics a voice, a conference call, or a past televised event no longer seems far-fetched.

    Recently, “deepfakes,” which are AI-generated videos with synthetic audio, have become increasingly popular within the hacking community. These videos have caused panic in many situations and led to disruptions in everyday life. An example would be propaganda or political deepfakes that get released onto the internet, where everyday people such as myself may come across them and be presented with fake news. One instance from the recent past was the deepfake video of Obama giving a public service announcement. Although fake, the app used to generate the video made it truly seem as if Obama were delivering that speech to the people of the U.S. Another major incident was a video that appeared on social media of Mark Zuckerberg announcing that Facebook owns its users. Although these deepfakes seem very real, many experts and ordinary people have been able to catch flaws in them. But that does not take away from the danger these videos pose to society. If a simple app can create such a realistic fake video, then the limits are endless for hackers to create far more realistic and convincing deepfakes.

    But not all AI systems and programs are bad. A lot of people are trying to better the lives of those who cannot speak through these systems, and the more human-like the voices sound, the better the quality of life those users will have. Even so, these tools are far from perfect. Releasing such systems is risky, which is why researchers are being warned to be extra cautious when releasing AI programs, as they may cause further issues in the future. I don’t know what to think of AI. On the one hand, I find it helpful for recognizing specific individuals, such as criminals, in a crowd, or for giving a voice to those without one. On the other hand, these benefits would be completely undermined if the programs were mainly used to harm people. Even with the many programs made to detect fraudulent audio and video, AI is developing much faster, and it is hard to keep up with its progress. I would prefer to limit the use of such systems until researchers have found them safe to use. Although there is always risk in using technology, large-scale hacks and panic spreading around the world are not events I would want to see occur. Additionally, since these programs have already been created and shared, it is practically impossible to fully prevent the further spread of AI. It is hard to monitor everything that happens online, especially when hackers get involved, so I understand the impracticality of trying to keep AI out of society. But I do hope for further improvements in systems that can detect flaws and false content. That would greatly benefit our society and reinforce safety in an area of technology that is still something of a gray zone.

  2. Corinne Roonan September 12, 2019 at 10:24 am #

    Honestly, this article does not shock me. The fact that criminals are using technology to hack and steal money from people is as unsurprising as the sky appearing blue on a clear day. In every facet of our world, criminals find ways to cleverly use technology to manipulate others. With technology advancing at its current rate, there will continue to be new tools that criminals can use. The focus should not be on fighting the technology itself, but on building defenses against its criminal use.
    The article drones on about how realistic the voices made by these AI programs are, which is believable. Technology, as is clear in the modern day, is boundless. It is hard to believe there could be no technology capable of recognizing the difference between AI-generated voices and real human voices, whether that means tracing the origin of phone calls or detecting slight differences unrecognizable to the human ear. Creating barriers before the phone call happens, or installing post-call verification procedures in businesses, is crucial to defending against tech criminals and preventing huge losses of money.
    The issue with that, though, is the fast-paced progression of technology itself. As soon as businesses establish protocols to guard against tech criminals, newer technology will emerge that overrides any safeguard put in place. The most effective plan is to establish protocols and then continuously add to and update them to keep business assets safe.
    This stands as a threat not only to businesses, but to ordinary people as well. It may be on a much smaller scale (or at least it seems so), but vulnerable groups such as the elderly are always easy targets, especially when it comes to technology. Elderly people commonly have phones to stay in contact with family, but their skills often do not go much beyond the ability to place a call. If businesses build security against tech criminals, the criminals will simply move on to the next vulnerable group to get their money.
    With the advancement of technology, there is never going to be a sure-fire way to protect yourself or others from fraudulent scams. No matter how many protocols are put in place, there will always be criminals finding a way around them. As scary as that may be, the advantages we receive from this technology tend to surpass the damage it does.
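A minimal sketch of what the post-call procedures suggested above could look like: transfers above a set threshold are held until the request is re-confirmed over an independent channel, such as calling the executive back on a number already on file rather than the number that called in. The function names and the threshold here are illustrative assumptions, not anything described in the article.

```python
# Hypothetical sketch: large transfers need out-of-band confirmation.
THRESHOLD = 10_000  # illustrative cutoff; a real policy would set its own


def approve_transfer(amount: int, confirmed_via_callback: bool) -> bool:
    """Approve small transfers directly; hold large ones unless the request
    was re-confirmed over an independent channel (a callback to a directory
    number, not the inbound caller)."""
    if amount < THRESHOLD:
        return True
    return confirmed_via_callback


# A voice-cloned inbound call alone can no longer authorize a large transfer:
assert approve_transfer(240_000, confirmed_via_callback=False) is False
assert approve_transfer(240_000, confirmed_via_callback=True) is True
assert approve_transfer(500, confirmed_via_callback=False) is True
```

The point of the design is that the second channel is chosen by the recipient of the request, not the caller, so a convincing fake voice gains nothing by itself.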

  3. Samuel Kihuguru September 12, 2019 at 11:13 pm #

    Technology has changed how the world works, influencing almost every aspect of modern life. But while modern technology undeniably brings advantages across multiple sectors, it also has its share of downsides. The interconnectivity that ties all devices and systems to the internet has invited malicious forces into the mix, exposing users and businesses to a wide range of threats. The reported use of voice-mimicking software, like that sold by Lyrebird, to dupe a British energy company into wiring more than $240,000 to an unverified account in Hungary (a loss covered by the French insurance giant Euler Hermes) is just one of many examples of how advancements in technology have been used to undermine physical safety measures and existing digital security systems. I am reminded of Harrods U.K. v. Sixty Internet Domain Names in 2002, decided just as the internet was becoming the new playing field for business commerce. In that case, Harrods U.K. was authorized through the Anticybersquatting Consumer Protection Act to bring the case under in rem jurisdiction in a federal district court in Virginia. In this age of information and technology, however, the speed at which powerhouses like Lyrebird approach their zenith makes it increasingly hard to create policies and regulations capable of mitigating the consequences. I do not think Lyrebird should have been permitted to sell the “most realistic artificial voice-mimicking software in the world” as a commercial product. Allowing anyone to create a voice-mimicking “vocal avatar” by uploading as little as a minute of real-world speech fails to account for how easily this technology can be misused, as it evidently was here.

    Tied to these implications is the fact that the defense given by the American corporation also failed to provide a sufficient response to the insecurities of its product, relying instead on its subtle shortcomings: “some of the faked voices won’t fool a listener in a calm and collected environment.” But in some cases, thieves have employed methods to explain the quirks away, saying the fake audio’s background noises, glitchy sounds, or delayed responses are the result of the speaker being in an elevator or car, or in a rush to catch a flight. It is my strong belief, therefore, that this software should have been used and tested by capable federal authorities before it became so accessible. While it is true that these changes are inevitable, and that the benefits the technology provides for those who cannot speak are legitimate, we must be better informed about the potential risks of voice-mimicking software and other products on the business technology frontier.

  4. Victoria Balka September 13, 2019 at 10:58 am #

    The technology discussed in this article is extremely frightening. As technology expands every day, programs like these voice-mimicking ones are becoming more advanced and easier to use. While there are many upsides to technology and its advancements, there are also many flaws, like voice-mimicking features that allow criminals to impersonate a person’s voice through a free application. Even though I find these applications scary, I agree with Lyrebird’s decision to offer this as a free app. Releasing it freely raises awareness of the technology; if it were not freely available, there is a good chance someone else would have developed the same capability and sold it on the black market. The article mentions how realistic these voices are after very little work to capture an accurate voice recording, and a simple YouTube search confirms how alarming they sound. The article also mentioned a person who called his CEO back after getting weird calls from criminals pretending to be him; if the employee had simply believed the criminals were the CEO, his company would have lost thousands of dollars. To keep this from becoming a much bigger problem, companies need to start working on technology that can detect when an AI-generated voice is on the other end of a phone call. Such technology would prevent companies from losing large sums of money based on a real-sounding phone call.
    This new AI technology is a threat to everyone, since criminals can replicate anyone’s voice as long as they have about a minute of it recorded. I am sure criminals will try to get money from whomever they can using this technology, and they will probably aim to steal from the more naïve. As the article mentions, people who are not aware of the technology, or who are in a state of chaos, are more willing to hand over money, so those are the groups criminals will most likely target. One way to prevent this from becoming a massive problem is to educate people about the technology and, if they are not sure about a person asking for money over the phone, to have them confirm the request in person. With the expansion of technology and the improvements being made to AI, I am sure it will get much harder to tell whether the person on the phone is a computer or a real person. I am interested to see how this technology develops in the future and what is created to keep it from becoming a major issue.

  5. Nicholas Hicks September 13, 2019 at 5:32 pm #

    The technologies discussed in this article would have seemed like science fiction a few years ago but are now obviously very real. The possibility that criminals could use programs produced by major tech companies to reproduce my or my loved ones’ voices well enough to impersonate our identities is extremely alarming. While reading this I was reminded of a previous article I posted about on this blog, which discussed how tech companies record an obscene amount of personal information and activity from our public accounts on the internet, and I realize this could lead to people’s voices and speech being used by major companies like Google to fine-tune software that could one day mimic our voices. It is concerning that there is no legislation to restrict the manner in which these programs are trained, for instance by allowing users to opt out of aiding their development in a clearly presented way. This situation, with vocal AI trained by unknowing consumers, is extremely similar to the way companies train image-recognition programs. If you have ever had to complete a CAPTCHA to access a site, where you must ‘click the pictures with bikes’ or something along those lines, you have almost certainly been aiding in the training of image-recognition software. In my opinion, these methods of sneakily training AI programs, which could potentially end up being used for nefarious purposes, are a disappointment for our legislators, who have failed to regulate these practices in any way.

  6. Walter Dingwall September 13, 2019 at 7:08 pm #

    In many things, it may be said that the ability to do something does not necessarily make the action ethical or reasonable. For instance, in the years leading up to the Second World War, German scientists were making discoveries in nuclear fission, some of the basic mechanics used to create the nuclear bomb. To be clear, Germany was on the threshold of delivering the greatest demonstration of mass destruction humanity would ever have seen, and with no redeeming reasons to support it.
    The modern tech world does not stray from treacherous and devastating potentialities as new advancements extend humanity’s reach. The growth of the internet, and the personal information constantly being delivered to websites, is dangerous enough in theory. With that information in circulation, there is far more opportunity for personal data breaches and identity theft. For this reason, there must be regulations and company policies ensuring users of data-collecting sites either that their data is safe, or that there are terms they can invoke if a problem arises involving the release or theft of that data.
    Now, with the rise of “deepfakes,” via digital facial modification or audio simulation and manipulation of a person’s voice, more and more kinds of personal data can be used for malevolent ends. In the case of the unnamed company in Drew Harwell’s article, “voice-mimicking” tech was used to scam the company into delivering a great sum of money to a fraudulent caller posing as a person of importance (a boss, a CEO).
    The invention of the telephone and the spread of telephone lines have always allowed scams to be carried out over the phone against unsuspecting people, soon to be exploited. That did not cut the production and use of phones; it was just a matter of time before people caught on to scam calls and agencies found the motivation to crack down on these types of callers.
    With the relatively new concept of voice-mimicking technology, it could be questioned that there is no reason for it, due to the crimes which can appear and the seemingly negligible uses for the tech. Well, the ones developing the technology, and those who are to make money on it, know that this risk in the short-term should be well surpassed by the benefits later.
    There are people who cannot speak who would love to talk like everyone else. Wouldn’t it have been great to hear Stephen Hawking with a regular voice? Wouldn’t that improve approachability and connection? Many systems already use automated computer voices, like those in the mall or at the bank. Wouldn’t it make sense to lessen the noticeable presence of artificiality and cold automation? This is what is believed among those in control of advancing this type of technology.
    Humans love to be more comfortable. Most things they do serve that goal somewhere on the road map. Even stressful and difficult things, the hours of schoolwork or the on-foot mountainous excursions, lead to well-paying jobs or a sense of accomplishment and confidence. More comfortable, more comfortable: that is humanity’s mantra.
    The first and second times the nuclear bomb was used outside of a test were also the last. Why? The world worked out regulations and has stayed in agreement not to use this type of mass destruction again, however close it may have come since WWII. The same will happen with voice-mimicking technology, as it has with so many potentially dangerous technologies. The benefits will surpass the dangers.

  7. Mia Ferrante September 13, 2019 at 7:12 pm #

    It was only a matter of time before a scam this big took place, and I’m shocked it didn’t happen sooner. I knew that phone scammers are common in the United States, but I was not aware of how active they are in countries like the United Kingdom and Hungary. Still, this is not shocking to me. Criminals will take any measures to scam people into giving away money or personal information such as card numbers, Social Security numbers, or insurance details. People have to be really careful about the information they give out over the phone. The number of calls I receive daily from automated voice systems claiming someone has stolen my identity or my Social Security number is staggering. They all sound similar, and it is likely the kind of voice-mimicking software discussed in the article that they are using. Personally, since I know phone scams are common, as soon as I receive a suspicious call I either don’t answer or hang up the moment I hear the voice. It can be difficult, because some of the programs used to scam spoof phone numbers similar or identical to those where you live. So when I receive a call with my own area code, I always answer in case it’s family or someone I know, but most of the time it ends up being a scam. According to Robokiller.com, 34.8 billion illegal spam calls flood United States phone lines year after year. That number is insanely high, and the number of scams that take place over the phone among everyday people is unfathomable. Especially now that AI can replicate any voice captured over the phone given a minute or so of conversation, everyone is at risk. I think it is crucial for businesses and even everyday people to have software that protects them against phone scammers.
Even if these scammers sound like someone who might legitimately be calling about a bank account or Social Security number, people have no reason to suspect criminal activity, so they give out their personal information. By simply giving out a few numbers, one’s life can change dramatically through a scam, identity theft, or fraud. I am hopeful that this topic will get more attention and that new technology will come out to stop phone scammers from attacking innocent people.

  8. Javier Tovar September 13, 2019 at 8:32 pm #

    Imagine picking up the phone to answer a call from a loved one, just to find out they never called you in the first place. The caller, pretending to be your loved one, told you to meet them at a certain location with money in your wallet. This article is very concerning to me. When first thinking of technology as a threat, most people think of AI taking over and eliminating humans, like in the Terminator films. But after reading this article, people can see that terminators are the least of our worries. With the invention of this new voice-mimicking software, no one is safe from being deceived by criminals. When I think of criminals, I think of robbers on the street holding people against their will until they give up money or other valuables. In this new age of technology, a whole new wave of criminals is entering the world like we’ve never seen before. Just thinking about the technological tools criminals now have for their activities gives me a very bad feeling.
    It is very shocking that thieves were able to steal $240,000 from a very large company. Just imagine what thieves could get away with when they begin to target the average person. Many unnerving possibilities come to mind. For example, they could use this voice-mimicking technology to lure people to certain locations: a criminal could pretend to be a loved one and tell you to meet them anywhere they choose, and when you get there, they could kidnap you, steal from you, or simply assault you. I would be scared if I knew someone who disliked me had this type of software and could use it against me. This software can also be used to get information out of people, which can expose them in many ways.
    Anything you can think of can be done against you using this technology. Every time you pick up the phone, you will have to question whether the person you are talking to is real; even if it’s your mother asking what you want for dinner on Sunday.

  9. Dominic Caraballo September 15, 2019 at 5:00 pm #

    As modern technology progresses for the greater good of society, there will always be people who exploit it for their own benefit. That is exactly what happened here, with artificial-intelligence software used to copy the sound of someone’s voice and steal money without lifting a finger. The premise of this article sets the same tone as Charlotte Stanton, the director of the Silicon Valley office of the Carnegie Endowment for International Peace, who states, “Researchers need to be more cautious as they release technology as powerful as voice-synthesis technology, because clearly it’s at a point where it can be misused.” (Harwell) My response is that it is unfair to pin the actions of the criminals on the companies that developed the software, because the companies’ intentions in creating it seem pure. For example, according to Drew Harwell of The Washington Post, from whose article this blog post comes, beneficial uses of software like this include the potential to “help humanize automated phone systems and help mute people speak again.” (Harwell) The takeaway is that we shouldn’t let criminals diminish the groundbreaking discoveries being made to improve daily life. Decades ago, artificial intelligence was only a thing of science fiction, and now it’s a reality. I understand the concern about criminals using modern technology in negative ways and how it poses a serious threat to society (and I’d be lying if I said I had no concerns myself), but are we going to let criminals dictate the way the world operates?

    Criminals will use every means necessary to get what they want, with no regard for the consequences, meaning that whatever laws are put in place to prevent wrongdoing mean nothing to them. Therefore, questioning whether AI like this is good or bad seems, in my opinion, futile; what’s done is done. The important questions now are how to further educate people and how to create and improve protections against these kinds of actions for the future. Companies may have to expend greater resources improving their security and training employees to face potential threats. In an article on Entrepreneur.com, Alniz Popat writes, “Employees are the most common cause of data breaches as many don’t recognize external threats when they occur or have a good understanding of the daily actions that leave a company vulnerable to a cyber attack. For example, the UK Cyber Security Breaches Survey 2018, carried out by the UK government and Portsmouth University found that 43% of UK businesses have experienced a cyber security breach or attack over the last 12 months, with only 20% of UK companies offering training to staff within the same time frame.” (https://www.entrepreneur.com/article/316886) One thing that is clear is that technology is growing at an exponential rate, and unfortunate incidents like this are far from over. We must learn from them and keep improving the systems already in place, all while educating everyone on the potential risks.

  10. Sean Distelcamp September 19, 2019 at 9:49 pm #

    This is like a sci-fi gadget straight out of a Mission: Impossible movie, except instead of Tom Cruise saving the world it’s just internet scammers messing with executives. A simple way for businesses to avoid incidents like this in the future is to have a spoken password that must be given in order to authorize larger transactions. However, as this technology becomes more and more accessible, more problems could surface. As large organizations and businesses become better informed about voice-replicating software, they can take the necessary precautions to protect themselves, but ordinary individuals may be at risk of being duped by a fake voice. Lyrebird, the software mentioned in the article, claims to need only 60 seconds of an original voice in order to create a copy. If this software became widely available, it would be extremely easy to go on someone’s Instagram account, copy their voice, then start calling their family and friends for money. Simpler still: copy a local politician’s voice and start calling everyone in the area asking for donations during an election. Older people are already at risk of internet and phone scams, but deep-faked voices could probably fool anyone who is not careful. The article also mentions how this could affect political campaigns and public figures. It already feels like a full-time job to stay up to date on current events in politics while finding reliable sources and filtering out fake news. Having exact replicas of voices saying anything and everything would only make it harder for the average person to figure out what is really going on. I also do not envy the celebrities who will have to constantly hear robot copies of themselves saying whatever sick stuff the internet can come up with.
Hopefully public knowledge of this new software, as well as ways to detect fakes, can outpace how quickly deepfakes are perfected and made available to people with bad intentions. The one thing that gives me a little optimism is that this reminds me of photo manipulation, but for your voice. Photoshop makes it harder to see whether a model really has perfect skin, but other than that it hasn’t completely destroyed our society. On things that matter, it is usually easy to notice when a photo has been manipulated, so hopefully voice manipulation ends up handled in a similar way.
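One way to harden the spoken-password idea raised above: a static password could itself be recorded and replayed by a thief, so a sketch like the following, using an entirely hypothetical protocol with made-up names and a placeholder secret, issues a fresh random challenge per transaction and accepts only a response computed from a secret shared offline. A cloned voice alone cannot produce the response.

```python
import hashlib
import hmac
import secrets

# Assumption: the secret is exchanged in person, never spoken over the phone.
SECRET = b"shared-secret-set-up-in-person"


def make_challenge() -> str:
    # Fresh random nonce per request, so an old recorded response
    # cannot be replayed for a new transaction.
    return secrets.token_hex(8)


def response_for(challenge: str, secret: bytes = SECRET) -> str:
    # Keyed hash of the challenge; only a holder of the secret can compute it.
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]


def verify(challenge: str, given_response: str, secret: bytes = SECRET) -> bool:
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(response_for(challenge, secret), given_response)


challenge = make_challenge()
assert verify(challenge, response_for(challenge))        # legitimate caller
assert not verify(challenge, "not-a-valid-response")     # imposter with a cloned voice
```

This is only an illustration of the design principle: tie authorization to something a voice recording cannot reproduce, rather than to how convincing the voice sounds.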

  11. Stephen Hoffman September 20, 2019 at 2:50 pm #

    The issue of voice mimicking is yet another example of technology operating and evolving faster than the law can keep pace with. This technique presents a brand-new threat, as public figures or the average person could be coaxed into speaking just long enough to expose their voice. The theft mentioned in this article is just an early example of this type of hack. In the future, it will undoubtedly become more common, as hackers and thieves develop more sophisticated techniques to achieve their goals. The strategy is dangerous because it is difficult for an average person to detect when they believe they are getting a phone call from their boss or a loved one. People are unlikely to mistrust a person’s real voice, and may follow through even on a bizarre request. This is increasingly problematic, as most people are unaware of the technological environment surrounding theft and crime, and of how many of these crimes are evolving to incorporate predatory uses of technology.
    This article reminds me of another story, which detailed how hackers posing as a polling center asked a random person questions over the phone, working through frequently used security questions or eliciting different sounds and letters of the alphabet to build an accurate profile of the person’s voice. The recording would then be used, in conjunction with other hacking efforts, to call banks or places of employment and divert funds to a private account or source, often untraceable from there. This preys on people who pay little attention to such risks, making them susceptible to threats like this. Now we are told not to speak to these callers at all, out of fear that it would open us up to this type of theft. That was never the case in prior generations, when people would pick up the phone and speak to almost anyone without much thought. In the technological environment we live in today, that kind of unawareness could prove costly, as it could lead to an individual’s funds being drained. This is again an area where the law must catch up to technological innovation. If there is no regulation and control of this type of behavior, it will likely run rampant: criminals are getting smarter, but the laws are not changing to reflect their new behavior.

  12. Joe Antonucci September 20, 2019 at 7:30 pm #

    The advancement of technology has opened up new opportunities and ways to improve everyone’s lives. But as the adage goes, too much of a good thing is not a good thing. Technology is a mere tool, only as good as the person wielding it. By and large we feel the positive effects of these advancements, but inevitably there are people who will use these tools for evil and they will succeed in doing so.

    This instance of theft is very different from how we might have pictured a theft taking place a few years ago. There’s no one with a gun making physical threats, and no clear bad guy who intends to take your possessions. With the ability to replicate a person’s voice and appearance digitally, a criminal can simply “ask” for money, or some other favor, and the victim won’t even know what’s happening. To make matters worse, people committing these crimes know how to cover up their tracks and remain completely anonymous.

    It reminds me of this funny little tool I have that allows me to call anyone, and I can have the caller ID appear as any number I want. I used this for many antics, including prank calling my boss, or prank calling my friends from their friends’ numbers, or trying to place an order for a pepperoni pizza at a Home Depot in Tennessee.

    Obviously I was just messing around, but it never occurred to me that this “tool” could be used to do a lot of bad things, beyond merely annoying people. My brother, who was less ethical in his use of it, found his religion teacher’s phone number online and called her with the number “666-666-6666.” That may still seem like harmless joking to some, but it helped me consider the ramifications of such a program in the hands of someone who wasn’t so concerned with that “morality” thing.

    As I stated earlier, technology is a tool just like guns or cars and can be used for both good and evil. With instances like this, I would argue that the way to defend against the inappropriate use of “neutral tools” is the just use of those same tools. For example, a firearm is a dangerous tool, a weapon that is capable of doing a lot of damage. If you have foreknowledge that someone is coming to your house with a gun to kill you, the logical defense is to also arm yourself and better your odds.

    Applying this logic to our issue with high-tech theft, law enforcement and cybersecurity professionals should adapt and create tools that can prevent and repel attacks like this; they should fight fire with fire.

  13. Alexander Nowik September 20, 2019 at 8:28 pm #

    As our ability to help people through technology grows, so does our ability to harm. Deepfakes are, to me, the scariest advancement in terms of social impact. Being able to duplicate a person’s voice means more than just cybercrime. We already see a “shoot first, ask questions later” mentality when it comes to public figures and possible wrongdoing. A future in which politicians or corporations use deepfaked voices to sabotage their competition seems imminent.

    But what can we even do about it? As the company Lyrebird stated in the article, “Imagine that we had decided not to release this technology at all. Others would develop it and who knows if their intentions would be as sincere as ours.” The technology is already in place and being used by criminals as well as others. This is not a problem that can be solved merely by government regulation or prohibition. The other solution, then, is to find better ways to protect ourselves and our companies from being affected. I do think this issue will force high-profile companies to put even more security guidelines in place so that no single person in the company can be scammed. As a society, I am not sure we are prepared for a wave of malicious voice deepfakes targeting public figures. On the flip side, this threatens to trivialize any video or audio evidence of individuals actually complicit in wrongdoing. I would not be surprised to see changes in what types of evidence remain usable in court (the court of public opinion, however, is a different matter).

    On a more positive note, the same advancing voice and video technology may eventually yield a solution to this growing problem, and the looming threat may turn out to be less serious than it now appears.

  14. Shamar Kipp September 23, 2019 at 2:15 pm #

    After reading the article and identifying its vital points, I am not surprised that this technology exists and that it is being used to the detriment of businesses and ordinary people. The rapid progression of technology is outpacing the laws and regulations needed to control it and protect the innocent.

    Technology is a great tool that moves our world in the right direction. Whether it is used for good or bad is up to the person using it. Lyrebird, a tech company that creates AI voice-mimicking technology similar to what was reportedly used to wire a large sum of money to a criminal account, states: “Imagine that we had decided not to release this technology at all. Others would develop it and who knows if their intentions would be as sincere as ours.” The problem with the rapid advancement of technology is that people try to suppress it rather than accept that it exists. There are people and places that genuinely need these kinds of technologies. The solution lies in adding more barriers to communication between people. The fact that a criminal can call an executive’s subordinate and essentially “ask” for money is absurd. The problem does not lie in the technology or in whether it should be created or destroyed. We as a society need to embrace the technology and find ways to play defense against it when it ends up in the wrong hands.

  15. Jess N September 24, 2019 at 1:14 pm #

    One of the most difficult parts of dealing with innovative technology is that the law often lags behind the innovations. That seems to be the case in this article as well. While most people obviously know that this kind of criminal activity is wrong and that this abuse of artificial vocal software is bad, the technology is so new that there is no clear legislation regulating its use and availability. This often leads more people to use it for criminal purposes and leaves victims with little or no recourse, since there is no law to rely on. The same has happened with many innovations; social media is the most prominent example and is still an ongoing struggle, as it has innovated rapidly, especially in the fields of data mining and analytics. It is also happening today with genetic information. So much can be learned about a person from their DNA, and there are few legal protections governing how their DNA, and what people think it says about them, can be used, especially when it comes to health insurance. These are just a couple of the areas where legislation has lagged behind innovation, giving people opportunities to abuse that information and those applications.

    I have a feeling that in the near future we will be reading about more and more cases of artificial vocal manipulation, especially as the technology becomes more lifelike and easily accessible. I hope this grabs the attention of lawmakers before it is used for something bad on a large scale. It foreshadows a future in which we as people and professionals cannot trust communicating with one another on a digital platform, and it will be interesting to see how we will have to evolve to combat that. What is most important is that legislators recognize the issues new innovation brings early, so they can write laws that protect these technologies from abuse.

  16. Andrew F September 27, 2019 at 9:59 pm #

    This article raises a lot of concerns about the rapid growth of technology and everything that could come of it. It is scary to think what else people could be doing with technology. This “voice-synthesis” software is so convincing and advanced that if a situation like this happens again, it will be very hard to tell the difference between a fake and an actual person. The way our world is changing, especially online, you have to verify your identity so many times it almost seems unnecessary, but cases like this prove why it is so important: stealing identities is a major problem and can result in thousands of dollars in damages.

    Even though this situation seems straight out of a movie and could lead to further AI-voice crimes, we need better security and verification when something fishy like this happens. This could have been prevented if even a video chat had been used to see the person, or by holding an in-person meeting. When large sums of money are transferred, sums that could really hurt a business, there need to be multiple forms of confirmation instead of a single phone call. A massive amount of work seems to have gone into pulling off this crime, and I hope the people behind it are punished severely, and that this case sets a precedent with consequences severe enough to scare others away from trying the same thing.

  17. Danielle Blanco September 27, 2019 at 10:26 pm #

    Businesses these days are finding ways to become more efficient. A common trend among businesses is the use of artificial intelligence. In the case of the British company in the article, in what is being called the first artificial-intelligence heist, an outside actor used voice-mimicking software to match the tone of a company executive and request a transfer of hundreds of thousands of dollars to an outside bank account. Even though the request sounded suspicious, the employee said the voice sounded exactly like his boss. This raises both a privacy issue and a trust issue.
    When it comes to privacy, outside actors could manipulate the voice systems used by businesses to gain access to confidential parts of the business. There could be illegal transactions that hurt the business’s profitability. Outside actors could also gain access to employees’ private information, such as Social Security numbers, which could lead to identity theft. Beyond privacy, controversies like this could lead investors to lose trust in the company, and investors who lose trust stop investing. Businesses need to realize that even though efficiency is vital to competing in their industry, technology is not perfect. There is still a chance of fraud, theft, and mishaps. Most businesses are transforming and adopting technology, but as businesses become more technologically advanced, hackers and thieves also find new, advanced ways to manipulate the systems and gain access to a company’s records and money.
    I believe that at this time no system in place is completely foolproof. The best option a company has is internal controls. The British company in this blog post, for example, could have had a set of controls to prevent this: even though a voice manipulator was used to request the transfer, a second verification step could have checked whether the transaction was valid. Before artificial intelligence became widespread, it was only a thought. At this moment a perfect system seems unimaginable, but as time goes on, it could become possible.
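    The "second verification" control suggested above can be sketched in code. This is purely my own illustration, not any real company's system: it assumes a rule where large wire transfers require sign-off from two distinct employees, so a single spoofed phone call is never enough on its own.

```python
from dataclasses import dataclass, field

# Illustrative threshold: transfers above this need a second approver.
APPROVAL_THRESHOLD = 10_000


@dataclass
class WireTransfer:
    requester: str
    amount: float
    destination: str
    approvals: set = field(default_factory=set)

    def approve(self, employee: str) -> None:
        # A requester cannot approve their own transfer.
        if employee != self.requester:
            self.approvals.add(employee)

    def is_authorized(self) -> bool:
        # Small transfers need one approval; large ones need two
        # distinct approvers, so one convincing voice is not enough.
        required = 2 if self.amount > APPROVAL_THRESHOLD else 1
        return len(self.approvals) >= required


transfer = WireTransfer("johannes", 240_000, "HU-account")
transfer.approve("johannes")        # ignored: self-approval
transfer.approve("director")
print(transfer.is_authorized())     # False: a second approver is still needed
transfer.approve("cfo")
print(transfer.is_authorized())     # True
```

    Had a dual-approval rule like this been in force, the phone call in the article would only have started the process, and a second employee would have had a chance to question the "rather strange" request before any money moved.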

  18. Mai Le November 4, 2019 at 1:08 am #

    The development of the artificial-intelligence technology behind deepfakes poses a great challenge for the facial- and voice-recognition software that has been used for security purposes for years. The incident in this article highlights the fact that deepfakes have advanced to the point that they can trick even some of the most prudent individuals. It also shows, once again, how regulations have a hard time catching up to fast-changing technological advances, since anyone can get their hands on this mimicking software. Even the software developed by Lyrebird, the company mentioned in the article, is offered for free with a full range of audio-editing features, set apart from the paid subscription version only by minor features such as compatible file exports to other software. The technology has also been made readily available through phone apps. An example is the viral Chinese app Zao, where users can swap their face onto a character in a movie in a matter of seconds.

    While reading this article, I was immediately reminded of the deepfake video of Mark Zuckerberg, CEO of Facebook, that went viral in June. That video, which was only an art installation, clearly proved to the general public how easy it is to depict an important public figure performing behavior they have never engaged in. The incident reported in this article showcases the detrimental side of artificial intelligence when it is used for insidious purposes. As policymakers are still figuring out how to regulate this threat, companies must recognize how it is relevant to their business operations and reevaluate their current internal controls.

    Sources:
    https://www.washingtonpost.com/technology/2019/09/03/viral-chinese-app-zao-replaces-your-face-with-leonardo-dicaprios-deepfake-videos/?tid=lk_interstitial_manual_13
    https://www.ft.com/content/4bf4277c-f527-11e9-a79c-bc9acae3b654
    https://www.descript.com/pricing

  19. Jackson Beltrandi November 6, 2019 at 7:10 pm #

    We have reached the point where technology has passed its “good-faith peak,” a term I just created to mean the highest point at which technology aids human beings without taking over. In my opinion, the future of automation and digital transformation is dangerous for the United States. The heist aside, technology can be programmed to do essentially any low-skill job, replacing millions of workers.

    The dangers of technology are on display in this article. These thieves did not need a ski mask or a gun to get nearly a quarter-million dollars wire-transferred to them. It is no surprise that Google is involved in this space. No, Sundar Pichai did not hold up a bank, but his company helped develop realistic voice cloning, which is fairly easy to understand: synthetic audio is created by analyzing features of a recorded voice, such as its frequencies and volume, so that the voice can be copied and manipulated. Honestly, who thought this would be a good idea? At first I had no idea what advantages could come from being able to mimic someone’s voice, and events like using an executive’s voice to wire-transfer a quarter-million dollars are going to be a direct result of this software. As I was writing this, though, I reached the part of the article explaining that the software can help mute people speak again. Going back on what I said, the creators of this software should limit the product to those who need it and will use it for a positive purpose. While voice software may be hard to regulate, it must be regulated so that more companies do not get swindled by tech-savvy criminals.

    Another reason this software makes me skeptical is that Google is collecting data on our voices. Google already touches every other aspect of our lives: they know what we like, where we live, how old we are, and nearly everything else about us. Now that they can process data on our voices, they can make clones of us. What’s next, fun games involving our footprints?
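    As a toy illustration of the kind of acoustic analysis described above, and emphatically not how any real voice-cloning system works, one of the simplest features to extract from a voice recording is its dominant frequency, which can be estimated with a naive discrete Fourier transform:

```python
import cmath
import math


def dominant_frequency(samples, sample_rate):
    """Estimate the strongest frequency in a signal via a naive DFT.

    A toy sketch of acoustic feature extraction; real voice-cloning
    models analyze far richer features (spectrograms, pitch contours,
    timbre) with learned neural networks, not a single frequency peak.
    """
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC; ignore the mirrored half
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_bin, best_mag = k, abs(coeff)
    return best_bin * sample_rate / n


# A pure 440 Hz tone sampled at 8 kHz stands in for a voice recording.
rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(200)]
print(dominant_frequency(tone, rate))  # 440.0
```

    Even this crude sketch shows why a short sample is enough to start profiling a voice, which is part of what makes the "speak just long enough" attacks discussed in these comments plausible.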

  20. Jake Malek November 7, 2019 at 6:35 pm #

    This article was striking because we get to read the perspective of Lyrebird while also seeing the negative ramifications this voice-mimicking AI could have. A person’s voice is part of their identity, and AI that can so closely mimic a person’s voice is a tremendous step toward humanizing AI. Lyrebird, the voice-mimicking company mentioned in the article, understands the repercussions that can result from the evolution of this technology: the ethics section of its website acknowledges that “with great innovation comes great responsibility.” Reading the article, I could not help but conclude that this technology will have more negative repercussions than positive ones. The positive applications offered were “helping humanize automated phone systems and helping mute people speak again,” while the list of negative applications is limited only by criminals’ imaginations. This market’s unregulated growth has already resulted in at least three cases of executives’ voices being mimicked to swindle companies, one totaling over $1 million.

    While we can debate whether the technology is helpful to society, along with the negative impacts that come with public access to it, the one inevitable fact is that this technology is the new reality. Technological development is a process that cannot be held back. As Lyrebird says of its releases, the technology “will help acclimate people to the new reality of a fast-improving and ‘inevitable’ technology so that society can adapt.” Society must adapt, because if Lyrebird’s AI does not meet market expectations, another company will develop a similar product.

    In the middle of the article, Harwell mentions Google Duplex and its capabilities in a space similar to Lyrebird’s. Google Duplex is a service being developed for Google Assistant that uses AI voice recognition, decision making, and human voice mimicking to make reservations for the user. It is a great example of how multiple companies are already working on similar technologies, with each finding different applications for the software. I think voice mimicking truly is the first step toward humanizing AI.

    Lyrebird Website: https://www.descript.com/lyrebird-ai?source=lyrebird
    Google Duplex video: https://www.youtube.com/watch?v=D5VN56jQMWM
