Researchers Want Guardrails to Help Prevent Bias in AI

from Wired

Artificial intelligence has given us algorithms capable of recognizing faces, diagnosing disease, and of course, crushing computer games. But even the smartest algorithms can sometimes behave in unexpected and unwanted ways—for example, picking up gender bias from the text or images they are fed.

A new framework for building AI programs suggests a way to prevent aberrant behavior in machine learning by specifying guardrails in the code from the outset. It aims to be particularly useful for nonexperts deploying AI, an increasingly common occurrence as the technology moves out of research labs and into the real world.

The approach is one of several proposed in recent years for curbing the worst tendencies of AI programs. Such safeguards could prove vital as AI is used in more critical situations, and as people become suspicious of AI systems that perpetuate bias or cause accidents.
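To make the idea of "specifying guardrails in the code from the outset" concrete, here is a minimal, hypothetical sketch in Python. Everything below — the function names, the data shape, and the 0.1 threshold — is invented for illustration and is not the framework's actual API: training only returns a model if a safety check passes on held-out data, and otherwise returns nothing rather than a silently misbehaving model.

```python
def train_candidate(data):
    # Stand-in for real training (ignores data): score applicants
    # purely by years of experience.
    return lambda applicant: applicant["experience"] >= 3

def passes_safety_test(model, holdout, max_gap=0.1):
    # Guardrail: selection rates for the two groups may differ
    # by at most max_gap on the held-out data.
    rates = {}
    for group in ("A", "B"):
        members = [a for a in holdout if a["group"] == group]
        rates[group] = sum(model(a) for a in members) / len(members)
    return abs(rates["A"] - rates["B"]) <= max_gap

def train_with_guardrail(data, holdout):
    # Refuse to return any model that cannot be shown to
    # satisfy the constraint.
    model = train_candidate(data)
    return model if passes_safety_test(model, holdout) else None
```

The point of the pattern is the failure mode: if the constraint cannot be verified, the caller gets nothing back ("no solution found") instead of a model that quietly discriminates.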

More here.

Posted in Ideas, Innovation, Technology.

7 Comments

  1. With artificial intelligence improving by the day, one has to wonder whether we are getting a little too far ahead of ourselves as a society. Although AI is convenient and makes our lives significantly easier than at any point in Earth’s history, it is important to remember its failures.

    One of the main ways AI often fails is when it uses data and algorithms to compute and make decisions. For example, the article touches on Amazon’s scandal surrounding AI: “Amazon was forced to ditch a hiring algorithm that was found to be gender biased” (Knight 6). In the Amazon scandal, the company’s AI was found to be denying women interview opportunities simply for being female. This is not the first time an AI has gone astray like this. The article also mentions that “Google was left red-faced after the autocomplete algorithm for its search bar was found to produce racial and sexual slurs” (Knight 6); in this case, the AI used accumulated data to predict what you were going to search, but because of the inappropriate suggestions the search bar produced, Google received significant backlash. Sometimes companies are even affected by AI hiccups that did not actually happen. Apple found itself in this pickle when its Apple Card was called into question over claims that “the algorithm behind its credit card offers much lower credit limits to women than men of the same financial means” (Knight 3). Although the claim turned out not to be true, Apple still received backlash that ultimately hurt it, which is ironic since AI is supposed to make our lives easier. This method of capping AI mistakes makes sense because, from a logical perspective, AIs are little more than equations using statistics to make predictions; in statistics there are always outliers, and those outliers translate into outlier predictions. Instead of an outlier merely resulting in someone being denied an Apple Card one day, it could be an automated car crashing, so it is important that guardrails be put in place.

  2. Artificial intelligence is one of the most fascinating things in the world. Engineers are essentially building a new level of consciousness. These robots surpass other, non-AI robots because of their ability to adapt to the world around them. Like humans, they learn by doing. As such, not only do they become smarter as they work, they also begin to pick up on patterns the way humans do. That is basically the whole point of why people want AI: a system that can pick up on patterns in things that humans can’t always see very well. In the medical field, this ability can result in faster and more accurate diagnoses. However, AI systems also pick up on unsavory patterns, just as humans do.
    At the heart of it, AI systems are learning from us. The ability to store big data in the cloud has allowed us to build AI at an exponential rate. They learn to adapt from our processes; they are literally meant to replicate the human mind in the best way we know how. Like us, AI systems are not perfect and can hold biases and hurtful standards. These beliefs may not be violent, but they can still result in oppression.
    In a video posted in July of 2018, Michael Reeves, YouTuber and robotics engineer, turned a Tickle Me Elmo into a race-detecting machine. For this project he wrote a very simple algorithm in which the system learned from pictures and information fed to it, like any typical AI. He then put a camera into the system, had a friend record himself saying certain things in Elmo’s voice, and hooked the systems together. The camera captured an image, quickly compared the face to what it had in its system, and responded with various racist terms and slurs. Reeves took the liberty of pointing the Elmo at himself, showing that it could detect his Asian heritage and play back the recording he had set up to respond to such images. This project goes to show just how easy it is for people building AI to use it in ways that can harm others. Rather than an Elmo doll, an employer may implement an AI system that discriminates against certain applicants and/or employees.
    All this being said, we should not shy away from AI. AI is incredibly helpful in the medical field, among other things; it is world-changing. What we need, as this article suggests, is a standard. AI has only recently begun to enter the world, at least in terms of how fast legislation works. Now there are people who go to law school, become technology lawyers, and argue cases to protect against discriminatory practices. Just like us, these machines will learn things we may not want them to learn. That is the downside of having a system that is meant to learn and live on its own. If a system is going to process all of the news articles on a specific subject, it will learn things that may be against the beliefs of those who built it, and may include racist or prejudiced ideas. As a result, we must limit what AI can do. As the article suggested when discussing an automated insulin pump, we must set certain limits and barriers on the AI we build.

  3. Artificial intelligence has come to be something extremely helpful when it is used right. When AI can complete complex computations that no human could, it makes one think the technology’s success so far is just the tip of the iceberg. Because AI is always refining an algorithm, you can be fairly confident that the more a system runs on a particular question, the more accurate its answer will be. Those same algorithms, however, are also what is holding AI back. As people test AI’s abilities, it is becoming evident that there are problems with using these algorithms to make predictions in an open environment. In open environments, questions like bias come into play, and bias is incredibly complicated to reduce to numbers. Even a tech giant like Apple tried to use AI to offer credit, and the algorithm was found to offer lower credit limits to women than to men. The more AI is used in the world, the more bias we see, and the more problems with using it. Amazon also fell victim to biased AI: it attempted to create an algorithm that could choose which applicants to offer jobs to, but after letting it run, Amazon ran into a big problem, as the algorithm’s choices developed a gender bias.

    In using something as powerful as AI, it is essential to remember that there have to be limits on what can be done, to ensure everything is fair for all. Nevertheless, this seemingly simple addition to an algorithm is proving extremely complicated. Expressing fairness in a purely mathematical form is a nearly impossible task; explaining fairness in life is complicated enough as it is. To understand what this means, one must first recognize that we are talking specifically about deep learning algorithms. A software engineer writes lines of code as a set of limits, essentially the system’s rules; the AI is then left to run, and the answers it produces are based purely on math and on what the AI determines to be the best outcome after running the scenario thousands of times per second. One of the most prominent examples of this kind of deep learning AI is Siri, and even with how well it works, the people who set up its voice recognition were mostly upper-middle-class white men, which makes it harder for anyone who is not white to be understood by Siri.

    AI may be gathering information exceptionally quickly and getting it mostly right. The major problem is the creator: user error is why these problems with AI ultimately come to light. The bias is not intentional, and it can almost never be avoided when writing an algorithm, because no software writer can anticipate the millions of different ways an AI can take its instructions. When the AI does take its directions to a place of bias, the only ones able to tell are people. This means it often takes time for the bias even to be found. People are the only way for AI to be fact-checked, and the amount of information AI is capable of producing makes it almost impossible to keep up. Certain mistakes are bound to slip by, and the biases that AI holds are bound to stay for the near future.

    Other Links Used:
    https://www.forbes.com/sites/tomtaulli/2019/08/04/bias-the-silent-killer-of-ai-artificial-intelligence/#5e7df7207d87

    https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/
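The difficulty the comment above describes — "explaining fairness in a purely mathematical form" — can be made concrete with one common, admittedly narrow formalization: demographic parity, which compares selection rates across groups. Here is a small sketch in Python (the group labels and decision data are made up for illustration):

```python
def selection_rates(decisions):
    # decisions: list of (group, was_selected) pairs.
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(ok)
    # Fraction of each group that was selected.
    return {g: selected[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    # Largest difference in selection rate between any two groups.
    rates = selection_rates(decisions).values()
    return max(rates) - min(rates)
```

An audit of a hiring algorithm like Amazon’s could flag the model whenever this gap exceeds some tolerance. But demographic parity captures only one narrow notion of fairness, and satisfying it can conflict with other definitions, which is part of why the problem resists a complete mathematical treatment.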

  4. This article was an interesting read, mainly because of the importance of artificial intelligence in today’s society. AI has the power to replace jobs that many humans do today. The use of AI in the medical field has been very important as well: robots are being used to perform critical surgeries, as they can be more precise than human hands. This article was intriguing because of the algorithms and even the biases behind them.
    The biases based on gender are very scary, particularly because of the implications they can have for corporate business, such as ads based on gender facial recognition on Apple iPhones. These algorithms showed how unreliable they can be when they incorrectly predicted GPA based on whether a face was recognized as male or female. This is important to understand because of how necessary technology and artificial intelligence have become. These issues need to be prioritized, because creating even a little controversy, especially involving sexism, is unacceptable. The producers could face severe backlash from consumers if these issues continue to arise; if consumers cannot put faith in the technology, business will suffer.

  6. The title of this article caught my attention because the idea of an AI having bias seemed a bit strange to me. When I think about bias, I think of emotions and previous experiences that shape an opinion; when I think about AI, emotions and previous experiences are not what come to mind. Without ever having given it much thought, I assumed AI was a pretty black-and-white thing. I did not consider that the machine learning process could in fact create bias in an AI. However, it makes sense that machine learning can be biased: if an AI is fed biased information, then the machine will obviously become biased itself. A lot of people think that AI is a futuristic, free-thinking machine, but in reality it is just making decisions based on algorithms fed by humans.

    I think an effort should be made to keep AI unbiased. AI is the future for a number of reasons. Boiled down, AI’s purpose is to make things faster and more efficient for humans. With AI we could reach places humans could never reach before, but as we advance into the future we want to leave things like bias behind. An unbiased AI would be unequivocally more valuable than a biased one, because being unbiased is more or less impossible for a human. In conclusion, AI is a powerful thing. Guardrails and regulations on the development of AI should be very strict, because in the wrong hands it could be very dangerous. There are many sci-fi movies and books about robots destroying humanity, and I think the idea of an evil AI is even more dangerous because it doesn’t need a body, only a connection to the internet.

  7. Artificial intelligence is a recurrent topic when it comes to our expectations of technological advances in the near future. As many companies focus on developing this technology, more questions about its effectiveness arise. The applications of AI are vast: it is expected to be the cornerstone of self-driving vehicles, robots, and other technological tools meant to assist people with certain tasks and processes. The principle of artificial intelligence is essentially the capability of a program to continuously adjust itself based on new data, data that is also meant to be collected by the device for which the code has been programmed. The principle is beneficial in theory because it allows the code to distinguish between information that is valuable and information that is not. Here is where the doubt about possible bias appears. I believe various parameters can be set to limit the type of information that the code absorbs and interprets as valuable; however, I don’t believe there is a way to completely eradicate the possibility of bias. At the end of the day, an artificial intelligence has the purpose of absorbing as much information as possible in order to learn from it. Limiting the amount of information an artificial intelligence can gather would probably limit its learning curve, which is something companies would try to avoid at the expense of a competitive market. It is also inefficient, because as the code grows, more parameters will have to be created to keep it under control, which would take self-sufficiency away from the AI. Artificial intelligence is a complicated matter because it has a bigger potential than most people realize, to do good but also to cause harm. I believe artificial intelligence will be prone to mistakes by its very nature, which differs from that of a concrete, immutable program that has the exact data to complete a certain task the same way every time. Artificial intelligence is developing at a fast pace; we’ll have to wait and see how this matter evolves.
