Researchers Want Guardrails to Help Prevent Bias in AI

from Wired

Artificial intelligence has given us algorithms capable of recognizing faces, diagnosing disease, and of course, crushing computer games. But even the smartest algorithms can sometimes behave in unexpected and unwanted ways—for example, picking up gender bias from the text or images they are fed.

A new framework for building AI programs suggests a way to prevent aberrant behavior in machine learning by specifying guardrails in the code from the outset. It aims to be particularly useful for nonexperts deploying AI, an increasingly common situation as the technology moves out of research labs and into the real world.
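The core idea of such a framework, roughly, is that the developer states a behavioral constraint up front and the training procedure refuses to return a model it cannot certify as satisfying it. A minimal Python sketch of that pattern (the function names, constraint interface, and confidence bound are illustrative assumptions, not the framework's actual API):

import numpy as np

def train_with_guardrail(train_data, safety_data, fit_model, constraint):
    """Train a candidate model, then release it only if a held-out
    safety test certifies the stated behavioral constraint.

    constraint(model, data) is assumed to return per-sample violation
    scores, where a positive mean counts as unsafe. All names here
    are illustrative, not a real library's API.
    """
    candidate = fit_model(train_data)

    # Estimate the constraint on data the training step never saw.
    scores = constraint(candidate, safety_data)
    # Crude ~95% upper confidence bound on the mean violation score.
    upper = scores.mean() + 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))

    if upper > 0:
        # Refuse to deploy rather than return a possibly unsafe model.
        return None  # "no solution found"
    return candidate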

The approach is one of several proposed in recent years for curbing the worst tendencies of AI programs. Such safeguards could prove vital as AI is used in more critical situations, and as people become suspicious of AI systems that perpetuate bias or cause accidents.

More here.


4 Responses to Researchers Want Guardrails to Help Prevent Bias in AI

  1. Walter Dingwall December 5, 2019 at 11:30 pm #

    There are clear reasons why the older demographic in any nation has greater access to the services that everyone might want. It takes time to accumulate credit. There is a minimum number of credit hours required to complete a degree. With more time in a company's workforce comes a longer, more credible resume to carry into the next job. The older demographic has seen more world events and generally has more confidence in how current events may play out and how their decisions will affect things. With this comes a general reliance on, and accountability assigned to, older people, as they appear to be the ones that have solved the most problems. However, they are also the ones that caused the problems left for the next generation to witness and attempt to solve themselves.
    It is not just in the polls that the young are misrepresented or misguided. Potential young voters are constantly given reasons, apparently, to believe that they do not need to vote. Likewise, the young are the ones who must undergo the most difficult trials on the way to becoming creditworthy and obtaining things that are just as much a necessity for them as for any American. This is where start-ups, like Assure, come in to help the young get started as independent persons.
    By developing algorithms that track a user's activity online, these lending start-ups allow users to obtain loans that would require a certain "creditworthiness" if the user were to approach a bank. There is a real need for services like this that give the young access to things that otherwise take so long to work toward, even though those things, like loans for furniture, are a necessity for any age group.
    With these programs, however, bias can develop in the code that discriminates against users in ways that defeat the very purpose of the platform. This could come in the form of penalizing improper capitalization and punctuation, treating them as signs of a lack of education and therefore a lesser worthiness to receive a loan. This creates a need for human intervention: guardrails that do not allow the AI to outrun its original purpose of delivering loans to those who could not receive one from a bank. One concrete form such a guardrail could take is sketched below.
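    One simple, hypothetical version of that guardrail is a hard rule in the scoring code that certain signals, such as writing style, are never allowed to reach the model, no matter how predictive they seem. A minimal Python sketch (the feature names and model interface are invented for illustration):

        # Hypothetical sketch: strip writing-style signals before scoring,
        # so the model cannot penalize applicants for how they type.
        BANNED_FEATURES = {"capitalization_errors", "punctuation_errors"}

        def score_application(applicant: dict, model) -> float:
            # Guardrail: remove features the platform has ruled off-limits,
            # regardless of whether they would "improve" accuracy.
            safe_features = {k: v for k, v in applicant.items()
                             if k not in BANNED_FEATURES}
            return model.predict(safe_features)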

  2. Daniel J Cambronero December 6, 2019 at 7:19 pm #

    With artificial intelligence improving by the day, one has to wonder whether we are getting a little ahead of ourselves as a society. Although AI is nice and makes our lives significantly easier than at any earlier point in history, it is important to remember the failings of AI.

    One of the main ways that AI often fails is in using data and algorithms to compute and make decisions. For example, the article touches on Amazon's AI scandal: "Amazon was forced to ditch a hiring algorithm that was found to be gender biased" (Knight 6). In that case, the AI Amazon had built was denying women interview opportunities simply for being female. This is not the only time an AI has misbehaved like this. Also mentioned in the article is the time "Google was left red-faced after the autocomplete algorithm for its search bar was found to produce racial and sexual slurs" (Knight 6). There, the AI used accumulated data to predict what you were going to search, but because of the inappropriate suggestions the search bar produced, Google received heavy backlash. Sometimes companies are even hurt by AI hiccups that did not actually happen: Apple found itself in that pickle when its Apple Card was questioned over claims that "the algorithm behind its credit card offers much lower credit limits to women than men of the same financial means" (Knight 3). Although the claim turned out not to be true, Apple still received backlash that ultimately hurt it, which is ironic, since AI is supposed to make our lives easier.

    This method of capping AI mistakes makes sense because, from a logical perspective, AIs are nothing more than equations using statistics to predict things, and in statistics there are always outliers, which translate into outlier predictions. Instead of an outlier resulting in someone being denied an Apple Card one day, it could be an automated car crashing, so it is important that guardrails be put in place.

  3. Kathleen Watts December 7, 2019 at 4:16 pm #

    Artificial intelligence is one of the most fascinating things in the world. Engineers are essentially building a new level of consciousness. These robots surpass other, non-AI robots because of their ability to adapt to the world around them. Like humans, they learn by doing. As such, not only do they become smarter as they work, they also begin to pick up on patterns the way humans do. That is basically the whole point of why people want AI: a system that can pick up on patterns that humans can't always see well. In the medical field, this ability can result in faster and more accurate diagnoses. However, AI also picks up on patterns that are unsavory, just like humans do.
    At the heart of it, AI systems are learning from us. The ability to cloud-store big data has allowed us to build AI at an exponential rate. They learn to adapt from our processes. They are literally meant to replicate the human mind as best we know how. Like us, AI systems are not perfect and can hold biases and hurtful standards. While these may not be violent beliefs, they can result in oppression.
    In a video posted in July of 2018, Michael Reeves, YouTuber and robotics engineer, turned a Tickle Me Elmo into a race-detecting machine. For this project he wrote a very simple algorithm in which the system learned from pictures and information fed to it, like any typical AI. He then put a camera into the system, had a friend record himself saying certain things in Elmo's voice, and hooked the systems up together. The camera captured an image, quickly compared the face to what it had in its system, and responded with different racist terms and slurs. Reeves took the liberty of pointing the Elmo at himself, showing that it could detect his Asian heritage and return the recording he had set up to respond to such images. This project goes to show just how easy it is for people building AI to use it in ways that can harm others. Rather than an Elmo doll, an employer might implement an AI system that discriminates against certain applicants and/or employees.
    All this being said, we should not shy away from AI. AI is incredibly helpful in the medical field, among other areas. It is world-changing. What we need, as this article suggests, is a standard. AI has only recently begun to enter the world, at least in terms of how fast legislation works. Now there are people who go to law school and become technology lawyers who argue cases to protect against discriminatory practices. Just like us, these machines will learn things we may not want them to learn. That is the downside of having a system that is meant to learn and live on its own. If a system is going to process all of the news articles on a specific subject, it will learn things that may run against the beliefs of those that built it and may also include racist or prejudiced ideations. As a result, we must limit the ability of AI. As the article suggested when discussing an automated insulin pump, we must set certain limits and barriers on the AI we build; a sketch of what such a limit might look like follows.
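    The insulin pump example translates naturally into code: whatever dose the learned model suggests, a fixed safety layer has the last word. A toy Python sketch (the numeric limits are invented for illustration, not medical guidance):

        # Toy sketch of a hard safety limit wrapped around a learned
        # controller. The numeric limits are invented for illustration.
        MIN_DOSE_UNITS = 0.0
        MAX_DOSE_UNITS = 10.0

        def safe_dose(model_suggestion: float) -> float:
            # The model may propose any value; the guardrail clamps it
            # into the clinician-approved range before delivery.
            return max(MIN_DOSE_UNITS, min(model_suggestion, MAX_DOSE_UNITS))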

  4. Connor Kupres January 24, 2020 at 7:38 pm #

    Artificial intelligence has come to be something extremely helpful when it is used right. When artificial intelligence can complete complex calculations that no human could, it is easy to think the successes of AI so far are just the tip of the iceberg. Because of the way AI works, always refining its algorithm as it learns, you can be fairly certain that when a system runs to answer a particular question, the more it runs, the more accurate the answer will be. These algorithms, however, are also what is holding AI back right now. As people test the abilities of AI, it is becoming evident that there are problems with using these algorithms to predict what will happen in an open environment. In a more open environment, questions like bias come into play, and bias is incredibly complicated to reduce to numbers. Even a tech giant like Apple tried to use AI to offer credit to people, and it was reported that the algorithm offered lower credit limits to women than to men. The more AI is used in the world, the more bias we see, and the more visible the problems of relying on it become. Amazon also fell victim to AI becoming biased: it attempted to create an algorithm that could choose which applicants to offer jobs to, but after letting it run, Amazon ran into a big problem, as the algorithm's choices began to show a gender bias.

    In using something as powerful as AI, it is essential to remember that there have to be limits on what it can do, to ensure everything is fair for all. Nevertheless, this seemingly simple addition to the algorithm is proving to be extremely complicated. Expressing fairness in a purely mathematical form is a task that is almost impossible; explaining fairness in life is complicated enough as it is. To truly understand what this all means, one must first understand that we are talking about deep learning algorithms alone. Here, a software engineer writes lines of code as a set of limits, essentially the system's rules. The AI is then left to run, and the answers it comes up with are based purely on math and on what the AI deems the best outcome after running the scenario thousands of times per second. One of the most significant examples of this deep learning AI is Siri, and even as well as it works, the people who set up its voice recognition were mostly upper-middle-class white males, which makes it hard for anyone who is not white to be understood by Siri.

    AI may be getting information exceptionally quickly and getting it mostly right. The major problem is the creator: error on the creator's part is why these problems ultimately come to light. This is not intentional either, and it can almost never be avoided when writing an algorithm, because no software writer can anticipate the millions of different ways an AI can take its instructions. When the AI does take its directions to a place of bias, the only ones able to tell are people. This one fact means that it often takes time for the bias even to be found. People are the only way for AI to be fact-checked, and the amount of information AI is capable of spitting out makes it almost impossible to keep up with. Certain mistakes are bound to slip by, and the biases that AI systems hold are bound to stay for the near future.

    Other Links Used:
    https://www.forbes.com/sites/tomtaulli/2019/08/04/bias-the-silent-killer-of-ai-artificial-intelligence/#5e7df7207d87

    https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/
