Your Artificial Intelligence Is Not Bias-Free

from Forbes

Machines have no emotions. So, they must be objective — right? Not so fast. A new wave of algorithmic issues has recently hit the news, bringing the bias of AI into greater focus. The question now is not just whether we should allow AI to replace humans in industry, but how to prevent these tools from further perpetrating race and gender biases that are harmful to society if and when they do.

First, a look at bias itself. Where do machines get it, and how can it be avoided? The answer is not as simple as it seems. To put it simply, “machine bias is human bias.” And that bias can develop in a multitude of ways. For example:

More here.


4 Responses to Your Artificial Intelligence Is Not Bias-Free

  1. Rebecca Hu September 22, 2017 at 2:40 pm #

    Data is everywhere in our lives. As the internet has become a basic necessity, information spreads widely and every piece of data is monitored. All of the news and information presented to us has been through a filter. The Forbes article “Your Artificial Intelligence Is Not Bias-Free” clearly demonstrates how the information provided to us on the internet is already tailored to us.
    Everyone has certain things they like and dislike; we are biased toward certain things and situations. Since artificial intelligence is written by humans, it tends to inherit these traits and be biased toward certain subjects. The article asks, “Will we ever build truly objective machines?” The answer is a simple no: “as humans are involved in the process, some bias will exist.” As we know, for these programs to work, they need an algorithm, and that algorithm decides how they respond to human interaction. I believe what makes humans special compared to other organisms is our ability to think above and beyond standard operating procedure.
    The article discusses four different types of bias: data-driven, interactive, emergent, and similarity bias. All four remind me of how humans learn at a young age. Data-driven and interactive bias suggest that the artificial intelligence simply takes in information instead of filtering it. Think about kids before kindergarten: our approach to teaching them is to tell them to stop doing certain things. We are just filling their heads with information. Of course, the education system has since changed, and there is a totally different approach to how children should be taught. Consider China as an example. I have participated in Chinese-style education and observed other students. It is very simple: you memorize large amounts of information and accept it. There is no time to think and reflect upon the material; the only requirement as a student is to know it. I think this reflects how we are teaching artificial intelligence.
    I can relate the emergent and similarity biases to the recent reports of Russian interference in the U.S. presidential election. Media companies reach the majority of the population, and in order to keep people spending more time on social media, they are willing to present information that favors the specific user of the device. No one spends time reading something they disagree with or that doesn’t interest them. As the author states, “that often means there are a lot of things we never even know about”; we often cannot see all the pieces of information needed to complete the big picture.
    For my part, I try not to let my computer influence me based on my past history. The easiest approach for me is not to sign into my browser; I try to get neutral search results by using incognito mode, where my browsing data isn’t saved and I am not signed into any personal account that could alter my search results. The author also notes that multiple organizations have been formed to monitor bias in artificial intelligence. I think it is a good thing that we are recognizing that artificial intelligence and technology can really influence people’s choices.
    As an article posted previously on this blog, “Weapons of Mass Manipulation,” argued, technology and social networks can really affect people’s decisions. “The algorithms major companies are using to feed us news and information are impacting the decisions we make in our business and personal lives”; the author here also recognizes the extent of technology’s influence on people. The author proposes that a possible solution is “transparency of every algorithm being used,” and I agree, since the only way to tackle a problem is to understand what the issue is. Especially after recent reports about the “hack” of the U.S. election, people have begun to question whether Facebook can be a reliable source for information about current events. Even as technology improves our lives, I think we are slowly reaching the conclusion that people are beginning to lose the ability to think and to make decisions based on limited information. I am actually scared about the future, imagining a world where people cannot do anything by themselves and rely entirely on technology.

  2. KM October 7, 2017 at 8:06 pm #

    Daniel Newman’s article “Your Artificial Intelligence Is Not Bias-Free” brings to light the idea that we, as humans, pass on our biases to artificial intelligence, and that AI, by its nature and design, is not free from bias. We tend to think that technology can’t possibly be wrong or that it is infallible, but this is not the case. We need to be aware of the limitations that exist with AI in relation to bias, whether they stem from a fault in the design or from our own doing. I believe that part of the issue Newman discusses arises from the fact that we may not have fully considered the impact of our own bias, as individuals and as a society, when originally exploring AI. As I sat down to read this article, the title made me consciously aware of my own bias before I even began. I was also acutely aware that reading another individual’s comment would affect my bias regarding the story, and potentially my opinion. Bias is inherent in our nature and impossible to eliminate. Since bias is an issue we cannot erase, we must learn how to manage it and reduce its impact in situations where AI is employed.
    The issues that bias causes extend far beyond the inappropriate responses of a chatbot named Tay. Bias has the real ability to affect people’s lives in a serious way when it enters the AI process, either knowingly or unknowingly. One example Newman discusses in his article is how data-driven bias has led to racially biased results that could have an impact on a person’s freedom. Another example of how bias in AI can affect individuals is in healthcare. According to an article by Robert Hart in Quartz titled “If you’re not a white male, artificial intelligence’s use in healthcare could be dangerous,” there is a real opportunity for AI to mitigate the bias that exists in healthcare data and enhance care for individuals regardless of their demographics. However, the same article goes on to note that if the bias in the data is not controlled, the inequalities in healthcare will be perpetuated. If bias is regulated or mitigated, the power afforded by AI could be unmatched.
    This leads me to a point Newman discusses in his article: the main issue becomes what we do about managing bias in AI. There are serious bias issues running from healthcare to information dissemination to matters affecting human freedom. I agree with Newman that there needs to be more transparency in communicating which algorithms companies employ when supplying you with pre-selected information, such as news feeds that might interest you. As Newman mentions, it needs to be made explicitly clear, in layman’s terms, how the information you are receiving is provided to you and through what process. In cases where you are accessing information generated by AI technology, such as when using Facebook, I feel the company or provider should be responsible for notifying the user that the information has the potential to be biased. We all have the right to make the most well-informed decisions for ourselves, and therefore we have the right to know if the information we are consuming is skewed one way or another.

    Additional Sources:

    https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/

  3. Jeffrey Khoudary November 3, 2017 at 4:19 pm #

    To my surprise, machines are not as unbiased as I once believed. Daniel Newman reasons that because machines are programmed by humans, they can develop data-driven, interactive, emergent, and similarity biases. Data-driven bias refers to the flawed information that humans provide artificial intelligence. Because programmers’ knowledge and scientific data are often influenced by cultural and social biases, artificial intelligence designed to form estimates about people’s behavior may produce false results. Newman explained this by referring to a learning system that falsely identified black parolees as being more likely to reoffend. Likewise, KM made an interesting point about how artificial intelligence’s use in healthcare can be dangerous because the data it is fed is filled with inequalities. It is possible to decrease some of the effects of data-driven bias by encouraging more diversity in the business world. Kayla Mason pointed out in the article “Computer science’s diversity gap starts early” that there are still racial and gender inequalities in computer science jobs. By encouraging people from a variety of backgrounds to become programmers and gather scientific data, it becomes possible to create artificial intelligences without these data-driven biases.

    Another bias that I commonly notice on social media is similarity bias. Programs today are designed with algorithms that identify the news and media content people like to look at and serve content the viewers are most likely to agree with. This can become problematic because it only shows people the articles likely to support their existing views. In the recent presidential election, both Democrats and Republicans were sure their candidate was going to win and strongly believed the opposing candidate was awful. This may have been because Facebook and other social media sites have algorithms that skew the news presented to users toward articles matching their personal opinions.
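    The kind of similarity filtering described above can be sketched in a few lines. This is a deliberately crude illustration with made-up "leaning" scores, not the actual ranking system of Facebook or any real platform: the feed learns the user's average leaning from past clicks and ranks articles by how close they sit to it.

```python
# Toy sketch of similarity bias in a content feed (illustrative only).
# Each article has a made-up "leaning" score between -1.0 and 1.0; the
# feed estimates the user's leaning from past clicks and ranks articles
# closest to that leaning first.

def rank_feed(articles, click_history):
    """Rank articles by similarity to the user's average past click."""
    user_leaning = sum(click_history) / len(click_history)
    return sorted(articles, key=lambda leaning: abs(leaning - user_leaning))

# A user who has only ever clicked strongly left-leaning content...
clicks = [-0.9, -0.7, -0.8]
articles = [-0.8, -0.2, 0.1, 0.6, 0.9]

feed = rank_feed(articles, clicks)
# Opposing viewpoints (0.6, 0.9) sink to the bottom of the feed, so the
# user mostly sees content that confirms their existing views.
print(feed)  # → [-0.8, -0.2, 0.1, 0.6, 0.9]
```

    Notice that the algorithm never decides anything is false; it simply re-orders, which is exactly why the resulting bias is so easy to miss.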

  4. SK February 5, 2018 at 7:35 pm #

    Artificial intelligence, or “AI,” has become a trendy, widespread topic we can’t stop talking about. We see technology advance at a rapid pace, and that is no exception when we talk about artificial intelligence and the influence it has on our lives. Our minds are naturally inclined to favor any technology that helps us save time, which AI does; however, it may be efficient without being effective. Mundane questions we ask Apple’s “Siri” or Amazon’s “Alexa,” such as “What is the weather today?”, yield reliable answers, but more subjective questions invite bias. For example, if I had asked “Tay,” Microsoft’s chatbot on Twitter, “Who should I vote for as president?”, it would probably have collected all the jokes and commentary against Donald Trump and opposed him as a candidate, with no information from a political standpoint. AI may provide quick and “straightforward” answers, but that does not necessarily mean the answer is correct or based on facts.
    Machines have no filter when providing information because of the algorithms behind their software. Reiterating Daniel Newman, machines follow one rule: “garbage in, garbage out.” AI is nothing more than a plethora of information concealed behind a voice. As with the “predicting recidivism” example, AI can’t differentiate opinion from fact, hence its skewed responses. Another source of these skewed answers is human “error.” AI is code created by humans, which is why human bias is AI bias. As humans, we process and analyze a question before providing a justified answer, a critical step that AI skips; it simply provides whatever information is in its database, whether true or not.
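    The “garbage in, garbage out” rule above can be made concrete with a minimal sketch. This is invented data and a deliberately trivial model, not the recidivism system Newman refers to: the point is only that a model which merely echoes its training data will echo that data’s bias.

```python
# Toy "garbage in, garbage out" sketch (illustrative, made-up data).
# A trivial model that always predicts the most common label it saw in
# training: if the training data is biased, so is every prediction.

from collections import Counter

def train_majority_model(labeled_examples):
    """Return a predictor that always outputs the most common training label."""
    counts = Counter(label for _, label in labeled_examples)
    majority_label = counts.most_common(1)[0][0]
    return lambda example: majority_label

# Biased training set: group "B" is labeled "high risk" far more often,
# reflecting historical bias in the data rather than actual behavior.
training_data = [
    ("A", "low risk"), ("B", "high risk"), ("B", "high risk"),
    ("B", "high risk"), ("A", "low risk"), ("B", "high risk"),
]

model = train_majority_model(training_data)
print(model("B"))  # → high risk — the model just echoes its skewed inputs
```

    The model never analyzes the question at all; it reproduces whatever pattern the data handed it, which is the failure mode this paragraph describes.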
    Relying on AI can result in us losing valuable information that affects our everyday lives. As Daniel Newman notes, AI can hide information on Facebook, such as what a friend may have planned for the day. This may seem minuscule, but what if AI is leaving out information about our financial standing or industry advancements? AI automatically assumes we do not want to view certain information based on trends that are not always accurate, and it tends to provide only information that might brighten our moods, giving us a distorted view of the political and social world.
    In creating AI, software specialists have fabricated repetitive code that instructs AI to provide concise answers whether or not they are true or accurate. To prevent AI from giving us skewed information, an algorithm should be developed to counteract its “unfiltered” and inaccurate answers. If we create a way for AI to surface all of this information, we can make AI less biased. Consumers, businesses, and anyone who utilizes AI in any form will then recognize its serious issues and how negatively it can impact our everyday lives. Future creators of these AI algorithms must address how to reduce these biases as much as possible. So yes, AI is a technology that is “raising eyebrows” around the world, but it has not been perfected yet; before we get too excited, we must be cautious about using it strategically, avoiding its biases without relying on it completely.
