The Real Reason Tech Struggles With Algorithmic Bias

from Wired

Are machines racist? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Warnings that AI and machine learning systems are being trained using “bad data” abound. The oft-touted solution is to ensure that humans train the systems with unbiased data, meaning that humans need to avoid bias themselves. But that would mean tech companies are training their engineers and data scientists on understanding cognitive bias, as well as how to “combat” it. Has anyone stopped to ask whether the humans that feed the machines really understand what bias means?

Companies such as Facebook—my former employer—Google, and Twitter have repeatedly come under attack for a variety of bias-laden algorithms. In response to these legitimate fears, their leaders have vowed to do internal audits and assert that they will combat this exponential threat. Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

More here.


15 Responses to The Real Reason Tech Struggles With Algorithmic Bias

  1. Diamond Vasquez February 20, 2019 at 6:37 pm #

    It is no surprise to me that bias is seen throughout social media. People frequently express their opinions about any situation on the internet, sticking to a specific side. Those in the data analytics and computer science professions do not realize that the ads they put on social media can be biased, and they are doing nothing to solve this problem, as described in “The Real Reason Tech Struggles With Algorithmic Bias.” Yael Eisenstat, the author of this article, elaborates on the issue of algorithmic bias, reflecting on her experience as head of global election integrity ops at Facebook and as a former CIA officer. She explained that during her six months at Facebook, “I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.” She notes that no matter how much training analysts receive, they will still depend on biases when making choices. Eisenstat says, “Becoming overly reliant on data, which is itself a product of availability bias, is a huge part of the problem.” The data that people rely on cannot answer questions of human nature, because there is no typical right or wrong answer to human nature.
    I found this article intriguing and valid. At times, we may not even realize we are being biased about something because it is habitual. I found it shocking that data analysts and computer scientists are not conscious of the fact that they are being biased when putting up ads, believing that they are using valid, “pure” data to answer their questions. This is why no action is being taken to end the biases that are continuously being made. I believe, like Eisenstat, that data analysts and computer scientists should have further, in-depth training on how to be neutral in their work and avoid illustrating any sort of bias in an ad. The author describes a workshop she attended in Sweden: “… a trainer started a session with a typical test. As soon as he put the slide up, I knew this was a cognitive bias exercise; my mind scrambled to find the trick. Yet despite my critical thinking skills and analytic integrity, I still fell right into the trap of what is called ‘pattern bias’…” She later used this same test in her own workshop and got the same results. I found this very interesting and key to training data analysts before they actually start working. Overall, this was a fascinating article.

  2. Richard Gudino February 21, 2019 at 12:31 pm #

    I had always presumed that people would always have biases; no matter how much we try to teach everyone to be more inclusive, we still hold preconceived notions about others. I remember reading an article that described a professor at Stanford named Fei-Fei Li, her research, and her experience managing and working with the tech engineers who are paving the way for our future. She noticed that everything we make proves to be an extension of ourselves, that our work shows others who we are. That means that if our programmers have biases built in, they do not always mean to have them; it must be a part of human nature. This is why Li has called for more diversity in the tech industry to help ensure there is no bias, but even with all the diversity we can achieve, those who have had biases directed against them hold biases themselves. It is human to look for patterns, make assumptions, and have them become biases. The Wired article describes the solution to eliminating bias as getting the people in charge of our future into real workshops and programs where bias can be confronted. This, however, could be one of the hardest tasks anyone can take on, because, as the article states, we face the “uncomfortable and often time-consuming work of rigorously evaluating one’s own biases when analyzing events.” Everyone wants to think that they are good people and that they are in the right. No one wants to experience discomfort when we have the comfort of our own safe spaces. This is most apparent when the article states that this kind of “critical analytic thinking … is less common in technical fields,” which means we do not expect these people to think about the greater social impact of their work, even though their work is going to prove important to the world.
I like the example the author uses of a Facebook employee who took down a pro-LGBT ad run by a conservative group because they assumed it would be anti-LGBT. I have seen this a lot on Twitter recently. I tend to lean more to the right of the political spectrum, and I follow and retweet a lot of conservative articles, yet I still see my Twitter feed flooded with liberal or leftist tweets. Sometimes the recommended news stories are politically biased toward the left. This ruins my experience, as I feel they only push the liberal side of the spectrum on me when I would like to hear both sides of the argument so I can be better informed and form my own thoughts. Our algorithms and tech are going to suffer from bias until we can teach a more effective way to detect and eliminate it.

  3. Ashley Bock February 21, 2019 at 3:52 pm #

    Technology such as Google and Facebook is based on complex algorithms created to make the applications run. However, the article presents an issue within those algorithms and the real source of it: the algorithms are biased. The author states that the tech industry is not doing enough to end bias in the algorithms that artificial intelligence and computer applications run on. She makes this argument because she saw the problem firsthand as head of election integrity operations at Facebook from June to November 2018. She states, “I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way” (Eisenstat). This shows that it is inherent for people to put bias into their work and daily lives. When people at tech companies create these algorithms, bias, albeit not purposeful, will show up and be transferred into the algorithms. Eisenstat found that when she raised the bias she saw in certain exercises in her Facebook position, her voice was met with silence. This leads me to believe that more work must be done to encompass bias training in the tech community. Bias has to be eliminated from the people responsible for creating the algorithms so that data and artificial intelligence are put forth in a non-judgmental way. Engineers and computer scientists need training in how to take bias out of their everyday work so they can create technology without it. It will be a challenge, because “Humans cannot wholly avoid bias, as countless studies and publications have shown” (Eisenstat). However, avoiding bias is going to be hard if we do not understand where it is coming from.

  4. Allya Jaquez February 22, 2019 at 10:55 am #

    It does not shock me that social media companies are not doing enough to address the biases that appear almost every day. People use the internet all the time to judge others and make racial comments, and those in the tech industry do not do much to protect the people being verbally attacked. The main issue stated in this article is that algorithms are biased. The author, who worked at Facebook, shared her experience with the situation and gave her insight as well. I believe a lot more work needs to be done on training at tech companies. Companies need to center on the idea of training their employees on the concept of bias and how to avoid it. But for these tech companies it will be difficult even to find where the bias is coming from, so I know it will be a little harder to avoid it and fix the problem.

  5. Edward Holzel February 22, 2019 at 2:36 pm #

    I am shocked that the data and statistics provided by programmers are biased. I was a firm believer in the idea that “numbers never lie.” I never conceived that programmers would unintentionally build bias into their programming and data collection, yet large technology companies such as Google and Facebook have programmers who are unintentionally biased in the programs and data they create. The author claims that “Analysts and operatives must hone the ability to test assumptions and do the uncomfortable and often time-consuming work of rigorously evaluating one’s own biases when analyzing events.” I would not know where to begin in evaluating my own bias. Most people believe that they are unbiased and will not understand the ways in which they are biased. Analysts and operatives at the CIA are specially trained to isolate their personal biases; computer programmers at the large tech giants have not received that training, according to the author. “In one glaring example, an associate mistakenly categorized a pro-LGBT ad run by a conservative group as an anti-LGBT ad.” I learned that even simple political leanings can make someone biased. It is the little things people scoff at as unimportant that can make them act biased without even being aware of it.

    The author claimed that training can help minimize bias by allowing workers to understand their personal biases. I do not believe that training alone will allow bias to be identified and removed from programs and data. The author states that she and her trained CIA counterparts themselves fell victim to “pattern bias.” If even specially trained CIA officers fall into pattern bias, I believe the best we can hope for is that people who code with the conviction that they are making a difference, and who work hard on their programs, will produce the most unbiased work possible.

  6. Jack F Comfort February 22, 2019 at 2:55 pm #

    Bias is something none of us can get away from, no matter how hard we try. We will always hold biases deep down, even if we don’t think so. These biases occasionally leak into our everyday lives and sometimes even our work. In the case of the programmers, they let their biases leak into the algorithms, occasionally producing racist algorithms. I don’t believe there’s much they can do about it if they aren’t doing it intentionally, and I don’t think training will help. I hope they do eventually find a way to fix this, but they are probably more caught up with all the lawsuits Facebook is facing.

  7. Claudia Ralph February 22, 2019 at 5:48 pm #

    I am going to be completely honest: I hate algorithms. I miss the days when everything on social media was shiny and new, all posts were in chronological order, and all the features seemed a little less complicated. Once algorithms were introduced, they opened the door to a new set of potential biases that a chronological feed does not allow for. Sorting posts into what we “believe” users want to see can have dangerous ramifications in terms of how social media platforms can almost control what their users consume.
    This problem, though, like many others in tech, is not being handled properly by many social media outlets. Just as with data parsing and data sharing with third-party advertisers, companies like Facebook and Google are lazy in their approach to many issues regarding their users. Instead of combatting potential issues with algorithms, social media companies continue to take a passive approach. Using targeted ads with political bias is certainly a great way for these companies to make massive amounts of cash, but at the expense of their users. This is a place where the precision of a computer and human error can intersect and cause a lapse of judgement or oversight when it comes to ad placement.
    It is important for humans to understand what bias looks like as well, which is something the article touches on. Silicon Valley is not doing an adequate job of equipping its engineers with proper bias training, something I believe is already lacking in STEM. In a field driven by data and numbers, it is important to bring in the human element from time to time.

  8. Jon Sozer February 22, 2019 at 6:03 pm #

    Algorithmic bias is nothing new. Many large companies use algorithms to filter posts, comments, videos, and other uploads to their servers in order to keep a certain image or focus. Issues regarding algorithms also have been in circulation for years now, specifically concerning YouTube and its content creators.

    Content creators on YouTube, in the beginning of its lifespan, weren’t in the business of making money. They posted what they did because they had a passion for the subject and for sharing it with other people. Later, investors and interested parties began putting money into the growing business, and content creators gained the ability to monetize their videos and start earning from their hobby. As the incentive grew for new people to enter YouTube and hopefully hit the big time, YouTube itself began regulating what could be posted on its website. Even further down the line, YouTube’s algorithm began to push certain videos toward the general audience. So, forced to create a certain kind of video in a certain fashion or lose money as other videos got preference from the algorithm, content creators had to change their style of video, thumbnail creation, title generation, or one of many other factors, or fade out trying. Now, an artificial intelligence system (reportedly based on Google DeepMind technology) picks up on vulgarities or phrases that YouTube’s advertisers aren’t comfortable appearing alongside, censoring or completely demonetizing parts of the creator base. This has led to discontent among a great percentage of content creators, and the issue is still being discussed.

    Overall, algorithms aren’t perfect. They are a simple solution to complex problems and can, and most likely will, create more issues in their wake. The ability to censor individuals based on data created by other humans is not much different from simply denying people their right to free speech, as dictated in the 1st Amendment.

  9. Demetri Allen February 22, 2019 at 8:42 pm #

    Yael Eisenstat’s article presents a huge underlying problem with most modern media and journalism: the threat of bias. Bias is defined as prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair, and in today’s society it is a huge inherent problem. While social media has created a platform on which anyone can put out an opinion or belief, it has become easy for facts to be lost behind a group’s personal agenda. One example of this was when a third-party company acquired Facebook users’ information in order to display targeted political ads on their pages during the 2016 election. Eisenstat talks about bias in the sense that creating programs and having unqualified people try to eliminate bias does not work. I want to present the biggest modern offender of this: YouTube. The YouTube trending page is where most “viral” videos end up, or should end up. Videos that get a huge amount of attention in a short time land on this page, whether through views or likes. However, the trending page does not fully represent YouTube’s actual creators. It is dominated by music videos, ESPN sports highlights, and clips from late-night talk shows such as Jimmy Kimmel’s. Most of these companies pay YouTube to promote themselves, which goes against YouTube’s ideology that it is a website by creators, for creators. YouTube shows its bias toward high-paying companies over its own humble users by putting only what it deems “safe” on the trending tab. A controversial video, no matter the topic or how many views it gets, will never show up on the trending page. Even though YouTube says it has no hand in controlling which videos show up there, it is obvious that it is lying.
The problem here is that the trending page rarely represents real YouTube creators, and the ones it does feature are often people who abuse YouTube’s broken algorithms to get more views on their videos. A blatant example of this bias came when the platform’s largest YouTuber, PewDiePie, made a video called “YouTube Rewind 2018 but it’s actually good” in response to YouTube’s own Rewind video, which did not properly represent the events of 2018. PewDiePie’s video would go on to become the most-liked non-music video on YouTube to date, yet because of his previous controversies, it never showed up on the trending page. This is what Eisenstat is talking about when it comes to bias: it is deeply rooted in every media platform, and the systems designed to prevent it simply don’t work.

  10. Daniel McNulty February 22, 2019 at 8:45 pm #

    Throughout history, racism has been a serious problem that has never seemed to go away. Although we have made tremendous strides over time, as a society we are absolutely not there yet. Every day, racism is on display, whether blatant or subtle. In the last several years, with the rise of social media, racism has become more and more apparent; whether through racial injustice, police brutality, or other forms, it has come to the forefront of our lives. Now the issue being talked about is racism from machines, artificial intelligence, and algorithms: is this possible? The answer can be complicated, but in short it is yes, because the things we have in our lives today are made by humans, so it is not out of the question. These systems are trainable, which means that for them not to hold a bias, whether of race, gender, or political standing, the people who train them have to go about it in an unbiased fashion. In reality, it has been shown to be practically impossible to remain totally unbiased, especially in a situation like the one presented here. In order to understand your own biases, and the biases you are receiving, you must receive training. Altogether, there is no tried and true way to measure bias, whether in data, in certain ads on a website, and so on. My question would be: how can biases consistently be limited? Through more research and more experience, methods should absolutely be able to be developed to restrict these biases. Facebook seems to be one company that consistently receives a bad rap for its business practices, from managing accounts so that users only see certain things to managing people’s private information and sharing it.
As an individual who uses these platforms for information, you must be able to think critically while understanding what you are reading and where the information is coming from. Being able to do this is crucial when obtaining information, in order to get the real information and weed out the biases.

  11. Abdulrafay Amir February 25, 2019 at 10:16 pm #

    Many big corporations and businesses have been using algorithms to filter posts and comments on their servers so that they can maintain a specific image or focus. Issues with respect to algorithms have been in circulation for years, specifically concerning YouTube and its content creators. Content creators on YouTube, at the start of its lifespan, weren’t in it for the money; they posted what they did because they had an enthusiasm for the subject and for sharing it with other people. Then investors and interested parties began putting money into the growing business, and content creators gained the ability to monetize their videos and begin earning from their pastime. As the incentive grew for new people to join YouTube and hopefully hit it big, YouTube itself began managing what could be posted on its site. Later still, YouTube’s algorithm began to push certain videos toward the general audience. Thus, forced to make a specific kind of video in a specific style or lose money as other videos got preference from the algorithm, content creators had to change their style of video, thumbnail creation, titling, or one of numerous other variables, or fade out trying. Now an artificial intelligence system (reportedly based on Google DeepMind) flags vulgarities or expressions that YouTube’s advertisers aren’t happy to appear alongside, and videos are being restricted or completely demonetized. This has prompted discontent among a great number of content creators, and the issue is still being discussed. Generally speaking, algorithms aren’t flawless.
The capacity to control people based on data made by other people is not much different from simply denying individuals their right to free speech, as dictated in the 1st Amendment.

  12. Josh Shupper February 26, 2019 at 4:09 pm #

    Racism and bias are two things that should not be new to us in today’s society. Bias has become a huge problem, especially for companies like Google, Facebook, and Microsoft, which have been criticized by many for having some sort of bias in their algorithms. To me, that is a little bit shocking. Many of these companies seem to have a good reputation, and the algorithms they use are supposed to be helpful; instead, the bias ends up hurting the reputation of these gigantic companies. But as the author said, humans can’t avoid bias. I can totally agree with that, because everyone has different preferences and opinions about certain topics. Bias is on the same level as death and taxes: all three are things that we as humans simply cannot avoid. We will all have to face them at some point in our lives. There is no way around it.
    Not everybody is perfect or can fix all the problems that occur daily all over the neighborhood. I can definitely see why bias has become a huge problem for companies that use algorithms to get work done. It interferes with a company’s progress, and the criticism surrounding these companies hurts the reputation of the organization and of all the employees and associates affiliated with it.
    I liked that the author talked about her personal experience as a CIA officer, showing how someone in that occupation learns to distinguish biased information from neutral information. She mentions that a lot of her job was based on critical thinking. Critical thinking is considered a crucial skill for many jobs, yet in technology it is often assumed to be unnecessary because machines do the work for us. I definitely think critical thinking should be required across all the different types of jobs the world has to offer. People in technology need critical thinking to assess their algorithms and analyze all the work they do for their companies.

  13. Peter Honczaryk March 1, 2019 at 11:46 am #

    Of course the machines are racist. They are created by people, after all, and everyone has their own opinions about everyone else. There is no one in the world who has never discriminated against someone in some way; it is just part of how people look at the world and those around them. Take, for example, someone who writes up an algorithm for a company that sells car insurance. The person who writes the algorithm would require the person buying the policy to provide their yearly salary, residence, and other personal information to figure out whether they can keep up with the insurance rates. If it turns out that someone does not make much money per year, lives in a poor neighborhood, or has a bad background history, they are not going to get the insurance. If the algorithm is written by someone who is white and a black applicant falls under one of these categories, the applicant may be denied insurance on the basis of discrimination; if it was written by someone who is black, that applicant might be more likely to receive the insurance, following the general assumption that whites stick with whites and blacks with blacks. For this not to be a discriminatory case, both a black and a white person would have had to write the algorithm so that everyone has an equal opportunity. It was not a surprise that the companies mentioned in the article, like Facebook, Twitter, and Google, are all under attack for bias. Just about every advertisement on these platforms features someone who is white, with only a few black people. This makes people want to attack these companies for being biased toward certain races or beliefs. It is hard for companies to find ways to fight back, but they should not receive so much hate, because they are all huge companies with people from vast cultures who, if they really wanted to, could speak up for their own races and change the companies’ views to make them less biased.
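    The insurance scenario in this comment can be sketched in a few lines of Python. This is a purely hypothetical illustration (every name, threshold, and number below is invented, not from any real insurer): a rule that never looks at race at all can still produce unequal approval rates when one of its inputs, here a neighborhood risk score, happens to correlate with group membership in the data.

```python
# Hypothetical sketch: a "race-blind" eligibility rule whose inputs act as
# proxies for group membership. All values are invented toy data.

def eligible(applicant):
    """Approve coverage using only income and a neighborhood risk score."""
    return applicant["income"] >= 40_000 and applicant["zip_risk"] <= 0.5

# Toy population in which zip_risk correlates with group membership
# (a stand-in for historical housing patterns, not real data).
applicants = [
    {"group": "A", "income": 55_000, "zip_risk": 0.2},
    {"group": "A", "income": 48_000, "zip_risk": 0.3},
    {"group": "A", "income": 42_000, "zip_risk": 0.6},
    {"group": "B", "income": 52_000, "zip_risk": 0.7},
    {"group": "B", "income": 45_000, "zip_risk": 0.8},
    {"group": "B", "income": 61_000, "zip_risk": 0.4},
]

def approval_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(eligible(a) for a in members) / len(members)

print(approval_rate("A"))  # 2/3: most of group A clears the zip_risk cutoff
print(approval_rate("B"))  # 1/3: group B is mostly screened out by neighborhood
```

    The point of the sketch is that the disparity appears even though `eligible` never mentions race: the bias rides in on the correlated input, which is why simply deleting a sensitive field from an algorithm does not make it unbiased.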

  14. Shegufta Tasneem March 1, 2019 at 3:50 pm #

    We can’t take any random person and accuse them of being more likely to commit a crime because others in their ethnic group have been convicted of crimes before. Yet that is exactly what happens frequently in our societies. Racism is something people all over the world struggle with, depending on where they are. Now, with the rising application of technology in every sector of our lives, racial bias has also taken over AI and the internet. While these prejudices reside in human minds, and the positive note is that people are putting genuine effort into eliminating this social ill, that effort cannot reach technology until its producers also rid themselves of bias. One example of technology being actively biased is the Coolpix S630 camera. It was designed with advanced, “sophisticated” technology that would prompt the user when it detected that someone had their eyes closed. But the technology had been trained on Caucasian images. As a result, when a Taiwanese family tried to take a family photo, the camera simply spammed them with “Did someone blink?” messages (The Bigger Picture). While this was not the intention or the original business model of the camera company, it was immediately engaged in racial bias when the camera’s algorithms were built to recognize only Caucasian eyes. This is not the fault of the technology or the AI, of course, because they are produced and manipulated by human beings. It is yet more proof, of the kind we regularly see around us, that racial bias still exists among people, alarmingly so among those in society who have the authority to shape our technology. The Coolpix S630 should have included algorithms that recognize all skin colors and eye shapes when a picture is taken, and that would only have happened if the original business minds and producers were completely free of this bias.
Just like this, there are many more racist implications we see in technology, and yet we choose to use it anyway because of our over-reliance on it.
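    The Coolpix-style failure described in this comment can be sketched as a toy calibration bug. This is a hypothetical illustration, not the camera’s real algorithm (all measurements and the threshold rule are invented): a “blink” detector whose threshold is fitted only to one group’s eye-openness measurements will falsely flag people whose open-eye measurements fall outside the calibration set.

```python
# Hypothetical sketch: a blink detector calibrated on one demographic only.
# All numbers are invented toy measurements of "eye openness" (0.0 to 1.0).

calibration_set = [0.80, 0.85, 0.90, 0.75, 0.88]  # a single group's photos

# Calibrate: anything below ~70% of the calibration mean counts as a blink.
threshold = 0.7 * (sum(calibration_set) / len(calibration_set))

def blinked(openness):
    """Return True when the measured openness falls below the fitted cutoff."""
    return openness < threshold

# A subject resembling the calibration set is handled correctly...
print(blinked(0.82))  # False: no blink prompt

# ...but a subject whose open-eye measurements score lower under this metric
# gets spammed with false "Did someone blink?" prompts.
print(blinked(0.55))  # True: eyes open, flagged anyway
```

    The fix is the same one the comment suggests for the camera: calibrate on data that represents everyone the product will photograph, not just one group.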

  15. Brian F March 8, 2019 at 6:46 pm #

    A lot of Americans like to think that we left racial division in the 1960s and that the meteoric rise of internet communication would keep bias and prejudice at bay. Unfortunately, that rose-colored view of our society does not line up with reality. Despite the hope of many, bias crept its way into the foundation of technology, the internet, and data. The Silicon Valley programmers who created the technology-heavy world we know were human beings; regardless of their intentions, their personal biases ended up being represented in their work. That isn’t the source of the problem, though. The real issue is those who have deluded themselves into thinking it was possible for others to keep that bias away from their work. It is human nature to let who you are influence the work that you do, which is why it was silly to assume the world of the internet could be made without the biases of its creators. Consciously removing your prejudices from your work takes specific, specialized training, and the people who write these algorithms haven’t been instructed properly in that regard. The companies that employ the code writers, however, have been reluctant to accept this premise. The problem, of course, is that training that specific would be expensive, and tech companies would prefer not to spend money on a problem that many people think does not exist. That is why workers who have recognized the issue, like this article’s author, Yael Eisenstat, receive skeptical and dismissive treatment from tech executives. I do not believe it is possible to remove 100% of the bias from a person’s work, but I do agree with the author’s premise that tech companies need to pony up and deal with this problem now.
I would expect more and more articles like this one to be written by former employees of technology companies who discovered the stereotypes built into programming work but were not taken seriously when they tried to raise the issue with their superiors. This is starting to resemble the countless corporate scandals that have already happened, where companies ignore the concerns of experts right up until the public decides the company was negligent. Additionally, some of the internet companies mentioned, particularly Facebook, have already done significant damage to their reputations by sharing user data. This is a perfect opportunity to avoid taking another hit in the opinions of consumers: listen to the employees who have already identified the biases in algorithms and make at least some real effort to combat them.
