Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination

from Wired

During the Long Hot Summer of 1967, race riots erupted across the United States. The 159 riots—or rebellions, depending on which side you took—were mostly clashes between the police and African Americans living in poor urban neighborhoods. The disrepair of these neighborhoods before the riots began and the difficulty in repairing them afterward were attributed to something called redlining, an insurance-company term for drawing a red line on a map around parts of a city deemed too risky to insure.

In an attempt to improve recovery from the riots and to address the role redlining may have played in them, President Lyndon Johnson created the President’s National Advisory Panel on Insurance in Riot-Affected Areas in 1968. The report from the panel showed that once a minority community had been redlined, the red line established a feedback cycle that continued to drive inequity and deprive poor neighborhoods of financing and insurance coverage—redlining had contributed to creating the poor economic conditions that already affected these areas in the first place. There was a great deal of evidence at the time that insurance companies were engaging in overtly discriminatory practices, including redlining, while selling insurance to racial minorities, and would-be home- and business-owners were unable to get loans because financial institutions require insurance when making loans. Even before the riots, people in these neighborhoods couldn’t buy or build or improve or repair because they couldn’t get financing.

More here.


11 Responses to Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination

  1. Horace Bryce Jr February 26, 2019 at 6:45 pm #

    Algorithms should not hold so much weight that they can be used to tip the scale of social and racial justice. In my opinion, after reading this article, I believe that insurance companies’ claim that they “developed sophisticated arguments about the statistical risk that certain neighborhoods presented” is just used as a scapegoat for discrimination and racism.
    Racism in America is something so deep-rooted that to remove it from the land would be to remove the land itself. Racism is so prevalent in America because it manipulates the masses into viewing a group of people as wild, incompetent, and badly in need of help; it does this so well that it gets to a point where it is okay for those in power to exploit and degrade that group without the masses even turning for a second glimpse. As a country that was built upon no base morality, upon the literal blood, sweat, and tears of slaves, aka human beings, it does not surprise me that scientific racism has made a comeback through the insurance companies’ use of statistics to justify discriminatory algorithms. In America the means of discrimination justifies the end, that end being capitalism. At the same time, logically rather than morally speaking, I could see how the statistics would support the insurance companies’ “sophisticated argument” for redlining poor neighborhoods and depriving them of financing and insurance coverage.
    As can be inferred from the last paragraph, racism in America is very much here and negatively impacts the livelihood of minorities on a day-to-day basis. This is done in a very systematic way, with redlining fitting right into that system. Due to the stigma on the group, it is considerably harder for them to get a job that pays well enough to support themselves and their circumstances. Because of this, many do not have money and are therefore forced to live off of the welfare of the government in highly populated areas, alongside people in the same boat. Where there are people there are problems, and now we have highly populated areas of people who are struggling to survive, which is something very different from living. With a life on defense in such a densely populated area, of course more crimes tend to happen, and of course “blacks should pay more for damage insurance when they lived in communities where crime and riots were likely to occur,” which statistics take note of. Where there is more violence, it is safe to say that there is a higher risk. With higher risk, it does make sense why insurance companies would redline poor areas. But this risk is not the fault of the minority but of the system they have unsuccessfully tried to assimilate into, which leads to the moral side of this conversation.
    Algorithms backed up by statistics should not be used to enforce discriminatory conditions. Just because something makes sense statistically does not mean that it is morally correct. Morality, by definition, means holding or manifesting high principles for proper conduct. Proper conduct would be to ensure that all people get the same amount of opportunity, funding, respect, and insurance coverage. Redlining was actually outlawed under President Lyndon Johnson after his implementation of a National Advisory Panel on Insurance, but the reform did not stick. This panel established that the red line “established a feedback cycle that continued to drive inequality and deprive poor neighborhoods of financing and insurance coverage.” My question is: why was it not important enough to stick if these poor people mattered to the system? The answer is that these poor people do not matter; what matters is productivity and capital. This leaves no room for what is moral. It is even stated that “Insurers argued that their job was to adhere to technical, mathematical, and market-based notions of fairness and accuracy and provide what was viewed—and is still viewed—as one of the most essential financial components of society. They argued that they were just doing their jobs. Second-order effects on society were really not their problem or their business.” That is simply saying that insurance is not in the business of morality but in the business of making money. On that note, I agree with the civil rights and feminist activists of the late 20th century who questioned whether “actuarial fairness—an individualistic notion of market-driven pricing fairness—was a valid way of structuring a crucial and fundamental social institution like insurance in the first place.” It simply should not be, because it deprives people of rights they should have.

  2. Richard Gudino February 28, 2019 at 12:53 pm #

    This seems to be the argument of either the left or the right of the political spectrum: that we are not just numbers in a system, that a number cannot possibly define an entire person or their story. As a society we always hope that our morals and beliefs can translate into the policies that govern us, that a federal or state statute can reflect every community and the realities everyone lives. What a perfect idea for a perfect world. This could not be further from the truth. We have so many schools of thought that it has become impossible for everyone to agree on how we should conduct our laws and communities. The article cites instances of racism that happened during the ’60s with insurance companies. Now what does this have to do with our increasing use of tech? It goes back to an earlier article that described how our algorithms are biased, and how that can make them racist. Why does this make a couple lines of code that are seemingly harmless all of a sudden bad? It all has to do with how the article defines fairness and accuracy, stating that “fairness and accuracy are not necessarily the same thing”. How can these two not be the same thing? It is because what is fair cannot always be determined by the accuracy of numbers, because again a number does not tell everything. The people that created the algorithms followed a racist system that was adopted by the insurance companies; the data that has been used to model our algorithms is based on an old and racist tale. The reason that our computers tell the police where to stake out is based on data that has accumulated over time from areas that were meant to stay dangerous, without hope of improvement. The other side counteracts this seemingly racist claim by stating that the code is just a “highly technical way involving only mathematics and code, which reinforces a circular logic”.
    Which makes sense: the data is fed to the computer, the computer spits out the result, and the result is accurate because the computer cannot fail a calculation. It is a process that comes full circle. That still does not change the fact that the data it is based on carries some sort of racism. Again, as Fei-Fei Li argues, we need more diversity in the labs that make this code, to avoid injustice and make a more even playing field. Here is the thing, though: there is bias on both sides of the argument, and people will always be biased and have preconceived notions of everyone around them, whether their skin has color or not; it’s unavoidable. Even if everyone is offered CIA-level bias training, there will still be some form of racism or bias. Whether we notice it or not, our world has become an uneven playing field. That just means that some of us have to work harder than others.

  3. Diamond Vasquez February 28, 2019 at 9:16 pm #

    During the 1960s, discriminatory issues increased throughout the United States, even when it came to insurance. According to “Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination,” published by Joi Ito, race riots occurred during the summer of 1967 between police and African Americans living in poor neighborhoods, in part because of redlining, which is “an insurance-company term for drawing a red line on a map around parts of a city deemed too risky.” To relieve this situation and further investigate redlining, President Lyndon Johnson established the President’s National Advisory Panel, which revealed how redlining minority poor communities played a part in worsening economic conditions and prevented these communities from receiving financing. Not only does redlining play a major role in revealing the discrimination of insurance companies, but other factors contribute as well, including risk spreading. Risk spreading had the intention of engaging the principle of solidarity, being “based on the notion that sharing risk bound people together, encouraging a spirit of mutual aid and interdependence,” as explained by Ito. These practices, though, went in the opposite direction from what they intended; one example is the risk scores used by the criminal justice system, viewed as “biased against people of color,” treating people of color as more likely to recommit a crime and go back to prison, and thereby increasing police bias and police control in poor communities. Insurance companies say that it is all in the statistics, but that is just an excuse to hide the truth.
    I have the same feeling about the insurance companies’ excuse for these discriminatory beliefs they hold towards minorities and women. They say that they are only doing their jobs and that everything is based on the mathematics and statistics displayed, believing that their system is “fair,” but that is not the case. How can they have minorities living in poor communities pay more than those who are living in a wealthier community of the majority race? They are already going through enough as it is, economically. Also, how can they have women pay more just because, statistically, “they lived longer,” as described by Ito? It does not make sense. I am content to see that today’s modern insurance companies are making sure that they are “building algorithms that are fair.” Fairness has various definitions, but one thing that fairness is not is “a statistical issue;” it is a “dynamic and social” issue, as Ito explains. It is constantly being improved. I believe this article was an interesting read.

  4. Dylan Flego March 1, 2019 at 9:34 am #

    Algorithms are a tricky topic to handle. On one hand, algorithms are utilized in useful functions and resources every day. Some of the common activities involving built-in algorithms include searching for terms on a search engine or using a GPS system in a vehicle. No matter what the task, the algorithm programmed within will typically be used to create a subset of items and find the most effective way to “organize” those items, whether the organizing principle is relevancy (like searching key words on Google) or simply speed (such as finding the quickest navigation route via GPS). However, another important consideration needs to be taken into account. Most of these types of algorithms are maintained and altered by the larger parent companies who created them. Now, why could this be a problem? Let us use YouTube as an example, with one of their more recent policies affecting comments on videos. YouTube plans to solidify a policy that relies on an algorithm to filter comments on videos, where if a large number (unspecified as of now) of comments on a video are flagged as inappropriate, the content creator’s video may end up demonetized. Since YouTube is fully in charge of how its algorithm actually functions, there may be some hidden strings attached, where certain types of videos could be more easily flagged and demonetized without the public knowing. Unfortunately, YouTube is not the only one pulling this trick from its sleeve. There are many other large corporations out there who skew algorithms and build biases into their operations, controlling who gets affected in the way the company desires. Just because an algorithm is viewed as accurate in its respective regard does not mean it is entirely effective. Discrimination will in fact continue to be perpetuated as long as these unjust algorithms are sustained in functions that average people rely on every day.

  5. Peter Honczaryk March 1, 2019 at 10:32 am #

    The title’s claim that “fair” algorithms can be discriminatory is ironic, considering that if something were truly fair, it would not be discriminatory. It is hard for people in general not to discriminate, even in today’s society, because of the ongoing news that continues to spread. Take, for example, our current president. He discriminates more than anyone, and some people would consider him the most discriminating president ever to serve in office. He always states what is on his mind about the people he believes have harmed the country, and half the country agrees. The article talks about discrimination through insurance companies and how they were redlining. Redlining is when an insurance or loan company does not sell insurance or loans to someone because the area they live in is a financial risk. But this still goes on today. Loan companies only give out loans to the people they know will be able to pay the loan back, plus the interest that is added to the loan. That is how these loan companies make their earnings and how they continue to stay strong. People always want money and are eager to take money, but when it comes to paying it back, it can be difficult. If the loan company knows that someone is never going to be able to repay a loan, it will not bother working with them. Insurance companies are the same way. They will sell insurance to people they know can afford it. An insurance company is not a charity and is not going to just give out insurance for free. Having an algorithm that does the same thing as a regular person, not selling because of discrimination, is not a surprise in any way. Obviously, these companies are not going to outright tell the public that they will not sell to everyone and that they redline. That would shut them down immediately. Instead they just do what is best for the company, not for the people. Again, these companies are not charities. They do not care if someone needs or wants money; they only want to know that the person they are selling to is able to pay back the money or to pay for the insurance they desire.

  6. Jack F Comfort March 1, 2019 at 2:59 pm #

    In terms of insurance redlining and calculating risk, I don’t believe these algorithms are being discriminatory, contrary to what the article is saying. Insurance companies assign different categories of risk to different people; certain factors include gender, age, and where you live. People think these insurance companies are being discriminatory because they are charging certain people more than others. These insurance companies use statistics to determine how much they charge. When they charge a neighborhood that happens to be majority African American more than others, it isn’t because it is majority black; it’s because the neighborhood might have a higher crime rate or be more at risk than others. The same goes for why they charge men and women different rates: men are statistically shown to make riskier decisions, which puts them at greater rates of harm. The same goes for why annuities cost more for women than for men: women have a higher life expectancy and therefore pay more. People who try to claim that these insurance companies are being discriminatory aren’t looking at the issue from the companies’ perspective. These companies would lose a lot of money if they were to charge everybody the same rate. Before people assume that the companies are being racist, they should do more research.

  7. Jon Sozer March 1, 2019 at 5:27 pm #

    Can algorithms be completely fair in determining an outcome? Yes: an algorithm can be completely fair relative to how it was designed. But the design itself could be unfair or discriminatory to individuals or groups under the purview of the algorithm. Not to harp too much on the same topic, but YouTube’s algorithm also faces backlash from its community. The algorithm itself might be fair and it might work as it should, but the design itself and the parameters placed within it could be faulty.

    YouTube’s trending page is filled using an algorithm, pulling from videos that are popular, gaining popularity, and are being watched more frequently and consistently than others. This usually means that more popular YouTubers are found on the trending page, but it isn’t necessarily the case. Anecdotally, when I check the trending page, I generally find about three recently released music videos, a small handful of strictly YouTube related content from content creators, and far too many clips of late night shows to count. Does that mean that late night shows are the most consistently and frequently viewed YouTube videos? No, but the algorithm set into place is skewed to prefer those clips, which makes sense considering that the companies that host those late night shows would be more than happy to slide YouTube some money to bring their content to the spotlight.

    Algorithms can be fair on paper without being fair in practice. Algorithms by design are discriminatory: they filter information and data in order to separate items and determine how the processed information is dealt with.

  8. Doran Abdi March 1, 2019 at 8:40 pm #

    The issue highlighted in this article can be looked at from several different perspectives. One side will usually say that it is technically correct for an insurance company to practice redlining, as it only makes sense to charge a person from an area with a higher risk of crime more money. I personally find myself on the other side of this argument. The issue with redlining and discriminating against certain groups of people and races is that it creates a very scary cycle. While, yes, it is true that for the insurance companies it is only logical to charge more or to completely refuse their services if the risk of providing those services is higher, the cycle this idea creates is a larger and larger divide between ethnic groups in income and general wellbeing. The truth is that the majority of the groups being “redlined,” the face of these “discriminatory algorithms,” are generally ethnicities including African-American, Hispanic, etc. To create an example, one can narrow it down to one single person. This person is a black male who was born and raised in an entirely low-income and high-crime-rate area. As this person grows up, he can see that all of the people around him are constantly being denied any type of loan from a bank or a reasonable rate from an insurance company. This will lead to the halting of his education and will leave him out of work, as he finds that no one will hire him because of his race and his background. This denies him the ability to raise his income level and expand out of the poverty-stricken area in which he grew up, leading to a never-ending cycle of institutionalization of all of the people within that neighborhood. So, while many can rationalize that these “discriminatory” practices by insurance companies are in no way immoral, as the companies are only ensuring their own stability, many tend to forget that continuing these practices will only lead to a larger and larger divide in the general wellbeing and incomes of races and genders.

  9. Devero McDougal II March 8, 2019 at 7:39 pm #

    During the 1960s racial tension was at an all-time high; during this time the civil rights movement was pushing for equality for minorities. Because of this, implementing policies like redlining was not a great idea, considering what was taking place. Redlining a specific neighborhood deprives that neighborhood of financing and insurance coverage, which also contributes to poor economic conditions. When redlining was shown to involve discriminatory practices it was outlawed, but this did not stop some insurance companies and lenders from finding other ways to discriminate against certain areas. During the 1960s, when the civil rights movement was taking place and racism was at an all-time high, this was just another way for these insurance companies to carry on discriminatory practices. Reading this article definitely made me think about how insurance companies continue these practices to this day. One example is that insurers discriminate against new drivers: the rates for these drivers are higher because they classify them as unsafe drivers. If the concern is really about being unsafe, why are senior citizens not charged a higher rate as well? The likelihood of a senior citizen getting into an accident is arguably as high as a new driver’s. This example shows how insurance companies still continue to discriminate against certain people and areas. I feel that redlining was not a good idea when it was implemented, especially considering that it took place during the civil rights movement.

  10. Daniel McNulty March 15, 2019 at 2:33 pm #

    In today’s society, we see many of the social and racial biases of the past resurfacing in our lives. I believe algorithms should not hold the power to move the scale of social and racial justice. After reading this article, I have come to believe that insurance companies came up with complex arguments about the statistical risk that certain areas presented, essentially using it as a scapegoat for discrimination and racism. Algorithms backed up by statistics should not be used to enforce discriminatory conditions. As a society, we must understand that there are certain morals that should be followed, and not always go by the numbers. Just because something makes sense statistically does not mean that it is morally correct. Morality is defined as “holding or manifesting high principles for proper conduct.” Everyone in this country deserves to have equal opportunity, funding, respect, and insurance coverage. Redlining was outlawed under President Lyndon Johnson after his implementation of a National Advisory Panel on Insurance, but the reform did not last. This panel established that the red line “established a feedback cycle that continued to drive inequality and deprive poor neighborhoods of financing and insurance coverage.” I want to know why it was not important enough to last if these poor people mattered to the system. The answer is that these poor people do not matter; what matters is productivity and capital. This leaves no room for what is moral. It is even stated that “Insurers argued that their job was to adhere to technical, mathematical, and market-based notions of fairness and accuracy and provide what was viewed—and is still viewed—as one of the most essential financial components of society.” That is simply saying that insurance is not in the business of morality but in the business of making money. On that note, I agree with the civil rights and feminist activists of the late 20th century who questioned whether “actuarial fairness—an individualistic notion of market-driven pricing fairness—was a valid way of structuring a crucial and fundamental social institution like insurance in the first place.” It simply should not be, because it deprives people of rights they should have.

  11. Nicholas Meyerback March 19, 2019 at 10:00 pm #

    AI is now taking on a destructive activity that characterized the insurance industry for much of the 20th century. Artificial intelligence utilizes algorithms to determine various actuarial outcomes such as insurance risk. Historically speaking, insurance companies have been known to use data in similar ways. Today, however, AI is accelerating and intensifying the negative outcomes for downtrodden communities that took place in the past, because AI is faster and more efficient.

    Insurance companies derive insurance policy rates based on risk. If the company finds that an individual is more likely to file a claim to reap the benefits of the policy, then the company will charge a higher premium to compensate for that risk or deny coverage altogether. To maximize profits, insurance companies often take up the practice of redlining, or drawing boundaries around areas that should be denied services (insurance). Redlining almost always leads to higher premiums for impoverished neighborhoods, which disproportionately affects people of color. This stems from collected data showing higher rates of crime in impoverished areas and therefore a higher chance that houses will be defaced or burglarized (home insurance). The result is these areas being deemed “uninsurable.” Inequality is only multiplied: families are unable to repair damage to their homes and cars, and will be drowned by medical bills when denied health insurance. The snowball effect continues when loans are denied for lack of credit and insurance, stifling individuals from gaining higher education and starting businesses.
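    The pricing mechanism described above can be sketched as a toy program. All of the rates, thresholds, and neighborhood names here are invented for illustration; the point is only that when pricing keys off an area-wide risk score, every applicant in a "high-risk" zone is surcharged or refused regardless of their individual record.

```python
BASE_PREMIUM = 1000.0        # hypothetical base annual premium
DENIAL_THRESHOLD = 0.8       # area risk score above which coverage is refused

# hypothetical claim-risk scores aggregated by neighborhood
NEIGHBORHOOD_RISK = {"elm_park": 0.2, "riverside": 0.9}

def quote(neighborhood):
    """Return a premium quote, or None if the whole area is deemed 'uninsurable'."""
    risk = NEIGHBORHOOD_RISK[neighborhood]
    if risk > DENIAL_THRESHOLD:
        return None                      # the entire area is redlined
    return BASE_PREMIUM * (1 + risk)     # surcharge scales with area-wide risk

print(quote("elm_park"))    # 1200.0
print(quote("riverside"))   # None: denied no matter the applicant's own history
```

    Notice that the applicant's own record never enters the function; that omission, not any explicit mention of race, is what lets a formula like this reproduce redlining.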

    The problem is worse with AI because AI relies solely on data. Computers are mathematically wired to generalize, and therefore to stereotype; that is just how they are programmed. Their flaw is that they can’t take into account outside factors or individual circumstances. For example, just because a statistic shows that a certain ethnic group is more likely to commit a crime, contract a disease, or be at risk of an accident doesn’t mean that everyone in that category is just as likely. Computers lack the ability to differentiate people within a community who may be mathematical outliers in terms of risk because of their own situation.
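    That outlier problem can be shown with a minimal sketch (the groups and rates below are made up): a model trained only on group-level statistics assigns every member the group average, so an individually low-risk person still gets the group's price.

```python
# hypothetical accident rates aggregated by demographic group
GROUP_ACCIDENT_RATE = {"group_a": 0.05, "group_b": 0.15}

def predicted_risk(person):
    # The model sees only the group label, never the individual's record.
    return GROUP_ACCIDENT_RATE[person["group"]]

# An individually careful driver who happens to belong to group_b:
careful_driver = {"group": "group_b", "actual_risk": 0.02}

print(predicted_risk(careful_driver))   # 0.15: priced by the group average, not the record
```

    The "actual_risk" field is ignored entirely, which is exactly the inability to see outliers described above.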
    An analogy can be drawn in baseball, where a similar battle between robots and human judgment is occurring. Major League Baseball teams are utilizing analytics more and more to generate the probability of player performance based on past data. Critics of baseball analytics claim that this strays away from the traditional use of the “eye test,” or what managers think of players based on what they actually see. This may be different from what the computer thinks, because a manager’s knowledge of a player’s injury history, technique, and performance in the clutch delivers a more accurate representation. AI used by insurance companies cannot understand context, the same way analytics cannot understand intangible factors of a player’s performance, such as the death of a family member right before a game.
