Researchers Want Guardrails to Help Prevent Bias in AI

from Wired Artificial intelligence has given us algorithms capable of recognizing faces, diagnosing disease, and, of course, crushing computer games. But even the smartest algorithms can sometimes behave in unexpected and unwanted ways—for example, picking up gender bias from the text or images they are fed. A new framework for building AI programs suggests a way to prevent aberrant behavior in machine learning by specifying guardrails in the code from the outset. It aims to be particularly useful for nonexperts deploying AI, an increasingly common scenario as the technology moves out of research labs and into the real world. The […]

Continue reading

Report: Amazon’s AI Recruiter Favored Men

from Axios An algorithmic recruiter meant to help Amazon find top talent was systematically biased against women, a Reuters investigation found. Why it matters: This is a textbook example of algorithmic bias. By learning from and emulating human behavior, a machine ended up as prejudiced as the people it replaced. The details: Amazon's experimental tool, which dates back to 2014, was trained on 10 years of job applications, most of which came from men, reports Reuters' Jeffrey Dastin.

* The system concluded that men were better candidates for technical jobs.
* In 2015, Amazon began to realize that the system was […]

Continue reading

Your Artificial Intelligence Is Not Bias-Free

from Forbes Machines have no emotions. So, they must be objective — right? Not so fast. A new wave of algorithmic issues has recently hit the news, bringing AI bias into sharper focus. The question now is not just whether we should allow AI to replace humans in industry, but how to prevent these tools from perpetuating racial and gender biases that harm society if and when they do. First, a look at bias itself. Where do machines get it, and how can it be avoided? The answer is not as simple as it seems. To […]

Continue reading