from Fast Company
We know that algorithms can outperform humans across an expanding range of settings, from medical diagnosis and image recognition to crime prediction. However, an ongoing concern is the potential for automated approaches to codify existing human biases to the detriment of candidates from underrepresented groups.
For example, hiring algorithms use information on the workers a firm has previously hired in order to predict which job applicants it should now select. In many cases, relying on algorithms that predict future success based on past success will lead firms to favor applicants from groups that have traditionally been successful.
But this approach only works well if the world is static and we already have all the data we need. In practice, this simply is not the case. Women, for instance, have been entering STEM fields in record numbers, but if firms used their historical employment data to decide whom to hire, they would have very few examples of successful female scientists and engineers. At the same time, the qualities that predicted success yesterday may not continue to apply today: just think of how remote work during the pandemic has changed the nature of teamwork, communication, and teaching.
So instead of designing algorithms that view hiring as a static prediction problem, what if we designed algorithms that view the challenge of finding the best job applicants as a continual learning process? What if an algorithm actively seeks out applicants it knows less about, in order to continuously improve our understanding of which candidates will be a good fit?
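One standard way to formalize "actively seek out applicants you know less about" is an upper-confidence-bound (UCB) rule from the multi-armed bandit literature: score each option by its observed success rate plus a bonus that grows when data is scarce. The sketch below is an illustration under that assumption, not the method of any real hiring system; the groups, counts, and success rates are hypothetical.

```python
import math

def ucb_score(successes, trials, total_trials, c=1.0):
    """Observed success rate plus an exploration bonus that is
    larger for options with fewer past observations."""
    if trials == 0:
        return float("inf")  # never-tried groups get explored first
    mean = successes / trials
    bonus = c * math.sqrt(math.log(total_trials) / trials)
    return mean + bonus

# Toy history: (successful hires, total hires) per applicant group.
# These numbers are invented for illustration only.
history = {
    "group_A": (80, 100),  # well represented in past hiring data
    "group_B": (4, 5),     # few observations, so high uncertainty
}
total = sum(trials for _, trials in history.values())

scores = {g: ucb_score(s, t, total) for g, (s, t) in history.items()}
best = max(scores, key=scores.get)

# Both groups have the same observed success rate (0.8), but group_B
# receives a larger exploration bonus because the firm has seen far
# fewer examples from it.
print(best, {g: round(v, 3) for g, v in scores.items()})
```

Under this rule, a purely backward-looking predictor would treat the two groups as equivalent, while the exploration bonus nudges the firm to gather more data on the group it knows less about, which is exactly the shift from static prediction to continual learning described above.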