When it comes to fairness, artificial intelligence (AI) is imperfect.
Despite their apparent superpowers, the AI-based algorithms that drive important decision-making can carry, or even amplify, the biases of their human creators. These invisible biases can lead to unintended consequences when AI is used to combine the preferences of multiple decision makers to rank candidates for jobs, scholarships, loans, awards, or other distinctions.
But Elke Rundensteiner, William Smith Dean's Professor in the Department of Computer Science and founding director of the Data Science program at WPI, and her students are developing a way to address this problem with algorithms that help ensure fairness in aggregated rankings that impact people in profound ways. The work has been supported by a grant of nearly $500,000 from the National Science Foundation.
“It’s a difficult problem to integrate preferences by multiple decision makers, who may harbor biases, into a combined consensus ranking and also make sure that this aggregated ranking fairly includes diverse individuals from underrepresented groups,” Rundensteiner says. “As AI plays a larger role in society, further impacting our way of life, we need effective mechanisms to achieve both fairness and consensus in rankings.”
Rankings are used everywhere in decisions that can alter individuals’ lives, and they are often created by combining the preferences of individual decision makers. Committee members who are interviewing job candidates, for example, might each submit their preferred ordering of candidates to an AI-based program, which would then produce an aggregated ranking of the candidates.
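To make the idea of combining individual preferences concrete, here is a minimal sketch of one classic rank-aggregation baseline, the Borda count. This is an illustration only, not the fairness-aware method the WPI team is developing, and the candidate names and function are hypothetical.

```python
# Illustrative sketch: Borda-count rank aggregation, a classic baseline.
# This is NOT the WPI team's fairness-aware algorithm; names are hypothetical.
from collections import defaultdict

def borda_aggregate(rankings):
    """Combine several ranked lists into one consensus ranking.

    In each submitted ranking of n candidates, the candidate in
    position i earns (n - i - 1) points; higher totals rank first.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += n - position - 1
    # Sort by total score (descending), breaking ties alphabetically.
    return sorted(scores, key=lambda c: (-scores[c], c))

# Three committee members each submit a preferred ordering.
committee = [
    ["Avery", "Blake", "Casey"],
    ["Blake", "Avery", "Casey"],
    ["Avery", "Casey", "Blake"],
]
print(borda_aggregate(committee))  # → ['Avery', 'Blake', 'Casey']
```

A method like this reflects only the majority's preferences; the research described here aims to go further, adjusting the consensus so that candidates from underrepresented groups are fairly represented in the final ranking.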