Dolores Romero Morales:
Opening black box models for fairer decision making
Abstract:
Despite excellent accuracy, Artificial Intelligence and Machine Learning algorithms are often criticised for their lack of transparency. When applied to sensitive situations with consequential impact on citizens’ lives, including social benefits allocation or parole decisions, the opaqueness of these algorithms may hide unfair outcomes for risk groups. Transparency has long been required by regulators for models aiding, for instance, credit scoring, and since 2018 the EU has extended this requirement by imposing the so-called right-to-explanation in algorithmic decision-making. From the Mathematical Optimization perspective, this means that we need to strike a balance between several objectives, namely accuracy, transparency, and fairness. In this presentation, we will navigate through some novel techniques that embed transparency and fairness in the construction of Data Science models. This includes the ability to provide global, local and counterfactual explanations, to model cost-sensitivity and fairness requirements, and to deal with complex data such as functional data.
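As an illustrative sketch (not drawn from the talk itself), counterfactual explanations are commonly cast as a mathematical optimization problem; the notation below (instance $x_0$, trained model $f$, cost $c$, target outcome $y^{*}$, feasible set $\mathcal{X}$) is generic and assumed for illustration only:

\begin{equation*}
\min_{x \in \mathcal{X}} \; c(x, x_0) \quad \text{s.t.} \quad f(x) = y^{*},
\end{equation*}

where $c(x, x_0)$ measures the cost of moving from $x_0$ to the counterfactual $x$, $y^{*}$ is the desired prediction, and $\mathcal{X}$ encodes plausibility or actionability constraints. Cost-sensitivity and fairness requirements can, in principle, enter as additional terms in the objective or as further constraints, which is one way the accuracy–transparency–fairness trade-off mentioned above can be made explicit.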