Developing learning methods that do not discriminate against subgroups of the population is the central goal of algorithmic fairness. One way to reach this goal is to learn a data representation that is expressive enough to describe the data, yet fair enough to prevent discrimination against subgroups by any model trained on top of it. This problem becomes even more challenging when the data are graphs, which are nowadays ubiquitous and model both entities and the relationships between them. In this work we measure fairness according to demographic parity, which requires the probability of each possible model decision to be independent of the sensitive information. We investigate how to impose this constraint in the different layers of a deep graph neural network through two different regularizers: the first is based on a simple convex relaxation, and the second is inspired by a Wasserstein-distance formulation of demographic parity. We present experiments on a real-world dataset showing the effectiveness of our proposal.
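To make the two fairness penalties concrete, the following is a minimal NumPy sketch (not the paper's implementation) of how such regularizers can be computed for binary sensitive information: the first mirrors a simple convex relaxation of demographic parity (the absolute gap between group-wise mean scores), and the second a 1-D Wasserstein distance between the two groups' score distributions, estimated from matched empirical quantiles. Function names and the quantile grid are illustrative assumptions.

```python
import numpy as np

def dp_gap(scores, s):
    # Convex-relaxation-style penalty: absolute difference between
    # the mean model scores of the two sensitive groups (s in {0, 1}).
    # Demographic parity holds when this gap is zero.
    return abs(scores[s == 0].mean() - scores[s == 1].mean())

def wasserstein_1d(scores, s, n_quantiles=100):
    # 1-D Wasserstein distance between the two groups' score
    # distributions, approximated by averaging the absolute
    # difference of matched empirical quantiles.
    q = np.linspace(0.0, 1.0, n_quantiles)
    qa = np.quantile(scores[s == 0], q)
    qb = np.quantile(scores[s == 1], q)
    return np.mean(np.abs(qa - qb))
```

Either quantity can be added to the training loss as a weighted regularization term; a perfectly group-independent score distribution drives both penalties to zero, while the Wasserstein term also penalizes distributional differences beyond the means.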