Big brother is «gendering» you. Il diritto antidiscriminatorio alla prova dell’intelligenza artificiale: quale tutela per il corpo digitale?

Authors

  • Luca Giacomelli

DOI:

https://doi.org/10.15168/2284-4503-422

Keywords:

Anti-discrimination law, privacy, algorithms, big data, comparative law

Abstract

Recent advancements in artificial intelligence are revolutionizing how easily and readily organizations can collect data and make «data-driven» decisions across institutional contexts. Companies and institutions can now link a great variety of data sources, sometimes innocuous on their own but not in the aggregate, to inform an increasingly broad range of decisions tied to activities such as credit reporting, advertising, hiring, and judging. In this article, I analyse how data-driven decisions can discriminate, explaining how even unprejudiced algorithms and decision-makers can generate biased decisions, and I assess the effectiveness of anti-discrimination law categories in the face of the discriminatory results of automated decisions. Although many risks of big data are well known, other problems can arise from the refusal to acknowledge or collect certain data. In fact, under the assumption of AI neutrality, we tend to ignore or hide, rather than prevent, discrimination, because decisions can be biased even in the absence of data on socially disadvantaged groups. This leads us to ask the following questions: What legal remedies are available to unmask the discrimination of an algorithmic decision? How can we protect our privacy and fundamental rights?

Published

2019-07-17

How to cite

Giacomelli L. Big brother is «gendering» you. Il diritto antidiscriminatorio alla prova dell’intelligenza artificiale: quale tutela per il corpo digitale?. BioLaw [Internet]. 17 July 2019 [cited 18 April 2024];(2):269-97. Available at: https://teseo.unitn.it/biolaw/article/view/1372

Issue

Section

Essays