Il diritto di comprendere, il dovere di spiegare. Explainability e intelligenza artificiale costituzionalmente orientata

Authors

  • Elisa Spiller

DOI:

https://doi.org/10.15168/2284-4503-832

Keywords:

Algorithmic Decision-Making, right to explanation, Explainable AI (XAI), AI society, Constitutional rights and AI

Abstract

In recent years, explainable algorithms and AI have raised significant interest (and concerns) in the legal debate. However, adequate recognition of this elusive right to explanation remains ambiguous and uncertain, mainly because of the limits of the current legal framework. This paper addresses the problem from the perspective of constitutional law, examining whether and how the so-called law in action can consolidate rights and duties relating to explainable and comprehensible algorithms. Focusing on EU law, the paper considers the achievements in personal data protection, with particular attention to the innovations introduced by Art. 22 of Regulation (EU) 2016/679 (GDPR). In light of the shortcomings that have emerged in this field, the reflection then recalls the constitutional rationale of data protection and examines how national case law, using the same method, fosters a more solid recognition of an effective right to understand and a duty to explain algorithms.


Published

2021-06-16

How to Cite

Spiller E. Il diritto di comprendere, il dovere di spiegare. Explainability e intelligenza artificiale costituzionalmente orientata. BioLaw [Internet]. 16 June 2021 [cited 23 November 2024];(2):419-32. Available at: https://teseo.unitn.it/biolaw/article/view/1671

Issue

Section

Artificial Intelligence and Law - Essays