Il diritto di comprendere, il dovere di spiegare. Explainability e intelligenza artificiale costituzionalmente orientata
DOI: https://doi.org/10.15168/2284-4503-832
Keywords: Algorithmic Decision-Making, right to explanation, Explainable AI (XAI), AI society, Constitutional rights and AI
Abstract
In recent years, explainable algorithms and AI have raised significant interest (and concerns) in the legal debate. Yet adequate recognition of this elusive right to explanation remains ambiguous and uncertain, mainly because of the limits of the current legal framework. This paper addresses the problem from the perspective of constitutional law, examining whether and how the so-called law in action can consolidate rights and duties relating to explainable and comprehensible algorithms. Focusing on EU law, the paper first considers the achievements in personal data protection, in particular the innovations introduced by art. 22 of reg. EU 679/2016 (GDPR). In light of the shortcomings that have emerged in this field, the reflection then recalls the constitutional rationale of data protection and examines how national case law, using the same method, fosters a more solid recognition of an effective right to understand and a duty to explain algorithms.
Published
2021-06-16
How to Cite
1.
Spiller E. Il diritto di comprendere, il dovere di spiegare. Explainability e intelligenza artificiale costituzionalmente orientata. BioLaw [Internet]. 2021 Jun. 16 [cited 2024 Nov. 23];(2):419-32. Available from: https://teseo.unitn.it/biolaw/article/view/1671
Issue
No. 2 (2021)
Section
Artificial Intelligence and Law - Essays