Empowering Vulnerability: Decolonizing AI Ethics for Inclusive Epistemological Innovation
DOI:
https://doi.org/10.15168/2284-4503-3299

Keywords:
Discriminatory-sensitive bias, algorithmic causality, bias-based determinism, AI justice, AI decolonisation, vulnerability and empowerment

Abstract
Recent studies reveal a convergence in AI ethics guidelines, emphasising the emergence of ‘fundamental principles’ for responsible AI. However, dissenting voices argue that these principles are insufficient to address the social impacts of AI, revealing a disconnect between ideals and implementation. This article indirectly explores the necessity of AI ethics. It delves into the complexity of cataloguing the discriminatory biases generated throughout the lifecycle of AI systems, analysing three types of causal reasoning about discrimination: technical, counterfactual, and constructivist/genealogical. From this exploration, the article derives two further arguments. Firstly, it calls for moving beyond bias-based determinism as the sole approach to evaluating discrimination caused by AI systems, thereby recognising the influence of political and social dynamics, including strong appeals for AI decolonisation. Secondly, it argues for reconsidering advocacy for vulnerable subjects not merely as a claim on behalf of denied or marginalised identities, but as a form of epistemic engagement with the world and with others. In this openness, where machine ethics also resides, vulnerability becomes a central epistemological construct for fostering inclusive technological innovation, a decisive element in the context of the growing symbiosis between society and AI systems.
Copyright (c) 2024 Università degli Studi di Trento
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.