Ethical Challenges in Designing a Machine Learning System
Abstract
Machine learning is a specific type of artificial intelligence (AI) in which a machine or software learns through statistical analysis of data, pattern recognition, and prediction. Some machine learning systems (e.g. those used in predictive policing and justice, banking, employment, child abuse prevention) have a degree of autonomy in decision-making and action. However, as Josh Simons (2023) argued, the computer scientists who design and develop these systems lack the requisite knowledge to fully comprehend the nuances of decision-making as experienced by social workers, police officers, and judges (the experience gap). Moreover, they are often unable or unwilling to justify their design choices to the citizens whose lives they shape (the accountability gap). Furthermore, the professionals who will use the predictive tools lack the requisite technical expertise to fully comprehend computer science terminology (the language gap). Consequently, the task of designing an efficient and ethical predictive tool is quite difficult. Even if we leave these problems aside, the process of machine learning itself raises serious ethical questions. It involves moral and political choices in the design of the tools that use data to generate predictions. These choices concern values, goals, and priorities, and have the potential to reinforce social injustice. A typical response is that we should try to avoid these risks by embedding a set of ethical rules in the design of the predictive tools (ethics by design). However, I will argue that this approach faces other significant ethical challenges.
Keywords: artificial intelligence, AI ethics, machine learning, social injustice, moral agency