Mark James Findlay
Singapore Management University - Yong Pung How School of Law; Singapore Management University - Centre for AI & Data Governance
Josephine Seah
Singapore Management University - Centre for AI & Data Governance
Date Written: May 12, 2020
Abstract
In response to fears around the risky and irresponsible development of artificial intelligence (AI), the prevailing approach from states, intergovernmental organisations, and technology firms has been to roll out a ‘new’ vocabulary of ethics. This self-regulatory approach relies on top-down, broadly stated ethics frameworks intended to moralise market dynamics and elicit socially responsible behaviour among top-end developers and users of AI software. At present, it remains an open question how well these principles are understood and internalised by AI practitioners throughout the AI ecosystem. The promotion of AI ethics has so far proceeded with little input from this group, despite their essential role in choosing and applying this emerging ethical language and its associated tools in their project designs and related decision-making. As AI principles shift from normative organisational guides to operational practice, this paper offers a methodology, a ‘shared fairness’ approach, aimed at addressing this gap. The goal of this method is to identify AI practitioners’ needs when confronting and resolving ethical challenges, and to find a ‘third space’ in which their operational language can be married with that of the more abstract principles that presently remain at the periphery of their working lives. We offer a grassroots approach to operational ethics based on dialogue and mutualised responsibility. The methodology is centred on conversations intended to elicit practitioners’ perceptions of how ethical responsibility is attributed and distributed across key value-laden operational decisions, to identify when those decisions arise and what ethical challenges they raise, and to engage practitioners in a language of ethics and responsibility that enables them to internalise ethical responsibility. By commencing with personal, facilitated conversations, the methodology bridges responsibility imbalances rooted in structural decision-making power and elite technical knowledge, returning the ethical discourse to those meant to give it meaning at the sharp end of the ecosystem. By attending to practitioners, our project aims to better understand ethics as a socio-technical practice, proceeding from the appreciation that, as a realistic force in regulation, ethics are dynamic and interdependent.
Keywords: artificial intelligence, AI ethics, ethics
Suggested Citation:
Findlay, Mark James and Seah, Josephine, An Ecosystem Approach to Ethical AI and Data Use: Experimental reflections (May 12, 2020). SMU Centre for AI & Data Governance Research Paper No. 2020/03, Available at SSRN: https://ssrn.com/abstract=3597912 or http://dx.doi.org/10.2139/ssrn.3597912