Introducing the principles of Audit4SG

[This blog post is written by Cheshta Arora and Debarun Sarkar]

When we started this project, we had one aim: to see whether we could operationalize relational ethics and build an AI ethics tool that pivots around a notion of relationality. While the current tool is not perfect and might even seem useless, its design rests on the rejection of a series of ideas we knew we did not want to implement.

We did not want the tool:

  • to have a checklist design
  • to give quick, easy answers
  • to standardize ethics
  • to be used without reflection


During the project, we identified the following abstract values around which an AI ethics tool grounded in relational ethics ought to pivot:

  • it ought to be exploratory
  • it ought to help ask questions
  • it ought to allow for emergent ethics
  • it ought to provoke the user to think
  • it ought to provoke the user to care
  • it ought to invite the user to step sideways


These values informed our discussions on the tool's design. Some features of the tool were included to cover all or some of these initial values. For instance, each entity has a corresponding question and a longer reference that provokes the user to think rather than explaining things to them.


An ideal user of this tool would be someone who accepts our invitation to slow down and think ethics rather than be told what ethics is. The entity 'ethic' has a corresponding question, 'What is ethics?', which may seem redundant, too broad, or useless. However, underpinning the design is the idea that to think ethics is to never lose sight of that question. Each new sight, each new problem, each new encounter with the machine can throw us into a whirlpool where the only way out is to ask, again and yet again, the question 'What is ethics?' in order to remember (and reinvent) the answer. Ethics, of course, is not just the moral good or the ethical machine but a 'mode of inquiry'.
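To make the entity-question-reference pairing concrete, here is a minimal sketch of how such a structure might be represented. The TypeScript interface, field names, and sample content below are our illustrative assumptions, not the tool's actual data model.

```typescript
// A hypothetical sketch of the entity-question-reference structure
// described above. Names, fields, and content are illustrative
// assumptions, not the tool's actual schema.
interface Entity {
  name: string;      // e.g. the entity "ethic"
  question: string;  // the open question the entity poses to the user
  reference: string; // a longer text meant to provoke thought, not to explain
}

const entities: Entity[] = [
  {
    name: "ethic",
    question: "What is ethics?",
    reference:
      "Ethics is not just the moral good or the ethical machine but a " +
      "'mode of inquiry': a question to be asked again with each new " +
      "encounter with the machine.",
  },
];

// The tool surfaces questions rather than answers.
for (const entity of entities) {
  console.log(`${entity.name}: ${entity.question}`);
}
```

The point of the sketch is the shape of the data: the question, not an answer, is the primary payload each entity carries.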


Working on the tool over the past year has led us to conclude that we will never find an ideal user for it. We began with the goal of designing for a targeted user defined as 'anyone whatsoever'. We remain sceptical of this initial goal: the tool lacks the simplicity and linearity needed to target anyone whatsoever, yet it has no well-defined user other than anyone wishing to relate to artificial intelligence and the ethical concerns surrounding it. The tool will not be able to provoke every user to fully step sideways. The design is probably not experimental, speculative, or user-friendly enough. These limitations of the web tool were evident to us from the start, and failure was never construed as a problem.

“The tool was funded by a grant from the Notre Dame-IBM Tech Ethics Lab, University of Notre Dame. Such support does not constitute an endorsement by the sponsor of the views expressed in any of the project publications.”


However, we are quite sure that it wouldn’t hurt IBM or any of its funding intermediaries to have supported another useless experiment in tool design to fix ‘AI’. And we know that failures and seriously silly design can tell us more about the world than perfect machines and perfect solutions.


To care about AI is not to seek solutions to regulate or govern it, but to care enough to understand how it relates to the world around us. This is a concern we will touch on in another blog post, among other non-failures of the project: new openings that we believe the project has allowed us to pursue.

