Audit4SG

Explore. Audit. Ask questions.

Explore and audit ethical claims of artificial intelligence for social good.

Audit comes from the Latin audiō: to hear, to listen to, to pay attention to.
You don’t need to be an expert to do that.

Audit4SG is an exploratory tool that sees ethics as emergent. Its premise is that ethics develops in relation. To audit the ethical claims of AI is to pay attention to those relations.

Audit4SG presents an incomplete ontology (knowledge graph) of entities and relations encircling AI systems. A user can interact with the ontology and build their own exploratory and reflective AI auditing methodology using the provocations or definitions corresponding to each entity or relation in the ontology. All the provocations are accompanied by at least one relevant reference to ground them. You can also explore the corresponding Zotero library which has been tagged according to the main ontology classes in case the tool inspires you to go down the AI ethics rabbit hole.

The tool is meant to be exploratory. It is not a checklist. It may take longer than a checklist and appear cumbersome, but it might help thoughts move in new directions. The tool aims to provoke, not to solve. The auditing questions are provocations to open up not just the black box of AI algorithms but the black box of the network of relationships that constitutes AI.

Keep in mind: this is a proof of concept (a pre-alpha release), an early intervention to interrupt the checklist mode of AI ethics auditing. We are still looking for ways to take this interruption forward.

Do you have questions, suggestions, or feedback? Please write to us at:
debarun[at]outlook.com
cheshtaarora[at]outlook.in


Recent Posts

  • Why OWL2?

    [This blog post is written by Debarun Sarkar and Cheshta Arora] In the last blog post, we stressed the principles that drove the project. Yet, at a more practical level, we were driven by the desire to intervene and play with semantic web technologies such as OWL2. This intervention we believe has produced interesting results, […]

  • Introducing the principles of Audit4SG

    [This blog post is written by Cheshta Arora and Debarun Sarkar] When we started this project, we had one aim: to see if we could operationalize relational ethics and create an AI ethics tool which pivoted around a notion of relationality. While the current tool is not perfect and might even seem useless, its design […]

How to

In the beginning you have two options:

  • Start with some broad topics that interest, concern, or fascinate you. Mix and match at varying degrees of granularity. 
  • Or explore the range of concerns that we could think of and mix and match as per your approach or situation. Choose only one aspect or choose them all! 

The web tool will throw you into a network that can be explored while examining ethical concerns around artificial intelligence. As a user, you decide which aspects are relevant to you and to the system or organization you are investigating or exploring. You can start stacking what interests you on the left side. You can also interrogate the fundamental ethical parameters and approaches that AI ethicists hold dear and sacrosanct. All is up for unpacking. If something does not make sense to you, let those aspects be. You can always come back to them later.

If the network feels overwhelming, try the search bar. You can drop in any word or phrase to see the closest semantic matches within the existing network of relationships. Note: the search uses the OpenAI API for semantic processing and recommends both nodes (classes) and relationships (objects). You can stack the desired cards on the left by interacting with the search results.
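For the curious, semantic matching of this kind typically works by comparing embedding vectors: the query and every ontology entry are mapped to vectors, and entries are ranked by cosine similarity. The sketch below illustrates that ranking step with tiny hand-made stand-in vectors; the entry names, vector values, and the `rank_matches` helper are illustrative assumptions, not the tool's actual implementation (in practice the vectors would come from an embeddings endpoint such as OpenAI's).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_matches(query_vec, entries):
    """Rank ontology entries (nodes and relationships) by similarity to the query."""
    scored = [(name, cosine_similarity(query_vec, vec)) for name, vec in entries.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy 3-dimensional vectors standing in for real embeddings.
entries = {
    "Fairness": [0.9, 0.1, 0.0],
    "Transparency": [0.1, 0.9, 0.1],
    "hasStakeholder": [0.2, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # imagine this is the embedding of a search phrase

best, score = rank_matches(query, entries)[0]
print(best)  # the closest semantic match in the toy network
```

The same pattern scales to a full ontology: embed every class and object property once, then embed each search phrase on the fly and rank.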

Once you have stacked up the desired cards on the left, you can save, export, or share your customized methodology for future reference. You can return to the web tool from the export at any point to edit the methodology. The export/share feature uses anonymized cookies, and no personal information about the user is collected.

For more research-minded users, cards also include references and quotes that inform the provocations. All references can be found in the tagged Zotero library. You can also refer to and download the underlying ontology.

There is a node called “not listed”. Select it (as a performative act) if your concerns are not covered within the existing network. The network of relationships is built on the “open world assumption”, i.e., anything not represented in the existing ontology may still exist.

Click the button below to open the tool

Funding

The project is supported by a grant from the Notre Dame-IBM Tech Ethics Lab, University of Notre Dame. Such support does not constitute an endorsement by the sponsor of the views expressed in any of the project publications. 

Team

Cheshta Arora
Researcher
ORCID: https://orcid.org/0000-0003-2470-7783

Debarun Sarkar
Researcher
ORCID: https://orcid.org/0000-0002-6873-4727
Website: https://debarun.noblogs.org/

Rocco Donà
Designer
Website: https://rocco-dona.com/

Tuhin Bhuyan
Developer
LinkedIn: https://www.linkedin.com/in/xtbhyn

Research Output

Journal articles (peer-reviewed)

Auditing Artificial Intelligence as a New Layer of Mediation: Introduction of a new black box to address another black box
Author(s):  (co-first) Cheshta Arora, Debarun Sarkar 
Published in:  Hipertext.net: Academic Journal on Digital Documentation & Interactive Communication special issue on The Impact of Artificial Intelligence in Communication, 26, 2023, Page(s) 65-68, ISSN 1695-5498 
Publisher:  Universitat Pompeu Fabra, Barcelona
DOI:  10.31009/hipertext.net.2023.i26.10


Conference proceedings (peer-reviewed)

Destabilizing Auditing: Auditing artificial intelligence as care-ful socio-analogue/digital relation
Author(s):  (co-first) Cheshta Arora, Debarun Sarkar 
Published in:  Conference Proceedings of the STS Conference Graz 2023 Critical Issues in Science, Technology, and Society Studies, 8–10 May 2023, 2024, Page(s) 46-56, ISBN 978-3-85125-976-6 
Publisher:  Verlag der Technischen Universität Graz
DOI: 10.3217/978-3-85125-976-6

Interfacing Artificial Intelligence for Social Good (AI4SG) and Relational AI Ethics: A Systematic Literature Review
Author(s):  (co-first) Cheshta Arora, Debarun Sarkar 
Published in:  CEUR Workshop Proceedings: Proceedings of the Conference on Technology Ethics 2023 – Tethics 2023, Vol. 3582, 2023, Page(s) 61-78, ISSN 1613-0073 
Publisher:  Sun SITE Central Europe, RWTH Aachen University
URL:  https://ceur-ws.org/Vol-3582/FP_06.pdf


Position papers (peer-reviewed)

Audit4SG: Democratizing Auditing Artificial Intelligence and Ontology Development Methodologies
Author(s):  Debarun Sarkar, Cheshta Arora, Tuhin Bhuyan 
Published in:  Proceedings of HCI for Digital Democracy and Citizen Participation, Interact 2023 
Publisher:  HCI for Digital Democracy and Citizen Participation
DOI:  NA

On the Injunction of XAIxArt: Moving beyond explanation to sense-making
Author(s):  (co-first) Cheshta Arora, Debarun Sarkar 
Published in:  1st International Workshop on Explainable AI for the Arts (XAIxArts), ACM Creativity and Cognition (C&C) 2023 
Publisher:  1st International Workshop on Explainable AI for the Arts
DOI:  10.48550/arXiv.2309.06227


Conference Presentations

(co-first) Cheshta Arora and Debarun Sarkar, “Interfacing Artificial Intelligence for Social Good (AI4SG) and Relational AI Ethics: A Systematic Literature Review” presented at the 6th Conference on Technology Ethics – Tethics at the University of Turku, Finland. October 18-19, 2023.

Debarun Sarkar, Cheshta Arora and Tuhin Bhuyan, “Audit4SG: Democratizing Auditing Artificial Intelligence and Ontology Development Methodologies” presented at the IFIP WG 13.8 workshop on HCI for Digital Democracy and Citizen Participation at INTERACT2023 at York, UK. August 29, 2023.

(co-first) Cheshta Arora and Debarun Sarkar, “On the Injunction of XAIxArt: Moving beyond explanation to sense-making” presented at the 1st Workshop on Explainable AI for the Arts – XAIxArts at the 15th ACM Conference on Creativity & Cognition on Gather. June 19-21, 2023. 

(co-first) Cheshta Arora and Debarun Sarkar, “Why Govern? Producing AI as an object of governance” presented at (un)Stable Diffusions organized by Milieux Institute for Arts, Culture and Technology, Concordia University, Tiohtià:ke/Montréal, Canada. May 23-24, 2023. 

(co-first) Cheshta Arora and Debarun Sarkar, “Destabilizing Auditing: Auditing as ‘care-ful socio-analogue/digital relation’” presented at 21st Annual STS Conference Graz 2023 “Critical Issues in Science, Technology and Society Studies” at Graz University of Technology, Austria, May 8-9, 2023. 


Resources

Zotero Library

Relational AI Ethics Ontology