Computational Responsibility for Trustworthy Citizen-Centric AI
Wed Nov 1 2023
Developing methods to determine who is responsible for decisions made by or in collaboration with AI systems
As AI systems become more integrated into society, it is crucial to develop methods that ensure these technologies are reliable and trustworthy for citizens. This ongoing project focuses on using formal models to analyse AI system behaviour both before and after deployment. By collaborating with scholars in the law and philosophy of AI, the researchers aim to ground these techniques in ethical principles and ensure their practical applicability. The project centres on the design and development of computational techniques to formally specify, reason about, and determine responsibility in AI systems. These computational models serve as accountability tools and can help shape the development of citizen-centric AI aligned with social values. This interdisciplinary effort seeks to enable the effective, socially beneficial integration of AI through computational techniques rooted in law and ethics.