Improving transparency in AI by exploring new avenues for human feedback, robustness, and documentation

  • Formal specification of AI incidents or controversies, grounded in a taxonomy of potential algorithmic harms and risks (see the sketch after this list)
  • Collective deliberation and sensemaking about AI incidents or controversies
  • Real-world action and changes to the target algorithmic systems, taken up by MLOps teams on the basis of that human feedback
  • Consumers have a way to document, collect evidence, and make verifiable claims about their experiences when interacting with AI systems
  • AI builders have a neuro-symbolic formal logic layer around an ML model that gives them an improved understanding of its downstream impacts
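
To make the first bullet more concrete, here is a minimal sketch of what a formal incident specification grounded in a harm taxonomy could look like as a data structure. Everything here is a hypothetical illustration: the `AlgorithmicHarm` taxonomy, the `AIIncident` fields, and the example values are assumptions for the sake of the sketch, not an existing schema.

```python
# A minimal sketch (hypothetical names throughout) of a formal,
# machine-readable AI incident specification grounded in a harm taxonomy.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List


class AlgorithmicHarm(Enum):
    """Illustrative (not exhaustive) taxonomy of algorithmic harms and risks."""
    DISCRIMINATION = "discrimination"
    PRIVACY_VIOLATION = "privacy_violation"
    MISINFORMATION = "misinformation"
    LOSS_OF_AGENCY = "loss_of_agency"
    ECONOMIC_HARM = "economic_harm"


@dataclass
class AIIncident:
    """A structured record of an AI incident or controversy."""
    title: str
    description: str
    system: str                       # the algorithmic system involved
    harms: List[AlgorithmicHarm]      # harms drawn from the shared taxonomy
    evidence_urls: List[str] = field(default_factory=list)
    reported_on: date = field(default_factory=date.today)


# Example: a consumer documents their experience with a loan-scoring model.
incident = AIIncident(
    title="Unexplained loan denial",
    description="Applicant denied credit with no human-reviewable rationale.",
    system="acme-loan-scoring-v2",
    harms=[AlgorithmicHarm.DISCRIMINATION, AlgorithmicHarm.LOSS_OF_AGENCY],
    evidence_urls=["https://example.org/evidence/123"],
)
print(incident)
```

Because the harms field draws from a shared, enumerated taxonomy rather than free text, incident reports like this could be aggregated, compared, and handed off to MLOps teams for the collective deliberation and real-world action the later bullets describe.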

Senior Trustworthy AI Fellow at Mozilla Foundation, working on improving AI contestability, transparency, accountability, and human agency
