7 September 2021

Event Series of the AI Research Group

Winter Semester Program 2022-23

The guiding aim of the AI research group (CST & IWE) is to provide a forum in which researchers can gain new perspectives on their own work and regular input from across the university on AI-related topics.

We are now meeting in the CST building (Konrad-Zuse-Platz 1-3, 3rd floor). All events will be held in a hybrid format.

If you or someone from your team wishes to join, please send an email to Dr. Charlotte Gauvry (cgauvry@uni-bonn.de), and she will add you to the mailing list. Should you wish to present or discuss a paper, a work in progress, or any question regarding AI, please let her know.

Here is the full program, which will be updated regularly.

Next sessions

Tuesday, Nov 15, 11:30 am - 1 pm: Dr. Uwe Peters, "Bias Toward WEIRD People in Explainable AI Research"

Zoom link: 

https://uni-bonn.zoom.us/j/62966644802?pwd=cmlRNzBDVTJIcEZkcWZHN0cwZDM5UT09

Meeting ID: 629 6664 4802

Password: 100330

Abstract: Many artificial intelligence (AI) models operate in ways too complex for humans to understand. They are thus often equipped with explainable AI (XAI) systems that are specifically designed to make the outputs of complex models intelligible to human users. We provide empirical evidence that many people’s explanatory needs may significantly differ across cultures and that these cultural differences are relevant for but currently overlooked in XAI research. After systematically reviewing and statistically analyzing a large number of XAI user studies (n = 220), we found that 91% of them tested predominantly only Western, Educated, Industrialized, Rich, and Democratic (WEIRD) populations, but most (64%) contained generalizations of results that went much beyond these populations. There was also no statistically significant correlation between these generalizations and more diverse samples, and most studies (94%) did not indicate any awareness of cultural variations in explanatory needs. In a subsequent meta-review (n = 39), we found that most reviews of XAI user studies, too, contained overgeneralizations of results and did not mention WEIRD population sampling or cultural differences in XAI needs. Our analyses highlight important oversights in XAI research and offer the first quantitative evidence of a generalization bias toward WEIRD people in XAI.
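For readers curious about the kind of association check mentioned in the abstract, the sketch below shows one common way to test whether overgeneralization co-occurs with less diverse sampling: a chi-square test of independence in Python (scipy). The counts and variable names are illustrative placeholders, not data from the paper, and the test shown is an assumed stand-in, not necessarily the authors' actual analysis.

# Hypothetical sketch: is overgeneralization of results associated with
# WEIRD-only sampling across coded user studies? Counts are made up for
# illustration and do not come from the paper.
from scipy.stats import chi2_contingency

# Rows: sample type (WEIRD-only vs. more diverse)
# Columns: study overgeneralizes its results (yes / no)
observed = [
    [130, 70],  # WEIRD-only samples
    [13, 7],    # more diverse samples
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A non-significant p-value would mirror the abstract's point that
# generalization claims do not track how diverse the study samples are.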

Past sessions

Friday, Nov 4, 2-4 pm: Dr. Audrey Borowski, "Gilles Deleuze: Individual, Dividual, Posthuman"

Friday, Oct 28, 2-4 pm: Dr. Olivia Erdelyi, "AI regulation"

Tuesday, Oct 18, 3:30 - 5 pm (IWE): Dr. Apolline Taillandier, "AI in a different voice: Rethinking Computers, Learning, and Gender Difference at MIT in the 1980s"

 
