07. June 2022

Conference on AI in Science at LMU Munich: Foundations and Applications

On June 9 and 10, the conference "AI in Science: Foundations and Applications" will be held at LMU Munich, where Huw Price, Apolline Taillandier and Uwe Peters have been invited to present. More information can be found here.

Artificial intelligence (AI) is all the rage these days, promising many new innovations that will make our lives easier. It is also significantly changing the way we do science, raising several fundamental and methodological questions, such as the role of bias, explainability, and the limits of empirical methods. Addressing these questions requires an interdisciplinary effort to which various sciences, from computer science to social science to philosophy, can contribute. This workshop brings together relevant researchers from Cambridge and LMU Munich to engage in the relevant discussions. It is part of the project "Decision Theory and the Future of AI", funded by the Cambridge-LMU Strategic Partnership Initiative. The workshop is also part of the Research Focus Next Generation AI at LMU’s Center for Advanced Studies (CAS).

Huw Price is part of the organizing team, and Apolline Taillandier and Uwe Peters will give the following presentations:

Presentation on June 9: Uwe Peters (Cambridge/Bonn): Regulative Reasons: On the Difference in Opacity between Algorithmic and Human Decision-Making


Many artificial intelligence (AI) systems used for decision-making are opaque in that the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here I contend that this argument overlooks that human decision-making is often significantly more transparent than algorithmic decision-making. This is because when people report the reasons for their decisions, their reports have a regulative function that prompts them to conform to these ascriptions. AI explanation systems lack this feature. Overlooking it when comparing algorithmic and human decision-making can result in underestimations of the transparency of human decision-making and in the development of explainable AI that may mislead people by activating generally warranted beliefs about the regulative dimension of reason giving.

Presentation on June 10: Apolline Taillandier (Cambridge/Bonn): Feminist Psychology and Computer Programming at MIT


This paper studies how the computer language Logo, first developed at MIT in the late 1960s as an educational programme for teaching mathematics, came to be understood as a feminist tool. Logo was initially described as an "applied artificial intelligence" project (McCorduck, 2004) that would contribute to popularising a pluralist, democratic approach to programming. School experiments with Logo brought evidence that different kinds of children practised programming in different ways: while boys often developed a traditional 'hard' style of programming, girls often programmed in a style that Logo advocates called 'tinkering' or bricolage. Drawing from Seymour Papert and Sherry Turkle's writings and archival material, I trace how Logo was recast over the 1980s as a tool for undermining sexist norms within computer science. This sheds light on how feminist debates about epistemology and morality contributed to reshaping the terms of gender politics in US academia.
