February 27, 2019, by: Hajo Greif

Making Deep Neural Networks Transparent

What are fair comparisons between the performance of Deep Neural Networks (DNNs) and the cognitive accomplishments of human beings and animals? Which idealizations and simplifications are permissible, and where might DNNs fall short of biological plausibility? What are the implications of so-called ‘adversarial examples’? What is the nature of epistemic opacity in DNNs?
These questions concern new methods in machine learning and are relevant on both philosophical and societal levels, yet they deserve more attention and analysis than they have received so far.
These are also some of the questions to be discussed in a workshop on Deep Neural Networks with Cameron Buckner, Department of Philosophy, University of Houston, hosted by DML@MCTS on Thursday 28 February and Friday 1 March 2019 (from 10:00 AM onwards on both days, in room 370, Augustenstraße 46). Some of these questions will also be addressed in his public lecture at 17:00 on Friday 1 March, in room 0502.01.229, Arcisstraße 21.

Tags:  Epistemology, AI, Philosophy
Categories:  Updates, Events