March 12, 2019, by Hajo Greif

The Comparative Psychology of Artificial Intelligence – DML@MCTS Public Lecture by Cameron Buckner

The major accomplishment of DNNs is best illustrated by AlphaGo’s victory against Go grandmaster Lee Sedol in 2016 – nearly 20 years after Deep Blue’s 1997 victory against chess grandmaster Garry Kasparov. The combinatorial complexity of Go exceeds that of chess by several orders of magnitude, and cannot be mastered by means of ‘brute force’ computation of all possible moves of the game. Nor would patterns of rule-based reasoning suffice to achieve mastery of a game whose players rely on abstract but barely explicit or formalised strategies. Cameron Buckner (University of Houston, http://cameronbuckner.net/professional/) uses this comparison between the good-old-fashioned Deep Blue approach to chess and AlphaGo’s DNN-based approach to Go as a springboard for exploring the philosophical novelty of deep learning.
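The gap in combinatorial complexity can be made concrete with a back-of-the-envelope estimate. The branching factors and game lengths below are standard textbook approximations, not figures from the lecture:

```python
import math

# Rough, standard approximations (illustrative, not from the lecture):
# chess: ~35 legal moves per position over ~80 plies,
# Go:    ~250 legal moves per position over ~150 plies.
chess_branching, chess_plies = 35, 80
go_branching, go_plies = 250, 150

# Game-tree size grows as branching_factor ** plies; working with
# base-10 logarithms keeps the comparison to orders of magnitude.
chess_magnitude = chess_plies * math.log10(chess_branching)
go_magnitude = go_plies * math.log10(go_branching)

print(f"chess game tree ~ 10^{chess_magnitude:.0f}")
print(f"Go game tree    ~ 10^{go_magnitude:.0f}")
```

On these assumptions the Go game tree exceeds the chess game tree by well over two hundred orders of magnitude, which is why exhaustive search of the kind Deep Blue relied on is hopeless for Go.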

DNNs use partial computational analogues of the structure and operations of the human nervous system to develop gaming strategies, to recognise and classify images, and, in part, to simulate the stages of perceptual processing in the human brain. Remarkably, many DNNs accomplish this without being fed explicit models of either their task or the correct solutions. Even more remarkably, human observers will often be unable to tell how the individual operations of a DNN jointly generate a functioning and correct solution.

These properties of DNNs have given rise to charges of ‘epistemic opacity’: how shall we understand the decisions and the meanings of representations produced by a DNN if the underlying modelling steps are not tractable for us? If a DNN is tasked with simulating operations in the human nervous system, would an understanding of the modelling steps amount to an understanding of certain properties of the human mind?

Conversely, will apparent discrepancies between a DNN’s modelling steps and the processes in the human nervous system disqualify it as a workable model? In his talk, Buckner highlighted the shortcomings of any attempt to answer these questions with a clear ‘no’ and ‘yes’ respectively. After all, can we say that the human mind is epistemically transparent in terms of how it generates representations and decisions on a neuronal level? And are the accounts that an AI system may give of its decision procedures, so as to justify them to human observers, accurate representations of how it actually decided?

If DNNs are used in real-world applications that have real-world consequences, these modelling issues have direct ethical and political implications. For example, we may wish to know how and why an AI system mistook a pedestrian pushing a bicycle for a stationary bicycle (as in the fatal Uber autonomous vehicle accident in 2018), or how and why it reproduces patterns of social segregation and racial bias when classifying people and predicting their behaviours. We may also wish to know to what extent any opacity involved in such cases concerns the computational steps themselves – and to what extent it is (also) an issue of human decision-making and accountability. Buckner’s analysis of ‘adversarial examples’, opaque representations and skewed learning optima in DNNs provides guidance on these questions that is in equal measure practically and philosophically well informed. At the same time, the partiality and imperfection of DNN-based models as such will neither disqualify them as models nor allow us to ignore either their potential or their risks.
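The ‘adversarial examples’ mentioned above can be illustrated with a minimal sketch. The fast gradient sign method (FGSM) is one standard technique for constructing them – offered here as a generic illustration, not as an example from the lecture – and the toy classifier, weights and inputs below are invented for demonstration. The method nudges an input by a small step in exactly the direction that most increases the model’s loss:

```python
import numpy as np

# A toy logistic classifier standing in for a trained network;
# the weights are illustrative, not taken from any real model.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y, eps):
    """Fast gradient sign method: take a step of size eps in the
    direction that increases the cross-entropy loss for true label y."""
    p = predict_proba(x)
    grad = (p - y) * w          # d(loss)/dx for the logistic model
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.2, 0.1])          # benign input, true label 1
x_adv = fgsm_perturb(x, y=1, eps=0.6)   # small adversarial perturbation

print(predict_proba(x))      # high confidence in class 1
print(predict_proba(x_adv))  # confidence collapses after the nudge
```

The perturbation changes each input coordinate by at most `eps`, yet it flips the classifier’s verdict – a miniature version of the phenomenon that makes DNN decisions hard to trust and hard to interpret.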


Tags:  AI, Transparency, Philosophy, Software, Theory
Categories:  Updates, Blog, Science, Events, Research, Theory