In this video interview, Charlie Taben, editor of the Blog’s Philosophy and Technology series, talks with the authors of Why Machines Will Never Rule the World: Barry Smith, Professor of Philosophy and Professor of Biomedical Informatics, Computer Science and Engineering, and Neurology at the University at Buffalo, and Jobst Landgrebe, a scientist and entrepreneur with a background in philosophy, mathematics, neuroscience, and bioinformatics. They discuss their backgrounds, their motivation for writing the book, and their argument’s philosophical implications.
What is this book about?
This book is about Artificial Intelligence (AI), which we conceive as the
application of mathematics to the modeling (primarily) of the functions of the
human brain. We focus specifically on the question of whether modeling of this
sort has limits, or whether—as proposed by the advocates of what is called
the ‘Singularity’—AI modeling might one day lead to an irreversible and
uncontrollable explosion of ever more intelligent machines.
As concerns the current state of the art, AI researchers are, for understandable
reasons, immensely proud of their amazing technical discoveries. It therefore
seems obvious to all that there is an almost limitless potential for further,
equally significant AI discoveries in the future. Enormous amounts of funding
are accordingly being invested in advancing the frontiers of AI in medical
research, national defense, and many other areas. If our arguments hold water,
then a significant fraction of this funding may be money down the drain. For
this reason alone, it is probably no bad thing for the assumption of limitless
potential for AI progress to be subjected to the sort of critical examination
that we have attempted here.
To do our job properly, we found it necessary to draw not merely on
philosophy, mathematics, and computer science, but also on linguistics,
psychology, anthropology, sociology, physics, and biology. Philosophers we deal
with at some length include David Chalmers, Nick Bostrom, and Max Scheler. We
raise what we believe are powerful arguments against the possibility of
engineering machines that would possess an intelligence that would equal or
surpass that of humans. These arguments have immediate implications for claims,
such as those of Elon Musk (and Bostrom), according to which AI could become ‘an
immortal dictator from which we would never escape’. Relax. Machines will not
rule the world.