Cambridge Festival asks: can robots ever truly mimic humans?


16-02-2021

Could AI help us reach a more equitable and fair society? Should chatbots and AI be built to care and have empathy? If such machines are built, should we consider their moral and legal status? Or are we giving up too much control to machines that are too stupid to handle the tasks they are already charged with?

These questions and more are set to be debated during a series of fascinating events that can be viewed by anyone anywhere in the world during the inaugural Cambridge Festival (26 March – 4 April). Each event, including the launch of a new book, explores artificial intelligence (AI) and its ever-greater impact on how we communicate and interact.

Speakers and moderators include Dr Henry Shevlin, Dr John Zerilli, Dr Kerry Mackereth and Dr Eleanor Drage from the Leverhulme Centre for the Future of Intelligence; Gareth Mitchell of BBC Digital Planet; Dr James Weatherall, Vice President of Data Science & Artificial Intelligence, R&D, AstraZeneca; Professor Mihaela van der Schaar, Director of the Cambridge Centre for AI in Medicine; Anne Phelan, CSO of Benevolent AI; Dr Junaid Bajwa, Chief Medical Scientist at Microsoft Research; and Jonnie Penn, a bestselling non-fiction author and artificial intelligence researcher.

The Cambridge Festival brings together the hugely popular Cambridge Science Festival and the Cambridge Festival of Ideas to host an extensive programme of over 350 events that tackle many critical global challenges affecting us all. Coordinated by the University of Cambridge, the Festival features hundreds of prominent figures and experts in the world of science, current affairs and the arts, and has four key themes: health, environment, society and explore.

The full programme is set to be announced on Monday 22nd February.

From the abused simulants of Blade Runner to the neglected child-robot David in A.I. Artificial Intelligence, we seem to have little difficulty in imagining artificial beings feeling pain and distress. The kind of advanced AI we see in fiction is, of course, far removed from the capabilities of real-world machines. However, as the capacities of AI continue to improve, interest has grown concerning the question of whether and when artificial beings may reasonably come to possess or demand some form of moral status. In Suffer the little robots? The moral and legal status of artificial beings, Dr Henry Shevlin, Research Fellow at the Leverhulme Centre for the Future of Intelligence, suggests that there is good reason for lawyers, politicians, philosophers, and scientists to start grappling with how we could identify suffering in beings radically different from ourselves, and considers how we should respond to the moral concerns of sentient AI as a society.

Dr Shevlin again explores the frontiers of artificial and biological intelligence during another talk, From bees to bots: exploring the space of possible minds. He asserts that, despite recent technological advances, the most sophisticated artificial systems still fall short of many of the capabilities of humans, and even of simpler animals such as honeybees. Dr Shevlin reviews some of the most striking differences between artificial and biological systems and suggests how these might help inform a broader scientific conversation about the nature and variety of minds, intelligence, and consciousness.

Science fiction has tended to focus on nightmarish scenarios where machines acquire superhuman abilities and wrest power from unsuspecting human beings. These scenarios distract attention from the real problem – which is not that humans will unwittingly cede control to runaway superintelligence, but that they are already surrendering control to machines that are too stupid to handle the tasks they are charged with. In A Citizen's Guide to Artificial Intelligence, Dr John Zerilli, Research Fellow at the Leverhulme Centre for the Future of Intelligence, sets out what his forthcoming book (published 23 February 2021) hopes to achieve: to address the issues that AI and machine learning present to the average citizen, such as privacy and data protection, transparency, liability, control, the future of work, and the regulation of AI. He focuses on one chapter in particular, discussing the nature of human control over AI systems.

Data science and AI have the potential to transform the discovery and development of medicines. However, there is a lot of hype surrounding AI. AI: Hype vs reality covers what has been achieved to date; what is realistic to expect from data science and AI in the next 5 to 10 years; how data science experts can manage expectations; and how pharma and tech can better collaborate to realise the potential of data science and AI in healthcare. The session is chaired by Gareth Mitchell of BBC Digital Planet. Speakers include Dr James Weatherall, Vice President of Data Science & Artificial Intelligence, R&D, AstraZeneca; Professor Mihaela van der Schaar, Director of the Cambridge Centre for AI in Medicine; Anne Phelan, CSO of Benevolent AI; and Dr Junaid Bajwa, Chief Medical Scientist at Microsoft Research.

Over the last few years, smart speakers, virtual personal assistants and other forms of ‘conversational AIs’ have become increasingly popular. In the context of health and social care, attention has begun to focus on whether an AI system can perform caring duties or offer companionship, particularly for the lonely in care homes. Chatbots designed to support wellbeing and perform therapeutic functions are already available and widely used. But, for all their growing skills, they remain machines driven by algorithms and data. In Empathetic Machines: can chatbots be built to care?, Dr Shauna Concannon of the Giving Voice to Digital Democracies research project at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) examines what it means to be empathetic and the ethical implications that arise when positioning AI systems in roles that require them to communicate with empathy.

AI is becoming an increasingly pervasive and invasive part of our everyday lives, from our use of cloud computing, smartphones, and streaming services to news consumption and even household appliances. Virtually all these technologies rely on data, and this data often carries biases that reflect our own. If dealt with blindly, these biases are not only reproduced but magnified by technology, with potentially severe effects. In Artificial Intelligence and Unfair Bias: Addressing Gendered and Racialised Inequalities in AI, Dr Kerry Mackereth and Dr Eleanor Drage, Research Fellows from the Centre for Gender Studies and the Leverhulme Centre for the Future of Intelligence, break down some of the key ethical issues surrounding AI, gender, race, and bias. They explain how and why biased AI exists and how this results in real-world harms; offer steps forward with regard to policies that the AI sector could implement to address bias in AI; and consider whether AI can help us address racist and sexist biases. In Bias in Data: How Technology Reinforces Social Stereotypes, Dr Stefanie Ullmann from the Giving Voice to Digital Democracies research project at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) also explores the problems of bias in AI and discusses some of the possibilities for improvement.

It is a surprise for many to learn that no thorough professional history of AI and machine learning from World War II to the present exists, despite the field's longstanding importance to intellectual history and the history of science and technology, and its spectacular and explosive rise in the last decade. In Histories of Artificial Intelligence: A Genealogy of Power, three leading experts debate how AI and its many applications stand to radically remake concepts of knowledge and the knower. With Dr Sarah Dillon, a broadcaster and scholar of late 20th- and 21st-century literature, film and philosophy; Dr Richard Staley, Director of Studies in History and Philosophy of Science; and Jonnie Penn, a bestselling non-fiction author and artificial intelligence researcher, creator of the MTV documentary series The Buried Life and co-author of the No. 1 New York Times bestseller What Do You Want To Do Before You Die?

Since the onset of the worldwide pandemic, it seems that our lives have become more intertwined with social media than ever before. In a locked down world where physical proximity may not be even possible, social media often presents a lifeline to the rest of the world. How much of an effect do our daily social media habits have in our everyday lives? The apps that we check throughout the day influence how and where we receive news and information, who we spend time with, what we buy, and ultimately even how we think and feel. But what if we have more control over these aspects than we might assume? Tyler Shores (Jesus College Intellectual Forum), psychologist Dr Amy Orben, and sociologist Dr Mark Carrigan discuss what role everyday technology should have within our lives, our sense of self, and how we relate to the rest of the world during Is Social Media Changing Your Life?

Related events include:

  • Secrets and Lights – Professor of Materials Science Rachel Oliver examines the threats and opportunities that quantum technology poses for secure communication and explores some of the materials that will enable the technologies of the future. Professor Oliver shows how quantum technologies may allow current encoding methods to be broken open but, ironically, also provide options for transmitting information more securely, solving the very problem they create.
  • Contemporary Significance of Artificial Intelligence for Religion – Dr Beth Singler, AI researcher and an associate research fellow at the Leverhulme Centre for the Future of Intelligence, explores the social, philosophical, ethical, and religious implications of advances in AI and robotics.
  • The fine print: Towards wearable electronics – an overview of the latest manufacturing techniques for skin-like or epidermal electronics and sensors that can adhere seamlessly to human skin, or even operate within the body, for applications such as health monitoring, medical treatment, and biological studies. The scope of this kind of technology extends to other areas, including human-machine interfaces, soft robotics, and augmented reality.
  • What sensors can do for us – Dr Oliver Hadeler from CamBridgeSens discusses the opportunities and challenges arising from the use of sensors in smart phones, health care settings, buildings and more.
  • Imaging and vision in the age of artificial intelligence – Dr Anders Hansen, Department of Applied Mathematics and Theoretical Physics, examines the ethical concerns surrounding new developments in AI and demonstrates how systems designed to replace human decision processes, particularly in healthcare, can behave in very non-human ways.
  • How do digital interventions influence our health-related behaviours? – Behavioural scientist Dr Katerina Kassavou examines the evidence suggesting that digital health interventions, such as text messaging, smartphone apps, wearables, and websites, are effective at changing behaviours related to health.

View the full programme via www.festival.cam.ac.uk from 22nd February. Many events require pre-booking; please check the event listings on the Festival website.


Keep up to date with the Festival on social media:

Instagram: @Camunifestivals | Facebook: @CambridgeFestival | Twitter: @Cambridge_Fest

The Festival sponsors and partners are AstraZeneca and RAND Europe. The Festival media partners are BBC Radio Cambridgeshire and Cambridge Independent.

 

The University of Cambridge is acknowledged as one of the world's leading higher education and research institutions. The University was instrumental in the formation of the Cambridge Network, and its Vice-Chancellor, Professor Stephen Toope, is also the President of the Cambridge Network.

University of Cambridge (cam.ac.uk)