Today, AI is everywhere – and yet virtually invisible to us. So what is AI, and why should we care? As far back as the 1950s it was described – by Marvin Minsky of MIT and John McCarthy, the US computer scientist who coined the term ‘artificial intelligence’ and became known as the ‘father of AI’ – as a technology, or machine, that performs a task which, if carried out by a human, would require intelligence to complete. This is obviously a very broad definition, and human intelligence is applied to a vast array of tasks: planning, learning, problem-solving, decision-making, interpretation, knowledge representation and manipulation, as well as social intelligence and creativity. AI is now commonly used in speech and language recognition systems such as the virtual assistants Amazon’s Alexa and Apple’s Siri, in applications that enable you to identify your friends and family members in photographs online, and by businesses such as your bank to detect fraudulent activity. It will increasingly be used to make sense of the ‘internet of things’ sensor datasets that will in future connect our ‘smart cities’.
AIs fall into two broad categories: narrow AI and general AI. Narrow AIs are built for very specific, tightly defined tasks, but artificial ‘general’ intelligence (AGI) is quite different. AGIs are intended to be flexible and capable of learning, possibly in the future becoming ‘superintelligences’ – the type of AI often depicted in films, such as HAL (2001: A Space Odyssey), Skynet (Terminator) or Ava (Ex Machina). Today’s AIs are ‘trained’ using powerful computer processors and machine learning frameworks (e.g., Google’s TensorFlow) to ‘deep learn’, working through huge datasets – and sometimes generating new data, such as games played against themselves, on which they continue to learn. In effect, they build vast statistical models of the patterns in their training data, which they match against new inputs in order to carry out the learned task. This learning can be seen in, for example, IBM’s Watson, which won the US quiz show Jeopardy! in 2011 by beating the best human players, and Google DeepMind’s AlphaGo, which defeated one of the world’s top professional players at the ancient Chinese game of Go in 2016. Both examples demonstrate the technologies’ capability to recognise and respond to a problem almost instantly. You can watch and hear more about AlphaGo in our Festival film screening (see programme for details).
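For the curious, the short sketch below shows what this ‘training’ looks like in practice. It uses Google’s TensorFlow (the framework named above) to teach a small neural network to recognise handwritten digits from the public MNIST dataset; the dataset, model shape and settings are illustrative choices of ours, not anything used by Watson or AlphaGo.

```python
# A minimal sketch of machine learning as described above: a small network
# that learns to recognise handwritten digits by finding patterns in example
# data, rather than by following hand-written rules.
import tensorflow as tf

# Load a classic example dataset: 70,000 images of handwritten digits (0-9).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small 'deep' network: layers of simple units whose weights are adjusted
# during training so the network's outputs match the example labels.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 'Training' means repeatedly showing the network labelled examples and
# nudging its weights to reduce its errors - the pattern-matching described above.
model.fit(x_train, y_train, epochs=5)

# The learned patterns generalise to digits the network has never seen.
model.evaluate(x_test, y_test)
```

After a few minutes of training on an ordinary laptop, a network like this typically labels around 97–98 per cent of unseen digits correctly – a small-scale illustration of the pattern-matching that systems such as Watson and AlphaGo perform at vastly greater scale.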
Fuelled by science fiction, our vision of increasingly autonomous robots that can navigate the world and communicate with us in human-like ways is now converging with real AI technologies. Such devices are evident in the near future of self-driving vehicles, package delivery bots and drones, and the various service robots that will increasingly interact with us as hotel and business receptionists, through transcription services and on social media. Poltronieri’s #LoveApparatus (created for our Festival and running at Highcross until 13 May) is one example: although not a robot, the AI communicates as if it were human. In March 2018, Tesla successfully drove its first semi-autonomous electric trucks across California, whilst in the same month one of Uber’s self-driving test cars struck and killed a pedestrian in Arizona – the first pedestrian death caused by an autonomous vehicle, but probably not the last. The technologies also regularly produce realistic imagery that replicates human faces and voices: in 2011, Eguchi Aimi, supposedly a singer with the Japanese all-girl band AKB48, was revealed to be computer-generated – some time after the band had achieved a number of chart-topping songs – and today we rarely turn to any media channel without confronting the spectre of fakery in our midst. Such fake stories are generated, targeted and distributed using data miners and AIs. And yet AIs are capable of much good: they may be used to detect healthcare problems before symptoms appear, can provide comfort and companionship to the lonely and elderly, as with Cera and Bloomsbury AI’s Martha, and can support executive decision-making, as with Tieto’s Alicia T, potentially ensuring that business profits are maximised and finite resources are well managed.
Stephen Hawking, the ground-breaking Cambridge physicist who passed away in March, famously warned that AI could pose a fundamental threat to human civilisation, a concern shared by Tesla’s Elon Musk. Their claims, among the voices of many others in research, industry, government and policy, pave the way for a more informed and open discussion on devising a working code of ethics for the development, application and deployment of AIs. Whilst in practice the technologies are a long way, maybe decades, from becoming the superintelligences portrayed in our films, the future threats are real. Robot AIs are very close to performing the routine, manual tasks that were formerly the domain of low-skill workers (e.g., fruit picking and warehouse packing). Forecasters suggest, however, that AIs are more likely to enhance and augment our everyday, mundane jobs than to replace us.