By Ian Lavis, on behalf of Praxity
Artificial Intelligence (AI) is one of a wave of technologies that experts believe will revolutionise how we communicate, do business, even how we live our lives.
As machine learning enables our computers to teach themselves, a wealth of breakthroughs emerge, ranging from medical diagnostics to cars that drive themselves. But worries emerge too – who controls this technology? Will it take over our jobs? Is it dangerous?
The Bank of England has warned that up to 15 million jobs across the UK could be under threat, and physicist Stephen Hawking famously cautioned that “The development of full artificial intelligence could spell the end of the human race”.
Others are more hopeful. Earlier this year, Google CEO Sundar Pichai declared that “AI is one of the most important things humanity is working on. It’s more profound than electricity or fire.”
Sage’s Practice of Now 2018 survey report demonstrates the pace of change facing businesses as AI has a “transformative effect” across many industries:
“Once just a hazy mirage on the technological horizon, AI is now a reality within the world of business. […] While a robot in every workplace tending to its primitive photocopier ancestor is still science fiction, machine learning is one of the underlying components of modern AI that describes the ability for computers to essentially program themselves by making their own predictions based on probability.”
As Wired Magazine put it: “It’s hard to think of a single technology that will shape our world more in the next 50 years than artificial intelligence.”
What exactly is Artificial Intelligence?
There is no single universally accepted definition of AI. Researchers have distinguished between narrow and general AI.
‘Narrow AI’ refers to applications that provide domain-specific expertise or task completion, whereas ‘general AI’ refers to an AI application that exhibits intelligence comparable to (or beyond) a human, across the range of contexts in which humans interact. Although there has been considerable progress in developing AI that outperforms humans in specific domains, some believe that general AI is unlikely to be achieved for decades.
The current AI boom was catalysed by developments in ‘machine learning’, which involves ‘training’ computers to perform tasks based on examples, rather than by relying on human programmers.
Machine-learning systems are a bundle of algorithms taking in vast amounts of data to produce inferences, correlations, recommendations and decisions. And they’re already involved in nearly every interaction with algorithm-rich businesses like Google, Amazon and Facebook.
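The idea of ‘training’ a system on examples rather than hand-writing rules can be made concrete with a small sketch. The following is purely illustrative and is not drawn from any of the reports quoted here: it fits a one-feature threshold classifier to a handful of invented labelled examples, choosing the cut-off that makes the fewest mistakes on the training data.

```python
# A minimal sketch of "learning from examples" rather than explicit rules:
# a one-feature threshold classifier that picks the cut-off giving the
# fewest mistakes on labelled training data. The data is invented purely
# for illustration.

def train_threshold(examples):
    """examples: list of (value, label) pairs, label 0 or 1.
    Returns the threshold minimising training error, on the assumption
    that values at or above the threshold should be labelled 1."""
    candidates = sorted({v for v, _ in examples})
    best_t, best_err = None, float("inf")
    for t in candidates:
        # Count how many examples this candidate threshold misclassifies
        err = sum((v >= t) != bool(label) for v, label in examples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Toy training set: a numeric feature labelled 1 or 0
data = [(2, 0), (3, 0), (5, 0), (8, 1), (9, 1), (12, 1)]
threshold = train_threshold(data)
print(threshold)             # → 8, the learned cut-off
print(int(10 >= threshold))  # → 1, classifying a new example
```

Nothing here was programmed to “know” where the boundary lies; the rule emerges from the data, which is the essence of the statistical learning described above, scaled down from the vast datasets and algorithm bundles that real systems use.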
AI is widely regarded as having three waves of development, characterised by the technology’s capability to perceive, learn, abstract and reason.
- The first wave is when programmers craft rules to represent knowledge in well-defined domains, such as law, encoding it in a computer algorithm: an ‘expert system’.
- Second-wave AI is based on machine (or statistical) learning, when programmers create statistical models for specific problem domains and then ‘train’ them on big data. These are designed to perceive and learn and include voice-recognition and computer-vision technology.
- Third-wave technologies combine the strengths of first- and second-wave AI and are also capable of contextual sophistication, abstraction, and explanation – providing models for dealing with real world phenomena.
Dr John Launchbury, Chief Scientist at Galois, Inc. and Fellow of the Association for Computing Machinery, suggests we are just at the beginning of the third wave of AI, and further research remains before these technologies become prevalent.
People v. programs
In the 2018 Nelson Mandela lecture, President Barack Obama voiced the understandable concerns of many, and reflected on the responsibilities of those who manage and govern: “The biggest challenge to workers in countries like mine today is technology … because artificial intelligence is here and it is accelerating, and you're going to have driverless cars, and you're going to have more and more automated services, and that's going to make the job of giving everybody work that is meaningful tougher, and we're going to have to be more imaginative, and the pace of change is going to require us to do more fundamental reimagining of our social and political arrangements, to protect the economic security and the dignity that comes with a job.”
All leaps in innovation are accompanied by challenges and threats – especially if their impact is global. Although there is great potential for increases in income and improvements to quality of life, much commentary has focused on broader economic and social concerns.
Longstanding calls for a ban on ‘killer robots’ have intensified, for example, as campaigners and international law experts, including Human Rights Watch and Harvard Law School’s International Human Rights Clinic, claim the use of fully autonomous weapons in a theatre of war would breach international humanitarian law. Twenty-six countries now support a prohibition on fully autonomous weapons – joining scientists, AI experts and more than 20 Nobel peace prize laureates.
In contrast, scientists in other fields are only too happy to celebrate new machine-learning systems that are as good as the best human experts. For example, a ground-breaking AI system developed by DeepMind with Moorfields Eye Hospital and University College London has been correctly referring patients with more than 50 different eye diseases for further treatment with 94% accuracy, matching or beating world-leading eye specialists.
Hope, hype and hysteria
Roy Amara was a co-founder of the Institute for the Future in Silicon Valley and is best known for the adage now referred to as ‘Amara’s Law’: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” An optimist can read this one way, and a pessimist in quite another way.
“Any sufficiently advanced technology,” wrote the sci-fi eminence grise Arthur C Clarke, “is indistinguishable from magic”, while Google’s Ali Rahimi, one of the AI field’s leading commentators, would rather things were more down-to-earth: “We are building systems that govern healthcare and mediate our civic dialogue. We would influence elections. I would like to live in a society whose systems are built on top of verifiable, rigorous, thorough knowledge, and not on alchemy.”
AI pioneers are uncomfortable about how some people portray the emerging technology. Renowned roboticist Rodney Brooks criticised “the hysteria about the future of artificial intelligence” for MIT Technology Review, and Zachary Lipton, assistant professor in the machine learning department at Carnegie Mellon University, has suggested in the Guardian that the broader interest in topics like machine learning and deep learning has led to an unhinged discourse he calls the “AI misinformation epidemic”: a deluge of opportunistic journalism misrepresenting research for attention and self-promotion, when “making real progress in AI requires a public discourse that is sober and informed”.
Without such sobriety, specialists feel that progress might be set back and the benefits overlooked. New York University Professor Gary Marcus prophesied in the New Yorker in 2013 that hype would create unrealistic expectations followed by disillusionment, leading to an “AI winter” like that following frustration at the speed of technological progress in the 1970s.
“Public discourse that is sober and informed”
Considering how “AI technologies are already impacting a wide array of economic sectors”, the US Government Accountability Office (GAO) published a research report on AI earlier this year, and presented findings to the House of Representatives on the emerging opportunities, challenges and implications for policy and research.
The GAO acknowledged that although AI holds substantial promise for improving human life and economic competitiveness – helping to solve some of society’s most pressing challenges – it also poses new risks and could exacerbate socioeconomic inequality.
Focusing on four high-consequence sectors – cybersecurity, automated vehicles, criminal justice and financial services – the report suggested that to deliver the greatest economic and social benefits, technological changes would need to align with developments in regulation, assurance and the safeguarding of privacy. There would also need to be clearer arrangements around issues of liability and enforcement.
Although in previous periods “investments in automation have been highly correlated with improvements in productivity and economic outcomes”, the GAO stressed that developments in AI are unprecedented and face considerable challenges to progress, with implications for regulators, businesses and individual citizens:
- barriers to collecting and sharing data
- lack of access to adequate computing resources and requisite human capital
- adequacy of current laws and regulations, particularly around civil rights and economic protections
- absence of an ethical framework for AI.
As AI systems grow more powerful, they will rightly invite more scrutiny. Civil-society groups and even the tech industry itself are now exploring rules and guidelines on safety and ethics. But, for humans to understand, trust and manage AI effectively, the GAO suggests cross-cutting policy considerations need to be addressed by a broad variety of stakeholders, and not solely scientists and statisticians – including economists, legal scholars, philosophers, and others involved in policy formulation and decision-making. Key issues include:
- assessing acceptable risks and ethical decision-making
- developing high-quality labelled data
- understanding AI’s effect on employment and reimagining training and education
- exploring computational ethics and explainable AI.
Looking to the future, the GAO is clear that AI offers insights into complex and pressing problems and that “there may be benefits related to AI that cannot yet be predicted or may even be hard to imagine”.
Wired magazine has taken a pragmatic approach to the hype and the hope, the challenges and the potential: “Super-intelligent algorithms aren’t about to take all the jobs or wipe out humanity … but they are learning faster than ever… Artificial intelligence is over-hyped. It’s also incredibly important”.
Watch this space for more about emerging technologies, and how AI in particular is impacting on the accounting profession.