The Positives of AI for Law – and 4 Key Areas of Concern
Although the term ‘robot’ was coined in the 1920s, it is only in the last two decades that machines that think and act like humans have become a reality. With machines playing an increasingly significant role in our lives, we are now facing one of the most challenging eras in human history.
Professor Toby Walsh spoke about the rise of artificial intelligence (AI) and robotics at Legal Innovation & Tech Fest. Walsh is Scientia Professor of Artificial Intelligence at UNSW and leads the Algorithmic Decision Theory Group at Data61, Australia’s Centre of Excellence for ICT Research.
An enthusiastic exponent of AI, Walsh has much to say about the exciting possibilities it opens up. But he also raises certain problems AI poses for humans who are used to making decisions and interacting with other humans.
Walsh has some fascinating insights into our AI future, and a few words of caution too.
From Science Fiction to Science Fact
As a young boy, Walsh admits he read “too much” science fiction. These were stories teeming with intelligent computers and robots. Now, he says, that imagined future is upon us.
“For many people it arrived two years ago when a computer called AlphaGo beat the best human player on the planet in the ancient Chinese game of go,” he says.
AlphaGo’s victory came almost two decades after the 1997 defeat of world chess champion Garry Kasparov by IBM’s Deep Blue computer.
The reason for the almost two-decade delay? Go is considered the far more difficult game, with a vastly larger search space than chess, so it took longer to build a machine with enough computing power to be a world-beater.
The rise of these human-conquering juggernauts is down to four exponential growth spurts behind AI computing in the last 20 years:
- The doubling of the number of transistors on integrated circuits each year.
- The doubling of the amount of data every two years.
- The doubling of the performance of algorithms each year.
- The doubling of AI investment every two years.
Individually, none of these trends would have had such an impact; together, they have turned science fiction into science fact.
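A back-of-the-envelope sketch shows why the combined effect dwarfs any single trend. Treating the four doublings as independent multiplicative factors is purely an illustrative assumption, not a claim from the talk:

```python
# Rough sketch: compound effect of the four doublings over 20 years.
# Treating the trends as independent multiplicative factors is an
# illustrative assumption, not a claim from the talk.
YEARS = 20
doubling_period_years = {
    "transistors per chip": 1,   # doubles each year
    "data volume": 2,            # doubles every two years
    "algorithm performance": 1,  # doubles each year
    "AI investment": 2,          # doubles every two years
}

# Each trend contributes YEARS / period doublings; they add in the exponent.
total_doublings = sum(YEARS / p for p in doubling_period_years.values())
print(f"doublings across all four trends: {total_doublings:.0f}")  # 60
print(f"combined growth factor: 2**{total_doublings:.0f} = {2 ** total_doublings:.1e}")
```

Twenty years of these four curves compounds to 2^60, a growth factor of roughly 10^18, which is the arithmetic behind “science fiction into science fact”.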
The Potential Negatives of AI
AI has great promise for the four D’s: the dirty, difficult, dull and dangerous tasks. But there are four other elements that Walsh believes we should be worried about when letting this technology into our lives:
- The impact on employment.
- The impact on fairness and equality in society.
- The impact on war.
- The rise of China as global AI leader.
1. The Impact on Employment
The first of these concerns, Walsh admits, is often exaggerated.
“You read some dire predictions about the impact AI will have on employment, such as: ‘47% of jobs at risk of automation in the next two decades’. The chief economist of the Bank of England predicted half of all jobs are at risk from automation. We have to treat these numbers with a huge pinch of salt.”
As a scientist, Walsh examined some of these predictions and found that, in many cases, the analysts had themselves used AI to generate their over-predictions about job losses.
“There’s some irony in that!” he remarks.
But some jobs are genuinely at risk. Uber is trialling autonomous taxis, and earlier this year NAB announced it was laying off 6,000 employees due to digitisation.
“We also have some of the most automated ports and mines in the world,” he says.
“Companies like Rio Tinto are at the forefront of that. In fact, if they weren’t, the mining boom that has given Australia the longest period of uninterrupted growth of any economy ever would have ended much sooner.
“Technologies also create lots of new jobs. But we have to worry about how we can reskill ourselves and the people who are put out of work by automation.”
2. Maintaining a Fair and Equal Society
Algorithms, it seems, are not as objective as we might think. They can reflect biases depending on the data fed into them.
“We’re discovering that algorithms can be just as biased as humans,” Walsh says.
“It depends what data they were trained on and what assumptions were made when they were written.”
Walsh points to a “very troubling” story in the US about a program called COMPAS, by Northpointe.
“It’s a tool used in courts across the country to help make bail and sentencing decisions. But it was discovered that the program is racially biased: it predicts that black defendants will reoffend more often than they actually do,” he says.
“I’m sure the people writing the program didn’t intend it to be biased – they explicitly didn’t include ‘race’ as one of the inputs. But one of the inputs is the person’s zip code, and in many parts of the United States, zip code is a good proxy for race.”
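The proxy effect Walsh describes can be sketched in a few lines. This is a hypothetical illustration, not COMPAS’s actual inputs or logic: the groups, zip codes and scoring rule are all invented. A score computed only from zip code still splits cleanly along group lines once the two are correlated:

```python
# Hypothetical illustration of proxy bias: 'group' is never an input to
# the score, yet scores differ sharply by group because zip code
# correlates with group. All numbers here are invented for the sketch.

# Synthetic population: 90% of group A live in zip 9001, 90% of B in 9002.
population = (
    [{"group": "A", "zip": "9001"}] * 90 + [{"group": "A", "zip": "9002"}] * 10 +
    [{"group": "B", "zip": "9002"}] * 90 + [{"group": "B", "zip": "9001"}] * 10
)

def risk_score(person):
    # A naive rule that looks only at zip code; group is excluded entirely.
    return 0.8 if person["zip"] == "9001" else 0.2

def mean_score(group):
    scores = [risk_score(p) for p in population if p["group"] == group]
    return sum(scores) / len(scores)

print("group A:", round(mean_score("A"), 2))  # 0.74
print("group B:", round(mean_score("B"), 2))  # 0.26
```

Even though the protected attribute never appears in the scoring rule, group A’s average score is nearly three times group B’s, which is exactly the mechanism Walsh describes with zip codes standing in for race.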
3. AI and the Science of War
The third concern for Walsh is warfare: handing the decision about whether a person lives or dies to autonomous machines like drones.
“At the moment there is still a human in the loop, controlling the drone,” he says.
“But it’s not a big technological leap to remove the human and have a computer make all the decisions.”
Walsh calls weapons autonomy the third revolution in warfare and says he and his colleagues feel strongly about it. He has even spoken at the United Nations warning of the risk.
“There are some decisions we should not hand over to machines, and this is one of them.”
4. China as an AI World Leader
Walsh points out that in the last five years China has started to invest aggressively in artificial intelligence. Last year, it was announced that Chinese tech giant Alibaba will invest $15 billion in R&D over the next five years, largely in AI and quantum computing.
“We have to look at what is coming out of China because they pose a significant challenge,” Walsh says.
“They have a much more relaxed attitude to citizens’ data, privacy and rights – things we take for granted.”
Why Some Things are Best Left to the Humans
Machines can be programmed to do difficult things, like playing go and chess. Yet they often struggle with things we find easy, like folding towels.
“It’s called Moravec’s Paradox,” Walsh says.
“Hans Moravec is a famous roboticist who once said that for robots the easy things are hard and the hard things are easy. A robot that folds towels was developed by colleagues of mine at the University of California, Berkeley. It originally took 25 minutes to fold a towel.
“They’ve been working on this for a number of years, and it’s a little bit quicker now. But there are things that machines aren’t going to do any time soon.”
The Future is not Fixed
Walsh concludes by saying human beings now face one of the most challenging eras in history. The only hand we have to play, he says, is the same hand our grandparents had.
“Which is to believe in technology, to invest in it. It has given us much better lives than our grandparents had. Our grandchildren have only one hope, which is to embrace technology,” he says.
“We should use it for the good things – productivity, government efficiency, better use of limited resources, healthcare. But equally we should make sure that technology doesn’t intrude into our lives and isn’t used in some areas. Like deciding whom to send to prison, or who lives and dies on the battlefield.
“We should use technology to improve the common good, as it did in the first industrial revolution. The future is not fixed. The future is the product of the decisions we make today.”
About the Speaker
Toby Walsh is Scientia Professor of Artificial Intelligence at the University of New South Wales. Professor Walsh is a strong advocate for limits to ensure AI is used to improve our lives. He has been a leading voice in the discussion about lethal autonomous weapons (aka killer robots), speaking at the UN in New York and Geneva on the topic. He is a Fellow of the Australian Academy of Science. He appears regularly on TV and radio, and has authored two books on AI for a general audience.