In Eye in the Sky, the film starring Helen Mirren, we see an ethical dilemma up close.
Should the British forces use a drone to kill people preparing right now to be suicide bombers? Taking out the house with a Hellfire missile should be simple enough. Action may save scores, even hundreds of lives.
Yet here’s the dilemma. Bombing the house will also kill or injure an innocent young girl selling bread nearby.
It’s an awful choice. Various military and political leaders struggle with the ethical and practical implications. There is rampant buck passing. Just what is the right thing to do?
This sort of choice needs human judgement. But for how long?
Soon, perhaps, live decision makers will make way for artificial intelligence (AI).
Handing such decisions to a robot is far more complicated than it appears. Ethical standards often differ from one group of people to the next.
As the Danish philosopher Kierkegaard argued:
“The instant of decision is madness.”
He meant our best actions seldom reflect a simple weighing of options. The decision to bomb or not to bomb remains a hard choice.
No amount of probability calculations can avoid this human dilemma. There will always be irrational elements to resolve.
Could we ever trust a robot to do this, and to do the right thing?
Putting AI to work
Gavin Hood, director of Eye in the Sky, says his film is
“…not about where technology is at; it’s about where technology is going.”
Well where is it going and how do we prepare for it? For example, could we boost company compliance rates using AI?
It seems to promise a huge leap forward. By making sense of vast data flows, AI could issue early warnings of risky behaviour by company employees.
Such warnings would surface long before human beings saw the signs and started ringing alarm bells.
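What might such an early warning look like in code? Here is a minimal sketch, assuming a toy z-score outlier check over payment amounts. Real systems use far richer models; the field names and the threshold below are illustrative assumptions, not any vendor’s actual method.

```python
from statistics import mean, stdev

def flag_risky(amounts, z_threshold=2.0):
    """Return indices of amounts that deviate sharply from the norm.

    A toy z-score outlier check standing in for a real AI model;
    the 2-sigma threshold is an illustrative assumption.
    """
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > z_threshold]

# Typical payments with one anomalous spike at index 6
history = [120, 95, 110, 130, 105, 98, 5000, 115]
print(flag_risky(history))  # -> [6]
```

The point of the sketch is the shape of the workflow, not the statistics: the machine scans every transaction and surfaces only the handful worth a human’s attention.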
After the tsunami of regulation that followed the 2008 financial crisis, banks hired thousands of compliance officers. The result is a “black hole”, sucking in endless amounts of money. This approach is unsustainable.
Banks are therefore turning to AI to stay on top of the ever-changing regulatory landscape. AI offers help with an increasingly diverse range of activities:
anti-money-laundering programs
checking customer payments
monitoring sanctions lists
oversight of billing fraud
NextAngles specialises in using AI to solve compliance issues. With millions of transactions taking place, which are compliant and which are not?
By weeding out false positives, AI reduces costs and makes better use of workers’ time. For example, the company’s natural-language system read through the mass of existing regulations, then reassembled them into a set of rules a computer could understand and use.
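One way to picture the end product is a small rule engine: regulations distilled into machine-checkable predicates. This is only a hedged sketch of the general idea; the rules, field names, and thresholds below are invented for illustration and do not describe NextAngles’ actual system.

```python
# Regulations distilled into machine-checkable rules: each rule pairs a
# human-readable name with a predicate over a transaction record.
# All names and thresholds here are illustrative assumptions.
RULES = [
    ("cash payment over reporting threshold",
     lambda tx: tx["type"] == "cash" and tx["amount"] > 10_000),
    ("counterparty on sanctions list",
     lambda tx: tx["counterparty"] in {"ACME SHELL CO"}),
]

def check_compliance(tx):
    """Return the names of every rule the transaction violates."""
    return [name for name, violated in RULES if violated(tx)]

tx = {"type": "cash", "amount": 15_000, "counterparty": "GLOBEX"}
print(check_compliance(tx))  # -> ['cash payment over reporting threshold']
```

Once regulations are in this form, every transaction can be screened automatically, and only the violations reach a human compliance officer.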
One thing is for sure: AI is coming to a place you inhabit.
Luminaries such as Professor Stephen Hawking, Bill Gates and Elon Musk have issued dire warnings about the dangers of self-aware machines, broadly called AI.
An estimated thirty-one million robots were put to work in companies and elsewhere between 2014 and 2017. Few could be called AI or thinking machines, let alone self-aware. But even that is coming.
It is therefore sensible to be concerned about the evolution of such machines. Soon they may be able to deal with true ethical dilemmas, even ones like those in Gavin Hood’s film.
In his chill-inducing artificial voice, Hawking explains:
“Humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”
Meanwhile, in a 2016 survey of senior financial executives, almost half expected their organization to use AI in risk assessment within three years. One in three said it would help prevent money laundering.
Many also believed that within 15 years AI will bring big changes to their jobs. At the present rate of development, this could well happen much sooner.
So how worried should we be about economic displacement? That is, a robot, intelligent or otherwise, taking away our work?
Responsible estimates already suggest automation will affect almost half (47%) of all job categories over the next two decades. So if you’re in financial compliance, for example, AI is something to take very seriously.
But even if the jobs are going, it does not mean work will go too. According to McKinsey, jobs will change as much as they are replaced, creating new roles and demand for different types of skills.
In compliance work, for example, the focus is likely to shift to influencing culture or making sure genuine ethical dilemmas get the attention they deserve.
With so much attention on AI, it’s right to start thinking about a personal survival strategy. Here are five tips for surfing the coming AI wave.
5 TIPS TO SURVIVE THE COMING AI WAVE
Survival Tip 1: Remain adaptable, be open to regularly diversifying personal skills
Survival Tip 2: Keep learning and be willing to re-learn how to learn; in particular hone critical thinking skills
Survival Tip 3: Develop personal resilience—persistence, determination, focus
Survival Tip 4: Trust your own humanity to navigate the AI challenge; who you are and what you believe matters as much as any single job skill
Survival Tip 5: Accept AI can help you do more and do it better; look to AI to expand and augment your humanity, not attack it; resisting the technology is a sure way to unemployment.
C. Currier, “Drone Warfare’s Ethical Dilemmas Are Focus of Film ‘Eye in the Sky’”, The Intercept
B. DiPietro, “Financial Firms Turn to Artificial Intelligence to Handle Compliance Overload”, Wall Street Journal, 19 May 2016
“Ghosts in the Machine”, Baker & McKenzie
M. Sainato, “Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence”, Observer Opinion, 19 August 2015
“How to Survive an Explosion of Artificial Intelligence and an Increasingly Automated World”, Reinvent, February 2016
R. Cellan-Jones, “Stephen Hawking Warns Artificial Intelligence Could End Mankind”, BBC, 2 December 2014
R. Droit, “Explaining Human Nature to Robots”, Sopra Steria blog, June 2016