10.09.2018 - Kablamo

THE (HUMAN) ETHICS OF ARTIFICIAL INTELLIGENCE

Intelligence without ethics is dangerous. Human intelligence is tempered with human empathy.

Remember those three laws of robotics penned by Isaac Asimov?

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As far back as 1942, we were wrestling with the ethical implications of thinking machines. We are rapidly approaching the intersection of silicon and cognition, and it is more important than ever that we update Asimov’s laws to ensure that humans are the beneficiaries rather than the victims of artificial intelligence: a concept that is no longer the domain of science fiction.

Our first brush with these concerns at a practical level came a few decades ago, when wide-scale automation became a reality. The fear was that hard-working humans would lose their jobs to less expensive and more efficient robot workers. Companies focused more on short-term profits than on the long-term effects on millions of newly unemployed factory workers who were unprepared to re-enter the labour force.

This ethical dilemma has not been resolved. And with the advent of AI, it might well have been exacerbated. Now that we are that much closer to producing machines that can think for themselves, we must consider ethical implications that even Asimov couldn’t fathom. Before jumping on the AI bandwagon, consider the following:

A Proper Education

Intelligence without ethics is dangerous. Human intelligence is tempered with human empathy. This is not a bug, but a feature. We do not want people making laws that affect us who demonstrate a lack of human empathy. And we should be just as wary of machines that wield intelligence sans empathy.

Part of the way humans learn empathy is via the process of education. From the time a baby is placed in a mother’s arms, she is engaged in the process of education. Something very similar happens with AI.

Ethical AI begins with ethical training. AI is not merely programmed; it is trained, and it learns, much like a human does.

To learn the right lessons, AI has to be trained in the right way, or ethical problems will inevitably arise. They already have. Recently, facial recognition systems used by law enforcement have come under fire because they tended to misidentify people of colour as criminals based on mugshots.

This is a training issue. If AI is predominantly trained on Euro-American white faces, it will disadvantage ethnic groups and minorities. As a startup, you cannot settle for the first off-the-shelf solution that gives you a short-term advantage. You have to vet your AI solutions as you do employees, ensuring to the best of your ability that they have received proper, ethical training.
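One practical way to do that vetting is to measure how a candidate system behaves across demographic groups before you adopt it. The sketch below is a minimal, hypothetical example in Python: it assumes you have a labelled evaluation set of match decisions tagged with a demographic group, and it simply compares false match rates per group. The data structure and group names are illustrative assumptions, not any specific vendor's API.

# Minimal sketch of a bias audit: compare false match rates across groups.
# Assumes an evaluation set of (group, predicted_match, actual_match) tuples;
# the groups and numbers below are purely illustrative.
from collections import defaultdict

def false_match_rate_by_group(results):
    """Return {group: false match rate} over pairs that are not true matches."""
    counts = defaultdict(lambda: {"false_matches": 0, "non_matches": 0})
    for group, predicted_match, actual_match in results:
        if not actual_match:
            counts[group]["non_matches"] += 1
            if predicted_match:
                counts[group]["false_matches"] += 1
    return {
        group: c["false_matches"] / c["non_matches"]
        for group, c in counts.items()
        if c["non_matches"]
    }

evaluation = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
print(false_match_rate_by_group(evaluation))
# {'group_a': 0.33, 'group_b': 0.67} -- group_b is wrongly flagged twice as often

If one group's false match rate is several times another's, the system has learned a skewed lesson from its training data, and that is exactly the kind of finding you would want before deployment rather than after.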

When Better Is Worse

Every company wants the best, most efficient, most productive processes possible. But there are times when better is worse. One example is customer service. There is a good chance that, in time, an AI solution such as Google Assistant will do a better job at making appointments, answering questions, and making outgoing sales calls. When that time comes, AI still might not be the right solution.

The simple fact is that humans want to talk to other humans. They do not want to talk to technology when they are stressed. If a person is calling customer service, something has gone awry, and they are experiencing a heightened state of stress.

What they need is human contact. They want their problem resolved, but they also want to feel that they have been heard. That is something AI cannot, and possibly should not, attempt to do. The decision to eliminate humans from call centres has ethical implications.

You have to consider the ethical implications of every AI deployment in your business. If there is one thing we have learned about customer behaviour, it is that customers will repeatedly bypass better systems in favour of more human-friendly ones.

The Final Analysis

AI is inevitable. You might already be using it right now without being aware of it. There is no doubt that the proper application of AI will make you more efficient and save you money. It may even help you avoid blunders that would otherwise have put an end to your venture.

That presents us with the temptation to rely on AI as the final arbiter of all matters relating to our business ventures. What we have to remember is that all business is a human-to-human enterprise. AI can make it better. But you should always reserve the final analysis for yourself: the human with everything on the line.

Author

Kablamo

Kablamo Blog contributor
