Proud to see Allan’s latest piece on AI published by The Ethics Centre. Here is the full text, or you can read it there:
The great promise of artificial intelligence is efficiency. The finely tuned mechanics of AI will free up societies to explore new, softer skills while industries thrive on automation.
However, if we’ve learned anything from the great promise of the Internet – which was supposed to bring equality by leveling the playing field – it’s that new technologies can be rife with complications unwittingly introduced by the humans who created them.
The rise of artificial intelligence is exciting, but the drive toward efficiency must not happen without a corresponding push for strong ethics to guide the process. Otherwise, the advancements of AI will be undercut by human fallibility and biases. This is as true for AI’s application in the pursuit of social justice as it is in basic business practices like customer service.
The ethical questions surrounding AI have long been the subject of science fiction, but today they are quickly becoming real-world concerns. Human intelligence has a direct relationship to human empathy. If this sensitivity doesn’t translate into artificial intelligence, the consequences could be dire. We must examine how humans learn in order to build an ethical education process for AI.
AI is not merely programmed – it is trained like a human. If AI doesn’t learn the right lessons, ethical problems will inevitably arise. We’ve already seen examples, such as the tendency of facial recognition software to misidentify people of colour as criminals.
In the United States, a piece of software called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) was used to assess the risk of defendants reoffending, and its scores influenced sentencing decisions. COMPAS was found to be twice as likely to misclassify non-white defendants as higher-risk offenders, while white defendants were misclassified as lower risk much more often than non-white defendants. This is a training issue: if an AI system is trained predominantly on data drawn from one demographic, it will disadvantage everyone else.
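To make the disparity concrete, here is a minimal sketch (not COMPAS itself; all numbers are hypothetical) showing how a risk classifier can flag people in one group as high risk in error twice as often as another, even when both groups have the same underlying reoffending rate:

```python
# Illustrative only: hypothetical predictions for two equal-sized groups
# with identical real outcomes, scored by an unevenly trained classifier.

def false_positive_rate(predictions, labels):
    """Share of people who did NOT reoffend (label 0) but were flagged (prediction 1)."""
    false_positives = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return false_positives / negatives

# In each group, 80 people did not reoffend and 20 did (labels are identical).
group_a_labels = [0] * 80 + [1] * 20
group_b_labels = [0] * 80 + [1] * 20

# Group A: 8 of the 80 non-reoffenders are wrongly flagged as high risk.
group_a_preds = [1] * 8 + [0] * 72 + [1] * 20
# Group B: 16 of the 80 are wrongly flagged -- twice the error rate.
group_b_preds = [1] * 16 + [0] * 64 + [1] * 20

fpr_a = false_positive_rate(group_a_preds, group_a_labels)
fpr_b = false_positive_rate(group_b_preds, group_b_labels)
print(fpr_a, fpr_b)  # 0.1 vs 0.2: group B bears double the false accusations
```

Note that both groups behave identically in this toy data; the disparity lives entirely in the model's errors, which is why aggregate accuracy alone can hide it.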
This example might seem far removed from us here in Australia, but consider the consequences if similar systems were in place here. What if comparable technology were used at airports for customs checks, or as part of a pre-screening process by recruiters and employment agencies?
If racism and other forms of discrimination are unintentionally programmed into AI, not only will it mirror many of the failings of analog society, but it could magnify them.
While heightened instances of injustice are obviously unacceptable outcomes for AI, there are additional possibilities that don’t serve our best interests and should be avoided. The foremost example of this is in customer service.
AI vs human customer service
Every business wants the most efficient and productive processes possible but sometimes better is actually worse. Eventually, an AI solution will do a better job at making appointments, answering questions, and handling phone calls. When that time comes, AI might not always be the right solution.
Particularly with more complex matters, humans want to talk to other humans. Not only do they want their problem resolved, but they want to feel like they’ve been heard. They want empathy. This is something AI cannot do.
AI is inevitable. In fact, you’re probably already using it without being aware of it. There is no doubt that the proper application of AI will make us more efficient as a society, but the temptation to rely on it blindly should be resisted.
We must be aware of our biases when creating new technologies and do everything in our power to ensure they aren’t baked into algorithms. As more functions are handed over to AI, we must also remember that sometimes there’s no substitute for human-to-human interaction.
After all, we’re only human.
Allan Waddell is founder and Co-CEO of Kablamo, an Australian cloud-based software company.