
The Human Ethics of Artificial Intelligence
Intelligence without ethics is dangerous. As we approach the intersection of silicon and cognition, Asimov's three laws of robotics need updating, and the technology industry needs to lead that conversation.
Remember those three laws of robotics penned by Isaac Asimov? As far back as 1942, we were wrestling with the ethical implications of thinking machines. Eight decades on, that intersection of silicon and cognition is close at hand, and it is more important than ever that we update Asimov's laws to ensure that humans are the beneficiaries, not the victims, of artificial intelligence.
The empathy gap
Human intelligence is tempered with human empathy. When a doctor delivers a difficult diagnosis, they consider not just the clinical facts but the person receiving them: their fears, their support network, their capacity to process the information.
AI systems optimised purely for accuracy or efficiency have no such constraint. They produce the statistically optimal output without any model of what it means to receive it. In low-stakes contexts, this is fine. In high-stakes ones (healthcare, justice, welfare), it can cause real harm.
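To make that concrete, consider a toy illustration (a hypothetical Python sketch with made-up numbers and assumed cost weights, not drawn from any real deployment): two screening models can be indistinguishable on accuracy while differing sharply in the harm their errors cause.

```python
# Two hypothetical cancer-screening models, each 90% accurate on 100 cases.
# Model A misses 8 real cases (false negatives); Model B misses only 2.
# Pure accuracy cannot tell them apart; a cost-weighted view can.

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def human_cost(fp, fn, fp_cost=1, fn_cost=20):
    # Assumed weighting: a missed diagnosis (fn) is far worse for the
    # person receiving the result than a false alarm (fp).
    return fp * fp_cost + fn * fn_cost

model_a = dict(tp=2, tn=88, fp=2, fn=8)  # misses most of the real cases
model_b = dict(tp=8, tn=82, fp=8, fn=2)  # flags more, misses far fewer

for name, m in [("A", model_a), ("B", model_b)]:
    print(name, accuracy(**m), human_cost(m["fp"], m["fn"]))
# Both models score 0.90 on accuracy, but model A's weighted human cost
# is 8*20 + 2 = 162 versus model B's 2*20 + 8 = 48.
```

The point is not the particular weights, which are invented here, but that any system optimised on a single headline metric is silently choosing who bears the cost of its mistakes.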
Where the risk is real
The most consequential AI deployments are often the least visible. Recidivism prediction tools that inform sentencing. Credit scoring systems that determine access to housing. Content moderation algorithms that shape what billions of people see.
These systems embed value judgements (about risk, about fairness, about who deserves what) without the transparency that would allow those judgements to be challenged. The ethics of AI isn't an abstract philosophical problem. It's a practical question about who designs these systems, what objectives they're optimised for, and who bears the consequences when they're wrong.
What responsible AI practice looks like
At Kablamo, we've developed a set of principles that guide how we build AI systems for clients; the sketch after the list shows how the first three can translate into code:
- Explainability: if a system makes a decision that affects someone, they should be able to understand why
- Accountability: there must always be a human who is responsible for the system's outputs
- Reversibility: consequential AI decisions should be contestable and reversible
- Diversity in design: the people building these systems need to represent the people they affect
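As a minimal sketch of the first three principles (in Python, with entirely hypothetical names and values, not Kablamo's actual implementation), a consequential decision can be stored as a record that carries its own explanation, names an accountable human, and supports a contest-and-reverse path:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One consequential model output, captured so it can be explained,
    owned, and contested. Every name here is illustrative."""
    subject_id: str          # who the decision affects
    outcome: str             # what the system decided
    explanation: str         # plain-language reason (explainability)
    accountable_owner: str   # a named human, not a team alias (accountability)
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    contested_by: Optional[str] = None
    contest_reason: Optional[str] = None
    reversed: bool = False

    def contest(self, reviewer: str, reason: str) -> None:
        # Contestability: any decision can be challenged, and the
        # challenge is recorded alongside the original output.
        self.contested_by = reviewer
        self.contest_reason = reason

    def reverse(self) -> None:
        # Reversibility: the outcome can be undone, but the record
        # itself is never deleted, so the audit trail survives.
        self.reversed = True


# A hypothetical credit decision being challenged and reversed:
record = DecisionRecord(
    subject_id="applicant-42",
    outcome="loan declined",
    explanation="Declared income below the affordability threshold",
    accountable_owner="jane.doe@example.com",
)
record.contest(reviewer="case-officer-7", reason="Income data was out of date")
record.reverse()
```

The design point is that explanation, ownership, and contestability live on the decision itself rather than in after-the-fact paperwork, and that reversing an outcome never erases the record of how it was made.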
Intelligence without ethics is not just dangerous. It's not intelligence at all. It's optimisation without wisdom.
Originally published on the Kablamo blog.