
The Ethics Centre: Injecting artificial intelligence with human empathy


Proud to see Allan’s latest piece on AI published by The Ethics Centre. Here is the full text, or you can read it there:

The great promise of artificial intelligence is efficiency. The finely tuned mechanics of AI will free up societies to explore new, softer skills while industries thrive on automation.

However, if we’ve learned anything from the great promise of the Internet – which was supposed to bring equality by levelling the playing field – it’s clear that new technologies can be rife with complications unwittingly introduced by the humans who created them.

The rise of artificial intelligence is exciting, but the drive toward efficiency must not happen without a corresponding push for strong ethics to guide the process. Otherwise, the advancements of AI will be undercut by human fallibility and biases. This is as true for AI’s application in the pursuit of social justice as it is in basic business practices like customer service.

Empathy

The ethical questions surrounding AI have long been the subject of science fiction, but today they are quickly becoming real-world concerns. Human intelligence has a direct relationship to human empathy. If this sensitivity doesn’t translate into artificial intelligence, the consequences could be dire. We must examine how humans learn in order to build an ethical education process for AI.

AI is not merely programmed – it is trained like a human. If AI doesn’t learn the right lessons, ethical problems will inevitably arise. We’ve already seen examples, such as the tendency of facial recognition software to misidentify people of colour as criminals.

Biased AI

In the United States, a piece of software called Correctional Offender Management Profiling for Alternative Sanctions (Compas) was used to assess the risk of defendants reoffending, and it had an impact on their sentencing. Compas was found to be twice as likely to misclassify non-white defendants as higher-risk offenders, while white defendants were misclassified as lower risk much more often than non-white defendants. This is a training issue. If AI is predominantly trained on Caucasian faces, it will disadvantage minorities.
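
To make that statistic concrete, here is a minimal sketch, with entirely made-up records rather than Compas’s actual data or methodology, of how such a disparity is measured: compare the false positive rate, the share of non-reoffenders flagged as high risk, across groups.

```python
# A minimal sketch (illustrative data, not Compas's methodology) of how
# such a disparity is measured: compare false positive rates - the share
# of people who did NOT reoffend but were flagged high risk - per group.
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended) - hypothetical records
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("B", True,  False), ("B", False, False), ("B", False, False), ("B", True,  True),
]

counts = defaultdict(lambda: {"fp": 0, "neg": 0})
for group, predicted_high, reoffended in records:
    if not reoffended:            # only non-reoffenders can be false positives
        counts[group]["neg"] += 1
        if predicted_high:
            counts[group]["fp"] += 1

for group, c in sorted(counts.items()):
    print(f"group {group}: false positive rate = {c['fp'] / c['neg']:.0%}")
```

On this toy data, group A’s false positive rate is twice group B’s, which is the shape of the disparity the Compas audit reported.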

This example might seem far removed from us here in Australia, but consider the consequences if it were in place here. What if a similar technology were being used at airports for customs checks, or as part of a pre-screening process used by recruiters and employment agencies?

“Human intelligence has a direct relationship to human empathy.”

If racism and other forms of discrimination are unintentionally programmed into AI, not only will it mirror many of the failings of analog society, but it could magnify them.

While heightened instances of injustice are obviously unacceptable outcomes for AI, there are additional possibilities that don’t serve our best interests and should be avoided. The foremost example of this is in customer service.

AI vs human customer service

Every business wants the most efficient and productive processes possible but sometimes better is actually worse. Eventually, an AI solution will do a better job at making appointments, answering questions, and handling phone calls. When that time comes, AI might not always be the right solution.

Particularly with more complex matters, humans want to talk to other humans. Not only do they want their problem resolved, but they want to feel like they’ve been heard. They want empathy. This is something AI cannot do.

AI is inevitable. In fact, you’re probably already using it without being aware of it. There is no doubt that the proper application of AI will make us more efficient as a society, but relying blindly on AI is inadvisable.

We must be aware of our biases when creating new technologies and do everything in our power to ensure they aren’t baked into algorithms. As more functions are handed over to AI, we must also remember that sometimes there’s no substitute for human-to-human interaction.

After all, we’re only human.

Allan Waddell is founder and Co-CEO of Kablamo, an Australian cloud-based software company.

Our Friendly AI Survey


At Kablamo, we work with artificial intelligence at a pretty high technical level, often with very specific objectives (e.g., archival video management).

Being at the coalface means it sometimes really helps to gather a more general, everyday, real-world take on how people think of AI and how they think it could change our future.

This survey is about helping us all to understand that perspective a little better (and, hey, it might even be fun).

Allan Talks AI and Slime Moulds in The Australian


The future of tech?

The Australian has just published Allan Waddell’s take on the biological future of AI. You can read it at The Australian here or the original text below:

Artificial intelligence is taking off. Virtual assistants, computer chips, cameras and software packages are increasingly taking advantage of machine learning to create pseudo-intelligent, versatile problem-solvers. Neural networks, AI modelled on the human brain, strike fear into anyone convinced that AI is already too close to being alive. The truth is it has always been easy to tell the artificial and the living apart — until now.

This biological-artificial distinction is about to get blurrier, and all of us need to pay attention. Among other developments, researchers at Lehigh University recently secured funding to grow neural networks out of living cells. Essentially, the researchers are going to recreate the neural-network architecture of an artificially intelligent algorithm using living cells. Theoretically, the algorithm should work identically in a petri dish as it does in a computer; the physical medium of the neural network is irrelevant in computational systems. This is a property of computers for which Justin Garson coined the term “medium independence”. In 2003, Garson argued that the medium used for computation didn’t matter — a computer could be made out of silicon or wood — as long as the logical basis of the computation was unchanged.
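
To see why the medium should not matter, it helps to look at what a neural network’s logic actually is. Here is a minimal sketch, with arbitrary weights: a single artificial neuron is nothing more than a weighted sum and a threshold, arithmetic that any capable substrate could in principle carry out.

```python
# A minimal sketch of the "logical basis" of a neural network: one
# artificial neuron is just a weighted sum passed through a threshold.
# Weights and inputs are arbitrary illustrations. Any medium that can
# perform this arithmetic - silicon, wood, or living cells - computes
# the same function.

def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # simple step activation

print(neuron([1.0, 0.5], [0.8, -0.4], bias=-0.3))  # 0.8 - 0.2 - 0.3 > 0 -> 1
```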

While this research is revolutionary, procedures involving cell-based neural networks have ethicists, law professors, philosophers and scientists raising concerns about using cerebral organoids — what you might think of as minibrains — for neurological research. Regular human brains are generally unavailable for study, so minibrains are a great research alternative. However, because minibrains are, well, actual brain material, ethicists worry they could become conscious if they reach a certain level of complexity. It takes only a small leap to raise these same concerns about growing cell-based neural networks. After all, neural networks are designed to work in the same way as a brain. So, what’s the difference between a human (or more likely, simple organism) brain and a neural network in a petri dish? And what if a research team combined these two approaches and grew neural networks out of human brain cells? All of these questions and more are rapidly forcing their way into public discussion as our biotechnology advances.

And it doesn’t stop there. The next big thing could actually be more advanced organisms like the slime mould. Believe it or not, slime moulds are solving organisational problems that have confounded the brightest mathematicians in human history, and the mould isn’t even trying. Japanese and British researchers created a miniature map of Tokyo, stuck a bit of Physarum polycephalum mould on Tokyo, and put some oatmeal on other major cities in the Greater Tokyo Area. Within a week, the mould had recreated a pathway model of Tokyo’s train system, simply by doing what mould does best: growing and seeking out nutrients. The New Jersey Institute of Technology boasts a “Swarm Lab” that studies “swarm intelligence” found in everything from colonies of ants to dollops of mould, in an attempt to learn how organisms make decisions — research that could one day refine the algorithms behind self-driving cars, among other things.

Network design by slime mould is an astounding breakthrough. Consider that when Japan began building its high-speed rail network in the late 1950s, it was financed in part with an $US80 million loan from the World Bank. Adjusting for inflation, that totals more than $US680m, and some estimates put the actual cost of the train system at twice the original loan amount. Of course, a lot of this money was spent on materials and paying construction workers, but using general project cost estimates from a civil engineer, we can guess at a design cost of roughly $US54m. So, give a little mould a week to grow, and it will replicate tens of millions of dollars of design work for practically nothing. Furthermore, Tokyo’s rail system wasn’t designed and built all in one go; the rail system has been in some stage of development since 1872. The design produced by the mould nearly mimicked the final result of more than a century of development.

Network design is no simple task and the problems involved are some of the hardest to solve in computer science, generally requiring lots of approximations and algorithms. The slime mould isn’t concerned about the fancy mathematics, though. It simply spreads out, finds food, and then develops the most energy-efficient way to move nutrients around its mouldy network-body. The researchers involved in this project crunched some numbers and determined that, if constructed, the mould’s design would be “comparable in efficiency, reliability, and cost to the real-world infrastructure of Tokyo’s train network”.
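
For a feel of what “network design” means computationally, here is a crude, illustrative stand-in (not what the researchers or the mould actually did): connecting a handful of made-up city coordinates with minimal total track, using Prim’s minimum-spanning-tree algorithm. Real rail design also weighs redundancy and capacity, which is part of what makes the mould’s solution so impressive.

```python
# A crude stand-in for the mould's job: connect "cities" with minimal
# total track, using Prim's minimum-spanning-tree algorithm. The city
# coordinates are made up; real rail design also weighs redundancy,
# capacity, and cost.
import math

cities = {"Tokyo": (0, 0), "Yokohama": (-1, -2), "Chiba": (3, 0), "Saitama": (0, 2)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x1 - x2, y1 - y2)

connected = {"Tokyo"}
links = []
while len(connected) < len(cities):
    # greedily add the shortest link from the connected set to a new city
    a, b = min(((a, b) for a in connected for b in cities if b not in connected),
               key=lambda pair: dist(*pair))
    connected.add(b)
    links.append((a, b, round(dist(a, b), 2)))

print(links)
```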

The humble slime mould is teaching a lesson many business leaders should heed. Technologies like AI and machine learning are developing at an amazing pace, but we don’t yet know where they’re taking us. What we do know is that just like the mould, environments need to have the right conditions for these new technologies to thrive.

Allan Waddell is founder and CEO of Australian enterprise IT specialist Kablamo.

The (Human) Ethics of Artificial Intelligence


Remember those three laws of robotics penned by Isaac Asimov?

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

As far back as 1942, we were wrestling with the ethical implications of thinking machines. We are rapidly approaching the intersection of silicon and cognition. It is more important than ever that we update Asimov’s laws to ensure that humans are the beneficiaries rather than the victims of artificial intelligence: a concept that is no longer the domain of science fiction.

Our first brush with these concerns at a practical level surfaced a few decades ago when wide-scale automation became a reality. The fear was that hard-working humans would lose their jobs to less expensive and more efficient robot workers. Companies were focused more on short-term profits than on the long-term effects on millions of newly unemployed factory workers who were unprepared to re-enter the labour force.

This ethical dilemma has not been resolved. And with the advent of AI, it might well have been exacerbated. Now that we are that much closer to producing machines that can think for themselves, we must consider ethical implications that even Asimov couldn’t fathom. Before jumping on the AI bandwagon, consider the following:

A Proper Education

Intelligence without ethics is dangerous. Human intelligence is tempered with human empathy. This is not a bug, but a feature. We do not want people making laws that affect us who demonstrate a lack of human empathy. And we should be just as wary of machines that wield intelligence sans empathy.

Part of the way humans learn empathy is via the process of education. From the time a baby is placed in a mother’s arms, she is engaged in the process of education. Something very similar happens with AI.

Ethical AI begins with ethical training. AI is not merely programmed; it is trained like a human. It learns like a human.
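
The distinction is worth making concrete. Below is a toy sketch, illustrative only: a hand-programmed rule next to a “trained” classifier whose behaviour comes entirely from its examples, which is precisely why the choice of examples carries ethical weight.

```python
# "Programmed" vs. "trained", in miniature (a toy, illustrative only).
# The first classifier's logic is written by hand; the second induces its
# behaviour entirely from labelled examples, so skewed examples would
# produce skewed behaviour.

def programmed_classifier(x):
    return "high" if x > 5 else "low"   # rule supplied by a human

training_data = [(1, "low"), (2, "low"), (8, "high"), (9, "high")]

def trained_classifier(x):
    # 1-nearest-neighbour: copy the label of the closest training example
    nearest = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(programmed_classifier(7), trained_classifier(7))  # high high
```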

To learn the right lessons, AI has to be trained in the right way, or ethical problems will inevitably arise. They already are. Recently, facial recognition systems for law enforcement have come under fire because of their tendency to misidentify people of colour as criminals based on mugshots.

This is a training issue. If AI is predominantly trained on white, Euro-American faces, it will disadvantage ethnic groups and minorities. As a startup, you cannot settle for the first off-the-shelf solution that gives you a short-term advantage. You have to vet your AI solutions like you do employees, ensuring to the best of your ability that they have received proper, ethical training.
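
What might that vetting look like in practice? One hedged, illustrative possibility (the data and threshold below are assumptions, not an industry standard) is an acceptance test that refuses to deploy a model whose accuracy differs too much between demographic groups.

```python
# One illustrative form of "vetting": an acceptance test that rejects a
# model whose accuracy differs too much between demographic groups. The
# evaluation data and the threshold are assumptions, not a standard.

MAX_ACCURACY_GAP = 0.05

# (group, prediction_was_correct) - hypothetical evaluation results
results = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]

accuracy = {}
for group in sorted({g for g, _ in results}):
    outcomes = [ok for g, ok in results if g == group]
    accuracy[group] = sum(outcomes) / len(outcomes)

gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy)
print("deploy" if gap <= MAX_ACCURACY_GAP else f"reject: accuracy gap {gap:.2f}")
```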

When Better Is Worse

Every company wants the best, most efficient, most productive processes possible. But there are times when better is worse. One example is customer service. There is a good chance that in time, an AI solution such as Google Assistant will do a better job at making appointments, answering questions, and making outgoing sales calls. When that time comes, AI still might not be the right solution.

The simple truth is that humans want to talk to other humans. They do not want to talk to technology when they are stressed. If a person is calling customer service, something has gone awry, and they are experiencing a heightened state of stress.

What they need is human contact. They want their problem resolved. But they also want to feel like they have been heard. That is something AI cannot, and possibly should not, attempt to do. The decision to eliminate humans from call centres has ethical implications.

You have to consider the ethical implications of every AI deployment in your business. If there is one thing we have learned about customer behaviour, it is that customers repeatedly bypass better systems for more human-friendly ones.

The Final Analysis

AI is inevitable. You might already be using it right now without being aware of it. There is no doubt that the proper application of AI will make you more efficient and save you money. It may even help you avoid blunders that would otherwise have put an end to your venture.

That presents us with the temptation to rely on AI as the final arbiter of all matters relating to our business ventures. What we have to remember is that all business is a human-to-human enterprise. AI can make it better. But you should always reserve the final analysis for yourself: the human with everything on the line.


AI - The Outer Reaches


We'll be the first to admit that AI is scary.  

Recently, Ibrahim Diallo, a software developer, was fired by an AI. He wrote a blog post about the experience. It's an eye-opener, and a reminder of something we talk about a lot: AI without the right implementation doesn't instantly solve problems, and it can easily create more.

But there's a larger story worth exploring here, and it's about the kind of world AI may eventually bring us. From visions of super-intelligent machines seizing control of the internet, to robot overlords with little regard for human life, to complete human obsolescence, there are an unthinkable number of ways this whole AI thing could go horribly, horribly wrong. And yet many scientists, futurists, cyberneticists, and transhumanists are downright excited for the coming dawn of the post-information age lying just beyond the invention of the first superhuman, general AI. To get in on the hype (and maybe quell some worries), let's take a look at what exactly has all these smart folks giddy. What are the possibilities of a world with supergenius silicon titans? Here are a few, roughly in order of expected arrival, though a technology this powerful surely holds many more mind-bending possibilities than these.

Filtering noise, making connections, and expanding knowledge (5-10 years away)

What do vitamins, horse racing, and the price of bread have in common? Possibly nothing. But a general AI can figure it out for sure, and also find the other hidden connections between, essentially, everything. After all, as naturalist John Muir put it: “When we try to pick out anything by itself, we find it hitched to everything else in the Universe.” This sentiment is often repeated, but harnessing the power of a general AI offers a real, viable path to discovering the depth of connections present in the world.

Today, machine-learning algorithms often work by looking at data, making some tweaks, and charting responses to determine how one variable affects another. Interpreting these changes allows the AI to predict outcomes, determine how to act, or simply classify information. And when a general AI gets to work on the huge amounts of data that already exist (and will continue to be generated), we as people will be able to learn so much. A growing percentage of scientific research centers on secondary data analysis, or searching for new trends in data that have already been collected. Secondary data-driven research is extremely low-cost, efficient, and accessible, in fields ranging from sociology to medicine. A sophisticated general AI could conduct millions of these studies, 24 hours a day, every day, in every field, at astounding speeds. And, with advancements in natural language processing, the AI could publish the research for people to read and understand. Of course, this sort of connection-searching is problematic; after all, correlation does not equal causation (here's a website that does a great job of pointing out examples of false-causation trends). However, the benefit of a true general AI is that it will be able to discern whether the correlations it spots are true connections or merely coincidences, at least as well as a human researcher, and much, much faster. Because testing connections and processing data are the main current uses for artificial intelligence, you can be sure there will be rapid development in this field in the coming years.
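
The trap is easy to demonstrate. The sketch below scans every pair of variables for correlation the way a naive connection-hunter would, except the data is pure random noise, so anything that looks like a connection is exactly the kind of coincidence a true general AI would need to rule out.

```python
# A toy "connection hunter": scan every pair of variables for correlation.
# The data is pure random noise, so any strong-looking correlation here is
# the false-causation trap in action. Requires Python 3.10+ for
# statistics.correlation (Pearson's r).
import itertools
import random
import statistics

random.seed(1)
variables = {name: [random.random() for _ in range(20)] for name in "ABCDE"}

for a, b in itertools.combinations(variables, 2):
    r = statistics.correlation(variables[a], variables[b])
    flag = "  <- looks like a 'connection', but it's noise" if abs(r) > 0.4 else ""
    print(f"{a} ~ {b}: r = {r:+.2f}{flag}")
```

Scale this from five variables to five million and the number of chance "connections" explodes, which is why discernment, not raw search speed, is the hard part.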

The end of work (10-20 years away)

Yes, this has been promised before, and is often promised again after new scientific breakthroughs, but it could really be it this time. Automation in factories didn't free us from work, exactly, but it did take over jobs within its domain, such as mass-production and precision-assembly positions. Self-driving cars are on the verge of scooping up 5 million transportation jobs in the United States alone. And for a general AI, its domain bridges both computation and creative thought. So, just as automation has taken over factory positions, general AI could replace skilled cognitive workers like programmers, engineers, and mathematicians, and even artists like poets, painters and musicians, leaving a whole lot of nothing for people to do all day (for some cool examples of AI-created art, look here, or here, or here).

Even if work doesn't end entirely, though, count on your workload and life changing for the better. Have you ever wished for a great personal assistant, someone who can scan through your email and shoot back answers to basic questions automatically, someone who will schedule and manage your appointments, someone who can pick up a little slack when you're feeling slow? Google is on it, already rolling out a "basic" reservation-making AI that can call restaurants, make reservations, and ask for opening hours, without the restaurants even knowing they're talking to a machine. Seriously, natural language processing has become sophisticated enough to trick humans in some standard cases, like asking for a table for five at your favorite Chinese restaurant (you can see a video of the announcement here). Soon enough, the AI will function as a full-on secretary, available to everyone, and some of your daily work headache will be alleviated by a tireless computer assistant.

Intelligence upgrades (and AI camouflage, too?) (20-30 years away)

People have been trying to directly interface with computers since the 1970s, when the first brain-machine interface (BMI) was invented. So far, the development has been therapeutic: alleviating symptoms of neurological disorders like epilepsy and ALS, restoring hearing with cochlear implants, and helping quadriplegics move mouse cursors and robotic arms directly with their thoughts. It's only a matter of time before enhancements are developed for neurologically healthy people. Elon Musk has already thrown his hat in the ring with Neuralink, a company aiming to develop the first surgically implanted, whole-brain computer interface, for the express purposes of enhancing human computational limits and, secondarily, connecting human intelligence with artificial intelligence (a great, really long write-up for the interested here). Not only does Musk hope that such a system could allow for offloading mental tasks to computers (would you like to be able to compute 196003x3313 in your head?), he also hopes it'll give us a lifeline when the AIs rise up. From Musk's perspective, if you can't beat them, why not become so intricately intertwined that destroying one would destroy the other? It's a pretty neat survival strategy, blurring the line between us and the machine so that any self-preservation instincts in the new machine-consciousnesses would automatically extend to us people, too. A hard pill to swallow, sure, but if we really become second best in the face of general AI, mutually assured destruction can be a good deal for the little guy (us).

Human immortality (30-??? years away)

Here’s a biggie: general AI could offer a viable path to massive extensions in human life, depending on how far one is willing to stretch the concept of “life.” Is a mind (and presumably, a consciousness) without a body “alive”? If you say yes, you could be the first in line to get your brain uploaded. By encoding your specific neural pathways into a general AI, it’s possible you could continue life, free from physical ailments, disease, and accidents, snugly inside a computer. And if your new computer architecture allows connections to form within itself, and disconnects old connections no longer considered useful, well, have you lost much besides your squishy, meaty body and brain? Many techies say no, and amazingly, the first companies promising brain uploads are already starting to crop up. A particularly grisly startup called Nectome has developed an embalming procedure so advanced that every synapse in your brain is identifiable under an electron microscope, and will remain perfectly preserved for hundreds of years. The kicker? The process is 100 percent fatal. In order to embalm the brain so efficiently, the preservation fluids need to be pumped in while you’re still alive, euthanizing you but preserving the brain. Then your perfect brain can sit around for any amount of time until brain-uploading technology is developed, and Nectome will resurrect you in a computer. Not surprisingly, their target market is terminally ill patients. And who knows? It just might work.

Not only could life be extended by physical protections and material upgrades, it could also be extended, at least perceptually, by upgrading processing speeds (Warning: far-fetched sci-fi logic incoming). The human brain has a few fundamental frequencies, called brain waves, that seem to dictate perceived consciousness. These waves range in frequency from 0.5 Hz when sleeping deeply to 30 Hz when absolutely alert, and other estimates put the maximum possible neuron firing rate at about 1000 Hz. Now, consider that the 8th-generation Intel i7 processor released last year is capable of pulling 4.7 gigahertz (that's 4,700,000,000 Hz!), and try to imagine what it would be like to live in one of those. Would you think 4 billion times faster? Would you perceive time as passing 4 billion times slower? And if you, a mere mortal human, could pack 4 billion seconds (that's nearly 127 years) into every second, would you? Even if you lived a normal 70 years (about 2.2 billion real seconds), a little back-of-the-envelope math gives a perceived lifespan of some 8.8 quintillion seconds (8,800,000,000,000,000,000 of them!), or roughly 280 billion perceived years. It's been 13.8 billion years since the Big Bang, so that's more than 20 times the age of the universe. And all this has been calculated using processor speeds that already exist. Who knows what our processors will be capable of in 2045 (Ray Kurzweil's estimate for the first human-computer merger)?
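
For the curious, here are those back-of-the-envelope sums made explicit. The speed-up ratio is the article's playful assumption, not a real measure of thinking speed.

```python
# The article's back-of-the-envelope sums made explicit. The "speed-up"
# (CPU clock vs. neural firing rates) is the article's playful assumption,
# not a real measure of thinking speed.

speedup = 4e9                                  # "4 billion times faster"
seconds_per_year = 60 * 60 * 24 * 365.25
real_life_seconds = 70 * seconds_per_year      # a normal 70-year life
perceived_seconds = real_life_seconds * speedup
perceived_years = perceived_seconds / seconds_per_year  # = 70 * speedup

print(f"perceived lifespan: {perceived_years:.1e} years")               # ~2.8e11
print(f"times the age of the universe: {perceived_years / 13.8e9:.0f}") # ~20
```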

The takeaway

Obviously, the possibilities of generally-intelligent AI are enormous. As AI technology rapidly progresses, the future is looking more and more like a wild ride. No matter who you are, AI will have both something tempting to offer and something appalling that will make your skin crawl. Whether general AI will provide lightning-speed research or human-computer cyborgs is still unclear, but we can be sure the artificial intelligence future holds some drastic changes to human work, health, and the world as we know it. And look out; it's all coming sooner than you think. One thing's certain, though, especially if you are an enterprise in 2018: you still need to get your own IT house in order, and AI won't be up to that job for a while.

Is your computer a racist? We know AI’s ‘how’ but we need ‘why’


As more decisions are handed over to AI systems, regulating their behaviour could become increasingly important. Part of the challenge stems from the fact that while we can understand “how” an AI has reached a certain conclusion, discovering the “why” is much more problematic.
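
A toy linear model illustrates the gap. The "how" is completely transparent: the weights below determine every decision the model makes. But nothing in the numbers tells you "why" those weights are what they are, whether justified signal or inherited bias (the suggestive postcode weight is a deliberately hypothetical example).

```python
# The "how" is fully transparent: these learned weights determine every
# decision this toy linear model makes. The "why" - whether a weight
# encodes justified signal or inherited bias (e.g. postcode as a proxy
# for race) - is not in the numbers. All values are hypothetical.
weights = {"income": 0.7, "postcode": -0.9, "age": 0.1}

def score(applicant):
    return sum(weights[k] * v for k, v in applicant.items())

applicant = {"income": 1.2, "postcode": 1.0, "age": 0.5}
print(score(applicant))  # we can trace the arithmetic, not the rationale
```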