AI

Explainer: What are deepfakes?


Check out Allan’s thoughts on deepfakes in this CMO piece, reproduced below (also available on CMO).

Have you seen the video of US House Speaker Nancy Pelosi appearing to be drunk and slurring her words? Or perhaps you’ve seen the one of Facebook CEO Mark Zuckerberg joking about knowing the public’s secrets? Or the videos of Rasputin singing Beyoncé, Andy Warhol eating a Burger King burger or Salvador Dalí being brought back to life?

In 2018, former US president, Barack Obama, warned in a video about enemies making it look like anyone is saying anything at any time.

While seeing long-dead artists animated in 2019 can seem like a bit of fun, it hides a darker problem about authenticity on the Web. And in an era of heightened concerns about the fake news label being thrown around to undermine news that doesn’t suit, deepfakes now look set to accelerate a crisis in trust in the Web.

Deepfakes are essentially videos, and in some cases audio, that purportedly show someone doing or saying something they haven’t in real life. They may show real people such as politicians, historical figures or just anonymous individuals. And they may be entirely manufactured, such as the ones showing long-dead people like Andy Warhol or Salvador Dalí speaking or interacting with things they never could have during their lifetime.

Machine learning brings photos to life

Simon Smith, cyber forensic investigator and cybercrime expert witness, told CMO the term 'deepfake' comes from the deep learning tech used to manufacture the fake videos.

“The very best technology is used to map out every muscle movement of a person’s face [if looking at the face only] and replicated into a learning algorithm and associated with a word, phrase, attitude or feeling,” he explained.

“Once enough learning has been attained, it is possible to attain an almost life-like effect with the assistance of morphing graphical technology that takes into account the person’s age, muscles that move when other muscles move, stretching and expressions to give a realistic approach.”

The answer to why we are seeing and hearing about deepfakes now lies in a confluence of advances in technology and the work of the darker parts of the Web.

Exponential growth in computing power is behind the surge of deepfakes, according to Allan Waddell, founder and co-CEO of cloud-based enterprise software outfit Kablamo. Adding fuel to the fire is the growing sophistication of artificial intelligence (AI) and pattern recognition on image datasets.

“It’s taking a set of images, and it’s been a large number of images, to create models to overlay on an existing person. Traditionally, the more images you have, the more accurate it becomes," he said. "There’s been breakthroughs in the number of images and datasets needed to create these [deepfakes].”
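
To make the mechanics a little more concrete, below is a minimal sketch of the shared-encoder, dual-decoder autoencoder approach popularised by early face-swap tools. It is purely illustrative: the network sizes, 64x64 input resolution and training loop are assumptions for readability, not the specific technology Smith or Waddell describe.

```python
# Toy sketch of a face-swap style autoencoder: one shared encoder, one decoder
# per person. All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared latent code (pose, expression, lighting)
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct person A's face
decoder_b = Decoder()  # learns to reconstruct person B's face
optimiser = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """One step: each decoder learns its own person; the shared encoder learns what they have in common."""
    optimiser.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimiser.step()
    return loss.item()

# The swap happens at inference time: encode a frame of person A, decode with person B's decoder.
# fake_b = decoder_b(encoder(frame_of_a))
```

This also explains Waddell’s point about image counts: the decoders can only reproduce expressions and angles they have seen during training, so richer datasets yield more convincing fakes.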

The rise of fake marketing?

Once upon a time, fake videos might have been created for a bit of humour, such as politicians or celebrities given fake lip-readings that had them say something self-parodying. Mostly they were harmless because they were easily identified as fake and exaggerated enough to defy believability.

However, such advances in machine learning technology have enabled the creation of realistic-looking videos. Combine that with the pervasiveness of social media, where fake news and videos can spread without proper scrutiny or verification, and you bring the issue of deepfakes to the fore.

“The technology has been used for many years to help animatronics by mapping out joints and movements in cartoons. This is one step above that and [in the wrong hands] could cause identity theft, false impersonation, setup for crimes a person did not commit and much more serious repercussions,” Smith told CMO.

The Ethics Centre: Injecting artificial intelligence with human empathy


Proud to see Allan’s latest piece on AI published by The Ethics Centre. The full text follows:

The great promise of artificial intelligence is efficiency. The finely tuned mechanics of AI will free up societies to explore new, softer skills while industries thrive on automation.

However, if we’ve learned anything from the great promise of the Internet – which was supposed to bring equality by leveling the playing field – it’s clear new technologies can be rife with complications unwittingly introduced by the humans who created them.

The rise of artificial intelligence is exciting, but the drive toward efficiency must not happen without a corresponding push for strong ethics to guide the process. Otherwise, the advancements of AI will be undercut by human fallibility and biases. This is as true for AI’s application in the pursuit of social justice as it is in basic business practices like customer service.

Empathy

The ethical questions surrounding AI have long been the subject of science fiction, but today they are quickly becoming real-world concerns. Human intelligence has a direct relationship to human empathy. If this sensitivity doesn’t translate into artificial intelligence the consequences could be dire. We must examine how humans learn in order to build an ethical education process for AI.

AI is not merely programmed – it is trained like a human. If AI doesn’t learn the right lessons, ethical problems will inevitably arise. We’ve already seen examples, such as the tendency of facial recognition software to misidentify people of colour as criminals.

Biased AI

In the United States, a piece of software called Correctional Offender Management Profiling for Alternative Sanctions (Compas) was used to assess the risk of defendants reoffending and had an impact on their sentencing. Compas was found to be twice as likely to misclassify non-white defendants as higher risk offenders, while white defendants were misclassified as lower risk much more often than non-white defendants. This is a training issue. If AI is predominantly trained on data from white populations, it will disadvantage minorities.
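
To see how a claim like "twice as likely to misclassify" is established, here is a minimal sketch of the underlying measurement: comparing false-positive rates between groups. The handful of records is made up purely for illustration and is not the actual Compas data or analysis.

```python
# Minimal sketch of a group-fairness check: compare the false-positive rate
# (flagged high risk but did not reoffend) across demographic groups.
# The records below are invented for illustration only.
from collections import defaultdict

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("white", True, False), ("white", False, False), ("white", False, True), ("white", False, False),
    ("non-white", True, False), ("non-white", True, False), ("non-white", False, False), ("non-white", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return 0.0
    return sum(1 for r in non_reoffenders if r[1]) / len(non_reoffenders)

by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

for group, rows in by_group.items():
    print(f"{group}: false-positive rate = {false_positive_rate(rows):.2f}")
```

A large gap between those two rates is exactly the kind of disparity reported for Compas, and it is something any team deploying a risk model can check before going live.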

This example might seem far removed from us here in Australia but consider the consequences if it were in place here. What if a similar technology was being used at airports for customs checks, or part of a pre-screening process used by recruiters and employment agencies?

“Human intelligence has a direct relationship to human empathy.”

If racism and other forms of discrimination are unintentionally programmed into AI, not only will it mirror many of the failings of analog society, but it could magnify them.

While heightened instances of injustice are obviously unacceptable outcomes for AI, there are additional possibilities that don’t serve our best interests and should be avoided. The foremost example of this is in customer service.

AI vs human customer service

Every business wants the most efficient and productive processes possible but sometimes better is actually worse. Eventually, an AI solution will do a better job at making appointments, answering questions, and handling phone calls. When that time comes, AI might not always be the right solution.

Particularly with more complex matters, humans want to talk to other humans. Not only do they want their problem resolved, but they want to feel like they’ve been heard. They want empathy. This is something AI cannot do.

AI is inevitable. In fact, you’re probably already using it without being aware of it. There is no doubt that the proper application of AI will make us more efficient as a society, but relying blindly on AI is inadvisable.

We must be aware of our biases when creating new technologies and do everything in our power to ensure they aren’t baked into algorithms. As more functions are handed over to AI, we must also remember that sometimes there’s no substitute for human-to-human interaction.

After all, we’re only human.

Allan Waddell is founder and Co-CEO of Kablamo, an Australian cloud-based tech software company.

Neobanks and the coming disruption?


Check out the full text of Angus Dorney’s take on neobanks, which originally appeared in The Australian:

When the internet challenged the global media industry, companies were confronted with a stark choice — innovate or perish. Smart people saw the need to prepare for the digital era, and those that successfully digitised their offering stayed ahead of their disrupters.

Despite the media industry constantly managing financial pressures and shrinking budgets, media leaders were quick to innovate because they had to. They drew from a relatively limited war chest and embraced innovation throughout the organisation. In fact, those that failed to do so endured major business losses, including many that were fatal.

The media companies that survived were acutely aware of the need to innovate and actively sought to do so. From a corporate culture perspective, the entire organisation saw the need for technical and product innovation and it was seized enthusiastically throughout the business.

The wide-scale disruption visited upon the media industry should have served as a warning for other sectors. Yet, some industries failed to learn from the lessons of others.

Increasingly, smart people in financial services are passionately waving their hands as the industry finds itself similarly on the brink of a wave of disruption — in this instance, the rise of neobanks.

A neobank is a branchless financial services provider that operates exclusively with customers on digital interfaces, like mobile devices. Uninhibited by the practices of traditional banking, these upstarts are free to weaponise technology to their advantage, in what is essentially a form of guerilla warfare against the incumbents. Much like Airbnb and Uber shook the hospitality and transportation industries to their core, neobanks are set to challenge the banking sector.

In Australia, about 16 per cent of the 2.1 million adults looking to change their main financial institution indicated in a recent Nielsen study that they’d prefer to use a digital bank, a five-percentage-point increase on the previous Nielsen study.

Since the big banks historically operate outside the start-up mentality, and in some instances have been known to dilute the essence of the start-ups they acquire, it’s important to consider what that might mean at scale.

As new entrants to the market, neobanks could be the catalyst that makes the financial services sector sit up and take innovation seriously. It’s critical that when this happens, executives and other leaders understand that innovation can’t happen in isolation. Like the media organisations that got ahead of their disrupters, incumbents must embrace innovation throughout the entire organisation if it’s to have any chance of success.

The shift is already well underway. Neobanks are on the verge of exploding in the Australian market. A surge of applications for restricted banking licences has been submitted to the prudential regulator, APRA, with the first licence going to a start-up called Volt, the first new bank created in Australia in 28 years. Start-ups like Xinja and 86 400 will also be important to monitor.

Despite their position as disrupters, neobanks have their own challenges to overcome. As a player in a heavily regulated industry, they can’t afford to forget the importance of network and data security, and governance. While chasing rapid innovation, some fintechs can concentrate too much on their own product features while deprioritising the development of APIs that will enable them to integrate with other products and service providers (including incumbent players). If not managed correctly, this inability to integrate with other platforms can become a major handbrake to future growth.
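
As a small illustration of that integration point, the sketch below shows the kind of partner-facing endpoint a neobank might expose. The route, data and framework (Flask) are hypothetical and not any particular provider’s API; the value is in publishing a stable, documented interface that other products can build against.

```python
# Hypothetical sketch of a partner-facing balance endpoint.
# A real implementation would sit behind consent/OAuth checks and a core-banking backend.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in data store for the sketch.
ACCOUNTS = {"acc-001": {"owner": "Jane Citizen", "balance_cents": 152_340, "currency": "AUD"}}

@app.get("/v1/accounts/<account_id>/balance")
def get_balance(account_id):
    account = ACCOUNTS.get(account_id)
    if account is None:
        return jsonify({"error": "account not found"}), 404
    # A stable, versioned response shape is what lets partners integrate without bespoke work.
    return jsonify({
        "accountId": account_id,
        "balance": account["balance_cents"] / 100,
        "currency": account["currency"],
    })

if __name__ == "__main__":
    app.run(port=8080)
```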

On the other hand, most incumbent companies in the financial services sector have been slower to innovate because they haven’t yet been forced to. When they do try and innovate, it is not always an organisational priority and is often an experiment made in isolation, separate from the rest of the business.

The finance industry, however, has an advantage that many media organisations don’t — access to massive war chests of capital, much of it within their own control. Whereas the media industry embraced innovation with comparatively limited resources, financial services firms have significant capital available to invest. All the ingredients are there for the incumbents to lead the trend toward truly branchless digital banking before the newcomers beat them to it, but the big banks must first find the appetite to do so.

Financial services executives need to monitor the nascent neobanks closely and keep pace with their hunger for innovation. By learning from the media industry, they can get ahead of the inevitable change and shape it before it’s too late.

How facial recognition can unlock video archive value


Kablamo’s co-CEO, Angus Dorney, recently spoke to ComputerWorld about how facial recognition and AI can unlock tremendous amounts of value in video archives. An excerpt is below.

Archive value

The capability also has enterprise applications – particularly for media organisations wanting to find relevant footage or stills in their video archives.

“They have millions of hours of video content and it’s typically stored in multiple legacy systems, there is no or varying meta-tagging, and the search processes for finding content are extremely old and they’re manual and they cut across multiple systems,” explains Angus Dorney, co-CEO of Sydney and Melbourne-based cloud technology firm Kablamo.

“If you’re a newsmaker in a media organisation or work for a government archive and somebody asks you for a specific piece of footage it’s very difficult and time consuming and expensive to try and find,” he adds.

Kablamo builds solutions that have a “YouTube-like user experience” to find relevant archive footage. Using AWS face and object recognition tools, users simply type in a person or thing “and get a list back of prioritised rankings, where it is, and be able to click and access that example right away,” Dorney – a former Rackspace general manager – says.
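
A rough sketch of that kind of pipeline, using the AWS Rekognition video APIs via boto3, might look like the following. The bucket, collection and file names are placeholders, the face collection is assumed to already exist, and a production build would add error handling, pagination and a proper search index rather than printing matches.

```python
# Hedged sketch: index a reference face, run an asynchronous face search over an
# archived video, then list the timestamps where that person appears.
import time
import boto3

rekognition = boto3.client("rekognition", region_name="ap-southeast-2")

# 1. Index a reference still of the person into an existing face collection.
rekognition.index_faces(
    CollectionId="archive-people",
    Image={"S3Object": {"Bucket": "my-archive-bucket", "Name": "stills/jane-doe.jpg"}},
    ExternalImageId="jane-doe",
)

# 2. Start an asynchronous face search over an archived video in S3.
job = rekognition.start_face_search(
    Video={"S3Object": {"Bucket": "my-archive-bucket", "Name": "footage/1998-interview.mp4"}},
    CollectionId="archive-people",
)

# 3. Poll until the job finishes, then report where the person was found.
while True:
    result = rekognition.get_face_search(JobId=job["JobId"])
    if result["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(5)

for person in result.get("Persons", []):
    for match in person.get("FaceMatches", []):
        print(person["Timestamp"], match["Face"]["ExternalImageId"], round(match["Similarity"], 1))
```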

The machine learning models behind the capability, over time, can refine and adjust their behaviour, making results more accurate and more useful to users.

“You really have a computer starting to function like a human brain around these things which is incredibly exciting,” Dorney adds.

Allan Talks AI and Slime Moulds in The Australian


The future of tech?

The Australian has just published Allan Waddell’s take on the biological future of AI. The original text appears below:

Artificial intelligence is taking off. Virtual assistants, computer chips, cameras and software packages are increasingly taking advantage of machine learning to create pseudo-intelligent, versatile problem-solvers. Neural networks, AI modelled on the human brain, strike fear into anyone convinced that AI is already too close to being alive. The truth is it has always been easy to tell the artificial and the living apart — until now.

This biological-artificial distinction is about to get blurrier, and all of us need to pay attention. Among other developments, researchers at Lehigh University recently secured funding to grow neural networks out of living cells. Essentially, the researchers are going to recreate the neural-network architecture of an artificially intelligent algorithm using living cells. Theoretically, the algorithm should work identically in a petri dish as it does in a computer; the physical medium of a neural network is irrelevant to the computation. This is a property of computers for which Justin Garson coined the term “medium independence”. In 2003, Garson said the medium used for computation didn’t matter — a computer could be made out of silicon or wood — as long as the logical basis of the computation was unchanged.

While this research is revolutionary, procedures involving cell-based neural networks have ethicists, law professors, philosophers and scientists raising concerns about using cerebral organoids — what you might think of as minibrains — for neurological research. Regular human brains are generally unavailable for study, so minibrains are a great research alternative. However, because minibrains are, well, actual brain material, ethicists worry they could become conscious if they reach a certain level of complexity. It takes only a small leap to raise these same concerns about growing cell-based neural networks. After all, neural networks are designed to work in the same way as a brain. So, what’s the difference between a human (or more likely, simple organism) brain and a neural network in a petri dish? And what if a research team combined these two approaches and grew neural networks out of human brain cells? All of these questions and more are rapidly forcing their way into public discussion as our biotechnology advances.

And it doesn’t stop there. The next big thing could actually be more advanced organisms like the slime mould. Believe it or not, slime moulds are solving organisational problems that have confounded the brightest mathematicians in human history, and the mould isn’t even trying. Japanese and British researchers created a miniature map of Tokyo, stuck a bit of Physarum polycephalum mould on Tokyo, and put some oatmeal on other major cities in the greater Tokyo area. Within a week, the mould had recreated a pathway model of Tokyo’s train system, simply by doing what mould does best: growing and seeking out nutrients. The New Jersey Institute of Technology boasts a “Swarm Lab” that studies “swarm intelligence” found in everything from colonies of ants to dollops of mould, in an attempt to learn how organisms make decisions — research that could one day refine the algorithms behind self-driving cars, among other things.

Network design by slime mould is an astounding breakthrough. Consider that when Japan began building its high-speed rail network in the late 1950s, it was financed in part with an $US80 million loan from the World Bank. Adjusting for inflation, that totals more than $US680m, and some estimates put the actual cost of the train system at twice the original loan amount. Of course, a lot of this money was spent on materials and paying construction workers, but using general project cost estimates from a civil engineer, we can guess at a design cost of roughly $US54m. So, give a little mould a week to grow, and it will replicate tens of millions of dollars of design work for practically nothing. Furthermore, Tokyo’s rail system wasn’t designed and built all in one go; the rail system has been in some stage of development since 1872. The design produced by the mould nearly mimicked the final result of more than a century of development.

Network design is no simple task and the problems involved are some of the hardest to solve in computer science, generally requiring lots of approximations and algorithms. The slime mould isn’t concerned about the fancy mathematics, though. It simply spreads out, finds food, and then develops the most energy-efficient way to move nutrients around its mouldy network-body. The researchers involved in this project crunched some numbers and determined that, if constructed, the mould’s design would be “comparable in efficiency, reliability, and cost to the real-world infrastructure of Tokyo’s train network”.
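
For a feel of the underlying problem (though not of the researchers’ Physarum model itself), here is a toy sketch that connects a handful of made-up city coordinates with a minimum spanning tree: the cheapest network that still links every node. The real mould goes a step further, trading a little extra length for redundant links that keep the network working when one path fails.

```python
# Toy network-design illustration using networkx; coordinates are invented.
import math
import networkx as nx

cities = {
    "Tokyo": (0.0, 0.0),
    "Yokohama": (-10.0, -25.0),
    "Chiba": (35.0, -5.0),
    "Saitama": (-5.0, 20.0),
    "Hachioji": (-40.0, 5.0),
}

graph = nx.Graph()
for a, (ax, ay) in cities.items():
    for b, (bx, by) in cities.items():
        if a < b:
            graph.add_edge(a, b, weight=math.dist((ax, ay), (bx, by)))

# The minimum spanning tree is the cheapest way to connect every city --
# a crude stand-in for the energy-efficient network the mould converges on.
tree = nx.minimum_spanning_tree(graph)
for a, b, data in tree.edges(data=True):
    print(f"{a} -- {b}: {data['weight']:.1f}")
```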

The humble slime mould is teaching a lesson many business leaders should heed. Technologies like AI and machine learning are developing at an amazing pace, but we don’t yet know where they’re taking us. What we do know is that just like the mould, environments need to have the right conditions for these new technologies to thrive.

Allan Waddell is founder and CEO of Australian enterprise IT specialist Kablamo.

The (Human) Ethics of Artificial Intelligence


Remember those three laws of robotics penned by Isaac Asimov?

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

As far back as 1942, we were wrestling with the ethical implications of thinking machines. We are rapidly approaching the intersection of silicon and cognition. It is more important than ever that we update Asimov’s laws to ensure that humans are the beneficiaries rather than the victims of artificial intelligence, a concept that is no longer in the domain of science fiction.

Our first brush with these concerns at a practical level surfaced a few decades ago when wide-scale automation became a reality. The fear was that hard-working humans would lose their jobs to less expensive and more efficient robot workers. Companies were focused more on short-term profits than the long-term effects on millions of newly unemployed factory workers who were unprepared to reenter the labor force.

This ethical dilemma has not been resolved. And with the advent of AI, it might well have been exacerbated. Now that we are that much closer to producing machines that can think for themselves, we must consider ethical implications that even Asimov couldn’t fathom. Before jumping on the AI bandwagon, consider the following:

A Proper Education

Intelligence without ethics is dangerous. Human intelligence is tempered with human empathy. This is not a bug, but a feature. We do not want people making laws that affect us who demonstrate a lack of human empathy. And we should be just as wary of machines that wield intelligence sans empathy.

Part of the way humans learn empathy is via the process of education. From the time a baby is placed in a mother’s arms, she is engaged in the process of education. Something very similar happens with AI.

Ethical AI begins with ethical training. AI is not merely programmed, it is trained like a human. It learns like a human.

To learn the right lessons, AI has to be trained in the right way, or ethical problems will inevitably arise. They already are. Recently, facial recognition systems used by law enforcement have come under fire for their tendency to misidentify people of colour as criminals based on mugshots.

This is a training issue. If AI is predominantly trained on Euro-American white faces, it will disadvantage ethnic groups and minorities. As a startup, you cannot settle for the first off-the-shelf solution that gives you a short-term advantage. You have to vet your AI solutions like you do employees, ensuring to the best of your ability that they have received proper, ethical training.

When Better Is Worse

Every company wants the best, most efficient, most productive processes possible. But there are times when better is worse. One example is customer service. There is a good chance that in time, an AI solution such as Google Assistant will do a better job at making appointments, answering questions, and making outgoing sales calls. When that time comes, AI still might not be the right solution.

The simple fact is that humans want to talk to other humans. They do not want to talk to technology when they are stressed. If a person is calling for customer service, that means something has gone awry, and they are experiencing a higher state of stress.

What they need is human contact. They want their problem resolved. But they also want to feel like they have been heard. That is something AI cannot, and possibly should not, attempt to do. The decision to eliminate humans from call centres has ethical implications.

You have to consider the ethical implications of every AI deployment in your business. If there is one thing we have learned about customer behaviour, it is that customers repeatedly bypass better systems for more human-friendly ones.

The Final Analysis

AI is inevitable. You might already be using it right now without being aware of it. There is no doubt that the proper application of AI will make you more efficient and save you money. It will even help you avoid blunders that would have put an end to your venture without it.

That presents us with the temptation to rely on AI as the final arbiter of all matters relating to our business ventures. What we have to remember is that all business is a human-to-human enterprise. AI can make it better. But you should always reserve the final analysis for yourself: the human with everything on the line.



A BUSINESS-PRACTICAL WAY TO THINK ABOUT AI

A salesperson representing a company that deploys artificial intelligence solutions walks into a CTO’s office one day and starts talking about the company’s product. You’re the executive assistant observing the conversation. What do you see? Most likely an animated individual talking incessantly about how great his company and its products are, and another person nodding and looking knowledgeable while failing to see the relevance of any of it to his business.

This scenario plays out every day somewhere in the world, and it’s not uncommon for such meetings to end with the prospect holding an even bigger bag of questions than before. Questions like “How appropriate is this product or service to my business, specifically?” are often left unanswered. Nothing like “It will reduce your manpower requirement by 32 FTEs” or “This will speed up your average response time by up to 23%” ever breaches the surface.

If you’re the business owner or executive responsible for the decision, questions like that might leave you wondering whether it’s worth the effort at all. More often than not, the salesperson will gloss over many of the challenges, which only makes the decision even harder.

To help put your train of thought on the right track, we’ve identified some key elements that any business eyeing AI deployment needs to think about no matter what its size. Hopefully, these points will help clarify your position on AI and whether it’s really viable or even necessary for your organization.

First things first.

Quantify It

A lot of businesses fail to calculate the benefits of new AI tech at work in a tangible way.

For example, if you’re responsible for customer support at your company, you need to ask how much AI chatbots will help reduce your issue resolution time.

If you operate an online store like Amazon.com, you need to know whether a machine-learning-based inventory management system will bring down your backorder levels or prevent the system from displaying out-of-stock items. Will customer ratings go up as a result? That’s the sort of tangible measurement that will help you develop your digital work.

It’s the same when adopting any new technology, like moving to a cloud computing environment. Moving to the cloud is a good thing for the most part, but unless you know exactly what workloads should be moved and how that will materially impact your revenue or other key metrics, it’s only going to be a trial and error exercise.

Ask metrics-related questions to help you pinpoint the areas that can positively impact your business. If you’re measuring a specific metric like backorder levels, the AI system you’re considering should ideally move the needle for that metric in the right direction, and considerably so.
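
In practice, "moving the needle" can be as simple as tracking one number before and after the change. The sketch below uses a hypothetical backorder rate with made-up figures to show the shape of that comparison.

```python
# Hypothetical before/after comparison for a single metric (backorder rate).
def backorder_rate(orders_total, orders_backordered):
    return orders_backordered / orders_total

before = backorder_rate(orders_total=12_000, orders_backordered=900)  # baseline quarter
after = backorder_rate(orders_total=12_500, orders_backordered=610)   # quarter after the ML rollout

relative_improvement = (before - after) / before
print(f"Backorder rate: {before:.2%} -> {after:.2%} ({relative_improvement:.1%} relative improvement)")
```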

Just as you quantified the benefits, you also need to consider the cons.

Understand the Downside

AI is a sensitive topic because of the perceived threat to jobs. What’s good for your business metrics might not be too great for your company when it comes to attracting new talent. If a lot of your business depends on bringing in the right people and your company is known for rapidly deploying efficient automated systems that result in job cuts, you might end up facing an HR crunch or labor unrest at some point. Here are some examples:

In early June 2018, casino workers in Las Vegas threatened a city-wide strike against companies like MGM Resorts International. One of their sticking points was the increased level of automation threatening their jobs.

Dock workers around the world regularly call for strikes because of the rampant automation efforts by freight companies and port authorities. In reference to the labor strike in Spain on June 29, 2017, the following comment was made by a leading port technology portal:

“Given the contemporary state of globalised business, it also means we inhabit a world of globalised unions, and with Spain seeking to gain Europe-wide support for its tangle with the EU, it is not impossible to imagine a much larger response to the burgeoning trend of AI automation in the near future.”

After a conversation with several human resources department heads, Deloitte vice chairman and London senior partner Angus Knowles-Cutler said, “The general conclusion was that people have got to (come to) grips with what the technology might do but not the implications for workforces.”

You need hard data to help you make the final decision, especially when facing strong opposition from the company’s stakeholders. It makes it easier to filter out unwanted investments that could hurt you versus those that can take your business to the next level in a positive way. In a future post, we'll explore an example of how one company implemented automation too quickly and too widely, and finally called the whole thing off.

 

Buzzword: AI


In this week's buzzword chat, the team tackles why machine learning and AI aren't going to solve your IT woes -- robust discussion ensues.

Ben: Artificial intelligence, yeah?

Liam: Yeah, that and machine learning probably. The coupling of those two as a discrete entity. I need to do the AI.

Marley: A lot of businesses seem to wrap up that they need AI and machine learning when it’s really just about a bad process issue.

Liam: The amount of times we have companies that have a ridiculous amount of data, we've got so much data we don't know roughly how to use it.  And again, they look at some of the technologies that people are doing around machine learning models and sort of the idea of artificial intelligence... again because there are a lot of groups publishing it. When realistically what they really need to do is just start performing data transformation over the top of it and standard data science pieces on it.

Not every solution needs to be in the machine-learning model. A lot of the data points in terms of what they're actually looking to address and find in terms of the data and about their audience, or looking at some of their outcomes in terms of what they're not doing in terms of their product or in their marketing or in their pitch...you don't need to necessarily put that through copious amounts of machine learning or even the idea of creating an AI engine or compute layer to do that. They are probably the two words that get misused the most if we're honest.

Ben: It's not the future everyone thinks it is. Everyone thinks that you're going to ask AI "How do I..." I don't know, "talk to me in a natural way." And every chatbot that I've ever seen is, is utterly terrible. They're not very intuitive. And everyone's going, "Oh, we want that fantastic chatbot experience," but what they really should be aiming for in AI is something that's, you know, solving the customer problem. And yeah, I think it's just been oversold. It's really, really oversold.

Allan: Yeah, but it's rapidly changing. I mean, I think, I think we're rapidly moving towards a place where the Turing test is going to hold. And I think that is, we're talking about 44% of job replacement by, by AI in the next 20 years across Australia.

Liam: That’s more around robotics and machine learning…

Allan: Actually, no. It’s actually not.

Liam: Okay.

Allan: That's not the physical, it's not the white collar, sorry, it's not the blue collar workers that are going to get hit by AI role-replacement first, it's going to be all the white collar. It’s going to be banking, finance in general, and legal.

Liam: What, accountants, really?

Allan: Yes, effectively.  And software developers. Software development as the processes today is going to change rapidly. We deal a lot with the chat services that would apply at a call center or contact center. And I think you're right. It does need to solve the customer problem first, and I think that's where most companies are really behind is the detailed workflows, you know, consistent form that AI would need in order to be effective. And that's going to take some time just to map it out. To be fair, it's like training your replacement. People are going to resist that change. But yeah, in the background it's really that evolution as well of the humanization of those technologies. I can imagine a time, and it's inside our lifetimes, that you're going to pick up the phone and speak to someone and think you're speaking to somebody, and if it doesn't solve the problem it's moot. So yeah, I completely agree.

Ben: The classic example is I know a guy whose name is Paris, and the chatbot says, "What's your name?" He types in "Paris" and then it goes, "Oh, you want to go to France?" And he's like, "No, my name is Paris. You just asked me that." They can’t even do that right so I’m very skeptical.

Allan: It’s got a way to go.

Liam: I think that’s more an implementation issue there. In terms of the context of what it’s capturing or otherwise.

Ben: Ah, maybe. Maybe...

Liam: If you take Alexa and Google Home, I think there’s an element of their chatbot integration, and again some of their machine learning aspects in terms of NLP space. It has done reasonably well, not to say they’re like the be-all end-all.

Ben: Just saying, I’m yet to see any chatbot that impresses me. And I’ve tried them all.