AI

How facial recognition can unlock video archive value


Kablamo’s co-CEO, Angus Dorney, recently spoke to ComputerWorld about how facial recognition and AI can unlock tremendous amounts of value in video archives. Read the full story here. An excerpt is below.

Archive value

The capability also has enterprise applications – particularly for media organisations wanting to find relevant footage or stills in their video archives.

“They have millions of hours of video content and it’s typically stored in multiple legacy systems, there is no or varying meta-tagging, and the search processes for finding content are extremely old and they’re manual and they cut across multiple systems,” explains Angus Dorney, co-CEO of Sydney and Melbourne-based cloud technology firm Kablamo.

“If you’re a newsmaker in a media organisation or work for a government archive and somebody asks you for a specific piece of footage, it’s very difficult and time-consuming and expensive to try and find,” he adds.

Kablamo builds solutions that have a “YouTube-like user experience” to find relevant archive footage. Using AWS face and object recognition tools, users simply type in a person or thing “and get a list back of prioritised rankings, where it is, and be able to click and access that example right away,” Dorney – a former Rackspace general manager – says.
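The article doesn’t go into implementation detail, but a minimal sketch of the kind of search flow Dorney describes might look like the following, using AWS Rekognition’s face collections and video face search. The bucket, collection and file names here are hypothetical placeholders, and the job polling is simplified.

```python
import boto3

# Minimal sketch of archive face search with AWS Rekognition.
# Bucket, collection and file names are hypothetical placeholders.
rekognition = boto3.client("rekognition", region_name="ap-southeast-2")

# 1. Index a known face (e.g. a press photo) into a face collection.
rekognition.create_collection(CollectionId="archive-faces")
rekognition.index_faces(
    CollectionId="archive-faces",
    Image={"S3Object": {"Bucket": "archive-stills", "Name": "person.jpg"}},
    ExternalImageId="person-of-interest",
)

# 2. Start an asynchronous face search across an archived video.
job = rekognition.start_face_search(
    Video={"S3Object": {"Bucket": "archive-video", "Name": "news-1998-04.mp4"}},
    CollectionId="archive-faces",
)

# 3. Fetch results once the job has finished (a real system would poll
#    or subscribe to an SNS notification). Each match carries a
#    timestamp, so a "YouTube-like" UI can rank hits and jump straight
#    to the relevant frame.
results = rekognition.get_face_search(JobId=job["JobId"])
for person in results.get("Persons", []):
    print(person["Timestamp"], person.get("FaceMatches", []))
```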

Over time, the machine learning models behind the capability refine and adjust their behaviour, making results more accurate and more useful to users.

“You really have a computer starting to function like a human brain around these things which is incredibly exciting,” Dorney adds.

Allan Talks AI and Slime Moulds in The Australian


The future of tech?

The Australian has just published Allan Waddell’s take on the biological future of AI. You can read it at The Australian here, or read the original text below:

Artificial intelligence is taking off. Virtual assistants, computer chips, cameras and software packages are increasingly taking advantage of machine learning to create pseudo-intelligent, versatile problem-solvers. Neural networks, AI modelled on the human brain, strike fear into anyone convinced that AI is already too close to being alive. The truth is it has always been easy to tell the artificial and the living apart — until now.

This biological-artificial distinction is about to get blurrier, and all of us need to pay attention. Among other developments, researchers at Lehigh University recently secured funding to grow neural networks out of living cells. Essentially, the researchers are going to recreate the neural-network architecture of an artificially intelligent algorithm using living cells. Theoretically, the algorithm should work the same in a petri dish as it does in a computer; the physical medium of the neural network is irrelevant in computational systems. This is a property of computers for which Justin Garson coined the term “medium independence”. In 2003, Garson said the medium used for computation didn’t matter — a computer could be made out of silicon or wood — as long as the logical basis of the computation was unchanged.

While this research is revolutionary, procedures involving cell-based neural networks have ethicists, law professors, philosophers and scientists raising concerns about using cerebral organoids — what you might think of as minibrains — for neurological research. Regular human brains are generally unavailable for study, so minibrains are a great research alternative. However, because minibrains are, well, actual brain material, ethicists worry they could become conscious if they reach a certain level of complexity. It takes only a small leap to raise these same concerns about growing cell-based neural networks. After all, neural networks are designed to work in the same way as a brain. So, what’s the difference between a human (or, more likely, simple organism) brain and a neural network in a petri dish? And what if a research team combined these two approaches and grew neural networks out of human brain cells? All of these questions and more are rapidly forcing their way into public discussion as our biotechnology advances.

And it doesn’t stop there. The next big thing could actually be more advanced organisms like the slime mould. Believe it or not, slime moulds are solving organisational problems that have confounded the brightest mathematicians in human history, and the mould isn’t even trying. Japanese and British researchers created a miniature map of Tokyo, stuck a bit of Physarum polycephalum mould on Tokyo, and put some oatmeal on other major cities in the Greater Tokyo Area. Within a week, the mould had recreated a pathway model of Tokyo’s train system, simply by doing what mould does best: growing and seeking out nutrients. The New Jersey Institute of Technology boasts a “Swarm Lab” that studies “swarm intelligence” found in everything from colonies of ants to dollops of mould, in an attempt to learn how organisms make decisions — research that could one day refine the algorithms behind self-driving cars, among other things.

Network design by slime mould is an astounding breakthrough. Consider that when Japan began building its high-speed rail network in the late 1950s, it was financed in part with an $US80 million loan from the World Bank. Adjusting for inflation, that totals more than $US680m, and some estimates put the actual cost of the train system at twice the original loan amount. Of course, a lot of this money was spent on materials and paying construction workers, but using general project cost estimates from a civil engineer, we can guess at a design cost of roughly $US54m. So, give a little mould a week to grow, and it will replicate tens of millions of dollars of design work for practically nothing. Furthermore, Tokyo’s rail system wasn’t designed and built all in one go; the rail system has been in some stage of development since 1872. The design produced by the mould nearly mimicked the final result of more than a century of development.
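As a back-of-the-envelope check on those figures (the inflation multiplier and the roughly 8 per cent design share are assumptions inferred from the numbers above, not taken from the original sources):

```python
# Back-of-the-envelope version of the figures above. The multiplier
# and the design share are assumptions implied by the article's numbers.
loan_1950s = 80e6            # original World Bank loan, $US
inflation_multiplier = 8.5   # implied by the "$US680m" figure
total_today = loan_1950s * inflation_multiplier   # ~ $US680m
design_share = 0.08          # rough design slice of total project cost
design_cost = total_today * design_share          # ~ $US54m
print(f"design cost ~ ${design_cost / 1e6:.0f}m")
```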

Network design is no simple task and the problems involved are some of the hardest to solve in computer science, generally requiring lots of approximations and algorithms. The slime mould isn’t concerned about the fancy mathematics, though. It simply spreads out, finds food, and then develops the most energy-efficient way to move nutrients around its mouldy network-body. The researchers involved in this project crunched some numbers and determined that, if constructed, the mould’s design would be “comparable in efficiency, reliability, and cost to the real-world infrastructure of Tokyo’s train network”.
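To get a feel for the kind of problem the mould solves for free, here is a toy sketch: even the crudest classical stand-in, a minimum spanning tree over a handful of Greater Tokyo hubs, needs an explicit algorithm, and a real rail network optimises for far more than total track length. The coordinates are rough illustrative values.

```python
from math import dist

# Toy network design: connect a few Greater Tokyo hubs with minimal
# total edge length using Prim's minimum-spanning-tree algorithm.
# Coordinates are rough (lat, long) stand-ins, not survey data.
cities = {
    "Tokyo": (35.68, 139.77), "Yokohama": (35.44, 139.64),
    "Chiba": (35.61, 140.12), "Saitama": (35.86, 139.65),
    "Hachioji": (35.66, 139.32),
}

connected = {"Tokyo"}
edges = []
while len(connected) < len(cities):
    # Greedily add the shortest link from the connected set outward,
    # loosely analogous to the mould reinforcing its best tubes.
    a, b = min(
        ((u, v) for u in connected for v in cities if v not in connected),
        key=lambda e: dist(cities[e[0]], cities[e[1]]),
    )
    edges.append((a, b))
    connected.add(b)

print(edges)
```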

The humble slime mould is teaching a lesson many business leaders should heed. Technologies like AI and machine learning are developing at an amazing pace, but we don’t yet know where they’re taking us. What we do know is that just like the mould, environments need to have the right conditions for these new technologies to thrive.

Allan Waddell is founder and CEO of Australian enterprise IT specialist Kablamo.

The (Human) Ethics of Artificial Intelligence


Remember those three laws of robotics penned by Isaac Asimov?

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

As far back as 1942, we were wrestling with the ethical implications of thinking machines. We are rapidly approaching the intersection of silicon and cognition. It is more important than ever that we update Asimov’s laws to ensure that humans are the beneficiaries rather than the victims of artificial intelligence, a concept that is no longer confined to the domain of science fiction.

Our first brush with these concerns at a practical level surfaced a few decades ago when wide-scale automation became a reality. The fear was that hard-working humans would lose their jobs to less expensive and more efficient robot workers. Companies were focused more on short-term profits than on the long-term effects on millions of newly unemployed factory workers who were unprepared to reenter the labor force.

This ethical dilemma has not been resolved. And with the advent of AI, it might well have been exacerbated. Now that we are that much closer to producing machines that can think for themselves, we must consider ethical implications that even Asimov couldn’t fathom. Before jumping on the AI bandwagon, consider the following:

A Proper Education

Intelligence without ethics is dangerous. Human intelligence is tempered with human empathy. This is not a bug, but a feature. We do not want people making laws that affect us who demonstrate a lack of human empathy. And we should be just as wary of machines that wield intelligence sans empathy.

Part of the way humans learn empathy is via the process of education. From the time a baby is placed in a mother’s arms, she is engaged in the process of education. Something very similar happens with AI.

Ethical AI begins with ethical training. AI is not merely programmed; it is trained like a human. It learns like a human.

To learn the right lessons, AI has to be trained in the right way, or ethical problems will inevitably arise. They already have. Recently, facial recognition systems used in law enforcement have come under fire for their tendency to misidentify people of colour as criminals based on mugshots.

This is a training issue. If AI is predominantly trained on Euro-American white faces, it will disadvantage ethnic groups and minorities. As a startup, you cannot settle for the first off-the-shelf solution that gives you a short-term advantage. You have to vet your AI solutions the way you vet employees, ensuring to the best of your ability that they have received proper, ethical training.
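One practical way to do that vetting is to measure error rates per demographic group rather than relying on a single headline accuracy number. A minimal sketch of such a check follows; the records are made-up placeholders, not real evaluation data.

```python
from collections import defaultdict

# Break a face matcher's false-positive rate out by group instead of
# trusting one overall accuracy figure. Records are illustrative.
records = [
    # (group, predicted_match, actual_match)
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_b", True, False),
    ("group_b", True, False), ("group_b", False, False),
]

false_pos = defaultdict(int)   # false positives per group
negatives = defaultdict(int)   # ground-truth non-matches per group
for group, predicted, actual in records:
    if not actual:
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group, total in negatives.items():
    print(f"{group}: false-positive rate {false_pos[group] / total:.0%}")
# A large gap between groups is exactly the red flag described above.
```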

When Better Is Worse

Every company wants the best, most efficient, most productive processes possible. But there are times when better is worse. One example is customer service. There is a good chance that in time, an AI solution such as Google Assistant will do a better job at making appointments, answering questions, and making outgoing sales calls. When that time comes, AI still might not be the right solution.

The simple fact is that humans want to talk to other humans. They do not want to talk to technology when they are stressed. If a person is calling customer service, something has gone awry, and they are experiencing a heightened state of stress.

What they need is human contact. They want their problem resolved. But they also want to feel like they have been heard. That is something AI cannot, and possibly should not, attempt to do. The decision to eliminate humans from call centres has ethical implications.

You have to consider the ethical implications of every AI deployment in your business. If there is one thing we have learned about customer behaviour, it is that customers repeatedly bypass better systems for more human-friendly ones.

The Final Analysis

AI is inevitable. You might already be using it right now without being aware of it. There is no doubt that the proper application of AI will make you more efficient and save you money. It may even help you avoid blunders that would otherwise have put an end to your venture.

That presents us with the temptation to rely on AI as the final arbiter of all matters relating to our business ventures. What we have to remember is that all business is a human-to-human enterprise. AI can make it better. But you should always reserve the final analysis for yourself: the human with everything on the line.



A Business-Practical Way to Think About AI

A salesperson representing a company that deploys artificial intelligence solutions walks into a CTO’s office one day and starts talking about the company’s product. You’re the executive assistant observing the conversation. What do you see? Most likely, an animated individual talking incessantly about how great his company and its products are, and another person nodding and looking knowledgeable while not seeing the relevance of any of it to his business.

This scenario plays out every day somewhere in the world, and it’s not uncommon for such meetings to end with the prospect holding an even bigger bag of questions than before. Questions like “How appropriate is this product or service to my business, specifically?” are often left unanswered. Nothing like “It will reduce your manpower requirement by 32 FTEs” or “This will speed up your average response time by up to 23%” ever breaches the surface.

If you’re the business owner or executive responsible for the decision, questions like that might leave you wondering whether it’s worth the effort at all. More often than not, the salesperson will gloss over many of the challenges, which only makes the decision even harder.

To help put your train of thought on the right track, we’ve identified some key elements that any business eyeing AI deployment needs to think about no matter what its size. Hopefully, these points will help clarify your position on AI and whether it’s really viable or even necessary for your organization.

First things first.

Quantify It

A lot of businesses fail to calculate the benefits of new AI tech in a tangible way.

For example, if you’re responsible for customer support at your company, you need to ask how much AI chatbots will help reduce your issue resolution time.

If you operate an online store like Amazon.com, you need to know whether a machine-learning-based inventory management system will bring down your backorder levels or prevent the system from displaying out-of-stock items. Will customer ratings go up as a result? That’s the sort of tangible measurement that will help you direct your digital efforts.

It’s the same when adopting any new technology, like moving to a cloud computing environment. Moving to the cloud is a good thing for the most part, but unless you know exactly which workloads should be moved and how that will materially impact your revenue or other key metrics, it’s only going to be a trial-and-error exercise.

Ask metrics-related questions to help you pinpoint the areas that can positively impact your business. If you’re measuring a specific metric like backorder levels, the AI system you’re considering should ideally move the needle for that metric in the right direction, and considerably so.
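In concrete terms, that means baselining the metric before a pilot and comparing after. A sketch along these lines, with hypothetical numbers:

```python
# Hypothetical before/after check on the backorder metric discussed
# above: did the system move the needle, and by enough to justify it?
orders_before, backorders_before = 12_000, 900
orders_after, backorders_after = 11_500, 610

rate_before = backorders_before / orders_before   # 7.5%
rate_after = backorders_after / orders_after      # ~5.3%
improvement = (rate_before - rate_after) / rate_before

print(f"backorder rate: {rate_before:.1%} -> {rate_after:.1%} "
      f"({improvement:.0%} relative improvement)")
# Weigh this movement against the system's cost before committing.
```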

Just as you quantified the benefits, you also need to consider the cons.

Understand the Downside

AI is a sensitive topic because of the perceived threat to jobs. What’s good for your business metrics might not be too great for your company when it comes to attracting new talent. If a lot of your business depends on bringing in the right people and your company is known for rapidly deploying efficient automated systems that result in job cuts, you might end up facing an HR crunch or labor unrest at some point. Here are some examples:

In early June 2018, casino workers in Las Vegas threatened a city-wide strike against companies like MGM Resorts International. One of their sticking points was the increased level of automation threatening their jobs.

Dock workers around the world regularly call for strikes because of the rampant automation efforts by freight companies and port authorities. In reference to the labor strike in Spain on June 29, 2017, a leading port technology portal commented:

“Given the contemporary state of globalised business, it also means we inhabit a world of globalised unions, and with Spain seeking to gain Europe-wide support for its tangle with the EU, it is not impossible to imagine a much larger response to the burgeoning trend of AI automation in the near future.”

After a conversation with several human resources department heads, Deloitte vice chairman and London senior partner Angus Knowles-Cutler said, “The general conclusion was that people have got to (come to) grips with what the technology might do but not the implications for workforces.”

You need hard data to help you make the final decision, especially when facing strong opposition from the company’s stakeholders. Hard data makes it easier to separate investments that could hurt you from those that can take your business to the next level. In a future post, we'll explore an example of how one company implemented automation too quickly and too widely, and finally called the whole thing off.

 

Buzzword: AI


In this week's buzzword chat the team tackles how machine learning & AI aren't going to solve your IT woes -- robust discussion ensues.

Ben: Artificial intelligence, yeah?

Liam: Yeah, that and machine learning, probably. The coupling of those two as a discrete entity: "I need to do the AI."

Marley: A lot of businesses convince themselves that they need AI and machine learning when it’s really just a bad process issue.

Liam: The amount of times we see companies with a ridiculous amount of data telling us, "We've got so much data, we don't really know how to use it." And again, they look at some of the technologies people are building around machine learning models and the idea of artificial intelligence, because a lot of groups are publishing on it, when realistically what they really need to do is just start performing data transformation and standard data science over the top of it.

Not every solution needs a machine-learning model. A lot of what they're actually looking to address and find in the data, whether about their audience or about what they're not doing in their product, marketing or pitch, doesn't need to be put through copious amounts of machine learning, let alone a purpose-built AI engine or compute layer. They're probably the two words that get misused the most, if we're honest.

Ben: It's not the future everyone thinks it is. Everyone thinks that you're going to ask AI, "How do I..." I don't know, "talk to me in a natural way." And every chatbot that I've ever seen is utterly terrible. They're not very intuitive. And everyone's going, "Oh, we want that fantastic chatbot experience," but what they really should be aiming for in AI is something that's, you know, solving the customer problem. And yeah, I think it's just been oversold. It's really, really oversold.

Allan: Yeah, but it's rapidly changing. I mean, I think we're rapidly moving towards a place where the Turing test is going to be passed. And we're talking about 44% of jobs being replaced by AI in the next 20 years across Australia.

Liam: That’s more around robotics and machine learning…

Allan: Actually, no. It’s actually not.

Liam: Okay.

Allan: It's not the blue-collar workers that are going to get hit by AI role replacement first; it's going to be the white-collar workers. It's going to be banking, finance in general, and legal.

Liam: What, accountants, really?

Allan: Yes, effectively. And software developers. Software development as it's practised today is going to change rapidly. We deal a lot with the chat services that would apply at a call center or contact center. And I think you're right. It does need to solve the customer problem first, and I think where most companies are really behind is the detailed, consistent workflows that AI would need in order to be effective. And that's going to take some time just to map out. To be fair, it's like training your replacement. People are going to resist that change. But yeah, in the background there's also the evolution of the humanisation of those technologies. I can imagine a time, inside our lifetimes, when you're going to pick up the phone and think you're speaking to a person when you're not, and if it doesn't solve the problem it's moot. So yeah, I completely agree.

Ben: The classic example is I know a guy whose name is Paris, and the chatbot says, "What's your name?" He types in "Paris" and then it goes, "Oh, you want to go to France?" And he's like, "No, my name is Paris. You just asked me that." They can't even get that right, so I'm very skeptical.

Allan: It’s got a way to go.

Liam: I think that's more an implementation issue there, in terms of the context of what it's capturing.

Ben: Ah, maybe. Maybe...

Liam: If you take Alexa and Google Home, I think their chatbot integration, and some of their machine learning work in the NLP space, has done reasonably well. Not to say they're the be-all and end-all.

Ben: Just saying, I’m yet to see any chatbot that impresses me. And I’ve tried them all.