Challenges of applying Artificial Intelligence in IoT using Deep Learning

Written By: Veselin Pizurica

Introduction

The Internet of Things provides us with lots of sensor data. However, the data by itself does not provide value unless we can turn it into actionable, contextualised information. Big data and data visualisation techniques allow us to gain new insights through batch processing and offline analysis. Real-time sensor data analysis and decision-making is often done manually, but to make it scalable it needs to be automated.

Artificial Intelligence provides us with the framework and tools to go beyond trivial real-time decision and automation use cases for IoT, or as Gartner describes it here: "In order to operate in real time, companies must leverage predefined analytical models, rather than ad-hoc models, and use current input data rather than just historical data."

In that respect, we can see IoT evolving through these three phases:

· Phase one - connecting devices
· Phase two - analysing and visualising collected data
· Phase three - automation

The promise of Deep Learning

Deep Learning has fascinating potential for solving various non-linear, multi-dimensional problems. The example below, applied to chaotic systems, is my absolute favourite these days. Chaotic systems are notoriously hard to predict due to their inherent sensitivity to initial conditions (even though this particular result was achieved with reservoir computing, which is a slightly different technique):

[Image: training computers to predict a chaotic system]
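To give a flavour of how reservoir computing works, here is a toy sketch (purely illustrative, not the setup from the result referenced above): a fixed random recurrent "reservoir" is driven by the input signal, and only a linear readout is trained, in this case with ridge regression on a logistic-map series.

```python
import numpy as np

# Toy echo state network (reservoir computing) sketch. The "chaotic" signal
# here is the logistic map with r = 3.9; all sizes are arbitrary choices.
rng = np.random.default_rng(0)

x = np.empty(2000)
x[0] = 0.5
for t in range(len(x) - 1):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

n_res = 200
W_in = rng.uniform(-0.5, 0.5, size=n_res)        # fixed random input weights
W = rng.normal(size=(n_res, n_res))              # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

# Drive the reservoir with the signal and collect its states.
states = np.zeros((len(x) - 1, n_res))
h = np.zeros(n_res)
for t in range(len(x) - 1):
    h = np.tanh(W_in * x[t] + W @ h)
    states[t] = h

# Only the linear readout is trained (ridge regression) to predict x[t + 1].
targets = x[1:]
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ targets)
pred = states @ W_out
print("one-step training MSE:", np.mean((pred - targets) ** 2))
```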

Similarly, Deep Learning is superior to other methods for sensory discovery and enhancement problems: finding machine anomalies in historical records, natural language processing, or detecting cancer in MRI scans.
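As a minimal sketch of the anomaly-detection case (the data here is random and stands in for real sensor history, and the tiny Keras autoencoder is just one possible choice of model): the network learns to reconstruct normal behaviour, and windows that reconstruct poorly are flagged as anomalies.

```python
import numpy as np
from tensorflow import keras

# Stand-in "historical records": 1000 windows of 30 sensor readings each.
X = np.random.normal(size=(1000, 30)).astype("float32")

# A small autoencoder that learns to reconstruct normal behaviour.
model = keras.Sequential([
    keras.Input(shape=(30,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(4, activation="relu"),   # bottleneck
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(30),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, X, epochs=10, batch_size=32, verbose=0)

# Windows that reconstruct poorly are flagged as anomalies.
errors = np.mean((model.predict(X, verbose=0) - X) ** 2, axis=1)
threshold = np.percentile(errors, 99)
anomalies = np.where(errors > threshold)[0]
print("flagged windows:", anomalies)
```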

Deep Learning is also well suited to reinforcement learning, where the problem space is well defined and the environment is known and stable, such as a game of Go or chess:

[Image: features of AI applications]
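The toy sketch below hints at why a known, stable environment makes reinforcement learning tractable. It uses tabular Q-learning on a tiny made-up chain world rather than the deep RL actually used for Go or chess, but the learning loop (act, observe a reward, update value estimates) is the same in spirit.

```python
import numpy as np

# A toy, fully known and stable environment: 5 states in a row, reward only
# when the agent reaches the rightmost state. Actions: 0 = left, 1 = right.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward, nxt == n_states - 1

for _ in range(500):
    s, done = 0, False
    while not done:
        if rng.random() < eps:                              # explore
            a = rng.integers(n_actions)
        else:                                               # exploit, random tie-break
            a = rng.choice(np.flatnonzero(Q[s] == Q[s].max()))
        s_next, r, done = step(s, a)
        # Standard Q-learning update.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))   # should prefer "right" in every non-terminal state
```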

Here are some great slides on this topic by Pieter Abbeel:

[Image: slide from Pieter Abbeel's presentation]

With the latest advances in Deep Neural Networks, companies are flocking to Deep Learning for process automation, such as predictive maintenance. Over time, they naturally become more ambitious and try to apply Deep Learning for automation across all the building blocks of IoT. But that's where the problems start to appear...

Challenges of applying AI in IoT using Deep Learning

One of the biggest arguments about Deep Learning is whether it is "deep" enough - whether DL systems can learn high-level abstractions about the world around them (which I would call inferring models from data). And here, I don't mean abstractions such as figuring out that a group of pixels in an image represents the eyes of a cat or the tail of an elephant, but rather the context in which these discovered objects interact with their environment.

For instance, if someone tells you that the only reason neural networks mislabel sheep as birds or giraffes is a missing data set with similar pictures, they are missing the point, as shown in this funny post:

[Image: neural network mislabelling sheep as a flock of birds]

There is no better way to describe the current problem with Deep Learning in the domain of decision-making than this excerpt from a post about a team at the University of Pittsburgh Medical Center that used machine learning to predict whether pneumonia patients might develop severe complications:

"The goal was to send patients at low risk for complications to outpatient treatment, preserving hospital beds and the attention of medical staff. The team tried several different methods, including various kinds of neural networks, as well as software-generated decision trees that produced clear, human-readable rules.

The neural networks were right more often than any of the other methods. But when the researchers and doctors took a look at the human-readable rules, they noticed something disturbing: One of the rules instructed doctors to send home pneumonia patients who already had asthma, despite the fact that asthma sufferers are known to be extremely vulnerable to complications. The model did what it was told to do: Discover a true pattern in the data. The poor advice it produced was the result of a quirk in that data. It was hospital policy to send asthma sufferers with pneumonia to intensive care, and this policy worked so well that asthma sufferers almost never developed severe complications. Without the extra care that had shaped the hospital’s patient records, outcomes could have been dramatically different."
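To make the contrast concrete, here is a small illustrative sketch (with made-up data, not the Pittsburgh study's records or models): a neural network may score better, but a shallow decision tree's learned rules can be printed and sanity-checked by a domain expert, which is exactly how the asthma rule above was caught.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up stand-in data: two numeric features and a binary "complication" label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# A small neural network: often the more accurate model, but opaque.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# A shallow decision tree: its learned rules can be printed and reviewed.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print("MLP accuracy :", mlp.score(X, y))
print("Tree accuracy:", tree.score(X, y))
print(export_text(tree, feature_names=["feature_a", "feature_b"]))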

This is what we call the “explainability” problem, or as Michael Jordan puts it:

“We do not want to build systems that help us with medical treatments, transportation options and commercial opportunities to find out after the fact that these systems don’t really work — that they make errors that take their toll in terms of human lives and happiness”.

There is a huge ongoing research effort to fix this problem; a great lecture on the topic is "Bringing deep learning to higher-level cognition" by Yoshua Bengio.

In my next blog, I will discuss how Waylay solves this problem by combining Deep Learning with a Bayesian inference engine, so stay tuned!
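As a generic illustration of the underlying idea (textbook Bayes' rule only, not Waylay's actual engine, and with all numbers assumed for the example): a model's output can be treated as evidence and combined with prior knowledge, rather than being acted on directly.

```python
# Generic Bayes' rule: treat a detector's score as evidence, not as a decision.
# All numbers below are assumed purely for illustration.

def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
    """Posterior P(fault | detector fired), from a prior and two likelihoods."""
    num = p_evidence_given_true * prior
    return num / (num + p_evidence_given_false * (1.0 - prior))

prior_fault = 0.01            # assumed: faults are rare
sensitivity = 0.90            # assumed: P(detector fires | fault)
false_positive_rate = 0.05    # assumed: P(detector fires | no fault)

posterior = bayes_update(prior_fault, sensitivity, false_positive_rate)
print(f"P(fault | detector fired) = {posterior:.1%}")   # about 15%, not 90%
```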


Veselin Pizurica

Co-founder and CTO @Waylay, R&D, background in IoT/M2M, Cloud Computing, Semantic Web, Artificial Intelligence, Signal and Image Processing, Pattern Recognition, author of 12 patent applications.

