In this final part of the series, I will use the real-life example of an elderly care field trial to explain how we combine different AI techniques on the Waylay platform. The DIoTTO project was developed by three partner companies (Studio Dott, Sensolus and Waylay), funded through the Flanders Care Living Labs and supported by the VOKA Health Community. Studio Dott prototyped the app and led the overall concept, Sensolus provided the sensors, and Waylay worked on data mining and rule implementation.
The central goal of the project was to provide a solution that would reassure caretakers and family that the person in their care was all right. The diagram below explains the concept and presents all the actors involved.
There are a number of reasons why this use case comes with difficult challenges:
- At any moment in time, we need to have a good understanding of what the platform is suggesting to seniors, caretakers and family members
- We can’t simply train the network by adding sensors all over the place and letting people die in order to learn what works and what doesn’t. (Although one of the biggest names in today's deep learning research community has lately been arguing that this is precisely what goes on in the pharmaceutical industry, referring to clinical trials.)
- Some of the rules that we want to apply should be expressed without the need for any “learning” (using heuristics or the wish of the caretaker or senior)
- Less intrusive but “good enough” rules let people live independently while providing them with a sense of dignity (e.g. we can put cameras in every corner of the house, but who would like to live like that?)
We often say that data speaks for itself, so let’s first see what sort of data we got during one of the trials. We placed sensors on fridges, front doors, cabinets, curtains and, in one case, even on the person's wheelchair. Besides sensor data, what made this trial different was that seniors, caretakers and family members could use the app to share pictures and send messages. Here are histograms of a few typical days, from different houses:
I will not go into much technical detail here on how we processed the data (as this post is more about combining different ML/AI techniques in automation rules), but just by looking at these histograms we can see what sort of information we can deduce:
- We can estimate the likelihood of people moving in each time slot, per day, on weekdays or weekends, etc.
- We can deduce the frequency at which people use different objects
- Using a Markov process, we can estimate how likely it is for a person to move between different rooms
- We can deduce sleeping patterns
- Using unsupervised clustering, we can group seniors that have similar patterns
- We can periodically check whether the movement patterns of the past weeks are unusual compared to the previous period, which can be an early indication of Alzheimer's disease
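As a sketch of the first deduction, the per-slot movement likelihood can be estimated directly from raw motion events. This is plain Python for illustration, not the Waylay pipeline itself; the function and the event encoding are assumptions:

```python
from collections import Counter

def hourly_movement_likelihood(events):
    """events: list of (day, hour) pairs, one per motion event.
    Returns, for each hour slot 0-23, the fraction of observed days
    in which at least one movement was registered in that slot."""
    days = {day for day, _ in events}
    # Deduplicate first, so several events in the same slot count once per day
    active_slots = Counter(hour for _, hour in set(events))
    return {h: active_slots[h] / len(days) for h in range(24)}

# Three days of toy data: movement at 08:00 every day, occasionally later
probs = hourly_movement_likelihood(
    [(1, 8), (1, 8), (1, 23), (2, 8), (2, 12), (3, 8)])
```

The same per-slot estimates, split by weekday/weekend, feed the likelihood sensor used in the rules below.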
Rules based on input from seniors and caretakers
As mentioned earlier, some of the rules that we want to apply should be expressed without the need for any “learning”. During the project, Studio Dott came up with a great way to capture these rules:
- Select an object (fridge, curtain, closet, door etc.)
- Select a “when” condition (morning, evening, etc.)
- If the object moves or not in the time window, over weekday or weekend
- More than or less than X times
- Send an SMS and/or email to a person if the conditions are met
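A minimal sketch of how such a caretaker-defined rule could be represented and evaluated. The `Rule` fields mirror the steps above; the field names and notification target are illustrative, not Waylay's actual rule format:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    object_id: str   # e.g. "fridge" or "front_door"
    start_hour: int  # start of the "when" window
    end_hour: int    # end of the "when" window
    weekend: bool    # True = applies on weekends, False = on weekdays
    min_moves: int   # notify if the object moved fewer than this many times
    notify: str      # email address or phone number to contact

def evaluate(rule, todays_events, is_weekend):
    """todays_events: list of (object_id, hour) observations.
    Returns the notification target if the rule fires, else None."""
    if is_weekend != rule.weekend:
        return None
    moves = sum(1 for obj, hour in todays_events
                if obj == rule.object_id
                and rule.start_hour <= hour < rule.end_hour)
    return rule.notify if moves < rule.min_moves else None

# "Warn the family if the fridge is not opened between 07:00 and 11:00 on a weekday"
rule = Rule("fridge", 7, 11, False, 1, "family@example.com")
```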
Finding whether seniors have developed a new sleeping disorder
In the rule template below, we combine motion sensors, a likelihood sensor, a day/night sensor and an hourly sensor to deduce whether the person has a sleeping problem. Note that we take the person's movement profile into account, so if they often wake up during the night, we will not flag such a night as indicating a sleeping problem. This way, we only look at changes in the pattern.
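One way to express "flag only changes in the pattern" is to compare a night against the person's own baseline. A hypothetical sketch using a simple z-score threshold (the actual rule combines the Waylay sensors described above, not this function):

```python
import statistics

def night_is_unusual(night_counts_history, tonight_count, k=2.0):
    """Flag tonight only if its movement count deviates from the person's
    own baseline by more than k standard deviations, so that habitual
    night wakers are not flagged as having a sleeping problem."""
    mean = statistics.mean(night_counts_history)
    std = statistics.pstdev(night_counts_history) or 1.0  # guard against zero spread
    return abs(tonight_count - mean) > k * std
```

Under this scheme a person who wakes four to six times every night produces no alarms, while a sudden jump in night-time movement does.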
Appliances not switched off after a period of time
If the person forgets to switch off an appliance, we can send an alarm after a period of time (with different periods depending on the type of appliance):
In this example, based on the sensor input and the type of the object (a TV, for example), Waylay sends an alarm in case the appliance was switched on and not switched off again (the exact window can be controlled by the delay sensor). We also discard the alarm if multiple on/off events have been registered within the same time window (to avoid on-off-on situations).
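The debouncing logic can be sketched as follows. Names and the minute-based timestamps are illustrative; in the platform the window is controlled by the delay sensor:

```python
def should_alarm(events, now, max_on_minutes):
    """events: chronological (minute, state) pairs with state "on"/"off".
    Alarm only if the appliance has been continuously on for longer than
    max_on_minutes; any intermediate "off" resets the timer, so quick
    on-off-on sequences do not accumulate into a false alarm."""
    last_on = None
    for minute, state in events:
        last_on = minute if state == "on" else None
    return last_on is not None and now - last_on > max_on_minutes
```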
Modelling Markov Chains in Waylay
A Markov chain is a random process that undergoes transitions from one state to another.
Let's now assume that we have a house with four rooms, with the connections between the rooms shown in the picture above. The outgoing transition probabilities from every room should sum to 1 (labelled by the same color). One of the problems with modelling this way is that the transition table can change during the day. (Note that this is not the same as the “memorylessness” property of Markov chains; the issue is that the overall process may depend on more than just the previous state.)
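A four-room transition table might look like the sketch below. The room names and probabilities are made up for illustration; each row sums to 1, as required:

```python
import random

# Hypothetical daytime transition probabilities for a four-room house;
# each row (keyed by the current room) must sum to 1.
DAY_TRANSITIONS = {
    "bedroom": {"bedroom": 0.2, "hall": 0.8},
    "hall":    {"bedroom": 0.1, "living": 0.6, "kitchen": 0.3},
    "living":  {"hall": 0.5, "living": 0.5},
    "kitchen": {"hall": 0.7, "kitchen": 0.3},
}

def next_room(current, table, rng=random):
    """Sample the next room from the current room's transition row."""
    rooms, weights = zip(*table[current].items())
    return rng.choices(rooms, weights=weights)[0]
```

A time-varying chain can then be approximated by swapping in a different table per time slot (say, a night-time table next to the daytime one), which side-steps the problem of the table changing during the day.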
Now, I often say that engineering is the art of applying scientific hacks, so before we slip into analysis-paralysis mode on how to solve this problem, let’s see how else we can simplify the model without losing any valuable information.
In this animation, we can see how we capture a pattern where the person moves from the bedroom, takes the stairs, passes through the living room and opens the fridge. In the same rule, we send an SMS if the fridge has not been opened at least once before 11:00 AM.
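The animation's logic can be sketched as two small checks, one for the movement pattern and one for the 11:00 AM fridge rule. The event encoding and names are illustrative, not the rule engine's representation:

```python
def matches_pattern(observed, pattern=("bedroom", "stairs", "living", "fridge")):
    """True if the sensor events in `observed` contain the pattern as an
    ordered subsequence (other events may occur in between)."""
    it = iter(observed)
    return all(step in it for step in pattern)

def fridge_reminder(fridge_open_hours, current_hour, deadline=11):
    """Return "send_sms" once the deadline hour has passed without the
    fridge having been opened, mirroring the 11:00 AM rule."""
    opened = any(h < deadline for h in fridge_open_hours)
    return None if opened or current_hour < deadline else "send_sms"
```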
Putting everything together - combining AI and IoT
Our platform allows for seamless blending of different AI and ML techniques. As described above, in the Waylay rules engine we can use information coming from:
- events in real-time
- historical data captured in time-series databases
- meta models on the object level (e.g. is the object a TV set or a water boiler, or is the water meter in a given region/street)
- real-time analytics module (with connection to ML for training parameters - e.g. proactive maintenance, time to target etc.). These functions can be updated at runtime, through a plugin interface
- ML models, such as Deep Learning or similar
- API calls from third parties (for notifications, image object recognition, NLP etc.)