TransWikia.com

Where can I find an algorithm for human activity classification using thigh and shank sensors?

Data Science Asked on December 19, 2020

I am working on a project where I first need to classify the activity a subject is performing from raw accelerometer and gyroscope data recorded at the thigh and shank of both legs. The activities are day-to-day tasks such as standing, sitting, running, sit-to-stand transitions, and stair climbing, plus sport-specific movements such as cutting.

I have tried reading papers and searching GitHub for an algorithm that will do this. I know there are many activity-recognition algorithms available, but none seem to use sensors on the legs. Most use sensors on the chest or a person’s phone.

Where can I find an algorithm that will take the data I have and classify the activities being performed? Can I use algorithms that weren’t originally designed for the sensor placement I am using?

Any help would be greatly appreciated.

2 Answers

What you want to do is called activity recognition, or more generally time-series classification. Almost any classification algorithm can be adapted to it. A popular choice is a type of RNN (recurrent neural network) called an LSTM (long short-term memory network), which is designed for sequences and handles time series quite well.
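Whatever model you pick, the raw sensor streams are usually segmented into fixed-length (often overlapping) windows first, each window labeled with one activity. A minimal NumPy sketch of this preprocessing step; the window length, overlap, and channel count below are illustrative assumptions, not requirements:

```python
import numpy as np

def make_windows(signal, labels, win=128, step=64):
    """Segment a (n_samples, n_channels) sensor stream into overlapping windows.

    Each window gets the majority activity label inside it.
    win=128 samples (~2.5 s at 50 Hz) with 50% overlap (step=64) are
    common but arbitrary choices.
    """
    X, y = [], []
    for start in range(0, len(signal) - win + 1, step):
        X.append(signal[start:start + win])
        # majority vote over the per-sample labels in this window
        y.append(np.bincount(labels[start:start + win]).argmax())
    return np.stack(X), np.array(y)

# Example: 1000 samples of 24 channels
# (4 IMUs: thigh/shank x left/right, each with 3-axis accel + 3-axis gyro)
stream = np.random.randn(1000, 24)
labs = np.zeros(1000, dtype=int)
X, y = make_windows(stream, labs)
print(X.shape, y.shape)  # (14, 128, 24) (14,)
```

The resulting `(n_windows, timesteps, channels)` array is exactly the input shape an LSTM (or 1D CNN) expects.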

Here are some relevant readings that might get you started:

Activity recognition tutorial using RNN models

Similar tutorial using various models

Answered by Simon Larsson on December 19, 2020

Human activity recognition is the task of classifying activities from data collected through sensors.

  • As the previous answer stated, RNNs and LSTMs are a strong choice since they can handle temporal (time-series) data. However, they are weak at extracting useful hierarchical features from raw data.

  • This is where 1-dimensional convolutional networks (1D CNNs) come in. They are similar to the CNNs used in image classification, but they convolve along a single dimension, so they can efficiently extract features from sequential sensor data. However, they do not model long-range temporal structure.

Why not fuse the CNN and the LSTM together?

Yes, this works well. We can first stack the convolutional and max-pooling layers, then feed their output to an LSTM, which handles the temporal part.
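As a sketch of that stacking, here is a minimal Conv1D → LSTM model in Keras. The layer sizes, window length (128 timesteps), channel count (24, i.e. 4 leg-mounted IMUs with accel + gyro), and class count (6) are all illustrative assumptions:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_timesteps, n_channels, n_classes = 128, 24, 6  # hypothetical shapes

model = keras.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # local feature extraction
    layers.MaxPooling1D(pool_size=2),                     # downsample in time
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                      # temporal modelling
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy batch just to show the expected input/output shapes.
X = np.random.randn(8, n_timesteps, n_channels).astype("float32")
out = model.predict(X, verbose=0)
print(out.shape)  # (8, 6)
```

Note that in Keras a Conv1D layer already outputs a `(batch, timesteps, features)` tensor, so it can feed the LSTM directly; no manual reshape is needed.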

You can read this and this.

Hence, good feature extraction is combined with temporal modelling.

Answered by Shubham Panchal on December 19, 2020
