Abstract
Whereas gait phase detection defines where we are in the gait cycle, locomotion mode recognition provides the prosthesis controller with the information on when to switch between different walking modes. This is essential because powered prostheses often implement a different control strategy for each locomotion mode to improve the functionality of the prosthesis. Existing studies have employed several classical machine learning methods for locomotion mode recognition. However, these methods are less effective for data with complex decision boundaries and lead to misclassifications. Deep learning-based methods can potentially resolve these limitations, as deep learning is a more sophisticated form of machine learning that can capture such complex boundaries. Therefore, in this study, we evaluate three deep learning-based models for locomotion mode recognition: a Recurrent Neural Network (RNN), a Long Short-Term Memory (LSTM) neural network, and a Convolutional Neural Network (CNN), and we compare their recognition performance to a machine learning model based on a Random Forest Classifier (RFC). The models are trained on data from one inertial measurement unit (IMU) placed on the lower shank of four able-bodied subjects performing four walking modes: level-ground walking (LW), standing (ST), stair ascent (SA), and stair descent (SD). The results indicate that the CNN and LSTM models outperform the other models and are promising for real-time locomotion mode recognition on robotic prostheses.
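To illustrate the kind of model the abstract describes, below is a minimal sketch (not taken from the paper) of a 1D-CNN classifier that maps a window of IMU samples to one of the four locomotion modes. The channel count (6: 3-axis accelerometer plus 3-axis gyroscope), window length (200 samples), layer sizes, and label ordering are all assumptions for the sake of the example, not details from the study.

```python
# Illustrative sketch only (not the paper's architecture): a minimal 1D-CNN
# locomotion mode classifier for windows of shank-IMU data.
# Assumptions: 6 IMU channels, 200-sample windows, 4 classes (LW, ST, SA, SD).
import torch
import torch.nn as nn


class LocomotionCNN(nn.Module):
    def __init__(self, n_channels: int = 6, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time to a fixed-size feature
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, window_length)
        z = self.features(x).squeeze(-1)  # (batch, 64)
        return self.classifier(z)         # (batch, n_classes) logits


if __name__ == "__main__":
    model = LocomotionCNN()
    dummy_windows = torch.randn(8, 6, 200)   # batch of 8 IMU windows
    logits = model(dummy_windows)
    predicted_mode = logits.argmax(dim=1)    # assumed labels: 0=LW, 1=ST, 2=SA, 3=SD
    print(predicted_mode.shape)              # torch.Size([8])
```

An LSTM-based variant would replace the convolutional feature extractor with a recurrent layer over the same windows; the comparison in the paper is between such sequence models and a Random Forest baseline.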
| Original language | English |
|---|---|
| Article number | 923164 |
| Pages (from-to) | 1-15 |
| Number of pages | 15 |
| Journal | Frontiers in Neurorobotics |
| Volume | 16 |
| DOIs | |
| Status | Published - 29 Nov 2022 |
Bibliographic note
Funding Information: This project was partly supported by Innoviris' Talaris project, the AI Flanders program, and the Vietnamese Government's program for doctoral training of university and college lecturers during 2010–2020.
Publisher Copyright:
Copyright © 2022 Vu, Cao, Dong, Verstraten, Geeroms and Vanderborght.