Osaka University

Walk This Way: A Better Way To Identify Gait Differences

Article | Press release
Current Medical News
Contributed by Krish Tangella MD, MBA | Nov 12, 2017

Biometric-based person recognition methods have been extensively explored for various applications, such as access control, surveillance, and forensics. Biometric verification involves any means by which a person can be uniquely identified through biological traits such as facial features, fingerprints, hand geometry, and gait, which is a person's manner of walking.

Gait is a practical trait for video-based surveillance and forensics because it can be captured at a distance on video. In fact, gait recognition has already been used in practical criminal investigations. However, gait recognition is susceptible to intra-subject variations, such as view angle, clothing, walking speed, shoes, and carrying status. Such hindering factors have prompted many researchers to explore new approaches that are robust to these variations.

Research harnessing deep learning frameworks to improve gait recognition has centered on convolutional neural network (CNN) approaches, which draw on computer vision, pattern recognition, and biometrics. Convolution is an operation that combines two signals, for example an image and a filter, to form a third signal that carries more useful information, such as where the edges of a gait silhouette lie.
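
As a rough illustration of the operation at the heart of a CNN, the short Python sketch below, which is not from the study and uses made-up signal and filter values, convolves a one-dimensional signal with a small edge-detecting filter; a CNN layer applies the same idea to two-dimensional gait images, with filters it learns from data.

import numpy as np

# Illustrative values only: one row of pixel intensities and a hand-picked filter.
signal = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
kernel = np.array([1.0, 0.0, -1.0])  # a simple edge-detecting filter

# Convolution combines the two signals into a third one that responds
# strongly where the input changes, i.e. at the edges of the pattern.
response = np.convolve(signal, kernel, mode="valid")
print(response)  # [ 2.  2.  0. -2. -2.]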

An advantage of a CNN-based approach is that network architectures can easily be redesigned for better performance by changing inputs, outputs, and loss functions. Nevertheless, a team of researchers centered at Osaka University noticed that existing CNN-based cross-view gait recognition methods fail to address two important aspects.

"Current CNN-based approaches are missing the aspects on verification versus identification, and the trade-off between spatial displacement, that is, when the subject moves from one location to another," study lead author Noriko Takemura explains.

Considering these two aspects, the researchers designed input/output architectures for CNN-based cross-view gait recognition. They employed a Siamese network for verification, where the input is a pair of gait features to be matched, and the output is the probability that the pair is genuine (the same subject) or an imposter (different subjects).
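
The Python/PyTorch sketch below shows what such a Siamese verification network can look like in outline. It is a minimal illustration, assuming 64x64 gait energy images and layer sizes chosen for the example rather than the authors' published configuration: both inputs pass through one shared convolutional branch, and the network outputs a genuine-versus-imposter probability.

import torch
import torch.nn as nn

class SiameseGaitNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolution and max-pooling branch; the same weights process both inputs.
        self.branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.classifier = nn.LazyLinear(1)  # maps the feature difference to a single logit

    def forward(self, probe, gallery):
        f1 = self.branch(probe)    # features of the first gait image
        f2 = self.branch(gallery)  # features of the second gait image
        diff = torch.abs(f1 - f2)  # difference taken at the last layer
        return torch.sigmoid(self.classifier(diff))  # genuine-vs-imposter probability

net = SiameseGaitNet()
x1, x2 = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)  # stand-ins for gait images
print(net(x1, x2).shape)  # torch.Size([4, 1])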

Notably, the Siamese network architectures are insensitive to spatial displacement, as the difference between a matching pair is calculated at the last layer, after the pair has passed through the convolution and max pooling layers, which reduce the dimensionality of the gait images and yield more abstract hidden features. These architectures can therefore be expected to perform better under considerable view differences. The researchers also used CNN architectures in which the difference between a matching pair is calculated at the input level, making them more sensitive to spatial displacement.
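
For contrast, a minimal sketch of the second kind of architecture, again with illustrative layer sizes rather than the published configuration, computes the difference between the pair at the input level, before any convolution or pooling, so the network remains sensitive to small spatial displacements between the two images.

import torch
import torch.nn as nn

class InputDifferenceGaitNet(nn.Module):
    def __init__(self):
        super().__init__()
        # A single CNN processes the pixel-wise difference of the pair directly.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(1),
        )

    def forward(self, probe, gallery):
        diff = probe - gallery                # difference taken at the input level
        return torch.sigmoid(self.cnn(diff))  # genuine-vs-imposter probability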

"We conducted experiments for cross-view gait recognition and confirmed that the proposed architectures outperformed the state-of-the-art benchmarks in accordance with their suitable situations of verification/identification tasks and view differences," coauthor Yasushi Makihara says.

As spatial displacement is caused not only by view differences but also by differences in walking speed, carrying status, clothing, and other factors, the researchers plan to further evaluate the proposed method for gait recognition with spatial displacement caused by these other covariates.


Materials provided by Osaka University. Note: Content may be edited for style and length.

Disclaimer: DoveMed is not responsible for the accuracy of the adapted version of news releases posted to DoveMed by contributing universities and institutions.

References:

Noriko Takemura, Yasushi Makihara, Daigo Muramatsu, Tomio Echigo, Yasushi Yagi. (2017). On Input/Output Architectures for Convolutional Neural Network-Based Cross-View Gait Recognition. IEEE Transactions on Circuits and Systems for Video Technology. DOI: 10.1109/TCSVT.2017.2760835
