Paper Reading [TPAMI-2022] On Learning Disentangled Representations for Gait Recognition

Keywords

Legged locomotion; Databases; Gait recognition; Clothing; Feature extraction; Face recognition; Cameras; deep convolutional neural networks; disentangled representation learning; auto-encoder; LSTM; canonical representation

Machine learning; Machine vision; Natural language processing; Robotics / UAV

Gait control; Face recognition; Gait recognition; Recurrent neural networks; Autoencoders; Language representation learning

Abstract

Gait, the walking pattern of individuals, is one of the important biometric modalities.

Most of the existing gait recognition methods take silhouettes or articulated body models as gait features.

These methods suffer from degraded recognition performance when handling confounding variables, such as clothing, carrying and viewing angle.

To remedy this issue, we propose a novel AutoEncoder framework, GaitNet, to explicitly disentangle appearance, canonical and pose features from RGB imagery.

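To make the disentanglement idea concrete, below is a minimal sketch of an encoder-decoder that splits its latent code into appearance, canonical, and pose parts and reconstructs the input frame from their concatenation. PyTorch, the 64x64 input size, the feature dimensions, and all layer choices are assumptions for illustration only; this is not the paper's actual GaitNet architecture or its disentanglement losses.

```python
# Minimal sketch (PyTorch assumed; layer sizes, 64x64 input, and feature
# dimensions are illustrative, not the paper's actual GaitNet design).
import torch
import torch.nn as nn

class DisentanglingAutoEncoder(nn.Module):
    """Encodes an RGB frame into appearance / canonical / pose features
    and reconstructs the frame from their concatenation."""

    def __init__(self, app_dim=128, can_dim=128, pose_dim=64):
        super().__init__()
        self.dims = [app_dim, can_dim, pose_dim]
        # Convolutional encoder for a 64x64 RGB frame (size assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, sum(self.dims)),
        )
        # Decoder maps the concatenated features back to the frame, so that
        # reconstruction-style losses can drive the disentanglement.
        self.decoder = nn.Sequential(
            nn.Linear(sum(self.dims), 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame):                      # frame: (B, 3, 64, 64)
        z = self.encoder(frame)                    # (B, app+can+pose)
        f_app, f_can, f_pose = torch.split(z, self.dims, dim=1)
        recon = self.decoder(torch.cat([f_app, f_can, f_pose], dim=1))
        return f_app, f_can, f_pose, recon
```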

The LSTM integrates pose features over time as a dynamic gait feature while canonical features are averaged as a static gait feature.

Both of them are utilized as classification features.

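The aggregation step can be sketched as follows: an LSTM runs over the per-frame pose features and its outputs are pooled into a dynamic gait feature, the per-frame canonical features are averaged into a static gait feature, and the two are concatenated as the classification feature. PyTorch, the dimensions, the mean pooling of LSTM outputs, and the linear identity classifier are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch (PyTorch assumed; dimensions, mean-pooling of LSTM outputs,
# and the linear identity classifier are illustrative assumptions).
import torch
import torch.nn as nn

class GaitAggregator(nn.Module):
    """Builds a dynamic gait feature from pose features via an LSTM and a
    static gait feature by averaging canonical features; their concatenation
    is used as the classification feature."""

    def __init__(self, pose_dim=64, can_dim=128, hidden_dim=256, num_ids=100):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim + can_dim, num_ids)

    def forward(self, pose_seq, can_seq):
        # pose_seq: (B, T, pose_dim); can_seq: (B, T, can_dim)
        out, _ = self.lstm(pose_seq)          # (B, T, hidden_dim)
        dynamic = out.mean(dim=1)             # temporal integration of pose
        static = can_seq.mean(dim=1)          # averaged canonical feature
        gait_feature = torch.cat([dynamic, static], dim=1)
        return gait_feature, self.classifier(gait_feature)

# Usage with a random 30-frame sequence of per-frame features:
# pose_seq, can_seq = torch.randn(4, 30, 64), torch.randn(4, 30, 128)
# feature, logits = GaitAggregator()(pose_seq, can_seq)
```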

In addition, we collect a Frontal-View Gait (FVG) dataset to focus on gait recognition from frontal-view walking, which is a challenging problem since it contains minimal gait cues compared to other views.

FVG also includes other important variations, e.g., walking speed, carrying, and clothing.

With extensive experiments on CASIA-B, USF, and FVG datasets, our method demonstrates superior performance to the SOTA quantitatively, the ability of feature disentanglement qualitatively, and promising computational efficiency.

We further compare our GaitNet with state-of-the-art face recognition to demonstrate the advantages of gait biometrics identification under certain scenarios, e.g., long-distance/lower resolutions, cross viewing angles.

Source code is available at http://cvlab.cse.msu.edu/project-gaitnet.html…

Authors

Ziyuan Zhang, Luan Tran, Feng Liu, Xiaoming Liu
