Self-similarity Analysis for Motion Capture Cleaning

Published in Computer Graphics Forum, 37(2): 297–309, 2018.
Presented at Eurographics 2018.
Andreas Aristidou1, Daniel Cohen-Or2, Jessica K. Hodgins3, Ariel Shamir1
1 The Interdisciplinary Center Herzliya · 2 Tel-Aviv University · 3 Carnegie Mellon University

Overview

Our method automatically analyzes motion capture sequences of closely interacting performers based on self-similarity. We define motion-words consisting of short sequences of joint transformations, and use a time-scale-invariant, outlier-tolerant similarity measure to find the K-nearest neighbors (KNN) of each word. This allows us to detect abnormalities and suggest corrections.
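
To make the representation concrete, here is a minimal Python sketch of slicing a joint-angle sequence into overlapping motion-words. This is not the authors' code; the (T, J) array layout, the window length, and the stride are illustrative assumptions.

import numpy as np

def extract_motion_words(angles, window=16, stride=4):
    """Slice a (T, J) joint-angle sequence (T frames, J joint-angle
    channels) into overlapping motion-words of `window` frames.

    Returns a list of (start_frame, word) pairs, where each word is a
    (window, J) short sequence of joint transformations around a frame.
    """
    T = angles.shape[0]
    return [(s, angles[s:s + window])
            for s in range(0, T - window + 1, stride)]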

Abstract

Motion capture sequences may contain erroneous data, especially when performers interact closely in complex motion and occlusions are frequent. Common practice is to have professionals visually detect the abnormalities and fix them manually. In this paper, we present a method to automatically analyze and fix motion capture sequences using self-similarity analysis. The premise of this work is that human motion data has a high degree of self-similarity. Therefore, given enough motion data, erroneous motions are distinct when compared to other motions. We utilize motion-words that consist of short sequences of transformations of groups of joints around a given motion frame. We search for the K-nearest neighbors (KNN) set of each word using dynamic time warping, and use it to detect and fix erroneous motions automatically. We demonstrate the effectiveness of our method on various examples, and evaluate it by comparing to alternative methods and to manual cleaning.
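
The time-warped comparison mentioned above can be sketched with a textbook dynamic time warping (DTW) recurrence. This is not the paper's exact formulation; in particular, the per-frame Euclidean pose cost is an assumption.

import numpy as np

def dtw_distance(a, b):
    """DTW distance between motion-words a (n, J) and b (m, J).

    Time warping lets similar motions of different durations, or with
    local speed variations, align before their frame costs are summed.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # per-frame pose cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]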


Figure 1: When capturing multiple performers simultaneously, such as in dance scenes, occlusions and noise lead to missing and erroneous data. We define a motion-texture map (top), where each row represents the rotation angles of a joint through time and each column is a single pose-frame. Motion-words (shown as rectangles) are short sequences of joint transformations around a motion frame. Our self-similarity analysis is based on comparing each motion-word with its K-nearest neighbors, and on building the movement digression map, or MDM (bottom), which indicates unusual movements of specific joints over time. Cold colors (lower values) depict common motions in the MDM, while hot colors (higher values) depict distinct motions that are often erroneous.
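
As a self-contained illustration of how an MDM could be assembled, the sketch below scores each motion-word of each joint channel by its mean distance to its K nearest neighbors and averages the scores over the frames the word covers. For brevity it compares fixed-length words with a plain Euclidean distance; the paper instead uses the DTW-based, outlier-tolerant measure. All parameter values are illustrative.

import numpy as np

def movement_digression_map(angles, window=16, stride=4, K=5):
    """angles: (T, J) joint-angle channels; returns a (J, T) MDM where
    high values flag distinct (often erroneous) joint movements."""
    T, J = angles.shape
    mdm = np.zeros((J, T))
    counts = np.zeros((J, T))
    starts = list(range(0, T - window + 1, stride))
    k = max(1, min(K, len(starts) - 1))
    for j in range(J):                            # one joint channel at a time
        words = np.stack([angles[s:s + window, j] for s in starts])
        # pairwise Euclidean distances between this channel's words
        # (a stand-in for the paper's DTW-based measure)
        dist = np.linalg.norm(words[:, None, :] - words[None, :, :], axis=2)
        np.fill_diagonal(dist, np.inf)            # exclude each word itself
        knn_score = np.sort(dist, axis=1)[:, :k].mean(axis=1)
        for s, score in zip(starts, knn_score):   # splat scores over time
            mdm[j, s:s + window] += score
            counts[j, s:s + window] += 1
    return mdm / np.maximum(counts, 1)            # average overlapping words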

Main Contributions

The main contributions of this work include:

  • Instead of using absolute marker positions, we use joint angles. Joint angles are relative measures that allow more self-similarities to be found in the motion, regardless of the global pose and absolute position of the markers.
  • Inspired by patch-based self-similarity techniques used in images and video, we do not examine individual motion frames or poses. Instead, we define motion-words as our basic elements for analysis.
  • At the core of our self-similarity analysis is a time-scale-invariant similarity measure between two motion-words. Since similar motions can vary in duration, as well as have local speed variations, we use dynamic time warping (DTW) to compare motion-words.
  • We build an outlier-tolerant distance measure between motion-words (see the sketch after this list). Our approach does not consider noisy pose parts when reconstructing the erroneous motion, and it allows a more fine-grained representation of the errors by replacing only the erroneous parts instead of full-body poses.
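
A possible form of such an outlier-tolerant measure is sketched below: per-joint costs are aggregated robustly by discarding the largest ones, so a few noisy joint tracks cannot mask an otherwise good match. The trimming ratio is a hypothetical choice, and the paper's exact robust measure may differ.

import numpy as np

def robust_word_distance(a, b, trim=0.2):
    """Outlier-tolerant distance between equal-length motion-words
    a and b of shape (window, J): drop the worst `trim` fraction of
    per-joint costs before summing."""
    per_joint = np.linalg.norm(a - b, axis=0)      # cost of each joint track
    keep = max(1, int(round(per_joint.size * (1.0 - trim))))
    return float(np.sort(per_joint)[:keep].sum())  # ignore the noisiest joints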

Video

Our Eurographics 2018 Fast Forward Video:

BibTeX

@article{Aristidou:2018:SSA,
  author    = {Aristidou, Andreas and Cohen-Or, Daniel and Hodgins, Jessica K. and Shamir, Ariel},
  title     = {Self-similarity Analysis for Motion Capture Cleaning},
  journal   = {Comput. Graph. Forum},
  volume    = {37},
  number    = {2},
  pages     = {297--309},
  publisher = {The Eurographs Association \& John Wiley \& Sons, Ltd.},
  month     = may,
  numpages  = {13},
  url       = {http://dx.doi.org/10.1111/cgf.13362},
  doi       = {10.1111/cgf.13362},
  year      = {2018}
}

Acknowledgments

This research was supported by the Israel Science Foundation as part of the ISF-NSFC joint program grant number 2216/15.