Dept. of Robotics Science and Technology,
Chubu University

Human Detection

Human Detection by Binary-based HOG Features

Histograms of oriented gradients (HOG) are widely used as features for human detection. Because a HOG feature describes, for every cell (local region), a histogram of intensity gradients quantized into nine orientations, the resulting feature vector becomes very high-dimensional. To reduce the memory required to store these HOG features, we are working on the following two approaches to binarizing them. The first is a feature that is binarized according to the magnitude relationships between different regions. The second is a classification method that takes into account the quantization errors introduced by the binarization.
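
As a concrete illustration (a minimal sketch using scikit-image, not the code used in our experiments): for the standard 64x128-pixel detection window with 8x8-pixel cells, 2x2-cell blocks, and nine orientation bins, the HOG descriptor is already a 3,780-dimensional real-valued vector, which is why storing it in binary form saves a large amount of memory.

import numpy as np
from skimage.feature import hog

window = np.random.rand(128, 64)        # stand-in for a 64x128 detection window
features = hog(window,
               orientations=9,          # nine gradient orientation bins per cell
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm='L2-Hys')
print(features.shape)                   # (3780,) real values per window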

Binary Masking using Relational HOG Features and Wild Cards
To reduce the amount of information that HOG features carry, we propose the Relational HOG (R-HOG) feature, which is binarized according to the magnitude relationship between HOG features extracted from two local regions. Because we do not binarize the intensity gradients themselves but instead use the magnitude relationship between the gradients of different regions, there is no need to set a cumbersome threshold. However, when the magnitude relationship is ambiguous, an unstable binary pattern may be produced. We therefore propose a masking method that introduces a wild card (*), which accepts both binary values “0” and “1”, into the Real AdaBoost learning process so that the parts of the binary pattern that harm classification are not observed. Evaluation experiments confirmed that the proposed method not only reduces memory usage but also maintains detection performance at least comparable to that of previous HOG-based methods.
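
The following sketch illustrates the two ideas under our own simplifying assumptions (the function names and the stability-based mask selection are ours; in the actual method the mask is chosen during Real AdaBoost learning): an R-HOG bit is set to 1 when an orientation bin of one cell exceeds the corresponding bin of another cell, and bit positions whose magnitude relation is unstable across training samples are replaced by a wild card (*) so that they are not used for classification.

import numpy as np

def relational_hog_bits(hog_cell_a, hog_cell_b):
    # Binarize by the magnitude relation between the 9-bin histograms of two
    # cells; no threshold on the gradient magnitudes themselves is needed.
    return (np.asarray(hog_cell_a) > np.asarray(hog_cell_b)).astype(np.uint8)

def wildcard_mask(binary_patterns, stability=0.9):
    # Mark as wild cards (*) the bit positions that flip too often over the
    # training samples; only the stable bits are observed by the classifier.
    rate_of_ones = binary_patterns.mean(axis=0)
    stable = (rate_of_ones >= stability) | (rate_of_ones <= 1.0 - stability)
    return stable                       # True = use the bit, False = treat as "*"

# Hypothetical data: HOG histograms of two cells over five training samples.
cells_a = np.random.rand(5, 9)
cells_b = np.random.rand(5, 9)
patterns = np.array([relational_hog_bits(a, b) for a, b in zip(cells_a, cells_b)])
print(patterns)
print(wildcard_mask(patterns))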

Proposal of Classifiers with Transition Likelihood Model based on Quantization Residual Errors 
When real-valued features such as HOG features are binarized, much of the information contained in the original features is lost. We propose an approach that focuses on the “quantization residual error”, i.e., the information discarded when the features are binarized. To account for the possibility that a binary code observed from an image may transition to another binary code, we introduce into the classifier a transition likelihood model that predicts such transitions from the quantization residual error. This makes it possible to classify while considering transitions back to the originally intended binary code, even when the observed code differs from it due to noise or other influences.
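
The sketch below shows one simple way to realize such a model (a sigmoid flip model of our own choosing, not necessarily the formulation used in the paper): each bit is obtained by thresholding a real-valued feature, the quantization residual is the distance of that feature from the threshold, and the smaller the residual, the more likely the bit is to flip. Multiplying the per-bit probabilities gives the likelihood that a candidate binary code transitioned into the observed one.

import numpy as np

def flip_probability(residual, scale=0.1):
    # Probability that a bit flips; it decreases as the quantization residual
    # (distance from the binarization threshold) grows.
    return 1.0 / (1.0 + np.exp(np.abs(residual) / scale))

def transition_likelihood(observed_bits, candidate_bits, residuals):
    # Likelihood that `candidate_bits` was the intended code given the observation.
    p_flip = flip_probability(residuals)
    per_bit = np.where(observed_bits == candidate_bits, 1.0 - p_flip, p_flip)
    return float(np.prod(per_bit))

# Hypothetical example: four bits obtained by thresholding features at zero.
features = np.array([0.02, -0.80, 0.50, -0.03])
observed = (features > 0).astype(np.uint8)   # observed code [1, 0, 1, 0]
residuals = features                         # residuals w.r.t. threshold 0
print(transition_likelihood(observed, np.array([0, 0, 1, 0]), residuals))  # flipped bit was near the threshold: plausible
print(transition_likelihood(observed, np.array([1, 0, 0, 0]), residuals))  # flipped bit was far from the threshold: unlikely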
