LITERATURE REVIEW OF GESTURE RECOGNITION SYSTEMS
Hassan [4] used a multivariate Gaussian distribution to recognize hand gestures using non-geometric features. The input hand image is segmented using two different methods [5]: skin-colour-based segmentation, applying the HSV colour model, and a clustering-based thresholding technique [5]. A few operations are performed to capture the shape of the hand and extract the hand features; the modified Direction Analysis Algorithm is adopted to find a relationship between the statistical parameters (variance and covariance) of the data [4], which are used to compute the object (hand) slope and trend [4] by finding the direction of the hand gesture [4], as shown in Figure 5.

Figure 5. Computing hand directions [4].
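As a rough illustration of the skin-colour segmentation step, the sketch below thresholds an image in HSV space with OpenCV. The threshold range, the morphological clean-up, and the function name are illustrative assumptions rather than the exact pipeline of [4, 5].

    import cv2
    import numpy as np

    def segment_hand_hsv(bgr_image, lower=(0, 40, 60), upper=(25, 255, 255)):
        """Rough skin-colour segmentation in HSV space (illustrative thresholds)."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
        # Remove small speckles so the hand forms a single connected region.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return mask  # binary mask: 255 on skin-coloured pixels, 0 elsewhere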

A Gaussian distribution is then fitted to the segmented image, and it takes the direction of the hand, as shown in Figure 6.

Figure 6. Gaussian distribution applied to the segmented image [5].
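A minimal sketch of this step, assuming the hand direction is taken as the principal axis of the variance/covariance of the segmented pixels (one plausible reading of the direction analysis in [4]; the papers' modified formula may differ in detail):

    import numpy as np

    def fit_hand_gaussian(mask):
        """Fit a 2-D Gaussian to the foreground pixels of a binary hand mask and
        return its mean, covariance, and principal direction (the hand slope)."""
        ys, xs = np.nonzero(mask)                    # foreground pixel coordinates
        pts = np.stack([xs, ys], axis=1).astype(float)
        mean = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False)              # 2x2 variance/covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
        major_axis = eigvecs[:, -1]                  # direction of largest variance
        angle_deg = float(np.degrees(np.arctan2(major_axis[1], major_axis[0])))
        return mean, cov, angle_deg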

From the resultant Gaussian function, the image is divided into circular regions; in other words, the regions form a terrace shape so as to eliminate the rotation effect [4, 5]. The shape is divided into 11 terraces, each with a width of 0.1 [4, 5]. Nine terraces result from the 0.1-width division (1–0.9, 0.9–0.8, 0.8–0.7, 0.7–0.6, 0.6–0.5, 0.5–0.4, 0.4–0.3, 0.3–0.2, 0.2–0.1), plus one terrace for values smaller than 0.1 and one for the external area that extends beyond the outer terrace [4, 5]. This division is illustrated in Figure 7.

Figure 7. Terrace division with 0.1 likelihood steps [4, 5].
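The terrace assignment can be pictured as binning a peak-normalized Gaussian likelihood into 0.1-wide bands. The sketch below is one interpretation of that description; in particular, the cutoff used to separate the below-0.1 terrace from the external area is an assumption.

    import numpy as np

    def normalized_likelihood(x, y, mean, cov):
        """Gaussian likelihood scaled so that its value at the mean equals 1."""
        d = np.array([x, y], dtype=float) - mean
        return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

    def terrace_index(likelihood, outer_cutoff=0.01):
        """Map a normalized likelihood in (0, 1] to one of the 11 terrace labels:
        0-8 for the nine 0.1-wide bands, 9 for values below 0.1, and 10 for the
        external area (approximated here by a small cutoff, an assumption)."""
        if likelihood < outer_cutoff:
            return 10                                 # external area
        if likelihood < 0.1:
            return 9                                  # terrace with values below 0.1
        return min(int((1.0 - likelihood) / 0.1), 8)  # 0.1-wide likelihood bands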

Each terrace is divided into 8 sectors, which are named the feature areas; it was found empirically that 8 is a suitable number of feature divisions [4]. To best fit the Gaussian to the segmented hand, re-estimation is performed on the shape [4]; the Gaussian shape is then matched onto the segmented hand to prepare the final hand shape for feature extraction. Figure 8 shows this method [4].

Figure 8. Feature divisions [5]. a) Terrace area in the Gaussian. b) Terrace area in the hand image.
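Pairing the terrace index with an angular sector gives a label for each of the 88 feature areas. The sketch below (a companion to the terrace sketch above) measures the sector angle relative to the estimated hand direction; the papers' re-estimation step is not reproduced here.

    import numpy as np

    def sector_index(x, y, centre, hand_angle_deg, n_sectors=8):
        """Angular sector of a pixel around the Gaussian centre, measured relative
        to the estimated hand direction so the division rotates with the hand."""
        theta = np.degrees(np.arctan2(y - centre[1], x - centre[0])) - hand_angle_deg
        return int((theta % 360.0) // (360.0 / n_sectors))

    def feature_area_label(terrace, sector, n_sectors=8):
        """Combine a terrace index (0-10) and a sector index (0-7) into one of the
        11 x 8 = 88 feature-area labels."""
        return terrace * n_sectors + sector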

After capturing the hand shape, two types of features are extracted to form the feature vector [4, 5]: local features and global features. The local features use geometric central moments, which provide two different moments, μ_00 and μ_11, as shown by equation (1):

μ_pq = Σ_x Σ_y (x − μ_x)^p (y − μ_y)^q I(x, y)        (1)

where μ_x and μ_y are the mean values of the input feature area [4], x and y are the coordinates, and I(x, y) is the pixel intensity. The input image is therefore represented by 88×2 local features, as explained in detail in equation (2). The global features are two features, the first and second moments [4, 5], computed over the whole hand feature area [4]; these are obtained by multiplying the feature area intensity by the feature area's map location [4]. In this way, any input image is represented by 178 features [4, 5]. The system was evaluated on 20 different gestures [5], with 10 samples per gesture (5 for training and 5 for testing); it achieved a 100% recognition rate, which decreased when more than 14 gestures were used [5]. In [4], 6 gestures are recognized with 10 samples per gesture. Euclidean distance is used for classification of the features [4, 5].
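A compact sketch of the local-moment computation and the Euclidean nearest-template classification follows; the weighting of the two global moments by the feature-area map location is omitted here, and the helper names are illustrative.

    import numpy as np

    def central_moments(intensities, xs, ys):
        """Geometric central moments mu_00 and mu_11 of one feature area
        (equation (1) with (p, q) = (0, 0) and (1, 1))."""
        mu_x, mu_y = xs.mean(), ys.mean()
        mu_00 = intensities.sum()
        mu_11 = ((xs - mu_x) * (ys - mu_y) * intensities).sum()
        return float(mu_00), float(mu_11)

    def classify(feature_vector, templates):
        """Nearest-neighbour matching with Euclidean distance; `templates` maps
        each gesture label to a stored feature vector of the same length."""
        return min(templates, key=lambda g: np.linalg.norm(feature_vector - templates[g]))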

Kulkarni [6] recognizes static postures of American Sign Language using a neural network approach. The input image is converted into the HSV colour model, resized to 80×64, and a few image pre-processing operations are applied to segment the hand from a uniform background [6]. Features are extracted using a histogram technique and the Hough algorithm. A feed-forward neural network with three layers is used for gesture classification. Eight samples are used for each of the 26 characters of the sign language; for each gesture, 5 samples are used for training and 3 for testing. The system, implemented in MATLAB, achieved a 92.78% recognition rate [6].
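To make the classification stage concrete, the sketch below stands in for the three-layer MATLAB network of [6] using scikit-learn's MLPClassifier; the hidden-layer size, activation, and training settings are assumptions, and the histogram/Hough feature extraction is taken as given (the rows of X_train).

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_asl_classifier(X_train, y_train, hidden_units=64):
        """Three-layer (input-hidden-output) feed-forward network for the 26
        ASL characters; the hidden size and solver settings are assumptions."""
        clf = MLPClassifier(hidden_layer_sizes=(hidden_units,),
                            activation="logistic", max_iter=2000, random_state=0)
        clf.fit(X_train, y_train)
        return clf

    def recognition_rate(clf, X_test, y_test):
        """Fraction of test samples whose predicted character matches the label."""
        return float(np.mean(clf.predict(X_test) == np.asarray(y_test)))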

Hassan [1] used scaled normalization for gesture recognition based on brightness factor matching. The input image is segmented using a thresholding technique with a black background. Each segmented image is normalized (trimmed), and the centre of mass of the image is determined [1], so that the coordinates are shifted to place the centroid of the hand object at the origin of the X and Y axes [1]. Since this method is based on the centre of mass of the object, the generated images have different sizes [1] (see Figure 9); for this reason a scaled normalization operation is used to overcome this problem, which preserves the image dimensions and the time as well [1], where each of the four blocks is scaled with a factor that differs from the other blocks' factors. Two methods are used for feature extraction: first by using the edge images, and second by using normalized features, where only the brightness values of the pixels are computed and the black pixels are neglected to reduce the length of the feature vector [1]. The database consists of 6 different gestures, with 10 samples per gesture (5 for training and 5 for testing). The normalized-feature method achieved better performance than the other feature method: a 95% recognition rate for the former versus 84% for the latter [1].

Figure 9. Applying the trimming method to the input image, followed by the scaled normalization method [1].
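A small sketch of the trimming and normalized-feature steps, under the assumptions that the hand is the only non-black region and leaving out the four-block scaled normalization described in [1]:

    import numpy as np

    def trim_to_hand(gray_image):
        """Crop a segmented hand (black background) to its bounding box and
        return the crop plus the centre of mass of the hand inside the crop."""
        ys, xs = np.nonzero(gray_image)
        crop = gray_image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        cy, cx = np.argwhere(crop > 0).mean(axis=0)  # centroid (row, col)
        return crop, (float(cx), float(cy))

    def normalized_brightness_features(gray_hand):
        """'Normalized features' variant: keep only the non-black pixel brightness
        values and drop background zeros to shorten the feature vector."""
        return gray_hand[gray_hand > 0].astype(float)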

Wysoski et al. [2] presented rotation-invariant posture recognition using boundary histograms. A camera was used to acquire the input image; a skin-colour detection filter was applied, followed by a clustering process to find the boundary of each group in the clustered image using an ordinary contour-tracking algorithm. The image was divided into grids and the boundaries were normalized. The boundary was represented as a chain of chord sizes, which was used to form histograms by dividing the image into N regions in a radial form, according to a specific angle. For classification, multilayer perceptron (MLP) neural networks and dynamic programming (DP) matching were used. Many experiments were carried out on different feature formats, in addition to using different chord-size histograms and chord-size FFTs. 26 static postures from American Sign Language were used in the experiments, with a homogeneous background.
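The radial boundary-histogram idea can be sketched as follows, assuming the boundary is already available as a list of (x, y) contour points; the number of regions N and the normalization are illustrative, and the FFT variant evaluated in [2] is not shown.

    import numpy as np

    def boundary_histogram(boundary_points, n_regions=16):
        """Radial chord-size histogram: boundary points are grouped into N angular
        regions around the shape centroid and the mean chord length (distance from
        the centroid) is recorded per region, then scale-normalized."""
        pts = np.asarray(boundary_points, dtype=float)      # shape (M, 2): x, y
        centre = pts.mean(axis=0)
        d = pts - centre
        chords = np.hypot(d[:, 0], d[:, 1])                 # chord sizes
        angles = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
        bins = (angles // (360.0 / n_regions)).astype(int)
        hist = np.array([chords[bins == k].mean() if np.any(bins == k) else 0.0
                         for k in range(n_regions)])
        return hist / (hist.max() + 1e-9)                   # rough scale normalization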
Stergiopoulou [3] suggested a new Self-Growing and Self-Organized Neural Gas (SGONG) network for hand gesture recognition. For hand-region detection, a colour segmentation technique based on a skin-colour filter in the YCbCr colour space was used, and an approximation of the hand shape morphology was detected using the SGONG network. Three features were extracted using a finger identification process, which determines the number of raised fingers and the characteristics of the hand shape, and a Gaussian distribution model was used for recognition.

REFERENCES
[1] Mokhtar M. Hassan, Pramod K. Mishra, (2011). "Brightness Factor Matching for Gesture Recognition System Using Scaled Normalization", International Journal of Computer Science & Information Technology (IJCSIT), Vol. 3(2).

[2] Simei G. Wysoski, Marcus V. Lamar, Susumu Kuroyanagi, Akira Iwata, (2002). "A Rotation Invariant Approach on Static-Gesture Recognition Using Boundary Histograms and Neural Networks", IEEE Proceedings of the 9th International Conference on Neural Information Processing, Singapore.

[3] E. Stergiopoulou, N. Papamarkos, (2009). "Hand gesture recognition using a neural network shape fitting technique", Engineering Applications of Artificial Intelligence, Elsevier, Vol. 22(8), pp. 1141–1158. doi: 10.1016/j.engappai.2009.03.008.
[4] Mokhtar M. Hassan, Pramod K. Mishra, (2012). "Features Fitting using Multivariate Gaussian Distribution for Hand Gesture Recognition", International Journal of Computer Science & Emerging Technologies (IJCSET), Vol. 3(2).

[5] Mokhtar M. Hassan, Pramod K. Mishra, (2012). "Robust Gesture Recognition Using Gaussian Distribution for Features Fitting", International Journal of Machine Learning and Computing, Vol. 2(3).

[6] V. S. Kulkarni, S. D. Lokhande, (2010). "Appearance Based Recognition of American Sign Language Using Gesture Segmentation", International Journal on Computer Science and Engineering (IJCSE), Vol. 2(3), pp. 560–565.
