Descriptive statistics for accuracy of inputs: out-of-range values; plausible means and standard deviations; missing data; outliers (univariate and multivariate)
Check for nonlinearity and homoscedasticity: inspected by scatter plots and a graphical plot of the residuals
Identify nonnormal variables: inspected by the skewness and kurtosis of the variables
Evaluate construct validity (multicollinearity and singularity): inspected by factor analysis
Assess measurement reliability: estimated by Cronbach's Alpha
Table 5.8 Processes to screen the collected data (Tabachnick and Fidell, 2007; Hair et al., 2010)
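The screening checks summarised in Table 5.8 can be sketched programmatically. The following is a minimal NumPy illustration, assuming responses on a 1–7 Likert scale stored in a two-dimensional array; the function name, the example data, and the |z| > 3 outlier cut-off are illustrative assumptions, not the procedure actually used in this study:

```python
import numpy as np

def screen_data(x, low=1, high=7, z_cut=3.0):
    """Basic screening checks: out-of-range values, missing data,
    plausible means/SDs, and univariate outliers (|z| > z_cut)."""
    x = np.asarray(x, dtype=float)
    report = {
        "out_of_range": int(np.sum((x < low) | (x > high))),  # NaN compares False
        "missing": int(np.isnan(x).sum()),
        "means": np.nanmean(x, axis=0),
        "sds": np.nanstd(x, axis=0, ddof=1),
    }
    # Standardise each column, then flag extreme univariate values
    z = (x - np.nanmean(x, axis=0)) / np.nanstd(x, axis=0, ddof=1)
    report["outliers"] = int(np.sum(np.abs(z) > z_cut))
    return report

# Example: 6 respondents, 3 Likert items; one missing value, one out-of-range entry
data = [[4, 5, 3], [2, 4, 6], [5, 5, 4], [3, np.nan, 5], [9, 4, 4], [4, 3, 5]]
print(screen_data(data))
```

Multivariate outliers (e.g. via Mahalanobis distance) would require an additional step; the sketch covers only the univariate checks listed in the table.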
Analysing the data
The collected data were analysed in this step in order to test the hypothesised causal relationships between the application of production shop floor management tools and the performance of Kaizen. Structural equation modelling was adopted to test these hypothesised relationships. This analytic tool is described in Section 5.3, and the results of the analyses are presented in Chapter 7.
Generating the report
This was the final step of the research. Chapter 8 presents the results of the analyses to test the research hypotheses and explains their implications. Finally, Chapter 9 makes conclusions based on the findings, and identifies the limitations of the study as well as scope for future research.
Once the data are collected, the stability of the measurement instruments needs to be analysed and examined (Hensley, 1999). Factor analysis (Pearson, 1901; Spearman, 1904a; Thurstone, 1931) is a multivariate data-reduction procedure (Bruin, 2006) which can be used to classify data and examine measurement instruments (Flynn et al., 1990). The objectives of a factor analysis are to:
(1) summarise the patterns within samples of collected data (Tabachnick and Fidell, 2007);
(2) identify their underlying relationships to create factors (or components) (Walker and Maddan, 2012); and
(3) test their relationships to ensure construct validity (the extent to which the factors are correctly correlated for measurement) (Emory and Cooper, 1991; Lu, 2006).
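Objectives (1) and (2) can be illustrated with a minimal factor extraction based on the eigendecomposition of a correlation matrix. This NumPy sketch simulates items driven by two latent factors and retains factors by the Kaiser (eigenvalue > 1) criterion; the simulated data, loading values, and retention rule are illustrative assumptions, not the analysis performed in this study:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated responses: two latent factors, each driving three observed items
n = 300
f = rng.normal(size=(n, 2))
loadings_true = np.array([[.8, 0], [.7, 0], [.6, 0],
                          [0, .8], [0, .7], [0, .6]])
x = f @ loadings_true.T + 0.5 * rng.normal(size=(n, 6))

r = np.corrcoef(x, rowvar=False)            # correlation matrix of the items
eigvals, eigvecs = np.linalg.eigh(r)        # eigendecomposition (ascending order)
order = np.argsort(eigvals)[::-1]           # sort eigenvalues descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = int(np.sum(eigvals > 1))        # Kaiser criterion: eigenvalues > 1
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
print(n_factors, np.round(loadings, 2))
```

With this data-generating structure the extraction recovers two factors, each loading on its own block of three items, which is the "underlying relationships" idea in objective (2).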
The approaches and underlying assumptions of factor analysis
There are two types of factor analysis, namely, Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) (Kline, 1994; Pallant, 2007). EFA has been defined as an orderly simplification of interrelated measures (Suhr, 2006). It is commonly applied to explore and uncover the possible or proposed underlying structure of a large set of variables (Pedhazur and Schmelkin, 1991). CFA is a far more rigorous statistical technique that is mostly applied to test or verify a set of variables developed previously (Pedhazur and Schmelkin, 1991; Suhr, 2006), although it permits a few minor modifications in terms of grouping (Walker and Maddan, 2012). Despite the differences between the two methods, a mutually exclusive distinction in their applications is not always recommended (Jöreskog, 1974; Anderson and Gerbing, 1988). Thus, it is not pragmatic to impose a strict dichotomy between them.
For the purpose of factor analysis, it is assumed that data are normally distributed (Tabachnick and Fidell, 2007). Nevertheless, if the data approximate interval level (i.e., Likert-scale data, provided the scale item has at least 5 and preferably 7 categories), factor analysis can be applied (Lehmann and Hulbert, 1972; Field, 2005; Walker and Maddan, 2012). Apart from this, a sufficient sample size is required (Pallant, 2007; Tabachnick and Fidell, 2007). However, there is no consensus among researchers regarding a suitable sample size (Zhao, 2009). For example, Hatcher (1994) proposed that the sample size should be at least 100, while Tabachnick and Fidell (2007) recommended a minimum of 300 cases. Along the same lines, Field (2005) concluded that 300 cases are probably adequate, but only if the communality values (the extent to which a variable correlates with all other variables) after factor extraction are above 0.5. Nunnally and Bernstein (1994) argued for a more rigorous requirement of at least 10 cases for each variable. Above all, a larger sample size is always better (Bruin, 2006; Pallant, 2007).
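The competing rules of thumb above can be expressed as a simple adequacy check. This sketch encodes the thresholds exactly as cited in the text; the function name and the example figures are mine:

```python
def sample_size_adequate(n_cases, n_variables, min_communality=None):
    """Check a proposed factor-analysis sample against the cited rules of thumb."""
    rules = {
        "Hatcher (1994): n >= 100": n_cases >= 100,
        "Tabachnick and Fidell (2007): n >= 300": n_cases >= 300,
        "Nunnally and Bernstein (1994): 10 cases per variable":
            n_cases >= 10 * n_variables,
    }
    if min_communality is not None:
        # Field (2005): ~300 cases suffice if post-extraction communalities > 0.5
        rules["Field (2005): n >= 300 and communalities > 0.5"] = (
            n_cases >= 300 and min_communality > 0.5)
    return rules

# Example: 350 cases, 25 variables, lowest post-extraction communality 0.62
print(sample_size_adequate(350, 25, min_communality=0.62))
```

A sample can satisfy one rule and fail another (e.g. 120 cases meets Hatcher's criterion but not Tabachnick and Fidell's), which is precisely why the literature reports no consensus.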
The application of factor analysis
Factor analysis is commonly carried out in five steps (Pallant, 2007; Walker and Maddan, 2012) (Figure 5.6).
Figure 5.6 A five-step process for factor analysis (Walker and Maddan, 2012)
The first step checks the normality of the data by measuring skewness and kurtosis (Walker and Maddan, 2012). If the data contain skewed or kurtotic variables, a 'bootstrap' method is used to determine the bias (Ichikawa and Konishi, 1995; West et al., 1995; Loehlin, 2004, p. 60). This technique maximises the accuracy of the estimation by drawing multiple repeated samples from the original data set and examining each of the repeated samples (Efron, 1979; Efron and Tibshirani, 1993).
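The bootstrap idea described above can be sketched in a few lines of NumPy. Here the bias of a sample skewness statistic is estimated by resampling with replacement; the skewness formula, the simulated skewed sample, and the number of resamples are illustrative choices:

```python
import numpy as np

def bootstrap_bias(sample, statistic, n_boot=2000, seed=0):
    """Estimate the bias of a statistic by resampling the original
    data with replacement and re-computing the statistic each time."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    theta_hat = statistic(sample)            # estimate on the original data
    reps = np.array([
        statistic(rng.choice(sample, size=sample.size, replace=True))
        for _ in range(n_boot)
    ])
    return reps.mean() - theta_hat           # bootstrap estimate of bias

def skewness(x):
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

# A deliberately skewed sample (exponential) stands in for a nonnormal variable
data = np.random.default_rng(1).exponential(scale=2.0, size=200)
print(bootstrap_bias(data, skewness))
```

The bias estimate can then be subtracted from the original statistic, which is the sense in which bootstrapping improves the accuracy of the estimation.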
The second step assesses the suitability of the data for factor analysis. Although the variables are required to be intercorrelated, high multicollinearity makes it difficult to measure each variable's unique contribution to a factor (Field, 2005). Bartlett's Test of Sphericity measures whether there is sufficient inter-correlation between the variables for factor analysis (p