2.8.4 Bandwidth fidelity in psychometric testing

The point above relates to a debate often referred to as the bandwidth-fidelity dilemma (Cronbach & Gleser, 1965); that is, the assessment of the gain or loss in analytical and predictive power from using broad-band versus narrow-band personality assessments. Goldberg’s (1972) study, using scales developed from the California Psychological Inventory (Consulting Psychologists Press, 1969) item pool with a sample of 179, led him to conclude that five or six factors could predict a series of seven criteria (including Grade Point Average, dating success and years spent at college) as well as could 11 narrower factors. More recently, Ones and Viswesvaran (1996) advocated the use of broad personality factors such as those within the FFM, rather than narrower traits such as the 16PF primary scales, in the prediction of behaviours. Based on a large meta-analysis using broad factors, Salgado (2003) found that personality measures developed within the FFM showed higher criterion-related validity than those based on alternative theoretical viewpoints; however, this held true only for Conscientiousness and Emotional Stability, with the other three FFM scales showing no differences. Conversely, Hogan and Roberts (1996) argue that narrow personality traits account for variance that is situation specific and that broad trait measures are unable to tap into this variance. They argue that there is no evidence that the fidelity-bandwidth trade-off has become a crisis, suggesting instead that the nature of the criterion should dictate the choice of predictors in order to enhance validity.
In support of Hogan and Roberts’s first contention, Paunonen (1998) demonstrated that the Personality Research Form (Jackson, 1984), a narrow-band trait measure, accounted for more variance than the broad-based NEO-PI-R, and concluded that aggregating narrow personality traits into broad factors may reduce predictive ability through a loss of trait-specific variance. Additionally, Mershon and Gorsuch’s (1988) meta-analysis found that the 16 factors of the 16PF explained at least twice as much variance in a variety of criteria as a six-factor approach, with a median increase of 110% in the proportion of variance accounted for when moving from six factors to sixteen.

Black (2000) suggests that the breadth of the NEO’s five factors may limit their usefulness in selection settings, and both Saville, Nyfield, Sik, and Hackston (1991) and Driskell, Hogan, Salas and Hoskin (1994) found that specific facets of the Big Five constructs were better predictors of performance than global-level measures such as the five factors themselves. However, the Driskell et al. study found that, although personality predicted academic criteria in Naval (electronics) trainees, it contributed no variance in academic performance beyond that offered by the Armed Forces’ own cognitive assessment, despite personality being associated with attitudinal and motivational factors implicated in training success.

In summary, although the evidence regarding exactly what type(s) of performance can be predicted from what type(s) of personality dimensions may be in dispute, it is clear that personality has utility in the performance prediction arena. Given the evidence, and more specifically Barrick, Mount and Judge’s (2001) meta-analytic findings, one can conclude that, when used responsibly and in a standardised manner by appropriately trained personnel, personality assessments based on the FFM add an element to the prediction of an individual’s workplace performance that is not accounted for by other human resource tools and methods.

When reviewing the literature on this topic it is notable that the correlations reported between personality and performance are typically not strong (Robertson & Smith, 2001), given the complex interplay between all predictor variables and job performance. This may lead one to argue that, although the relationships found in many studies may be statistically significant, they remain of little practical meaning given the small size of the coefficients. Meyer et al. (2001) provided a review and extensive tables of correlation coefficients from psychological testing research. Often a low coefficient simply reflects a weak relationship between the two variables. On many occasions, however, criterion reliability and validity issues mean that an observed correlation between predictor and dependent variable would in reality have been higher had a correction for attenuation been made (Salgado, Moscoso & Lado, 2003; Salgado, Ones & Viswesvaran, 2001). Nunnally (1978) provided an equation that corrects for such attenuation, although Muchinsky (1996) noted that the corrected coefficient cannot be subjected to statistical significance testing. It is nevertheless of greater importance to work to increase criterion reliability and validity, and thereby remove the need to apply such corrective measures.
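The attenuation correction described by Nunnally is the classical formula attributed to Spearman: the estimated true-score correlation equals the observed correlation divided by the square root of the product of the two reliabilities. A minimal sketch follows; the validity and reliability values used are illustrative only and are not drawn from any of the studies cited above:

```python
from math import sqrt

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Classical correction for attenuation: estimate the correlation
    between true scores from the observed predictor-criterion
    correlation (r_xy) and the reliabilities of the predictor (r_xx)
    and the criterion (r_yy)."""
    return r_xy / sqrt(r_xx * r_yy)

# Illustrative values: an observed validity of .25, with predictor
# reliability .80 and criterion reliability .55, corrects to roughly .38.
r_corrected = correct_for_attenuation(0.25, 0.80, 0.55)
```

The example makes concrete why modest observed coefficients can understate the underlying relationship: an unreliable criterion alone can substantially depress the observed validity, which is the point the corrective literature cited above addresses.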