“Factor analysis is really not concerned with exactness, only good
approximation.” - Nunnally & Bernstein, 1994, p. 509.
We have repeatedly recommended
that readers and researchers keep in mind the exploratory nature of
EFA, a procedure that by nature is quirky, temperamental, valuable,
and interesting. As we discussed in Chapter 5, exploratory factor analysis
takes advantage of all the information in the interrelationships between
variables, whether those interrelationships are representative of
the population or not. In other words, EFA tends to overfit a model
to the data such that when the same model is applied to a new sample,
the model is rarely as good a fit. When we as readers see a single
EFA, often on an inadequate sample (as discussed in Chapter 5), we
have no way of knowing whether the results reported are likely to
generalize to a new sample or to the population. Yet that is surely
information worth having.
If you read enough articles
reporting the results from factor analyses, too often you will find
confirmatory language used regarding exploratory analyses. We need
to re-emphasize in our discipline that EFA is not a method for testing
hypotheses or confirming ideas (e.g., Briggs & Cheek, 1986;
Floyd & Widaman, 1995), but rather for exploring the nature of
scales and item interrelationships. EFA merely presents a solution
based on the available data.
These solutions are
notoriously difficult to replicate, even under unusually ideal conditions
(exceptionally clear factor structure, very large sample-to-parameter
ratios, strong factor loadings, and high communalities). As mentioned
already, many point estimates and statistical analyses vary in how
well they will generalize to other samples or populations (which is
why we are more routinely asking for confidence intervals for point
estimates). But EFA seems particularly problematic in this area.
We find this troubling,
and you should too. Of course, we have no specific information about
how replicable we should expect particular factor structures to be
because direct tests of replicability are almost never published.
As Thompson (1999) and others note, replication is a key foundational
principle in science, but we rarely find replication studies published.
It could be because journals refuse to publish them, or because researchers
don’t perform them. Either way, this is not an ideal situation.
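A direct test of replicability of the kind described above need not be
elaborate. The sketch below (our illustration, not a procedure from the
text) splits one sample in half, fits the same EFA model to each half,
and compares the rotated loadings with Tucker's congruence coefficient;
scikit-learn's FactorAnalysis stands in for a full EFA package, and the
data, loadings, and cutoff are all hypothetical.

```python
# Hypothetical split-half replicability check for an EFA solution.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 600 respondents on 6 items with a clear two-factor structure.
true_loadings = np.array([[.8, 0], [.7, 0], [.6, 0],
                          [0, .8], [0, .7], [0, .6]])
factors = rng.normal(size=(600, 2))
items = factors @ true_loadings.T + 0.5 * rng.normal(size=(600, 6))

# Fit the identical model to two random halves of the sample.
order = rng.permutation(600)
half_a, half_b = items[order[:300]], items[order[300:]]
fa_a = FactorAnalysis(n_components=2, rotation="varimax").fit(half_a)
fa_b = FactorAnalysis(n_components=2, rotation="varimax").fit(half_b)
load_a, load_b = fa_a.components_.T, fa_b.components_.T  # items x factors

# Tucker's congruence between every pair of factors across the halves.
num = load_a.T @ load_b
den = np.sqrt(np.outer((load_a ** 2).sum(axis=0),
                       (load_b ** 2).sum(axis=0)))
phi = num / den

# Factor order and sign are arbitrary in EFA, so match each half-A
# factor to its best-aligned half-B factor on absolute congruence.
matched = np.abs(phi).max(axis=1)
print(matched)
```

With a factor structure this clean and halves this large, the matched
congruences come out high; with real items and realistic samples they
often will not, which is precisely the information a single reported
EFA withholds.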