Citing Claims



I find that there are specific claims in SEM research that float around, but where they come from is often forgotten. So, I've made a list of these claims with some quotes and explanations below. Of course, I have also included a citation if the claim can be substantiated. If you have heard of a claim and know its source, feel free to email me and I'll determine if it should be added here. If you would like to cite this page in addition to the sources provided below, here is the recommended citation:


Four Indicators Per Factor

Claim

Have you heard the one about the "optimal number of indicators" per factor? I have heard it a few times, and I know I have read it in multiple places. I include one of those sources below.

Source

  • Hair, J. F., Jr., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Prentice Hall.

On page 678:

"In summary, when specifying the number of indicators per construct, the following is recommended:

  • Use four indicators whenever possible.
  • Having three indicators per construct is acceptable, particularly when other constructs have more than three.
  • Constructs with fewer than three indicators should be avoided."

Rationale

Joe's logic is that a minimum of three indicators is needed for a factor to be identified (a standalone single-factor model with only three indicators is just-identified), so four is a safer and more reliable configuration. More than four may result in a failure of unidimensionality (i.e., more than one dimension may be captured). He also suggests four is the optimal number of indicators because it balances parsimony (the simplest solution) with requisite reliability (all else equal, reliability increases as the number of indicators increases).
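To make the recommendation concrete, here is a minimal CFA sketch with four indicators per construct. It uses the Python package semopy, which accepts lavaan-style model syntax; the construct names, item names, and data file are hypothetical, so treat it as a template rather than a prescription.

```python
import pandas as pd
from semopy import Model, calc_stats

# Two constructs, four indicators each (hypothetical construct/item names)
desc = """
Trust =~ trust1 + trust2 + trust3 + trust4
Satisfaction =~ sat1 + sat2 + sat3 + sat4
"""

# Hypothetical data file containing the eight item columns
df = pd.read_csv("items.csv")

model = Model(desc)
model.fit(df)
print(model.inspect())    # loadings, error variances, factor covariance
print(calc_stats(model))  # chi-square, CFI, RMSEA, and other fit indices
```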

Covarying Error Terms in a Measurement Model

Claim

Some claim you can covary error terms in a measurement model (CFA) under certain conditions in order to improve model fit. Others say you should always avoid it. In the past, I have taken both stances, with logic to support my decisions. However, as I've grown in understanding of SEM, I am more inclined to avoid covarying error terms if at all possible. That said, if you read pages 9 and 10 of Hermida (2015), you'll find two justifiable reasons:

  1. When you are conducting a longitudinal study and have the same variable in two time periods
  2. When the variables in question share components by design

“…it is not uncommon for studies to allow for correlated errors in initial model testing when the research design is longitudinal and the errors that are allowed to covary are the same indicators at different time periods. It is also possible for researchers to hypothesize a priori that errors will be correlated, based on the nature of study variables. For example, many studies allow errors to correlate when the variables have shared components. Kenny and Judd (1984) suggested using all possible cross-products of latent variable indicators as indicators of a latent product to be used for testing multiplicative structural equation models. Some of these cross-products will share components, so it is almost certain that their errors will correlate. Ultimately, these two reasons for allowing errors to correlate are part of the design, and are not necessarily related to sampling error or omitted variables issues. Thus, this study will help determine if the majority of studies which allow measurement errors to correlate are doing so for theoretically justifiable reasons, such as longitudinal research, or for unjustifiable reasons, such as improvement of model fit.”

Source

  • Hair, J. F., Jr., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Prentice Hall.
  • Hermida, R. (2015). The problem of allowing correlated errors in structural equation modeling: Concerns and considerations. Computational Methods in Social Sciences, 3(1), 5.

On page 675 of Hair et al. (2010):

"You also should not run CFA models that include covariances between error terms... Allowing these paths to be estimated (freeing them) will reduce the chi-square, but at the same time seriously question the construct validity of the construct."

Rationale

Including a covariance arrow between errors implies that there is some relationship between the items of these variables that you are not accounting for properly in your model. Allowing their errors to covary essentially ignores the problem, much like putting a light bandage over a bullet wound without removing the bullet. It covers up the issue on the surface, but does nothing to address the underlying concerns.
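For concreteness, here is a sketch of what the justifiable longitudinal case from Hermida (2015) typically looks like: the same three items (hypothetical names) are measured at two time points, and each item's error is allowed to covary only with its own error at the other time point. The sketch again uses semopy's lavaan-style syntax, where `~~` between two indicators covaries their error terms.

```python
from semopy import Model

# Same three items (hypothetical names) measured at two time points;
# each item's error covaries only with its own error at the other wave
desc = """
JobSat_T1 =~ js1_t1 + js2_t1 + js3_t1
JobSat_T2 =~ js1_t2 + js2_t2 + js3_t2
js1_t1 ~~ js1_t2
js2_t1 ~~ js2_t2
js3_t1 ~~ js3_t2
"""

model = Model(desc)
# model.fit(df)  # df: pandas DataFrame with the six item columns from both waves
```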

To Standardize or Mean-center in Interactions

Claim

You don't need to standardize or mean-center variables in interactions.

Source

  • Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Publications.

Rationale

Andrew discusses this "myth" at length in his book. The short answer is that failing to mean-center or standardize does not create a multicollinearity problem: the estimate, standard error, and test of the interaction term are unchanged by centering. What mean-centering or standardizing does change is the interpretation of the lower-order coefficients (after centering, they represent conditional effects at the mean of the other variable rather than at zero). However, standardizing does enable you to more easily interpret Johnson-Neyman plots. So, if you're using J-N plots, standardize. Otherwise, no need (according to Hayes).
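The invariance of the interaction term is easy to verify directly. Below is a small simulation sketch (hypothetical data, using Python's statsmodels) showing that the interaction coefficient and its standard error are identical whether or not the predictors are mean-centered; only the lower-order coefficients change.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate hypothetical data with a genuine x*m interaction
rng = np.random.default_rng(42)
n = 500
x = rng.normal(5, 2, n)
m = rng.normal(10, 3, n)
y = 0.5 * x + 0.3 * m + 0.2 * x * m + rng.normal(0, 1, n)
df = pd.DataFrame({"y": y, "x": x, "m": m})

# Model 1: raw (uncentered) predictors
raw = smf.ols("y ~ x * m", data=df).fit()

# Model 2: mean-centered predictors
df["xc"] = df["x"] - df["x"].mean()
df["mc"] = df["m"] - df["m"].mean()
centered = smf.ols("y ~ xc * mc", data=df).fit()

# The interaction estimate and its standard error are identical across models;
# only the lower-order terms differ (after centering they are conditional
# effects at the mean of the other predictor rather than at zero)
print(raw.params["x:m"], centered.params["xc:mc"])
print(raw.bse["x:m"], centered.bse["xc:mc"])
```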

Interaction with Mediator

Claim

In SEM, you must allow the moderator to covary with the variable it interacts with, but you cannot covary anything directly with an endogenous variable. So, when your mediator is part of the interaction (i.e., moderated mediation), you can instead covary the moderator with the mediator's error term.

Source

  • Preacher, K. J., Rucker, D. D., & Hayes, A. F. (2007). Addressing moderated mediation hypotheses: Theory, methods, and prescriptions. Multivariate Behavioral Research, 42(1), 185-227.

Rationale

This is the closest you can get to accounting for shared variance between the moderator and the mediator. See Model 3 in the above paper.
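Here is a path-model sketch of that setup with observed variables only and hypothetical variable names, again written in semopy's lavaan-style syntax: W moderates the M → Y path as in Model 3 of Preacher et al. (2007), the product term is computed from the observed scores, and the `M ~~ W` line is intended to covary the moderator with the mediator's error term, assuming semopy mirrors the lavaan convention that a `~~` involving an endogenous variable refers to its disturbance.

```python
import pandas as pd
from semopy import Model

# Hypothetical data: X (predictor), W (moderator), M (mediator), Y (outcome)
df = pd.read_csv("modmed.csv")
df["MxW"] = df["M"] * df["W"]  # product term built from observed scores

# W moderates the M -> Y path (Preacher et al. 2007, Model 3).
# "M ~~ W" covaries W with the error term of the endogenous mediator M
# (lavaan-style convention, which semopy is assumed to mirror).
desc = """
M ~ X
Y ~ M + W + MxW + X
M ~~ W
"""

model = Model(desc)
model.fit(df)
print(model.inspect())
```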