Corresponding author: James B Grace (

Academic editor: Stoyan Nedkov

Model selection may be the most researched and most discussed topic in the history of both statistics and structural equation modeling (SEM). The reason is that selecting one model for interpretive use from among many possible models is both essential and difficult. The published protocols and advice for model evaluation and selection in SEM studies are complex and difficult to integrate with the approaches currently used in biology. Opposition to the use of

U.S. Geological Survey

No conflicts of interest.

Model selection is one of the more challenging aspects of structural equation modeling. The selection decision typically follows a multi-step process of model evaluation that considers numerous possible models and various types of evidence. Traditionally, this process has depended strongly on the use of

When evaluating models estimated using traditional methods, there are two instances where SEM investigations encounter p-values: those associated with measures of global model fit and those associated with test statistics for individual parameters.

The likelihood-ratio χ²

Throughout the modern history of SEM, which can be thought of as the time since the LISREL synthesis in the early 1970s (χ²ML

Within the field of ecology,

Shifting away from a strict reliance on dichotomous hypothesis testing towards model comparisons has implications for model selection in SEM studies. Results reported by SEM software include

There are numerous types of evidence to be weighed when evaluating and comparing models. First and foremost is the scientific knowledge of the investigative team.

As an explanatory method, SEM requires the scientist to play an active role in the model evaluation process. A priori scientific knowledge is essential for the construction of the initial models. However, there may be a tendency for those beginning to use SEM to imagine that model evaluation, based on the data, is a tightly-scripted process defined by the rules of statistics. Earlier treatments of SEM tend to reinforce this impression. This is perhaps true of my own writing (e.g. comparing the presentations in Grace 2006 to the current paper), but also of the writing in more general treatments of SEM (early editions compared to the most recent ones). Some of the shift in recommendations reflects a broader shift in the view of the role of

The goal of model selection is not simply to describe the relationships in data. In my view, the goal is to balance twin objectives. First is the narrow task of evaluating the model-data inter-relationships. We must address the specific question, “What do these data say about the hypothesis?” SEM philosophy, however, imagines a sequence of studies and a process of sequential learning (

The initial model construction process in SEM relies heavily on investigator knowledge. The reasoning process adopted during model construction (

For links not included in the initial model, post-estimation evaluations should reveal whether these assumptions need to be reconsidered. Omitted direct links are straightforward to detect and include. However, we should not ignore the possibility that errors will be non-independent. Correlated errors are definite indications of omitted confounders, so finding error correlations may spark a need to consider model modifications. It is wise to consider how one might interpret such findings based on a priori knowledge, as they represent factors omitted from explicit inclusion in the initial model.

A broad view of the quantitative sciences must recognise that we aspire for our models to transition over time from assumption-testing to assumption-based. Numerous sub-disciplines within ecology rely on assumption-based models. So-called ‘mechanistic models’ incorporate processes that operate on biological systems with enough regularity that the form of the model is accepted as given and data are used purely to estimate the parameters. Population models often fall into this group. For this model type, some of the processes may be of known functional form, while others may be of unknown form. When studying multi-species ecological communities, we often encounter mechanisms that are contingent on so many factors that relationship forms cannot be taken as given (e.g. effects of species additions or removals;

The preceding material is presented to make two important points relevant to model evaluation and selection within SEM. First, when alternative models are suggested based on empirical results, we should avoid constructing alternative models for consideration that we know are false representations of the system. Perhaps a birth rate estimate is low and its 95% confidence interval includes a value of zero. Do we prune the model to adhere to the principle of parsimony? That would mean we might end up presenting a final model that, by omission, suggests that births are not a contributing factor for population size. Scientific logic suggests that we should not prune in this case, but what are the consequences? I will address this question in the context of the illustrative example in the section below. In summary, the rule I would suggest is this: do not include models in your comparison set that you, as a scientist, are not willing to defend.

Historically, the use of

As mentioned in the Introduction, for SEM applications that utilise global estimation methods (e.g. LISREL, Mplus, AMOS or lavaan), model estimation returns an initial set of measures that quantify the overall correspondence between observed and model-implied covariances. Immediate focus is directed to the χ²

There are a number of factors that limit our ability to provide an omnibus model evaluation using the χ²
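To make the mechanics concrete, the p-value attached to the likelihood-ratio test can be reproduced directly from the test statistic and its degrees of freedom. A minimal sketch (in Python rather than R, purely for illustration), using the values lavaan reports for Model 1 later in this paper (test statistic 9.125 on 4 degrees of freedom) and the closed-form χ² survival function available when df is even:

```python
import math

def chi2_sf(x, df):
    """P(X > x) for a chi-square variable.

    Closed form valid for even df (sufficient here, where df = 4):
    P(X > x) = exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)**k / k!
    """
    assert df % 2 == 0, "closed form shown only for even df"
    h = x / 2.0
    return math.exp(-h) * sum(h ** k / math.factorial(k) for k in range(df // 2))

# Model 1 values reported by lavaan later in this paper
p = chi2_sf(9.125, 4)
print(round(p, 3))  # 0.058, matching the reported P-value (Chi-square)
```

A p-value above the conventional 0.05 cut-off, as here, indicates a failure to reject the model, which is why attention then typically shifts to the approximate fit indices.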

Regarding approximate fit indices, it is probably true that some are, on average, better than others. However, simulation studies indicate that the capacity to detect mis-specifications based on recommended thresholds depends on the particular mis-specification (

Under the WOE approach, as I will demonstrate in this paper, approximate fit indices can be useful measures to report.

The second phase in evaluating models, after first examining global fit measures, is often to search for indications of what changes could be made to improve model-data concordance. Specifically designed for this purpose are the so-called modification indices (MI). All global-estimation-based software packages with which I am familiar report this information upon request. The critical role of the investigator’s scientific judgement comes into sharp focus once one tries to make sense of the MI table provided for a mis-specified model.

MI values are expressed in terms of the drop in the model χ²
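The scale of MI values can be checked against this paper's own results. An MI is a one-step (score-test) approximation to the drop in the model χ² that would result from freeing a single constraint. A small arithmetic sketch using the fit statistics reported for Models 1 and 2 later in this paper:

```python
# Chi-square statistics reported later in this paper
chi2_model1 = 9.125  # Model 1, df = 4
chi2_model2 = 1.155  # Model 2, df = 3 (error correlation b12 freed)

# Actual drop realised after re-estimating with the extra parameter
actual_drop = chi2_model1 - chi2_model2
print(round(actual_drop, 2))  # 7.97

# MI reported for that parameter (BioA1 ~~ BioB1) in the Model 1 MI table
mi_reported = 7.762
print(round(actual_drop - mi_reported, 3))  # 0.208: the MI slightly understates the realised drop
```

The small discrepancy is expected: the MI predicts the improvement without re-estimating the full model, while the realised drop reflects the simultaneous adjustment of all other parameters.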

Perhaps the best way to gain some intuition about the challenge MI values attempt to overcome is to look at the raw materials for computing evidence of mis-specification: the residuals. In this case, the residuals are not those between predicted and observed individual data values, but those between the observed and model-implied variance-covariance matrices. Requesting the residuals in a standardised metric will illustrate where model-implied and observed matrices are most discrepant. Because the parts of a model are intercorrelated, there are many different model changes that might be implied. From a very practical standpoint, the investigator must realise that any single change to the model can potentially resolve many of the listed modification possibilities; therefore, one should decide on a single addition to the model before re-estimation. It is essential that the chosen modifications make substantive sense, because investigators must explain to reviewers the scientific basis for the modifications of their initial model (see Grace 2006 for an in-depth discussion of this issue).

When working with models having latent variables with multiple indicators, MI tables sometimes return no usable advice, even though global model fit is poor. In that case, it becomes essential to consult the residual matrix unless theoretically predefined alternative specifications are available. While matrices of residuals are a less distilled source of information, they are commonly a fundamental source of evidence for selecting alternative models for consideration.

As with all other types of fit measures, information-based measures have a long history of use in SEM. This is a complex topic that I will treat lightly because the jury is still out on whether universally-applicable recommendations are even possible. Complicating things is the sheer variety of information metrics that have been proposed for use. Fortunately, a recent review of past studies and a set of simulation studies by

Two types of information measures have captured most of the recommendations,

In their book, Burnham and Anderson (2004) suggest that models separated by more than 2 AIC units could be seen as distinct, while later (

In this paper, I do not attempt to propose a definitive answer to the question of which information index is best, nor to consider the detailed studies and arguments associated with that question. My intent is to show how the various types of evidence associated with SEM can be used to build up a set of candidate models for comparison and how information measures can assist in the comparisons. For that purpose, I will rely on the following synoptic view, taken from various sources.
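For reference, the information measures compared below share a simple structure: each is a penalised version of the model's maximised log-likelihood. A minimal sketch (Python used purely for illustration), reproducing the Model 1 values reported in the lavaan output later in this paper (logLik = -1274.742, 16 free parameters, N = 150):

```python
import math

def aic(loglik, k):
    """Akaike Information Criterion: -2 lnL + 2k."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian Information Criterion: -2 lnL + k ln(N)."""
    return -2.0 * loglik + k * math.log(n)

def aicc(loglik, k, n):
    """Small-sample corrected AIC (second-order correction)."""
    return aic(loglik, k) + 2.0 * k * (k + 1) / (n - k - 1)

ll, k, n = -1274.742, 16, 150   # Model 1 values from the lavaan output
print(round(aic(ll, k), 2))     # 2581.48
print(round(bic(ll, k, n), 2))  # 2629.65
print(round(aicc(ll, k, n), 2)) # 2585.57 (the correction adds about 4.1 units here)
```

The differing penalty terms (2k versus k ln N) are the source of the differing behaviour of AIC-family and BIC-family measures discussed below.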

The relative performance of different information measures has been shown to vary with a number of factors, including (a) sample size, (b) the composition of candidate model sets, (c) the magnitude of effects to be detected and (d) heterogeneity in the data. I will try to summarise our current understanding (and my own experience) in a few statements:

The behaviour of

There are two key questions for the investigator to consider that influence the guidance to take home from simulation studies. First, is the true data-generating process complex, with tapering effect sizes? Second, is it likely that not all of the important variables in the true model are included in your candidate models?

If you answer yes to both of these questions,

If your answer to only the first question is yes,

In this paper, my focus is on globally-estimated models where the investigator must contend with multiple forms of evidence encountered within a sequential evaluation of overall model fit and individual links included in the model. However, many investigators use local estimation methods (e.g.

Pearl’s redescription of SEM in foundational terms, referred to as the Structural Causal Model (

In 2013, Shipley developed a version of his method based on

Most recently,

The use of the above-described types of evidence is illustrated next. To assist in the process, I provide a sequence of steps for a weight-of-evidence approach.

χ²

a. The Root Mean Square Error of Approximation (RMSEA), a measure based on the model χ²

b. The Comparative Fit Index (CFI)

c. The Standardized Root Mean Square Residual (SRMR)

χ²

χ²

χ²
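Several of the indices in the sequence above are simple transformations of the χ² statistic. For example, the RMSEA can be reproduced from the test statistic, its degrees of freedom and the sample size. A sketch using the Model 1 values reported below (note that lavaan divides by N, while some texts use N - 1, which gives a marginally larger value):

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation, as computed by lavaan."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * n))

# Model 1 values from the lavaan output: chi-square 9.125, df 4, N 150
print(round(rmsea(9.125, 4, 150), 3))  # 0.092

# When chi-square <= df, RMSEA is defined to be zero (as for Models 2 and 3B below)
print(rmsea(1.155, 3, 150))  # 0.0
```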

To illustrate the ideas presented in this paper, I will rely on an example related to the biological control of invasive plants. The invasive plant

In this paper, I have used the results published by

Fig.

For the illustration below, I simulated 10,000 replicates using the lavaan package simulateData function and then captured the covariance matrix as input for illustrations. The illustrations in this paper assume a sample size of 150 (original investigation was based on 165 samples). Code used for data simulation is in the supplementary materials (Suppl. material

In this section, I follow the Proposed Sequence described above to illustrate its application. In this example, I start with a single a priori model and work from there. In other cases, we might have multiple candidate models to evaluate from the beginning, which would modify the sequence slightly.

Model 1 (Fig.

Estimation of Model 1, using lavaan, returns the measures of overall model fit shown in Table

χ²

a.

b.

c.

d. Summary: This evidence suggests that, when model-data discrepancies are averaged across the whole model, there is reasonable correspondence. However, this level of fit could be the result of averaging many equal-sized minor mis-specifications OR of averaging many areas of the model with very close fit and a few important model-data mismatches. The limitation of global fit measures is their inability to distinguish these two possibilities.

Fig.

χ²

A new model for evaluation is presented in Fig.

χ²

In this situation, we must decide which of the models that have been estimated are scientifically defensible. Model 1 was found to be obviously mis-specified due to lack of empirical support and is not a contender for model selection. Model 2, while exhibiting a close model-data fit, was found to lack theoretical support for the originally hypothesised effect of

A model comparison table is presented in Table
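The model weights in such a comparison table follow directly from the AICc values. A minimal sketch (Python used for illustration) reproducing the weights reported for Models 3B, 3C, 3D and 3 in the comparison table later in this paper:

```python
import math

# AICc values from the model comparison table (MOD3B, MOD3C, MOD3D, MOD3)
aicc_values = [2576.63, 2576.64, 2579.03, 2579.13]

# Delta = difference from the best (smallest) AICc
deltas = [a - min(aicc_values) for a in aicc_values]

# Akaike weight: relative likelihood exp(-delta/2), normalised to sum to 1
rel_lik = [math.exp(-d / 2.0) for d in deltas]
weights = [r / sum(rel_lik) for r in rel_lik]

print([round(d, 2) for d in deltas])   # [0.0, 0.01, 2.4, 2.5]
print([round(w, 2) for w in weights])  # [0.39, 0.39, 0.12, 0.11]
```

The near-identical weights of the top two models reflect their 0.01-unit AICc separation; neither is distinguishable on information grounds alone, which is why the scientific defensibility of each specification carries the decision.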

Additional summary results for the selected model are shown in Table

A question raised earlier in the paper was about the consequences of retaining a weakly-supported link in a model for the other model parameters. It is known that leaving out an important link can have a major impact on the estimated parameter values for the included links. This sensitivity is illustrated by the fact that the model χ²

In this paper, I describe a way to bring the necessary evaluation of

At the present time, the greatest challenge for future studies is to provide defensible advice for the use of information measures in model comparisons. Most researchers investigating this topic have sought to identify a single index for all-purpose use. The most visible discussions in the field of ecology have debated the use of

The presentation here would be incomplete without mentioning that the ultimate solution to selecting the best model involves the data itself and not the methods of analysis. The bigger and better the sample, the more confidence we may have in the conclusions. If our goal is to generalise beyond the current sample, there is no substitute for sound mechanistic knowledge and sequential learning across linked studies (

I thank Lori Randall (USGS), Maria Felipe-Lucia (Helmholtz Center for Environmental Research) and Frank Pennekamp (University of Zurich) for helpful review comments and suggestions. This work was supported by the USGS Land Change Science and Ecosystems Programs. Any use of trade, firm or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.


Structural equation model representing an initial hypothesis regarding the potential effects of two biocontrol flea beetles,

Model 2, which includes an error correlation between species A and B at time 1 (parameter 12).

Model 3, with link from

Model 3B.

Description of ecological linkages numbered in Fig.

Link # | Description of potential mechanisms and expected sign of effect |

1 | Change in stem density is expected to depend on initial density of stems. A positive parameter estimate would indicate positive density dependence, while a negative parameter estimate would indicate negative density dependence (negative effect). |

2 | Dependence of flea beetle density on plant stem density for |

3 | Site fidelity for |

4 | Effect of |

5 | Lag food effect on |

6 | Competitive effect of |

7 | Dependence of flea beetle density on plant stem density for |

8 | Site fidelity for |

9 | Effect of |

10 | Lag food effect on |

11 | Competitive effect of |

Code for estimating Model 1 (Fig.

### load libraries

########## Simulation Study #1 ##########

sim.cov <- '
 1.2472
-0.1492  1.019
 0.8442 -0.178   1.6417
 0.0358  0.704   0.0357  1.5512
-0.2922 -0.233  -0.1217 -0.5276  1.488
 0.5235  0.149   0.5335  0.3277 -0.655  1.029'

# Convert matrix and name variables

##### Scenario 1 - Set N=150 #####

mod1 <- '

# Estimate model ‘mod1’ using data matrix ‘sim.cov.dat’, N = 150

Global fit statistics obtained for Model 1.

> summary(mod1.fit, fit.measures=T) | |

lavaan 0.6-5 ended normally after 15 iterations | |

Estimator | ML |

Optimisation method | NLMINB |

Number of free parameters | 16 |

Number of observations | 150 |

Model Test User Model: | |

Test statistic | 9.125 |

Degrees of freedom | 4 |

P-value (Chi-square) | 0.058 |

Model Test Baseline Model: | |

Test statistic | 247.786 |

Degrees of freedom | 15 |

P-value | 0.000 |

User Model versus Baseline Model: | |

Comparative Fit Index (CFI) | 0.978 |

Tucker-Lewis Index (TLI) | 0.917 |

Loglikelihood and Information Criteria: | |

Loglikelihood user model (H0) | -1274.742 |

Loglikelihood unrestricted model (H1) | -1270.179 |

Akaike (AIC) | 2581.483 |

Bayesian (BIC) | 2629.654 |

Sample-size adjusted Bayesian (BIC) | 2579.017 |

Root Mean Square Error of Approximation: | |

RMSEA | 0.092 |

90 Percent confidence interval - lower | 0.000 |

90 Percent confidence interval - upper | 0.173 |

P-value RMSEA <= 0.05 | 0.153 |

Standardized Root Mean Square Residual: | |

SRMR | 0.055 |

Modification indices for Model 1.

> # Modification Indices

lhs | op | rhs | mi | epc |

1 BioA1 | ~~ | BioB1 | 7.762 | -0.224 |

2 BioA1 | ~ | BioB1 | 7.762 | -0.226 |

3 BioA1 | ~ | BioA2 | 7.762 | 1.723 |

4 BioA1 | ~ | BioB2 | 7.762 | -0.341 |

5 BioB1 | ~ | BioA1 | 7.762 | -0.229 |

6 BioB1 | ~ | BioA2 | 7.762 | -0.414 |

7 BioB1 | ~ | BioB2 | 7.762 | -12.411 |

Code for estimating Model 2. Parameters are now labelled b1-12.

### Model 2 (parameter numbers correspond to those in Fig. 5)

mod2 <- '

Global fit statistics obtained for Model 2.

> summary(mod2.fit, fit.measures=T) | |

lavaan 0.6-3 ended normally after 16 iterations | |

Optimisation method | NLMINB |

Number of free parameters | 17 |

Number of observations | 150 |

Estimator | ML |

Model Fit Test Statistic | 1.155 |

Degrees of freedom | 3 |

P-value (Chi-square) | 0.764 |

Model test baseline model: | |

Minimum Function Test Statistic | 247.786 |

Degrees of freedom | 15 |

P-value | 0.000 |

User model versus baseline model: | |

Comparative Fit Index (CFI) | 1.000 |

Tucker-Lewis Index (TLI) | 1.040 |

Loglikelihood and Information Criteria: | |

Loglikelihood user model (H0) | -1270.757 |

Loglikelihood unrestricted model (H1) | -1270.179 |

Number of free parameters | 17 |

Akaike (AIC) | 2575.513 |

Bayesian (BIC) | 2626.694 |

Sample-size adjusted Bayesian (BIC) | 2572.892 |

Root Mean Square Error of Approximation: | |

RMSEA | 0.000 |

90 Percent Confidence Interval | 0.000 0.093 |

P-value RMSEA <= 0.05 | 0.848 |

Standardized Root Mean Square Residual: | |

SRMR | 0.013 |

Parameter-specific statistics for Model 2.

> # For examination of individual parameter support

lhs | op | rhs | label | est | se | z | pvalue | ci.low | ci.up | |

1 | BioA1 | ~ | Stems1 | b2 | 0.509 | 0.080 | 6.382 | 0.000 | 0.353 | 0.665 |

2 | BioB1 | ~ | Stems1 | b7 | 0.145 | 0.080 | 1.801 | 0.072 | -0.013 | 0.302 |

3 | BioA2 | ~ | BioA1 | b3 | 0.554 | 0.085 | 6.496 | 0.000 | 0.387 | 0.721 |

4 | BioA2 | ~ | Stems1 | b5 | 0.256 | 0.094 | 2.718 | 0.007 | 0.071 | 0.440 |

5 | BioA2 | ~ | BioB1 | b11 | -0.131 | 0.085 | -1.549 | 0.121 | -0.297 | 0.035 |

6 | BioB2 | ~ | BioB1 | b8 | 0.662 | 0.085 | 7.834 | 0.000 | 0.497 | 0.828 |

7 | BioB2 | ~ | Stems1 | b10 | 0.213 | 0.094 | 2.266 | 0.023 | 0.029 | 0.397 |

8 | BioB2 | ~ | BioA1 | b6 | 0.018 | 0.085 | 0.217 | 0.828 | -0.149 | 0.186 |

9 | StemChg12 | ~ | Stems1 | b1 | -0.643 | 0.091 | -7.077 | 0.000 | -0.821 | -0.465 |

10 | StemChg12 | ~ | BioA2 | b4 | 0.139 | 0.069 | 2.005 | 0.045 | 0.003 | 0.275 |

11 | StemChg12 | ~ | BioB2 | b9 | -0.208 | 0.067 | -3.078 | 0.002 | -0.340 | -0.075 |

12 | BioA1 | ~~ | BioB1 | b12 | -0.224 | 0.082 | -2.717 | 0.007 | -0.385 | -0.062 |

13 | BioA1 | ~~ | BioA1 | 0.974 | 0.113 | 8.660 | 0.000 | 0.754 | 1.195 | |

14 | BioB1 | ~~ | BioB1 | 0.991 | 0.114 | 8.660 | 0.000 | 0.767 | 1.215 | |

15 | BioA2 | ~~ | BioA2 | 1.008 | 0.116 | 8.660 | 0.000 | 0.780 | 1.236 | |

16 | BioB2 | ~~ | BioB2 | 1.008 | 0.116 | 8.660 | 0.000 | 0.780 | 1.236 | |

17 | StemChg12 | ~~ | StemChg12 | 0.968 | 0.112 | 8.660 | 0.000 | 0.749 | 1.187 | |

18 | Stems1 | ~~ | Stems1 | 1.022 | 0.000 | NA | NA | 1.022 | 1.022 |

Code for estimating Model 3.

### Model 3 (parameter b4 now estimates an error correlation)

Mod3 <- '

Parameter-specific statistics for Model 3.

> # For examination of individual parameter support

lhs | op | rhs | label | est | se | z | pvalue | ci.low | ci.up | |

1 | BioA1 | ~ | Stems1 | b2 | 0.509 | 0.080 | 6.382 | 0.000 | 0.353 | 0.665 |

2 | BioB1 | ~ | Stems1 | b7 | 0.145 | 0.080 | 1.801 | 0.072 | -0.013 | 0.302 |

3 | BioA2 | ~ | BioA1 | b3 | 0.551 | 0.084 | 6.573 | 0.000 | 0.387 | 0.716 |

4 | BioA2 | ~ | Stems1 | b5 | 0.257 | 0.094 | 2.747 | 0.006 | 0.074 | 0.441 |

5 | BioA2 | ~ | BioB1 | b11 | -0.133 | 0.084 | -1.594 | 0.111 | -0.297 | 0.031 |

6 | BioB2 | ~ | BioB1 | b8 | 0.662 | 0.085 | 7.834 | 0.000 | 0.497 | 0.828 |

7 | BioB2 | ~ | Stems1 | b10 | 0.213 | 0.094 | 2.266 | 0.023 | 0.029 | 0.397 |

8 | BioB2 | ~ | BioA1 | b6 | 0.018 | 0.085 | 0.217 | 0.828 | -0.149 | 0.186 |

9 | StemChg12 | ~ | Stems1 | b1 | -0.565 | 0.083 | -6.786 | 0.000 | -0.729 | -0.402 |

10 | StemChg12 | ~ | BioB2 | b9 | -0.224 | 0.067 | -3.332 | 0.001 | -0.355 | -0.092 |

11 | BioA1 | ~~ | BioB1 | b12 | -0.224 | 0.082 | -2.717 | 0.007 | -0.385 | -0.062 |

12 | BioA2 | ~~ | StemChg12 | b4 | 0.181 | 0.083 | 2.184 | 0.029 | 0.019 | 0.344 |

Code for estimating Model 3B.

### Model 3B (parameter b6 now set to zero)

Mod3B <- '

Parameter-specific statistics for Model 3B.

> # For examination of individual parameter support

lhs | op | rhs | label | est | se | z | pvalue | ci.low | ci.up | |

1 | BioA1 | ~ | Stems1 | b2 | 0.509 | 0.080 | 6.382 | 0.000 | 0.353 | 0.665 |

2 | BioB1 | ~ | Stems1 | b7 | 0.145 | 0.080 | 1.801 | 0.072 | -0.013 | 0.302 |

3 | BioA2 | ~ | BioA1 | b3 | 0.551 | 0.084 | 6.573 | 0.000 | 0.387 | 0.716 |

4 | BioA2 | ~ | Stems1 | b5 | 0.257 | 0.094 | 2.747 | 0.006 | 0.074 | 0.441 |

5 | BioA2 | ~ | BioB1 | b11 | -0.133 | 0.084 | -1.594 | 0.111 | -0.297 | 0.031 |

6 | BioB2 | ~ | BioB1 | b8 | 0.658 | 0.082 | 7.993 | 0.000 | 0.497 | 0.820 |

7 | BioB2 | ~ | Stems1 | b10 | 0.223 | 0.082 | 2.723 | 0.006 | 0.063 | 0.384 |

8 | BioB2 | ~ | BioA1 | b6 | 0.000 | 0.000 | NA | NA | 0.000 | 0.000 |

9 | StemChg12 | ~ | Stems1 | b1 | -0.565 | 0.083 | -6.786 | 0.000 | -0.729 | -0.402 |

10 | StemChg12 | ~ | BioB2 | b9 | -0.224 | 0.067 | -3.332 | 0.001 | -0.355 | -0.092 |

11 | BioA1 | ~~ | BioB1 | b12 | -0.224 | 0.082 | -2.717 | 0.007 | -0.385 | -0.062 |

12 | BioA2 | ~~ | StemChg12 | b4 | 0.181 | 0.083 | 2.184 | 0.029 | 0.019 | 0.344 |

Model comparison table.

> ##### Multimodel Comparisons

Model selection based on AICc:

Model | K | AICc | Delta_AICc | AICcWt | Cum.Wt | LL |

MOD3B | 16 | 2576.63 | 0.00 | 0.39 | 0.39 | -1270.27 |

MOD3C | 15 | 2576.64 | 0.01 | 0.39 | 0.77 | -1271.53 |

MOD3D | 14 | 2579.03 | 2.40 | 0.12 | 0.89 | -1273.96 |

MOD3 | 17 | 2579.13 | 2.50 | 0.11 | 1.00 | -1270.25 |

Select results for model selected for interpretation, Model 3B.

Model Fit | ||||||

Estimator | ML | |||||

Model Fit Test Statistic | 0.180 | |||||

Degrees of freedom | 4 | |||||

P-value (Chi-square) | 0.996 | |||||

Comparative Fit Index (CFI) | 1.000 | |||||

RMSEA | 0.000 | |||||

90 Percent Confidence Interval | 0.000 0.000 | |||||

P-value RMSEA <= 0.05 | 0.998 | |||||

SRMR | 0.006 | |||||

Regressions: | ||||||

Estimate | Std.Err | z-value | P(>|z|) | Std.all | ||

BioA1 ~ | ||||||

Stems1 | (b2) | 0.509 | 0.080 | 6.382 | 0.000 | 0.462 |

BioB1 ~ | ||||||

Stems1 | (b7) | 0.145 | 0.080 | 1.801 | 0.072 | 0.146 |

BioA2 ~ | ||||||

BioA1 | (b3) | 0.551 | 0.084 | 6.573 | 0.000 | 0.481 |

Stems1 | (b5) | 0.257 | 0.094 | 2.747 | 0.006 | 0.204 |

BioB1 | (b11) | -0.133 | 0.084 | -1.594 | 0.111 | -0.105 |

BioB2 ~ | ||||||

BioB1 | (b8) | 0.658 | 0.082 | 7.993 | 0.000 | 0.534 |

Stems1 | (b10) | 0.223 | 0.082 | 2.723 | 0.006 | 0.182 |

BioA1 | (b6) | 0.000 | 0.000 | |||

StemChg12 ~ | ||||||

Stems1 | (b1) | -0.565 | 0.083 | -6.786 | 0.000 | -0.470 |

BioB2 | (b9) | -0.224 | 0.067 | -3.332 | 0.001 | -0.228 |

Covariances: | ||||||

Estimate | Std.Err | z-value | P(>|z|) | Std.all | ||

.BioA1 ~~ | ||||||

.BioB1 | (b12) | -0.224 | 0.082 | -2.717 | 0.007 | -0.227 |

.BioA2 ~~ | ||||||

.StmChg12 | (b4) | 0.181 | 0.083 | 2.184 | 0.029 | 0.181 |

R-Square: | ||||||

Estimate | ||||||

BioA1 | 0.214 | |||||

BioB1 | 0.021 | |||||

BioA2 | 0.381 | |||||

BioB2 | 0.346 | |||||

StemChg12 | 0.328 |

Results if

Model Fit | ||||||

Estimator | ML | |||||

Model Fit Test Statistic | 3.390 | |||||

Degrees of freedom | 5 | |||||

P-value (Chi-square) | 0.640 | |||||

Comparative Fit Index (CFI) | 1.000 | |||||

RMSEA | 0.000 | |||||

90 Percent Confidence Interval | 0.000 0.092 | |||||

P-value RMSEA <= 0.05 | 0.793 | |||||

SRMR | 0.050 | |||||

Regressions: | ||||||

Estimate | Std.Err | z-value | P(>|z|) | Std.all | ||

BioA1 ~ | ||||||

Stems1 | (b2) | 0.541 | 0.078 | 6.974 | 0.000 | 0.485 |

BioB1 ~ | ||||||

Stems1 | (b7) | 0.000 | NA | 0.000 | ||

BioA2 ~ | ||||||

BioA1 | (b3) | 0.551 | 0.084 | 6.573 | 0.000 | 0.481 |

Stems1 | (b5) | 0.257 | 0.093 | 2.769 | 0.006 | 0.201 |

BioB1 | (b11) | -0.133 | 0.083 | -1.610 | 0.107 | -0.104 |

BioB2 ~ | ||||||

BioB1 | (b8) | 0.658 | 0.081 | 8.079 | 0.000 | 0.541 |

Stems1 | (b10) | 0.223 | 0.081 | 2.752 | 0.006 | 0.184 |

BioA1 | (b6) | 0.000 | 0.000 | |||

StemChg12 ~ | ||||||

Stems1 | (b1) | -0.565 | 0.082 | -6.903 | 0.000 | -0.474 |

BioB2 | (b9) | -0.224 | 0.067 | -3.343 | 0.001 | -0.227 |

Covariances: | ||||||

Estimate | Std.Err | z-value | P(>|z|) | Std.all | ||

.BioA1 ~~ | ||||||

.BioB1 | (b12) | -0.228 | 0.083 | -2.743 | 0.006 | -0.230 |

.BioA2 ~~ | ||||||

.StmChg12 | (b4) | 0.181 | 0.083 | 2.184 | 0.029 | 0.181 |

R-Square: | ||||||

Estimate | ||||||

BioA1 | 0.235 | |||||

BioB1 | 0.000 | |||||

BioA2 | 0.397 | |||||

BioB2 | 0.327 | |||||

StemChg12 | 0.316 |

A 'Weight of Evidence' Approach to Evaluating Structural Equation Models - Supplement 1

R code

This text file contains the R code used to develop the demonstrations included in Grace JB (2020) A 'weight of evidence' approach to evaluating structural equation models. One Ecosystem

File: oo_373240.txt