Phillip Alday, Douglas Bates, and Reinhold Kliegl
2024-09-09
Packages we (might) use.
using DataFrames
using MixedModels
using MixedModelsMakie
using SMLP2024: dataset
using Statistics: mean, std
const progress=false
false
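The analyses below operate on a data frame dat that is not constructed in the code shown here; presumably it was loaded along the following lines (the dataset key :mrk17_exp1 is an assumption):

dat = DataFrame(dataset(:mrk17_exp1))  # load the MRK17 data; the key name is assumed
describe(dat)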
Row | variable | mean | min | median | max | nmissing | eltype |
---|---|---|---|---|---|---|---|
 | Symbol | Union… | Any | Union… | Any | Int64 | DataType |
1 | subj | | S01 | | S73 | 0 | String |
2 | item | | ACHE | | YEAR | 0 | String |
3 | trial | 239.958 | 2 | 239.0 | 480 | 0 | Int16 |
4 | F | | HF | | LF | 0 | String |
5 | P | | rel | | unr | 0 | String |
6 | Q | | clr | | deg | 0 | String |
7 | lQ | | clr | | deg | 0 | String |
8 | lT | | NW | | WD | 0 | String |
9 | rt | 647.173 | 301 | 601.0 | 2994 | 0 | Int16 |
This section covers the general terminology and advice for model specification.
Linear mixed models (LMMs), like many other types of statistical models, describe a relationship between a response variable and covariates that have been measured or observed along with the response. The statistical model assumes that the residuals of the fitted response (i.e., not the responses themselves) are independently and identically normally distributed. This is the first assumption of normality in the LMM. It is standard practice to inspect the model residuals and, if serious skew is indicated, to Box-Cox transform the response (unless there are theoretical reasons not to) so that this model assumption is met.
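In this script the response is analyzed as 1000/rt, that is, a reciprocal transformation of reaction time (response speed), which is a member of the Box-Cox family. A minimal sketch of an informal skew check on the two scales, using the mean-median difference as a rough indicator (assuming dat has been loaded as above):

using Statistics: mean, median
# Raw RTs are right-skewed (mean > median); the reciprocal scale is more symmetric.
(; rt_mean = mean(dat.rt), rt_median = median(dat.rt),
   speed_mean = mean(1000 ./ dat.rt), speed_median = median(1000 ./ dat.rt))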
In the following we distinguish between categorical covariates and numerical covariates. Categorical covariates are factors. The important characteristic of a factor is that, for each observed value of the response, the factor takes on the value of one of a set of discrete levels. The levels can be unordered (nominal) or ordered (ordinal). We use the term covariate when we refer to numerical covariates, that is, to continuous measures with some distribution. In principle, statistical models are not constrained by the distribution of observations across levels of factors and covariates, but the distribution may lead to problems of model identification and it does have implications for statistical power.
Statistical power, especially for the detection of interactions, is best when observations are distributed uniformly across the levels of factors or across the values of covariates. In experimental designs, uniform distributions may be achieved by balanced assignment of subjects (or other carriers of responses) to the levels of factors or combinations of factor levels. In observational contexts, we achieve uniform distributions by stratification (e.g., on age, gender, or IQ scores). Statistical power is worse for skewed than for normal distributions (I think …). Therefore, although it is not required to meet an assumption of the statistical model, it may be useful to consider Box-Cox transformations of covariates.
In LMMs the levels of at least one of the factors represent units in the data set that are assumed to be sampled, ideally randomly, from a population that is normally distributed with respect to the response. This is the second assumption of normal distribution in LMMs. In psychology and linguistics the observational units are often the subjects or items (e.g., texts, sentences, words, pictures) in the study. We may use numbers, such as subject identifiers, to designate the particular levels that we observed; we recommend prepending these numbers with “S” or “I” to avoid confusion with numeric variables.
Random sampling is the basis of generalization from the sample to the population. The core statistics we will estimate in this context are variances and correlations of grand means and (quasi-)experimental effects. These terms will be explained below. What we want to stress here is that the estimation of (co-)variances / correlations requires a larger number of units (levels) than the estimation of means. Therefore, from a practical perspective, it is important that random factors are represented with many units.
When there is more than one random factor, we must be clear about their relation. The two prototypical cases are that the factors are nested or crossed. In multilevel models, a special case of mixed models, the levels of the random factors are strictly nested. For example, at a given time, every student attends a specific class in a specific school. Students, classes, and schools could be three random factors. As soon as we look at this scenario across several school years, the nesting quickly falls apart because students may move between classes and between schools.
In psychology and linguistics, random factors are often crossed, for example, when every subject reads every word in every sentence in a word-by-word self-paced reading experiment (or alternatively: when every word in every sentence elicits a response from every subject). However, in an eye-movement experiment (for example), the perfect crossing on a measure like fixation duration is not attainable because of blinks or skipping of words.
In summary, the typical situation in experimental and observational studies with more than one random factor is partial crossing or partial nesting of levels of the random factors. Linear mixed models handle these situations very well.
Fixed experimental factor or covariate. In experiments the units (or levels) of the random factor(s) are assigned to manipulations implemented in their design. The researcher controls the assignment of units of the random factor(s) (e.g., subjects, items) to experimental manipulations. These manipulations are represented as factors with a fixed and discrete set of levels (e.g., training vs. control group) or as covariates associated with continuous numeric values (e.g., presentation times).
Fixed quasi-experimental factor or covariate. In observational studies (which can also be experiments) the units (or levels) of random factors may “bring along” characteristics that represent the levels of quasi-experimental factors or covariates beyond the control of the researcher. Whether a subject is female, male, or diverse or whether a word is a noun, a verb, or an adjective are examples of quasi-experimental factors of gender or word type, respectively. Subject-related covariates are body height, body mass, and IQ scores; word-related covariates are their length, frequency, and cloze predictability.
The distinction between between-unit and within-unit factors is always relative to a random (grouping) factor of an experimental design. A between-unit factor / covariate is a factor for which every unit of the random factor is assigned to or characterized by only one level of the factor. A within-unit factor is a factor for which units of the random factor appear at every level of the factor.
For the typical random factor, say Subject, there is little ambiguity because we are used to the between-within distinction from ANOVAs, more specifically the F1-ANOVA. In psycholinguistics, there is a tradition of also testing effects for the second random factor, Item, in an F2-ANOVA. Importantly, for a given fixed factor all four combinations are possible. For example, Gender is a fixed quasi-experimental between-subject / within-item factor; word frequency is a fixed quasi-experimental within-subject / between-item covariate; Prime-target relation is a fixed experimental within-subject / within-item factor (assuming that targets are presented both in a primed and in an unprimed condition); and when a training manipulation is defined by the items used in the training, then in a training-control group design the fixed factor Group is a fixed experimental between-subject / between-item factor.
These distinctions are critical for setting up LMMs because variance components for (quasi-)experimental effects can only be specified for within-unit effects. Note also that loss of data (within limits), counterbalancing or blocking of items are irrelevant for these definitions.
The simplest fixed factor has two levels and the model estimates the difference between them. When we move to factors with k levels, we must decide on how we spend the k-1 degrees of freedom, that is we must specify a set of contrasts. (If we don’t do it, the program chooses DummyCoding contrasts for us.)
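As an illustration (a sketch, assuming the StatsModels package that supplies these coding types is available in the project environment), the contrast matrices for a two-level factor can be inspected directly:

using StatsModels: ContrastsMatrix, DummyCoding, EffectsCoding
# DummyCoding estimates the difference from the base (first) level;
# EffectsCoding centers the contrast so that the intercept estimates the grand mean.
ContrastsMatrix(DummyCoding(), ["LF", "HF"]).matrix    # LF -> 0, HF -> 1
ContrastsMatrix(EffectsCoding(), ["LF", "HF"]).matrix  # LF -> -1, HF -> +1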
The simplest specification of a covariate is to include its linear trend, that is, its slope. The slope (like a contrast) represents a difference score, that is, the change in the response for a one-unit change in the covariate. For covariates we must decide on the order of the trend we want to model.
Fixed factors and covariates are expected to have effects on the response. Fixed-effect model parameters estimate the hypothesized main and interaction effects of the study. The estimates of factors are based on contrasts; the estimates of covariates are based on trends. Conceptually, they correspond to unstandardized regression coefficients in multiple regression.
The intercept is a special regression coefficient; it estimates the value of the dependent variable when all fixed effects associated with factors and trends associated with covariates are zero. In experimental designs with higher-order interactions there is an advantage in specifying the LMM in such a way that the intercept estimates the grand mean (GM; the mean of the means of the design cells). This happens if (a) contrasts for factors are chosen such that the intercept estimates the GM (positive examples: EffectsCoding, SeqDifferenceCoding, or HelmertCoding contrasts; a negative example: DummyCoding), (b) orthogonal polynomial trends are used (Helmert, anova-based), and (c) covariates are centered on their mean before inclusion in the model. As always, there may be good theoretical reasons to depart from the default recommendation.
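For (c), a minimal sketch of mean-centering a covariate before it enters the model (the column name trial_c is an assumption, not part of the original analysis):

using DataFrames
using Statistics: mean
# Center the trial covariate so that the intercept keeps its grand-mean
# interpretation when a linear trend of trial is included.
transform!(dat, :trial => (t -> t .- mean(t)) => :trial_c)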
The specification of contrasts / trends does not depend on the status of the fixed factor / covariate. It does not matter whether a factor varies between or within the units of a random factor or whether it is an experimental or quasi-experimental factor. Contrasts are not specified for random (grouping) factors.
Variance components (VCs) and correlation parameters (CPs) are within-group model parameters; they correspond to (some of the) within-unit (quasi-)experimental fixed-effect model parameters. Thus, we may be able to estimate a subject-related VC for word frequency. If we include a linear trend for word frequency, the VC estimates the subject-related variance in these slopes. We cannot estimate an item-related VC for the word-frequency slopes because there is only one frequency associated with each word. Analogously, we may be able to estimate an item-related VC for the effect of Group (training vs. control), but we cannot estimate a subject-related VC for this effect.
The within-between characteristics of fixed factors and covariates relative to the random factor(s) are features of the design of the experiment or observational study. They fundamentally constrain the specification of the LMM. That’s why it is of utmost importance to be absolutely clear about their status.
In this outline of the dimensions underlying the specification of an LMM, we have said nothing so far about the conditional modes of random effects (i.e., the results shown in caterpillar and shrinkage plots). They are not needed for model specification or model selection.
The VC is the prior variance of the random effects, whereas var(ranef(model)) is the variance of the posterior means/modes of the random effects. See Kliegl et al. (2010, Visual Cognition) and Rizopoulos (2019, StackExchange).
This semantic-priming experiment was reported in Masson, Rabe, & Kliegl (2017, Exp. 1, Memory & Cognition). It is a direct replication of an experiment reported in Masson & Kliegl (2013, Exp. 1, JEPLMC). Following a prime word, a related or unrelated high- or low-frequency target word or a nonword was presented in clear or dim font. The subject’s task was to decide as quickly as possible whether the target was a word or a nonword, that is, subjects performed a lexical decision task (LDT). The reaction time and the accuracy of the response were recorded. Only correct reaction times to words are included. After filtering there were 16,409 observations recorded from 73 subjects and 240 items.
The data (variables and observations) used by Masson et al. (2017) are available in the file MRK17_Exp1.RDS.
Variable | Description |
---|---|
Subj | Subject identifier |
Item | Target (non-)word |
trial | Trial number |
F | Target frequency is high or low |
P | Prime is related or unrelated to target |
Q | Target quality is clear or degraded |
lQ | Last-trial target quality is clear or degraded |
lT | Last-trial target requires word or nonword response |
rt | Reaction time [ms] |
lagQlty and lagTrgt (here lQ and lT) refer to the experimental conditions of the last trial.
Corresponding indicator variables (-1/+1):
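A sketch of how such ±1 indicators can be constructed from the factors (the *_i column names are hypothetical; the signs match the EffectsCoding specification used below):

using DataFrames
# Map the second level of each two-level factor to +1 and the first level to -1.
transform!(dat,
  :F  => (x -> ifelse.(x .== "HF",  1, -1)) => :F_i,
  :P  => (x -> ifelse.(x .== "rel", 1, -1)) => :P_i,
  :Q  => (x -> ifelse.(x .== "clr", 1, -1)) => :Q_i,
  :lQ => (x -> ifelse.(x .== "clr", 1, -1)) => :lQ_i,
  :lT => (x -> ifelse.(x .== "WD",  1, -1)) => :lT_i,
)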
cells = combine(
groupby(dat, [:F, :P, :Q, :lQ, :lT]),
nrow => :n,
:rt => mean => :rt_m,
:rt => std => :rt_sd
)
cells
Row | F | P | Q | lQ | lT | n | rt_m | rt_sd |
---|---|---|---|---|---|---|---|---|
 | String | String | String | String | String | Int64 | Float64 | Float64 |
1 | LF | unr | deg | deg | NW | 491 | 683.831 | 197.074 |
2 | LF | unr | deg | deg | WD | 488 | 676.848 | 198.409 |
3 | LF | unr | deg | clr | NW | 499 | 685.265 | 186.73 |
4 | LF | unr | deg | clr | WD | 519 | 676.711 | 167.794 |
5 | LF | unr | clr | deg | NW | 528 | 658.225 | 213.144 |
6 | LF | unr | clr | deg | WD | 494 | 653.03 | 196.71 |
7 | LF | unr | clr | clr | NW | 506 | 648.057 | 209.625 |
8 | LF | unr | clr | clr | WD | 499 | 641.669 | 183.413 |
9 | LF | rel | deg | deg | NW | 492 | 667.872 | 175.869 |
10 | LF | rel | deg | deg | WD | 498 | 653.43 | 180.58 |
11 | LF | rel | deg | clr | NW | 539 | 650.16 | 167.51 |
12 | LF | rel | deg | clr | WD | 512 | 678.119 | 217.209 |
13 | LF | rel | clr | deg | NW | 499 | 635.076 | 192.738 |
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
21 | HF | unr | clr | deg | NW | 553 | 644.6 | 203.261 |
22 | HF | unr | clr | deg | WD | 495 | 633.046 | 172.604 |
23 | HF | unr | clr | clr | NW | 495 | 619.626 | 176.55 |
24 | HF | unr | clr | clr | WD | 532 | 620.838 | 192.689 |
25 | HF | rel | deg | deg | NW | 508 | 659.266 | 192.637 |
26 | HF | rel | deg | deg | WD | 546 | 637.793 | 177.496 |
27 | HF | rel | deg | clr | NW | 507 | 645.493 | 179.962 |
28 | HF | rel | deg | clr | WD | 517 | 667.685 | 231.145 |
29 | HF | rel | clr | deg | NW | 527 | 615.647 | 176.826 |
30 | HF | rel | clr | deg | WD | 504 | 620.569 | 173.764 |
31 | HF | rel | clr | clr | NW | 546 | 635.319 | 205.19 |
32 | HF | rel | clr | clr | WD | 499 | 613.094 | 192.13 |
The following LMM is not the maximal factorial LMM because we do not include interaction terms and associated correlation parameters in the RE structure.
contrasts =
Dict( :F => EffectsCoding(; levels=["LF", "HF"]) ,
:P => EffectsCoding(; levels=["unr", "rel"]),
:Q => EffectsCoding(; levels=["deg", "clr"]),
:lQ =>EffectsCoding(; levels=["deg", "clr"]),
:lT =>EffectsCoding(; levels=["NW", "WD"])
);
m_cpx = let
form = @formula (1000/rt) ~ 1+F*P*Q*lQ*lT +
(1+F+P+Q+lQ+lT | subj) +
(1 +P+Q+lQ+lT | item);
fit(MixedModel, form, dat; contrasts, progress)
end
VarCorr(m_cpx)
Column | Variance | Std.Dev | Corr. | |||||
item | (Intercept) | 0.00320181 | 0.05658456 | |||||
P: rel | 0.00012913 | 0.01136337 | +0.05 | |||||
Q: clr | 0.00015972 | 0.01263799 | +0.36 | +0.38 | ||||
lQ: clr | 0.00003493 | 0.00591002 | +0.38 | +0.03 | +0.03 | |||
lT: WD | 0.00015749 | 0.01254958 | -0.11 | -0.87 | -0.01 | -0.35 | ||
subj | (Intercept) | 0.03062115 | 0.17498899 | |||||
F: HF | 0.00003858 | 0.00621157 | +0.39 | |||||
P: rel | 0.00012861 | 0.01134062 | +0.35 | +0.95 | ||||
Q: clr | 0.00078982 | 0.02810367 | +0.41 | +0.49 | +0.72 | |||
lQ: clr | 0.00011565 | 0.01075416 | +0.06 | +0.02 | +0.22 | +0.58 | ||
lT: WD | 0.00104490 | 0.03232486 | +0.26 | -0.14 | -0.01 | +0.37 | +0.50 | |
Residual | 0.08570468 | 0.29275361 |
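The principal-components summary shown next is the output of MixedModels.PCA; presumably it was produced with something like:

issingular(m_cpx)       # is the fit singular? (cf. the zero diagonal entries in λ below)
MixedModels.PCA(m_cpx)  # PCA of the random-effect correlation matrices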
(item =
Principal components based on correlation matrix
(Intercept) 1.0 . . . .
P: rel 0.05 1.0 . . .
Q: clr 0.36 0.38 1.0 . .
lQ: clr 0.38 0.03 0.03 1.0 .
lT: WD -0.11 -0.87 -0.01 -0.35 1.0
Normalized cumulative variances:
[0.4227, 0.6868, 0.9061, 1.0, 1.0]
Component loadings
PC1 PC2 PC3 PC4 PC5
(Intercept) -0.3 0.68 0.01 0.67 0.07
P: rel -0.6 -0.38 0.22 0.05 0.67
Q: clr -0.31 0.34 0.7 -0.46 -0.28
lQ: clr -0.32 0.41 -0.62 -0.55 0.2
lT: WD 0.6 0.34 0.26 -0.15 0.66, subj =
Principal components based on correlation matrix
(Intercept) 1.0 . . . . .
F: HF 0.39 1.0 . . . .
P: rel 0.35 0.95 1.0 . . .
Q: clr 0.41 0.49 0.72 1.0 . .
lQ: clr 0.06 0.02 0.22 0.58 1.0 .
lT: WD 0.26 -0.14 -0.01 0.37 0.5 1.0
Normalized cumulative variances:
[0.48, 0.7523, 0.8971, 0.9605, 1.0, 1.0]
Component loadings
PC1 PC2 PC3 PC4 PC5 PC6
(Intercept) -0.34 0.01 0.83 0.41 -0.14 -0.09
F: HF -0.46 0.44 -0.07 -0.26 -0.39 0.61
P: rel -0.53 0.29 -0.2 -0.2 -0.02 -0.74
Q: clr -0.52 -0.18 -0.14 0.18 0.76 0.27
lQ: clr -0.29 -0.54 -0.42 0.45 -0.5 -0.0
lT: WD -0.19 -0.63 0.27 -0.7 -0.06 0.0)
The variance-covariance matrix of the random-effect structure suggests overparameterization for both the subject-related and the item-related components.
We don’t look at fixed effects before model selection.
We can also look separately at the item-related and subject-related VCs and CPs.
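The two lower-triangular matrices shown below are the relative covariance factors (λ) of the two random-effect terms; presumably they were extracted along these lines (the λ property and the item-before-subj ordering are assumptions about how this output was produced):

first(m_cpx.λ)  # item-related relative covariance factor (5×5)
last(m_cpx.λ)   # subj-related relative covariance factor (6×6)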
5×5 LinearAlgebra.LowerTriangular{Float64, Matrix{Float64}}:
0.193284 ⋅ ⋅ ⋅ ⋅
0.00204529 0.0387616 ⋅ ⋅ ⋅
0.0156009 0.0156464 0.0370864 ⋅ ⋅
0.00758797 0.000204232 -0.00258088 0.0185273 ⋅
-0.00482077 -0.0370884 0.0173333 -0.0117634 0.0
The last diagonal entry is zero; this variance parameter is not supported by the data.
6×6 LinearAlgebra.LowerTriangular{Float64, Matrix{Float64}}:
0.597735 ⋅ ⋅ ⋅ ⋅ ⋅
0.00818596 0.019575 ⋅ ⋅ ⋅ ⋅
0.0134379 0.0343446 0.0118528 ⋅ ⋅ ⋅
0.0392815 0.0347888 0.0803882 0.0 ⋅ ⋅
0.00206606 -0.000103841 0.0242869 0.0108592 0.0252461 ⋅
0.0283518 -0.0290294 0.0475194 -0.0533221 0.0552833 0.0488642
The fourth diagonal entry is zero; this variance parameter is not supported by the data.
We take out correlation parameters.
m_zcp = let
form = @formula (1000/rt) ~ 1+F*P*Q*lQ*lT +
zerocorr(1+F+P+Q+lQ+lT | subj) +
zerocorr(1 +P+Q+lQ+lT | item);
fit(MixedModel, form, dat; contrasts, progress)
end
VarCorr(m_zcp)
issingular(m_zcp)
MixedModels.PCA(m_zcp)
(item =
Principal components based on correlation matrix
(Intercept) 1.0 . . . .
P: rel 0.0 1.0 . . .
Q: clr 0.0 0.0 1.0 . .
lQ: clr 0.0 0.0 0.0 1.0 .
lT: WD 0.0 0.0 0.0 0.0 1.0
Normalized cumulative variances:
[0.2, 0.4, 0.6, 0.8, 1.0]
Component loadings
PC1 PC2 PC3 PC4 PC5
(Intercept) 1.0 0.0 0.0 0.0 0.0
P: rel 0.0 0.0 0.0 0.0 1.0
Q: clr 0.0 1.0 0.0 0.0 0.0
lQ: clr 0.0 0.0 1.0 0.0 0.0
lT: WD 0.0 0.0 0.0 1.0 0.0, subj =
Principal components based on correlation matrix
(Intercept) 1.0 . . . . .
F: HF 0.0 1.0 . . . .
P: rel 0.0 0.0 1.0 . . .
Q: clr 0.0 0.0 0.0 1.0 . .
lQ: clr 0.0 0.0 0.0 0.0 1.0 .
lT: WD 0.0 0.0 0.0 0.0 0.0 1.0
Normalized cumulative variances:
[0.1667, 0.3333, 0.5, 0.6667, 0.8333, 1.0]
Component loadings
PC1 PC2 PC3 PC4 PC5 PC6
(Intercept) 1.0 0.0 0.0 0.0 0.0 0.0
F: HF 0.0 1.0 0.0 0.0 0.0 0.0
P: rel 0.0 0.0 1.0 0.0 0.0 0.0
Q: clr 0.0 0.0 0.0 1.0 0.0 0.0
lQ: clr 0.0 0.0 0.0 0.0 1.0 0.0
lT: WD 0.0 0.0 0.0 0.0 0.0 1.0)
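The likelihood-ratio test shown next compares the zero-correlation-parameter model with the complex model; presumably it was produced with a call like the following (the later comparison tables add the successively pruned models to the same call):

MixedModels.likelihoodratiotest(m_zcp, m_cpx)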
model | model-dof | deviance | χ² | χ²-dof | P(>χ²) |
---|---|---|---|---|---|
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + F + P + Q + lQ + lT | subj) + zerocorr(1 + P + Q + lQ + lT | item) | 44 | 7188 | |||
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + (1 + F + P + Q + lQ + lT | subj) + (1 + P + Q + lQ + lT | item) | 69 | 7148 | 41 | 25 | 0.0241 |
Looks ok. It might be a good idea to prune the LMM by removing small VCs.
m_zcp2 = let
form = @formula (1000/rt) ~ 1+F*P*Q*lQ*lT +
zerocorr(1 +P+Q+lQ+lT | subj) +
zerocorr(1 +P+Q +lT | item);
fit(MixedModel, form, dat; contrasts, progress)
end
VarCorr(m_zcp2)
Column | Variance | Std.Dev | Corr. | ||||
item | (Intercept) | 0.00320205 | 0.05658667 | ||||
P: rel | 0.00007137 | 0.00844788 | . | ||||
Q: clr | 0.00014719 | 0.01213239 | . | . | |||
lT: WD | 0.00011707 | 0.01081986 | . | . | . | ||
subj | (Intercept) | 0.03061184 | 0.17496240 | ||||
P: rel | 0.00009921 | 0.00996026 | . | ||||
Q: clr | 0.00077306 | 0.02780389 | . | . | |||
lQ: clr | 0.00011890 | 0.01090433 | . | . | . | ||
lT: WD | 0.00106148 | 0.03258042 | . | . | . | . | |
Residual | 0.08591586 | 0.29311407 |
model | model-dof | deviance | χ² | χ²-dof | P(>χ²) |
---|---|---|---|---|---|
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + P + Q + lQ + lT | subj) + zerocorr(1 + P + Q + lT | item) | 42 | 7189 | |||
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + F + P + Q + lQ + lT | subj) + zerocorr(1 + P + Q + lQ + lT | item) | 44 | 7188 | 0 | 2 | 0.9634 |
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + (1 + F + P + Q + lQ + lT | subj) + (1 + P + Q + lQ + lT | item) | 69 | 7148 | 41 | 25 | 0.0241 |
We can perhaps remove some more.
m_zcp3 = let
form = @formula (1000/rt) ~ 1+F*P*Q*lQ*lT +
zerocorr(1 +Q +lT | subj) + (1 | item);
fit(MixedModel, form, dat; contrasts, progress)
end
VarCorr(m_zcp3)
Column | Variance | Std.Dev | Corr. | ||
item | (Intercept) | 0.0032048 | 0.0566108 | ||
subj | (Intercept) | 0.0306166 | 0.1749760 | ||
Q: clr | 0.0007628 | 0.0276190 | . | ||
lT: WD | 0.0010644 | 0.0326259 | . | . | |
Residual | 0.0864753 | 0.2940668 |
model | model-dof | deviance | χ² | χ²-dof | P(>χ²) |
---|---|---|---|---|---|
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + Q + lT | subj) + (1 | item) | 37 | 7196 | |||
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + P + Q + lQ + lT | subj) + zerocorr(1 + P + Q + lT | item) | 42 | 7189 | 8 | 5 | 0.1781 |
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + F + P + Q + lQ + lT | subj) + zerocorr(1 + P + Q + lQ + lT | item) | 44 | 7188 | 0 | 2 | 0.9634 |
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + (1 + F + P + Q + lQ + lT | subj) + (1 + P + Q + lQ + lT | item) | 69 | 7148 | 41 | 25 | 0.0241 |
And another iteration.
m_zcp4 = let
form = @formula (1000/rt) ~ 1+F*P*Q*lQ*lT +
zerocorr(1 +lT | subj) + (1 | item);
fit(MixedModel, form, dat; contrasts, progress)
end
VarCorr(m_zcp4)
Column | Variance | Std.Dev | Corr. | |
item | (Intercept) | 0.0032067 | 0.0566278 | |
subj | (Intercept) | 0.0306608 | 0.1751023 | |
lT: WD | 0.0010896 | 0.0330097 | . | |
Residual | 0.0872330 | 0.2953523 |
model | model-dof | deviance | χ² | χ²-dof | P(>χ²) |
---|---|---|---|---|---|
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + lT | subj) + (1 | item) | 36 | 7259 | |||
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + Q + lT | subj) + (1 | item) | 37 | 7196 | 63 | 1 | <1e-14 |
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + P + Q + lQ + lT | subj) + zerocorr(1 + P + Q + lT | item) | 42 | 7189 | 8 | 5 | 0.1781 |
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + F + P + Q + lQ + lT | subj) + zerocorr(1 + P + Q + lQ + lT | item) | 44 | 7188 | 0 | 2 | 0.9634 |
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + (1 + F + P + Q + lQ + lT | subj) + (1 + P + Q + lQ + lT | item) | 69 | 7148 | 41 | 25 | 0.0241 |
Too much removed. Stay with m_zcp3, but extend it with CPs.
m_prm = let
form = @formula (1000/rt) ~ 1+F*P*Q*lQ*lT +
(1+ Q+lT | subj) + (1 | item);
fit(MixedModel, form, dat; contrasts, progress)
end
VarCorr(m_prm)
Column | Variance | Std.Dev | Corr. | ||
item | (Intercept) | 0.0032009 | 0.0565764 | ||
subj | (Intercept) | 0.0306183 | 0.1749808 | ||
Q: clr | 0.0007627 | 0.0276168 | +0.42 | ||
lT: WD | 0.0010621 | 0.0325904 | +0.26 | +0.38 | |
Residual | 0.0864769 | 0.2940696 |
model | model-dof | deviance | χ² | χ²-dof | P(>χ²) |
---|---|---|---|---|---|
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + zerocorr(1 + Q + lT | subj) + (1 | item) | 37 | 7196 | |||
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + (1 + Q + lT | subj) + (1 | item) | 40 | 7180 | 16 | 3 | 0.0011 |
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + (1 + F + P + Q + lQ + lT | subj) + (1 + P + Q + lQ + lT | item) | 69 | 7148 | 33 | 29 | 0.2971 |
The LRT favors the more complex LMM, but note that χ² < 2*(χ²-dof); AIC and BIC argue against selecting it.
gof_summary = let
nms = [:m_zcp3, :m_prm, :m_cpx]
mods = eval.(nms)
lrt = MixedModels.likelihoodratiotest(m_zcp3, m_prm, m_cpx)
DataFrame(;
name = nms,
dof=dof.(mods),
deviance=round.(deviance.(mods), digits=0),
AIC=round.(aic.(mods),digits=0),
AICc=round.(aicc.(mods),digits=0),
BIC=round.(bic.(mods),digits=0),
χ²=vcat(:., round.(lrt.tests.deviancediff, digits=0)),
χ²_dof=vcat(:., round.(lrt.tests.dofdiff, digits=0)),
pvalue=vcat(:., round.(lrt.tests.pvalues, digits=3))
)
end
Row | name | dof | deviance | AIC | AICc | BIC | χ² | χ²_dof | pvalue |
---|---|---|---|---|---|---|---|---|---|
 | Symbol | Int64 | Float64 | Float64 | Float64 | Float64 | Any | Any | Any |
1 | m_zcp3 | 37 | 7196.0 | 7270.0 | 7270.0 | 7555.0 | . | . | . |
2 | m_prm | 40 | 7180.0 | 7260.0 | 7260.0 | 7568.0 | 16.0 | 3.0 | 0.001 |
3 | m_cpx | 69 | 7148.0 | 7286.0 | 7286.0 | 7817.0 | 33.0 | 29.0 | 0.297 |
The following LMM is not nested in the previous sequence.
m_mrk17_crossed =let
form = @formula (1000/rt) ~ 1 + F*P*Q*lQ*lT +
(1+Q | subj) + zerocorr(0+lT | subj) + zerocorr(1 + P | item) ;
fit(MixedModel, form, dat; contrasts, progress)
end
VarCorr(m_prm)
Column | Variance | Std.Dev | Corr. | ||
item | (Intercept) | 0.0032009 | 0.0565764 | ||
subj | (Intercept) | 0.0306183 | 0.1749808 | ||
Q: clr | 0.0007627 | 0.0276168 | +0.42 | ||
lT: WD | 0.0010621 | 0.0325904 | +0.26 | +0.38 | |
Residual | 0.0864769 | 0.2940696 |
Linear mixed model fit by maximum likelihood
:(1000 / rt) ~ 1 + F + P + Q + lQ + lT + F & P + F & Q + P & Q + F & lQ + P & lQ + Q & lQ + F & lT + P & lT + Q & lT + lQ & lT + F & P & Q + F & P & lQ + F & Q & lQ + P & Q & lQ + F & P & lT + F & Q & lT + P & Q & lT + F & lQ & lT + P & lQ & lT + Q & lQ & lT + F & P & Q & lQ + F & P & Q & lT + F & P & lQ & lT + F & Q & lQ & lT + P & Q & lQ & lT + F & P & Q & lQ & lT + (1 + Q | subj) + zerocorr(0 + lT | subj) + zerocorr(1 + P | item)
logLik -2 logLik AIC AICc BIC
-3593.4086 7186.8171 7264.8171 7265.0077 7565.3349
Variance components:
Column Variance Std.Dev. Corr.
item (Intercept) 0.00320568 0.05661869
P: rel 0.00006693 0.00818097 .
subj (Intercept) 0.03061760 0.17497884
Q: clr 0.00076248 0.02761309 +0.42
lT: WD 0.00106209 0.03258968 . .
Residual 0.08640800 0.29395238
Number of obs: 16409; levels of grouping factors: 240, 73
Fixed-effects parameters:
─────────────────────────────────────────────────────────────────────────────────────
Coef. Std. Error z Pr(>|z|)
─────────────────────────────────────────────────────────────────────────────────────
(Intercept) 1.63743 0.0209304 78.23 <1e-99
F: HF 0.019366 0.00431955 4.48 <1e-05
P: rel 0.0188081 0.00236113 7.97 <1e-14
Q: clr 0.042901 0.00396828 10.81 <1e-26
lQ: clr 0.00174653 0.00231531 0.75 0.4506
lT: WD 0.00836023 0.00446182 1.87 0.0610
F: HF & P: rel -0.00711249 0.00236171 -3.01 0.0026
F: HF & Q: clr -0.00141259 0.00230067 -0.61 0.5392
P: rel & Q: clr 0.00134142 0.00230239 0.58 0.5601
F: HF & lQ: clr -0.00105227 0.00232139 -0.45 0.6503
P: rel & lQ: clr -0.00240743 0.00232056 -1.04 0.2995
Q: clr & lQ: clr 0.00759804 0.00231922 3.28 0.0011
F: HF & lT: WD 0.000536555 0.00231365 0.23 0.8166
P: rel & lT: WD 2.93223e-5 0.00231751 0.01 0.9899
Q: clr & lT: WD 0.00185532 0.00231773 0.80 0.4234
lQ: clr & lT: WD -0.00524632 0.00231608 -2.27 0.0235
F: HF & P: rel & Q: clr -0.000362238 0.00230272 -0.16 0.8750
F: HF & P: rel & lQ: clr -0.00117754 0.00232025 -0.51 0.6118
F: HF & Q: clr & lQ: clr 0.00273699 0.00232283 1.18 0.2387
P: rel & Q: clr & lQ: clr -0.0039323 0.0023222 -1.69 0.0904
F: HF & P: rel & lT: WD 0.0019864 0.00231879 0.86 0.3916
F: HF & Q: clr & lT: WD -0.00112586 0.00231657 -0.49 0.6270
P: rel & Q: clr & lT: WD 0.000261827 0.00231924 0.11 0.9101
F: HF & lQ: clr & lT: WD 0.00150677 0.00232198 0.65 0.5164
P: rel & lQ: clr & lT: WD 8.67964e-5 0.00232177 0.04 0.9702
Q: clr & lQ: clr & lT: WD 0.00916751 0.00231827 3.95 <1e-04
F: HF & P: rel & Q: clr & lQ: clr -0.00234443 0.00231978 -1.01 0.3122
F: HF & P: rel & Q: clr & lT: WD -0.00148721 0.00232057 -0.64 0.5216
F: HF & P: rel & lQ: clr & lT: WD 0.00308764 0.00232145 1.33 0.1835
F: HF & Q: clr & lQ: clr & lT: WD 0.00393598 0.00232128 1.70 0.0900
P: rel & Q: clr & lQ: clr & lT: WD 0.00202623 0.00232125 0.87 0.3827
F: HF & P: rel & Q: clr & lQ: clr & lT: WD 0.00144888 0.00231838 0.62 0.5320
─────────────────────────────────────────────────────────────────────────────────────
Finally, a look at the fixed effects. The four-factor interaction reported in Masson & Kliegl (2013) was not replicated.
m_mrk17_nested =let
form = @formula (1000/rt) ~ 1 + Q/(F/P) +
(1+Q | subj) + zerocorr(0+lT | subj) + zerocorr(1 + P | item) ;
fit(MixedModel, form, dat; contrasts, progress)
end
Est. | SE | z | p | σ_item | σ_subj | |
(Intercept) | 1.6374 | 0.0209 | 78.20 | <1e-99 | 0.0565 | 0.1751 |
Q: clr | 0.0428 | 0.0040 | 10.74 | <1e-26 | 0.0278 | |
Q: deg & F: HF | 0.0209 | 0.0049 | 4.26 | <1e-04 | ||
Q: clr & F: HF | 0.0179 | 0.0049 | 3.65 | 0.0003 | ||
Q: deg & F: LF & P: rel | 0.0244 | 0.0047 | 5.18 | <1e-06 | ||
Q: clr & F: LF & P: rel | 0.0278 | 0.0047 | 5.94 | <1e-08 | ||
Q: deg & F: HF & P: rel | 0.0110 | 0.0047 | 2.35 | 0.0186 | ||
Q: clr & F: HF & P: rel | 0.0126 | 0.0046 | 2.71 | 0.0067 | ||
P: rel | 0.0086 | |||||
lT: WD | 0.0337 | |||||
Residual | 0.2944 |
Note: include the parentheses in the nesting term Q/(F/P); they are required for the intended nested specification.
using RegressionFormulae
# m_prm_5 is equivalent to m_prm
m_prm_5 = let
form = @formula (1000/rt) ~ 1+(F+P+Q+lQ+lT)^5 + (1+Q+lT | subj) + (1 | item);
fit(MixedModel, form, dat; contrasts, progress)
end;
m_prm_4 = let
form = @formula (1000/rt) ~ 1+(F+P+Q+lQ+lT)^4 + (1+Q+lT | subj) + (1 | item);
fit(MixedModel, form, dat; contrasts, progress)
end;
m_prm_3 = let
form = @formula (1000/rt) ~ 1+(F+P+Q+lQ+lT)^3 + (1+Q+lT | subj) + (1 | item);
fit(MixedModel, form, dat; contrasts, progress)
end;
m_prm_2 = let
form = @formula (1000/rt) ~ 1+(F+P+Q+lQ+lT)^2 + (1+Q+lT | subj) + (1 | item);
fit(MixedModel, form, dat; contrasts, progress)
end;
m_prm_1 = let
form = @formula (1000/rt) ~ 1+ F+P+Q+lQ+lT + (1+Q+lT | subj) + (1 | item);
fit(MixedModel, form, dat; contrasts, progress)
end;
# Compare the fits
gof_summary = let
nms = [:m_prm_1, :m_prm_2, :m_prm_3, :m_prm_4, :m_prm_5]
mods = eval.(nms)
lrt = MixedModels.likelihoodratiotest(m_prm_1, m_prm_2, m_prm_3, m_prm_4, m_prm_5)
DataFrame(;
name = nms,
dof=dof.(mods),
deviance=deviance.(mods),
AIC=aic.(mods),
AICc=aicc.(mods),
BIC=bic.(mods),
χ²=vcat(:.,lrt.tests.deviancediff),
χ²_dof=vcat(:.,lrt.tests.dofdiff),
pvalue=vcat(:., round.(lrt.tests.pvalues, digits=3))
)
end
Row | name | dof | deviance | AIC | AICc | BIC | χ² | χ²_dof | pvalue |
---|---|---|---|---|---|---|---|---|---|
 | Symbol | Int64 | Float64 | Float64 | Float64 | Float64 | Any | Any | Any |
1 | m_prm_1 | 14 | 7237.39 | 7265.39 | 7265.41 | 7373.27 | . | . | . |
2 | m_prm_2 | 24 | 7209.2 | 7257.2 | 7257.27 | 7442.13 | 28.1868 | 10 | 0.002 |
3 | m_prm_3 | 34 | 7187.57 | 7255.57 | 7255.71 | 7517.56 | 21.6317 | 10 | 0.017 |
4 | m_prm_4 | 39 | 7180.61 | 7258.61 | 7258.8 | 7559.13 | 6.95863 | 5 | 0.224 |
5 | m_prm_5 | 40 | 7180.21 | 7260.21 | 7260.41 | 7568.44 | 0.398591 | 1 | 0.528 |
Depending on the precision of your hypotheses, you could stay with main effects (favored by BIC), include two-factor interactions (favored by AIC; also called first-order or two-way interactions), or include three-factor interactions [χ² < 2*(χ²-dof); second-order or three-way interactions].
We are using only three factors for the illustration.
m_prm3 = let
form = @formula (1000/rt) ~ 1 + lT*lQ*Q +
(1+ Q+lT | subj) + (1 | item);
fit(MixedModel, form, dat; contrasts, progress)
end
Est. | SE | z | p | σ_item | σ_subj | |
(Intercept) | 1.6376 | 0.0210 | 78.10 | <1e-99 | 0.0596 | 0.1750 |
lT: WD | 0.0087 | 0.0045 | 1.93 | 0.0530 | 0.0328 | |
lQ: clr | 0.0016 | 0.0023 | 0.70 | 0.4818 | ||
Q: clr | 0.0428 | 0.0040 | 10.78 | <1e-26 | 0.0276 | |
lT: WD & lQ: clr | -0.0056 | 0.0023 | -2.39 | 0.0167 | ||
lT: WD & Q: clr | 0.0020 | 0.0023 | 0.87 | 0.3862 | ||
lQ: clr & Q: clr | 0.0076 | 0.0023 | 3.26 | 0.0011 | ||
lT: WD & lQ: clr & Q: clr | 0.0092 | 0.0023 | 3.94 | <1e-04 | ||
Residual | 0.2949 |
The lT & lQ & Q interaction is significant. Let’s follow it up with a post-hoc LMM, that is, looking at the lQ & Q interaction within the two levels of lT (whether the last trial required a word or a nonword response).
m_prm3_posthoc = let
form = @formula (1000/rt) ~ 1 + lT/(lQ*Q) +
(1+ Q+lT | subj) + (1 | item);
fit(MixedModel, form, dat; contrasts, progress)
end
Est. | SE | z | p | σ_item | σ_subj | |
(Intercept) | 1.6376 | 0.0210 | 78.10 | <1e-99 | 0.0596 | 0.1750 |
lT: WD | 0.0087 | 0.0045 | 1.93 | 0.0530 | 0.0328 | |
lT: NW & lQ: clr | 0.0072 | 0.0033 | 2.19 | 0.0285 | ||
lT: WD & lQ: clr | -0.0039 | 0.0033 | -1.19 | 0.2326 | ||
lT: NW & Q: clr | 0.0408 | 0.0046 | 8.87 | <1e-18 | ||
lT: WD & Q: clr | 0.0448 | 0.0046 | 9.73 | <1e-21 | ||
lT: NW & lQ: clr & Q: clr | -0.0016 | 0.0033 | -0.49 | 0.6275 | ||
lT: WD & lQ: clr & Q: clr | 0.0167 | 0.0033 | 5.08 | <1e-06 | ||
Q: clr | 0.0276 | |||||
Residual | 0.2949 |
The source of the interaction is trials where the last trial was a word target; there is no evidence for the interaction when the last trial was a nonword target.
The original and post-hoc LMM have the same goodness of fit.
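A quick sketch of checking that claim directly (the nested parameterization spans the same model space as the factorial one, so the objective should agree up to convergence tolerance):

# Same goodness of fit: the deviances of the two parameterizations coincide.
deviance(m_prm3) ≈ deviance(m_prm3_posthoc)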
Julia Version 1.10.5
Commit 6f3fdf7b362 (2024-08-27 14:19 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 8 × 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-15.0.7 (ORCJIT, tigerlake)
Threads: 8 default, 0 interactive, 4 GC (on 8 virtual cores)
Environment:
JULIA_LOAD_PATH = @:@stdlib