List: r-sig-mixed-models
Subject: Re: [R-sig-ME] non-orthogonal contrasts
From: Jake Westfall <jake987722@hotmail.com>
Date: 2014-01-22 21:50:06
Message-ID: BAY172-W3065CB688DD98B48BF86FDCBA70@phx.gbl
Tobias,
Now I am even more confused. The coding scheme that you elaborated definitely does *not* represent an orthogonal set. None of the proposed contrasts are orthogonal to any of the others, nor are any of them orthogonal to the intercept (i.e., their weights don't sum to 0).
I also think you misinterpret what the interaction term means with the dummy variables. You are trying to interpret it "naively" -- it appears to only add something for the 4th condition and not any of the others, so that must be what it's doing -- but again, this cannot be done (dummies are not orthogonal). An interaction among dummy-coded factors has the same basic interpretation as an interaction among contrast-coded factors, although the scaling of the coefficient might differ. It is the interpretations of the lower-order dummy terms that take on a totally different meaning.
There is a trick you can use for specifying possibly non-orthogonal sets of codes. If you take a matrix of contrasts like the one you wrote out (but also including an initial column for the constant intercept term) and invert it, then the rows of this inverted matrix give the actual weights that are applied to each group mean in the linear combinations tested by each regression coefficient. If you do this with a set of orthogonal codes, then the rows/weights are exactly what you think they are. That is the advantage of orthogonal codes. And if the set is non-orthogonal, this shows you what is *actually* being tested by each contrast. I encourage you to test this out with a set of dummy codes to get a feel for it.
So the trick then is that you can specify the weights in this inverted matrix directly. Let the weights in the first row be 1/k for each group (where k = number of groups), and let the other rows be whatever linear combinations of group means you are interested in, as long as none of the rows is linearly dependent on the others. Then invert this matrix, and the columns of the result give the contrast codes that you need to specify in your model. This technique is explained briefly in John Fox's 'Applied Regression Analysis' text.
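A minimal R sketch of both directions of this trick, for three groups (the group labels C, E1, E2 and the particular weight matrix are just illustrative):

```r
## Direction 1: invert a coding matrix (intercept column + dummy codes)
## to see what each coefficient actually tests. The rows of the inverse
## are the weights applied to the group means.
D <- cbind(int = 1, contr.treatment(3))
solve(D)   # row 1: mean of group 1; rows 2-3: group2 - group1, group3 - group1

## Direction 2: write down the weights you *want* (first row = 1/k each),
## invert, and read the needed contrast codes off the columns.
W <- rbind(grand = c(1/3, 1/3, 1/3),
           E1vsC = c(-1, 1, 0),
           E2vsC = c(-1, 0, 1))
B <- solve(W)
B[, -1]    # these two columns are the codes to pass to contrasts()
```

With these weights, B[, -1] comes out as the codes (-1/3, 2/3, -1/3) and (-1/3, -1/3, 2/3), which test E1 vs. C and E2 vs. C while keeping the intercept at the unweighted mean of the group means.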
Jake
Subject: Re: [R-sig-ME] non-orthogonal contrasts
From: tobias.heed@uni-hamburg.de
Date: Tue, 21 Jan 2014 15:04:22 +0100
CC: r-sig-mixed-models@r-project.org
To: jake987722@hotmail.com
Hi Jake,
thanks for your reply. I'll try to explain what I am looking for and why, and hope it makes sense.

I would like to use effect coding because the interaction then has a specific meaning, which will be different with treatment coding, given that the interaction is the multiplication of the contrast of factor 1 with the contrast of factor 2. If you think of a simpler design with 2 factors, A and B, that each have 2 levels:

combination of conditions / contrast weight A / contrast weight B / interaction
A1-B1 / -1 / -1 /  1
A1-B2 / -1 /  1 / -1
A2-B1 /  1 / -1 / -1
A2-B2 /  1 /  1 /  1

From googling more, as far as I understand, here the weights signify the test as one would "naively" expect, because the coding scheme is orthogonal. The interaction "fits" the effect coding idea, because it adds to and subtracts from a common mean for each combination of conditions. (If you use treatment coding in all contrasts, the interaction will be 0 0 0 1, so it would only add something for the 4th combination of conditions, but not do anything for the other 3 conditions.)

I (erroneously) extended this scheme by adding the third condition to A (and then called that C, E1, E2 in my last mail). As you pointed out, because the resulting scheme is not orthogonal, I cannot read the weights "naively", but must convert them into the contrast matrix. I had overlooked that.

By googling further, I found that the coding scheme I probably need is simple coding or indicator coding (http://www.ats.ucla.edu/stat/spss/library/contrast.htm). That would mean I have to set my contrasts to -1, -1, 2 / -1, 2, -1, and that would convert into a contrast matrix which tests the 2nd and 3rd condition each against the first.

However, here comes my problem: I would like to look at the separate interaction of each contrast of my factor with the contrast for the second factor (the hand posture one). Basically, what I would like is some coding which gives me exactly what I detailed above, but for 3 conditions. Now, if I use simple coding, then I get this interaction (again, last column) for the first contrast of factor 1 with factor 2:

A1-B1 / -1 / -1 /  1
A1-B2 / -1 /  1 / -1
A2-B1 / -1 / -1 /  1
A2-B2 / -1 /  1 / -1
A3-B1 /  2 / -1 / -2
A3-B2 /  2 /  1 /  2
From my naive reading, I wanted for the interaction: -1, 1, 0, 0, 1, -1, because I thought that that would mean that my interaction is concerned only with A1 and A3 (call that C and E1 from before), given that A2 is always coded with 0. The same kind of interaction would also come out for the second contrast of factor 1 (an interaction of A1 and A2 without any contribution from A3). Thus, I would be comparing A1 with A2 and with A3 separately, in an "effect coding" kind of way.

However, looking at the interaction above naively, what I do get appears to involve all 3 conditions. Now I wonder: the contrasts of A and B are independent (their product sum adds up to 0), so can I read the interaction coding scheme "naively", or do I have to convert it somehow to know what it is actually coding (like I had to do for the non-orthogonal contrasts that code factor 1)?

Or, said differently, am I correct that when I use simple/indicator coding, I DO get what I want for my factor 1, but I do NOT get what I want for my interaction? And, if that is true, then what coding scheme (if any) CAN I use that would give me what I want…?
Best,
Tobias
On 21 Jan 2014, at 03:52, Jake Westfall <jake987722@hotmail.com> wrote:

Hi Tobias,
I'll take a stab at your questions. Others should feel free to jump in and correct me if necessary.
First some general comments. I don't think I understood your motivation for wanting to use the contrasts that you specified rather than dummy codes. In what way would dummies "mess up" your interaction whereas the codes you propose would not?
Also, note that the particular coding scheme you have chosen is commonly called "effect coding." Like all non-orthogonal coding schemes, these do NOT test the mean differences that they naively appear to test based on the specified contrast weights! In your case, contrast 1 compares the E1 mean to the mean of the group means, and contrast 2 compares the E2 mean to the mean of the group means. The E groups are *not* directly compared to the C group.
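This claim is easy to verify by inverting the cell-means basis for these codes (a quick sketch; groups ordered C, E1, E2):

```r
## Basis matrix: one row per group (C, E1, E2),
## columns = intercept plus the two effect codes.
X <- cbind(int = 1,
           c1 = c(-1, 1, 0),
           c2 = c(-1, 0, 1))
solve(X)   # rows: weights on (muC, muE1, muE2) for each coefficient
```

The first row comes out as (1/3, 1/3, 1/3), the grand mean; the second and third rows are (-1/3, 2/3, -1/3) and (-1/3, -1/3, 2/3), i.e., each E mean versus the mean of the group means, not E versus C.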
Now for your questions:
1. Yes. (Taking into account the usual additional complexities that come with mixed model analyses.)

2. Personally, I do not usually find inspecting the covariance/correlation matrix of the fixed effect parameters to be particularly interesting or informative. I always thought it was a little strange that lmer printed this as part of the default output. Maybe others feel more strongly about it than I do.

3. Sure, I guess so, but interpret the estimated correlations among the random effects with caution, for at least two reasons. First, the correlations among the random effects depend to some extent on the scaling of the predictors associated with random slopes (i.e., you can rescale your predictors and substantially "push around" the estimated correlations even though the model has not changed in a deep sense). Second, there could be considerable uncertainty around these estimates. If you really want to make firm statements about these, you should probably try testing them more systematically using something like the tools in the RLRsim package (or maybe others have even better suggestions).

4. Yes.
Hope this helps,
Jake
From: tobias.heed@uni-hamburg.de
Date: Mon, 20 Jan 2014 15:23:53 +0100
To: r-sig-mixed-models@r-project.org
Subject: [R-sig-ME] non-orthogonal contrasts
Dear List,
I am wondering about the consequences of using non-orthogonal contrasts in an LMM.
I come from psychological experiments in which we often code factorial effects in contrasts as -1 and 1 (rather than use treatment coding, 0 and 1). This coding affects interactions differently from treatment coding, which seems to be what most other disciplines use. Now, I have an experiment with one control condition (C) and two experimental conditions (E1, E2). (I am measuring tactile localization performance under 3 different head postures; C is head straight; E1 and E2 are 2 different head postures. I have an additional 2-level factor, hand posture, and I have acquired all 3 conditions with both levels of this hand posture factor.)
I am interested in the difference between the control condition and each experimental condition. I would like to specify the contrasts coding for conditions C, E1, and E2 like this:

contrast 1 (C, E1, E2): -1, 1, 0
contrast 2 (C, E1, E2): -1, 0, 1

These contrasts are not independent.
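For concreteness, attaching these codes to a small stand-in factor shows the dependence directly (the factor below, with equal group sizes, is purely illustrative):

```r
## A stand-in 3-level factor with equal group sizes (illustrative only).
condition <- factor(rep(c("C", "E1", "E2"), each = 4))
contrasts(condition) <- cbind(hemifield = c(-1, 1, 0),
                              hemispace = c(-1, 0, 1))

## The two contrast columns of the model matrix are correlated:
X <- model.matrix(~ condition)
cor(X[, "conditionhemifield"], X[, "conditionhemispace"])   # 0.5 with equal n
```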
I am aware that I could use treatment coding (contrasts 0, 1, 0 and 0, 0, 1), but this will "mess up" the interactions with my second factor: what I really want to know is how the difference between C and each E is changed in interaction with the hand posture factor.
When I use the contrasts as specified, I see in the fixed effects correlations that the two contrasts are correlated (-0.5 for contrast 1 with contrast 2), and similarly, the interaction of contrast 1 with the hand posture factor correlates negatively with the interaction of contrast 2 and the hand posture factor. Other correlations are small.
Questions:
1. Can I still interpret the fixed effects as I would in a "classical" ANOVA?

2. Is it the correct approach to check the correlation structure of the fixed effects, and if so, what should I infer from it?

3. Can I interpret the random effects in the model? Specifically, I get negative correlations also for the random effects of my contrasts (i.e., the random effect correlations more or less mirror the fixed effect correlations).

4. I have read (http://www.utdallas.edu/~herve/abdi-contrasts2010-pretty.pdf) that the regression model deals with non-orthogonal contrasts by partialing out the other variables from each contrast. Would the interpretation in an LMM be similar?
Thanks for any help.
Best,
Tobias
Model output:
“crossingStatus.c” is the hand posture factor (2 levels, coded -1 and 1 in the contrast). “condition.c” is the factor with conditions C, E1, and E2. It is coded with 2 contrasts that are named “hemifield” (coded -1, 1, 0) and “hemispace” (coded -1, 0, 1).

summary(model.body.head.3)
Generalized linear mixed model fit by the Laplace approximation
Formula: accuracyClean ~ crossingStatus.c * condition.c + (1 + crossingStatus.c * condition.c | subNo)
   Data: dataSet.1
   AIC   BIC logLik deviance
 42374 42611 -21160    42320

Random effects:
 Groups Name                                                Variance  Std.Dev. Corr
 subNo  (Intercept)                                         0.2423548 0.492295
        crossingStatus.ccrossingStatus                      0.0826755 0.287534  0.184
        condition.chemifield                                0.0050947 0.071377  0.133  0.074
        condition.chemispace                                0.0030593 0.055311 -0.044  0.360 -0.696
        crossingStatus.ccrossingStatus:condition.chemifield 0.0090415 0.095087  0.015 -0.474  0.470 -0.284
        crossingStatus.ccrossingStatus:condition.chemispace 0.0169650 0.130250  0.002  0.508  0.132  0.191 -0.624
Number of obs: 46928, groups: subNo, 23

Fixed effects:
                                                     Estimate Std. Error z value Pr(>|z|)
(Intercept)                                          1.601920   0.103541  15.471  < 2e-16 ***
crossingStatus.ccrossingStatus                      -0.480008   0.061458  -7.810  5.7e-15 ***
condition.chemifield                                 0.061866   0.023619   2.619  0.00881 **
condition.chemispace                                 0.004818   0.021711   0.222  0.82439
crossingStatus.ccrossingStatus:condition.chemifield  0.080060   0.027042   2.961  0.00307 **
crossingStatus.ccrossingStatus:condition.chemispace -0.038528   0.032899  -1.171  0.24156
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
                                  (Intr) crsS.S cndtn.chmf cndtn.chms crssngStts.ccrssngStts:cndtn.chmf
crssngStt.S                        0.170
cndtn.chmfl                        0.084  0.050
cndtn.chmsp                       -0.021  0.183 -0.576
crssngStts.ccrssngStts:cndtn.chmf  0.013 -0.339  0.098     -0.036
crssngStts.ccrssngStts:cndtn.chms  0.000  0.411  0.125     -0.045     -0.577
_______________________________________________
R-sig-mixed-models@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models