
List:       r-sig-mixed-models
Subject:    Re: [R-sig-ME] Obtaining all effects in a multinomial regression
From:       "Lenth, Russell V" <russell-lenth@uiowa.edu>
Date:       2016-08-17 1:15:04
Message-ID: CO2PR04MB22142CEC5CEFA5AF032A3326F1140@CO2PR04MB2214.namprd04.prod.outlook.com

I think maybe the lsmeans package can help with this...
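For context, 'fit' here is the multinom model from your post (quoted at the bottom of this message), i.e. roughly:

library(nnet)
# Dan's sum-to-zero contrast fit, repeated so the lsmeans calls below have context:
fit <- multinom(input ~ target, data = df,
                contrasts = list(target = "contr.sum"))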

> library("lsmeans")

> # Look at the reference grid that it uses by default for this model:

> ref.grid(fit)

'ref.grid' object with variables:
    target = a, b, c
    input = multivariate response levels: A, B, C, D, E

> # Note the multinomial response is treated like a factor 'input' with 5 levels


> # Here are the lsmeans - by default, they are on the probability scale:

> lsmeans(fit, ~ input | target)

target = a:
 input       prob         SE df    lower.CL  upper.CL
 A     0.55264789 0.09783283 12  0.33948846 0.7658073
 B     0.10525965 0.02727796 12  0.04582608 0.1646932
 C     0.11841616 0.02998773 12  0.05307850 0.1837538
 D     0.11841615 0.04028909 12  0.03063376 0.2061985
 E     0.10526015 0.03444923 12  0.03020173 0.1803186

target = b:
 input       prob         SE df    lower.CL  upper.CL
 A     0.06249061 0.01820814 12  0.02281847 0.1021627
 B     0.33928661 0.06201426 12  0.20416916 0.4744041
 C     0.38393000 0.04978587 12  0.27545592 0.4924041
 D     0.09821839 0.02201614 12  0.05024935 0.1461874
 E     0.11607438 0.02249977 12  0.06705159 0.1650972

target = c:
 input       prob         SE df    lower.CL  upper.CL
 A     0.09523627 0.07565354 12 -0.06959863 0.2600712
 B     0.11904618 0.13054235 12 -0.16538117 0.4034735
 C     0.38095561 0.27551283 12 -0.21933527 0.9812465
 D     0.16666102 0.16942976 12 -0.20249472 0.5358168
 E     0.23810092 0.22856258 12 -0.25989417 0.7360960

Confidence level used: 0.95 


> # But I think what you want are effects on the linear-predictor scale.
> # For that, specify mode = "latent" in the call (see ?models):

> fit.lsm <- lsmeans(fit, ~ input | target, mode = "latent")
> fit.lsm

target = a:
 input      lsmean        SE df   lower.CL      upper.CL
 A      1.27952178 0.3164462 12  0.5900446  1.9689989170
 B     -0.37876916 0.1737829 12 -0.7574095 -0.0001287816
 C     -0.26099411 0.1684176 12 -0.6279446  0.1059563636
 D     -0.26099417 0.2625914 12 -0.8331316  0.3111432930
 E     -0.37876434 0.2460970 12 -0.9149636  0.1574349505

target = b:
 input      lsmean        SE df   lower.CL      upper.CL
 A     -0.91573300 0.2450969 12 -1.4497534 -0.3817126475
 B      0.77609593 0.2237678 12  0.2885478  1.2636441045
 C      0.89971096 0.1502380 12  0.5723706  1.2270513543
 D     -0.46355580 0.1965806 12 -0.8918682 -0.0352433817
 E     -0.29651809 0.1734101 12 -0.6743463  0.0813101280

target = c:
 input      lsmean        SE df   lower.CL      upper.CL
 A     -0.61708150 0.6836341 12 -2.1065923  0.8724293349
 B     -0.39393084 0.9663707 12 -2.4994717  1.7116100068
 C      0.76924052 0.9107898 12 -1.2151999  2.7536809661
 D     -0.05748044 0.9413821 12 -2.1085759  1.9936149928
 E      0.29925226 0.9717157 12 -1.8179343  2.4164388165

Results are given on the log (not the response) scale. 
Confidence level used: 0.95 


> # To get the sum-to-zero effects, use contrasts of type "eff":

> contrast(fit.lsm, "eff")

target = a:
 contrast    estimate        SE df t.ratio p.value
 A effect  1.27952178 0.3164462 12   4.043  0.0081
 B effect -0.37876916 0.1737829 12  -2.180  0.1248
 C effect -0.26099411 0.1684176 12  -1.550  0.1872
 D effect -0.26099417 0.2625914 12  -0.994  0.3399
 E effect -0.37876434 0.2460970 12  -1.539  0.1872

target = b:
 contrast    estimate        SE df t.ratio p.value
 A effect -0.91573300 0.2450969 12  -3.736  0.0071
 B effect  0.77609593 0.2237678 12   3.468  0.0077
 C effect  0.89971096 0.1502380 12   5.989  0.0003
 D effect -0.46355580 0.1965806 12  -2.358  0.0452
 E effect -0.29651809 0.1734101 12  -1.710  0.1130

target = c:
 contrast    estimate        SE df t.ratio p.value
 A effect -0.61708150 0.6836341 12  -0.903  0.9523
 B effect -0.39393084 0.9663707 12  -0.408  0.9523
 C effect  0.76924052 0.9107898 12   0.845  0.9523
 D effect -0.05748044 0.9413821 12  -0.061  0.9523
 E effect  0.29925226 0.9717157 12   0.308  0.9523

P value adjustment: fdr method for 5 tests



I did this separately for each target (due to the formula '~ input | target' in the
lsmeans call). But you can change that formula, and/or add a 'by' argument to the
'contrast' call. You can also use 'adjust' to specify a different p-value adjustment
method (see ?contrast).
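For example, something along these lines (untested sketches, reusing the 'fit' and
'fit.lsm' objects from above):

# Average over the targets instead of conditioning on them:
lsmeans(fit, ~ input, mode = "latent")

# Keep the latent-scale lsmeans but group the "eff" contrasts by input,
# so each input is compared across targets:
contrast(fit.lsm, "eff", by = "input")

# Use a different p-value adjustment (or none at all):
contrast(fit.lsm, "eff", adjust = "none")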

Hope this helps.

Russ

-----Original Message-----

Date: Mon, 15 Aug 2016 17:38:05 +0000 (UTC)
From: Daniel Rubi <daniel_rubi@ymail.com>
To: R-sig-mixed-models <r-sig-mixed-models@r-project.org>
Subject: [R-sig-ME] Obtaining all effects in a multinomial regression
	where the grand mean is set as baseline
Message-ID: <1981606920.2663264.1471282685613@mail.yahoo.com>
Content-Type: text/plain; charset="UTF-8"

I have data from this set of experiments: In each experiment I infect a neuron with a
rabies virus. The virus climbs backwards across the axon of the infected neuron and
jumps across the synapse to the input axons of that neuron. The rabies will then
express a marker gene in the input neurons it jumped to, thereby labeling the input
neurons to the infected target neuron. The purpose of this is thus to trace the
neurons that provide input to the neurons I'm infecting (target cells), thereby
creating a connectivity map of this region in the brain. In each experiment I obtain
counts of all the infected input neurons.

Here's a simulation of the data (3 targets and 5 inputs):

set.seed(1)
probs <- list(c(0.4,0.1,0.1,0.2,0.2), c(0.1,0.3,0.4,0.1,0.1), c(0.1,0.1,0.4,0.2,0.2))
mat <- matrix(unlist(lapply(probs, function(p)
    rmultinom(1, as.integer(runif(1,50,150)), p))), ncol = 3)
inputs <- LETTERS[1:5]
targets <- letters[1:3]
df <- data.frame(input  = c(unlist(apply(mat, 2, function(x) rep(inputs, x)))),
                 target = rep(targets, apply(mat, 2, sum)))

What I'd like to estimate is the effect of each target neuron on these counts,
relative to the grand mean. I was thinking that a multinomial regression model is
appropriate in this case, where the contrasts are set to the contr.sum option:

library(foreign)
library(nnet)
library(reshape2)

df$input  <- factor(df$input,  levels = inputs)
df$target <- factor(df$target, levels = targets)
fit <- multinom(input ~ target, data = df,
                contrasts = list(target = "contr.sum"))
# weights:  20 (12 variable)
initial  value 505.363505
iter  10 value 445.057386
final  value 441.645283
converged

Which gives me:
> summary(fit)$coefficients
  (Intercept)   target1   target2
B  0.08556288 -1.743854 1.6062660
C  0.55375003 -2.094266 1.2616939
D -0.17624590 -1.364270 0.6284231
E -0.04091248 -1.617374 0.6601274
So the effects for input A are not reported. In an lm they would be retrieved as:

-1*(apply(summary(fit)$coefficients,2,sum))

But I'm wondering whether it is the same for multinom using
contrasts = list(target = "contr.sum"). Also, it's not clear to me whether target1
and target2 refer to targets a and b. More generally, I would like to obtain both the
effects of all targets on all inputs and the associated standard errors and p-values.

Thanks,
Dan

_______________________________________________
R-sig-mixed-models@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models


