
beta standard errors - difference between RMark and Mark

PostPosted: Thu Mar 26, 2009 8:31 am
by maelle
Hi all,

I am trying to run an analysis on joint live-recapture and dead-recovery data, but I am currently encountering several problems.
I have 84 marked animals (otters), 6 recapture occasions and three factor covariates: sex, age group (juvenile, adult, sub-adult) and origin of the animals (released or newborn). However, not all combinations exist in the data, so I have a total of seven different groups of animals.

I tried to run a 'complete' Burnham model S(sex*age*origin) p(sex*age*origin) r(.) F(fixed=1) in both RMark and MARK, and I obtain the same real parameter estimates, but not at all the same betas. The main problem is that the beta estimates in RMark have huge standard errors, up to approximately 900, whereas in MARK the SEs of the betas for S look 'normal', although there are still problems with those for p.
Why are the beta estimates so different between RMark and MARK, and why are they so high in RMark?
Is it because I am trying to fit a model that is too complex for my data?
I also ran models with S and p depending on only one or two covariates, with and without interactions, and estimation problems arise as soon as I include more than two covariates.
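
For illustration, a model like this could be set up in RMark along the following lines (a minimal sketch only, not the actual analysis; the data frame otters and its column names are assumptions):

library(RMark)

# assumed data frame: otters, with ch holding the LD-format capture history
# and sex, age, origin coded as factors
otters.proc <- process.data(otters, model = "Burnham",
                            groups = c("sex", "age", "origin"))
otters.ddl  <- make.design.data(otters.proc)

# the 'complete' model: S(sex*age*origin) p(sex*age*origin) r(.) F(fixed=1)
full.model <- mark(otters.proc, otters.ddl,
                   model.parameters = list(
                     S = list(formula = ~sex * age * origin),
                     p = list(formula = ~sex * age * origin),
                     r = list(formula = ~1),
                     F = list(formula = ~1, fixed = 1)))

summary(full.model)        # real parameter estimates
full.model$results$beta    # beta estimates and their standard errors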

What model selection method would you consider in this case?

Thanks in advance for your answers,

Maëlle

Re: beta standard errors - difference between RMark and Mark

PostPosted: Thu Mar 26, 2009 9:04 am
by cooch
maelle wrote:
I tried to run a 'complete' Burnham model S(sex*age*origin) p(sex*age*origin) r(.) F(fixed=1) in both RMark and MARK, and I obtain the same real parameter estimates, but not at all the same betas. [...] Why are the beta estimates so different between RMark and MARK, and why are they so high in RMark? Is it because I am trying to fit a model that is too complex for my data?


Are you sure you're using the same link function? Same design matrix?

PostPosted: Thu Mar 26, 2009 9:50 am
by maelle
I just checked: the design matrices are the same and I am using a logit link function in both cases, but the coding of the covariates differs:
- in RMark I have three columns, one per factor covariate
- in MARK I have seven columns, one per group, with a one if the individual belongs to that group and a zero otherwise

If this is the reason why the beta estimates differ, which coding (factor or binary) should I use? And how should I handle the confidence intervals, which are not acceptable at the moment?
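
To see why the two codings give different betas, here is a small illustration in R with hypothetical data (not the otter data set): the full-interaction factor coding builds more beta columns than there are observed groups, whereas a one-column-per-group coding has exactly one beta per observed group.

# hypothetical groups, not the real otter data
grp <- expand.grid(sex    = c("F", "M"),
                   age    = c("juvenile", "subadult", "adult"),
                   origin = c("released", "newborn"))
grp <- grp[1:7, ]   # pretend only 7 of the 12 combinations are observed

# factor coding (as produced by an R formula): intercept, main effects and interactions
X.factor <- model.matrix(~ sex * age * origin, data = grp)

# group (identity) coding: one indicator column per observed group
grp$group  <- interaction(grp$sex, grp$age, grp$origin, drop = TRUE)
X.identity <- model.matrix(~ -1 + group, data = grp)

dim(X.factor)     # 7 x 12: more betas than observed groups, so not all are estimable
dim(X.identity)   # 7 x 7: one beta per observed group

Both codings can reproduce the same real parameters, but the betas and their standard errors are not comparable between them.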

PostPosted: Thu Mar 26, 2009 10:28 am
by maelle
Jeff suggested the following R formula to me off-list:

Use ~-1+sex:age:origin so that betas are constructed only for the observed levels of the factors.
Also, the betas depend on the design matrix. Unless you construct it by hand in the same way as in MARK, you will not get the same betas, but you can get the same real parameters, because there are alternative ways to build a design matrix.
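
In RMark this suggestion might look roughly as follows (a sketch reusing the assumed otters.proc and otters.ddl objects from the earlier sketch):

# identity-style formulas: betas are built only for the observed factor combinations
identity.model <- mark(otters.proc, otters.ddl,
                       model.parameters = list(
                         S = list(formula = ~ -1 + sex:age:origin),
                         p = list(formula = ~ -1 + sex:age:origin),
                         r = list(formula = ~1),
                         F = list(formula = ~1, fixed = 1)))
identity.model$results$beta   # one beta per observed sex/age/origin combination for S and p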


This works: I now obtain the same beta estimates.

Thanks a lot; I will now try to sort out my precision problem.