adjust.chat

Posted: Tue Sep 03, 2013 3:17 pm
by Snail_recapture
Hi,

I have a question about accounting for overdispersion.
So, I have gathered that when using release.gof, the estimate of c-hat is derived from TEST2 and TEST3 (the sum of the TEST2 and TEST3 chi-squares, divided by the sum of their d.f.).
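In R terms, my understanding of that calculation is something like the sketch below (gof.table and my.process are just placeholder names, and I'm assuming release.gof returns the printed table as a data frame with TEST2/TEST3/Total rows):

Code:
# sketch only: my.process stands for a processed dataframe
gof.table=release.gof(my.process)
# c-hat = (TEST2 + TEST3 chi-square) / (TEST2 + TEST3 d.f.)
chat=sum(gof.table[c("TEST2","TEST3"),"Chi.square"])/
     sum(gof.table[c("TEST2","TEST3"),"df"])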

I have been through the dipper example in appendix C, which provides the example

Code:
>dipper.cjs.results=adjust.chat(2,dipper.cjs.results)


but I'm a little unclear as to why '2' was selected, as the estimate of c-hat (from the above formula) is <1. Have I misinterpreted what I should be doing with my c-hat value? Or is this because there are issues with underdispersion?

When I calculate c-hat for my own data, it comes out at 4.15. So, by my current understanding, I am running adjust.chat as follows:

Code:
>tbh.results.adj=adjust.chat(4.15,tbh.results)


Have I interpreted this correctly? Thanks in advance.

Re: adjust.chat

Posted: Tue Sep 03, 2013 3:49 pm
by jlaake
The dipper example is just an example; the value of 2 was an arbitrary choice for demonstration and is not relevant to the data. What you showed is correct, but 4+ is quite high. Is it possible that you have some model mis-specification? Is the model assumed by RELEASE gof sufficient to describe your global model?

--jeff

Re: adjust.chat

Posted: Tue Sep 03, 2013 6:30 pm
by Snail_recapture
Thanks Jeff.

My global model is quite simple: I only have one group factor variable ("type") and one individual covariate ("distance"). But am I correct in thinking that individual covariates are not included in the processed data frame?

Code:
tbh.process=process.data(tbh,model="CJS",time.intervals=c(0.14286,0.14286,0.14286,0.14286,0.14286,1.28571,0.14286,0.14286,0.14286,0.14286,1.28571,0.14286,0.28571,0.14286,1.57143,0.14286,0.42857,0.42857,0.14286,3.85714,2.28571,1.14286),groups=c("type"))

release.gof(tbh.process)
RELEASE NORMAL TERMINATION
      Chi.square  df      P
TEST2   573.4425 132 0.0000
TEST3     4.4329   7 0.7288
Total   577.8754 139 0.0000
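
Plugging those totals into the formula gives the c-hat I quoted earlier:

Code:
577.8754/139   # roughly 4.16, i.e. the c-hat of ~4.15 from my first post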


However, my data is quite sparse: I have 624 individuals, and over the 23 recapture occasions most individuals were only resighted between 1 and 4 times. I have read that this can be an issue for GOF tests?

Re: adjust.chat

Posted: Tue Sep 03, 2013 6:38 pm
by jlaake
You are correct that release.gof does not include your distance covariate, and that is a problem if it is a source of heterogeneity in capture probability. Sparseness of data is only a problem in that many of the tests will have insufficient data, and I believe this will understate the value of over-dispersion, so that is not your problem here. Please explain the distance covariate. Is it a single value for each individual that doesn't change through time? If so, you may want to consider binning its value into, say, 2 or 3 levels (lo, hi) or (lo, med, hi) and using the binned value along with type in your groups.
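
For example, something along these lines (a sketch only; distance.bin, the break points, and tbh.intervals, standing in for your time.intervals vector, are all placeholders):

Code:
# bin the numeric covariate into a 3-level factor (breaks and labels are arbitrary)
tbh$distance.bin=cut(tbh$distance,breaks=3,labels=c("lo","med","hi"))
# re-process with both factors as groups
tbh.process.gof=process.data(tbh,model="CJS",
      time.intervals=tbh.intervals,  # tbh.intervals = your interval vector from above
      groups=c("type","distance.bin"))
release.gof(tbh.process.gof)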

--jeff

Re: adjust.chat

Posted: Tue Sep 03, 2013 7:04 pm
by Snail_recapture
Yes, it is a single value, so I can definitely try binning. My experiment was a reciprocal transplant, so the distance covariate describes the distance each individual was transplanted within a habitat. I'll have a go with the bins and see if that helps. So although I'm using bins in my process.data, am I still able to use the continuous data as a model term?

Thanks again for your help :)

Re: adjust.chat

Posted: Tue Sep 03, 2013 7:14 pm
by jlaake
Snail_recapture wrote:
So although I'm using bins in my process.data, am I still able to use the continuous data as a model term?


I suggest using the binned groups only for the process.data that is used for release.gof. What it will do is fit Phi(g*t)p(g*t) as your global model, where g is the set of groups created from type and the distance bins, so don't create too many bins.

For model fitting, you can do another process.data that excludes the distance bins, so you can fit models with the numeric distance covariate. You could use the distance bins for groups there as well, but that just creates extra real parameters that aren't needed unless the model you want to fit uses binned distance.

You can have multiple processed dataframes, just don't use the same names and make sure to keep them straight.
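
As a rough sketch of the two processed dataframes (object names, tbh.intervals, and the Phi/p formulas are all placeholders, not a recommendation for your final model set):

Code:
# processed data WITH the distance bins, used only for release.gof / c-hat
tbh.process.gof=process.data(tbh,model="CJS",
      time.intervals=tbh.intervals,groups=c("type","distance.bin"))
release.gof(tbh.process.gof)

# separate processed data WITHOUT the bins, used for model fitting;
# distance stays available as a numeric individual covariate in formulas
tbh.process=process.data(tbh,model="CJS",
      time.intervals=tbh.intervals,groups=c("type"))
tbh.ddl=make.design.data(tbh.process)
tbh.fit=mark(tbh.process,tbh.ddl,
      model.parameters=list(Phi=list(formula=~type+distance),
                            p=list(formula=~type)))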

--jeff

Re: adjust.chat

Posted: Tue Sep 03, 2013 7:42 pm
by Snail_recapture
Perfect, thanks for explaining!

Re: adjust.chat

Posted: Tue Sep 03, 2013 11:00 pm
by Snail_recapture
OK, so I created 4 bins (one for each quartile), which brings c-hat down to 2.4. That seems a lot better! Thanks again.
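
For anyone following along, quartile bins can be made with quantile() breaks, e.g. (a sketch, using the same placeholder names as above):

Code:
# 4 bins, one per quartile of the observed distances
qs=quantile(tbh$distance,probs=seq(0,1,0.25))
tbh$distance.bin=cut(tbh$distance,breaks=qs,include.lowest=TRUE,
      labels=c("q1","q2","q3","q4"))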