lepidoptera wrote:I have attempted to do a median c-hat gof test in MARK on a phi(.)p(.) model with losses on capture incorporated into the data set.
The problem is that MARK gives me a warning that all encounter histories recorded as losses on capture (i.e. 10010011000 -1) will be deleted (not incorporated into the gof test). If data used to fit the model are not included in the gof test - is the test even useful? What I mean is, can I continue the test without those data and use the resulting c-hat? My guess is no, and so I am looking for an alternate way to accommodate overdispersion. HELP!
Yes...the median c-hat is generated from bootstrapped (simulated) data - simulated under the parameters estimated using all the data, which includes information from individuals lost on capture. What MARK appears to do during the simulation is simply not simulate further encounters for individuals lost on capture - so losses on capture influence effective sample size, and that influence is reflected in the simulated histories.
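To make that concrete, here is a minimal sketch of how losses on capture can be handled when simulating CJS encounter histories. This is an illustration of the idea only, not MARK's actual algorithm; the parameter values (phi, p, loss_prob) are made up for the example.

```python
import random

def simulate_cjs(n_released, n_occasions, phi, p, loss_prob=0.0, seed=1):
    """Toy CJS encounter-history simulator (NOT MARK's procedure).

    An individual lost on capture (probability loss_prob at each
    recapture) is removed from the study, so it contributes no further
    occasions - which is how losses reduce effective sample size in
    the simulated histories.
    """
    rng = random.Random(seed)
    histories = []
    for _ in range(n_released):
        h = [1]  # released on occasion 1
        alive = True
        for t in range(1, n_occasions):
            alive = alive and (rng.random() < phi)  # survive the interval?
            seen = alive and (rng.random() < p)     # recaptured?
            h.append(1 if seen else 0)
            if seen and rng.random() < loss_prob:
                alive = False  # lost on capture: no future encounters
        histories.append(h)
    return histories

hists = simulate_cjs(500, 6, phi=0.8, p=0.5, loss_prob=0.1)
print(len(hists))  # 500 simulated histories, 6 occasions each
```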
As an aside, RELEASE was able to estimate c-hat through a gof test without any warning...but I don't know how it did this (did it delete the losses by default and just not warn me?), and of course, gof within RELEASE is only good for fully time-dependent CJS models.
That's because RELEASE is essentially a glorified contingency analysis, and works quite differently from the median c-hat approach. This is pretty well explained in Chapter 5.
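The contingency-based c-hat from RELEASE is just the pooled chi-square over its degrees of freedom. A minimal sketch, using made-up TEST2 and TEST3 totals purely for illustration (not real RELEASE output):

```python
def chat_from_release(chi2_components, df_components):
    """c-hat ~= (pooled TEST2 + TEST3 chi-square) / (pooled df)."""
    return sum(chi2_components) / sum(df_components)

# hypothetical example: TEST2 chi2 = 12.4 on 6 df, TEST3 chi2 = 18.2 on 9 df
chat = chat_from_release([12.4, 18.2], [6, 9])
print(round(chat, 3))  # 30.6 / 15 = 2.04
```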
Another question - what happens if you use a RELEASE-generated c-hat estimate to adjust a constant-time CJS model???
Read Chapter 5 again - to remind yourself that c-hat is estimated for the most general model in the candidate model set, and then applied to all the models in that set. Since phi(t)p(t) (the model tested by RELEASE) is more general than phi(.)p(.), the c-hat for the more general (time-dependent) model clearly applies to the reduced-parameter model. Again, if you've read Chapter 5 - carefully - you'd know this.