bmitchel wrote: You should probably expect to get different answers with different GOF tests, since tests can vary widely in bias and precision. From what I've read, the first approach (divide model deviance by d.f.) is considered biased and uninformative for mark-recapture models (I think Evan Cooch covers this in his GOF chapter). Your two other methods are both probably fine, although some might argue for more bootstraps (500 or 1,000). Your results (2.6 for bootstrap and 1.7 for median c-hat) are different but not dramatically so (given that these GOF tests are fairly imprecise estimates of fit). The conservative approach would be to take the higher value and use that as your estimate of c-hat. However, I think Gary White has argued that the deviance statistic is also biased, and he has suggested that the median c-hat is a better approach (see the MARK help files). I have not seen any documentation on the median c-hat approach beyond what is written in the MARK help files, so I have not been using it much.
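For what it's worth, my understanding of the arithmetic behind the bootstrap c-hat (from Chapter 5 and the MARK help files) is that you divide the deviance of the observed data by the mean of the deviances from the bootstrap replicates (an alternative is the observed deviance/d.f. divided by the mean of the bootstrapped values). Here is a minimal sketch of that calculation, and of the conservative "take the larger value" adjustment, written in Python rather than MARK, with made-up numbers standing in for MARK output:

[code]
import numpy as np

# Made-up placeholders standing in for MARK output (not real results):
# the deviance of the observed data under the general model, and the
# deviances of B bootstrap data sets simulated from that fitted model
# (so the bootstrap replicates have true c = 1 by construction).
rng = np.random.default_rng(1)
observed_deviance = 104.2
bootstrap_deviances = rng.gamma(shape=20.0, scale=2.0, size=1000)  # fake stand-ins

# Bootstrap c-hat: observed deviance over the mean of the bootstrap deviances.
c_hat_bootstrap = observed_deviance / bootstrap_deviances.mean()

# Suppose the median c-hat procedure returned this value (again, illustrative).
c_hat_median = 1.7

# Conservative adjustment: use the larger estimate, and never adjust below 1.
c_hat = max(1.0, c_hat_bootstrap, c_hat_median)
print(f"bootstrap c-hat = {c_hat_bootstrap:.2f}, c-hat used for adjustment = {c_hat:.2f}")
[/code]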
All of the various c-hat estimation procedures available in MARK are documented in Chapter 5 - including a fair bit on the median c-hat. Of the various tests that are available, (i) RELEASE is still preferred for 'adequate' CMR datasets (meaning, without so much sparseness that it causes pooling problems in RELEASE), and (ii) the median c-hat is recommended for everything else (the median c-hat is still a work in progress, but results using it have been very promising).
My personal opinion from running a lot of bootstrap GOF simulations is that estimating c-hat is extremely imprecise; I have seen c-hats for simulated data (where the true c = 1) that range up to 4 or 5.
This is a good point - and is also discussed in some detail in Chapter 5 - see pp. 33-35 in that chapter - especially the figure on p. 34, which is a graphical representation of what Brian refers to. Remember, we are estimating c based on a single data set, which we consider as one realization of an underlying probabilistic process. Even if the true c is one, the estimated c-hat could be quite different from one. What we need (and some folks are working on) is a robust way to estimate both c-hat and the SE of the estimate. Stay tuned...
Until then, there are some recommendations in Chapter 5 on how to proceed.
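To make the single-realization point concrete, here is a small simulation sketch (in Python, and deliberately much simpler than a CMR model - just binomial counts with a Pearson chi-square / d.f. estimate of c-hat). The data are generated with no overdispersion at all (true c = 1), yet the estimated c-hat from any one data set wanders noticeably around 1; with sparse capture-recapture data the spread can be far worse, consistent with Brian's observation of simulated c-hats up to 4 or 5.

[code]
import numpy as np

rng = np.random.default_rng(42)

def chat_pearson(n_groups=20, trials=10, p=0.4):
    """One simulated c-hat (Pearson chi-square / d.f.) for binomial counts
    generated with NO overdispersion, i.e. the true c is exactly 1."""
    y = rng.binomial(trials, p, size=n_groups)
    p_hat = y.sum() / (n_groups * trials)                 # pooled estimate of p
    expected = trials * p_hat
    x2 = np.sum((y - expected) ** 2 / (expected * (1.0 - p_hat)))
    return x2 / (n_groups - 1)                            # one d.f. lost to estimating p

chats = np.array([chat_pearson() for _ in range(5000)])
print(f"true c = 1; median c-hat = {np.median(chats):.2f}, "
      f"central 95% of estimates: {np.quantile(chats, 0.025):.2f} "
      f"to {np.quantile(chats, 0.975):.2f}")
[/code]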