I’ve come upon a new situation and I need some guidance on what it means and how best to handle it, in particular with regard to goodness-of-fit testing. Sorry this is rather a long message; I kept digging myself deeper into confusion.
I’ve been using spatial variants of the CJS live-recaptures model to estimate survival and detection efficiency for migratory salmon.
I was using Mark for the GOF tests on a new dataset when I noticed in the txt results file that the deviance degrees of freedom (DoF) for my most parameterized (general) model was negative and the observed chat was set to one. As I understand it, negative DoF means I’m estimating more parameters than I have data points, so my model is saturated (or something beyond saturated!). If the model is saturated, the fit should be as good as it’s going to get; however, my general model has a -2LogL greater than that listed for the ‘real’ saturated model used to generate the deviance. Neither the median chat GoF test nor the bootstrap GoF test based on chat worked for this dataset, given the negative DoF, but the bootstrap GoF test based on deviance returned a chat of 1.8. Am I right that my general model is saturated, and can I just go ahead and use it with chat=1... or chat=1.8? The parameter estimates are reasonable.
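To make sure I’m describing the arithmetic correctly, here is a toy sketch of how I understand the deviance and its DoF are derived (all numbers are made up for illustration, not from my actual Mark output):

```python
# Toy illustration (hypothetical numbers) of how I understand Mark's
# deviance and deviance degrees of freedom.

neg2logl_general = 512.4    # -2LogL of my general model (made up)
neg2logl_saturated = 498.7  # -2LogL of the saturated model (made up)

# Deviance of the general model relative to the saturated model.
deviance = neg2logl_general - neg2logl_saturated

# Deviance DoF: parameters in the saturated model minus parameters in
# the general model. If the general model estimates MORE parameters
# than the saturated model, this goes negative, as in my dataset.
k_saturated = 30  # made up
k_general = 34    # made up; more than the saturated count

deviance_dof = k_saturated - k_general
print(round(deviance, 1), deviance_dof)  # 13.7 -4
```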
I will also be building less parameterized models and using AIC to compare model performance. Generally in GoF testing, we estimate chat for the most general model and apply that value to all candidate models. Given that my most general model has negative DoF, is it still reasonable to take this route (i.e. proceed with chat=1... or 1.8)?

This has led me to wonder how the degrees of freedom are calculated for the saturated model. I did a few tests and came up with the number of distinct capture history sequences minus 1; if there are multiple covariate groups, the totals for the groups are summed. Is this right? If so, then for this particular dataset, I think the degrees of freedom are small because the number of recapture occasions is smallish (5) and because the detection probability is 100% for at least 2 of these occasions.
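In case it’s useful, here is my working guess expressed as a small sketch (hypothetical capture histories, not my data):

```python
# Hypothetical capture histories for two covariate groups; each string
# is one animal's encounter history across 5 occasions.
groups = {
    "group_a": ["11010", "11010", "10110", "11111", "10000"],
    "group_b": ["11111", "10101", "10101", "11000"],
}

# My working guess at the saturated-model DoF: for each covariate
# group, (number of DISTINCT capture-history sequences) - 1, then
# summed over groups.
dof = sum(len(set(histories)) - 1 for histories in groups.values())
print(dof)  # group_a: 4 distinct - 1 = 3; group_b: 3 - 1 = 2; total 5
```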
Finally, as part of this investigation, I wanted to get an idea of the size of chat for a less parameterized model which had deviance DoF above 0. Because the detection probability for several recapture occasions was 100%, the parameter count in Mark was incorrect (estimates near the boundary were not counted). I adjusted this count in the table of results, but in the txt file of model output, the deviance DoF did not change to reflect the updated number of parameters. As I understand it, the deviance DoF is the difference in the number of parameters between the saturated and general models. So for the bootstrap GoF test, if the parameter count is incorrect in Mark, do I use the deviance DoF from the txt file, or do I recalculate it with the updated number of parameters?
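To be concrete about what I mean by “recalculate,” this is the adjustment I am contemplating (again with made-up numbers):

```python
# Hypothetical numbers illustrating my question about recalculating
# the deviance DoF after correcting the parameter count.

deviance = 24.6     # deviance as reported in the txt file (made up)
k_saturated = 30    # parameters in the saturated model (made up)
k_mark = 18         # parameter count as Mark reported it (made up)
k_adjusted = 21     # count after adding back boundary parameters

dof_txt = k_saturated - k_mark           # DoF as it appears in the txt file
dof_adjusted = k_saturated - k_adjusted  # DoF with the corrected count

chat_txt = deviance / dof_txt
chat_adjusted = deviance / dof_adjusted
print(dof_txt, dof_adjusted, round(chat_txt, 2), round(chat_adjusted, 2))
# 12 9 2.05 2.73
```

So the two choices give noticeably different chat values, which is why I’d like to know which DoF is the right one to carry into the bootstrap GoF test.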
Whew! Please let me know if I need to add extra information or clarify. Thank you very much for advice!
Aswea