by gwhite » Thu Dec 03, 2009 4:18 pm
Bryan:
You are correct that there is a discrepancy in MARK between the deviance calculation and the effective sample size. The parameter space for this model is the daily survival rate (DSR) of each individual.
For the effective sample size, the number of days that an individual is known to have survived is taken as the starting point, and then 1 is added for the terminal interval if the individual died, because we don't have information on the exact day of death.
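To make that rule concrete, here is a minimal sketch in Python. The (days_known_alive, died) record format is just an assumed illustration for this post, not MARK's actual LDLD encounter-history input.
[code]
# Minimal sketch of the effective sample size rule described above.
# The (days_known_alive, died) record format is hypothetical, not
# MARK's actual encounter-history input.

def effective_sample_size(records):
    ess = 0
    for days_known_alive, died in records:
        ess += days_known_alive   # days the individual was known to survive
        if died:
            ess += 1              # death fell somewhere in the final interval
    return ess

# Two animals survive 10 days each; one dies after 7 known-alive days.
print(effective_sample_size([(10, False), (10, False), (7, True)]))  # 28
[/code]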
However, for the deviance, the number of individuals is taken as the sample size, because the effective sample size as calculated above would be too large. For known fate and nest survival models, there is no reliable GOF test, because the saturated model is a reasonable model to consider, and the deviance of the saturated model is by definition zero. In other words, you can construct the saturated model in MARK and get a deviance of zero, but there is no information left with which to assess GOF. Further, for any model you construct with fewer parameters than the saturated model, the deviance is just a likelihood ratio test between the saturated model and the model being considered, with the assumptions made to obtain the reduced model taken as true, so that the resulting test reflects strictly lack of fit.
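For concreteness, the textbook relationship being used here (not anything MARK-specific) is

Deviance = -2 [ ln L(model) - ln L(saturated) ],

so the saturated model's deviance is zero by construction, and the deviance of any reduced model is exactly the likelihood ratio statistic for testing it against the saturated model.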
There is a lot more to all this than meets the eye, but my best example is a single survival interval with n animals in a known fate model. There is only one estimable parameter, and that model happens to be the saturated model as well, so there is no information on GOF. Yet we publish this type of survival estimate all the time -- e.g., Kaplan-Meier estimates. I think MARK users have been obsessing over GOF tests when in reality there is no reliable GOF test available.
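A quick numerical illustration of that single-interval case (the numbers are made up for the example):
[code]
# One interval, n animals, y survivors: the one-parameter known fate
# model IS the saturated model, so its deviance is identically zero.
from math import comb, log

n, y = 20, 15            # assumed example: 20 animals monitored, 15 survive
s_hat = y / n            # binomial MLE of survival (Kaplan-Meier, one interval)

def log_lik(s):
    return log(comb(n, y)) + y * log(s) + (n - y) * log(1 - s)

loglik_model = log_lik(s_hat)      # fitted one-parameter model
loglik_saturated = log_lik(s_hat)  # saturated model gives the same fit
deviance = 2 * (loglik_saturated - loglik_model)
print(s_hat, deviance)             # 0.75 0.0 -- nothing left to test fit with
[/code]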
Gary