I've been using MARK for 3+ years, but this is my first post to this forum. Usually I just bother gwhite (sorry, Gary), so maybe this will take some pressure off him.

So anyway, here's my question:
I understand that one minus the ratio of a candidate model's deviance to the deviance of a null model can be interpreted as the proportion of deviance/variation in the data explained by the model.
Example:
Null deviance = 1000
Model deviance = 400
Proportion of deviance explained by the model = 1 - Dev(model)/Dev(null) = 1 - 400/1000 = 0.60
This interpretation would be analogous to R^2 in linear regression.
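
Just to be explicit about the arithmetic, here is the calculation I'm doing, sketched in Python (nothing MARK-specific, just the ratio above):

def pseudo_r2(dev_model, dev_null):
    # proportion of the null deviance "explained" by the candidate model
    return 1 - dev_model / dev_null

pseudo_r2(400, 1000)  # 0.60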
However, based on this interpretation, the best models in an occupancy analysis I'm currently running explain only 10-20% of the variation in my data, even though I think they're pretty good models.
To check this, I simulated some data (525 rows, 5 occasions, 3 groups) and ran it through MARK. I fit a null model and the true model, which simply includes a group effect on psi (this is an occupancy analysis). When I fit the 'correct' model, the parameter estimates are indeed very accurate. And yet the null deviance is 1286 and the model deviance is 1177, so the proportion of deviance explained by this model is only 1 - 1177/1286 = 0.08. Such a low "R^2" doesn't make sense, given that the data literally came from this model (I simulated it) and the parameter estimates are close to correct. I would expect this "R^2" analog to be close to 1 in this situation.
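
In case it helps to see the setup, here is a rough sketch of the kind of simulation and check I'm describing. It's written in Python with made-up parameter values (psi of 0.3/0.5/0.7 by group, p of 0.4), not my actual MARK run, and it uses plain -2*log-likelihood rather than MARK's saturated-model deviance, so the numbers won't match mine exactly, but the structure is the same:

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n_sites, n_occ, n_groups = 525, 5, 3
group = np.repeat(np.arange(n_groups), n_sites // n_groups)  # 175 sites per group
psi_true = np.array([0.3, 0.5, 0.7])[group]  # illustrative group effect on psi
p_true = 0.4                                 # illustrative detection probability

z = rng.random(n_sites) < psi_true                        # latent occupancy state
y = (rng.random((n_sites, n_occ)) < p_true) & z[:, None]  # detection histories
det = y.sum(axis=1)                                       # detections per site

def neg_loglik(params, psi_by_group):
    # parameters are on the logit scale: psi value(s) first, then p
    if psi_by_group:
        psi, p = expit(params[:n_groups])[group], expit(params[n_groups])
    else:
        psi, p = np.full(n_sites, expit(params[0])), expit(params[1])
    # site-level likelihood: detected at least once vs. never detected
    lik_pos = psi * p**det * (1 - p)**(n_occ - det)
    lik_zero = psi * (1 - p)**n_occ + (1 - psi)
    return -np.sum(np.log(np.where(det > 0, lik_pos, lik_zero)))

fit_null = minimize(neg_loglik, np.zeros(2), args=(False,), method="BFGS")
fit_true = minimize(neg_loglik, np.zeros(n_groups + 1), args=(True,), method="BFGS")

dev_null, dev_true = 2 * fit_null.fun, 2 * fit_true.fun
print("pseudo-R^2 =", 1 - dev_true / dev_null)

(The binomial coefficient is dropped from the detected-site term because it is constant across models and cancels out of the comparison.)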
Any thoughts? I'd like to be able to estimate some measure of how much of the variation in my data my models are explaining, but clearly the approach I've been assuming so far isn't telling the correct story.
thanks very much,
jeff