With sparse closed-captures data, models that include trap response and/or individual heterogeneity often yield unrealistic abundance (N^) estimates, even when all beta variances are positive. The unrealistic N^'s seem to be tied to p or c estimates near a boundary: one of two mixtures has p^ near 0; an apparent trap-happy response leaves c^ high and p^ near 0, so N^ is grossly overestimated; or an apparent trap-shy response leaves p^ near 0 and N^ = Mt+1 with apparently high precision. Using the retry argument in RMark to reset initial values didn't improve the estimates from our data sets.
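To make the boundary problem concrete, here is a minimal numerical sketch (in Python rather than R, just for self-containment) of the standard closed-population relationship N^ = Mt+1 / p*, where p* = 1 - (1 - p)^t is the probability of being caught at least once over t occasions. All the numbers below are hypothetical; the point is only how fast N^ blows up as p^ approaches 0.

```python
# Sketch: why a p-hat near the 0 boundary inflates N-hat.
# Standard closed-population relationship: N_hat = M_t1 / p_star,
# where p_star = 1 - (1 - p)**t is the probability of being captured
# at least once over t occasions. Values are hypothetical.

def n_hat(m_t1, p, t):
    """Abundance estimate from M_t+1, per-occasion p, and t occasions."""
    p_star = 1 - (1 - p) ** t
    return m_t1 / p_star

m_t1, t = 40, 5  # 40 distinct animals caught over 5 occasions

print(n_hat(m_t1, 0.30, t))  # moderate p: plausible N_hat, roughly 48
print(n_hat(m_t1, 0.01, t))  # p near 0: N_hat explodes past 800
```

The same Mt+1 supports a plausible estimate at moderate p but an absurd one when a mixture class or post-first-capture p is driven toward 0, which matches the pattern described above.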
Such models may have low AICc values relative to simpler models that yield plausible estimates, even when parameters are counted correctly or the count is adjusted upward. The underlying problem is likely that the data are inadequate to support models that include these effects. But if one includes such models in a candidate set and finds they have low AICc, how does one proceed with estimation? Does one infer (1) that the effect(s) is supported but the data are inadequate to estimate the parameters, so that no reliable N estimate can be obtained unless an alternative estimator (e.g., the Mh jackknife) is available for the same model? Or (2) that the data are insufficient to test for and model the effect, and therefore estimate N from a simpler model? Simpler models tend to produce estimates with wide CIs in these cases, which seems appropriate given sparse data.
Also, even one such model in a candidate set can bias model-averaged estimates. Is it appropriate to average across only a subset of the a priori model set? To model average at all? And if we do average across a subset, have we really accounted for model selection uncertainty?
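For illustration, the averaging problem can be sketched with Akaike weights (w_i proportional to exp(-ΔAICc/2), renormalized to sum to 1). The AICc values and N^'s below are hypothetical; the third "model" stands in for an implausible heterogeneity model with a boundary estimate.

```python
# Sketch: one implausible model dominating a model-averaged N-hat,
# and the effect of renormalizing weights over a subset.
# All AICc values and estimates are hypothetical.
import math

def aicc_weights(aiccs):
    """Akaike weights: exp(-0.5 * delta_AICc), normalized to sum to 1."""
    best = min(aiccs)
    rel = [math.exp(-0.5 * (a - best)) for a in aiccs]
    total = sum(rel)
    return [r / total for r in rel]

# (AICc, N_hat) for three hypothetical models; the third is the
# implausible model with a boundary p-hat and an inflated N-hat.
models = [(100.0, 55.0), (101.5, 60.0), (99.0, 900.0)]

w_all = aicc_weights([a for a, _ in models])
n_avg_all = sum(w * n for w, (_, n) in zip(w_all, models))

# Dropping the implausible model and renormalizing the weights:
subset = models[:2]
w_sub = aicc_weights([a for a, _ in subset])
n_avg_sub = sum(w * n for w, (_, n) in zip(w_sub, subset))

print(round(n_avg_all, 1))  # dominated by the inflated estimate
print(round(n_avg_sub, 1))  # plausible, but selection uncertainty
                            # is now conditional on the subset
```

The subset average looks reasonable, but its weights no longer reflect the full a priori set, which is exactly the uncertainty question posed above.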
Sorry about the length.