Alex wrote: Thanks Cooch, well said.
I've been practicing.
We are most concerned with how N(hat) compares to previous studies; N(hat) itself, although still important, is less of a concern. We've assessed this population 5 times in the last 20+ years. Each time, Mt was the model of choice. This past year, however, Mt was still the highest-ranking model, but Mb showed up with some weight. Given that Mt has over double the weight of Mb (the 2nd-highest-ranking biologically plausible model), is that enough that we could easily justify why we did not model average? I don't have much experience with model selection using AIC weights.
I'm guessing that in years past no model averaging was done. At any rate, one of the criteria for deciding whether a model should be averaged is whether the estimates and CIs for a given (individual) model are plausible/reasonable. While this can sometimes be tricky, sometimes it's pretty obvious. I should probably add something on this in that chapter in the book. At any rate, when in doubt, model average.
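For what it's worth, the mechanics are simple enough to sketch. Below is a minimal Python illustration of Akaike weights and a model-averaged N(hat) with an unconditional SE, using the standard Burnham & Anderson formulas. The AICc scores, estimates, and SEs are made-up numbers for illustration only (chosen so Mt carries a bit over double Mb's weight, as in your situation), not values from any real analysis.

```python
import math

# Hypothetical AICc scores for two candidate models (illustrative only).
aicc = {"Mt": 210.0, "Mb": 212.0}

# Akaike weights: w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j)
best = min(aicc.values())
raw = {m: math.exp(-0.5 * (a - best)) for m, a in aicc.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# Made-up per-model estimates of N-hat and their SEs, for illustration.
nhat = {"Mt": 120.0, "Mb": 95.0}
se = {"Mt": 10.0, "Mb": 30.0}

# Model-averaged estimate, and unconditional SE that folds in the
# between-model variance (Burnham & Anderson).
nhat_avg = sum(weights[m] * nhat[m] for m in aicc)
se_uncond = sum(
    weights[m] * math.sqrt(se[m] ** 2 + (nhat[m] - nhat_avg) ** 2)
    for m in aicc
)
print(round(nhat_avg, 1), round(se_uncond, 1))
```

Note that even when one model has over double the weight of the other, the minority model still pulls the averaged estimate toward itself and (via the between-model variance term) widens the unconditional SE, which is exactly the behavior described below.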
Our dilemma is that if we use the model-averaged estimate, N(hat) will show a decline, yet the CIs are so large compared to past studies that people are very likely to disregard the estimate altogether, even though all models, including Mt, Mb, and any model-averaged estimate, show a clear decline.
Such things are a dilemma only when you (or your audience) have preconceived notions about what the 'results should be'. They are what they are. If your point estimates show a trend, but the CIs bound a plausible model with no trend, then you might in fact have no trend. The mere fact that the point estimates trend may or may not mean anything at all. For example, suppose I have 5 years of estimates of N: N(1)=100 +/- 50, N(2)=102 +/- 50, N(3)=104 +/- 50, N(4)=106 +/- 50, and N(5)=108 +/- 50. A clear trend in N if you simply plot the estimates and look at them, but given the uncertainty in the estimates, you probably wouldn't in fact find said trend to be 'significant' (in the usual sense).
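To make the toy example concrete, here is a quick Python check. Treating the +/- 50 as a rough 95% interval (so each estimate has SE of about 25.5; that conversion is an assumption for illustration), a simple slope test on those five estimates comes nowhere near significance.

```python
import math

# Toy series from above: point estimates climb by 2/yr, each +/- 50.
years = [1, 2, 3, 4, 5]
nhat = [100, 102, 104, 106, 108]
se_n = 50 / 1.96  # assume +/-50 is ~a 95% CI, so SE ~ 25.5

# OLS slope of N-hat on year; with a known, common SE per point the
# slope variance is se_n^2 / sum((t - tbar)^2).
tbar = sum(years) / len(years)
nbar = sum(nhat) / len(nhat)
sxx = sum((t - tbar) ** 2 for t in years)
slope = sum((t - tbar) * (n - nbar) for t, n in zip(years, nhat)) / sxx
se_slope = se_n / math.sqrt(sxx)
z = slope / se_slope
print(round(slope, 2), round(z, 2))  # slope = 2.0, z ~ 0.25
```

With z of roughly 0.25 against a 1.96 threshold, the 'clear trend' in the plotted points is statistically indistinguishable from no trend at all.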
Whereas if we use Mt (with our justification being that it is by far the highest-ranking model), N(hat) will show a decline, the CIs are fairly tight, and we can say something about the population. It's a tough one: results from an analysis shouldn't sway model selection, but in this case it seems an argument could easily be made for either Mt or model averaging. Any further insight before I put this to rest? Thanks, and I apologize for the wordiness!
Sorry, you're still 'trying too hard' to find a trend. As an aside, there are some technical hoops to jump through to take a time series of N estimates and ask if they trend or not. More on this in a minute.
However, given that this is your agenda, you'd be well advised to consider a Pradel model approach - estimation of N in an open population is notoriously 'twitchy' (from the Latin for 'varying degrees of lousy depending on how much heterogeneity - and what form of it - there is in encounter probability'). In fact, it may be more powerful to ignore what N is per se, and simply ask the trend question by generating the time series of realized growth rate increments (lambda = N(t+1)/N(t)). It turns out you can estimate the trajectory (the time series of lambda estimates) far more precisely (thus yielding a better and more defensible story) than you can estimate N (this always annoys the folks who get paid to estimate N, since in fact you can often estimate the trajectory just fine without ever enumerating the population). The Pradel models (Chapter 12) let you do this, and avoid some of the technical hoops noted earlier for asking whether a population is increasing or decreasing based purely on a time series of abundance estimates. If you have marked individuals that show up in multiple years, a robust design Pradel model, with a constraint imposing a trend on lambda as one of the candidate models, might be a very powerful approach. If you don't have data which let you do this, then you're resigned to playing games with estimating trend(s) (or not) from time series of abundance. The basic ideas are straightforward, but some of the technical issues (like handling the non-independence of estimates among years) are somewhat less so.
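If you do end up working from the abundance series alone, here is a rough Python sketch of what the naive lambda calculation looks like (made-up declining N-hats and SEs, illustrative only). Note that the delta-method SEs below assume successive N-hats are independent, which they are not - that is precisely the non-independence problem just mentioned, and part of why estimating lambda directly with a Pradel model is the cleaner route.

```python
import math

# Illustrative abundance series (made-up numbers, not real data):
nhat = [250, 230, 215, 200]   # declining N-hat estimates
se_n = [20, 22, 25, 24]       # their (made-up) SEs

lam, se_lam = [], []
for t in range(len(nhat) - 1):
    # realized growth increment: lambda_t = N(t+1) / N(t)
    l = nhat[t + 1] / nhat[t]
    # delta-method SE for a ratio, WRONGLY assuming the two N-hats
    # are independent (the sampling covariance is ignored, so these
    # SEs are optimistic).
    cv2 = (se_n[t] / nhat[t]) ** 2 + (se_n[t + 1] / nhat[t + 1]) ** 2
    lam.append(l)
    se_lam.append(l * math.sqrt(cv2))

for l, s in zip(lam, se_lam):
    print(round(l, 3), round(s, 3))
```

Every lambda here is below 1 (a decline), but with honest SEs the picture would be murkier still - whereas a Pradel model with a trend constraint on lambda estimates the trajectory directly from the encounter histories.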