Model Averaging-Closed Mark Recapture Study

Forum for discussion of general questions related to study design and/or analysis of existing data - software neutral.

Model Averaging-Closed Mark Recapture Study

Postby Owen » Tue Feb 09, 2010 10:03 pm

I'm trying to estimate the abundance of rainbow trout in a river from a multi-event closed mark-recapture study (a Schnabel-type experiment). The "CloseTest" test of closure was not significant. Three models carry most of the AICc weight.

The first is a Huggins (length covariate) model with time-varying capture probability (Mt; weight 0.55), the second is a Huggins (length covariate) time-plus-behaviour model (Mtb; weight 0.23), and the third is a Huggins (length covariate) behaviour model (Mb; weight 0.22).

The abundance (SE) estimates from these are disturbingly different: Mt => 5,200 (SE ~ 500), Mtb => 3,100 (SE = 1,900), and Mb => 2,600 (SE = 400). I thought model averaging might be my best bet. This yields a weighted estimate of about 4,200 but a whopping unconditional SE of 1,500. My question: is this the right approach? Something seems out of whack to me when I get such different estimates.

Thanks very much for any insight/direction.

(P.S. We have 5 events, but the analysis is based on 4 because of a major change in the length distribution of fish in event 5, which coincided with high water and turbidity in the river during that event.)
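For reference, here is a minimal sketch of the calculation I did, assuming the standard Buckland et al. (1997) form of the unconditional SE and the rounded values quoted above:

```python
import math

# Approximate values quoted above: (AICc weight, N-hat, SE conditional on the model)
models = [
    (0.55, 5200.0, 500.0),   # Mt
    (0.23, 3100.0, 1900.0),  # Mtb
    (0.22, 2600.0, 400.0),   # Mb
]

# Model-averaged estimate: AICc-weighted mean of the model-specific N-hats
n_bar = sum(w * n for w, n, se in models)

# Unconditional SE (Buckland et al. 1997): each model's conditional variance
# plus a model-selection component (N-hat_i - N-bar)^2, weighted and summed
se_uncond = sum(w * math.sqrt(se ** 2 + (n - n_bar) ** 2) for w, n, se in models)

print(f"N-bar = {n_bar:.0f}, unconditional SE = {se_uncond:.0f}")
# roughly 4,150 and 1,490 with these rounded inputs -- consistent with the values quoted above
```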
Owen
 
Posts: 11
Joined: Tue Feb 09, 2010 9:29 pm

Re: Model Averaging-Closed Mark Recapture Study

Postby cooch » Tue Feb 09, 2010 10:08 pm

Owen wrote: I'm trying to estimate the abundance of rainbow trout in a river from a multi-event closed mark-recapture study (a Schnabel-type experiment). The "CloseTest" test of closure was not significant. Three models carry most of the AICc weight.

The first is a Huggins (length covariate) model with time-varying capture probability (Mt; weight 0.55), the second is a Huggins (length covariate) time-plus-behaviour model (Mtb; weight 0.23), and the third is a Huggins (length covariate) behaviour model (Mb; weight 0.22).

The abundance (SE) estimates from these are disturbingly different: Mt => 5,200 (SE ~ 500), Mtb => 3,100 (SE = 1,900), and Mb => 2,600 (SE = 400). I thought model averaging might be my best bet. This yields a weighted estimate of about 4,200 but a whopping unconditional SE of 1,500. My question: is this the right approach? Something seems out of whack to me when I get such different estimates.

Thanks very much for any insight/direction.

(P.S. We have 5 events, but the analysis is based on 4 because of a major change in the length distribution of fish in event 5, which coincided with high water and turbidity in the river during that event.)


The simple unconditional SE of the averaged values is not the right quantity here. See section 14.9.1 in the 'closed abundance' chapter of the 'book':

http://www.phidot.org/software/mark/doc ... chap14.pdf
cooch
 
Posts: 1652
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Model Averaging-Closed Mark Recapture Study

Postby Owen » Mon Feb 15, 2010 5:07 pm

Thanks. I cranked out the CI described in section 14.9.1. The upper bound of this CI is about 8,000. The upper bound for the individual model that has the highest abundance in the model-averaged list is about 6,400.

If I believe that the true model lies within the list of models being averaged, does it make sense that the model-averaged upper bound (about 8,000) is higher than the upper bound of the member model with the highest abundance (about 6,400)? Thanks very much for any help.
Owen
 
Posts: 11
Joined: Tue Feb 09, 2010 9:29 pm

Re: Model Averaging-Closed Mark Recapture Study

Postby cooch » Mon Feb 15, 2010 8:34 pm

Owen wrote:Thanks. I cranked out the CI described in section 14.9.1. The upper bound of this CI is about 8,000. The upper bound for the individual model that has the highest abundance in the model-averaged list is about 6,400.

If I believe that the true model lies within the list of models being averaged, does it make sense that the model-averaged upper bound (about 8,000) is higher than the upper bound of the member model with the highest abundance (about 6,400)? Thanks very much for any help.


The only CI you should pay attention to is the one calculated from the model-averaged unconditional variance (as per the cited section). Looking at CIs for individual models somewhat misses the point. Double-check your calculations while you're at it - it's easy to make small mistakes here.
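A minimal sketch of one such calculation is below - a log-normal interval built on f0 = N-hat - M(t+1), the sort of asymmetric interval MARK constructs for closed-population abundance. Whether this matches the model-averaged recipe in section 14.9.1 exactly should be checked against the book, and the M(t+1) value used here is purely hypothetical.

```python
import math

def lognormal_ci(n_hat, se_n, m_t1, z=1.96):
    """Log-normal confidence interval on f0 = N-hat - M(t+1).

    n_hat : abundance estimate (here, the model-averaged value)
    se_n  : its (unconditional) SE; since M(t+1) is a known count,
            SE(N-hat) and SE(f0-hat) are the same
    m_t1  : number of distinct animals ever caught
    """
    f0 = n_hat - m_t1
    c = math.exp(z * math.sqrt(math.log(1.0 + (se_n / f0) ** 2)))
    return m_t1 + f0 / c, m_t1 + f0 * c

# Hypothetical M(t+1) = 1,500 distinct fish, for illustration only
low, high = lognormal_ci(n_hat=4150.0, se_n=1490.0, m_t1=1500.0)
print(f"95% CI: ({low:.0f}, {high:.0f})")  # asymmetric, with a long upper tail
```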
cooch
 
Posts: 1652
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Model Averaging-Closed Mark Recapture Study

Postby Alex » Mon Mar 08, 2010 2:56 pm

I worked with Owen on this closed-capture mark-recapture study. We're trying to decide whether to model-average or use the model with the highest AIC weight. Three models had most of the AIC weight (Mt, weight 0.55; Mtb, weight 0.23; Mb, weight 0.22). Mt and Mb are biologically plausible (Mtb not so much). One could make a case for picking model Mt because it has the highest weight, or for model-averaging Mt and Mb. Any insight into whether model averaging or selecting the highest-ranking model would be more acceptable in this case? Thanks in advance.
Alex
 
Posts: 2
Joined: Fri Mar 05, 2010 4:12 pm

Re: Model Averaging-Closed Mark Recapture Study

Postby cooch » Mon Mar 08, 2010 5:29 pm

Alex wrote: I worked with Owen on this closed-capture mark-recapture study. We're trying to decide whether to model-average or use the model with the highest AIC weight. Three models had most of the AIC weight (Mt, weight 0.55; Mtb, weight 0.23; Mb, weight 0.22). Mt and Mb are biologically plausible (Mtb not so much). One could make a case for picking model Mt because it has the highest weight, or for model-averaging Mt and Mb. Any insight into whether model averaging or selecting the highest-ranking model would be more acceptable in this case? Thanks in advance.



Unless (i) there are mitigating circumstances under which you might exclude lower-ranked models, or (ii) the top model has by far the largest amount of support in the data, you should always model-average - if your interest is exclusively in getting the best (i.e., most defensible, least affected by model-selection uncertainty) estimate of the parameter. How you model-average can occasionally get complicated (e.g., for certain types of individual-covariate models), but otherwise...
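As an aside, the weights themselves are just the usual Akaike-weight transform of the AICc values; the AICc values in this sketch are made up solely to reproduce weights near the 0.55 / 0.23 / 0.22 reported in this thread.

```python
import math

def akaike_weights(aicc):
    """Akaike weights: w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j)."""
    best = min(aicc)
    rel = [math.exp(-0.5 * (a - best)) for a in aicc]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICc values chosen only to give weights close to 0.55, 0.23, 0.22
print(akaike_weights([1000.00, 1001.74, 1001.83]))
```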
cooch
 
Posts: 1652
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Model Averaging-Closed Mark Recapture Study

Postby Alex » Mon Mar 08, 2010 10:06 pm

Thanks, Cooch; well said. We are most concerned with how N-hat compares to previous studies and, although it is still important, less concerned with N-hat itself. We've assessed this population 5 times in the last 20+ years, and each time Mt was the model of choice. This past year, however, Mt was still the highest-ranking model but Mb showed up with some weight. Given that Mt has over double the weight of Mb (the second-highest-ranking biologically plausible model), is that enough to justify not model averaging? I don't have much experience with model selection using AIC weights.

Our dilemma is that if we use the model-averaged estimate, N-hat will show a decline, yet the CIs are so large compared to past studies that people are very likely to disregard the estimate altogether, even though all models (including Mt and Mb) and any model-averaged estimate show a clear decline. Whereas if we use Mt (with our justification being that it is by far the highest-ranking model), N-hat will show a decline, the CIs are fairly tight, and we can say something about the population. It's a tough one: results from analyses shouldn't sway model selection, but in this case it seems an argument could easily be made for either Mt or model averaging. Any further insight before I put this to rest? Thanks, and I apologize for the wordiness!
Alex
 
Posts: 2
Joined: Fri Mar 05, 2010 4:12 pm

Re: Model Averaging-Closed Mark Recapture Study

Postby cooch » Mon Mar 08, 2010 10:37 pm

Alex wrote: Thanks, Cooch; well said.


I've been practicing.

We are most concerned with how N-hat compares to previous studies and, although it is still important, less concerned with N-hat itself. We've assessed this population 5 times in the last 20+ years, and each time Mt was the model of choice. This past year, however, Mt was still the highest-ranking model but Mb showed up with some weight. Given that Mt has over double the weight of Mb (the second-highest-ranking biologically plausible model), is that enough to justify not model averaging? I don't have much experience with model selection using AIC weights.


I'm guessing that in years past no model averaging was done. At any rate, one of the criteria for deciding whether a model should be included in the average is whether the estimates and CIs for that (individual) model are plausible/reasonable. While this can sometimes be tricky, it's often pretty obvious. I should probably add something on this to that chapter in the book. When in doubt, model average.

Our dilemma is that if we use the model-averaged estimate, N-hat will show a decline, yet the CIs are so large compared to past studies that people are very likely to disregard the estimate altogether, even though all models (including Mt and Mb) and any model-averaged estimate show a clear decline.


Such things are a dilemma only when you (or your audience) have preconceived notions about what the 'results should be'. They are what they are. If your point estimates show a trend, but the CIs are consistent with a plausible model with no trend, then you might in fact have no trend. The fact that the point estimates trend may or may not mean anything at all. For example, suppose I have 5 years of estimates of N: N(1) = 100 +/- 50, N(2) = 102 +/- 50, N(3) = 104 +/- 50, N(4) = 106 +/- 50, and N(5) = 108 +/- 50. That is a clear trend in N if you simply plot the estimates and look at them, but given the uncertainty in the estimates, you probably wouldn't in fact find the trend to be 'significant' (in the usual sense).
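To put numbers on that toy example - a minimal sketch, assuming independent estimates (which, as noted further down, real abundance series usually are not):

```python
import math

# The toy example above: point estimates creep upward, but each has SE = 50
years = [1, 2, 3, 4, 5]
n_hat = [100.0, 102.0, 104.0, 106.0, 108.0]
se_n = 50.0   # same SE for every estimate, and estimates assumed independent

t_bar = sum(years) / len(years)
sxx = sum((t - t_bar) ** 2 for t in years)

# Ordinary least-squares slope of N-hat on year
slope = sum((t - t_bar) * n for t, n in zip(years, n_hat)) / sxx

# The slope is a linear combination of the N-hats, so with independent
# sampling errors its variance follows directly
se_slope = math.sqrt(sum(((t - t_bar) / sxx) ** 2 * se_n ** 2 for t in years))

print(f"slope = {slope:.1f}/yr, SE = {se_slope:.1f}, z = {slope / se_slope:.2f}")
# slope = 2.0, SE ~ 15.8, z ~ 0.13: a 'clear' visual trend, nowhere near significant
```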

Whereas if we use Mt (with our justification being that it is by far the highest-ranking model), N-hat will show a decline, the CIs are fairly tight, and we can say something about the population. It's a tough one: results from analyses shouldn't sway model selection, but in this case it seems an argument could easily be made for either Mt or model averaging. Any further insight before I put this to rest? Thanks, and I apologize for the wordiness!


Sorry, you're still 'trying too hard' to find a trend. As an aside, there are some technical hoops to jump through if you want to take a time series of N estimates and ask whether they trend or not. More on this in a minute.

However, given that this is your agenda, you'd be well advised to consider a Pradel model approach - estimation of N in an open population is notoriously 'twitchy' (from the Latin for 'varying degrees of lousy', depending on how much heterogeneity there is in encounter probability, and what form it takes). In fact, it may be more powerful to ignore what N is per se and simply ask the trend question by generating the time series of realized growth rates (lambda = N(t+1)/N(t)). It turns out you can estimate the trajectory (the time series of lambda estimates) far more precisely - and thus tell a better and more defensible story - than you can estimate N. This always annoys the folks who get paid to estimate N, since in many cases you can estimate the trajectory just fine without ever enumerating the population.

The Pradel models (Chapter 12) let you do this, and avoid some of the technical hoops noted earlier for asking whether a population is increasing or decreasing based purely on a time series of abundance estimates. If you have marked individuals that show up in multiple years, a robust design Pradel model, with a constraint imposing a trend on lambda as one of the candidate models, might be a very powerful approach. If you don't have data that let you do this, then you're resigned to playing games with estimating trend(s) (or not) from a time series of abundance estimates. The basic ideas are straightforward, but some of the technical issues (like handling non-independence of estimates among years) are somewhat less so.
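Just to be clear about what 'trajectory' means here, the sketch below turns a series of abundance estimates into realized growth rates with delta-method SEs. This is emphatically not the Pradel model (which estimates lambda directly from the encounter histories), and the numbers are hypothetical.

```python
import math

def realized_lambdas(n_hat, se_n, z=1.96):
    """Naive realized growth rates lambda_t = N(t+1)/N(t) from a series of
    abundance estimates, with delta-method SEs assuming independent estimates.
    (The Pradel models estimate lambda directly from the capture histories;
    this only illustrates the quantity being targeted.)"""
    out = []
    for (n0, s0), (n1, s1) in zip(zip(n_hat, se_n), zip(n_hat[1:], se_n[1:])):
        lam = n1 / n0
        se_lam = lam * math.sqrt((s0 / n0) ** 2 + (s1 / n1) ** 2)
        out.append((lam, se_lam, lam - z * se_lam, lam + z * se_lam))
    return out

# Hypothetical abundance series, for illustration only
for lam, se, lo, hi in realized_lambdas([5200, 4800, 4150], [500, 550, 1490]):
    print(f"lambda = {lam:.2f} (SE {se:.2f}), 95% CI ({lo:.2f}, {hi:.2f})")
```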
cooch
 
Posts: 1652
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Model Averaging-Closed Mark Recapture Study

Postby murray.efford » Tue Mar 09, 2010 2:03 pm

Further to Evan's tutorial: there's another option for trend estimation, a sort of 'middle way'. Fit N to each closed session while constraining the Ns to follow a trend (linear, log-linear, or other), i.e., estimate the intercept and slope of N over years rather than a separate annual N. I don't know how you do this in MARK, but it's mathematically straightforward if N is in the likelihood - we do it routinely with density.
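A rough sketch of the idea (a toy joint likelihood, not how you would set it up in MARK or in density software): each session gets a simple M0 closed-capture likelihood, and log(N) is constrained to be linear in year. All data values, starting values, and the M0-everywhere assumption are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Toy 'middle way': rather than a free N per closed session, put N in the
# likelihood and constrain log(N_year) = b0 + b1 * year. Each session uses
# the simple M0 kernel N!/(N-M)! * p^{n.} * (1-p)^{S*N - n.} with a shared p.

years  = np.array([0.0, 5.0, 9.0, 14.0, 20.0])                 # survey years (hypothetical)
occ    = np.array([4.0, 4.0, 5.0, 4.0, 4.0])                   # occasions per session
m_dist = np.array([1450.0, 1400.0, 1350.0, 1250.0, 1150.0])    # distinct animals caught
n_tot  = np.array([1800.0, 1750.0, 1700.0, 1550.0, 1400.0])    # total captures

def neg_log_lik(theta):
    b0, b1, logit_p = theta
    p = 1.0 / (1.0 + np.exp(-logit_p))
    n_hat = np.exp(b0 + b1 * years)                     # trend-constrained abundances
    if not np.all(np.isfinite(n_hat)) or np.any(n_hat <= m_dist) or np.any(n_hat > 1e7):
        return 1e12                                     # keep the optimizer in bounds
    ll = (gammaln(n_hat + 1.0) - gammaln(n_hat - m_dist + 1.0)
          + n_tot * np.log(p) + (occ * n_hat - n_tot) * np.log(1.0 - p))
    return -float(np.sum(ll))

fit = minimize(neg_log_lik, x0=[np.log(5000.0), -0.01, -2.0],
               method="Nelder-Mead", options={"maxiter": 5000})
b0, b1, logit_p = fit.x
print(f"slope of log(N) per year: {b1:.4f}")            # negative slope => estimated decline
```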
Murray
murray.efford
 
Posts: 712
Joined: Mon Sep 29, 2008 7:11 pm
Location: Dunedin, New Zealand

