Robust Design v POPAN

questions concerning analysis/theory using program MARK

Robust Design v POPAN

Postby tommyg » Fri Sep 02, 2011 4:41 pm

When there is no assumed temporary emigration from or immigration into a study area, what are the advantages of the Robust Design vs. an open JS model (say, POPAN) when estimating abundance? The obvious one is that the Robust Design produces estimates of abundance in the first and last primary periods that would otherwise not be reliable in POPAN (assuming time-varying recruitment and survival). Are there any others?

I've been running some simulations in R, generating data under a Robust Design (detection probability varying by primary and secondary period, and survival and recruitment varying between primary periods - I realize my description is very cursory here) and analyzing the data with RMark. Generally, I've found that POPAN estimates of abundance (in primary periods 2, ..., k-1) are less biased and more precise when pooling the primary-period data, compared to the Robust Design estimates of abundance. This is especially the case when the detection probabilities are small (0.02 - 0.1).
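For concreteness, here is a minimal sketch of the kind of generator described above, written in Python rather than the original R. All parameter values and names (n_super, K, J, phi, p) are illustrative assumptions, and detection is held constant for simplicity rather than varying by occasion as in the actual simulations.

```python
# Minimal robust-design data generator (illustrative sketch, not the
# original R code): K primary periods, J secondary occasions each,
# survival phi between primaries, constant per-occasion detection p.
import numpy as np

rng = np.random.default_rng(42)

def simulate_rd(n_super=500, K=5, J=3, phi=0.8, p=0.05):
    """Recruitment enters animals uniformly across primary periods;
    survival is Bernoulli(phi) between primaries; detection is
    Bernoulli(p) on each secondary occasion while alive."""
    entry = rng.integers(0, K, size=n_super)       # period of entry
    alive = np.zeros((n_super, K), dtype=bool)
    for i in range(n_super):
        alive[i, entry[i]] = True
        for k in range(entry[i] + 1, K):
            alive[i, k] = alive[i, k - 1] and (rng.random() < phi)
    caps = (rng.random((n_super, K, J)) < p) & alive[:, :, None]
    # as in real data, keep only animals detected at least once
    return caps[caps.reshape(n_super, -1).any(axis=1)]

ch = simulate_rd()
print(ch.shape)  # (n_detected, 5, 3)
```

With p = 0.05 and J = 3, the chance of detecting an animal at least once in a primary period is only about 0.14, which is what makes these histories so sparse.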

Does this all sound reasonable? Are there any papers comparing the performance of the Robust Design vs. open JS models? Part of my confusion is that there seems to be a lot of literature in strong support of the Robust Design over other open JS approaches. I can see this being the case if there is temporary movement in and out of the study area, or if the sampling area excludes the home range of some animals, etc., but otherwise I've yet to see the advantages of the Robust Design when estimating abundance. When animals that are alive are always observable, are you better off with a POPAN-like approach, conducting Schnabel estimates for the first and last primary periods?
tommyg
 
Posts: 21
Joined: Tue Mar 01, 2011 1:55 pm

Re: Robust Design v POPAN

Postby Bill Kendall » Fri Sep 09, 2011 12:30 pm

The best place to go for a discussion of the robust design without temporary emigration is Kendall et al. (Kendall, W. L., K. H. Pollock, and C. Brownie. 1995. A likelihood-based approach to capture-recapture estimation of demographic parameters under the Robust Design. Biometrics 51:293-308.) and Kendall and Pollock (Kendall, W. L., and K. H. Pollock. 1992. The Robust Design in capture-recapture studies: a review and evaluation by Monte Carlo simulation. Pages 31-43 in Wildlife 2001: Populations, D. R. McCullough and R. H. Barrett (eds), Elsevier, London, UK.), and the papers by Pollock and Nichols cited in those two papers (I can provide my papers if you need them).

The results of the simulations you mention (p <= .1) do not surprise me. You should find reference to similar results in the papers I've listed, and in Otis et al. (1978, Wildlife Monographs). You are talking about very small sample sizes (unless your source population is huge). As sample size gets very small the estimation process becomes unstable due to data sparseness, resulting in spurious results. So there is a point where it is advisable to pool data into a JS (or multistate depending on the situation) model. To determine that point for a given study the best thing to do is what you have done: simulate the situation.
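The pooling step can be made concrete: collapse each primary period's J secondary occasions into a single seen/not-seen indicator, which raises the effective per-period detection probability. A Python sketch (the function name is mine, not from MARK or RMark):

```python
# Pooling robust-design histories to a Jolly-Seber-style format:
# collapse the J secondary occasions within each primary period to
# one seen/not-seen indicator. With constant per-occasion detection
# p, the effective per-period detection becomes p* = 1 - (1 - p)^J,
# which is why pooling stabilizes estimation when p is tiny.
import numpy as np

def pool_to_js(ch):
    """ch: (n, K, J) boolean robust-design histories -> (n, K)."""
    return ch.any(axis=2)

p, J = 0.05, 3
p_star = 1 - (1 - p) ** J
print(round(p_star, 6))  # 0.142625
```

So at p = 0.05 with three secondary occasions, pooling nearly triples the per-period detection probability, at the cost of discarding the within-period information the RD uses.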

On the other hand, if you are finding bias and poorer precision for larger sample sizes using the robust design, that is contrary to everything I have done or seen. If you are analyzing under the model you use to generate the data, you should get no bias under either the RD or JS approaches (I assume you're not simulating trap response or individual heterogeneity in p). Precision of survival is not much improved by the RD, although at the end of the time series there is greater benefit. Precision in abundance and recruitment (and state transition probabilities for multistate models) often shows great improvement under the RD.

You already mentioned that all parameters are estimable under the RD. The other advantages discussed in the past are that you can separate immigration from in situ recruitment with only two age classes rather than three, and estimates of abundance and survival have little sampling covariance to contend with under the RD.

In conclusion, if you are finding more bias and poorer precision for RD estimates except for very small sample sizes, I'd be glad to see more detail. Let me know if you need those papers.

Bill Kendall
Bill Kendall
 
Posts: 96
Joined: Wed Jun 04, 2003 8:58 am

Re: Robust Design v POPAN

Postby tommyg » Fri Sep 09, 2011 1:35 pm

Thanks for the thorough reply, Bill.

The bias and poor precision definitely go away with high detection probabilities (or large sample sizes).

The RD likelihood surface seems to challenge the estimation routine in MARK more than other models I have played with, especially at small detection probabilities. This is unfortunate. I wonder whether other optimization algorithms would be more suitable in this situation.

Thanks.

Tommy
tommyg
 
Posts: 21
Joined: Tue Mar 01, 2011 1:55 pm

Re: Robust Design v POPAN

Postby cooch » Fri Sep 09, 2011 2:24 pm

tommyg wrote:Thanks for the thorough reply, Bill.

The bias and poor precision definitely go away with high detection probabilities (or large sample sizes).

The RD likelihood surface seems to challenge the estimation routine in MARK more than other models I have played with, especially at small detection probabilities. This is unfortunate. I wonder whether other optimization algorithms would be more suitable in this situation.


Try the simulated annealing algorithm ('alternative optimization') -- it is slow, but seems to do a nice job of handling ugly likelihood surfaces. See the extended discussion in the multi-state chapter of the book (Chapter 10), specifically the sidebar beginning on p. 37.

Simulated annealing is pretty robust, but (as noted) can take a long time to converge.
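As a toy illustration of why annealing copes with rugged surfaces (this is a generic sketch, not MARK's actual implementation): early on, a high "temperature" lets the search accept uphill moves and escape local optima; as the temperature cools, the search settles into a basin.

```python
# Toy simulated annealing on a rugged 1-D objective with many local
# minima (global minimum at x = 0). The objective, start point, and
# cooling schedule are all made up for illustration.
import math, random

random.seed(1)

def f(x):
    return x * x + 3.0 * math.sin(5.0 * x) ** 2

x = 4.0                      # deliberately poor starting value
best = (f(x), x)
T = 2.0                      # initial temperature
for step in range(5000):
    cand = x + random.gauss(0.0, 0.5)    # random proposal
    d = f(cand) - f(x)
    # always accept downhill moves; accept uphill moves with
    # probability exp(-d/T), which shrinks as T cools
    if d < 0 or random.random() < math.exp(-d / T):
        x = cand
        if f(x) < best[0]:
            best = (f(x), x)
    T *= 0.999               # geometric cooling schedule

print(round(best[1], 2))     # should land near the global minimum at 0
```

A greedy hill-climber started at x = 4 would get stuck in the first local basin it reached; the temperature schedule is what buys the ability to cross the barriers between basins.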
cooch
 
Posts: 1654
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Robust Design v POPAN

Postby Bill Kendall » Fri Sep 09, 2011 2:48 pm

Simulated annealing is a good idea, although if things are sparse enough I wonder whether that would do the trick. Keep in mind also that what you are doing when you pool for a JS analysis is replacing multiple detection parameters within a season with one for the season. So if there is a way of economizing on parameters, what would happen? In the simplest case, if you assume constant p across let's say five sampling periods, how does that perform vs. POPAN (a priori I would guess you get no more bias than with POPAN, and perhaps a bit more precision)? A covariate on p also might "cure" the problem. In the case of a linear function you would have 2 parameters, slightly more than with POPAN. If you have time to explore these things, that would be interesting.
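To make the parameter-economy point concrete: within one primary period, model M_t needs J detection parameters, M_0 (constant p) needs one, and a logit-linear trend in p would need two. Below is a rough Python sketch of the simplest case, a brute-force M_0 fit; the counts and names are made-up illustrative data, not output from the simulations.

```python
# Brute-force fit of closed-population model M_0 for one primary
# period: maximize the likelihood over (N, p), where N is abundance
# and p is a single constant per-occasion detection probability.
# n_caught = distinct animals seen, total_caps = total captures.
import math

def m0_loglik(N, p, n_caught, total_caps, J):
    """M_0 log-likelihood (up to a constant not involving N or p)."""
    if N < n_caught or not 0 < p < 1:
        return -math.inf
    return (math.lgamma(N + 1) - math.lgamma(N - n_caught + 1)
            + total_caps * math.log(p)
            + (N * J - total_caps) * math.log(1 - p))

n_caught, total_caps, J = 40, 55, 5   # illustrative counts
best = max(
    ((m0_loglik(N, p / 100, n_caught, total_caps, J), N, p / 100)
     for N in range(n_caught, 400)
     for p in range(1, 60)),
    key=lambda t: t[0],
)
N_hat, p_hat = best[1], best[2]
print(N_hat, p_hat)
```

One parameter for p instead of five is exactly the kind of economizing suggested above, and at the M_0 maximum p_hat is forced toward total_caps / (N_hat * J), so sparse data still pin it down reasonably well.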
Bill Kendall
 
Posts: 96
Joined: Wed Jun 04, 2003 8:58 am

