by murray.efford » Mon Jan 23, 2012 2:24 pm
I think Darryl's original point about sampling, with which I agree, was that you could screw up if you estimated detection (functions) in a nonrepresentative subset of the region and extrapolated that to other sites. The way around this is to use a formal probability-based sampling design that ensures representativeness for both the intensive and extensive phases. A systematic grid with a random origin is effective in this way, but has some drawbacks. 'Generalised random tessellation stratified' (GRTS) sampling (!) is a compromise between a simple random sample and a systematic sample - it's popular in some US agencies. I suspect it doesn't really add much, but it's easy to implement (e.g. the spsurvey package in R) and looks good, so why not?

Exactly what is appropriate for the bears would need more thought... The original Conroy et al. idea was a sort of adaptive sampling, which is a little different and requires some fairly strong model assumptions (it seems unlikely to work, even in SECR form, if there is strongly density-dependent variation in detection).
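For anyone wanting to try this, here is a minimal sketch of both options in R. The sampling frame (a 10 x 10 km grid of candidate points), the sample size, the CRS, and the grts() argument names (n_base etc., from the current spsurvey interface, version 5 or later; older releases used a different design-list interface) are all illustrative assumptions, not a recipe for any particular study.

[code]
library(spsurvey)
library(sf)

## Hypothetical sampling frame: candidate points on a 1-km lattice
## covering a 10 x 10 km region (coordinates in metres)
frame <- st_as_sf(
    expand.grid(x = seq(500, 9500, 1000), y = seq(500, 9500, 1000)),
    coords = c("x", "y"),
    crs = 32760  # an arbitrary projected CRS, just for the example
)

## Option 1: systematic grid with random origin (simple, some drawbacks)
origin <- runif(2, 0, 1000)
sys_pts <- expand.grid(x = origin[1] + seq(0, 9000, 1000),
                       y = origin[2] + seq(0, 9000, 1000))

## Option 2: spatially balanced GRTS sample of 20 sites for the
## intensive phase (argument names per spsurvey >= 5.0)
set.seed(123)
samp <- grts(frame, n_base = 20)
plot(st_geometry(samp$sites_base))
[/code]

Either way the design gives every part of the region a known, nonzero chance of being in the intensive subset, which is what justifies extrapolating the estimated detection functions to the extensive sites.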
Murray