Closed Capture queries

questions concerning analysis/theory using program MARK

Closed Capture queries

Postby Jalgeo » Sat Mar 28, 2015 11:48 am

Hey

I am an undergraduate student doing my honours project on estimating civet abundance using different techniques, namely capture-recapture and REM. I have carried out live-trapping of the civets and caught 14 individuals over a period of 7 weeks. My encounter history is as follows:

10000000010000000001000000000011000000001000 1;
10000000010001000001000000100010000001001000 1;
00000010000001000000000001000000000000001001 1;
00000000000000000000000000000000000000000001 1;
00000000001010010101010010000000000000000000 1;
00000000000010000001000000000000000001000000 1;
00000000000010011001000000000110000000000000 1;
00000000000001000000000000000000000000000010 1;
00000000000000100000000001000000000000000000 1;
00000000000000010000000000000000000000000000 1;
00000000000000001001000001000000000001000000 1;
00000000000000000001000000000000000000000000 1;
00100000000000000000000000000000000000000000 1;
00000100000000000000000000000000000000000000 1;

I am assuming a closed population and am therefore using the closed-capture models, specifically Huggins heterogeneity pi, p & c. I have run a few models and they have all given me an N-hat of ~14, which I am concerned about. The best-fitting model is pi, p(t)=c(t) (Mt). However, I have seen in the literature that many people have used Mh models, but I cannot figure out how to implement this with my data. I have attempted a linear model, but this did not work, and I have also tried the full likelihood p & c, after reading the Gentle Introduction extensively. Does anyone have any hints or advice for me?
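[For readers who want to try this from R: below is a minimal sketch of fitting a finite-mixture (Mh-type) Huggins model through the RMark package, assuming RMark and a working MARK installation. The model name "HugHet" and the ~mixture formula are my reading of the Huggins pi, p parameterisation (HugFullHet is the variant that also includes c); check them against the RMark documentation before relying on this.]

Code:
# Sketch only: Huggins closed-capture model with two-class mixture heterogeneity
# (pi, p) via RMark. Assumes the RMark package and program MARK are installed.
library(RMark)

civet <- data.frame(
  ch = c("10000000010000000001000000000011000000001000",
         "10000000010001000001000000100010000001001000",
         "00000010000001000000000001000000000000001001",
         "00000000000000000000000000000000000000000001",
         "00000000001010010101010010000000000000000000",
         "00000000000010000001000000000000000001000000",
         "00000000000010011001000000000110000000000000",
         "00000000000001000000000000000000000000000010",
         "00000000000000100000000001000000000000000000",
         "00000000000000010000000000000000000000000000",
         "00000000000000001001000001000000000001000000",
         "00000000000000000001000000000000000000000000",
         "00100000000000000000000000000000000000000000",
         "00000100000000000000000000000000000000000000"),
  freq = 1, stringsAsFactors = FALSE)

civet.proc <- process.data(civet, model = "HugHet")   # Huggins with mixture heterogeneity
civet.ddl  <- make.design.data(civet.proc)

# M(h): p constant over time, but allowed to differ between the two mixture classes
mh.fit <- mark(civet.proc, civet.ddl,
               model.parameters = list(p = list(formula = ~mixture)))

# For Huggins models, N-hat is reported as a derived parameter in the MARK output
mh.fit$results$derived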

Many Thanks
Jalgeo
 

Re: Closed Capture queries

Postby cooch » Sat Mar 28, 2015 12:30 pm

Good news, bad news:

1\ good news -- congratulations on working through the material in the book yourself, and for having a pretty good idea about the models you'd like to fit.

2\ bad news -- you have very little data to work with (this isn't a criticism of you, just your data). You have >3 times the number of encounter occasions relative to marked individuals, and your estimated probability of initial encounter is virtually 0 (driven by the fact that ~40% of your data consists of individuals seen once, and never again). Far too little data for a mixture model (heterogeneity), and even too little to come up with plausible estimates for p or c. In the end, almost nothing is estimable (since there is so little data to support fitting even the simplest of models).

I'm afraid your data aren't sufficient to generate a robust estimate of abundance. You know you have at least M(t+1)=14 animals, but correcting that for detection probability <1 (which it clearly is) will be difficult in the extreme.
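[A rough back-of-envelope check of these points, using only numbers visible in the encounter histories above (14 animals, 44 occasions, and 5 histories containing a single capture):]

Code:
# Back-of-envelope tally (base R); counts taken from the capture histories above
n_marked    <- 14   # M(t+1), distinct individuals caught
n_occasions <- 44   # columns in each encounter history
caught_once <- 5    # histories containing exactly one '1'

n_occasions / n_marked   # > 3 occasions per marked animal
caught_once / n_marked   # ~0.36, i.e. roughly 40% of animals seen only once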

Sorry to be the bearer of bad news.
cooch
 

Re: Closed Capture queries

Postby murray.efford » Sat Mar 28, 2015 11:17 pm

cooch wrote:
correcting that for detection probability <0 (which is clearly is) will be difficult in the extreme

Indeed :wink:

Here's another perspective: Evan is clearly right that you don't have many data (you knew that), but I don't think the absolute magnitude of p is the issue. The magnitude of p always depends on how finely you slice time (= visit your traps). You have a substantial number of recapture events (I count 33). M0 and Mt models often do give estimates near M(t+1) in the presence of heterogeneity, and heterogeneity is what we expect in a spatial trapping study. Fitting a spatial model will both make the best use of the data and allow for that source of heterogeneity. Confidence limits on density will be wide, but the analysis itself should work. I predict that the main uncertainty will not be in the estimate of p ('a' in the spatial model) but in the small absolute number of individuals (n = M(t+1)). The variance of n is a large chunk of the sampling variance of D (go with a binomial model rather than Poisson if you can justify it - for that you'll need to read a different manual!).
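[For anyone wanting to follow this suggestion, here is a minimal sketch of a basic SECR fit with the secr package. The file names, detector type, and buffer are placeholders rather than the poster's actual setup; substitute your own capture and trap-layout files.]

Code:
# Sketch only: basic SECR fit with the 'secr' package. 'civet_capt.txt' and
# 'civet_traps.txt' are hypothetical file names -- see ?read.capthist for the
# expected capture and trap file formats.
library(secr)

civetCH <- read.capthist("civet_capt.txt", "civet_traps.txt",
                         detector = "single")    # single-catch live traps (adjust as needed)

fit <- secr.fit(civetCH, buffer = 2000, CL = TRUE)   # conditional-likelihood fit
derived(fit, distribution = "binomial")              # density, with binomial n as suggested above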

Murray

Re: Closed Capture queries

Postby murray.efford » Sun Mar 29, 2015 5:46 am

Having stuck my neck out on this one, I thought I should run some simulations (anything to take my mind off the cricket). I imagined a grid of 36 traps at 200 m spacing and proposed sigma = 300 m as the scale-of-detection parameter. I selected parameter values that yield on average 14 first captures and 34 recaptures, as in the dataset (using tricks in an appendix to a paper we published in 2009). The required values are D = 0.039 civets/ha (4/sq km) and g0 = 0.01374. Simulating new data lets us check bias, precision and CI coverage (500 replicates). Bias is not a problem (2%, SE 1.4%). Precision is not great (relative SE 32%). Coverage of the 95% CI is close to nominal (94%). When we break down the sampling variance, we see that 'CVn' - the component due to variation in M(t+1) - tends to be greater than 'CVa' - the uncertainty in the detection parameters g0 and sigma - as predicted. Eyeballing simulated datasets, there are often around 4-5 animals caught just once, as in the original data.

I have only guessed the actual trapping scenario, but I bet it's not too far from this.

Murray

Code:
library(secrdesign)
# 6 x 6 grid of traps at 200 m spacing, habitat mask with a 2 km buffer
grid36 <- make.grid(nx = 6, ny = 6, spacing = 200)
mask <- make.mask(grid36, buffer = 2000)
# guessed parameter values chosen to give ~14 first captures and ~34 recaptures on average
scen <- make.scenarios(g0 = 0.01374, sigma = 300, D = 0.039008, noccasions = 44)

# fit by maximising the conditional likelihood (CL) for each of 500 simulated datasets
fits <- run.scenarios(scen, nrepl = 500, fit.args = list(CL = TRUE), fit = TRUE,
                      trapset = grid36, maskset = mask, extractfn = derived,
                      distribution = 'binomial')

# Completed scenario  1
# Completed in 95.43 minutes

summary(fits)

run.scenarios(nrepl = 500, scenarios = scen, trapset = grid36,
    maskset = mask, fit = TRUE, fit.args = list(CL = TRUE), extractfn = derived,
    distribution = "binomial")

Replicates    500
Started       20:39:00 29 Mar 2015
Run time      95.43  minutes
Output class  selectedstatistics

$constant
               value
trapsindex         1
noccasions        44
nrepeats           1
D           0.039008
g0           0.01374
sigma            300
detectfn           0
recapfactor        1
popindex           1
detindex           1
fitindex           1
maskindex          1

$varying
data frame with 0 columns and 1 row

$detectors
 trapsindex trapsname
          1    traps1

$fit.args
 fitindex   CL
        1 TRUE

OUTPUT
$1
               n    mean      se
estimate    500 0.03980 0.00057
SE.estimate 500 0.01223 0.00012
lcl         500 0.02223 0.00039
ucl         500 0.07201 0.00082
CVn         500 0.25520 0.00177
CVa         500 0.19273 0.00325
CVD         500 0.32316 0.00306
RB          500 0.02038 0.01455
RSE         500 0.32316 0.00306
COV         500 0.93800 0.01080

Re: Closed Capture queries

Postby cooch » Sun Mar 29, 2015 8:32 am

murray.efford wrote:
correcting that for detection probability <0 (which is clearly is) will be difficult in the extreme

Indeed :wink:


Ah yes. Of course I meant <1, but <0 would prove very problematic.

murray.efford wrote:
Here's another perspective: Evan is clearly right that you don't have many data (you knew that), but I don't think the absolute magnitude of p is the issue. The magnitude of p always depends on how finely you slice time (= visit your traps).


Which was more or less the point I was making when I pointed out the number of occasions relative to the number of encounters and the number of individuals.

murray.efford wrote:
You have a substantial number of recapture events (I count 33).


But the fact still remains that ~40% were encountered only once. In the extreme, imagine that all the second and subsequent encounters come from one individual animal, and every other animal is encountered only once. The total number of encounters is not always a good proxy for 'amount of data'.

murray.efford wrote:
M0 and Mt models often do give estimates near M(t+1) in the presence of heterogeneity, and heterogeneity is what we expect in a spatial trapping study. Fitting a spatial model will both make the best use of the data and allow for that source of heterogeneity. Confidence limits on density will be wide, but the analysis itself should work. I predict that the main uncertainty will not be in the estimate of p ('a' in the spatial model) but in the small absolute number of individuals (n = M(t+1)). The variance of n is a large chunk of the sampling variance of D (go with a binomial model rather than Poisson if you can justify it - for that you'll need to read a different manual!).

Murray


Of course, Murray is entirely correct. His perspective starts with the assumption that the data come from spatially referenced 'traps' (which they might do in your case). My perspective always starts from the 'fishbowl' view (my study organisms of choice wander around like non-sentient random particles; we mass them together into larger capture groups, and where the aggregate is captured has no spatial referencing of any interest).

If you have 'trap location' or some such to bring to bear, there may be light at the end of the tunnel (although we must always make sure said light isn't an oncoming train).

Cheers...
cooch
 

Re: Closed Capture queries

Postby gwhite » Sun Mar 29, 2015 9:18 am

Yep -- make strong enough assumptions, and you no longer need data. There is always a ladder of assumptions to climb out of any dataless hole.
gwhite
 

Re: Closed Capture queries

Postby Jalgeo » Sun Mar 29, 2015 10:46 am

Hey guys

Thanks for your replies and help. I will give your suggestions a try and let you know if I have any success (or problems). At the moment there is another student using SECR with the same data as I am, which is one of the reasons I decided to use closed-capture models. I am not too worried about problems like this, as it is just more for me to write about in my discussion.

Murray, your estimate was close; however, there were 19 traps out (with varying trapping effort). In the literature, Malay civet populations have been found at densities of ~2 per km^2, so 4 is not that far off, which is a great start.
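[A side note for later readers: varying trapping effort of the kind mentioned here can be represented in secr by attaching a 'usage' matrix to the traps object. The sketch below is illustrative only; the coordinates, object names and the occasions switched off are made up.]

Code:
# Sketch only: recording varying effort in secr via the 'usage' attribute of a
# traps object (rows = traps, columns = occasions; 1 = trap set, 0 = not set).
library(secr)

xy <- data.frame(x = runif(19, 0, 2000), y = runif(19, 0, 2000))  # 19 hypothetical trap sites
civet_traps <- read.traps(data = xy, detector = "single")

u <- matrix(1, nrow = 19, ncol = 44)   # 44 occasions, all traps set by default
u[5, 1:10] <- 0                        # e.g. trap 5 not operated for the first 10 occasions
usage(civet_traps) <- u                # secr.fit() then adjusts for the unequal effort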

Again thanks for your help!
Jalgeo
 

Re: Closed Capture queries

Postby murray.efford » Sun Mar 29, 2015 3:05 pm

My point about the relative contributions of n and p to the sampling variance follows directly from Huggins (1989) - it applies whether or not you think of the problem spatially.

Similarly - the observation that it's the total number of recapture events that most strongly influences precision of p-hat has roots in non-spatial CR, though I can't put my finger on a citation.

Maybe Gary could expand on what he thinks are the problematic assumptions. SECR methods are robust to the obvious ones. My simulations are another matter - just meant for illustration.
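[To make the variance point concrete: in the simulation output earlier in the thread, secr's derived() splits the relative variance of the density estimate into the CVn and CVa components, with CVD approximately sqrt(CVn^2 + CVa^2). A quick check using the mean values reported above:]

Code:
# Sketch: the variance split reported by secr::derived() -- CV(D)^2 is
# approximately CV(n)^2 + CV(a)^2, where n = number caught and a = effective
# sampling area. Values are the means from the simulation summary above.
CVn <- 0.25520
CVa <- 0.19273
sqrt(CVn^2 + CVa^2)   # ~0.32, matching the reported CVD / RSE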

Murray

