Fletcher c-hat with losses on capture



Postby stshroye » Fri Apr 29, 2022 10:57 am

I am analyzing Schnabel-style capture-recapture data for Largemouth Bass. Because the closure assumption is questionable, I am comparing Huggins closed-capture models with Link-Barker and POPAN models. I have sparse data so I would like to evaluate GOF with Fletcher c-hat, but there are some losses on capture.

1. Should I completely abandon Fletcher c-hat and estimate median c-hat instead, or is Fletcher possibly still best for both closed and open models as long as I don't have too many losses?
2. Is the Pearson c-hat reported by MARK ever usable as a last resort? It often seems to work better than RELEASE for my datasets, even though my understanding is that it isn't reliable for sparse data.
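For concreteness, here is a minimal sketch of how the Pearson and Fletcher c-hat estimates relate, as I understand Fletcher's (2012) estimator: the Pearson statistic over its degrees of freedom, divided by one plus the mean relative residual. The counts, expected values, and degrees of freedom below are made up purely for illustration, not from real data or MARK output.

```python
import numpy as np

def pearson_chat(obs, exp, df):
    """Pearson c-hat: X^2 over degrees of freedom."""
    x2 = np.sum((obs - exp) ** 2 / exp)
    return x2 / df

def fletcher_chat(obs, exp, df):
    """Fletcher (2012) c-hat: Pearson c-hat divided by (1 + s_bar),
    where s_bar is the mean relative residual (obs - exp) / exp."""
    s_bar = np.mean((obs - exp) / exp)
    return pearson_chat(obs, exp, df) / (1.0 + s_bar)

# Made-up counts over eight encounter histories, purely for illustration
obs = np.array([12.0, 5, 3, 1, 0, 2, 0, 1])
exp = np.array([10.2, 6.1, 2.8, 1.5, 0.9, 1.6, 0.5, 0.4])
df = 5  # hypothetical degrees of freedom

print(round(pearson_chat(obs, exp, df), 3))
print(round(fletcher_chat(obs, exp, df), 3))
```

The point of the correction factor is that with sparse data the simple Pearson ratio is biased; Fletcher's divisor shrinks (or inflates) it according to the average relative residual.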

Thanks.
stshroye
 
Posts: 28
Joined: Wed Sep 22, 2021 2:26 pm

Re: Fletcher c-hat with losses on capture

Postby stshroye » Mon May 09, 2022 2:00 pm

Update:

The MARK help on Fletcher c-hat says, "losses on capture or dots in the encounter history will create encounter histories that are not considered in the total number of possible encounter histories." Does this mean that as long as losses on capture are duplicates of other encounter histories in the dataset, then Fletcher c-hat is correct? Or, does the "total number" include duplicates?

Disregard my question about using Pearson c-hat. I have compared it to the other available estimates of c-hat and it is often substantially different, presumably due to my sparse data.

Median c-hat seems to be a reasonable alternative to Fletcher for the closed-capture models, although it can be tricky to specify the upper bound, number of intermediate points, and number of replicates so as to get a reasonably precise estimate. The bootstrap GOF using deviance seems to work better than median c-hat for my open-population models (although I am aware the bootstrap can underestimate c-hat, that does not appear to be the case with my datasets). Unfortunately, MARK gives a warning about losses on capture for both median c-hat and bootstrap GOF, which is the same reason I am questioning Fletcher c-hat.
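For readers unfamiliar with the bootstrap deviance approach mentioned above, here is a minimal sketch assuming the usual estimator: observed deviance divided by the mean deviance of datasets simulated under the fitted model. The deviance values below are invented stand-ins, not output from MARK.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs: the deviance of the model fitted to the real data,
# and deviances from datasets simulated under that fitted model (the kind
# of replicates MARK's bootstrap GOF produces). Both are invented here.
observed_deviance = 148.3
boot_deviances = rng.chisquare(df=120, size=1000)  # stand-in replicates

# Common bootstrap estimator: observed deviance / mean bootstrap deviance
c_hat_boot = observed_deviance / boot_deviances.mean()
print(round(c_hat_boot, 3))
```

If the simulated deviances systematically sit below what truly overdispersed data would produce, this ratio underestimates c-hat, which is the caveat noted above.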

Re: Fletcher c-hat with losses on capture

Postby cooch » Mon May 09, 2022 4:04 pm

1\ Any significant number of losses on capture renders almost every GOF test based on permutations of contingency tables suspect at best.

2\ What constitutes a 'significant' number of losses? Simulation is your friend here.

3\ Consider working with a more robust taxon that doesn't die when you catch it. [Or, get better techs who aren't so ham-fisted that they kill some of what they catch]. ;-)
cooch
 
Posts: 1652
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Fletcher c-hat with losses on capture

Postby stshroye » Thu May 12, 2022 3:04 pm

stshroye wrote:The MARK help on Fletcher c-hat says, "losses on capture or dots in the encounter history will create encounter histories that are not considered in the total number of possible encounter histories." Does this mean that as long as losses on capture are duplicates of other encounter histories in the dataset, then Fletcher c-hat is correct? Or, does the "total number" include duplicates?


What if the losses are just duplicate encounter histories?

I have compared results for my original data; original data with "-1" encounter histories changed to "1" as if they were actually released; and original data with losses deleted as if they never existed -- and it makes very little difference to estimates of Fletcher c-hat. Is that good enough, or do I need to do more sophisticated simulations?
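The three data variants compared above can be sketched as follows, assuming MARK's .inp convention of flagging a loss on capture with a negative frequency. The encounter histories here are hypothetical.

```python
# Histories are MARK-style strings; following the .inp convention,
# a loss on capture is flagged by a negative frequency. All values
# here are hypothetical.
data = [("10100", 1), ("01100", -1), ("00110", 1), ("01100", 1)]

# Variant A: original data, losses kept as-is
original = list(data)

# Variant B: losses recoded as if the fish were released (-1 -> 1)
released = [(h, abs(f)) for h, f in data]

# Variant C: losses deleted as if those fish were never captured
deleted = [(h, f) for h, f in data if f > 0]

print(len(original), len(released), len(deleted))  # 4 4 3
```

Fitting the same model to each variant and comparing the resulting Fletcher c-hat values is the sensitivity check described in the post.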

