Detection probabilities model

Hi,
I am interested in using detection probabilities (p) to compare sampling techniques for a rare fish species in several river systems. We are currently using the multi-method technique under the single-season model that was outlined in the Nichols et al. 2008 manuscript. I would like to determine if there is a gear effect (i.e., do detection probabilities vary by sampling technique). To do this I have coded a model (p(.)) where detection probabilities do not vary by technique (i.e., they are constant) and a second model where detection probabilities vary by technique (p(gear)). To generate the comparison of detection probabilities by sampling technique I used several sample-level covariates to code for the different sampling techniques. I should also mention that due to our interest in comparing detection rates, we held psi and theta constant in these analyses. When I had fit these models, I noticed that my global model (p(gear)) had poor fit (probability of test statistic greater than or equal to observed (pr(TS≥OBS)) = 0.009 and had a c-hat of 15.348). The model with the sampling technique covariates (p(gear)) had a much lower AIC score than (p(.)), but since the models did not fit well is it still valid to compare them? When I adjusted the c-hat score to 15.348 the model with constant detection probabilities had lower quasi-AIC scores.
Such poor fit was not unexpected in my model, because I detected many more fish in one river system than in the other. Once I added a river system covariate, the fit improved dramatically, although it still doesn't fit well (pr(TS≥OBS) = 0.03 and c-hat = 2.94). I would like to explore some a priori models beyond the simple gear-type and system models, so fit would likely improve with several additional covariates. I do not believe there are any independence issues between sampled river reaches. Is there something I am missing, or is it likely the poor fit is simply from not including some influential covariates?
Finally, I would like to get estimates of the gear-specific detection probabilities and standard errors for each river system. As expected, the hoop net technique, which detected fish in more sampling events than the other gears, had a higher detection probability in the first river system. However, in the second river system the electrofishing technique accounted for two of the three sampling-event detections. Despite the higher number of detections with electrofishing in the second river system, the model-estimated detection probability there was higher for the hoop net (0.044) than for electrofishing (0.0158). Does this seem reasonable? Shouldn't the sampling technique that detected fish in more sampling events in the second river system have a higher detection probability than the technique that only detected fish in one sampling event there? What are your thoughts?
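One thing worth keeping in mind when eyeballing estimates this small is that per-survey p values translate into cumulative detection probability as p* = 1 − (1 − p)^K over K surveys, and with only three detections total the estimates will be imprecise. A quick sketch using the two point estimates quoted above (this is just arithmetic on the reported values, not a re-analysis):

```python
def cumulative_detection(p, k):
    """Probability of at least one detection across k independent surveys: 1 - (1 - p)**k."""
    return 1 - (1 - p) ** k

# Point estimates from the second river system, as reported above.
p_hoop, p_electro = 0.044, 0.0158

for k in (1, 5, 10):
    print(f"K = {k:2d}: hoop net p* = {cumulative_detection(p_hoop, k):.3f}, "
          f"electrofishing p* = {cumulative_detection(p_electro, k):.3f}")
```

Even over many surveys both gears remain unlikely to detect the species, so small differences in raw event counts (two detections versus one) are well within what either estimate could produce by chance, and the confidence intervals on these p values almost certainly overlap heavily.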
Sorry for so many questions,
Chris