Hi,
I’m interested in accounting for “observer experience” in Royle & Link’s (2006) model, which allows for both false-negative and false-positive errors.
Here’s our situation. We used 4 groups of 3 observers each to survey for a forest pest. Two of the groups consisted entirely of volunteers with no prior experience, while the remaining two groups contained one and two experienced observers, respectively, the other members being volunteers. Groups were assigned sites at random, and individuals within groups visited sites independently.
As a proxy for abundance, we returned to all trees where the pest was detected and counted the number of individuals observed. Because this pest is sessile and because the surveys were completed on the same day, (1) heterogeneity in detections should be entirely a function of observer skill, and (2) there should be no change in abundance between surveys. Of course, we may not have detected or counted all individuals on our return surveys.
We considered four models, crossing whether false positives were allowed with whether detection and misclassification probabilities were constant or differed among surveys (observers, in this case).
We found that: (1) groups with experienced observers detected the pest at a greater proportion of sites, (2) experienced individuals detected smaller populations than volunteers, (3) experienced individuals had a higher probability of detecting populations than volunteers, and (4) surprisingly, experienced individuals ALSO HAD A (MUCH) HIGHER PROBABILITY OF MISIDENTIFYING the target species than volunteers.
This latter finding is easy to explain once we consider the detection histories. The most common history was one in which an experienced observer detected a population that two of the volunteers did not; in terms of the model, it is therefore more likely that the experienced observer is ‘wrong’ and the two volunteers are correct. The model consequently assigns a high misclassification probability to the experienced observer, resulting in a biased estimate of site occupancy. However, our return surveys to count abundance suggest the opposite: the experienced individual detected a very small population that the two volunteers failed to detect.
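To make the likelihood argument concrete, here is a toy calculation (all numbers are hypothetical, chosen purely for illustration) showing how, under a false-positive model with survey-specific misclassification but a shared detection probability, the history (1, 0, 0) (experienced observer detects, two volunteers do not) can receive more likelihood from the “site empty, experienced observer misidentified” branch than from the “site occupied, experienced observer correct” branch:

```python
# Toy illustration (hypothetical numbers): likelihood contributions of the
# detection history y = (1, 0, 0), where observer 1 is experienced and
# observers 2 and 3 are volunteers.

psi = 0.5                # occupancy probability (assumed)
p11 = [0.7, 0.7, 0.7]    # true-detection probabilities, shared across observers
                         # (as in a model that ignores observer experience)
p10 = [0.5, 0.02, 0.02]  # false-positive probabilities the model may end up
                         # assigning when the experienced observer repeatedly
                         # "disagrees" with the volunteers

y = [1, 0, 0]

def history_prob(y, p):
    """Probability of detection history y given per-observer probabilities p."""
    out = 1.0
    for yi, pi in zip(y, p):
        out *= pi if yi else (1.0 - pi)
    return out

occupied_term = psi * history_prob(y, p11)          # occupied, obs 1 correct
unoccupied_term = (1 - psi) * history_prob(y, p10)  # empty, obs 1 "wrong"

print(occupied_term, unoccupied_term)
```

With these (made-up) numbers the false-positive branch dominates, so maximizing the likelihood pushes the experienced observer’s misclassification probability upward, which is exactly the bias described above.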
Our initially surprising results arise because observers of different abilities occur within the same group, but these differences are not incorporated in the model. This situation causes serious problems for the misidentification models and yields estimates of site occupancy that are extremely biased.
My question is: Is there a way to account for heterogeneity in detection / misclassification probabilities given that they are related to observer experience and abundance? Would a simple covariate (1 = experienced, 0 = volunteer) do it?
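For what it’s worth, one way such a covariate could enter is through logit links on both the detection and misclassification probabilities. The sketch below (parameter names are my own, hypothetical, and not from Royle & Link’s code) builds the corresponding likelihood so it could be handed to a general-purpose optimizer:

```python
import math

def inv_logit(x):
    """Logistic link: maps a real number to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neg_log_lik(params, histories, experience):
    """Negative log-likelihood of a false-positive occupancy model in which
    detection (p11) and misclassification (p10) are logit-linear in an
    observer-experience indicator x_j (1 = experienced, 0 = volunteer).

    params = (psi_logit, a0, a1, b0, b1), all hypothetical names:
      p11_j = inv_logit(a0 + a1 * x_j),  p10_j = inv_logit(b0 + b1 * x_j)
    """
    psi_logit, a0, a1, b0, b1 = params
    psi = inv_logit(psi_logit)
    p11 = [inv_logit(a0 + a1 * x) for x in experience]
    p10 = [inv_logit(b0 + b1 * x) for x in experience]
    nll = 0.0
    for y in histories:  # one detection history per site
        occ = psi        # branch: site occupied, detections are true positives
        emp = 1.0 - psi  # branch: site empty, detections are false positives
        for yj, q11, q10 in zip(y, p11, p10):
            occ *= q11 if yj else (1.0 - q11)
            emp *= q10 if yj else (1.0 - q10)
        nll -= math.log(occ + emp)
    return nll

# Tiny made-up example: three observers (first experienced), four sites.
experience = [1, 0, 0]
histories = [(1, 0, 0), (1, 1, 1), (0, 0, 0), (1, 1, 0)]
start = (0.0, 0.0, 1.0, -3.0, 0.0)  # arbitrary starting values
print(neg_log_lik(start, histories, experience))
```

This is only a sketch: in practice you would maximize it (e.g. with scipy.optimize.minimize, or the equivalent machinery in your occupancy software), and you would likely still want constraints or informative structure on p10, since false-positive models can be hard to identify from detection histories alone.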
Thanks in advance!