Hi - I've been running analyses to examine false-positive errors, specifically how these vary by observer experience level. I've been using a combination of Royle & Link's (2006) R code, Presence, and the Chapter 12 Excel spreadsheet from the (excellent) tutorial exercises.
I've modified Royle & Link's R code to give me survey-specific detection and misclassification probabilities. My R code produces values for those probabilities and for psi that match the ones I get from the Excel sheet and Presence. However, for some of the models, the AIC values are not the same across R, Presence, and Excel.
To get to the bottom of this, I also ran Royle & Link's own data in both R and Presence, and again I get different AIC values. Using the R code with Royle & Link's BLJA dataset, I get an AIC of 168.08 for the constrained model - the same value reported in the paper. When I run the same data in Presence, I get an AIC of 443.21.
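For what it's worth, here is a minimal sketch (in Python, with a hypothetical constant - I haven't checked either program's source, so this is just my working hypothesis) of how two programs can report different raw AIC values for the same fitted model: if one implementation drops a constant term from the log-likelihood (e.g. a combinatorial coefficient), every model's AIC shifts by the same amount, so delta-AIC and model rankings are unchanged even though the absolute values disagree.

```python
def aic(log_lik, k):
    """Akaike's Information Criterion: AIC = -2*logL + 2*k."""
    return -2.0 * log_lik + 2.0 * k

# Hypothetical numbers for illustration only (not from my actual fits).
log_lik = -80.04   # maximized log-likelihood from program A
k = 4              # number of estimated parameters

# Suppose program B's likelihood omits a constant c (so its logL is lower by c).
c = 137.565        # chosen so the AIC gap is 2*c = 275.13, matching the
                   # 443.21 - 168.08 discrepancy I see - a guess, not a diagnosis

aic_a = aic(log_lik, k)
aic_b = aic(log_lik - c, k)

# The gap is exactly 2*c for every model, so differences *between* models
# (delta-AIC) computed within either program are identical.
print(round(aic_b - aic_a, 2))  # prints 275.13
```

If that's what's going on here, model selection within each program should still agree even though the raw AICs don't - but I'd like to confirm that's actually the cause.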
The R code models psi_p(.)_alpha=0, whereas Presence models psi_p(.). Shouldn't these be equivalent?