Dear MARK-er,
I am trying to estimate annual variation in age-specific true survival and fidelity (the complement of permanent emigration, enforced by setting F'=0) of spoonbills in a large breeding area, using the Barker model.
Many of the fidelity parameters of adult birds (and also some survival rates) are estimated at 1. I used the simulated annealing algorithm to make sure I did not end up at a local minimum.
Most of the real parameters that were estimated at 1 have confidence intervals of 0.9999999-1.000000. However, the SEs and CIs of the corresponding beta estimates are huge, and back-transforming them to the real scale yields a CI of 0.00-1.00.
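To make the discrepancy concrete, here is a toy sketch in Python. All numbers are made up (a beta of 15 with an SE of 50 standing in for a boundary estimate), and the contrast between the two intervals rests on my assumption that the narrow real-scale CI comes from a delta-method SE while the (0,1) interval comes from back-transforming the beta-scale Wald CI:

```python
import math

def inv_logit(x):
    """Back-transform from the logit (beta) scale to the real scale."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical boundary estimate: beta_hat is large, so p_hat is near 1,
# and the beta-scale SE is huge (numbers made up for illustration).
beta_hat, se_beta = 15.0, 50.0
p_hat = inv_logit(beta_hat)

# (a) Back-transforming the Wald CI on the beta scale spans almost (0, 1):
real_ci_backtransformed = (inv_logit(beta_hat - 1.96 * se_beta),
                           inv_logit(beta_hat + 1.96 * se_beta))

# (b) A delta-method SE on the real scale multiplies se_beta by
# p*(1-p), which is ~0 at the boundary, so the real-scale CI collapses:
se_real = p_hat * (1.0 - p_hat) * se_beta
real_ci_delta = (p_hat - 1.96 * se_real, p_hat + 1.96 * se_real)
```

With these numbers, interval (a) is effectively 0.00-1.00 while interval (b) hugs 1, which is the same pattern I see in my output.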
I am fairly confident that most of the parameters estimated at 1 are truly close to 1 (spoonbills are very faithful to their breeding area), and are therefore not an artifact of sparse data: the dataset is pretty large, and annual estimates somewhat below 1 come with reasonable CIs, such as 0.85-0.97. Moreover, the CIs of the boundary estimates shrink considerably when I apply data cloning.
My questions:
1) How can the difference between the CI on the real scale (0.99-1.00) and the back-transformed beta CI (0.00-1.00) be explained?
2) When I correct the real-parameter CIs for overdispersion in MARK, they remain approximately 0.99-1.00, whereas doing the same in RMark with the get.real() function gives CIs of 0-1. It seems that get.real() simply back-transforms the chat-adjusted beta CI, but what is it that MARK actually does here? Why do the RMark and MARK results differ?
3) Unrelated to the CIs of boundary estimates: is it true that the Barker model is in fact a multistate model with two states (being at risk of capture or not at risk of capture during capture occasions), and if so, should we in general be suspicious of local minima when fitting Barker models (even in the absence of boundary estimates such as those in my case)?
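To show what I mean in question 2, here is a toy Python sketch of the two adjustments I suspect are being applied. All numbers are hypothetical, and the delta-method branch is only my guess at what MARK does internally:

```python
import math

def inv_logit(x):
    """Back-transform from the logit (beta) scale to the real scale."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical numbers: boundary beta estimate, huge beta SE, c-hat > 1.
beta_hat, se_beta, c_hat = 15.0, 50.0, 2.0
se_adj = se_beta * math.sqrt(c_hat)  # overdispersion-inflated beta-scale SE

# What get.real() appears to do: back-transform the chat-adjusted beta CI.
# The result still spans almost (0, 1), as I see in RMark:
rmark_like_ci = (inv_logit(beta_hat - 1.96 * se_adj),
                 inv_logit(beta_hat + 1.96 * se_adj))

# If MARK instead inflates a delta-method real-scale SE, the CI stays
# narrow, because p*(1-p) is ~0 at the boundary:
p_hat = inv_logit(beta_hat)
se_real_adj = p_hat * (1.0 - p_hat) * se_adj
mark_like_ci = (p_hat - 1.96 * se_real_adj, p_hat + 1.96 * se_real_adj)
```

If this guess is right, it would explain why the MARK interval barely moves under a chat adjustment while the RMark interval becomes 0-1, but I would appreciate confirmation of what MARK actually computes.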
I hope someone can help!
Kind regards,
Tamar