Hi,
I am running a GOF test (10k bootstraps) on my global (single-season, spatial dependence) model. Detection data were collected along a stream reach; because all surveys commenced at the head of a riffle section, detection in the first 50 m segment was allowed to differ from subsequent detections, and the remaining detection data were broken into 50 m segments. Consequently, the p1 detection parameter is ~0.95 (p2+ ~0.75), and the GOF output indicates a c-hat of 0.0000 (chi-square around 0.3)! The distribution of simulated test statistics is very skewed, with a high proportion of results in the lower categories but a few massive test statistics that pull up the average test statistic. When I remove the p1 detection parameter from the global model, the test seems to run fine, with a more balanced test distribution and very little evidence of over- or under-dispersion (c-hat 1.02, p = 0.37).
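For what it's worth, here is a minimal sketch (not my actual data or software output) of the arithmetic I think is going on, assuming c-hat is estimated as the observed test statistic divided by the mean of the bootstrap-simulated statistics. The simulated values below are made up purely to mimic the skewed distribution I described: mostly small statistics plus a handful of massive ones.

```python
import random

random.seed(1)

# Hypothetical bootstrap distribution: mostly small statistics,
# plus a few massive outliers (mimicking the skew described above).
sim = [random.expovariate(1.0) for _ in range(9_990)]
sim += [random.uniform(5_000, 50_000) for _ in range(10)]

t_obs = 0.3  # observed chi-square, as in my output

mean_sim = sum(sim) / len(sim)
c_hat = t_obs / mean_sim  # assumed: c-hat = observed / mean(simulated)
p_value = sum(t >= t_obs for t in sim) / len(sim)

# A few huge simulated statistics inflate the mean, so even though
# most simulated values are comparable to t_obs, c-hat is driven
# toward zero while the bootstrap p-value stays unremarkable.
print(c_hat, p_value)
```

So a near-zero c-hat here would reflect a handful of extreme simulated statistics inflating the denominator, not the observed fit itself.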
Is it possible the high detection probability in p1 is causing this problem? How would you handle this situation?
Thanks in advance,
Mike