goodness of fit tests for multi-season models

questions concerning analysis/theory using program PRESENCE

goodness of fit tests for multi-season models

Postby Lea » Fri Jan 30, 2015 4:56 pm

Hi there,
Last I heard, there was no way to assess the fit of a multi-season model. Is this still the case? I had someone suggest changing c-hat to other values to see if it would affect the ranking of my models, and thus how sensitive they might be to overdispersion. It seems like an interesting idea, but I wanted to see what others thought of this approach. I also wondered what values I should use: values < 4?
Cheers,
Lea
Lea
 
Posts: 33
Joined: Thu Oct 24, 2013 1:09 pm

Re: goodness of fit tests for multi-season models

Postby Lea » Mon Feb 09, 2015 1:57 pm

Can anyone think of a paper that might have taken this approach?
Lea
 
Posts: 33
Joined: Thu Oct 24, 2013 1:09 pm

Re: goodness of fit tests for multi-season models

Postby Lea » Wed Feb 11, 2015 7:58 pm

Well, in case anyone is interested, I finally found a reference to it in the MARK book on pg. 181.

4. It is also worth looking qualitatively at the 'sensitivity' of your model rankings to changes in ĉ. Manually increase ĉ in the results browser from 1.0, 1.25, 1.5 and so on (up to, say, 2.0), and look to see how much the 'results' (i.e., relative support among the models in your candidate model set) changes. In many cases, your best model(s) will continue to be among those with high AIC weight, even as you increase ĉ. This gives you some grounds for confidence (not much, perhaps, but some). Always remember, though, that in general, the bigger the ĉ, the more 'conservative' your model selection will be - AIC will tend to favor reduced parameter models with increasing ĉ (a look at the equation for calculating AIC will show why). This should make intuitive sense as well - if you have 'noise' (i.e., lack of fit), perhaps the best you can do is fit a simple model. In cases where the model rankings change dramatically with even small changes in ĉ, this might suggest that your data are too sparse for robust estimation, and as such, there will be real limits to the inferences you can make from your candidate model set.
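For anyone who wants to try this sort of check outside the results browser, here is a minimal Python sketch just to make the arithmetic concrete. It assumes you have each model's -2 log-likelihood and parameter count from your output; the model names and numbers below are made up, and the small-sample (QAICc) correction is left out.

    import math

    # Hypothetical multi-season models: name -> (-2 log-likelihood, number of parameters K).
    # These numbers are for illustration only; substitute the values from your own output.
    models = {
        "psi(.),gam(.),eps(.),p(.)":   (812.4, 4),
        "psi(.),gam(t),eps(t),p(t)":   (790.1, 12),
        "psi(hab),gam(t),eps(t),p(t)": (787.6, 13),
    }

    def qaic(neg2loglike, k, c_hat):
        """Quasi-AIC: the deviance term is scaled by the overdispersion factor c-hat."""
        return neg2loglike / c_hat + 2 * k

    for c_hat in (1.0, 1.25, 1.5, 1.75, 2.0):
        scores = {name: qaic(d, k, c_hat) for name, (d, k) in models.items()}
        best = min(scores.values())
        raw = {name: math.exp(-(s - best) / 2.0) for name, s in scores.items()}  # Akaike weights
        total = sum(raw.values())
        print(f"c-hat = {c_hat}")
        for name, w in sorted(raw.items(), key=lambda kv: -kv[1]):
            print(f"  {name:30s} QAIC weight = {w / total:.3f}")

If the top model keeps most of the weight across the range of ĉ values, the ranking is fairly insensitive to overdispersion; if the order flips around, the caution in the quote above applies.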
Lea
 
Posts: 33
Joined: Thu Oct 24, 2013 1:09 pm

Re: goodness of fit tests for multi-season models

Postby Daisy » Sun Nov 22, 2015 5:47 am

Thank you, your sleuthing helped me :) Sorry nobody answered you!
Daisy
 
Posts: 9
Joined: Sat Feb 07, 2015 7:38 pm
Location: Michigan, USA

Re: goodness of fit tests for multi-season models

Postby Lea » Thu Dec 03, 2015 5:43 pm

Glad it was helpful!
Lea
 
Posts: 33
Joined: Thu Oct 24, 2013 1:09 pm

Re: goodness of fit tests for multi-season models

Postby Lea » Tue Feb 23, 2016 7:06 pm

Further to this topic, I wonder if there is any value in running GOF tests on each single-season model and using the average c-hat value to adjust the multi-season model?
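In case it helps to make the idea concrete, here is a minimal sketch of the kind of averaging I mean (the per-season ĉ values below are hypothetical):

    # Hypothetical c-hat estimates from single-season GOF tests
    # (e.g. the parametric bootstrap in PRESENCE), one per season.
    season_c_hats = [0.8, 1.4, 1.1, 2.0]

    # c-hat values below 1 are commonly reset to 1 (i.e., no overdispersion
    # adjustment); whether that is sensible here is part of what I'm asking.
    adjusted = [max(c, 1.0) for c in season_c_hats]
    mean_c_hat = sum(adjusted) / len(adjusted)
    print(f"average c-hat across seasons: {mean_c_hat:.2f}")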
Lea
 
Posts: 33
Joined: Thu Oct 24, 2013 1:09 pm

Re: goodness of fit tests for multi-season models

Postby pennyb » Mon Sep 05, 2016 11:33 pm

Lea wrote: Further to this topic, I wonder if there is any value in running GOF tests on each single-season model and using the average c-hat value to adjust the multi-season model?


Lea, did you try this method out, or find out whether it is appropriate? I'm currently trying to figure out the best way to test goodness of fit for a multi-season model too.
pennyb
 
Posts: 13
Joined: Tue Sep 23, 2014 9:06 pm

Re: goodness of fit tests for multi-season models

Postby Lea » Tue Dec 20, 2016 12:28 pm

Oops, sorry Penny, I just saw your question. It looked like some years had evidence of underdispersion and others had evidence of overdispersion, so averaging c-hat wouldn't be of much use. I haven't seen any other suggestions on how to assess goodness of fit, other than altering c-hat to see how sensitive the models are to overdispersion.
Lea
 
Posts: 33
Joined: Thu Oct 24, 2013 1:09 pm

