
Changing results when re-running models

Posted: Wed Oct 30, 2019 5:54 pm
by ewhite
Hi all,

I've been running multi-state models in MARK and noticed today that when I re-run a model (not changing anything, just re-loading the model and re-running it), I get different results each time: the deviance, number of parameters, and AIC values all change between runs.

Any ideas why this would happen, or how I can get consistent results with each model run?

Thanks,
Emma

Re: Changing results when re-running models

Posted: Wed Oct 30, 2019 6:24 pm
by jlaake
Try setting threads=1. Using multiple CPUs can lead to slightly different results. It will take longer with a single thread.
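
If you happen to be running things through RMark rather than the MARK interface, here is a minimal sketch of what I mean (the input file, data object, and formulas below are just placeholders, not your analysis):

    library(RMark)

    # placeholder CJS example; substitute your own capture-history data
    ehdata <- convert.inp("example.inp")
    dp     <- process.data(ehdata, model = "CJS")
    ddl    <- make.design.data(dp)

    # threads = 1 forces MARK to optimize on a single CPU, so repeated
    # runs of the same model return identical deviance and AIC values
    fit <- mark(dp, ddl,
                model.parameters = list(Phi = list(formula = ~time),
                                        p   = list(formula = ~time)),
                threads = 1)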

Re: Changing results when re-running models

Posted: Thu Oct 31, 2019 9:28 am
by nperlut
I am having the same issue with CJS models: I run the exact same model and get different results. I tried your suggestion and set threads=1, but the deviance still differs by 0.2 between runs. Any other thoughts on what is going on?

Re: Changing results when re-running models

Posted: Thu Oct 31, 2019 10:13 am
by jlaake
Are you using the MARK interface or RMark? If the latter, send me your data and code and I'll look into it. If it is the MARK interface, Gary will have to address it.

Re: Changing results when re-running models

Posted: Thu Oct 31, 2019 10:18 am
by ewhite
I'm using the MARK interface.

Re: Changing results when re-running models

Posted: Thu Oct 31, 2019 5:08 pm
by cooch
I was just about to make the same suggestion Jeff did. Since that proposed solution didn't work, the more likely remaining possibility is that one or more parameters are being estimated up near a boundary, which can cause these sorts of problems.

Send me the .fpt and .dbf files, and I'll have a look.

Re: Changing results when re-running models

Posted: Sat Nov 02, 2019 7:45 pm
by gwhite
The reason you are getting different results is that your models are horribly over-parameterized. You have specified 215 beta parameters, but MARK is only estimating 52 or 53 of them, and the likelihood is nearly flat. On top of that, you are running these models with 4 threads, which in a situation like this will give small differences between runs.

You need to start with a much simpler model and use the estimates from it to build more complex models. The Gentle Introduction provides details of how to do this. You cannot expect a model with 215 parameters to optimize correctly without good starting values.
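
For illustration only, in RMark syntax with placeholder formulas rather than your actual 215-parameter structure, the idea is to fit a reduced model first and then pass it to the bigger model as starting values:

    library(RMark)

    # 'ehdata' stands in for your own capture-history data frame
    dp  <- process.data(ehdata, model = "CJS")
    ddl <- make.design.data(dp)

    # step 1: fit a simple, well-behaved model
    simple <- mark(dp, ddl,
                   model.parameters = list(Phi = list(formula = ~1),
                                           p   = list(formula = ~1)),
                   threads = 1)

    # step 2: pass the simple model as 'initial' so its beta estimates
    # are used as starting values for the more complex model
    complex <- mark(dp, ddl,
                    model.parameters = list(Phi = list(formula = ~time),
                                            p   = list(formula = ~time)),
                    initial = simple,
                    threads = 1)

In the MARK interface, the equivalent is to provide initial parameter estimates when you set up the numerical estimation run.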

Gary