GDistiller wrote: Thanks Gary! That was the problem, the model is now busy running...
I have another larger multi-state model that I am trying to estimate with simulated annealing. It has been running for 2 weeks now...does this sound reasonable? I know that the documentation says it can take a very long time to run. Am I correct that if it got stuck somewhere it would abort the process by itself?
Depending on the problem, it is possible for the chain to 'get stuck'. This is the nature of the beast - for distributions with multiple modes, there are all sorts of issues that can trap a chain. There are some 'technical' solutions, but they would mean swapping out the underlying sampler (a general Metropolis-Hastings), which means changes to the underlying code base - not going to happen in the short run.
Alternatively, you can run multiple chains from very different starting points and see what happens. Finally, the usual strategy of taking the time-series output from one or more chains and dumping it into CODA (or some such) to look at the various traces and diagnostics is always an option.
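For that last step, here is a minimal sketch in R using the coda package, assuming you have saved each chain to its own CSV file with one column per parameter (the file names and column layout are hypothetical placeholders - adapt them to however you export the chain output):

# Minimal sketch: convergence diagnostics with the coda package.
# Assumes each chain was exported as a CSV with one column per parameter
# (file names below are hypothetical placeholders).
library(coda)

chain_files <- c("chain1.csv", "chain2.csv", "chain3.csv")

# wrap each chain as an mcmc object, then combine them into an mcmc.list
chains <- mcmc.list(lapply(chain_files, function(f) mcmc(read.csv(f))))

plot(chains)              # trace and density plots - look for stuck or drifting chains
summary(chains)           # posterior means, SDs, and quantiles
effectiveSize(chains)     # effective sample size per parameter
gelman.diag(chains)       # Gelman-Rubin diagnostic; needs 2+ chains from different starts
geweke.diag(chains)       # within-chain (Geweke) convergence check

The Gelman-Rubin diagnostic is the piece that makes the 'multiple chains from very different starting points' strategy pay off - values well above 1 suggest the chains have not settled into the same region.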
As for the time to run, this always amuses me. In some fields, a single experiment can take months -> years to complete. But, in quantitative ecology, we seem to think we should get answers in seconds -> minutes. If the result is important to you, how long it takes should be relatively unimportant. And, the moment you move into MCMC, the time to completion can be much, much longer (I have had jobs that took 2-3 months to complete - for a single model).
Of course, this presumes that your compute environment is stable (your second query)....
This leads me to my next question: are there any PC clusters around that can run MARK jobs? If there is a power failure then I lose everything and have to start again (this has already happened and cost me several days)...plus both my PC and MacBook are busy running models, making it difficult for me to use them for other work at the same time...
Thanks!
Greg
Short answer - no (not that I'm aware of). If I'm guessing correctly that you're at a University, then that's what University compute infrastructure is designed to do (in other words, look for something at your end). In the modern era of computationally intensive statistical inference, you need a compute environment that is never going to be turned off, and is as close to failsafe as you can get. Which often means big central server farms with 24x7 maintenance (this would include the 'cloud' paradigm in many cases) - which is what old farts like me remember from the 'central mainframe' days. Interesting to see the paradigm swinging back away from the desktop to 'big iron' that is designed for reliability. Consider the acquisition of high-end, reliable computing a long-term 'research investment'.