Handling unestimable parameters


Postby JonL » Mon Nov 12, 2007 10:21 am

Sometimes (to say the least) a run of a model produces parameter estimates that are clearly nonsensical (or at least unusable), e.g. an SE that is absurdly small or large.

When this happens, the other parameter estimates that "look correct" also tend to differ vastly from the corresponding estimates from runs where all estimates make sense.

Does this mean that (1) I did something wrong when specifying the model (parameter indices, link functions, etc.), (2) there is something "wrong" with the data, or (3) my data simply do not fit the particular model (or at least that this is a possibility)?

When this happens, can I simply discard the model, claiming that it is "unfit" for the data (even if it is the model with the lowest AIC), and confine my interpretations to the models that produce all "sensible" parameter estimates?

In a particular case of the robust design, a Markovian model produces S2 = 0.9999999 ± 0.0000001 (or so), but a random-emigration model gives S2 = 0.58 ± 0.13 and also different estimates for the other S's (and a gamma that is smaller than both gamma'' and gamma' in the strange (Markovian) model). I would prefer to trust the random model for all values.

Of course I do not expect anybody to debug my results, but I give the example so you get a feeling for my problem.
JonL

Postby abreton » Mon Nov 12, 2007 4:23 pm

Concerning your three questions: Does this mean that (1) I did something wrong when specifying the model (parameter indices, link functions, etc.), (2) there is something "wrong" with the data, or (3) my data simply do not fit the particular model (or at least that this is a possibility)?

Short answers: (1) possibly; (2) possibly; (3) no. What you're experiencing when you see wonky SEs is partial convergence failure; when complete convergence failure occurs, MARK crashes. By convergence failure I'm referring to failure of the solution algorithm employed by MARK to maximize the likelihood function specified by your model. When it 'partially fails', it fails to find an estimate for one or a few of the model's structural parameters (betas), and these are typically detected in the output provided by MARK (as you've done) through their anomalous SEs (in the betas, not the reals).
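To make the "wonky SE" symptom concrete, here is a minimal sketch (not anything MARK itself does; the thresholds are illustrative guesses) of how you might screen real-scale estimates for the two telltale signs of partial convergence failure, an anomalous SE and an estimate pinned at a boundary:

```python
def flag_suspect(estimate, se, se_lo=1e-6, se_hi=50.0, tol=1e-6):
    """Return reasons a probability-scale estimate looks like partial
    convergence failure. Thresholds are illustrative, not from MARK."""
    reasons = []
    if se < se_lo or se > se_hi:
        reasons.append("anomalous SE")
    if estimate < tol or estimate > 1.0 - tol:
        reasons.append("estimate at boundary")
    return reasons

# The questioner's Markovian-model S2 trips both checks:
print(flag_suspect(0.9999999, 0.0000001))
# The random-emigration model's S2 looks fine (empty list):
print(flag_suspect(0.58, 0.13))
```

A screen like this is only a triage aid; a flagged parameter still needs to be inspected in the full MARK output.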

Regarding your question #3, convergence is not a function of how well a dataset 'fits' the structure specified by a model (e.g., age- and sex-dependent resighting probabilities). Instead, it is limited by how much data you have. Stated differently, convergence failure is a function of model complexity and available data. As available data increase, so does access to more complex models; as they decrease, users become more restricted in the complexity of the models they can specify without experiencing convergence failure.

In an ideal world, data would not be limiting; and as a result, any model imagined by the analyst could be fitted and estimates of model parameters found. Unfortunately, you and I and many others have discovered that this ideal is not available - and in its place we have 'convergence failure'.

Returning to your list, if you replaced #3 with "or do I have too few data to support the model where I detected convergence failure", then you would probably have the complete set of problems you can run into when fitting CMR and other types of models. As you guessed, it's not possible to say what is causing your convergence error; it may be 1, 2, or 3 (or more than one of these). Try searching the MARK forum archives for 'convergence error' and/or "wonky SEs" (etc.) and you'll likely discover lots of suggestions. Good luck.
abreton
