Sometimes (to say the least) a model run produces parameter estimates that are clearly nonsensical (or at least unusable), e.g. an SE that is absurdly small or large.
When this happens, even the other parameter estimates that "look correct" tend to differ vastly from the corresponding estimates from runs where all estimates make sense.
Does this mean that (1) I did something wrong when specifying the model (parameter indices, link functions, etc.), (2) there is something "wrong" with the data, or (3) my data simply does not fit that particular model (or at least that this is a possibility)?
When this happens, can I simply discard the model, claiming that it is "unfit" for the data (even if it is the model with the lowest AIC), and restrict my interpretations to the models that produce all "sensible" parameter estimates?
In a particular case of the robust design, the Markov model produces S2 = 0.9999999 +- 0.0000001 (or so), whereas the Random model gives S2 = 0.58 +- 0.13, different estimates for the other S's, and a gamma that is smaller than both gamma'' and gamma' in the strange (Markov) model. I would prefer to trust the Random model for all values.
Of course, I do not expect anybody to debug my results; I give the example only so that you get a feeling for my problem.
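For what it's worth, here is a minimal toy sketch of what I suspect is going on (plain Python with made-up counts, using a simple binomial survival probability rather than the actual robust-design likelihood): an estimate pushed to the boundary comes back looking like 0.9999999 +- 0.0000001 because the Wald SE is taken from the curvature of the log-likelihood, which blows up at the boundary.

    import numpy as np

    # Toy sketch, NOT the robust-design likelihood: a plain binomial survival
    # probability with hypothetical counts. When every marked animal survives,
    # the MLE is pushed to the boundary and the Wald SE, taken from the
    # curvature of the log-likelihood, collapses toward zero.
    n = 40  # hypothetical number of marked animals

    for survived in (23, 39, 40):
        s_hat = survived / n
        # Optimizers stop just inside the [0, 1] bound, hence 0.9999999-type output.
        s_hat = min(s_hat, 1 - 1e-7)
        # Wald SE for a binomial proportion (inverse Fisher information): sqrt(s(1-s)/n).
        se = np.sqrt(s_hat * (1 - s_hat) / n)
        print(f"survived {survived:2d}/{n}: S_hat = {s_hat:.7f}, SE = {se:.7f}")

If that reading is right, the Markov run's S2 looks like an estimate stuck at the boundary of the parameter space rather than a genuinely precise value.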