by abreton » Tue Jan 22, 2008 3:08 pm
Off the top of my head, I can think of three scenarios where it is reasonable to fix one or more parameters: (1) when you know the value of the parameter a priori; (2) when the parameter is not estimable given the model structure; (3) when a biologically plausible value is available for a parameter that is normally estimable given the model but, due to data sparsity or some other issue with the data, cannot be estimated from your dataset.
I recently completed a simple CJS analysis and applied scenario 1: all animals released on occasion 3 were seen again on occasion 4, so all of them survived that interval with probability 1.0. In this case, since I knew the value of the parameter a priori, it made no sense to ask the model to 'estimate' it for me. In fact, relying on the model to estimate the parameter could have caused convergence problems, since this parameter was on the upper boundary (=1).
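For what it's worth, here is a minimal sketch of what happens numerically in that situation. The cohort size is invented, and scipy's bounded optimizer stands in for whatever search routine your software uses:

[code]
# Toy binomial likelihood for survival phi when every released animal was
# resighted, so the MLE sits on the boundary phi = 1. (Numbers are made up.)
import numpy as np
from scipy.optimize import minimize_scalar

released, resighted = 50, 50   # hypothetical cohort: everyone survived

def neg_log_lik(phi):
    phi = np.clip(phi, 1e-9, 1 - 1e-9)   # keep log() finite at the boundary
    return -(resighted * np.log(phi)
             + (released - resighted) * np.log(1 - phi))

# Asking the optimizer to 'estimate' phi just drives it against the upper
# boundary; the likelihood keeps increasing all the way to phi = 1, which
# is exactly what upsets convergence checks and inflates standard errors.
fit = minimize_scalar(neg_log_lik, bounds=(1e-9, 1 - 1e-9), method="bounded")
print(fit.x)   # ~1.0, pinned to the boundary

# Fixing phi = 1.0 removes the parameter from the search space entirely.
phi_fixed = 1.0
[/code]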
There are instances when a parameter cannot be estimated by the model no matter how much data are available; in these cases, you are advised to fix the parameter(s) to some value or to set them equal to an estimable parameter in the model. Here is an example for the Robust Design, from Chapter 16, section 16.3.3, 2nd paragraph: "To provide identifiability of the parameters for the Markovian emigration model (where an animal 'remembers' that it is off the study area) when parameters are time-specific, Kendall et al. (1997) stated that γ″_k and γ′_k need to be set equal to γ″_t and γ′_t, respectively, for some earlier period. Otherwise these parameters are confounded with S_(t−1). They suggested setting them equal to γ″_(k−1) and γ′_(k−1), respectively, but it really should depend on what makes the most sense for your situation. This problem goes away if either movement or survival is modeled as constant over time."
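To make the "set them equal" idea concrete, here is a minimal sketch in the spirit of MARK's parameter index matrices (PIMs). The number of primary periods and all numeric values below are invented for illustration; they are not from Kendall et al. (1997):

[code]
# Hypothetical example: k = 5 primary periods, time-specific Markovian
# emigration. The constraint gamma''_k = gamma''_(k-1) (and likewise for
# gamma') is imposed by pointing both periods at the same underlying
# parameter index, much as a PIM does.
k = 5

gamma_pp_index = [1, 2, 3, 4, 4]   # gamma'': last two periods share index 4
gamma_p_index  = [5, 6, 7, 8, 8]   # gamma' : last two periods share index 8

# invented values, standing in for whatever the optimizer would return
beta = {1: 0.30, 2: 0.25, 3: 0.40, 4: 0.35,
        5: 0.15, 6: 0.10, 7: 0.20, 8: 0.18}

gamma_pp = [beta[i] for i in gamma_pp_index]   # ends [..., 0.35, 0.35]
gamma_p  = [beta[i] for i in gamma_p_index]    # ends [..., 0.18, 0.18]
print(gamma_pp, gamma_p)
[/code]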
In my experience, convergence failure is extremely common for analysts fitting capture-recapture models. And this may be for the simple reason that many of us are slow to realize that estimability not only declines with decreasing data - it also declines as the number of parameters in a model increases. People love to dream up complex models but often fail to realize how much data they'll need to get reasonable estimates - or any estimates at all - from the model.
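A back-of-the-envelope sketch of that trade-off, using the standard result that in a fully time-dependent CJS model phi(t)p(t) the terminal survival and detection probabilities are confounded (only their product is estimable):

[code]
# Count nominal vs. estimable parameters in a time-dependent CJS model
# with k capture occasions: phi_1..phi_(k-1) and p_2..p_k are 2(k-1)
# parameters, but phi_(k-1) and p_k appear only as a product, so at most
# 2k - 3 are separately estimable - and that ceiling assumes ample data.
def cjs_parameter_count(k):
    nominal = 2 * (k - 1)
    estimable = 2 * k - 3
    return nominal, estimable

for k in (4, 8, 12):
    print(k, cjs_parameter_count(k))   # parameter demands grow with occasions
[/code]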
In my view, all estimable parameters in a model should be successfully estimated given the model and the data; otherwise the model should not be included in the model set (i.e., not available for inference). The only exception I'd make is when a biologically plausible value is available for a parameter that cannot be found by the numerical search procedure/algorithm deployed by the software. If such a value is available (e.g., from a previous study), then it might be reasonable to fix the parameter to this value and retain the model in the model set.

I should note here that when one or a small fraction of the parameters in a model are not estimated given the data, model, and solution strategy deployed by MARK (and other software), analysts have several options before giving up on the model or fixing parameters - e.g., a different form of the same model might do the trick (see Chapter 7 of the MARK manual), or a different link function when one or more parameters are on a boundary (close to 0 or 1). But in the end, if one or more parameters are not estimated by the model and a biologically supported value is not available, my view is that the model should not be included in the results/inference.
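On the link-function point, here is a small sketch of why the choice matters near a boundary. The sin link form below, p = (sin(beta) + 1)/2, is the one MARK uses with an identity design matrix; the logit link can only approach 1 as its beta runs off toward infinity:

[code]
# Compare how the logit and sin links behave for estimates near 1.
import numpy as np

def logit_inv(beta):
    return 1.0 / (1.0 + np.exp(-beta))

def sin_inv(beta):
    return (np.sin(beta) + 1.0) / 2.0   # MARK-style sin link

for b in (2.0, 5.0, 10.0):
    print(b, logit_inv(b))   # creeps toward 1 only as beta -> infinity

print(sin_inv(np.pi / 2))    # reaches 1.0 exactly at a finite beta
[/code]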
This latter suggestion is often not taken well by collaborators - especially when the model is necessary to test their hypothesis(es). Unfortunately, what they need to do in this scenario is back up, design/redesign their study, and acquire more (or the right) data. Of course, this advice will not make you popular. These are just a few thoughts, some of which may be helpful...