Adding a few things to Gary's response:
npseudacris wrote:The median c-hat output that is displayed in Notepad says that the analysis was performed as a known-fate model. I am assuming this is the default/only way, since that is how the help section explains that the logistic regression analysis (for median c-hat) is performed in MARK, and that it doesn't matter that my model is a Live Recaptures model. Am I correct?
Correct - as noted in the most current version of chapter 5, MARK uses the known-fate analysis to handle logistic regression, which is used to derive an estimate of median c-hat. See the -sidebar- on p. 27 of Chapter 5.
Finally, in setting the upper bound for the estimation, the example on page 5-27 of the book uses 5.5, which is slightly higher than the observed deviance c-hat. Should you always set the upper bound slightly higher than the observed deviance c-hat? I also checked the MARK help on this, and it said "....to find the approximate range in which to simulate c to focus the simulated data around the likely value of c that will result." Not the clarity I was hoping for; any further guidelines for setting the upper bound would be appreciated.
You seem to have an older version of the chapter. But, nonetheless, the strategy I generally recommend is: (i) run the design points from 1 -> some point greater than the observed value (i.e., bracket it, as Gary describes). Use 5 or 6 design points, with say 3-4 replicates per design point - just enough to give you a 'quick and dirty' idea of where the median c-hat is. Say it's ~1.65. Then (ii) re-run the analysis, using a bracket around this first estimate (say, 1 -> 2), again with 5-6 design points, but many more replicates (say, 15-20). The motivation is based on the fact that running the median c-hat routine is compute-intensive. Doing it in two stages - one quick and dirty, to find the general part of the curve you want to be in, and then a second, more intensive run in the range of that part of the curve - is often a time-saver in the end, especially if your data set is large...
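Just to make the bookkeeping concrete, here's a rough sketch (in Python, outside MARK - you'd enter these values in the median c-hat dialog yourself) of how the number of simulations scales between the two stages. The `design_points` helper and the specific bounds/counts are hypothetical, purely for illustration:

```python
# Illustrative only: the two-stage design-point strategy for median c-hat.
# design_points() is a made-up helper, not anything in MARK.
import numpy as np

def design_points(lower, upper, n_points, n_reps):
    """Return (c value, replicate index) pairs: n_points values of c
    spaced evenly over [lower, upper], each simulated n_reps times."""
    points = np.linspace(lower, upper, n_points)
    return [(round(float(c), 3), r) for c in points for r in range(1, n_reps + 1)]

# Stage 1: quick and dirty. Bracket the observed deviance c-hat (say 5.5)
# with few design points and few replicates per point.
stage1 = design_points(1.0, 5.5, n_points=6, n_reps=3)    # 18 simulations

# Suppose stage 1 suggests the median c-hat is ~1.65.
# Stage 2: tighter bracket around that estimate, many more replicates.
stage2 = design_points(1.0, 2.0, n_points=6, n_reps=15)   # 90 simulations

print(len(stage1), "simulations in stage 1;", len(stage2), "in stage 2")
```

The point of the arithmetic: most of the simulation effort (90 of 108 runs here) is spent only in the narrow range where it actually improves the estimate, rather than spread over 1 -> 5.5.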
In the book, I mention (in the particular example being discussed) actually using an upper bound lower than the observed 5.2 (I use 3.0). The reason (as noted) is that if your c-hat is >3, then you're probably going to have greater problems anyway - the general recommendation is that c-hat <= 3.0 is a reasonable adjustment to make. Anything >3 pretty well guarantees your best models will typically be of very low complexity (often 'dot' models). So, I simply went 1 -> 3 (but the dipper data set being used in that example is one we're all *rather* familiar with, so I knew I was pretty safe).