How to evaluate survival probability 1?

questions concerning analysis/theory using program MARK

Postby caracarn » Fri Apr 11, 2008 7:25 am

We are doing a Bachelor's thesis on capture-recapture methods and have found MARK to be a good program for the purpose. We have (among other things) used it to calculate survival and capture probabilities with the Cormack-Jolly-Seber (CJS) method, but we are wondering a little about how to handle the several cases in which the estimated survival probabilities are 1.00. Is this just a sign of a "bad" model, or can there be another reason?
We noticed similar problems with the standard Jolly-Seber model as well.

Some background: we are investigating an open population of birds that has been captured and recaptured over a period of 14 years, with each bird tagged individually.

At the moment we are looking at some methods using martingales, and also at Burnham's method, since we have some cases of dead recoveries where birds have been shot. However, documentation on how Burnham's method is built up seems hard to find, and the method itself seems a bit complicated.
caracarn
 
Posts: 3
Joined: Wed Mar 26, 2008 3:22 pm
Location: Gothenburg - Sweden

Postby caracarn » Fri Apr 11, 2008 8:32 am

As another small question: does MARK take survival estimates that are greater than 1 and replace them with 1?

estimates on a boundary, logit link

Postby abreton » Fri Apr 11, 2008 3:02 pm

An estimate of '1.0' does not necessarily reflect a problem, e.g., sparseness (equivalently, an overparameterized model relative to the available data). It could be that the parameter really is 1.0, which under the logit link means it lies on a 'boundary' (exactly 0 is the lower boundary; exactly 1 is the upper boundary). The logit link does not perform well when estimates are near or on a boundary. To answer your second question: the logit link constrains real-parameter estimates to the interval (0, 1) by construction, so a back-transformed estimate can never exceed 1 in the first place; MARK is not 'replacing' anything. The sin link likewise keeps estimates within [0, 1] but handles boundary estimates much better than the logit link; the identity link, in contrast, does allow estimates to fall outside the 'sensible' range for probabilities (0-1). One way to assess whether the estimate really lies on the boundary is to use the sin link (when available) to fit the same model that produced the 1.0 'estimates' under the logit link.
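The boundary behavior described above can be seen directly in the back-transforms. Below is a minimal Python sketch (not MARK code; the function names are mine) comparing the logit and MARK-style sin link back-transforms. Under the logit link, a real parameter of exactly 1 requires the beta to go to infinity, whereas the sin link reaches the boundary at a finite beta:

```python
import math

def inv_logit(beta):
    """Logit-link back-transform: maps any real beta into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-beta))

def inv_sin(beta):
    """MARK-style sin-link back-transform: real = (sin(beta) + 1) / 2."""
    return (math.sin(beta) + 1.0) / 2.0

# The logit back-transform only approaches 1 asymptotically, so an
# optimizer chasing a boundary estimate pushes beta toward +infinity
# and the beta standard errors blow up:
for beta in (5.0, 10.0, 20.0):
    print(beta, inv_logit(beta))

# The sin link hits the boundary at a finite beta (pi/2 maps to exactly 1):
print(inv_sin(math.pi / 2))  # -> 1.0
```

This is why refitting with the sin link is a useful diagnostic: a parameter that is genuinely on the boundary is estimable (and counted) there, while under the logit link it shows up as a huge beta with an unstable standard error.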

Whenever I deploy the logit link and detect estimation problems, including the issue you posted, I always (1) have a look at the m-array (with your analysis open in MARK, click Output > Input Data Summary) and (2) start asking questions about the field study (design, data collection). Often I discover that, e.g., resighting probabilities for some years were close to or exactly unity (1.0). Since the logit link has 'problems' with estimates on a boundary, these boundary estimates often cause estimation problems, which in my experience are most apparent in the estimates of the model's structural parameters (betas) and in the parameter counts reported by MARK. However, they can also reveal themselves as an estimate of 1 for a real parameter (the demographic parameters specified by the model). If, after asking questions and looking at the m-array, the parameter appears to be = 1, my response is to fix it to 1 using the 'Fix Parameter' option in MARK; why estimate something that you know a priori is 1 (or 0)?
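For readers who have not met the (reduced) m-array before, here is a minimal Python sketch (a hypothetical helper, not MARK's implementation) that builds one from 0/1 capture-history strings, assuming no losses on capture (every captured animal is re-released):

```python
def reduced_m_array(histories):
    """Build a reduced m-array from 0/1 capture-history strings.

    R[i]    = number of animals released at occasion i
    m[i][j] = number of those next recaptured at occasion j (j > i)
    Assumes no losses on capture: every captured animal is re-released.
    """
    k = len(histories[0])
    R = [0] * k
    m = [[0] * k for _ in range(k)]
    for h in histories:
        caps = [t for t, c in enumerate(h) if c == "1"]
        for t in caps:
            R[t] += 1                    # each capture is also a release
        for i, j in zip(caps, caps[1:]):
            m[i][j] += 1                 # next recapture after release at i
    return R, m

# Four birds over four occasions:
R, m = reduced_m_array(["1100", "1010", "1000", "0110"])
print(R)     # releases per occasion: [3, 2, 2, 0]
print(m[0])  # of the 3 released at occasion 0: [0, 1, 1, 0]
```

Scanning the rows of such a table is exactly the kind of check described above: a row in which nearly every release is recaptured at the very next occasion hints at a resighting probability at or near 1.0.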

You can read about m-arrays in Burnham et al. 1987 (see source materials at http://welcome.warnercnr.colostate.edu/ ... nfo/fw663/), Williams et al. 2002 (Analysis and Management of Animal Populations), the Gentle Intro, or the MARK help file. Also use these sources to search for the keyword 'boundary' in the context of the logit link and the CJS and JS models (among others). For example, there is a short but informative discussion on page 438 of Williams et al. regarding a capture probability that was legitimately estimated = 1. Also see Chapter 4 of the Gentle Intro, bottom of page 34 and top of page 35, for a similar discussion. Try the keyword 'boundary' to search the Analysis Forum archives as well...
abreton
 
Posts: 111
Joined: Tue Apr 25, 2006 8:18 pm
Location: Insight Database Design and Consulting

Postby abreton » Fri Apr 11, 2008 5:59 pm

In case there was any confusion, I should clarify that I was referring to the 'reduced m-array' as opposed to the 'full m-array' in my last post. Both are covered in Burnham et al. 1987.

Postby cooch » Fri Apr 11, 2008 6:26 pm

abreton wrote:In case there was any confusion, I should clarify that I was referring to the 'reduced m-array' as opposed to the 'full m-array' in my last post. Both are covered in Burnham et al. 1987.


They're also covered in considerable detail in Chapter 5 of the MARK book.
cooch
 
Posts: 1654
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Postby caracarn » Mon Apr 14, 2008 5:02 am

Many thanks for the help!

