missing parameter

questions concerning analysis/theory using program MARK

missing parameter

Postby mcallen » Mon Dec 07, 2009 2:11 pm

In a nest survival analysis, does anyone know why MARK sometimes reports one fewer parameter for a model than it should have? For me this usually happens with small samples of nests (too small, really), so I'm guessing sample size has something to do with it...

Mike
mcallen
 
Posts: 31
Joined: Thu Nov 19, 2009 1:45 pm
Location: New Jersey

Boundary estimate?

Postby dhewitt » Mon Dec 07, 2009 2:37 pm

If the data don't contain enough information to estimate a parameter, the estimate often goes to a boundary (0 or 1), and with the logit link MARK will often not count that parameter. The sin link usually recovers the correct count, but it is not an option for every model. You should be able to identify the parameters that were not counted by inspecting the real estimates (e.g., estimates at 0 or 1 with huge SEs).

Bottom line: with sparse data (and sometimes even with good data), you need to know how many parameters each model should have and check MARK's count for every model. We adjust parameter counts in almost every analysis these days, because boundary estimates cause MARK to miscount.
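As a rough illustration of why the logit link misbehaves here, the sketch below (plain Python with numpy/scipy, not MARK itself, and with made-up data) fits a logit-link Bernoulli survival model in which one group has no failures. The beta for that group heads toward infinity (the real estimate is pinned at 1) and its SE blows up, which is the signature of a parameter the optimizer can't count.

Code:

    # Minimal sketch (hypothetical data): a boundary estimate on the logit scale.
    import numpy as np
    from scipy.optimize import minimize

    # Dummy covariate: group 0 has some failures, group 1 has none.
    x = np.array([0] * 40 + [1] * 10)
    y = np.array([1] * 34 + [0] * 6 + [1] * 10)  # 1 = survived, 0 = failed

    def nll(beta):
        # Negative log-likelihood of a logit-link Bernoulli survival model.
        p = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))
        p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    fit = minimize(nll, x0=np.zeros(2), method="BFGS")
    se = np.sqrt(np.diag(fit.hess_inv))  # rough SEs from the inverse Hessian
    print("betas:", fit.x)  # beta[1] heads for +inf; the optimizer quits at some large value
    print("SEs:  ", se)     # the SE for beta[1] is enormous: the telltale sign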
dhewitt
 
Posts: 150
Joined: Tue Nov 06, 2007 12:35 pm
Location: Fairhope, AL 36532

model invalid?

Postby mcallen » Mon Dec 07, 2009 2:44 pm

Thanks. I'm assuming this means that the model is invalid, i.e., it shouldn't be included in the AICc model rankings?
mcallen
 
Posts: 31
Joined: Thu Nov 19, 2009 1:45 pm
Location: New Jersey

Re: model invalid?

Postby cooch » Mon Dec 07, 2009 2:51 pm

mcallen wrote: Thanks. I'm assuming this means that the model is invalid, i.e., it shouldn't be included in the AICc model rankings?


Only if you don't manually correct the parameter count to the number that should be estimable given the structure of the model, which (as Dave pointed out) is what you should be doing anyway.
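For what it's worth, the manual correction matters because the parameter count K enters AICc directly, both in the 2K term and in the small-sample correction. A minimal sketch (plain Python; the deviance and effective sample size are hypothetical numbers, not from any real MARK run):

Code:

    # AICc = -2ln(L) + 2K + 2K(K+1)/(n - K - 1)
    def aicc(neg2lnl, k, n):
        return neg2lnl + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

    neg2lnl = 245.3  # -2 log-likelihood (hypothetical)
    n = 180          # effective sample size (hypothetical)
    print(aicc(neg2lnl, k=4, n=n))  # K as miscounted after a boundary estimate
    print(aicc(neg2lnl, k=5, n=n))  # K corrected to the structural count

With the corrected K the model takes the penalty it should, so its AICc is comparable to the rest of the model set.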
cooch
 
Posts: 1654
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Maybe not

Postby dhewitt » Mon Dec 07, 2009 4:17 pm

I wouldn't necessarily conclude that the model needs to be chucked. Maybe, maybe not. If it's just one boundary estimate, and that estimate makes sense given the limitations of the data, adjust the parameter count and move on. The effects of the adjustment on model-selection inference get a little tricky, but sorting that out can be interesting. Some of my ramblings on the topic are here:

http://www.phidot.org/forum/viewtopic.php?t=1269

I'd be curious to hear how it plays out in your model set.
dhewitt
 
Posts: 150
Joined: Tue Nov 06, 2007 12:35 pm
Location: Fairhope, AL 36532

probably sample size

Postby mcallen » Tue Dec 08, 2009 11:08 am

Thanks. I suspect sample size was the issue. Both models that dropped a parameter included one dummy variable (1/0 = mowed/unmowed area or nest), which had very low sample sizes in the mowed groups (1 and 2 nests, respectively). Also, there were no failures in the mowed groups.

Under what circumstances would good data reach a "boundary"?

Mike
mcallen
 
Posts: 31
Joined: Thu Nov 19, 2009 1:45 pm
Location: New Jersey

When the actual value is near 0 or 1

Postby dhewitt » Tue Dec 08, 2009 4:01 pm

"Good data" is relative. We get boundary estimates with > 100 releases in CJS models because so few animals die that the model cannot distinguish the estimate from 100% survival. So this can happen whenever "truth" is very near 0 or 100%, or in other similar situations, depending on the type of model you're dealing with.
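To make that concrete, here is a minimal sketch (hypothetical numbers, and a plain binomial stand-in rather than a full CJS likelihood): with zero deaths the survival MLE is exactly 1.0 with an SE of 0, and even one or two deaths leave the estimate pressed against the boundary.

Code:

    import math

    releases = 120
    for deaths in (0, 1, 2):
        s = (releases - deaths) / releases      # binomial MLE of survival
        se = math.sqrt(s * (1 - s) / releases)  # SE shrinks to 0 at the boundary
        print(deaths, round(s, 4), round(se, 4))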
dhewitt
 
Posts: 150
Joined: Tue Nov 06, 2007 12:35 pm
Location: Fairhope, AL 36532

