confidence intervals of a product

Forum for discussion of general questions related to study design and/or analysis of existing data - software neutral.

confidence intervals of a product

Postby Karin » Fri Jan 22, 2010 9:50 am

Dear all,

I have data from a capture-recapture study and used these data to calculate monthly survival rates over several years. In addition to monthly survival, I'm also interested in yearly survival rates. To obtain yearly survival, I calculated the product of the monthly survival rates within one year (I hope this is correct so far).
How do I get confidence intervals for those yearly survival estimates?

I hope that this is not an entirely stupid question and that someone can help me.

Kind regards,
Karin
Karin
 
Posts: 11
Joined: Wed Nov 19, 2008 5:13 am

Re: confidence intervals of a product

Postby cooch » Fri Jan 22, 2010 10:22 am

Karin wrote:Dear all,

I have data from a capture-recapture study and used these data to calculate monthly survival rates over several years. In addition to monthly survival, I'm also interested in yearly survival rates. To obtain yearly survival, I calculated the product of the monthly survival rates within one year (I hope this is correct so far).
How do I get confidence intervals for those yearly survival estimates?


Delta method - see second appendix of 'the book':

http://www.phidot.org/software/mark/doc ... /app_2.pdf
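To make the suggestion concrete, here is a minimal sketch of the delta method for the variance of a product (this is not from the appendix or the original post; the monthly estimates and their standard errors below are made-up numbers, and the monthly estimates are assumed independent):

```python
import numpy as np

# Hypothetical monthly survival estimates and their standard errors
s = np.array([0.95, 0.93, 0.96, 0.94, 0.92, 0.95,
              0.97, 0.94, 0.93, 0.95, 0.96, 0.94])
se = np.full(12, 0.02)  # assumed SEs for illustration

# Yearly survival is the product of the monthly estimates
S_annual = np.prod(s)

# Delta method for a product of independent estimates:
#   Var(S) ~= S^2 * sum( Var(s_i) / s_i^2 )
var_S = S_annual**2 * np.sum(se**2 / s**2)
se_S = np.sqrt(var_S)

# Approximate 95% (Wald) confidence interval
lower, upper = S_annual - 1.96 * se_S, S_annual + 1.96 * se_S
print(S_annual, se_S, lower, upper)
```

A Wald interval on the product can fall outside [0, 1] near the boundaries; the appendix linked above discusses the delta method in detail, including transformations that behave better there.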
cooch
 
Posts: 1628
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Postby Karin » Fri Jan 22, 2010 11:01 am

Thank you!
Karin
 
Posts: 11
Joined: Wed Nov 19, 2008 5:13 am

Delta method - Var, SD and SE

Postby Karin » Tue Jan 26, 2010 5:29 am

As far as I know, the square root of the variance is the SD. To obtain the SE, I have to divide the SD by the square root of the sample size. So why, in Appendix B, is the SE calculated as the square root of the variance?

Karin
Karin
 
Posts: 11
Joined: Wed Nov 19, 2008 5:13 am

Postby jlaake » Tue Jan 26, 2010 10:32 am

I think I've posted a response to this type of question before on this list. This is a very common misunderstanding, and it is a consequence of being first taught statistics with a focus on the mean of the data. Your confusion arises because you are not clearly discriminating between data and parameters.

The spread of data measurements is represented by the variance, or its square root, which is the standard deviation. The uncertainty about a parameter is represented by its variance, or its square root, which is called the standard error. In the case of a mean (which is a parameter) of a series of data measurements, the standard error of the mean is the standard deviation divided by the square root of the number of measurements (n). The variance of the mean is the variance of the data divided by n. Both of these are called a variance. So the square root of the variance of the data is a standard deviation, and the square root of the variance of a parameter is its standard error. What you are deriving from the delta method is the variance of the parameter, and thus its square root is the standard error of the parameter.

It may help to consider the names here: we use "error" in "standard error" in reference to a parameter estimate, which has error, but we don't use the term "error" when referring to variation in data. We wouldn't say it is an error that one tree is larger than another; we describe such variation as a deviation from a central tendency.
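The distinction above can be seen numerically. This small sketch (not part of the original post; the data are simulated) contrasts the SD of the data with the SE of their mean:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=100)  # 100 simulated measurements

sd = data.std(ddof=1)               # spread of the data: standard deviation
se_mean = sd / np.sqrt(len(data))   # uncertainty of the mean: standard error

# Both are square roots of a variance, but of different quantities:
# sd describes the data; se_mean describes the parameter estimate (the mean).
print(sd, se_mean)
```

With n = 100, the SE of the mean is the SD divided by 10; collecting more data shrinks the SE but not the SD.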
jlaake
 
Posts: 1417
Joined: Fri May 12, 2006 12:50 pm
Location: Escondido, CA

Re: Delta method - Var, SD and SE

Postby cooch » Tue Jan 26, 2010 10:42 am

Karin wrote:As far as I know, the square root of the variance is the SD. To obtain the SE, I have to divide the SD by the square root of the sample size. So why, in Appendix B, is the SE calculated as the square root of the variance?

Karin


You're thinking of the SE of a sample mean. In the context in question, we're thinking of the root mean square error (RMSE) - the standard error of the estimate. You should consult a decent stats book, but in brief, the RMSE is the square root of the MSE (mean square error), which is the sum of the estimated variance of a parameter and the square of its bias. For an unbiased estimator, the RMSE is the square root of the variance, known as the standard error.
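The RMSE relationship described above can be written as a one-liner (a sketch, not from the original post; the helper name and the numbers are illustrative only):

```python
import math

def rmse(variance, bias):
    """RMSE of an estimator: sqrt(MSE), where MSE = Var + bias^2."""
    return math.sqrt(variance + bias**2)

# Unbiased estimator: RMSE reduces to the standard error, sqrt(Var)
print(rmse(0.04, 0.0))

# Biased estimator: the bias term inflates the RMSE above sqrt(Var)
print(rmse(0.04, 0.01))
```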
cooch
 
Posts: 1628
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Postby cnagy » Tue Jan 26, 2010 8:27 pm

jlaake wrote:I think I've posted a response to this type of question before on this list. This is a very common misunderstanding, and it is a consequence of being first taught statistics with a focus on the mean of the data...


This post removed a couple of gremlins in my brain that had been running around causing trouble for some time now. Thanks.
cnagy
 
Posts: 7
Joined: Tue Nov 06, 2007 1:06 pm
Location: NYC

Postby Karin » Wed Jan 27, 2010 7:33 am

Thanks a lot for the explanations!
Karin
 
Posts: 11
Joined: Wed Nov 19, 2008 5:13 am

