Setting up sample periods...

questions concerning analysis/theory using program MARK

Setting up sample periods...

Postby Fish_Boy » Fri Oct 28, 2005 4:06 pm

I have capture-recapture data over 4 years, collected at approximately the same season each year. The encounter histories were set up at 1-week intervals, and so far the data indicate that survival is constant and high (as assumed); however, catchability is variable and time-dependent.

My question is… can the data be set up so that the sample periods are of variable length, such that sampling continues until a given number of newly marked fish have been released?

For example: sample until 10 newly marked fish are released; the sample periods would then range from a couple of days to a couple of weeks, depending on where that period falls in the spawning period (the bell curve). I cannot find specific references to statistical violations, but I suspect there may be some.
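
Just to make the idea concrete, here is a rough sketch (in Python, with invented daily counts) of the stopping rule I have in mind: consecutive days are pooled into an occasion until the target number of newly marked fish is reached, so the occasion lengths come out variable.

# Sketch of the proposed stopping rule: pool daily catches into occasions
# that close once a target number of newly marked fish has been released.
# The daily counts below are invented purely for illustration.

def occasions_from_daily_marks(daily_new_marks, target=10):
    """Group consecutive days into occasions, closing an occasion as soon
    as the cumulative number of newly marked fish reaches `target`."""
    occasions, current, total = [], [], 0
    for day, n_new in enumerate(daily_new_marks, start=1):
        current.append(day)
        total += n_new
        if total >= target:
            occasions.append(current)
            current, total = [], 0
    if current:                       # leftover days form a final, short occasion
        occasions.append(current)
    return occasions

daily_new_marks = [2, 1, 4, 5, 0, 3, 8, 6, 1, 0, 2]   # invented daily counts
for i, occ in enumerate(occasions_from_daily_marks(daily_new_marks), start=1):
    print(f"occasion {i}: days {occ[0]}-{occ[-1]} ({len(occ)} days)")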

The main issues causing the inter-annual variation are net set locations and fish movements in relation to spawning, specifically the temperature regime, which changes among years despite consistent start dates and sampling durations.
Fish_Boy
 
Posts: 65
Joined: Fri Oct 28, 2005 2:12 pm
Location: Winnipeg

Setting up sample periods.

Postby cschwarz@stat.sfu.ca » Fri Oct 28, 2005 7:25 pm

There shouldn't be any problems with defining sampling intervals of different lengths. You will need to be careful about fitting some of the simpler models (such as constant phi), as survival should then be interpreted PER UNIT TIME. You will need to specify the interval lengths when you define the number of sampling occasions.

The intervals will run from the mid-point of one pooled time to the mid-point of the next pooled time.
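
To illustrate the per-unit-time point, a quick sketch in Python (not MARK itself; the weekly survival value and interval lengths are invented): with intervals expressed in weeks, a constant weekly phi is raised to the interval length to give the survival that applies between occasions.

# Constant per-unit-time survival scaled to intervals of unequal length.
# All numbers here are invented for illustration.

phi_weekly = 0.95                      # assumed constant survival per week
interval_weeks = [0.5, 1.0, 2.0, 1.5]  # midpoint-to-midpoint gaps, in weeks

# Survival over an interval is the weekly rate raised to the interval length.
for t in interval_weeks:
    print(f"interval = {t:.1f} weeks -> phi = {phi_weekly ** t:.3f}")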

You might also want to look at the paper by Hargrove, J.W. and Borland, C.H. (1994). Pooled population parameter estimates from mark-recapture data. Biometrics 50, 1129-1141, which looks at the effects of "pooling" sample times during an analysis.
cschwarz@stat.sfu.ca
 
Posts: 43
Joined: Mon Jun 09, 2003 1:59 pm
Location: Simon Fraser University

Postby Fish_Boy » Fri Oct 19, 2007 11:08 am

It has been a while, but a similar question is relevant still for another project...

There is often large variation in total catch across years of such projects. One of the main concerns for many of our projects is impact assessment for populations that are exploited. Because of the variation in catch and tag returns, the population estimates vary enough that detecting a population trend is simply not feasible, but a trend is always asked for. So here is the question...

Is it a major violation to randomly subsample each year of a study (for example, to 100 individuals), calculate an estimate, and then repeat this, say, 1000 times for each year to examine whether your data show any 'trend' through time? In theory, if we resample each year sufficiently, then the estimate simply represents the effect of the recapture proportion on an abundance estimate. The data structure would be unchanged with respect to recapture proportion, but the number of captures would be 'capped' at 100. Would this not provide some estimate of trend over time, 'controlling' for the total number of animals caught?
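
For concreteness, here is a rough Python sketch of the procedure I'm describing. The Chapman/Lincoln-Petersen estimator, the data layout, and the numbers are assumptions for illustration only; in practice each subsample would be re-run through the full mark-recapture model.

import random

def chapman(n1, n2, m2):
    # Chapman's bias-corrected Lincoln-Petersen abundance estimate
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def subsampled_estimates(fish, cap=100, reps=1000, seed=1):
    """Cap one year's data at `cap` individuals, estimate abundance,
    and repeat `reps` times to get a distribution of estimates.
    `fish` is a list of (caught_in_sample1, caught_in_sample2) tuples."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        sub = rng.sample(fish, min(cap, len(fish)))
        n1 = sum(s1 for s1, _ in sub)             # caught and marked in sample 1
        n2 = sum(s2 for _, s2 in sub)             # caught in sample 2
        m2 = sum(s1 and s2 for s1, s2 in sub)     # marked fish recaptured
        estimates.append(chapman(n1, n2, m2))
    return estimates

# Usage: estimates_by_year = {yr: subsampled_estimates(data[yr]) for yr in data}
# Comparing the yearly distributions then gives a crude look at trend with the
# number of individuals held constant across years.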

These projects are often not planned from the outset to answer such questions, but the questions are invariably asked.
Fish_Boy
 
Posts: 65
Joined: Fri Oct 28, 2005 2:12 pm
Location: Winnipeg

