large RD data sets and computation time

PostPosted: Mon Nov 16, 2015 3:54 pm
by rasrage
Dear all,
I am new to RMark and was wondering how large a data set it can handle. I am looking at a robust design data set with 27 years of monthly trapping data (so that's 324 primary periods), with 4 occasions per primary period and individual counts across the entire study in the tens to hundreds of thousands (I think R should just about be able to handle a 100,000 x 1,300 matrix). Any opinion on whether fitting a model to this amount of data would be feasible, and any idea of how much computational time that might take (just an order of magnitude), would be greatly appreciated.
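[A rough back-of-envelope memory check, not from the thread — the sample size and random histories below are placeholders — suggests the capture-history column itself is well within R's reach:]

```r
# Each history: 324 primary periods x 4 secondary occasions = 1,296 characters.
n.occasions <- 324 * 4
n.sample <- 1000                      # build a small sample, then extrapolate
ch <- vapply(seq_len(n.sample),
             function(i) paste(sample(c("0", "1"), n.occasions, replace = TRUE),
                               collapse = ""),
             character(1))
per.history <- as.numeric(object.size(ch)) / n.sample
# Scale to 100,000 animals: roughly 1.3 kB per history, on the order of 130 MB
per.history * 1e5 / 1024^2            # megabytes
```

So raw storage is unlikely to be the bottleneck; model fitting time in MARK is the open question.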
Thank you!
Rahel

Re: large RD data sets and computation time

PostPosted: Mon Nov 16, 2015 4:01 pm
by jlaake
I have never attempted a problem of that size. Computation time depends on the model you run. RMark simplifies the design matrix to the unique rows prior to sending it to MARK, so most of your questions relate to MARK's limits rather than RMark's. If I understand correctly, your capture histories will be 1,296 characters long (324 x 4). Not sure what limits you'll encounter with MARK.

I'd just suggest trying it to see what happens. I'd start with a very simple model and then use it to provide starting values for the more complex models, which will take longer to fit.
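[The two-stage approach above could look something like the sketch below. This is a minimal illustration, not code from the thread: the data frame `rd.data`, the parameter formulas, and the exact `time.intervals` pattern are all assumptions to adapt to the actual study.]

```r
library(RMark)

# Assumed: rd.data is a data frame with a character column `ch` holding
# 1,296-character capture histories (324 primary periods x 4 occasions).
# Intervals are 0 within a primary period and 1 (month) between periods.
ti <- rep(c(0, 0, 0, 1), 324)[-(324 * 4)]   # 1,295 intervals for 1,296 occasions

rd.proc <- process.data(rd.data, model = "Robust", time.intervals = ti)
rd.ddl  <- make.design.data(rd.proc)

# Stage 1: a very simple constant model
simple <- mark(rd.proc, rd.ddl,
               model.parameters = list(S = list(formula = ~1),
                                       p = list(formula = ~1, share = TRUE)))

# Stage 2: pass the simple model's estimates as starting values for a
# more complex model via mark()'s initial argument
complex <- mark(rd.proc, rd.ddl,
                model.parameters = list(S = list(formula = ~time),
                                        p = list(formula = ~session, share = TRUE)),
                initial = simple)
```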

--jeff

Re: large RD data sets and computation time

PostPosted: Mon Nov 16, 2015 4:07 pm
by rasrage
Thanks, Jeff!