Input file character limit?

Postby rlong » Thu Jun 23, 2005 3:12 pm

Hi everyone,

I'm using MARK for occupancy (and detectability) estimation with a large number of site- and visit-specific covariates. As a result, my input file is quite large, and I seem to be running into an input file size limitation. Based on some trial and error, the limit falls somewhere between 41,200 and 41,600 characters (including spaces). Has anyone experienced this, and is there some kind of workaround? Is the limitation likely caused by MARK or by Windows?

Thanks,

Robert

_________________________________________________________
Robert Long
Vermont Cooperative Fish and Wildlife Research Unit
The Rubenstein School of Environment and Natural Resources
212 Aiken Center
University of Vermont
Burlington, VT 05405
(802)656-3388
robert.long@uvm.edu
_________________________________________________________
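One way to check whether such a character limit is actually being hit is to measure the line lengths of the .inp file outside MARK. Below is a minimal Python sketch, not taken from the thread; the file name occupancy.inp is a placeholder, and the only assumption is a plain-text, line-oriented input file.

# Minimal sketch: report the longest lines in a MARK .inp file to see whether
# a suspected character limit (roughly 41,200-41,600 characters) is being hit.
# "occupancy.inp" is a placeholder file name, not from the original post.

from pathlib import Path

lines = Path("occupancy.inp").read_text().splitlines()
lengths = sorted(((len(line), i + 1) for i, line in enumerate(lines)), reverse=True)

print(f"Total characters (including spaces): {sum(len(l) for l in lines)}")
for length, lineno in lengths[:5]:
    print(f"line {lineno}: {length} characters")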

Postby Andrea » Fri Jun 24, 2005 5:43 pm

Hi Robert,
I also have a big input file, and the problem I ran into was that MARK died when trying to generate the PIMs. However, if you open MARK again, you can open the results database file and work from there. I don't know whether this is the kind of problem you are running into.
I also learned that MARK doesn't have a limit on the amount of data it can handle. The problems with big datasets are mainly due to the computer running out of memory, so upgrading the memory may help.
Andrea

Input file character limit?

Postby rlong » Sat Jun 25, 2005 8:07 am

Hi again,

I probably should have been a bit more specific with my first post.

MARK lets me define a new project (occupancy estimation), reads in my encounter history files, and loads my covariate names (75 of them) with no problem. It is only when I try to run a model, even a simple dot model WITHOUT covariates, that I get an error message.

The message is usually

"ERROR -- Encounter history must consist of only these characters: '1.0' Exit code 30 Exit Window?"

but at times it is

"ERROR -- Error with floating point read of -"

I've been able to run models with input files containing the exact same covariates, as long as I truncate the file (i.e., include fewer covariates) so that it stays within the bounds I mentioned in my first post.

My computer is relatively new, and I suspect this is not a memory problem.

Thanks again for any thoughts!

Robert
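The truncation workaround Robert describes can be scripted so that only the first few covariate columns of each record are kept. The sketch below is a rough illustration rather than MARK-specific code: it assumes space-delimited records that begin with the encounter history and a frequency, end with a semicolon, and that /* */ comment lines can be passed through unchanged; both file names and the number of covariates kept (N_KEEP) are hypothetical.

# Minimal sketch of the truncation workaround: keep only the first N_KEEP
# covariate columns of each record. Assumes each data record is space-delimited,
# starts with the encounter history and a frequency, and ends with a semicolon.
# File names and N_KEEP are placeholders.

N_KEEP = 25  # number of covariates to retain (hypothetical choice)

with open("occupancy.inp") as src, open("occupancy_truncated.inp", "w") as dst:
    for line in src:
        stripped = line.strip()
        if not stripped or stripped.startswith("/*"):
            dst.write(line)            # pass blank and comment lines through
            continue
        fields = stripped.rstrip(";").split()
        kept = fields[:2 + N_KEEP]     # encounter history, frequency, covariates
        dst.write(" ".join(kept) + ";\n")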



Input file character limit?

Postby gwhite » Sat Jun 25, 2005 1:16 pm

Robert:
The error messages you show are likely not from a memory error, but rather from a mistake in your input file. You might try checking the "List data" option in the run window to isolate the input line(s) that are causing the problem.
Gary
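Outside MARK, the spirit of Gary's "List data" suggestion can be approximated by scanning each record's encounter-history field for characters other than 0, 1, and '.' (the characters the error message names). The Python sketch below assumes each data record begins with the encounter history as its first space-delimited field; the file name is again a placeholder.

# Minimal sketch: flag records whose encounter-history field contains anything
# other than 0, 1, or '.', to help isolate the offending input line(s).
# Assumes the encounter history is the first space-delimited field of each
# data record; "occupancy.inp" is a placeholder name.

ALLOWED = set("01.")

with open("occupancy.inp") as src:
    for lineno, line in enumerate(src, start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith("/*"):
            continue                   # skip blank and comment lines
        history = stripped.split()[0]
        bad = set(history) - ALLOWED
        if bad:
            print(f"line {lineno}: unexpected characters {sorted(bad)} in '{history}'")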

