B.K. Sandercock wrote: Here's an example of a memory model fit with SURVIV for a large dataset of Canada geese.
Key point being: large, with very high encounter probability. It will not surprise you that a fair number of 'memory' model applications involve goose populations: high-density nesting, extremely high female philopatry, easy to catch, both live-encounter and dead-recovery data, and virtually none of that pesky 'spatially explicit' stuff to worry about (geese don't have territories, and, perhaps more than most species, nicely match the statistical starting assumption that your organism is a randomly moving Brownian particle). The goose data I worked with for the bulk of my career involved roughly 75-100K marked individuals, with encounter probabilities (at least in the early years) of
p > 0.5. [Of course, this pales in comparison with Emmanuelle Cam's kittiwake data -- when encounter probabilities approach 1.0, as they do in her study, and sample sizes are relatively large, you can fit whatever models you want.]
There is a long-standing tradition of the 'smart folks' coming up with clever and interesting models, which are then 'demonstrated' in some paper with (typically) 'very good' to 'perfect' data. The masses read the papers, try to apply the clever idea to their own data, and find, with some frustration, that it might not work, because their data aren't equivalent to the frequently near-optimal 'empirical example' data used in the paper that described the model in the first place.
Summarizing -- memory models are 'data-hungry', which is absolutely one of the reasons they don't get used much. And building some of the models can be challenging, since the 'chain of conditional probabilities' you might need to model (e.g., state A to B this year, conditional on A to A last year, B to A last year, C to A last year...) can get large, and complicated, very quickly.
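To make the 'chain of conditional probabilities' point a bit more concrete, here is a rough illustrative sketch (mine, not anything taken from SURVIV or MARK) that simply counts free transition probabilities per occasion. It assumes the usual second-order 'memory' structure, where the movement from state r to state s also depends on the state q occupied on the previous occasion, versus a first-order multistate model where it depends on r alone.

```python
# Rough sketch of why memory models are parameter-hungry:
#   first-order multistate: psi(r -> s)    -> S * (S - 1) free probabilities per occasion
#   memory (second-order) : psi(q, r -> s) -> S * S * (S - 1) free probabilities per occasion
# (one probability in each row is fixed by the sum-to-one constraint).
from itertools import product

def transition_counts(S: int) -> tuple[int, int]:
    """Free transition parameters per occasion for S sites/states."""
    first_order = S * (S - 1)
    memory = S * S * (S - 1)
    return first_order, memory

def memory_cells(states):
    """Every (previous, current, next) combination a memory model must track."""
    return list(product(states, repeat=3))

if __name__ == "__main__":
    for S in (2, 3, 4, 5):
        fo, mem = transition_counts(S)
        print(f"S = {S}: first-order = {fo:3d}, memory = {mem:3d}")
    # e.g., three wintering areas A, B, C: 6 vs 18 free transition probabilities
    # per occasion, before adding any time dependence or survival/encounter structure.
    print(len(memory_cells(["A", "B", "C"])), "conditional (q, r, s) cells for 3 states")
```

Nothing deep there, but it shows the scaling: going from 3 to 5 states takes a time-constant memory model from 18 to 80 transition parameters per occasion, which is exactly why sparse data sets choke on these models.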