---
title: "R Notebook 3 - CJS model simulations"
output:
  html_document:
    df_print: paged
---
```{r setup, include=FALSE, warning=FALSE, message=FALSE}
knitr::opts_chunk$set(echo = TRUE)
require(ggplot2)
```
This notebook allows you to generate data for a simple CJS study under model $\phi_.,p_.$. As the code is written, animals are released only on the 1st occasion, and $\phi$ and $p$ are constant over the 3-year study. The goal is to help you better understand:

1. how data are generated from underlying parameter values for $\phi$ and $p$ via 2 random binomial processes (survival and detection),
2. how variable replicate samples are (compare summary features of capture histories for different parameter values and sample sizes),
3. how the multinomial distribution can be applied to obtain $\ln\mathcal{L}$ values and MLEs for the model, and
4. how repeatability and precision vary for different parameter values and sample sizes.
Let me know which parts of the code are hard for you to follow. There is a fair bit going on, and I suspect some of it will use aspects of R that are new to at least some of you. If so, try working through it one small piece of code at a time. At the very least, aim to grasp the broader concepts (see the list above) from the graphics and how they change as you vary the parameter values and sample sizes.
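Before diving into the full simulation, it may help to see the two random binomial processes from concept 1 in isolation. This small sketch is an addition to the notebook (the object names are just for illustration): it simulates one survival trial and one detection trial for a single animal, then checks that the long-run proportion of marked animals seen on the next occasion is approximately $\phi p$.

```r
# Illustrative sketch of the two Bernoulli processes behind CJS data
set.seed(1)  # for a reproducible illustration
phi <- 0.8   # survival probability
p   <- 0.2   # detection probability

# One animal, one interval: survive first, then (only if alive) detect
alive    <- rbinom(1, 1, phi)                 # 1 = survived the interval
detected <- if (alive == 1) rbinom(1, 1, p) else 0

# Across many animals, the proportion seen on the next occasion is
# approximately phi * p (= 0.16 here)
mean(rbinom(1e5, 1, phi) * rbinom(1e5, 1, p))
```

Note that multiplying two independent Bernoulli draws is equivalent in distribution to simulating survival first and only attempting detection for survivors.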
### Generate data given parameters and sample size
```{r generate data}
# Code for generating CJS capture histories
# Adapted from code from Marc Kéry and Michael Schaub's excellent book,
# "Bayesian population analysis using WinBUGS: A hierarchical perspective"
# https://www.vogelwarte.ch/de/projekte/publikationen/bpa/
# Define parameter values
n.occasions <- 3 # Number of capture occasions
n.mark <- 100
marked <- c(n.mark, 0) # Number of newly marked individuals each year
# Vectors of values for phi and p for each year
# Each vector needs to have (n.occasions - 1) values or fewer and have values
# between 0 and 1. If it's fewer, the values will be recycled
# Here, we just give 1 value and it's used in every year
phi <- 0.8 # or, for example, could use phi <- c(0.6, 0.9)
p <- 0.4 # or, for example, could use p <- c(0.3, 0.5)
# Define matrices with survival and recapture probabilities
PHI <- matrix(phi, ncol = n.occasions-1, nrow = sum(marked), byrow = TRUE)
P <- matrix(p, ncol = n.occasions-1, nrow = sum(marked), byrow = TRUE)
# Define function to simulate a capture-history (CH) matrix
simul.cjs <- function(PHI, P, marked){
  n.occasions <- dim(PHI)[2] + 1
  CH <- matrix(0, ncol = n.occasions, nrow = sum(marked))
  # Define a vector with the occasion of marking
  # note: length(marked) is the number of marking occasions,
  # so this next line indicates which occasion each individual was 1st marked on
  mark.occ <- rep(1:length(marked), marked)
  # Fill the CH matrix
  for (i in 1:sum(marked)) {
    CH[i, mark.occ[i]] <- 1 # Write a 1 at the release occasion
    if (mark.occ[i] == n.occasions) next # ignore animals 1st caught on last occasion
    for (t in (mark.occ[i] + 1):n.occasions) {
      # Bernoulli trial: does the individual survive the interval?
      sur <- rbinom(1, 1, PHI[i, t - 1])
      if (sur == 0) break # If dead, move on to the next individual
      # Bernoulli trial: is the individual recaptured?
      rp <- rbinom(1, 1, P[i, t - 1])
      if (rp == 1) CH[i, t] <- 1
    } #t
  } #i
  return(CH)
}
# Simulate capture-histories
CH <- simul.cjs(PHI, P, marked)
# collapse the numbers in each row into character data
ch <- apply(CH, 1, paste, collapse = "")
# summarize how many animals have each observed capture history
table(ch)
```
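A side note on concept 2: because every released animal ends up in exactly one of the 4 history categories (`111`, `101`, `110`, `100`), the vector of counts from a study is a single multinomial draw. The sketch below is an addition to the notebook's workflow; it shows the replicate-to-replicate variability you would see without rerunning the full individual-level simulation.

```r
# Cell probabilities for the 4 capture histories under constant phi and p,
# with all releases on occasion 1 (same expressions as used later in the notes)
phi <- 0.8; p <- 0.2
p111 <- phi * p * phi * p
p101 <- phi * (1 - p) * phi * p
p110 <- phi * p * (1 - phi * p)
p100 <- 1 - p111 - p101 - p110

set.seed(2)
# 5 replicate studies of 100 releases each;
# rows are the counts of histories 111, 101, 110, 100
rmultinom(5, size = 100, prob = c(p111, p101, p110, p100))
```

Compare the columns: each one is a plausible data set from the same parameter values.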
#### Questions on data generation
1. When you release 100 animals on occasion 1, what is a typical proportion of animals that are never seen again when you have $\phi=0.8$ and $p=0.2$?
2. Do you think that animals with an encounter history of `100` provide much information for estimating $\phi$ and $p$?
3. For one of your simulation runs with $R_1 = 100$ and $\phi=0.8$ and $p=0.2$, use the intuitive method described in the introduction to the CJS model to calculate $\hat{p}$ and $\hat\phi$.
4. What is the typical proportion of animals that are never seen again when you use $\phi=0.8$ and $p=0.4$? How does it change when you use $\phi=0.8$ and $p=0.8$? Predict how those changes would affect your ability to estimate the parameters.
5. Reset your parameters in the code block above to be $R_1 = 100$, $\phi=0.8$ and $p=0.2$ and re-run that code chunk before moving on to the next chunk.
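For question 3, here is a hypothetical worked example of one common intuitive argument (the counts below are made up, and the exact wording of the method here is an assumption, so compare it against the version in your course introduction): animals seen on occasion 3 were certainly alive on occasion 2, so the fraction of them that was also detected on occasion 2 estimates $p$; and because the expected number detected on occasion 2 is $R_1 \phi p$, dividing that count by $R_1 \hat{p}$ gives $\hat{\phi}$.

```r
# Hypothetical counts for illustration only (R1 = 100 releases)
x111 <- 3; x101 <- 10; x110 <- 14; x100 <- 73
R1 <- x111 + x101 + x110 + x100

# Animals with history 111 or 101 were known alive on occasion 2;
# the fraction detected then estimates p
p.hat <- x111 / (x111 + x101)

# Animals detected on occasion 2 (histories 111 and 110) have
# expected count R1 * phi * p, which gives phi once p is estimated
phi.hat <- (x111 + x110) / (R1 * p.hat)
c(p.hat = p.hat, phi.hat = phi.hat)
```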
#### Estimation with likelihood
The next block works with the number of animals with each encounter history as observations. The probability of those observations depends on the values of $\phi$ and $p$. The multinomial distribution is used to evaluate the probability density of the observed counts for different combinations of $\phi$ and $p$. Those values (on the log scale) are presented as contour lines on the plot that is produced. The green line approximates a 95% confidence contour; the shape of that contour provides information on the uncertainty of the estimates for each parameter and the covariance between the estimated parameters. The red point indicates the true parameter values. The black point represents the point estimate for the 2 parameters.
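Concretely, with constant $\phi$ and $p$ and all releases on occasion 1, the 4 cell probabilities used in the code follow directly from the two Bernoulli processes:

$$\Pr(111) = \phi p \, \phi p, \qquad \Pr(101) = \phi(1-p)\,\phi p, \qquad \Pr(110) = \phi p\,(1 - \phi p), \qquad \Pr(100) = 1 - \Pr(111) - \Pr(101) - \Pr(110),$$

and the log-likelihood evaluated at each grid point is the log of the multinomial pmf,

$$\ln\mathcal{L}(\phi, p \mid \mathbf{x}) = \ln\binom{R_1}{x_{111},\, x_{101},\, x_{110},\, x_{100}} + \sum_{h} x_h \ln \Pr(h).$$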
```{r, message=FALSE, warning=FALSE}
# Store the numbers of animals with each of the 4 histories
x111 <- sum(ch == "111")
x101 <- sum(ch == "101")
x110 <- sum(ch == "110")
x100 <- sum(ch == "100")
# Make a data frame with a grid of combinations of
# possible parameter combinations
# Look up `expand.grid` in R's help if the function is new to you
parms <- expand.grid(phi = seq(0.01, 0.99, by = 0.01),
                     p = seq(0.01, 0.99, by = 0.01))
# calculate the probability of each capture history
# for each combination of parameter values in the data frame
parms$p111 <- with(parms, phi*p*phi*p)
parms$p101 <- with(parms, phi*(1-p)*phi*p)
parms$p110 <- with(parms, phi*p*(1-phi*p))
parms$p100 <- with(parms, 1-p111-p101-p110)
# work out the multinomial pdf value (log scale) for each pair of
# possible parameter values GIVEN the data and the model
# structure (constant phi and p over time)
parms$log.lik <- NA
for (i in 1:nrow(parms)) {
  parms$log.lik[i] <- dmultinom(x = c(x111, x101, x110, x100),
                                size = n.mark,
                                prob = c(parms$p111[i],
                                         parms$p101[i],
                                         parms$p110[i],
                                         parms$p100[i]),
                                log = TRUE)
}
# print out values of phi and p associated with highest pdf value
parms$phi[parms$log.lik == max(parms$log.lik)]
parms$p[parms$log.lik == max(parms$log.lik)]
# make a contour plot of log-likelihood values
ggplot(parms, aes(x = p, y = phi, z = log.lik)) +
  geom_contour(bins = 150) +
  # add green contour line as a form of 95% CI
  stat_contour(breaks = c(max(parms$log.lik) - 1.92), color = "green") +
  # add black point to show estimates
  geom_point(data = data.frame(x = parms$p[parms$log.lik == max(parms$log.lik)],
                               y = parms$phi[parms$log.lik == max(parms$log.lik)]),
             aes(x = x, y = y, z = 0),
             color = "black") +
  # add red point to show the true parameter values
  geom_point(data = data.frame(x = p, y = phi),
             aes(x = x, y = y, z = 0),
             color = "red") +
  scale_x_continuous(limits = c(0, 1), breaks = seq(0, 1, by = 0.2)) +
  scale_y_continuous(limits = c(0, 1), breaks = seq(0, 1, by = 0.2))
# review a table of the capture histories
table(ch)
```
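One more optional sketch, this time for concept 4 (repeatability). The block below is an addition to the original workflow: it draws the 4 history counts directly from the multinomial (equivalent under the model to simulating individuals), uses `optim()` in place of the grid search to find the MLEs, and summarizes the spread of the estimates over 200 replicate studies.

```r
# Illustrative sketch only; function names are made up for these notes
set.seed(3)

# Cell probabilities for histories 111, 101, 110, 100 under constant phi, p
cell.probs <- function(phi, p) {
  p111 <- phi * p * phi * p
  p101 <- phi * (1 - p) * phi * p
  p110 <- phi * p * (1 - phi * p)
  c(p111, p101, p110, 1 - p111 - p101 - p110)
}

# Negative log-likelihood for optim(); par = c(phi, p)
neg.log.lik <- function(par, x) {
  -dmultinom(x, size = sum(x), prob = cell.probs(par[1], par[2]), log = TRUE)
}

# Simulate one study of R1 releases and return the MLEs
one.rep <- function(phi = 0.8, p = 0.2, R1 = 100) {
  x <- as.vector(rmultinom(1, size = R1, prob = cell.probs(phi, p)))
  optim(c(0.5, 0.5), neg.log.lik, x = x, method = "L-BFGS-B",
        lower = 0.01, upper = 0.99)$par
}

# 200 replicate studies; rows are replicates, columns are the estimates
ests <- t(replicate(200, one.rep()))
colnames(ests) <- c("phi.hat", "p.hat")
apply(ests, 2, function(v) c(mean = mean(v), sd = sd(v)))
```

Rerunning this with larger $R_1$ or higher $p$ should shrink the standard deviations, which is the pattern questions 6-11 below ask you to notice.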
#### Questions on properties of estimates
Reset your parameters in the 1st code block to be $R_1 = 100$, $\phi=0.8$ and $p=0.2$ and run all code chunks above a number of times. Review the graphs and consider the following.
6. How well does the model do at estimating values close to the true underlying parameters on average?
7. Based on the green contour line, how precise do the estimates appear to be? Is one parameter estimated more precisely than the other?
8. Does there seem to be covariance between the 2 estimated quantities? If so, how strong is it and is it positive or negative? Does that make sense to you?
9. How do your answers to questions 6-8 change as you change $p$ from 0.2 to 0.4?
10. How do your answers to questions 6-8 change as you change $\phi$ from 0.8 to 0.4 while holding $p$ constant?
11. How do your answers to questions 6-8 change when you change $R_1$ from 100 to 1,000?
12. What questions do you have about the CJS model, the multinomial distribution as it's used in the class this week, and/or about the code or notes so far?