This month I focused mostly on studies about the value of
information; if you're short on time, I'd start with McGowan et al 2016 and
Runge et al 2011.
If you're super excited about the book "Data Not
Dogma" (previewed a few months ago; it includes chapters from several TNC
authors, myself included), you can now pre-order it. It should be published in
mid-October: https://global.oup.com/academic/product/effective-conservation-science-9780198808985?facet_narrowbybinding_facet=Paperback&lang=en&cc=se
The chapters I've read so far are very interesting, so
hopefully it's worth checking out.
VALUE OF INFORMATION (VOI):
I have a paper coming out soon that examines the value of
using high-resolution (1 m) vs. coarser-resolution (30 m) spatial data in a water
funds context, asking whether it's worth buying the high-res imagery and
spending a lot more time analyzing it, or whether the free coarse data would
lead you to make the same decision (stay tuned for details). It
turns out there is a whole field around this called Value of Information (or
VOI) - thanks to Timm Kroeger and especially Hugh Possingham for getting me
started on this, as it is a theme in much of my research. For this month's
review I've started getting up to speed on the existing literature. I'm
going to be doing a lot of thinking about VOI in the near to mid term, so let
me know if you'd like to discuss further.
McGowan and Possingham 2016 is a short commentary article on
the topic of value of information (VOI), specifically looking at how movement
ecology (related to wildlife tracking) can inform decision making. They
emphasize the importance of translating broad goals (e.g. reversing the decline
in salmon stocks) into quantitative objectives (e.g. boost salmon population to
X by time Y, or intermediate objectives like removing river barriers so Z% of
salmon enters upstream spawning habitat), and they provide a flow chart to help
decide when to collect additional data vs. making a decision with the data you
have (although a similar flow chart in the following article is clearer).
McGowan et al 2016 explores the idea of the article above
more fully. The abstract actually sums up the paper quite nicely; it centers
around asking two questions about animal telemetry data (although the concept
applies much more broadly): 1) would (or could) I take a different action if I
had more data, and 2) is the expected gain of making the different decision
worth the cost ($ and time) to collect more data? They provide a continuum for
how data are expected to be used, from more abstract to highly concrete: pure
research, engaging the public, raising awareness, tactical research, active
adaptive management, and state-dependent management (e.g. quota-setting for
harvestable species).
Runge et al 2011 shows a real-world example of applying VOI
to whooping crane conservation (figuring out why it wasn't working), and I
think it will really help conservationists see how incorporating VOI can
actually be useful (it's a good read, and not too technical). Essentially,
there was a lot they didn't know, and many options for taking action. They
evaluated many hypotheses for why whooping crane nests were failing (based on
expert input), along with accompanying management actions to address each. The
cool thing is that they found optimal strategies for each hypothesis, but also
an optimal strategy given no additional information (suboptimal under any
single hypothesis, but robust across all of them). They also estimated the
potential value of investigating each hypothesis, and were able to determine
which hypotheses were most important to resolve and what data would be most
useful to resolve them.
Maxwell et al 2015 is an example of why considering the
value of information is important. They looked at how to best manage a
hypothetical declining koala population using a theoretical modeling framework
that examined which management actions would be ideal depending on how much
data you had (what was known and what was uncertain). They found that the
optimal management decisions were fairly fixed (based on how cost efficient
those options were), and that the value of collecting data on things like koala
survival and fecundity (as well as how habitat cover affects mortality threats)
was fairly low, since it wouldn't lead you to make a different decision. The
point is not that additional information is useless in general, but rather
that if more information won't lead you to make a different decision in support
of your specific objectives, it's likely not worth spending much time and money
on it.
If you have the patience to work through the equations and
concepts in the two case studies, Canessa et al 2015 does a really nice job of
explaining VOI in a quantitative way. Essentially, using expected probabilities
for a range of variables (e.g. whether or not a disease is actually present at
a given site, the chance of false positives or negatives of a test for the
disease, etc.) and the expected outcomes of different scenarios, you can
calculate how much value collecting data is likely to have in terms of your
objective. Fig 1 makes the point that with more uncertainty the VOI is higher,
and as our sampling density increases the VOI also increases. However, as the
authors note, they do not include the issue of cost. There is the cost of
collecting the actual information you need to support the decision, the time
cost of actually running a formal VOI analysis, and potentially the cost of
providing input data into the VOI analysis (e.g., if you don't even have
credible guesses). Nonetheless, this is a great paper for understanding the key
concept, and they provide spreadsheets for the two case studies.
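The core calculation can be sketched in a few lines. This is a minimal expected value of perfect information (EVPI) example, not the paper's own spreadsheets: the disease scenario, actions, and all probabilities below are hypothetical numbers I made up for illustration.

```python
# Minimal sketch of expected value of perfect information (EVPI).
# All states, actions, and numbers are hypothetical.
p = {"disease_present": 0.3, "disease_absent": 0.7}  # prior belief over states

# value[action][state]: expected management outcome (e.g. persistence probability)
value = {
    "treat":    {"disease_present": 0.80, "disease_absent": 0.70},
    "no_treat": {"disease_present": 0.20, "disease_absent": 0.95},
}

# Best we can do now: commit to one action under uncertainty
ev_no_info = max(sum(p[s] * v[s] for s in p) for v in value.values())

# Best we could do with perfect information: pick the best action per state,
# then average over how likely each state is
ev_perfect = sum(p[s] * max(v[s] for v in value.values()) for s in p)

evpi = ev_perfect - ev_no_info  # upper bound on what new data can be worth
print(f"EV acting now: {ev_no_info:.3f}")         # 0.730
print(f"EV with perfect info: {ev_perfect:.3f}")  # 0.905
print(f"EVPI: {evpi:.3f}")                        # 0.175
```

Note that if one action were best under every state (as in the Maxwell et al koala example above), the per-state best would equal the single best action and EVPI would be zero: more data couldn't change the decision.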
GENERAL:
There is an increasing trend of greater transparency in
science, and for the most part that's a very good thing. With more requirements
to share data in public repositories we get better peer review, make it easier
for researchers to build on each other's work, and improve the credibility of
science. But a new essay (Lindenmayer & Scheele, 2017) makes a point close
to TNC's heart: by sharing information on rare and endangered species
(especially online) scientists are making it easier for poachers to find those
species. TNC and NatureServe have dealt with this issue for a long time; our
ecoregional portfolio sites (aka conservation areas) that were based primarily
on rare species are typically buffered and sometimes only shared with other
conservation organizations (removed from the public version of our data). This
essay argues that in addition to facilitating poaching, it's upsetting
landowners (who may be angry at scientists if trespassers start looking for
rare species), and that even well-intentioned tourists can cause habitat damage
in their search. Accordingly, we should always weigh the potential
benefits against the harms of publishing this kind of data.
You can read this one at http://science.sciencemag.org/content/356/6340/800.full
AGRICULTURE:
Roy et al 2009 is a good overview of life cycle assessments
(LCAs), specifically in an agricultural context. They explain what LCAs are
(essentially a cradle-to-grave assessment of all of the inputs and
outputs/impacts involved in producing a given product), describe their
components, give examples, list relevant standards, etc.
REMOTE SENSING:
Mello et al 2013 uses a Bayesian network to estimate where
current soybean production is most likely in Mato Grosso, Brazil. A Bayesian
approach relies on expert input (and training data) to infer a variable of
interest (in this case, soybean production) based on known context variables
(e.g. distance to road, soil suitability, slope, etc.). Their ~90% accuracy was
a lot higher than I'd expect; it's not clear to me whether the model is really
that good or whether it is overfit. Typically these kinds of models perform
pretty well once you train them as long as drivers of the outcome variable
don't shift much (e.g. if soy expands into smaller new fields in different
areas, the model is much less likely to find them until it is updated). But
it's a good overview of how Bayesian models work, and it looks like an approach
worth replicating where we need crop maps that don't exist.
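To illustrate the general mechanics (not the BayNeRD model itself, which is a full Bayesian network), here is a toy naive-Bayes sketch; all variable names and probabilities are hypothetical, and the conditional-independence assumption is a simplification of what the paper does.

```python
# Toy sketch of inferring crop presence from context variables via Bayes' rule.
# All names and probabilities are illustrative, not taken from the paper.
p_soy = 0.2  # prior probability that a pixel is soybean

# P(observation | class) for each context variable
likelihood = {
    "near_road":     {"soy": 0.8, "not_soy": 0.4},
    "suitable_soil": {"soy": 0.9, "not_soy": 0.5},
}

def posterior_soy(observations):
    """P(soy | observations), assuming conditionally independent variables."""
    num = p_soy
    den = 1.0 - p_soy
    for obs in observations:
        num *= likelihood[obs]["soy"]
        den *= likelihood[obs]["not_soy"]
    return num / (num + den)

print(posterior_soy([]))                              # no evidence: prior, 0.2
print(posterior_soy(["near_road", "suitable_soil"]))  # evidence raises it to ~0.47
```

Each observed context variable that is more likely under "soy" than "not soy" pulls the posterior up from the prior; this is the basic reasoning the expert-elicited network formalizes.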
SOIL:
Minasny et al 2017 provides more detail on the "4 per
mille" soil carbon initiative (aiming to increase soil organic carbon
stocks by 0.4% each year), including a suite of 20 case studies around the world
showing what this target would look like in different places. They also provide
a nice overview of different management strategies, key limitations, and
compare what implementation would look like in different contexts.
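For a sense of scale, the arithmetic of the target is simple compounding; the starting stock below is a hypothetical round number, not a figure from the paper.

```python
# Compounding a 0.4% per year increase, per the "4 per mille" target.
stock = 100.0  # t C/ha, hypothetical starting soil organic carbon stock
rate = 0.004   # 0.4% per year

for years in (10, 20, 50):
    print(f"{years} yr: {stock * (1 + rate) ** years:.1f} t C/ha")
# 10 yr: ~104.1, 20 yr: ~108.3, 50 yr: ~122.1
```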
REFERENCES:
Canessa, S., Guillera-Arroita, G., Lahoz-Monfort, J. J.,
Southwell, D. M., Armstrong, D. P., Chadès, I., … Converse, S. J. (2015). When
do we need more data? A primer on calculating the value of information for
applied ecologists. Methods in Ecology and Evolution, 6(10), 1219–1228. https://doi.org/10.1111/2041-210X.12423
Lindenmayer, D. B., & Scheele, B. (2017). Do not
publish. Science, 356(6340), 800–801. https://doi.org/10.1126/science.aan1362
Maxwell, S. L., Rhodes, J. R., Runge, M. C., Possingham, H.
P., Ng, C. F., & Mcdonald-Madden, E. (2015). How much is new information
worth? Evaluating the financial benefit of resolving management uncertainty.
Journal of Applied Ecology, 52(1), 12–20. https://doi.org/10.1111/1365-2664.12373
McGowan, J., & Possingham, H. P. (2016). Commentary:
Linking Movement Ecology with Wildlife Management and Conservation. Frontiers
in Ecology and Evolution, 4(March), 1–3. https://doi.org/10.3389/fevo.2016.00030
McGowan, J., Beger, M., Lewison, R. L., Harcourt, R.,
Campbell, H., Priest, M., … Possingham, H. P. (2016). Integrating research
using animal-borne telemetry with the needs of conservation management. Journal
of Applied Ecology, 54(2), 423–429. https://doi.org/10.1111/1365-2664.12755
Mello, M. P., Risso, J., Atzberger, C., Aplin, P., Pebesma,
E., Vieira, C. A. O., & Rudorff, B. F. T. (2013). Bayesian networks for
raster data (BayNeRD): Plausible reasoning from observations. Remote Sensing,
5(11), 5999–6025. https://doi.org/10.3390/rs5115999
Minasny, B., Malone, B. P., McBratney, A. B., Angers, D. A.,
Arrouays, D., Chambers, A., … Winowiecki, L. (2017). Soil carbon 4 per mille.
Geoderma, 292, 59–86. https://doi.org/10.1016/j.geoderma.2017.01.002
Roy, P., Nei, D., Orikasa, T., Xu, Q., Okadome, H.,
Nakamura, N., & Shiina, T. (2009). A review of life cycle assessment (LCA)
on some food products. Journal of Food Engineering, 90(1), 1–10. https://doi.org/10.1016/j.jfoodeng.2008.06.016
Runge, M. C., Converse, S. J., & Lyons, J. E. (2011).
Which uncertainty? Using expert elicitation and expected value of information
to design an adaptive program. Biological Conservation, 144(4), 1214–1223. https://doi.org/10.1016/j.biocon.2010.12.020