Wednesday, December 1, 2021

December 2021 Science Summary

Yellowjacket on bird bath

Happy December,

This month I have three articles all about how we can do better at conservation!

The first one (Guadagno et al. 2021) is about how we can get better at learning from failure and improving on success (as individuals, teams, and organizations) - make time to read this one (or at least skim the sections that look most relevant to you); it's worth it. The next one (LeFlore et al. 2021) looked at which aspects of research led to it being used to inform decision-making. And Pressey et al. 2021 argues that by focusing on area protected (rather than on avoided habitat and species loss), conservation is having less impact than we hope for.

If you know someone who wants to sign up to receive these summaries, they can do so at

Guadagno et al. 2021 is an excellent new report from the Wildlife Conservation Society (as part of the Failure Factors Initiative) looking at the experience of conservation NGOs with using “pause and reflect” sessions to learn from failure (and success). Here are my key take-aways:

  • People are reluctant to talk publicly about failure for fear of losing respect, status, and support for their work.
  • Documenting “lessons learned” in reports is not as important as staff going through the process of talking together and informally learning from each other.
  • Regularly reflecting on both what is working well and what could be improved (even for minor things, and for both successes and failures) makes teams better equipped to respond to serious or major failures (see Example 4 on p14). Having already built both trust and familiarity with a healthy way to learn from failure is excellent preparation when crises arise, allowing the group to work together to pivot effectively. These sessions don't have to take much time.
  • Those regular reflections work best when there is high psychological safety (be respectful, focus on what happened and what to do next time rather than who is to blame, and recognize and address bias) and when they are structured around a few core questions (what did we expect to happen vs. what happened, what went well and why, what can be improved and how). See pages 21-25 for recommendations on how to do this, as well as guides and questions you can use.
  • Sometimes a failure looks like a success at first, and in these reflections you can look for other explanations for apparent success (as per Example 1 on p7) allowing you to identify hidden problems and resolve them.
  • Other times, a failure is at least partially a success, and these sessions can also identify aspects of the work that are going well even when we don’t achieve the outcomes we hoped for (see Example 2 on p11). And even clear successes can usually be improved (see Example 3 on p13)!

LeFlore et al. 2021 looks at the factors that tend to result in research being used, via a focus on 40 small-scale conservation research projects in the Salish Sea. They found that having a government collaborator was key, as was stakeholder engagement throughout the process, and that publishing a journal article didn't increase the chances of the research being used to inform decision-making. Impact was self-reported, so I was pretty surprised that only 40% of the projects were reported as leading to impact! It's hard to know how generalizable their results are, but I think it's fair to ask researchers to compare the time it takes to substantially engage with decision-makers and other stakeholders against the time needed to publish, and to reflect on which is a higher-priority use of their time. Full disclosure: I was a peer reviewer of this paper.

Pressey et al. 2021 is an opinion piece arguing that conservationists need to shift focus from area-based protection targets (even those including representation) to avoided biodiversity loss (species extinction and habitat destruction) and ecological recovery. They make a fair and important point: despite increasing protection, global rates of species and habitat loss aren't declining. So protected areas aren't working effectively (whether because they're not managed well, or are in the wrong places, or there still aren't enough of them, or some mix of these). That's hard to argue with, and it's key that we find a way to better mitigate acute threats. But they lose me when they call for a lot more modeling of counterfactuals and monitoring of outcomes relative to the modeling. I've done that modeling, and it's slow, expensive, and subject to lots of assumptions and uncertainty. So rather than shifting lots of implementation dollars to more science, I'd favor using 'just enough' science to identify key needs for conservation, pushing advocacy to focus more on those needs than it currently does, and doing more spot monitoring to check efficacy and adapt.

Guadagno, L., Vecchiarelli, B. M., Kretser, H., & Wilkie, D. (2021). Reflection and Learning from Failure in Conservation Organizations: A Report for the Failure Factors Initiative.

LeFlore, M., Bunn, D., Sebastian, P., & Gaydos, J. K. (2021). Improving the probability that small‐scale science will benefit conservation. Conservation Science and Practice, October.

Pressey, R. L., Visconti, P., McKinnon, M. C., Gurney, G. G., Barnes, M. D., Glew, L., & Maron, M. (2021). The mismeasure of conservation. Trends in Ecology & Evolution, 36(9), 808–821.

p.s. Anyone know what species of ground yellowjacket this is? It was drinking from the rim of my birdbath!