
Monday, February 3, 2025

February 2025 science summary

Wetlands institute marsh

 Good day,


I have a mixture of papers as usual this month, covering using AI to do lit review (on natural climate solutions), more on seaweed to reduce cattle emissions, one on fire management, and one bad study on return-to-office mandates.

Also, I have no idea how the cost and complexity of this new AI monitoring tool (SPARROW, https://blogs.microsoft.com/on-the-issues/2024/12/18/announcing-sparrow-a-breakthrough-ai-tool-to-measure-and-protect-earths-biodiversity-in-the-most-remote-places/) compares to simpler camera trap setups, but I'm especially curious whether the acoustic component might be able to detect amphibians or birds unlikely to be visible in a camera trap (thanks to Tara Schnaible for passing this on). If any of you are using camera traps and/or microphones for biomonitoring I'd love to hear your thoughts!

If you know someone who wants to sign up to receive these summaries, they can do so at http://bit.ly/sciencejon (no need to email me).


CLIMATE CHANGE AND AI METHODS:
Chang et al., 2024 uses specialized large language models to assess evidence for 11 co-impacts (positive or negative) of natural climate solutions (NCS). They extracted data from 257,266 studies (after screening 2.3 million)! Most (87%) focused on management, ~30% covered protection and/or restoration, but <2% mentioned cost or equity or Indigenous peoples or local communities. Fig 2 has a nice breakdown of NCS pathways, biome, co-impacts, etc., and Fig 4b maps countries by both evidence and mitigation potential. Paraguay and Republic of the Congo pop as highest carbon w/ lowest evidence (needing research), while the Americas and E/SE Asia have high evidence and high carbon (needing implementation). Fig 3 shows the volume of evidence for how 22 NCS pathways intersect with the 11 co-impacts (9 of which relate to human well-being). They argue that West and Central Africa deserves special research attention as evidence and human development index are relatively low while NCS potential and threatened species are high. One key caveat: evidence volume does not mean evidence quality. For example, there are many papers on sowing legumes in pasture to reduce enteric methane, but I'm not aware of any that are both accurate and precise enough for implementation (since putting them in the wrong places would lead to increased emissions). But this is still a great example of using AI to review a much larger body of evidence than would be possible manually.
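For anyone curious what this kind of AI-assisted screening looks like in practice, here is a minimal sketch (this is NOT Chang et al.'s pipeline; the model name, prompt wording, and co-impact labels are placeholders I made up for illustration):

```python
# Minimal sketch of LLM-assisted evidence screening, NOT Chang et al.'s
# actual pipeline; model name, prompt, and labels are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

CO_IMPACTS = ["water quality", "food security", "biodiversity"]  # toy subset

def screen_abstract(abstract: str) -> str:
    """Ask the model which co-impacts (if any) an abstract reports evidence for."""
    prompt = (
        "Does this abstract report evidence (positive or negative) for any of "
        f"these co-impacts of natural climate solutions: {', '.join(CO_IMPACTS)}? "
        "Reply with the matching co-impacts, or 'none'.\n\nAbstract: " + abstract
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The hard parts a sketch like this skips are validating the model's labels against human coders and running it affordably across 2.3 million papers.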


CATTLE AND CLIMATE CHANGE:
Meo-Filho et al. 2024 is an important study (part of a special issue of papers around the sustainability of animal foods and plant alternatives, https://www.pnas.org/topic/561). There's been lots of research with hyperbolic claims about red algae reducing methane production in cattle (mostly in vitro, i.e., tests in a petri dish). I believe this is the first paper to measure methane reductions not only 1) in vivo (a test of what happens in real animals) but 2) in the grazing phase of their life cycle. Most American cattle graze for very roughly 15 months before spending 3 months in a feedlot, and they emit more methane per day when grazing. Only 15% of total cattle production GHGs come from the feedlot phase, so even big reductions there can't touch cattle's high carbon footprint. But this paper found supplementing with the seaweed reduced GHGs relative to control animals by 38% over 90 days with no side effects! That is great news and this is work worth following up, BUT a few key caveats: 1) this was a study of only 24 animals, 2) the animals began at 15 months, when they would typically go to a feedlot; to show how much this could really reduce the total carbon footprint of cattle (and whether any side effects crop up eventually), it would need to be tested on calves from when they are weaned off milk to when they go to slaughter, and 3) variations in cattle breed, the dominant type of grass they're eating, and climate could all affect the results. So it's very good news, but does not yet mean it's possible to produce a burger w/ 1/3 less carbon. Also, while the numbers are hotly contested, remember that the carbon footprint of beef is very roughly 10x that of pork or chicken (~50x the GHG of beans), so even if the reduction IS scalable, beef will still be a high carbon food.
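To make that last point concrete, here's the back-of-envelope arithmetic (a sketch using only the very rough ratios above, and optimistically pretending the 38% cut applied to beef's entire footprint, which has not been shown):

```python
# Back-of-envelope arithmetic with the rough ratios cited above;
# real footprints vary a lot by study and production system.
beef = 10.0            # beef GHGs relative to pork/chicken (= 1.0)
beans = beef / 50.0    # beef is ~50x beans, so beans = 0.2 on this scale

# Optimistic case: the 38% cut applies to the WHOLE footprint (not yet shown)
beef_with_seaweed = beef * (1 - 0.38)
print(beef_with_seaweed)          # 6.2: still ~6x pork or chicken...
print(beef_with_seaweed / beans)  # ...and ~31x beans
```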

Cowley et al. 2024 is a paper with very exciting results about reducing the carbon footprint of beef. They fed Australian Angus cows in a feedlot red seaweed (Asparagopsis) infused into vegetable oil, added to their diet at three concentrations of CHBr3 from the seaweed, plus a control w/ no seaweed (5 cows for each of the four treatments). The medium and high doses reduced enteric methane (CH4) by 98% and 99%, with no significant reported side effects. Other studies have seen less impressive reductions in live cattle (e.g., George et al. 2024 found ~50% reduction), although in petri dishes 99% reductions have been achieved before. The discussion covers potential factors influencing the CH4 reduction, including dose, how much of the feedlot diet is grain vs. grass, and cattle breed. Caveats: one high dose cow (20% of the sample) got acidosis and had to be removed, all groups had some health problems, and the medium and high doses did increase neutrophil and platelet counts. Overall: at medium and high doses the methane was almost eliminated without affecting meat taste or safety, and without clear cattle health impacts. So the study is very promising, but needs to be replicated with larger groups and in different contexts (including eventually a study using Asparagopsis during both the entire grazing phase and the feedlot phase).


FIRE MANAGEMENT:
Lacey et al. 2024 highlights opportunities for prescribed fire and/or thinning (removing trees and/or brush) to proactively reduce risk for vulnerable communities and areas of high ecological value. Socially vulnerable communities are less likely to get this preventive fire management, but the authors find there are "win-win" opportunities for places high in: ecological value (specifically biodiversity, connectivity, and climate resilience), ecosystem services (carbon and drinking water), social vulnerability (see Table 1 for indicators), and the potential for wildfire mitigation to be effective (the last is based on fire hazard and vegetative cover). Fig 2c has their top priority areas, including big chunks in the Appalachians, Ozarks, and Rocky Mountains (esp. CO and ID), and Fig 4 shows in yellow where those overlap with the USFS top 10 firesheds. Fig 3 shows how much better their priority areas perform against several indicators than a focus solely on fire mitigation potential; as you'd expect, optimizing for social vulnerability, ecosystem services, and ecological value results in sites that score a lot better on those indicators! That may seem obvious, but to reframe: in conservation we OFTEN don't include certain objectives in our planning but still expect to have great stories about all the co-benefits we got anyway. This paper is a great pitch for the value of inclusion; planning for what you care about up front will help you focus work on the places where you can have the most impact.


RETURN TO OFFICE MANDATES:
I already put this on LinkedIn but also wanted to flag here that I thought the recent Ding & Ma 2024 study from the University of Pittsburgh (which has had a lot of media attention w/o scrutiny) was misleading. They found that 1) return to office mandates don't improve employee or company performance and 2) these mandates are used more to blame employees and "grab power" than to try and improve performance. This doesn't appear to be peer-reviewed research, which makes me more skeptical by default (it could still be right, but there's no screen for bad methods or misleading results). The paper's tone and methods make it look like they set out to prove some preconceived notions rather than exploring what's going on with an open mind, and the way they assert attribution and causation for some of the findings they report appears unsupported by the data. Their interpretations of their data are plausible, but even if you trust the data there are other valid interpretations which are also plausible.

The reason I posted this despite it seeming like a "bad" study is to flag the role of bias in reading science. I believe that if I were required to go to the office more often, it would negatively impact my performance and happiness. I can point to personal experience backing that up (I'm more willing to work longer hours when I don't have a commute, or when I'm getting hungry but don't have food nearby to keep working). But in situations like this where I have an opinion and feel inclined to "right on!" what I read, that's exactly when I need to slow down and read carefully, since I know I have bias leading me to accept certain results and interpretations as convincing. Also: reading a news article about science and wondering if it's misleading? I generally first look for the press release from the host institution, to read a summary the authors would have had a chance to review to reduce accidental misunderstanding (although press releases are still sometimes unreliable, whether by accident or to hype the results: https://business.pitt.edu/return-to-office-mandates-dont-improve-employee-or-company-performance/), then read the actual paper.

Don't assume the press release, or even my summary, is accurate! Always check before you share or act on a research summary.


REFERENCES:
Chang, C. H., Erbaugh, J. T., Fajardo, P., Lu, L., Molnár, I., Papp, D., Robinson, B. E., Austin, K. G., Castro, M., Cheng, S. H., Cook-Patton, S., Ellis, P. W., Garg, T., Hochard, J. P., Kroeger, T., McDonald, R. I., Poor, E. E., Smart, L. S., Tilman, A. R., … Masuda, Y. J. (2024). Global evidence of human well-being and biodiversity impacts of natural climate solutions. Nature Sustainability. https://doi.org/10.1038/s41893-024-01454-z

Cowley, F. C., Kinley, R. D., Mackenzie, S. L., Fortes, M. R. S., Palmieri, C., Simanungkalit, G., Almeida, A. K., & Roque, B. M. (2024). Bioactive metabolites of Asparagopsis stabilized in canola oil completely suppress methane emissions in beef cattle fed a feedlot diet. Journal of Animal Science, 102(April). https://doi.org/10.1093/jas/skae109

Ding, Y., & Ma, M. (Shuai). (2024). Return-to-Office Mandates. SSRN Working Paper. https://doi.org/10.2139/ssrn.4675401

Lacey, L. M., Suraci, J. P., Littlefield, C. E., Busse, B. S., & Dickson, B. G. (2024). Informing proactive wildfire management that benefits vulnerable communities and ecological values. People and Nature, August, 1–15. https://doi.org/10.1002/pan3.10733

Meo-Filho, P., Ramirez-Agudelo, J. F., & Kebreab, E. (2024). Mitigating methane emissions in grazing beef cattle with a seaweed-based feed additive: Implications for climate-smart agriculture. Proceedings of the National Academy of Sciences, 121(50), 2–9. https://doi.org/10.1073/pnas.2410863121


Sincerely,
 
Jon
 
p.s. the photo is of a salt marsh at The Wetlands Institute in Stone Harbor, NJ

Tuesday, October 1, 2024

October 2024 science summary

Sylvester the hairless cat

Greetings,

My knee is still healing and as I write this I have COVID. So: I am again behind on proper science reading. BUT I have some fun AI content anyway plus one flashy new paper.

AI TO GENERATE PODCASTS?!
Many of you requested updates on important AI features, and this month is a biggie: Google's Gemini Notebook (https://notebooklm.google/), which among other more established features turns a body of text into an NPR-style podcast (a blog about the feature is here: https://blog.google/technology/ai/notebooklm-audio-overviews/).
 
I would share a direct link to the audio it produces to avoid the energy use of you all trying it on your own, but my employer's AI guidance doesn't let me do that, so I'll just say: watch a little less streaming video this week to reduce your carbon footprint, and give this a go. Just 1) upload a document to https://notebooklm.google, and 2) click Generate in "Deep dive conversation". Wow. The results are conversational, natural, and interesting, with relatively few hiccups / artifacts that I noticed (though not zero), although it does consistently hype up each paper (both the magnitude and significance of results are exaggerated, albeit not quantitatively). I could see this being useful to help you digest that report you can't bring yourself to read; listen to it while jogging instead. Not that you should ever trust an AI summary at this point to be accurate (please don't!), but for legit reports it appears to be much better than not reading the report at all, from my limited testing.

Here are the results of four quick tests: 

1. A colleague fed in a research report which was written for a lay audience already (https://www.pewresearch.org/data-labs/2024/05/23/electric-vehicle-charging-infrastructure-in-the-u-s/), and I was gobsmacked by the quality. Not as good as a legit pro quality podcast, but MUCH better than something I would put out and a hell of a good first draft.
2. What about a peer-reviewed science article which is much more technical and hard to read? I uploaded my 2nd favorite paper that I'm first author on (since my favorite is written in plain English already), which is about how spatial resolution impacts accuracy, cost, and making the right decision: http://onlinelibrary.wiley.com/doi/10.1002/rse2.61/full Once again it did really surprisingly well. There was one weird audio screwup, and they focused a moderate amount on an unimportant finding, but they also had a line I loved: “sharper images leading to sharper insights” (meaning higher spatial resolution of satellite imagery is more accurate and makes for better decisions). Damn that's good!
3. OK, but what about a paper even a proper scientist would have a hard time slogging through? I literally searched my reference library notes for the phrase "hard to read" and found this doozy looking at how cow genetics affects methane emissions: https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1005846 The writing is very technical, BUT at least for a reader like me you can skim and sift out some high level conclusions, so I was curious to see how it would perform. The answer is still quite well, with one big caveat! The use of sire progeny groups was charmingly described as "like a bovine family reunion, but for science" and they had some great metaphors, like how a microbe has a "key" to unlock the "lock" of some important sugar molecules. BUT they described the results as finding a MASSIVE and INSANE impact of genetics on methane and how it was a TOTAL GAME CHANGER. In fact, while the authors never clearly state this (though it's obvious in Fig 1), genetic impacts are MUCH smaller than the impact of diet (grass / forage vs. concentrated feed). So like a cow's stomach, this tool will regurgitate back what you put in w/o any filtering or warning.
4. What if you upload a blatantly false document? You guessed it - the tool performs the same way and makes it seem engaging and interesting. A colleague uploaded an Onion article and the tool did NOT waver or flag any parody or provide any warning, it just made it sound like real news. This one is pretty scary.

Note that there were literally no other prompts or edits to produce this, just upload a PDF and click "generate" which I find staggering. I would have believed that these were real podcasts w/ real people, and it definitely took the core content and made it interesting and engaging (albeit over-hyped). Hearing three of these in a row, there is some noticeable repetition in key phrases and style which will make my spidey-sense tingle if I hear similar podcasts out in the wild not attributed to Gemini Notebook.

As the always wise Bob Lalasz from Science+Story noted - this tool cannot (yet) simulate specific voices like a researcher or celebrity, and the output will quickly saturate and land as inauthentic. But at this moment it's both a tool that can provide some useful and engaging summaries, AND could be a way to make misinformation seem more appealing, as the Onion example shows.

Enough AI, here's a review of one actual paper I read w/o AI helping:

CATTLE AND CLIMATE CHANGE:
OK, for years people have been hyping the potential of a kind of seaweed (Asparagopsis) to reduce methane emissions in cattle. But the in vitro evidence was mixed, with potential toxicity and downsides a concern if the dose wasn't just right. George et al. 2024 is a live trial with good news! They found methane emissions from cattle given an Asparagopsis additive were cut roughly in half compared to control, with 6.6% higher weight gain per unit of feed and no substantial impacts on quality or health (fat color maybe a bit better, tenderness maybe a bit worse). The methane reduction peaked at day 21 and declined afterwards, but since most cattle are in feedlots only ~3 months (ranging from 1.5 to 4) the decline after day 100 is pretty moot. That's a lot of potential! The only potential downsides were ~50% higher bromine residues in kidney and muscle (I couldn't quickly find guidance on safe levels). Caveats: 1) there were no conflicts declared, but the authors appear to almost all work as feedlot consultants and it's single rather than double-blinded, and 2) in the US the feedlot phase is only about 15% of the lifecycle emissions of a cow (and of that, some is from growing crops and nitrous oxide) so the TOTAL impact on CO2e / kg beef is not as dramatic. Overall my take is 1) this should absolutely be tested and replicated more - anything that reduces the very high carbon of beef is worth pursuing. BUT 2) if this is marketed as "green beef" or license to avoid reductions, it could be net harmful for the climate. And 3) IF spraying this solution on grass worked similarly and didn't inhibit development of calves, the impact could be much higher (since ~80%ish of a cow's life is grazing prior to feedlot, and methane emissions are higher on grass than feed). So let's test that too. There's a news article about this at https://www.theguardian.com/australia-news/article/2024/aug/18/feeding-seaweed-supplement-to-cattle-halved-methane-emissions-in-australian-feedlot-study-finds
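Here's a rough sketch of why a feedlot-only methane cut has a limited effect on lifecycle emissions (the enteric share of feedlot GHGs below is my assumed placeholder for illustration, not a number from the paper):

```python
# Rough illustration; 'enteric_share' is an ASSUMED placeholder value.
feedlot_share = 0.15   # ~15% of lifecycle beef GHGs come from the feedlot phase
enteric_share = 0.70   # assumed fraction of feedlot GHGs that are enteric methane
methane_cut = 0.50     # ~50% cut in enteric methane found by George et al. 2024

lifecycle_cut = feedlot_share * enteric_share * methane_cut
print(f"{lifecycle_cut:.1%}")  # ~5.2% of lifecycle emissions
```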

As always, if you know someone who wants to sign up to receive these summaries, they can do so at http://bit.ly/sciencejon (no need to email me).


REFERENCES:

Fisher, J. R. B., Acosta, E. A., Dennedy-Frank, P. J., Kroeger, T., & Boucher, T. M. (2018). Impact of satellite imagery spatial resolution on land use classification accuracy and modeled water quality. Remote Sensing in Ecology and Conservation, 4(2), 137–149. https://doi.org/10.1002/rse2.61

George, M. M., Platts, S. V., Berry, B. A., Miller, M. F., Carlock, A. M., Horton, T. M., & George, M. H. (2024). Effect of SeaFeed, a canola oil infused with Asparagopsis armata, on methane emissions, animal health, performance, and carcass characteristics of Angus feedlot cattle. Translational Animal Science, 8(August). https://doi.org/10.1093/tas/txae116

Roehe, R., Dewhurst, R. J., Duthie, C.-A., Rooke, J. A., McKain, N., Ross, D. W., Hyslop, J. J., Waterhouse, A., Freeman, T. C., Watson, M., & Wallace, R. J. (2016). Bovine Host Genetic Variation Influences Rumen Microbial Methane Production with Best Selection Criterion for Low Methane Emitting and Efficiently Feed Converting Hosts Based on Metagenomic Gene Abundance. PLOS Genetics, 12(2), e1005846. https://doi.org/10.1371/journal.pgen.1005846

Sincerely,
 
Jon
 
p.s. This hairless cat named Sylvester lives near me, is extremely friendly, and is as weird to touch as you might guess

Thursday, January 25, 2024

December 2023 science summary

Anglerfish pumpkin / jack-o'-lantern

 Hi,


On Friday Nov 17, the earth was 2C warmer than the historic pre-industrial average (1850-1900), and 1.17C over the 1991-2020 average. This does not mean the earth has warmed 2C yet! That would require temperatures staying far above average for a sustained period. But it's still not great news. https://wapo.st/3MLWDes

This month I have three science articles but am also sharing some informal thoughts about how scientists might want to consider using (and not using) generative artificial intelligence tools.

If you know someone who wants to sign up to receive these summaries, they can do so at http://bit.ly/sciencejon (no need to email me).

CLIMATE CHANGE:
Knauer et al. 2023 has good news - better modeling estimates forests could sequester more carbon than we thought. But it's likely very small good news. Their best case is 20% more "gross primary productivity" (GPP, energy captured by photosynthesis), BUT a) that's using an extremely unlikely 'worst case' climate scenario (RCP8.5) which is actually hard to achieve, and b) only a fraction of GPP ends up sequestered as carbon (see Cabon et al. 2022 for more). Since forests offset roughly 25% of annual human emissions, the results likely mean <1% of annual emissions could be offset. I'll take it, but we still need to reduce gross emissions as fast as possible.
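The implicit arithmetic, as a sketch (the sequestered fraction below is my assumed placeholder for illustration; see Cabon et al. 2022 for why it's well below 1):

```python
# Rough arithmetic behind the "<1% of annual emissions" ballpark;
# 'sequestered_fraction' is an ASSUMED illustrative value.
forest_offset = 0.25         # forests offset ~25% of annual human emissions
gpp_increase = 0.20          # Knauer et al.'s best-case GPP increase
sequestered_fraction = 0.15  # assumed share of extra GPP ending up as stored carbon

extra_offset = forest_offset * gpp_increase * sequestered_fraction
print(f"{extra_offset:.2%}")  # 0.75% of annual emissions
```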

CATTLE RANCHING:
Chiaravalloti et al. 2023 assessed how well cattle ranches in the Brazilian Pantanal (among the world's largest wetlands) aligned w/ Elinor Ostrom's principles (for sustainable use of natural resources; see Table 2 for a nice summary). They interviewed 49 cattle ranchers, other people working in the beef supply chain, conservationists, and policy makers. Flooding, very low stocking density, lack of transportation, and the fire regime all make ranching in the Pantanal unusual. They found the Pantanal ranches do well on the first 3 principles: clearly defined boundaries, appropriate rules for resource use reflecting local conditions, and collective decision making. But the other principles are lacking: limited monitoring, no graduated sanctions, lack of accessible conflict resolution, lack of recognition for self-governance on sustainability, and no nested enterprises to coordinate governance and monitoring. They make a series of specific recommendations to address these deficits, including celebrating early examples of things that are working well.

Gomes et al. 2023 is a case study in the Brazilian Pantanal assessing 14 cattle ranches (cow-calf operations) using the Fazenda Pantaneira Sustentável (FPS) tool. The tool assesses 1) financial performance (costs of management, inputs, labor, etc., along w/ gross income), 2) productive performance (which favors native grass forage availability and producing calves), and 3) environmental performance (landscape diversity conservation index, which favors diverse vegetation types that have been maintained on the ranch), and combined them into a composite score. Table 2 has the results and highlights how much variation there is across ranches. Table 3 has an easier to read narrative summary of how the ranches are performing. They recommend using the relative high performing ranches as baselines for what performance level the others should aspire to.

THOUGHTS ON GENERATIVE ARTIFICIAL INTELLIGENCE (AI):
I am in no way an expert on AI. I am a person who has played with a variety of tools, and who is sometimes asked for my opinion. The thoughts here are my own, and don't reflect the views of my employer or anyone who actually knows what they're talking about. They are general guidance skewed by what I've tried, and the tools evolve fast so they could already be wrong. Two things to keep in mind throughout: 1) when you ask AI for answers, it often confidently provides wrong answers, and 2) don't put sensitive / non-public information in there, as some of these tools have the right to reuse or share what you put in. Watch out for those!

With that caveat, I wanted to share some suggestions for how to use generative AI (including large language models [LLMs] like ChatGPT and Bard, plus image generating tools like DALL-E). Other kinds of AI are not included. I split use cases into three categories:

GREEN: relatively safe uses/ DO:

  • Reword emails / blogs / reports, including for length or tone or clarity. LLMs perform very well at producing text which is clear and understandable to a general audience, with few to no grammatical errors. It's also surprisingly good at adjusting emotional tone, e.g., doctors are using ChatGPT to write more empathetic emails to patients. Again, be wary of putting in sensitive info. A final edit and review for factual accuracy is essential. 
  • Use Elicit.com for screening a set of science paper PDFs (it produces a spreadsheet with summaries, methods, etc. which a researcher can use to decide where to start reading). Use “detailed summary” as the shorter summary leaves important stuff out (the link goes to a detailed review I wrote of Elicit). This used to be free but they charge you now.
  • Help finding other kinds of information hard to find with traditional search engines. For example, a search for strategic planning frameworks for nonprofits (roughly similar to the conservation standards) was mostly unproductive, but similar queries to Bard produced an excellent science paper with a comparison between 5 frameworks.
    • Note that this only means looking for sources you will actually read, NOT asking it to pull out facts. It's a great way to find references you may otherwise miss.
  • Help finding hotels that meet criteria you can’t easily filter on in other travel sites (like quiet, dining options, offering special rates, etc.). Again - read reviews and the hotel website to verify the info, in some cases I was offered hotels that did not meet my criteria, but in other cases it helped me.
 
YELLOW: offers value but also some risk/ CAUTIOUSLY DO:
  • Look for key facts buried in long reports / science papers (either w/ ChatPDF or online LLMs) - then verify those facts are real / correct / actually in the source (ChatPDF will often tell you the correct page number for a factual assertion if asked). It is typically faster than reading long documents or searching through them (e.g., I used it for the suite of IPCC AR6 reports). The 'cautiously' part is that I can't stress enough that these tools often make things up and provide fictional sources!
  • Summarize science papers or other long reports in plain language. Again, check any key points for veracity. Here's a long review of how ChatGPT summaries of papers compare to my own, and another one evaluating Elicit's short and detailed summaries of papers I co-authored.
  • Write sample code you either don't know how to write or that would take a long time. It may perform poorly and/or be hard to debug, but may be ‘good enough’ in cases where time is limited. On the other hand, in some cases it could introduce security and/or performance risks. This is best when you know how to code (and understand code you read) and are looking for sample code to start with.
  • Use Elicit.com to identify potential issues / problems with science papers. While existing functions around critiques of papers and methodological limitations or conflicts of interest do not work very well, in some cases they do work and have potential if refined. I'll say it again: verify anything it tells you.
  • List common arguments for or against a given topic. This can provide helpful context but should not be treated as definitive.
  • Produce an initial outline for something like a paper or a report – suggesting possible topics and how to organize them as a way to stimulate thought and get started. My teacher friends also said it can be great for suggesting things to include in a syllabus.
 
RED: use cases to be avoided/ DO NOT:
  • Ask for facts and trust the results w/o carefully checking references (LLMs regularly fabricate false info and provide fictional references [hallucitations] for it)
  • Assume content provided (code, images, text) can be used w/o copyright issues. Often it cannot, and using a bit of LLM-generated content can screw w/ the copyright of the bigger report it goes into.
  • Assume LLMs will include caveats or methodological limitations when reporting results from reports (they generally do not)
  • Put sensitive, nonpublic, or other confidential text, data, or code into LLMs
  • Assume you know how the tools work. They change so fast you probably don't. Treat it as a black box which sometimes spits out candy, but sometimes you get those Harry Potter jelly beans that taste like vomit or earwax.


Again, please use the above as ideas you can try out and verify for yourself. Don't trust my judgment on AIs any more than you would trust their assessment of their limitations.


REFERENCES:

Cabon, A., Kannenberg, S. A., Arain, A., Babst, F., Baldocchi, D., Belmecheri, S., Delpierre, N., Guerrieri, R., Maxwell, J. T., McKenzie, S., Meinzer, F. C., Moore, D. J. P., Pappas, C., Rocha, A. V., Szejner, P., Ueyama, M., Ulrich, D., Vincke, C., Voelker, S. L., … Anderegg, W. R. L. (2022). Cross-biome synthesis of source versus sink limits to tree growth. Science, 376(6594), 758–761. https://doi.org/10.1126/science.abm4875

Chiaravalloti, R. M., Tomas, W. M., Akre, T., Morato, R. G., Camilo, A. R., Giordano, A. J., & Leimgruber, P. (2023). Achieving conservation through cattle ranching: The case of the Brazilian Pantanal. Conservation Science and Practice, September. https://doi.org/10.1111/csp2.13006

Gomes, E. G., Santos, S. A., Paula, E. S. de, Nogueira, M. A., Oliveira, M. D. de, Salis, S. M., Soriano, B. M. A., & Tomas, W. M. (2023). Multidimensional performance assessment of a sample of beef cattle ranches in the Pantanal from a data envelopment analysis perspective. Ciência Rural, 53(12), 1–12. https://doi.org/10.1590/0103-8478cr20220595

Knauer, J., Cuntz, M., Smith, B., Canadell, J. G., Medlyn, B. E., Bennett, A. C., Caldararu, S., & Haverd, V. (2023). Higher global gross primary productivity under future climate with more advanced representations of photosynthesis. Science Advances, 9(46), 24–28. https://doi.org/10.1126/sciadv.adh9444

Sincerely,
 
Jon
 
p.s. This anglerfish jack-o-lantern was carved by my wife and me; we got the pumpkin with a really long stem and wanted a theme that would make good use of it

Monday, July 3, 2023

July 2023 science summary

Little mermaid food 
Greetings,

This month I have two articles on climate resilience, one on climate mitigation, and one on science-implementation partnerships. Plus a couple articles on AI as usual.

While it's a Canadian science fair project and not peer-reviewed science, I was interested to see this test of "you catch more flies with honey than vinegar." The experiment found that you catch more flies with honey and vinegar than with vinegar alone, which catches more than honey alone. But bringing it back to the saying: please don't be a jerk regardless.

My obligatory article on AI (and specifically Large Language Models [LLMs] like ChatGPT) is extra-fascinating this month. Can (and should) LLMs help us communicate more empathically? Check out this NY Times article 'When Doctors Use a Chatbot to Improve Their Bedside Manner.' I love the idea of someone wanting to be kind by using the right words, not knowing what to say, and getting help with that. I have found it's very common for people to stay silent when they can't find 'the right words' around illness and death and grief, so I am all for helping people (including doctors) to get unstuck. I asked Bard (Google's LLM) for advice on how to support a friend who is grieving and found it mostly excellent. Not ideal, but really good and something most of us could learn from. The idea of machines helping us to express empathy more effectively is so intriguing and I'm all for it.

The latest AI tool I tried out is "ChatPDF" which lets you upload a PDF and ask the tool questions about it. It works pretty well for some things, but is oddly dull in others. Like for one paper I asked it which species was responsible for most of the primary effect they reported (t CO2e of climate mitigation) and it didn't know. But when I asked it what the contribution was of the species I knew drove >80% of the effect, it reported it numerically. Apparently it was unable to divide the number it knew for the species in question by the total effect size number (which it also knew). But I thought it performed reasonably well with my questions about the IPCC report. TL;DR is that it seems better at finding / extracting info than any kind of reasoning.

If you know someone who wants to sign up to receive these summaries, they can do so at http://bit.ly/sciencejon (no need to email me).


CLIMATE MITIGATION:
Duncanson et al. 2023 estimates how much global forested protected areas may be reducing climate change. They matched forested protected areas to similar forested unprotected areas using data from 2000 (land cover, ecoregion, and biome; with additional control pixels that accounted for population etc. - see Table S1). Then they used the new (2019) GEDI lidar data to estimate aboveground forest biomass in 2020. 63% of forested PAs had significantly higher biomass than matched unprotected areas; on average PAs have 28% more aboveground biomass. Over a third of that effect globally comes from Brazil; Africa had less C-dense forests and more human pressures on both PAs and unprotected areas. As you'd guess, most of the difference in unprotected sites was due to deforestation. But in 18% of PAs carbon was higher than in unprotected sites even though optical sensors didn't detect deforestation (implying LiDAR is detecting avoided degradation and/or enhanced growth in PAs). As a final note, other research has shown that both ICESat-2 and GEDI LiDAR satellites tend to underestimate forest canopy heights (mostly irrelevant here given the matching approach, but good to know for other global estimates).
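For readers who want the gist of the matching approach, here's a minimal sketch (NOT the authors' implementation; the input file, column names, and exact-match stratification are placeholders I made up):

```python
# Minimal sketch of a matched protected vs. unprotected comparison;
# column names and input file are hypothetical.
import pandas as pd

# One row per forested pixel: year-2000 covariates, protection status
# (boolean), and 2020 GEDI-estimated aboveground biomass (Mg/ha).
pixels = pd.read_csv("pixels.csv")

strata = ["biome", "ecoregion", "land_cover_2000"]
protected = pixels[pixels["protected"]]
unprotected = pixels[~pixels["protected"]]

# Compare mean biomass inside vs. outside PAs within matched strata
inside = protected.groupby(strata)["biomass_2020"].mean()
outside = unprotected.groupby(strata)["biomass_2020"].mean()
pa_effect = (inside - outside).dropna()
print(pa_effect.describe())  # positive values = more biomass inside PAs
```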


CLIMATE RESILIENCE:
Anderson et al. 2023 is the latest paper supporting the data in The Nature Conservancy's Resilient Land Mapping Tool (https://maps.tnc.org/resilientland/). They looked for overlap in three layers across the US: biodiversity value (the union of TNC's ecoregional priority areas), resilient sites (places with diverse and connected microclimates), and 'climate flow' (a circuit theory analysis of where wildlife is likely to shift in response to climate change). See Fig 1 for their main results, or the web map is better since you can separate out the three main layers. The way 'biodiversity value' was assessed varies a lot by ecoregion, and some are more ambitious than others (e.g., the biggest biodiversity patch is in the Nebraska Sandhills, but other ecoregions also have some big blocks of intact habitat). So not every green blob is equally high-priority, but collectively it does have representation across all ecoregions which is good. On the main map, both blue and dark green blobs offer the most value for climate resilience, but again the web map lets you see the continuous data.

Rubenstein et al. 2023 is a systematic review of how documented range shifts (by plants and animals, presumably in response to climate change) compare to predictions. Across 311 papers, only 47% of shifts due to temperature were in expected directions (higher latitudes & elevations; marine movement to deeper depths was seen but was non-significant). See Fig 4 for how results varied by taxonomic group, ecosystem type, and type of shift. Not many studies looked at precip, but of those that did only 14% found species moving to stay in a precip niche. Note: this means simple assumptions of how species will move are of limited value, but NOT that local or regional predictions are inherently flawed. The authors note that local predictions of changing temp and precip will often depart from these simple assumptions, and other factors like water availability, fire, etc. are likely to be relevant. A final note on the last page was helpful: not all range shifts have equal relevance to management. In some cases a few individuals are moving to new places but most of the wildlife population doesn't shift at all. Both shifts AND non-shifts have implications for how management should change to keep species and ecosystems healthy! There is a LOT of nuance and variation in this paper, and a very detailed methods section with good recommendations for how scientists should continue these investigations.


SCIENCE-IMPLEMENTATION PARTNERSHIPS:
Carter et al. 2020 is a call for better coordination of science across landscapes in the Western US to better inform land management. They walk through 5 examples of how it has worked (standard monitoring for national parks, tools to help restore arid & semi-arid landscapes, predictive soil maps of where reclaiming disturbed land could work, frameworks for sage-grouse monitoring, and targeted interventions to improve big-game connectivity). They ask agencies to better support boundary-spanning partnerships w/ scientists, and to make more specific asks to scientists about what information they need. I'm not convinced that's likely; White et al. 2019 and others have found land managers don't always see science as a key input, they're often too busy to even know where they need help, and a high-engagement partnership may not always be the best pitch to agency staff who are stretched thin.


REFERENCES:
Anderson, M. G., Clark, M., Olivero, A. P., Barnett, A. R., Hall, K. R., Cornett, M. W., Ahlering, M., Schindel, M., Unnasch, B., Schloss, C., & Cameron, D. R. (2023). A resilient and connected network of sites to sustain biodiversity under a changing climate. Proceedings of the National Academy of Sciences, 120(7), 1–9. https://doi.org/10.1073/pnas.2204434119

Carter, S. K., Pilliod, D. S., Haby, T., Prentice, K. L., Aldridge, C. L., Anderson, P. J., Bowen, Z. H., Bradford, J. B., Cushman, S. A., DeVivo, J. C., Duniway, M. C., Hathaway, R. S., Nelson, L., Schultz, C. A., Schuster, R. M., Trammell, E. J., & Weltzin, J. F. (2020). Bridging the research-management gap: landscape science in practice on public lands in the western United States. Landscape Ecology, 35(3), 545–560. https://doi.org/10.1007/s10980-020-00970-5

Duncanson, L., Liang, M., Leitold, V., Armston, J., Krishna Moorthy, S. M., Dubayah, R., Costedoat, S., Enquist, B. J., Fatoyinbo, L., Goetz, S. J., Gonzalez-Roglich, M., Merow, C., Roehrdanz, P. R., Tabor, K., & Zvoleff, A. (2023). The effectiveness of global protected areas for climate change mitigation. Nature Communications, 14(1), 2908. https://doi.org/10.1038/s41467-023-38073-9

Rubenstein, M. A., Weiskopf, S. R., Bertrand, R., Carter, S. L., Comte, L., Eaton, M. J., Johnson, C. G., Lenoir, J., Lynch, A. J., Miller, B. W., Morelli, T. L., Rodriguez, M. A., Terando, A., & Thompson, L. M. (2023). Climate change and the global redistribution of biodiversity: substantial variation in empirical support for expected range shifts. Environmental Evidence, 12(1), 7. https://doi.org/10.1186/s13750-023-00296-0



Sincerely,
 
Jon

p.s. The photo is of food we made for a little mermaid party, including crab cupcakes (no crab in them, they were vegan), mermaid tail ice cream cones, sugar cookies, and veggie sushi 

Thursday, June 1, 2023

June 2023 science summary

Space pansies

Greetings,


I haven't had much time to read science this month, so I'm just sharing a newspaper article about AI training, and one science article along w/ alternative summaries of it to highlight Bing chat options.

This is a fascinating article about which web sites AI tools are trained on, and what that implies for their accuracy, bias, etc. 
https://wapo.st/42ULddK
When I asked ChatGPT "What are the most critical weaknesses of ChatGPT" one answer was 'Bias in Training Data: The training data used to develop ChatGPT is sourced from the internet and can contain biases, inaccuracies, and offensive content. This can lead to the model reproducing and amplifying these biases in its responses.'

Another good read comes from the New York Times, with detail about how chatbots regularly completely fabricate untrue information (or 'hallucinate'). It's an important read if you ever ask chatbots for what you hope is factual information. How bad can the fabrications be? They have falsely accused people of sexual harassment and bribery, pointing to non-existent sources.


For those who haven't used it yet, Bing's version of ChatGPT has some nice features, namely 1) it provides actual references by default (the normal ChatGPT will provide references upon request, but they often don't have the info ChatGPT provided), 2) you can switch between 'creative' or 'precise' or 'balanced' response styles (I include examples below). To try it out, open Microsoft Edge (it won't work in other browsers), do a search on bing.com, and right below the search box click "Chat" to the right of "Search". Note that as per the article above, the references may still be false!

If you know someone who wants to sign up to receive these summaries, they can do so at http://bit.ly/sciencejon (no need to email me).

SCIENCE COMMUNICATIONS:
Toomey et al. 2023 is a nice reminder that just sharing information doesn't usually change minds. They challenge the idea that facts & scientific literacy lead to research being applied, and that broad communications targeting as many individuals as possible are the most effective way to share those facts. Instead they recommend appealing to values and emotions, and strategically targeting audiences by considering social networks (drawing on science about behavior change) and social norms. I love the conclusion that "this article may not change your mind" but that they hope it will inspire reflection. I also like the use of the backronym WEIRD (Western, Educated, Industrialized, Rich, Democratic) to describe countries like the US.

Here's a Bing AI summary using the "precise" option (99 words):
'The paper “Why facts don’t change minds: Insights from cognitive science for the improved communication of conservation research” discusses four common myths about how to best communicate science for decision-making. These myths are: that facts change minds, scientific literacy will lead to enhanced research uptake, individual attitude change will shift collective behaviors, and broad dissemination is best. The article provides four alternative insights that can support effective science communication and impact: engaging the social mind for optimal decision-making, understanding the power of values, emotions, and experience in swaying minds, changing collective behavior, and thinking strategically for biggest impact.'

Here's a Bing AI summary using "creative" (262 words):
'The article argues that conservation scientists need to update their models of science communication and behavioral change based on insights from cognitive science. It challenges four common myths about how to best communicate science for decision-making, and provides four alternative insights that can support effective science communication and impact. The myths and insights are:
•    Myth: Facts change minds. Insight: Engage the social mind for optimal decision-making. The article explains that humans are social animals who rely on their peers and trusted sources for information, and that facts alone are not enough to persuade people who have strong prior beliefs or identities.
•    Myth: Scientific literacy will lead to enhanced research uptake. Insight: Understand the power of values, emotions, and experience in swaying minds. The article shows that people tend to filter information based on their values, emotions, and personal experiences, and that scientific literacy does not necessarily lead to more rational or pro-environmental choices.
•    Myth: Individual attitude change will shift collective behaviors. Insight: Change collective behavior by targeting social norms and networks. The article suggests that individual attitudes are often influenced by social norms and networks, and that changing collective behavior requires understanding and leveraging these social factors.
•    Myth: Broad dissemination is best. Insight: Think strategically for biggest impact. The article advises that conservation scientists should be more strategic and selective in their communication efforts, and focus on the most relevant and influential audiences, messages, and channels.
The article concludes by calling for more interdisciplinary collaboration between conservation scientists and cognitive scientists to design and evaluate effective science communication interventions for conservation impact.'

Note I started w/ "balanced" but found it the least useful of the three so am skipping it here.


REFERENCES:
Toomey, A. H. (2023). Why facts don’t change minds: Insights from cognitive science for the improved communication of conservation research. Biological Conservation, 278(December 2022), 109886. https://doi.org/10.1016/j.biocon.2022.109886

Sincerely,
 
Jon
 p.s. These 'space pansies' always make me think my elevated planter has somehow become sci-fi whenever I walk by

Wednesday, March 1, 2023

March 2023 science summary

Vader trying to play while I work

Happy March!

A paper I'm an author on just came out (James et al. 2023) and I'm super excited about it. It looks at how science and conservation staff at The Nature Conservancy perceive gender equity in their work, and I learned a lot while working on it. Please check out the summary below, and read the whole paper if you have time, or at least read this blog overview.

Also, many of you were intrigued by the use of ChatGPT last month, so I thought I'd try another AI tool to highlight what it does well and what its limitations are. It's part of elicit.org and it's called "TL;DR papers" (for too long; didn't read). I first read about it here. Elicit does many cool things, but I tested just one feature: uploading a science article and getting: 1) a 1-sentence summary of the abstract, 2) key info about the paper's design and methods, 3) critiques of the paper if it finds them, and 4) the ability to ask specific questions about the paper. I put what I did and the results below under "AI RESEARCH TOOLS." The TL;DR is that I highly recommend the "detailed abstract summary" over the default ones. I also like this guide to things to use AI for (thanks to Bob Lalasz for sending it).

If you know someone who wants to sign up to receive these summaries, they can do so at http://bit.ly/sciencejon (no need to email me).

GENDER AND CONSERVATION:
James et al. 2023 asked over 900 science & conservation staff of The Nature Conservancy about their careers and influence, and how they perceived their gender as impacting that. We found that women had less influence, experienced many barriers to their careers (including harassment, discrimination, and fear of retaliation for speaking out), and that men overestimated gender equity. Only have 5 minutes? Skip to the recommendations on page 7 (we ask orgs to: show public leadership on equity, improve transparency and accountability, diversify teams and improve career pathways for women, be flexible, include training and mentoring as part of broader change, help women connect, address sexual discrimination and harassment, and consider intersectionality). If you have 15 minutes more, read the quotes in Table 2 (p5-8) because they're really compelling and illustrative. Or if you're with the half of men and 3/4 of women in our sample who think we have more to do on gender equity (rather than that we've already "gone overboard" or that it's not an issue as some men reported), just read the whole damn paper because there's a lot of interesting detail and nuance in the results. I learned a ton while helping out on it, and I'm excited to start advocating for the recommendations. You can read it at: https://bit.ly/TNCgenderpaper or a short blog at https://blog.nature.org/science-brief/gender-bias-holds-women-back-in-conservation-careers/ 


AI RESEARCH TOOLS:
I decided to test out Elicit.org on some papers I'm an author on (to make it easy for me to judge how good or bad the info was). I picked my 5 papers that have the most citations (I'm a minor author on all but one of those), the paper which has generated the most response outside of academia (Fisher / Wood / Bradford / Kelsey 2020), and the two most recent papers I've worked on (James et al. 2023 and James et al. 2022). I used the "upload PDFs" option and uploaded all 8 PDFs at once. I pulled info from the summaries, and for each paper asked Elicit one question that I knew the paper had a good answer to. AFTER I was done I discovered Elicit has "detailed summaries" which you get by clicking in "search for paper information", so I went back and added those in with brief comments. I list direct output from Elicit in italics.

Bradford et al. 2019. Soil carbon science for policy and practice. PAPER BLOG
Elicit summary: "Controversy about the role soils might play in climate change mitigation is undermining actions to restore soils for improved agricultural and environmental outcomes."
Jon comments on summary: We had a 2-sentence abstract, and this is the key one. I wish it had the other key message of the paper (we know how to build soil C, and that it can help farmers) but that wasn't in our abstract so that's on us.
Elicit detailed summary: "This paper discusses soil carbon science for policy and practice. It argues that controversy about the role soils might play in climate change mitigation is undermining actions to restore soils for improved agricultural and environmental outcomes." I really like this summary.
Jon comments on other findings: They classified the paper as a literature review, but it's a 'comment' so more opinion although it draws on literature. It flagged we didn't mention our funding which surprised me but it was correct! It was interesting that they said the paper had "0 citations" (it has 88) but then it did list the text from a few papers citing this paper which was cool.
Jon's open question: "What does building soil carbon do?" 
Answer: "Building soil carbon increases yields and reduces erosion." That is pulled from the paper and is accurate.

Fisher et al. 2018. Impact of satellite imagery spatial resolution on land use classification accuracy and modeled water quality. PAPER BLOG
Elicit summary: "The choice of spatial resolution affected the estimated return on investment in watershed conservation for the municipal water company that finances much of this conservation."
Jon comments on summary: I struggled with this one, but have to agree that is the most important sentence from the abstract. I really wish they hadn't cut the 2nd half of the sentence though: ", although it is unlikely that this would have affected the company's decision to invest in the program." My TL;DR would be something like "Using free coarse imagery reduced the accuracy of our water quality modeling, but the coarse data sufficed for the purposes of the intended decision maker (a water treatment company)."
Elicit detailed summary: "This paper compares the impact of satellite image spatial resolution on land use classification and water quality modeling. It finds that higher resolution data is more accurate but also more expensive." This is the key result, but the more interesting implication is how that wouldn't have affected the decision the research was meant to inform.
Jon comments on other findings:
The test and outcomes were good, but the answer to "who were the participants" was "Brazil," which isn't right. It even successfully pulled the funding source from the paper's main text (this is the only article it worked for). It listed four "possible critiques" of the paper which I was excited to see and evaluate! All four papers basically echoed some of our findings, so it was interesting to see they showed up as critiques (they were not): it's a ton of work to use very high resolution data (need to do lots of precise ground references, long analysis times, etc.), data resolution choice can affect your results, and one time when high resolution imagery makes sense is to classify small and heterogeneous features.
Jon's open question: "When is it good to use low resolution data?"
Answer: "The paper concludes by saying that there are key criteria to assist in choosing an appropriate spatial resolution for different contexts, so it depends on the context." This is true, but Table 5 has a summary of what those criteria are, so I was bummed it didn't pull from the list of 12 answers we had to the question like: low budget, large study area, larger patches, need for frequent updates, etc.

Fisher / Wood / Bradford / Kelsey 2020 (we had equal contributions). Improving scientific impact: How to practice science that influences environmental policy and management. PAPER BLOG
Elicit summary: "Scientists who want to improve the impact their research has on decision-making should identify and understand the audience."
Jon comments on summary: Oh boy, this is a case where shorter is not better. We had 4 steps and it cut off the last 3. Here's the sentence it pulled this from "We outline a set of practical steps for scientists who want to improve the impact their research has on decision-making: (a) identify and understand the audience; (b) clarify the need for evidence; (c) gather 'just enough' evidence; and (d) share and discuss the evidence"
Elicit detailed summary: "This paper outlines a set of practical steps for scientists who want to improve the impact their research has on decision-making. These steps include: (a) identifying and understanding the audience; (b) clarifying the need for evidence; (c) gathering "just enough" evidence; and (d) sharing and discussing the evidence." Yup - that's a good summary albeit almost a direct quote!
Jon comments on other findings: N/A (nothing was interesting enough to discuss)
Jon's open question: "Why should scientists read this paper?"
Answer: "The paper provides advice on how to make sure that 1's research has the desired impact." OK, it's weird to say "1's" for "one's", and I was hoping for noting how much research goes to waste, but this is not a bad answer.

James et al. 2022. Gender and conservation science: Men continue to out-publish women at the world’s largest environmental conservation non-profit organization. PAPER BLOG
Elicit summary: "The proportion of women to men remains below gender parity at TNC."
Jon comments on summary: This is a great example of where truncating a sentence makes it inaccurate. While this text does appear in our abstract, it's unclear from Elicit that women are PUBLISHING fewer scientific articles than men, despite the fact that 45% of conservation and science staff are women.
Elicit detailed summary: "This paper looks at the number of women publishing in conservation science at the Nature Conservancy (TNC) from 1968-2019. It finds that women are underrepresented, making up only 36% of authors, 31% of first authors, and 24% of last authors." This is a good summary and includes probably the most important high level result.
Jon comments on other findings: The question "what did they test" answers "The duration is 1968-2019" (as opposed to something like "The authors tested whether women and men at The Nature Conservancy published science articles at equal rates from 1968-2019"). The outcomes (# women publishing at TNC, proportion of women:men publishing at TNC, proportion of women from the Global South publishing at TNC) are incomplete but true. It didn't find the 3 citations Google knows about.
Jon's open question: "How can gender equity in science publishing be improved?"
Answer: "The paper suggests that 'specific strategies and ongoing investments' are needed to address gender diversity among authors, especially women from outside the United States and specifically the Global South." That is frustrating, because it missed that there were seven bulleted and bolded recommendations listed under a heading called "Recommendations for building gender equity in publishing." A better (longer, but actually useful) answer would be "Gender equity in science publishing can be improved via building awareness around bias and systemic barriers, men should collaborate more with women (including in the Global South), include women performing 'invisible labor' as authors, support women to focus on publishing, support women who don't speak English as a 1st language to publish, offer dedicated publishing support for women, and manage impacts of COVID on women" (some of these don't really make sense when shortened like this, though.

James et al. 2023. Gender bias and inequity holds women back in their conservation careers. PAPER
Elicit summary: "Gender bias and inequity holds women back in their conservation careers."
Jon comments on summary: I mean, I'm glad it thinks the title is a good summary! But a summary of the abstract could complement the title. Maybe something like "Surveying about 900 staff of The Nature Conservancy revealed gender equity problems; the authors have recommendations to improve them"?
Elicit detailed summary: "This paper discusses how gender bias and inequity affects women in their conservation careers." I actually think the title is still about as good as this one.
Jon comments on other findings: It got the participants right, and it pulled out a few details relevant to "can I trust this paper?" It couldn't find any citations b/c the paper is so new.
Jon's open question: "What are ways to improve gender equity?"
Answer: "The paper proposes a series of recommendations to improve gender equity in conservation, which involve continual reviewing and adjusting policies, systems, and norms to create a culture that fully leverages the benefits of diversity" Again, I wanted it to know that a question like "what are ways" means I am looking for specifics, not that "ways exist." A better answer could be "The authors recommend that organizations: show public leadership on equity, improve transparency and accountability, diversify teams and improve career pathways for women, be flexible, include training and mentoring as part of broader change, help women connect, address sexual discrimination and harassment, and consider intersectionality."

Kroeger et al. 2014. Reforestation as a novel abatement and compliance measure for ground-level ozone. PAPER BLOG
Elicit summary: "Reforestation is a novel abatement and compliance measure for ground-level ozone."
Jon comments on summary: Again, this is true, but why not go beyond the title? Better to have a longer answer than to waste text by repetition: "Reforestation was predicted to abate ozone at or below the cost of conventional alternatives if land doesn't need to be purchased, and can save money if carbon offsets are generated."
Elicit detailed summary: For some reason Elicit declined to produce a detailed abstract summary for this one; I have no idea why!
Jon comments on other findings: Elicit says "no mention found of funding source" but the paper clearly states that "This analysis was done as part of the collaboration between The Nature Conservancy and Dow Chemical Company and was funded by the Dow Chemical Company Foundation." Also, the one paper listed as a critique of this paper didn't criticize it, and just listed it as a rare case of ecosystem service valuation within industry being published in a peer-reviewed journal.
Jon's open question: "What is the national potential for reforestation to abate ozone?"
Answer: "Reforestation could be a viable, novel approach for abating ground level O3 pollution that complements conventional technology based controls." I was hoping for "Figure 2 maps the places in the contiguous United States where reforestation could be used to abate ozone."

Mcdonald et al. 2009. Urban effects, distance, and protected areas in an urbanizing world. PAPER
Elicit summary: "The median distance from a park to an urban area is already less than 50 km in many regions."
Jon comments on summary: That is not the key point. A better summary would be "A global review of how cities affect protected areas found that 2/3 of effects only apply within 50 km, but this proximity is common in many regions so cities are already having an impact."
Elicit detailed summary: "This paper looks at the effects of urbanization on protected areas. It finds that thousands of protected areas are already impacted by urban areas and many more will be impacted in the future." Not bad, I like my summary better but this is still good.
Jon comments on other findings: Again Elicit missed a partial funding statement "One of us (R.I.M.) was funded by a David H. Smith Conservation Biology Fellowship." It also couldn't access the text of any of this paper's 246 citations!
Jon's open question: "What were the most common threats cities posed to nature?"
Answer: "The most common threats cities posed to nature were "housing and urban areas," "household sewage and urban waste water," "dams & water management," and "logging and wood harvesting."." Sadly this is badly wrong. It misinterpreted a few sentences this was pulled from. To be fair, the best answer is in graphic format (Fig 3) and I don't think this tool can yet read the text of images and extract insights. A better answer would be "Invasive species, housing & urban areas, fire & fire suppression, and dams & water management / use."

Reddy et al. 2017. Advancing Conservation by Understanding and Influencing Human Behavior. PAPER  
Elicit summary: "Integrating two or more behavior change approaches that target multiple, context-dependent factors may be most successful."
Jon comments on summary: True, but a more helpful summary would be broader: "The authors review behavior change models, providing examples and guidance on how to choose an effective one for a given context."
Elicit detailed summary: "This paper discusses how behavioral sciences can help inform conservation policy. It provides a set of guiding questions for applying behavioral insights to conservation policy, including how to define the conservation problem as a behavior change problem, understand behavioral mechanisms, and identify and evaluate behavior change approaches." This is a good summary, no complaints.
Jon comments on other findings: The measured outcomes were wrong, but they got the funding source right (it was in the acknowledgments).
Jon's open question: "What kinds of behavior change interventions use incentives?"
Answer: "The paper mentions that incentives are 1 of 3 behavior change approaches." This is true and it pointed me to the text listing them (awareness, incentives, and nudges). But I was looking for something like "financial or social incentives can be used to create behavior change" as per Fig 3 and a good bit of text. But to be fair I don't think we use the same terms consistently throughout.


REFERENCES:

James, R., Fisher, J. R. B., Carlos-Grotjahn, C., Boylan, M. S., Dembereldash, B., Demissie, M. Z., Diaz De Villegas, C., Gibbs, B., Konia, R., Lyons, K., Possingham, H., Robinson, C. J., Tang, T., & Butt, N. (2023). Gender bias and inequity holds women back in their conservation careers. Frontiers in Environmental Science, 10(January), 1–16. https://doi.org/10.3389/fenvs.2022.1056751 or https://bit.ly/TNCgenderpaper

Sincerely,
 
Jon
 
p.s. This is a photo of our former foster dog Vader letting me know that it was time for hand-chewing and playing, and not time for working.