Thursday, January 25, 2024

December 2023 science summary

Anglerfish pumpkin / jack-o'-lantern


On Friday, Nov 17, 2023, the Earth was 2°C warmer than the historic pre-industrial average (1850-1900), and 1.17°C over the 1991-2020 average. This does not mean the Earth has warmed 2°C yet! That milestone would require temperatures sustained well above average, not a single day's reading. But it's still not great news.

This month I have three science articles but am also sharing some informal thoughts about how scientists might want to consider using (and not using) generative artificial intelligence tools.

If you know someone who wants to sign up to receive these summaries, they can do so at (no need to email me).

Knauer et al. 2023 has good news - better modeling estimates forests could sequester more carbon than we thought. But it's likely very small good news. Their best case is 20% more "gross primary productivity" (GPP, energy captured by photosynthesis), BUT a) that uses an extremely unlikely 'worst case' climate scenario (RCP8.5) which is actually hard to reach, and b) only a fraction of GPP ends up sequestered as carbon (see Cabon et al. 2022 for more). Since forests offset roughly 25% of annual human emissions, the results likely mean <1% of annual emissions could be offset. I'll take it, but we still need to reduce gross emissions as fast as possible.
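To make the back-of-envelope math above explicit, here's a minimal sketch. The 20% GPP gain and the ~25% forest offset come from the text above; the GPP-to-sink fraction is purely my illustrative assumption (the real fraction is uncertain; see Cabon et al. 2022):

```python
# Back-of-envelope check of the "<1% of annual emissions" claim.
# Grounded in the text: best-case +20% GPP (Knauer et al. 2023) and
# forests currently offsetting ~25% of annual human emissions.
# ASSUMPTION: the fraction of extra GPP that becomes a lasting carbon
# sink is set to 0.15 here purely for illustration.

gpp_gain = 0.20              # best-case GPP increase (under RCP8.5)
forest_offset = 0.25         # share of annual emissions forests offset today
gpp_to_sink_fraction = 0.15  # ASSUMED fraction of extra GPP stored as carbon

# Extra share of annual emissions that could be offset
extra_offset = forest_offset * gpp_gain * gpp_to_sink_fraction
print(f"Extra share of annual emissions offset: {extra_offset:.2%}")
```

With any sink fraction below 0.2, the extra offset stays under 1% of annual emissions, which is the point: even the best case barely moves the needle.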

Chiaravalloti et al. 2023 assessed how well cattle ranches in the Brazilian Pantanal (among the world's largest wetlands) aligned w/ Elinor Ostrom's principles (for sustainable use of natural resources, see Table 2 for a nice summary). They interviewed 49 cattle ranchers, other people working in the beef supply chain, conservationists, and policy makers. Flooding, very low stocking density, lack of transportation, and the fire regime all make ranching in the Pantanal unusual. They found the Pantanal ranches do well on the first 3 principles: clearly defined boundaries, appropriate rules for resource use reflecting local conditions, and collective decision making. But the other principles are lacking: limited monitoring, no graduated sanctions, a lack of accessible conflict resolution, little recognition of self-governance on sustainability, and no nested enterprises to coordinate governance, monitoring, and the rest. They make a series of specific recommendations to address these deficits, including celebrating early examples of things that are working well.

Gomes et al. 2023 is a case study in the Brazilian Pantanal assessing 14 cattle ranches (cow-calf operations) using the Fazenda Pantaneira Sustentável (FPS) tool. The tool assesses 1) financial performance (costs of management, inputs, labor, etc., along w/ gross income), 2) productive performance (which favors native grass forage availability and producing calves), and 3) environmental performance (landscape diversity conservation index, which favors diverse vegetation types that have been maintained on the ranch), and combines them into a composite score. Table 2 has the results and highlights how much variation there is across ranches. Table 3 has an easier-to-read narrative summary of how the ranches are performing. They recommend using the relatively high-performing ranches as baselines for the performance level the others should aspire to.

I am in no way an expert on AI. I am a person who has played with a variety of tools, and is sometimes asked for my opinion. The thoughts here are my own, and don't reflect the views of my employer or anyone who actually knows what they're talking about. They are general guidance skewed by what I've tried, and the tools evolve fast, so they could already be out of date. Two things to keep in mind throughout: 1) when you ask AI for answers, it often confidently provides wrong ones, and 2) don't put sensitive / non-public information into these tools, as some of them have the right to reuse or share what you enter. Watch out for those!

With those caveats, I wanted to share some suggestions for how to use generative AI (including large language models [LLMs] like ChatGPT and Bard, plus image-generating tools like DALL-E). Other kinds of AI are not included. I split use cases into three categories:

GREEN: relatively safe uses/ DO:

  • Reword emails / blogs / reports, including for length or tone or clarity. LLMs perform very well at producing text which is clear and understandable to a general audience, with few to no grammatical errors. They're also surprisingly good at adjusting emotional tone, e.g., doctors are using ChatGPT to write more empathetic emails to patients. Again, be wary of putting in sensitive info. A final edit and review for factual accuracy is essential.
  • Use Elicit for screening a set of science paper PDFs (it produces a spreadsheet with summaries, methods, etc., which a researcher can use to decide where to start reading). Use the "detailed summary" option, as the shorter summary leaves important stuff out (the link goes to a detailed review I wrote of Elicit). This used to be free, but they charge you now.
  • Help finding other kinds of information hard to find with traditional search engines. For example, a search for strategic planning frameworks for nonprofits (roughly similar to the conservation standards) was mostly unproductive, but similar queries to Bard surfaced an excellent science paper comparing 5 frameworks.
    • Note that this only means looking for sources you will actually read, NOT asking it to pull out facts. It's a great way to find references you may otherwise miss.
  • Help finding hotels that meet criteria you can't easily filter on in other travel sites (like quiet, dining options, offering special rates, etc.). Again - read reviews and the hotel website to verify the info; in some cases I was offered hotels that did not meet my criteria, but in other cases it helped me.
YELLOW: offers value but also some risk/ CAUTIOUSLY DO:
  • Look for key facts buried in long reports / science papers (either w/ ChatPDF or online LLMs) - then verify those facts are real / correct / actually in the source (ChatPDF will often tell you the correct page number for a factual assertion if asked). It is typically faster than reading long documents or searching through them (e.g., I used this approach for the suite of IPCC AR6 reports). The 'cautiously' part is that I can't stress enough that these tools often make things up and provide fictional sources!
  • Summarize science papers or other long reports in plain language. Again, check any key points for veracity. Here's a long review comparing how ChatGPT summaries of papers compare to my own, and another one evaluating Elicit's short and detailed summaries of papers I co-authored.
  • Write sample code you either don't know how to write or that would take a long time. It may perform poorly and/or be hard to debug, but may be ‘good enough’ in cases where time is limited. On the other hand, in some cases it could introduce security and/or performance risks. This is best when you know how to code (and understand code you read) and are looking for sample code to start with.
  • Use to identify potential issues / problems with science papers. Existing features for critiquing papers (e.g., flagging methodological limitations or conflicts of interest) do not work very well, but in some cases they do work and have potential if refined. I'll say it again: verify anything it tells you.
  • List common arguments for or against a given topic. This can provide helpful context but should not be treated as definitive.
  • Produce an initial outline for something like a paper or a report – suggesting possible topics and how to organize them as a way to stimulate thought and get started. My teacher friends also said it can be great for suggesting things to include in a syllabus.
RED: use cases to be avoided/ DO NOT:
  • Ask for facts and trust the results w/o carefully checking references (LLMs regularly fabricate false info and provide fictional references ['hallucitations'] for it)
  • Assume content provided (code, images, text) can be used w/o copyright issues. Often it cannot, and using a bit of LLM-generated content can screw w/ the copyright of the bigger report it goes into.
  • Assume LLMs will include caveats or methodological limitations when reporting results from reports (they generally do not)
  • Put sensitive, nonpublic, or other confidential text, data, or code into LLMs
  • Assume you know how the tools work. They change so fast you probably don't. Treat it as a black box which sometimes spits out candy, but sometimes you get those Harry Potter jelly beans that taste like vomit or earwax.

Again, please use the above as ideas you can try out and verify for yourself. Don't trust my judgment on AIs any more than you would trust their assessment of their limitations.


Cabon, A., Kannenberg, S. A., Arain, A., Babst, F., Baldocchi, D., Belmecheri, S., Delpierre, N., Guerrieri, R., Maxwell, J. T., McKenzie, S., Meinzer, F. C., Moore, D. J. P., Pappas, C., Rocha, A. V., Szejner, P., Ueyama, M., Ulrich, D., Vincke, C., Voelker, S. L., … Anderegg, W. R. L. (2022). Cross-biome synthesis of source versus sink limits to tree growth. Science, 376(6594), 758–761.

Chiaravalloti, R. M., Tomas, W. M., Akre, T., Morato, R. G., Camilo, A. R., Giordano, A. J., & Leimgruber, P. (2023). Achieving conservation through cattle ranching: The case of the Brazilian Pantanal. Conservation Science and Practice, September.

Gomes, E. G., Santos, S. A., Paula, E. S. de, Nogueira, M. A., Oliveira, M. D. de, Salis, S. M., Soriano, B. M. A., & Tomas, W. M. (2023). Multidimensional performance assessment of a sample of beef cattle ranches in the Pantanal from a data envelopment analysis perspective. Ciência Rural, 53(12), 1–12.

Knauer, J., Cuntz, M., Smith, B., Canadell, J. G., Medlyn, B. E., Bennett, A. C., Caldararu, S., & Haverd, V. (2023). Higher global gross primary productivity under future climate with more advanced representations of photosynthesis. Science Advances, 9(46), 24–28.

p.s. This anglerfish jack-o'-lantern was carved by my wife and me; we got a pumpkin with a really long stem and wanted a theme that would make good use of it.


Questions, comments, suggestions, and complaints welcome.