AI has raised the bar for literature reviews

research-skills
genAI
Published

December 1, 2025

I’ve been reviewing papers recently that present lists of applications, a format particularly common in reviews of AI use in ecology and fisheries. But a list of topics doesn’t make for a compelling read.

More importantly, any chatbot can now generate a well-researched list. ChatGPT, Claude, and Copilot all have integrated web search. Perplexity AI can produce detailed, accurately referenced 10,000-word reports that are essentially exhaustive lists about a topic. Try asking “how can AI be used in ecology, with references?” and you’ll get a comprehensive list in seconds.

If readers can generate that content themselves with AI, why would they bother reading your journal article? And if you’re producing content that AI can replicate, why does science need your career?

Generative AI raises the standard for what a literature review needs to contain. Reviews must be informed by deep human thought. AI can be a tool to help develop a review, but you need original insights that go beyond what everyday chatbots can produce.

One example that gets this right is a recent paper in Methods in Ecology and Evolution reviewing multimodal large language models. The authors do list applications of the technology, but the examples go deeper than anything you could get out of an AI: they provide ecology-specific examples that take real work to find and develop. More importantly, they synthesize these potential uses and discuss future challenges and directions with tangible actions.

Going forward, AI has raised the bar for literature reviews. Don’t write or produce anything that AI can easily generate. There’s no point: readers can grab that content themselves, and you make yourself redundant. Reviews need to bring deep domain expertise, synthesis, and original thought.