
What does generative AI mean for literature reviews?

Literature reviews are about to change in a big way thanks to artificial intelligence that can generate text, like ChatGPT. This technology can be used to define search terms to identify literature, filter papers for review, and write the review itself. Automating these tasks will save researchers time that they could spend on other things, like sipping cocktails or spending more time with their families.
However, experience tells me researchers won’t spend that freed-up time enjoying a better personal life. Research is competitive, and a tool that speeds up literature reviews will just increase the standard of what is expected. There may also be innovations in the form reviews take, like self-updating reviews, but I’ll focus here on how AI will raise the bar for typical literature reviews.

Published literature reviews

Generative AI will allow review and synthesis of greater numbers of papers in the same amount of time. For instance, a recent paper shows how ChatGPT can be used to define search terms and then filter articles for relevance. So we should see much bigger reviews in the near future, and consequently, top journals will also expect more literature to be covered in the reviews they publish. We may also see researchers invest their time in further synthesis activities. Many stock-standard reviews just identify themes in a body of literature and suggest some future research directions. Generative AI can now read the literature for you and come up with common themes. Pushing this further, we should see reviews that deepen the synthesis by building conceptual or even quantitative models out of the literature.
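To make this concrete, here is a minimal sketch of what AI-assisted searching and screening could look like. It assumes access to OpenAI’s Python client (the `openai` package) with an API key set in your environment; the model name, prompts, and example topic are my own placeholders, not the method from the paper mentioned above.

```python
# Sketch: use a chat model to (1) draft search terms and (2) screen an
# abstract for relevance. Assumes the `openai` package is installed and
# OPENAI_API_KEY is set. Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

def draft_search_terms(topic: str) -> str:
    """Ask the model to propose Boolean search strings for a review topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whichever model you use
        messages=[{
            "role": "user",
            "content": f"Suggest three Boolean search strings for a "
                       f"systematic literature review on: {topic}",
        }],
    )
    return response.choices[0].message.content

def screen_abstract(abstract: str, criteria: str) -> str:
    """Ask the model whether an abstract meets the inclusion criteria."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Inclusion criteria: {criteria}\n\n"
                       f"Abstract: {abstract}\n\n"
                       "Answer 'include' or 'exclude' with a one-sentence reason.",
        }],
    )
    return response.choices[0].message.content

# Illustrative usage with a made-up topic and abstract:
print(draft_search_terms("impacts of marine heatwaves on fisheries"))
print(screen_abstract("We model seagrass recovery after disturbance...",
                      "Empirical studies of marine heatwave impacts on fish"))
```

Even with a screening step like this, you would still want to spot-check the model’s decisions against a human-coded subset before trusting it across thousands of abstracts.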

Undergraduate literature reviews

Teaching academics should not shy away from including literature reviews in undergraduate assessments. Graduates still need to know how to produce literature reviews. I use the word ‘produce’, instead of ‘write’, intentionally: graduates may not write every word themselves, but they still need to direct the process and verify the output. Undergraduate assessments do, however, need to meet a different standard. Universities will have to assume undergrads are using these tools. This means assessment items will involve reviewing more literature, or require additional synthesis, such as conceptual figures or presentations of the review. Marking will also need to be more rigorous, which means more work for many academics: markers will need to check more carefully that the content is accurate (since generative AI is prone to making up facts).

What should I do?

If you are a researcher: Start experimenting with generative AI to find ways it can help with literature reviews. The quantity and quality that used to get you published in good journals won’t pass muster for much longer. You’ll need to review more literature and/or include new types of synthesis.

Don’t expect your PhD student to just spend six or so months reading and synthesizing 100 or so papers and then be able to publish a good review, as we used to. You should be encouraging them to use these new tools and to push the review further.

If you are a teaching academic: Don’t drop literature reviews from your assessment altogether! They are still an important synthesis tool. But do consider how you will change assessment criteria. You may also want to include written assessments in controlled environments (e.g. an exam room) where you know generative AI isn’t being used.


