Chapter 13 Conclusion
Time: 3:30pm-4:00pm
As a group, we’ll discuss what LLMs mean for the way we do science and how to create community standards.
This chapter synthesizes the key insights from the workshop and explores the broader implications of LLMs for scientific practice:
13.1 The changing landscape of scientific computing
- How LLMs are transforming research workflows
- Potential impacts on reproducibility and transparency
- Changes in skill requirements and education
- Democratization of advanced programming capabilities
13.2 Developing community standards
- Ethical considerations for LLM use in scientific research
- Documentation and reporting practices
- Peer review in the age of LLM-assisted research
- Balancing innovation with methodological rigor
13.3 Future directions
- Emerging trends in LLM technology
- Potential developments in R-specific LLM tools
- Opportunities for community contribution and development
- Preparing for the next generation of AI-assisted data science
This concluding discussion encourages critical reflection on how we can harness the power of LLMs while maintaining the integrity and quality of scientific research and analysis.
13.4 Recommendations for students and early-career researchers (ECRs)
Manage your own learning. If a task feels ‘easy’, you are probably not learning as much from it. Still make a point of doing some work ‘AI-free’, as you will learn more that way.
Use AI to aid your learning rather than to replace it. For example, ask it for advice on statistics, code, or writing, rather than asking it simply to do the task for you.
13.5 What should supervisors (PIs) do?
- Discuss standards with your lab.
- Define boundaries for AI use (e.g. how much to rely on it for coding, citations, etc.).
- Consider the financial cost and plan for it.
- Try to avoid over-dependence on AI.
For prompting on specific tasks, I suggest it is the supervisor who writes the system prompts that the lab uses, for example for literature reviews, statistics guidelines, or writing. The students then write the user prompts to interact with their specific problem, as in the sketch below.
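As a rough illustration of this split, here is a minimal sketch using the ellmer R package; the package choice, prompt wording, and example question are illustrative assumptions rather than workshop material, and it assumes an OpenAI API key is already configured.

```r
# Lab-level system prompt: written once by the supervisor and shared with
# the whole lab (for example, kept in a version-controlled file).
lab_stats_prompt <- "
You are a statistics advisor for our lab.
Recommend analyses consistent with our lab guidelines:
prefer mixed-effects models, report effect sizes with confidence
intervals, and always suggest checks of model assumptions.
"

# Each student starts a chat that carries the shared system prompt,
# then writes their own user prompts about their specific problem.
library(ellmer)
chat <- chat_openai(system_prompt = lab_stats_prompt)

chat$chat(
  "I have seedling counts in 20 plots across 4 sites, measured before
   and after a treatment. What model should I fit?"
)
```

The design point is the separation of roles: the system prompt encodes the lab’s standards once, while each user prompt is specific to the student’s own data and question.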