Chatbots: a critical look into the future of the academia

Samuel Ariyo Okaiyeto, Arun S. Mujumdar, Parag Prakash Sutar, Wei Liu, Junwen Bai, Hongwei Xiao

Abstract


Like every other societal domain, science faces yet another reckoning caused by a bot called ChatGPT (Chat Generative Pre-Trained Transformer). ChatGPT was introduced in November 2022 to produce conversational messages that read as if written by humans. With the release of the latest version of ChatGPT, called GPT-4, and of similar systems such as Google Bard, Chatsonic, and ColossalChat, these chatbots are built on large language models (LLMs) with many billions of parameters (about 175 billion in GPT-3) pre-trained on vast text corpora, allowing them to respond to user prompts much as humans would. GPT-4, for example, can admit its mistakes and challenge false assumptions thanks to its dialogue format, which also enables it to write essays and to keep track of the context of an ongoing discussion. However, the human-like structure of the generated text may deceive users into believing that it has a human origin [1]. These chatbot models are far from perfect: although they generate text with high fluency, they occasionally produce inappropriate or incorrect responses, resulting in faulty inferences or ethical issues. This article discusses some fundamental strengths and weaknesses of this artificial intelligence (AI) system with respect to scientific research.
Keywords: ChatGPT, AI generative models, academia, ethical and moral restraints
DOI: 10.25165/j.ijabe.20241702.9075

Citation: Okaiyeto S A, Mujumdar A S, Sutar P P, Liu W, Bai J W, Xiao H W. Chatbots: a critical look into the future of the academia. Int J Agric & Biol Eng, 2024; 17(2): 287–288.



References


Else H. Abstracts written by ChatGPT fool scientists. Nature, 2023. Available at: https://www.nature.com/articles/d41586-023-00056-7. Accessed on [2023-04-01].

van Dis E A M, Bollen J, Zuidema W, van Rooij R, Bockting C L. ChatGPT: five priorities for research. Nature, 2023; 614(7947): 224–226. DOI: 10.1038/d41586-023-00288-7

Buriak J M, Akinwande D, Artzi N, Brinker C J, Burrows C, Chan W C W, et al. Best practices for using AI when writing scientific manuscripts. ACS Nano, 2023; 17(5): 4091–4093. DOI: 10.1021/acsnano.3c01544

Park M, Leahey E, Funk R J. Papers and patents are becoming less disruptive over time. Nature, 2023; 613: 138–144. DOI: 10.1038/s41586-022-05543-x

Karim R. ChatGPT: Old AI problems in a new guise, new problems in disguise. Monash University, 2023. Available at: https://lens.monash.edu/@politics-society/2023/02/13/1385448/chatgpt-old-ai-problems-in-a-new-guise-new-problems-in-disguise

Stokel-Walker C, Noorden R V. The promise and peril of generative AI. Nature, 2023; 614: 214–216.

Charo R A. Yellow lights for emerging technologies: All-or-none regulatory systems are not adequate for revolutionary innovations. Science, 2015; 349(6246): 384–385.




Copyright (c) 2024 International Journal of Agricultural and Biological Engineering

This work is licensed under a Creative Commons Attribution 4.0 International License.
