Opinion

How Capable Is ChatGPT?


In recent years, artificial intelligence has made significant strides in the field of natural language processing, leading to the development of ChatGPT, a chatbot that is capable of generating human-like text in response to prompts. While ChatGPT and other similar technologies have the potential to revolutionize many industries, there are also concerns about how they may negatively impact college education.

One of the primary concerns is that ChatGPT and other AI-powered chatbots could be used to cheat on assignments and exams. ChatGPT’s impressive language skills could make it easier for students to cheat on exams or quizzes by quickly generating answers to questions. This could lead to a decline in the quality of education as students rely on the AI model rather than their own knowledge and understanding of the material. Since these chatbots are able to generate human-like text and engage in conversation, students could potentially use them to write papers or complete other assignments without actually doing the work themselves. The human-like text could be used to create convincing fake essays or other assignments, leading to an increase in academic dishonesty. This could lead to widespread cheating and undermine the integrity of college education and erode the value of a college degree.

Another potential negative impact of ChatGPT is that it could lead to a decrease in critical thinking skills among students. If students are able to rely on ChatGPT or other AI-powered chatbots to complete their assignments, they may not be motivated to engage in the intellectual rigor and critical thinking that are essential to a college education. There is also the concern that the use of AI writing could lead to a homogenization of content, as AI writing tools may not be able to produce the same level of creativity and originality as human writers. This could lead to a lack of diversity in the content that students consume and produce, which could limit their learning experiences. This could ultimately lead to a decline in the quality of college graduates and make it more difficult for them to succeed in their careers.

There are also concerns that AI-powered chatbots could lead to the loss of jobs for educators. If students are able to use ChatGPT to complete their assignments, there may be less demand for human instructors. As AI technology improves, it may be possible for automated systems to take over certain tasks currently performed by educators, such as grading papers and providing feedback. While this could potentially save time and resources, it also raises the question of whether or not human interaction and guidance is necessary for a high-quality education. This could result in the loss of jobs for professors, teaching assistants, and other educators, potentially leading to a decline in the overall quality of college education.

Overall, while ChatGPT and other AI-powered chatbots have the potential to revolutionize many industries, there are also significant concerns about how they may negatively impact college education. From cheating and a decline in critical thinking skills to the loss of jobs for educators and a less engaging learning experience for students, the adoption of these technologies in the field of education could have far-reaching and negative consequences.

To demonstrate my point, I will now reveal that everything the reader has read thus far was written by ChatGPT (from this point on, everything is my original work). I ran the following prompt through the program: "Write about how AI chatbots could impact college education; write in a journalistic style." After running the prompt once, I was dissatisfied with the results, so I ran it twice more. I then combined the text from all three responses into this article. I did not edit any text; I only stacked the AI-generated paragraphs on top of one another to make a coherent narrative. It took me less than eight minutes. With minimal editing, I could easily improve the quality of the writing so that even the sharpest readers would be none the wiser.

This technology is undeniably impressive, but I wasn't convinced it would work with more niche writing prompts, like those typically assigned in upper-division classes. After all, asking an AI to write about AI doesn't seem like much of a challenge, does it? So, the obvious next step was comparing the bot's performance with some of my past assignments. I opened up Blackboard, downloaded some old syllabi and dug up the papers I had written for the prompts in those syllabi. My criteria for success were 1) whether the bot's output, with minimal editing (syntax and grammar only, no content editing), could beat my original paper and 2) whether the content was accurate and engaging. With everything ready to go, I began working through some of my old writing assignments.

The most fascinating result came from a book review assignment for behavioral neuroscience, a 4000-level psychology class and one of my most challenging. The syllabus asked us to read a book and write 3-5 pages reflecting on relevant themes. I adapted the assignment prompt to make more sense to an AI bot and fed it the following request: "Write a review of The Man Who Mistook His Wife for a Hat by Oliver Sacks. Reflect on how the book adds to the discussion on neurological psychology and comment on at least 3 themes/cases in the book." I expected the AI to struggle with this assignment since I was not asking it to regurgitate facts; I needed a comprehensive review of the book with meaningful personal reflections. I hit enter and awaited the results, hopeful that my old paper would not be outperformed.

It took approximately 15 seconds for the bot to produce my requested essay—much faster than the six days it took me to read the book and write my paper. To my dismay, the bot's writing was profoundly mediocre. That may not sound like high praise, but the fact that it produced mediocre writing (which can be manually improved, mind you) with accurate information in under a minute speaks volumes about the capabilities of this technology. The content was well-structured, precise and natural. As I read over the essay, it felt like I was reading a real person's reflections on the book. I would not have been able to distinguish the AI-generated essay from a real student's work. It certainly wasn't A+ writing, but it would be easy to improve by hand. I would predict a C+ or a B- if it were turned in as is. I could conceivably generate my essay, edit for syntax and accuracy, format for APA and submit it on Blackboard in under an hour—all for a grade very comparable to my original paper, which received a 49/50. ChatGPT seems entirely capable of covering most of the topics undergraduate students might encounter, all with natural human prose and semi-accurate content.

I do not envy professors right now. It will be profoundly challenging for them to counteract this technology. There is no real way to determine whether a student has used the bot to generate their work. It isn't plagiarism since everything generated by the bot is 100% original. Additionally, teachers can't compare work between students because students can replace the bot's prose with their own, effectively wiping its fingerprints from the content. The only conceivable way to prevent AI-generated papers would be to have students write in class, which only works for brief comprehension assignments. A student cannot be expected to produce a substantial research paper during a single class period. Alternatively, professors could mandate that students use university computers for all assignments, but that poses an issue for commuters or students who do their work outside library hours.

Conventional wisdom suggests that the future is uncertain. Still, I remain confident that this technology will continue to improve at an exponential rate—a rate faster than any institution can be expected to keep pace with. I am hopeful that JBU will be able to adapt to the changes, but to say that I am skeptical would be an understatement. I only hope students will have the wisdom to avoid these shortcuts lest they stunt their development as writers, scholars and career professionals. I would implore professors to tread carefully and educate their students about the risks of AI writing. Ultimately, each of us will have to decide how to use these new tools. We must make our decisions carefully.
