
ChatGPT: Seeing Beyond the Hype and Hysteria

April 11th, 2023



For some, generative AI (Artificial Intelligence) spells the beginning of a machine-initiated end to humanity, as depicted in sci-fi films like The Matrix. For others, the advent of AI means that with enough data-crunching, and given enough time, AI programs like ChatGPT will solve all of humanity's problems. The truth likely lies somewhere in the middle. To start with definitions: ChatGPT and similar AI chatbots are Large Language Models (LLMs) that work by analyzing vast troves of text. Through this process, these programs learn how language is structured (and can thus sound increasingly “human”) and use neural networks, analytical methods loosely inspired by the human brain, to rapidly analyze, weigh, and draw new connections across the data they were trained on. Because an LLM has absorbed both Dr. Seuss and astrophysics content, it can blend the two to produce a brand-new, Seuss-inspired poem about the formation of black holes. Watching these twin capabilities in action, you’d be forgiven for thinking you were interacting with a highly intelligent, conscious being.
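For the curious, here is a rough idea of what “interacting with an LLM” looks like under the hood: a program sends the model a short conversation and receives generated text back. The sketch below uses OpenAI’s Python library to request exactly the kind of Seuss-inspired black-hole poem described above; the model name and setup details are illustrative assumptions, not a recommendation.

```python
# Minimal sketch: asking an LLM to blend two unrelated domains
# (Dr. Seuss style + astrophysics), as described above.
# Assumes the OpenAI Python library is installed and an API key
# is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # model name is an assumption; substitute any available chat model
    messages=[
        {"role": "system", "content": "You are a playful, rhyming poet."},
        {"role": "user", "content": "Write a short Dr. Seuss-inspired poem "
                                    "explaining how black holes form."},
    ],
)

# The generated text comes back in the first choice of the response.
print(response.choices[0].message.content)
```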

Cue the end-of-the-world hysteria and the wildly optimistic euphoria.

While many such LLMs exist, the most prominent by some distance is ChatGPT. As capable as these models are, their widespread, unregulated use raises significant concerns. For one, training them requires enormous amounts of energy: training a single large AI model can emit as much carbon dioxide as five average cars do over their entire lifetimes. Another concern is that LLMs have the curious tendency to “hallucinate” every so often, generating completely inaccurate responses with the same confidence they exude when providing accurate ones. The better these models become through training on more data, the fewer hallucinations they produce, but OpenAI still cautions against relying on LLM responses in “high stakes” contexts.

Education is one such high-stakes context, and it is where I’d like to focus the rest of this post. The responsible use of AI in education has been particularly controversial. These LLMs can conjure up graduate-level essays in seconds, which is deeply concerning from an ethical standpoint and has led some to conclude that the college essay is dead. New York City's Department of Education banned ChatGPT from school devices and networks until it can decide how best to integrate the technology into the classroom. Beyond these obvious ethical concerns, there are also genuine worries that LLMs will produce a generation of passive students, trained into learned helplessness, who rely on technology to do most of their critical thinking and writing for them.

As significant and valid as these concerns are, LLMs also have considerable potential to positively shape the world of education. During its highly anticipated GPT-4 launch, OpenAI described education as potentially the most compelling use case for these LLMs, likening GPT-4 to a tutor with unlimited patience who can teach an endless variety of subjects.

As an educator, I jumped at the chance to test out GPT-4. First I asked about Algebra 2 course topics and went back and forth with sample problems, intentional errors, and corrections. Not only did GPT-4 offer excellent explanations of Algebra 2 concepts, it also provided sample problems, accurate corrections, step-by-step instructions, and even praise and nuanced coaching on my errors. Glorious. I then asked GPT-4 to read over an essay I wrote and provide feedback for improvement. I received excellent advice about my claims and evidence, grammatical corrections, and even coaching on writing more concisely. In seconds, done. I did find some limitations when asking for cause-and-effect analysis, where GPT-4 directed me to do further research.
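For readers who like to see the mechanics, here is a rough sketch of what that tutoring back-and-forth looks like when done programmatically rather than in the chat window. It assumes the same OpenAI Python library and an API key in the environment; the prompts paraphrase the kind of exchange I had and are not a transcript.

```python
# Sketch of a tutoring back-and-forth, assuming the OpenAI Python library
# and an OPENAI_API_KEY environment variable. Prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running message list is the "memory" that lets the tutor coach across turns.
messages = [
    {"role": "system", "content": "You are a patient Algebra 2 tutor. Explain "
                                  "step by step and coach the student on mistakes."},
    # A deliberately flawed attempt, so the model has an error to correct.
    {"role": "user", "content": "Factor x^2 - 5x + 6. My attempt: (x - 1)(x - 6)."},
]

reply = client.chat.completions.create(model="gpt-4", messages=messages)
feedback = reply.choices[0].message.content
print(feedback)

# Append the tutor's answer and a follow-up request to continue the session.
messages.append({"role": "assistant", "content": feedback})
messages.append({"role": "user", "content": "Got it. Can you give me a similar practice problem?"})

follow_up = client.chat.completions.create(model="gpt-4", messages=messages)
print(follow_up.choices[0].message.content)
```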

I was genuinely impressed by how well the LLM could analyze my work and generate highly personalized, constructive feedback. On a larger scale, developments in this area could significantly democratize the sort of personalized attention that only 1:1 tutoring currently provides, but so few can afford. Khan Academy, a longstanding organization aiming to provide high-quality education for all students, is already piloting a version of this kind of support: it is integrating a GPT-4-powered chatbot (Khanmigo) into its lessons and partnering with school districts across the country to bring it into classrooms. Once a live teacher presents math content in a lesson, the chatbot assistant interacts separately with each student to identify points of clarity and confusion, providing personalized feedback and follow-up practice problems.

Of course there are many caveats, and we need to remember that a chatbot can never replace the human touch of an experienced educator who can read emotions, deftly redirect attention, and be a kind and understanding presence on a child’s educational journey. But I believe there is real reason for optimism about what this new era may bring to the world of education. At Hayutin Education, we are keeping a close eye on how our students’ school policies surrounding ChatGPT are unfolding. By default, our educators encourage students to follow school policy on the use of AI. However, we are also ready to show our students how to use ChatGPT responsibly and effectively to bolster, rather than replace, their reading, critical thinking, and executive functioning skills. We can all wholeheartedly welcome any resource with the potential to provide more equitable access to education for all.

But for now, we’ll all have to wait and see how this plays out in the world of education and beyond. Like ChatGPT, we must continue gathering and sifting data, as we rely on our distinctly human ability to navigate this brave new world together.


~Hunja Koimburi, M.A.
