AI in Math Education
With AI use surging, concern in the classroom has grown alongside it. Math teachers have dealt with tools that can solve problems for some time, but not with tools that can also generate detailed, step-by-step work to go with the answer. This creates fundamental gaps in a person's education, because it lets them bypass the learning that comes from making mistakes and experiencing failure. So what exactly have we done to negate the negative effects while still utilizing the greatest multitool made to date?
Some educators have taken the stance of outright banning AI in their courses, others have tried to integrate it with restrictions, and a few even say "run wild" in hopes that failure will still induce learning, so that a student using AI ends up with the same results a student previously achieved without it. Though these three scenarios do not cover everything that has been attempted, they cover a large portion of the approaches.
The issue with attempting to stop learners from seeking out AI while doing work at home or in class is that you really can't. If you send somebody home with work and they get stuck, why would they not look for a solution on the internet? The honest truth is that this has been the case for almost three decades. Now, however, there is no paywall preventing mass use, and worse, there is no guarantee the material consumed is legitimate. So the worry from educators is very understandable. If a student is truly stuck and seeks out answers online, who's to say the educator won't end up correcting faulty learning, or continuously reteaching the same material because it is not sticking?
My preference is the approach some of my own teachers have taken: integrating purpose-built AI tools into the math course. These tools are designed to provide hints and guide you through problems, as opposed to handing you work you might not understand paired with a solution you cannot reproduce. The issue is that most of these tools are still in their early stages, so it is hard to ensure they meet the needs of all learners. A course might try to be "better" by steering students away from general tools like ChatGPT and toward the provided ones. Sadly, though, many of today's learners were raised on much quicker dopamine hits and may not be willing to sit down patiently with something they are struggling with. The other downside of these guided tools is that there is no opportunity to see a fully worked-out problem and derive how it came together.
Lastly, there is the approach of using every tool available to you. This is a less common stance among educators, but still one of the more popular choices. The idea is that students are given the freedom to use whatever tools they deem helpful, so long as they produce results that show learning is still happening. The pitfall of this approach is that memorization tends to become a substitute for understanding in higher-level math courses. Students might plug a high number of similar problems into an AI tool, memorize the pattern, and skip the conceptual understanding. All it takes for these learners to crumble during exams is one minor tweak to how a problem is arranged, so that they cannot simply follow the memorized steps.
So now that we have established that AI will cause some level of issue no matter the application, does that mean we should steer away from it altogether? Alternatively, since the tool is not going anywhere, we could work with students on using AI with the best intentions in mind. I personally believe this is the best option and one that educators should embrace. While in Huron for the SD Stem conference this year, I attended a session on this specific topic, and it highlighted many areas where educators fail to bridge the gap between students cheating and students mistaking copying down work they don't really understand for learning.
In our STEM tech class, we recently did quite a bit of reading on AI. One of the studies we read showed an alarmingly large drop in test scores for a sample of students who used AI to study versus those who did not. Other studies reported similar results: retention was not as high as the learners perceived it to be. So it seems that even with the best intentions while using these AI tools, students could not escape the fabricated sense of understanding.
As a final note, I would say, as a future educator and a full-time learner, that it is very important we still feel comfortable failing. Without those mistakes, true learning is evaded and long-term retention is harder to obtain. AI tools are good options for initially scoping out material, helping learn new concepts, and creating study aids, but we must avoid leaning on them as a substitute for the aha moments when understanding is achieved. How can we do it? Maybe I can just ask ChatGPT.