A Path of Appropriate Resistance
Using LLMs for learning is a mistake. Learning is a complex topic, and because the human mind grasps some concepts intuitively with no apparent effort, we all believe we already know how to do it effectively. In reality, this intuition rests on insidious habits developed during our years of peak malleability, most involving rote memorization and aimless re-reading without reflection. These ill-formed habits lead to rapid knowledge decay (refer to the Ebbinghaus Forgetting Curve) and a severe lack of competency.
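For reference, the forgetting curve is often summarized with a stylized exponential decay of retention over time (one common formulation, not Ebbinghaus's original tabulated data):

$$R(t) = e^{-t/S}$$

Here R(t) is the proportion retained, t is time elapsed since learning, and S is the relative stability of the memory. Shallow encoding like rote memorization and passive re-reading corresponds to a small S and a steep drop; effortful, reflective practice raises S and flattens the curve.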
Part of the AI hype revolves around evangelists touting its utility as an adept educator on a variety of topics. A popular example is using an LLM to learn programming: a learner prompts the LLM to code a specific use case and then explain the output. The optimistic result is the LLM producing correct code with accurate explanations, and that knowledge transferring to the learner.
The issue with this approach is the lack of required struggle. To learn and master a concept, one must apply a certain level of effort to modify the current state of their cognitive schema. The path there involves consistent, iterative trial and error, followed by reflection. It is a misconception that merely remembering, or even understanding, a concept is equivalent to learning it. Having an LLM generate code and tell you what the code does is akin to having a professor create a math problem, describe it, then solve it, and expecting that to be the end of the process.
You have to do the work. Over and over. Until mistakes are eliminated. Until you can apply it in the appropriate context without an LLM leading you by the hand across the street. Until you can take that concept and compare and contrast it against similar concepts. Until you’re able to weigh alternative options before deciding on this newly learned pattern. You must do the work before you’re able to achieve this higher order of learning (refer to Bloom’s Taxonomy and Hartman’s Proficiency Taxonomy).
Using an LLM as a tool for exploration and discovery can work, but it must supplement your efforts rather than substitute for them. One potentially effective strategy is rapid prototyping of an idea. Returning to the programming example, a programmer could prompt an LLM to build a potential solution simply to gauge the validity of an implementation idea. The prerequisite here is a solid foundation in the underlying programming language, framework or library, and software design. Having the LLM quickly generate a potential implementation could help answer questions about acceptance criteria and allow for faster iteration when testing conceptual hypotheses. “Explain the code” never even comes into it; you’re expected to understand the output on the strength of your existing competency. This prototyping strategy isn’t meant to replace the learning process that preceded it; it leans into the idea of “building one to throw away.”
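As a concrete, entirely hypothetical illustration: suppose the hypothesis is that a simple in-memory token bucket would satisfy a rate-limiting requirement. An LLM-generated throwaway sketch like the one below (assuming nothing beyond the standard library) only needs to be good enough to check the behavior against the acceptance criteria before being discarded:

```python
import time

class TokenBucket:
    """Throwaway prototype: refill `rate` tokens per second, up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Quick check against a hypothetical acceptance criterion:
# "admit no more than 5 requests from a sudden burst."
bucket = TokenBucket(rate=5, capacity=5)
admitted = sum(bucket.allow() for _ in range(100))
print(f"admitted {admitted} of 100 burst requests")  # expect roughly 5
```

The point isn’t the quality of the code; it’s whether the shape of the solution answers the question quickly. Once it does, the prototype gets thrown away.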
Outside of this specific use case, I would not recommend using AI assistance while learning foundational material. I would even caution against overuse in the prototyping example above. Not doing the coding yourself (or the equivalent work in any other skill) will leave you with a lower-order understanding of new subjects and atrophy your existing knowledge. Reason through problems manually and write the code yourself as much as possible. When you hit a problem, read the debug output and try to make sense of it. Read the docs. Read other implementations of similar logic on GitHub. The point is to find the right balance of struggle and frustration: just enough pressure for your brain to reorganize its cognitive scaffolding. Eventually you will fix incorrect assumptions and fill gaps in knowledge, and those changes will last.
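To make that concrete, here’s a made-up illustration of the kind of struggle worth keeping. Instead of pasting an error into a chat window, read the traceback and reason backward from it:

```python
from collections import defaultdict

rows = [("widgets", 3), ("gadgets", 2), ("widgets", 1)]

# First attempt: totals = {}; totals[name] += qty
# That raises KeyError: 'widgets', because += reads the key before it
# exists and the dict starts empty. The traceback names the exact line;
# the dict docs point to get(), and the standard library offers
# defaultdict. Either fix follows from reading the error, no assistant
# required.
totals = defaultdict(int)
for name, qty in rows:
    totals[name] += qty

print(dict(totals))  # {'widgets': 4, 'gadgets': 2}
```

The fix itself is trivial; the lasting value is the chain of reasoning from error message to documentation to solution, which is exactly the step that outsourcing skips.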
This kind of intentional struggle is optimal for the learning process, but I doubt modern society will shift toward adopting it. If anything, we’re moving in the opposite direction. Grit and determination aren’t celebrated, aren’t instilled in us, the way they were in the past. Our minds have a natural tendency to take the path of least resistance, and pocket dopamine generators only amplify the impulse to snatch the quick reward rather than cut through the brush of repeated failures to discover enlightenment.
Will this inertia diminish our abilities until we’re rendered completely replaceable? What will happen to our capacity for critical reasoning? And what happens if AI doesn’t advance at a rate that exceeds the decay of our own reasoning?