With the rise of large language models (LLMs), it’s relatively easy to sound credible about topics you know little about. Jargon and basic mental models are just a prompt away. In the past, not knowing the right term could trigger imposter syndrome. Now I fear we may be entering an era where the opposite dynamic exists. Instead of worrying that they don’t know enough, people are tempted to believe that a clever LLM prompt will yield all the information they need to be considered an “expert”, even before they can discern whether the model’s answer effectively balances the nuances of the topic at hand.
Learning requires struggle. For example, imagine taking a practice test with the answer key available to you. The short-sighted way to get the best score on the practice test is to glance at the answer key whenever you’re worried you might get a question wrong. But this undermines the value of taking a practice test in the first place: you deny yourself the opportunity for a new concept to “click”, which is exactly what would let you handle a future question about a similar concept.
Turning too quickly to AI for the answer is similar to taking a quick peek at the answer key on a practice test when you’re learning something new. At least for now, there’s still a gap between the judgments of AI and those of human experts.
For me, the most valuable use cases for AI have been to:
Each of these use cases for AI is a supplement to my thinking, not a direct outsourcing of it. In many ways, I treat AI the way I would treat a person to whom I’m outsourcing a task. Even if I paid them to do the task for me, I’d still want to generally understand how and why they were doing things, so that I could confirm they were meeting my goals. Increasingly, in both casual personal and professional settings, I’m seeing people simply turn to AI for the answer. I’m fully in favor of people using AI to accelerate their work. However, many people seem to be turning to AI before they understand what they are giving up for their own learning. This is like using advanced power tools before understanding the basics of woodworking.
Before relying too heavily on AI for a task, I suggest trying it first in “muggle mode”. This will help you spot the subtle issues that AI is likely to get wrong. In the long run, it will help you craft an AI-based workflow that is more scalable and less prone to mistakes that propagate in unpredictable ways.
Understanding your own limitations is a key ingredient of building confidence: it allows you to intentionally focus on areas for improvement. AI can help you get good at a specific task, but it may not adapt particularly well when given a slightly different set of circumstances. Overfitting to training data is a real problem. Be mindful of how the same dynamic may affect your own learning, especially if you suffer from imposter syndrome. This is an instance where going slow at the beginning, until you are firmly in control of the fundamentals, leads to faster progress in the medium to long term.
I’m curious to hear how others balance their use of AI when they are familiarizing themselves with new disciplines. If you have strategies that you’re willing to share, I’d love to hear from you.