
Let me get this out of the way before anything else:
I am not a complete AI-fanatic, nor am I some AI hater. I am just a guy, trying to use tools and figure out if they are useful or not.
This will probably upset people on both sides, but what can you do.
While everyone is currently going crazy over vibe-coding and whatever the zeitgeist is, I’ve been experimenting (largely) with AI for learning.
The question I generally have is whether or not we can use these tools to learn new technical concepts, domains, or programming languages effectively.
The hardest part of programming, in my opinion, usually has very little to do with syntax and much more to do with the domain. How does authentication work? What is a database “transaction”? What is a language server? What happens during the linking process?
Or, if you’re learning a new paradigm (Rust, FP, Lisp), you likely have some questions about the idiomatic way to do things.
I’ve been playing around with these tools for a good while, with a focus on using them to actually improve my knowledge rather than mindlessly “vibecode” without bothering to read the output of the prompt I just typed in.
So, can these tools help us learn effectively? To immediately ruin the clickbait, my honest answer is I’m still trying to work that out myself.
But I think sharing my thoughts here might be useful to some others.
First I want to talk about my general approach to learning, how I apply it to AI, and then where I think things fall short - whether because of the tools or my own learning strategy.
Inquiry-based learning
As a self-learner, I believe the strongest path you can choose to take is inquiry-based learning. This is especially true for areas like programming, computing-related concepts, etc.
The general idea is to learn by creating projects and asking lots of questions about what you’re doing. What happens if I change X variable to Y? Why does it give me an error? What does the error mean?
You can then use this to form your own beliefs and understanding of a concept, and test them by learning more, consulting experienced people on forums, and so on.
This is kind of similar to the Socratic method and other techniques to help improve your critical thinking.
The obvious next step here is trying to apply this concept to our LLM usage.
The need for discomfort
Socrates and many other teachers throughout ancient and modern history seem to agree that you need some amount of discomfort in order to learn effectively.
To be honest, it really is as simple as the common phrase “learning from your mistakes”.
If you break a client’s website or run into a bug that drives you insane for three days, you’re probably never going to make that mistake again because, well, you don’t want to experience that again.
Along these lines, if you’re tackling something new or complex (or both), it’s of course natural to feel a level of discomfort, since it’s unknown territory and you will likely make mistakes (and learn from them).
So with all of this said, this is where I want to talk about AI.
Talking to AI
I have taken this inquiry-based approach when using LLMs to learn new topics.
There is no “master” strategy here. I always feel I work best by trusting my instincts when it comes to what I should learn next or upskill with.
But here are a few different approaches I took:
- Using AI as a “tutor”, asking questions while I was learning a new concept from a book or blog post
- Using AI while I was programming a new project, asking it not to give me the solution but to guide me and ask appropriate questions
- Asking AI about my design decisions and bouncing ideas off it about how I could improve the performance/readability of my projects
- Asking many questions about how a concept worked, in a rabbit-hole-like fashion
The least effective way to learn with LLMs, in my opinion, is to use only LLMs. I’m sure you could ask the chatbot to act like a teacher and teach you a lesson on topic X, but I don’t think the chat format is well suited to that. We need diagrams, videos, and other forms of explanation that do not fit the “voice” given to the current AI models.
However, using them alongside other resources, or as a fellow code reviewer, assistant, or something along these lines, seems to have given me some remarkable results - at least output-wise.
I’ve recently been coding Neovim plugins by reading the documentation as best I can, and using AI to help me when I get stuck on how a certain API works or what I’m doing wrong.
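To make that concrete, the questions are usually about small API mechanics rather than big ideas. Here’s a minimal sketch of the kind of thing I mean (the command name and message are placeholders of my own, and it assumes Neovim 0.7+):

```lua
-- Minimal sketch: registering a user command from a plugin.
-- "PluginHello" and the printed message are placeholders; assumes Neovim 0.7+.
vim.api.nvim_create_user_command("PluginHello", function(opts)
  -- opts.args holds whatever the user typed after the command name
  print("hello from my plugin: " .. opts.args)
end, { nargs = "*" })
```

It’s small details like the shape of that opts table that I catch myself asking the AI about instead of reading :help nvim_create_user_command() first.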
The issue, though, is that this workflow makes me wonder two things:
- Am I being a bad programmer by asking the AI to find an API instead of digging through the source code myself?
- Am I actually harming the learning process for myself at all?
Coming back to the point I made earlier, we need discomfort to learn: a “Goldilocks” zone of not too much, not too little, but just right.
Right now, at least, I feel that AI makes things just a little bit too easy for me. So I have to ask myself: am I really still learning effectively? Am I destroying my critical thinking and offloading too much to these tools?
I want to think that I’m not, considering that I have the mental capacity to reflect on this experience and try to decide for myself. But it does make me think.
The right tool for the job, and the need for human content
These tools are all about how we use them. You can generate short-form slop, or you can try to use them to learn new concepts. It’s all about you. But having the discipline to use them in a “good” way is definitely hard.
I do think we’re moving closer to that Goldilocks zone, and I can really see these tools augmenting our learning process for the better.
At the same time, I don’t want to see traditional resources fade away, and I do think there’s room for both. Or, at least I hope there is.
If you have any thoughts on this I’d be glad to hear them.