
How to Become a Better Developer in the Age of LLMs: Thinking Rituals That Actually Work

Most developers are using LLMs wrong.

They treat them like glorified autocomplete - type a prompt, get some code, copy-paste, move on. It's faster than StackOverflow, sure. But it's just as shallow.

The real power of LLMs isn't that they write code for you. It's that they create friction where you need it most - in your thinking.

The gap between "I can make this work" and "I understand why this works" is wider than you think. LLMs can expose that gap in seconds if you use them right.

This isn't about prompting techniques or AI-assisted coding workflows. This is about using LLMs as thinking partners to upgrade how you reason about software - not just how fast you write it.

Competence Without Understanding

You've been writing code for years. You know your framework. You can solve most problems you encounter. You're productive.

But here's what happens when something breaks in a novel way:

  • You google variations of the error message
  • You try solutions from StackOverflow until something works
  • You move on without really knowing why it broke or why the fix worked

This worked fine when technology changed slowly. Learn a stack, master it over 5-10 years, become senior.

But now? Frameworks evolve every six months. New paradigms emerge constantly. AI generates code that works but that you can't explain. The half-life of specific knowledge is shrinking.

The developers who thrive aren't the ones who know the most patterns. They're the ones who can reason from first principles about any system.

That's what these thinking rituals build.

Why Rituals, Not Courses

People overcomplicate learning.

You don't need another Udemy course. You don't need one more certification. You don't need to stay up to date by reading every tech blog.

What you need is deliberate friction - systematic ways to expose and fill the gaps in your understanding.

Courses give you knowledge in someone else's order, at someone else's pace, about problems someone else chose. Rituals let you learn exactly what you need, when you need it, by poking at your own understanding until it breaks.

Think of it like strength training. You don't get stronger by watching workout videos. You get stronger by putting weight on the bar and lifting until failure. These rituals are your workout plan for thinking.

The Rituals

Ritual 1: Rebuild a Library from Its README

The Practice: Pick a GitHub repo you use but never explored. Something in your node_modules or pip packages. Delete everything except the README. Now try to rebuild the project from scratch using just your brain and an LLM.

Why It Works: We use libraries every day without understanding them.

This ritual forces you to answer:

  • What problem does this library actually solve?
  • What design decisions did the authors make, and why?
  • What would break if you made different choices?

Example: Take something like date-fns. You use it for date manipulation. You know the API. But can you rebuild it?

As you try:

  • You'll discover why immutability matters for date operations
  • You'll understand timezone complexity firsthand
  • You'll see why the API is designed the way it is
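The immutability point surfaces the moment you write your first helper. Here's a minimal sketch - a hypothetical, UTC-only toy version of a date-fns-style `addDays`, not the real library's implementation:

```javascript
// Toy rebuild of a date-fns-style addDays (UTC-only, hypothetical).
// Copying before mutating is the design decision you rediscover the
// first time a shared Date object gets silently corrupted.
function addDays(date, amount) {
  const copy = new Date(date.getTime()); // never mutate the caller's Date
  copy.setUTCDate(copy.getUTCDate() + amount);
  return copy;
}

const start = new Date(Date.UTC(2024, 0, 31)); // Jan 31
const next = addDays(start, 1);

console.log(next.toISOString().slice(0, 10)); // → 2024-02-01
console.log(start.toISOString().slice(0, 10)); // → 2024-01-31 (unchanged)
```

Mutating APIs like `Date.prototype.setDate` make the second guarantee easy to break by accident - which is exactly the kind of design decision you only appreciate by rebuilding.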

The LLM becomes your rubber duck that actually talks back. When you get stuck, you're not asking "how do I implement parseISO?" You're asking "why does this approach handle edge cases better than that one?"

This exercise doesn't just teach you about one library. It teaches you how to think like a library author, which means you start seeing patterns across all the libraries you use.

Ritual 2: Interrogate the Answers You Keep Reusing

The Practice: Find a StackOverflow answer you've reused 10 times. You know, that useState hook pattern. That regex. That database query. Ask the LLM why it works, and more importantly, when it breaks.

Why It Works: Most of what we call experience is just unexamined cargo culting. We copy solutions that worked once and keep using them because they keep working.

Example: You always use Promise.all() to run async operations in parallel. It's fast. It works.

But ask the LLM: "When does Promise.all break down? What are the failure modes?"

You'll learn:

  • If one promise rejects, the whole call rejects and the other results are discarded (maybe you wanted Promise.allSettled)
  • Memory usage scales with array size (maybe you need batching)
  • No progress visibility (maybe you need a different pattern for long-running tasks)
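You can watch the first failure mode in a few lines - a sketch using only built-in promise combinators:

```javascript
// Fail-fast vs. collect-everything, side by side.
const tasks = () => [
  Promise.resolve("a"),
  Promise.reject(new Error("boom")),
  Promise.resolve("c"),
];

async function compare() {
  let allResult;
  try {
    allResult = await Promise.all(tasks()); // rejects on the first failure
  } catch (err) {
    allResult = `lost everything: ${err.message}`; // "a" and "c" are discarded
  }

  // allSettled waits for every task and reports each outcome
  const settled = await Promise.allSettled(tasks());
  const statuses = settled.map((s) => s.status);

  return { allResult, statuses };
}

compare().then(console.log);
// → { allResult: 'lost everything: boom',
//     statuses: [ 'fulfilled', 'rejected', 'fulfilled' ] }
```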

This ritual transforms "solutions that work" into "patterns I understand." You stop being someone who knows what to type. You become someone who knows what NOT to type and why.

Ritual 3: Mine an RFC for Real-World Consequences

The Practice: Pick an RFC - any one. HTTP/1.1, JSON Schema, OAuth2, WebSockets. Feed it to an LLM and ask: "What are the real-world consequences of each clause?"

Why It Works: Standards seem boring. But they're the closest thing we have to fundamental truths in software. Every weird constraint, every verbose requirement exists because someone learned something painful.

Example: Take the HTTP spec on idempotency. GET and PUT must be idempotent. POST isn't required to be.

Ask the LLM: "Why? What breaks if I assume POST is idempotent and retry it?"

You'll discover:

  • How caching layers make assumptions about methods
  • Why APIs that misuse verbs cause subtle bugs
  • How idempotency keys solve real problems in payment systems
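The idempotency-key pattern from that last bullet fits in a few lines - a hypothetical in-memory sketch, not any real payment API:

```javascript
// Hypothetical idempotency-key handler: a retried POST replays the
// stored response instead of charging the card twice.
const seen = new Map(); // idempotency key -> first response

function handleCharge(key, charge) {
  if (seen.has(key)) return seen.get(key); // retry: replay, don't re-run
  const response = charge();
  seen.set(key, response);
  return response;
}

let chargeCount = 0;
const chargeCard = () => ({ chargeId: ++chargeCount });

const first = handleCharge("order-42", chargeCard); // actually charges
const retry = handleCharge("order-42", chargeCard); // network retry: no-op

console.log(chargeCount); // → 1
console.log(first.chargeId === retry.chargeId); // → true
```

The client picks the key, the server deduplicates - that's how a non-idempotent POST gets made safe to retry without pretending the method itself is idempotent.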

You stop seeing specs as rules to memorize and start seeing them as compressed experience from thousands of real-world failures. You start predicting problems before they happen.

Ritual 4: Rewrite One Function in Three Paradigms

The Practice: Take one function from your own codebase. Ask the LLM to rewrite it three different ways - functional, reactive, dataflow. Watch how your design assumptions shift with every rewrite.

Why It Works: We code in the style we learned first. Imperative? Object-oriented? Functional? Whatever got you your first job becomes your default lens.

But every paradigm makes different tradeoffs. Different things are easy or hard, visible or hidden, safe or risky.

Example: You have a function that processes user input, validates it, saves to database, and sends a notification.

Ask for:

  • Functional version: Pure functions, explicit data flow, composition
  • Reactive version: Event streams, observers, backpressure
  • Dataflow version: Explicit dependencies, transformations, pipelines

Each version will expose different concerns:

  • The functional version makes error handling paths explicit
  • The reactive version makes timing and ordering visible
  • The dataflow version makes dependencies and testing obvious
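To get a feel for what the first rewrite exposes, here's a toy functional version of that validate/save/notify pipeline. All helpers are hypothetical, and the I/O steps are stubs:

```javascript
// Toy functional rewrite: each step returns an explicit Result,
// so the error path is visible in the data, not hidden in throws.
const ok = (value) => ({ ok: true, value });
const err = (error) => ({ ok: false, error });

// Short-circuiting composition: stop at the first failing step
const andThen = (result, fn) => (result.ok ? fn(result.value) : result);

const validate = (input) =>
  /\S+@\S+/.test(input.email) ? ok(input) : err("invalid email");
const save = (user) => ok({ ...user, id: 1 }); // stub for the database write
const notify = (user) => ok(user); // stub for the notification

const pipeline = (input) =>
  andThen(andThen(validate(input), save), notify);

console.log(pipeline({ email: "a@b.co" }).ok); // → true
console.log(pipeline({ email: "nope" }).error); // → invalid email
```

Notice what became explicit: every step's failure path is now part of the return value. A reactive or dataflow rewrite would make different things explicit instead.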

You realize that good design isn't absolute - it depends on what complexity you're trying to manage. You stop being religious about paradigms and start being pragmatic about tradeoffs.

Ritual 5: Steelman the Technology You Hate

The Practice: Pick a technology you dislike. Really dislike. PHP, Java, MongoDB, whatever triggers you. Ask the LLM to defend it. Steelman the thing you hate.

Why It Works: Most tech opinions are tribal signaling, not reasoned positions. "React is better than Vue" often just means "I know React and change is scary."

When you force yourself to understand the strongest case for something you dislike, you stop being a framework loyalist and start being an engineer.

Example: You hate PHP. It's messy, inconsistent, has weird behaviors.

Ask the LLM: "What problems does PHP solve better than Python or Node?"

You might learn:

  • Shared-nothing architecture makes scaling easier
  • Request isolation prevents whole-server crashes
  • Deployment is literally copy-paste (no build step)
  • Lower hosting costs matter for 90% of websites

You develop intellectual humility. You stop saying "X is bad" and start saying "X optimizes for Y, which isn't my problem." This makes you better at choosing tools - and better at working with teams who chose differently.

Ritual 6: Explain One Idea to Three Audiences

The Practice: Explain one complex topic to three personas: a five-year-old, a senior architect, and a CFO. If you can't adjust your language appropriately, you don't understand the idea - you're just reciting it.

Why It Works: True understanding is compression + decompression. You understand something when you can zoom from 30,000 feet to ground level without losing the thread.

Example: Explain eventual consistency to:

Five-year-old: Imagine you and your friend both have notebooks and you write notes to each other. Sometimes your friend writes something before they see your note, so your notebooks don't match for a little while. That's okay! They'll match eventually.

Senior Architect: We're trading immediate consistency for availability and partition tolerance. CAP theorem forces this choice in distributed systems. Vector clocks or CRDTs resolve conflicts deterministically.

CFO: The system stays online even if parts fail, but users might see slightly outdated data for a few seconds. For a shopping cart, that's fine. For bank balances, we'd use different guarantees and accept slower performance.
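A toy last-write-wins merge shows the architect's framing in miniature. This sketch ignores clock skew and real conflict resolution - it's the five-year-old's notebooks, in code:

```javascript
// Two replicas accept writes independently, then converge by
// exchanging (timestamp, value) pairs and keeping the newer write.
const merge = (a, b) => (a.ts >= b.ts ? a : b); // last write wins

let replicaA = { value: "cart: 1 item", ts: 1 }; // write seen by A
let replicaB = { value: "cart: 2 items", ts: 2 }; // concurrent write on B

// Before the sync, readers on A and B disagree - the "eventual" part
const converged = merge(replicaA, replicaB);
replicaA = converged;
replicaB = converged;

console.log(replicaA.value === replicaB.value); // → true
console.log(replicaA.value); // → cart: 2 items
```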

This isn't about dumbing things down. It's about understanding which details matter at each level of abstraction. When you can do this, you can communicate with anyone—and you can think at any level the problem demands.

Ritual 7: Audit Your Own Thinking

The Practice: Upload your own notes, blog posts, or documentation. Ask the LLM: "What patterns or biases do you see in my thinking?"

Why It Works: We all have intellectual blind spots. Favorite patterns we overuse. Problems we always solve the same way. Assumptions we don't question.

An LLM can spot these in seconds because it's not inside your head. It's pattern-matching across everything you've written.

Example: You upload 10 of your technical decisions from the past year.

The LLM might notice:

  • You consistently choose performance over maintainability
  • You avoid async code even when it's appropriate
  • You frame every problem as a data modeling challenge
  • You dismiss new tools without evaluating their specific advantages

You'll see your intellectual loops exposed, the places where you stop thinking and fall into habit. But once you see them, you can't unsee them. You start catching yourself mid-pattern.

Ritual 8: Turn a Talk into a Socratic Dialogue

The Practice: Take a conference talk you liked. Grab the transcript. Ask the LLM to turn it into a Socratic dialogue - questions and answers between you and the speaker.

Why It Works: Passive watching doesn't create understanding. Active questioning does.

A Socratic dialogue forces you to:

  • Identify what you don't understand
  • See counterarguments to the speaker's points
  • Connect the idea to what you already know

Example: You watch a talk on "Why We Moved from Microservices Back to a Monolith."

The Socratic version might go:

You: Why did microservices fail for them?
Speaker: We didn't have the team size or ops maturity to manage the complexity.
You: So is this about microservices being bad, or their context being wrong?
Speaker: Context. At 50 engineers, the communication overhead dominated the benefits.
You: What threshold changes this equation?

You internalize the reasoning behind the conclusion, not just the conclusion. You can apply the thinking to your own context instead of just copying their architecture decision.

Ritual 9: Hunt Your Own Contradictions

The Practice: Feed the LLM your own opinions from blog posts, tweets, code comments, whatever. Ask: "What's inconsistent or contradictory here?"

Why It Works: Your worldview is probably less coherent than you think. We all hold conflicting beliefs without realizing it because we never examine them side by side.

Example: You've written:

  • Code should be self-documenting
  • Complex algorithms need detailed comments
  • Comments lie but tests don't

Ask the LLM to find the tension.

It might say: "You claim code should be self-documenting, but you also believe complex code needs comments. This suggests either: 1) some code can't be self-documenting, which contradicts the first claim, or 2) complex is a code smell indicating the need for refactoring rather than comments."

Your thinking becomes a coherent system instead of a collection of unexamined beliefs. This makes you better at design decisions because your reasoning process becomes trustworthy.

Ritual 10: Decode the Press Release

The Practice: Take a random press release from a major tech company - AWS, Google, Meta, doesn't matter. Ask: "What internal problem are they actually trying to solve?"

Why It Works: Marketing is never about the announced feature. It's about the problem they're not saying out loud. Usually: cost, control, or competitive positioning.

Example: Google announces a new cloud service for simplified Kubernetes management.

Ask the LLM: "What's the real problem?"

You might uncover:

  • Kubernetes is too complex, causing customer churn to simpler platforms
  • Their support costs are high because users misconfigure it
  • AWS has a competing service eating their market share
  • They need lock-in before Kubernetes commoditizes cloud infrastructure

You develop a bullshit detector. You stop taking technical announcements at face value. This makes you better at evaluating new tools - you see past the marketing to the actual tradeoffs.

Ritual 11: Read Changelogs Like History

The Practice: Feed an LLM the changelog of a framework over several major versions. Ask it to find patterns. You'll start predicting tech trends without reading hype posts.

Why It Works: Changelogs are compressed history. They show what broke, what users complained about, and what tradeoffs the maintainers are making.

Example: Look at React's changelog from v15 to v18:

  • Fiber rewrite (v16): Wanted interruptible rendering
  • Hooks (v16.8): Wanted to eliminate classes
  • Concurrent rendering (v18): Wanted better UX for slow operations

Pattern: Progressive enhancement of rendering control

The LLM might predict: "Next focus will likely be on streaming server rendering and partial hydration. They're systematically removing render-blocking operations."

You see the direction of evolution, not just individual features. You can make better bets about what to learn and when to adopt new versions.

Ritual 12: Stress-Test Best Practices

The Practice: Pick a best practices article. Ask: "Which of these fail in an AI-assisted workflow?"

Why It Works: Best practices exist for a context - usually the pre-AI era of manual coding. Many don't survive when LLMs generate boilerplate or tests or documentation.

Example: Best Practice: Always write comprehensive unit tests before implementation (TDD).

Ask: Does this still apply when an LLM can generate passing tests and implementation simultaneously?

The LLM might say:

  • TDD's value was forcing API design thinking—still valuable
  • But tests-first sequencing matters less when generation is instant
  • New failure mode: tests and code both wrong but passing
  • New practice needed: verification testing that checks AI-generated tests

You stop cargo culting best practices and start understanding why they were best practices and adapting them to new contexts.

Ritual 13: Find the Real Disagreement

The Practice: Take a Hacker News argument. Feed both sides into the model. Ask it to summarize the real disagreement in one line.

Why It Works: Most technical arguments aren't about technology. They're about values, experience, or context that neither side is making explicit.

Example: HN thread: Microservices vs Monoliths

One side: Microservices are overcomplicated. Monoliths are simpler.
Other side: Microservices enable team independence. Monoliths create bottlenecks.

Ask the LLM: "What's the real disagreement?"

Answer: Organization size. The first person has a 10-person team where coordination is easy; the second has a 100-person team where coordination is the bottleneck. They're both right for their context.

You stop having religious arguments about technology. You start identifying the context that makes one approach better than another. This makes you better at making decisions and explaining them.

Ritual 14: Stage a Legendary Code Review

The Practice: Ask the LLM to simulate a code review between two legendary engineers - say Dijkstra and Linus Torvalds. Run it on a random GitHub repo.

Why It Works: Great engineers have perspectives - systematic ways of evaluating code quality. Watching these perspectives clash on real code teaches you more about engineering philosophy than any course.

Example: Pull up some random open-source project. Ask for a review from:

Dijkstra's perspective:

  • This function has three exit points. Structured programming demands single entry, single exit.
  • Variable names are abbreviations. Mathematics uses precise notation; code should too.
  • The algorithm's correctness proof is unclear. How do we know this terminates?

Linus's perspective:

  • This is overengineered. Just use a simple loop.
  • Who cares about theory if it's slow? Show me benchmarks.
  • The abstraction is pointless. Inline this until it hurts.

You internalize that there's no universal good code. There are tradeoffs based on values: correctness vs performance, abstraction vs simplicity, maintainability vs immediacy. You become better at consciously choosing your tradeoffs.

Ritual 15: Track Your Cognitive Evolution

The Practice: After doing several of these rituals, ask the LLM: "What patterns do you see in how my thinking is changing?"

Why It Works: Learning compounds. The rituals don't just teach you individual facts, they reshape how you approach problems. Tracking this meta-level change helps you see your own cognitive evolution.

Example: After a month of these rituals, you might discover:

  • You're asking 'why' three levels deeper than before
  • You're considering failure modes before implementation
  • You're more comfortable with ambiguity
  • You're less dogmatic about tooling choices

You don't just learn things. You learn how you learn. You debug your own thinking process. This is when you stop being someone who knows a lot and start being someone who can figure out anything.

Why This Actually Works

These rituals work because they create cognitive dissonance—the uncomfortable gap between what you think you know and what you actually understand.

Traditional learning tries to avoid this discomfort. Courses are linear. Tutorials are polished. Everything is designed to make you feel smart as quickly as possible.

But real learning happens at the edge of your competence, where things don't quite make sense yet. That friction - the struggle to reconcile conflicting ideas, to rebuild something you thought you understood, to defend a position you disagree with - is where understanding deepens.

LLMs are perfect for this because:

  1. They have infinite patience - You can ask why ten times in a row
  2. They can steelman any position - Including ones you find ridiculous
  3. They can simulate any perspective - From beginner to domain expert
  4. They expose your assumptions - By taking you literally when you're being vague
  5. They don't judge - You can admit ignorance without ego cost

The Trap to Avoid

Here's what won't work: Using these rituals to collect more facts.

"Cool, now I know why Promise.all fails. Let me add that to my mental catalog of things I know."

That's not the point.

The point is to change how you think, not what you know.

After these rituals, you should be:

  • Asking deeper questions before implementing
  • Seeing tradeoffs instead of right answers
  • Comfortable with "I don't know, let me figure it out"
  • Suspicious of your own assumptions
  • Better at reasoning from first principles

If you're just accumulating trivia, you're using them wrong.

How to Start

Pick one ritual. Do it once. See what breaks in your understanding.

Don't try to do all 15. Don't make it a daily checklist. These aren't productivity hacks, they're cognitive tools. Use them when you feel stuck, when you've stopped learning.

These rituals aren't about becoming an AI power user. They're about becoming the kind of engineer who doesn't need the latest framework, the newest certification, or the most popular tech stack to be effective.

The kind of engineer who can:

  • Walk into any codebase and understand it
  • Evaluate any technology without falling for hype
  • Make architectural decisions that survive for years
  • Solve problems they've never seen before

The kind of engineer who doesn't live in constant fear of being replaced or being left behind.

Because you're not optimizing for knowing the most. You're optimizing for understanding the deepest. And depth doesn't become obsolete.

One More Thing

The best developers I know don't use more tools. They use fewer tools, better.

They don't read more blogs. They read fewer blogs, deeper.

They don't chase novelty. They master fundamentals so thoroughly that novelty becomes obvious, just old ideas in new clothes.

These rituals are designed to create that depth. To transform you from someone who knows patterns into someone who understands principles.

The way you see problems changes first.
The way you solve them follows naturally.


Want more thinking rituals? This is Series I. I'll share Series II once I see how these land. But honestly? You don't need Series II. You need to actually do these 15 first.

Most people won't. They'll bookmark this, feel productive, and move on.

Don't be most people.
