Catching Up with AI

Ever felt stuck in a project without clear direction? Worked alone without a senior engineer to provide feedback or validate your work? Or been part of a team where the SDLC process feels chaotic, yet hesitated to offer suggestions because you lack experience?

In the world of software engineering, such situations are not uncommon. Many of us face challenges like the absence of mentorship, unsupportive work environments, or pressure to deliver without adequate support.

However, with the emergence of generative AI tools like ChatGPT, Claude, and Gemini, we now have new ways to catch up and sharpen our skills as software engineers.

AI as a Buddy Engineer

Imagine you're building a monorepo app—Next.js frontend, Hono backend API.
You’re unsure whether the frontend should call the backend directly or proxy through a Next.js API route, and how TanStack Query fits into either setup. You second-guess every architecture choice. You need feedback—but there’s no one to ask.

Now AI steps in.

It reviews trade-offs, explains flow patterns, breaks down cache strategies, and even drafts API handlers with mock data. It’s not perfect—but it’s fast, patient, and relentless. You’re no longer stuck googling for hours. You're iterating.

Think of it as a pair programming partner that doesn’t sleep.
One that helps you:

  • validate architecture decisions
  • debug error stacks
  • prototype design patterns
  • review code for consistency
  • simulate alternate flows before you commit
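The "direct call versus proxy route" decision above is exactly the kind of thing AI can help you reason through concretely. A minimal TypeScript sketch of how that choice can be isolated into one place (the "/api/proxy" path and the environment variable fallback are hypothetical, not from any specific project):

```typescript
// Decide where a frontend request goes: straight to the Hono backend when a
// base URL is provided, or through a same-origin Next.js proxy route.
// The "/api/proxy" fallback path is an assumption for illustration.
function buildApiUrl(path: string, base?: string): string {
  const root = base ?? "/api/proxy"; // no base configured → use the proxy route
  return `${root.replace(/\/+$/, "")}${path}`;
}

// A fetcher like this is what you would hand to TanStack Query as a queryFn,
// regardless of which routing strategy you pick.
async function fetchJson<T>(path: string, base?: string): Promise<T> {
  const res = await fetch(buildApiUrl(path, base));
  if (!res.ok) throw new Error(`API error ${res.status} for ${path}`);
  return res.json() as Promise<T>;
}
```

Calling the backend directly skips a network hop; proxying keeps the backend origin and any secrets on the server. Either way, the trade-off now lives in one function instead of being scattered across components—which is precisely the kind of refactor an AI review tends to suggest.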

AI as a Mentor

So how about AI as a mentor?

Imagine you're debugging a Tailwind CSS error in your Next.js monorepo—something like:

Cannot apply unknown utility class: border-border

You’re staring at the terminal, confused. The docs don’t help. Google gives you five outdated Stack Overflow threads.
Instead of digging through random GitHub issues, you drop the error into ChatGPT.

Boom—explanation.
It tells you that Tailwind v4 changed how theme values are defined, that @apply only works with utilities your theme actually registers, and why border-border isn't valid unless border is defined as a color in your theme.
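And it can show you the fix in context. In a v3-style tailwind.config.ts, registering the missing color looks roughly like this (in Tailwind v4 the equivalent lives in a CSS @theme block instead; the --border variable name follows the common shadcn/ui convention and is an assumption here):

```typescript
// tailwind.config.ts — v3-style config sketch. Assumes a shadcn/ui-like setup
// where --border is a CSS variable defined in your global stylesheet.
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./app/**/*.{ts,tsx}", "./components/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        // Registers `border` as a color, which makes `border-border` a valid utility.
        border: "hsl(var(--border))",
      },
    },
  },
};

export default config;
```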

That’s mentoring.

It’s not just about fixing bugs. You can:

  • Ask why this is happening, not just how to solve it.
  • Get a second opinion on code structure and architecture.
  • Learn alternatives and best practices (e.g., "Should I debounce this search input or use a custom hook?").
  • Validate patterns: "Is this Intersection Observer setup okay for infinite scrolling?"
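Questions like the debounce one are where AI answers shine, because they come back with something runnable you can pick apart. A minimal sketch of a generic debounce in TypeScript (whether you then wrap it in a custom hook is a separate decision):

```typescript
// Collapses rapid repeated calls into one: only the last call within `ms`
// milliseconds actually runs. Useful for search inputs firing on every keystroke.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  ms: number,
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    clearTimeout(timer); // cancel the previously scheduled call
    timer = setTimeout(() => fn(...args), ms);
  };
}
```

In React, the same logic usually moves into a useDebounce hook so the timer survives re-renders—but the standalone function above is the part worth understanding first, and the part a mentor (human or AI) would walk you through.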

Even better: AI doesn’t shame you for asking “stupid” questions.
It encourages curiosity, explains at your pace, and can go as deep or as abstract as you want.

AI mentorship is like having a senior engineer on-call—one who doesn’t mind explaining something five different ways until it finally clicks.

And when used consistently, this kind of feedback loop becomes a powerful force for closing knowledge gaps—fast.

Boosting Learning and “Ngulik” (Tinkering) with AI

One of my Twitter mutuals recently built a Chrome extension in just five hours.

"cooking some chrome extension 🔍 wdyat? — WIP. only 5 hours progress"

It honestly blew my mind.

Not just because of the speed—but because it shows what’s now possible when you combine technical skill with the right tools, especially AI. These days, you can go from an idea to a working prototype in a single sitting. No waiting, no gatekeeping—just build.

For me personally, I’m not quite at that speed yet. Right now, I’m taking the slow route—building a fullstack monorepo using Hono for the backend and Next.js for the frontend. It’s not lightning fast, but I’m intentionally going deep to understand how backend and frontend systems connect. Every part of the stack is an opportunity to learn something new—from API routing and integration, to client-side data fetching and caching strategy.
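Going deep on caching strategy is a good example of what that looks like in practice. The core idea behind a stale-time client cache—the mechanism libraries like TanStack Query build on—fits in a few lines. A simplified sketch, emphatically not the real library:

```typescript
type Entry = { value: unknown; fetchedAt: number };

// A tiny stale-time cache: serve the cached value while it is fresh,
// re-run the fetcher once it goes stale. Real query libraries add request
// deduplication, background refetching, invalidation, and much more.
class MiniQueryCache {
  private store = new Map<string, Entry>();

  constructor(
    private staleTimeMs: number,
    private now: () => number = Date.now, // injectable clock, handy for testing
  ) {}

  async get<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
    const hit = this.store.get(key);
    if (hit && this.now() - hit.fetchedAt < this.staleTimeMs) {
      return hit.value as T; // still fresh → no network call
    }
    const value = await fetcher(); // stale or missing → refetch and cache
    this.store.set(key, { value, fetchedAt: this.now() });
    return value;
  }
}
```

Writing a toy version like this, then asking AI how the real library differs, is exactly the kind of slow-route learning that makes the eventual fast route possible.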

And that’s totally fine. Everyone learns and builds at a different pace. What matters isn’t how fast we ship, but how well we understand what we’re building.

That said, having AI in the loop makes a huge difference. The feedback is instant. The iteration cycles are tighter. The confidence to explore things I don’t fully understand? Much stronger. I can validate my approach, ask for code suggestions, refactor with a second opinion, and even simulate edge cases—without context switching away from my editor.

Of course, this doesn’t mean we abandon the fundamentals. We still dive into official docs. We still browse GitHub issues. We still ask for help from peers and mentors. AI doesn’t replace those things—but it reinforces the process. It gives us another layer of support. When we're stuck, AI helps us move again. When we're unsure, it gives a starting point to work from.

The result? More momentum in how we learn and build—especially when we’re working alone.

At the end of the day, AI isn’t a shortcut. It’s a multiplier.
It doesn’t replace documentation—it amplifies it.