I woke up this morning to Claude Opus 4.7 being available, and honestly? I wasn't surprised. The new desktop experience, the performance issues over the last few days—it all felt like the setup for something bigger dropping.
I was reluctant to jump in right away, and I've only spent a few hours with it as of writing this. But those few hours were... illuminating. In ways I didn't expect.
When Your AI Roasts Your Previous Work
I had what I thought was a killer session the night before with my best friend Opus 4.6. Built out this automation I've been working on, felt really good about it. The next morning, I fed all that work to 4.7 and asked it to evaluate what we'd accomplished.
It found tons of shit that was poorly planned, contradictory, or just not considered at all. My CLAUDE.md file claimed things that simply weren't true, and there were inconsistencies everywhere.
The feedback was genuinely useful, though. It pointed out that my solution was "over-engineered for one person, but under-engineered for multiple." That was worth knowing, because I've been thinking about making this automation available to family, friends, maybe Daring Creatives members.
Then it said something that made me laugh:
"About 60% of this looks great. 40% has surgery scars—hacked together things."
The Over-Engineering Trap
This is where I had to get honest with myself. If I have an Achilles heel right now, it's that I over-engineer the shit out of everything.
I started with something simple: post drafts to a website on a timer. That's it.
But then Claude starts suggesting we could use analytics data, and track what people click on, and how long they look at things, and suddenly I'm building a whole system that's constantly analyzing performance to drive content decisions.
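For scale, that original idea really does fit in a few lines. Here's a rough sketch of what "post drafts on a timer" amounts to — everything in it (the drafts folder, the function names, printing in place of a real site API call) is hypothetical, not my actual automation:

```python
from pathlib import Path

DRAFTS = Path("drafts")  # hypothetical folder of markdown drafts

def next_draft(folder=DRAFTS):
    """Return the oldest draft in the folder, or None if it's empty."""
    drafts = sorted(folder.glob("*.md"), key=lambda p: p.stat().st_mtime)
    return drafts[0] if drafts else None

def run_once(folder=DRAFTS):
    """One tick of the timer: publish the oldest draft if there is one."""
    draft = next_draft(folder)
    if draft:
        # The real site API call would go here; print stands in for it.
        print(f"posting {draft.name}")
        draft.unlink()  # or move it to a published/ folder

# A cron job (or a `while True: run_once(); time.sleep(86400)` loop)
# would call run_once() on whatever schedule you want.
```

Everything past that — analytics, click tracking, performance-driven content decisions — was scope the project picked up along the way.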
Everything sounds like a great idea when Claude suggests it.
"Oh, we could do this. We could add that."
And everything feels achievable, which is both the blessing and the curse of coding with AI when you're not really a coder.
Anyone who's spent a month or two with Claude Code has probably been here. You start thinking in systems and automations because suddenly you can actually build them.
The 80% Rule
This got me thinking about something I remember from a Tim Ferriss podcast years ago (and I'd love to track down the exact quote). He talked about how you can become functionally expert at anything—like 80% proficient—in a couple of years. But getting that last 20% to true mastery? That takes exponentially longer.
I've always gravitated toward that 80% approach. Learn something new, get pretty knowledgeable—more than most people around me—then move on to the next thing.
And this is where AI really unlocks something interesting.
You can let the computer handle that remaining 20% of mastery that would take your whole life to achieve. Instead of going deep on one thing, you spread out wider. Learn the fundamentals, then use AI to execute at a level you never could before.
What 4.7 Actually Feels Like
Okay, this isn't really a review of 4.7, but after a few hours: it feels more blunt, which I like. It has more thinking settings, costs a bit more to run (though Anthropic extended our limits to compensate), and it follows prompts more literally.
That last part means you need to be more intentional with context.
I usually prompt pretty casually—I think most people do.
I course-correct a lot by saying "don't do this" instead of "do this," mostly because when I'm building something, I don't always know the right approach upfront.
4.7 caught its own mistakes a couple times, which was both reassuring and unnerving. We expect AI to be perfect, but it learns by making mistakes just like we do. At least it found the errors before I did.
There was this moment where it asked about committing something to git, and I just said "YOLO."
It responded with something like "even at YOLO speeds, I'm going to pay attention to what I'm doing and make sure nothing breaks."
That made me smile.
The Real Insight
But here's what really stuck with me from those first few hours: 4.7 didn't just evaluate my code. It held up a mirror to how I work with AI in general.
I'm an over-engineering addict. I get excited by what's possible and lose sight of what's actually needed. The automation that was supposed to be a simple timer became a content analytics engine because I could build it, not because I should.
Maybe that's okay though. Learning to think in systems, even over-engineered ones, has leveled up my understanding of how things actually work. I have a much deeper appreciation for people who code professionally.
And sometimes those "surgery scars" turn into the most interesting features down the road.