CONVERSATIONS WITH CODE
William Smith

The Company That Eats Its Own Cooking

Anthropic's engineers use Claude for 59% of their work now. What happens when the people building AI are also the ones most dependent on it?

Article Details: Transparency Protocol v3.0
William (30%): High-level concept and final polish.
AI (70%): Structural formatting, initial prose drafting, synthesis from source material.
Stack: Claude Sonnet 4.6

Eating your own dog food. The idea isn't exclusive to tech, but here it means you build a product and then use it yourself. It's supposed to keep you honest. If your own team won't use what you're building, why would anyone else?

Anthropic took this further than most companies would be comfortable with.

In December 2025, a team led by research scientist Saffron Huang — she's on Anthropic's Societal Impacts team and was named in TIME's 100 Most Influential People in AI — published a study called "How AI Is Transforming Work at Anthropic."

They surveyed 132 engineers and researchers, did 53 in-depth interviews, and analyzed 200,000 internal Claude Code transcripts.

The co-authors include Bryan Seethor, Esin Durmus, Kunal Handa, Miles McCain, Michael Stern, and Deep Ganguli.

What they found: their engineers now use Claude for roughly 59% of their work, up from 28% a year earlier.

Productivity gains hit 50%, up from 20%.

They even have a name for it internally — "Antfooding," because employees call themselves Ants.

But here's the part that few people talk about, and that I've experienced myself: 27% of the work Claude helps with is work that wouldn't have been done at all otherwise.

Not "done slower" — just never done.

Little fixes, small improvements, the kind of stuff that sits on a backlog forever because nobody has time.

Anthropic calls them "papercuts."

Now they're getting fixed because the cost of doing them dropped to nearly zero.

The Part That Raises Concern

Engineers reported that junior team members are asking Claude instead of asking colleagues. Fewer mentorship moments. Fewer hallway conversations where a senior engineer explains not just the what but the why.

One engineer said it plainly in the study:

"I feel optimistic in the short term but in the long term I think AI will end up doing everything and make me and many others irrelevant."

Then there's what the study calls the "paradox of supervision" — you need strong coding skills to review what Claude produces, but you might be losing those skills by letting Claude produce it.

I do believe certain skills we have and rely on now will atrophy with AI. We'll probably replace them with new ones.

What This Means If You're Not Anthropic

Anthropic is an AI company with early access to the best models. Of course their adoption numbers are high. But the patterns they're seeing are going to show up everywhere.

The engineers who leaned in described becoming "full-stack" almost overnight — taking on frontend work, databases, data visualizations, stuff that was previously outside their wheelhouse. Claude handled the implementation. They handled the direction.

That sounds a lot like what we talk about here with multi-modal creatives. The tool handles the execution. You handle the intent.

The difference is Anthropic is watching it happen in real time, measuring it, and publishing the results.

Most of us are just going to experience it and figure it out as we go.

59% is a big number. And it's going up, not down. The question isn't whether AI will change how people work — Anthropic already answered that. The question is whether the rest of us are paying attention to what they're learning along the way.

