2/12: AI for Operators
Hi there,
Welcome back to AI for Operators. Here’s what we’ve got for you this week:
- The Essay: How to Escape the AI Metrics Mirage
- The Jobs: 4 AI strategy roles
- The Links: 10 curated reads
- The Events: 3 upcoming
How to Escape the AI Metrics Mirage
By Diane Sadowski-Joseph
Most teams are measuring AI usage — logins, seat activations, tokens — because that's what's available. And it's not nothing. If you're paying for 100 seats and 11 people logged in last quarter, that's a signal worth having.
But usage alone can create a mirage: lots of activity that looks like progress, with no one asking whether the work is actually getting better.
The fix isn't throwing out what you're tracking. It's adding layers.
Think of metrics in three tiers:
Tier 1: Usage and output metrics — What you're probably already tracking. Logins, prompts, automations triggered. Keep these. They're your baseline.
Tier 2: Operational impact metrics — What's changing in the workflow. Hours saved on specific tasks. Fewer revision cycles. Faster turnaround. This is where you start seeing if AI is making work better, not just making more of it.
Tier 3: Business outcome metrics — What moves for the org. Revenue influenced. Risk flagged earlier. Capacity unlocked for work that's been stuck on the backlog for months.
Most teams are at Tier 1 — not because they're doing it wrong, but because nobody helped them define Tiers 2 and 3 before they started building.
One question that helps: Ask your functional leaders, "What would prove to you that AI is actually working for your team?"
Not "are people using it." But what outcomes would you need to see?
This does two things: it surfaces metrics that matter to that team, and it creates ownership. The leader sets the bar. Now they've got skin in the game.
Adoption stops being something you're pushing. It becomes something they're measuring.
About Diane and Clarinet: Diane is an AI adoption and organizational transformation leader who has advised 50+ companies on AI adoption and trained 100,000+ professionals over her career. She speaks and writes on human–AI collaboration and responsible implementation, and previously helped scale LifeLabs Learning. She's now Co-Founder and Head of Product at Clarinet, where she partners with organizations to learn, build, and scale with AI in high-ROI, human-friendly ways.
Some AI strategy and implementation roles that caught our eye:
Head of AI Strategy & Enablement — ButterflyMX
- United States
Manager, GenAI Strategy & Business Operations — Mammoth Brands
- New York, NY
- Comp: $120,000 - $145,000
- United States
- Comp: $130,000 - $250,000
Vice President, AI, Collaboration and Data — Dynatrace
- Boston, MA
- Comp: $270,000 - $300,000
Practical
- How to Set Up OpenClaw for $20 (Step by Step) — OpenClaw lets you deploy an autonomous AI agent in the cloud for under $20/month that can handle tasks like research, data entry, and web automation around the clock. Perfect for operators looking to automate repetitive workflows without investing in expensive infrastructure or learning complex deployment processes.
- Intelligent AI Delegation — A new framework tackles one of the trickiest operational challenges in AI: teaching agents to break down complex work and delegate tasks to other AIs or humans with proper accountability and trust mechanisms. The research moves beyond simple rule-based handoffs to create adaptive delegation systems that can handle failures and changing conditions—essential reading if you're planning to orchestrate multiple AI agents in your workflows.
- No Coding Before 10am | Michael Bloch — A startup's radical approach treats code as the last resort rather than the first step—engineers spend mornings defining requirements and letting AI agents handle implementation, fundamentally rethinking how software gets built. The "no coding before 10am" rule forces teams to clarify what they're building and why before jumping into implementation, potentially unlocking 10x productivity gains.
- A Guide to Which AI to Use in the Agentic Era — The AI landscape has evolved beyond simple chat interfaces into specialized agents, custom GPTs, and workflow tools—here's a practical framework for deciding when to use Claude for deep thinking, ChatGPT for quick tasks, Gemini for Google integration, or purpose-built agents for repetitive workflows. Operators need this mental model now, before the options multiply even further and decision paralysis sets in.
Perspectives
- Beyond Technical Debt: How AI Coding Assistants Created "Comprehension Debt" in Our Indie Game — AI coding assistants helped an indie game team ship faster, but left behind code that nobody fully understood—a new kind of "comprehension debt" that made debugging and modifications surprisingly painful. A cautionary tale about velocity versus understanding that's especially relevant as teams rush to adopt AI coding tools without considering the maintenance burden.
- AI Doesn’t Reduce Work—It Intensifies It — New research shows AI tools don't lighten workloads—they accelerate pace, expand scope, and blur work boundaries, creating a hidden productivity trap that can lead to burnout and declining quality. Companies need deliberate "AI practices" like intentional pauses and work sequencing to prevent the initial efficiency gains from turning into unsustainable workload creep.
- the lottery of career success: or why you may want to bribe OpenAI — The author makes a provocative case that paying to work at OpenAI or similar AI leaders might be rational career arbitrage, given how dramatically these roles can accelerate your trajectory and future earning potential. It's a thought experiment that reframes professional development as an investment decision rather than just collecting paychecks.
- AI Won’t Automatically Make Legal Services Cheaper — Despite AI's potential to reduce legal costs, three structural bottlenecks—regulatory barriers, business model inertia, and market power concentration—mean savings are unlikely to reach everyday consumers without intentional policy intervention. A sobering reality check for operators expecting technology alone to democratize professional services in any industry.
- The financialisation of AI is Just Beginning — Wall Street is finding new ways to slice, dice, and securitize AI investments—from bundling compute credits into tradable instruments to creating derivatives around model performance metrics. If you're planning AI infrastructure investments or evaluating vendor contracts, understanding these emerging financial products could help you negotiate better terms or hedge operational risks.
News
- Payrolls to Prompts: Firm-Level Evidence on the Substitution of Labor for AI — A rigorous study using real payment data from thousands of firms reveals that companies with higher freelancer spending adopted AI faster after ChatGPT launched—and for every $1 reduction in freelance spending, they spent only $0.03 more on AI, suggesting dramatic 97% cost savings on outsourced tasks. This is the first micro-level evidence quantifying how generative AI is actually substituting for human labor, not just in theory but in practice.
- Anthropic launches new enterprise offerings, raising the heat on software companies — Anthropic is rolling out enterprise features for Claude that could disrupt traditional SaaS vendors—a reminder that AI-native competitors are moving fast to capture business workflows. Operators should watch how these capabilities stack up against existing tools in their stack and consider where AI models might replace point solutions.
Upcoming events:
- When to Buy and When to DIY — February 26, 3pm ET
- How AI Agents Actually Get Work Done — March 26, 1pm ET
- Chief of AI Fellowship — April 2, 12pm ET
Thanks for reading,
Tom Guthrie