Welcome to my blog, where I share insights on product management, combining lessons from education, certifications, and experience. From tackling challenges to refining processes and delivering results, I offer practical advice and perspectives for product managers and teams. Whether you’re new or experienced, I hope these articles inspire and inform your journey.

Tom Rattigan

Everywhere Agents: AI that meets you where you work

The Note-Taking Trap

We're living through the golden age of AI-powered note-taking apps. Notion's AI writes summaries. Obsidian connects ideas automatically. Otter transcribes meetings with superhuman accuracy. Roam builds knowledge graphs from scattered thoughts.

These tools are genuinely impressive. They make it easier than ever to capture, organize, and retrieve information. But they all share the same fundamental limitation: they're productivity cul-de-sacs.

Here's what I mean. You have a productive meeting with your product team. Your AI note-taking app dutifully transcribes everything, identifies action items, and even generates a clean summary. Beautiful. But then what?

You still have to:

  • Copy the bug reports into Jira tickets

  • Paste the feature requirements into your product spec

  • Send the timeline updates to Slack

  • Add the action items to Asana

  • Update the project status in Monday.com

  • Email the client summary to stakeholders

Your AI did the easy part—capturing and organizing information. You're still stuck with the tedious part—moving that information to where it actually needs to live and work.

This is the app-switching tax, and it's killing our productivity. Not because any individual tool is bad, but because our workflows span multiple tools, while our AI assistants are trapped in single apps.

The Workflow Reality

Let's be honest about how knowledge work actually happens in 2025.

Your marketing brief doesn't live in your note-taking app—it lives in Google Docs where your team can collaborate on it. Your bug reports don't belong in your meeting notes—they belong in Jira where engineers can prioritize and fix them. Your client updates don't stay in your AI assistant—they become Slack messages, email threads, and project dashboard updates.

The most useful information is information in motion. It's data that flows from capture to action, from idea to implementation, from individual insight to team coordination.

But today's AI productivity tools treat information like static artifacts. They help you create better notes, but they don't help those notes become better work.

This is backwards. The future of AI productivity isn't about building the perfect note-taking app. It's about building AI that understands your entire workflow and can act across all your tools.

Introducing Everywhere Agents

Imagine an AI assistant that doesn't just capture your thoughts—it understands what those thoughts need to become and makes it happen automatically.

You're in a client meeting, and you mention that the login flow is confusing users. Instead of creating another note that you'll forget to act on, your AI agent:

  1. Recognizes this as a UX issue that needs tracking

  2. Creates a properly formatted Jira ticket with relevant context

  3. Assigns it to the UX team based on your team structure

  4. Adds it to the current sprint if it's high priority

  5. Updates your client dashboard to show issue acknowledgment

  6. Schedules a follow-up task to check resolution status

All of this happens in the background while you continue your conversation. No app switching. No copy-pasting. No forgotten action items.

This is what I call an Everywhere Agent—AI that meets you where you work, understands what you're trying to accomplish, and takes action across your entire tool ecosystem.

How Everywhere Agents Work

The technology stack for Everywhere Agents builds on three core capabilities that are finally mature enough to make this vision practical:

1. Context Detection

Modern language models have gotten remarkably good at understanding intent and categorizing content. They can reliably distinguish between:

  • Product requirements that need to become feature specs

  • Bug reports that need to become Jira tickets

  • Marketing ideas that need to become campaign briefs

  • Meeting action items that need to become calendar events

  • Client feedback that needs to become customer support tickets

  • Team updates that need to become Slack messages

But context detection goes deeper than content classification. The AI also understands:

  • Urgency indicators: "This is blocking the launch" vs "nice to have for v2"

  • Ownership patterns: Which team members typically handle which types of work

  • Workflow stages: Whether something is an initial draft, needs review, or is ready for implementation

  • Relationship mapping: How different pieces of information connect to existing projects and priorities

The key breakthrough is that LLMs can now perform this contextual analysis with enough reliability to automate routing decisions. You don't need perfect accuracy—you need good enough accuracy with easy correction mechanisms.
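
To make this concrete, here is a minimal sketch of the context-detection step in Python. The call_llm function is a stand-in for whatever model API you use (its canned JSON response keeps the example runnable), and the prompt wording, field names, and categories are illustrative assumptions rather than a prescribed schema.

```python
# Minimal context-detection sketch: ask a model to classify a captured note
# and return a structured routing decision. call_llm() is a placeholder for
# a real model API; it returns canned JSON here so the example runs as-is.
import json
from dataclasses import dataclass

@dataclass
class RoutingDecision:
    category: str          # e.g. "bug_report", "action_item", "team_update"
    urgency: str           # "low" | "medium" | "high"
    suggested_owner: str   # team or person inferred from ownership patterns
    confidence: float      # 0.0-1.0; used later to decide auto-act vs. ask

CLASSIFY_PROMPT = (
    "Classify this workplace note. Respond with JSON containing the keys "
    "category, urgency, suggested_owner, confidence.\n\nNote: {note}"
)

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (hosted API, local model, etc.).
    return json.dumps({
        "category": "bug_report",
        "urgency": "high",
        "suggested_owner": "ux-team",
        "confidence": 0.91,
    })

def detect_context(note: str) -> RoutingDecision:
    raw = call_llm(CLASSIFY_PROMPT.format(note=note))
    return RoutingDecision(**json.loads(raw))

if __name__ == "__main__":
    decision = detect_context("The login flow is confusing users; this is blocking the launch.")
    print(decision)
```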

2. Automatic Routing

Once the AI understands what information it's dealing with, it can automatically route that information to the appropriate destination:

Jira Integration: Product bugs become properly formatted tickets with appropriate labels, components, and priority levels. The AI includes enough context for engineering teams to understand and prioritize the issue without additional meetings.

Document Creation: Marketing ideas become Google Doc drafts with proper templates and sharing permissions. Product requirements become structured specs in Confluence with all the necessary sections pre-populated.

Task Management: Action items become tasks in Asana, Monday.com, or your team's preferred project management tool. The AI sets due dates based on urgency indicators and assigns ownership based on team structure.

Communication Routing: Team updates become Slack messages in the right channels. Client communications become email drafts with proper formatting and recipients. Status updates become dashboard entries that stakeholders can access.

Calendar Integration: Scheduling requests become calendar invites with proper attendees and agenda items. Follow-up reminders become calendar blocks with relevant context and preparation materials.

The routing isn't just about moving information—it's about transforming it into the format and structure that works best for each destination tool.
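
A sketch of what that routing layer might look like, assuming the category and urgency produced by a detection step like the one above. The connector functions (create_jira_issue, post_slack_message, create_task) are hypothetical stand-ins, not real SDK calls, and the category-to-destination mapping is only an example.

```python
# Routing sketch: map a detected category to a destination tool and reshape
# the note into that tool's format. The connector functions below are
# hypothetical stand-ins; real integrations would call each tool's API.
def create_jira_issue(summary: str, description: str, priority: str, labels: list) -> None:
    print(f"[jira] {priority}: {summary} {labels}")

def post_slack_message(channel: str, text: str) -> None:
    print(f"[slack] #{channel}: {text}")

def create_task(assignee: str, title: str, due_in_days: int) -> None:
    print(f"[tasks] {assignee} <- {title} (due in {due_in_days} days)")

def route(note: str, category: str, urgency: str, owner: str) -> None:
    if category == "bug_report":
        create_jira_issue(
            summary=note[:80],
            description=note,
            priority="High" if urgency == "high" else "Medium",
            labels=["auto-routed"],
        )
    elif category == "action_item":
        create_task(owner, note, due_in_days=2 if urgency == "high" else 7)
    elif category == "team_update":
        post_slack_message("product-updates", note)
    else:
        # Unknown categories fall back to a human triage inbox rather than guessing.
        print(f"[inbox] needs manual triage: {note}")

route("Checkout page times out on slow connections", "bug_report", "high", "platform-team")
```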

3. Two-Way Synchronization

This is where Everywhere Agents become truly powerful. They don't just push information out—they monitor changes across your entire tool ecosystem and keep everything synchronized.

When someone updates a Jira ticket status, your central knowledge base reflects that change. When a client responds to an email, the relevant project dashboard gets updated. When a deadline shifts in Asana, related calendar items automatically adjust.

This creates a single source of truth that actually stays true, because it's constantly synchronized with all the distributed sources where real work happens.

The synchronization also enables intelligent notifications. Instead of getting bombarded with updates from every tool, your AI agent filters and prioritizes changes that actually matter to you. It knows the difference between routine progress updates and critical blockers that need your immediate attention.
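
One way to sketch the synchronization side is a webhook handler that updates a central record whenever a connected tool changes state, and only notifies on changes that matter. The payload shape and store below are simplified assumptions; real webhooks from Jira, Asana, and similar tools carry their own schemas.

```python
# Two-way sync sketch: a central knowledge store is updated whenever a
# connected tool reports a change. The payload shapes are simplified; real
# webhooks from issue trackers and task tools have richer schemas.
from datetime import datetime, timezone

KNOWLEDGE_BASE = {}  # item_id -> latest known state across all tools

def record_change(item_id: str, source: str, status: str, important: bool) -> None:
    KNOWLEDGE_BASE[item_id] = {
        "status": status,
        "source": source,
        "updated_at": datetime.now(timezone.utc).isoformat(),
    }
    # Intelligent notifications: only surface changes that actually matter.
    if important:
        print(f"[notify] {item_id} is now '{status}' (via {source})")

def handle_ticket_webhook(payload: dict) -> None:
    # A ticket moving to "Blocked" should reach the owner immediately;
    # routine progress updates are recorded silently.
    status = payload["new_status"]
    record_change(payload["ticket_id"], "issue-tracker", status,
                  important=status in {"Blocked", "Reopened"})

handle_ticket_webhook({"ticket_id": "PROD-142", "new_status": "In Progress"})
handle_ticket_webhook({"ticket_id": "PROD-142", "new_status": "Blocked"})
print(KNOWLEDGE_BASE)
```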

Agent Chaining: Workflow Automation

The real magic happens when you combine context detection, automatic routing, and two-way sync into agent chains—sequences of automated actions that handle entire workflow processes.

Consider a typical product team workflow:

Input: "Our conversion rate dropped 15% after the checkout redesign"

Agent Chain:

  1. Analysis Agent recognizes this as a critical product issue

  2. Ticket Agent creates a high-priority Jira ticket for investigation

  3. Communication Agent posts an alert in the #product-critical Slack channel

  4. Research Agent pulls relevant analytics data and attaches it to the ticket

  5. Scheduling Agent sets up an emergency triage meeting with key stakeholders

  6. Documentation Agent creates an incident response document template

  7. Monitoring Agent sets up automated alerts for further conversion rate changes

This entire workflow happens automatically based on a single input. No manual coordination, no forgotten steps, no delayed responses.

Or consider a marketing campaign workflow:

Input: "We should create a case study about the DataCorp implementation"

Agent Chain:

  1. Planning Agent creates a content brief in the marketing folder

  2. Assignment Agent identifies the best writer based on technical expertise and current workload

  3. Research Agent gathers relevant customer data, project outcomes, and quotes

  4. Coordination Agent schedules an interview with the customer success manager

  5. Timeline Agent adds the case study to the content calendar with realistic deadlines

  6. Review Agent sets up approval workflows with legal and customer success teams

  7. Distribution Agent prepares social media and email campaign templates for post-publication

Each agent in the chain adds value while maintaining context about the overall goal.
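
An agent chain like the ones above can be sketched as a sequence of small functions that share a context object, each doing one job and passing its result to the next. The agent names, stand-in calls, and trigger logic below are illustrative, not a fixed framework.

```python
# Agent-chaining sketch: each agent is a small function that reads the shared
# context, does one job, and writes its result back for the next agent.
def analysis_agent(ctx):
    ctx["severity"] = "critical" if "dropped" in ctx["input"] else "normal"
    return ctx

def ticket_agent(ctx):
    ctx["ticket_id"] = "PROD-201"  # stand-in for a real issue-tracker call
    print(f"[ticket] {ctx['severity']} issue filed: {ctx['ticket_id']}")
    return ctx

def communication_agent(ctx):
    print(f"[slack] #product-critical: investigating {ctx['ticket_id']}")
    return ctx

def scheduling_agent(ctx):
    print("[calendar] triage meeting booked with stakeholders")
    return ctx

def run_chain(initial_input, agents):
    ctx = {"input": initial_input}
    for agent in agents:
        ctx = agent(ctx)  # each agent keeps the overall goal via shared context
    return ctx

run_chain(
    "Our conversion rate dropped 15% after the checkout redesign",
    [analysis_agent, ticket_agent, communication_agent, scheduling_agent],
)
```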

The Business Impact

Everywhere Agents aren't just about individual productivity—they're about organizational intelligence. When AI can automatically route, format, and track information across your entire tool stack, several powerful things happen:

Reduced Context Switching

Knowledge workers switch between apps an average of 1,100 times per day. Each switch costs cognitive overhead and breaks flow state. Everywhere Agents reduce app switching by eliminating the manual work of moving information between tools.

Instead of "capture in notes app → remember to create ticket → open Jira → format information → assign and prioritize," the workflow becomes "capture anywhere → AI handles the rest." This isn't just faster—it removes the cognitive burden of remembering and executing multi-step workflows.

Improved Team Coordination

When information automatically flows to the right places in the right formats, teams stay aligned without coordination overhead. Product managers don't need to chase developers for bug reports. Marketing teams don't need to hunt through meeting notes for campaign ideas. Client feedback automatically reaches the people who can act on it.

The AI becomes a coordination layer that connects different team workflows without requiring everyone to use the same tools or follow the same processes.

Better Decision Making

Everywhere Agents create organizational memory that persists across tool boundaries. When someone makes a decision in Slack, that context is preserved in the relevant project documentation. When a client raises a concern in email, it's automatically connected to the product roadmap discussion.

This connected information enables better decisions because people have access to relevant context without having to manually hunt for it across multiple tools and conversations.

Reduced Information Silos

Traditional productivity tools create information silos. Marketing ideas live in marketing tools, engineering discussions live in engineering tools, and executive decisions live in executive tools. Everywhere Agents break down these silos by automatically sharing relevant information across team boundaries.

A customer complaint in Zendesk can automatically create a product requirement in Jira and a marketing response task in Asana. The information flows to where it's needed without manual coordination or political negotiations about which tool everyone should use.

Real-World Implementation Scenarios

Let's walk through how Everywhere Agents would work in practice across different types of organizations:

Software Development Teams

Daily Standup Scenario: During your standup, you mention "The user authentication is taking too long to load on mobile devices."

Traditional Workflow: Someone remembers to create a ticket after the meeting. Maybe. If they remember the details correctly. And if they have time between other priorities.

Everywhere Agent Workflow:

  • AI recognizes this as a performance issue

  • Creates a properly formatted Jira ticket with mobile performance labels

  • Attaches relevant user analytics data showing mobile load times

  • Assigns to the platform team based on component ownership

  • Adds to the current sprint backlog with medium priority

  • Creates a follow-up task to measure improvement after implementation

  • Posts a brief update in the #engineering-updates channel

Customer Support Scenario: A customer emails about difficulty canceling their subscription.

Traditional Workflow: Support agent resolves the immediate issue, maybe mentions it in a team meeting, possibly creates a ticket if they remember and have time.

Everywhere Agent Workflow:

  • AI recognizes this as both a customer service issue and a UX improvement opportunity

  • Resolves the immediate customer request through the support agent

  • Creates a UX research ticket to investigate cancellation flow friction

  • Adds a data point to the "churn reasons" dashboard

  • Schedules a follow-up survey with the customer

  • Creates a knowledge base update task to prevent similar confusion

Marketing and Content Teams

Content Planning Scenario: During a client call, you learn about an interesting use case for your product.

Traditional Workflow: Add to notes, remember to share with marketing team, eventually becomes a case study idea that may or may not get prioritized.

Everywhere Agent Workflow:

  • AI recognizes this as valuable case study material

  • Creates a content brief with initial structure and key points

  • Adds to the content calendar based on current campaign priorities

  • Identifies the best writer based on expertise and availability

  • Sets up stakeholder interviews and approval workflows

  • Creates social media and email promotion templates

  • Schedules publication and distribution timeline

Campaign Management Scenario: You notice a competitor launched a feature similar to yours.

Traditional Workflow: Mental note, maybe share in Slack, possibly discuss in next marketing meeting.

Everywhere Agent Workflow:

  • AI recognizes this as competitive intelligence

  • Updates competitive analysis documents with feature comparison

  • Alerts product team about market movement

  • Creates messaging review task to ensure differentiation is clear

  • Adds to next product marketing meeting agenda

  • Schedules competitive response analysis task

  • Updates sales team with talking points for competitive deals

Consulting and Professional Services

Client Meeting Scenario: During a strategy session, the client mentions they're struggling with employee retention.

Traditional Workflow: Include in meeting notes, remember to follow up with relevant resources, maybe create a proposal if you remember and have capacity.

Everywhere Agent Workflow:

  • AI recognizes this as a potential service opportunity

  • Creates a proposal template for HR consulting services

  • Researches relevant case studies and methodologies from your knowledge base

  • Schedules a follow-up call to dive deeper into their challenges

  • Adds potential project to your pipeline tracking

  • Prepares relevant thought leadership content to share

  • Creates a task to connect them with your HR practice lead

Project Delivery Scenario: You realize a client project needs additional resources to meet the deadline.

Traditional Workflow: Discuss with project manager, maybe escalate to account manager, eventually get resources allocated if available.

Everywhere Agent Workflow:

  • AI recognizes this as a resource allocation issue

  • Updates project status dashboard with resource constraint flag

  • Analyzes team capacity across similar projects to identify available resources

  • Creates budget adjustment proposal with clear justification

  • Schedules resource allocation meeting with project stakeholders

  • Prepares client communication about timeline or scope adjustments

  • Updates project risk register with mitigation plans

Technical Architecture and Challenges

Building Everywhere Agents requires solving several complex technical challenges:

Authentication and Authorization

The biggest technical hurdle is securely managing access to multiple external services. Your AI agent needs permission to create Jira tickets, update Google Docs, post Slack messages, and modify project management tools—all on behalf of different users with different permission levels.

OAuth Management: Each user needs to authenticate with each integrated service, and those authorizations need to be managed securely with proper token refresh mechanisms.

Permission Mapping: The AI needs to understand not just what actions are possible, but what actions are appropriate for each user in each context. A junior developer might be able to create tickets but not assign them to others.

Audit Trails: Every automated action needs clear logging and attribution so teams can understand what the AI did and why.
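
A rough sketch of what per-user token handling plus an audit trail might look like. The refresh_token call is a placeholder (each service has its own OAuth endpoints and flows), and the logged fields are examples of useful attribution, not a complete schema.

```python
# Sketch of per-user credential handling plus an audit trail for automated
# actions. refresh_token() is a placeholder; each service has its own OAuth
# flow. The point: tokens are refreshed before use, and every action is
# logged with who/what/why so teams can see what the agent did.
import time

TOKENS = {("alice", "issue-tracker"): {"access": "tok-abc", "expires_at": time.time() - 1}}
AUDIT_LOG = []

def refresh_token(user: str, service: str) -> dict:
    # Placeholder for a real OAuth refresh request to the service.
    return {"access": "tok-new", "expires_at": time.time() + 3600}

def get_token(user: str, service: str) -> str:
    entry = TOKENS.get((user, service))
    if entry is None:
        raise PermissionError(f"{user} has not connected {service}")
    if entry["expires_at"] <= time.time():
        entry = refresh_token(user, service)
        TOKENS[(user, service)] = entry
    return entry["access"]

def perform_action(user: str, service: str, action: str, reason: str) -> None:
    token = get_token(user, service)  # act only with that user's authorization
    # ...call the service API with `token` here...
    AUDIT_LOG.append({"user": user, "service": service,
                      "action": action, "reason": reason, "at": time.time()})

perform_action("alice", "issue-tracker", "create_ticket",
               reason="bug report detected in standup notes")
print(AUDIT_LOG)
```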

Intent Recognition Accuracy

The AI's usefulness depends entirely on its ability to accurately recognize intent and route information appropriately. This is both a technical challenge and a user experience challenge.

Context Window Management: The AI needs sufficient context to make good routing decisions, but processing too much context becomes expensive and slow.

Edge Case Handling: The system needs graceful degradation when intent is unclear, with easy mechanisms for users to correct misrouted information.

Learning and Adaptation: The AI should improve its accuracy over time based on user corrections and organizational patterns.

Data Privacy and Security

When AI agents move information across multiple tools, they become a potential security risk if not properly designed.

Data Minimization: The AI should only access and store the minimum information necessary to perform its routing functions.

Encryption in Transit: All information flowing between tools needs to be encrypted and secured.

Compliance Requirements: Different organizations have different compliance requirements (GDPR, HIPAA, SOC 2) that affect how information can be processed and stored.

Rate Limiting and API Management

Everywhere Agents will make heavy use of external APIs, which creates both performance and cost challenges.

API Rate Limits: Most productivity tools have rate limits that could be quickly exhausted by aggressive automation.

Cost Management: API usage costs need to be predictable and manageable, especially for organizations with large teams.

Reliability: The system needs to handle API outages and service degradation gracefully.
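
Handling rate limits gracefully usually means backing off and retrying rather than failing the whole workflow. A minimal sketch, assuming the connector raises a RateLimited error when a service responds with HTTP 429 (the simulated flaky_call keeps the example runnable):

```python
# Backoff-and-retry sketch for rate-limited connectors. flaky_call() simulates
# a service that rejects the first two attempts; a real connector would raise
# RateLimited on an HTTP 429 response.
import time

class RateLimited(Exception):
    pass

attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimited("429 Too Many Requests")
    return "ok"

def with_backoff(fn, max_retries=5, base_delay=0.1):
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimited:
            delay = base_delay * (2 ** attempt)  # exponential backoff
            print(f"rate limited, retrying in {delay:.1f}s")
            time.sleep(delay)
    raise RuntimeError("gave up after repeated rate limits")

print(with_backoff(flaky_call))
```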

User Control and Override

The most critical design challenge is balancing automation with user control. Users need to trust that the AI won't take unwanted actions while still providing meaningful automation.

Confidence Thresholds: The AI should only take automatic action when it's highly confident in its intent recognition. Uncertain cases should prompt for user confirmation.

Easy Reversal: Any automated action should be easily reversible or correctable.

Learning from Corrections: When users correct or modify automated actions, the AI should learn from those corrections to improve future accuracy.
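
The confidence-threshold idea fits in a few lines: act automatically only above a high bar, ask for confirmation in the middle band, leave unclear items alone, and log corrections so future routing improves. The thresholds below are arbitrary illustrations.

```python
# Confidence-gating sketch: auto-act, confirm, or skip based on how sure the
# classifier is, and record corrections as signal for later improvement.
AUTO_ACT_THRESHOLD = 0.90
CONFIRM_THRESHOLD = 0.60
CORRECTIONS = []

def handle(category: str, confidence: float, note: str) -> str:
    if confidence >= AUTO_ACT_THRESHOLD:
        return f"auto-routed as {category}"
    if confidence >= CONFIRM_THRESHOLD:
        return f"suggested {category}; waiting for user confirmation"
    return "left in inbox; intent unclear"

def record_correction(note: str, predicted: str, actual: str) -> None:
    # Corrections become training signal (or at minimum a misroute report).
    CORRECTIONS.append({"note": note, "predicted": predicted, "actual": actual})

print(handle("bug_report", 0.95, "Checkout broken on Safari"))
print(handle("action_item", 0.72, "Circle back on pricing next week"))
record_correction("Circle back on pricing next week", "action_item", "team_update")
```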

Implementation Strategy

Building Everywhere Agents requires a thoughtful implementation strategy that balances ambitious vision with practical constraints:

Phase 1: Single-Domain Automation

Start with one workflow domain—for example, engineering team workflows around bug reports and feature requests. This allows deep integration with a small number of tools (Jira, Slack, GitHub) while proving the core concept.

Key Features:

  • Meeting transcription with automatic ticket creation

  • Slack message parsing for bug reports

  • Simple routing between conversation and task management

  • Basic two-way sync between tools

Success Metrics:

  • Reduction in time from bug report to ticket creation

  • Improvement in ticket quality and completeness

  • User satisfaction with automated routing accuracy

Phase 2: Cross-Domain Integration

Expand to multiple workflow domains with smart routing between them. A bug report might create both an engineering ticket and a customer communication task.

Key Features:

  • Advanced intent recognition across multiple content types

  • Cross-team workflow automation

  • Smart notification filtering and prioritization

  • Robust user control and override mechanisms

Success Metrics:

  • Reduction in coordination overhead between teams

  • Improvement in information flow and organizational alignment

  • Increased user trust in automated actions

Phase 3: Organizational Intelligence

Full-scale implementation with AI that understands organizational patterns, priorities, and relationships.

Key Features:

  • Predictive routing based on organizational context

  • Advanced agent chaining for complex workflows

  • Integration with business intelligence and analytics

  • Custom workflow creation and modification

Success Metrics:

  • Measurable improvement in organizational productivity

  • Reduction in information silos and communication overhead

  • AI agents that actively contribute to strategic decision-making

The Competitive Landscape

Several companies are building pieces of the Everywhere Agent vision, though none have achieved the full integration yet:

Zapier and Automation Tools: Zapier, Microsoft Power Automate, and similar tools provide the infrastructure for connecting different applications. However, they require manual setup and don't provide intelligent routing based on content analysis.

AI-Powered Productivity Tools: Tools like Notion AI, Otter.ai, and Jasper provide intelligent content processing, but they're limited to their own platforms and don't integrate deeply with external workflows.

Enterprise Integration Platforms: Companies like MuleSoft and Workato provide robust integration capabilities for large organizations, but they focus on data integration rather than intelligent workflow automation.

Emerging AI Agent Platforms: Frameworks like LangChain, AutoGPT, and other agent toolkits provide the technical foundation for building autonomous agents, but they require significant technical expertise to implement effectively.

The opportunity exists for a company that combines the workflow understanding of productivity tools, the integration capabilities of automation platforms, and the intelligence of modern AI systems into a cohesive, user-friendly product.

Why Now?

Several technological and market factors make this the right time for Everywhere Agents:

API Infrastructure Maturity

Most major productivity tools now provide robust APIs with reasonable rate limits and authentication mechanisms. Slack, Microsoft Teams, Google Workspace, Atlassian, Asana, Monday.com, and dozens of other tools provide the integration points necessary for cross-platform automation.

This wasn't true even five years ago, when many tools had limited or unreliable APIs.

Language Model Capabilities

Modern LLMs have reached the threshold of accuracy needed for reliable intent recognition and content routing. They can distinguish between different types of content, understand organizational context, and make routing decisions with confidence levels that make automation practical.

The combination of large context windows, instruction following, and structured output generation makes it possible to build AI agents that can reliably understand and act on complex workflow requirements.

Remote Work Normalization

The shift to remote and hybrid work has accelerated the adoption of digital productivity tools while also highlighting the coordination overhead of managing information across multiple platforms.

Teams that once coordinated through hallway conversations now need systematic ways to ensure information flows to the right people at the right time. Everywhere Agents provide a solution to this coordination challenge.

Cost-Effectiveness of AI Processing

The cost of language model inference has dropped dramatically, making it economically feasible to process large volumes of workplace content with AI. What would have been prohibitively expensive two years ago is now practical for everyday business use.

User Expectations

Workers have become comfortable with AI automation in other domains (email filtering, content recommendations, smart replies) and are ready for more sophisticated AI assistance in their workflows.

The expectation has shifted from "AI might be able to help" to "AI should be helping with this repetitive work."

The Future of Work

Everywhere Agents represent a fundamental shift in how we think about AI productivity tools. Instead of building better versions of existing tools, we're building AI that understands and participates in the workflows that span across all our tools.

This shift has profound implications for how work gets done:

From Tool-Centric to Workflow-Centric: Instead of optimizing individual tools, we'll optimize entire workflows. The AI becomes the connective tissue that makes our existing tools work better together.

From Individual to Organizational Productivity: Everywhere Agents create organizational intelligence that persists beyond individual knowledge and memory. The AI becomes a repository of institutional knowledge about how work flows through the organization.

From Reactive to Proactive: Instead of waiting for humans to identify and act on information, AI agents can proactively identify patterns, flag potential issues, and suggest actions based on organizational context.

From Siloed to Connected: Information flows freely between teams and tools based on relevance and need, rather than being trapped by tool boundaries or organizational hierarchies.

Getting Started Today

While the full vision of Everywhere Agents requires significant technical development, organizations can start building toward this future today:

Audit Your Tool Stack: Map out how information currently flows (or fails to flow) between your tools. Identify the highest-friction handoffs that would benefit most from automation.

Start with Simple Automation: Use existing tools like Zapier or Microsoft Power Automate to connect your most commonly used applications. Even simple automation can provide immediate value while building organizational comfort with AI-assisted workflows.

Experiment with AI Content Processing: Try using AI tools to categorize, route, or format content from meetings, emails, or documents. Build familiarity with intent recognition and content classification.

Design for Integration: When evaluating new productivity tools, prioritize those with robust APIs and integration capabilities. Avoid tools that create information silos.

Train Your Team: Help your team understand the vision of connected workflows and AI-assisted coordination. The technology is only as useful as the organizational willingness to adopt it.

Conclusion

The productivity software industry has spent the last decade building increasingly sophisticated tools for capturing, organizing, and managing information. These tools have made us better at creating content, but they haven't made us better at turning that content into action.

Everywhere Agents represent the next evolution: AI that doesn't just help us work with information, but actively moves that information through our workflows to create organizational intelligence and coordination.

This isn't about replacing human decision-making or creativity. It's about eliminating the friction and overhead that prevents our best ideas from becoming reality. It's about building AI that meets us where we work, understands what we're trying to accomplish, and handles the tedious coordination work that keeps us from focusing on what matters most.

The companies that figure this out first won't just build better productivity tools—they'll fundamentally change how work gets done. And for the rest of us, that means more time for the work that actually matters: solving problems, creating value, and building the future.

The age of AI note-taking apps is ending. The age of AI workflow partners is just beginning.

Tom Rattigan

What’s wrong with AI Products (not LLMs)

The core issue with AI products today is simple: they barely exist. What most companies are calling “AI products” are little more than wrappers around large language models. Instead of building dedicated, thoughtfully designed tools that unlock the potential of this technology, they’ve taken a shortcut—drop a chatbot into a box, slap on a label that says “What can I help you with?”, and call it a product.

This is a far cry from how software used to be made. Traditional digital products are built with intent. They have structure. They provide menus, views, workflows, and feedback loops that are tailored to what the user is trying to do. A video editing app gives you a timeline. A project management tool gives you a calendar, kanban board, and task hierarchy. A note-taking app gives you folders, tags, and version history. These interfaces exist for a reason—they reflect the underlying logic of the task at hand.

AI products, by contrast, are largely interface-less. Or more accurately, they’re interface-minimalist to a fault. They rely almost entirely on natural language input and thread-based output. For a few narrow use cases—like asking a trivia question, summarizing an article, or drafting a quick email—this can be incredibly powerful. But for more ambitious tasks, it quickly falls apart.

Let’s look at a few real examples:

  • Brainstorming: AI can help generate ideas, but all the output is dumped into a linear thread. There’s no way to cluster, group, or visually navigate your ideas. Want to compare options side by side? Too bad—you’ll be scrolling.

  • Project management: Some models can suggest tasks or timelines, but there’s no actual structure. No deadlines, dependencies, boards, or task hierarchies. You’re expected to track everything manually in one long, unstructured thread.

  • Research: LLMs are great at summarizing and analyzing content, but there’s no dedicated interface to store sources, organize findings, or highlight contradictions over time. It’s up to the user to keep things coherent across sessions.

  • Writing and editing: Drafting is easy. Refining is not. You can’t compare tone options side by side. You can’t manage multiple drafts. There’s no way to define a consistent style across sessions or versions.

  • Thread management: Long conversations become a mess. You can’t branch, tag, summarize, or version a conversation. There’s no notion of scope, only an endless scroll of loosely related messages.

It doesn’t have to be this way.

Take Notion, for example. At its core, it’s just a digital notebook. But Notion doesn’t stop there. It offers structure. Formatting options. Templates. Embeds. Plugins. Databases. Views. Page linking. In short, it gives you the tools to build with your notes, to shape information in ways that reflect how you think and work. You’re not just left with a blank page—you’re given a set of flexible building blocks. That’s what a good product does: it turns potential into power. It gives you leverage.

By comparison, most AI products hand you the equivalent of a blank notepad and expect you to remember the right incantations to make it useful. And if it doesn’t work the way you want? You’re told to keep prompting it until it does. That’s not a product—that’s a sandbox with a model inside.

Worse, these products can’t even get the basics right. I’ve asked ChatGPT over 50 times to use sentence case—that is, to capitalize the first letter of each sentence, as has been standard in written English for centuries. Despite this, the system frequently defaults to lowercase. When I reached out to OpenAI support, their advice was to start each new prompt by restating my formatting preference. Imagine opening the New York Times app and seeing every headline, subhead, and article in all lowercase. You write to support and they tell you to type “Please use proper formatting” every time you launch the app. That’s the level of absurdity users are being asked to accept.

And if that weren’t enough, AI products often test changes directly on paying users—without warning or opt-in. You might open the product and suddenly be part of an A/B test you never asked for. The interface changes. Or instead of a single response, you get three different replies with a note saying “We’re experimenting with multiple perspectives.” But you weren’t asking for multiple perspectives. You were asking for a quick answer. Now you’re stuck parsing experimental results instead of getting what you came for.

This isn’t innovation. It’s friction disguised as progress. It turns users into test subjects without consent and erodes trust in the experience.

The irony is that LLMs themselves are capable of nuance, depth, and flexibility. But most AI products aren’t. Because most of them aren’t actually products. They’re thin interfaces sitting on top of models, with almost no thought given to experience design, workflow, or structure.

Until AI companies start treating product design as a first-class priority—not a cosmetic layer—we’ll continue to have powerful technology trapped inside frustrating, inconsistent experiences. The next wave of breakthroughs won’t come from better models alone. They’ll come from better products—designed with care, built around real user needs, and capable of supporting real work.

Tom Rattigan

Crafting a Compelling Product Vision

A strong product vision serves as the North Star for any product team, providing clarity, alignment, and inspiration. It articulates the long-term goal of the product, defines its purpose, and sets the direction for development and decision-making. Without a clear product vision, teams can lose focus, and products may struggle to resonate with their intended audience.

This post delves into what a product vision is, why it’s critical, and the steps to create and maintain a vision that inspires teams and delivers value to users.

What Is a Product Vision?

A product vision is a high-level statement that describes the future state a product aims to achieve. It explains the “why” behind the product, addressing its purpose, the problem it solves, and the impact it aims to have on users and the market.

A strong product vision is:

Aspirational: It provides an inspiring picture of the future.

Customer-Centric: It focuses on the needs and desires of the target audience.

Clear and Concise: It communicates its message simply and effectively.

Long-Term: It is not tied to specific features but rather to overarching goals.

Example:

Amazon’s product vision: “To be Earth’s most customer-centric company, where customers can find and discover anything they might want to buy online.”

This vision goes beyond specific products or services, providing a guiding principle for Amazon’s entire business.

Why Is a Product Vision Important?

A well-crafted product vision offers several key benefits:

1. Alignment

The vision ensures that everyone—team members, stakeholders, and leadership—is working toward the same goal. It helps resolve conflicts and prioritize initiatives by providing a shared understanding of what success looks like.

2. Focus

With countless opportunities and potential distractions, a product vision keeps the team focused on what matters most. It acts as a filter to assess whether a new idea or feature aligns with the overarching goal.

3. Motivation

A compelling vision inspires teams by showing the meaningful impact their work can have. It connects day-to-day tasks to a larger purpose, fostering engagement and commitment.

4. Consistency

As teams grow and evolve, the product vision provides continuity. It serves as a constant reminder of the product’s purpose, ensuring decisions remain consistent over time.

The Process of Crafting a Product Vision

Creating a product vision requires collaboration, insight, and clarity. Here’s a step-by-step process:

Step 1: Understand the Problem Space

Before defining a vision, deeply understand the problem your product aims to solve. Conduct user research, market analysis, and stakeholder interviews to identify key pain points and opportunities.

Questions to Ask:

• What problem are we solving for our users?

• Why does this problem matter?

• How is this problem currently addressed, and what are the gaps?

Step 2: Define the Target Audience

Identify the primary users of your product. Understanding their needs, behaviors, and aspirations will ensure your vision is rooted in real user experiences.

Activities:

• Develop user personas to represent your audience.

• Map user journeys to understand their pain points and goals.

Step 3: Collaborate Across Teams

Involve cross-functional teams in the vision-setting process. Diverse perspectives ensure the vision is comprehensive and addresses technical, business, and user needs.

Tips for Collaboration:

• Hold brainstorming workshops to gather input.

• Use frameworks like the Vision Board to structure discussions.

• Encourage open dialogue and iterative refinement.

Step 4: Draft the Vision Statement

Translate your findings into a concise and compelling vision statement. Use language that is inspiring, clear, and customer-focused.

Formula for a Vision Statement:

For [target audience], our product [product name] will [what it will do] by [how it will solve the problem], resulting in [impact or benefit].

Example:

“For small businesses, our invoicing software will simplify financial management by automating recurring tasks, resulting in more time to focus on growth.”

Step 5: Validate the Vision

Test your vision with key stakeholders and, if possible, with users. Validation ensures the vision resonates and aligns with user needs and business goals.

Questions to Validate:

• Does this vision address a meaningful user problem?

• Is it aligned with business objectives?

• Is it clear, inspiring, and actionable?

Step 6: Communicate and Reinforce

Once finalized, share the vision widely and reinforce it regularly. Use the vision as a foundation for strategy, planning, and day-to-day decisions.

Ways to Reinforce:

• Include the vision in onboarding materials and team meetings.

• Refer to the vision when evaluating new ideas or features.

• Create visual representations (e.g., infographics) to keep the vision top of mind.

Maintaining and Evolving the Product Vision

A product vision is not static. While it should remain stable to provide continuity, it may need to evolve as the market, users, or business goals change.

How to Maintain and Adapt the Vision:

Regularly Review: Periodically revisit the vision to ensure it remains relevant.

Incorporate Feedback: Gather input from users and stakeholders to refine the vision over time.

Align with Strategy: Update the vision if the business pivots or enters new markets.

Real-World Example: Tesla’s Product Vision

Tesla’s vision—“To create the most compelling car company of the 21st century by driving the world’s transition to electric vehicles”—encapsulates its long-term goal. This vision informs everything Tesla does, from developing innovative technologies to expanding its charging network, and it inspires employees and customers alike.

Conclusion

A well-crafted product vision is essential for driving alignment, focus, motivation, and consistency. It provides a clear direction for teams, ensuring that every decision contributes to a larger goal. By following a structured process to define and maintain your vision, you can inspire your team, resonate with users, and position your product for long-term success. Whether you’re launching a new product or scaling an existing one, a compelling vision is your roadmap to achieving meaningful impact.

Tom Rattigan

Unlocking the Power of AI Prompt Engineering

AI prompt engineering is an emerging discipline at the intersection of artificial intelligence and human communication. It involves crafting effective prompts to interact with AI systems like GPT models, image generation tools, or other large language models (LLMs). As these systems become integral to various industries, understanding how to communicate with them effectively is key to maximizing their potential.

This post explores the fundamentals of AI prompt engineering, practical techniques, real-world use cases, and best practices to enhance the performance and utility of AI systems.

What Is AI Prompt Engineering?

AI prompt engineering is the process of designing input queries or instructions (prompts) to elicit desired outputs from an AI model. Since models like GPT are trained on vast datasets and can generate diverse outputs, the quality and specificity of the prompt play a crucial role in determining the usefulness of the response.

Effective prompt engineering involves:

Precision: Clearly specifying the task or request.

Context: Providing relevant background or framing to guide the model.

Creativity: Experimenting with phrasing or format to refine outputs.

In essence, the prompt acts as a bridge between human intent and machine interpretation.

Why Is Prompt Engineering Important?

Prompt engineering has become a critical skill for leveraging AI effectively across a variety of domains. Its importance lies in its ability to:

Enhance Accuracy: Well-crafted prompts lead to more relevant and accurate responses.

Improve Efficiency: Precise instructions reduce the need for multiple iterations.

Unlock AI Potential: Thoughtful prompts can uncover advanced capabilities within AI systems, such as creative writing, coding, or generating complex data visualizations.

Core Techniques in Prompt Engineering

Prompt engineering is both an art and a science. Below are key techniques to craft effective prompts:

1. Be Specific and Clear

Clearly define the task and desired output. Vague prompts often lead to ambiguous or irrelevant responses.

Example

Ineffective Prompt: “Write about technology.”

Effective Prompt: “Write a 300-word article about the impact of 5G technology on telemedicine.”

2. Provide Context

Context helps the model understand the scope and nuances of the task. Include background information or define the audience.

Example

With Context: “You are a marketing expert. Write an email to customers announcing a new product launch for an eco-friendly water bottle.”

3. Use Role Assignments

Assigning a role or persona to the AI can influence the tone and style of the response.

Example

“Act as a financial advisor and explain the benefits of diversifying investments to a beginner.”

4. Experiment with Prompt Formatting

Different formats, such as bullet points, questions, or examples, can guide the model effectively.

Example

Bullet Format:

• “List three benefits of electric vehicles.

• Provide examples of leading manufacturers.

• Discuss potential drawbacks.”

5. Incorporate Examples

Providing examples can help the AI better understand the desired structure or style of the output.

Example

“Generate a social media post promoting a sale. Example: ‘Summer Savings! Get 20% off all sunglasses this weekend only.’ Now create a similar post for winter jackets.”

6. Iterate and Refine

Prompt engineering often requires multiple iterations. Adjust prompts based on the initial outputs to achieve the desired result.
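
The techniques above compose naturally: a role, context, format constraints, and an example can be assembled into a single prompt string. A minimal sketch (the template wording and fields are only examples, not a standard):

```python
# Prompt-assembly sketch combining role, context, output format, and a
# one-shot example, echoing the techniques described above.
def build_prompt(role, context, task, output_format, example=None):
    parts = [
        f"You are {role}.",
        f"Context: {context}",
        f"Task: {task}",
        f"Format your answer as: {output_format}",
    ]
    if example:
        parts.append(f"Example of the desired style: {example}")
    return "\n".join(parts)

prompt = build_prompt(
    role="a marketing expert",
    context="We are launching an eco-friendly water bottle next month.",
    task="Write a short announcement email to existing customers.",
    output_format="a subject line followed by three short paragraphs",
    example="Summer Savings! Get 20% off all sunglasses this weekend only.",
)
print(prompt)
```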

Real-World Applications of AI Prompt Engineering

Prompt engineering is being used across industries to unlock the potential of AI tools. Here are some notable applications:

1. Content Creation

AI tools like GPT are being used to generate blog posts, social media content, and marketing copy. Effective prompt engineering ensures that the content is tailored to the target audience and aligns with brand voice.

2. Customer Support Automation

Well-designed prompts enable chatbots to handle customer inquiries effectively. For instance, prompts can be engineered to provide concise, helpful, and empathetic responses.

3. Data Analysis and Insights

Prompt engineering is used to extract insights from large datasets by querying AI models in a structured way. Analysts can refine prompts to generate detailed reports or predictions.

4. Programming Assistance

Developers use AI tools to write code snippets, debug errors, or explain complex algorithms. Precise prompts ensure that the AI provides accurate and efficient coding solutions.

5. Education and Training

AI-powered tutoring systems rely on prompt engineering to provide personalized learning experiences. For example, prompts can be designed to deliver adaptive quizzes or explain concepts at varying levels of complexity.

6. Creative Industries

Artists, writers, and designers use AI to brainstorm ideas, generate storylines, or create visual assets. Prompt engineering allows for greater control over the creative process.

Challenges in Prompt Engineering

While prompt engineering offers significant advantages, it also presents challenges:

Ambiguity: Poorly crafted prompts can lead to irrelevant or nonsensical outputs.

Bias: AI models may reflect biases present in training data, which can affect outputs.

Complexity: Creating highly specific prompts for complex tasks may require advanced domain knowledge.

Best Practices for AI Prompt Engineering

To overcome challenges and maximize effectiveness, follow these best practices:

Start Simple: Begin with a straightforward prompt and build complexity iteratively.

Test and Iterate: Experiment with variations to refine results.

Keep Prompts Concise: Avoid unnecessary details that may confuse the model.

Leverage System Instructions: Use initial instructions to set the tone or behavior of the AI system.

Anticipate Edge Cases: Test prompts for unexpected outputs and refine as needed.

The Future of AI Prompt Engineering

As AI systems continue to evolve, prompt engineering will play an increasingly important role in maximizing their potential. Future advancements may include:

Dynamic Prompting: AI systems that adapt prompts based on user interactions.

Multi-Turn Conversations: Prompts that build on context over extended interactions.

AI-Generated Prompts: Tools that assist users in creating optimized prompts for specific tasks.

Conclusion

AI prompt engineering is a powerful tool that enables users to communicate effectively with AI systems, unlocking their full potential across diverse applications. By mastering prompt design techniques, iterating on inputs, and adhering to best practices, professionals can harness AI to drive innovation, efficiency, and creativity. As AI technology advances, prompt engineering will remain a critical skill for navigating the future of human-AI collaboration.

Tom Rattigan

The Permission System That Made Everyone an Admin (By Accident)

The Quest for Perfect Permissions

Our client ran a complex digital publication with a labyrinthine editorial hierarchy. Managing editors, section editors, staff writers, freelance contributors, fact-checkers, copy editors, social media managers, and interns—all with different access needs and responsibilities.

Twill's built-in user roles felt too basic for their operation. "We need granular control," the editorial director explained. "Sarah should be able to edit tech articles but only publish lifestyle pieces. Mark can manage media but shouldn't access user data. Interns should see drafts but not financial reports."

I was excited by the challenge. Instead of Twill's standard role-based system, we'd build something sophisticated—a fine-grained permission matrix that could handle any conceivable combination of access needs.

The system I designed was genuinely elegant:

  • Resource-based permissions: Separate controls for articles, media, users, settings, analytics

  • Action-based granularity: Create, read, update, delete, publish, feature, archive permissions for each resource

  • Conditional permissions: Time-based access, content-category restrictions, approval workflows

  • Permission inheritance: Role hierarchies with override capabilities

  • Dynamic rule engine: Custom permission logic based on user attributes, content metadata, and editorial calendar

During the demo, I walked through dozens of permission scenarios. "Watch this," I said proudly, "Sarah gets 'tech-writer' role, which grants her edit access to technology category posts, but only during business hours, and only for posts she created or that are assigned to her team."

The client was impressed. "This is exactly what we needed. Finally, a permission system that matches how we actually work."

The Configuration Nightmare

Three months after launch, I got a panicked call from the editorial director.

"I think we have a security problem," she said. "Jenny, our new social media intern, somehow published a draft article that wasn't supposed to go live until next week. And she was able to delete media files that she definitely shouldn't have access to."

I logged into the admin panel to investigate. What I found was a permission configuration that looked like someone had thrown a handful of checkboxes at a wall and deployed whatever stuck.

Jenny's user profile showed:

  • Base role: "Social Media Intern"

  • Override permissions: "Can edit featured content" (granted because she needed to update social previews)

  • Inherited permissions: "Content Manager" (assigned accidentally during a bulk user import)

  • Time-based access: "Publishing rights during social media posting hours" (which happened to be 9 AM - 11 PM)

  • Category overrides: "Can manage lifestyle content" (because she helped with Instagram posts)

The permission system had resolved this complex rule set into what amounted to full admin access.

When Flexibility Meets Reality

The more I investigated, the more horrifying the situation became. Our beautiful, granular permission system had become a permission explosion.

We had 47 different permission types across 12 resource categories with 8 action levels each. A single user could have permissions granted through:

  • Their base role assignment

  • Direct permission grants

  • Group membership inheritance

  • Time-based conditional access

  • Category-specific overrides

  • Workflow-based temporary elevations

  • Emergency access provisions

Nobody—including me—could look at a user's permission profile and quickly understand what they could actually do.

The Sarah Problem: Remember Sarah, our tech writer with carefully crafted conditional access? Her permission profile had evolved over six months of "quick fixes" and "temporary grants" into an incomprehensible matrix. She now had:

  • Tech article editing (original requirement)

  • Lifestyle article publishing (added for a special project)

  • Media library full access (granted during a photo shoot)

  • User management rights (accidentally inherited from a group)

  • Analytics dashboard access (added for a reporting deadline)

  • Emergency publishing override (granted during a weekend crisis)

Sarah had become a de facto admin without anyone realizing it, including Sarah herself.

The Escalation Cascade

The real security nightmare wasn't what people could do—it was what they didn't know they could do.

During a routine content audit, we discovered:

  • The intern who could delete entire article categories (through a combination of media access + workflow overrides + inherited group permissions)

  • The freelance writer who had access to subscriber analytics (time-based access grant that never expired + category permission overlap)

  • The copy editor who could create new user accounts (inherited from a temporary "content manager" group assignment during a staff shortage)

None of these users knew they had these permissions. They were all following their understood role boundaries, but our system had quietly granted them much broader access through the complex interaction of permission rules.

The Permission Debugging Hell

When the editorial director asked me to audit everyone's access, I realized we'd created a permission system that was fundamentally unauditable.

To understand what any single user could actually do, I had to:

  1. Check their base role permissions

  2. Review all direct permission grants

  3. Evaluate group membership inheritance chains

  4. Calculate time-based conditional access windows

  5. Process category-specific overrides and exceptions

  6. Factor in workflow-based temporary elevations

  7. Account for emergency access provisions

  8. Resolve conflicts between competing permission rules

For a single user, this process took me nearly an hour. We had 23 active users.

Worse, the permission resolution logic was so complex that I'd introduced several bugs in the rule evaluation engine. There were edge cases where permissions were granted that shouldn't exist, and other cases where legitimate access was accidentally blocked.

The Friday Afternoon Bug: We discovered that users who joined the system on Fridays inherited different permissions than users who joined on other days, due to a timezone calculation error in the conditional access logic. This had been happening for months without anyone noticing.

The Security Theater Revelation

The most embarrassing realization was that our "secure" system was actually less secure than Twill's simple role-based approach.

With basic roles, security is obvious:

  • Admins can do admin things

  • Editors can edit and publish

  • Writers can write and suggest

  • Viewers can view

Everyone understands their boundaries. Violations are immediately obvious. Auditing access takes minutes, not hours.

Our granular system had created security through obscurity—not obscurity from attackers, but obscurity from ourselves. We couldn't quickly verify who had access to what, which meant we couldn't quickly identify when something was wrong.

The Breaking Point

The system collapsed during a routine staff reorganization.

When the editorial director tried to move three writers from the "Lifestyle" team to the "Technology" team, she had to navigate through dozens of permission changes across multiple categories. The process that should have taken five minutes required two hours and three different admin users to complete.

But the real disaster was what we missed: the permission changes accidentally granted one of the moved writers access to unpublished financial reports. We didn't discover this until a confidential revenue document showed up in the preview queue for social media posting.

The editorial director's feedback was swift and clear: "This system is too complex to manage safely. We need something we can understand."

The Humbling Rebuild

We redesigned around role simplicity:

  • Four primary roles: Admin, Editor, Writer, Contributor

  • Clear permission boundaries with no inheritance overlap

  • Temporary access grants that expired automatically after 48 hours

  • Single-purpose overrides that could only add specific, named permissions

  • Permission audit trail showing exactly what changed and when

  • Plain English permission summaries for each user

The new system handled 95% of their use cases with roles alone. The remaining 5% were handled through temporary, explicit permission grants that automatically expired.
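
For contrast with the rule engine we tore out, the core of the rebuilt approach fits on a page: fixed role permissions, explicit named grants that expire on their own, and a check anyone can read. This is a simplified sketch of the idea, not the production code.

```python
# Simplified permission sketch: four fixed roles, explicit temporary grants
# that expire automatically, and a single human-readable check function.
from datetime import datetime, timedelta, timezone

ROLE_PERMISSIONS = {
    "admin":       {"read", "write", "publish", "manage_users", "manage_media"},
    "editor":      {"read", "write", "publish"},
    "writer":      {"read", "write"},
    "contributor": {"read"},
}

# (user, permission) -> expiry; a grant adds one named permission, nothing more.
TEMP_GRANTS = {}

def grant_temporary(user: str, permission: str, hours: int = 48) -> None:
    TEMP_GRANTS[(user, permission)] = datetime.now(timezone.utc) + timedelta(hours=hours)

def can(user: str, role: str, permission: str) -> bool:
    if permission in ROLE_PERMISSIONS[role]:
        return True
    expiry = TEMP_GRANTS.get((user, permission))
    return expiry is not None and expiry > datetime.now(timezone.utc)

def summary(user: str, role: str) -> str:
    extras = [p for (u, p), exp in TEMP_GRANTS.items()
              if u == user and exp > datetime.now(timezone.utc)]
    base = ", ".join(sorted(ROLE_PERMISSIONS[role]))
    return f"{user} is a {role} ({base})" + (f" plus temporary: {extras}" if extras else "")

grant_temporary("sarah", "publish")       # weekend crisis; expires by itself
print(can("sarah", "writer", "publish"))  # True, for the next 48 hours
print(summary("sarah", "writer"))
```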

What We Actually Learned

1. Complexity Is a Security Vulnerability

The more complex your permission system, the more likely you are to accidentally grant access you didn't intend. Simple systems have obvious security boundaries.

2. Auditability Is a Feature

If you can't quickly verify who has access to what, your permission system is broken. Security you can't audit isn't security.

3. Edge Cases Aren't Worth the Edge

Handling 100% of permission edge cases isn't worth making the 95% common cases incomprehensible. Sometimes "close enough" is better than "perfectly granular."

4. Users Need to Understand Their Own Permissions

If users don't know what they're allowed to do, they'll either do too little (out of caution) or too much (out of ignorance). Both are problems.

The Real Permission System

The best permission system isn't the one that can handle every possible scenario. It's the one that makes the right access obvious and the wrong access impossible.

Our final approach wasn't technically impressive, but it was human-comprehensible. Every user could explain their own permissions in a single sentence. Every admin could audit access in minutes, not hours.

Sometimes the most secure system is the one that trades flexibility for clarity.

Have you built permission systems that became too clever for their own good? Security through complexity is still security through obscurity—share your stories of permission systems that became permission problems.

Tom Rattigan

Finding a problem to solve

Identifying and solving the right problem is at the heart of successful digital product management. Products thrive not because they exist but because they address a specific need or challenge for their target audience. Understanding where to find problems, how to evaluate them, and who they impact is essential for developing meaningful solutions.

This post explores sources for identifying problems, techniques for evaluating them, and the role of personas in ensuring that the chosen problem aligns with user needs and business goals.

Where to Find Problems to Solve

Problems can emerge from a variety of sources. By actively engaging with these channels, product managers can uncover opportunities to create value.

1. Customers

Existing customers are a goldmine of insights. They have firsthand experience with your product or a similar solution and can articulate pain points, unmet needs, or desired improvements.

How to Engage

• Conduct interviews or surveys to gather qualitative insights.

• Analyze support tickets or feedback forms for recurring themes.

2. Prospects

Potential customers offer a fresh perspective, often highlighting barriers to adoption or gaps in the current market. Their input can inform improvements that attract new users.

How to Engage

• Participate in sales calls to understand objections or hesitations.

• Gather input during free trials or demo sessions.

3. Internal Teams

Teams that regularly interact with customers—such as customer support, sales, or account management—are valuable sources of problem identification. They hear feedback directly and often recognize patterns.

How to Engage

• Hold regular syncs with internal teams to gather insights.

• Use customer journey maps to link internal observations with customer pain points.

4. Competitors

Analyzing competitors can uncover opportunities to differentiate your product. Competitor reviews, features, and customer complaints can highlight problems worth addressing.

How to Engage

• Perform SWOT (Strengths, Weaknesses, Opportunities, Threats) analyses on competitors.

• Monitor competitor forums, social media, or review sites.

5. Analysts

Industry analysts provide macro-level insights into market trends, emerging technologies, and customer expectations, helping product managers identify high-level opportunities.

How to Engage

• Review industry reports or participate in analyst briefings.

• Use insights to anticipate future trends or validate existing ideas.

Selecting a Problem to Solve

Not every problem is worth solving, and evaluating which ones to pursue requires a systematic approach. A problem scorecard can help prioritize issues based on their potential impact and feasibility.

Criteria for Evaluating Problems

When using a scorecard, consider the following questions:

Severity: How significant is this problem for the target audience?

Frequency: How often does the problem occur?

Impact: What is the potential business or customer impact of solving this problem?

Feasibility: Do we have the resources, technical ability, and time to solve it?

Alignment: Does solving this problem align with the company’s goals and strategy?
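
If you want to make the scorecard mechanical, a minimal sketch like the one below works: rate each problem 1–5 on the five criteria above and combine the ratings into a weighted score. The weights, ratings, and problem names are illustrative assumptions, not a standard.

```python
# Hypothetical problem scorecard: each criterion is rated 1 (low) to 5 (high),
# then combined into a weighted score so candidate problems can be ranked consistently.
CRITERIA_WEIGHTS = {
    "severity": 0.25,
    "frequency": 0.20,
    "impact": 0.25,
    "feasibility": 0.15,
    "alignment": 0.15,
}

def score_problem(ratings: dict) -> float:
    """Weighted average of the five criteria, on the same 1-5 scale."""
    return round(sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS), 2)

candidates = {
    "Slow onboarding flow": {"severity": 4, "frequency": 5, "impact": 4, "feasibility": 3, "alignment": 5},
    "Missing dark mode":    {"severity": 2, "frequency": 3, "impact": 2, "feasibility": 5, "alignment": 2},
}

for name, ratings in sorted(candidates.items(), key=lambda kv: score_problem(kv[1]), reverse=True):
    print(f"{name}: {score_problem(ratings)}")
```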

Understanding Personas: Who Has This Problem?

Personas are composite profiles of the individuals impacted by the problem you’re solving. They bring target markets and segmentation data to life, keeping the customer at the center of the development process.

Why Personas Matter

Personas help product managers:

• Keep the focus on real customer needs.

• Provide context by illustrating behaviors, motivations, and pain points.

• Align teams around a shared understanding of the target audience.

Types of Personas

Users: Directly interact with the product.

Financial Decision Makers: Influence purchasing decisions based on cost and ROI.

Technical Decision Makers: Evaluate products for technical compatibility and feasibility.

Influencers: Indirectly shape decisions by offering opinions or recommendations.

Digital product managers tend to focus on user personas, while marketing and sales often prioritize decision-makers and influencers.

Proto-Personas

A proto-persona is a starting point for understanding potential users. It is unvalidated and based on assumptions rather than hard data. Over time, proto-personas can evolve into fully developed personas through qualitative and quantitative research.

Example Proto-Persona

Name: Tech-Savvy Tina

Role: Mid-level IT Manager

Needs: Simplified tools for managing teams remotely.

Pain Points: Current software lacks intuitive interfaces and robust reporting features.

Bringing It All Together: The Problem-Persona Fit

Identifying and selecting a problem isn’t just about feasibility or business impact—it’s also about ensuring that the problem resonates with a specific persona. This connection ensures that your solution will be meaningful, valuable, and likely to succeed.

Steps to Align Problems with Personas

1. Define the problem clearly, using user language.

2. Map the problem to specific personas, outlining how it impacts their daily activities.

3. Validate the problem and personas through user interviews, surveys, and real-world data.

4. Adjust the scope of the problem based on persona feedback.

Conclusion

Finding the right problem to solve is the cornerstone of effective digital product management. By leveraging diverse sources of insights, evaluating problems systematically, and keeping the customer at the center through personas, product managers can ensure their solutions deliver meaningful value. While not every problem is worth solving, the ones that align with user needs, business goals, and market opportunities can set the stage for a successful product.

Tom Rattigan

Will AI Take Over Product Management Jobs? Not Entirely—At Least Not Yet

As artificial intelligence continues to reshape industries, many are left wondering: will AI replace product managers? While AI is poised to take over many aspects of product management—potentially up to 90%—there are crucial areas that will still require human oversight for the foreseeable future. Until we achieve true artificial general intelligence (AGI), product managers will play a vital role in facilitating processes, driving strategy, and ensuring alignment across teams.

This article explores how AI will impact product management roles, the tasks it can automate, and why human product managers remain indispensable.

What Makes Product Management Unique?

Product management encompasses the end-to-end lifecycle of a product: conception, planning, development, testing, launch, delivery, and retirement. It involves both strategic and tactical responsibilities, divided into upstream and downstream functions. Upstream tasks include defining roadmaps, aligning product concepts with company vision, and driving innovation. Downstream tasks involve managing the product lifecycle post-launch, focusing on marketing, sales, and lifecycle management.

Effective product management prevents guesswork, ensuring companies create products that meet customer needs, align with business goals, and drive profitability. It requires navigating both internal environments (tools and processes) and external demands (customer-facing products). This broad scope of responsibilities is one reason AI will struggle to replace product managers entirely.

How AI Will Transform Product Management

AI is already transforming product management by automating many routine tasks, analyzing vast datasets, and providing actionable insights. Here’s how AI is likely to impact the field:

1. Market Analysis and Research

AI excels at analyzing large volumes of data, identifying trends, and synthesizing customer feedback. Tasks like competitive analysis, market segmentation, and user behavior tracking can be performed faster and more accurately with AI tools.

AI in Action: Tools like Tableau, Google Analytics, and customer sentiment analysis platforms help product managers identify market opportunities with greater precision.

2. Roadmap Prioritization

AI-powered tools can analyze data to recommend which features to prioritize based on factors like customer demand, projected ROI, and technical feasibility.

AI in Action: Predictive analytics can simulate outcomes for different product features, helping product managers make data-driven decisions.

3. Internal Process Optimization

AI can streamline internal processes by managing workflows, automating repetitive tasks, and improving cross-functional communication.

AI in Action: Tools like Jira and Asana already incorporate AI to enhance productivity, predict bottlenecks, and automate task assignments.

4. Customer Insights and Personalization

AI can analyze customer data to identify pain points, predict needs, and personalize user experiences. This is especially valuable for external product management focused on customer-facing tools and services.

AI in Action: Chatbots, recommendation engines, and machine learning models are driving hyper-personalized customer experiences.

5. Testing and Quality Assurance

AI can automate testing, identifying bugs or inefficiencies in product functionality during the development phase.

AI in Action: Automated testing frameworks and AI-driven bug tracking are reducing time-to-market for digital products.

The 10% AI Can’t (Yet) Replace

While AI can handle many of the analytical and repetitive aspects of product management, it lacks the ability to replicate certain uniquely human skills and responsibilities:

Strategic Vision

AI can analyze data and provide insights, but it cannot define a long-term product vision that aligns with a company’s mission and values. This requires creativity, intuition, and a deep understanding of market dynamics.

Empathy and Customer Connection

Understanding the emotional and psychological needs of customers is a distinctly human skill. AI can process customer feedback, but it cannot fully grasp nuanced human emotions or motivations.

Cross-Functional Leadership

Product managers must build relationships across departments, mediate conflicts, and inspire teams. AI tools can facilitate communication but cannot replace the human touch in leadership.

Ethical Decision-Making

AI lacks the moral reasoning to address ethical considerations, such as data privacy, inclusivity, and sustainability. Product managers ensure that AI-driven solutions align with societal and organizational values.

Managing Trade-Offs

Balancing competing priorities, such as cost, time, and quality, involves complex judgment calls that require human intuition and experience.

Innovation and Creativity

While AI can optimize existing processes and suggest improvements, it struggles with generating truly novel ideas or reimagining what’s possible in a market.

Why Product Managers Will Still Be Needed

For the foreseeable future, product managers will remain critical to the product development process. Here’s why:

AI as a Tool, Not a Replacement

AI is a powerful tool that enhances a product manager’s capabilities but does not eliminate the need for human oversight. Product managers guide AI systems by setting goals, interpreting results, and ensuring outputs align with broader strategies.

Complexity of Product Management

The diversity of product management responsibilities—from upstream strategy to downstream execution—requires a level of adaptability and contextual understanding that AI has yet to achieve.

Collaboration and Facilitation

Product management thrives on collaboration. Product managers bridge the gap between engineering, design, marketing, and leadership teams. They facilitate communication, align priorities, and drive progress in ways that AI cannot replicate.

What the Future Holds for Product Managers

As AI continues to evolve, the role of product managers will shift. They will increasingly focus on higher-level strategic tasks while relying on AI to handle data-heavy or routine responsibilities. The skill set required for product managers will also evolve, emphasizing:

AI Literacy: Understanding AI concepts and tools to leverage their potential effectively.

Strategic Thinking: Crafting visions that go beyond what AI can predict or optimize.

Human-Centric Leadership: Building and motivating teams in increasingly automated environments.

Conclusion

AI will undoubtedly reshape the field of product management, automating many aspects and enabling data-driven decision-making at unprecedented scales. However, product managers will remain essential for tasks that require creativity, empathy, ethical judgment, and strategic vision. Until we achieve AGI—an era where machines possess human-level reasoning—product management will continue to rely on humans to guide the process, align teams, and make decisions that machines simply cannot. The future of product management is not a choice between humans and AI but a partnership where each amplifies the strengths of the other.

Tom Rattigan

The Role of AI in Digital Product Management

Artificial intelligence (AI) is transforming industries and becoming an essential skill set in product management. With AI-powered innovations driving advancements in fields like autonomous vehicles, computer vision, and natural language processing, AI product managers are emerging as pivotal leaders in the development of groundbreaking products. They navigate the intersection of AI’s technical potential and business strategy, ensuring that AI-driven solutions meet customer needs and drive measurable value for organizations.

This post explores the critical role of AI in digital product management, the unique challenges and responsibilities of AI product managers, and the key skills required to succeed in this dynamic field.

What Is an AI Product Manager?

An AI product manager is a product leader who oversees the end-to-end development of AI-driven products or features. They possess a blend of technical expertise, business acumen, and strong communication skills. Their role involves forming and managing cross-functional teams, defining product vision, prioritizing features, and ensuring successful product delivery while aligning with organizational goals.

The Role and Responsibilities of AI Product Managers

AI product managers play a multifaceted role that bridges the gap between technical teams and business stakeholders. Their responsibilities include:

Defining Product Vision and Strategy

AI product managers craft a product vision that aligns with the company’s mission and strategic initiatives. They identify opportunities where AI can solve customer problems, enhance experiences, or streamline operations. A clear vision provides direction and ensures alignment among diverse stakeholders.

Managing Cross-Functional Teams

AI product development requires collaboration between data scientists, engineers, designers, and business analysts. AI product managers build and lead these teams, ensuring that roles, responsibilities, tasks, and milestones are clearly defined. They foster an environment of collaboration and accountability to drive progress.

Assessing Risks and Mitigating Challenges

AI projects often involve complex technical challenges, ethical considerations, and data dependencies. AI product managers assess risks, develop mitigation plans, and ensure projects stay on track. Balancing trade-offs between feasibility, cost, and user impact is a key part of this process.

Driving Customer-Centric Development

Empathy and customer focus are critical in AI product management. AI product managers use personas, user research, and feedback loops to ensure that AI solutions address real-world problems and deliver tangible value. They prioritize features based on user needs, balancing innovation with practicality.

Communicating Across Technical and Non-Technical Stakeholders

AI product managers act as translators, effectively communicating complex AI concepts to non-technical stakeholders while ensuring technical teams understand business goals. This skill is essential for aligning efforts and securing buy-in from executives, investors, and customers.

Delivering Minimum Viable Products (MVPs)

AI product managers focus on delivering MVPs that validate AI’s potential to solve a problem or create value. They avoid over-promising by setting realistic expectations and iterating based on user feedback and market conditions.

Key Skills for AI Product Managers

To excel in AI product management, professionals must possess a combination of technical, managerial, and strategic skills.

Technical Understanding

AI product managers need a foundational understanding of AI systems, including machine learning, neural networks, and natural language processing. This knowledge enables them to collaborate effectively with technical teams and make informed decisions.

Data Analysis Skills

Proficiency in analyzing data is critical for interpreting user feedback, identifying patterns, and validating AI models. AI product managers use data-driven insights to guide product development and feature prioritization.

Leadership and Team Management

AI product managers build diverse, cross-functional teams and foster collaboration. They must balance competing priorities, mediate conflicts, and motivate teams to deliver results.

Communication Skills

The ability to distill complex AI concepts into clear, actionable information is essential. AI product managers must communicate effectively with engineers, executives, and customers to ensure alignment.

Strategic Alignment

AI product managers ensure that AI solutions align with the company’s goals and values. They evaluate the financial feasibility of AI projects and measure return on investment (ROI) to justify efforts.

Curiosity and Adaptability

AI product managers need a genuine curiosity about emerging technologies and the ability to adapt to rapidly changing landscapes. Staying ahead of industry trends and learning new tools are key to maintaining a competitive edge.

The AI Product Management Process

The process of managing AI-driven products involves several stages, each requiring careful planning and execution.

Opportunity Identification

AI product managers begin by identifying opportunities where AI can solve significant problems or create new value. This involves market research, competitive analysis, and user feedback.

Concept Development

They define the product concept and vision, outlining how AI will be integrated to meet user needs. Personas are used to ensure the solution addresses specific customer segments.

Prototyping and Testing

AI product managers guide teams in developing prototypes and MVPs. Iterative testing helps validate the model’s effectiveness and refine the product.

Scaling and Launch

Once validated, AI solutions are scaled and prepared for market launch. AI product managers ensure that the product meets quality standards and delivers on its promises.

Post-Launch Monitoring

AI product managers monitor performance metrics, user feedback, and market trends to optimize the product over time. They address any issues that arise and identify opportunities for enhancements.

The Importance of AI in Digital Product Management

AI has the power to revolutionize industries and transform how products are developed, marketed, and consumed. In digital product management, AI enables personalization, efficiency, and innovation. Personalization tailors experiences to individual users, increasing engagement and satisfaction. Efficiency automates repetitive tasks to improve productivity and reduce costs. Innovation unlocks new possibilities in fields like healthcare, finance, and entertainment. AI product managers play a vital role in realizing this potential, ensuring that AI solutions are both impactful and responsible.

Conclusion

The role of AI in digital product management is expanding rapidly, and AI product managers are at the forefront of this transformation. By blending technical expertise with customer focus, leadership, and strategic alignment, they drive the development of AI-driven products that create real-world value. As AI continues to evolve, the demand for skilled AI product managers will only grow, making this an exciting and dynamic career path for those ready to lead the charge.

Tom Rattigan

The Build-Measure-Learn Loop

In the ever-evolving world of product development, uncertainty is the norm, and the stakes are high. The Build-Measure-Learn (BML) loop provides a framework for iterative validation that reduces risks, fosters innovation, and ensures better outcomes. Rooted in lean methodologies, this approach enables product managers, designers, and developers to test hypotheses, validate ideas, and make data-driven decisions.

Let’s delve into how the BML loop works, its benefits, and practical tips for applying it to achieve continuous improvement in your product development process.

What Is the Build-Measure-Learn Loop?

The Build-Measure-Learn loop is a cyclical framework that guides teams through iterative experimentation and validation. At its core, the loop consists of three stages:

1. Build: Create a minimum viable product (MVP) or prototype to test a hypothesis.

2. Measure: Collect data and feedback from users to evaluate the hypothesis.

3. Learn: Analyze the data to determine whether the hypothesis is validated, then refine or pivot the product direction.

This iterative cycle encourages quick experimentation, minimizing the time and resources spent on untested assumptions.

How It Works: Breaking Down the Cycle

1. Build

Purpose: To test a hypothesis with the smallest viable effort.

Activities: Develop an MVP, storyboard, or pretotype that encapsulates your hypothesis.

Key Consideration: Focus on delivering just enough functionality to validate the assumption.

For example, if you hypothesize that users need a faster way to schedule meetings, you might build a simple calendar integration tool without all the bells and whistles of a full-featured scheduling app.

2. Measure

Purpose: To gather quantifiable and qualitative data on how users interact with the MVP.

Activities: Use analytics tools to track user engagement, conduct surveys, and collect feedback.

Key Metrics: Look for indicators like user engagement (depth, breadth, and frequency), task completion rates, or conversion funnels.

Example tools: Mixpanel, Google Analytics, or in-app surveys.

3. Learn

Purpose: To assess the results and decide the next steps.

Activities: Analyze the data to confirm or reject the hypothesis, identify insights, and determine what to build next.

Key Actions: Iterate on the design, pivot the approach, or double down on a validated idea.

If your scheduling tool shows high engagement but feedback indicates that users want integration with other platforms, your next cycle might focus on building those integrations.

Benefits of the Build-Measure-Learn Loop

Risk Mitigation

By testing hypotheses early, you avoid wasting resources on features or products that don’t meet user needs. Each cycle reduces uncertainty, allowing for informed decision-making.

Faster Time-to-Market

The iterative nature of the BML loop prioritizes rapid experimentation, enabling teams to release usable features sooner while refining them based on feedback.

Customer-Centric Innovation

By continuously gathering user insights, the BML loop ensures that your product evolves in alignment with customer needs, fostering greater adoption and satisfaction.

Data-Driven Decision Making

The measure phase provides actionable insights, turning subjective opinions into objective, evidence-based decisions.

Practical Tips for Applying the BML Loop

1. Start with a Clear Hypothesis

Clearly define what you want to test and how success will be measured. For example: “We believe that adding a one-click scheduling feature will increase meeting setups by 20% within two weeks.”
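
A quick sketch of how that hypothesis gets checked at the end of the measure phase (the counts below are made up for illustration):

```python
# Hypothesis: one-click scheduling will lift meeting setups by 20% within two weeks.
BASELINE_SETUPS = 450   # meetings set up in the two weeks before the change (illustrative)
OBSERVED_SETUPS = 561   # meetings set up in the two weeks after launch (illustrative)
TARGET_LIFT = 0.20

lift = (OBSERVED_SETUPS - BASELINE_SETUPS) / BASELINE_SETUPS
print(f"Observed lift: {lift:.1%}")  # 24.7%
print("Hypothesis validated" if lift >= TARGET_LIFT else "Hypothesis not validated")
```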

2. Keep MVPs Truly Minimal

Resist the temptation to overbuild. Your MVP should focus on the core functionality needed to validate the hypothesis—no more, no less.

3. Emphasize Collaboration

Foster alignment among cross-functional teams, including product managers, designers, engineers, and data analysts, to ensure a shared understanding of goals and outcomes.

4. Use the Right Tools

Leverage analytics platforms, survey tools, and A/B testing frameworks to collect actionable data. Combine quantitative metrics with qualitative insights for a holistic view.

5. Iterate Quickly

Shorten cycle times to maximize learning. Aim for rapid build-measure-learn cycles, even if it means starting with low-fidelity solutions.

6. Embrace Failure

Not every hypothesis will succeed, and that’s okay. Treat failures as opportunities to learn and improve your product.

Real-World Example: Dropbox’s Early MVP

When Dropbox began, its team used the BML loop to validate demand before building the full product. Instead of developing a complex file-sharing platform, they created a simple explainer video showing how Dropbox would work. This pretotype helped them measure interest and gather feedback from users without writing any code. The overwhelmingly positive response validated their hypothesis and guided the development of their platform.

Conclusion

The Build-Measure-Learn loop is a cornerstone of effective product management, offering a structured yet flexible approach to experimentation and learning. By embracing this iterative framework, you can reduce risks, speed up delivery, and create products that truly resonate with users. Start small, iterate quickly, and let the BML loop guide your path to continuous product improvement.

Tom Rattigan

Agile in Action: Iterative Development for Smarter Decisions

Agile has revolutionized how teams approach product development, enabling flexibility, collaboration, and rapid iteration. In a world of constant change, Agile principles empower teams to adapt to evolving user needs, market conditions, and technical challenges. Unlike traditional waterfall methodologies that require rigid adherence to predefined plans, Agile emphasizes iterative progress, continuous feedback, and smarter decision-making.

This post explores Agile principles, practical insights for applying them, and how iterative development can lead to more effective and adaptive product management.

What Is Agile Development?

Agile is a mindset and framework for managing projects that prioritizes flexibility, collaboration, and incremental delivery of value. It is guided by the Agile Manifesto, which emphasizes:

• Individuals and interactions over processes and tools.

• Working software over comprehensive documentation.

• Customer collaboration over contract negotiation.

• Responding to change over following a plan.

Agile frameworks, such as Scrum, Kanban, and SAFe (Scaled Agile Framework), provide teams with structures to implement these principles effectively.

Core Principles of Agile

Agile development is grounded in several key principles:

Iterative Progress: Break work into smaller, manageable increments (sprints or iterations).

Continuous Feedback: Gather insights from stakeholders and users at every stage.

Cross-Functional Collaboration: Encourage teamwork across disciplines, from design and development to marketing.

Adaptability: Respond to change over rigid adherence to a plan.

Customer-Centricity: Focus on delivering value to users early and often.

Advantages of Agile Development

Agile offers several advantages for teams navigating dynamic environments:

Increased Flexibility: Teams can pivot quickly based on feedback or changing priorities.

Improved Quality: Continuous testing and iteration identify issues early.

Faster Time-to-Market: Delivering incremental updates ensures users see value sooner.

Greater Collaboration: Agile ceremonies, such as stand-ups and retrospectives, foster communication and alignment.

Enhanced Customer Satisfaction: Frequent delivery of working features keeps users engaged.

Agile in Action: Practical Insights

1. Embrace Iteration

Iteration is at the heart of Agile. Instead of delivering a fully completed product at the end of a long cycle, Agile teams focus on delivering usable increments that can be tested and improved.

Example: If you’re developing an e-commerce platform, start with basic functionality like product browsing and checkout, then build features like personalized recommendations in subsequent iterations.

Tip: Treat every sprint or iteration as an experiment. Test hypotheses, gather feedback, and refine the product continuously.

2. Prioritize the Backlog

The product backlog is a living document that lists and prioritizes all tasks, features, and fixes. Regular grooming ensures the backlog aligns with current goals and user needs.

Example: Use prioritization frameworks like MoSCoW (Must have, Should have, Could have, Won’t have) or RICE (Reach, Impact, Confidence, Effort) to focus on high-impact items.

Tip: Involve stakeholders in backlog reviews to ensure alignment and transparency.
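
Because RICE reduces to a single formula, score = (Reach × Impact × Confidence) / Effort, it's easy to compute and compare across a backlog. Here's a minimal sketch with illustrative numbers:

```python
# RICE: Reach (users per quarter), Impact (0.25-3 scale), Confidence (0-1), Effort (person-months).
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    return round(reach * impact * confidence / effort, 1)

backlog = [
    ("One-click checkout",            {"reach": 8000, "impact": 2.0, "confidence": 0.8, "effort": 3}),
    ("Personalized recommendations",  {"reach": 5000, "impact": 1.0, "confidence": 0.5, "effort": 5}),
]

# Rank high-impact items first.
for item, factors in sorted(backlog, key=lambda x: rice_score(**x[1]), reverse=True):
    print(f"{item}: {rice_score(**factors)}")
```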

3. Leverage Agile Ceremonies

Agile frameworks like Scrum incorporate ceremonies that foster collaboration and alignment:

Daily Stand-ups: Short meetings to discuss progress, challenges, and plans.

Sprint Planning: Define sprint goals and tasks.

Sprint Reviews: Share completed work with stakeholders.

Retrospectives: Reflect on successes and areas for improvement.

Tip: Use retrospectives to celebrate wins and address challenges. Continuous improvement is central to Agile.

4. Measure Success with Agile Metrics

Agile teams rely on metrics to evaluate performance and inform decisions:

Velocity: Tracks the amount of work completed in a sprint.

Burn-down Chart: Visualizes progress toward sprint goals.

Cycle Time: Measures how long it takes to complete a task.

Team Satisfaction: Monitors team morale and collaboration effectiveness.

Tip: Focus on metrics that drive actionable insights, rather than vanity metrics that offer little value.
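
For teams new to these metrics, here's a minimal sketch of how velocity and cycle time fall out of ordinary sprint data (the story points and dates are illustrative):

```python
from datetime import date

# Velocity: story points completed per sprint; cycle time: days from start to done.
completed_points_by_sprint = [21, 18, 24, 23]
velocity = sum(completed_points_by_sprint) / len(completed_points_by_sprint)

tasks = [
    {"started": date(2025, 3, 3), "done": date(2025, 3, 6)},
    {"started": date(2025, 3, 4), "done": date(2025, 3, 10)},
]
cycle_times = [(t["done"] - t["started"]).days for t in tasks]

print(f"Average velocity: {velocity:.1f} points/sprint")                       # 21.5
print(f"Average cycle time: {sum(cycle_times) / len(cycle_times):.1f} days")   # 4.5
```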

5. Foster a Culture of Collaboration

Agile thrives in environments where cross-functional teams work together to solve problems. Break down silos and ensure open communication between designers, developers, product managers, and stakeholders.

Example: Use tools like Jira, Trello, or Slack to facilitate collaboration and track progress in real-time.

Tip: Encourage a culture of trust where team members feel comfortable sharing ideas and feedback.

6. Welcome Changing Requirements

Agile is designed to accommodate change. User needs, market conditions, or technical constraints often evolve, and Agile teams must adapt without losing momentum.

Example: If user feedback suggests a feature is underperforming, reprioritize the backlog to address their concerns in the next sprint.

Tip: Build flexibility into timelines and budgets to handle unexpected shifts.

Real-World Example: Spotify’s Agile Approach

Spotify is a well-known example of Agile principles in action. The company uses a unique Agile model organized around “squads,” small cross-functional teams that operate autonomously. Each squad owns a specific feature or part of the product, such as recommendations or user profiles. This structure enables rapid experimentation, iterative development, and quick pivots in response to user feedback.

By combining Agile principles with a culture of innovation, Spotify continuously improves its platform while maintaining a strong focus on user experience.

Challenges and Solutions in Agile Implementation

While Agile offers numerous benefits, it also presents challenges:

Scope Creep: Constant change can lead to uncontrolled project expansion.

Team Misalignment: Without clear goals, teams can lose focus.

Lack of Agile Mindset: Teams unfamiliar with Agile may resist the transition.

Solutions:

• Define clear sprint goals to maintain focus.

• Provide Agile training and coaching to build understanding.

• Balance flexibility with discipline, ensuring changes align with strategic objectives.

Conclusion

Agile development empowers teams to make smarter decisions by embracing iteration, continuous feedback, and adaptability. By prioritizing collaboration, fostering a culture of experimentation, and leveraging metrics to guide progress, Agile teams can navigate change with confidence. Whether you’re building a new product or enhancing an existing one, Agile offers the tools and mindset needed to deliver value in dynamic, fast-paced environments.

Tom Rattigan

What Is Product-Market Fit and How to Achieve It?

Achieving product-market fit (PMF) is the holy grail for every product manager and entrepreneur. It’s the moment when a product resonates so well with its target audience that it satisfies a real need, creates value, and becomes indispensable. But what exactly is product-market fit, and how can you systematically pursue it? The Product-Market Fit Triad—valuable, viable, and feasible—offers a structured framework for guiding product decisions and optimizing your path toward this critical milestone.

This post dives into the concept of product-market fit, explores the triad framework, and provides actionable strategies to achieve it.

What Is Product-Market Fit?

Marc Andreessen, who popularized the term, described product-market fit as “being in a good market with a product that can satisfy that market.” In simpler terms, it’s when your product solves a real problem for a clearly defined audience, and they are willing to adopt and pay for it.

Product-market fit is often identified by several key indicators:

• Strong user retention: Customers consistently return to use your product.

• Positive customer feedback: Users express satisfaction and loyalty.

• Organic growth: Word-of-mouth drives adoption.

• Revenue growth: Customers are not only using the product but are also willing to pay for it.

The Product-Market Fit Triad: Valuable, Viable, Feasible

The Product-Market Fit Triad helps product managers focus on three interconnected criteria to guide their product decisions:

1. Valuable: Does the product solve a meaningful problem for real customers?

2. Viable: Does the product provide enough value to the business to justify its existence?

3. Feasible: Can the product be built with available resources and within technical constraints?

These three dimensions are essential for achieving and sustaining product-market fit. Let’s break them down.

1. Valuable: Solving the Right Problem

At its core, a product must deliver value to its users. If it doesn’t solve a real problem or satisfy a critical need, customers won’t adopt it.

Key questions to consider:

• What pain points does the product address?

• Is this a “must-have” or a “nice-to-have” for your target audience?

• Does the product offer a clear advantage over alternatives?

Strategies for building valuable products:

• Conduct customer research: Use interviews, surveys, and usability testing to understand user pain points. Empathy is key to identifying what matters most to your audience.

• Develop personas: Create detailed user personas to stay aligned with customer needs throughout the product lifecycle.

• Prioritize features: Focus on delivering solutions to the most pressing problems first. Avoid feature bloat that dilutes your core value proposition.

Example: Slack identified the pain point of team communication inefficiency and built a product that made collaboration seamless. Its clear value proposition led to rapid adoption.

2. Viable: Delivering Business Value

A product must contribute to the business’s bottom line. Viability ensures that the product aligns with organizational goals and can generate sufficient revenue to justify its development and maintenance.

Key questions to consider:

• Is there a viable market size for this product?

• Does the revenue potential outweigh the cost of development and customer acquisition?

• Can this product achieve sustainable profitability?

Strategies for ensuring viability:

• Test pricing models: Experiment with pricing tiers, subscription models, or pay-as-you-go systems to identify what resonates with your market.

• Analyze market segmentation: Understand which customer segments offer the most growth potential.

• Monitor metrics: Track metrics like customer acquisition cost (CAC), lifetime value (LTV), and churn to evaluate business viability.

Example: Netflix’s subscription model aligns its pricing with customer value while ensuring predictable recurring revenue, a hallmark of business viability.

3. Feasible: Building Within Constraints

Feasibility evaluates whether the product can be developed and maintained with the resources, technology, and constraints at hand. A valuable and viable product is meaningless if it’s impossible to deliver.

Key questions to consider:

• Do we have the technical capability to build this product?

• Are there sufficient resources (time, talent, budget) to complete the project?

• Can the product meet compliance or regulatory requirements?

Strategies for feasibility:

• Start with an MVP: Build a Minimum Viable Product to test hypotheses before committing to large-scale development.

• Leverage Agile development: Use iterative cycles to adapt quickly to technical challenges and resource limitations.

• Involve cross-functional teams: Ensure that engineering, design, and business teams collaborate to assess technical and operational constraints.

Example: Airbnb started with a simple website to test its feasibility, later scaling into a robust platform after proving that the idea worked.

How the Triad Works Together

The interplay between valuable, viable, and feasible is critical. A product that excels in only one or two areas will struggle to achieve product-market fit:

• Valuable but not viable: Users love it, but the business can’t sustain it.

• Valuable but not feasible: A great idea that is impossible to implement.

• Viable but not valuable: A profitable product that customers don’t truly need.

True product-market fit exists when all three criteria are met in balance. This holistic approach minimizes risks and maximizes the chances of long-term success.

Practical Steps to Achieve Product-Market Fit

1. Define your target market. Start with a clear understanding of your audience and their needs. Segment your market to focus on a niche with high potential.

2. Validate early and often. Use the Build-Measure-Learn loop to test hypotheses quickly. Gather user feedback at every stage to refine your product.

3. Iterate toward fit. Be prepared to pivot based on data. Achieving product-market fit is rarely a linear journey.

4. Track metrics. Monitor key metrics like Net Promoter Score (NPS), retention rates, and usage patterns to gauge your progress.

5. Communicate across teams. Ensure alignment among product, engineering, marketing, and sales teams to avoid silos that can derail progress.

Conclusion

Product-market fit is the foundation of any successful product. By leveraging the Product-Market Fit Triad—valuable, viable, and feasible—you can systematically evaluate your product decisions and guide your team toward delivering solutions that resonate with users, benefit the business, and can be realistically built. Achieving this balance isn’t easy, but it’s the key to creating products that thrive in competitive markets.

What strategies have you used to achieve product-market fit? Share your thoughts and experiences in the comments!

Tom Rattigan

Key Metrics for Success: From OKRs to KPIs in Digital Product Management

In digital product management, metrics are the compass that guides teams toward success. They help evaluate performance, inform decision-making, and align teams around shared goals. However, not all metrics are created equal. Understanding how to define and use both leading and lagging indicators, alongside frameworks like OKRs (Objectives and Key Results) and KPIs (Key Performance Indicators), is essential for driving product and business success.

The Importance of Metrics in Digital Product Management

Metrics are vital for several reasons. They provide visibility into how a product is performing, ensure alignment with strategic goals, and foster a culture of accountability and continuous improvement. Clear, actionable metrics enable teams to:

• Monitor progress toward objectives.

• Identify opportunities for improvement.

• Validate or challenge assumptions.

• Make data-informed decisions.

Choosing the right metrics is critical. Poorly defined metrics can lead to misaligned priorities, while meaningful metrics focus efforts on what truly matters.

OKRs vs. KPIs: Understanding the Difference

OKRs and KPIs are two distinct but complementary tools for measuring success.

OKRs (Objectives and Key Results) focus on setting ambitious, qualitative goals (objectives) paired with measurable outcomes (key results). They are forward-looking and aspirational, encouraging teams to stretch beyond their comfort zones.

Example:

Objective: Improve customer satisfaction.

Key Results: Achieve a Net Promoter Score (NPS) of 70+, reduce customer support response time to under 1 hour, and increase positive survey feedback by 15%.

KPIs (Key Performance Indicators) measure the ongoing health and performance of a product or business. They are quantitative, often tied to specific operational or financial outcomes, and monitor progress toward objectives.

Example:

• Average revenue per user (ARPU).

• Monthly active users (MAU).

• Conversion rate.

OKRs are about setting the direction, while KPIs are about measuring how well you are executing. Together, they create a robust system for tracking success.

Leading vs. Lagging Indicators

Metrics can be divided into leading and lagging indicators, each serving a unique purpose.

Lagging Indicators measure outcomes that result from past actions. They confirm whether goals have been achieved but don’t provide insights for real-time adjustments.

Examples:

• Revenue growth.

• Churn rate.

• Total sales.

Leading Indicators measure actions or conditions that predict future outcomes. They help teams course-correct before final results are evident.

Examples:

• Free trial signups.

• Website traffic.

• Feature adoption rates.

Balancing these two types of indicators ensures a comprehensive view of product performance, enabling both reflection and proactive adjustments.

Defining Metrics That Matter

To define meaningful metrics, start by answering these questions:

• What is the overall objective we want to achieve?

• What are the measurable outcomes that indicate progress?

• What specific actions or behaviors drive these outcomes?

Once these questions are answered, follow these steps:

1. Identify Key Goals

Tie metrics to overarching goals, such as improving customer retention, increasing revenue, or expanding user engagement.

2. Align with Strategy

Ensure metrics reflect broader business priorities. Misaligned metrics can lead to wasted effort on initiatives that don’t drive meaningful results.

3. Prioritize Clarity

Metrics should be easy to understand and communicate. A clear metric is more likely to be actionable.

4. Focus on Actionability

Choose metrics that teams can influence through their efforts.

5. Validate Relevance

Regularly review and adjust metrics to ensure they remain aligned with evolving goals.

Examples of Key Metrics in Digital Product Management

For a digital product manager, metrics often fall into categories such as user engagement, growth, and financial performance.

User Engagement Metrics

• Daily active users (DAU).

• Average session duration.

• Feature usage frequency.

Growth Metrics

• Customer acquisition cost (CAC).

• Viral coefficient (how many new users each user brings).

• User onboarding completion rate.

Financial Metrics

• Monthly recurring revenue (MRR).

• Lifetime value (LTV) of a customer.

• Gross margin.
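
Most of these growth and financial metrics are straightforward arithmetic once the inputs are agreed on. A minimal sketch with illustrative numbers, using a simple churn-based lifetime estimate:

```python
# Illustrative inputs for a subscription product.
marketing_spend = 50_000   # acquisition spend in the period
new_customers = 400        # customers acquired in the same period
arpu = 45                  # average monthly revenue per user
gross_margin = 0.80
monthly_churn = 0.04       # 4% of customers cancel each month

cac = marketing_spend / new_customers            # customer acquisition cost: $125
avg_lifetime_months = 1 / monthly_churn          # simple churn-based lifetime: 25 months
ltv = arpu * gross_margin * avg_lifetime_months  # lifetime value: $900

print(f"CAC: ${cac:.0f}, LTV: ${ltv:.0f}, LTV:CAC = {ltv / cac:.1f}")
```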

Using Metrics Effectively

1. Track Trends Over Time

Monitor changes in metrics to identify patterns and trends. For example, a gradual decline in DAU may indicate usability issues or competition.

2. Contextualize Data

Interpret metrics within the broader context. A high churn rate might seem alarming, but it could be acceptable if offset by rapid new customer acquisition.

3. Avoid Vanity Metrics

Focus on metrics that drive actionable insights rather than surface-level success. For instance, app downloads are less valuable than active usage.

4. Set Benchmarks

Use industry standards or historical data to establish benchmarks for evaluating performance.

5. Foster Team Ownership

Encourage cross-functional teams to take responsibility for specific metrics. When teams own outcomes, they are more motivated to deliver results.

Example: Using Metrics in Practice

Imagine managing a subscription-based SaaS product. Your high-level goal is to increase revenue, but you need actionable metrics to achieve it.

Lagging Indicators:

• Monthly recurring revenue (MRR).

• Churn rate.

Leading Indicators:

• Free trial conversion rate.

• Customer onboarding success rate.

By focusing on leading indicators such as improving onboarding, you can proactively address issues that might lead to higher churn, ultimately driving long-term growth in MRR.

Conclusion

Metrics are the backbone of successful digital product management. By understanding the distinction between OKRs and KPIs, leveraging both leading and lagging indicators, and selecting actionable, aligned metrics, product managers can guide teams toward meaningful results. Metrics not only help measure success but also inform better decisions, keeping products on the path to achieving their full potential.

Tom Rattigan

User-Centered Design Thinking: Turning Problems into Solutions

In the ever-evolving landscape of product management and digital design, the ability to empathize with users and address their real needs is what sets successful products apart. User-centered design thinking provides a structured yet flexible framework for solving complex problems through a deep understanding of users. This human-centered approach prioritizes empathy, creativity, and iteration, making it a powerful tool for innovation.

This post explores the five phases of design thinking—empathize, define, ideate, prototype, and test—and how empathy drives each stage to transform problems into meaningful solutions.

What Is Design Thinking?

Design thinking is a problem-solving methodology rooted in human-centered design principles. It encourages teams to explore creative possibilities, collaborate across disciplines, and iterate quickly based on user feedback. At its core, design thinking is about creating solutions that are desirable for users, viable for businesses, and feasible to implement.

The Five Phases of Design Thinking

The design thinking process consists of five non-linear, iterative stages. These stages can overlap or repeat, depending on the needs of the project.

1. Empathize: Understanding the User

Empathy is the foundation of design thinking. This phase focuses on gaining a deep understanding of the user’s experiences, needs, and challenges. Empathy allows teams to step into the user’s shoes and uncover insights that might otherwise go unnoticed.

Key Activities

• Conduct interviews, surveys, and ethnographic research to gather qualitative insights.

• Observe users interacting with products or services in their natural environment.

• Create empathy maps to visualize user emotions, goals, and behaviors.

Outcome

A rich understanding of the user’s perspective that informs every subsequent phase.

2. Define: Framing the Problem

In this phase, insights gathered during empathy are synthesized into a clear and actionable problem statement. A well-defined problem ensures that the team is aligned and focused on addressing the right challenge.

Key Activities

• Analyze user research to identify patterns and key pain points.

• Write a problem statement (e.g., “How might we help users easily schedule appointments without feeling overwhelmed?”).

• Develop personas that represent your target audience.

Outcome

A user-centered problem statement that acts as a guide for ideation.

3. Ideate: Generating Solutions

With a clear problem statement, the team moves into ideation. This phase is about exploring a wide range of potential solutions without judgment, fostering creativity and innovation.

Key Activities

• Brainstorming sessions to generate as many ideas as possible.

• Techniques like mind mapping, SCAMPER (Substitute, Combine, Adapt, Modify, Put to another use, Eliminate, Rearrange), and “Crazy 8s” for rapid ideation.

• Evaluate and shortlist ideas based on feasibility, desirability, and viability.

Outcome

A pool of innovative ideas, with a few selected for prototyping.

4. Prototype: Building Tangible Solutions

Prototyping turns abstract ideas into tangible forms that can be tested with users. This phase is about learning through making and identifying what works and what doesn’t.

Key Activities

• Create low-fidelity prototypes, such as sketches, wireframes, or simple models.

• Build medium- or high-fidelity prototypes as the solution becomes more refined.

• Use tools like Figma, Adobe XD, or paper prototyping for digital products.

Outcome

A working prototype that captures key functionality and allows for user testing.

5. Test: Refining Through Feedback

Testing involves sharing the prototype with users to gather feedback and insights. This phase validates whether the solution effectively addresses the problem or needs further iteration.

Key Activities

• Conduct usability testing with representative users.

• Observe user interactions to identify pain points or confusion.

• Collect both qualitative and quantitative data to inform improvements.

Outcome

Actionable insights to refine the prototype or revisit earlier phases if necessary.

How Empathy Drives Innovation in Design Thinking

Empathy is the cornerstone of the design thinking process. It ensures that solutions are rooted in the user’s real-world needs and challenges. By understanding users on a deeper level, teams can uncover hidden opportunities and create products that resonate.

Empathy enables:

User-Centric Problem Solving: Teams focus on what users truly need, rather than assumptions or internal biases.

Inclusive Design: Understanding diverse user perspectives leads to solutions that work for a broader audience.

Emotional Connection: Empathy fosters trust and loyalty by creating products that users feel were designed just for them.

Real-World Example: Airbnb

When Airbnb was struggling in its early days, the founders used design thinking to empathize with hosts and guests. By staying in their customers’ homes and experiencing their challenges firsthand, they uncovered key pain points, such as the need for better photos and clearer communication. This deep understanding led to actionable solutions, transforming Airbnb into a global success.

Applying Design Thinking in Product Management

Incorporating design thinking into product management can help teams create better solutions faster. Here’s how:

Integrate User Research: Build empathy by making user research a regular part of your workflow.

Encourage Collaboration: Break down silos between teams and involve stakeholders in the design thinking process.

Adopt an Iterative Mindset: Use prototypes and tests to iterate quickly, rather than waiting for perfection.

Focus on Outcomes: Keep the user’s problem and desired outcome at the center of decision-making.

Conclusion

Design thinking is a powerful approach for solving problems and driving innovation. By following its five phases—empathize, define, ideate, prototype, and test—you can create solutions that resonate deeply with users. At the heart of the process lies empathy, a skill that enables teams to connect with users and turn challenges into opportunities. Whether you’re designing a new product or improving an existing one, design thinking ensures that your solutions are not only effective but also meaningful.

Tom Rattigan

Our 'Smart' Media Library Became Too Smart for Its Own Good

The AI-Powered Dream

Twill's media library is already pretty solid out of the box—drag and drop uploads, decent organization, reasonable search. But our client, a mid-sized marketing agency, had a problem: thousands of images with inconsistent naming, scattered across multiple projects, and zero organizational structure.

"Can't we just make it automatically organize itself?" they asked during our requirements meeting.

Challenge accepted.

I spent three weeks building what I genuinely believed was the future of digital asset management. Using Google's Vision API and a custom machine learning pipeline, our enhanced Twill media library would automatically:

  • Tag images based on visual content recognition

  • Categorize files by detected subjects and themes

  • Suggest folder structures based on content similarity

  • Auto-generate descriptive filenames from image analysis

  • Create smart collections that updated themselves as new content was added
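
Conceptually, the pipeline behind that list was simple. Here's a rough sketch of its general shape; the detect_labels helper stands in for a vision API call (Google's Vision API in our case), and the names, thresholds, and rules are illustrative rather than the actual implementation.

```python
def detect_labels(image_bytes: bytes) -> list[tuple[str, float]]:
    """Hypothetical stand-in for a vision API's label detection.
    Returns (label, confidence) pairs; hard-coded here so the sketch runs on its own."""
    return [("landscape", 0.93), ("sky", 0.88), ("orange", 0.64)]

def auto_tag(image_bytes: bytes, min_confidence: float = 0.7) -> dict:
    """Tag an upload from visual labels alone -- exactly the assumption that backfired."""
    labels = [(label, conf) for label, conf in detect_labels(image_bytes) if conf >= min_confidence]
    tags = [label for label, _ in labels]
    # Naive category rule: file the asset under its highest-confidence label.
    category = max(labels, key=lambda lc: lc[1])[0] if labels else "uncategorized"
    return {"tags": tags, "suggested_category": category}

print(auto_tag(b"<image bytes>"))  # {'tags': ['landscape', 'sky'], 'suggested_category': 'landscape'}
```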

The demo was flawless. I uploaded a batch of sample images and watched the magic happen: sunset photos automatically tagged as "landscape, orange, sky, evening," headshots neatly categorized under "people, professional, portrait," product shots organized by color and style.

"This is incredible," the client said. "It's like having a digital librarian who never sleeps."

We launched the smart media library with fanfare, proud of our cutting-edge solution to content chaos.

The Machine Learning Meltdown

Two days after launch, I got an email with the subject line "URGENT: Legal document crisis."

The client's paralegal had uploaded a batch of signed contracts and NDAs—standard business documents that needed to be easily findable and properly organized. These weren't images; they were PDF scans of legal paperwork.

Our smart system had other ideas.

The machine learning algorithm, designed primarily for image recognition, had analyzed the scanned documents and made some... creative interpretations. Here's what it decided:

  • Signed contracts → Tagged as "handwriting samples" and filed under "Arts & Crafts"

  • Corporate letterheads → Categorized as "logo design inspiration"

  • Legal disclaimers → Auto-tagged as "fine print photography" and grouped with restaurant menu shots

  • Confidential client NDAs → Somehow ended up in a smart collection called "Typography Examples" alongside food blog headers

But the crown jewel of algorithmic confusion? A scanned invoice for office supplies got automatically tagged as "food photography" because the machine learning model detected what it thought was a grocery receipt, and our system had learned to associate receipts with restaurant and food content.

When Smart Becomes Stupid

The legal document disaster was just the beginning. As more content flowed through our "intelligent" system, the edge cases multiplied:

Screenshot Chaos: Every UI mockup and web design screenshot got tagged as "technology, computer, website" and dumped into the same massive digital pile. Finding a specific wireframe became harder than it was before we had any organization system at all.

Artwork Confusion: The client's creative team uploaded concept art for a fantasy gaming client. Our system confidently tagged medieval castle illustrations as "real estate photography" and sorted them alongside actual property listings.

Color-Based Madness: The algorithm became obsessed with color matching. A red Ferrari, a red stop sign, and a close-up photo of a strawberry all ended up in the same "red objects" smart collection, regardless of their actual purpose or context.

False Confidence: Perhaps most frustrating was how confidently wrong the system could be. It didn't tag things with uncertainty—every categorization came with the same algorithmic certainty, making it impossible for users to know when to trust the suggestions.

The Human Factor We Forgot

The real problem wasn't the technology—it was that we'd forgotten how humans actually organize information.

When the marketing team looked for "client presentation images," they weren't thinking about visual characteristics like "blue, professional, corporate." They were thinking about context: "That shot we used for the Henderson pitch" or "The image from the Q3 campaign that worked really well."

Our smart system understood what things looked like, but it had no concept of why they mattered.

A perfect example: two nearly identical stock photos of handshakes. Visually, they were almost the same—both tagged identically by our system as "business, handshake, meeting, professional." But to the marketing team, one was "the photo we use for partnership announcements" and the other was "the generic handshake we use for B2B content." Context that was invisible to our machine learning but crucial to human users.

The Feedback Loop From Hell

The situation got worse as users tried to "help" the system learn. Twill's media library allowed manual tag corrections, and we'd built a feedback mechanism so the AI could learn from them.

Except different team members had different organizational philosophies.

The graphic designer corrected tags based on visual composition and color theory. The copywriter organized by emotional tone and messaging. The account managers sorted by client and campaign. The social media manager grouped everything by platform requirements.

Our machine learning model was trying to learn from five completely different organizational systems simultaneously. The result was an AI that became increasingly confused and increasingly confident in its confusion.

The system started creating Frankenstein categories: "Blue Professional Client Social Emotional Campaign Content" was an actual auto-generated tag that appeared on seventeen completely unrelated images.

The Breaking Point

The final straw came when a client requested their brand assets for an emergency presentation. Simple request—just grab the logo files and product shots from the last campaign.

Except our smart system had distributed those assets across fourteen different auto-generated categories:

  • Logos were split between "Brand Identity," "Typography Examples," and "Black & White Graphics"

  • Product shots were scattered across "E-commerce," "Lifestyle Photography," and "Marketing Materials"

  • The brand colors were sorted into separate "Blue Content," "White Backgrounds," and "Gradient Collections"

What should have been a 30-second file grab turned into a 45-minute treasure hunt across multiple smart collections. The client missed their presentation deadline.

The Humbling Solution

We rolled back to a much simpler system:

  • Basic auto-tagging for obvious stuff like file type, dimensions, and upload date (see the sketch after this list)

  • Manual folder structures that matched how the team actually thought about their work

  • Simple search based on filenames and user-added tags

  • Saved searches instead of algorithmic smart collections

  • Bulk tagging tools to make manual organization faster
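To be concrete about how little "intelligence" survived the rollback, the auto-tagging that remained was essentially reading facts the file already carried. A hedged sketch, assuming a plain Laravel upload handler rather than Twill's real media pipeline:

```php
<?php

use Illuminate\Http\UploadedFile;

// Hypothetical helper: derive only the "obvious" metadata from an upload.
// No inference, no guessing, just facts the file already carries.
function basicMetadata(UploadedFile $file): array
{
    $meta = [
        'file_type'   => $file->getClientMimeType(),
        'extension'   => strtolower($file->getClientOriginalExtension()),
        'size_bytes'  => $file->getSize(),
        'uploaded_at' => now()->toDateTimeString(),
    ];

    // Dimensions only make sense for images; skip them for PDFs and the rest.
    if (str_starts_with($meta['file_type'], 'image/')) {
        [$width, $height] = getimagesize($file->getRealPath());
        $meta['width']  = $width;
        $meta['height'] = $height;
    }

    return $meta;
}
```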

The result? Users could find their files again. The system was predictable. Everyone understood how it worked.

What We Actually Learned

1. Context Beats Content

Visual similarity doesn't equal organizational relevance. How humans use files matters more than what the files contain.

2. Predictability Is a Feature

Users would rather have a simple system they understand than a smart system that surprises them. Especially when those surprises cost them deadlines.

3. Automation Should Assist, Not Replace

The best AI helps humans organize their own stuff better. It doesn't try to think for them.

4. Edge Cases Are Just Cases

When you're dealing with real-world content, edge cases aren't exceptions—they're half your data. Legal documents, screenshots, mockups, and weird client requests are normal, not edge cases.

The Real Intelligence

The smartest thing we built wasn't the machine learning pipeline. It was the understanding that humans are really good at organizing things that matter to them, and computers are really good at making that organization faster and more consistent.

Our final system didn't try to be intelligent about content. Instead, it was intelligent about workflow—making it easy for humans to organize files the way that made sense for their actual work.

Sometimes the smartest technology is the technology that knows when to stay dumb.

Have you built "smart" features that turned out to be too smart for their own good? The development community learns more from our failures than our successes—share your stories of AI overreach and human-centered solutions.


The Block Editor Feature That Taught Me Why 'Nested' Doesn't Mean 'Better'

The Vision Was Beautiful

When we first implemented Twill's block editor for our client's editorial platform, I was genuinely excited. The concept was elegant: publishers could build rich, dynamic content by stacking blocks—text blocks, image blocks, quote blocks, video embeds—in whatever order told their story best.

But then we discovered something even more powerful buried in Twill's documentation: blocks could contain other blocks. Nested composition. The ability to create complex layouts by embedding blocks within blocks.

"This is it," I told my team during our planning session. "We'll give them the ultimate flexibility. Want a two-column layout with different content types in each column? Done. Need a tabbed interface with rich content in each tab? Easy. Complex hero sections with multiple content layers? No problem."

The client loved the demo. During our presentation, we built a sophisticated landing page in real-time: a hero block containing an image block and text block, followed by a features section block containing multiple feature blocks, each with their own nested image and text combinations. It looked professional, felt intuitive, and seemed infinitely extensible.

We launched with confidence, proud of the flexible content system we'd built.

Reality Had Other Plans

Three weeks post-launch, I got the call every developer dreads.

"The CMS is running really slowly," the client's content manager said. "Sometimes pages take forever to load, and occasionally they don't load at all."

I pulled up their latest blog post to investigate. What I saw made my stomach drop.

The post started with a "Full Width Hero" block. Inside that was a "Two Column Layout" block. The left column contained a "Card Stack" block, which contained three "Feature Card" blocks. Each Feature Card contained an "Image with Overlay" block, which itself contained a "Text with Background" block. The right column had a "Tabbed Content" block containing four tabs, each with a "Rich Content" block containing multiple "Pull Quote" blocks.

And that was just the header section.

Seven levels deep. The content tree looked like a Russian nesting doll designed by someone with commitment issues.

The Performance Nightmare

The technical reality hit me like a truck. Every nested block meant additional database queries. A block seven levels deep required multiple relationship lookups to render properly. What should have been a simple page load had become a cascade of database calls:

  • Query the main content block

  • Query each nested block's configuration

  • Query the media relationships for image blocks

  • Query the content relationships for text blocks

  • Repeat for each nesting level

A single "simple" blog post was generating over 200 database queries. Our carefully optimized Laravel application was choking on the data complexity we'd accidentally enabled.

But the performance issues were just the beginning.

The User Experience Disaster

The content managers were struggling too, but not in ways I'd anticipated. The nested block interface had become cognitively overwhelming. To edit a simple text element buried six levels deep, publishers had to:

  1. Click into the main container block

  2. Navigate to the correct nested section

  3. Find the specific nested block

  4. Click through multiple edit interfaces

  5. Remember their location in the nesting hierarchy

  6. Navigate back out without losing their changes

"I can't find anything," one content editor told me during a support call. "I spent twenty minutes yesterday trying to update a single headline because I couldn't remember which block it was nested inside."

The beautiful flexibility we'd created had become a labyrinthine nightmare. Publishers were creating simpler content not because they wanted to, but because the complex nested structures were too difficult to manage.

When Flexibility Becomes Fragility

The breaking point came when a content editor accidentally created an infinite loop. They'd nested a "Related Content" block inside a "Featured Articles" block, which pulled in an article that contained a "Related Content" block that referenced the original article.

The site crashed. Hard.

Our server ran out of memory trying to render the circular reference, and we spent an emergency weekend implementing safeguards against recursive block relationships. But the damage was done—both to our server uptime and our client's confidence.
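The safeguard itself boils down to remembering what you've already rendered and refusing to go there twice. A simplified sketch of the idea, with hypothetical names rather than the code we actually shipped:

```php
<?php

// Hypothetical renderer guard: refuses to follow circular block references
// and caps recursion depth so a bad content tree can't exhaust memory.
function renderBlock(object $block, array $seen = [], int $maxDepth = 10): string
{
    if (in_array($block->id, $seen, true)) {
        // We've rendered this block already on this branch: bail out.
        return '<!-- circular block reference skipped -->';
    }

    if (count($seen) >= $maxDepth) {
        throw new RuntimeException("Block nesting exceeded {$maxDepth} levels.");
    }

    $seen[] = $block->id;

    $childHtml = '';
    foreach ($block->children as $child) {
        $childHtml .= renderBlock($child, $seen, $maxDepth);
    }

    return view('blocks.' . $block->type, [
        'block'    => $block,
        'children' => $childHtml,
    ])->render();
}
```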

The Lessons We Learned (The Hard Way)

1. Constraint Breeds Creativity

The most creative solutions often come from working within limitations, not from having unlimited options. When we limited nesting to three levels maximum, publishers started designing more thoughtful, purposeful content structures.

2. Performance Is a Feature

All the flexibility in the world doesn't matter if your CMS is too slow to use. We implemented aggressive query optimization and caching, but the better solution was preventing the performance problem in the first place.

3. User Experience Trumps Technical Capability

Just because your CMS can do something doesn't mean it should. The most powerful feature is meaningless if your users can't figure out how to use it effectively.

4. Documentation Isn't Training

We'd documented every possible nesting combination but hadn't trained users on when and why to use them. Power features need power user education.

The Solution That Actually Worked

We rebuilt the system with intentional constraints:

  • Maximum three nesting levels, enforced programmatically (see the sketch after this list)

  • Predefined layout blocks for common complex structures

  • Visual hierarchy indicators showing nesting depth in the interface

  • Performance budgets that warned when blocks were getting too complex

  • Content templates for common page types
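A sketch of what "enforced programmatically" can look like, assuming a validation step that runs before a block tree is saved; the hook and array shape are illustrative, not Twill's actual API:

```php
<?php

use Illuminate\Validation\ValidationException;

// Hypothetical guard run before persisting a block tree.
// Each block is assumed to be an array like ['type' => ..., 'children' => [...]].
function assertMaxNestingDepth(array $blocks, int $maxDepth = 3, int $depth = 1): void
{
    foreach ($blocks as $block) {
        $children = $block['children'] ?? [];

        if (!empty($children) && $depth >= $maxDepth) {
            throw ValidationException::withMessages([
                'blocks' => "Blocks can only be nested {$maxDepth} levels deep.",
            ]);
        }

        assertMaxNestingDepth($children, $maxDepth, $depth + 1);
    }
}
```

Rejecting an over-deep structure at save time keeps the limit a hard rule rather than a rendering-time surprise.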

The result? Publishers were happier, pages loaded faster, and we still offered plenty of creative flexibility—just within reasonable bounds.

The Real Lesson

The most important thing I learned wasn't about Twill specifically, but about the nature of building tools for humans. Engineering elegance and user elegance aren't always the same thing.

Twill's block editor is genuinely powerful, and the ability to nest blocks is a useful feature. But like any powerful tool, it needs guardrails. The best CMS isn't the one that can do everything—it's the one that makes the right things easy and the wrong things hard.

Our clients didn't need infinite flexibility. They needed enough flexibility to tell their stories well, wrapped in an interface that stayed out of their way.

Sometimes the best feature you can build is the constraint that prevents someone from using your worst feature.

Building with Twill? Share your own "lessons learned the hard way" stories. The CMS development community grows stronger when we're honest about what doesn't work, not just what does.


When Your Biggest Enterprise Client Breaks Your Assumptions

"We need Twill to manage our global content operation," Nike's technical director explained during our initial scoping call. "Multiple regions, different languages, complex approval workflows, thousands of assets. But your CMS looks flexible, so it should handle our scale just fine, right?"

I looked at our feature list - modular architecture, flexible content types, scalable media handling - and confidently said yes. Twill was built to be adaptable. We'd designed it for exactly this kind of sophisticated use case.

Six months later, I was staring at a system architecture diagram that looked nothing like anything we'd ever built before, wondering if we were still making a CMS or if we'd accidentally become an enterprise software company.

The Assumptions We Didn't Know We Were Making

When we designed Twill's flexible architecture, we thought we were building for scale and complexity. We'd seen agency projects with hundreds of pages, multiple content types, and sophisticated editorial workflows. We felt confident about handling enterprise requirements.

But our idea of "enterprise scale" was adorably naive.

We assumed enterprise clients would need:

  • More content types (they actually needed fewer, but more sophisticated ones)

  • Larger teams (they actually needed smaller teams with more complex permissions)

  • More features (they actually needed fewer features that worked more reliably)

  • Complex content structures (they actually needed simpler structures with enterprise-grade governance)

Every assumption we'd made about scaling up proved to be wrong in interesting ways.

The Workflow That Broke Our Mental Model

Nike's content creation process looked nothing like what we'd experienced with smaller clients. A single product launch involved:

  • 27 different stakeholder approvals across legal, marketing, regional teams, and brand guidelines

  • Content synchronization across 15 markets with different regulatory requirements

  • Asset versioning that needed to track not just what changed, but who approved each change and why

  • Publishing coordination that involved scheduling content across time zones with dependencies between markets

  • Compliance tracking that required audit trails for every content decision

Our "flexible approval workflows" feature had been designed for simple linear approval processes. Content creator → editor → publisher → live. Maybe with a branch for legal review on sensitive content.

Nike needed a workflow system that could handle parallel approvals, conditional dependencies, market-specific requirements, and rollback procedures that preserved audit trails. What we called "workflow flexibility" was actually workflow complexity that we'd never encountered.

The Scale Problem We Didn't Anticipate

The numbers were overwhelming, but not in the ways we'd expected:

  • 50,000+ assets in active use (we'd tested with maybe 1,000)

  • 300+ concurrent users across different time zones (we'd tested with 10-20)

  • 15 different content publishing destinations (we'd assumed 1-2 websites)

  • 12-month content approval cycles (we'd assumed days or weeks)

  • 24/7 uptime requirements across global markets (we'd assumed business hours)

But the real challenge wasn't the raw numbers - it was the interdependencies. When Nike's Tokyo team scheduled a product announcement, it affected content planning in New York, compliance reviews in Amsterdam, and marketing campaigns in São Paulo.

Our system was built for independent content management. They needed orchestrated global content operations.

The Integration Reality

"Can Twill integrate with our existing systems?" had seemed like a straightforward technical question. We'd built APIs, supported webhooks, and designed for extensibility.

Then I saw their integration requirements list:

  • Global asset management system (DAM with petabytes of brand assets)

  • Enterprise resource planning (connecting product launches to inventory and sales systems)

  • Marketing automation platforms (coordinating content with campaign systems across regions)

  • Legal document management (ensuring compliance across different regulatory environments)

  • Analytics and performance tracking (measuring content effectiveness across markets and channels)

  • Translation management systems (coordinating multilingual content with professional translation services)

Each integration was technically feasible individually, but together they created a complexity web that our architecture wasn't designed to handle. We weren't just managing content - we were becoming the central nervous system for Nike's global digital presence.

The Permissions Nightmare

Our role-based permissions system had felt sophisticated when we designed it. Content creators, editors, publishers, administrators - clean categories with clear hierarchies.

Nike's permissions requirements broke that model completely:

  • Geographic restrictions: Tokyo editors could create content but only for Asian markets

  • Product category limitations: Footwear content managers couldn't access apparel systems

  • Time-based permissions: Campaign content was only editable during specific approval windows

  • Conditional access: Legal reviewers could only see content that had passed initial brand compliance

  • Audit requirements: Every permission change needed to be traceable for compliance reporting

We ended up building what was essentially a custom identity and access management system within our CMS, just to handle their permission complexity.
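To give a flavor of those layered checks, here is a heavily simplified sketch using a Laravel gate; the attribute names and rules are hypothetical stand-ins, not Nike's actual permission model:

```php
<?php

use Illuminate\Support\Facades\Gate;

// Hypothetical gate: a user may edit a piece of content only if every
// dimension lines up: geography, product category, and the approval window.
Gate::define('edit-content', function ($user, $content) {
    $inRegion   = in_array($content->market, $user->allowed_markets ?? [], true);
    $inCategory = in_array($content->category, $user->allowed_categories ?? [], true);
    $inWindow   = now()->between($content->edit_window_start, $content->edit_window_end);

    return $inRegion && $inCategory && $inWindow;
});
```

Each of these checks is simple on its own; the audit trails, conditional access rules, and compliance reporting layered on top are what turned it into an identity and access management system of its own.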

The Performance Assumptions That Failed

We'd optimized Twill for fast content creation and editing. Our performance testing focused on how quickly editors could build pages, upload assets, and publish content.

Nike's performance requirements were completely different:

  • Global CDN coordination: Content changes needed to propagate consistently across dozens of edge locations

  • High-availability failover: System downtime in any region could affect global campaign launches

  • Concurrent editing: Hundreds of users working on interdependent content simultaneously

  • Batch processing: Migrating thousands of legacy assets while maintaining system responsiveness

  • Real-time synchronization: Content changes in one market needed to notify dependent teams instantly

Our "fast CMS" was fast for individual interactions, but we'd never tested it as mission-critical infrastructure for a global operation that never sleeps.

The Mobile-First Reality Check

"Our content teams work globally and need mobile access," seemed like a straightforward responsive design requirement.

The reality was more complex:

  • Content creation on tablets during trade shows and events

  • Approval workflows happening during international flights with intermittent connectivity

  • Asset management from mobile devices in environments where laptops weren't practical

  • Real-time collaboration across teams using different devices and connection speeds

  • Offline capability for content review in locations with poor internet access

Our mobile-responsive admin interface became a full mobile-first content management platform with offline synchronization and touch-optimized editing workflows.

The Support Expectations Gap

We'd been providing community support through GitHub issues and Discord channels. Responsive, helpful, but ultimately best-effort support for an open source project.

Enterprise support meant:

  • 24/7 technical availability across global time zones

  • Dedicated account management with understanding of their specific business context

  • Priority bug fixes with guaranteed response times

  • Custom feature development integrated into their deployment cycles

  • Training and documentation for their global teams

  • Disaster recovery planning and business continuity support

We weren't just providing a product anymore - we were providing enterprise-grade service infrastructure.

What We Actually Built

The Nike engagement transformed Twill from a flexible CMS into an enterprise content platform:

  • Advanced workflow engine that could handle complex, parallel approval processes

  • Enterprise integration architecture designed for mission-critical system dependencies

  • Granular permissions system with audit trails and compliance reporting

  • Global performance optimization with CDN coordination and failover systems

  • Mobile-first editing platform with offline capabilities

  • Enterprise support infrastructure with dedicated resources and SLA commitments

But the most important thing we built was a different understanding of what "enterprise-ready" actually means.

The Lesson About Enterprise Product Development

The Nike engagement taught me that enterprise clients don't just need more of what smaller clients need - they need fundamentally different approaches to the same problems.

Scale isn't just bigger numbers - it's qualitatively different challenges that require different architectural approaches.

Flexibility isn't just more configuration options - it's the ability to integrate into complex organizational systems and processes.

Enterprise requirements aren't just feature additions - they're constraints that affect every aspect of system design and operation.

Success metrics change completely - from "how fast can you build content?" to "how reliably can you coordinate global operations?"

The most valuable part of working with Nike wasn't the revenue or the prestigious client reference. It was discovering the gap between our assumptions about enterprise needs and the reality of enterprise operations.

This experience fundamentally changed how we approached product development. Instead of building features we thought enterprises would need, we started building infrastructure that could adapt to enterprise requirements we couldn't anticipate.

The best enterprise products aren't just scaled-up versions of simpler tools - they're designed from the ground up to handle the complexity, interdependency, and operational requirements that only become visible when you're actually running mission-critical systems at global scale.

Nike didn't just buy our CMS - they taught us what our CMS needed to become. The client that broke our assumptions became the client that made us build something truly enterprise-ready.


The Feature That Everyone Requested But Nobody Actually Used

"Bulk operations would be a game-changer," read the GitHub issue with 47 upvotes and enthusiastic comments from developers across different projects. "We need to be able to select multiple content items and perform actions on them all at once - publish, unpublish, delete, move between categories. This is essential functionality that every CMS should have."

The feature request was perfectly reasonable. The use cases were compelling. The community consensus was clear. We prioritized it for our next major release and spent six weeks building a comprehensive bulk operations interface.

Three months after launch, our analytics showed that less than 2% of Twill installations had ever used bulk operations. Even among the most active users, the feature was being used maybe once per month, if at all.

We'd built exactly what people asked for, but apparently not what they actually needed.

The Perfect Feature Request

The bulk operations request looked like product management gold. It had:

  • Clear problem definition: "Managing large amounts of content one-by-one is tedious"

  • Specific solution requests: "Select multiple items and perform batch actions"

  • Strong community support: Dozens of upvotes and detailed use case descriptions

  • Competitive parity: "Every major CMS has this functionality"

  • Multiple stakeholder validation: Requests came from both developers and content teams

During our planning sessions, everyone agreed this was obvious missing functionality. Our design team created beautiful mockups. Our engineering team was excited about the technical challenge. Even our client teams mentioned that bulk operations would improve content management efficiency.

The feature roadmap practically wrote itself.

The Implementation That Checked Every Box

We built a comprehensive bulk operations system that addressed every use case mentioned in the GitHub discussions:

  • Multi-select interface: Clean checkboxes with "select all" functionality

  • Batch publishing: Publish or unpublish multiple items simultaneously

  • Category management: Move content between categories in bulk

  • Tag operations: Add or remove tags across multiple items

  • Status updates: Change workflow status for entire content sets

  • Bulk deletion: Remove multiple items with proper confirmation dialogs

  • Export functionality: Download metadata for selected items

The feature was polished, well-documented, and exactly what the community had requested. The GitHub issue thread filled with positive reactions when we announced the release.
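The irony, in hindsight, is how small the core of the feature was relative to the six weeks of interface work around it. Reduced to a hedged Laravel sketch with hypothetical model and column names, batch publishing is roughly this:

```php
<?php

use App\Models\Article; // hypothetical content model
use Illuminate\Http\Request;
use Illuminate\Routing\Controller;

class BulkOperationsController extends Controller
{
    // Hypothetical action: flip the published flag on many items at once.
    public function publish(Request $request)
    {
        $validated = $request->validate([
            'ids'       => ['required', 'array'],
            'ids.*'     => ['integer', 'exists:articles,id'],
            'published' => ['required', 'boolean'],
        ]);

        Article::whereIn('id', $validated['ids'])
            ->update(['published' => $validated['published']]);

        return back()->with('status', count($validated['ids']) . ' items updated.');
    }
}
```

The interesting part of the story isn't this code. It's that almost nobody ran it.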

The Launch That Should Have Been Triumphant

The bulk operations announcement generated significant excitement. The feature was prominently highlighted in our release notes, demo videos, and documentation updates. Early feedback from beta users was positive - they confirmed the functionality worked as expected and solved the problems they'd described.

But something odd happened in the weeks following release: almost no one was actually using it.

Our usage analytics showed:

  • 98% of installations never used bulk operations

  • Active users averaged 0.3 bulk operations per month

  • Most bulk actions were performed on fewer than 5 items

  • The export functionality went completely unused

Support tickets weren't complaining about bugs or missing functionality. Users weren't reporting problems with the feature. They just... weren't using it.

The Reality Check Conversations

Confused by the disconnect between enthusiasm and adoption, I started having direct conversations with users who had originally requested bulk operations.

"Oh, bulk operations - yeah, that looks great!" said one content manager from a client project. "We haven't needed to use it yet, but it's good to know it's there."

"When do you think you'll need it?" I asked.

"Well, when we have to manage large batches of content, obviously."

"How often does that happen?"

Long pause. "You know, now that I think about it... not that often. Most of our content updates are pretty specific. We're usually working on individual pieces."

This conversation repeated across multiple user interviews. People loved the concept of bulk operations, but their actual content workflows rarely involved managing large groups of similar items simultaneously.

The Workflow Reality Gap

The deeper issue became clear during a screen-sharing session with the International Energy Agency's content team. They had thousands of articles, reports, and data sheets - exactly the scenario where bulk operations should be valuable.

I watched their content manager work through updating publication status for a batch of reports. Instead of using our bulk operations feature, she was going through items individually, making different decisions for each one based on context I couldn't see.

"Why not select them all and update the status at once?" I asked.

"Because they're all different," she explained. "This one needs a different publication date. That one is missing required metadata. This one needs review from a different department. They look similar in the system, but they all need slightly different handling."

The bulk operations feature assumed that content management was like email management - lots of similar items that could be processed identically. But content isn't email. Each piece has unique context, requirements, and stakeholder considerations that resist batch processing.

The Feature vs. Workflow Mismatch

Our usage analytics revealed the fundamental problem. When users did perform bulk operations, they were typically:

  • Deleting test content during initial setup

  • Moving demo articles into proper categories

  • Cleaning up imported data from migrations

  • Managing temporary content created for specific campaigns

These were all maintenance tasks, not core content creation workflows. The feature was being used for housekeeping, not for the day-to-day content management that people had described in their requests.

The real content workflows looked different:

  • Individual review cycles: Each piece of content went through unique approval processes

  • Contextual publishing: Publication timing was based on specific campaign or project needs

  • Custom categorization: Content organization required understanding the specific topic and audience

  • Relationship management: Content decisions involved considering connections to other content that wasn't obvious in bulk views

The Community Feedback Loop Failure

The most troubling realization was how the GitHub discussion had created a false consensus. The feature request attracted people who thought bulk operations sounded useful, but it didn't validate whether their actual workflows would benefit from the feature.

Comments like "this would save so much time" and "essential for content management" sounded compelling, but they were hypothetical. People were imagining scenarios where they'd use bulk operations, not reporting frequent situations where they actually needed them.

The social dynamics of GitHub made the problem worse. Once a feature request gained momentum, people felt pressure to agree that it was important. Disagreeing with a popular feature request felt like being obstructionist or not understanding user needs.

Meanwhile, the people who might have pointed out workflow mismatches - content managers dealing with complex, contextual publishing decisions - weren't actively participating in GitHub discussions about CMS features.

What We Actually Learned

The bulk operations experience taught me several crucial lessons about community-driven product development:

Vocal requests don't equal actual needs: The people most likely to comment on feature requests aren't necessarily representative of typical usage patterns.

Hypothetical enthusiasm differs from practical adoption: Users can genuinely believe they need a feature without actually having workflows that would use it regularly.

Edge case optimization can miss core workflows: Focusing on scenarios where bulk operations would be useful (data cleanup, migrations) led us to overestimate how often those scenarios occurred.

Social proof creates false validation: Once a feature request gains community support, it becomes difficult to question whether the underlying need is as common as it appears.

Usage context matters more than functionality: People don't just need the ability to perform bulk operations - they need workflows where bulk operations make sense.

The Pivot That Actually Helped

Instead of doubling down on bulk operations, we focused on improving individual content management workflows:

  • Better content previews so users could make decisions faster

  • Contextual action shortcuts for common individual tasks

  • Workflow automation for repetitive individual operations

  • Content relationship visibility to help with contextual decisions

These improvements addressed the actual time-wasting parts of content management - the friction in handling individual items - rather than trying to batch-process inherently contextual work.

The Lesson About Community Product Management

The bulk operations experience fundamentally changed how I evaluate feature requests, especially popular ones. Now I ask different questions:

  • How often do you encounter this problem? (Not "would this be useful?")

  • Show me the last time you needed this (Not "can you imagine scenarios?")

  • What do you currently do instead? (Understanding existing workflows)

  • Who else is involved in these decisions? (Identifying stakeholders who aren't in GitHub)

The most dangerous feature requests are the ones that sound obviously useful to everyone. They bypass critical thinking because they feel like no-brainer improvements. But "obviously useful" and "frequently needed" are completely different things.

The best features aren't the ones that get the most enthusiastic community support - they're the ones that solve problems people encounter regularly in their actual workflows. Sometimes that means building boring improvements that nobody gets excited about, but that make daily work incrementally better.

Bulk operations taught me that community consensus can be wrong, not because people are lying or confused, but because the social dynamics of feature discussions can create false validation for hypothetical needs that don't translate to real usage patterns.

The goal isn't to build features that sound great in GitHub comments - it's to build features that become indispensable in daily workflows.


Balancing Product Strategy with Operations: Lessons from Managing Twill and Client Delivery

"Can we add multi-language support to the next release?" came the GitHub comment on a Tuesday morning. It was a well-researched request from our growing developer community, backed by clear use cases and potential to expand Twill's market reach.

I was in the middle of reviewing quarterly production metrics when the notification came in. As someone responsible for both Twill's product strategy and AREA 17's operational excellence, this was exactly the kind of strategic decision that required balancing multiple stakeholder needs: our open source community, our internal team capacity, and our client delivery commitments.

This is the reality of senior product management in a services company: every product decision exists within a broader business context, and success requires orchestrating resources across competing priorities without compromising either.

Strategic Resource Allocation

My role required thinking about Twill not as a separate project, but as a strategic asset that needed to complement and enhance our client delivery capabilities. This meant making product decisions through multiple lenses simultaneously.

When Nike needed custom block functionality for their campaign launch, the question wasn't just "how do we build this for Nike?" but "how do we build this in a way that strengthens Twill's core value proposition while delivering exceptional client results?"

The International Energy Agency's performance requirements weren't just a client deliverable - they were an opportunity to stress-test Twill's scalability and identify improvements that would benefit our entire user base.

Every client engagement became a product development opportunity, and every product enhancement became a competitive advantage for client work. The challenge was managing this integration without letting either side compromise the other.

Stakeholder Alignment Across Communities

Managing Twill meant serving multiple stakeholder groups with different needs and communication styles:

Open source community: Expected transparency, regular communication, and responsive engagement with feature requests and bug reports.

Internal development team: Needed clear priorities, realistic timelines, and alignment between client work and product development.

Client teams: Required dedicated attention, confidential handling of their specific needs, and reliable delivery of custom functionality.

AREA 17 leadership: Expected both product innovation that differentiated our services and operational efficiency that supported business growth.

The key was establishing communication frameworks that served all groups without creating conflicts. This meant scheduled community engagement windows, transparent roadmapping that aligned with business cycles, and clear processes for integrating client feedback into product decisions.

Turning Constraints into Strategic Advantages

Rather than viewing client delivery commitments as constraints on product development, I learned to leverage them as validation opportunities and feature drivers.

When we had three major client launches simultaneously - Nike's campaign, IEA's performance optimization, and an enterprise client's advanced permissions requirements - this became a strategic opportunity to battle-test Twill's capabilities across different use cases and scales.

The client work provided real-world validation of product decisions, immediate feedback on feature implementations, and funding for development that might otherwise be difficult to justify. The product work provided competitive differentiation for client pitches and operational efficiency improvements for delivery teams.

This integration required careful planning. We couldn't let client urgency drive product decisions without strategic consideration, but we also couldn't let product perfectionism delay client deliverables. Success required balancing short-term delivery needs with long-term product vision.

Building Scalable Product Operations

The dual responsibility taught me to build product management processes that could scale across both community-driven and client-driven development:

Integrated roadmapping: Product planning that accounted for both community feature requests and anticipated client needs, allowing us to sequence development for maximum efficiency.

Community-driven validation: Using our open source community as a testing ground for features before deploying them in client environments, reducing risk while increasing confidence.

Strategic feature prioritization: Evaluating requests not just on technical merit, but on their potential to serve both community growth and client success.

Resource optimization: Finding ways to make client-funded development benefit the broader product, and product improvements benefit client delivery.

Risk management: Building processes that protected both client confidentiality and community trust, ensuring neither stakeholder group was compromised by the other.

Measuring Success Across Multiple Dimensions

Success in this dual role required tracking metrics that reflected both product and operational excellence:

Product growth: GitHub contributors grew from 5 to 150+, installs reached 100,000+, and community engagement increased consistently.

Client satisfaction: AREA 17 projects using Twill increased by 30%, client retention improved, and Twill became a competitive differentiator in new business pitches.

Team efficiency: Internal development velocity improved as Twill standardized common client needs, reducing custom development overhead.

Strategic positioning: Twill's reputation enhanced AREA 17's market position while client work provided real-world validation of product decisions.

The Strategic Value of Integration

The experience taught me that the most effective product management happens when you understand the broader business context and can align product decisions with organizational success. Rather than managing Twill in isolation, integrating it with client delivery created unique strategic advantages:

  • Real-world validation of product features before community release

  • Client-funded development that benefited the entire user base

  • Competitive differentiation that supported business development

  • Operational efficiency improvements that enhanced delivery quality

  • Community growth that attracted top talent and industry recognition

Lessons for Product Strategy

Managing both product development and operational delivery reinforced several key principles:

Context-aware prioritization: The best product decisions account for resource constraints, market timing, and strategic positioning, not just user feedback.

Stakeholder orchestration: Success requires balancing the needs of multiple groups without compromising the core value proposition for any of them.

Strategic patience: Sometimes the right product decision means saying no to good features in order to focus on great outcomes.

Integration thinking: The most powerful product strategies find ways to align development efforts with business objectives rather than treating them as competing priorities.

Sustainable growth: Long-term success requires building systems and processes that can scale, not just shipping features quickly.

The dual responsibility of product management and operational leadership taught me that the best product managers don't just build features - they build strategic value that serves multiple stakeholder groups while advancing organizational objectives. It's complex, but it's also where the most impactful product decisions get made.


The Browser Field That Launched a Thousand Support Tickets

"We need a way for content editors to select related articles when they're writing posts," the client requested during our initial scoping call. "Something simple—just browse and pick the articles they want to feature."

Simple, right? How hard could it be to build a content browser that lets users select related items?

Three months later, that "simple" browser field had generated more support tickets than every other Twill feature combined.

The Feature Request That Seemed Obvious

The request made perfect sense. Content editors needed to link articles, reference related products, highlight featured content. Instead of making them type in IDs or URLs, we'd build a beautiful browser interface where they could visually search, filter, and select the content they wanted to reference.

During development, it worked beautifully. Click a button, see a modal with thumbnails of all your content, search by title, filter by category, select what you need. Clean, intuitive, exactly what modern users expect from content management interfaces.

The initial feedback was great. "This is so much better than the old system where we had to remember article IDs," one content manager told us. "It's like browsing Netflix, but for our own content."

We shipped it feeling confident. Finally, a feature that truly simplified content editors' workflows.

The Support Ticket Avalanche Begins

The first wave of tickets seemed like edge cases:

"Browser field shows duplicate items" (Issue #45)

"Can't find recently published articles in browser" (Issue #78)

"Browser search returns irrelevant results" (Issue #91)

Each ticket was polite, specific, and completely reasonable. Content editors were trying to do exactly what the feature was designed for, but encountering frustrating edge cases that made the "simple" browser feel broken.

The deeper I dug into these tickets, the more I realized we'd built a feature that worked great for our test content but fell apart with real-world data complexity.

The Taxonomy Nightmare

The International Energy Agency project revealed the first major problem. They had thousands of articles organized across multiple taxonomies: publication type, geographic region, energy sector, publication date, language, and internal department.

When their content editors opened our browser field, they were overwhelmed with choice. Thousands of articles, minimal filtering options, and no clear way to navigate the organizational structure they'd spent years developing.

"I know the article I want exists," one of their editors told me during a support call, "but I can't figure out how to find it in this interface. Can I just type in the URL instead?"

The browser that was supposed to simplify content selection had become more complex than manually entering references. They knew their content intimately—they'd written most of it—but our interface forced them to navigate it like strangers.

The Performance Disaster

The second wave of problems hit when clients with large content volumes started using the browser field heavily. What worked fine with a few hundred test articles became unusable with ten thousand real articles.

Nike's campaign pages needed to reference hundreds of products, athlete profiles, and marketing assets. Opening the browser field with their full content library took 15-20 seconds to load. Searching was slow. Filtering was slower. The interface that was supposed to speed up content creation was becoming the bottleneck.

"We love the concept," their content coordinator said, "but it's faster to just copy and paste URLs at this point. The browser is too slow to be useful."

We'd optimized for the wrong thing. Instead of building for fast content selection, we'd built for comprehensive content display. Every time someone opened the browser, we were loading thumbnails, metadata, and preview information for thousands of items they'd never select.
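To make that mismatch concrete: "comprehensive content display" means loading the whole library into the modal up front, while "fast content selection" means letting the database return one small, filtered page at a time. A hedged sketch of the latter, with hypothetical names standing in for the real browser endpoint:

```php
<?php

use App\Models\Article; // hypothetical content model
use Illuminate\Http\Request;

// Hypothetical browser endpoint: return one small page of lightweight rows,
// filtered in SQL, instead of thumbnails and metadata for the entire library.
function browse(Request $request)
{
    return Article::query()
        ->when($request->input('q'), function ($query, $term) {
            $query->where('title', 'like', "%{$term}%");
        })
        ->orderByDesc('published_at')
        ->paginate(25, ['id', 'title', 'published_at']);
}
```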

The Mental Model Mismatch

The deeper issue became clear during a screen-sharing session with a client. I watched their content editor search for a specific article about renewable energy policy.

She tried searching for "renewable" - got 847 results. She tried filtering by "policy" - got 1,203 results. She tried combining both - got 156 results, including articles about completely different topics that happened to mention renewable energy policies in passing.

"I know exactly which article I want," she said, scrolling through pages of search results. "It was published last month, written by Sarah, about the new EU regulations. But I can't figure out how to tell your search that."

The problem wasn't our search algorithm - it was our understanding of how content editors think about their own content. They don't think in terms of keyword matching and metadata filtering. They think in terms of context, relationships, and recent work patterns.

She remembered who wrote it, when it was published, what project it was part of, and why it was relevant to her current article. But our browser field only understood titles, tags, and categories.

The Context Collapse

The worst realization was that we'd stripped away all the contextual information that made content selection intuitive. In their normal workflow, content editors knew:

  • What they'd worked on recently

  • What their colleagues were publishing

  • What was trending or getting attention

  • What fit their current project's theme

  • What their audience typically engaged with

Our browser field reduced all content to equal thumbnails in an alphabetical grid. A breaking news article from yesterday looked the same as an archived piece from three years ago. A popular tutorial had the same visual weight as an internal memo that accidentally got published.

"Everything looks the same in this interface," one editor complained. "I can't tell what's important, what's recent, or what's actually relevant to what I'm writing."

The Feature Request Explosion

As users struggled with the basic browser functionality, they started requesting features to work around its limitations:

"Can you add recent items to the top?" "Can we see view counts or popularity metrics?" "Can you remember what I selected last time?" "Can we organize by project instead of alphabetically?" "Can you show related content suggestions?" "Can we save commonly used content as favorites?"

Each request made sense individually, but together they revealed that we'd built a feature that was fundamentally misaligned with how content editors actually work.

We were getting requests to rebuild the browser field into something that understood content relationships, work patterns, and editorial context - basically, to replace the simple browser with a sophisticated content recommendation engine.

What We Actually Built Instead

The solution wasn't to add more features to the browser field. It was to replace it with multiple, more focused selection tools:

Recent content widget: Showed the last 20 pieces of content the user had worked with, since that's what they selected 80% of the time anyway.

Project-based browsing: Let editors browse content by the projects or campaigns they were familiar with, instead of by abstract categories.

Smart search with context: Search that understood "Sarah's article about EU regulations from last month" instead of just matching keywords.

Quick reference tools: Simple ways to grab content URLs for editors who knew exactly what they wanted and just needed to link to it quickly.

Contextual suggestions: When editing an article about renewable energy, show related content about renewable energy without making editors search for it.

Bulk selection tools: For editors who needed to select many related items, provide tools that understood content relationships instead of making them click through individual items.
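As an illustration of how much smaller the replacement pieces were, the recent content widget was, at its core, one scoped query. A hedged sketch with a hypothetical content model and an assumed updated_by column:

```php
<?php

use App\Models\Article; // hypothetical content model

// Hypothetical query behind the recent-content widget: the last 20 items this
// editor touched, newest first, which covered most selections in practice.
function recentContentFor(int $userId)
{
    return Article::query()
        ->where('updated_by', $userId)
        ->orderByDesc('updated_at')
        ->limit(20)
        ->get(['id', 'title', 'updated_at']);
}
```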

The Lesson About Simple Requests

The browser field disaster taught me that "simple" feature requests often hide complex workflow requirements. When someone asks for a "simple content browser," they're not asking for a generic interface - they're asking for a tool that understands how they think about their specific content in their specific context.

The most dangerous feature requests are the ones that sound obviously useful. Everyone needs to browse content, right? But the devil is in how different users browse differently, based on their role, their content volume, their organizational structure, and their mental models.

We'd built a feature for the abstract concept of "content browsing" instead of for the specific reality of how content editors actually find and select content in their daily work.

These days, when someone requests a "simple" interface for complex workflows, I dig deeper into the specific scenarios they're trying to support. Not just what they want to do, but how they currently do it, what information they use to make decisions, and what their biggest frustrations are with existing approaches.

The best interfaces aren't the most comprehensive ones - they're the ones that match how users actually think about their work. Sometimes that means building three focused tools instead of one flexible browser. Sometimes that means saying no to requests that sound reasonable but would create more problems than they solve.

The browser field that launched a thousand support tickets taught us that simple requests deserve complicated questions before they become complicated features.


Preview vs Reality: The Block Editor's Identity Crisis

"This looks nothing like what I built in the editor."

The client's content manager was sharing her screen, showing me a beautifully crafted page layout in Twill's block editor alongside the completely different-looking frontend that had just gone live. The spacing was wrong, the typography didn't match, and the carefully arranged visual hierarchy had been flattened into a generic blog post layout.

"Did something break during deployment?" she asked.

I stared at both versions—the polished preview in our editor and the bland reality on the live site—and realized we'd built a CMS with multiple personality disorder.

The WYSIWYG Lie We Told Ourselves

When we designed Twill's block editor, we made a decision that felt obviously correct: make the editing experience look as much like the final output as possible. WYSIWYG (What You See Is What You Get) was the gold standard, right? Content creators should be able to see exactly what they're building while they build it.

So we invested heavily in making our editor previews beautiful. Rich typography, accurate spacing, proper image sizing, color-accurate backgrounds. The editing experience felt sophisticated and professional—like using a high-end design tool instead of a traditional CMS.

The problem was that we were showing people a fiction.

The editor preview was rendering content using our carefully crafted admin CSS, optimized for the editing experience. But the frontend was using completely different stylesheets, optimized for performance, accessibility, and responsive behavior across dozens of devices and browsers.

Those two CSS systems had different priorities, different constraints, and—inevitably—different visual results.

The GitHub Issues That Revealed the Truth

Within six months, our most persistent support requests all had the same theme:

"Block editor preview doesn't match frontend output" (Issue #124)

"Typography in editor vs site is inconsistent" (Issue #201)

"Image sizing works in preview but breaks on mobile" (Issue #289)

"Editor shows perfect spacing but frontend is cramped" (Issue #367)

Each ticket was a small tragedy of broken expectations. Content creators would spend hours perfecting a layout in the editor, only to discover that the live site looked completely different. They'd adjust content based on how it appeared in our preview, making decisions that were wrong for the actual user experience.

The worst part was that both versions were "correct" within their own contexts. The editor preview accurately reflected how content would look with admin CSS. The frontend accurately reflected how content would look with production CSS. They were just two completely different visual systems pretending to be the same thing.

The Nike Reality Check

The breaking point came during a major campaign launch for Nike. Their creative team had built an elaborate landing page using Twill's block editor, carefully balancing image sizes, text positioning, and white space to create a specific visual narrative.

The preview looked stunning—magazine-quality layout, perfect typography hierarchy, images that told a cohesive brand story.

Then we deployed to production.

The frontend used Nike's design system, which had different font stacks, different spacing scales, different responsive breakpoints, and different image optimization settings. What looked like a carefully crafted visual story in the editor became a generic content dump on the live site.

"This isn't what we designed," their creative director said during our post-launch review. "We spent weeks perfecting this in your editor, but none of that work translated to the actual website."

She was right. We'd given them a design tool that couldn't actually control the design. We'd shown them a preview that bore no resemblance to reality.

The Impossible Balance

The core problem was that we were trying to serve two masters with incompatible needs:

Content creators wanted predictability: When they arranged content in the editor, they wanted to know how it would look to end users. They needed to make decisions about content hierarchy, image selection, and text flow based on visual feedback.

Developers needed flexibility: Frontend implementations had to work across devices, meet accessibility standards, integrate with existing design systems, and perform well under real-world conditions.

These requirements were fundamentally at odds. A preview that accurately reflected responsive behavior would be too complex for content editing. A preview optimized for content creation couldn't accurately represent frontend constraints.

Our attempts to split the difference satisfied neither group. Content creators couldn't trust the preview to make good content decisions. Developers had to constantly debug why editor layouts broke on the frontend.

The Configuration Nightmare

Our first fix was to make editor styling configurable. Let developers import their frontend CSS into the editor preview so the two would match perfectly.

This created new problems:

Performance disaster: Frontend CSS was optimized for public sites, not editing interfaces. Loading it in the admin made the editor slow and bloated.

Responsive chaos: Frontend CSS assumed specific viewport sizes and device contexts that didn't exist in the editing interface. Mobile-first CSS broke desktop editing workflows.

Conflicting styles: Admin interface styles clashed with frontend styles, creating visual bugs and broken layouts in the editor itself.

Maintenance nightmare: Every frontend CSS change required testing in both the public site and the admin editor, doubling the QA overhead.

The OpenAI project made this painfully clear. Their design system was built for high-performance public pages, not content editing interfaces. When we tried to use their CSS for editor previews, the editing experience became so slow and buggy that content creation was nearly impossible.

The Mental Model Shift

The solution came from changing what we were trying to accomplish. Instead of making the editor look exactly like the frontend, we focused on giving content creators the information they needed to make good decisions.

Content structure over visual design: The editor preview showed content hierarchy, relative sizing, and layout relationships without trying to match exact typography or colors.

Responsive indicators: Instead of showing how content would look on every device, we added simple indicators for content that might have responsive behavior.

Frontend preview integration: Built tools that let content creators easily preview their work on the actual frontend without leaving the editing workflow.

Layout validation: Added warnings when content arrangements were likely to cause problems on the frontend (like text that would be too long for mobile, or images that wouldn't work at different aspect ratios).

Style-agnostic editing: Designed editor interfaces that focused on content relationships rather than visual styling, so creators could build effective layouts regardless of how they'd be styled.
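The layout validation piece, in particular, amounted to cheap rule-of-thumb checks rather than anything clever. A hedged sketch; the thresholds here are illustrative assumptions, not the values we shipped:

```php
<?php

// Hypothetical pre-publish lint: flag content choices that tend to break on
// the frontend, instead of pretending the preview can show the breakage.
function lintBlock(array $block): array
{
    $warnings = [];

    // Long headlines wrap badly at mobile widths.
    if (($block['type'] ?? '') === 'hero' && mb_strlen($block['headline'] ?? '') > 60) {
        $warnings[] = 'Headline is over 60 characters and may wrap awkwardly on mobile.';
    }

    // Very wide images get cropped hard at portrait aspect ratios.
    if (!empty($block['image_width']) && !empty($block['image_height'])
        && $block['image_width'] / $block['image_height'] > 3) {
        $warnings[] = 'Image is wider than 3:1 and may be cropped heavily on small screens.';
    }

    return $warnings;
}
```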

What Actually Worked

Honest previews: We stopped pretending the editor could show exactly what the frontend would look like. Instead, we showed a consistent, functional representation that helped creators understand content structure and flow.

Quick frontend access: Added one-click preview links that opened content in the actual frontend context, so creators could see real results without leaving their workflow.

Layout guidance: Built tools that warned about potential responsive issues, content length problems, or styling conflicts before they became frontend problems.

Template consistency: Created block templates that worked reliably across different design systems instead of trying to make every custom layout perfectly previewable.

Content-first design: Encouraged content creators to focus on information hierarchy and user needs rather than pixel-perfect visual control.

The Lesson About False Promises

The block editor identity crisis taught me that WYSIWYG is often a lie, especially in systems where content and presentation are truly separated. The goal shouldn't be to make editing look exactly like the final output—it should be to give content creators the information they need to build effective user experiences.

Users don't need perfect visual previews. They need reliable feedback about whether their content decisions are working. They need to understand how their choices affect the final user experience, even if they can't control every visual detail.

The most dangerous kind of preview is the beautiful one that bears no resemblance to reality. It encourages content creators to make decisions based on false information, leading to frustration when their carefully crafted layouts fall apart in production.

These days, when clients ask about Twill's preview capabilities, we're upfront about what the editor can and can't show them. We demo the actual frontend preview workflow, not just the editor interface. We talk about the difference between content structure and visual styling.

It makes for less impressive demos, but much better content creation experiences. Because the goal isn't to show people a beautiful lie—it's to give them useful tools for creating content that works well for their actual users.

The best editor preview isn't the most visually accurate one. It's the one that helps content creators make decisions that improve the real user experience, even when they can't see exactly what that experience will look like.
