
The Users You Forgot to Design For

The assumption nobody questions

Product design, as practiced and evaluated today, assumes a screen. A layout. A visual hierarchy. A user who sees, clicks, scrolls, and navigates a graphical interface.

This assumption is so deeply embedded that it has become invisible. Portfolios are collections of screens. Case studies are told through before-and-after interface comparisons. Hiring panels evaluate visual polish. Design systems are component libraries. The deliverable is the interface. The interface is the design.

But what happens when the user does not use a screen?

This is not a hypothetical. It is the everyday reality of a large and growing class of users who interact with software through terminals, API calls, CI/CD pipelines, and AI agents that call your product autonomously. These users never open a web app unless something has gone wrong. For them, the quality of a product is not how it looks — it is how it behaves when they cannot see it.

There is an entire design surface — CLIs, APIs, error messages, tool descriptions — that has users, constraints, and success criteria. Product design forgot to claim it.

The developer as user

Developers are among the most demanding users in existence. They have deep technical fluency, short patience for friction, strong opinions on consistency, and a ruthless instinct for workarounds. When something does not work, they do not file a support ticket — they write a script that works around the problem and move on.

For these users, the “user experience” is radically different from what most product designers are trained to optimize:

  • The error message when something fails. Is it actionable? Does it tell them what to fix, or does it say “something went wrong”?
  • The response time of an API call. Not as a performance metric — as a workflow decision. If the response takes two seconds, they will wrap it in a script. If it takes thirty, they need progress feedback.
  • The documentation. Not a help center with screenshots — a reference that answers “what does this parameter do” without requiring them to leave the terminal.
  • The consistency of the API. Do similar operations work the same way? Can they predict the behavior of an endpoint they have not used yet based on the ones they have?
  • The CLI feedback loop. Progress bars, exit codes, JSON output for piping. Not decoration — workflow machinery.

None of this is visual design. All of it is product design.

The scale of the gap is measurable. A 2024 survey of over 2,100 developers and engineering leaders found that developers lose an entire day each week to inefficiencies — friction in processes, unclear documentation, and unnecessary complexity.1 The problem is not that developers are unproductive. It is that the tools they depend on were never designed with the same rigor as the interfaces their customers see.

Why this matters now

Two forces are converging that make this moment important.

Developer tools are a massive market that designers largely ignore. The tools developers use daily — terminals, APIs, SDKs, CI/CD systems — are designed by engineers for engineers. This is not because design is unwelcome. It is because the design industry has defined itself around screens, and most developer tools do not have one.

The companies that figured this out first built enduring advantages. Stripe treated API design as product design: a formal cross-functional API Review process, prefixed IDs for instant object recognition, and error messages with embedded documentation links. That rigor was not incidental to their growth. It was the growth engine. Twilio applied the same thinking to developer onboarding: a redesign of their developer experience delivered a 62% improvement in first-message activation and a 33% improvement in production launches — measurable product impact on an interface that was never visual.2

These are not engineering decisions dressed up as design. They are design decisions — consistency, progressive disclosure, error handling, first-use experience — applied to a medium the design industry has not claimed.

AI agents are creating a new class of non-visual user. In When Your Product Becomes Someone Else’s Source, I wrote about what happens when AI systems consume your content as an intermediary — summarizing, repackaging, and citing your information without your control.

That is the content problem. This is the interface problem.

AI agents do not just read your content. They call your tools. They parse your API responses. They interpret your error messages. And the only documentation they have is the tool description you wrote — a few sentences that determine whether the agent uses your product correctly or fails silently.

Anthropic’s Model Context Protocol now sees 97 million monthly SDK downloads and powers over 16,000 indexed servers.3 These are not abstract infrastructure numbers. They represent millions of interactions where an AI agent is the user — and the interface is a tool description, a JSON response, and an error message.

The design implication is precise: “Developers treat MCP like a REST API wrapper. MCP is a User Interface for Agents. Different users, different design principles.”4 A good REST API is not automatically a good agent interface. REST principles like composability and flexibility help human developers but can confuse agents — composability means slow multi-step tool calls, and flexibility invites hallucination. The design challenge is new. The design skills it requires are not.
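The "outcomes, not operations" principle from footnote 4 can be sketched concretely. The tool names, descriptions, and schema below are illustrative assumptions, not a real API; the shape loosely follows MCP's JSON Schema tool definitions.

```python
# REST-style "operations" exposure: the agent must chain three calls and
# work out the correct order itself -- slow, and each step invites error.
operation_tools = [
    {"name": "create_order", "description": "Create an empty order."},
    {"name": "add_item", "description": "Add an item to an existing order."},
    {"name": "submit_order", "description": "Submit an order for processing."},
]

# Agent-oriented alternative: one high-level tool that encodes the outcome.
# Its description doubles as the only documentation the agent will ever read.
outcome_tool = {
    "name": "place_order",
    "description": (
        "Place a complete order in one call. Provide 'items' as a list of "
        "SKU strings. Returns the order id and a status of 'queued'. "
        "Fails with a machine-readable error code if any SKU is unknown."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {"items": {"type": "array", "items": {"type": "string"}}},
        "required": ["items"],
    },
}
```

Three predictable operations collapse into one tool the agent cannot sequence incorrectly, and the description carries the constraints that a human would otherwise find in the docs.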

What building BatchLab taught me

I spent ten weeks building BatchLab — a platform for content visibility — where the primary users were developers. They interacted with the product through a CLI, an API, and an MCP server. The web dashboard existed, but it was secondary. The real product was the terminal experience.

Building for that medium taught me things I did not learn building web interfaces. These are not exotic principles. They are standard product design concerns — applied to a medium most designers have never worked in.

Error messages are the most important interface. In a GUI, you can use color, layout, and context to soften a failure. In a terminal, the error message is all you have. A good error message is a micro-product: it describes what happened, why, and what the user can do about it. A bad one destroys trust in seconds.

The anatomy matters. An error like "Invalid API key provided in 'Authorization' header. Ensure your API key matches the one in your dashboard" takes ten seconds to write. It saves the developer hours. That is a product decision, not an engineering one.5
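That anatomy can be sketched in a few lines. The structure below loosely follows the Stripe-style error shape cited in footnote 5 (type, machine-readable code, human-readable message, offending param, doc_url); the concrete values and URL are invented for illustration.

```python
def format_error(err: dict) -> str:
    """Render a structured API error as an actionable terminal message."""
    parts = [f"Error [{err['code']}]: {err['message']}"]
    if err.get("param"):
        parts.append(f"  Offending field: {err['param']}")
    if err.get("doc_url"):
        parts.append(f"  Docs: {err['doc_url']}")
    return "\n".join(parts)

error = {
    "type": "authentication_error",
    "code": "invalid_api_key",  # machine-readable, for scripts and logs
    "message": "Invalid API key provided in 'Authorization' header. "
               "Ensure your API key matches the one in your dashboard.",
    "param": "Authorization",
    "doc_url": "https://docs.example.com/errors/invalid_api_key",
}

print(format_error(error))
```

The machine-readable `code` lets scripts branch on failures; the `message` and `doc_url` serve the human staring at the terminal. Designing both audiences into one payload is exactly the product decision described above.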

Consistency is the design system. For a developer using an API, the “design system” is not a component library — it is the predictability of how your API behaves. If GET /orders/{id} returns an object with a status field, and GET /tasks/{id} returns an object with a state field, you have a design inconsistency. It does not matter how clean your OpenAPI spec looks. The developer will notice, and they will lose trust in the system’s coherence.
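The status/state mismatch above is mechanically detectable. Here is a toy audit sketch; the endpoint schemas and the synonym set are invented for illustration, but the idea scales to a real OpenAPI spec.

```python
# Field sets per endpoint, as they might be extracted from an API spec.
schemas = {
    "GET /orders/{id}": {"id", "status", "created_at"},
    "GET /tasks/{id}": {"id", "state", "created_at"},  # inconsistent naming
}

# Fields that name the same concept and should be spelled the same way.
synonyms = {"status", "state"}

def audit(schemas: dict, synonyms: set) -> list:
    """Flag endpoints whose schemas use different members of a synonym set."""
    used = {ep: fields & synonyms for ep, fields in schemas.items()}
    variants = set().union(*used.values())
    if len(variants) <= 1:
        return []  # everyone agrees on one spelling
    return [f"{ep} uses {sorted(fs)}" for ep, fs in used.items() if fs]

for finding in audit(schemas, synonyms):
    print("Inconsistency:", finding)
```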

Progressive disclosure works differently. In a GUI, you hide complexity behind menus and accordions. In a CLI, you hide it behind subcommands and flags. batchlab order create is the simple path. batchlab order create --tier 3 --output json --webhook https://... is the advanced one. Same principle, different medium. The goal is identical: let beginners start quickly, and let power users go deep without friction.6
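The simple-path/advanced-path split above can be sketched with standard `argparse` subcommands. The `batchlab order create` command and its flags mirror the hypothetical example in the text; the defaults are assumptions.

```python
import argparse

parser = argparse.ArgumentParser(prog="batchlab")
subcommands = parser.add_subparsers(dest="command", required=True)

# `order` groups related actions; `create` is one of them.
order = subcommands.add_parser("order").add_subparsers(dest="action", required=True)
create = order.add_parser("create", help="Create an order (simple path)")

# Advanced flags stay out of the way: sensible defaults keep the simple
# path simple, and --help discloses the rest only when asked.
create.add_argument("--tier", type=int, default=1, help="Processing tier (default: 1)")
create.add_argument("--output", choices=["text", "json"], default="text",
                    help="Output format; json enables piping")
create.add_argument("--webhook", default=None, help="Completion callback URL")

# Simulate the advanced invocation from the text.
args = parser.parse_args(["order", "create", "--tier", "3", "--output", "json"])
print(args.tier, args.output)
```

The design choice is the defaults: a beginner types `batchlab order create` and gets something sensible; a power user layers on flags without a different command.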

Feedback loops are trust. A progress bar in a terminal is not a nice-to-have. When a developer runs a batch job that processes 500 files, they need to know: is it working? How far along is it? Should I wait or do something else? This is the same design problem as a loading spinner in a web app, but the stakes feel higher because the user is staring at a cursor with no other visual signal that the system is alive.
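A minimal sketch of that feedback loop: a progress line written to stderr, so stdout stays clean for piping. The file names and the "work" are placeholders.

```python
import sys

def process_files(paths):
    """Process files while keeping the user informed on stderr."""
    total = len(paths)
    for i, path in enumerate(paths, start=1):
        # ... real batch work would happen here ...
        # \r rewrites the progress line in place; writing to stderr keeps
        # stdout machine-readable for `batchlab ... | jq`-style piping.
        sys.stderr.write(f"\rProcessing {i}/{total} ({i * 100 // total}%)")
        sys.stderr.flush()
        yield path.upper()  # stand-in for a real per-file result
    sys.stderr.write("\n")

results = list(process_files(["a.txt", "b.txt", "c.txt"]))
print(results)
```

Separating the progress channel (stderr, for the human) from the data channel (stdout, for the next tool in the pipe) is the same audience split a GUI makes between a loading spinner and the content it eventually reveals.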

The best product demo is curl. When I demoed BatchLab to developers, the moment that clicked was not the Svelte dashboard. It was a curl command that sent a URL and got back structured Markdown. That is the equivalent of a hero image for a developer product — and no design tool can produce it. Designing that experience — what gets returned, how it is structured, what the first response looks like — is the most important first-impression design work for a developer product.

The skills transfer — the medium does not

Here is what I find genuinely surprising about working in this space: the UX research skills transfer almost perfectly.

You observe developers working. You watch them interact with their terminal, their editor, their CI pipeline. You identify friction — where they pause, re-read documentation, say “wait, that is not what I expected.” You listen for workarounds. When developers write wrapper scripts around your CLI, that is the equivalent of a user bookmarking a deep link because your navigation does not work. It is a workaround, and workarounds are one of the richest signals in qualitative research.7

Five developer interviews produce the same kind of insight map as five end-user interviews. The inputs are different. The method is the same.

What does not transfer is the vocabulary. Design tools, design education, and design career frameworks are all built around screens. There is no “developer journey map” template in Figma. There is no “API consistency audit” checklist in most design teams’ toolkits. The theoretical foundations exist — Nielsen’s heuristics have been adapted to API design, Norman’s gulfs of execution and evaluation apply directly to undocumented endpoints, and the Cognitive Dimensions of Notations framework was built to evaluate exactly this kind of non-visual interface.8 The toolkit is there. It just has not been widely applied by people who call themselves designers.

The industry treats developer experience as an engineering concern. It is a design concern. We just forgot to claim it.

Beyond the visible interface

In Designing Operations, Not Just Interfaces, I argued that the invisible back-stage — workflows, exceptions, escalation paths — must be designed with the same rigor as the customer-facing interface.

In When Your Product Becomes Someone Else’s Source, I extended that argument to the intermediary layer — AI systems that consume and repackage your content without your control.

This post adds a third surface: the users who never see any of it. The developer integrating your API at midnight. The AI agent calling your tool description thousands of times a day.

The common thread: design does not stop at the screen.

Where to start

If you are a product designer who wants to work in this space, the entry points are practical:

  • Read API documentation as a user. Pick a product with a developer API — Stripe, Linear, GitHub — and integrate it from scratch. Notice where you get stuck, where the error messages fail you, where the documentation leaves gaps. You are doing UX research.
  • Observe a developer using their tools. Ask to shadow a colleague working in their natural environment — terminal, editor, CI pipeline. Apply the same observation discipline you would to any user research session. You will find friction you cannot find any other way.
  • Audit for consistency. Take any API you work near and ask: do similar operations behave the same way? Do the naming conventions hold across endpoints? Is the error structure consistent? This is a design audit. It just does not look like one.
  • Write an error message. Take the worst error message in a product you work on and rewrite it. Specific, human-readable, actionable. Ship it. Measure whether support requests about that error decrease. That is a design metric.

The interface is not disappearing. It is expanding beyond the screen. And the design skills required to work in that expanded space are the same ones that made good UX practice valuable in the first place: understanding users, reducing friction, building trust through consistent and predictable behavior.

If you are working in this space — or trying to figure out whether your team needs design thinking applied to developer experience — I would like to hear about it. Book a call or find me on LinkedIn.

Footnotes

  1. Atlassian (2024). State of Developer Experience Report. Survey of 2,100+ developers and engineering leaders. Key finding: developers lose approximately one day per week to process friction, unclear documentation, and tool inefficiencies.

  2. Twilio’s Developer Experience team built a formal DX Spectrum framework defining five maturity levels — Functional, Documented, Predictable, Seamless, and Magical. A redesigned developer onboarding experience delivered 62% improvement in first-message activation and 33% improvement in production launches.

  3. Anthropic’s Model Context Protocol (MCP), launched November 2024 as an open standard for connecting AI to external tools, reached 97 million monthly SDK downloads and 16,000+ indexed servers. In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation, co-founded with OpenAI and Block.

  4. Schmid, P. (2026). “MCP is a User Interface for Agents.” Core argument: a good REST API is not automatically a good MCP server, because agents and human developers have fundamentally different interaction patterns. Agent-facing design principles include “outcomes, not operations” (combine multi-step API calls into single high-level tools) and “instructions are context” (tool descriptions are the only documentation an agent reads).

  5. Thoughtworks published CLI design guidelines codifying that every error should include an error code, description, resolution steps, and a documentation URL. Stripe’s API errors follow a parallel structure: every response includes type, code (machine-readable), message (human-readable), param (the field causing the error), and doc_url (a direct link to relevant documentation).

  6. The Command Line Interface Guidelines at clig.dev — created by Aanand Prasad (Squarespace), Ben Firshman (co-creator of Docker Compose), and Carl Tashian — is the most comprehensive modern CLI design resource. Its opening principle: “If a command is going to be used primarily by humans, it should be designed for humans first.” GitHub extended their Primer design system to include formal CLI guidelines, treating terminal experiences as a first-class design surface.

  7. Workarounds are among the richest signals in user research regardless of medium. In the developer context, wrapper scripts, shell aliases, and custom tooling around a product’s CLI or API are the direct equivalent of bookmarks, browser extensions, and copied-into-spreadsheet patterns that UX researchers look for in end-user research.

  8. Mitra, R. (2017). Adapted Nielsen’s 10 usability heuristics to API design, reducing them to seven principles including visibility of system status, match between system and real world, and error prevention. Published in The New Stack. Green, T. R. G. (1989). Cognitive Dimensions of Notations — a framework of approximately 14 dimensions for evaluating notational systems, directly applicable to API and programming language design. Key dimensions include viscosity, hidden dependencies, premature commitment, and closeness of mapping.