The demo that always works
There is a moment in every product review where someone says: “The prototype looks great.” The screens are clean. The flow is logical. The stakeholders nod.
And then the product ships, and reality arrives.
The customer submits a request. But the request enters a system where assignments are manual, exceptions are handled in spreadsheets, and status updates depend on someone remembering to send an email. The interface promised clarity. The operations behind it deliver ambiguity.
This gap is not a bug. It is a design failure — but not the kind that shows up in a design review, because design reviews rarely include the operations layer.
The interface is the promise. Operations are whether you keep it.
Front-stage and back-stage
In 1984, G. Lynn Shostack published a short article in the Harvard Business Review that changed how we think about services.1 She was a banking executive, not an academic, and her argument was practical: services fail not because failure is inevitable, but because nobody designs them with the same rigor as products. She introduced the service blueprint — a visual map that makes the full service process explicit, including everything that happens behind what she called the “line of visibility.”
That line separates two worlds. The front-stage is everything the customer sees and interacts with. The back-stage is everything that happens behind the scenes to make the front-stage work.2
Most product teams design the front-stage. They research users. They prototype flows. They test interfaces. They iterate on copy and layout and interaction patterns.
Far fewer teams design the back-stage with the same rigor. Workflows, data entry, exception handling, permissions, audit trails, escalation paths, status logic — these are often treated as “implementation details” that will be figured out later.
Later is where products break.
What breaks when they are separated
I have seen the same failure pattern across industries — travel operations, insurance, energy, e-commerce, dispatch. The specific domain changes. The pattern does not.
Booking states that do not match reality. The interface shows “Confirmed.” Operations know the booking is pending a vendor response. The customer calls. Support cannot explain the discrepancy because they are looking at the same misleading interface.
Assignments without ownership. A task appears in a queue. Three people can see it. No one knows who is responsible. The task sits until someone escalates — which means the system has no default behavior for the most common scenario.
Exceptions that bypass the system. A non-standard request arrives. The interface has no path for it. The operator handles it in a side channel — email, chat, a personal spreadsheet. The resolution is invisible to the system. When the same exception recurs, the next operator starts from zero.
Data quality that degrades silently. Front-stage forms collect structured data. Back-stage processes modify it through unstructured channels. Over time, the database contains a mix of system-entered and manually-patched records. Reports become unreliable. Decisions based on those reports become unreliable.
Each of these is a case where the product was designed as an interface, not as a system. The front-stage was given attention. The back-stage was left to figure itself out.
Resilience engineering has a name for this: the gap between “work-as-imagined” and “work-as-done.”3 Managers and designers form mental models of how operations should work. Operators live in how operations actually work. The two are never the same, and the distance between them is where every failure pattern above takes root.
Why product teams miss this
There are structural reasons why product teams design the front-stage and neglect the back-stage.
The back-stage is not visible in research. User research typically focuses on end-users — customers, shoppers, travelers. The people who operate the back-stage (support agents, dispatchers, coordinators, operations managers) are often not included in research plans. Their workflows are treated as internal, not as part of the product.
The back-stage is not visible in demos. Prototypes show the customer-facing experience. They do not show what happens after the customer clicks “Submit.” The gap between the demo and reality is hidden by design — literally.
The back-stage crosses organizational boundaries. Front-stage design sits in the product team. Back-stage operations sit in ops, support, or logistics. The organizational boundary creates an information boundary. Product ships something. Operations adapts to it. The adaptation is never fed back into the design.
This is an old problem with a name. In the 1950s, researchers at the Tavistock Institute studied English coal mines where new technology had actually made things worse. Mechanization had broken apart the teams and social structures that made the work function. The lesson they drew still holds: work systems have a social side and a technical side, and you cannot optimize one without the other.4 They called this joint optimization — designing both subsystems together, because fixing one while ignoring the other degrades the whole.
Product teams do this constantly. They optimize the technical system (the interface) and leave the social system (the operations team, its roles, its coordination, its exception handling) to adapt on its own.
The back-stage is not rewarded. Nobody gets promoted for designing a better exception-handling workflow. The metrics that matter — conversion, engagement, satisfaction — measure the front-stage. The back-stage only becomes visible when it fails.
What designing operations actually means
Designing operations is not about making internal tools prettier. It is about treating the operational layer as a first-class design surface with its own users, its own constraints, and its own success criteria.
In practice, this means:
Map the real workflow, not the intended one. Before designing anything, watch how operations actually work. Not how the process document says they work — how people actually coordinate, escalate, patch, and work around the current system. The gap between the documented process and the real process is where the design problems live.5
Define states and transitions explicitly. Every entity in the system (an order, a booking, a case, a request) has states. Those states have transitions. Those transitions have rules — who can trigger them, under what conditions, with what consequences. If you do not define this explicitly, the team will define it implicitly, inconsistently, and expensively.
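To make this concrete, here is a minimal sketch of one way to make states, transitions, and permissions explicit in code. The entity (a booking), the state names, and the roles are all hypothetical, chosen for illustration, not taken from any particular system:

```python
from dataclasses import dataclass, field

# Hypothetical booking lifecycle. Each allowed transition names the roles
# that may trigger it, so the rules live in one explicit place instead of
# being scattered implicitly across the codebase.
TRANSITIONS = {
    ("requested", "pending_vendor"): {"system"},
    ("requested", "confirmed"): {"vendor", "ops"},
    ("pending_vendor", "confirmed"): {"vendor"},
    ("pending_vendor", "needs_review"): {"system"},  # e.g. vendor timeout
    ("confirmed", "cancelled"): {"customer", "ops"},
}

@dataclass
class Booking:
    state: str = "requested"
    history: list = field(default_factory=list)

    def transition(self, new_state: str, actor_role: str) -> None:
        allowed_roles = TRANSITIONS.get((self.state, new_state))
        if allowed_roles is None:
            raise ValueError(f"no transition {self.state} -> {new_state}")
        if actor_role not in allowed_roles:
            raise PermissionError(
                f"{actor_role} may not trigger {self.state} -> {new_state}"
            )
        # Record who moved the entity, from where, to where.
        self.history.append((self.state, new_state, actor_role))
        self.state = new_state
```

The point is not the implementation but the discipline: when an interface shows "Confirmed," it shows a state that only a defined actor could have produced through a defined transition, so the front-stage label and the back-stage reality cannot silently diverge.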
Build traceability in, not on. Operations under pressure need to know: what happened, when, and who did it. Audit trails are not a compliance checkbox. They are how operators build confidence that the system is trustworthy. If they cannot trust the system, they will build a shadow system — and the shadow system will win.
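"In, not on" can be as simple as routing every mutation through one append-only record. A minimal sketch, with an assumed entry schema (timestamp, entity, action, actor) that any real system would extend and persist durably:

```python
import time

class AuditLog:
    """Append-only trail of who did what, to which entity, when."""

    def __init__(self):
        self._entries = []  # in production: durable, append-only storage

    def record(self, entity_id: str, action: str, actor: str, details=None) -> dict:
        entry = {
            "ts": time.time(),
            "entity_id": entity_id,
            "action": action,
            "actor": actor,
            "details": details or {},
        }
        self._entries.append(entry)
        return entry

    def trail(self, entity_id: str) -> list:
        # Reconstruct what happened to one entity, in order.
        return [e for e in self._entries if e["entity_id"] == entity_id]
```

Because recording is part of the mutation path rather than an afterthought, "what happened, when, and who did it" is always answerable, which is exactly what operators need before they will trust the system instead of their spreadsheets.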
Design for exceptions, not just happy paths. The happy path is the easiest flow to design and the least common flow in operations. Most operational work is exception handling. A system that only handles the happy path forces operators to leave the system for every non-standard case — which means the system does not actually manage the work.
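One way to keep non-standard cases inside the system is to give them an explicit destination instead of an error. The request shapes and handler names below are invented for illustration:

```python
def route(request: dict, handlers: dict, review_queue: list) -> str:
    """Dispatch a request, capturing unhandled cases instead of dropping them."""
    handler = handlers.get(request.get("type"))
    if handler is None:
        # No happy path exists: record the exception *inside* the system,
        # with context, so the next operator does not start from zero.
        review_queue.append({"request": request, "reason": "unrecognized type"})
        return "needs_review"
    return handler(request)
```

The design choice is the fallback branch: an unrecognized request produces a visible, queryable work item rather than forcing the operator into email or a personal spreadsheet.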
Make the back-stage a stakeholder in the front-stage. Before committing to a customer-facing flow, ask: “Can operations actually deliver what this interface is promising?” If the answer is “probably” or “we will figure it out,” the front-stage is writing checks the back-stage cannot cash.
The strongest products I have worked on were not the ones with the best interfaces. They were the ones where the interface and the operations were designed as one system — so the product could keep its promises under real conditions.
The connection to trust
There is a direct line from operational design to product trust.
When a product promises something and delivers it reliably, users build trust. When a product promises something and the delivery is inconsistent — late confirmations, unclear statuses, contradictory information from support — trust erodes.
The erosion is not caused by the interface. The interface is fine. The erosion is caused by the gap between what the interface promised and what operations delivered.
Designing that gap out of the system is not a back-end engineering problem. It is a design problem. It requires someone who understands both the customer’s expectation and the operational reality, and who can shape the product so the two do not diverge.
Signals that your product has an operations design gap
- Support tickets frequently involve status confusion (“The app says X but actually it is Y”).
- Operators maintain personal spreadsheets, side channels, or workarounds alongside the official system.6
- Exception handling is undocumented and varies by person.
- Data quality issues are discovered downstream, after decisions have already been made.
- The product team and the operations team describe the same workflow differently.
- “It works in staging” is a common phrase, because staging does not include operational complexity.
If these are familiar, the fix is not a better interface. It is a design process that treats operations as part of the product — not as a downstream consequence of it.
If you are navigating the gap between what your product promises and what your operations deliver, I would like to hear about it. Book a call or find me on LinkedIn.
Footnotes

1. Shostack, G. L. (1984). Designing services that deliver. Harvard Business Review, January–February 1984. Shostack introduced service blueprinting as a way to make the full service process — including everything behind the line of visibility — explicit and designable.

2. The blueprint has evolved since Shostack. Bitner, M. J., Ostrom, A. L., & Morgan, F. N. (2008). Service blueprinting: A practical technique for service innovation. California Management Review, 50(3), 66–94. They expanded the blueprint into five layers separated by three lines: the Line of Interaction, the Line of Visibility, and the Line of Internal Interaction.

3. Hollnagel, E. (2014). Safety-I and Safety-II. Ashgate. Developed the “work-as-imagined” vs. “work-as-done” distinction in the context of resilience engineering. Shorrock, S. (2016). The Varieties of Human Work expanded this into four varieties — imagined, prescribed, disclosed, and done — with a useful taxonomy of how they diverge.

4. Trist, E., & Bamforth, K. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3–38. The foundational paper on sociotechnical systems theory. Also: Cherns, A. (1976). The principles of socio-technical design. Human Relations, 29(8), 783–792. Codified the design principles, including “variance control at source” — push problem-solving to the point where variability occurs, rather than escalating everything upward.

5. This is harder than it sounds. People rarely describe their real workflow accurately in interviews. They describe the prescribed one, or the one they think you want to hear. Observation and shadowing are better tools here. Shorrock (2016) calls the gap between what people say they do and what they actually do the difference between “work-as-disclosed” and “work-as-done.”

6. A 2026 Smartsheet report on operational excellence found that 70% of operations professionals use tools outside corporate policy, and 76% rely on workarounds because their approved systems cannot keep up with changing priorities. This is not rebellion. It is a design signal.