Four clients, four industries — insurance, energy, beauty, broadcast — same result: teams that stopped guessing.
Across four engagements at Cloud District—DKV Seguros, Naturgy, Natura, and OBS Motorpool—I turned messy constraints into systems teams could actually run. Different industries, same underlying risk: high-stakes decisions without a shared model of what’s true and what happens next.
Situation
Four clients across insurance, energy, beauty, and broadcast, all regulated or operations-heavy, all needing shared decision models to ship safely.
Role
Senior UX Design Consultant @ Cloud District, owned discovery-to-design across all engagements
Key decision
Built decision-ready artifacts (rules, states, content models, experiments) instead of just screens, so teams could stop guessing and start testing.
Outcome
Repeatable experiment process adopted. Fewer opinion-led debates. Reusable patterns across channels.
TL;DR
Four clients, four industries — insurance, energy, beauty, broadcast. Each one regulated or operations-heavy, running on legacy stacks with limited dev bandwidth. The pattern was the same: teams needed a shared model of truth before they could ship safely.
I translated user research, stakeholder interviews, and domain analysis into decision-ready artifacts: experimentation rules, state models, content architectures, and CRO hypotheses. The goal was never just “a design” — it was shared criteria that reduced how often teams argued about the same thing.
At a glance
- Focus: regulated clarity · content systems · operations under pressure
- Core output: rules, states, content models, and experiment/governance loops (not just screens)
- How success showed up: fewer “what happens now?” questions, clearer ownership, and safer iteration under constraints
Engagement map
| Client | What they needed | What I shipped | Key constraints |
|---|---|---|---|
| DKV Seguros | Higher conversion under regulated copy | CRO audit → hypotheses → A/B testing cycle | compliance, HubSpot/legacy, limited dev bandwidth |
| Naturgy | Customer comprehension during bill anxiety | Factura Interactiva IA + dual-layer content model | regulated disclosures, high-variance billing data |
| Natura | Content velocity without brand drift | Design system-to-CMS mapping + authoring governance | live e-commerce, shared ownership, no downtime |
| OBS Motorpool | Dispatch reliability under time pressure | Two-sided service blueprint + state/exception model | map precision, overlapping requests, ops intervention |
Context
This was multi-client consultancy work. The surface-level problems looked different—an insurance funnel, an energy bill, a headless CMS, a dispatch platform—but the underlying risk was the same:
> Teams were making high-stakes decisions without a shared model of “what is true” and “what happens next.”
No shared consulting framework covered all four realities—each engagement required its own model, built from the constraints up.
So my job wasn’t to ship more screens. It was to make the system legible: define the rules, the content, the states, and the trade-offs—so teams could ship with less fear and less rework.
How I worked: operating model
Across engagements, I followed the same sequence to raise decision quality quickly:
- Map the decision risk. Where does uncertainty create rework, compliance risk, or operational failure?
- Make constraints explicit. Legal, legacy stacks, data variability, and “who needs control vs simplicity.”
- Turn findings into decision options. Not “ideas”—options with criteria, trade-offs, and implications.
- Ship the smallest coherent system. Patterns and rules first, UI second. Iterate through evidence.
In short: Risk → Constraints → Options → System → Iterate.
Selected engagement snapshots
These projects are grouped by the kind of decision risk we were solving—not by “industry,” because the patterns repeat.
Regulated clarity: insurance and utilities
DKV Seguros, CRO Funnel Optimization
Why it mattered: High-intent traffic was arriving, but unclear value framing, navigation ambiguity, and form friction were diluting conversions in a regulated environment.
What I shipped:
- An audit → hypothesis backlog that aligned marketing, IT, and legal on what to test and why.
- A repeatable A/B testing cycle with explicit experiment briefs (hypothesis → metric → decision rule).
- Accessibility improvements folded into conversion work (so we didn’t create compliance debt while optimizing).
Example worth highlighting: We shifted the conversation from “what looks better” to “what reduces hesitation right before starting a quote,” and made that measurable within HubSpot constraints.
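To make the brief format concrete, here is a minimal sketch in TypeScript. The field names and example content are hypothetical, for illustration only; this is not the actual template used at DKV.

```typescript
// Illustrative sketch of an experiment brief structure; all field
// names and content here are hypothetical, not DKV's actual template.
interface ExperimentBrief {
  hypothesis: string;    // the change and the expected behavior shift
  primaryMetric: string; // what we measure, e.g. quote-start rate
  decisionRule: string;  // pre-agreed threshold for ship/kill/iterate
  constraints: string[]; // compliance and platform limits to respect
}

const quoteStartCopyTest: ExperimentBrief = {
  hypothesis:
    "Replacing the slogan above the quote form with a plain statement " +
    "of coverage reduces hesitation and lifts quote starts.",
  primaryMetric: "click-through from landing section to quote start",
  decisionRule:
    "Ship if the variant wins at the pre-agreed significance level " +
    "over two weeks; otherwise iterate on the hypothesis, not the pixels.",
  constraints: ["legal-approved wording only", "HubSpot-native components"],
};
```

Writing the decision rule before the test ran was the point: it turned "which version do you prefer?" into a question the data could answer.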
Key decisions
- Decision: Iterate with CRO tests instead of a full redesign.
- Criteria: speed-to-learning, dev bandwidth, compliance safety.
- Trade-off: incremental wins over a dramatic visual change.
- Decision: Optimize message–intent alignment over branded slogans.
- Criteria: comprehension at entry, ability to compare and start a quote.
- Trade-off: less “marketing voice” in some modules in exchange for clearer next steps.
Outcome
- Directional (exact figures confidential): winning variants improved click-through to quote starts and reduced drop-off in tested steps.
- Qualitative: the team adopted a repeatable experiment process and reduced opinion-led debates.
Naturgy, Factura Interactiva
Why it mattered: During the 2020 price surge, customers needed clarity fast. Naturgy’s Factura Interactiva had to make a legally correct bill understandable: a bill built for people, not an A4 PDF of legal disclosures.
What I shipped:
- A task-based information architecture that answered top questions first: what I owe, why it changed, what each line means, and what to do next.
- Dual-layer content: legal wording preserved, plain-language explanation alongside.
- A flexible bill schema that could handle high data variability without bespoke templates.
Example worth highlighting: We made regulated disclosure compatible with comprehension by structuring the bill around customer questions while keeping the legal model intact.
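A minimal sketch of the dual-layer idea, with hypothetical field names and example content rather than Naturgy’s actual schema: every bill line keeps its regulated wording intact and carries a plain-language layer alongside.

```typescript
// Illustrative sketch of a dual-layer bill line; field names and
// content are hypothetical, not Naturgy's production schema.
interface BillLine {
  code: string;             // line identifier from the billing system
  legalText: string;        // regulated wording, preserved verbatim
  plainExplanation: string; // what this line means, in customer language
  amountEur: number;
  whyItChanged?: string;    // shown on demand when the value moved
}

const gridAccessCharge: BillLine = {
  code: "ATR-ACCESS",
  legalText: "Peaje de acceso según la tarifa regulada vigente.",
  plainExplanation:
    "A regulated fee for using the electricity grid. The rate is set " +
    "by the regulator, not by your energy company.",
  amountEur: 12.4,
  whyItChanged: "The regulated rate was updated this billing period.",
};
```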
Key decisions
- Decision: Make the bill itself the explainer (summary first, details on demand).
- Criteria: time-to-answer, compliance safety, scalability across bill variants.
- Trade-off: more interaction states upfront for far better scannability.
- Decision: Preserve legal text while layering plain language.
- Criteria: legal correctness, comprehension, trust.
- Trade-off: higher content-governance complexity to avoid “help-content dead ends.”
Outcome
- Qualitative: stakeholders aligned on a single, testable model for billing content and interaction rules.
- Qualitative: reviews surfaced “finally understand my bill” moments, indicating improved comprehension without sacrificing correctness.
Content systems: headless CMS
Natura, Headless CMS Redesign + Design System-to-CMS Mapping
Why it mattered: Marketing needed content velocity, but the storefront was fragile—publishing depended on dev handoffs and drifted from the design system over time.
What I shipped:
- A module inventory of the live site (real blocks, variants, and relationships).
- A content model that matched the design system: shared naming, states, and constraints.
- Page recipes and authoring guidelines so editors could assemble pages with confidence (and fewer surprises in production).
Example worth highlighting: We treated the design system as a CMS contract—so “ship a new page” became assembling known modules, not opening a new dev ticket.
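A minimal sketch of what “design system as CMS contract” can look like: the content type exposes exactly the variants and constraints the UI module supports, so editors cannot assemble invalid pages. The module and field names here are hypothetical, not Natura’s real inventory.

```typescript
// Illustrative sketch: a CMS content type mirroring a design-system
// module. Names are hypothetical, not Natura's actual modules.
type HeroVariant = "full-bleed" | "split" | "compact";

interface HeroModule {
  type: "hero";
  variant: HeroVariant; // only variants the design system implements
  headline: string;     // authoring guideline: keep it short
  imageRef: string;     // asset reference resolved by the front end
  cta?: { label: string; href: string };
}

// The union grows as live-site modules are inventoried.
type PageModule = HeroModule;

// A "page recipe": a page is an ordered assembly of known modules.
interface PageRecipe {
  slug: string;
  modules: PageModule[];
}
```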
Key decisions
- Decision: Treat the CMS authoring experience as a product with governance, not as “a backend.”
- Criteria: publishing safety, maintainability, ownership clarity.
- Trade-off: more systems thinking upfront to reduce long-term drift and rework.
- Decision: Use the design system as the contract between design, content, and engineering.
- Criteria: consistency over time, implementation clarity, faster iteration.
- Trade-off: less freedom for one-off layouts in exchange for reliable, repeatable publishing.
Outcome
- Qualitative: reduced dependency on front-end cycles for common content updates.
- Qualitative: improved consistency because the CMS reflected real UI modules, not generic templates.
Operations under pressure: dispatch and exception handling
OBS Motorpool, Olympic-Scale Motorpool Dispatch
Why it mattered: This was an operations-heavy service where missed handoffs could ripple into time-critical broadcast operations. Reliability lived in states and exceptions, not “pretty UI.”
What I shipped:
- A two-sided model: requester app + operations admin cockpit.
- A state-first blueprint (request → schedule → assign → track → handle exceptions → complete).
- Guardrails for map precision, overlapping requests, and intervention controls for ops.
Example worth highlighting: We designed for exceptions (overlapping requests, reassignment, venue constraints) early—because in dispatch, edge cases are the product.
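A minimal sketch of the state-first approach; the state names and transition table are illustrative, not the production model.

```typescript
// Illustrative sketch: statuses and legal transitions defined before
// any UI. Names are hypothetical, not the production service model.
type RideState =
  | "requested"
  | "scheduled"
  | "assigned"
  | "in-progress"
  | "exception" // overlapping request, reassignment, venue constraint
  | "completed";

const transitions: Record<RideState, RideState[]> = {
  requested: ["scheduled", "exception"],
  scheduled: ["assigned", "exception"],
  assigned: ["in-progress", "exception"],
  "in-progress": ["completed", "exception"],
  exception: ["scheduled", "assigned", "completed"], // ops intervention paths
  completed: [],
};

function canTransition(from: RideState, to: RideState): boolean {
  return transitions[from].includes(to);
}
```

The point is that “exception” is a first-class state with explicit re-entry paths, not an afterthought bolted onto a happy-path flow.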
Key decisions
- Decision: Borrow ride-hailing patterns (“Uber-style”) as the shared mental model.
- Criteria: learnability under pressure, reduced training, fewer errors.
- Trade-off: careful adaptation where the analogy breaks (scheduling, equipment, venue rules).
- Decision: Lock the service logic (states, exceptions, rules) before high-fidelity UI.
- Criteria: correctness across two surfaces, robustness in real-world edge cases.
- Trade-off: slower visible progress early to avoid late-stage surprises.
Outcome
- Qualitative: created stakeholder alignment through a single service model and shared language.
- Qualitative: reduced hidden operational risk by making exceptions and guardrails explicit.
Reusable Patterns That Carried Across Clients
- Decision briefs that engineering can ship: hypothesis → metric/proxy → decision rule → implementation notes.
- Progressive disclosure for complex truth: answer the top question fast, then reveal detail without hiding the model.
- Content models as product infrastructure: shared naming and governance so publishing stays consistent over time.
- State-first design for operations: define statuses, exceptions, and guardrails before polishing screens.
What I’d Do Next
If I replayed these engagements today, I’d formalize two things earlier:
- Success proxies that teams can own when “hard metrics” are unavailable (comprehension checks, support-deflection signals, operational SLAs, experiment decision rules).
- A lightweight governance loop so systems don’t drift after launch (naming contracts, change reviews, and a clear definition of done).