One-line outcome: I delivered a decision-ready UX improvement report and wireframes to reduce first-visit friction, improve information retrieval, and strengthen trust and governance for a public-sector knowledge guide used by specialists and AI-assisted research.
TL;DR
- Problem: Users were landing (often on mobile) and leaving before starting real “technology discovery,” and key pages weren’t structured for fast scanning, comparison, or clear next steps.
- What I did: Audited the web tool as an information retrieval system and delivered intermediate and final UX improvement reports with wireframes and a phased “Now / Next / Later” roadmap.
- Key constraint: The parent portal’s templates/navigation were out of scope, and the guide ran on a different technical stack—so alignment had to work without deep platform integration.
- Outcome (qualitative only; no implementation metrics were provided): a concrete, prioritized plan with wireframes and acceptance criteria to re-center the experience on discovery and lay governance foundations for reuse via search and AI assistants, without adding an AI feature.
- Timeframe + role: Nov–Dec 2025 · Senior UX Consultant (11.5 working days), via a Luxembourg-based consultancy in partnership with a Big Four firm, for an organization affiliated with a European Union agency.
Context
In late 2025, I supported a public-sector web guide about anti-counterfeiting and anti-piracy technologies, linked from an EU-level observatory portal.
The risk wasn’t aesthetics. It was trust and decision quality: if specialists can’t quickly find, interpret, and compare technologies, they either abandon the guide or act on partial understanding.
A second risk was emerging at the same time: people increasingly “read” public information through intermediaries—search results with generative summaries, assistants, and research agents. The guide could be reused as a source, but without structured content and exports, there’s a real risk that third parties repackage information on their own terms, weakening institutional authority.
My Role & Team
I owned the UX analysis and the recommendation set:
- Prioritization: What to change first vs. what to defer.
- Executability: Wireframes + acceptance criteria so delivery teams could implement without re-litigating decisions.
- Collaboration: Worked with a service delivery manager inside the delivery consortium.
Constraints
- Platform governance: The parent portal’s templates/navigation were out of scope; the guide had to improve within its own surface area.
- Technical separation: The portal and the guide run on different stacks/hosting, limiting shared components and cross-site integration.
- Delivery sequencing: Desktop was prioritized for the first implementation iteration; mobile issues were captured for later phases.
- Scope control: General keyword search was intentionally kept stable in the quick-win phase to reduce delivery risk.
- Public-sector trust: Accessibility, consent, and analytics transparency needed to align with established patterns while improving clarity and confidence.
Approach
I treated the guide as an information retrieval product, not a set of pages.
Entry → orientation → discovery → comparison → next step.
The goal was to increase confidence at each step and remove false choices that distract from the core task.
To reduce uncertainty quickly, I used existing signals (analytics summary + heuristics + content/IA review) to identify where users were dropping off, then focused the first recommendations on “getting into discovery” rather than redesigning deeper pages first.
Finally, I made the output executable: wireframes that demonstrate hierarchy and flows, and a phased roadmap (“Now / Next / Later”) with definitions of done so delivery teams can ship without re-litigating the same decisions.
The diagrams below are simplified versions of the artifacts I used to keep the work legible for non-design stakeholders: how the work ran, what first-time users need, and what was in or out of scope.
```mermaid
timeline
    Scope : Agree goals and constraints
    Evidence : Find where people get stuck
    Decisions : Prioritize what to fix first
    Wireframes : Show the proposed structure
    Roadmap : Define Now / Next / Later
```
```mermaid
flowchart LR
    A["Arrive<br/>What is this guide for?"] --> B["Find<br/>Where do I start?"]
    B --> C["Compare<br/>How do options differ?"]
    C --> D["Next<br/>What do I do now?"]
```
Most recommendations fell into three plain buckets: start discovery faster, make options easy to compare, and keep the information trustworthy over time.
Key Decisions & Trade-offs
- Decision: Make guided discovery the primary Home entry; demote “About” from a primary call-to-action.
  - Options considered: Keep “About” prominent; fold essential context into Home and route users into discovery first.
  - Criteria used: Reduce early exits, increase task initiation, and align the first screen with the guide’s purpose.
  - Trade-off accepted: Communications/Legal still need a formal “About the guide” destination, but it cannot compete with task entry.
  - Resulting implication: Home becomes a launchpad into discovery (guided flow, map, list), especially critical on mobile.
- Decision: Treat the parent portal as a fixed frame; strengthen “sense of place” via mirroring patterns and a local breadcrumb “IA bridge.”
  - Options considered: Deep platform integration; local alignment inside the guide (without platform integration).
  - Criteria used: Feasibility within scope, reduced dependency on platform governance, and clearer orientation for deep-link landings.
  - Trade-off accepted: The breadcrumb is local to the guide; a fully shared cross-site breadcrumb is a later-phase recommendation.
  - Resulting implication: Users understand the relationship to the wider portal without requiring technical unification.
- Decision: Define “AI readiness” as governance + structure (templates, semantics, exports), not as adding an AI feature.
  - Options considered: Add AI UX features; do nothing; build a structured source of truth that can be referenced reliably.
  - Criteria used: Preserve institutional authority, reduce inconsistent third-party repackaging, and improve retrieval by both humans and agents.
  - Trade-off accepted: Structured export work sits in “Next/Later” and needs a separate decision and budget.
  - Resulting implication: A pragmatic path to being an authoritative reference in an agent-mediated world, without betting on a chatbot.
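To make “structure + exports, not an AI feature” concrete: the highest-value entries could be published as machine-readable records alongside the HTML. A minimal sketch, assuming hypothetical field names and placeholder content; the real content model, taxonomy, and schema were deliberately left to a later phase:

```python
import json

# Hypothetical structured record for one technology entry.
# Field names and values are illustrative, not real guide content;
# the "@type" shown here is one plausible schema.org mapping, not a decided one.
entry = {
    "@context": "https://schema.org",
    "@type": "TechArticle",
    "name": "Example authentication technology",
    "about": ["anti-counterfeiting"],                 # taxonomy term(s)
    "dateModified": "2025-12-01",                     # provenance: last editorial update
    "isBasedOn": "https://example.org/guide/entry",   # canonical source URL (placeholder)
}

# The export step: the same record the HTML page renders, serialized as JSON
# so search engines and assistants can cite the source consistently.
print(json.dumps(entry, indent=2))
```

The point of the sketch is the governance mechanism, not the schema itself: one structured record per entry, with provenance fields, emitted from the same source of truth as the page.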
To keep scope realistic, I made the boundary explicit: the portal was a fixed frame; improvements had to work inside the guide.
```mermaid
flowchart TB
    portal["Parent portal<br/>(fixed frame)"] --> guide["Technology guide<br/>(improved)"]
    search["Search<br/>or deep links"] --> guide
    user["Specialist user"] --> guide
    guide --> c1["Start discovery<br/>quickly"]
    guide --> c2["Easier to scan<br/>and compare"]
    guide --> c3["Clear trust<br/>signals"]
```
Impact
All outcomes are qualitative (no implementation metrics were provided).
```mermaid
timeline
    Now : Start discovery easily (Home + orientation)
    Next : Detail pages that support comparison
    Later : Templates, taxonomy, and simple exports
```
- Delivered intermediate and final UX improvement reports with wireframes to bridge findings to development without ambiguity.
- Produced a phased roadmap (“Now / Next / Later”) with acceptance criteria, enabling prioritized delivery under scope and platform constraints.
- Reframed key UX issues as decision-quality problems (orientation, task initiation, comparison, “what next”), tying recommendations to observable drop-off patterns.
- Established a governance-oriented “AI-ready knowledge base” direction: structured templates, semantics, and simple exports so the guide remains consistent when reused through search and AI assistants.
What I Learned / What I’d Do Next
What I learned: In public-sector products, trust isn’t only visual consistency—it’s provenance, structure, and predictable reuse. When content is increasingly consumed through intermediaries, the organization’s real “product” becomes the reliability of its reference model.
The long-term value is operational: a repeatable way to publish and maintain information so it stays trustworthy when reused.
```mermaid
flowchart LR
    A["Standard template"] --> B["Write / update"]
    B --> C["Review"]
    C --> D["Publish"]
    D --> E["Reuse (search + assistants)"]
    E --> F["Fix drift"]
    F --> A
```
What I’d do next: Formalize governance around the guide as an authoritative source:
- Define a structured content model (templates + taxonomy).
- Specify machine-readable exports/schema for the highest-value content.
- Set provenance and maintenance rules so information stays consistent when consumed via search and AI assistants.
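The maintenance rules in the last bullet could start as something as simple as a freshness check over published entries. A sketch under stated assumptions: a hypothetical `last_reviewed` date per entry and an assumed 12-month review interval (the actual policy would be a governance decision):

```python
from datetime import date

REVIEW_INTERVAL_DAYS = 365  # assumed policy: review every entry at least yearly

def stale_entries(entries, today):
    """Return titles of entries whose last review is older than the interval."""
    return [
        e["title"]
        for e in entries
        if (today - e["last_reviewed"]).days > REVIEW_INTERVAL_DAYS
    ]

# Illustrative data, not real guide content.
entries = [
    {"title": "Entry A", "last_reviewed": date(2025, 11, 1)},
    {"title": "Entry B", "last_reviewed": date(2023, 6, 1)},
]

print(stale_entries(entries, today=date(2025, 12, 1)))  # ['Entry B']
```

Run on a schedule, a check like this turns “fix drift” from an aspiration into a recurring work queue for editors.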