One-line outcome: I built a repeatable decision loop that helped a fast-shipping app studio choose better bets across onboarding, paywalls, and retention.
TL;DR
- Problem: Teams could ship quickly, but portfolio decisions were still too influenced by competitor copying and internal opinion.
- What I did: I built a decision system that connected market signals, user evidence, and delivery planning, then used it to align product managers, ASO, product design, and developers.
- Key constraints: NDA limits on public detail, eight apps running in parallel, and external delivery partners.
- Outcome: Clearer prioritization criteria, fewer repeated debates, stronger alignment between ASO promises and in-app experience, and cleaner experiment briefs for implementation.
- Timeframe + role: 2024–2025 · UX Researcher & Senior UX Designer (consultant).
Context
In 2024–2025, I worked with a Barcelona app studio running a portfolio of B2C subscription products. The growth model depended on speed in mature categories where user expectations were already high.
The main risk was not shipping pace. The risk was decision quality. Without shared criteria, teams would keep moving fast but invest in weak bets, create rework, and dilute product positioning.
The business needed confidence on where to invest next. Product managers needed prioritization clarity, ASO needed claims that held up in-product, design needed a coherent direction, and developers needed decision-ready briefs.
My Role & Team
I worked as a UX Researcher and Senior UX Designer consultant across the portfolio.
- My scope: competitor benchmarking, review mining, targeted qualitative research, CRO hypothesis framing, and decision workshops.
- Decision ownership: I owned how evidence became decision options, criteria, and implications.
- Core collaborators: product managers, ASO specialists, product designers, developers, and business stakeholders.
Constraints
- NDA boundaries: no public product names, internal screenshots, or direct brand attribution.
- Portfolio complexity: eight apps shared monetization patterns but had different user expectations.
- External implementation: requirements had to be explicit enough to avoid back-and-forth during delivery.
- Trust sensitivity: unclear subscription value or messaging could increase churn risk and negative feedback.
Approach
I structured the work to reduce uncertainty before teams committed engineering effort.
- I mapped each app category by acquisition promise, onboarding logic, paywall timing, and trust signals.
- I combined review patterns with focused interviews and usability checks at the highest-friction funnel moments.
- I translated findings into a hypothesis backlog with impact, confidence, effort, and observable evaluation proxies (a scoring sketch follows below).
- I ran decision workshops with product, ASO, design, and engineering to make trade-offs explicit and keep execution aligned sprint to sprint.
This sequencing improved decision quality because every change had a clearer reason, owner, and implication before build.
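To make the backlog concrete, here is a minimal sketch of how an entry could be scored and ranked, assuming a simple impact × confidence ÷ effort heuristic. The field names, 1–5 scales, and example hypotheses are illustrative assumptions, not the studio's actual tooling or data.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One backlog entry: what we believe, how strong the evidence is, and how we'd know."""
    name: str
    impact: int            # 1-5: expected effect on the target funnel metric
    confidence: int        # 1-5: strength of supporting evidence (reviews, interviews)
    effort: int            # 1-5: estimated delivery cost, including external partners
    evaluation_proxy: str  # observable signal to watch after shipping

    @property
    def priority(self) -> float:
        # ICE-style score: favor high-impact, well-evidenced, cheap bets.
        return (self.impact * self.confidence) / self.effort

backlog = [
    Hypothesis("Clarify trial terms on the paywall", 4, 4, 2,
               "paywall -> trial start rate"),
    Hypothesis("Shorten onboarding before the first value moment", 5, 3, 4,
               "day-1 activation rate"),
    Hypothesis("Add social-proof module to onboarding", 2, 2, 1,
               "onboarding completion rate"),
]

for h in sorted(backlog, key=lambda h: h.priority, reverse=True):
    print(f"{h.priority:>5.1f}  {h.name}  (watch: {h.evaluation_proxy})")
```

The exact formula mattered less than the discipline it forced: every hypothesis had to carry its evidence strength and an observable proxy before it could compete for engineering time.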
Process at a glance
| Phase | What it aligned | Output (decision tool) |
|---|---|---|
| Portfolio mapping | What each app promised vs what it delivered | Category map: onboarding, paywall timing, trust signals |
| Evidence synthesis | What users and the market were actually saying | Review themes + targeted qualitative findings |
| Hypothesis backlog | What to test first (and why) | Prioritized hypotheses with impact/confidence/effort + evaluation proxies |
| Decision workshops | Cross-functional trade-offs and ownership | Agreed options + build-ready briefs for delivery |
```mermaid
flowchart LR
  A["Signals<br/>market + App Store"] --> B["Evidence<br/>reviews + interviews"]
  B --> C["Options<br/>hypotheses"] --> D["Choose<br/>impact, confidence, effort"]
  D --> E["Agree<br/>trade-offs"] --> F["Brief<br/>ready to build"]
  F --> G["Ship + learn<br/>what changed"]
  G --> A
```
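For the final "Brief: ready to build" step, the sketch below shows the kind of fields a decision-ready brief might carry before handoff. The structure and field names are illustrative assumptions, not the actual template used with the delivery partners.

```python
from dataclasses import dataclass

@dataclass
class ExperimentBrief:
    """Illustrative shape of a build-ready brief handed to delivery partners."""
    hypothesis: str          # the bet, phrased as a falsifiable statement
    evidence: list[str]      # review themes / interview findings behind it
    change: str              # exactly what changes in the product
    out_of_scope: list[str]  # explicitly excluded work, to limit back-and-forth
    success_proxy: str       # observable signal used to judge the result
    guardrails: list[str]    # trust signals that must not degrade
    owner: str               # who decides on the result and the next step

brief = ExperimentBrief(
    hypothesis="Stating the renewal price before checkout raises trial starts",
    evidence=["Review theme: 'hidden cost' complaints",
              "Interviewees unsure what the trial converts to"],
    change="Show the full renewal price and cancel path on the paywall",
    out_of_scope=["New pricing tiers", "Paywall visual redesign"],
    success_proxy="paywall -> trial start rate",
    guardrails=["Refund requests", "Negative reviews mentioning billing"],
    owner="App PM",
)
```

In this sketch, the out-of-scope and guardrail fields carry the two constraints that mattered most here: avoiding back-and-forth with external implementers and protecting trust signals tied to churn risk.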
Key Decisions & Trade-offs
- Decision: Standardize core onboarding and paywall patterns across apps while keeping room for category-specific differentiation.
  - Options considered: Fully custom flows per app; full portfolio standardization; standardize only high-impact core moments.
  - Criteria used: User clarity, implementation speed, and cross-app learning reuse.
  - Trade-off accepted: Less local variation in some journeys.
  - Resulting implication: Faster alignment and lower rework without flattening each product’s positioning.
- Decision: Prioritize product truth and expectation clarity before adding more feature modes.
  - Options considered: Expand feature breadth quickly; improve perceived quality and control first; split effort evenly.
  - Criteria used: Review themes, conversion risk at paywall moments, and likely retention impact.
  - Trade-off accepted: Slower short-term feature expansion.
  - Resulting implication: Better match between what users were promised and what they experienced.
- Decision: Require an explicit differentiation wedge per app instead of default parity roadmaps.
  - Options considered: Competitor parity first; one shared portfolio strategy; per-app wedge with documented rationale.
  - Criteria used: Defensibility, relevance to user pain, and delivery feasibility.
  - Trade-off accepted: Some short-term keyword opportunities were deprioritized.
  - Resulting implication: Clearer portfolio choices and stronger decision discipline.
Impact
- Qualitative outcomes:
  - Prioritization moved from opinion-led conversations to criteria-led conversations.
  - Cross-functional decisions got faster because trade-offs were explicit early.
  - Handoffs to developers improved through clearer experiment briefs and success proxies.
  - ASO messaging and in-product experience became more consistent, which reduced expectation gaps.
  - Stakeholders shared a common decision language across the portfolio.
What I Learned / What I’d Do Next
I learned that strategic value in fast app studios comes from decision discipline, not design volume. When product, ASO, design, and engineering align on criteria early, teams ship with more consistency and less friction.
Next, I would formalize this into a lightweight portfolio operating model: shared instrumentation standards, recurring cross-app decision reviews, and clearer thresholds for promoting a tested pattern to portfolio default.