Marigold Loyalty — Offer Creation Redesign
Marigold Loyalty is a B2B SaaS loyalty marketing platform. Brand operators use it daily to create and manage offers — discounts, coupons, and promotional incentives that drive member engagement. But the Offer module had never been designed as a cohesive flow. The creation process was fragmented, the editing experience was scattered across modals and tabs, performance data was nearly invisible, and there was no way to distribute offers at scale. For a core, high-frequency feature, this was directly eroding the platform's value to paying customers.
My role on this project was Lead Product Designer, working closely with the Product Manager to define scope, explore design directions, and drive the solution from concept through to production. The project ran from Q1 to Q3 2024 and shipped to production in late Q3.
Time
Jun–Aug 2024
Role
Design Lead
Tools
Lovable, Figma, ChatGPT

The thing nobody wanted to talk about
When I joined the Marigold Loyalty team, the Offer module had been a known pain point for some time. Operators — the brand managers and loyalty program administrators who use this platform daily — had filed feedback. The product team had logged the issues. But the fixes had never been scoped, prioritized, or designed in any coherent way.
Business goal
The business goal behind this redesign was specific: improve the efficiency and reliability of the Offer creation workflow to strengthen platform value for operators, and reduce friction that contributes to churn risk.




Five problems, one root cause
My first move wasn't to open Figma. I conducted in-depth interviews with operators from Baby Bunting, Starbucks, and Pizza Hut, then spent time with the findings — not to catalog complaints, but to understand what they had in common.
A fragmented two-step flow.
Creating an offer required filling out a modal, clicking confirm, and then being redirected to the offer list — not into the offer just created. Operators had to locate their new offer in a list of potentially hundreds, then navigate back in. Every single time.
No efficient way to browse and locate offers.
The offer list had only one view with inconsistent information display. Operators managing large numbers of offers had no way to quickly scan, compare, or locate what they needed. Finding a specific offer required scrolling through dense rows with no visual aid.
Silent data loss in Qualifying Items.
Edits to an offer's Qualifying Items could disappear without warning: no error, no confirmation, just data that was gone the next time the operator opened the offer. Operators learned they couldn't trust that a save had actually saved.
No performance visibility.
The Dashboard existed but showed almost nothing. No redemption rate, no trend, no channel breakdown. Operators had no way to know if an offer was working.
No way to send at scale.
Every distribution was manual, one by one. For enterprise operators managing hundreds of thousands of members, this wasn't a limitation — it was a blocker.
None of these were isolated bugs. What connected all five was a structural reality: the Offer module had never been designed as a system. Each piece — the creation modal, the editing experience, the dashboard, the distribution flow — had been built independently, by different people, solving different problems at different times. Nobody had ever asked: what does an operator actually need to do their job, end to end?
That diagnosis changed the shape of the project: this wasn't a UI polish task, it was an end-to-end workflow redesign.

What the problems revealed
With the patterns mapped, I needed to move from "what's happening" to "why it's happening." I used AI as a thinking partner: I fed it the clustered feedback and asked it to generate candidate explanations for why these problems might be structurally connected. It produced several hypotheses. Most I rejected or refined. But the process forced me to articulate why I was rejecting them — and that sharpened the three insights I ended up with.
Broken continuity, not missing features
Every problem had the same shape: what an operator did in one step wasn't carried into the next. Create an offer, get dropped in a list with no way to find it. Each step was a dead end. The fix wasn't individual features — it was restoring continuity across the whole flow.
A loop the product only half-supported
Operators need to create → launch → monitor → adjust → redistribute. With a near-empty dashboard and no bulk send, the platform effectively stopped supporting operators after launch.
Unpredictability breaks trust faster than friction
Slow is tolerable. But data that disappears, saves that might not have worked, actions with no confirmation — these destroy trust. Reliability had to come before efficiency.
Testing direction without waiting for users
Before committing engineering resources, I needed directional validation. The challenge: direct access to operators — brand managers at enterprise companies — is logistically difficult. Scheduling even a 30-minute session can take weeks of coordination.
Rather than waiting, I made a deliberate call: design two distinct directions and run a structured cross-functional review. Both solved the redirect problem. The difference was how much friction and uncertainty remained after creation.
Option A preserved a light view/edit separation.

Fields from the removed modal moved into the offer's top panel, with an explicit "Edit" button to modify them.
Option B eliminated edit mode entirely.

Fields in the Definition Tab were directly editable in place, organized into collapsible sections, each with its own Save button. Less form, more workspace.
AI-generated prototypes, not static screenshots
To make the review concrete, I used Claude Code + Cursor to generate interactive prototypes of both options in a few hours — reviewers experienced the two interaction models directly instead of evaluating concepts on paper.
I then sent a structured feedback survey to 17 cross-functional colleagues — PMs, engineers, customer success managers, and product specialists — with the design rationale for each option documented.

The result was clearer than I expected
Of the 17 colleagues surveyed, 11 responded, and 72.7% of them chose Option B. The qualitative feedback converged on three themes — and notably, they mapped directly onto the three insights I'd developed:
Fewer steps to reach configuration — Option B's direct-entry model eliminated unnecessary steps between creation and configuration
Collapsible sections reduce scroll and cognitive load — keeping the active section in view without extensive scrolling matched how operators actually work
Stronger consistency with the broader product system — Option B aligned better with established platform patterns, reducing learning cost
“Workflow is more explicit — Option A's coupon details location feels like metadata, not higher-level campaign details.”
“More intuitive and user-friendly — especially the collapsible sections in the definition tab.”
“More aligned with other screens in the product portfolio — though the "unlock" icon should feel more like a padlock.”
“Allows direct entry into the offer after creation, with collapsible sections keeping the active area in view. Also solves the section saving issue.”
“Better assists users in editing and modifying content — simplifies steps and enhances clarity by hiding less important information.”
“The two edit icons need to be disambiguated.”
“Collapsible sections on Option B could be made more visually obvious.”
Decision 1 — Kill the redirect
Solving: Insight 1 — broken continuity.
Before: Create → Modal → Confirm → List → Find → Enter offer.
After: Select type + Enter name → Create → Directly in offer.
The alternative was to keep the redirect but highlight the newly created offer in the list. I rejected this because it still required an extra navigation step and didn't address the mental model break — the user had just created something, and the system was taking them away from it.
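To make the change concrete, here's a minimal TypeScript sketch of the before/after navigation logic; the function names, router interface, and route paths are hypothetical stand-ins rather than the platform's actual code.

```ts
// A hypothetical sketch of the navigation change, not the platform's actual code.
interface OfferDraft { type: string; name: string }   // e.g. type: "coupon"
interface Offer extends OfferDraft { id: string }
interface Router { navigate(path: string): void }     // stand-in for the app router

// Stand-in for the API call that persists the new offer.
async function createOffer(draft: OfferDraft): Promise<Offer> {
  return { id: crypto.randomUUID(), ...draft };
}

// Before: confirm, then redirect to the offer list. The operator has to hunt
// for the offer they just created.
async function handleCreateBefore(draft: OfferDraft, router: Router) {
  await createOffer(draft);
  router.navigate("/offers");
}

// After: confirm, then land directly inside the new offer's Definition Tab.
async function handleCreateAfter(draft: OfferDraft, router: Router) {
  const offer = await createOffer(draft);
  router.navigate(`/offers/${offer.id}/definition`);
}
```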
Decision 2 — Redesigned list with dual view
Solving: Problem 2 — operators couldn't efficiently scan and locate offers.
The offer list was rebuilt with two switchable views. Grid layout gives operators a visual overview — offer images make scanning fast. List layout surfaces detailed metadata — status, campaign, effectivity, response count — for operators who need to compare or filter efficiently. Both views share the same search and filter controls.
The information architecture was also cleaned up: only operator-relevant fields are shown by default, reducing visual noise on a page operators visit constantly.
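A rough sketch of the model behind the dual view: both layouts read from the same filter state, so the view toggle is purely presentational. The types and filterOffers function here are illustrative assumptions, not the real data model.

```ts
// Illustrative types only; the real data model is more involved.
type ViewMode = "grid" | "list";

interface OfferSummary {
  id: string;
  name: string;
  imageUrl: string;                               // drives fast scanning in the grid view
  status: "draft" | "active" | "expired";
  campaign: string;
  effectivity: { start: string; end: string };
  responses: number;                              // surfaced in the list view
}

interface OfferFilters {
  query?: string;
  status?: OfferSummary["status"];
  campaign?: string;
}

// One filter function feeds both layouts, so switching between grid and list
// never changes which offers the operator is looking at.
function filterOffers(offers: OfferSummary[], f: OfferFilters): OfferSummary[] {
  return offers.filter(o =>
    (!f.query || o.name.toLowerCase().includes(f.query.toLowerCase())) &&
    (!f.status || o.status === f.status) &&
    (!f.campaign || o.campaign === f.campaign)
  );
}
```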
Decision 3 — One page, explicit saves
Solving: Insight 3 — reliability before efficiency.
Before, saving was ambiguous — it was never clear what had been saved and what hadn't. The redesigned Definition Tab consolidates everything into one page with collapsible sections, each saving independently. Click Save in a section, exactly that section saves. Nothing more, nothing less.
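In code terms, the behavior is roughly this: each section carries its own values and dirty flag, and saving sends only that section's fields. The section names and PATCH endpoint below are assumptions for illustration.

```ts
// Section names, state shape, and the PATCH endpoint are assumptions.
type SectionId = "details" | "qualifyingItems" | "rewards" | "schedule";

interface SectionState<T> {
  values: T;
  dirty: boolean;          // true once the operator edits anything in this section
  lastSavedAt?: Date;
}

// Saves exactly one section: only this section's fields go over the wire,
// and only this section's state is marked clean afterwards.
async function saveSection<T>(
  offerId: string,
  section: SectionId,
  state: SectionState<T>,
): Promise<SectionState<T>> {
  const res = await fetch(`/api/offers/${offerId}/sections/${section}`, {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(state.values),
  });
  if (!res.ok) {
    // Fail loudly: the operator should never be left guessing whether a save worked.
    throw new Error(`Saving "${section}" failed with status ${res.status}`);
  }
  return { ...state, dirty: false, lastSavedAt: new Date() };
}
```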
Decision 4 — A dashboard that drives decisions
Solving: Insight 2 — the full workflow loop.
The question I used to evaluate every metric candidate: "Can an operator use this to make a decision about this offer?" If no, it didn't go in.
The metrics that made it: redemption rate, total responses, unique members reached, engagement trend over time, top redemption channels. Together they answer: is this offer working, who's using it, and where?
The metrics that didn't: raw impression counts, internal system IDs, fields that reflected database state rather than operator-relevant performance.
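A small sketch of how those metrics might be derived from raw counts; the OfferActivity shape and the redemption-rate formula are my interpretation, not the platform's exact definitions.

```ts
// The counts and the redemption-rate formula are an interpretation,
// not the platform's exact definitions.
interface OfferActivity {
  distributed: number;                          // offer instances sent to members
  responses: number;                            // total member responses
  uniqueMembers: number;                        // distinct members reached
  redemptions: number;                          // completed redemptions
  redemptionsByChannel: Record<string, number>; // e.g. { email: 1200, app: 3400 }
}

function redemptionRate(a: OfferActivity): number {
  return a.distributed === 0 ? 0 : a.redemptions / a.distributed;
}

function topRedemptionChannels(a: OfferActivity, limit = 3): [string, number][] {
  return Object.entries(a.redemptionsByChannel)
    .sort(([, x], [, y]) => y - x)
    .slice(0, limit);
}
```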
Decision 5 — Bulk send with a deliberate pause
Solving: Insight 2 (capability gap) + Insight 3 (reliability for high-stakes actions).
The feature — rule-based audience builder, estimated audience preview, scheduling — closes the capability gap for enterprise operators. The mandatory confirmation step before any bulk send executes is the Insight 3 application: for an irreversible action touching hundreds of thousands of people, the interface owes the user a moment to stop and check.
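Sketched in TypeScript, the safeguard is a two-step shape: estimate and show the audience first, then refuse to execute without an explicit confirmation. All names below are illustrative.

```ts
// Rule shape, preview, and confirmation flag are all illustrative.
interface AudienceRule {
  field: string;                                   // e.g. "tier", "lastPurchaseDays"
  operator: "equals" | "greaterThan" | "lessThan";
  value: string | number;
}

interface BulkSendPlan {
  offerId: string;
  rules: AudienceRule[];
  scheduledFor?: Date;
  estimatedAudience: number;                       // shown to the operator before confirming
}

// Step 1: build the plan and surface the estimated audience size.
async function previewBulkSend(offerId: string, rules: AudienceRule[]): Promise<BulkSendPlan> {
  const estimatedAudience = 0;                     // stub; the real platform queries the member base
  return { offerId, rules, estimatedAudience };
}

// Step 2: execution refuses to run without an explicit, separate confirmation.
async function executeBulkSend(plan: BulkSendPlan, confirmed: boolean): Promise<void> {
  if (!confirmed) {
    throw new Error(
      `Bulk send to ~${plan.estimatedAudience} members requires explicit confirmation.`,
    );
  }
  // ...hand off to the distribution service...
}
```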
What happened after handoff
Designs went to engineering in July 2024. My PM conducted two formal acceptance review rounds before the feature shipped — structured, documented, covering all six offer types across 130-plus test cases. I stayed close to both.
Outcome — what the usability testing showed
To validate the redesign before scaling it, we ran a usability test with 6 internal users — customer success managers and implementation specialists who work with operators daily.
The test measured three outcomes: overall task success rate, average time to create and configure an offer, and user satisfaction.
A closing thought
I came into this project thinking about it as a usability problem — reduce steps, fix bugs, clean up the flow. I left with a more specific frame: B2B operators don't need delightful products. They need reliable ones.
What erodes trust in a B2B tool isn't friction. Friction can be learned and tolerated. What erodes trust is unpredictability — data that disappears without warning, actions that produce unexpected results, a save that maybe worked and maybe didn't.
Every design decision in this project was, underneath, a trust decision: can the operator predict what this system will do? Can they build their daily work around it without second-guessing it? That turned out to be the right question to be asking.