---
title: "The spend portfolio playbook: How cutting branded bids and A/B testing emails boosted revenue"
url: "https://parcellab.com/blog/retention-revenue-playbook/"
type: Blog
date_published: "2026-04-28T14:20:56+00:00"
date_modified: "2026-04-28T14:20:57+00:00"
description: "Learn how one eCommerce team grew retention revenue by 77% while cutting branded ad spend by 60% with two step-by-step playbooks."
---

Two assumptions run deep in eCommerce marketing:

- You must always bid on your own branded search terms.
- Retention is about sending more emails with better subject lines.

Most teams treat both like common knowledge and file them under “best practice,” never to be reopened.

But what if we told you Nate Hewett and Di Lyngholm challenged those assumptions? What they found changed how their company spends every dollar.

## The team behind the test

Nate leads performance marketing at [HalloweenCostumes.com](http://HalloweenCostumes.com), part of [fun.com](http://fun.com). Di leads CRM and retention. Together, they operate in one of the most extreme seasonal environments in eCommerce. In fact, roughly 85% of their annual revenue lands in September and October. That’s six weeks to make the year.

The pressure was already significant. Then, in 2024, tariffs were added on imported goods, compressing margins further. The mandate was clear: find efficiency without cutting the programs that drive growth. Both had been running their respective programs for years, but needed to think fresh. Instead of applying dusty theoretical frameworks, they started asking uncomfortable questions about things that “worked fine” – and both initiatives emerged from those questions.

What made those questions possible was culture. As Nate puts it:

**“Everyone has to have a level of vulnerability, right? \[…\] Everyone will look and poke holes and we’re doing it from the kindness of our hearts to help each other out.”**

That vulnerability is the precondition. Without it, the playbooks below stay theoretical.

## Why the traditional approach breaks down

On the paid side, Google’s automated bidding – specifically Target ROAS – is designed to optimize for revenue or conversions, but not for spend efficiency. If you have 95%+ impression share on a branded term and no competitor is bidding, the system should charge rock-bottom CPCs. Spoiler: It doesn’t. The default behavior is to keep spending at elevated levels. And because the ROAS looks “good,” nobody flags it. The waste hides behind a healthy-looking metric.

On the retention side, most programs are campaign-based, not journey-based. They blast a segment with a generic offer rather than building a triggered, personalized sequence rooted in individual purchase history. Those programs optimize for conversion rate instead of profitability, which leads to over-discounting. You hit your open rate targets while quietly eroding margin.

The fundamental truth behind both systems is the same: the biggest gains aren’t hiding in new channels or new audiences. They’re hiding in the performance of things you already run, and in the micro-tests that compound over time when you stop stamping those as *finished*.

## Two playbooks, one mindset: Test what you assume is untouchable

Nate’s “*Uncontested Bid Walk-Down*” and Di’s “*Last-Year-Purchaser Retention Journey*” are different in mechanics but identical in philosophy:

- Take something the organization treats as settled.
- Build a rigorous testing plan.
- Walk it down or build it up methodically.
- Let data – not assumptions – dictate the outcome.

Both required cross-functional buy-in, a defined testing cadence, and a willingness to accept that the current approach might be wrong. Both produced results that compounded: Nate’s savings freed budget for other growth initiatives. Di’s journey expanded from peak season to off-season holidays and from [HalloweenCostumes.com](http://HalloweenCostumes.com) to the [fun.com](http://fun.com) brand.

Here’s how they did it.

***Editorial Note:** Playbook 1 is for performance marketing teams questioning their branded spend. Playbook 2 is for CRM and retention teams building personalized journeys. If your org still treats those as separate conversations, read both.*

## Playbook 1: Cutting branded PPC spend by 60% without losing revenue

### Phase 1: Identify candidates (weeks 1 and 2)

The process starts with data, not instinct. Pull impression share data by branded term from Google Ads. Look for terms where you hold 90%+ impression share. Then cross-reference those terms with organic rankings using SEMrush or a similar tool. If you’re on page one organically and holding 90%+ impression share on paid, that term is a candidate.

Now check the CPCs. If they’re not at or near rock bottom – say $0.01 to $0.05 – automated bidding is likely inflating costs. That gap between what you’re paying and what the market requires is the waste.

The decision rule is straightforward:
*High impression share + strong organic rankings + above-floor CPCs = a likely uncontested term.* 
Flag it for testing.
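As a rough sketch, the decision rule can be automated once you have exported impression share, organic rank, and average CPC per term. Everything here is illustrative – the thresholds, column names, and CPC floor are assumptions you would tune to your own account:

```python
# Flag likely uncontested branded terms from exported ad + SEO data.
# Thresholds mirror the rule above: 90%+ impression share, page-one
# organic rank, and CPCs above the assumed $0.01-$0.05 floor.
FLOOR_CPC = 0.05  # assumed floor; tune to your account

def is_uncontested_candidate(impression_share, organic_rank, avg_cpc):
    """Return True if the term should be flagged for a bid walk-down test."""
    high_share = impression_share >= 0.90
    page_one = organic_rank <= 10
    above_floor = avg_cpc > FLOOR_CPC
    return high_share and page_one and above_floor

# Hypothetical export rows, not real account data.
terms = [
    {"term": "example brand hoodie", "impression_share": 0.96, "organic_rank": 1, "avg_cpc": 0.35},
    {"term": "example brand returns", "impression_share": 0.97, "organic_rank": 2, "avg_cpc": 0.02},
    {"term": "brand vs rival", "impression_share": 0.55, "organic_rank": 4, "avg_cpc": 0.40},
]

candidates = [t["term"] for t in terms if is_uncontested_candidate(
    t["impression_share"], t["organic_rank"], t["avg_cpc"])]
print(candidates)  # only the first term passes all three checks
```

The second term is already at the floor (nothing to save), and the third is genuinely contested – only terms with waste between current CPC and the floor get flagged.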

### Phase 2: Build the testing plan and get buy-in (week 3)

Nate’s team formed a small task force of one PPC specialist and one SEO specialist. They defined success criteria upfront: total revenue – paid and organic combined – must stay flat. Not just paid revenue in isolation. The whole deal.

They mapped out a walk-down schedule: increase the Target ROAS each week to force Google to bid less aggressively. They set up a “catch-all” campaign with an extremely high efficiency target: a safety net to capture anything the main campaigns dropped without overspending.

Getting leadership buy-in meant presenting the plan with defined milestones and kill criteria. If combined revenue drops by a certain percentage for two consecutive weeks, the team pauses and re-evaluates. No ambiguity, no guesswork.
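The kill criterion is simple enough to encode. A minimal sketch of that guardrail, assuming an illustrative 10% drop threshold against a pre-test baseline (the article doesn't disclose the actual percentage):

```python
# Walk-down guardrail: pause the test if combined (paid + organic)
# revenue drops below the kill threshold for two consecutive weeks.
# KILL_DROP is an assumed value; the team's actual threshold isn't public.
KILL_DROP = 0.10       # pause if revenue is down 10%+ vs baseline...
CONSECUTIVE_WEEKS = 2  # ...for this many weeks in a row

def should_pause(weekly_revenue, baseline):
    """True once combined revenue breaches the threshold two weeks running."""
    streak = 0
    for rev in weekly_revenue:
        streak = streak + 1 if rev < baseline * (1 - KILL_DROP) else 0
        if streak >= CONSECUTIVE_WEEKS:
            return True
    return False

baseline = 100_000
print(should_pause([99_000, 97_000, 101_000, 98_500], baseline))  # False: no breach
print(should_pause([99_000, 88_000, 87_000], baseline))           # True: two weeks below 90,000
```

A single bad week resets nothing by itself – the streak logic is what turns "defined milestones and kill criteria" into an unambiguous stop signal.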

Nate shares the result, so you know what you’re in for:
**“Overall, for these branded terms that we were targeting, we cut spend by just under 60% and revenue stayed flat.”**

That’s the destination – and here’s how they got there.

### Phase 3: Execute the walk-down (weeks 4-12 and beyond)

Each week, the team increased their TROAS targets – nudging Google to bid less. Twice a week, the task force met to review impression share trends, organic ranking changes, CPC movements, and the competitive landscape. They also ran manual incognito searches to verify whether competitors were entering the auction.

Negative keywords were critical. As the team walked down bids on specific branded terms, they added negatives to prevent those terms from being picked up by other campaigns – Performance Max, AI Max for Search, and standard search campaigns. This step is non-negotiable. You need to actively block terms, or automated campaigns will try to fill the gap. Nate’s team was simultaneously testing AI Max for Search, which was trying to dynamically add terms back in. Keeping those tools in their lane while protecting the integrity of the test was one of the harder operational challenges.

The floor signal came when competitors started appearing on their terms. That’s when impression share dropped significantly, and incognito checks showed competitors in the auction. The team had found the bottom, so they segmented the terms and held at that level.

**One important caveat:** [the competitive landscape changes during peak season](https://parcellab.com/blog/what-peak-season-revealed-about-ecommerce-growth/). Terms that are uncontested in March may be heavily contested in October. The team built a plan to re-evaluate before their peak, and they noticed some bids did go back up when the season arrived.

### Phase 4: Lock in savings and reallocate (ongoing)

Many terms ended up at $0.02 CPC, down from $0.30 to $0.40. The roughly 60% in branded spend savings didn’t disappear. It was reallocated to other growth initiatives the team had planned but couldn’t previously fund.

Documentation mattered here. The team used Microsoft Loop – free with their existing subscription – to maintain test briefs, record results, and capture iteration notes. In a hyper-seasonal business where the pace during peak makes it easy to forget what happened last week, this institutional memory is essential. The analysis is re-run quarterly because competitive landscapes shift.

That covers the paid side. Now, let’s look at what happens when you take the same “test the untouchable” mindset and apply it to retention.

## Playbook 2: Building a retention journey that grew revenue per email by 77%

### Phase 1: Data foundation

Di’s team started with a simple but powerful trigger: reach out to customers who purchased in the same period the previous year, five days before their purchase anniversary. The goal is to get in front of them just before they start shopping again, and before a competitor does.

To make this work, order history data had to flow from the company’s homegrown eCommerce platform into Attentive, their email and SMS tool, via API. The key data points consisted of the specific product purchased, the product category, size or type, whether the customer had left a review, and the product’s overall rating. Any brand with an API connection between their shop system and their CRM platform can replicate this architecture. The data requirements are straightforward, even if the implementation takes coordination.
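The trigger itself is just date arithmetic. A minimal sketch of the entry condition, assuming the journey enters five days before the one-year mark (field names and the leap-day handling are illustrative, not Attentive's actual schema):

```python
# Anniversary trigger sketch: a customer enters the journey five days
# before the one-year anniversary of last year's purchase.
from datetime import date, timedelta

def anniversary_send_date(purchase_date: date) -> date:
    """Date to enter the journey: 5 days before the purchase anniversary."""
    try:
        anniversary = purchase_date.replace(year=purchase_date.year + 1)
    except ValueError:  # Feb 29 purchases roll to Feb 28 (assumed handling)
        anniversary = purchase_date.replace(year=purchase_date.year + 1, day=28)
    return anniversary - timedelta(days=5)

# A customer who bought a costume on Oct 3 last year enters the
# journey on Sep 28 this year.
print(anniversary_send_date(date(2024, 10, 3)))  # 2025-09-28
```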

### Phase 2: Build the dynamic email content

The emails weren’t templates with a name token swapped in. They were built from dynamic content blocks, each pulling different data for each recipient.

- The first block showed the **product they purchased last year** – image, name, and rating.
- The second addressed **review status:** if the customer had already reviewed the product, the email displayed their rating. If they hadn’t, it included a “review it now” call to action.
- The third block featured **new product recommendations**, personalized by category and size, but explicitly not the same theme. If someone bought a Harry Potter costume last year, the recommendations pointed to a different theme this year – the assumption being that most people don’t repeat.
- The fourth block added **UGC** – real customers in costumes, building trust and social proof.
- The fifth was the **incentive**: a coupon or site credit, depending on the test variant.
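The conditional logic behind those five blocks can be sketched in a few lines. This is an illustration of the structure, not Attentive's templating syntax – every field name and the `build_blocks` helper are hypothetical:

```python
# Sketch of the dynamic-block assembly: which content blocks a recipient
# gets, given last year's purchase record. All names are illustrative.
def build_blocks(customer):
    blocks = [("last_year_product", customer["product_name"])]
    # Block 2: show their rating if they reviewed, otherwise a review CTA.
    if customer.get("review_rating") is not None:
        blocks.append(("your_review", customer["review_rating"]))
    else:
        blocks.append(("review_cta", "Review it now"))
    # Block 3: recommend within category/size, but a different theme.
    recs = [p for p in customer["recommendations"]
            if p["theme"] != customer["theme"]]
    blocks.append(("recommendations", recs))
    blocks.append(("ugc", "customer_photos"))
    blocks.append(("incentive", customer["incentive_variant"]))
    return blocks

buyer = {
    "product_name": "Harry Potter costume",
    "theme": "Harry Potter",
    "review_rating": None,  # never left a review -> gets the CTA block
    "recommendations": [
        {"name": "Wizard robe", "theme": "Harry Potter"},
        {"name": "Pirate costume", "theme": "Pirate"},
    ],
    "incentive_variant": "coupon_10",
}
print([name for name, _ in build_blocks(buyer)])
```

Note how the same-theme recommendation is filtered out – the "most people don't repeat" assumption lives in one line of logic.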

The incentive testing produced one of the most counterintuitive findings of the entire initiative. Di explains:

**“Surprising (finding) was the coupon versus the credit. I would think people would enjoy the credit more because maybe they had $30 credit, it’s like, okay, now I’m only paying $10 versus the discount maybe wouldn’t be quite as high in a dollar amount. But they like that discount, I guess.”**

Coupons won. And the simpler incentive was also a lighter operational lift: no need to generate individual site credit codes at scale.

### Phase 3: Test, test, test – and measure profitability

The testing list was long and specific. Here are three examples plus outcomes:

- Coupon versus site credit: coupon won.
- Varying site credit amounts by segment: the highest amount was not the winner.
- Coupon value at 15% versus 10%: 10% won on profitability, because even though 15% converted at a higher rate, the margin on the 10% cohort was better.

Other tests included coupon placement at the top of the email versus lower in the body, and content variations like product recommendations, UGC, review stars, and review requests. Another variation was timing: how early to start sending, which, for costumes, turned out to be mid-August. Any earlier, and the revenue per email didn’t justify the cost.

The measurement approach was deliberately profitability-first. Revenue per email is a useful metric, but revenue per email minus the cost of the discount is an even better one. Di’s team consistently chose the variant that delivered a higher margin, even when it meant accepting a lower conversion rate.
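In code, the profitability-first comparison is one subtraction away from the vanity version. The numbers below are invented to show the shape of the trade-off, not the team's actual results:

```python
# Profitability-first comparison: revenue per email minus discount cost
# per email, not raw revenue per email. All figures are illustrative.
def margin_per_email(revenue, discount_cost, emails_sent):
    """Revenue per email net of the discount given away."""
    return (revenue - discount_cost) / emails_sent

# A 15% coupon that drives more gross revenue can still lose to a 10%
# coupon once the discount cost is subtracted.
variant_15 = margin_per_email(revenue=6_000, discount_cost=1_059, emails_sent=10_000)
variant_10 = margin_per_email(revenue=5_700, discount_cost=633, emails_sent=10_000)
print(round(variant_15, 4), round(variant_10, 4))  # 0.4941 0.5067
print("10%" if variant_10 > variant_15 else "15%")  # 10% wins on margin
```

By raw revenue per email, the 15% variant looks better; net of discount cost, the 10% variant wins – exactly the pattern that made Di's team accept a lower conversion rate.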

The compound effect of these micro-optimizations was significant, as Di points out:

**“We saw a 13% increase in conversion rate by just moving that coupon placement up to the top.”**

A single placement test moved the needle by double digits. Now imagine what a full year of iterative testing can do:

**“One year with some of the different tests we did, we increased our revenue per email from 30 cents on this journey to 53 cents per email.”**

That’s a 77% increase in revenue per send, built not from one big swing but from dozens of small, deliberate experiments stacked on top of each other.

### Phase 4: Expand and integrate channels

Once the journey proved itself for peak-season costume buyers, Di’s team expanded it. SMS was integrated into the same journey within Attentive, using channel affinity – the system auto-routes to email or SMS based on each customer’s individual engagement patterns. No manual segmentation required.

The journey logic was then adapted for other purchase categories. Decor buyers received different content than costume buyers. Plus-size purchasers saw plus-size recommendations. Pet costume buyers saw pet options. The segmentation wasn’t infinite – bandwidth constraints mean the team works in broad but meaningful buckets – but the content resonated because it matched what each customer actually bought.

From there, the model was replicated for off-season moments: Santa costumes for the holidays, Easter bunnies, and theme party occasions for New Year’s Eve. And finally, it was ported to [fun.com](http://fun.com), the sibling brand focused on licensed apparel and collectibles – a year-round business where the same purchase-anniversary logic applies but without the seasonal constraint.

## From assumption to evidence: What changed

### Paid search outcomes

Branded spend dropped by roughly 58.5%. Total revenue – paid and organic combined – held flat. CPCs on many branded terms fell from $0.30 to $0.40 down to $0.02. The freed budget was reallocated to growth initiatives the team had planned, while the playbook is now baked into the annual plan – it’s a repeatable process, not a one-time experiment.

### Retention journey outcomes

Revenue per email grew from $0.30 to $0.53 – a 77% increase. Conversion rate jumped 13% from a single coupon placement change.

Coupons beat site credits, a finding that was counterintuitive but validated through testing. The journey expanded from peak season to off-season holidays and to the [fun.com](http://fun.com) brand. And, perhaps most importantly, the journey has been running since 2019 and is still being iterated every year. The improvements compound.

### Cultural and organizational outcomes

Cross-functional task forces became the norm for high-stakes tests. Documentation practices – Microsoft Loop for project briefs, test results, and iteration notes – ensure that institutional memory survives the chaos of peak season. Audience suppression between CRM and paid is now an ongoing practice, not a one-time initiative. At the same time, the “prove it with data” mindset extended beyond marketing: office staff picking and packing in the warehouse during peak season spotted product page errors and packaging inefficiencies that fed back into process improvements. Fresh eyes, applied across functions, generated ideas that no single team would have found alone.

## Start bolder, learn faster

If there’s one piece of retrospective wisdom Di would offer, it’s this:

**“I would have tested something that was more differentiated. My A and B would have been much more different. Instead of just control and my variant having very slight tweaks to it, it would have been control, variant, very different variant. Because I think we would have gotten to our end result a lot faster.”**

The required mindset isn’t complicated, but it is demanding. It takes willingness to question what “works fine.” It takes patience to walk down slowly – Nate’s team took months to reach their floor. And it takes discipline to measure profitability, not just conversion, which sometimes means choosing the variant that converts less.

When you stop treating your paid and [retention programs](https://parcellab.com/blog/how-to-turn-your-data-into-peak-season-success-and-long-term-customer-loyalty/) as finished products and start treating them as living experiments, the wins compound. This year’s test becomes next year’s baseline. The journey Di built in 2019 is still being improved today, and every iteration adds margin.

The customers are already yours. The margin is already in your spend. You just have to be willing to put in the work.

## Your questions, answered

### How do I identify uncontested branded search terms?

Look for terms where your impression share is above 90% but your CPCs aren’t at rock-bottom levels – roughly $0.01 to $0.05. Cross-reference with organic rankings. If you’re on page one organically for those terms, there’s a strong case that your paid spend is redundant, at least outside of peak competitive periods.

### What tools do I need to run these playbooks?

Nate’s team used SEMrush for SEO data, Google Ads reports for impression share and CPCs, and Microsoft Loop for test documentation. Di’s team uses Attentive for email and SMS journeys with dynamic content blocks and channel affinity. Both emphasized that expensive tools aren’t necessary – they run a lot of analysis in Excel.

### How long does the branded bid walk-down take?

The team started in March, their off-season, and walked down weekly over several months. The low-risk period was critical – starting during peak would have been too risky. Plan for eight to twelve weeks of gradual reduction with twice-weekly check-ins before locking in your new baseline.

### What should I test first in a retention email journey?

Start with the incentive type – coupon versus credit – and its placement in the email. Di’s team saw their biggest early wins from these two variables alone. Coupon beat credit, and moving the coupon to the top of the email lifted conversion by 13%. Then layer in personalization: product recommendations by category, UGC, and review signals.

### Can I fix retention and spend at the same time?

Yes. Nate and Di ran them in parallel. They weren’t directly connected, but they were strategically complementary: the paid savings freed budget, and the retention improvements turned that freed margin into compounding customer value. The key connective tissue is audience suppression – make sure your CRM-engaged customers are excluded from paid retargeting.
