The spend portfolio playbook: How cutting branded bids and A/B testing emails boosted revenue

Published on April 28, 2026

Two assumptions run deep in eCommerce marketing:

  • You must always bid on your own branded search terms.
  • Retention is about sending more emails with better subject lines.

Most teams treat both like common knowledge and file them under “best practice,” never to be reopened.

But what if we told you Nate Hewett and Di Lyngholm challenged those assumptions? What they found changed how their company spends every dollar.

The team behind the test

Nate leads performance marketing at HalloweenCostumes.com, part of fun.com. Di leads CRM and retention. Together, they operate in one of the most extreme seasonal environments in eCommerce. In fact, roughly 85% of their annual revenue lands in September and October. That’s six weeks to make the year.

The pressure was already significant. Then, in 2024, tariffs on imported goods compressed margins further. The mandate was clear: find efficiency without cutting the programs that drive growth. Nate and Di had each been running their programs for years, but the moment demanded fresh thinking: rather than reaching for dusty theoretical frameworks, both initiatives grew out of uncomfortable questions about things that “worked fine.”

What made those questions possible was culture. As Nate puts it:

“Everyone has to have a level of vulnerability, right? […] Everyone will look and poke holes and we’re doing it from the kindness of our hearts to help each other out.”

That vulnerability is the precondition. Without it, the playbooks below stay theoretical.

Why the traditional approach breaks down

On the paid side, Google’s automated bidding – specifically Target ROAS – is designed to optimize for revenue or conversions, but not for spend efficiency. If you have 95%+ impression share on a branded term and no competitor is bidding, the system should charge rock-bottom CPCs. Spoiler: It doesn’t. The default behavior is to keep spending at elevated levels. And because the ROAS looks “good,” nobody flags it. The waste hides behind a healthy-looking metric.

On the retention side, most programs are campaign-based, not journey-based. They blast a segment with a generic offer rather than building a triggered, personalized sequence rooted in individual purchase history. Those programs optimize for conversion rate instead of profitability, which leads to over-discounting. You hit your open rate targets while quietly eroding margin.

The fundamental truth behind both breakdowns is the same: the biggest gains aren’t hiding in new channels or new audiences. They’re hiding in the performance of things you already run, and in the micro-tests that compound over time once you stop stamping those programs as finished.

Two playbooks, one mindset: Test what you assume is untouchable

Nate’s “Uncontested Bid Walk-Down” and Di’s “Last-Year-Purchaser Retention Journey” are different in mechanics but identical in philosophy:

  • Take something the organization treats as settled.
  • Build a rigorous testing plan.
  • Walk it down or build it up methodically.
  • Let data – not assumptions – dictate the outcome.

Both required cross-functional buy-in, a defined testing cadence, and a willingness to accept that the current approach might be wrong. Both produced results that compounded: Nate’s savings freed budget for other growth initiatives. Di’s journey expanded from peak season to off-season holidays and from HalloweenCostumes.com to the fun.com brand.

Here’s how they did it.

Editorial Note: Playbook 1 is for performance marketing teams questioning their branded spend. Playbook 2 is for CRM and retention teams building personalized journeys. If your org still treats those as separate conversations, read both.

Playbook 1: Cutting branded PPC spend by 60% without losing revenue

Phase 1: Identify candidates (weeks 1 and 2)

The process starts with data, not instinct. Pull impression share data by branded term from Google Ads. Look for terms where you hold 90%+ impression share. Then cross-reference those terms with organic rankings using SEMrush or a similar tool. If you’re on page one organically and holding 90%+ impression share on paid, that term is a candidate.

Now check the CPCs. If they’re not at or near rock bottom – say $0.01 to $0.05 – automated bidding is likely inflating costs. That gap between what you’re paying and what the market requires is the waste.

The decision rule is straightforward:
High impression share + strong organic rankings + above-floor CPCs = a likely uncontested term.

Flag it for testing.
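
If it helps to see that decision rule as code, here’s a minimal sketch in Python and pandas. The column names (impression_share, organic_rank, avg_cpc) and the joined paid-plus-SEO export are our own assumptions, not Nate’s actual report; the thresholds simply mirror the rule above.

```python
import pandas as pd

# Thresholds mirror the rule above: 90%+ impression share, page-one organic
# rank, and a CPC noticeably above the $0.01-$0.05 floor.
IMPRESSION_SHARE_MIN = 0.90
ORGANIC_RANK_MAX = 10   # page one
CPC_FLOOR = 0.05

def flag_walkdown_candidates(terms: pd.DataFrame) -> pd.DataFrame:
    """Return branded terms that look uncontested and are likely overpaying."""
    dominant = terms["impression_share"] >= IMPRESSION_SHARE_MIN
    ranks_organically = terms["organic_rank"] <= ORGANIC_RANK_MAX
    above_floor = terms["avg_cpc"] > CPC_FLOOR
    return terms[dominant & ranks_organically & above_floor]

# Hypothetical joined export (paid + SEO data, one row per branded term).
terms = pd.DataFrame({
    "term": ["brand hoodie", "brand costume sale"],
    "impression_share": [0.96, 0.72],
    "organic_rank": [1, 4],
    "avg_cpc": [0.34, 0.22],
})
print(flag_walkdown_candidates(terms)["term"].tolist())  # ['brand hoodie']
```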

Phase 2: Build the testing plan and get buy-in (week 3)

Nate’s team formed a small task force of one PPC specialist and one SEO specialist. They defined success criteria upfront: total revenue – paid and organic combined – must stay flat. Not just paid revenue in isolation. The whole deal.

They mapped out a walk-down schedule: increase the Target ROAS each week to force Google to bid less aggressively. They set up a “catch-all” campaign with an extremely high efficiency target: a safety net to capture anything the main campaigns dropped without overspending.

Getting leadership buy-in meant presenting the plan with defined milestones and kill criteria. If combined revenue drops by a certain percentage for two consecutive weeks, the team pauses and re-evaluates. No ambiguity, no guesswork.
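
For concreteness, the kill criterion could be encoded along these lines. The exact drop threshold isn’t disclosed in the article, so it stays a parameter here; the two-consecutive-weeks rule comes straight from the plan.

```python
def should_pause_walkdown(weekly_combined_revenue, baseline, max_drop_pct):
    """True if combined (paid + organic) revenue sits more than max_drop_pct
    below baseline for two consecutive weeks.

    max_drop_pct is a placeholder; the team's actual threshold isn't public.
    """
    consecutive = 0
    for revenue in weekly_combined_revenue:
        drop = (baseline - revenue) / baseline
        consecutive = consecutive + 1 if drop > max_drop_pct else 0
        if consecutive >= 2:
            return True
    return False

# A >10% dip in two straight weeks trips the pause-and-re-evaluate rule.
print(should_pause_walkdown([100_000, 88_000, 87_000],
                            baseline=100_000, max_drop_pct=0.10))  # True
```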

Nate shares the result, so you know what you’re in for:
“Overall, for these branded terms that we were targeting, we cut spend by just under 60% and revenue stayed flat.”

That’s the destination – and here’s how they got there.

Phase 3: Execute the walk-down (weeks 4-12 and beyond)

Each week, the team raised their Target ROAS, nudging Google to bid less aggressively. Twice a week, the task force met to review impression share trends, organic ranking changes, CPC movements, and the competitive landscape. They also ran manual incognito searches to verify whether competitors were entering the auction.

Negative keywords were critical. As the team walked down bids on specific branded terms, they added negatives to prevent those terms from being picked up by other campaigns – Performance Max, AI Max for Search, and standard search campaigns. This step is non-negotiable. You need to actively block terms, or automated campaigns will try to fill the gap. Nate’s team was simultaneously testing AI Max for Search, which was trying to dynamically add terms back in. Keeping those tools in their lane while protecting the integrity of the test was one of the harder operational challenges.

The floor signal came when competitors started appearing on their terms: impression share dropped noticeably, and incognito checks confirmed other bidders in the auction. The team had found the bottom, so they segmented those terms and held at that level.

One important caveat: the competitive landscape changes during peak season. Terms that are uncontested in March may be heavily contested in October. The team built a plan to re-evaluate before their peak, and they noticed some bids did go back up when the season arrived.
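
Here’s a rough sketch of what a single weekly step of that walk-down logic could look like. The step size and the impression-share cutoff are illustrative assumptions; the article only says targets were raised weekly and held once competitors showed up or impression share dropped significantly.

```python
def next_target_roas(current_troas, impression_share, competitor_seen,
                     step=0.25, share_floor=0.70):
    """One weekly step of the walk-down (illustrative values, not Nate's cadence)."""
    if competitor_seen or impression_share < share_floor:
        # Floor found: segment these terms out and hold at the current target.
        return current_troas, "hold"
    return current_troas + step, "keep walking down"

print(next_target_roas(current_troas=6.0, impression_share=0.95,
                       competitor_seen=False))  # (6.25, 'keep walking down')
```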

Phase 4: Lock in savings and reallocate (ongoing)

Many terms ended up at a $0.02 CPC, down from $0.30 to $0.40. The roughly 60% saved on branded spend didn’t disappear; it was reallocated to growth initiatives the team had planned but couldn’t previously fund.

Documentation mattered here. The team used Microsoft Loop – free with their existing subscription – to maintain test briefs, record results, and capture iteration notes. In a hyper-seasonal business where the pace during peak makes it easy to forget what happened last week, this institutional memory is essential. The analysis is re-run quarterly because competitive landscapes shift.

That covers the paid side. Now, let’s look at what happens when you take the same “test the untouchable” mindset and apply it to retention.

Playbook 2: Building a retention journey that grew revenue per email by 77%

Phase 1: Data foundation

Phase 2: Build the dynamic email content

  • The first block showed the product they purchased last year – image, name, and rating.
  • The second addressed review status: if the customer had already reviewed the product, the email displayed their rating. If they hadn’t, it included a “review it now” call to action (see the sketch after this list).
  • The third block featured new product recommendations, personalized by category and size, but explicitly not the same theme. If someone bought a Harry Potter costume last year, the recommendations pointed to a different theme this year – the assumption being that most people don’t repeat a theme two years in a row.
  • The fourth block added UGC – real customers in costumes, building trust and social proof.
  • The fifth was the incentive: a coupon or site credit, depending on the test variant.
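
As a rough illustration of the conditional logic behind blocks two and three, here’s a sketch. The Customer fields, catalog shape, and copy are placeholders we invented, not fun.com’s actual data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Customer:
    last_product: str
    last_theme: str
    category: str
    size: str
    review_rating: Optional[int]  # None = never left a review

def review_block(customer: Customer) -> str:
    """Block two: show the existing rating, or ask for a review."""
    if customer.review_rating is not None:
        return f"You rated {customer.last_product} {customer.review_rating}/5."
    return f"How was your {customer.last_product}? Review it now."

def recommendation_block(customer: Customer, catalog: list) -> list:
    """Block three: same category and size, explicitly a different theme."""
    return [p["name"] for p in catalog
            if p["category"] == customer.category
            and p["size"] == customer.size
            and p["theme"] != customer.last_theme]

alex = Customer("Deluxe Wizard Robe", "wizards", "adult", "M", review_rating=None)
print(review_block(alex))  # falls back to the "review it now" call to action
```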

Phase 3: Test, test, test – and measure profitability

  • Coupon versus site credit: coupon won.
  • Varying site credit amounts by segment: the highest amount was not the winner.
  • Coupon value at 15% versus 10%: 10% won on profitability, because even though 15% converted at a higher rate, the margin on the 10% cohort was better.
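
To see why a lower-converting coupon can still win, here’s the kind of back-of-the-envelope arithmetic involved. Every number below (conversion rates, AOV, margin) is invented for illustration; only the structure of the comparison reflects the test.

```python
def profit_per_email(conversion_rate, aov, gross_margin, discount):
    # Contribution per email sent: discounted revenue minus cost of goods,
    # spread across every send (converting or not).
    return conversion_rate * aov * (gross_margin - discount)

# Invented numbers: the 15% coupon converts better, the 10% still wins on contribution.
ten     = profit_per_email(conversion_rate=0.020, aov=60, gross_margin=0.45, discount=0.10)
fifteen = profit_per_email(conversion_rate=0.023, aov=60, gross_margin=0.45, discount=0.15)
print(f"10% coupon: ${ten:.2f}/email   15% coupon: ${fifteen:.2f}/email")
# 10%: 0.020 * 60 * 0.35 = $0.42      15%: 0.023 * 60 * 0.30 = $0.41
```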

Phase 4: Expand and integrate channels

From assumption to evidence: What changed

Paid search outcomes

Retention journey outcomes

Cultural and organizational outcomes

Start bolder, learn faster

Your questions, answered