Yes, you can run and measure two campaigns simultaneously!

Measuring overlapping campaigns with a controlled experiment design has been a bit of a sticking point for many controlled experiment proponents. Some claim that you have to hold back other campaigns while measuring, which is a difficult proposition, especially if the goal is to measure every campaign as business as usual (BAU).

My position is different. You have to both test and measure campaigns exactly as they are supposed to be run in the messy BAU world. It makes no sense to measure a campaign under laboratory conditions only to find that it is not nearly as effective when implemented in the real world. The question is whether controlled experiments are going to help us correctly measure the effectiveness of messy communications.

Let’s consider a simple example first: what if we have two overlapping communications that go to the same audience at the same time?

Simple Overlapping Campaign Experiment Assumptions:

  • Both communications target exactly the same customer group, i.e. 100% target overlap.
  • The communications’ response windows fully overlap.
  • A 10% control (holdout) group is randomly drawn from each target list; these groups do not receive the respective communication, but may receive “the other” communication.
  • The incremental impact of each campaign is a 2% increase in sales over the baseline (i.e. spontaneous) level.
  • The impact of the two communications is fully additive – customers who receive both communications respond at an incremental 2% + 2% = 4% level.
  • Both campaigns are measured by comparing sales of the treatment group to its respective control group.

First, let’s see who in the target audience gets what.

This breakdown is going to result in certain levels of incremental sales in each group, and we can use them to determine what our controlled experiment is going to produce.

As you can see, simple math shows that the controlled experiment design results in a flawless assessment of incremental sales for both campaigns.
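That simple math can be sketched in a few lines of Python. This is an expected-value illustration of the assumptions above (100% target overlap, independent 10% holdouts, fully additive 2% lifts); the baseline sales figure is a hypothetical placeholder, and no sampling noise is modeled:

```python
# Expected-value sketch of the simple overlapping-campaign experiment.
# BASELINE is a hypothetical per-customer sales figure; lifts and holdout
# fraction follow the assumptions listed above.

BASELINE = 100.0   # baseline (spontaneous) sales per customer, illustrative
LIFT_A = 0.02      # incremental lift of campaign A over baseline
LIFT_B = 0.02      # incremental lift of campaign B over baseline
HOLDOUT = 0.10     # control (holdout) fraction for each campaign

def expected_sales(gets_a: bool, gets_b: bool) -> float:
    """Fully additive impact: each received campaign adds its lift to baseline."""
    return BASELINE * (1 + LIFT_A * gets_a + LIFT_B * gets_b)

def measured_lift_for_a() -> float:
    """Compare A's treatment group to A's holdout. Because B's holdout is
    drawn independently, both groups contain the same 90/10 mix of
    B-receivers, and the mix cancels out in the comparison."""
    treat = (1 - HOLDOUT) * expected_sales(True, True) \
            + HOLDOUT * expected_sales(True, False)
    control = (1 - HOLDOUT) * expected_sales(False, True) \
              + HOLDOUT * expected_sales(False, False)
    return (treat - control) / BASELINE

print(f"Measured lift for campaign A: {measured_lift_for_a():.4%}")  # 2.0000%
```

The B-receiver mix appears identically in A's treatment and control groups, so it subtracts out and the measured lift comes back at exactly 2%, matching the true incremental impact.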

This was a simple example. Now let’s up the ante, relax some of the assumptions, and see what happens.

Generalized Overlapping Campaigns Experiment Assumptions:

  • The communications’ response windows fully overlap.
  • Both communications target similar groups, with a variable degree of overlap.
  • Control groups of variable size are randomly drawn from each target list; these groups do not receive the respective communication.
  • The incremental impact of each campaign is an increase in sales over the baseline (i.e. spontaneous) level, and can vary between communications.
  • The impact of the two communications is not necessarily additive – customers who receive both can respond at a level lower or higher than the sum of the two individual lifts.
  • When determining the incremental sales of a campaign, we are not after the “clean” lift that happens when the campaign is run by itself, but the adjusted incremental sales, “polluted” by having the other campaign in the marketplace.
  • Both campaigns are measured by comparing sales of the treatment group to its respective control group.

Since this is a more general design, it needs a more robust setup. I have created an Excel file that you can download to play with all of the model’s assumptions and see what result a controlled experiment design produces.
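In the same spirit as the spreadsheet, here is a minimal Python sketch of the generalized model. Every number below is an illustrative assumption, a knob you can turn just like a spreadsheet cell: the lifts and holdout size differ between campaigns, OVERLAP is the share of campaign A's target that also sits in B's target, and SYNERGY scales the combined lift (1.0 would mean fully additive):

```python
# Illustrative sketch of the generalized overlapping-campaign model.
# All parameter values are assumed knobs, analogous to spreadsheet inputs.

BASELINE = 100.0              # baseline (spontaneous) sales per customer
LIFT_A, LIFT_B = 0.02, 0.03   # incremental lifts can differ between campaigns
HOLDOUT_B = 0.20              # campaign B's control (holdout) fraction
OVERLAP = 0.60                # share of A's target that is also in B's target
SYNERGY = 0.8                 # combined lift = SYNERGY * (LIFT_A + LIFT_B)

def sales(gets_a: bool, gets_b: bool) -> float:
    """Expected sales per customer; the combined effect is sub-additive
    (or super-additive) depending on SYNERGY."""
    if gets_a and gets_b:
        return BASELINE * (1 + SYNERGY * (LIFT_A + LIFT_B))
    return BASELINE * (1 + LIFT_A * gets_a + LIFT_B * gets_b)

def measured_lift_a() -> float:
    """Lift of A as the controlled experiment measures it: A's treatment
    group vs. A's holdout. Both groups contain the same mix of B-receivers,
    so the result is the adjusted lift with B 'polluting' the marketplace."""
    p_b = OVERLAP * (1 - HOLDOUT_B)   # chance a customer on A's list gets B
    treat = p_b * sales(True, True) + (1 - p_b) * sales(True, False)
    control = p_b * sales(False, True) + (1 - p_b) * sales(False, False)
    return (treat - control) / BASELINE

# Analytic "adjusted" lift of A given B in market, for comparison:
p_b = OVERLAP * (1 - HOLDOUT_B)
adjusted = p_b * (SYNERGY * (LIFT_A + LIFT_B) - LIFT_B) + (1 - p_b) * LIFT_A
print(f"measured: {measured_lift_a():.4%}, adjusted true lift: {adjusted:.4%}")
```

In this expected-value sketch, the experiment recovers the adjusted incremental lift exactly; in practice, sampling noise in finite treatment and holdout groups adds variance around that value, which is a sample-size question rather than a design flaw.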

Link to full spreadsheet coming.

The bottom line: a controlled experiment can be used to accurately measure the real incremental impact of overlapping marketing campaigns.