Controlled Experiments in Marketing
Effective marketing programs, by definition, produce results that are incremental to the status quo. What we are after is the difference between having the program in place and not running it at all, and this difference is rather elusive.
The bread and butter of analytics is evaluating the results of marketing programs or tests that the company runs. When looking at a campaign, the first thing that comes to mind is its direct outcome (redemptions, sales, clicks, etc.). While it is relatively easy to calculate the direct outcome, it is much harder to interpret that number: is it good or bad relative to what would have happened without the campaign?
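One common way to answer the "good or bad" question is to compare the treated group against a randomized holdout. Here is a minimal sketch of that comparison using a two-proportion z-test; the function name and the campaign numbers are hypothetical, made up for illustration:

```python
from math import sqrt

def incremental_lift(test_resp, test_n, ctrl_resp, ctrl_n):
    """Compare response rates of a treated group and a holdout control.

    Returns (incremental_rate, z_score): the lift attributable to the
    program and a two-proportion z statistic for its significance.
    """
    p_t = test_resp / test_n
    p_c = ctrl_resp / ctrl_n
    p_pool = (test_resp + ctrl_resp) / (test_n + ctrl_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n))
    return p_t - p_c, (p_t - p_c) / se

# Hypothetical campaign: 540 of 10,000 treated customers responded,
# versus 450 of 10,000 held-out controls.
lift, z = incremental_lift(540, 10_000, 450, 10_000)
print(f"incremental response: {lift:.2%}, z = {z:.2f}")
```

The 5.4% raw response rate on its own says little; the 0.9-point gap over the control, and whether that gap clears the noise, is what actually measures the program.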
In analytics, bias is a situation where your benchmark group is not representative of your program group. As a result, your assessment of the results of the program can be wrong, leading to bad decisions.
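To make the bias point concrete, here is a small simulation (hypothetical numbers, pure Python) where the program has zero true effect, yet a non-representative benchmark makes it look highly effective:

```python
import random

random.seed(42)

# Hypothetical simulation: each customer has a baseline purchase
# propensity; engaged, high-propensity customers self-select into
# the program, which itself adds nothing (true lift = 0).
customers = [random.uniform(0.0, 0.2) for _ in range(100_000)]
enrolled = [p for p in customers if p > 0.1]    # self-selected program group
benchmark = [p for p in customers if p <= 0.1]  # everyone else

def rate(group):
    """Simulate one purchase cycle and return the observed response rate."""
    return sum(random.random() < p for p in group) / len(group)

print(f"program group:   {rate(enrolled):.3f}")
print(f"benchmark group: {rate(benchmark):.3f}")
# The gap looks like program impact but is pure selection bias.
```

A randomized holdout drawn from the same population as the program group would show the true lift of zero.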
Why churn reduction programs fail
Most companies have put in place retention programs to reduce churn of their customers. However, it is common to see beautiful reports that show how many customers were saved, while the customer base stubbornly refuses to grow over time. Why do those improved save rates not contribute to our bottom line? I am going to draw from my 10 years of experience in retention analytics with a major telecom company to explain this phenomenon.
Posts on Experimental Design
Measurement of overlapping campaigns when using controlled experiment design has been a bit of a sticking point for many controlled experiment proponents. Some claim that you have to put other campaigns on hold while measuring, which is a difficult proposition, especially if the goal is to measure every campaign as BAU. My position is different. You have to both test and measure campaigns exactly as they are supposed to be run in the messy BAU world. It makes no sense to measure a campaign under laboratory conditions only to see that it is not nearly as effective when implemented in the real […]
Universal control groups are control groups that are held out of multiple marketing communications. They are used to measure the cumulative impact of all of the communications the group is excluded from. In compound experiments, universal control groups are used in combination with individual campaign control groups, providing very powerful tools for sales attribution. Universal control groups are commonly used to achieve these goals: measure cumulative impact of sequential marketing communications; measure cumulative impact of concurrent marketing communications; measure multiple location-based tests in retail. Measuring sequential marketing communications with a universal control group: marketers believe that advertising has effects that outlive the duration of campaigns, […]
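The attribution idea behind combining the two kinds of control groups can be sketched in a few lines. All rates below are hypothetical, invented for illustration:

```python
# Hypothetical response rates after a season of overlapping campaigns.
universal_control_rate = 0.040   # held out of ALL campaigns
targeted_rate = 0.058            # received the full campaign mix

# Per-campaign control groups (each held out of ONE campaign) give
# individual lifts; the universal holdout gives the cumulative lift.
individual_lifts = {"email": 0.006, "direct_mail": 0.005, "sms": 0.004}

cumulative_lift = targeted_rate - universal_control_rate
sum_of_individual = sum(individual_lifts.values())

print(f"cumulative lift (universal control): {cumulative_lift:.3f}")
print(f"sum of individual campaign lifts:    {sum_of_individual:.3f}")
# A gap between the two numbers suggests the campaigns interact
# (cannibalize or reinforce each other) rather than add up cleanly.
```

This is why the combination is powerful: individual controls attribute sales to each campaign, while the universal control keeps the total honest.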
I found several good presentations on how to approach uplift modeling in SAS, and here is their overview. What is uplift modeling? Uplift modeling is a technique that allows us to determine which targets are more likely to produce incremental response when exposed to marketing material. What do you need to create an uplift model? To create an uplift model, we need to conduct an experiment. An experiment is a setup where we have targets from various groups, some of which have been exposed to our driver (aka marketing material). Sometimes a natural experiment will work, but in a classic case, we designate test and […]
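The posts above use SAS, but the core idea behind uplift modeling can be sketched in a few lines of plain Python: within each segment, subtract the control response rate from the treated response rate. The segments and records below are hypothetical toy data:

```python
from collections import defaultdict

# Hypothetical experiment records: (segment, was_treated, responded).
records = [
    ("young", True, 1), ("young", True, 0), ("young", True, 1),
    ("young", False, 0), ("young", False, 0), ("young", False, 1),
    ("senior", True, 0), ("senior", True, 1), ("senior", True, 0),
    ("senior", False, 1), ("senior", False, 1), ("senior", False, 0),
]

def uplift_by_segment(records):
    """Estimate uplift per segment: treated rate minus control rate."""
    stats = defaultdict(lambda: {True: [0, 0], False: [0, 0]})
    for seg, treated, resp in records:
        stats[seg][treated][0] += resp   # responders
        stats[seg][treated][1] += 1      # group size
    return {
        seg: g[True][0] / g[True][1] - g[False][0] / g[False][1]
        for seg, g in stats.items()
    }

for seg, u in uplift_by_segment(records).items():
    print(f"{seg}: uplift = {u:+.2f}")
```

In this toy data the "young" segment shows positive uplift while "senior" shows negative uplift (the treatment appears to backfire), which is exactly the kind of distinction uplift modeling exists to find; a real model replaces the hand-picked segments with a learned scoring function.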