Yes, you can run and measure two campaigns simultaneously!

Measurement of overlapping campaigns under a controlled experiment design has been a bit of a sticky point for many controlled experiment proponents. Some claim that you have to hold all other campaigns while measuring, which is a difficult proposition, especially if the goal is to measure every campaign as BAU. My position is different: you have to both test and measure campaigns exactly as they are supposed to be run in the messy BAU world. It makes no sense to measure a campaign under laboratory conditions only to see that it is not nearly as effective when implemented in the real […]

Universal Control Groups and Advanced Experiments in Marketing

Universal control groups are control groups that are held out of multiple marketing communications. They are used to measure the cumulative impact of all of the communications the group is excluded from. In compound experiments, universal control groups are used in combination with individual campaign control groups, providing a very powerful tool for sales attribution. Universal control groups are commonly used to achieve these goals: measure the cumulative impact of sequential marketing communications; measure the cumulative impact of concurrent marketing communications; and measure multiple location-based tests in retail. Measuring sequential marketing communications with a universal control group: marketers believe that advertising has effects that outlive the duration of campaigns, […]
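The compound design described above can be sketched in code. This is a minimal illustration, not the author's implementation: the holdout rates, the group names, and the `assign_groups` helper are all assumptions made for the example.

```python
import random

def assign_groups(customer_ids, universal_holdout=0.05, campaign_holdout=0.10, seed=42):
    """Assign customers to a universal control group first, then split the
    remaining eligible pool into treatment and a per-campaign control group.
    The rates (5% universal, 10% campaign control) are illustrative."""
    rng = random.Random(seed)
    universal_control, eligible = [], []
    for cid in customer_ids:
        (universal_control if rng.random() < universal_holdout else eligible).append(cid)
    campaign_control, treatment = [], []
    for cid in eligible:
        (campaign_control if rng.random() < campaign_holdout else treatment).append(cid)
    return {"universal_control": universal_control,
            "campaign_control": campaign_control,
            "treatment": treatment}

groups = assign_groups(range(10000))
# treatment vs campaign_control      -> lift of this one campaign
# eligible pool vs universal_control -> cumulative lift of the whole program
```

Comparing treatment against the campaign control isolates a single campaign's lift, while comparing everyone eligible for communications against the universal control captures the cumulative impact of all of them together.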

Should you start your Y-axis at zero or is truncating OK?

Answer: Truncating the Y-axis is misleading only if you do not show the numbers in context. For example, this: [chart] This is an intentionally poor chart that uses fake data and provides no context for the numbers. In real life, you should never use a chart like this, precisely because you want to tell a story about your data point, and for that you need to show how it relates to other data. In real life, we use data like these: [chart] And the point of the chart is much clearer when you truncate the Y-axis on both sides: [chart] Let me address other concerns […]

Overview of uplift modeling in SAS

I found several good presentations on how to approach uplift modeling in SAS; here is an overview. What is uplift modeling? Uplift modeling is a technique that allows us to determine which targets are more likely to produce an incremental response when exposed to marketing material. What do you need to create an uplift model? To create an uplift model, we need to conduct an experiment. An experiment is a setup where we have targets from various groups, some of which have been exposed to our driver (aka the marketing material). Sometimes a natural experiment will work, but in the classic case, we designate test and […]
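One common way this setup is implemented is the "two-model" approach: fit one response model on the treated group, another on the control group, and score uplift as the difference between their predicted response rates. The sketch below is a deliberately simplified, hypothetical version in which each "model" is just the observed response rate per segment; `fit_rates`, `uplift_scores`, and the segment names are illustrative, not taken from the SAS presentations.

```python
def fit_rates(records):
    """'Model' each segment by its observed response rate --
    a stand-in for a real classifier such as logistic regression."""
    totals, hits = {}, {}
    for segment, responded in records:
        totals[segment] = totals.get(segment, 0) + 1
        hits[segment] = hits.get(segment, 0) + responded
    return {s: hits[s] / totals[s] for s in totals}

def uplift_scores(treated, control):
    """Two-model uplift: score = P(response | treated) - P(response | control)."""
    p_t, p_c = fit_rates(treated), fit_rates(control)
    return {s: p_t.get(s, 0.0) - p_c.get(s, 0.0) for s in set(p_t) | set(p_c)}

# (segment, responded) pairs from the experiment's treated and control groups
treated = [("loyal", 1), ("loyal", 1), ("loyal", 0), ("new", 1), ("new", 0)]
control = [("loyal", 1), ("loyal", 1), ("loyal", 0), ("new", 0), ("new", 0)]
scores = uplift_scores(treated, control)
# "loyal" responds at the same rate either way (zero uplift);
# "new" responds more when treated (positive uplift).
```

The point of the design choice is visible even in this toy: a plain response model would rank "loyal" highest, while the uplift score correctly identifies "new" as the segment that the treatment actually moves.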

Propensity to churn modeling does not help reduce churn

I hear a lot of buzz around advanced methods like predictive analytics, machine learning, data science, etc. Everyone says that’s what you need if you want to make a difference in your business. For example, executives want to see predictive models that tell them who the most “at risk” customers are so they can be targeted for retention. However, when applied to a real-life situation, this approach often fails to deliver results. This is how propensity-to-churn modeling is done: we run the model on our existing customers, and thus we know a lot about them, from their name and address, which […]

Why do uplift models fail?

Uplift (or incremental lift) modeling is generally harder to execute than response modeling. While response follows known customer traits (demographics, lifestage, transience, change in circumstances), uplift can depend on variables not commonly used in response modeling. After watching a few failed attempts to create uplift models, I can identify the most common barrier to creating a valid uplift model: marketing programs that are ineffective. Why would that matter? Let’s briefly review what an uplift model is. The dependent variable of an uplift model is the difference in response between the test (treatment) and control groups. The independent variables can be […]
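A quick way to see why an ineffective program is such a barrier: the uplift model's target is the test-minus-control response difference, and if that difference is statistically indistinguishable from zero, there is no stable signal to model. A minimal check, using assumed response counts and a normal-approximation confidence interval:

```python
import math

def lift_with_ci(test_resp, test_n, ctrl_resp, ctrl_n, z=1.96):
    """Incremental lift between test and control response rates,
    with a normal-approximation 95% confidence interval."""
    p_t, p_c = test_resp / test_n, ctrl_resp / ctrl_n
    se = math.sqrt(p_t * (1 - p_t) / test_n + p_c * (1 - p_c) / ctrl_n)
    lift = p_t - p_c
    return lift, (lift - z * se, lift + z * se)

# An ineffective program: 2.05% vs 2.00% response on 10,000 per group.
lift, (lo, hi) = lift_with_ci(205, 10000, 200, 10000)
# The interval straddles zero -- the "uplift" the model would be
# trained on is mostly noise.
```

When the overall lift looks like this, any segment-level differences the model finds are differences in noise, which is exactly the failure mode described above.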

Avinash Kaushik on controlled experiments

Back in 2011 Avinash wrote an excellent blog post about the value of controlled experiments called Measuring Incrementality: Controlled Experiments to the Rescue! In it he describes the application of a test-vs-control design to breaking down how different channels of communication perform separately vs. when used together. This is a good application of controlled experiment methods. Avinash is clearly very impressed with the methodology. I was really surprised that he called it “advanced”. As far as methods of measurement go, a simple test-vs-control design like the one he describes does not require the use of advanced methods, which is […]

How to Create Effective Control Groups

A control group must be representative of the treated group. This is most commonly achieved by random assignment of customers or subjects. When random assignment is not possible, for example, because your group has to cover a whole DMA or organizational unit, you want to find several units that are similar on the parameters that impact your outcome/measure. In this case, it is best to incorporate pre-test trends into the understanding of the test-period outcome. If using matched groups is not possible for legal or organizational reasons, your best bet is to transform your control group to match your treatment group […]
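When random assignment is impossible and you need similar units, one simple technique is greedy nearest-neighbour matching on a pre-test metric such as prior-period sales. This is a hypothetical sketch, not the author's procedure; `match_controls` and the store figures are made up for illustration.

```python
def match_controls(treated_units, candidate_units):
    """Greedy nearest-neighbour matching: pair each treated unit with the
    unmatched candidate closest to it on a pre-test metric."""
    pool = dict(candidate_units)
    matches = {}
    for name, value in treated_units:
        best = min(pool, key=lambda u: abs(pool[u] - value))
        matches[name] = best
        del pool[best]  # match without replacement
    return matches

# Pre-test weekly sales (in $000s) for treated stores and candidate controls.
treated = [("store_A", 120.0), ("store_B", 80.0)]
candidates = [("store_X", 85.0), ("store_Y", 118.0), ("store_Z", 200.0)]
matches = match_controls(treated, candidates)
# store_A pairs with store_Y (118 ~ 120); store_B pairs with store_X (85 ~ 80).
```

In practice you would match on several pre-test parameters at once, but the principle is the same: the comparison is only as good as the similarity of the matched units on the drivers of your outcome.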

Experiment Design in Practice: When Random Selection is Impossible

One of the most important principles of true experiment design is making sure the control group is representative of the treatment group. This is generally achieved by random assignment. However, sometimes random selection of customers into the groups is impossible. Here are some examples: Market-level tests. Whether you test a coupon redeemable at any store in the market or you test market-wide media like radio or TV, market tests are pretty common. Operational restrictions. Sometimes you can only flip the test on for a large group of people; many pricing experiments work that way. Legal restrictions. Again, often related to pricing, […]
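For market-level tests like these, one standard way to read the result while accounting for pre-test differences is a difference-in-differences comparison: the test market's change net of the control market's change over the same period. A minimal sketch with assumed sales figures:

```python
def diff_in_diff(test_pre, test_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the test market's change minus the
    control market's change over the same pre/post periods."""
    return (test_post - test_pre) - (ctrl_post - ctrl_pre)

# Test market sales grew 1000 -> 1150; the control market grew 900 -> 990
# over the same window, so 90 of the test market's 150-unit growth is
# background trend and the remainder is attributable to the campaign.
impact = diff_in_diff(1000, 1150, 900, 990)
```

The subtraction of the control market's trend is what makes a non-randomized market comparison defensible; without it, any seasonal or economy-wide movement would be credited to the campaign.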

Experimental Design in Practice: Dos and Don’ts of Group Size

Today I would like to share an example of design and analysis that I came across a while ago. It taught me that simplicity is often more important than complicated scientific logic. A few years ago I was asked to look into direct mail programs run by another department in my company. The design was very similar to what I ran: you have a schedule with multiple mailpieces going out to targets, and a random control group is held out from the mailings. The analysis was pretty simple: define your response window, calculate new-customer connect rates during this […]
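The analysis steps described, defining a response window and computing connect rates within it, can be sketched as follows. The 60-day window and the `connect_rate` helper are assumptions for illustration, not the actual parameters of the program in the story.

```python
from datetime import date, timedelta

def connect_rate(mail_date, connect_dates, group_size, window_days=60):
    """Share of the group that connected within the response window
    after the mail drop (window length is illustrative)."""
    window_end = mail_date + timedelta(days=window_days)
    in_window = [d for d in connect_dates if mail_date <= d <= window_end]
    return len(in_window) / group_size

mailed = date(2024, 3, 1)
test_connects = [date(2024, 3, 15), date(2024, 4, 2), date(2024, 6, 1)]
rate = connect_rate(mailed, test_connects, group_size=100)
# The June connect falls outside the 60-day window and is excluded.
```

The same calculation is run for the held-out control group, and the difference between the two rates is the program's incremental connect rate.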