Why churn reduction programs fail
Most companies have put retention programs in place to reduce customer churn. However, it is common to see beautiful reports showing how many customers were saved while the customer base stubbornly refuses to grow. Why don't those improved save rates contribute to our bottom line? I am going to draw on my 10 years of experience in retention analytics at a major telecom company to explain this phenomenon.
When implementing a churn reduction program, most companies look at churn rates by segment and decide to concentrate their retention efforts on the segments most likely to churn. Fish where the fish are. It only makes sense: to save more customers, we should target the segments most likely to churn!
There are only two problems with this widely pursued approach:
- It cannot deliver long term customer growth.
- It is not the most efficient way to reduce churn, because churn reduction is about changing customer behavior, not about the churn rate itself.
Problem #1: Targeting customers likely to churn does not grow the business in the long term.
Let's examine the strategy of targeting the most "churny" customers by running some numbers on an imaginary customer base. I promise that the results are directly applicable to any subscription business, and that they are going to surprise you.
Assumptions: our customer base consists of two somewhat caricatured types of customers. Type A is "stay and play" and Type B is "churn and burn". The first group is expected to stay with us for 10 years, and the second for one year. Let's assume that annually we connect 1,000 Type A customers and 3,000 Type B customers.
Now it is time to calculate "equilibrium active base", i.e. a stable base of customers that does not change in size or composition over time given the parameters above. Here are the results:
Given the above assumptions, our stable customer base is 13,000 subscribers, 10,000 of which are long term customers, and 3,000 are short term customers. Let's note that the composition of our customer base is different from that of either connects or disconnects, and that's a direct result of different segments having different churn rates. This situation can be observed in every subscription business.
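The arithmetic behind these numbers can be sketched in a few lines of Python. This is a toy illustration using the figures assumed in this post; the key fact is that, at equilibrium, a segment's active base equals its annual connects times its expected tenure.

```python
# Equilibrium base: a stable base where annual disconnects equal annual connects.
# base = annual_connects * expected_tenure_years

def equilibrium_base(annual_connects, expected_tenure_years):
    """Stable segment size given steady connects and a fixed expected tenure."""
    return annual_connects * expected_tenure_years

type_a = equilibrium_base(1_000, 10)  # "stay and play": 10-year tenure
type_b = equilibrium_base(3_000, 1)   # "churn and burn": 1-year tenure

print(type_a, type_b, type_a + type_b)  # 10000 3000 13000
```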
Now let's assume that the company implemented a successful churn reduction program, and calculate the impact of the program on the customer base. What kind of customer growth should we expect if we reduce the churn of each segment by 10%?
As expected, targeting the high-churn-risk segment gives us a better result in the first year. But how will this strategy play out over several years?
It turns out that targeting high-churn customers for retention programs loses steam very quickly. In the second year, the growth in the customer base from targeting low-churn-risk customers surpasses that from targeting high-risk customers. And that's just the beginning. By year four, the total number of customers saved by low-risk targeting surpasses the number retained by targeting high-risk customers.
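A minimal year-by-year simulation makes this dynamic concrete. It assumes a segment's annual churn rate is one over its expected tenure (10% for Type A, 100% for Type B), and that "reducing churn by 10%" means a relative cut (10% becomes 9%, 100% becomes 90%); those assumptions are mine, chosen to match the scenario above.

```python
# Each year a segment keeps its survivors and adds its new connects:
#   base(t+1) = base(t) * (1 - churn) + connects

def simulate(base, churn, connects, years):
    """Evolve a segment's size year by year under a fixed churn rate."""
    sizes = []
    for _ in range(years):
        base = base * (1 - churn) + connects
        sizes.append(base)
    return sizes

YEARS = 6
baseline_a, baseline_b = 10_000, 3_000  # equilibrium bases from before

# Strategy 1: target the low-risk segment (Type A churn 10% -> 9%).
target_low = simulate(10_000, 0.09, 1_000, YEARS)
# Strategy 2: target the high-risk segment (Type B churn 100% -> 90%).
target_high = simulate(3_000, 0.90, 3_000, YEARS)

for year in range(YEARS):
    print(f"year {year + 1}: low-risk targeting +{target_low[year] - baseline_a:.0f}, "
          f"high-risk targeting +{target_high[year] - baseline_b:.0f}")
```

High-risk targeting wins year one (+300 vs +100), its segment quickly settles at its new, only slightly larger equilibrium, while low-risk targeting keeps compounding and pulls ahead in total by year four.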
You can't grow your business over the long term by retaining customers who are likely to disconnect. This strategy fizzles out very quickly.
Problem #2: Targeting segments where you can make little difference.
While targeting high-risk customers cannot produce long-term growth, it may produce a short-term improvement in customer numbers. Given that many companies have to chase the very short-term goal of "hitting this quarter's numbers", it can still be a goal worth pursuing. This is where I hear a lot about using predictive analytics to identify at-risk customers for subsequent retention targeting. However, when applied to a real-life situation, this approach often fails to deliver results.
First, let's look at how churn propensity models are created. Since we run these models on our existing customers, we know a lot about them: their name and address (which we can link to demographics), what products they use, and what their payment history has been. While churn forecasting can be challenging at times, generally speaking we should expect to create a pretty good model. For example, it's not unexpected that models classify customers who are late paying their bill as being at high risk of dropping out.
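To make the idea concrete, here is a toy propensity score. The features, weights, and customers below are invented for illustration; a real model would be fit on historical data (for example with logistic regression), not hand-specified.

```python
import math

# Hypothetical fitted coefficients: late payments push risk up, tenure pushes it down.
WEIGHTS = {"intercept": -1.0, "late_payments": 0.9, "tenure_years": -0.3}

def churn_propensity(customer):
    """Logistic score: probability-like churn risk in (0, 1)."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["late_payments"] * customer["late_payments"]
         + WEIGHTS["tenure_years"] * customer["tenure_years"])
    return 1 / (1 + math.exp(-z))

customers = [
    {"id": 1, "late_payments": 3, "tenure_years": 1},  # late payer, short tenure
    {"id": 2, "late_payments": 0, "tenure_years": 8},  # pays on time, long tenure
]

# Rank customers from highest to lowest churn risk.
for c in sorted(customers, key=churn_propensity, reverse=True):
    print(c["id"], round(churn_propensity(c), 2))
```

As expected, the late payer scores far higher than the long-tenured, on-time customer — which is exactly why such models feel so convincing.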
Second, we need to establish criteria by which to evaluate our retention efforts. Obviously, we want to spend money where it has the most impact, so percent reduction in churn is a good target. For example, customers who are late on their bill may be at great risk of disconnecting, but there is not much we can do to convince them to keep the service if they can't afford it. On the other side of the spectrum are customers who are generally not at risk of churn, but our efforts to retain them may trigger them to check out competitive offers, so that our retention efforts end up generating churn rather than reducing it.
So, how is our propensity-to-churn model going to help us identify customers we should target for churn reduction? It does not. While it tells us a lot about the natural propensity to churn without any intervention, it tells us nothing about our ability to convince customers to stay longer. It turns out this is not a trivial problem. In fact, many researchers have found that our ability to influence customers may be unrelated, or even inversely related, to their propensity to leave. In other words, just because a customer segment is marked as high churn risk does not mean it is the best segment to spend our money on.
The problem with using a propensity-to-churn model is that it optimizes for the wrong thing. It predicts what the customer is going to do; it does not tell us which customers we can effectively persuade to change their minds. A model that does is sometimes called an "uplift model". This type of model can only be built in conjunction with an in-market test, where we look at different factors that help us find persuadable customers. Building an uplift model is usually quite challenging, and my general experience suggests that creating one is usually overkill. Many times we can get away with measuring our program(s) against a control group, and then cutting the results by the most important variables. This is an efficient yet simple way to resolve the conundrum.
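That control-group measurement can be sketched as follows. The segment names, churn counts, and uplift numbers here are invented example data, not results from any real program; uplift per cut is simply control churn minus treated churn.

```python
# Measure a retention program against a held-out control group,
# then cut the result by a key variable (segment, in this sketch).

def churn_rate(customers):
    """Fraction of customers in the list who churned."""
    return sum(c["churned"] for c in customers) / len(customers)

def uplift_by_segment(treated, control, segments):
    """Uplift per segment: control churn minus treated churn."""
    result = {}
    for seg in segments:
        t = [c for c in treated if c["segment"] == seg]
        h = [c for c in control if c["segment"] == seg]
        result[seg] = round(churn_rate(h) - churn_rate(t), 4)
    return result

# Invented data: the high-risk segment barely responds, the low-risk one does.
treated = ([{"segment": "high_risk", "churned": 1}] * 44
           + [{"segment": "high_risk", "churned": 0}] * 56
           + [{"segment": "low_risk", "churned": 1}] * 4
           + [{"segment": "low_risk", "churned": 0}] * 96)
control = ([{"segment": "high_risk", "churned": 1}] * 45
           + [{"segment": "high_risk", "churned": 0}] * 55
           + [{"segment": "low_risk", "churned": 1}] * 9
           + [{"segment": "low_risk", "churned": 0}] * 91)

print(uplift_by_segment(treated, control, ["high_risk", "low_risk"]))
```

In this made-up example the program cuts churn by 5 points in the low-risk segment but only 1 point in the high-risk one — the kind of cut-by-variable view that tells you where the money is actually working.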