
Would my loyal users be just as loyal if I didn't have the loyalty app?



Ever wondered about that? I certainly did.


My clients usually reach out a few months after the initial rush of excitement is over. “Our loyalty program is launched! It was a complete success – look at all those tens of thousands of app downloads! Look at all of those welcome offers claimed! Look at all those transactions we are tracking for everybody! It’s great, isn’t it? Right? We are doing great? Right?”


Absolutely. But gradually, a few months post-launch, the questions start creeping in. Are these loyal users generating incremental visits – or are the people likely to sign up for a loyalty app simply the ones who would show up more often and spend more anyway? And now that we know every single thing they ordered and when they ordered it… how does that help us get them to order more?


For the marketing executives selling the CEO and the Board on the program and then leading its implementation, these are very important questions. They point to the ROI of a program that can easily cost $100K+ per year in vendor fees alone for a chain of 50 locations, not counting the food cost associated with promotions or the cost of a program analyst (and extra headcount in marketing). Is the program actually generating a return?
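
To make the stakes concrete, here is a rough back-of-envelope sketch in Python. Every figure in it besides the vendor fee mentioned above (the analyst cost, the promo food cost, the margin) is an illustrative assumption, not a benchmark:

```python
# Back-of-envelope break-even math for a loyalty program (figures illustrative).
annual_vendor_fees = 100_000     # vendor fees for a ~50-location chain (from the text)
annual_analyst_cost = 70_000     # assumed fully loaded cost of a program analyst
annual_promo_food_cost = 50_000  # assumed food cost of redeemed offers
gross_margin = 0.30              # assumed contribution margin on incremental sales

total_program_cost = annual_vendor_fees + annual_analyst_cost + annual_promo_food_cost
breakeven_incremental_sales = total_program_cost / gross_margin

print(f"Total annual program cost: ${total_program_cost:,.0f}")
print(f"Incremental sales needed to break even: ${breakeven_incremental_sales:,.0f}")
print(f"Per location (50 stores): ${breakeven_incremental_sales / 50:,.0f} per year")
```

Even under friendly assumptions, the program has to move real money before it pays for itself – which is exactly why the question deserves a real answer.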


Can it prove its worth?


The metrics coming off the vendor’s dashboard look convincing at first glance, but they don’t show how things would have looked if the program weren’t there. That’s why, when my clients start facing internal questioning on the ROI of the program, I recommend starting with controlled experiments. For the first experiment, I find a handful of locations with a much lower than average penetration of loyalty users. Ideally those locations should be geographically and demographically similar – no major differences in seasonality or other external circumstances that may affect the outcome of the experiment. (If it’s impossible to get a cluster like that, I work diligently to understand and quantify the impact of those differences: for example, if one of the locations is in a university town and a big chunk of the student population disappears at the end of the school year, we have to account for the seasonal drop.)
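
If you want to systematize that screening step, a minimal pandas sketch could look like the following. The file name and columns (loyalty_penetration, region, seasonality_flag) are hypothetical stand-ins for whatever your POS and loyalty vendor actually export:

```python
import pandas as pd

# Hypothetical per-location table exported from the POS / loyalty vendor.
# Assumed columns: store_id, region, loyalty_penetration (share of transactions
# attached to a loyalty account), avg_weekly_sales, seasonality_flag (bool).
locations = pd.read_csv("locations.csv")

chain_avg = locations["loyalty_penetration"].mean()

# Step 1: screen for stores well below the chain-average penetration.
candidates = locations[locations["loyalty_penetration"] < 0.5 * chain_avg]

# Step 2: keep a geographically similar cluster and drop stores with known
# anomalies (e.g., the university-town seasonality mentioned above).
cluster = candidates[
    (candidates["region"] == candidates["region"].mode().iloc[0])
    & (~candidates["seasonality_flag"])
]

print(cluster[["store_id", "region", "loyalty_penetration", "avg_weekly_sales"]])
```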


Next, I assign half of the locations to the test group. In those locations, we go all out on promoting the loyalty program. Cashiers and waiters are trained to mention it to everyone walking through the door; we run contests (carefully designed – we don’t want fake sign-ups!) among the staff to get more app downloads. We print a standing banner by the cash register. We put table tents on every table. We run hyper-targeted local social ads to promote the app. We do paid sponsored search on mobile devices for anyone searching for us (or our direct competitors) in the target geography, pointing the person to the app download. We keep this up long enough to see the rate of sign-ups in the test stores increase substantially over the control. How substantially? At least 25% more. Ideally 50%.


Yep, it’s completely possible to get to 50% with a little focus on sign-ups.
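
A quick way to keep score during the push is to compare sign-up rates between the groups as you go. A minimal sketch, assuming a daily log with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical daily sign-up log. Assumed columns: store_id, date,
# group ("test" or "control"), signups, transactions.
signups = pd.read_csv("signups_daily.csv", parse_dates=["date"])

# Sign-up rate per group: new loyalty accounts per 1,000 transactions.
by_group = signups.groupby("group").agg(
    signups=("signups", "sum"),
    transactions=("transactions", "sum"),
)
by_group["rate_per_1k"] = 1_000 * by_group["signups"] / by_group["transactions"]

# Keep pushing until this clears the +25-50% threshold from above.
lift = by_group.loc["test", "rate_per_1k"] / by_group.loc["control", "rate_per_1k"] - 1
print(f"Sign-up rate lift, test vs. control: {lift:+.0%}")
```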


Now, it’s time for a little more patience. We wait for at least one, and preferably two, typical frequency cycles. If our pancake parlor sees a typical repeat visit among the regulars once a week, we wait at least two, ideally three, weeks. If our sandwich shop on the ground floor of an office building sees the regulars twice a week, the same two weeks cover plenty of cycles. If our themed diner brings the regulars back every month or so, we wait two to three months. It’s worth it.


Once the wait is over, I compare sales and guest counts between the test and control groups, and know exactly how much lift can be attributed to exposure to the loyalty program over a typical frequency cycle.
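
One simple way to run that comparison is a pre/post check against control – essentially a basic difference-in-differences. A minimal sketch, assuming daily store metrics with hypothetical names:

```python
import pandas as pd

# Hypothetical daily store metrics. Assumed columns: store_id, date,
# group ("test"/"control"), period ("pre"/"post"), net_sales, guest_count.
# "post" starts after the frequency-cycle wait described above.
daily = pd.read_csv("store_daily.csv", parse_dates=["date"])

avg = daily.groupby(["group", "period"])[["net_sales", "guest_count"]].mean()

# Difference-in-differences: test-group growth minus control-group growth.
for metric in ["net_sales", "guest_count"]:
    test_growth = avg.loc[("test", "post"), metric] / avg.loc[("test", "pre"), metric] - 1
    ctrl_growth = avg.loc[("control", "post"), metric] / avg.loc[("control", "pre"), metric] - 1
    print(
        f"{metric}: test {test_growth:+.1%}, control {ctrl_growth:+.1%}, "
        f"lift attributable to the program {test_growth - ctrl_growth:+.1%}"
    )
```

Comparing growth rates rather than raw averages keeps a quiet market month from masquerading as program impact.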


(Don’t want to wait? Want to preview the answer? Here is a hint: identify comparable locations with low and high loyalty penetration and run a historical comparison. It works almost as well, as long as you have a strong grasp of each location’s performance anomalies over the periods you are comparing. A couple of big storms or a local parade can throw the answers out of whack.)
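
For that impatient path, the same bucketing idea fits in a few lines. Again, the file and columns (including the anomaly_flag used to drop the storm-and-parade months) are hypothetical:

```python
import pandas as pd

# Hypothetical monthly history. Assumed columns: store_id, month,
# loyalty_penetration, same_store_sales_growth, anomaly_flag (bool for
# storms, parades, closures, and other one-off distortions).
hist = pd.read_csv("store_monthly.csv", parse_dates=["month"])

# Drop the months you know were distorted, then bucket stores into
# low vs. high loyalty penetration and compare growth.
hist = hist[~hist["anomaly_flag"]]
hist["bucket"] = pd.qcut(hist["loyalty_penetration"], 2, labels=["low", "high"])

print(hist.groupby("bucket", observed=True)["same_store_sales_growth"].mean())
```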


Just like that, the answer is here, and it’s hard to dispute: the only meaningful difference between the locations is the share of loyalty users among overall transactions. Take those findings and present them on just one well-designed slide.


Voila! We’ve proven the baseline impact of your loyalty program.


Have you done these experiments with your brand? Were you convinced by the outcomes? Did your C-suite execs buy your answers? I’d love to hear about your experience!

