Building the Business Case for Behavioral Targeting
It is often said that building (or proving) the business case for (site-side) behavioral targeting has been a lot harder than justifying an investment in more straightforward site optimization techniques such as A/B testing.
As a result, you can read independent industry analyst reports observing that some applications capable of both testing and targeting (hint, hint) are used far more often just for testing than for targeting today.
You can even hear from some of the best-known and most experienced consultants in the online optimization industry that they are not convinced by the business case for (site-side) behavioral targeting because it seems less clear-cut than the case for testing.
It doesn’t need to stay this way.
The problem is that we have been asking the wrong question.
The question should not be how to “prove the business case for behavioral targeting” in the abstract. Instead, we need to make the question specific to the targeting use case that the marketer wishes to pursue or prioritize.
That is to say, we need to seek the business case for using behavioral targeting technology to do one or multiple of the following things:
- Improve conversion rates for acquiring new clients
- Improve on-boarding of customers
- Improve cross- / up-sell
- Improve customer service case resolution times
- Improve customer retention
- Improve win-back of former customers
- Improve satisfaction with the site’s usability, i.e. how easily visitors find what they are looking for
When restated in this fashion, the business case becomes much clearer. For example, if behavioral targeting allows you to improve customer retention by 1%, then you can calculate what that is worth to your business.
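To make that concrete, here is a minimal back-of-the-envelope sketch in Python. Every input (customer count, retention rates, per-customer value) is a made-up assumption purely for illustration; plug in your own figures.

```python
# Back-of-the-envelope value of a one-point retention improvement.
# All inputs below are illustrative assumptions, not real figures.

customers = 500_000                 # assumed active customer base
baseline_retention = 0.80           # assumed annual retention without targeting
targeted_retention = 0.81           # assumed retention with behavioral targeting (+1 pt)
annual_value_per_customer = 120.0   # assumed average annual margin per retained customer

extra_customers_retained = customers * (targeted_retention - baseline_retention)
incremental_value = extra_customers_retained * annual_value_per_customer

print(f"Extra customers retained: {extra_customers_retained:,.0f}")
print(f"Incremental annual value: ${incremental_value:,.0f}")
```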
How do you prove it then?
How, though, do you prove that behavioral targeting has helped you improve XYZ by some percentage?
Simple
You do it through hold-out testing. You simply compare what happens with the hold-out group versus the test group that is exposed to behaviorally targeted recommendations for the given use case.
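To illustrate what that comparison might look like in practice, here is a rough sketch of a two-proportion z-test between the hold-out and targeted groups, using SciPy. The visitor and conversion counts are invented for the example and are not from any real program.

```python
# Sketch of a hold-out comparison: behaviorally targeted group vs. hold-out group.
# The counts below are invented; substitute your own numbers.
from math import sqrt
from scipy.stats import norm

holdout_visitors, holdout_conversions = 50_000, 1_500     # no targeting
targeted_visitors, targeted_conversions = 50_000, 1_680   # behaviorally targeted

p_holdout = holdout_conversions / holdout_visitors
p_targeted = targeted_conversions / targeted_visitors
lift = (p_targeted - p_holdout) / p_holdout

# Two-proportion z-test for the difference in conversion rates
p_pooled = (holdout_conversions + targeted_conversions) / (holdout_visitors + targeted_visitors)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / holdout_visitors + 1 / targeted_visitors))
z = (p_targeted - p_holdout) / se
p_value = 2 * (1 - norm.cdf(abs(z)))   # two-sided

print(f"Hold-out conversion rate: {p_holdout:.2%}")
print(f"Targeted conversion rate: {p_targeted:.2%}")
print(f"Relative lift:            {lift:.1%}")
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```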
If you think that hold-out testing is complicated … then you have no business even thinking about behavioral targeting. Your organization needs to first learn how to do A/B testing.
Why has this been so hard for online marketing optimizers then?
My personal guess is that it is because:
- Despite the wonderful 2001 E-Metrics paper by Jim Sterne and Matt Cutler, web analysts are – still – not thinking enough about the customer life cycle (i.e. acquire, convert, on-board, grow lifetime value, retain, etc.). Instead, analysts may be too busy optimizing ads and pages. We aren’t measuring customers; we are measuring ads, pages, and transactions. And frankly, web analytics tools were originally created for the latter, and most do a horrible job when it comes to measuring customers.
- We seem to have a blind spot for hold-out groups somehow. Tellingly, Jim Novo and Kevin Hillstrom have frequently had to remind their readers of this neglect. Strange, though. After all, hold-out testing is just another name for A/B testing, which we supposedly master so well online.
Go figure
“too busy optimizing ads and pages” indeed.
One of the great mysteries of our time. Some web analysts have told me they are not “permitted” to analyze customers; they have no access to the data.
Others have told me “the boss” is responsible for Sales, so Profit is not something they care to optimize for. That is an argument I’d like to see their boss make in front of the CFO and CEO.
And yes, the tools themselves are often not designed to track customer behavior over time – unless you pay up. The thing is, it’s relatively simple for most of these tools to pass “the good stuff” from online into the back end, where analysis could be done to prove that the investment in the advanced web tools would be worth it.
All it takes is one analysis that proves (to take a very common example) that the campaign with the highest conversion rate generates the lowest-value customers, and that because of this the company has left $XX million in profit on the table over the past year.
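A minimal sketch of that kind of analysis, with entirely invented campaign figures, might look like this (the point is only that conversion rate and downstream customer value can rank campaigns in opposite orders):

```python
# Illustrative only: join campaign conversion rates with downstream customer value.
# Every number here is invented to show the shape of the analysis.
campaigns = [
    # (name, visitors, conversions, avg. 12-month profit per acquired customer)
    ("Campaign A", 100_000, 5_000, 40.0),    # highest conversion rate
    ("Campaign B", 100_000, 3_000, 180.0),   # lower conversion rate, better customers
]

for name, visitors, conversions, profit_per_customer in campaigns:
    conv_rate = conversions / visitors
    total_profit = conversions * profit_per_customer
    print(f"{name}: conversion rate {conv_rate:.1%}, "
          f"12-month profit ${total_profit:,.0f}")
```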
Game – Set – Match
Hi Jim,
Good point about starting with an extract or feed from the web analytics solution to prove the value and get more resources granted that way.
…
I remember one client of Unica’s started that way. They found an unused old server under some desk. They pulled a feed of visitor-level data onto that server, proved the value of the data, and turned the experiment into a permanent program.
…
Soon the server was too small, but they got more resources granted. Then one day their “manual” scoring/profiling of leads didn’t scale anymore, but by then they had proven enough value to get an SPSS resource assigned to the project.
…
Anyway, I look back fondly at our May 2009 WAA webcast with you and Kevin Hillstrom, where you described what near-term value can be generated from “the good stuff” data and Kevin described his method for deriving longer-term forecasts. For interested readers, the replay link is below:
http://register.webcastgroup.com/event/?wid=0870519094639
…
Thank you for sharing!
Akin
Good blog. Perhaps feature targeting (I think the term ‘feature’ is more general than ‘behavioral’) is inherently a more complex process, and the folks who run most online optimization efforts tend not to have backgrounds that would make them comfortable using targeting. Here are three more reasons why it might be hard for online marketing optimizers:
1) Operationally more complex – the application needs to have access to the user data at decision time and pass that to the decision engine.
2) One often needs a feature mapping to convert the raw input data into more useful targeting features, as well as a feature selection process to determine which features will be useful in targeting (i.e. what data to use and in what form; look up ‘Regularization’ for more on feature selection – a rough sketch follows after this list). This is a real issue for most learning problems, so I don’t see how it can get hand-waved away in the online optimization space.
3) There are now two types of variables to report on: with feature targeting there are the policy variables (the things that we control) as well as the targeting variables (data about the users and context). Reporting on targeted testing is easy(ish) since you tend to have mutually exclusive segments – but how do you report and explain the results of decision systems that make use of function approximation (possibly nonlinear) techniques? My sense is that web analytics folks tend to feel more comfortable with reporting and analysis than with automated optimization, so my guess is that many will tend to focus not on the marginal benefit of the targeting implementation as a whole (the ROI of the targeting project) but might get overwhelmed by trying to understand the impact of each targeting feature.
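As a rough illustration of the feature selection point in 2), here is a sketch of L1-regularized (lasso-style) logistic regression with scikit-learn; the feature names and the synthetic data are placeholders, not a recommendation of any particular toolset.

```python
# Rough sketch: L1-regularized logistic regression as a feature selection step.
# Feature names and the synthetic data are placeholders for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
feature_names = ["visits_last_30d", "pages_per_visit", "is_returning",
                 "searched_on_site", "came_from_email"]
X = rng.normal(size=(n, len(feature_names)))

# Synthetic response: only the first and fourth features actually matter here
logits = 0.8 * X[:, 0] + 1.2 * X[:, 3] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# The L1 penalty drives coefficients of uninformative features toward zero
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

for name, coef in zip(feature_names, model.coef_[0]):
    decision = "keep" if abs(coef) > 1e-6 else "drop"
    print(f"{name:20s} coef={coef:+.3f} -> {decision}")
```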
Thanks,
Matt
Hi Matt,
You are hitting the nail on the head with your comment. In testing it is OK to let the test automation decide randomly which version it is going to serve to each individual. Targeting applications born in the online space have followed the same black-box model (just let the app figure it out) but, as you are saying, that is not enough for marketers.
…
Namely, as you are saying, there is a set of targeting variables that the app can figure out. But there is another set of policy rules that the marketers (i.e. the business) need to be able to control.
…
If I may put in a plug here: Unica’s Interact targeting application comes at this from the opposite side. It requires users to start with the segmentation and policies, which are configured as flow charts. Then, within those segments, the app applies some self-learning algorithms.
…
Some combo such as this is needed to make targeting real.
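For what it’s worth, a toy sketch of that kind of combination might look like the following: marketer-defined segment rules decide which offers are eligible, and a simple self-learning chooser (epsilon-greedy here) picks among them within each segment. The segments, offers, and learner are all hypothetical, and this is not how Interact itself is implemented.

```python
# Toy sketch: marketer-controlled segment rules plus a simple self-learning
# (epsilon-greedy) chooser within each segment. All names are hypothetical.
import random
from collections import defaultdict

SEGMENT_OFFERS = {   # policy controlled by the marketer
    "new_visitor": ["welcome_banner", "free_shipping"],
    "returning_customer": ["loyalty_points", "cross_sell_bundle"],
}

stats = defaultdict(lambda: {"shown": 0, "converted": 0})

def assign_segment(visitor):
    # Hypothetical marketer-defined rule: prior purchases decide the segment
    return "returning_customer" if visitor.get("purchases", 0) > 0 else "new_visitor"

def choose_offer(segment, epsilon=0.1):
    offers = SEGMENT_OFFERS[segment]
    if random.random() < epsilon:             # explore occasionally
        return random.choice(offers)
    def rate(offer):                          # exploit: best observed conversion rate
        s = stats[(segment, offer)]
        return s["converted"] / s["shown"] if s["shown"] else 0.0
    return max(offers, key=rate)

def record_outcome(segment, offer, converted):
    s = stats[(segment, offer)]
    s["shown"] += 1
    s["converted"] += int(converted)
```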
…
How is your company Conductrics approaching this area? It sounds very interesting based on your website.
Akin