The Teeter-Totter of Customer Data Management



I had a professor in business school who used to explain the relationship between the stock market and the bond market as a teeter-totter (in rural southeast Texas, I grew up knowing this as a see-saw): as the yields on one went up, the yields on the other went down and vice versa. 

Managing your customer data can be like that, too — the more of a burden you put on your customers and prospects to keep your data about them clean, the less of a burden you put on yourself. And, likewise, the more of a burden you take on yourself, the less of a burden you’re putting on your customer.

While bouncing through links from a tweet, I stumbled across Steve Woods’s original Contact Washing Machine post, and it set some alarm bells off. Steve’s a damn sharp guy — he was a co-founder and remains the CTO of Eloqua, and he is pretty much an undisputed visionary when it comes to marketing automation technology. Yet, this post sparked an immediate reaction, as well as teeter-totter imagery. Since then, Steve has clarified…and I think I misread his initial premise. His point is that data cleansing should happen as early in the data acquisition process as possible — cleanse the data as it comes in, rather than crossing your fingers and waiting to run batch processes after the fact in the hopes that the data will get cleaned up.

That’s a valid point, but, after digging deeper into the cross-links in the post, I still think there’s some under-estimating of what it takes to “fix” dirty data as it comes in. For starters, when it comes to customer/prospect data, there are typically a range of incoming data entry points:
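To make the "cleanse as it comes in" idea concrete, here is a minimal sketch of normalizing a contact record at the point of entry rather than in a later batch job. The record shape, field names, and validation rules are my own illustrative assumptions, not anything prescribed by Steve's post:

```typescript
// Hypothetical contact record; field names are illustrative assumptions.
interface ContactRecord {
  email: string;
  phone: string;
}

// Lowercase and trim the address; reject only the clearly malformed.
function normalizeEmail(raw: string): string | null {
  const email = raw.trim().toLowerCase();
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email) ? email : null;
}

// Strip formatting characters so "(555) 123-4567" and "555.123.4567"
// are stored identically.
function normalizePhone(raw: string): string | null {
  const digits = raw.replace(/\D/g, "");
  return digits.length >= 10 ? digits : null;
}

// Cleanse at ingest: keep what normalizes cleanly, drop what doesn't,
// instead of letting raw values pile up for a batch process to fix later.
function cleanseOnIngest(raw: ContactRecord): Partial<ContactRecord> {
  const cleaned: Partial<ContactRecord> = {};
  const email = normalizeEmail(raw.email);
  if (email) cleaned.email = email;
  const phone = normalizePhone(raw.phone);
  if (phone) cleaned.phone = phone;
  return cleaned;
}
```

The point of a sketch like this is where it runs — at the moment of capture — not the particular rules, which any real system would tailor to its own data.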

Web Data Entry

In the world o’ the web, data can come into your systems directly as typed by a visitor to your site — when a user is filling out a web form, for instance. On the surface, that’s a great place to do data validation, because you’ve got the actual user right there to clarify anything that has gone amiss. If he’s fat-fingered his phone number or put in an e-mail address that is clearly not valid, it’s best to prompt him right then and there to correct the mistake. But, the teeter-totter comes into play: if that piece of data is really not germane (as perceived by the user), it doesn’t take long for your cleansing to lead to a frustrated visitor to your site. Worse, if you don’t allow the user to bypass the validation step (with an “I don’t care what you think, I’ve entered the information correctly, so just keep it that way and let me move on” option), there is a very good chance that you will keep some visitors from ever getting to where they and you want them to go!

If you include field validation on your web forms, and if you don’t allow the user to override that validation, it behooves you to include detailed form abandonment tracking in your web analytics to make sure you haven’t set up an insurmountable barrier for some of your customers.
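One way to build that bypass in is “soft” validation: warn the visitor about a suspect value, but accept it anyway once he confirms it is correct as entered. The function name, rule, and wording below are my own hypothetical sketch of the pattern, not a reference implementation:

```typescript
interface FieldResult {
  accepted: boolean;
  warning?: string;
}

// Soft validation: a suspect phone number triggers a warning, but the
// visitor can confirm the value is right as entered and move on.
function validatePhoneField(value: string, overrideConfirmed: boolean): FieldResult {
  const digits = value.replace(/\D/g, "");
  const looksValid = digits.length >= 10; // illustrative rule only
  if (looksValid || overrideConfirmed) {
    return { accepted: true };
  }
  return {
    accepted: false,
    warning: "This phone number looks incomplete. Correct it, or confirm it is right as entered.",
  };
}
```

The first failed submit sets the warning; if the visitor checks a “this is correct” box, the second submit passes `overrideConfirmed: true` and the form goes through — so the validation never becomes the insurmountable barrier described above.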

Human Data Entry

Call centers almost always serve a data entry function as part of the customer service process. In addition, many companies have dedicated data entry staff to translate mail, fax, tradeshow-collected leads, or other transactions. This can be a great opportunity to clean your data up front, as you can certainly place a higher burden of getting the data right and enforced data validation on employees of your own company than you can on your customers and prospects.

BUT, this turns out to be a stickier wicket than it seems at first blush. If I had a nickel for every time I heard someone living in the world of backend data propose data augmentation or enhancement by updating the human data entry processes to “just add one more quick step,” I’d be able to buy a Starbucks Venti Caramel Frappuccino® blended coffee (which is a lot of nickels, if you think about it). Two reasons that there should be a proceed-with-extreme-caution label placed prominently on any solution that heads down this path:

  • Call centers typically live and die by the average handle time (AHT) for their calls; yes, they want to meet the customer’s needs, but they also, out of necessity, can save big dollars by cutting the AHT by a few seconds on average. Adding 5 or 10 seconds to every call can have a very real impact (and can make you some quick enemies among call center managers).
  • It’s easy to identify the benefits of more, more complete, or cleaner data…when it comes to backend processes and data analysis. But, is that benefit readily evident to the people whom you’re relying on to capture it? Does it benefit them directly, either by smoothing the immediate next steps in their process or by impacting their compensation? Due to the high-volume nature of call center and data entry work, data that is “just another field you need to fill out” is data that is at risk of falling prey to shortcuts (the first value in the dropdown, “aaa” in a text field, etc.). The most successful introductions of process changes have a net-no-change or a net decrease in the number of steps, the time, and the complexity of the process into which they are being introduced.

Human data entry offers opportunities to get data that is more complete and cleaner…but those opportunities don’t come automatically.

There are many other ways that data can enter your systems: provided by an intermediary (often semi-independent sales channels: distributors, resellers, etc.), sourced from a third-party lead sourcing company, passed in from another system within your company (often a system that doesn’t store the data in the same format or even have the same definitions for what specific fields mean and are used for), etc. There’s value in inspecting the sources of your customer data, assessing how clean the data is that comes from those different sources, and then, with the teeter-totter firmly in mind, investigating where and how to get that data coming in cleaner!

Photo courtesy of jhirtz.

2 Comments


  1. And this is why Business Process Redesign, Change Management and Data Governance HAVE to be part of any initiative to improve Customer Data Management. Most programs only attack the “data residue” of ineffective process and governance and then naively believe that if they build a better app, the users (internal or external) will use it appropriately. Lots of failed projects (including many failed CRM implementations from the late 90s) fell victim to this incomplete approach.

  2. Tim,
    Thanks for calling me to task on the original post – your points (as always) have been bang on and really got me thinking much more about the challenge and issue of balancing the customer experience and data quality.

    It’s interesting that with all the landing page and form testing that is going on, very little of it focuses on this particular issue. Perhaps if breadth and quality of data were added as key criteria, we would shine a much brighter light on what can be done.

    There are some very interesting interface techniques that work to accomplish both by *very* intelligent handling of the data while the customer is on the form (Google Business Center’s way of showing/facilitating/correcting address info comes to mind).

    Hopefully this is an area in which we see a lot of advancement over the coming few years – the benefits for data and customer experience are very clear.

    Thanks again for highlighting this one Tim, great points!
