Developing a Big Data Strategy


If your situation is like the bank’s, you may want to focus primarily on cost reduction as well. Keeping a laser-like focus on that objective—and not being seduced by the other blandishments of big data—will help to ensure that it is achieved.

Cost reduction can also be a secondary objective after others have been achieved. Let's say, for example, that your first goal was to innovate with new products and services from big data. After accomplishing that objective, you may want to examine how to do it less expensively. That was the case, for example, at GroupM, the media-buying subsidiary of the advertising conglomerate WPP. The company buys more media than any other organization in the world, and it uses big data tools to keep track of who's watching that media on what screen. This would be fine, except that GroupM has 120 offices around the world, and each office has been taking its own approach—with its own technology—to big data analytics. If the organization allowed each office to implement its own big data tools, it would cost at least $1 million per site.

Instead of this highly decentralized approach, GroupM plans to offer centralized big data services out of its New York office. It will focus on twenty-five global markets and expects that it will spend just over a third of the amount per site that the decentralized approach would have required. We will probably see many more such consolidations in the next several years as firms that allowed decentralized experimentation with big data attempt to rein in their costs.

Time Reduction from Big Data

The second common objective of big data tools is reduction of the time necessary to carry out particular processes. Macy's merchandise pricing optimization application provides a classic example of reducing the cycle time for complex and large-scale analytical calculations from hours or even days to minutes or seconds. The department store chain has been able to reduce the time to optimize pricing of its 73 million items for sale from over twenty-seven hours to just over one hour. Software vendor SAS calls this category of application high-performance analytics (HPA). HPA obviously makes it possible for Macy's to reprice items much more frequently to adapt to changing conditions in the retail marketplace. Theoretically, the retailer could reprice according to daily weather conditions, for example (although electronic tagging would make this much easier!).

This HPA application takes data out of a Hadoop cluster and puts it into other parallel computing and in-memory software architectures. Macy's also says it has achieved 70% hardware cost reductions. Kerem Tomak, vice president of analytics at Macys.com, is using similar approaches to time reduction for marketing offers to Macy's customers. He notes that with the time savings the company can run many more models: "Generating hundreds of thousands of models on granular data versus only ten, twenty, or one hundred that we used to be able to run on aggregate data is really the key difference between what we can do now and what we will be able to do with high-performance computing." Tomak also makes use of visual analytics tools, as is common in big data work.
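The excerpt doesn't show any of Macy's code, but the pattern Tomak describes, fitting many small models on granular slices of data in parallel rather than one model on aggregated data, is easy to sketch. Here is a minimal, hypothetical Python illustration; the segmentation, the simulated data, and the simple linear price-response model are all assumptions made for the sake of the example:

```python
# A minimal sketch of the "many small models on granular data" pattern.
# The data, segmentation, and model choice are illustrative assumptions.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def make_segment_history():
    """Simulate 52 weeks of price and demand for one (store, item) segment."""
    prices = rng.uniform(5, 50, size=52)
    demand = 100 - 1.5 * prices + rng.normal(0, 5, size=52)
    return prices, demand

def fit_segment(history):
    """Fit one small price-response model on one segment's history."""
    prices, demand = history
    model = LinearRegression().fit(prices.reshape(-1, 1), demand)
    return model.coef_[0]  # estimated price sensitivity for this segment

if __name__ == "__main__":
    # 1,000 segments here; the same pattern scales to hundreds of thousands
    # of models once the independent fits are distributed across a cluster.
    segments = [make_segment_history() for _ in range(1_000)]
    with ProcessPoolExecutor() as pool:
        sensitivities = list(pool.map(fit_segment, segments))
    print(f"fitted {len(sensitivities)} per-segment models")
```

Because each segment gets its own coefficients, local differences in price response are captured rather than averaged away; and because the fits are independent of one another, they parallelize trivially.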

A financial asset management company provides another example. In the past, research analysts there could analyze a single bond issued by a city or company, doing risk analysis based on twenty-five variables and perhaps one hundred different statistical simulations of model results. A much better analysis would use one hundred variables and a million simulations. That was impossible three years ago, but now such a fine-grained analysis—a couple of trillion calculations—can be done in ten minutes on a big data appliance.

As the chief information officer described the benefit of this approach, “The primary advantage is that the discovery process becomes very fast. An analyst can build models, run them, observe what happens, and if he doesn’t like something, change it, all in one minute. This cycle used to take eight hours—if you could do it at all. The train of thought is much more continuous, which means a higher quality of research.”
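The excerpt doesn't describe the firm's models, but the shape of the computation is clear from the numbers: draw a very large number of scenarios over roughly one hundred risk factors, revalue the bond under each one, and read risk measures off the simulated distribution. A vectorized NumPy sketch of that shape follows; the factor loadings, shock distribution, and linear valuation rule are illustrative assumptions, not the firm's actual model:

```python
# A vectorized Monte Carlo sketch: shock ~100 risk factors across a million
# scenarios, revalue the position, and take a risk measure from the results.
# The loadings, shock distribution, and valuation rule are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

N_FACTORS = 100          # e.g., rate, spread, and macroeconomic variables
N_SCENARIOS = 1_000_000  # one million simulated scenarios
CHUNK = 100_000          # simulate in chunks to keep memory use modest

# Hypothetical linear sensitivities of the bond's value to each factor.
loadings = rng.normal(0.0, 0.5, size=N_FACTORS)

# Each chunk is one matrix product: a (100,000 x 100) block of factor shocks
# times the loading vector, i.e., on the order of 10^8 multiply-adds overall.
pnl = np.concatenate([
    rng.standard_normal((CHUNK, N_FACTORS)) @ loadings
    for _ in range(N_SCENARIOS // CHUNK)
])

var_99 = -np.quantile(pnl, 0.01)  # 99% value-at-risk of the simulated P&L
print(f"scenarios: {pnl.size:,}  99% VaR: {var_99:.2f}")
```

This toy version runs in seconds on a laptop; with a realistic bond-valuation model in place of the matrix product, the same loop running on a big data appliance is what makes the one-minute build-run-observe-change cycle possible. An analyst who doesn't like a result can adjust the loadings and rerun immediately.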

If your company is primarily interested in time reduction, you need to work much more closely with the owner of the relevant business process. A key question is, what are you going to do with all the time saved in the process? Respectable business-oriented answers include:

  • We’re going to be able to run a lot more models and better understand the drivers of our performance in key areas.
  • We’re going to iterate and tune the model much more frequently to get a better solution.
  • We’re going to use many more variables and more data to compute a real-time offer for our customers.
  • We’re going to be able to respond much more rapidly to contingencies in our environment.

Bad answers (at least in strict business terms) include playing more golf, drinking more coffee, or finally having enough time for that three-martini lunch.

Developing New Offerings

To my mind, the most ambitious thing an organization can do with big data is to employ it in developing new product and service offerings based on data. One of the best at this is LinkedIn, which has used big data and data scientists to develop a broad array of product offerings and features. These offerings have brought millions of new customers to LinkedIn, and have helped retain them as well.

Another strong contender for the best at developing products and services based on big data is Google. This company, of course, uses big data to refine its core search and ad-serving algorithms. Google is constantly developing new products and services that have big data algorithms for search or ad placement at the core, including Gmail, Google+, Google Apps, and others. Google even describes the self-driving car as a big data application. Some of these product developments pay off, and some are discontinued, but there is no more prolific creator of such offerings than Google.

There are many other examples of this phenomenon in both online and primarily offline businesses. GE is mainly focused on big data for improving services—among other things, to optimize the service contracts and maintenance intervals for industrial products. The real estate site Zillow created the Zestimate home price estimate, as well as rental cost Zestimates and a national home value index. Netflix created the Netflix Prize for the data science team that could optimize the company's movie recommendations for customers and is now using big data to help in the creation of proprietary content. The testing firm Kaplan is beginning to use its big data to advise customers on effective learning and test-preparation strategies.

Novartis focuses on big data—the health-care industry calls it informatics—to develop new drugs. Its CEO, Joe Jimenez, commented in an interview, "If you think about the amounts of data that are now available, bioinformatics capability is becoming very important, as is the ability to mine that data and really understand, for example, the specific mutations that are leading to certain types of cancers." These companies' big data efforts are directly focused on products, services, and customers.

This has important implications, of course, for the organizational locus of big data and the processes and pace of new product development. If an organization is serious about product and service generation with big data, it will need to create a platform for doing so—a set of tools, technologies, and people who are good at big data manipulation and the creation of new offerings based on it. There should probably also be some process for testing these new products on a small scale before releasing them to customers. Obviously, anyone desiring to create big data–based products and services needs to be working closely with the product development team, and perhaps marketing as well. These projects should probably be sponsored by a business leader rather than a technician or data scientist.

Taking a product/service innovation focus with big data also has implications for the financial evaluation of your efforts. Product development is generally viewed as an investment rather than a savings opportunity. With this focus, you may not save a lot of money or time, but you may well add some big numbers to your company's top line.


About the author: 

Thomas H. Davenport is a world-renowned thought leader on business analytics and big data, translating important technological trends into new and revitalized management practices that demonstrate the value of analytics to all functions of an organization. He is the President's Distinguished Professor of Information Technology and Management at Babson College, a fellow at the MIT Center for Digital Business, cofounder and Director of Research at the International Institute for Analytics, and a senior adviser to Deloitte Analytics, as well as the author of dozens of articles for Harvard Business Review.


Reprinted by permission of Harvard Business Review Press. Excerpted from "Big Data at Work: Dispelling the Myths, Uncovering the Opportunities" by Thomas H. Davenport. Copyright 2014. All rights reserved.

The book is available from Amazon, Barnes & Noble, and other retailers.

