
IT’s Big Data Blunder

Many organizations are pouring millions of dollars into tools and data scientists, yet many continue to struggle to turn up any meaningful returns from their big data initiatives.

Behind this big data debacle, according to an article in the Harvard Business Review, is a common blunder by many IT departments: treating their analytics and big data projects just as they would other IT projects, such as an enterprise resource planning (ERP) or customer relationship management (CRM) installation.

Traditional project planning methods typically involve establishing requirements and technical specifications upfront, building and deploying technology based on preconceived plans, and delivering it on time and on budget.

This method may work well with big data projects when an organization's goal is to boost efficiency, reduce expenses and improve processes. However, even when these benefits are delivered, the research of authors Donald Marchand and Joe Peppard has shown that many business executives remain unsatisfied with the results.

The gaffe, they said, lies in IT's failure to determine how to effectively use the information generated by their big data projects so that users can make better decisions, or to extract unexpected insights that could be useful to the business.


For instance, a common notion in many enterprise organizations is that big data projects are meant to provide high-quality information to executives and managers so that they can make better decisions.

This line of thought does not take into account that many people, including managers and executives, have biases that often hinder their decision-making capabilities.

Marchand and Peppard recommend that users themselves, rather than just their managers, be given access to the information turned up by big data programs. It is the users who can create meaning out of the information.

Most IT systems are designed on the assumption that the data involved has already been deemed important. Such a method may be useful in highly structured tasks that require precision. However, people do not think in a vacuum: such an approach fails to recognize that people organize, interpret and use information based on their experiences, their own mental models and their needs.

This conventional IT approach tends to fall apart when requirements call for the data to be “taken out of the technology domain” and brought into the “human domain.”

