Despite spending billions on big data, many companies see little return: a 2015 study found that 43 percent of companies "obtain little tangible benefit" from their data, and 23 percent "derive no benefit whatsoever."

We are on the eve of a new era of big data, in which the global shortage of data scientists and the high cost and low return on investment of traditional “stop/start” data projects will force enterprises to reassess the way they deal with data.

Enterprises need to follow in the footsteps of companies like Netflix, Uber and LinkedIn, which have integrated data into their business models. Otherwise, they risk falling behind competitors that can gather vital insights more quickly and effectively than they can.

So why should enterprises start making their data an integral part of their business models and day-to-day workflows?

Invest in Data Scientists

Data is a raw material, and as with most raw materials, you need specialist teams to gather, store and refine it before it is of any use.

The necessary experts, data scientists — holders of the so-called sexiest job of the 21st century — are now among the most sought-after professionals on the market. Companies are offering starting salaries of more than $200,000 to attract the best minds.

But even enterprises offering the most competitive salaries and perks may soon find the well running dry. In a much-cited report, McKinsey & Co. predicted that the U.S. could face a shortage of 140,000 to 190,000 data scientists by 2018.

This creates a predicament for enterprises: the supply of data scientists needed to turn big data into real benefits is limited, and you can't hire more data scientists if there aren't any left to hire.

Flawed Enterprise Data Tactics

Flawed enterprise data tactics further compound the problem. Aside from a handful of forward-thinking, high-budget enterprises that have already embedded big data into their business models, most companies use big data in a “stop/start” manner.

They initiate individual big data projects — either in-house or through a contracted provider — to solve a specific problem, drawing on their own data, public data or datasets bought from other sources, in hopes of gaining an advantage over competitors.

But many companies lose that competitive advantage, either because industry rivals gather the same information faster, or because poor management and a lack of the experts needed to draw insights from raw data keep them from taking full advantage of what they have gathered.

To improve ROI and gain more value from their data, companies need to implement systems in which data is constantly gathered and analyzed, and insights can be tapped at the click of a button. But developing these systems requires highly skilled experts and complex projects.
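
As a loose illustration, a minimal always-on pipeline might look like the Python sketch below. The data source (fetch_new_records) and the sales aggregate are hypothetical placeholders for this article, not any particular vendor's API:

```python
import time
from collections import Counter

def fetch_new_records():
    """Hypothetical stub: pull the latest events from whatever
    source the enterprise uses (message queue, API, log stream)."""
    return [{"product": "widget", "amount": 19.99}]  # placeholder data

def update_insights(store, records):
    """Fold new records into running aggregates so insights stay
    current instead of being recomputed project by project."""
    for record in records:
        store["sales_by_product"][record["product"]] += record["amount"]
    return store

insights = {"sales_by_product": Counter()}

while True:
    batch = fetch_new_records()                  # constant gathering
    insights = update_insights(insights, batch)  # constant analysis
    # `insights` is now queryable on demand, e.g. by a dashboard or API
    time.sleep(60)                               # poll every minute
```

The point of the sketch is the shape, not the scale: data flows in continuously and the aggregates are always ready to query, rather than being assembled once per project and discarded.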

So with such a dearth of talent, how can enterprises scale their problem solving and decision-making before it is too late? Forrester said that enterprises will have to look at ways to scale up the whole process of data science without relying on large teams of data scientists.

Most Big Data Projects Will Fail

IDC forecasts that the big data market will grow at a 23.1 percent compound annual growth rate (CAGR), reaching a staggering $48.6 billion in 2018. Yet even as the market grows at this rapid pace, Gartner predicts that 60 percent of big data projects will fail and end up abandoned.

Increased competition in the big data space means that only the players that can productize data fastest and best will keep their heads above water.

To solve this conundrum, enterprises need to develop cognitive data systems, which use machine learning and AI to gather, sort and deliver data faster and more efficiently than human teams can.

But speed is only one key to success.

Make Data A Reusable Resource

Rather than using data to meet one specific aim and then starting a new project with new aims and datasets, enterprises need to start viewing their data as a reusable resource. Data-product-driven companies excel because they don't just get one insight into their customers: every insight becomes the raw material for the next wave of insights.

Author and investor Peter Pham wrote, “Data is no longer a static disposable resource that loses usefulness once it has served its singular purpose. Its life may be extended through multi-use, multi-purpose data processing. As a renewable resource, its value should be assessed not by the bottom line, but as an asset that not only grows in value but one which further provides value creation opportunities.”

Forward-thinking companies across a wide range of industries have already integrated data into their business models, and are reaping the benefits as a result.

Cisco, a world leader in enterprise network solutions, has pushed data to the forefront of its business model and developed an in-house Hadoop platform that works constantly on data from a wide range of sources. Cisco mines insights from large datasets related to customers, products and network activity, along with terabytes of unstructured data such as web logs, video, email, documents and images, to maintain a competitive advantage in its sphere.

"Cisco UCS CPA for Big Data provides the capabilities we need to use big data analytics for business advantage, including high-performance, scalability, and ease of management," says Jag Kahlon, Cisco IT architect.


Cisco recently acquired Jasper, an industry-leading cloud-based Internet of Things (IoT) service that allows companies of all shapes and sizes to launch, manage and monetize IoT services on a global scale. By harnessing the power of a cognitive data science platform, Cisco wants to learn from the data gleaned from potentially millions of individual devices connected to the IoT.

In another case, MicroPact, creator of entellitrak, an enterprise platform that unifies case management and business process management (BPM) applications, has undertaken what CTO Mike Cerniglia calls a data-first approach. The entellitrak platform is based on the idea that data needs to be at the core of a solution’s design.

Cerniglia told Forbes contributor Dan Woods: “When you buy a house, you usually don’t start by looking at the blueprints—you look at the rooms and think about how you are going to live there. Starting with BPM diagrams or most other ways of developing applications is like looking inside the walls at the plumbing or the electrical wiring first. We feel that our data first approach matches the way people think about doing work and provides IT professionals with a better picture of the desired user outcome.”

Operationalize Your Data

To stay on top of the big data wave, enterprises need to stop simply pumping more and more resources into individual big data projects, or waiting in hope that teams of qualified data scientists will fall into their laps, and instead focus on operationalizing data within their everyday business models.

Operationalizing big data requires an all-team effort that combines marketing, product design, analytics and IT implementation skills.

To do so, enterprises need to assess their potential market in a structured manner, then design the product around clearly identified customer needs and the type of information they want to gather.

The necessary components of a successful big data business model are being useful and significant across the enterprise’s mission, having predictive power, and being able to trigger proactive action.

Using a mix of specialist data scientists and machine-learning-powered big data systems, data needs to be operationalized and built into the production environment so that the system iterates on its own.
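
A minimal sketch of that idea, assuming a scikit-learn model and a hypothetical load_latest_data feed: the retraining job lives inside the production environment, so each run folds the newest data into the next version of the model without launching a new project.

```python
import joblib
from sklearn.ensemble import RandomForestClassifier

def load_latest_data():
    """Hypothetical stub: return all labeled data the production
    system has accumulated so far."""
    from sklearn.datasets import load_iris  # stand-in dataset
    return load_iris(return_X_y=True)

def retrain_and_deploy(model_path="model.joblib"):
    """Retrain on today's data and overwrite the serving artifact,
    so every new wave of data feeds the next iteration of the model."""
    X, y = load_latest_data()
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    joblib.dump(model, model_path)  # picked up by the serving layer
    return model

if __name__ == "__main__":
    retrain_and_deploy()  # e.g. run nightly by a scheduler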

The Value of Cognitive Technologies

This cognitive AI element allows systems to learn constantly, so they don't need to depend on human teams. Instead of paying top-rate wages to huge teams of data scientists, who are constantly at risk of being wooed away by a better offer, let the machine figure it out.

The systems themselves can run different algorithms until they find a good match and decide which features should be selected from the datasets, leaving smaller teams of data scientists to do the job they were hired for: using their brains to change parameters, refine models and surface insights.
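
As a rough sketch of what "running different algorithms until they get a good match" plus automated feature selection can look like in practice, here is a minimal scikit-learn example; the dataset is a stand-in, and a real cognitive system would search a far larger space of models and features:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

# The machine tries several algorithms and feature subsets;
# humans only define the search space and interpret the winner.
pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),
    ("model", LogisticRegression(max_iter=5000)),
])

param_grid = [
    {"select__k": [5, 10, 20], "model": [LogisticRegression(max_iter=5000)]},
    {"select__k": [5, 10, 20], "model": [RandomForestClassifier(n_estimators=100)]},
]

search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print("Best configuration:", search.best_params_)
print("Cross-validated score:", round(search.best_score_, 3))
```

The search does the rote work of comparing algorithm and feature combinations; the data scientist's judgment goes into defining the candidates and acting on the winning configuration.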

Automating data science is the future, and it will separate the strong from the weak in this sphere. Cognitive systems can dramatically speed up the process, answer questions on demand, improve ROI and, most importantly, surface the kinds of insights that propel enterprises to new levels.

Current big data statistics are bleak: they reveal hundreds of millions of dollars being pumped into projects that fail to meet their aims or make positive changes.

As more companies recognize their previous failings and turn to automated big data systems, those that continue in the old fashion will simply become redundant. Unless someone stumbles across a hidden island packed full of data scientists willing to work for an average wage, there doesn't seem to be another option.

A new era of big data management is coming, and the question is no longer if, but when.
