What’s good for the community is good for business. This is the new golden rule. Or at least that seems to be the case in the world of big data, where most commercial solutions are open source at their core.

“Enterprises don’t want lock-in,” said Michael Cucci, a marketing manager at EMC/VMware spinoff Pivotal Software, during an interview last week. He added that companies want to be able to influence the future of the technology that they use to drive their businesses. In fact, it’s practically a must. “It has to be open source or the conversation doesn’t begin,” he explained.

With realizations like this, how do you sell even the best big data platform if it’s largely proprietary?

It turns out that maybe you don’t.

Good Isn't Enough

Even if your parent firms — VMware and EMC — spent millions acquiring the various differentiating technologies involved and then building them out … even if there’s nothing else on the market that compares.

If you want to compete in the big data game, your core technology has to be both open and free.

This might have been a hard fact to accept, especially for someone like Pivotal Software’s President Scott Yara, who two short years ago mouthed off that anything the Hadoop distro startups (like MapR, Cloudera and Hortonworks) built wouldn’t be able to hold a candle to Pivotal HD, his company’s SQL-on-Hadoop solution.

“We literally have over 300 engineers working on our Hadoop platform,” Yara told GigaOm’s Derrick Harris. “We’re bringing all the power of EMC and VMware behind it.”

And while Yara and company seem to have built a good product, it certainly didn’t blow the competition away. Their proprietary go-to-market strategy may have kept them from gaining all the traction they expected, and as a result a new business model needed to evolve. Now it has.

Pivotal Joins the Club

Today, during a live streaming event, Yara is expected to announce that Pivotal will open source each of the core components of its big data stack, namely Pivotal HAWQ, Pivotal Greenplum Database and Pivotal GemFire. All three work in concert with Hadoop, the Apache Software Foundation’s open source big data processing framework.

“We’re doing this even though our business results for these products were substantial," said Cucci, explaining that bookings for Pivotal’s big data software exceeded $100 million last year, and subscriptions for its Big Data Suite exceeded $40 million over its first nine months on the market.

Why Do It?

Aside from open source being a “must have” on most big data RFPs, Pivotal seems to believe that fragmentation and fear of vendor lock-in are inhibiting widespread adoption of Hadoop and associated big data technologies. Its pitch is that it is stepping forward to help solve the problem by spearheading an Open Data Platform (ODP) initiative, through which participants will build Hadoop solutions on a common core of code.

“You’ll be able to certify once and use anywhere,” said Cucci.

As we predicted last week, it’s a Cloud Foundry-like model that Altiscale, EMC, GE, Hortonworks, IBM, Infosys, Pivotal, SAS, Verizon Enterprise Solutions, VMware, Teradata and a few others whose names we have yet to learn have signed on to (or are expected to sign on to).

The big question is whether the same thing could be accomplished via a new or existing Apache project.

Unify or Divide?

While Pivotal intends for ODP to be a unifying, community-building, community-driven initiative, it looks like it could be divisive as well.

Hadoop distro provider MapR wants nothing to do with it. “We decided not to participate,” said Jack Norris, the company’s chief marketing officer. To him ODP looks more like a partner program than a community initiative.

“We strongly believe in reinforcing the Hadoop community,” he said. “The Open Data Platform is not more open than the Apache community which has fostered innovation at a tremendous pace.” Norris also pointed out that Apache Bigtop already addresses Apache project interoperability, so, to him, it’s not clear what advantages the ODP would actually deliver.

Mike Olson, Cloudera’s Chief Strategy Officer, went a step further in a blog post published this morning. Bringing competitor and ODP member Hortonworks into the conversation, he wrote, "The Pivotal and Hortonworks alliance, notwithstanding the marketing, is antithetical to the open source model and the Apache way. While the ASF (Apache Software Foundation) is open to vendors, the ODP isn’t actually open at all. As a vendor-driven consortium, membership is only for enterprises with serious money — it ought to be called the 'Only Dollars Play' alliance. The price of entry is beyond the means of precisely the people who really drive the Hadoop standard — the individual engineers who participate in the Apache projects.”

It will take time to tell what others think and what happens next, or whether Hortonworks, which up until now has been an open source zealot, wants to comment.

Anything it says will, no doubt, be colored by the fact that it announced an alliance this morning that brings together Pivotal’s SQL-on-Hadoop, analytical database and NoSQL in-memory technologies with Hortonworks’ expertise and support for Hadoop.

Hadoop Wars Reignited?

While Pivotal also announced a few additions and enhancements to its Big Data Suite package around Cloud Foundry, Redis, RabbitMQ and Spring XD, today’s news is really about its strategy shift and whether the community will buy into ODP (some large vendors certainly seem to like it; more on this later) or simply see it as a vehicle for clouding its exit as a Hadoop distro provider.

Pivotal insists the latter is not happening.