While there is a general consensus about the need to manage the explosion of data across an enterprise, there are many different views as to how that should happen. Texas-based Calpont has upgraded its InfiniDB to v2.0 to now include fast data loading and improved query response times.

Speaking to CMSWire before today’s release, John Webber, VP of Business Development and Products, said the new features are designed to let enterprises use and apply large data volumes across the organization while giving them quick, easy access to scalable data capacity. Because of its flexibility, InfiniDB also lets enterprises keep their existing server-storage hardware relationships as well as their analytics layer.

Unlike commonly used multiprocessing relational database management systems (RDBMS), InfiniDB uses column-based queries to overcome input/output performance limitations. It stores and manages data in columns rather than rows, using a Massively Parallel Processing (MPP) architecture that processes and analyzes queries rapidly.


Calpont's InfiniDB v2.0 MPP

Databases and Content Management

At the database end of content management, the rapid increase in the amount of data now available across the enterprise is confounding traditional database and data warehousing technologies.

While there are a large number of applications and tools available to analyze the information, the speed and accuracy of business decision making is often compromised by the amount of information that has to be analyzed.

The result is the emergence of analytic-specific technologies like InfiniDB, a scalable database built from the ground up for analytics, business intelligence and read intensive applications.

Calpont and InfiniDB

Built to augment, or even replace, RDBMS technologies, InfiniDB is a column-oriented technology that scales with any type of storage hardware and can grow as the need for wider analytics appears.

Using a column-oriented architecture, it overcomes query limitations that exist in traditional row-based RDBMS products by reading only the columns a query actually needs, reducing I/O activity by skipping the rest. The result is far faster response times and the ability to work with larger chunks of data.
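The I/O saving can be sketched with a toy example (this is an illustration of the general column-store idea, not InfiniDB code): a query that aggregates one column reads only that column's data, while a row-oriented scan must read every field of every row.

```python
# Toy rows; in a real system each column would live in its own file on disk.
rows = [
    {"id": 1, "region": "US", "revenue": 120.0},
    {"id": 2, "region": "EU", "revenue": 95.5},
    {"id": 3, "region": "US", "revenue": 210.0},
]

# Column layout: one list ("column file") per column.
columns = {key: [row[key] for row in rows] for key in rows[0]}

# SELECT SUM(revenue): the column store touches a single column...
col_values_read = len(columns["revenue"])          # 3 values read
total = sum(columns["revenue"])

# ...while a row store reads every field of every row.
row_values_read = sum(len(row) for row in rows)    # 9 values read

print(total, col_values_read, row_values_read)
```

With a three-column table the column scan touches a third of the data; the gap widens as tables grow wider, which is why read-intensive analytics favors this layout.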

InfiniDB v2.0

With v2.0, Calpont has added three principal new features that extend these abilities:

Compression and Decompression

By gathering similar data into column files, InfiniDB can compress data on disk and decompress it with no loss of read performance.

The space savings from physical compression is combined with the elimination of indexing and views, resulting in quicker response times with a smaller disk footprint and better I/O bandwidth.
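Why grouping similar data helps can be shown with a simple scheme. This is a hypothetical illustration, not Calpont's actual algorithm: when a column file holds runs of repeated values, even basic run-length encoding shrinks the on-disk footprint without losing any data.

```python
def rle_compress(values):
    """Collapse runs of repeated values into [value, count] pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def rle_decompress(pairs):
    """Expand [value, count] pairs back into the original values."""
    return [v for v, n in pairs for _ in range(n)]

# A low-cardinality column compresses well because similar values cluster.
status_column = ["open"] * 5 + ["closed"] * 4 + ["open"] * 3
packed = rle_compress(status_column)

assert rle_decompress(packed) == status_column   # lossless round trip
print(len(status_column), len(packed))           # 12 values -> 3 runs
```

Less data on disk also means fewer bytes moved per query, which is where the improved I/O bandwidth comes from.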

User-Defined Functions

Users can write analytic functions specific to their needs by adding logic that can be evaluated in SQL statements. UDFs are completely parallelized and scalable, enabling custom analytic functionality that draws on InfiniDB's integrated map-reduce capabilities.
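The execution model can be sketched generically (this is not the InfiniDB API): the engine applies a user-supplied function to each data slice in parallel, then merges the partial results, which is the map-reduce pattern the release notes refer to.

```python
from concurrent.futures import ThreadPoolExecutor

def double_sum(values):
    """Toy user-defined analytic: double each value and sum the slice."""
    return sum(v * 2 for v in values)

def run_udf(udf, partitions):
    """Evaluate a UDF across partitions in parallel, then merge results."""
    # "Map": the UDF runs on every partition concurrently.
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(udf, partitions))
    # "Reduce": merge the partial results into one answer.
    return sum(partials)

partitions = [[1, 2, 3], [4, 5], [6]]
parallel = run_udf(double_sum, partitions)
serial = double_sum([v for part in partitions for v in part])
print(parallel, serial)  # same answer either way
```

Because each slice is processed independently, adding nodes adds throughput, which is what makes a UDF written once scale with the data.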

Automatic Partitioning

Data is broken up vertically and horizontally within the database so that only the columns needed to answer a query are read. As some data decreases in value over time, InfiniDB’s automatic partition drop enables removal of whole partitions and their associated data.
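The horizontal side can be sketched as follows (a hedged illustration, not InfiniDB internals): each partition tracks the range of values it holds, so a range query skips partitions that cannot match, and aged-out data can be dropped a partition at a time.

```python
# Each partition records min/max bounds for its date column.
partitions = [
    {"min_date": "2009-01", "max_date": "2009-06", "rows": 1000},
    {"min_date": "2009-07", "max_date": "2009-12", "rows": 1200},
    {"min_date": "2010-01", "max_date": "2010-06", "rows": 900},
]

def scan(parts, start, end):
    """Return only the partitions whose value range overlaps the query."""
    return [p for p in parts if p["max_date"] >= start and p["min_date"] <= end]

# A query over 2010 touches one partition; both 2009 partitions are skipped.
touched = scan(partitions, "2010-01", "2010-06")

# Partition drop: stale data is removed wholesale, no row-by-row deletes.
partitions = [p for p in partitions if p["max_date"] >= "2010-01"]

print(len(touched), len(partitions))
```

Dropping a partition outright is far cheaper than deleting rows individually, which is the appeal for data whose value decays over time.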

InfiniDB responds to a market for Big Data analysis that is becoming increasingly important as datasets get bigger and bigger. In fact, datasets have grown so large that it has become awkward to work with them when using conventional database management tools.

This is particularly true with the amount of unstructured -- and structured, of course -- information that is currently available to enterprises, the analysis of which can give insights into behavior that has not been possible until now.

With InfiniDB v2.0, Calpont joins a growing number of companies looking to solve this problem. It introduced v1.0 only in February; with v2.0 arriving so soon after, many are expecting further developments next year.