Big Data has become an omnipresent term across industries in recent years. However, Big Data does not necessarily mean a big database. Big Data is not defined by volume alone; other factors must also be considered, including the complexity of the data, the computational demands of the queries (and the analytics associated with them), and the constraints on the data’s time to value. Some extremely large databases are filled with large quantities of simple data that can be accumulated in a single table, for example, making processing and queries on that large data set straightforward and responsive.
Other databases, although much smaller in aggregate size, hold data sets that are far more complex. In these instances, transactions and queries can span multiple tables and structures that must be transactionally isolated in real time.
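To make the idea of transactional isolation across multiple tables concrete, here is a minimal sketch using Python's built-in sqlite3 module. The schema (an `accounts` table and a `ledger` table) and the transfer amounts are illustrative assumptions, not from the original; the point is simply that either all of the related updates commit together, or none do.

```python
import sqlite3

# Assumed example schema: a funds transfer must update two tables
# (balances and an audit ledger) as one atomic unit of work.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE ledger   (account_id INTEGER, delta INTEGER);
    INSERT INTO accounts VALUES (1, 100), (2, 50);
""")

try:
    with conn:  # the connection context manager commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
        conn.execute("INSERT INTO ledger VALUES (1, -30), (2, 30)")
except sqlite3.Error:
    pass  # on failure, neither table reflects a partial transfer

balances = [row[0] for row in conn.execute("SELECT balance FROM accounts ORDER BY id")]
print(balances)  # → [70, 80]
```

If any statement inside the `with conn:` block fails, SQLite rolls back every change in the transaction, so the `accounts` and `ledger` tables can never disagree.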
Database technologies are evolving so rapidly that it can be difficult to keep up with the newest solutions and buzzwords, let alone distinguish one from another. Finding the right solution can be especially challenging when IT vernacular constantly pits terms against each other. There are countless “this-or-that” conversations in database technology. In this slideshow, Kurt Dobbins, CEO of Deep Information Sciences, takes a look at a few of the most common faceoffs.