Enterprise Database Applications and the Cloud: A difficult road ahead

Enterprise Database Applications and the Cloud: A difficult road ahead – Stonebraker et al. 2014

In the rush to the cloud, stateless application components are well catered for but state always makes things more complicated. In this paper, Stonebraker et al. set out some of the reasons enterprise database applications present challenges to cloud migration. Workloads are divided into three categories: OLTP, OLAP, and ‘everything else.’ In the everything else bucket the authors put the wide variety of NoSQL stores that have sprung up – often with a horizontally scalable cloud architecture as a fundamental design choice. Because of their diversity, this category is not considered in the rest of the paper. This is important to remember when considering the overall pessimistic tone. The paper can read as a bit of an anti-cloud rant (perhaps Stonebraker might call it ‘a dose of realism’), but the section on future hardware trends and how they will impact DBMSs is worth the price of admission.

OLTP applications are those where ACID transactions are omnipresent and high availability is a requirement (AMC note: so is low latency). Databases are on the order of gigabytes to terabytes in size and transaction volumes are high. Transactions typically update a few records at a time, interspersed with small-to-medium-sized queries.

The data warehouse contains a historical record of customer-facing data, often imported via an ETL tool. Databases are large (terabytes to petabytes) and the workload typically consists of OLAP queries. Once the data is loaded it is rarely, if ever, updated.
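To make the two categories concrete, here is a minimal sketch using Python's built-in sqlite3 (the schema, table names, and numbers are all invented for illustration): the OLTP-style transaction updates a handful of rows under ACID guarantees, while the OLAP-style query scans and aggregates the accumulated history.

```python
import sqlite3

# Toy schema standing in for an order-processing system (names are invented).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts(id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE orders(id INTEGER PRIMARY KEY, account_id INTEGER,
                        amount REAL, placed_at TEXT);
    INSERT INTO accounts VALUES (1, 100.0), (2, 250.0);
""")

# OLTP-style work: a short ACID transaction that touches a few records.
with conn:  # commits on success, rolls back on exception
    conn.execute("INSERT INTO orders VALUES (?, ?, ?, date('now'))", (1, 1, 30.0))
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (30.0, 1))

# OLAP-style work: a query that scans the historical record and aggregates it.
for row in conn.execute("""
        SELECT account_id, COUNT(*) AS n_orders, SUM(amount) AS total
        FROM orders
        GROUP BY account_id
        ORDER BY total DESC"""):
    print(row)
```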

For such database applications, the authors estimate that 10% of the ongoing costs are due to hardware maintenance and upgrades, 10% due to software maintenance and upgrades, and 80% on labour for systems and database administration. (Which incidentally makes a good case for renting rather than owning, and offloading those costs onto someone else!).

Concerns relating to I/O virtualization

The first set of challenges involved in moving these workloads to the cloud comes from the fact that clouds are virtualized environments (so the same issues will occur inside a virtualized enterprise too).

File-system-level distribution, known as I/O virtualization, allows a file system to span multiple nodes and distribute data across a cluster of machines. Such distribution is provided by services such as GFS and HDFS. Running on top of such a file system is not likely to find acceptance among performance-conscious DBMSs. Essentially all multi-node DBMSs send the query to the data rather than bring the data to the query, because there is significantly less network traffic if the query runs at the location of the data. Achieving this is non-trivial, and may not even be possible, on these modern distributed file systems.

(Note that Impala, for example, runs an impalad process at every Hadoop datanode in order to exploit data locality.)
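A rough illustration of why 'send the query to the data' matters (my sketch, not the paper's; the node and partition layout below is invented): when each node aggregates its own partition locally, only the small partial results cross the network, rather than every raw row.

```python
# Sketch: query shipping vs data shipping across a 3-node cluster.
partitions = {
    "node-1": [("widget", 3), ("gadget", 5)],
    "node-2": [("widget", 7), ("sprocket", 2)],
    "node-3": [("gadget", 1), ("sprocket", 9)],
}

def run_locally(rows):
    """The 'query' executed where the data lives: a per-partition aggregate."""
    totals = {}
    for product, qty in rows:
        totals[product] = totals.get(product, 0) + qty
    return totals

# Query shipping: only small partial aggregates cross the network...
partials = [run_locally(rows) for rows in partitions.values()]
final = {}
for partial in partials:
    for product, qty in partial.items():
        final[product] = final.get(product, 0) + qty
print(final)  # {'widget': 10, 'gadget': 6, 'sprocket': 11}

# ...versus data shipping: every raw row must be moved to one coordinator
# before any work can start, which is what a DBMS layered on a
# location-oblivious distributed file system risks doing.
all_rows = [row for rows in partitions.values() for row in rows]
print(run_locally(all_rows))  # same answer, but only after moving all the data
```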

CPU virtualization is just as bad for performance according to the authors, since ‘it hides the databases’ physical location from the DBMS, thereby virtualizing the I/O.’

Active-passive DBMS replication with log shipping “requires some adaptation to work with replicated file systems” (e.g. HDFS).

If the file system also supports transactions, then this facility can be turned off in the DBMS and the file-system supplied one used instead. Since few file systems support both replication and transactions, this active-passive approach will be problematic, and DBMSs are unlikely to use it.
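A minimal sketch of the active-passive idea (mine, not the paper's; class and key names are invented): the primary appends every committed change to a log, ships the log records, and a passive replica replays them to stay in sync. Replication at the file-system level would duplicate this work, which is why the scheme "requires some adaptation".

```python
# Sketch of active-passive replication via log shipping (illustrative only).

class Primary:
    def __init__(self):
        self.state = {}
        self.log = []               # write-ahead log of committed changes

    def commit(self, key, value):
        record = (len(self.log), key, value)
        self.log.append(record)     # log first...
        self.state[key] = value     # ...then apply

class PassiveReplica:
    def __init__(self):
        self.state = {}
        self.applied_upto = 0

    def apply_log(self, log):
        """Replay shipped log records in order; safe to re-deliver the log."""
        for lsn, key, value in log[self.applied_upto:]:
            self.state[key] = value
            self.applied_upto = lsn + 1

primary, replica = Primary(), PassiveReplica()
primary.commit("balance:1", 70.0)
primary.commit("balance:2", 250.0)
replica.apply_log(primary.log)      # periodic log shipping
assert replica.state == primary.state
```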

Active-active replication involves executing a transaction at each replica: “…there are many use cases for which active-active is faster than active-passive. Obviously an active-active scheme is incompatible with file system replication.”

(incompatible sounds too strong to me: file system replication may not strictly be needed in such a scenario, but I don’t see why it should cause anything to break.)
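For contrast, a similarly toy sketch of active-active replication (again mine, with invented names): the same deterministic transaction is executed independently at every replica, so the copies stay in step without shipping a log or relying on file-system copies at all.

```python
# Sketch of active-active replication: each replica executes the transaction.

class Replica:
    def __init__(self, name):
        self.name = name
        self.state = {"balance:1": 100.0}

    def execute(self, txn):
        txn(self.state)             # must be deterministic for replicas to agree

def debit_account(state, key="balance:1", amount=30.0):
    state[key] = state[key] - amount

replicas = [Replica("east"), Replica("west")]
for r in replicas:                  # the same transaction runs everywhere
    r.execute(debit_account)

assert replicas[0].state == replicas[1].state   # the replicas converge
```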

Another reason why base file system replication may not be ideal is found in the data warehousing world.

Some data warehouse DBMSs such as Vertica store replicas in different sort orders so that the query executor can use the replica that offers the best performance for each individual query… clearly such logical replicas cannot be maintained by the file system.
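Here's a toy illustration (mine, not Vertica's actual mechanism; the data is invented) of why such logical replicas matter: the same rows are kept in two different sort orders, and each query reads whichever copy gives it a contiguous range to scan. A byte-for-byte file-system replica clearly can't offer that choice.

```python
import bisect

# The same logical table stored twice, in different sort orders (toy example).
rows = [("2024-01-03", "gadget", 5), ("2024-01-01", "widget", 3),
        ("2024-01-02", "widget", 7), ("2024-01-01", "sprocket", 2)]

replica_by_date    = sorted(rows, key=lambda r: r[0])
replica_by_product = sorted(rows, key=lambda r: r[1])

def range_by_date(lo, hi):
    """A date-range query is a contiguous slice of the date-sorted replica."""
    keys = [r[0] for r in replica_by_date]
    return replica_by_date[bisect.bisect_left(keys, lo):bisect.bisect_right(keys, hi)]

def all_for_product(p):
    """A per-product query is a contiguous slice of the product-sorted replica."""
    keys = [r[1] for r in replica_by_product]
    return replica_by_product[bisect.bisect_left(keys, p):bisect.bisect_right(keys, p)]

print(range_by_date("2024-01-01", "2024-01-02"))
print(all_for_product("widget"))
```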

Combining the various effects, the authors state that:

In general, one should expect a DBMS running on the public cloud to have an order of magnitude worse performance, compared to a non-virtualized I/O system.

If you use a SAN or HDFS within the enterprise, this doesn’t apply to you – you’re already suffering the performance disadvantages. Will in-memory databases be the solution?

At least for OLTP applications, database sizes are growing more slowly than the cost of main memory is falling. In addition, [14] makes the case that even a large retailer like Amazon is a candidate for main memory database deployment. Main memory DBMSs are being marketed by SAP, Oracle, Microsoft, and a variety of startups including VoltDB and MemSQL.
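The arithmetic behind that claim is easy to sketch with made-up numbers: if the database grows more slowly each year than DRAM prices per gigabyte fall, the cost of holding it entirely in memory shrinks year on year.

```python
# Illustrative arithmetic only: all of these rates and prices are assumed.
db_size_gb   = 1_000     # today's OLTP database, in GB
price_per_gb = 5.0       # today's DRAM price, in dollars per GB
db_growth    = 0.15      # database grows 15% per year (assumed)
price_drop   = 0.30      # DRAM $/GB falls 30% per year (assumed)

for year in range(6):
    cost = db_size_gb * price_per_gb
    print(f"year {year}: {db_size_gb:8.0f} GB at ${price_per_gb:5.2f}/GB -> ${cost:10,.0f}")
    db_size_gb   *= 1 + db_growth
    price_per_gb *= 1 - price_drop
```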

Despite all the difficulties presented in the paper, Amazon, Google, et al. seem to be doing just fine running big data processing systems in the cloud!

Future hardware trends

Between many-core CPUs and non-volatile RAM, we expect most current system software will have to be rewritten, causing possible extensive marketplace disruption as traditional DBMSs make way for new technology.

Many-core architectures (>= 1000 cores within the next decade) will be a significant challenge to DBMSs with shared data structures.

We have a strong suspicion that latch contention will kill the performance of traditional DBMSs on many-core systems. As a result, we predict that all traditional DBMSs will have to be rewritten to remove such impediments. Alternatively, the DBMS could use lock-free data structures, as in MemSQL or Hekaton, or use a single-threaded execution engine, as in VoltDB and Redis. It is also likely that this rewrite will be a daunting challenge to OSs, file systems, application servers, and other pieces of system software. Many-core may well impact enterprise applications that use shared data structures in much the same way.

This refactoring to support many-core architectures may also improve the ability to operate in elastic cloud environments.
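To make the latch-contention worry a little more concrete, here is a crude structural sketch (mine, not the paper's, and not a benchmark): many threads updating one latched structure all serialize on the latch, whereas partitioning the structure so that each worker owns its own slice (the VoltDB-style single-threaded-per-partition idea) removes the shared contention point altogether.

```python
import threading

N_THREADS, OPS = 8, 100_000

# One shared structure behind a single latch: every update serializes on it.
# (CPython's GIL already serializes these loops, so treat this purely as a
# structural illustration of the two designs, not as a performance benchmark.)
shared_counter = 0
latch = threading.Lock()

def contended_worker():
    global shared_counter
    for _ in range(OPS):
        with latch:                 # all threads queue up here
            shared_counter += 1

# Partitioned alternative: each worker owns its own slice, so no shared latch
# is needed (the single-threaded-per-partition idea used by VoltDB).
partitions = [0] * N_THREADS

def partitioned_worker(i):
    for _ in range(OPS):
        partitions[i] += 1          # no other thread touches this slot

threads = [threading.Thread(target=contended_worker) for _ in range(N_THREADS)]
for t in threads: t.start()
for t in threads: t.join()

threads = [threading.Thread(target=partitioned_worker, args=(i,))
           for i in range(N_THREADS)]
for t in threads: t.start()
for t in threads: t.join()

print(shared_counter, sum(partitions))   # both equal N_THREADS * OPS
```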

“We do not see flash fundamentally altering DBMS architectures” – it’s the replacement of one block-structured device with another one that is faster and more expensive. (Compare this with a statement in the Architecture of a Database System paper back in 2007 – also with Stonebraker as an author – that ‘flash may have significant impact on future DBMS design.’).

Main memory changes will be significant though:

We expect most (if not all) modern OLTP applications to fit entirely in main memory within a few years. Over time, a substantial fraction of the DBMS market will move to memory-only storage…. There are several technologies that may prove viable later this decade that have the possibility of replacing DRAM as the primary storage for DBMS data. These include memristors, phase-change memory, and spintronic devices. The expectation is that these technologies will be able to read and write data at close to DRAM speeds but at a fraction of the cost and with much higher storage capacities. This technology will extend the reach of main memory DBMSs into the 1-10 Petabytes range.

As a consequence all but the largest data warehouses will be able to use in-memory technology, and “we expect block-structured DBMSs to disappear during this decade.”