By Joe Jablonski, Co-Founder and Chief Product Officer at Ocient
The Ocient Hyperscale Data Warehouse system has achieved, on average, 10 to 50 times the performance of alternative solutions, and as much as 100 times in certain scenarios, when compared against Hadoop, Presto, Amazon Redshift AQUA, and others. But in a sea of “better,” “faster,” “cheaper” claims coming from virtually everywhere, I realize Ocient may seem like more of the same.
The truth is, speed, performance, and cost all depend on the workload, the size of the datasets, the number of variables to account for, and a variety of other factors. For that reason, I’m not going to tell you that Ocient is the best data analytics solution provider out there, but I will tell you why we are as fast as we are, particularly when operating at hyperscale.
NVMe SSDs have revolutionized the storage stack, achieving far higher throughput and IOPS than any previous generation of drives: a single NVMe drive can deliver 56 Gbps of throughput with random read performance of 1M IOPS. That performance comes from parallel access and much deeper queues; queue depths above 256 are generally required to saturate a single drive. Considering many servers hold up to 24 NVMe drives, the software layer needs more than 6,000 requests in flight to saturate a 24-drive system delivering 1.3 Tbps. Databases designed before NVMe were never built for this level of queue depth or throughput, and therefore come nowhere close to exploiting it. By focusing its software on the parallel characteristics of NVMe drives, Ocient realizes the full performance of this disruption in storage hardware.
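The arithmetic behind those numbers is worth making explicit. A minimal sketch, where the constants are the figures quoted in the paragraph above rather than measured values:

```cpp
#include <cstdint>

// Back-of-the-envelope saturation math for an NVMe array, using the
// figures from the text: ~56 Gbps of throughput and a queue depth of
// ~256 outstanding requests per drive, 24 drives per server.
constexpr std::uint64_t kDrives = 24;
constexpr std::uint64_t kGbpsPerDrive = 56;
constexpr std::uint64_t kQueueDepthPerDrive = 256;

// Aggregate throughput of the array: 24 * 56 Gbps = 1,344 Gbps (~1.3 Tbps).
constexpr std::uint64_t aggregate_gbps() { return kDrives * kGbpsPerDrive; }

// Requests that must stay in flight to keep every queue full: 24 * 256 = 6,144.
constexpr std::uint64_t inflight_requests() {
  return kDrives * kQueueDepthPerDrive;
}
```

A database that tops out at a few dozen outstanding I/O requests leaves almost all of that 1.3 Tbps on the table, which is the gap the text is describing.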
Most databases and data warehouses access drives through a filesystem and a kernel-mode driver. This layer of abstraction was acceptable with spinning disks, but as drive performance has climbed, it has become increasingly expensive. Ocient therefore removes it entirely, bypassing the kernel to read and write NVMe drives directly over the PCIe bus. This direct access dramatically reduces the context switches and memory copies in the data path, freeing up memory bandwidth and CPU cycles for other tasks.
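Ocient's user-space driver is proprietary, but the cost it removes is easy to see in standard Linux APIs. The closest portable approximation is `O_DIRECT`, which skips the kernel page cache (and the extra memory copy it implies) while still crossing the kernel block layer; a sketch, assuming a 4 KiB block size:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <algorithm>
#include <cstddef>
#include <cstdlib>
#include <string>

// Illustration only: Ocient talks to NVMe drives directly over PCIe from
// user space. O_DIRECT is a milder, standard technique that at least
// removes the page-cache copy. It requires block-aligned buffers and sizes.
std::string read_direct(const char* path, std::size_t len) {
  int fd = open(path, O_RDONLY | O_DIRECT);
  if (fd < 0) fd = open(path, O_RDONLY);  // some filesystems reject O_DIRECT
  if (fd < 0) return {};
  std::size_t aligned = (len + 4095) / 4096 * 4096;  // round up to 4 KiB
  void* buf = nullptr;
  if (posix_memalign(&buf, 4096, aligned) != 0) {  // block-aligned buffer
    close(fd);
    return {};
  }
  ssize_t n = pread(fd, buf, aligned, 0);
  std::string out;
  if (n > 0)
    out.assign(static_cast<char*>(buf),
               std::min(static_cast<std::size_t>(n), len));
  free(buf);
  close(fd);
  return out;
}

// Round-trip check used for demonstration: write 4 KiB, read it back.
bool direct_io_round_trip() {
  const char* path = "/tmp/odirect_demo.bin";
  std::string payload(4096, 'x');
  int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
  if (fd < 0) return false;
  bool wrote = write(fd, payload.data(), payload.size()) ==
               static_cast<ssize_t>(payload.size());
  close(fd);
  return wrote && read_direct(path, payload.size()) == payload;
}
```

Even this halfway step pays for itself at high throughput; bypassing the kernel entirely, as the text describes, removes the remaining syscall and block-layer overhead as well.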
Until relatively recently, increases in computational or data processing speed were made by increasing the speed of processing elements (for example, the CPU clock rate). Now, and for the foreseeable future, increases in total computational throughput are realized less through increasing the speed of single components and more through having multiple components acting in parallel. Achieving peak throughput from NVMe storage, high-core-count processors, and modern memory systems requires software designed to interact with and consume these components in a highly parallel manner.
For this reason, we designed Megalane, Ocient’s custom high-throughput interface to NVMe SSDs, which uses highly parallel reads at high queue depths to saturate the drives and extract the most from ultra-performance, industry-standard hardware. Ocient’s ability to fully capitalize on modern hardware includes processing patterns that treat individual CPU cores as parallel data channels, storage structures designed to drive parallel I/O requests, and data flow schemes that maintain highly parallel flow across process and network boundaries. This sort of highly parallelized design must be built into the engine at its lowest levels; it cannot simply be “bolted on” after the fact.
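Megalane itself is proprietary, so the following is only a sketch of the pattern, with plain threads and `pread()` standing in for many outstanding parallel reads. Each "lane" owns a disjoint, interleaved stripe of the file, the way independent CPU cores can each own a parallel data channel:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// Keep `lanes` reads in flight at once instead of one sequential stream.
// Lane k reads offsets k*chunk, (k+lanes)*chunk, (k+2*lanes)*chunk, ...
std::size_t parallel_read_total(const char* path, std::size_t file_size,
                                unsigned lanes, std::size_t chunk) {
  std::atomic<std::size_t> total{0};
  std::vector<std::thread> workers;
  for (unsigned lane = 0; lane < lanes; ++lane) {
    workers.emplace_back([&, lane] {
      int fd = open(path, O_RDONLY);  // one descriptor per lane
      if (fd < 0) return;
      std::vector<char> buf(chunk);
      for (std::size_t off = lane * chunk; off < file_size;
           off += lanes * chunk) {
        ssize_t n = pread(fd, buf.data(), chunk, off);
        if (n > 0) total += static_cast<std::size_t>(n);
      }
      close(fd);
    });
  }
  for (auto& w : workers) w.join();
  return total.load();
}

// Helper for the demonstration: create a file of `bytes` zero bytes.
bool write_zeros(const char* path, std::size_t bytes) {
  int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
  if (fd < 0) return false;
  std::vector<char> zeros(bytes, 0);
  bool ok = write(fd, zeros.data(), bytes) == static_cast<ssize_t>(bytes);
  close(fd);
  return ok;
}
```

A real NVMe-saturating engine would use asynchronous submission queues rather than one thread per lane, but the structural point is the same: parallelism in the I/O path has to exist in the design, not be retrofitted.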
Most existing database and data warehouse systems were designed in a world where serial spinning disk technology was the only viable bulk storage available and with internal structures that don’t align with the parallelism exposed by processing elements. As a result, they are often fundamentally incapable of fully utilizing the resources available. And given that future increases in computational throughput will be brought about by increases in parallelism at all levels of the hardware, existing systems will be even less capable of capitalizing on this increased throughput. By contrast, the Ocient Hyperscale Data Warehouse, due to its highly parallelized design, scales its processing throughput in line with future increases in hardware capabilities.
Ocient offers a main clustering index and multiple types of secondary indexes. Inverted indexes apply to fixed-length columns and can execute exact-match or range scans. Hash indexes work on variable-length columns and are good for exact matching on strings. NGram indexes tokenize variable-length strings and store the tokens in an inverted index; they shine when running %like% searches on items such as domain names or host names. Many systems do not support secondary indexes at all, which means querying petabytes of data with predicate filters requires full table scans; avoiding those scans dramatically improves performance. It is important to note that Ocient builds its indexes on ingest, and does so efficiently while maintaining low-latency, high-throughput loading.
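Ocient's on-disk index formats are not public, so the following is only a toy sketch of the NGram idea: every trigram of each stored string maps to the row ids containing it, so a %like% search becomes an intersection of small posting lists instead of a full table scan.

```cpp
#include <cstddef>
#include <map>
#include <set>
#include <string>

// Toy trigram (n = 3) inverted index for LIKE '%pattern%' lookups.
class TrigramIndex {
  std::map<std::string, std::set<int>> postings_;  // trigram -> row ids

 public:
  void insert(int row_id, const std::string& value) {
    for (std::size_t i = 0; i + 3 <= value.size(); ++i)
      postings_[value.substr(i, 3)].insert(row_id);
  }

  // Candidate rows for LIKE '%pattern%'; pattern must be >= 3 chars here.
  std::set<int> like(const std::string& pattern) const {
    std::set<int> result;
    bool first = true;
    for (std::size_t i = 0; i + 3 <= pattern.size(); ++i) {
      auto it = postings_.find(pattern.substr(i, 3));
      std::set<int> rows =
          (it == postings_.end()) ? std::set<int>{} : it->second;
      if (first) {
        result = rows;
        first = false;
      } else {
        std::set<int> kept;  // intersect with the next posting list
        for (int r : result)
          if (rows.count(r)) kept.insert(r);
        result = kept;
      }
    }
    return result;
  }
};
```

On host names or domain names, each trigram's posting list is a tiny fraction of the table, which is why this style of index turns a petabyte-scale substring search into a handful of set intersections.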
Arrays and Tuples for Semi-Structured Data
Ocient processes JSON files and stores them as arrays or tuples in our columnar data warehouse. A user can then query the multidimensional dataset using standard SQL array functions like unnest. Handling semi-structured data this way avoids the enormous joins that would otherwise drag down performance, yielding higher performance at scale because each query needs far less CPU and RAM. Furthermore, Ocient allows indexes on arrays and tuples, further enhancing performance when applying predicate filters.
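As an illustration of what unnest does over a columnar array layout (the layout below is a generic columnar convention of flat values plus per-row offsets, not Ocient's actual format):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Generic columnar ARRAY column: one flat value vector plus per-row
// offsets, so row r's elements live in values[offsets[r], offsets[r+1]).
struct ArrayColumn {
  std::vector<long> values;
  std::vector<std::size_t> offsets;  // size = row count + 1
};

// UNNEST: expand each (row, array) into flat (row_id, element) pairs,
// without materializing a join against a separate child table.
std::vector<std::pair<std::size_t, long>> unnest(const ArrayColumn& col) {
  std::vector<std::pair<std::size_t, long>> out;
  for (std::size_t r = 0; r + 1 < col.offsets.size(); ++r)
    for (std::size_t i = col.offsets[r]; i < col.offsets[r + 1]; ++i)
      out.emplace_back(r, col.values[i]);
  return out;
}
```

The point of the layout is that the "join" between a row and its nested elements is just adjacent memory, which is why keeping semi-structured data in arrays avoids the large joins the text mentions.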
Optimizing queries is an essential part of a relational database. The number of possible plans for a complex query is effectively endless, and finding an optimal one has a dramatic impact on query speed.
The Ocient Hyperscale SQL Optimizer uses multi-dimensional probability density functions instead of histograms to calculate selectivity. In general, this lets the optimizer compute much more accurate selectivity estimates and understand correlation between columns in intermediate results, which in turn improves its estimates of the “cost” of alternative plans. Additionally, the combination of a rich set of runtime operators and novel stochastic optimization techniques allows Ocient to generate execution plans that traditional database systems would never find.
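A toy example makes the correlation point concrete. Per-column histograms force the optimizer to multiply marginal selectivities as if the columns were independent; joint statistics capture the correlation directly. (The code below is a simplified illustration of that contrast, not Ocient's estimator.)

```cpp
#include <utility>
#include <vector>

// Estimate the selectivity of (a == 1 AND b == 1) two ways over the
// same sample of (a, b) rows.
struct Stats {
  double independent;  // P(a==1) * P(b==1): per-column histograms
  double joint;        // P(a==1, b==1): multi-dimensional statistics
};

Stats selectivity(const std::vector<std::pair<int, int>>& rows) {
  double n = static_cast<double>(rows.size());
  double a1 = 0, b1 = 0, both = 0;
  for (const auto& r : rows) {
    if (r.first == 1) a1++;
    if (r.second == 1) b1++;
    if (r.first == 1 && r.second == 1) both++;
  }
  return {(a1 / n) * (b1 / n), both / n};
}
```

When the two columns are perfectly correlated, the independence assumption underestimates the matching rows by half; cost estimates built on that error can steer a planner toward a badly suboptimal join order.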
Finally, the Ocient Hyperscale SQL Optimizer maintains a model of the network and hardware infrastructure layer, which it uses to find the most efficient execution path for each query.
Highly Optimized C++
Ocient chose C++ for its software and makes heavy use of templates throughout. Templates speed up hot code paths and tend to minimize cache misses on the processor. Beyond templates, Ocient has invested heavily in designing algorithms and data structures that minimize cache misses.
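A minimal sketch of the template technique (a generic example, not Ocient's code): when the predicate is a template parameter, the compiler can inline it into the scan loop and keep the hot path free of indirect calls, whereas a virtual-function predicate costs an indirect call per row.

```cpp
#include <cstddef>
#include <vector>

// The predicate type is resolved at compile time, so each instantiation
// of this function gets the comparison inlined into the loop body.
template <typename Pred>
std::size_t count_matching(const std::vector<long>& column, Pred pred) {
  std::size_t hits = 0;
  for (long v : column)
    if (pred(v)) ++hits;  // no virtual dispatch, no function-pointer call
  return hits;
}
```

Usage is just `count_matching(col, [](long v) { return v > 4; })`; each distinct lambda produces its own specialized, fully inlined loop.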
Ocient also uses huge pages in Linux and has written many of its own memory allocators for frequently run algorithms. By avoiding memory fragmentation and exploiting knowledge of how each algorithm uses memory, Ocient has dramatically improved memory bandwidth and CPU utilization.
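One classic shape such a purpose-built allocator takes is a bump (arena) allocator: when an algorithm allocates many short-lived objects and frees them all at once, there is no need for per-allocation bookkeeping, and fragmentation cannot occur. The sketch below is an illustration of that idea, not Ocient's allocator.

```cpp
#include <cstddef>
#include <vector>

// Bump allocator: hand out aligned slices of one contiguous slab and
// free everything in a single reset(). No free lists, no fragmentation.
class Arena {
  std::vector<unsigned char> slab_;
  std::size_t used_ = 0;

 public:
  explicit Arena(std::size_t bytes) : slab_(bytes) {}

  void* allocate(std::size_t n,
                 std::size_t align = alignof(std::max_align_t)) {
    std::size_t start = (used_ + align - 1) / align * align;  // align up
    if (start + n > slab_.size()) return nullptr;  // arena exhausted
    used_ = start + n;
    return slab_.data() + start;
  }

  std::size_t bytes_used() const { return used_; }
  void reset() { used_ = 0; }  // "free" every allocation at once
};
```

Backing the slab with huge pages (e.g. via `mmap` with `MAP_HUGETLB`) additionally cuts TLB misses, which is the other half of the memory-bandwidth win the text describes.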
Why Being Fast Isn’t Good Enough
As you can hopefully see by now, Ocient has optimized and re-optimized for speed by designing our software to leverage every ounce of performance from the most modern industry standard hardware, NVMe SSDs. But the truth is, being the fastest technology in town isn’t good enough to be the chosen provider for hyperscale data analytics solutions across all industries and use cases.
At the end of the day, no one is going to buy a slower technology solution. Being faster merely meets the minimum bar for evaluation. So we tick that box, and we’re proud of it. We’ve built our technology from the ground up to deliver maximum performance at hyperscale, meaning we can tackle a variety of operations, from transformation, loading, and indexing to ultra-large analytics, all at once. But even that isn’t necessarily enough to seal the deal when it comes to the largest, most mission-critical workloads.
For Ocient, the key to delivering next generation hyperscale data analytics solutions is developing a close partnership with customers so that we can deeply understand their unique business requirements, their pain points, their metrics for success, and their existing data ecosystems. Once we understand these elements, we get to work designing end-to-end solutions powered by our blazing fast data warehouse with the goal of enabling our customers to execute previously infeasible queries that unlock transformational business and operational value. This close partnership is unique in the market as many organizations are left to fend for themselves, sometimes without the skillsets and experience to develop ground-breaking data analytics solutions on their own.
If you’re looking for a partner to deliver unparalleled speed at scale, I invite you to check out Ocient. And if speed isn’t enough, we’re right there with you. For anyone looking for a partner to drive innovation in the space of hyperscale data analytics, we’d be eager to take your call.