By Jenna Boller, VP of Marketing at Ocient
As AI applications consume a growing share of energy from the grid, increasingly concerning reports have been published about the lack of energy available to supply the burgeoning demand of these compute-intensive workloads. Here’s one of the latest headlines, from NPR last month: “AI brings soaring emissions for Google and Microsoft, a major contributor to climate change.”
Last week, leaders from the data storage industry gathered at the Future of Memory and Storage conference in Santa Clara, formerly known as the Flash Memory Summit. A big topic of conversation was how industry players can work together to effectively lower the energy demands of growing data management and artificial intelligence (AI) workloads.
During the event, our partner Solidigm hosted a session for press and analysts aimed at shedding light on the current AI environment and its data storage challenges. Solidigm’s representatives noted that some AI model datasets are growing at a rate of 2.3x every two years, while the average active power required per processor has increased more than 5x over the past 10 years.
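To put those growth rates in perspective, here is a rough, back-of-the-envelope compounding calculation (our own illustration, not a figure presented during the session):

```python
# Illustrative back-of-the-envelope math (our own, not a figure from the session):
# project how the cited growth rates compound over a decade.

dataset_growth_per_2_years = 2.3   # cited: some AI model datasets grow ~2.3x every 2 years
years = 10

dataset_multiple = dataset_growth_per_2_years ** (years / 2)
processor_power_multiple = 5       # cited: 5x+ active power per processor over 10 years

print(f"Dataset size after {years} years: ~{dataset_multiple:.0f}x today's size")
print(f"Active power per processor over the same period: ~{processor_power_multiple}x")
```

At that pace, a dataset compounds to roughly 64x its current size within a decade, which is why the conversation around every watt of storage and compute keeps intensifying.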
As Solidigm reinforced throughout its presentations, in a world that relies on always-on, compute-intensive data and AI workloads, every watt counts.
A more efficient engine for high-intensity data ingest, preparation, and predictive AI
During the session, Ocient’s chief product officer, Joe Jablonski, presented Ocient’s high-octane performance capabilities in data analytics and predictive AI. His presentation highlighted the Ocient platform’s efficient footprint, demonstrating how Ocient can reduce cost, space, and energy consumption for always-on, compute-intensive workloads by up to 90%.
Many of the unique innovations Ocient has brought to market are made possible by our all-NVMe software architecture:
- CASA vs. ROSA – Ocient’s Compute Adjacent Storage Architecture™ (CASA) eliminates the remote object tier of a remote object storage architecture (ROSA), which can be constrained by network bottlenecks and which, in many legacy systems and cloud data warehouses, sits on top of less energy-efficient spinning disks beneath the software layer
- Native support for AI/ML, data preparation, and geospatial analytics – leverages the extreme processing capabilities of NVMe SSDs to bring compute and analytics to the data rather than copying and moving data between multiple disparate systems
- Zero Copy Reliability™ – takes advantage of the low failure rates of NVMe SSDs alongside parity encoding and efficient compression to deliver enterprise-grade reliability without storing two to three full copies of the data (see the sketch after this list)
- Ocient Megalane™ – delivers massively parallel processing and high queue depths, aligning to the underlying industry-standard hardware resources and Solidigm’s message that “every watt counts”
- Comprehensive Indexes – accelerate query performance with minimal overhead, something that is not practical within an HDD-based architecture
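To make the storage-overhead difference behind Zero Copy Reliability™ concrete, here is a minimal, hypothetical sketch. The erasure-coding layout (k data segments plus m parity segments) and the dataset size are illustrative assumptions on our part, not Ocient’s actual encoding scheme:

```python
# A minimal sketch (our illustration, not Ocient's implementation) of why parity
# encoding needs far less raw capacity than keeping two to three full copies.
# Assumes a generic erasure-coded layout with k data segments + m parity segments.

def replication_raw(logical_pb: float, copies: int) -> float:
    """Raw petabytes stored when keeping `copies` full copies of the data."""
    return logical_pb * copies

def parity_raw(logical_pb: float, k: int, m: int) -> float:
    """Raw petabytes stored with k data segments plus m parity segments."""
    return logical_pb * (k + m) / k

logical_pb = 4.0  # hypothetical logical dataset size in petabytes

print(f"3x replication:     {replication_raw(logical_pb, 3):.1f} PB raw")
print(f"Parity (k=10, m=2): {parity_raw(logical_pb, 10, 2):.1f} PB raw")
```

In this illustration, protecting the same 4 PB dataset takes 12 PB of raw capacity with triple replication but only 4.8 PB with parity encoding, and every petabyte not written or powered is energy saved.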
Understanding compute tiers to reduce energy consumption in an AI storage cluster
Solidigm also took time during the session to walk through the various compute processes within an AI data pipeline, pointing out opportunities to reduce energy consumption and maximize the effectiveness and efficiency of these energy-intensive workloads.
The challenge of fueling Big Data and AI workloads with enough energy will only grow over time as more organizations invest in GenAI and machine learning at scale. When building AI data pipelines, one critical consideration is how to use all available compute effectively, including GPU resources that can otherwise sit idle between training cycles.
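As a simple illustration of that idea (a generic prefetching pattern, not a feature of Ocient or Solidigm products), the sketch below overlaps data preparation with training so the accelerator is not left waiting on the next batch:

```python
# Generic prefetching sketch (illustrative only): prepare the next batch on the
# CPU while the current batch is being consumed, so expensive compute stays busy.
import queue
import threading

def produce_batches(batch_queue: queue.Queue, num_batches: int) -> None:
    """Simulate data ingest and preparation running ahead of training."""
    for i in range(num_batches):
        batch_queue.put(f"prepared-batch-{i}")  # stand-in for real data prep work
    batch_queue.put(None)                       # sentinel: no more batches

def train(batch_queue: queue.Queue) -> None:
    """Consume batches as soon as they are ready instead of waiting between cycles."""
    while (batch := batch_queue.get()) is not None:
        pass                                    # stand-in for a training step

q = queue.Queue(maxsize=4)                      # bounded buffer between prep and training
threading.Thread(target=produce_batches, args=(q, 16), daemon=True).start()
train(q)
```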
By identifying opportunities to optimize energy consumption across an AI cluster, partners like Solidigm and Ocient can drive greater performance without the energy consumed by high-intensity data products and services scaling linearly, or worse, alongside it.
Ocient delivers maximum data performance in a minimal footprint
Ocient enables customers to drastically reduce complexity within their data management environments by breaking down data silos, consolidating multiple capabilities on a single platform, and reducing the cost of data processing at scale. As a result, organizations can power their machine learning and AI applications with fresher, more accurate, more comprehensive, and lower-cost data pipelines. From data ingest to preparation, Ocient can boost data quality while reducing the cost and resourcing required to feed an AI pipeline at scale.
In addition to the performance benefits, Ocient delivers significant cost, space, and energy efficiency benefits. For example, when comparing Ocient to a legacy deployment for a 4.41 petabyte storage + analytics cluster, Ocient deploys into 33% of the legacy data center footprint while saving an estimated $1.6 million in energy costs alone – not to mention savings in software, hardware, and operating costs.
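For readers who want to frame a similar estimate for their own environment, the sketch below shows the general shape of such a calculation. The power draws, utility rate, and term are hypothetical placeholders, not the actual inputs behind the figures above:

```python
# Hypothetical energy-cost comparison; every input below is an assumption chosen
# for illustration, not the data behind the estimate quoted in this post.

HOURS_PER_YEAR = 24 * 365

def energy_cost_usd(avg_kw: float, usd_per_kwh: float, years: int) -> float:
    """Electricity cost for a cluster drawing avg_kw continuously."""
    return avg_kw * HOURS_PER_YEAR * years * usd_per_kwh

legacy_kw = 120.0   # assumed average draw of a legacy cluster
ocient_kw = 40.0    # assumed average draw of a consolidated Ocient cluster
rate = 0.12         # assumed utility rate in $/kWh
term_years = 5      # assumed comparison period

savings = energy_cost_usd(legacy_kw, rate, term_years) - energy_cost_usd(ocient_kw, rate, term_years)
print(f"Illustrative energy savings over {term_years} years: ${savings:,.0f}")
```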
We truly value our partnerships with data storage industry leaders like Solidigm and look forward to the additional energy-saving benefits we can deliver in a world where software and hardware are closely woven into unique solutions that deliver increased business value with a minimal carbon footprint.
For more information about Ocient’s partnership with Solidigm and our energy-efficient data analytics solutions, please reach out to us here and request a cost savings analysis or demo.