By Jenna Boller, VP of Marketing at Ocient
Attending the 2024 Open Compute Project (OCP) Global Summit in San Jose this week, I heard one clear and repeated theme: the AI era is fueling a surge in energy demand that is not currently sustainable.
During the summit, I joined representatives from Solidigm, Supermicro, and Iceotope for a panel discussion, Navigating the AI Compute Curve: Adoption of Efficiently Scalable Data Center Solutions, moderated by Allyson Klein from TechArena.

Some highlights and takeaways from our panel:
- Today, data centers use 2% of the world’s power, a share expected to reach 17% in the coming years.
- In Ocient’s Beyond Big Data report, launched last month, 31% of respondents said they would be willing to switch to a different data or AI solution if it could reduce energy consumption for compute-intensive workloads.
- Innovation in liquid cooling enables almost 100% heat capture, with solutions under development to store that heat and transfer it to swimming pools and other facilities that can reuse it.
- Energy supply and demand may drive opportunities and challenges within edge computing, particularly for AI use cases.
In addition to the panel, the OCP Summit included several presentations on sustainability, data center energy efficiency, and the measurement of Scope 2 and Scope 3 emissions. Many sessions ended with a clear call to action to advance the impact of these working groups.
Here are three key themes I took away from the 2024 OCP Global Summit.
A call for transparency around the energy cost of AI and data applications
During our panel discussion and throughout the Q&A sessions I attended at the OCP Summit, there was considerable discussion of the need for software companies to be transparent in reporting the energy consumed by compute-intensive applications. Without that transparency, buyers and users have no real ability to optimize or govern their tech stack with energy requirements or goals in mind.
To date, much of the discussion has centered on efficient data centers. However, by partnering closely with hardware and component manufacturers, Ocient has been able to demonstrate a 50-90% reduction in the cost, system footprint, and energy consumption of an always-on, compute-intensive data environment compared with legacy and cloud-based data platform software solutions.
To me, this makes it clear that software has a role to play in driving down the energy consumption and cost of compute-intensive data and AI applications. Providing transparency around the energy cost of different software applications can lead to more efficient computing by ensuring that software is written and modernized to make full use of the underlying hardware and cloud instances.
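To make that kind of transparency concrete, here is a minimal sketch of per-workload energy reporting; the function name, workload name, and power figures are illustrative assumptions on my part, not any vendor's actual API or telemetry.

```python
# Minimal sketch of per-workload energy reporting (illustrative only; the
# function name and the numbers below are assumptions, not a real API).

def workload_energy_kwh(avg_power_watts: float, runtime_seconds: float) -> float:
    """Energy consumed by a workload, given its average power draw and runtime."""
    return (avg_power_watts / 1000.0) * (runtime_seconds / 3600.0)

# Hypothetical always-on analytics cluster: 12 kW average draw over 24 hours.
kwh = workload_energy_kwh(avg_power_watts=12_000, runtime_seconds=24 * 3600)
print(f"daily-analytics: {kwh:.1f} kWh")  # prints 288.0 kWh
```

Even a simple per-workload figure like this would give buyers a baseline for setting and tracking energy goals.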
There is an urgent need to adopt a common framework for carbon measurement and reporting
In a session titled From Metrics to Impact: Implementing Effective Carbon Reduction Strategies, Andrea Desimone from Schneider Electric and Kellie Jensen from Meta presented the challenges of measuring and reporting carbon emissions, along with their ideas for building effective decarbonization strategies.

By developing and aligning around a standard carbon metric for the various data center equipment and device components, organizations would have a way to rate the sustainability of the equipment they purchase. Perhaps more importantly, they could weigh the carbon impact of their preferred solutions alongside price, performance, and other common purchase considerations.
Desimone and Jensen also walked through the complexities of measuring embodied carbon versus operational carbon, including how to account for both when making an informed purchasing decision.
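As a rough illustration of how those two metrics might be combined into a single comparable figure (every number below is a hypothetical assumption of mine, not data from the session), one simple model treats lifecycle carbon as embodied carbon plus operational carbon over the equipment's service life:

```python
# Minimal sketch: lifecycle carbon = embodied carbon + operational carbon.
# All values here are illustrative assumptions, not measured data.

GRID_INTENSITY_KG_PER_KWH = 0.4   # assumed average grid carbon intensity
LIFETIME_HOURS = 5 * 365 * 24     # assumed 5-year service life

def lifecycle_carbon_kg(embodied_kg: float, avg_power_kw: float) -> float:
    """Embodied (manufacturing) carbon plus operational carbon over the lifetime."""
    operational_kg = avg_power_kw * LIFETIME_HOURS * GRID_INTENSITY_KG_PER_KWH
    return embodied_kg + operational_kg

# A lower-power server can win on lifecycle carbon even when its
# embodied footprint from manufacturing is higher.
print(lifecycle_carbon_kg(embodied_kg=1500, avg_power_kw=0.50))  # ~10,260 kg
print(lifecycle_carbon_kg(embodied_kg=1800, avg_power_kw=0.35))  # ~7,932 kg
```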
In addition, their research uncovered challenges in driving industry-wide alignment on a standard framework for carbon measurement and reporting.

Software and hardware vendors must co-design sustainable solutions
Ashish Jain from AMD shared that his company has set a “30×25” goal: a 30x improvement in energy efficiency by 2025. While the company is on track to meet that goal, part of his call to action included more co-design between hardware and software vendors, with tight collaboration across system components.

He also called for standardized power monitoring and management capabilities across system components.
Across all the presentations, one thing was clear: collaboration is at the heart of OCP’s working groups. It was encouraging to see cross-sector representatives from utilities, tech giants, and startups leading projects and initiatives to drive adoption of more sustainable metrics and best practices.
Ocient delivers efficiency for compute-intensive data and AI workloads
While at the conference, I talked with Allyson Klein from TechArena and Jeniece Wnorowski from Solidigm about how Ocient is pioneering a new software architecture and services to drastically reduce the cost, system footprint, and energy consumption of always-on, compute-intensive data and AI solutions.
To hear more about Ocient’s innovation in the data warehousing technology space, check out the podcast here:
From its founding, Ocient has optimized for efficient performance at scale in order to deliver the lowest cost and system footprint to customers. We partner closely with companies like Solidigm to leverage the latest high-performance, industry-standard hardware, and we architect our software layer to process data as efficiently as possible.
From our Compute Adjacent Storage Architecture® (CASA), which collapses the compute and storage layers into a single tier, to our Zero Copy Reliability™ design, which keeps data reliable without maintaining extra copies, Ocient’s breakthrough approach delivers a 50-90% reduction in system cost, footprint, and energy consumption for always-on, compute-intensive workloads.
Whether you’re looking to reduce the cost and energy footprint of existing data-intensive workloads or architecting a new AI application, please reach out to explore the savings you could realize by migrating to Ocient.