The explosion of AI is further heightening demand for storage performance and capacity as organizations feed models and databases with unprecedented amounts of data. The next generation of storage technologies will need to deliver greater performance, density, and capacity than ever before.
Supermicro’s fourth annual Open Storage Summit brings together leading storage experts from across the industry, including drive manufacturers, compute component manufacturers, software developers, and, of course, Supermicro’s industry-leading system architects, to discuss the latest storage technologies and how they will solve tomorrow’s data challenges, from the data center out to the intelligent edge.
This year’s Summit includes a roundtable keynote session followed by five focus sessions, with guests from the storage industry’s leading players including Intel®, AMD, NVIDIA, Micron, Kioxia, Solidigm, and Samsung, as well as Supermicro’s storage software partners.
New Innovations For Storage Performance
In a time in which pure processing power is game-changing, it’s important to continually reflect on current solutions and look for new ways to keep business players progressing through new levels. Sometimes, that progress means stopping investment in an old version of that game and crafting a whole new open world instead.
In Session 2 of our 2023 Open Storage Summit, you will hear from NVIDIA on how they are helping organizations build whole new worlds in which to operate. With the introduction of the third pillar of computing, the Data Processing Unit (DPU), DPUs join CPUs and GPUs to create an environment in which applications are accelerated well beyond the capabilities of CPUs alone.
This is particularly important in the rapidly growing AI market, where lightning-fast storage processing means that critical business initiatives make their way to the leaderboard instead of being relegated to game-over status.
During this session, players in the audience will:
- Discover the limitations inherent in traditional storage architectures
- Understand the advantages of GPUDirect storage and RDMA for AI
- Learn how GPUDirect Storage and RDMA work at the rack level to combine the resources of multiple systems into one massive compute cluster
- Uplevel their knowledge around how DPUs can effectively offload compute tasks to massively improve storage performance
Register for upcoming webinars
Join the discussion! Register now for full access to the storage industry’s leading online event, featuring the latest on key storage trends and an exclusive look into the future of high-performance storage from the most influential minds in the industry.
Registering also enters you for a chance to win a $250 Amazon gift card.