Dell Technologies announces Dell AI Factory advancements, including powerful and energy-efficient AI infrastructure, integrated partner ecosystem solutions and professional services to drive simpler and faster AI deployments.
Why it matters
AI is now essential for businesses, with 75% of organizations saying AI is key to their strategy and 65% successfully moving AI projects into production. However, challenges like data quality, security concerns and high costs can slow progress.
The Dell AI Factory approach can be up to 62% more cost-effective for inferencing LLMs on-premises than the public cloud and helps organizations securely and easily deploy enterprise AI workloads at any scale. Dell offers the industry’s most comprehensive AI portfolio designed for deployments across client devices, data centers, edge locations and clouds. More than 3,000 global customers across industries are accelerating their AI initiatives with the Dell AI Factory.
Dell infrastructure advancements help organizations deploy and manage AI at any scale
Dell introduces end-to-end AI infrastructure to support everything from edge inferencing on an AI PC to managing massive enterprise AI workloads in the data center.
Dell Pro Max AI PC delivers industry’s first enterprise-grade discrete NPU in a mobile form factor
The Dell Pro Max Plus laptop with Qualcomm® AI 100 PC Inference Card is the world’s first mobile workstation with an enterprise-grade discrete NPU. It offers fast and secure on-device inferencing at the edge for large AI models typically run in the cloud, such as today’s 109-billion-parameter models.
The Qualcomm AI 100 PC Inference Card features 32 AI cores and 64 GB of memory, providing the performance AI engineers and data scientists need to deploy large models for edge inferencing.
Dell redefines AI cooling with innovations that reduce cooling energy costs by up to 60%
The industry-first Dell PowerCool Enclosed Rear Door Heat Exchanger (eRDHx) is a Dell-engineered alternative to standard rear door heat exchangers. Designed to capture 100% of IT heat generated with its self-contained airflow system, the eRDHx can reduce cooling energy costs by up to 60% compared to currently available solutions.
With Dell’s factory integrated IR7000 racks equipped with future-ready eRDHx technology, organizations can:
· Significantly cut costs and eliminate reliance on expensive chillers, because the eRDHx operates with warmer water temperatures (between 32 and 36 degrees Celsius) than traditional solutions.
· Maximize data center capacity by deploying up to 16% more racks of dense compute, without increasing power consumption.
· Enable air cooling capacity up to 80 kW per rack for dense AI and HPC deployments.
· Minimize risk with advanced leak detection, real-time thermal monitoring and unified management of all rack-level components with the Dell Integrated Rack Controller.
Dell PowerEdge servers with AMD GPUs maximize performance and efficiency
Dell PowerEdge XE9785 and XE9785L servers will support AMD Instinct™ MI350 series GPUs, which offer 288 GB of HBM3E memory per GPU and deliver up to 35 times greater inferencing performance than the previous generation. Available in liquid-cooled and air-cooled configurations, the servers will reduce facility cooling energy costs.
Dell advancements power efficient and secure AI deployments and workflows
Because AI is only as powerful as the data that fuels it, organizations need a platform designed for performance and scalability. The Dell AI Data Platform updates improve access to high-quality structured, semi-structured and unstructured data across the AI lifecycle.
· Dell Project Lightning is the world’s fastest parallel file system, delivering up to two times greater throughput than competing parallel file systems in new testing. Project Lightning will accelerate training time for large-scale and complex AI workflows.
· Dell Data Lakehouse enhancements simplify AI workflows and accelerate use cases — such as recommendation engines, semantic search and customer intent detection — by creating and querying AI-ready datasets.
“We’re excited to work with Dell to support our cutting-edge AI initiatives, and we expect Project Lightning to be a critical storage technology for our AI innovations,” said Dr. Paul Calleja, director, Cambridge Open Zettascale Lab and Research Computing Services, University of Cambridge.
With additional portfolio advancements, organizations can:
· Lower power consumption, reduce latency and boost cost savings for high performance computing (HPC) and AI fabrics with Dell Linear Pluggable Optics.
· Increase trust in the security of their AI infrastructure and solutions with Dell AI Security and Resilience Services, which provide full-stack protection across AI infrastructure, data, applications and models.
Dell expands AI partner ecosystem with customizable AI solutions and applications
Dell is collaborating with AI ecosystem players to deliver tailored solutions that simply and quickly integrate into organizations’ existing IT environments. Organizations can:
· Enable intelligent, autonomous workflows with a first-of-its-kind on-premises deployment of Cohere North, which integrates various data sources while ensuring control over operations.
· Innovate where the data is with Google Gemini and Google Distributed Cloud on-premises, available on Dell PowerEdge XE9680 and XE9780 servers.
· Prototype and build agent-based enterprise AI applications with Dell AI Solutions with Llama, using Meta’s latest Llama Stack distribution and Llama 4 models.
· Securely run scalable AI agents and enterprise search on-premises with Glean. Dell and Glean’s collaboration will deliver the first on-premises deployment architecture for Glean’s Work AI platform.
· Build and deploy secure, customizable AI applications and knowledge management workflows with solutions jointly engineered by Dell and Mistral AI.
The Dell AI Factory also expands to include:
· Advancements to the Dell AI Platform with AMD add 200G of storage networking and an upgraded AMD ROCm open software stack for organizations to simplify workflows, support LLMs and efficiently manage complex workloads. Dell and AMD are collaborating to provide Day 0 support and performance optimized containers for AI models such as Llama 4.
· The new Dell AI Platform with Intel helps enterprises deploy a full stack of high performance, scalable AI infrastructure with Intel® Gaudi® 3 AI accelerators.
Dell also announced advancements to the Dell AI Factory with NVIDIA and updates to Dell NativeEdge to support AI deployments and inferencing at the edge.
Perspectives
“It has been a non-stop year of innovating for enterprises, and we’re not slowing down. We have introduced more than 200 updates to the Dell AI Factory since last year,” said Jeff Clarke, chief operating officer, Dell Technologies. “Our latest AI advancements — from groundbreaking AI PCs to cutting-edge data center solutions — are designed to help organizations of every size to seamlessly adopt AI, drive faster insights, improve efficiency and accelerate their results.”
“We leverage the Dell AI Factory for our oceanic research at Oregon State University to revolutionize and address some of the planet's most critical challenges," said Christopher M. Sullivan, director of Research and Academic Computing for the College of Earth, Ocean and Atmospheric Sciences, Oregon State University. "Through advanced AI solutions, we're accelerating insights that empower global decision-makers to tackle climate change, safeguard marine ecosystems and drive meaningful progress for humanity."