At Tech World @ CES 2026 in Las Vegas, Lenovo announced a new portfolio of purpose-built AI inferencing servers, solutions, and services designed to help enterprises unlock real-time AI decision-making at scale. The announcement marks a significant step forward as organizations move beyond training large language models (LLMs) toward deploying fully trained AI models that deliver immediate business value.
Lenovo’s latest AI inferencing portfolio is built to support workloads across cloud, data center, and edge environments. This flexible approach ensures that AI runs where it creates the most impact—whether that’s in centralized data centers, distributed cloud platforms, or latency-sensitive edge locations. By addressing the entire AI deployment lifecycle, Lenovo enables enterprises to operationalize AI faster and more efficiently.
At the core of the portfolio is the new Lenovo ThinkSystem SR675i, engineered for the largest and most complex AI workloads. Designed to support full-scale LLM inferencing, the SR675i targets high-demand industries such as manufacturing, healthcare, and financial services, where real-time insights and automation are critical. Complementing this is the Lenovo ThinkSystem SR650i, which delivers high-density GPU compute optimized for accelerated AI inferencing while remaining easy to deploy within existing data center environments.
For edge use cases, Lenovo introduced the ThinkEdge SE455i, enabling ultra-low-latency AI inferencing closer to where data is generated. Built for compact, rugged, and temperature-flexible deployments, the SE455i is ideal for retail, telecommunications, and industrial applications that require immediate AI-driven decisions.
To address energy efficiency challenges in high-performance AI environments, Lenovo integrates advanced air and liquid cooling technologies through Lenovo Neptune. This innovation helps reduce power consumption while maintaining performance at scale. Additionally, Lenovo TruScale provides flexible, usage-based consumption models, allowing enterprises to adopt AI infrastructure while maintaining budget control and operational agility.
These new inferencing servers form the foundation of the Lenovo Hybrid AI Factory, a modular and validated framework for building and operating enterprise-grade AI solutions. Lenovo further strengthened this offering by introducing pre-validated hybrid AI inferencing platforms with Nutanix, Red Hat, and Canonical, ensuring scalable, secure, and cost-effective deployments.
To accelerate adoption, Lenovo also launched Hybrid AI Factory Services, offering advisory, deployment, and managed services. Real-world impact was highlighted through immersive content creation at the Sphere in Las Vegas, showcasing how Lenovo’s AI infrastructure powers next-generation experiences.
With this comprehensive AI inferencing portfolio, Lenovo positions itself at the forefront of real-time enterprise AI innovation.