Cloudera, the only true hybrid platform for data, analytics, and AI, today launched Cloudera AI Inference powered by NVIDIA NIM microservices, part of the NVIDIA AI Enterprise platform. As one of the industry’s first AI inference services to provide embedded NIM microservice capability, Cloudera AI Inference uniquely streamlines the deployment and management of large-scale AI models, allowing enterprises to harness their data’s true potential to advance GenAI from pilot phases to full production.
Recent data from Deloitte reveals the biggest barriers to GenAI adoption for enterprises are compliance risks and governance concerns, yet adoption of GenAI is progressing at a rapid pace, with over two-thirds of organizations increasing their GenAI budgets in Q3 this year. To mitigate these concerns, businesses must turn to running AI models and applications privately – whether on premises or in public clouds. This shift requires secure and scalable solutions that avoid complex, do-it-yourself approaches.
“With growing adoption of AI in Asia Pacific, many enterprises are keen to take a step further towards AI-driven innovation for their organizations. We are proud to offer solutions like AI Inference powered by NVIDIA NIM microservices to address the diverse data infrastructure needs of enterprises in Asia Pacific,” said Remus Lim, Senior Vice President of Asia Pacific and Japan, Cloudera. “This partnership brings together the best of both worlds, enabling our customers to harness the power of AI while ensuring data security and compliance with the stringent data privacy regulations across the region.”
Cloudera AI Inference protects sensitive data from leaking to non-private, vendor-hosted AI model services by keeping development and deployment within enterprise control. Powered by NVIDIA technology, the service helps build trusted data for trusted AI at high performance, enabling the efficient development of AI-driven chatbots, virtual assistants, and agentic applications that drive both productivity and new business growth.
The launch of Cloudera AI Inference comes on the heels of the company’s collaboration with NVIDIA, reinforcing Cloudera’s commitment to driving enterprise AI innovation at a critical moment, as industries navigate the complexities of digital transformation and AI integration.
Developers can build, customize, and deploy enterprise-grade LLMs with up to 36x faster performance using NVIDIA Tensor Core GPUs and nearly 4x throughput compared with CPUs. The seamless user experience integrates UI and APIs directly with NVIDIA NIM microservice containers, eliminating the need for command-line interfaces (CLI) and separate monitoring systems. The service integration with Cloudera’s AI Model Registry also enhances security and governance by managing access controls for both model endpoints and operations. Users benefit from a unified platform where all models—whether LLM deployments or traditional models—are seamlessly managed under a single service.
Additional key features of Cloudera AI Inference include:
- Advanced AI Capabilities: Utilize NVIDIA NIM microservices to optimize open-source LLMs, including Llama and Mistral, for cutting-edge advancements in natural language processing (NLP), computer vision, and other AI domains.
- Hybrid Cloud & Privacy: Run workloads on prem or in the cloud, with VPC deployments for enhanced security and regulatory compliance.
- Scalability & Monitoring: Rely on auto-scaling, high availability (HA), and real-time performance tracking to detect and correct issues, and deliver efficient resource management.
- Open APIs & CI/CD Integration: Access standards-compliant APIs for model deployment, management, and monitoring for seamless integration with CI/CD pipelines and MLOps workflows.
- Enterprise Security: Enforce model access with Service Accounts, Access Control, Lineage, and Auditing features.
- Risk-Managed Deployment: Conduct A/B testing and canary rollouts for controlled model updates.
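To illustrate the open, standards-compliant APIs mentioned above: NVIDIA NIM microservices expose OpenAI-compatible inference endpoints, so a deployed model can be called with a standard chat-completions request. The endpoint URL, model name, and token below are hypothetical placeholders, not values from Cloudera documentation; this is a minimal sketch, assuming an OpenAI-compatible endpoint.

```python
import json

# Hypothetical values -- substitute your own Cloudera AI Inference
# endpoint URL and access token (names here are illustrative only).
ENDPOINT = "https://example.cloudera.site/endpoints/llama-chat/v1/chat/completions"
TOKEN = "YOUR_ACCESS_TOKEN"

def build_chat_request(prompt, model="llama", max_tokens=256):
    """Build an OpenAI-compatible chat-completion payload.

    Because NIM microservices follow the OpenAI API convention, the
    same payload shape works against a privately deployed endpoint.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize this quarter's support tickets.")
headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

# The actual call requires network access and a live endpoint:
# import urllib.request
# req = urllib.request.Request(
#     ENDPOINT, data=json.dumps(payload).encode(), headers=headers)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(json.dumps(payload, indent=2))
```

Because the request shape is standard, the same client code can target a model behind A/B or canary routing without modification, which is what makes the CI/CD and risk-managed deployment features above composable.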
“Enterprises are eager to invest in GenAI, but it requires not only scalable data but also secure, compliant, and well-governed data,” said industry analyst, Sanjeev Mohan. “Productionizing AI at scale privately introduces complexity that DIY approaches struggle to address. Cloudera AI Inference bridges this gap by integrating advanced data management with NVIDIA’s AI expertise, unlocking data’s full potential while safeguarding it. With enterprise-grade security features like service accounts, access control, and audit, organizations can confidently protect their data and run workloads on prem or in the cloud, deploying AI models efficiently with the necessary flexibility and governance.”
“We are excited to collaborate with NVIDIA to bring Cloudera AI Inference to market, providing a single AI/ML platform that supports nearly all models and use cases so enterprises can both create powerful AI apps with our software and then run those performant AI apps in Cloudera as well,” said Dipto Chakravarty, Chief Product Officer at Cloudera. “With the integration of NVIDIA AI, which facilitates smarter decision-making through advanced performance, Cloudera is innovating on behalf of its customers by building trusted AI apps with trusted data at scale.”
“Enterprises today need to seamlessly integrate generative AI with their existing data infrastructure to drive business outcomes,” said Kari Briski, vice president of AI software, models and services at NVIDIA. “By incorporating NVIDIA NIM microservices into Cloudera’s AI Inference platform, we’re empowering developers to easily create trustworthy generative AI applications while fostering a self-sustaining AI data flywheel.”
These new capabilities will be unveiled at Cloudera’s premier AI and data conference, Cloudera EVOLVE NY, taking place Oct. 10. Learn more about how these latest updates deepen Cloudera’s commitment to elevating enterprise data from pilot to production with GenAI.