Zoho Corporation To Infuse LLMs Into Its SaaS Apps Using NVIDIA AI Accelerated Computing Platform


Zoho Corporation, the global technology company headquartered in Chennai, announced today at the NVIDIA AI Summit in Mumbai that it will build and deploy large language models (LLMs) in its SaaS applications using the NVIDIA AI accelerated computing platform, including NVIDIA NeMo, a component of NVIDIA AI Enterprise software.

Once developed and deployed, the LLMs will be available to Zoho Corporation's more than 700,000 customers worldwide across ManageEngine and Zoho.com. The company has already spent over USD 10 million on NVIDIA AI software and GPUs in the last year, and it intends to spend an additional USD 10 million in the upcoming year.

Ramprakash Ramamoorthy, Director of AI at Zoho Corporation, said: “Many LLMs on the market today are designed for consumer use, offering limited value for businesses. At Zoho, our mission is to develop LLMs tailored specifically for a wide range of business use cases. Owning our entire tech stack, with products spanning various business functions, allows us to integrate the essential element that makes AI truly effective: context.”

Rather than retrofitting privacy compliance onto existing models, Zoho prioritizes user privacy from the beginning. By utilizing the full suite of NVIDIA AI software and accelerated computing to boost throughput and lower latency, it aims to help businesses realize a return on investment quickly and efficiently, BrandSpur business and economy news reports.

For more than a decade, Zoho has been developing its own AI technology and integrating it into its extensive product line, which spans more than 100 products across its ManageEngine and Zoho divisions. It takes a multimodal approach to AI to generate contextual intelligence that can assist users in making business decisions.

Development Of Other Models

Alongside LLMs, the company is developing narrow, small, and medium language models, giving customers a choice of model sizes to deliver better results across a range of use cases.


A variety of model sizes also means AI can remain useful for companies with limited data. Zoho's AI plan places a strong emphasis on privacy, and its LLMs won't be trained on customer data.

According to Vishal Dhupar, NVIDIA’s Managing Director for Asia South: “The ability to choose from a range of AI model sizes empowers businesses to tailor their AI solutions precisely to their needs, balancing performance with cost-effectiveness.

“With NVIDIA’s AI software and accelerated computing platform, Zoho is building a broad range of models to help serve the diverse needs of its business customers,” Dhupar added.

Through this partnership, Zoho will use the NVIDIA NeMo end-to-end platform to develop custom generative AI, including LLMs and multimodal, vision, and speech AI, and will accelerate its LLMs on the NVIDIA accelerated computing platform using NVIDIA Hopper GPUs.

To optimize its LLMs for deployment, Zoho is also testing NVIDIA TensorRT-LLM, which has already demonstrated a 60% improvement in throughput and a 35% decrease in latency compared to the open-source framework the company previously used.

Beyond LLMs, the company is also using NVIDIA's accelerated computing infrastructure to speed up other workloads, such as speech-to-text.