How Organizations Are Using Custom AI to Protect Data and Drive Efficiency – SPONSOR CONTENT FROM NVIDIA



By Bryan Catanzaro

Generative AI tools like ChatGPT, Gemini, and Claude represent significant advancements in the everyday use of AI.

These general-purpose large language models (LLMs) contain hundreds of billions or even trillions of parameters. Like a public library, they contain vast amounts of information about as many topics as possible, and familiarity with what they offer can empower you to solve difficult problems and improve your performance on a number of tasks.

However, similar to a public library, you may not find highly specialized information in commercial LLMs. And you certainly hope not to find any of your own information there for public use.

While LLM-powered applications can quickly and accurately respond to a broad range of inquiries, they can be difficult to train and deploy; rely on training data sets that can quickly become outdated; and lack access to proprietary policies, practices, or unique knowledge.

This matters in enterprise settings, where specialized and proprietary data is the foundation of competitive advantage. These “company secrets”—data on people, processes, formulas, business practices, and more—are especially valuable when combined with the automation and analytics capabilities of AI.

To maximize the value of AI while safeguarding data and business information, enterprises are developing smaller customized AI models. These models can tap into local proprietary data sets and on-site computing resources, while algorithm training can be supplemented with synthetically generated data.

Tap Into Your Data 

Enterprises are customizing AI applications with their own business data using retrieval-augmented generation (RAG). This AI framework links foundation or general-purpose models to proprietary knowledge sources like product data, inventory management systems, and customer service protocols.

By connecting AI models to handpicked data sources, RAG enables faster, more effective, and case-specific generative AI deployment, while exposing only a targeted portion of enterprise data to the models.
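To make the mechanics concrete, here is a minimal Python sketch of the retrieve-then-generate pattern behind RAG. The sample documents, the toy embed() function, and the prompt format are illustrative assumptions, not any specific vendor's implementation; a production system would use a real embedding model, a vector database, and a locally hosted LLM.

```python
# Minimal RAG sketch: rank internal documents against a query,
# then hand only the best matches to an LLM as grounding context.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a real embedding model; a toy
    # bag-of-characters vector keeps the sketch runnable.
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

# 1. Index proprietary documents once; the index stays on local servers.
documents = [
    "Product X warranty covers parts for 24 months.",
    "Inventory restock threshold for SKU 1042 is 500 units.",
    "Customer escalations go to tier-2 support within 4 hours.",
]
index = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    # 2. At query time, rank documents by similarity to the query.
    scores = index @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    # 3. Only the retrieved snippets, not the whole corpus, are
    #    exposed to the model as context.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# The resulting prompt would then be sent to a locally hosted LLM.
print(build_prompt("How long is the Product X warranty?"))
```

The security property described above falls out of step 3: the model only ever sees a handful of retrieved snippets, while the full index never leaves local infrastructure.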

Even highly specialized fields can tap into internal data sets to train and deploy useful generative AI models. NVIDIA researchers used RAG to build an AI copilot to support engineers designing chips.

NVIDIA GPUs are incredibly intricate systems, composed of tens of billions of transistors connected by metal wires 10,000 times thinner than a human hair. By fine-tuning an existing foundation model with NVIDIA design and schematics data, developers built a copilot that can accurately answer questions about GPU architecture and design and help engineers quickly find technical documents. Using RAG, the copilot pulls from live databases on local servers, keeping all compute operations secure and in-house.
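For readers unfamiliar with fine-tuning, the schematic below shows the core loop: start from pretrained weights and continue training on a small domain-specific dataset. The tiny model and the random "design Q&A" tensors are stand-ins for illustration only; a real run would load an actual foundation model and tokenized proprietary documents.

```python
# Schematic of supervised fine-tuning: resume training a pretrained
# model on in-house data so it absorbs domain-specific knowledge.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained foundation model (weights already learned).
model = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(),
                      nn.Linear(64 * 16, 1000))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for tokenized proprietary data: 16-token contexts mapped
# to the next token.
contexts = torch.randint(0, 1000, (32, 16))
targets = torch.randint(0, 1000, (32,))

for step in range(3):  # a few gradient steps on in-house data
    logits = model(contexts)
    loss = loss_fn(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```

At production scale the loop is the same; only the ingredients change: real pretrained weights, tokenized design documents, and far more training steps.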

With this strategy, organizations in any industry can use their own data to build AI agents to support numerous business functions. This may include customer support agents trained on product catalog and customer interaction data, supply chain optimization copilots trained on inventory and demand forecasting data, or even product quality control agents trained on labeled image data and inspection criteria.

However, for many organizations, the biggest challenge to achieving AI success lies in collecting and preparing the right data to train effective models.

Synthetic Data, Real Results

Collecting and labeling data to train models for specific use cases can take weeks or months and can become extremely costly. Highly regulated industries such as health care, finance, and government may be prohibited from transporting data into AI environments altogether.

For these reasons, AI-generated data (aka synthetic data) is increasingly part of the recipe for AI success. Organizations can use gen AI techniques to create synthetic data by training models on real data and then using them to generate new data samples.
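The fit-then-sample pattern can be shown in a few lines. The toy example below fits a simple statistical model to a small "real" dataset and then samples a much larger synthetic one; the sensor-reading data is invented for illustration, and real pipelines would use far richer generative models (GANs, diffusion models, or LLMs) in place of the Gaussian.

```python
# Toy illustration of synthetic data generation: learn the statistics
# of a small real dataset, then draw as many new samples as needed.
import numpy as np

rng = np.random.default_rng(0)

# "Real" sensor readings: (temperature C, vibration mm/s), 200 rows.
real = rng.normal(loc=[45.0, 2.0], scale=[3.0, 0.5], size=(200, 2))

# 1. Fit a simple generative model to the real data.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# 2. Sample new, statistically similar rows at whatever volume
#    model training requires.
synthetic = rng.multivariate_normal(mean, cov, size=10_000)

print("real rows:", len(real), "-> synthetic rows:", len(synthetic))
print("real mean:", mean.round(2),
      "synthetic mean:", synthetic.mean(axis=0).round(2))
```

Image-domain pipelines like the one described next swap the Gaussian for learned image-generation models, but the underlying loop, fit on real data and sample at volume, is the same idea.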

Delta Electronics, a global leader in power and thermal management technologies, needed days to manually collect and label images to train automated optical inspection algorithms for its assembly lines. To speed things up and cut costs, the company began training deep neural networks for perception tasks with AI-generated synthetic data. It can now generate the training data it needs in just 10 minutes and complete model training in one-hundredth of the time it previously took.

The Future of Enterprise AI

Smaller RAG-equipped models offer a solution to the challenge of balancing privacy and problem-solving in AI. They can query local data and run on on-site infrastructure, reducing data center costs and enhancing security by removing the need to send workloads to third-party servers. And synthetic data offers a quick and cost-effective way for organizations to obtain the data they need to build accurate, customized AI solutions.

To reduce barriers to custom AI, enterprises can make use of partnerships to access foundation models, AI and RAG workflows, synthetic data generation pipelines, and other AI development toolkits.

By customizing their own models, businesses can take advantage of reduced computational requirements, faster AI deployment, and decreased exposure of sensitive information while maintaining data security and regulatory compliance.


Learn more about AI-driven business transformation.

Bryan Catanzaro is VP of applied deep learning research at NVIDIA.


