#MLOpsLive Webinar Series
Session #27
LLM Validation & Evaluation
Watch this on-demand webinar with Ehud Barnea (Tasq.ai), Guy Lecker (Iguazio) and Yaron Haviv (Iguazio) as they discuss LLM validation & evaluation.
Data labeling at scale has become a massive challenge in AI, driven by the rapidly growing use of LLMs for diverse use cases such as call center analysis, chatbots, personal assistants, and more.
How can you validate, evaluate, and fine-tune an LLM effectively? Is there a way to automatically label data to improve your model, without needing to employ hundreds or thousands of human data labelers?
In this webinar we:
- Demonstrate how to effectively validate and evaluate your LLM
- Showcase a real-world use case
- Dive into the pipeline to show how automation can be used across the board, from data labeling at scale to deploying and managing your gen AI application in production (a minimal labeling sketch follows below)
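To give a flavor of what automated labeling looks like, here is a minimal, hypothetical sketch (not the webinar's actual pipeline): an LLM assigns labels to call-center transcripts, and agreement with a small human-labeled sample is checked before the labels are trusted at scale. The `call_llm` helper, the label set, and the prompt wording are illustrative assumptions.

```python
# Minimal sketch: LLM-assisted labeling of call-center transcripts,
# validated against a small human-labeled sample before scaling up.
# `call_llm` is a hypothetical stand-in for whichever model endpoint you use.

LABELS = ["billing", "technical_issue", "cancellation", "other"]

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError("Plug in your LLM endpoint here.")

def auto_label(transcript: str) -> str:
    """Ask the LLM for exactly one label from LABELS."""
    prompt = (
        f"Classify the following call-center transcript into exactly one of {LABELS}. "
        "Reply with the label only.\n\n"
        f"Transcript:\n{transcript}"
    )
    answer = call_llm(prompt).strip().lower()
    return answer if answer in LABELS else "other"  # guard against off-list replies

def agreement(auto_labels: list[str], human_labels: list[str]) -> float:
    """Fraction of items where the LLM label matches the human label."""
    assert len(auto_labels) == len(human_labels)
    return sum(a == h for a, h in zip(auto_labels, human_labels)) / len(human_labels)

# Usage (illustrative): evaluate on a small human-labeled sample first.
# sample = [("I was double charged last month...", "billing"), ...]
# auto = [auto_label(text) for text, _ in sample]
# print("agreement with human labels:", agreement(auto, [label for _, label in sample]))
```

If agreement on the sample is high enough for your use case, the same labeling function can be run over the full dataset; if not, the prompt, label set, or model is revised first.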