
What is the key difference between fine-tuning and embedding a foundational model?

Fine-tuning is the process of taking a pre-trained model and training it further on a smaller, domain-specific dataset. This additional training helps the model adapt to the nuances and requirements of the specific task or domain.

Fine-tuning transfers the general knowledge learned during foundational training to specific tasks or domains, enhancing the model's performance on those tasks.
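For illustration, here is a minimal sketch of what fine-tuning can look like in practice, using the Hugging Face Transformers and Datasets libraries. The base model (gpt2 as a small stand-in for a foundation model), the dataset file domain_corpus.csv, and its "text" column are placeholder assumptions for this sketch, not part of the original explanation.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Start from a pre-trained model (gpt2 here as a small stand-in for a foundation model).
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a padding token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical smaller, domain-specific corpus with a "text" column.
dataset = load_dataset("csv", data_files={"train": "domain_corpus.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# For causal language modeling, the collator builds labels from the inputs themselves.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Continue training: the pre-trained weights are updated on the domain data.
args = TrainingArguments(output_dir="finetuned-model", num_train_epochs=3)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
trainer.save_model("finetuned-model")
```

The key point is that trainer.train() updates the model's parameters, so the saved model now reflects the domain-specific data rather than only its original pre-training.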

Embedding a foundational model means integrating the pre-trained model into another system or application without significantly altering its learned parameters. This means using the model's capabilities as they are.

For example, the model could be used for generating text, answering questions, or any other task it was originally trained for.
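By contrast, here is a minimal sketch of embedding a pre-trained model in an application, again using the Hugging Face Transformers pipeline API as an assumed toolkit; the model name and prompt are placeholders. The point is that the application only calls the model, so its learned parameters are never modified.

```python
from transformers import pipeline

# Load the pre-trained model as-is: no training loop, no gradient updates.
generator = pipeline("text-generation", model="gpt2")

def complete(prompt: str) -> str:
    # The application simply calls the frozen model and post-processes its output.
    outputs = generator(prompt, max_new_tokens=50, num_return_sequences=1)
    return outputs[0]["generated_text"]

print(complete("Summarize the difference between fine-tuning and embedding a model:"))
```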

Interested in learning more?

Check out this 9-minute demo that covers MLOps best practices for generative AI applications.

View this webinar with QuantumBlack, AI by McKinsey, which covers the challenges of deploying and managing LLMs in live, user-facing business applications.

Check out this demo and repo that demonstrate how to fine-tune an LLM and build an application.
