NEW RELEASE

MLRun 1.7 is here! Unlock the power of enhanced LLM monitoring, flexible Docker image deployment, and more.

AI Pipeline Orchestration

Automate and scale ML and generative AI application lifecycles. Get from PoC to production faster and deliver real business impact.

Automated and Scalable AI Pipeline Orchestration

Use MLRun, Iguazio’s open source framework, to automate data preparation, model tuning, customization, validation and optimization of ML models and LLMs over elastic resources. With MLRun you can rapidly deploy scalable real-time serving and application pipelines, with built-in observability and flexible deployment options across multi-cloud, hybrid and on-prem environments.

Automated Productization

Use real-time serving and application pipelines for rapid deployment and CI/CD pipelines for model training and testing.

Responsible AI with Minimal Engineering

Monitor models, resources and data in real time, and automatically trigger alerts and LLM customization flows.

Deploy Anywhere

Flexible deployment options, supporting multi-cloud, hybrid and on-prem environments.

LLM Customization

Customize models with techniques such as RAG and RAFT (retrieval-augmented fine-tuning) to improve model accuracy and reduce costs.

Benefits

Cut Time to Production

Automate Productization

LLM Customization

Collaborate and Re-Use
Learn More

Platform Overview

Get started with a video introduction to the Platform

Documentation

Access overviews, tutorials, references and guides

Accelerate your AI pipelines

Learn how you can automate, scale and orchestrate AI pipelines end to end with the Iguazio AI Platform