NEW RELEASE

MLRun 1.7 is here! Unlock the power of enhanced LLM monitoring, flexible Docker image deployment, and more.

MLRun

MLRun is the first open-source AI orchestration framework for managing ML and generative AI application lifecycles. It automates data preparation, model tuning, customization, validation and optimization of ML models and LLMs over elastic resources. MLRun enables the rapid deployment of scalable real-time serving and application pipelines, while providing built-in observability and flexible deployment options, supporting multi-cloud, hybrid and on-prem environments.
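As a rough illustration of this workflow, the sketch below uses MLRun's Python SDK to register training code as a managed job and run it over the cluster; the project name, file name, handler and parameters are hypothetical placeholders rather than anything prescribed by MLRun.

    import mlrun

    # Create (or load) a project that tracks code, artifacts, experiments and runs
    project = mlrun.get_or_create_project("demo-project", context="./")

    # Register local training code as an elastic, serverless job
    # ("trainer.py" and its "train" handler are placeholders)
    project.set_function("trainer.py", name="trainer", kind="job",
                         image="mlrun/mlrun", handler="train")

    # Run the job over the cluster; parameters, results and models are auto-logged
    run = project.run_function("trainer", params={"learning_rate": 0.01})
    print(run.outputs)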

GitHub   |   Join the Slack Community

Open-Source MLRun
Automated and Scalable AI Orchestration for Faster Pilot to Production and Business Impact

 

MLRun architecture

  • Automated Productization: Use real-time serving and application pipelines for rapid deployment, and CI/CD pipelines for model training and testing (a serving sketch follows this list).
  • LLM Customization: Improve model accuracy with fine-tuning and customization techniques such as RAG and RAFT.
  • Responsible AI with Minimal Engineering: Auto-track data, lineage, experiments and models. Monitor models, resources and data in real time. Auto-trigger alerts and LLM customization.
  • Scale and Elasticity: Orchestrate distributed data processing, model training, LLM customization and serving.
  • Future-proof and Reduce Risks: Integrate with any third-party service. Deploy multi-cloud, hybrid or on-prem. MLRun supports all mainstream frameworks, managed ML services and LLMs.
  • Collaboration: One technology stack for data engineers, data scientists and machine learning engineers.
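A minimal sketch of the real-time serving and monitoring flow referenced in the bullets above, assuming a model server class (here a hypothetical ClassifierModel derived from mlrun.serving.V2ModelServer) is defined in a local model_server.py and a trained model already exists at the placeholder path:

    import mlrun

    project = mlrun.get_or_create_project("demo-project", context="./")

    # Build a real-time serving function from local code (Nuclio under the hood);
    # model_server.py and ClassifierModel are hypothetical placeholders
    serving_fn = mlrun.code_to_function("serving", filename="model_server.py",
                                        kind="serving", image="mlrun/mlrun")

    # Route requests to a previously trained model (the path is a placeholder)
    serving_fn.add_model("my-model",
                         model_path="s3://my-bucket/models/my-model/",
                         class_name="ClassifierModel")

    # Enable built-in model monitoring (drift detection, alerts) and deploy
    serving_fn.set_tracking()
    serving_fn.deploy()

    # Send a test request to the auto-scaled endpoint
    serving_fn.invoke("/v2/models/my-model/infer", body={"inputs": [[1, 2, 3, 4]]})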

Why MLRun

  • 12X faster time to production
  • End-to-end observability
  • 6X reduction in computation costs
  • Open architecture
  • 90% reduction in manual tasks

Managed MLRun on Iguazio
Resiliency, Security and Functionality for the Enterprise

 

Managed MLRun architecture

Enterprise Management & Support

Operators can painlessly set up the system through wizards, configure administration policies and register for system notifications, with no need for automation scripts or hands-on daily management. The Iguazio Data Science Platform with managed MLRun is delivered as an integrated offering designed for enterprise resiliency and functionality. Enable data collaboration and governance across apps and business units without compromising security or performance. Authenticate and authorize users through LDAP integration for secure collaboration. The real-time data layer classifies data transactions with a built-in data firewall that provides fine-grained policies to control access, service levels, multi-tenancy and data life cycles. Enterprise customers get dedicated 24/7 support for onboarding, guidance and consulting.

Managed Services

Lift the weight of infrastructure management by leveraging built-in managed services for data analysis, ML/AI frameworks, development tools, dashboards, security and authentication services, and logging. With managed MLRun, anyone on your ML team can simply choose a service, specify parameters and click deploy. Data scientists can work from Jupyter Notebooks or any other IDE and turn their code into an elastic, fully managed service with a single line of code (sketched below).
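For example, a notebook or script might be converted into a managed job roughly like the sketch below; the file name, handler and parameters are hypothetical placeholders.

    import mlrun

    # One line turns notebook or script code into a managed serverless function
    # ("data_prep.ipynb" and its "prep_data" handler are placeholders)
    fn = mlrun.code_to_function(name="data-prep", filename="data_prep.ipynb",
                                kind="job", image="mlrun/mlrun", handler="prep_data")

    # Execute it as an elastic, fully managed job instead of running it locally
    fn.run(params={"source_path": "s3://my-bucket/raw/"}, local=False)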

MLRun Ecosystem

Open Source vs. Enterprise

Features and functionality compared across Open Source MLRun and Managed MLRun on Iguazio:

Project Management
  • Single pane of glass for managing projects
  • Git integration
  • Built-in services (Spark, Presto, Grafana, Dask, Horovod, etc.)
  • Project resource management
  • Air-gapped environment support
  • Built-in Jupyter service

Experiment Tracking
  • Artifact management
  • Pipeline management

Model Deployment
  • Serverless functions for running models (Nuclio)

Model Monitoring
  • Model monitoring dashboard
  • Model drift identification
  • Canary rollout

Security
  • LDAP integration
  • User & group management
  • Service authentication & authorization
  • Secured authentication for API gateway

Management & Support
  • Community support
  • 24/7 enterprise support
  • Service monitoring
  • Logs management
  • Performance monitoring reports
  • Events, alerts & audit
  • Self-service

MLRun Resources