
Can MLRun support models built in an AWS Dev Account and promote them to upstream environments?

Yes. Here’s how it works:

In MLRun, projects are the fundamental unit for working with the platform. First, users add or edit code and configurations in their project. Then, they run and debug it locally. Once ready, the code and configurations are pushed to a source repository (git) and tagged. Finally, the code and configurations are loaded so they can run on development or production clusters.
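A minimal sketch of that flow using MLRun's Python SDK is shown below. The project name, repo URL, file name, and handler are placeholders, not part of the original example:

```python
import mlrun

# Create (or fetch) a project to work in; name and context dir are placeholders.
project = mlrun.get_or_create_project("fraud-demo", context="./")

# Register local code as a project function and debug it locally first.
project.set_function(
    "trainer.py", name="trainer", kind="job",
    image="mlrun/mlrun", handler="train",
)
project.run_function("trainer", local=True)

# Once the code works, point the project at its git source (branch/tag),
# so the same code and configuration can be loaded on a dev or prod cluster.
project.set_source("git://github.com/acme/fraud-demo.git#main",
                   pull_at_runtime=True)
project.save()
```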

Users can load different versions of their projects, such as development, staging and production. MLRun also integrates with CI systems, which can automatically push code to the next required step: a development environment, the testing phase, a deployment flow with canaries, and so on (see the sketch below). Users can configure the CI to promote models to any environment or account. For more information on getting started with MLRun on AWS, see the docs here.
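As a hedged illustration, a CI job could load the same project from different git refs and run its workflow on the target cluster. The repo URL, branch names, and the "main" workflow name are assumptions for the sketch:

```python
import mlrun

# Load the same project from different git refs, e.g. inside a CI job.
dev_project = mlrun.load_project(
    context="./dev", url="git://github.com/acme/fraud-demo.git#development")

prod_project = mlrun.load_project(
    context="./prod", url="git://github.com/acme/fraud-demo.git#production")

# After tests pass, the CI step can run the project's workflow on the
# production cluster to promote the candidate model to the next environment.
prod_project.run("main", arguments={"model_tag": "candidate"}, watch=True)
```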

For more on all the ways Iguazio works together with AWS, including solution briefs, demos and more, check out the partner page.

Need help?

Contact our team of experts or ask a question in the community.

Have a question?

Submit your questions on machine learning and data science to get answers from our team of data scientists, ML engineers and IT leaders.