AI/ML projects can run up big bills on compute. With Spark Operator, you can take advantage of spot instances and dynamic executor allocation, which can deliver big savings. Here's a simple way to set it up in MLRun.
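As a rough sketch of the pattern (not code from the post), this is roughly what a Spark Operator function with dynamic allocation and spot-friendly scheduling can look like in MLRun; the project, file, and resource values are placeholders, and the exact helper methods (`with_dynamic_allocation`, `with_preemption_mode`) may vary by MLRun version.

```python
import mlrun

# Placeholder project and script names for illustration
project = mlrun.get_or_create_project("spark-demo", context="./")

# Wrap a PySpark script as a Spark Operator function (kind="spark")
sj = mlrun.code_to_function(
    name="spark-etl",
    kind="spark",
    filename="spark_etl.py",
    image="mlrun/mlrun",
)

# Keep the baseline footprint modest
sj.with_driver_requests(cpu=1, mem="1G")
sj.with_executor_requests(cpu=1, mem="1G")

# Dynamic executor allocation: scale executors with the workload
sj.with_dynamic_allocation(min_executors=1, max_executors=5, initial_executors=1)

# Allow scheduling on spot (preemptible) nodes for extra savings
sj.with_preemption_mode("allow")

# On the Iguazio platform you may also need sj.with_igz_spark() before running
sj.run()
```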
Cheers to a successful 2025! Here are my predictions for the upcoming year.
Here's how to use the Iguazio feature store to build, store and share features from your Snowflake data.
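For a feel of how that fits together, here is a minimal sketch assuming MLRun's `SnowflakeSource` and feature-store API; the connection settings, table, and feature-set names are placeholders, and constructor arguments can differ between MLRun versions.

```python
import os
import mlrun.feature_store as fstore
from mlrun.datastore.sources import SnowflakeSource

# Placeholder Snowflake connection details (read secrets from the environment)
source = SnowflakeSource(
    "transactions_src",
    query="select * from TRANSACTIONS",
    url=os.getenv("SNOWFLAKE_URL"),
    user=os.getenv("SNOWFLAKE_USER"),
    password=os.getenv("SNOWFLAKE_PASSWORD"),
    database="MY_DB",
    schema="PUBLIC",
    warehouse="COMPUTE_WH",
)

# Define a feature set keyed by customer id and ingest the Snowflake data
transactions = fstore.FeatureSet(
    "transactions", entities=[fstore.Entity("customer_id")]
)
fstore.ingest(transactions, source)
```

Once ingested, the features are stored centrally and can be shared across teams and retrieved later as feature vectors for training or serving.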
Iguazio users can now run their ML workloads on AWS EC2 Spot instances. When running ML functions, you might want to control whether to run on Spot nodes or On-Demand compute instances. When deploying the Iguazio MLOps platform on AWS, users running a job (e.g. model training) or deploying a serving function can now choose to deploy it...
AutoMLOps means automating engineering tasks so that your code is automatically ready for production. Here we outline the challenges and share open-source tools.
In this article, we walk you through the steps to run a Jenkins server in Docker and deploy an MLRun project using a Jenkins pipeline.
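The Jenkins side is an ordinary pipeline stage; what it typically invokes is a small MLRun deployment script along these lines (a sketch with a placeholder repo URL and workflow name, assuming `mlrun.load_project` and `project.run`):

```python
import mlrun

# Load the MLRun project from its Git repo (placeholder URL and branch)
project = mlrun.load_project(
    context="./project",
    url="git://github.com/my-org/my-mlrun-project.git#main",
)

# Run the workflow defined in project.yaml and wait for it to finish
run_status = project.run(
    name="main",      # workflow name defined in the project
    arguments={},     # workflow parameters, if any
    watch=True,       # block until the workflow completes
)
print("Workflow finished:", run_status)
```

A Jenkinsfile stage would then simply check out the repository and run this script inside an image that has MLRun installed.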
We’re proud to share that Iguazio has been named a sample vendor in eight Gartner Hype Cycles for 2022.