Looking into 2023: Predictions for a New Year in MLOps
Yaron Haviv | December 14, 2022
In 2022, AI and ML came into the mainstream consciousness, with generative AI applications like DALL-E and GPT becoming massively popular among the general public, and ethical questions of AI usage stirring up impassioned public debate. No longer a side project for forward-thinking businesses or CEOs who find it intriguing, AI and ML are now moving toward the center of the business. For enterprises, this means that in 2023 AI and ML have the potential to exceed every technological and business expectation.
As an ML industry veteran, I’m excited to see the rapid changes in AI and ML’s technological capabilities and the accelerated adoption rate in the past years. As we raise our glasses to the upcoming year, here are my predictions of what we’re expected to face as an industry in 2023:
From ML Models to AI Apps
Up to now, the popular approach has been to start with building models and to think about the overall application and business integration later. Models were used initially by business analysts and reporting systems, which didn't require integration with business applications. As AI becomes central to the business, data science teams now need to integrate with live data sources and existing applications, and turn manually executed processes into interactive, real-time applications. The traditional approach is to produce model serving endpoints. These endpoints accept a numeric feature vector and respond with a prediction output. In this scenario, the critical functionality of integrating with data, adding business logic, acting on the results, and managing the live operations and monitoring takes place separately, implemented by other engineering silos. However, this approach makes delivery, scaling and maintenance much more complicated.
I predict that in 2023, the industry will shift away from this traditional approach to building AI apps. We'll see data science teams work together with engineering teams to build the delivery process with the end in mind, beginning with the design of online or real-time application pipelines, with model serving as one of the steps. In these new pipelines, data, application logic, models, and operations work in concert to deliver a business application with measurable value and ROI. ML applications should interact directly with other services such as customer portals, transactional systems, manufacturing lines, etc. They can work in real time and process thousands of requests per second, or run periodically to analyze data and make automated decisions.
Here’s what real-time and batch ML application pipelines can look like:
The technologies for this way of working are already available. This is our approach with MLRun, the open source MLOps orchestration framework we maintain here at Iguazio, which enables the building of multi-stage online pipelines in just a few simple steps, with MLRun serving graphs.
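The shape of such a pipeline can be sketched in plain Python. This is a deliberately minimal, framework-agnostic illustration of the "model serving as one step among several" idea; the step names, the dummy fraud model, and the hard-coded feature lookup are all hypothetical stand-ins, not MLRun's actual serving-graph API:

```python
# Minimal sketch of a multi-stage real-time ML application pipeline:
# enrich the incoming event, run the model, then apply business logic.
# All step names and the dummy scoring rule are hypothetical.

def enrich(event):
    # Stand-in for a feature lookup a real system would do against a
    # feature store or live data source.
    event["account_age_days"] = 420
    return event

def predict(event):
    # Stand-in for the model-serving step (the "model endpoint").
    suspicious = event["amount"] > 1000 or event["account_age_days"] < 30
    event["fraud_score"] = 0.8 if suspicious else 0.1
    return event

def act(event):
    # Business logic acting on the prediction, inside the same pipeline
    # rather than in a separate engineering silo.
    event["decision"] = "review" if event["fraud_score"] > 0.5 else "approve"
    return event

def pipeline(event, steps=(enrich, predict, act)):
    for step in steps:
        event = step(event)
    return event

result = pipeline({"amount": 250})
print(result["decision"])  # approve
```

A serving-graph framework expresses the same chain of steps declaratively and adds the operational pieces (scaling, monitoring, error handling) that this sketch omits.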
From Siloed to Collaborative and Continuous MLOps
We’ve been seeing AI and ML becoming more ubiquitous in business applications, as a means to improve performance, functionality and scalability. But there’s still a lot of friction that prevents many models from making it to production. Today, most organizations develop models and algorithms in siloed research environments. When these models need to be deployed to production, different engineering teams build them from scratch, taking into consideration the operational challenges (data integration and processing, application logic, scale, security, availability, observability, continuous upgrades, etc.). The process is long, expensive and ineffective, and it requires large teams of data professionals.
Last year, I predicted that in 2022 the practice of automating the ML training process, aka “AutoML”, would pave the way for adopting a more holistic approach to the ML pipeline, through MLOps. In 2023, I think the AutoMLOps approach will be adopted even more robustly. Enterprises will embrace new technologies and tools for eliminating engineering effort by automating tasks like injecting parameters and code into tasks, integrating with CI/CD, Git and reporting systems, distributing workloads, moving data to and from cloud resources and databases, security hardening and protection, and versioning.
AutoMLOps will be implemented through new technologies, and we will see the rise of open source solutions to support them. Two of them, MLRun and Nuclio, already support AutoMLOps through capabilities like recording metrics along with the parameters, data lineage, code versioning, and operational data, and automatically adding production features for auto-scaling, resource management, auto-documentation, parameter detection, code profiling, security, model registry, and more.
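To make the "recording metrics along with the parameters" idea concrete, here is a minimal experiment-tracking decorator. It is a hypothetical sketch of the pattern, not MLRun's or Nuclio's actual API; the registry is a plain list standing in for a real model/experiment registry:

```python
# Minimal sketch of automatic parameter/metric recording, the kind of
# bookkeeping AutoMLOps tooling takes off the data scientist's plate.
# RUN_REGISTRY and the track decorator are hypothetical illustrations.
import functools

RUN_REGISTRY = []  # stand-in for an experiment/model registry

def track(fn):
    @functools.wraps(fn)
    def wrapper(**params):
        metrics = fn(**params)
        # Record the task name, its parameters, and the resulting
        # metrics together, so every run is reproducible and auditable.
        RUN_REGISTRY.append(
            {"task": fn.__name__, "params": params, "metrics": metrics}
        )
        return metrics
    return wrapper

@track
def train(lr=0.01, epochs=5):
    # Stand-in training loop; a real task would fit a model here.
    return {"accuracy": round(0.70 + 0.02 * epochs - lr, 4)}

train(lr=0.01, epochs=10)
print(RUN_REGISTRY[-1]["metrics"]["accuracy"])  # 0.89
```

Real AutoMLOps tooling layers on top of this pattern: the same wrapper point is where code versioning, data lineage, and distributed execution get injected without changing the task code itself.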
ML Project Lifecycle: The Traditional Way
I predict that in 2023, organizations will adopt a production-first approach. This approach will be based on cross-pipeline and cross-organization collaboration and incorporate continuous and agile workflows. MLOps, a combination of AI/ML with DevOps, will become more widely adopted, enabling continuous development, integration and delivery (CI/CD) of data- and ML-intensive applications. The process will be automated, from data collection and preparation all the way to deployment and monitoring.
ML Project Lifecycle: The Right Way
Benefits:
- 10x faster time to production
- Efficient use of resources
- High quality and responsible AI
- Continuous application improvement
As a result, in 2023 enterprises will cut excessive resource spend, increase productivity, improve internal collaboration and derive more business value from their models.
Shifting the Focus to Operational and Post-Production Challenges
In the first generation of enterprise AI, most of the focus was on building models. As this practice matures, with widespread use of automated machine learning (AutoML) and the commoditization of reusable models (see Hugging Face, GPT, etc.), the focus is shifting to the operational challenges, which include automation, monitoring, and governance. In the past, most resources were allocated to the beginning of the pipeline: training, building and shipping the model. Building the model was the end of the road. In the future, the focus will move to the continuous development, monitoring and operations of ML factories and applications.
As we make AI services and applications essential parts of our business, poor model performance will lead to liabilities, revenue loss, damage to the brand, and unsatisfied customers. Therefore, it is critical to monitor the data, the models, and the entire online application pipeline, and to guarantee that models continue to perform and that business KPIs are met. With well-implemented monitoring solutions, organizations can quickly react to problems by notifying users, retraining models, or adjusting the application pipeline.
In 2023 we will see continuous growth and innovation in the areas of monitoring and observability. Monitoring systems track various infrastructure, data, model, and application metrics and can report or alert on different situations. This is not limited to drift or accuracy measurements; use cases include monitoring the following:
- Data or concept drift: The statistical attributes of the model inputs or outputs change (an indication that the model will underperform).
- Model performance problems: The results of the model are inaccurate.
- Data quality problems: The data provided to the model is of low quality (missing values, NaNs, values are out of the expected range, anomalies, and so on).
- Model bias: Detect changes between the overall scoring and scoring for specific populations (like male and female, minorities, and so on).
- Adversarial attacks: Malicious attempts have been made to deceive the model.
- Business KPIs: Verify that the model meets the target business goals (revenue increase, customer retention, and so on).
- Application performance: Verify that the application serves requests properly and without delays.
- Infrastructure usage: Track the usage of computing resources.
- Model staleness: Alert if too much time has passed since a model version was last deployed.
- Anomaly detection: Model data or results don't fall under the expected norm or classes (for example, using an encoder-decoder neural network model).
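As a concrete example of the first item, data drift between a training (reference) distribution and live traffic is often quantified with a statistic such as the Population Stability Index (PSI). A minimal sketch follows; the bin proportions and the 0.2 alert threshold are common illustrative choices, not universal constants:

```python
# Minimal Population Stability Index (PSI) sketch for data-drift
# monitoring. The example bins and the 0.2 threshold are illustrative.
import math

def psi(expected, actual, eps=1e-6):
    """PSI between two binned distributions (lists of proportions)."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against log(0)
        score += (a - e) * math.log(a / e)
    return score

# Proportion of traffic per feature bin: training set vs. this week.
train_bins = [0.25, 0.25, 0.25, 0.25]
live_bins = [0.40, 0.30, 0.20, 0.10]

drift = psi(train_bins, live_bins)
if drift > 0.2:  # common rule of thumb: > 0.2 suggests significant shift
    print(f"drift alert: PSI={drift:.3f}")
```

In production this check would run on a schedule against the serving logs, with the alert routed to the monitoring system rather than printed.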
Alerts generated by the monitoring system can notify users (via emails, Slack, and so on) or trigger a corrective action such as retraining a model with newer data, changing model weights, and so on. Feature stores will play a significant part in monitoring data and models, and will help store relevant metadata, automatically save production datasets, and conduct the various analytical operations required for monitoring (join, compare, etc.).
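The alert-to-action loop can be sketched as a simple dispatch from alert type to corrective action. The alert types and handlers here are hypothetical placeholders; a real system would call a notification service or kick off a retraining pipeline:

```python
# Minimal sketch of routing monitoring alerts to corrective actions.
# Alert types and handlers are hypothetical placeholders.

def notify(alert):
    # Stand-in for paging or messaging (email, Slack, and so on).
    return f"notified on-call about {alert['type']}"

def retrain(alert):
    # Stand-in for triggering a retraining pipeline with fresh data.
    return f"triggered retraining for model {alert['model']}"

ACTIONS = {
    "data_quality": notify,
    "drift": retrain,
    "staleness": retrain,
}

def handle(alert):
    action = ACTIONS.get(alert["type"], notify)  # default: just notify
    return action(alert)

print(handle({"type": "drift", "model": "churn-v3"}))
# triggered retraining for model churn-v3
```

The point of the dispatch table is that corrective behavior becomes configuration, so responses can be tuned per model without touching pipeline code.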
In 2023, I predict that the focus in the industry at large will shift to live operations: continuous integration and deployment (CI/CD for ML), retraining, model monitoring, security, resiliency, responsible AI, and more. By shifting the concentration to production maintenance and operations, enterprises will be able to enhance the business benefits from their models, improving their real-time accuracy, data freshness and efficiency.
Moving AI to the Center of the Business
In 2023, AI will continue to transform businesses and global economies. While PwC predicts that AI could contribute as much as $15.7 trillion to the global economy by 2030, I believe it will become a significant force even sooner, and as early as this coming year. This is because AI helps businesses find new revenue streams, cut operational costs, improve productivity, reduce friction and increase competitive differentiation.
Various surveys over the last few years indicate that the major impediments to the success of AI in the organization are not technological, but rather cultural. These include challenges like change management, the need to reengineer business processes, staff education, data literacy requirements, organizational alignment, and the elimination of silos to support business objectives. Many organizations report that direct involvement from C-level executives is essential to the success of AI projects. The Harvard Business Review dedicated an article to the vital role of CEOs in leading a data-driven culture, and McKinsey also writes about the role of the CEO and MLOps.
So while the business motivation for AI is very high, if it's not a central part of the business and led by senior executives, it is, in essence, doomed to fail. Organizations will have to make AI and data the center of the business and build their applications around them.
Addressing the cultural challenges and organizational or technology silos is not enough. To achieve a successful AI strategy, you need to redesign all your business processes and tasks around data and AI:
- Build systems and processes for continuously collecting, curating, analyzing, labeling and maintaining high-quality data. The most significant impediment to effective algorithms is insufficient or poor data.
- Develop effective and reliable algorithms that can be explained, are not biased against particular groups or individuals, are correctly fit, continuously monitored and regularly updated using fresh data.
- Integrate a business application’s data assets, AI algorithms, software and user interface into a single project with clear ownership and milestones. Avoid organizational silos.
- Build robust engineering and MLOps practices to continuously develop, test, deploy and monitor end-to-end ML applications.
I’ve seen it with my own eyes: LATAM Airlines Group (the biggest South American airline carrier) was hit hard by its industry’s most dire crisis, the COVID-19 pandemic. With a comprehensive AI-driven cross-company strategy championed by the CEO, and a robust MLOps program, they were able to quickly build and maintain over 500 models in production, across many business domains. One such application reduces the amount of fuel needed for each flight, saving the company tens of millions of dollars annually and significantly reducing CO2 emissions.
That’s why AI is the future of business. It can help automate processes that reduce costs and increase productivity, generate new products and improve user experience. By investing in AI and methodically building an AI adoption plan that ties AI to business value and metrics, businesses can significantly increase their ROI. But without radical shifts in how you manage and operate ML applications, and without making them central to the business and its processes, you will not see the full ROI.
If 2022 was the year of MLOps maturity, 2023 will be the year of MLOps scalability. Enterprises across industries will turn AI and ML into the pinnacle of their business, and implement agile processes to optimize its use. I can’t wait to see what this year will bring.
And one last prediction for 2023: Stay tuned for the release of the new book “Implementing MLOps: A Production-First Approach” authored by yours truly and Noah Gift. You can check out the first two chapters for free, available here. 🙂
Happy New Year! From myself and all of us at Iguazio.