Introducing our New Book: Implementing MLOps in the Enterprise
Sahar Dolev-Blitental | December 14, 2023
“Implementing MLOps in the Enterprise: A Production-First Approach” is a practical guide by MLOps veterans Yaron Haviv and Noah Gift, published by O’Reilly. It shows leaders of data science, MLOps, ML engineering, and data engineering how to bring data science to life across a variety of real-world MLOps scenarios, including generative AI. Drawing from their extensive experience in the field, the authors share their strategies, methodologies, tools, and best practices for designing and building a continuous, automated, and scalable ML pipeline that delivers business value. With practical code examples and specific tool recommendations, the book empowers readers to implement the concepts effectively. After reading it, ML practitioners and leaders will know how to deploy their models to production and scale their AI initiatives while overcoming the challenges many other businesses are facing.
Who This Book Is For
This book is for practitioners in charge of building, managing, maintaining, and operationalizing the ML process end to end:
- Data science / AI / ML leaders: Heads of Data Science, VPs of Advanced Analytics, AI Leads, etc.
- Data scientists
- Data engineers
- MLOps engineers / Machine learning engineers
This book can also be valuable for technology leaders who want to efficiently scale the use of ML and generative AI across their organization, create AI applications for multiple business use cases, and bridge organizational and technological silos that prevent them from doing so today:
- CIOs
- CTOs
- CDOs
Finally, this book is relevant and interesting for anyone with a passion for MLOps and ML, since it is currently the only comprehensive guide to setting up an end-to-end MLOps pipeline, customizing it to any use case or scenario, and running it with open source tools or the tools of your choice. The book also contains a full chapter dedicated to generative AI.
Why Did the Authors Decide to Write this Book?
As MLOps veterans, Yaron and Noah were all too familiar with the following scenario: data science development in enterprises started small, in the lab, where teams worked in isolation with limited datasets. When teams attempted to deploy their trained models, this isolated approach and the use of disparate tools and frameworks across the pipeline made it difficult to bring the models to production. Organizations struggled with ingesting production data, training at scale, serving in real time, monitoring and managing models in production, and more. This wasted resources and time, and ultimately contributed to the failure of many data science projects. The book addresses these exact challenges.
In this book, the authors advocate for a production-first mindset. Drawing on their extensive experience in successfully deploying ML and generative AI models, they suggest beginning with the end in mind: designing a continuous operational pipeline and ensuring all components and practices are mapped into it and automated as much as possible. This approach aims to make the process efficient, scalable, fast, repeatable, and capable of delivering quick business value while meeting the dynamic demands of enterprise MLOps.
With the increasing relevance of AI models in various business contexts and the emerging opportunities in generative AI, the need for effective strategies to bring data science to life in real-world MLOps scenarios has never been more pressing.
Key Takeaways
1. MLOps begins with the end in mind: Adopt a production-first mindset
- In the traditional data science process, models are developed in siloed environments, leading to technical and operational challenges when attempting to deploy these models to production.
- This can be solved by addressing production and deployment needs from the get-go.
- The book guides readers on how to adopt a “production-first” approach and understand the business value of their work.
- Readers learn how to design and run a continuous, automated, streamlined, and scalable ML pipeline that will bring their models to production and the company’s data science to life.
2. MLOps is about building an automated environment and processes for continuously delivering ML projects to production
- MLOps is not only about model training, tracking local experiments or placing an ML model behind an API endpoint.
- There are four components to MLOps:
- Data collection and preparation
- Model development and training
- ML service deployment
- Continuous feedback and monitoring.
- The book provides in-depth explanations and hands-on examples for each component.
- Readers learn how to approach and implement each component in their own environments; a minimal sketch of these four stages appears below.
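As a rough illustration (not taken from the book), the four components might map onto code along the following lines; the dataset, column names, and file paths are hypothetical placeholders:

```python
# A minimal sketch of the four MLOps components using scikit-learn.
# All file, column, and function names here are hypothetical placeholders.
import pandas as pd
import joblib
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1. Data collection and preparation
df = pd.read_csv("customer_events.csv")  # hypothetical source
df = df.dropna()
X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 2. Model development and training
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 3. ML service deployment (here: persisting the artifact a serving layer would load)
joblib.dump(model, "model.joblib")

# 4. Continuous feedback and monitoring (here: a single offline evaluation;
#    in production this would run against live predictions)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```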
3. The first step of MLOps is understanding the use case, goal, and ROI
- MLOps requires more than a notebook or IDE. It starts with conversations with the company’s decision makers to understand the business needs.
- The book provides readers with a framework for tying their project to the business.
- Readers learn how to identify the use case, plan pipeline strategies, and determine which questions to ask stakeholders and what to discuss with them.
4. There are 6 high-level steps in every MLOps project
- The 6 steps are:
- Initial data gathering (for exploration).
- Exploratory data analysis (EDA) and modeling.
- Data and model pipeline development (data preparation, training, evaluation, and so on).
- Application pipeline development (intercept requests, process data, inference, and so on).
- Scaling and productizing the project (adding tests, scale, hyperparameter tuning, experiment tracking, monitoring, pipeline automation, and so on).
- Continuous operations (CI/CD integration, upgrades, retraining, live ops).
- The book guides readers on how to implement each step in their environments, including tools, practices and code examples.
- Readers learn how to build an MLOps project from A to Z; a hypothetical skeleton of these steps is sketched below.
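To make the flow concrete, here is a hypothetical project skeleton that maps the six steps onto placeholder functions; none of the names or signatures come from the book, and the bodies are intentionally elided:

```python
# Hypothetical skeleton mapping the six high-level steps onto functions.

def gather_initial_data():        # 1. Initial data gathering (for exploration)
    ...

def explore_and_model(raw):       # 2. Exploratory data analysis (EDA) and modeling
    ...

def build_data_model_pipeline():  # 3. Data and model pipeline (preparation, training, evaluation)
    ...

def build_app_pipeline(model):    # 4. Application pipeline (requests -> features -> inference -> actions)
    ...

def productize(service):          # 5. Tests, scaling, tuning, tracking, monitoring, automation
    ...

def operate(service):             # 6. Continuous operations (CI/CD, upgrades, retraining, live ops)
    ...

if __name__ == "__main__":
    raw = gather_initial_data()
    explore_and_model(raw)
    model = build_data_model_pipeline()
    service = build_app_pipeline(model)
    productize(service)
    operate(service)
```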
5. Data management and processing are the most critical components of ML
- Data is essential to the ML process, can come from many sources, and must be processed to make it usable.
- The book provides readers with tools, code examples and best practices for data management and processing steps.
- Readers learn how to implement data management steps like data versioning and lineage, data preparation and analysis, interactive data processing solutions, batch and real-time data processing, and feature stores; a small feature-preparation and versioning sketch follows below.
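As a minimal sketch of what feature preparation with lightweight versioning can look like (a dedicated feature store or data-versioning tool would normally handle this), assuming pandas and a hypothetical transactions.csv:

```python
# Minimal sketch of feature preparation plus hand-rolled dataset versioning.
# File and column names are hypothetical; real setups would use a feature store
# or a data-versioning tool rather than content hashing.
import hashlib
import pandas as pd

raw = pd.read_csv("transactions.csv")  # hypothetical source

# Feature preparation: derive simple aggregate features per user
features = (
    raw.groupby("user_id")
       .agg(total_spend=("amount", "sum"), n_txn=("amount", "count"))
       .reset_index()
)

# Simple versioning/lineage: hash the exact content so the feature set
# can be tied back to the data that produced it
version = hashlib.sha256(features.to_csv(index=False).encode()).hexdigest()[:12]
features.to_csv(f"features_{version}.csv", index=False)
print("feature set version:", version)
```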
6. Automation techniques like AutoML, hyperparameter tuning, auto-logging, AutoMLOps, and pipelines are key for building high-quality ML models for production
- Implementing automation and observability in the model development process allows for higher-quality models and continuous development and deployment flows, increasing business velocity.
- The book details the methodologies, tools, and approaches for working with models throughout the ML pipeline before production; a small tuning-and-tracking sketch follows the list below.
- Readers learn how to implement steps for:
- Running, tracking, and comparing ML jobs
- Automations
- Training and ML at scale
- Testing
- Resource management
- And more.
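For instance, a small tuning-and-tracking sketch using scikit-learn's GridSearchCV with MLflow auto-logging (assuming both libraries are installed; the data and parameter grid are purely illustrative) could look like this:

```python
# Sketch of automated hyperparameter tuning plus experiment auto-logging.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, random_state=42)  # illustrative data

mlflow.autolog()  # automatically log parameters, metrics, and the model

param_grid = {"n_estimators": [50, 100], "max_depth": [4, 8]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=3)

with mlflow.start_run():
    search.fit(X, y)  # each candidate is tracked; runs can be compared in the MLflow UI

print("best params:", search.best_params_)
```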
7. Rather than “serving a model”, look at the bigger picture of delivering the application as a whole.
- When delivering an application, ignoring the bigger picture will lead to significant functionality gaps, failures, unnecessary risks, and long delays.
- The book details the required steps, along with code and tool examples; a minimal serving sketch follows the list below.
- Readers learn how to:
- Build and register the model for use in the production application.
- Create an application pipeline that accepts events or data, prepares the required model features, infers results using one or more models, and drives actions.
- Monitor the data, models, and applications to guarantee their availability and performance.
- Retrain.
- Deploy according to various strategies.
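A minimal serving sketch along these lines, using FastAPI and a previously saved model artifact (all names are hypothetical and the feature preparation is deliberately simplistic):

```python
# Minimal sketch of an application pipeline around a model (not just "serving a model"):
# accept an event, build features, run inference, and log the result for monitoring.
# Assumes FastAPI/uvicorn are installed and a model artifact exists; names are hypothetical.
import logging
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO)
app = FastAPI()
model = joblib.load("model.joblib")  # registered/packaged model artifact

class Event(BaseModel):
    amount: float
    n_txn: int

@app.post("/predict")
def predict(event: Event):
    features = [[event.amount, event.n_txn]]  # feature preparation step
    score = float(model.predict_proba(features)[0][1])
    logging.info("prediction=%s features=%s", score, features)  # feeds monitoring/drift checks
    return {"churn_probability": score}
```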
8. MLOps simplifies and abstracts away the complexity of advanced models like deep learning (DL) models and LLMs
- DL, GenAI, and LLM projects introduce risks, excessive costs, and operational complexities, which are addressed by MLOps.
- Building an MLOps pipeline for them requires careful consideration.
- The book covers the different technologies and demonstrates how to build production GenAI applications through examples; a conceptual guardrail sketch follows this list.
- Readers will learn how to implement steps to:
- Reduce LLM risks and quality challenges by implementing LLM data, ML, and application pipelines
- Provide guardrails and monitor various model and application metrics
- Fine-tune models efficiently and customize the LLM
- The book also provides guidance for advanced data types and tasks like NLP, video, and image classification.
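As a conceptual illustration only, a lightweight guardrail wrapper might look like the following; call_llm is a placeholder for whatever LLM client is actually used, and the checks are intentionally simplistic:

```python
# Conceptual sketch of lightweight LLM guardrails: validate input, call the model,
# check the output, and record basic metrics for monitoring.
import re
import time

BLOCKED = re.compile(r"(?i)\b(ssn|credit card)\b")  # toy sensitive-content check

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM client call here
    raise NotImplementedError("replace with a real LLM client call")

def guarded_generate(prompt: str) -> str:
    # Input guardrail
    if BLOCKED.search(prompt):
        return "Request rejected: potentially sensitive content."
    start = time.time()
    answer = call_llm(prompt)
    latency = time.time() - start  # metric to feed model/application monitoring
    # Output guardrail (crude length and content checks)
    if len(answer) > 2000 or BLOCKED.search(answer):
        answer = "Response withheld by guardrail."
    print(f"latency={latency:.2f}s output_chars={len(answer)}")
    return answer
```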
Getting the Most out of Implementing MLOps in the Enterprise
The book is meant to be read in three ways:
- As a strategic guide that opens horizons to new MLOps ideas.
- When making strategic changes to the pipeline that require consultation and assistance. For example, when introducing real-time data into the pipeline, scaling the existing pipeline to a new data source or business use case, automating the MLOps pipeline, implementing a feature store, or introducing a new tool into the pipeline.
- Daily when running and implementing MLOps. For example, for identifying and fixing a bottleneck in the pipeline, pipeline monitoring, and managing inference.
About the Authors
Yaron Haviv
Yaron Haviv is a serial entrepreneur with deep technological experience in data, cloud, AI, and networking. Yaron is the Co-Founder and CTO of Iguazio, which was acquired by McKinsey and Company in 2023. He is an author, keynote speaker, and contributor to various AI associations, publications, and communities, including the CNCF Working Group and the AIIA. Prior to Iguazio, Yaron was the Vice President of Datacenter Solutions at Mellanox (now NVIDIA, NASDAQ: NVDA) and served as the CTO and Vice President of R&D at Voltaire, a high-performance computing, IO, and networking company that floated on the NYSE in 2007 and was later acquired by Mellanox (NASDAQ: MLNX).
Noah Gift
Noah Gift is the founder of Pragmatic AI Labs. With over 30 years of experience, Noah is an accomplished technology expert and a recognized thought leader on topics like MLOps, data engineering, cloud architecture, and programming systems in Rust. He lectures in the data science programs of universities including Northwestern, Duke, UC Berkeley, UNC Charlotte, and the University of Tennessee. Prior to “Implementing MLOps in the Enterprise”, Noah published a number of best-selling and award-winning books with O'Reilly and Pearson that have been adopted by major universities worldwide.
Get your copy here: https://www.oreilly.com/library/view/implementing-mlops-in/9781098136574/