
#MLOPSLIVE WEBINAR SERIES

Session #13

How Feature Stores Accelerate & Simplify Deployment of AI to Production

There are many challenges to operationalizing machine learning, but perhaps one of the most difficult is feature engineering. Features are the properties used as inputs to a machine learning model, and generating new ones, a process known as feature engineering, takes a tremendous amount of work.

Many enterprises go through the same feature engineering process twice: once for training, based on historical data, and again for model prediction, based on online or real-time data. Multiply that by the number of models a company must build. Beyond the extensive duplicated effort this demands of the engineering team, it also hurts model accuracy when the training and serving pipelines end up computing slightly different features.

This is where feature stores come in. A feature store accelerates the development and deployment of AI applications by automating feature engineering and providing a single pane of glass to build, share and manage features across the organization. This improves model accuracy, saves your team valuable time and provides seamless integration with training, serving and monitoring frameworks.
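
As a rough illustration of what "defining a feature once" can look like, here is a minimal sketch using the MLRun feature store API. The feature set name, entity, columns and aggregation windows are hypothetical, and the calls (FeatureSet, Entity, add_aggregation, ingest) reflect MLRun 1.x; names and signatures may differ between versions.

```python
# Minimal sketch: define a reusable feature set once with MLRun's feature store.
# (Assumes MLRun 1.x APIs; the data and names below are illustrative only.)
import pandas as pd
import mlrun.feature_store as fstore

# Raw records -- in practice these would arrive from a batch source or a
# real-time stream rather than an in-memory DataFrame.
vitals = pd.DataFrame(
    {
        "patient_id": ["p1", "p1", "p2"],
        "heart_rate": [72, 95, 110],
        "timestamp": pd.to_datetime(
            ["2024-01-01 10:00", "2024-01-01 11:00", "2024-01-01 10:30"]
        ),
    }
)

# Define the feature set once; the same definition is reused later for
# offline training and online serving.
vitals_set = fstore.FeatureSet(
    "patient-vitals",
    entities=[fstore.Entity("patient_id")],
    timestamp_key="timestamp",
)

# Declare an engineered feature (a windowed aggregation) so the same
# transformation is applied identically in batch and in real time.
vitals_set.add_aggregation("heart_rate", ["avg", "max"], ["1h"])

# Ingest the data; MLRun materializes both offline (parquet) and online
# (key-value) targets by default.
fstore.ingest(vitals_set, vitals)
```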

Watch this session to explore:

  1. The challenges associated with feature engineering across training and serving environments, especially when building real-time ML pipelines
  2. What feature stores are, and how they make it simpler for teams to build, share and manage features across the organization (see the retrieval sketch after this list)
  3. How the Sheba Medical Center plans to predict COVID-19 patient deterioration in real time using a feature store
  4. How to extend feature stores to modern workloads spanning real-time and unstructured data, NLP (natural language processing) and deep learning (includes a live demo)
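
To make the "build once, consume everywhere" idea concrete, here is a minimal sketch of consuming a shared feature definition for both offline training and online serving. It again assumes the MLRun 1.x feature store API (FeatureVector, get_offline_features, get_online_feature_service), and it reuses the hypothetical "patient-vitals" feature set and "patient_id" entity from the ingestion sketch above.

```python
# Minimal sketch: consume the same feature definitions offline and online.
# (Assumes MLRun 1.x APIs and the hypothetical "patient-vitals" feature set.)
import mlrun.feature_store as fstore

# A feature vector joins features from one or more feature sets.
vector = fstore.FeatureVector(
    "patient-deterioration-vec",
    ["patient-vitals.*"],
    description="Features for the deterioration model",
)
# Store the vector definition so other pipelines can reuse it
# (requires a configured MLRun project/environment).
vector.save()

# Offline: build a training dataset from the historical (parquet) target.
train_df = fstore.get_offline_features(vector).to_dataframe()

# Online: serve the identical features from the low-latency key-value target.
svc = fstore.get_online_feature_service(vector)
features = svc.get([{"patient_id": "p1"}])
svc.close()
```

The point of the design is that the training set and the real-time lookup are derived from one shared definition, which removes the duplicated engineering and the training/serving skew described above.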