
HCI’s Journey to MLOps Efficiency

Alexandra Quinn | January 12, 2023

HCI (Home Credit International) is a global consumer finance provider. As leaders in their space, they identified the potential of ML models in financial institutions, especially in risk-related use cases. However, deploying ML models in enterprises is not always an efficient process: time to delivery is long and access to data is limited. Jiri Steuer, Enterprise Architect at HCI, recently joined us for a webinar to share his top tips and ideas for achieving MLOps efficiency. In this blog post, we provide a concise overview of his findings.

To see Steuer’s full presentation, including an in-depth description of an MLOps efficiency solution, you can watch the entire webinar here.

Machine Learning Use Cases in Financial Institutions

There is high potential for ML in financial institutions, especially with regard to risk-related requirements. Some of the top use cases include:

  • Campaign management - Identifying the consumers who best fit a product
  • Offer generation - Defining default and pre-approval limits
  • Risk-based pricing - Calculating prices that balance attractive products with product security
  • Next best offer (cross-selling) - Predicting the next best offer for each client
  • Next best offer (insurance) - Decreasing insurance-based risks
  • Product servicing - Predicting penalties when servicing products
  • Collections - Modeling payment behavior, calendars and promises to pay during collection processes
  • Anti-fraud - Protecting clients through device fingerprinting, mobile device scanning, malware detection and more
  • Behavioral profiling - Creating a client behavioral profile based on predictions from data mining

Improving ML Efficiency

Yet, despite the value of ML, HCI’s internal research found that nearly 80% of the time spent on data science-related tasks goes to collecting datasets and cleaning and organizing the data. That leaves only around 20% of the time for core tasks, like building training sets, mining data and refining algorithms.

This aligns with Gartner’s findings, which conclude that in 2021 the average delivery time of an AI initiative, from prototype to production, was more than seven months. And according to Steuer’s own research, the biggest blocker to more efficient use of AI/ML is access to data, followed by the need for a proper AI/ML environment.

These alarming stats, especially when coupled with the potential of ML for financial institutions, raise the urgent need to find ways to make ML delivery and deployment more efficient. There are four key areas where this can be done:

1. Access to Data - Building a data strategy using structured, semi-structured and unstructured data, as well as federated and virtual data.

2. Automation - Creating a proper environment through automated building, training and monitoring, as well as operational efficiency.

3. Performance - Creating a proper environment through elasticity, serverless technology and a hybrid solution.

4. Knowledge sharing and support.

Let’s shed light on some of the activities in these areas that can increase efficiency:

1. Building a Data Strategy

A data strategy can help guide organizations along the path from their current state to the state they want to reach. When building a data strategy, the focus should be on three main aspects: the current state, the future architecture and the transition from one to the other.

To understand the current state, start by defining your current position and maturity. This includes cross-data governance, data quality, the organizational structure and roles, processes, technical solutions, current capabilities and more. Then, identify the pain points, bottlenecks and missing capabilities across the entire company, including all business units.

Once you know what your current state is, it’s time to determine what you want to achieve. Define your business vision and requirements, while taking into account trends, opportunities and best practices.

Finally, it’s time to outline the transition itself. Define the steps and milestones, including a high-level project plan, the impacted sources, knowledge dependencies and risks. Then, identify any blockers and specify delivery schedules.

2. Improving Time to Delivery

Time to delivery can be improved in three main ways:

  1. Integrations: Through federated queries, online sources like Kafka, MQ and REST APIs, and offline sources like HDFS, Cloudera and object storage.
  2. Standardization and Sharing: Model and data sharing with the help of feature stores, and component sharing via repositories (see the sketch after this list).
  3. Automation: Continuous training and hyperparameter tuning with CI/CD pipelines.
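
To make the standardization and sharing point concrete, here is a minimal sketch of publishing and reusing features with MLRun’s feature store. The entity, column and file names are hypothetical placeholders, not HCI’s actual schema:

    import pandas as pd
    import mlrun.feature_store as fstore

    # Define a feature set keyed by client, shared across teams (names are illustrative)
    transactions = fstore.FeatureSet(
        "transactions",
        entities=[fstore.Entity("client_id")],
        description="per-client transaction features",
    )

    # Ingest a batch dataframe; MLRun computes, stores and catalogs the features
    df = pd.read_parquet("transactions.parquet")  # hypothetical offline source
    fstore.ingest(transactions, df)

    # Another team can now assemble a training set from the shared features
    vector = fstore.FeatureVector("risk-features", ["transactions.*"])
    train_df = fstore.get_offline_features(vector).to_dataframe()

The same feature definitions can also serve online lookups at inference time, which is what turns sharing (rather than re-implementing) features into a real time-to-delivery win.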

3. Improving Operations and Elasticity

Operational efficiency can be achieved by using both on-premises infrastructure and the public cloud, and by implementing the following solutions:

  • For hardware - Using different types: bare metal/virtual, CPU/GPU/QPU.
  • For file systems - Supporting HDFS and object storage.
  • For maintenance support - Backup, archiving and cleaning.
  • For enterprise security - Using SSO, IdM and role management, SIEM and vulnerability checks and more.
  • For data management - Using a central data catalog, ensuring data quality and data lineage and more.

Elasticity is achieved through zero downtime (by means of continuous deployment and delivery), auto-scaling with thresholds and throttling, easy scaling of existing Python code, and support for complex event processing such as event aggregation and fixed and sliding time windows. The sketch below shows what this can look like in practice.
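
As a simplified illustration of this elasticity, the following snippet deploys existing Python serving code as an auto-scaling serverless function with MLRun (Nuclio under the hood); the project, file and model names are assumptions for illustration:

    import mlrun

    project = mlrun.get_or_create_project("risk", context="./")

    # Wrap existing Python code as a serverless serving function
    serving = project.set_function(
        "serving.py", name="risk-serving", kind="serving", image="mlrun/mlrun"
    )

    # Elasticity: scale between 1 and 8 replicas as load changes
    serving.spec.min_replicas = 1
    serving.spec.max_replicas = 8

    # Attach a model from the artifact store (path is hypothetical)
    serving.add_model("risk-model", model_path="store://models/risk/risk-model:latest")

    # Rolling deployment enables zero-downtime updates
    serving.deploy()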

ML Efficiency Results in Improved Delivery Outputs

As a result of implementing these efficiency measures, enterprises can expect to achieve the following:

  • Time to delivery reduced by 3-6.6x, and even up to 10x
  • Operating costs cut by 60%
  • Storage capacity reduced by 20x

Improving MLOps Efficiency With MLRun

MLRun, built and maintained by Iguazio, is an open source MLOps orchestration framework for accelerating ML pipelines. MLRun comprises four main components:

1. The Feature Store - Enabling automated offline and online feature engineering for real-time and batch data.

2. The Real Time Serving Pipeline - For rapid deployment of scalable data and ML pipelines using real-time serverless technology.

3. Monitoring and Retraining - Codeless data and model monitoring, drift detection, and automated remediation and retraining.

4. CI/CD for ML - Integrated CI/CD across code, data and models by using mainstream ML, Git and CI/CD frameworks.

These four components enable running automated, fast and continuous ML processes and delivery of production data. With MLRun, code is deployed as a microservice in one click, pipeline deployment is automated, and monitoring is automated and codeless, as the sketch below illustrates.
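
As a rough sketch of how this looks in code, the snippet below registers functions in an MLRun project, enables codeless model monitoring on the serving function, and runs an automated workflow; the file and project names are illustrative assumptions, not a prescribed layout:

    import mlrun

    project = mlrun.get_or_create_project("credit-risk", context="./")

    # Register pipeline steps as project functions
    project.set_function("train.py", name="train", kind="job",
                         image="mlrun/mlrun", handler="train")
    serving = project.set_function("serving.py", name="serve",
                                   kind="serving", image="mlrun/mlrun")

    # Codeless model monitoring: track served predictions for drift detection
    serving.set_tracking()

    # Register and run the workflow chaining training, evaluation and deployment
    project.set_workflow("main", "workflow.py")
    project.run("main", watch=True)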

Through collaborative and continuous development and MLOps, organizations can achieve faster time to production, efficient use of resources, high quality and responsible AI and continuous application improvement.

Watch the entire webinar here.