The Importance of Data Storytelling in Shaping a Data Science Product

Yaron Haviv | January 4, 2021

Artificial intelligence and machine learning are relentlessly revolutionizing marketplaces and ushering in radical, disruptive changes that threaten incumbent companies with obsolescence. To maintain a competitive edge and gain entry into new business segments, many companies are racing to build and deploy AI applications.

In the frenzy to be the first to deploy these AI solutions in the marketplace, it’s easy to overlook the ultimate objective of embarking on data science initiatives — not just extracting business-focused insights from data but also communicating these insights to the intended audience.

Let’s face it…data is boring to the average audience.

Communicating insights without visuals is tantamount to displaying data in a vacuum — it doesn’t help audiences understand the significance of what they’re looking at. While data scientists can analyze and extract intelligent information from mountains of historical and real-time data, they are sometimes unable to effectively relay these hidden insights to audiences.

This is where data storytelling comes in.

Data storytelling focuses on communicating insights to audiences through the use of appealing visuals and narratives. It holds the power to bring a human perspective to increasingly complex, expanding and rapidly changing data sets.

To learn more about the importance of data storytelling in shaping a data science product, watch the MLOps Live session with Siemens on demand here.

Why data visualization is an important aspect of data science

Humans are evolutionarily hard-wired to react emotionally to visuals, and because the human mind processes information in images, visuals are a more effective way to communicate. This is especially true in the realm of data science.

Accurate data is indispensable to the derivation of accurate and intelligent insights. However, these insights must be communicated in a way that drives actionable results and intelligent decision-making. This is where savvy businesses leverage the power of data visualization.

Facts simply present data, whereas visuals provide at-a-glance snapshots that present boring statistics in a much more compelling manner. By creatively using powerful visualizations, businesses can capture and engage audiences. The right visuals bring data to life and enable consumers to spot patterns and trends in data sets. This helps audiences better digest the information and derive valuable insights, and it guides them towards intelligent, data-driven decision-making. This is the ultimate goal of data science.

How data storytelling can impact the selling power of a data science product or an AI application

Despite the proliferation of dashboards and BI tools, a lot of businesses are still unable to understand and take advantage of the gems of information hidden in their data. These tools have one major limitation — they are designed to present data as charts and numbers. As such, they can only inform audiences about what is happening — not why it’s happening.

True data storytelling requires businesses to go beyond data visualization and enter the realm of narration. Narratives help to contextualize the data in visuals and explain to audiences the reasons behind the patterns they see and the implications for their specific use case. Essentially, narratives give data and visuals an expressive voice that uses simple, relatable language to effectively convey insights to the target audience.

This makes narrative a key component of data storytelling. By enabling audiences to fully comprehend the information being presented in visuals, narratives can multiply the effectiveness of data visualization many times over.

The best way to create a narrative is to become completely immersed in the audience’s mindset. This enables teams to come up with an intriguing story that explores the key points in the data set while connecting emotionally with the target audience. This emotional connection is indispensable when marketing and selling AI products.

By creating a story out of the insights garnered from analyzing data sets, businesses can help shape audience perceptions and behaviors, educate them about complex issues and stimulate conversation around their AI product.

However, it’s important to choose visuals that appeal to the target audience based on the domain and the context of the data. Although data science and data engineering teams usually take care of this, a data visualization expert can help ensure the effectiveness of the visuals. The right visuals should highlight the most important aspects of the data rather than overwhelm the audience with too much information.

Data visualization tools

Data visualization is critical in every phase of developing and deploying an AI product. It is key both to getting executive buy-in to kick off the project and to driving user adoption.

There are several handy tools for data visualization, with the most popular including Python libraries, Tableau, Power BI, Spotfire, Google Data Studio and QlikView. These tools help teams run data analysis and visualization to reveal hidden patterns and insights and to generate rich 2D and 3D plots and graphs.

However, these tools typically work against an analytical database that holds mostly historical rather than real-time data. For visualizations that need to happen during the development phase, or in deployment environments that work with real-time data, more agile tools are needed.
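
Python libraries fall into that more agile category: during exploration, a few lines of pandas and matplotlib are often enough to turn a raw table into a chart an audience can read at a glance. A minimal sketch (the CSV path and column names are hypothetical):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical dataset: daily revenue per region (path and columns are illustrative)
df = pd.read_csv("daily_sales.csv", parse_dates=["date"])

# Aggregate to a weekly view so the audience sees the trend, not the noise
weekly = (
    df.set_index("date")
      .groupby("region")["revenue"]
      .resample("W")
      .sum()
      .unstack("region")
)

# One chart, one message: how each region trends over time
ax = weekly.plot(figsize=(10, 5), title="Weekly revenue by region")
ax.set_xlabel("Week")
ax.set_ylabel("Revenue (USD)")
plt.tight_layout()
plt.show()
```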

Implementing data visualization into an operational ML pipeline

During both the exploration and the production phases of data science projects, teams need visualizations that capture the full value of the data. However, this introduces a major hiccup.

Data visualization in traditional development environments is relatively easy. It becomes challenging within ML pipelines and development environments that leverage MLOps, because data visualization requires data preparation tooling to build up the specific aspects and features that need to be visualized, a multi-step process in which data passes through several stages.

Incorporating this multi-step process into modern ML pipelines can be problematic, since automating the entire flow from data preparation through training to deployment is a core tenet of MLOps. MLOps ensures that the entire ML pipeline flows seamlessly by addressing and automating deployment, scalability, maintainability and the rollout of new and updated features.

Leveraging feature stores in the data visualization process

To resolve this challenge, data science teams are turning to feature stores. Feature stores come in handy when data scientists need to work with data from various sources: real-time data arriving through streaming or RESTful APIs, historical data coming from ETL jobs or databases, and interactive data within the environment.

With a feature store, data science teams no longer need a single end-to-end ML pipeline that runs from raw data all the way to models. Feature stores enable seamless data visualization within operational ML pipelines by separating the process of ingesting and featurizing data from the process of training models on features that come from one or more disparate sources. This matters for data visualization in MLOps environments because feature engineering usually runs on a different cadence from model training.

Along with their data layer, feature stores also come with a data transformation service that lets teams manipulate data and store it as features for use by ML models. They ingest data from several sources, validate and featurize it, and then cache the resulting features in the feature store in various formats, ready to be consumed by ML pipelines, models and visualizations.
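
Conceptually, that ingest-validate-featurize step can be pictured as a small transformation function whose output is cached for other pipeline steps to reuse. The sketch below is a generic illustration; the column names, feature logic and file paths are hypothetical and not tied to any particular feature store:

```python
import pandas as pd

def featurize_transactions(raw: pd.DataFrame) -> pd.DataFrame:
    """Validate raw transaction records and derive reusable per-customer features."""
    # Validation: drop records that cannot produce meaningful features
    clean = raw.dropna(subset=["customer_id", "amount", "timestamp"])
    clean = clean[clean["amount"] > 0]

    # Featurization: aggregate per customer into a feature set
    return (
        clean.groupby("customer_id")
             .agg(
                 total_spend=("amount", "sum"),
                 avg_ticket=("amount", "mean"),
                 last_seen=("timestamp", "max"),
             )
             .reset_index()
    )

# "Cache" the feature set so training, serving and visualization steps can all
# consume the same features without re-running the raw-data pipeline.
# (In a real feature store, storage, versioning and serving are handled for you.)
features = featurize_transactions(pd.read_parquet("transactions.parquet"))
features.to_parquet("feature_store/customer_features.parquet", index=False)
```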

This removes much of the complexity around the data visualization aspect of ML pipelines. When building the final data product, teams can request one or more feature sets and merge them to create the kind of visualization their particular use case needs.

MLRun, an open-source framework for machine learning automation, runs as a managed service within Iguazio’s data science platform. It acts as a multi-model feature store that ingests data from various sources (real-time data, historical data, interactive data within the environment, prediction results, etc.) and delivers features as needed to BI dashboards, real-time dashboards, web and mobile applications, Python visualizations, etc. With MLRun, data scientists can achieve data visualization with minimal development effort.
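
As a rough sketch of how this looks in code, the snippet below uses MLRun’s documented feature store API (exact signatures vary between versions, and the feature set, entity and file names here are illustrative):

```python
import pandas as pd
import mlrun.feature_store as fstore

# Define a feature set keyed by customer and ingest a batch of engineered features
customer_set = fstore.FeatureSet("customer-features", entities=[fstore.Entity("customer_id")])
fstore.ingest(customer_set, pd.read_parquet("feature_store/customer_features.parquet"))

# Later, on its own cadence, request just the features a dashboard or model needs
vector = fstore.FeatureVector("spend-overview", ["customer-features.*"])
features_df = fstore.get_offline_features(vector).to_dataframe()

# The merged feature frame can now feed a BI dashboard, a Python plot or a training job
print(features_df.head())
```

The same feature vector definition can serve a notebook plot, a BI dashboard or a training job, which is what lets feature engineering and visualization run on their own cadence.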

Start Your Journey: Book a Live Demo.