Adding AI to applications - how to succeed with data

By Peter Greiff, Data Architect Leader EMEA, DataStax.

Everywhere you look, artificial intelligence (AI) is present. From news stories about ChatGPT through to new services that tout their AI credentials, implementing AI is seen as a way to improve business performance. According to research by McKinsey, more than half (56%) of companies have already adopted AI in some form within their businesses. 

However, delivering successful AI projects is still problematic for companies outside the technology elite. According to Accenture, only 12% of companies have so far achieved significant growth from their AI projects. That figure should rise over the next few years as developers and IT teams learn more about how to architect and run AI as part of their operations.

Similarly, according to a Gartner® survey (1), “On average, 54% of AI projects make it from pilot into production.” Frances Karamouzis, distinguished VP analyst at Gartner, commented: “Scaling AI continues to be a significant challenge. Organizations still struggle to connect the algorithms they are building to a business value proposition, which makes it difficult for IT and business leadership to justify the investment it requires to operationalise models.”

So how can we bridge the gaps that exist and make AI accessible and successful for every company, not just the privileged few?

The state of AI today

The first issue - and potentially the most serious one - is applying the wrong approach at the start. A lot of this comes down to the data that you have to feed into your AI approach. If you don’t have the right data, or if you handle it wrongly, then your AI implementation will not provide useful insight. The phrase ‘garbage in, garbage out’ is an accurate one here. Alongside the quality of the data, you will also have to consider the velocity, type, and volume of data that you have if you want to ensure the quality of your predictions and how they drive outcomes. 

To take the right approach here, you need to serve your applications with the right data, and that means understanding how a feature store works. Feature stores are at the heart of AI and machine learning deployments, as they hold the prepared data used for analysis, known as features. These features are measurable properties that can be used for analysis, and they generally require some data transformations to work. These transformations are defined during the process of feature engineering and can include scaling values or carrying out computations based on prior records. Essentially, these feature stores contain the analytic brains used to think about what might happen next, supporting the work of taking in new inputs, making predictions and creating outputs.
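
As a rough sketch of what feature engineering involves, the example below (Python with pandas; the column names are hypothetical, not from any specific feature store) scales a raw value and derives a feature from prior records before the rows would be written to a feature store.

```python
import pandas as pd

# Hypothetical raw order data; column names are illustrative only.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "order_value": [20.0, 35.0, 15.0, 80.0, 5.0],
})

features = raw.copy()

# Transformation 1: scale the raw value into a 0-1 range so that
# models treat it consistently regardless of currency or magnitude.
features["order_value_scaled"] = (
    (features["order_value"] - features["order_value"].min())
    / (features["order_value"].max() - features["order_value"].min())
)

# Transformation 2: a computation based on prior records - the running
# average order value per customer up to and including each order.
features["avg_order_value_so_far"] = (
    features.groupby("customer_id")["order_value"]
    .expanding().mean()
    .reset_index(level=0, drop=True)
)

# In a real deployment these rows would be written to the feature store
# through its ingestion API rather than kept in memory.
print(features)
```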

If these models are wrong, or they carry out inaccurate transformations, then the results will not be useful to you. For example, you may miss a pattern in your data, and that can lead to worse results - more customers will churn, or your security system may miss a threat.

Using the wrong data?

This problem can arise when you have too much data to work through. To deal with data volumes, you can aggregate data to make it easier to transport, and you can transform it into a new set that is easier to use. However, if you have to carry out multiple transformations just to get that data set into your feature store, you may miss out on important insights. Like photocopying a document multiple times, each transformation can make the data less accurate and less useful.
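
A minimal illustration of how repeated aggregation discards detail, using made-up transaction data: once events are rolled up to daily and then weekly totals, the per-transaction pattern can no longer be recovered by anything downstream.

```python
import pandas as pd

# Hypothetical per-transaction events.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-01 09:00", "2024-01-01 09:05",
        "2024-01-01 17:30", "2024-01-02 10:00",
    ]),
    "amount": [5.0, 5.0, 200.0, 50.0],
})

# First transformation: roll up to daily totals for easier transport.
daily = events.resample("D", on="timestamp")["amount"].sum()

# Second transformation: roll up again to a weekly total.
weekly = daily.resample("W").sum()

# The weekly figure is still correct, but the burst of small morning
# transactions followed by one large payment is no longer visible, so a
# feature built only from 'weekly' can never capture that pattern.
print(daily)
print(weekly)
```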

To overcome this problem, you will have to keep a close watch on how your models perform over time. Where data is missing, or where your models don’t have the right data to work from, prediction accuracy will start to degrade. To prevent this, you may have to go back to your raw data as you build new features for your models. However, this can slow down your data scientists and extend their experiments.
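
One way to keep that close watch is to track prediction accuracy over a rolling window of recent requests and flag the model when it dips below a threshold. The sketch below assumes you log each prediction alongside the outcome that is later observed; the column names and thresholds are illustrative.

```python
import pandas as pd

def rolling_accuracy(log: pd.DataFrame, window: int = 500) -> pd.Series:
    """Accuracy over a rolling window of the most recent predictions.

    Assumes 'log' has 'prediction' and 'outcome' columns, ordered by
    the time each prediction was served.
    """
    correct = (log["prediction"] == log["outcome"]).astype(float)
    return correct.rolling(window, min_periods=50).mean()

def needs_retraining(log: pd.DataFrame, threshold: float = 0.85) -> bool:
    """Flag the model when recent accuracy falls below the threshold."""
    accuracy = rolling_accuracy(log)
    if not accuracy.notna().any():
        return False  # not enough logged predictions yet
    return accuracy.dropna().iloc[-1] < threshold

# Example usage with a hypothetical prediction log:
# log = pd.read_parquet("prediction_log.parquet")
# if needs_retraining(log):
#     ...  # revisit raw data and rebuild features before retraining
```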

Another challenge is how to scale up your AI and machine learning infrastructure to work in production. Testing will normally involve a small data set to create an initial model, and data scientists will then iterate on that data set. Once the model is tested, it can be moved into production. However, this scaling effort is not as simple as moving to bigger machines or consuming more processing power.

The quality of your data models should improve as they scale up, as using more data should improve the statistical accuracy that you are able to achieve. However, legacy infrastructure can buckle under the sheer volume of data and events that you then need to process.

The last challenge here is processing data too late to make a difference. When you have multiple systems involved in serving a data model, latency creeps in between when a transaction starts and when your AI system delivers its results. What works for longer-term trend analysis or historical data will not be fit for purpose for real-time transactions. For example, proposing a special offer after the customer has already completed their purchase adds nothing, while serving one after they have abandoned their basket only irritates them. This can also affect your internal teams: if you are trying to build more accurate models and improve your AI based on old data, you can never catch up to the reality of the customer experience.
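
A simple guard against acting on stale results is to attach a latency budget to each prediction and drop the action once the budget is exceeded. The function below is an illustrative sketch under that assumption, not a specific product API.

```python
import time

LATENCY_BUDGET_SECONDS = 0.2  # illustrative real-time budget

def maybe_present_offer(transaction_started_at: float, offer: str) -> None:
    """Only surface the offer if we are still inside the latency budget.

    'transaction_started_at' is a time.monotonic() timestamp captured
    when the customer interaction began.
    """
    elapsed = time.monotonic() - transaction_started_at
    if elapsed <= LATENCY_BUDGET_SECONDS:
        print(f"Showing offer: {offer}")
    else:
        # Too late: the customer has already moved on, so stay silent
        # rather than irritate them with an out-of-context promotion.
        print("Offer skipped: prediction arrived too late")

# Example usage:
start = time.monotonic()
# ... feature lookup and model inference would happen here ...
maybe_present_offer(start, "10% off if you complete your purchase")
```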

Solving the issues around applications and AI

To get ahead of these issues, look at the applications that you have and the user experience that you want to support with AI. Are you looking at real-time interactions and use cases, or more historical data analysis? Technologies that work for one use case are not necessarily right for other situations.

For real-time AI deployments like recommendations or personalisation, look at how you can bring your machine learning approach to your datasets, rather than the other way around. Rather than struggling to make your data infrastructure cope with real-time data, this approach lets you concentrate on computing features and understanding relationships between events over time. It also means that you can work asynchronously around features, which helps your machine learning models scale up and manage millions of contexts in parallel.
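
As a sketch of what working asynchronously around features can look like, the snippet below computes features for many user contexts concurrently rather than one request at a time. It uses only the standard-library asyncio module, and the function and field names are illustrative.

```python
import asyncio

async def compute_features(context_id: str) -> dict:
    """Stand-in for an asynchronous feature computation.

    In practice this would read recent events for the context from a
    stream or feature store; here it just simulates the I/O wait.
    """
    await asyncio.sleep(0.01)  # simulated lookup latency
    return {"context_id": context_id, "recent_events": 3}

async def score_contexts(context_ids: list[str]) -> list[dict]:
    # Compute features for all contexts concurrently; because the work
    # is I/O bound, the same pattern scales to very large batches.
    return await asyncio.gather(*(compute_features(c) for c in context_ids))

if __name__ == "__main__":
    results = asyncio.run(score_contexts([f"user-{i}" for i in range(100)]))
    print(len(results), "contexts scored")
```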

From a workflow perspective, this also makes it easier to build features that understand the timing or sequence of events that users go through. By understanding the user journey, you can make your models aware of what is expected and then act upon any context changes that you see in your data. This means that you can concentrate on the right areas and get them working in real time.
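
For example, a timing-aware feature can be as simple as each event's position in the journey and the gap since the previous event. The sketch below uses hypothetical event names to show the idea.

```python
from datetime import datetime

# Hypothetical user journey: (event name, timestamp).
journey = [
    ("view_product", datetime(2024, 5, 1, 10, 0, 0)),
    ("add_to_cart",  datetime(2024, 5, 1, 10, 2, 30)),
    ("checkout",     datetime(2024, 5, 1, 10, 20, 0)),
]

features = []
for position, (event, ts) in enumerate(journey):
    seconds_since_previous = (
        (ts - journey[position - 1][1]).total_seconds() if position else 0.0
    )
    features.append({
        "event": event,
        "step_in_journey": position,                        # sequence order
        "seconds_since_previous": seconds_since_previous,   # timing gap
    })

# A long gap before 'checkout' might signal hesitation that a real-time
# model could react to, for instance by triggering assistance.
print(features)
```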

Alongside this, you can look at how to use cloud services to power your approach as part of your applications. This makes it easier for developers to build AI support into their applications as well, as these services can be connected through APIs. At the same time, scaling is easier for both the application and the AI model predictions.
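
As a minimal sketch of what that API connection might look like from application code, the snippet below posts features to a cloud-hosted prediction endpoint. The URL, payload shape and authentication header are placeholders, not any specific vendor's API.

```python
import requests

# Hypothetical managed prediction endpoint.
PREDICTION_URL = "https://api.example.com/v1/models/churn/predict"

def get_prediction(features: dict, api_key: str) -> dict:
    """Send features to a cloud-hosted model and return its prediction."""
    response = requests.post(
        PREDICTION_URL,
        json={"features": features},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=2,  # keep the application responsive if the service is slow
    )
    response.raise_for_status()
    return response.json()

# Example usage from application code:
# prediction = get_prediction({"avg_order_value_so_far": 42.5}, api_key="...")
```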

This approach makes it easier to add AI to applications, allowing more companies to use feature stores and machine learning capabilities in their services. Rather than being the preserve of the big technology companies that were able to invest early in AI, these steps make it easier for more companies to add AI to their applications and deliver the experiences that customers want and expect today. With the right data, at the right time and in the right place, every application can be AI-powered.

(1) Gartner Press Release, “Gartner Survey Reveals 80% of Executives Think Automation Can Be Applied to Any Business Decision,” published August 22, 2022, https://www.gartner.com/en/newsroom/press-releases/2022-08-22-gartner-survey-reveals-80-percent-of-executives-think-automation-can-be-applied-to-any-business-decision. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
