CDP and AI Challenges: Riding the Hype

This post on AI challenges is part 2 of our series exploring how advances in AI relate to the customer data platform (CDP) market and to customer experience (CX) more broadly. In the first post, covering generative AI, I put on my optimist hat and explored the top possibilities for artificial intelligence (AI) in the context of CX. In this post, by contrast, I'll balance that optimism with the AI challenges that will need to be solved.

AI Challenge 1: Building, Training, and Operationalizing Custom Models

Data is the foundation of any AI model, and companies are increasingly differentiating not just through products and services but through investments in customer data. Unfortunately, general-purpose models like ChatGPT are not trained on your customer data; they are trained on open web data, and no amount of prompt engineering will change that fact. When it comes to large enterprises leveraging AI for CX, out-of-the-box models are a stopgap solution. The real value will be unlocked by applying this differentiating asset (customer data) to custom AI models. Brands will then be able to build custom models for specific use cases, and custom models almost always outperform out-of-the-box models.

Here the challenge is having the technology and expertise to enable this. Large language models (LLMs) and other foundation models have catapulted to the top of every CEO's strategic priority list, but aside from ChatGPT and internal models at large tech companies (e.g., Google's Bard), the technology and expertise are barely past the open-source and academic stage. Certainly, there are some companies building platforms to better operationalize custom genAI models, such as Dataiku, Snorkel AI, and MosaicML, but these products and this market are still in the early phases of development.

Nothing illustrates this better than Databricks' recent acquisition of MosaicML, a two-year-old company with ~60 employees, for $1.3B. This was no doubt an acquisition driven by talent and an early product with a novel approach, which isn't something you'd expect to see in a mature category with well-established products. From my perspective, this puts Databricks in an excellent position to be that platform for genAI models, in the same way it has been for traditional machine learning (ML). But even if the technology is not a limiting factor, the expertise to leverage it will be, at least for most brands. I've never met a company that has said it was able to hire as much data science talent as it needed or wanted, and expertise around LLMs will draw from an even smaller pool of people.

AI Challenge 2: Difficult to Validate

Ok, so let’s assume the technology and expertise to develop custom models on proprietary data sets is there. Woohoo, we’re done, off to the races! Not quite. The next challenge that will arise will be in how to validate, and ultimately trust, the output of the model.

Deep learning models are incredibly complex, and model explainability and interpretability (the terms used to describe how easy it is to understand how and why a model makes a prediction) are nearly impossible to achieve. This is an issue for two reasons. One is that while LLMs are often more accurate than traditional ML models, they are nowhere near perfect and, unfortunately, have a tendency to be confidently incorrect at times. My favorite example of this was Google's first demo of Bard, where a factual error, confidently delivered by Bard, resulted in a $100B drop in Alphabet's market cap. Alphabet will be fine, and its stock has since more than recovered. But not every company is Google, either in having the expertise to recover and improve the model or in the market's willingness to get past missteps or problems with AI so quickly. Even Microsoft, widely viewed as the big tech pioneer in this space, recently had to disable its ChatGPT-enabled Bing search due to its unforeseen ability to circumvent publisher paywalls.

The point is, it's nearly impossible to understand how these models will behave in the wild. There are going to be hallucinations, confidently incorrect factual statements, and unforeseen and potentially harmful applications. All of this will create risk for the companies deploying these models, and that risk will need to be managed and balanced against the value being created. What this likely means, and to an extent what we've already seen, is that these models will be leveraged first and foremost in places where that risk is lower. Web search is inherently probabilistic, but given the scale of companies like Google and Microsoft, even incremental improvements can have a huge upside. But what about in regulated industries, where there are legal restrictions on the types of data that can be used for marketing or support? In an age where brand value can ebb and flow daily with public opinion, some brands will opt for caution and prefer to be on the back end of the adoption curve.

AI Challenge 3: Cost and Difficulty to Commercialize

Finally, there is the question of who is going to pay for all of this. Training LLMs can be expensive and is best supported by specialized hardware, hence Nvidia's meteoric rise in valuation. I have no doubt that as the methods and hardware advance, there will be huge improvements that bring down the cost to train and operationalize these models, as has already started to happen. Still, at least for the foreseeable future, costs will remain high, and tech companies integrating AI into their products will need to figure out how to pass this cost on to their customers, especially in this era of tight money and efficient growth.

Maybe that comes in the form of new pricing models that incorporate AI, or maybe the costs are offset by increased growth driven by the AI functionality. Both are certainly possible, but neither is a given. My guess is that many of these new AI features won't prove valuable enough to justify their costs, and the strategy and product will evolve. Which is fine; that's all part of the hype cycle of integrating new technologies into products and services. It's just good to remember that not all of these products or features will be home runs.

In Closing

None of these challenges with AI are insurmountable. In general, I'm firmly convinced that the opportunities will outweigh the challenges, and that the next few years will be looked back on as the critical inflection point for the adoption of AI, in the same way the 2000s were for the web and the 2010s were for data. At ActionIQ, we are certainly approaching this opportunity with open arms, and we look forward to working with our customers and partners to define the future of AI in CX.

Stay tuned for the next two parts of our mini-series on AI, where we'll cover the specific use cases generative AI will open up for business users and for audiencing with customer data.

Justin DeBrabant
Senior Vice President, Product
Justin spent his formative years building large distributed systems to support data science and analytics. Justin holds a Ph.D. from Brown University where he researched scaling high-throughput in-memory database systems to support larger-than-memory datasets.