

Effective Due Diligence of AI Products & Companies for Investment

Author: Steven Devoe Posted In: Data, Machine Learning

In today’s landscape, where ChatGPT and its peers have reshaped technology, it seems almost every company is embracing AI. At the same time, there’s a looming concern that companies are overstating how substantial, and how proprietary, their AI capabilities really are. For example, when OpenAI announced in October 2023 that ChatGPT could interact with PDFs directly, the value of a dozen or more startups was wiped out almost instantly.

For private equity investors looking to dive into this world, rigorous due diligence is vital, both to avoid missing lucrative opportunities and to avoid investing in vaporware. In this post we cover a foundational, non-technical due diligence approach that private equity investors can use to separate truly groundbreaking AI applications from those that are merely commonplace.

The two fundamental questions to answer as a part of non-technical AI due diligence are: 

  1. Is this an appropriate use of AI or is it something else entirely? 
  2. How novel or distinctive are the AI elements compared to what is already widely available? 

An appropriate use case or riding the AI wave?

With the surge in AI’s popularity, we have noticed that many companies that talk about their “AI capabilities” are describing something else entirely, such as mathematical optimization or deep logical structures built into software. While these tools can be transformative and incredibly powerful, they do not fit the AI mold, and they should not command the investment premium associated with AI companies.

Perhaps more concerning to investors, some companies are applying AI to use cases where it does not belong. AI is good at many things, and it will only continue to improve in the depth and breadth of its capabilities; but as it stands today, it is generally more expensive, less repeatable, and harder to maintain than many other technological approaches. In the right situations AI can provide incredible value, but it is not always the better solution. The question investors should ask, then, is whether AI provides greater value for the use case at hand than traditional technologies would.

How novel or unique is this?

The second question critical for an investor to answer, and a much more nuanced one, is “How novel or unique are the AI aspects of any given company?” Ultra-sophisticated AI models can be accessed via a simple API call, giving the illusion of deep-rooted, proprietary innovation. Despite some very clever implementations built this way, an API call alone does not equate to a robust competitive advantage. Moreover, data science has always been an “open” community in which information is shared through whitepapers and other mediums, not unlike other scientific disciplines.

When it comes to AI, the way to distinguish yourself comes down to the data used with your model. 

In both the AI and more traditional machine learning worlds, the number one thing an organization can do to distinguish itself or improve its technical results is to have access to data that is exclusive to it. That data can serve as input to a pre-trained large language model (LLM) or be used to train a more specialized, and therefore more proprietary, model. These data inputs act as examples of the inputs the model should expect and the outputs it should produce. Typically, this exclusive data is gathered through an existing product or service the organization offers, and then repurposed as the inputs and outputs of the AI model.
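To make the "data as inputs and outputs" idea concrete, here is a minimal Python sketch of in-context learning: proprietary input/output examples are assembled into a prompt for a pre-trained LLM. The example records, prompt layout, and `build_few_shot_prompt` helper are hypothetical illustrations, not any particular vendor's API.

```python
# Proprietary data gathered from an existing product or service,
# repurposed as example inputs and outputs for the model.
# (Hypothetical insurance-claim records, for illustration only.)
proprietary_examples = [
    {"input": "Claim: water damage, basement", "output": "Category: Property - Flood"},
    {"input": "Claim: rear-end collision",     "output": "Category: Auto - Liability"},
]

def build_few_shot_prompt(examples, new_input):
    """Assemble exclusive examples into a prompt so a pre-trained LLM
    can imitate the demonstrated input -> output mapping."""
    blocks = [f"{ex['input']}\n{ex['output']}" for ex in examples]
    # The new case ends with an open "Category:" label for the model to complete.
    blocks.append(f"{new_input}\nCategory:")
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(proprietary_examples, "Claim: hail damage to roof")
print(prompt)
```

In a real system this prompt would be sent to a hosted model; the competitive moat is not the prompt-building code but the exclusive examples it draws on.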

An organization can also distinguish itself in the AI world by creating or fine-tuning an AI model that is specific to a use case and exceeds the capabilities of the models that exist today. There are two main approaches:

  1. Fine-Tuning a Model – Take a foundational model and show it examples of inputs and outputs to optimize it for a specific use case. Such an organization would likely have data engineers and data scientists on staff, and its cloud spend would likely be in the tens or hundreds of thousands of dollars or more. 
  2. Training a Model – The largest investment in time and money, this involves building a model from a blank sheet of paper. It is most often done when a company is trying to build 1) a better model in terms of performance, accuracy, size, or capability, or 2) a model for a domain, such as medicine or law, that a generalized model cannot comprehend as effectively. Training models from scratch is happening less and less as more models are built and made publicly available. Such an organization would likely have many data engineers, ML engineers, and/or data scientists on staff, and its cloud spend would likely run into the millions of dollars, stemming from the immense one-time cost of training a new foundational model. 
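The fine-tuning step of "showing the model examples of inputs and outputs" usually begins with assembling a training dataset. Below is a minimal sketch assuming the common JSONL prompt/completion convention; the exact field names vary by provider and are an assumption here, as are the example records.

```python
import json

# Hypothetical proprietary examples, e.g. from a legal-document product.
examples = [
    {"prompt": "Summarize clause 4.2 of the lease", "completion": "Tenant pays utilities."},
    {"prompt": "Summarize clause 7.1 of the lease", "completion": "Landlord handles repairs."},
]

def to_jsonl(records):
    """Serialize each example as one JSON object per line, a typical
    input format for fine-tuning jobs."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
print(jsonl)
```

The resulting file would then be uploaded to a fine-tuning service or training pipeline; the cloud spend figures above come from running that training, not from preparing the data.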

Both aspects – data and AI models – are highly nuanced, and the best solutions often use both in tandem. The world of AI is evolving extremely rapidly, and understanding these two questions merely scratches the surface of what is involved, but it should prove an effective way to distinguish the companies that warrant further investigation from the myriad of AI companies operating today.