Monday, July 1, 2024

How the rise of large graphical models can give enterprises a crystal ball [Q&A]

A new AI technology is emerging alongside LLMs: large graphical models (LGMs). An LGM is a probabilistic model that uses a graph to represent the conditional dependence structure between a set of random variables.

Organizations can use LGMs to model the likelihood of different outcomes based on many internal and external variables.

We spoke to Devavrat Shah, CEO and co-founder of Ikigai Labs, to find out about the rise of LGMs and how they can complement LLMs.

BN: What are LGMs?

DS: LGMs are an emerging generative AI technology. LGMs enable organizations to model different variables, measure the relationships between those variables, and forecast certain business outcomes against those variables. Essentially, they serve as a sort of crystal ball that helps enterprises predict the future with high levels of confidence.

An LGM represents the conditional dependence structure between a set of random variables. It is probabilistic in nature: its goal is to describe and explain the entire joint distribution over all variables of interest.
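To make the idea concrete, here is a toy sketch (the variables, graph structure, and probabilities are invented for illustration, not Ikigai's implementation) of how a graphical model's dependence structure lets the full joint distribution be assembled from small conditional pieces:

```python
# Toy graphical model: weather -> foot_traffic -> coffee_sales.
# All variables and probabilities here are hypothetical.
p_weather = {"sunny": 0.7, "rainy": 0.3}
p_traffic = {  # P(foot_traffic | weather)
    "sunny": {"high": 0.8, "low": 0.2},
    "rainy": {"high": 0.3, "low": 0.7},
}
p_sales = {  # P(coffee_sales | foot_traffic)
    "high": {"strong": 0.9, "weak": 0.1},
    "low": {"strong": 0.2, "weak": 0.8},
}

def joint(weather: str, traffic: str, sales: str) -> float:
    """The graph's chain structure gives P(w, t, s) = P(w) * P(t|w) * P(s|t)."""
    return p_weather[weather] * p_traffic[weather][traffic] * p_sales[traffic][sales]

print(joint("sunny", "high", "strong"))  # 0.7 * 0.8 * 0.9 = 0.504
```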

LGMs produce interactive graphs that visualize the complex relationships between different data points. With a focus on time-series data, LGMs are used to predict how those relationships may change in the future as the data changes. For example, an LGM could first tell a coffee shop how pastry sales over the past year impacted coffee sales, then show how pastry sales over the next year are likely to impact coffee sales. With that information, the owner could decide to focus more on pastries (if they’re shown to boost coffee sales) or less on them (if they’re shown not to impact coffee sales).

Of course, this is a very simple example involving just two variables. In real-world usage, LGMs make complex predictions based on dozens of variables.
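As a rough illustration of the two-variable coffee shop scenario, the sketch below uses invented numbers, with a plain least-squares fit standing in for a full graphical model; it estimates how pastry sales relate to coffee sales, then projects coffee sales for planned pastry volumes:

```python
import numpy as np

# Hypothetical daily sales for one week: pastries sold vs. coffees sold.
pastries = np.array([12, 18, 9, 25, 30, 22, 15])
coffees = np.array([40, 52, 33, 70, 81, 60, 47])

# Estimate the linear dependence of coffee sales on pastry sales.
slope, intercept = np.polyfit(pastries, coffees, deg=1)
print(f"each pastry sold is associated with ~{slope:.1f} extra coffees")

# Project coffee sales for planned pastry volumes using the fitted relationship.
planned_pastries = np.array([20, 28, 35])
print((slope * planned_pastries + intercept).round(1))
```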

BN: What sort of data do LGMs train on?

DS: LGMs were built specifically to model tabular data, the kind found in spreadsheets, tables, and databases. They are particularly useful for analyzing time-series data, that is, a sequence of observations showing how a variable changes over time, such as a coffee shop’s hourly coffee sales. By looking at time-series data through the novel lens of tabular data, LGMs are able to forecast critical business trends, such as revenue, inventory levels, and supply chain performance. These insights help enterprises make better decisions.
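For a sense of what time-series data held as a table looks like in practice, here is a small hypothetical example in pandas (the data and the naive rolling-average baseline are illustrative only, not an LGM):

```python
import pandas as pd

# Hypothetical hourly coffee shop sales held as a table: one timestamped
# row per hour, one column per variable of interest.
sales = pd.DataFrame(
    {
        "coffees_sold": [14, 22, 31, 27, 19, 12],
        "pastries_sold": [6, 11, 15, 13, 8, 4],
    },
    index=pd.date_range("2024-07-01 08:00", periods=6, freq="h"),
)

# A naive baseline forecast for the next hour: the trailing 3-hour average.
print(sales.rolling(3).mean().iloc[-1])
```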

BN: How do LGMs differ from large language models (LLMs)?

DS: They do very different things. LGMs are great at providing probabilistic forecasts based on tabular, time-series data, while LLMs are typically used to provide textual insights (hence the ‘language’ in large language model). The output is distinct, too: LGMs give you interactive graphs, while LLMs produce text.

LLMs are incredibly powerful across all sorts of applications (e.g., content creation, language translation, chatbots), but they’re simply not built to analyze and model tabular data.

That’s because there’s a structural difference between the two kinds of data. At their core, text documents are linear. LLMs employ very high-dimensional encoding and a multi-resolution structure to discern the relationships across different parts of a document. Tabular, time-series data, on the other hand, is multi-dimensional: there is no single left-to-right or top-to-bottom direction in which to read it. Think of that table of hourly coffee sales data. You could read it chronologically, reverse-chronologically, in order of largest dollar amount to smallest, and so on, as the sketch below shows. LGMs are engineered for exactly this: they go beyond text’s linear structure and capture relationships across many dimensions. For that reason, LGMs are much more suitable for modeling tabular data.
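The absence of a single reading order is easy to see with a small hypothetical table:

```python
import pandas as pd

# The same kind of hypothetical hourly sales table as before.
sales = pd.DataFrame(
    {"coffees_sold": [14, 22, 31, 27], "pastries_sold": [6, 11, 15, 13]},
    index=pd.date_range("2024-07-01 08:00", periods=4, freq="h"),
)

# Unlike linear text, the table supports many equally valid traversals:
chronological = sales.sort_index()                    # forward in time
reverse = sales.sort_index(ascending=False)           # backward in time
by_volume = sales.sort_values("coffees_sold", ascending=False)  # biggest hours first
one_variable = sales["pastries_sold"]                 # one column across time
one_moment = sales.iloc[2]                            # every variable at one moment
print(by_volume)
```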

BN: How do LGMs provide business value? What sort of applications do they support?

DS: LGMs enable organizations to predict outcomes in all sorts of different scenarios and contexts. For example, enterprises can learn things like:

  • How will this year’s sales tax change impact our sales?
  • How will growing holiday-season demand slow down our supply chain?
  • How will the growing number of wildfires in the state increase insurance rates?

With these insights, enterprises can make more informed business decisions. And better decisions mean better outcomes: greater revenue, increased profit, better employee retention, shorter sales-conversion cycles, and so on. LGMs can impact both very broad and very niche outcomes. Ultimately, anything that can be measured can be improved with the technology.

Looking at more specific applications, LGMs can be leveraged for their predictive capabilities in a variety of different industries. Some of the big ones include:

  • Supply chain (labor planning, sales and operations planning)
  • Retail (demand forecasting, new product launch)
  • Insurance (auditing, ratemaking)
  • Financial services (compliance, KYC)
  • Banking (customer entity matching, transaction reconciliation)
  • Manufacturing (predictive maintenance, quality assurance)

BN: LLMs are known to consume a lot of computing resources. Are LGMs any more resource efficient?

DS: Yes. Over the past decade, distributed, iterative (also called message-passing) algorithms have become the architecture of choice for computing at scale. To implement them quickly, widely, and cost-effectively for LGMs, Ikigai has developed an innovative computational architecture built on PubSub infrastructure. This architecture supports learning and inference at scale for LGMs. As a result, the technology has become affordable for organizations both large and small.
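Ikigai's actual architecture is proprietary, but the general pattern described here, iterative algorithms whose nodes exchange messages over publish-subscribe infrastructure, can be sketched in miniature. Everything below, including the toy in-process bus and the averaging update, is hypothetical:

```python
from collections import defaultdict

# A toy publish-subscribe bus; a real deployment would use a distributed
# broker, but the pattern is the same: publish to a topic, fan out to subscribers.
class Bus:
    def __init__(self):
        self.topics = defaultdict(list)

    def subscribe(self, topic, handler):
        self.topics[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.topics[topic]:
            handler(message)

# Two hypothetical nodes iteratively reconcile their estimates by exchanging
# messages over the bus -- the distributed, iterative pattern described above.
bus = Bus()
estimates = {"a": 10.0, "b": 20.0}

def make_handler(name):
    def on_message(value):
        estimates[name] = 0.5 * (estimates[name] + value)  # blend in the message
    return on_message

bus.subscribe("to_a", make_handler("a"))
bus.subscribe("to_b", make_handler("b"))

for _ in range(5):  # a few rounds of message passing
    bus.publish("to_a", estimates["b"])
    bus.publish("to_b", estimates["a"])

print(estimates)  # the two estimates converge toward agreement
```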

Photo Credit: Sergey Nivens / Shutterstock
