How to Think About Gen AI Use Cases
Getting the foundations right.
Background
Generative Artificial Intelligence, or Gen AI for short, is all the rage right now. Every business leader and consulting firm is rushing to implement use cases around it and extract some quick value. Many organisations converge on the same first few Gen AI use cases: interactive customer chatbots, pursued as a means to extract efficiencies from customer service activities. They think that's where the value opportunity is: replacing customer service call centres with AI. I argue otherwise.
And so I’m dedicating my 40th weekly article to deconstructing the thought process for identifying valuable use cases with which to pilot Gen AI.
(I write a weekly series of articles where I call out bad thinking and bad practices in data analytics / data science which you can find here.)
Learnings from Traditional AI
Before the recent public advent of Gen AI, we had predictive AI, now more commonly labelled traditional AI. Traditional AI has had over 30 years to mature across various industries and domains, and many of those learnings can be applied to the new Gen AI wave.
The precursor to traditional AI was statistical predictive modelling. Back then, many corporate leaders were unconvinced of its value: “How can a desk jockey use data to improve my business when they are not out in the field? How can data replace superior human intelligence?” Despite the many pushbacks, predictive modelling found its early footing in Marketing Analytics (e.g. creating more effective campaign leads) and Customer Science (e.g. understanding customer segmentation and preferences). There was clear convergence towards using early AI in these practices across many industries.
Why is that? While it may seem counterintuitive, these pilot use cases in the customer-facing domain were high-yield, low-risk. High-yield because it was obvious that there was a lot of wastage in marketing activities. Low-risk because you still had the man-in-the-middle who could check the ‘sense’ of the AI-generated output list before deciding whether to release it. What’s the worst that could happen? You make an irrelevant offer. So what.
Only after sufficient learnings and implementation experience did the application of traditional AI move ‘inwards’ toward organisational risk management, operations decision automation and financial optimisation. Hence we see this shift from outward to inward across industries.
Lift & Shift for Gen AI?
It is tempting to apply the same thought process of outward-to-inward for the case of Gen AI. However, that would be a mistake. Rather, for Gen AI, it should be an inward-to-outward trajectory. Why? It’s based on the simple notion of high-yield, low-risk. Piloting Gen AI on employee-facing use cases provides the best risk-adjusted value extraction to learn from.
Unlike traditional AI, which was developed to reduce uncertainty, Gen AI was developed to solve a different class of problems: reducing the time wasted seeking and searching for information, increasing the speed and accuracy of information summarisation, and reducing the cost and effort of creating non-unique content through mimicry. These classes of problems intersect nicely with the domain of employee productivity, so there’s a high-yield opportunity there. At the same time, piloting Gen AI use cases in this area carries lower risk because of the employee-in-the-middle (employees have sufficient working ‘sense’ to evaluate the output of Gen AI). So it’s a safe space for Gen AI solutions to evolve and mature. Only after sufficient learnings should we then use it for customer-facing solutions.
Getting the Foundations Right
Another area where we can learn from the implementation of traditional AI is infrastructure. The foundations of traditional AI are the data marts, which evolved into the data warehouse, the data lake and their various hybrids. Understanding how to structure, index, and run efficient computations on data translates into sustainability, scalability, and re-usability for traditional AI.
The analogous foundation for Gen AI is the large language model (LLM for short). It represents the same “single source of truth” that the data warehouse represents for traditional AI. We are no longer constructing the data foundation ourselves, but simply leveraging the foundational LLMs. Even though LLM providers will compete to introduce new features, at the foundational level the LLMs will converge towards the same capabilities. Similar training data. Similar algorithmic techniques. Similar multi-modality: multi-modal in and multi-modal out (meaning that all competitive LLMs will be able to ingest text, images, audio and video, and output in the same variety of formats). Other than for researchers and hobbyists, the differences between LLMs won’t really matter much to the business community.
But what are the analogous operating principles for Gen AI to achieve the same sustainability, scalability and re-usability outcomes as traditional AI? I don’t yet have the answers or strong opinions, and working towards them could be an interesting exercise (perhaps I’ll write about this at a later stage), but observationally, the operating principles seem to centre around:
- Efficient contextualisation — e.g. supplementary data ingestion with retrieval augmented generation (RAG), expanded context windows, and AI-to-AI interactions.
- Real-time on-demand — e.g. lightning fast, load-efficient computation.
- Everywhere easy availability — e.g. hosting on edge devices and integrated user interfaces.
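To make the first of these principles concrete, here is a minimal sketch of the retrieval step behind RAG: before prompting the LLM, the system fetches the most relevant internal document and injects it as context, so the model answers from company knowledge rather than memory alone. The document snippets, the word-overlap scoring, and the prompt format are all illustrative stand-ins of my own; real systems use embedding models and vector stores.

```python
# Toy RAG illustration: retrieve the most relevant internal document,
# then prepend it to the user's question as grounding context.
from collections import Counter

# Hypothetical internal knowledge base (stand-in snippets).
documents = {
    "leave_policy": "Employees accrue 1.5 days of annual leave per month.",
    "expense_policy": "Claims above $500 require manager approval.",
}

def score(query: str, text: str) -> int:
    """Crude relevance score: number of words shared between query and text."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str) -> str:
    """Return the document snippet most relevant to the query."""
    return max(documents.values(), key=lambda d: score(query, d))

def build_prompt(query: str) -> str:
    """Ground the LLM's answer in the retrieved internal context."""
    context = retrieve(query)
    return (f"Context: {context}\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

print(build_prompt("How many days of annual leave do I get?"))
```

In a production setting, the prompt built here would be sent to an LLM; the point of the sketch is only that contextualisation happens before generation, which is why retrieval efficiency sits at the foundation layer.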
Landing on these operating principles will be important if we are to ‘roadmap’ a set of scaffolded learnings for successful Gen AI leverage, addressing them through incremental pilot use cases, starting with the employee-facing route.
Conclusion
The Alexander Pope line “fools rush in where angels fear to tread” may be apt for businesses looking to quickly implement Gen AI for competitive advantage. A recent article in a Singapore newspaper described how enterprises had underestimated the technical requirements and overestimated the business impact of Gen AI use cases. This only underscores the need to approach use-case piloting as a scaffolded learning exercise. By taking an inward-to-outward prioritisation of use cases, we sufficiently manage the risk while extracting measurable near-term value. With this progressive learning approach, we will then begin to truly appreciate the enormous potential of this revolutionary tool.