The Fast and Curious blog series you know and love is back! This time, we are bringing you all things AI. Watch the 3-minute video below to learn how we break down the enterprise barriers to GenAI adoption (or you can read the full transcript that follows).
Below is a transcript of the video:
Welcome to Season 2 of Fast and Curious – the AI Edition. Everyone, everywhere is talking about Generative AI and Large Language Models. That said, large enterprises have not widely adopted Generative AI, and for some very legitimate reasons.
In this series, we’re going to talk about three of the big enterprise barriers to GenAI adoption and, wait for it, how we’ve solved them. So, let’s get started.
Number one on our list of GenAI blockers is: Intellectual Property Restrictions.
The basic problem is this: how can I know whether the text or responses we’re getting back from an AI system are okay for us to use? That is, how can I be assured that the AI is not drawing on source material with a copyright or IP restriction that might make me liable? The flip side of that is: how can I be guaranteed that whatever I send to an AI model won’t be absorbed by the model and surfaced to someone at another company?
The press about employees inadvertently doing just that has been pretty scathing, so everyone and every company will want to avoid that.
That is the number one issue we have seen get talked about. While it’s the biggest issue, it’s not the only one.
Number two on the list of concerns is: consistency.
Experts describe GenAI systems as stochastic models, meaning there is some randomness in the results they generate. Sometimes AI can even generate false answers. The kinder euphemism is to say that the models hallucinate, but a wrong answer is still a wrong answer. If you are an expert in the field, you may be able to tell the difference. The appeal of these systems, though, is that anyone can use them without being an expert, so errors going unnoticed are a real risk.
Last but not least, number three is: cost or availability of the system.
GenAI models generally run on GPUs rather than CPUs, and GPUs are in short supply, which makes them very expensive. This is the reason NVIDIA became a trillion-dollar company last year. Even big players like AWS and Microsoft Azure have a scarce supply of GPUs, so you can imagine how in demand they are.
To recap, issue one is IP Protection, issue two is Consistency and fighting hallucinations, and issue three is Cost.
If you are an enterprise CIO, CISO, or Chief Data Officer trying to figure out how to provide your employees with the GenAI your users are clamoring for, and trying to overcome these barriers, watch out for the next episode of Fast and Curious, where we cover the solutions.
Can’t get enough of these Fast and Curious videos? Well, we want to hear from you! What topics should we cover next? Drop a note in our inbox to let us know: email@example.com.