Buzz in Town
AI, or artificial intelligence, is the sexiest term out there. Celebrities and news reporters drop it into every conversation. Some think AI is going to solve humanity's problems; others think it will destroy the world.
All this talk does contain some element of truth. But in real terms, AI today is a technology in search of viable business applications.
The key term here is ‘viable’.
Does the technology provide ways to improve current methods with sufficient accuracy and contribute to higher profits? The answer is no in a large number of scenarios.
Profits?
This concern about viability was reflected in the stock price losses of Big Tech companies that have invested heavily in AI: Google/Alphabet, Meta, Microsoft and Amazon.
The development of the technology, and its proliferation across phones and social media, has not translated into larger profits for these companies.
At least not yet.
There are some opportunities in ad targeting, converting ad viewers into customers, smoother office applications, and perhaps intelligent insights that could contribute to better supply chains.
But do these opportunities yield profits that are far superior to existing systems?
We haven’t seen it yet.
ChatGPT/OpenAI - A Case Study
Macro numbers from a September 2024 report suggest that ChatGPT has 200 million weekly active users, and OpenAI claims 100 million paid customers. We don't know the churn rates, overall revenue, profits and so on. But the fact that the company raises funds so often indicates that costs are exceeding earnings.
As an individual user, I have not seen a major difference between the paid and free versions for most daily tasks. I had a subscription for four months but terminated it, as I could not see much value beyond recreational tinkering.
ChatGPT's visual and audio capabilities are fascinating, but they don't have many use cases for an individual user. However, I believe a large number of security- and education-related products could be built on these capabilities.
But eventually it will boil down to costs, because the compute power required to run LLMs still costs more than a comparable solution involving a human.
For example, an AI-enabled security camera might be a cool thought, but running it would be costlier than hiring a security guard. An AI-enabled tutor might be a great solution, but it can only handle repetitive tasks and personalise some parts of the learning journey. It is not going to be cheap if you expect it to teach, calibrate and engage a student like a real human teacher.
The cost factor is being addressed, as OpenAI is working on cheaper models that can achieve comparable performance. But I have yet to see the larger shift in the search market that many predicted would be the death of Google.
However, ChatGPT's UI resembles the Google homepage more with every passing day. Initially, people didn't know what to do with this technology. Now, the homepage educates the user about the tasks that can be done with the tool.
Unlike Google, ChatGPT is supposed to give the user a single accurate and satisfactory answer. This expectation hits a roadblock for two reasons.
All users don’t know how to frame a question or problem statement in a way that ChatGPT can solve effectively.
In the absence of the several competing links and answers that Google throws up, the user finds it hard to fully trust the answer without citations, hyperlinks and other markers of trust. Is there a better answer than this one? Could there be mistakes here? These questions loom over every user.
A large shift in user behaviour is needed for the economic success of products like ChatGPT. With Apple Intelligence mainstreaming LLM technology on consumer devices, organisations like OpenAI might end up developing and testing solutions on tools like ChatGPT and then become service providers to big tech companies like Apple or Microsoft.
What Do We Do Now?
What’s the appropriate use of AI or machine learning models in the real world?
Well, my litmus test for a suitable AI solution involves these factors (a small code sketch follows the list):
Does the problem lack a universal law that already solves it?
Can large dumps of data be collated to create a situational solution for the problem?
Is there potential for the solution to improve as more data enters a system built on the current data?
Can the solution absorb a reasonable amount of error?
Will the solution be safer, reasonably accurate and cheaper than the current solution in the market?
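For readers who think in code, here is a minimal sketch of the checklist as a boolean filter. This is my illustration, not an established framework; the `AICandidate` class and its field names are hypothetical labels for the five questions above.

```python
from dataclasses import dataclass


@dataclass
class AICandidate:
    """A proposed AI solution, scored against the five litmus-test questions."""
    no_universal_law: bool    # no fixed rule or formula already solves the problem
    data_available: bool      # large dumps of data can be collated
    improves_with_data: bool  # the solution gets better as more data arrives
    error_tolerant: bool      # a reasonable amount of error is absorbable
    beats_status_quo: bool    # safer, accurate enough and cheaper than today's solution

    def is_viable(self) -> bool:
        # Every factor must hold; a single failed test sinks the business case.
        return all([
            self.no_universal_law,
            self.data_available,
            self.improves_with_data,
            self.error_tolerant,
            self.beats_status_quo,
        ])


# Customer care (discussed below) passes all five tests.
customer_care = AICandidate(True, True, True, True, True)
# Surgery fails on error tolerance and on beating the status quo.
surgery = AICandidate(True, True, True, False, False)

print(customer_care.is_viable())  # True
print(surgery.is_viable())        # False
```

Note that the test is an AND, not a weighted score: as the two examples below show, a single failed question is enough to disqualify a domain.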
Anyone who has built a tech product will understand the context of these questions. For others, I will provide two examples.
Customer Care
Customer care has no universal laws. However, you can train a human, or a computer, to solve it with past data. As new scenarios arise, the customer care person, or a computer with similar learning capacity, can improve.
For simple apps like Swiggy or Blinkit, there is no safety issue (beyond food quality and electronic device risks, which are outside the scope of this discussion). The solution can be cheaper and reasonably accurate. And if the bot errs, you can always connect to a human.
So customer care can be solved effectively with AI, and it can contribute to profits by lowering customer care expenses.
Health Care
There is a case for AI applications in helping diagnose patients, but prescription and surgery are dangerous domains. Even a slight error in the process can have massive consequences.
Of course, it is theoretically possible to train a robot to perform surgery by feeding it tonnes of information and data. But the costs involved and the risk factor are way too high in this scenario. Not many people would choose robotic surgery just because it costs less.
In the best-case scenario, the technology can assist a human surgeon and reduce the errors that could creep in during surgery.
Impending Doom?
As the complexity of tasks increases, the data, training and compute power required to run an AI/ML model or robot rise. In such cases, it is far cheaper and safer to hire a human. This is the reality in the present and near future, according to a large number of experts (and my own common sense).
However, until we achieve artificial general intelligence (a system that lets machines learn anything from their surroundings) at scale, and drastically reduce the cost of computation, we will not be anywhere close to the AI doom predicted by many people.
At least from a product management, data analysis, design and coding point of view, I believe AI will only be an assistant that makes humans more productive. The more mundane parts of these jobs, however, are ripe for automation, freeing people up for more innovative work.
This rule of thumb is likely to hold for all creative jobs that involve human interaction, unpredictability and a high risk profile. Artificial intelligence is unlikely to take over the entire job role in such scenarios.
Chatbots and assistants offered to millions of people on WhatsApp or Instagram are unlikely to serve any major need beyond generating that cute meme you want to send to your crush or a friend. But this could be a playground where data is collected to build or fine-tune monetisable products in the future.
Predictions of doom are not going to materialise any time soon. However, we can expect the culling of many low-skill, uncreative and repetitive jobs.