If you're an entrepreneur, a start-up, or a business looking to develop your MVP (Minimum Viable Product), bringing in Generative AI may seem like an attractive idea. Right now it is a hot topic in the tech world, and many are trying to incorporate it into their products (even if it is not needed).
Specifically, if you're thinking about incorporating a Large Language Model (LLM) or an image Generative AI like Midjourney and wondering about its cost implications, this blog post is meant for you. I'll try to provide a clear picture of the cost of using such AI models in your MVP as a percentage of the total cost of the MVP.
How Much Does LLM Cost in Your MVP?
Let's delve straight into the details and break it down. The cost of the model can vary depending on the intricacies of your MVP. If the AI is simply a feature and not the main product of your MVP, the overall cost can be significantly reduced. For example, if you are building a marketing tool that uses an LLM to generate content, the cost of the model will be based on the usage of the tool and not on the number of users. If you are building a social network that uses an LLM to welcome new users, the cost of the model will be based on the number of users.
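To make the difference between the two scaling behaviors concrete, here is a minimal sketch. All the numbers (token counts, the $0.002 per 1k tokens rate, the monthly volumes) are hypothetical placeholders, not real vendor prices:

```python
# Rough sketch of the two pricing situations described above.
# All rates and volumes are hypothetical placeholders, not vendor figures.

def usage_based_cost(generations_per_month: int, tokens_per_generation: int,
                     price_per_1k_tokens: float) -> float:
    """Cost driven by how much content is generated (e.g. a marketing tool)."""
    return generations_per_month * tokens_per_generation / 1000 * price_per_1k_tokens

def per_user_cost(new_users_per_month: int, tokens_per_welcome: int,
                  price_per_1k_tokens: float) -> float:
    """Cost driven by user growth (e.g. welcoming new sign-ups)."""
    return new_users_per_month * tokens_per_welcome / 1000 * price_per_1k_tokens

# 2,000 generations of ~800 tokens vs 10,000 sign-ups of ~200 tokens each,
# both at a placeholder rate of $0.002 per 1k tokens.
print(usage_based_cost(2_000, 800, 0.002))
print(per_user_cost(10_000, 200, 0.002))
```

The point of the sketch is that the first cost is capped by how heavily the tool is used, while the second grows with every sign-up whether or not the feature delivers value.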
Remember, an MVP is all about minimum features that aptly solve a customer's problems. If incorporating the model doesn't serve a core purpose of solving a problem, its addition may not be viable.
Strategies to Offset the Cost
If the cost of incorporating a model poses a financial challenge, there are certain strategies you can employ to offset the cost. However, it's important to note that these strategies may not be applicable to all cases. You'll have to analyze your MVP and the model to determine if they're feasible.
Additionally, you should always first approach the problem from a business perspective. A general rule of thumb is to use external services as plugins and not as core features of your product. This way, you can easily replace them if they become too expensive or are no longer needed.
Running the Model at the Client End: One solution could be to use a free local version of the AI model and run it at the client end. This way, you can save considerable costs related to cloud hosting and management. However, your users will need a powerful computer to run the model. If you are building a mobile app, this may not be a viable solution; if your target audience is tech-savvy, it may be a good fit.
Self-Hosted Model: Another approach is to use a self-hosted version of the model when you're certain that you will utilize it fully and speed is not a concern. Self-hosting eliminates recurring subscription costs while still leveraging the power of AI. You should not expect your self-hosted model to be as fast or as good as the cloud version from established vendors, as they have considerable resources to optimize their models and benefit from economies of scale.
Utilizing a Cheaper Model: If budget constraints are a real concern, consider using a cheaper model that performs similarly. Vendors usually offer multiple models with varying capabilities and costs, so you can choose the one that best suits your needs. However, be careful not to compromise on the quality of the model.
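A simple way to frame this choice is to pick the cheapest model that still clears a minimum quality bar. The catalog below is entirely hypothetical: the model names, prices, and quality scores are placeholders, not real vendor figures:

```python
# Hypothetical model catalog; names, prices, and quality scores are
# placeholders for illustration only, not real vendor data.
MODELS = [
    {"name": "large",  "price_per_1k": 0.030,  "quality": 0.95},
    {"name": "medium", "price_per_1k": 0.002,  "quality": 0.85},
    {"name": "small",  "price_per_1k": 0.0005, "quality": 0.70},
]

def cheapest_acceptable(min_quality: float) -> str:
    """Pick the cheapest model that still meets the quality bar."""
    candidates = [m for m in MODELS if m["quality"] >= min_quality]
    if not candidates:
        raise ValueError("no model meets the quality requirement")
    return min(candidates, key=lambda m: m["price_per_1k"])["name"]

print(cheapest_acceptable(0.80))  # "medium" clears the bar at a fraction of the price
```

In practice the "quality" score would come from evaluating each model on your own task, not from a table like this.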
Improving Your Prompts: By optimizing your prompts, you'll be able to minimize the number of tokens, thereby reducing costs. Even if the model does not have a per-token cost, the number of tokens will still affect its speed. This is especially true for LLMs.
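As a back-of-the-envelope check, you can estimate token counts with the common rough heuristic of about four characters per token for English text. This is only an approximation; for exact counts you would use the vendor's own tokenizer:

```python
# Rough token estimate using the ~4 characters-per-token heuristic for
# English text. This is an approximation; a vendor tokenizer gives exact counts.

def estimate_tokens(prompt: str) -> int:
    return max(1, len(prompt) // 4)

verbose = ("You are a helpful assistant. Please could you kindly summarize "
           "the following customer review in a short paragraph for me: ...")
concise = "Summarize this customer review in one paragraph: ..."

print(estimate_tokens(verbose), estimate_tokens(concise))
```

Running a check like this over your prompt templates makes it easy to spot filler phrasing that adds tokens without adding instructions.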
Sharing Results: When possible, share results between users like a caching system. This can help you reduce the number of calls to the model and thus reduce the cost. Security and privacy should be a concern when sharing results between users.
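A minimal sketch of this idea is an in-memory cache keyed by a hash of the prompt, so identical requests reuse one model call. In production you would use a shared store and, as noted above, think carefully about privacy before sharing results between users:

```python
import hashlib

# Minimal in-memory cache keyed by a hash of the prompt, so identical
# requests reuse a single model call. A real deployment would use a shared
# store and consider privacy before sharing results between users.
_cache: dict[str, str] = {}

def cached_generate(prompt: str, generate) -> str:
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = generate(prompt)  # only call the model on a cache miss
    return _cache[key]

calls = 0
def fake_model(prompt: str) -> str:  # stand-in for a paid model call
    global calls
    calls += 1
    return f"answer to: {prompt}"

cached_generate("What is an MVP?", fake_model)
cached_generate("What is an MVP?", fake_model)  # served from the cache
print(calls)  # prints 1: the model ran only once
```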
Ensuring LLM is the Right Tool: Last but not least, ensure that AI is the right tool for your MVP. It may sound simple, but unnecessary add-ons can inflate your budget. We may reach for LLMs for tasks that can be done with a simple rule-based system. For example, if you are building a chatbot that answers simple questions, you may not need an LLM; a simple rule-based system may be enough. But if it is important to have a natural conversation, then an LLM is the way to go.
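To illustrate how far a rule-based system can go for simple questions, here is a toy FAQ bot using keyword matching. The questions and answers are made up for the example; the point is that for a fixed set of queries there is no per-call model cost at all:

```python
# Toy rule-based FAQ bot: for a fixed set of simple questions, keyword
# matching may be all the MVP needs, with zero per-call model cost.
# The questions and answers are invented for illustration.
RULES = {
    "price": "Our basic plan starts at $10/month.",
    "refund": "You can request a refund within 30 days.",
    "hours": "Support is available 9am-5pm, Monday to Friday.",
}

def answer(question: str) -> str:
    q = question.lower()
    for keyword, reply in RULES.items():
        if keyword in q:
            return reply
    return "Sorry, I don't know. A human (or an LLM) can take over here."

print(answer("What is your price?"))
print(answer("Tell me a joke"))  # falls through to the fallback reply
```

The fallback branch is where an LLM could be plugged in later, which keeps it a replaceable plugin rather than a core feature, as argued earlier.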
The Challenge and Novelty of Designing an MVP Using LLM
The journey of designing an MVP using an LLM can be challenging, but it also unravels a world of opportunities. It's a fairly new concept in the market, and not many have mastered its application. However, done strategically, it can help your MVP stand out in the crowd.
In the end, the cost is just one side of the coin. Strategizing your MVP around problem-solving and customer satisfaction is critical. And when these user-centric goals align with the capabilities of tools like LLM, the MVP not only becomes cost-effective but also more impactful.
My years of experience in the tech industry have taught me that the best way to approach a problem is to first understand it from a business perspective. This way, you can make informed decisions and avoid unnecessary costs. If you're looking for a partner to help you design your MVP, feel free to reach out; I'd be happy to help.
This article was generated with the assistance of AI and refined using proofing tools. While AI technologies were used, the content and ideas expressed in this article are the result of human curation and authorship.
You may read more about my ideas on the subject in my blog post: Importance is All You Need