Discord is shutting down its AI chatbot: Lessons for founders and builders
Navigating the AI hype
Hello there 👋
Welcome to today's edition of the Breve - the newsletter that provides you with product and growth insights. For speaking engagements, please email me at durotoluwaolumide@gmail.com
Today, I will be talking about the news that Discord will shut down Clyde, its AI chatbot. According to the announcement, users will no longer be able to invoke Clyde in DMs, Group DMs or server chats from December 1, 2023.
Clyde launched as an experimental feature in March 2023, built on OpenAI's API, and was made available to a limited number of Discord servers for free.
Users could invoke Clyde, on servers where it was enabled, by mentioning “@Clyde”. Just like conversing with ChatGPT, users could chat with Clyde by sending it questions, and Clyde would generate a response. Users could ask Clyde things like “Recommend a movie for late night” or “Crack a joke to get everyone excited.”
The difference here is that because Clyde operates in Discord’s chat environment, it has knowledge of the server, its members, and recent chat history. Clyde acts like a real Discord member and can include GIFs and emojis in its responses.
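Discord never published how Clyde was wired up, but mechanically a Clyde-like bot is simple to sketch. Here is a minimal, hypothetical version using the discord.py library and OpenAI's Python client; the system prompt, model choice, and token placeholder are my assumptions, not Discord's actual setup:

```python
# Minimal sketch of a Clyde-like bot (hypothetical; not Discord's actual code).
# Requires: pip install discord.py openai, a bot token, and OPENAI_API_KEY set.
import discord
from openai import AsyncOpenAI

openai_client = AsyncOpenAI()   # reads OPENAI_API_KEY from the environment
intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
bot = discord.Client(intents=intents)

@bot.event
async def on_message(message: discord.Message):
    # Respond only when the bot is @-mentioned, and never to itself
    if bot.user not in message.mentions or message.author == bot.user:
        return
    completion = await openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a playful Discord member."},
            {"role": "user", "content": message.clean_content},
        ],
    )
    await message.channel.send(completion.choices[0].message.content)

bot.run("YOUR_DISCORD_BOT_TOKEN")  # placeholder token
```

The real Clyde also fed server context and recent chat history into its prompts, and that extra context is exactly what inflates token counts, which matters for the costs we get to below.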
It is not clear why Discord is shutting down Clyde. Discord could have learnt enough during this beta testing and decided that it did not want to bake an AI chatbot into its product. It may also want to re-introduce it as a paid Nitro-only feature in the future. I think the chance of re-introducing it is slim because the tone of the announcement message does not give any inkling of that.
The question that needs to be answered is: does the AI chatbot improve user experience significantly enough to warrant the investment? If this feature were valuable to Discord members, why would it be deactivated? Reading through comments on Reddit suggests otherwise. For some Discord members, Clyde was a fun tool to use, as it could be manipulated to say anything.
Aside from being funny, Clyde's replies could also be so horrible that if it were a real person, it would have been banned multiple times. It had a reputation for randomly attacking and mocking users. This is definitely not what OpenAI or Discord envisioned. Could this be the reason Discord is deactivating it?
Let’s talk about the costs of running Clyde. Is it sustainable to keep running the experiment without charging for it? Of course, Discord might want to stick it out and ride the AI wave, but there is only so long it can hold out before the costs bite.
Using the OpenAI API is expensive, and whoever builds on top of it needs a direct monetization model. The OpenAI API follows a pay-as-you-go model: every user interaction (both the prompt and the model's output) incurs a cost.
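To make the metering concrete, here is a small sketch of what a single Clyde-style interaction costs, using OpenAI's tiktoken tokenizer and the gpt-3.5-turbo prices quoted just below (the sample response text is invented for illustration):

```python
# Sketch: what one Clyde-style interaction costs under pay-as-you-go pricing.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt = "Recommend a movie for late night"
response = "How about Inception? Perfect for a late-night mind-bender."

input_tokens = len(enc.encode(prompt))
output_tokens = len(enc.encode(response))
cost = input_tokens / 1000 * 0.0010 + output_tokens / 1000 * 0.0020
print(f"{input_tokens} in + {output_tokens} out -> ${cost:.6f}")
# A fraction of a cent per message, but it is billed on every single message.
```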
Let me put it straight: building with the OpenAI API or other LLMs without a direct monetization model is not sustainable unless you have a lot of money to throw around. You can compare the costs of other LLMs here. I believe this is the biggest reason Discord could not continue running the Clyde experiment.
Let’s assume Discord used the GPT-3.5 Turbo API: it costs $0.0010 per 1,000 input tokens and $0.0020 per 1,000 output tokens (for the gpt-3.5-turbo-1106 model). A token is approximately 4 characters, or 0.75 words.
Discord’s Monthly Active Users (MAU) stood at 154 million as of January 2023. Let’s make some conservative assumptions for a quick back-of-envelope calculation (sketched in code after this list):
Clyde was actively used by 0.1% of its MAU;
Each user sends one 50-word prompt per day;
Clyde returns a 100-word response to each user per day.
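Here is that calculation in code, using only the assumptions above (the usage figures are illustrative guesses, not Discord's actual numbers):

```python
# Back-of-envelope Clyde API cost estimate. All usage figures are
# assumptions from the list above, not Discord's actual data.
MAU = 154_000_000          # Discord MAU, January 2023
ACTIVE_SHARE = 0.001       # assume 0.1% of MAU use Clyde daily
INPUT_WORDS = 50           # words per user prompt per day
OUTPUT_WORDS = 100         # words per Clyde response per day
WORDS_PER_TOKEN = 0.75     # ~4 characters / 0.75 words per token
PRICE_IN = 0.0010 / 1000   # $ per input token (gpt-3.5-turbo-1106)
PRICE_OUT = 0.0020 / 1000  # $ per output token
DAYS = 275                 # March 1 to November 30, 2023

users = MAU * ACTIVE_SHARE
daily_cost_per_user = (INPUT_WORDS / WORDS_PER_TOKEN) * PRICE_IN \
                    + (OUTPUT_WORDS / WORDS_PER_TOKEN) * PRICE_OUT
total = users * daily_cost_per_user * DAYS
print(f"{users:,.0f} users -> ${users * daily_cost_per_user:,.2f}/day, "
      f"${total:,.0f} over {DAYS} days")
# 154,000 users -> $51.33/day, $14,117 over 275 days
```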
With the above assumptions, the bill for running Clyde comes to roughly $14,000 between March and November 2023. That sounds modest, but it scales linearly with adoption: the same usage pattern across all 154 million MAU would top $14 million over the same nine months, a cost only a few companies can afford when there is no monetization tied to it. Even X (formerly Twitter) is deliberate about making Grok AI available only to X Premium+ users.
There is a chance that Clyde could return as a Nitro-only feature in the future. But that's only if Discord can validate that an AI chatbot experience would increase the likelihood of users paying for Nitro. Otherwise, the feature is not sustainable.
The big lesson here for founders and builders is that the hype around AI is not sufficient grounds to decide whether to embed large language models (LLMs) in your product or build an LLM-native product. If you have the resources to experiment with it, why not? But you should focus on solving validated customer pain points, not on the technology.
We are going to see more failed or abandoned AI projects in the coming months. This is understandable: a lot of companies are still honestly figuring out what to do with GenAI, and failure is part of the learning process. But many more are simply jumping on the hype, and there is a high chance of getting it wrong.
Not every app or platform needs to have a chatbot experience or embedded LLMs. And you don’t necessarily need GenAI in your product to woo investors or raise more money in your next funding rounds.
As I have written here and here, you need to be clear on how leveraging these AI models will not only improve your user experience but also positively impact revenue.
That’s all for today.
Kindly share this with your friends and colleagues. Have a great week ahead!
Olumide ✌️
Comments

Additionally, it’s not as expensive as you may think. Calculating these costs is tricky, and I sometimes get it wrong too. But at $0.0010 per 1,000 input tokens and $0.0020 per 1,000 output tokens, with one 50-word prompt and a 100-word response per user per day, and 0.1% of MAU (154,000 users) active over a period of nine months, it works out much cheaper than it first appears. It could still be quite expensive, though, when it’s not monetized or used as a complement to another income-generating service.
Good one. It could be relatively costly running on an API without a monetization model.
It could be a good strategy if the idea is to develop an MVP in preparation for something bigger. Maybe they are training their own model in-house.
An alternative costing model would be to set up your own compute infrastructure and run an open-source model.
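For completeness, here is a minimal sketch of that alternative using Hugging Face's transformers library (the model choice is illustrative, and you would also need a capable GPU):

```python
# Sketch: swap per-token API fees for fixed compute costs by self-hosting
# an open-source model. The model choice here is an illustrative assumption.
from transformers import pipeline

chat = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
out = chat("Recommend a movie for late night", max_new_tokens=100)
print(out[0]["generated_text"])
```

This trades a metered per-token bill for fixed hardware and ops costs, which typically only pays off at sufficient scale.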