@ StarBuilder
2024-01-29 23:37:10

First in, first out.
When ChatGPT was released in November 2022, it was the only chat interface offering this new way to interact with LLMs. Then, in March 2023, OpenAI updated it with GPT-4 and introduced the ChatGPT Plus plan. I immediately signed up for Plus, as it was leaps ahead of the competition and provided real value in its responses. Throughout last year, OpenAI added features such as plugins, the code interpreter, custom GPTs, vision, and the GPT Store. This attracted lots of new users, and the backend was not able to keep up with the load. The service became slow and unresponsive during peak hours, and the responses became less useful.
When I look at my usage, it always comes in bursts when I want to get something done. So I often hit rate limits and have to wait 3-5 hours before they reset. As a result, I started using it less and less over time. I am always the guy who is first in line for any new tech gadget, but I am also the first one to leave when the service hits its limits.
Don’t leave money on the table
As you know, the same GPT-4 model can be accessed via the API. If you compare costs, you would need to generate more than 120,000 words through the API to break even with the $20 subscription. See the picture below. 120,000 words is equivalent to a full book, which translates to using it at least 300+ times in a month. When I looked at my chat history, I was barely crossing 60k words, even while drafting several articles and doing data analysis. So technically, I could have used the GPT-4 API for the same work and saved money. A $20 all-you-can-eat buffet sounds good on paper until you can't eat past the second round.
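The break-even math can be sketched roughly. A minimal calculation, where the prices, tokens-per-word ratio, and context overhead are all my own assumptions (based on early-2024 GPT-4 list pricing), not official figures:

```python
# Back-of-envelope: how many output words per month before the $20 flat
# fee beats pay-per-use? All constants below are assumptions.

PRICE_IN = 0.03 / 1000   # $ per input token (assumed GPT-4 list price)
PRICE_OUT = 0.06 / 1000  # $ per output token (assumed)
TOKENS_PER_WORD = 1.33   # rough average for English text (assumption)
CONTEXT_RATIO = 2.0      # assumed input tokens (prompt + resent chat
                         # history) accompanying each output token

def break_even_words(monthly_fee: float) -> int:
    """Output words you must generate per month before the flat fee wins."""
    cost_per_output_token = PRICE_OUT + CONTEXT_RATIO * PRICE_IN
    output_tokens = monthly_fee / cost_per_output_token
    return round(output_tokens / TOKENS_PER_WORD)

print(break_even_words(20.0))  # roughly 125,000 words under these assumptions
```

Under these assumptions a $20 fee buys you on the order of 120k-125k generated words before the subscription becomes cheaper, which is where the "full book per month" figure comes from.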
Privacy concerns are huge
LLMs get better and better when you feed them good data, so the key differentiator between models is the datasets used to train them. This is why GPT-3.5 is given away for free: to collect, review, and train on the data users submit. Only on the Team plan can you opt out of having your data used for training. Elon Musk said it out loud: "Everyone is training on copyrighted data." Look at what happened with The New York Times. Even if you are careful, you may unknowingly copy/paste personal or company-private information, which can come back to haunt you. Check this article: https://www.bleepingcomputer.com/news/security/openai-rolls-out-imperfect-fix-for-chatgpt-data-leak-flaw/
We are kept in the dark all the time
OpenAI regularly updates ChatGPT to enhance its capabilities, such as improving factuality and mathematical reasoning. These updates are crucial to maintain the model's relevance and accuracy. However, the specifics of these updates are not always immediately clear to the public. While OpenAI provides release notes, some users find that changes in model behavior are more readily discussed in forums and blogs rather than through official channels.
Also, I simply do not trust OpenAI with my data. We all know about Sam's firing drama that happened in November 2023. Very few details emerged after that event, and no one knows what really happened. Additionally, Ilya Sutskever, who, according to Elon Musk, has a better moral compass, has been missing in action since December 14th.
Easy alternatives exist
GPT-4 is available as an API, as are Claude, Google Gemini Pro, and many open-source LLMs. That means you can simply set up a user interface that accesses these LLMs through their APIs and pay only for what you use. There are many open-source chat interfaces that let you do exactly that.
Here are a couple:
Big-AGI - https://big-agi.com/: You can use their hosted web version with your own API keys, download and run it locally without giving away your privacy, or deploy it on a cloud instance.
ChatbotUI - https://www.chatbotui.com/: Another really good chat interface that you can run locally, completely free and open source, built by star developer Mckay Wrigley (https://github.com/mckaywrigley).
Bing & Bard: Bing Copilot has access to the web and GPT-4, and Bard has access to Gemini Pro. Both provide lots of functionality for free when searching, browsing, and using AI to get better results or formats.
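Under the hood, the pay-per-use setup these chat UIs rely on boils down to sending a JSON request to the provider's chat-completions endpoint. A minimal sketch of that request body, where the payload shape follows OpenAI's chat-completions API but `build_request` is my own hypothetical helper (actually sending it requires your own API key):

```python
import json

# The endpoint a chat UI would POST to (OpenAI's chat-completions API).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4") -> str:
    """Return the JSON body for a single pay-per-use chat-completion call."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(payload)

body = build_request("Summarize this article in three bullet points.")
# To actually run it, POST `body` to API_URL with an
# "Authorization: Bearer <your API key>" header (e.g. via urllib.request
# or the official openai SDK); you are billed only for the tokens used.
```

The point is that there is no subscription anywhere in this flow: every request is metered by tokens, which is exactly what the open-source chat interfaces above wrap in a friendlier UI.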
More and more SaaS products have started to integrate AI features directly, so you don't have to go elsewhere to access AI. Copilots, sidekicks, and AI knowledge assistants are some of the features being rolled out. Wait till Apple's WWDC; they are going to have a ton of features integrated into iOS and Mac apps.
Also, depending on your use case, there are several focused AI solutions, such as Perplexity for search, Poe for AI agents, and GoCharlie.ai for blogs and social media posts. If you want to dig deeper, browse https://theresanaiforthat.com/most-saved/ to see 10,000+ other AI solutions.
Run your own AI.
I am a big proponent of open-source LLMs, and there are plenty. If you have an Apple silicon MacBook Pro, you can run small LLMs directly on it and experiment with what they can do. If you want something bigger, you can always build a GPU machine that can host larger models. You can check out this article I wrote earlier:
https://www.linkedin.com/pulse/get-your-own-ai-2024-arun-nedunchezian-ymrye/
What am I missing?
Nothing but FOMO. Influencers, AI experts, and other tech reviewers create big FOMO around ChatGPT features, promising everything from building websites to earning six figures to automating lots of tasks. While these demos are cool to look at, they are often not feasible to use on a daily basis. Besides, the whole point of AI is to free you from repetitive tasks, not to keep you chained to a chat interface all day.
What are your thoughts on ChatGPT and its alternatives? Have you tried any other open-source LLMs or integrated AI features into your own projects?
I'd love to hear your experiences and recommendations!