This morning, I went to log on to ChatGPT to help me brainstorm content ideas such as lists & hacks. I was greeted with a message that ChatGPT was "experiencing unprecedented demand." That's not an unfamiliar problem, but today's message suggested the outage might take longer than usual to resolve.
As I mentioned, I've run into this problem before, but today it made me pause and think about the implications of unreliability in a system that is supposed to revolutionize our work. Beyond the possibility of outages, ChatGPT's current structure (centralized and unregulated) poses many risks to anyone considering using it to change their workflow.
Individuals in the Web3 space have spent years making the case for decentralization — that no government or corporation should control individuals’ ownership or access to assets. And yet, using ChatGPT as a primitive second brain is outsourcing one of our most important assets — our ideas — to a centralized entity.
If you wanted to harness the AI power of ChatGPT as a second brain today, hopefully a subpar joke on the outage page will suffice instead.
I wanted to look into the history of OpenAI (the company that released ChatGPT) to see which business and research minds are behind it, and what their intentions are for the product. Interestingly, while the initiative to build AI that replicates human thinking started as a non-profit, OpenAI switched to a "capped-profit" structure in 2019. The idea was to allow investors to earn returns of up to a fixed multiple (reportedly 100x for first-round backers) of their original investment, but no more.
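To make the cap concrete, here is a minimal sketch of how a capped-return structure works, assuming the cap is expressed as a multiple of the original investment. The function name and terms are illustrative, not OpenAI's actual legal terms: any gains beyond the cap flow back to the nonprofit rather than to the investor.

```python
def capped_payout(investment, gross_multiple, cap_multiple):
    """Split an investment's gross value into the investor's capped payout
    and the excess that flows back to the nonprofit.

    Illustrative only -- these are not OpenAI's actual terms.
    """
    gross = investment * gross_multiple                    # what the stake is worth
    payout = investment * min(gross_multiple, cap_multiple)  # investor keeps at most the cap
    excess = gross - payout                                # overflow beyond the cap
    return payout, excess

# Example: a $1M stake that grows 150x under a 100x cap.
# The investor's payout stops at $100M; $50M overflows to the nonprofit.
payout, excess = capped_payout(1_000_000, 150, 100)
```

The point of the structure is visible in the arithmetic: below the cap, investors are paid like any equity holder; above it, further upside no longer accrues to them at all.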
This is a nice idea: provide an influx of capital without turning OpenAI into just another startup wholly focused on scaling and monetizing. Or is it? The precedent of walking back the nonprofit model is concerning. Using collective resources to build a product meant to benefit humanity, then moving the goalposts so that the product must benefit humanity and deliver returns to investors, should worry even the casual user.
If OpenAI has already shown that it's willing to go back on its word, how can users trust that it won't engage in practices such as clawback pricing (raising prices once users are dependent on a product) or restrict access in other ways, even if it promises at the moment that access will remain open?
The idea of restricting access is especially relevant because OpenAI is essentially the only player with an extremely powerful AI model combined with a user-friendly interface. Most companies have to drive competitors out of business before they can dominate a market, but OpenAI doesn't even have to do that; it's already a de facto monopoly.
Web3 builders are always on the lookout for the best new technology — so it makes sense that we would be enthusiastic about a revolutionary technology such as ChatGPT. Yet, it’s important to recognize the vulnerabilities of this product and be wary when building systems around it.
What do you think — do you already see red flags 🚩 within the idea of ChatGPT/relying on AI in general?
Thanks for reading! TokenTag is building an all-in-one Web3 community platform — check us out!