This is a company that has been around for 10 years, didn't launch a revenue-generating product until 2020, and still has never turned a profit. To put the absurdity of this timescale into perspective, Google was founded in 1998 and didn't reach a $1 trillion valuation until 2020.
OpenAI's valuation is the result of massive AI speculation and huge investment from major tech companies, including Microsoft and Nvidia. Microsoft alone has lost $3.1 billion on its 27% stake in OpenAI but is holding on in hopes that it eventually generates massive returns. OpenAI is also deeply intertwined with the increasingly problematic circular financing deals between the various major tech players, including Microsoft, Meta, Google, Nvidia, Intel, AMD, Amazon, Oracle, Anthropic, and CoreWeave.
This is peak AI bubble, and I highly doubt the bubble (and therefore the U.S. economy) is going to hold up until OpenAI goes public, but I'm curious as to people's thoughts on this.
1) IF artificial intelligence becomes seriously useful and
2) IF it doesn't kill us all and
3) IF it doesn't usher in a socialist revolution and
4) IF OpenAI isn't outcompeted by the myriad of other competitors and
5) IF China doesn't just give it away for cheap like they do everything else then
OpenAI could be worth an ungodly amount of money in the future, like tens of trillions easily. What's the value of billions of human-equivalent years of labor?
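Just to put rough numbers on that question (every figure below is a made-up assumption, not a forecast), here's a back-of-envelope sketch:

```python
# Back-of-envelope only: every number here is an assumption.
labor_years = 2e9        # assumed: 2 billion human-equivalent labor-years per year
value_per_year = 50_000  # assumed: $50k of economic value per labor-year
capture_rate = 0.10      # assumed: share of that value OpenAI actually captures

gross = labor_years * value_per_year
print(f"Gross value: ${gross / 1e12:.0f} trillion/year")              # $100 trillion
print(f"Captured: ${gross * capture_rate / 1e12:.0f} trillion/year")  # $10 trillion
```

Even capturing a sliver of that would justify a multi-trillion valuation, which is the bull case in a nutshell.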
Right now it's the pick-and-shovel makers who are making the big bucks.
Many (if not most) of these have a very slim chance of actually happening. Guess what you'll end up with if you multiply them all together.
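To make that concrete, here's a minimal sketch with invented probabilities for each of those IFs (independence is also assumed, purely for simplicity):

```python
# Illustrative only: the probabilities below are made up, and independence is assumed.
from math import prod

p = {
    "seriously useful":        0.5,
    "doesn't kill us all":     0.9,
    "no socialist revolution": 0.9,
    "not outcompeted":         0.3,
    "China doesn't undercut":  0.5,
}
print(f"Joint probability: {prod(p.values()):.1%}")  # ~6.1%
```

Five individually plausible-sounding conditions multiply out to roughly a 1-in-16 shot.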
1) We’re years away from that, especially since right now the major players are focused on delivering a lot of stupid AI to any sector where it can offer some kind of money-saving “assistance,” just so they can keep playing up the hype.
2) I’m not gonna pretend to know enough about self-awareness theory to comment on the likelihood of this. But if you meant it from a resource-consumption perspective, then I agree.
3) Ok, I’m seriously curious about this one. As a democratic socialist, I’m wondering how you think a socialist revolution could be inspired by AI. Last I checked, the ten or so tech conglomerates riding the AI wave are all despised by socialists: not only are they absorbing a massive amount of wealth from the rest of the market and sharing it almost exclusively amongst themselves, but most of them are so large that a socialist doesn’t believe they should even exist.
Obamacare is having that problem right now: the young and healthy massively subsidize the old and sick, so healthy people drop out of the marketplace, premiums go up, more healthy people drop out, and so on. Now (unsubsidized) insurance for a family is like $3k/month and barely covers anything.
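Here's a toy simulation of that death spiral, with every parameter invented purely for illustration:

```python
# Toy adverse-selection spiral: all numbers are made up for illustration.
healthy, sick = 800_000, 200_000          # assumed initial pool split
cost_healthy, cost_sick = 2_000, 30_000   # assumed annual claims per person

for year in range(1, 6):
    pool = healthy + sick
    premium = (healthy * cost_healthy + sick * cost_sick) / pool
    print(f"Year {year}: premium ~${premium:,.0f}/yr, pool = {pool:,}")
    # Assumed behavior: 15% of healthy enrollees drop out each year because
    # the premium exceeds their own expected claims; sick enrollees stay.
    healthy = int(healthy * 0.85)
```

Premiums climb every year even though nobody's medical costs changed; the pool just gets sicker on average.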
But what if AI takes all the jobs and human labor becomes worthless?
Capitalism works because it (mostly) rewards people for labor, innovation, and risk-taking that benefits humanity. But if human labor is worthless, then there is nothing to reward. There is no longer a downside to socialism. Since we're all equally worthless, everyone should get the same as everyone else.
I don't believe this would ever play out in real life, however, largely because of capitalism. Even if the AI thing plays out the way the visionaries say it will, and it doesn't just go belly-up and cause a massive crash, the savings and wealth generated from AI will be hoarded by the tech companies and their executives. I don't see AI causing a socialist revolution in the real world.
* Claude is gaining share, and is probably #1 for coding agents at this point.
* Gemini is the best at frontier level math, science, and research. They also have the best image generator.
* Grok has reached parity or near-parity in record time. They're also the best at building out data centers quickly.
OpenAI leads in brand recognition among consumers. But I don't think consumer applications are the big prize. As someone said, "be scared when the AI is closed to outside users". Selling subscriptions for $20/month is peanuts. The eye-watering valuations are based on the prospect of mass replacement of knowledge workers.
Although it looks like OpenAI passed them recently.
But that's just the consumer model. Recently, Google achieved gold medal-level performance at the International Mathematical Olympiad.
You have to look past the consumer applications on which they are (admittedly) still weak.
Did you know that Google actually started the whole LLM revolution? The transformer architecture behind all of these models came out of Google's 2017 paper "Attention Is All You Need."
They also have a unique advantage with their TPU chips, which frees them from much of the hardware dependency that OpenAI is stuck with.
And not everything in AI is LLMs. Google has also done amazing things with AlphaFold and AlphaZero. Oh, and they also have the best self-driving car.
I'd rank Google as the clear leader in the AI space.
Moreover, it got a gold medal on an olympiad for high schoolers. That's somewhat impressive, but it's still a long way (for now) from research-level mathematics. The 4.2% accuracy for Gemini Pro that you mentioned here definitely checks out hahaha
I think what these LLMs need to be able to do, in order to become a larger contributor to scientific research, is recognize when they are right or wrong. A 10% accuracy is certainly better than 0%, but 10% accuracy doesn't matter if the model can't even tell when it's right or wrong.
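To put rough numbers on why calibration matters more than raw accuracy (both figures below are assumptions, not measurements):

```python
# Assumed: the model is right 10% of the time, and it takes a human expert
# 2 hours to check whether any single attempt is actually correct.
accuracy = 0.10
review_hours = 2.0

# If the model can't flag its own correct answers, you review attempts until
# you hit a correct one: 1/accuracy attempts on average (geometric distribution).
expected_attempts = 1 / accuracy
print(f"Human effort per usable result: {expected_attempts * review_hours:.0f} hours")
```

Twenty hours of expert review per usable result; a well-calibrated model that could say "I'm not sure about this one" would collapse most of that cost.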
Anyway, I'm sure it won't be long before I can actually bounce math research ideas off of some of these LLMs, but that time has not yet fully come.
That's why I never get too upset when AI makes mistakes. They haven't replaced me yet!
And again, I don't think the political or socioeconomic framework is in place in our society for theorized mass replacement of human workers with AI to be a good thing. Companies are not going to compensate laid-off employees using the money they save from using cheaper AI employees. The govt. is not going to tax AI employees to subsidize UBI. This would be a disaster for the middle & working classes.
https://ai-2027.com/
Trigger warning: may cause anxiety in sensitive individuals.
They predict superintelligence by 2027 and human extinction by 2030.
"[laughing emoji] You're right—[error] was a mistake on my part in the last response. Let me retry this again, this time factoring in [user's suggestion] and avoiding [error].
[A response attempt that makes the exact same mistake it did before.]"
These LLMs have to get a lot better to be feasible replacements for more skilled work, like research mathematics and data analysis. ChatGPT couldn't even accurately help me with my high school calculus homework.
We haven't seen the full extent of AI's capabilities, but I really don't think it will extinguish higher-level research in areas such as math or physics. It takes so much human intuition and collaboration to make headway in mathematics these days. Sure, AI could go and perform scientific studies or do the type of PhD work found in many humanities fields, but once a field gets abstract enough that developing it takes genuine intuition, I wouldn't be surprised if that's where AI struggles. You can train AI to think like a human, but philosophically speaking, AI will never be able to genuinely think.
Would AI even reach the "2027 superintelligence" deadline without the resources?
Agreed. If AI becomes good enough to assist with higher-level research etc., it would need to work alongside human experts, not replace them.
Frankly, I think AI replacing human workers in white-collar jobs would be a good idea assuming that it wouldn't lead to a massive unemployment crisis, but, as things stand right now, it absolutely would.
That being said, we need to be cautious of AI, especially concerning AGI and machine consciousness. Recall Revelation 13:15, which states "And he had power to give life unto the image of the beast, that the image of the beast should both speak and cause to be killed as many as would not worship the image of the beast." Everybody wants to talk about the economic and moral consequences of AI, but maybe it's time to shift gears and focus on the spiritual aspects.
As for the bubble, I'm not going to say anything for certain because I'm not an economist, but everything happening in AI right now mirrors past bubbles (the dot-com crash, the housing bubble, 1920s stock speculation), and the concentration of wealth among a few companies and individuals is inevitably going to cause a huge recession/depression when those earnings start to falter. The top 1% of earners in the US control more wealth than the bottom 93%; the S&P 10 is worth more than the S&P 490.
The massive returns being generated for a lot of these companies are increasingly based on hype, hope, and speculation, and the investment money is increasingly flowing in a circle between a handful of companies. The bubble is definitely at risk.
But that's an interesting quote to mention because, allegorically speaking (which fits the register of Revelation), we do have the power to "give life" to AI. Let us be sure not to worship it.
1) Moderation of quizzes for new users. The AI flags certain things and then I'll look at them; if there are no flags, the quiz gets auto-approved (a sketch of this triage flow follows the list). Has saved me a lot of time.
2) Suggestion of new questions for daily quizzes. There is a multi-step process to generate question ideas. These are heavily reviewed and edited by me. I reject at least 95% of question ideas. But it's a good brainstorming tool.
3) Thumbnails. We're doing more of this but people are noticing less. Again, there is a heavy human element.
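For what it's worth, the triage logic in item 1 boils down to something like this (the function and return values are hypothetical placeholders, not my actual code):

```python
def ai_flag_issues(quiz: str) -> list[str]:
    """Hypothetical stand-in for the AI call; returns a list of concerns
    (e.g. ["possible profanity"]). An empty list means nothing was flagged."""
    return []  # placeholder

def moderate_quiz(quiz: str) -> str:
    flags = ai_flag_issues(quiz)
    if not flags:
        return "auto-approved"  # no flags: publish without human review
    # Otherwise I review the flagged items myself and decide.
    return f"pending human review: {flags}"

print(moderate_quiz("Which planet is closest to the sun?"))
```

The time savings come entirely from the no-flag path; everything flagged still gets human eyes.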
The love for AI is honestly very concerning: it's soulless and biased, yet people form interpersonal relationships with it.
Oh well, this is kali yugam ahh