The Future Of Life
There's a lot of coverage flying around about ChatGPT and the threat advanced AI poses to jobs and, more broadly, the human way of life. Here are some quick takes to set the stage:
Vivek Sodera (Co-founder of Superhuman):
Millions will lose their jobs because of AI. Not just SDRs, support, content creators, & copywriters, but also teachers, doctors, lawyers, software engineers, etc. Millions.
At the current pace, this is about to happen (for sure in the next 3 yrs). We need to wake up and get ready.
Sam Altman (Co-founder of OpenAI):
Things we need for a good AGI future:
1) The technical ability to align a superintelligence.
2) Sufficient coordination among most of the leading AGI efforts.
3) An effective global regulatory framework including democratic governance.
Jackie Berardo (my smart friend) in response to the tweet above:
Things we literally have none of while you're continuing R&D 🫡
This is all happening against the backdrop of a very high-powered and very public tussle between Sam Altman's OpenAI and the newly prominent, Elon Musk-backed Future of Life Institute.
AI is moving very quickly, and the Future of Life Institute is calling for a six-month pause. More specifically: "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." (full petition here)
The petition has been signed by big names like Elon (obvs) and Steve Wozniak (co-founder of Apple).
Whether this is right or wrong is an ongoing discussion. Can you slow progress? What would the rules actually be? Who decides?
My take is this: things are moving so fast that I have very low confidence the powers that be - whether government or tech leaders - will reach a consensus fast enough to make a material difference.
Given my low confidence in a consensus being reached, I see two possible outcomes:
1. AI continues to progress at an incredible speed. The results could be very positive, very negative, or anywhere in between. There's no way to know in advance.
2. Something very negative happens in the short term that goes viral and leads to a large scale pause of AI research. Memes matter.
Just as quickly as the internet decided that AI was a global technology paradigm shift, it could decide that AI can't be trusted and should be caged and studied meticulously.
We'll see what happens.