Podcast roundup: The inadequacy of today's AI safety practices + how to make AI go better
Proof that I'm not a deepfake (or am a very convincing one)
Welcome! A number of folks have asked where they can hear more about my time at OpenAI, so I’ve put together a roundup of the podcast appearances I’ve done.
In each of these, I talk in more detail about my time at OpenAI, my research on AI risk, and my policy solutions for making AI go better. If you’d like to have me on your show or otherwise suggest an idea for coverage, please feel free to get in touch here.
The ControlAI Podcast
We talked about the race to build AGI as soon as possible; the inadequacy of current voluntary safety commitments and testing procedures; concerning AI behaviors, such as self-preservation and deception, that are already being observed; and the risks of recursive self-improvement, where AI systems are used to accelerate the development of even more powerful AI.
The Cognitive Revolution
We talked about the safety questions that the AI industry continues to struggle with today; OpenAI’s attempted conversion from a non-profit to a for-profit tech company; the exodus of OpenAI staff who left to form Anthropic; and the changing safety culture at AI labs.
Robert Wright’s Non-Zero
We talked about what it means to “feel the AGI”; the different types of AI catastrophes; what it will take for US-China AI competition to end well; and what’s really happening with ChatGPT’s sycophancy.
Acknowledgements: The views expressed here are my own and do not imply endorsement by any other party. All of my writing and analysis is based solely on publicly available information.