// add the regex crate to your Cargo.toml dependencies, e.g. regex = "1"
// (the `extern crate regex;` line is only needed on the 2015 edition)
use regex::Regex;
fn main() {
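    // (?m) turns on multi-line mode so ^ anchors at the start of each line;
    // the pattern matches a Markdown-style link at the start of a line,
    // i.e. [link text](http:// or https:// URL), plus any surrounding whitespace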
    let regex = Regex::new(r"(?m)^\s*\[[^][]*]\(https?://[^\s()]*\)\s*").unwrap();
    let string = "---
layout: post
title: \"13 - First Principles of AGI Safety with Richard Ngo\"
date: 2022-03-30 22:15 -0700
categories: episode
---
[Google Podcasts link](https://podcasts.google.com/feed/aHR0cHM6Ly9heHJwb2RjYXN0LmxpYnN5bi5jb20vcnNz/episode/OTlmYzM1ZjEtMDFkMi00ZTExLWExYjEtNTYwOTg2ZWNhOWNi)
How should we think about artificial general intelligence (AGI), and the risks it might pose? What constraints exist on technical solutions to the problem of aligning superhuman AI systems with human intentions? In this episode, I talk to Richard Ngo about his report analyzing AGI safety from first principles, and recent conversations he had with Eliezer Yudkowsky about the difficulty of AI alignment.
Topics we discuss:
- [The nature of intelligence and AGI](#agi-intelligence-nature)
- [The nature of intelligence](#nature-of-intelligence)
- [AGI: what and how](#agi-what-how)
- [Single vs collective AI minds](#single-collective-ai-minds)
- [AGI in practice](#agi-in-practice)
- [Impact](#agi-impact)
- [Timing](#agi-timing)
- [Creation](#agi-creation)
- [Risks and benefits](#agi-risks-benefits)
- [Making AGI safe](#making-agi-safe)
- [Robustness of the agency abstraction](#agency-abstraction-robustness)
- [Pivotal acts](#pivotal-acts)
- [AGI safety concepts](#agi-safety-concepts)
- [Alignment](#ai-alignment)
- [Transparency](#transparency)
- [Cooperation](#cooperation)
- [Optima and selection pressures](#optima-selection-pressures)
- [The AI alignment research community](#ai-alignment-research-community)
- [Updates from Yudkowsky conversation](#yudkonversation-updates)
- [Corrections to the community](#community-corrections)
- [Why others don't join](#why-others-dont-join)
- [Richard Ngo as a researcher](#ngo-as-researcher)
- [The world approaching AGI](#world-approaching-agi)
- [Following Richard's work](#following-richards-work)
**Daniel Filan:**
Hello, everybody. Today, I'll be speaking with Richard Ngo. Richard is a researcher at OpenAI, where he works on AI governance and forecasting. He also was a research engineer at DeepMind, and designed the course [\"AGI Safety Fundamentals\"](https://www.eacambridge.org/agi-safety-fundamentals). We'll be discussing his report, [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ), as well as his [debate with Eliezer Yudkowsky](https://www.alignmentforum.org/s/n945eovrA3oDueqtq) about the difficulty of AI alignment. For links to what we're discussing, you can check the description of this episode, and you can read the transcripts at [axrp.net](https://axrp.net/). Well, Richard, welcome to the show.
**Richard Ngo:**
Thanks so much for having me.";
    // captures_iter returns an iterator over the capture groups (a Captures value)
    // for each non-overlapping match in the string
    let result = regex.captures_iter(string);
    for mat in result {
        println!("{:?}", mat);
    }
}
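If you only need the start and end byte offsets of each match rather than the capture groups, the regex crate also provides find_iter, which yields a Match value per match with start(), end(), and as_str() accessors. A minimal sketch, assuming the same regex and string variables inside main:

    for mat in regex.find_iter(string) {
        println!("match at {}..{}: {:?}", mat.start(), mat.end(), mat.as_str());
    }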
Please keep in mind that these code samples are automatically generated and are not guaranteed to work. If you find any syntax errors, feel free to submit a bug report. For a full regex reference for Rust, please visit: https://docs.rs/regex/latest/regex/