# coding=utf8
# the coding tag above declares this file's encoding; it is only needed for Python 2.x compatibility
import re
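# Matches a Markdown link of the form [text](http(s)://url) at the start of a
# line, together with any surrounding whitespace (including trailing newlines).
# (?m) turns on MULTILINE mode, so ^ matches at the start of every line.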
regex = r"(?m)^\s*\[[^][]*]\(https?://[^\s()]*\)\s*"
test_str = ("---\n"
"layout: post\n"
"title: \"13 - First Principles of AGI Safety with Richard Ngo\"\n"
"date: 2022-03-30 22:15 -0700\n"
"categories: episode\n"
"---\n\n"
"[Google Podcasts link](https://podcasts.google.com/feed/aHR0cHM6Ly9heHJwb2RjYXN0LmxpYnN5bi5jb20vcnNz/episode/OTlmYzM1ZjEtMDFkMi00ZTExLWExYjEtNTYwOTg2ZWNhOWNi)\n\n"
"How should we think about artificial general intelligence (AGI), and the risks it might pose? What constraints exist on technical solutions to the problem of aligning superhuman AI systems with human intentions? In this episode, I talk to Richard Ngo about his report analyzing AGI safety from first principles, and recent conversations he had with Eliezer Yudkowsky about the difficulty of AI alignment.\n\n"
"Topics we discuss:\n"
"- [The nature of intelligence and AGI](#agi-intelligence-nature)\n"
" - [The nature of intelligence](#nature-of-intelligence)\n"
" - [AGI: what and how](#agi-what-how)\n"
" - [Single vs collective AI minds](#single-collective-ai-minds)\n"
"- [AGI in practice](#agi-in-practice)\n"
" - [Impact](#agi-impact)\n"
" - [Timing](#agi-timing)\n"
" - [Creation](#agi-creation)\n"
" - [Risks and benefits](#agi-risks-benefits)\n"
"- [Making AGI safe](#making-agi-safe)\n"
" - [Robustness of the agency abstraction](#agency-abstraction-robustness)\n"
" - [Pivotal acts](#pivotal-acts)\n"
"- [AGI safety concepts](#agi-safety-concepts)\n"
" - [Alignment](#ai-alignment)\n"
" - [Transparency](#transparency)\n"
" - [Cooperation](#cooperation)\n"
"- [Optima and selection pressures](#optima-selection-pressures)\n"
"- [The AI alignment research community](#ai-alignment-research-community)\n"
" - [Updates from Yudkowsky conversation](#yudkonversation-updates)\n"
" - [Corrections to the community](#community-corrections)\n"
" - [Why others don't join](#why-others-dont-join)\n"
"- [Richard Ngo as a researcher](#ngo-as-researcher)\n"
"- [The world approaching AGI](#world-approaching-agi)\n"
"- [Following Richard's work](#following-richards-work)\n\n"
"**Daniel Filan:**\n"
"Hello, everybody. Today, I'll be speaking with Richard Ngo. Richard is a researcher at OpenAI, where he works on AI governance and forecasting. He also was a research engineer at DeepMind, and designed the course [\"AGI Safety Fundamentals\"](https://www.eacambridge.org/agi-safety-fundamentals). We'll be discussing his report, [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ), as well as his [debate with Eliezer Yudkowsky](https://www.alignmentforum.org/s/n945eovrA3oDueqtq) about the difficulty of AI alignment. For links to what we're discussing, you can check the description of this episode, and you can read the transcripts at [axrp.net](https://axrp.net/). Well, Richard, welcome to the show.\n\n"
"**Richard Ngo:**\n"
"Thanks so much for having me.")
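# In this test string, only the standalone "[Google Podcasts link](https://...)"
# line matches: the table-of-contents entries link to "#" anchors rather than
# http(s) URLs, and the links in the closing paragraph do not start a line.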
matches = re.finditer(regex, test_str)
for matchNum, match in enumerate(matches, start=1):
    print("Match {matchNum} was found at {start}-{end}: {match}".format(
        matchNum=matchNum, start=match.start(), end=match.end(), match=match.group()))
    # Group numbering starts at 1 (group 0 is the whole match); this pattern
    # defines no capture groups, so the inner loop body never runs here.
    for groupNum in range(1, len(match.groups()) + 1):
        print("Group {groupNum} found at {start}-{end}: {group}".format(
            groupNum=groupNum, start=match.start(groupNum), end=match.end(groupNum),
            group=match.group(groupNum)))
# Note: for Python 2.7 compatibility, use ur"" to prefix the regex and u"" to prefix the test string and substitution.
# Full regex reference for Python: https://docs.python.org/3/library/re.html