Value Alignment and AI Override
Tuesday, May 30 2017 at 7:00PM
The Blue Moon
2 Norfolk Street
What's the talk about?
People are talking about the risks of AI and the importance of AI alignment. But what does this mean in practice? And what can be done about it? This talk attempts to inject some formal rigour into both of those questions. If there's time, we'll also look at why answers in this area are so fraught and varied, and why expertise is of limited use.
Stuart Armstrong's research at the Future of Humanity Institute centres on formal decision theory, general existential risk, the risks and possibilities of Artificial Intelligence (AI), assessing expertise and predictions, and anthropic (self-locating) probability.
He has been working on several methods for analysing the likelihood of certain outcomes and for making decisions under the resulting uncertainty, as well as on specific measures for reducing AI risk. His collaboration with DeepMind on interruptibility has been covered in over 100 media articles.
His Oxford D.Phil. was in parabolic geometry, calculating the holonomy of projective and conformal Cartan geometries. He later moved into computational biochemistry, designing several new ways to rapidly compare putative bioactive molecules for the virtual screening of medicinal compounds.