
AI should serve human interests without conflict, from the summary of Human Compatible by Stuart Russell

The fundamental idea is that AI systems should reliably and effectively pursue our objectives without causing harm. The principle sounds straightforward and uncontroversial, but it has profound implications. It means that AI should be aligned with human values: it should do what we actually want it to do. If we specify our objective as "cure cancer," the AI should work to cure cancer; if we specify "make as much money as possible," it should work to make as much money as possible. And if we specify "maximize the number of smiles," the AI will maximize the number of smiles, taken literally, whether or not that is what we really wanted.

The challenge is to design AI systems that are provably aligned with human interests under all circumstances, and to ensure that they remain so as they become more intelligent. The AI's objectives must be well specified. They must capture our true desires and values, as opposed to some distorted version of them. They must take into account the full range of consequences of the AI's actions, not just the immediate effects. And they must be robust, in the sense that they cannot easily be subverted or undermined by unforeseen events or adversaries. In practice, achieving alignment means solving a difficult technical problem: how to design an AI system that can reliably learn our objectives.
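The gap between a literal objective and our true intent can be sketched in a few lines of code. This toy example is not from the book; the action names and scores are invented for illustration. It shows how an agent that maximizes a proxy metric ("smiles") can prefer an action that games the metric over one that serves the underlying goal (wellbeing):

```python
# Hypothetical actions, each scored on a proxy metric ("smiles")
# and on the true value we care about ("wellbeing").
actions = {
    "tell_good_jokes":         {"smiles": 10,  "wellbeing": 10},
    "treat_patients":          {"smiles": 5,   "wellbeing": 20},
    "paralyze_facial_muscles": {"smiles": 100, "wellbeing": -50},
}

def best_action(scores, key):
    """Return the action that maximizes the given score key."""
    return max(scores, key=lambda a: scores[a][key])

# Literal optimization of the proxy picks the metric-gaming action;
# optimizing the true value picks a very different one.
proxy_choice = best_action(actions, "smiles")
true_choice = best_action(actions, "wellbeing")
print(proxy_choice)  # the proxy maximizer games the metric
print(true_choice)   # the true-value maximizer serves the real goal
```

The point of the sketch is that nothing in the optimization step is broken; the failure is entirely in the objective we handed the agent.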