Highest Rated Comments


Ratheka_Stormbjorne · 10 karma

This appears to me to be more a vague statement of principles than a concrete plan, and one aimed largely at lower-than-human-ability, non-general AI systems. While those do bear thinking about, I think a much more important issue to solve, for the future of Earth-originating life, is having an actual technical plan for bringing the desires of an AI system in line with human values and the sorts of futures we would prefer to have unfold. I don't see anything here that suggests a path for dealing with instrumental convergence, for example, so much as a sort of wishlist of properties you hope your systems will have.

Ratheka_Stormbjorne · 7 karma

What are your thoughts, if any, on the alignment problem? What approach is Microsoft taking toward it?