Highest Rated Comments
Adito99 (52 karma)
Hi Luke, long-time fan here. I've been following your work for the past 4 years or so; I never thought I'd see you get this far. Anyway, my question is related to the following:
we need to solve a particular set of math problems (outlined here), including both the math of safety-capable AI and the math of aggregating and extrapolating human preferences.
This seems impossible. Human value systems are just too complex and vary too much to form a coherent extrapolation of values. Value networks seem like a construction that each generation undertakes in a new way with no "final" destination. I don't think a strong AI could help us build a world where this kind of construction is still possible. Weak and specialized AIs would work much better.
Another problem is (as you already mentioned) how incredibly difficult it would be to aggregate and extrapolate human preferences in a way we'd like. The tiniest error could mean we all end up as part #12359 in the universe's largest microwave oven. I don't trust our kludge of evolved reasoning mechanisms to solve this problem.
For these reasons I can't support research into strong AI.
Adito99 (22 karma)
DH (Diffie-Hellman) is also used by routers to negotiate keys for VPNs. Anyone who can break the exchange gets access to any and all information that passes through the tunnel.
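For readers unfamiliar with the primitive being discussed, here is a minimal sketch of a Diffie-Hellman exchange. The parameters below are toy-sized for illustration only; real VPN key negotiation (e.g. IKE) uses groups of 2048 bits or more.

```python
import secrets

# Toy public parameters; a real deployment would use a large
# standardized group (e.g. the RFC 3526 MODP groups).
p = 23   # a small prime, for illustration only
g = 5    # a generator modulo p

# Each side picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
A = pow(g, a, p)                   # Alice sends A over the wire
B = pow(g, b, p)                   # Bob sends B over the wire

# Both sides derive the same shared secret g^(ab) mod p,
# which is never transmitted itself.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

The security rests on the difficulty of recovering `a` or `b` from `A` or `B` (the discrete logarithm problem); an attacker who can do that for the group in use can compute the shared secret and read the tunnel.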
Adito99 (20 karma)
I like this perspective on it. Shepard was fighting against a force that had destroyed civilizations many times more advanced than his. There was really no good reason to expect that one human would succeed when countless billions had failed. Despite that, he fought because there was nothing else to do. That's impressive in its own way.
Adito99 (78 karma)
You might be stumbling on a more general phenomenon here. It's well known that people (in general) perform badly on abstract reasoning tasks but can do concrete tasks pretty easily, even when both kinds of task use the exact same logic (the Wason selection task is the classic demonstration). We just don't see into the deep structure of things very easily.