peacebuster
Highest Rated Comments
peacebuster · 35 karma
The Ceglowski talk has many flaws in its logic. His argumentative approach is basically to brainstorm a bunch of scenarios in which AI doesn't kill us all and conclude that we therefore shouldn't take any steps to curb runaway or unintended AI development.

He uses the Emu War as an example of a greater intelligence failing to wipe out a lesser one, but do we really want to be hunted down like animals and lose a large portion of our civilization and population just for a chance at more efficient automation? Ceglowski also raises several points in favor of the AI-is-dangerous side that he never refutes.

He overgeneralizes from human examples of greater intelligence failing to accomplish the tasks alarmists fear AI could accomplish, while ignoring that computers are driven by their programming to act. People are kept in check by needs like self-preservation and reproduction, and lazy people can be lazy because they don't HAVE to do anything; a computer programmed to act whenever possible has no such option.

Finally, the fact that Earth faces other dangers right now doesn't mean the AI problem shouldn't be taken seriously as well. What I've taken away from this AMA so far is that the PhD students are refuting straw-man arguments, if any, and misunderstanding, being unaware of, or simply ignoring the real, immediate dangers that runaway or unintended AI would pose to humanity.
peacebuster · 545 karma
I had some of them at Carl's Jr. They really live up to the hype. I wish I could just buy them in a package at a supermarket or something.