Highest Rated Comments


APGamerZ (398 karma)

Just to give another opinion: the issue of job displacement is mostly for the political world to hammer out (and every citizen is part of the political world). When it comes to developing AI technology that will displace jobs, the burden is on government to create policy that protects us from the upheaval. However, when it comes to the dangers that a strong general AI could pose to humanity, it is on developers and engineers to be aware of those issues so they can mitigate the risks of building technology that endangers society.

These are two very different issues that are the domain of two different disciplines.

APGamerZ (78 karma)

Sorry if I wasn't clear: what I meant by "dangers to society" was AI behavior that is outright illegal or directly harmful. The discussion around restricting development doesn't typically extend to prohibiting researchers/developers/engineers from pursuing technology that will displace jobs (e.g., nobody is proposing to ban Google from developing self-driving cars because that may leave taxi drivers jobless). The technology industry isn't going to change course or stop development over job displacement.

However, when it comes to creating technology that will endanger people (setting aside people meant to be on the receiving end of a weapons technology), that's a different story. If a technology poses a real risk of inadvertently harming people, the burden falls on those researchers/engineers/developers to mitigate or eliminate that risk.

Edit: formatting

APGamerZ (68 karma)

This is not about AI Uber becoming Skynet, or any other current AI project we're hearing about. I think a large part of the problem in discussions about the dangers of AI, with people who reject the premise that any dangers exist, is a lack of clarity about the distinction between a "general AI superintelligence" and "current and near-future AI". This is about the potential dangers in the development of Strong AI. It is less about current software safety practices and more about the techniques we may need to develop and apply to guide a Strong AI down a path that limits the probability of mass harm.

These dangers aren't about today's or tomorrow's AI development; they're about Strong and near-Strong AI development. It's about discussing potential risks ahead of time so that people are aware of the problems. Right now futurists/philosophers/ethicists/visionaries/etc. are focusing on this issue, but one day it will come down to software leads using practices that are not currently part of software development (because they don't yet need to be).

As a species, we're capable of managing risks at many different levels, so looking far ahead isn't a problem, especially when it's only the focus of a very select group of people at the moment.

Also, the argument about the risks of a general AI superintelligence is separate from the argument about whether such a thing is possible. Of course, many believe it is possible, hence the discussion of the possible risks. When it comes to dangers unique to an AI superintelligence, I recommend reading Superintelligence by Nick Bostrom. William from /u/SITNHarvard linked a talk (text and video versions) in a comment below by Maciej Cegłowski, the developer who made Pinboard, which calls out "AI-alarmist" thinking as misguided. To me, though, his points seem to come down to the idea that thinking about such things is no way to behave in modern society, makes your thinking weird, and separates you from helping with the struggles of regular people. I could make a very long list of things I disagree with in that talk, but I'll just link it here as well so you don't have to look for his comment below if you're interested.

Edit: formatting

APGamerZ (23 karma)

Two questions:

1) This is probably mostly for Dana. My understanding of fMRI is limited, but from what I understand the relationship between blood-oxygen levels and synaptic activity is not direct. In what ways do our current brain-scanning abilities limit our true understanding of the relationship between neuronal activity and perception? Even with infinite spatial and temporal resolution, how far would we be from completely decoding a state of brain activity into a particular collection of perceptions/memories/knowledge/etc.?

2) Have any of you read Superintelligence by Nick Bostrom? If so, I'd love to hear your general thoughts. What do you make of his warnings of a very sudden general AI take-off? Also, do you see whole brain emulation as eventually inevitable, as his book implies, given increases in processing power and in our understanding of the human brain?

Edit: grammar

APGamerZ (7 karma)

Interesting, thanks for the response! A few follow-up questions. If the encoding models operate at the voxel level, how does that limit the mapping between stimuli and neural activity? If each voxel covers tens of thousands of neurons, is fidelity being lost in the encoding models? And would perfect fidelity, say one voxel representing one neuron, give a substantial gain in prediction? Do you know what mysteries that might uncover for neuroscientists, or what capabilities it might give to biotech? (I assume one voxel per neuron is the ideal, or is there something better?)

Is there a timeline for when we might reach an ideal fMRI fidelity?
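To make the "encoding models operate at the voxel level" question concrete, here is a minimal sketch of a standard voxel-wise encoding analysis. It is purely illustrative: the data are synthetic, the feature space and ridge regression are generic assumptions rather than anything the presenters described, and in real studies the regularization is tuned per voxel with cross-validation. The point it shows is that the model only ever sees one aggregated response per voxel, so whatever the thousands of neurons inside that voxel are doing individually is invisible to it.

```python
# Minimal voxel-wise encoding model sketch (synthetic data, illustrative only).
# Stimulus features -> per-voxel BOLD response via ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_timepoints, n_features, n_voxels = 600, 50, 1000
X = rng.standard_normal((n_timepoints, n_features))          # stimulus features (hypothetical)
true_weights = rng.standard_normal((n_features, n_voxels))   # each voxel's "tuning"
Y = X @ true_weights + rng.standard_normal((n_timepoints, n_voxels))  # voxel responses + noise

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# One ridge fit predicts all voxels at once; each voxel is a single output column,
# i.e. one aggregate signal standing in for tens of thousands of neurons.
model = Ridge(alpha=10.0)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Evaluate per voxel: correlation between predicted and held-out responses.
corr = [np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel prediction correlation: {np.median(corr):.2f}")
```

In this framing, "losing fidelity" means the regression target is the voxel column, not individual neurons; finer measurements would change what the columns of Y represent, not the modeling recipe itself.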