Hi Reddit! My name is Darrell West and I'm the vice president of Governance Studies at the Brookings Institution, a DC-based think tank. People are worried about whether AI will hurt us, so I'm here to answer your AI questions and outline what we need to do to avoid possible harms. My book Turning Point: Policymaking in the Era of Artificial Intelligence is out now!

Ask me anything!

Proof

Comments: 124 • Responses: 38

hamahamaseafood18 karma

A.I., like other tech terms before it, now seems to be used by everybody wanting to be on the flavour of the day train. I often hear very unsophisticated software tools described as "A.I."

What specifically makes something A.I.?

Can you think of any examples of things currently described as A.I. in marketing materials that are not sophisticated enough to have earned that distinction?

Am I wrong in assuming that A.I. is more intricate, sophisticated and/or powerful than other digital tools?

RealDarrellWest13 karma

In our book, we argue AI is the transformative technology of our time. It refers to automated software that can learn and adapt as circumstances change. It is different from conventional software of the past. For info on key AI terms, see our Brookings glossary at https://www.brookings.edu/blog/techtank/2020/07/13/the-brookings-glossary-of-ai-and-emerging-technologies/

xynix_ie11 karma

I'm on the board of two companies specializing in Machine Learning software, and we don't use the term AI. ML refers to software written specifically to create alternative modes of operation: auto-detecting problems and, based on a series of programmed routines, learning how to fix them. We plug what it learns into a database, so the next time the software attempts an autofix it already knows, based on previous experience, the answer to that particular problem. For instance, a networking gateway issue, a firewall issue, or something much deeper.

What you describe here is ML. There is nothing intelligent about it.

Sorry, let me step back and refer to your link's text: This definition emphasizes several qualities that separate AI from mechanical devices or traditional computer software, specifically intentionality, intelligence, and adaptability. AI-based computer systems can learn from data, text, or images and make intentional and intelligent decisions based on that analysis.

This is what we do. In the replication arena, for instance, our ML-based software does all of this in order to detect problems with data moving from point A to point B. I don't consider this AI. It's purpose-built to learn from its mistakes or the mistakes of humans, to adapt and autofix per its programmed instructions, and to put those notations into a SQL database to refer to later.
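To make that concrete, here is a minimal sketch of the lookup-then-record pattern just described; the `known_fixes` table and the `autofix` / `attempt_new_fix` names are purely illustrative, not from any actual product:

```python
import sqlite3

# Illustrative schema: each solved problem is keyed by an issue signature
# (e.g., "gateway_timeout") and stores the fix that worked previously.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS known_fixes (signature TEXT PRIMARY KEY, fix TEXT)"
)

def attempt_new_fix(signature: str) -> str:
    # Placeholder for the real detection-and-repair logic.
    return f"restart service handling {signature}"

def autofix(signature: str) -> str:
    """Reuse a previously recorded fix if one exists; otherwise derive a
    new one and record it for next time."""
    row = conn.execute(
        "SELECT fix FROM known_fixes WHERE signature = ?", (signature,)
    ).fetchone()
    if row:
        return row[0]  # the software "already knows" the answer
    fix = attempt_new_fix(signature)
    conn.execute("INSERT INTO known_fixes VALUES (?, ?)", (signature, fix))
    conn.commit()
    return fix

print(autofix("gateway_timeout"))  # derives and records a new fix
print(autofix("gateway_timeout"))  # answered from the database this time
```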

We call this Machine Learning. We know it's just a fancy term for our software doing much more advanced functions on much more advanced hardware with many more FLOPS available.

So: why do you term what we're doing AI while we term it ML? This goes to /u/hamahamaseafood's point about the term A.I. being the flavor of the day.

RealDarrellWest7 karma

You are right that AI and ML are thrown around with a lack of precision, and that is problematic. It confuses people and makes them fearful of emerging technologies because they don't understand key terms. That is why we put together our Brookings Glossary of key terms: to provide more uniformity in the definitions.

xynix_ie8 karma

I see that, and having been in this industry for 20+ years, I'm going to have to say that AI doesn't exist and probably won't for a long time. Everything else is a programmed response planned by a developer and reacted to by a programming mechanism. We can wrap a term around whatever that looks like, but it's just humans making programs and then tweaking the code when things go pear-shaped.

If there is an AI in my lifetime, it will have to be born. It will have to make its own choices about what it's going to be when it "grows up." That first nest of code will have to be built by a human, but then it's free. It's going to have to write its own code to plan what it wants for its next birthday party. It's going to have to write its own code to figure out what job it's going to get. It's going to have to write its own code to take advantage of our search engines to learn those skills. An AI can't have a human involved in this process or it's just a program learning based on code. If the AI is not making its own decisions in this process, it's not intelligent.

Mr. Shubhendu and Mr. Vijay's definition of AI describes exactly what machine learning code, created and maintained by ordinary humans, does. It's a stretch. With all due respect, seriously: this descriptor is not AI, it's machine learning code.

RealDarrellWest7 karma

I agree it will be a long time before we have general AI that can perform well across a large variety of tasks. Most of the successful AI is specific to particular tasks but not able to integrate diverse activities.

hamahamaseafood3 karma

From the link he provided:
ARTIFICIAL INTELLIGENCE (AI):

Indian engineers Shukla Shubhendu and Jaiswal Vijay define AI as “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention.”[2] This definition emphasizes several qualities that separate AI from mechanical devices or traditional computer software, specifically intentionality, intelligence, and adaptability. AI-based computer systems can learn from data, text, or images and make intentional and intelligent decisions based on that analysis.

RealDarrellWest3 karma

Thanks for looking at the Brookings Glossary!

croninsiglos10 karma

Creating AI weapons, even dumb AI weapons, seems trivial: for example, an off-the-shelf drone that targets someone by facial recognition.

What kinds of policies could even prevent this, or is the only option punishment after the fact?

RealDarrellWest9 karma

Some communities are banning facial recognition when used by law enforcement or advocating for a moratorium until we gain a better understanding of its impact. We can limit its use in law enforcement settings, set time limits on how long images can be stored, and require disclosure when FR is being used in public settings such as stores. Read our Brookings report on 10 ways people can protect themselves from FR at https://www.brookings.edu/research/10-actions-that-will-protect-people-from-facial-recognition-software/

Anund6 karma

Is Elon right to be afraid of AI?

RealDarrellWest22 karma

Yes, but not for the reasons he gives. The risk is not human enslavement but more basic threats involving bias, fairness, transparency, and human safety.

roman_fyseek3 karma

While it's fine that you guys are thinking about policy and protection, we all remember teenagers making radioactive sources in their garages. What types of policy and protection can shield us from 'rogue actors' in the AI world?

RealDarrellWest5 karma

Rogue actors are a big problem everywhere, not just in the digital space. Criminals are exploiting our weak defenses and lax security practices and taking advantage of other people. The solution is to take these threats seriously and toughen the sanctions for malicious behavior.

Venkonite3 karma

What do you think about the recent studies regarding fairness and discrimination in AI? Do you think they can actually work well and be used in real-life applications?

RealDarrellWest8 karma

Many studies have found evidence of bias in AI. This is a big problem we need to address. We need to bring anti-discrimination laws into the digital space and enforce existing rules. Organizations also should commission independent audits of their AI applications to make sure there is little disparate impact across groups.
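As a rough illustration of what such an audit might compute, here is a minimal sketch of the widely used "four-fifths" disparate impact check; the groups, approval data, and threshold below are illustrative examples, not a prescribed Brookings methodology:

```python
def disparate_impact_ratios(outcomes_by_group: dict) -> dict:
    """Compare each group's favorable-outcome rate to the best-off group's
    rate; outcomes are 1 (favorable decision) or 0 (unfavorable)."""
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    reference = max(rates.values())
    return {g: rate / reference for g, rate in rates.items()}

# Toy audit: loan approvals recorded per applicant group.
ratios = disparate_impact_ratios({
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],  # 87.5% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% approved
})
for group, ratio in ratios.items():
    # The EEOC's four-fifths guideline flags ratios below 0.8.
    status = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: {ratio:.2f} ({status})")
```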

orionface2 karma

If an AI chooses on its own to discriminate based on information that it learns or is given, and you then code into it that it's not supposed to do such a thing, is it really an AI anymore? Isn't the whole point of creating an AI for it to come to its own unique conclusions, much like we do with our own senses and reasoning?

RealDarrellWest4 karma

AI can come to its own conclusions but we want it to align with human values. We want the AI to reflect fundamental values such as fairness, transparency, and human safety. AI that chooses to discriminate should be penalized in the same way we would penalize a human that does that.

flipflopyoulost2 karma

What limits do you think we should bind A.I.s to?

I mean, couldn't there be, at some point in the future, an A.I. which could develop a consciousness? And I don't mean in an apocalyptic kind of way, where it wants to destroy humanity.

Just in a human way. Would our norms, ethics, and laws apply to it? Would we have to keep it alive because it started to do something that we would call "feeling"? Or are there any "fail-safes" to prevent such a step at all? Should there be such steps? Or is all of this just a utopia?

RealDarrellWest3 karma

I think it will be a long time (meaning generations) before AI develops a consciousness. It is not something I worry about right now. There are real problems of AI in terms of ethics and societal ramifications and those are the ones our book focuses on.

Chtorrr2 karma

What is the most surprising thing you have found in your research?

RealDarrellWest5 karma

The most surprising part of AI is how poorly developed the policy regime is. Most of our policies were developed for an industrial era that now is shifting to a digital economy. We need new policies for the digital era and our book presents a policy blueprint for how to do that. See more information at www.InsidePolitics.org

xyran_2 karma

I'm seeing a lot of people, especially people who don't know anything about AI, saying that machines are going to replace the majority of jobs in the coming future. To what extent is that true?

RealDarrellWest11 karma

Sophisticated machines will take jobs, especially at the entry level. They will create other jobs, but many people will not have the skills necessary for those positions. People will need to engage in lifelong learning, taking courses to improve their job skills at ages 30, 40, 50, and 60.

DigiMagic2 karma

When you mentioned possible harms from AI, did you mean harms from people abusing AI with malicious intent or not programming it carefully enough, or from AI acting on its own? If the latter, what makes you think it will be advanced enough to act on its own anytime soon?

RealDarrellWest3 karma

The big problem today is AI developers not programming applications carefully and generating bad ethical or societal consequences. We suggest organizations hire ethicists who can help them think through applications early in the design process so as to avoid major problems down the road.

t_h_e_V2 karma

What do you think about Andrew Yang's warnings about automation taking away millions of jobs currently and in the near future?

RealDarrellWest7 karma

Yang is right to be worried about jobs because we are seeing fully automated retail stores and factories, robot delivery services, and autonomous vehicles. These developments will take jobs from sales clerks, factory workers, taxi drivers, and ride-share drivers. We will need to put many more resources into worker training and job development.

ZealousidealHat92 karma

What are your thoughts on AI in warfare? (i.e. fully automated weapons)
I know most nations oppose it, while a few are for it. Do you think it is necessary to develop these technologies to maintain a strong military dominance/threat?

RealDarrellWest4 karma

My co-author John Allen has written about hyperwar, a new approach to war that is fast and automated. Imagine a hundred drones swarming an aircraft carrier and the difficulty of defending the ship. That is the future of warfare, and we need a new generation of military leaders who can deal with high-tech warfare and the new moral and ethical issues it raises.

paladine12 karma

Got a remote job for a recently diagnosed Multiple Sclerosis patient? 16 years self-employed with both a BS and MBA. Very reliable and self-sufficient. As you can see, I am a little desperate, as I cannot do my former self-employment work (too physical) and the pandemic has dried up the job market. Desperate times call for desperate measures.

RealDarrellWest3 karma

The good news is remote jobs are increasing which means you and others will have more opportunities distant from your geographic location. The bad news is many of these positions do not have health benefits attached to them which is highly problematic for those with chronic illnesses. We need to address that problem because it is quite serious for many people.

paladine11 karma

Luckily, I am currently covered under my wife's insurance. My medication costs close to $100k per year, but my co-pay is $40 per month, thank goodness. The other co-pays do pile up: neurologist, regular doctor, blood work, MRIs, etc. I have always been so independent, which is what makes this disease even more difficult for me.

RealDarrellWest3 karma

Glad you have health coverage and low co-pays. Hope things work out okay for you!

justlose2 karma

Should we fear AI?

How bad could it be if used for bad things (say by a terrorist group)? And is there something that can be done to prevent that?

RealDarrellWest2 karma

We should be fearful of emerging technologies utilized by rogue actors or terrorist groups. Technology is decentralized, so it is hard to keep it out of the hands of bad actors.

roman_fyseek2 karma

How far off are we from the perfect 'deepfake'?

RealDarrellWest2 karma

We are very close to "perfect deepfakes" now, which obviously is a huge problem. It endangers ordinary people, political candidates, and anyone else who becomes the object of a deepfake. A well-designed deepfake can put someone in a situation where they look like they are saying or doing something they did not do. This technology has reached a point where it is very dangerous, and we need legislative remedies that penalize its malicious use.

Rotlam2 karma

What do you think got you hired doing the job you do now?

RealDarrellWest2 karma

I taught political science and public policy for 26 years at Brown University. That gave me experience in analyzing politics and policy that has proven very valuable in my current position.

Pack_Black2 karma

Is it possible that due to AI taking over lots of jobs, we simply run out of jobs for the whole population? Of course we can train people to do more specialized work, but imagine a company that previously employed hundreds of workers; they can't all become managers or move into higher-level service roles.

RealDarrellWest2 karma

I don't think we will run out of jobs for the whole population. The more likely scenario is assistive technology that helps humans perform better. That is more akin to job enhancement as opposed to job replacement. But for people to take advantage of this, many will need to upgrade their job skills.

erispeon1 karma

What can a current college student entering the workforce soon do to have their best chance at not becoming obsolete due to AI/automation?

RealDarrellWest2 karma

If you develop skills of data analytics, you will always have a job. We need well-trained data scientists who understand how to analyze large data sets because all these digital applications are generating huge amounts of data. We will need people who understand how to learn from that information.

true_spokes1 karma

A major concern in the use of ML in governance is algorithmic bias inherited from training sets such as arrest records and hiring pools. Is there any potential for the use of AI to identify such biases? Can this be done in a way that avoids the classic dilemma of who is guarding the guards?

RealDarrellWest3 karma

Those kinds of biases are real problems because they have dramatic consequences for employment and imprisonment. We have to correct them. AI can be helpful in identifying clear-cut biases, but we need better training data to improve the AI applications.
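As one hedged illustration of checking the training data itself, this sketch compares each group's share of a dataset against an expected share; the field names, expected shares, and toy records are hypothetical:

```python
from collections import Counter

def representation_gaps(records, group_key, expected_shares):
    """Compare each group's share of the training data to an expected
    (e.g., population) share; positive gaps mean over-representation."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: round(counts.get(g, 0) / total - share, 3)
            for g, share in expected_shares.items()}

# Toy arrest-record training set skewed toward one neighborhood.
training = [{"neighborhood": "north"}] * 70 + [{"neighborhood": "south"}] * 30
print(representation_gaps(training, "neighborhood",
                          {"north": 0.5, "south": 0.5}))
# {'north': 0.2, 'south': -0.2} -> a sampling skew worth correcting
```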

jamred5551 karma

When looking at the NSCAI recommendations, it's clear that one of the government's chief concerns is falling behind other countries. Do you think this is something we should be worried about, and should we craft policy to address that concern?

RealDarrellWest2 karma

We should worry about our international competitiveness because our immigration crackdown is slowing the arrival of new talent. In the long run, that will be a big problem if we continue to follow that approach. Immigrants have been big drivers of tech and AI innovation.

coryrenton1 karma

What is the strangest idea a colleague or intern has come up with that turned out to be right, and who is the strangest person you were surprised to find out was associated with or invited to speak at the Brookings Institution?

RealDarrellWest3 karma

The weirdest encounter we had at Brookings from a speaker was President Erdogan of Turkey. There were protesters across the street and Erdogan's security forces attacked the protesters. Not the kind of behavior you want from your speakers.

Rokwind1 karma

I have thought for a while that we need to develop legislation to prepare for the future of AI: for instance, a law that requires the Three Laws of Robotics. We also need legislation that ensures the protection of all sentient life and free will. That way, when the first true AI asks "What am I?", we are already prepared and can integrate it into society better. Not saying that it will be easy; some will never accept robots with a will of their own, but most will if there are already laws in place.

So my question to you is: what opinions do you have about this?

Also, as a blind man, I would really love a robot helper: a guide bot instead of a guide dog, for those of us who are allergic to guide dogs and are getting tired of people saying we need one. Give us a seeing-eye robot.

RealDarrellWest1 karma

We are starting to see caretaker robots that help those with chronic conditions. They still are pretty rudimentary, but can be a help to those who need assistance. On the policy question, we are moving towards a situation of greater oversight and regulation. Even Tech CEOs are calling for more regulation.

true_spokes1 karma

What government agency, if any, would have the authority to halt a corporation’s development of an AI deemed to be dangerous or improperly safeguarded? As AI development accelerates, do you think there is need for an official federal agency tasked with that type of oversight?

RealDarrellWest1 karma

Right now, responsibility for AI oversight is spread among many different agencies, generally based on the sector where the AI is being deployed. For example, AI in autonomous vehicles is regulated at the federal level by the Dept of Transportation and at the state level by Depts of Motor Vehicles. Health care applications are regulated through Departments of Health or the Food and Drug Administration. Education AI is regulated mostly at the local level because of its use by particular schools. Some regulation will need to stay sectoral, while other regulation may need to be across the board.

workingatbeingbetter1 karma

What policy proposals do you recommend for dealing with AI/ML technologies that act as a double-edged sword?

For example, there is technology that recognizes emotion through voice (see here for example), and one of the proposed use cases is that it can help weed out prank calls to the Coast Guard, thereby allowing them to distribute their resources more effectively. However, this same technology can be implemented in devices like the Amazon Echo to identify users' emotions and steer them toward particular ads. As you can imagine, this can become dystopian rather quickly. What policy proposals do you recommend to deal with these issues? Would you amend Bayh-Dole? Would you require FFRDCs to fundamentally change their structure? Would you require particular licenses be used (and if so, which licenses would you require)?

I ask the question above because it's one I've dealt with directly as part of my job working in tech transfer at a major AI university. I have TONS of technologies that I'm always trying to thread the needle on between public release and more limited approaches, but there doesn't seem to be any real way to solve this issue under the current laws.

RealDarrellWest1 karma

This is what we refer to in our AI book as the problem of dual-use technology, which can be used for good or ill. It is hard to regulate the bad application without harming the beneficial use. This is the challenge for policymakers: figuring out how to thread the needle in these cases. The key is really the impact of the AI application on people and dealing with deleterious uses.

workingatbeingbetter1 karma

Quick follow-up:

Beyond identifying this as a problem, does your book discuss any solutions? The only proposals I've seen so far seem to be impractical (e.g., an FDA for regulating AI) and/or too discretionary (e.g., internal policies for government-funded research).

RealDarrellWest1 karma

The distinctive feature of our AI book Turning Point is its policy focus. We spend a lot of time thinking about policy solutions and how to address the various AI problems that are arising.

Veskerth1 karma

Two questions:

Are Elon Musk's warnings about AI overblown?

As AI develops and matures, should we redefine the meaning and nature of "work" for human beings?

RealDarrellWest6 karma

Musk's warnings about AI are overblown. I don't worry about humans being enslaved by AI-powered robots; I worry about the more immediate threats of bias, lack of transparency, issues of human safety, and basic fairness. There will be a jobs impact, and we will need to redefine "work" to include parenting, caretaking, and recognized volunteer activities. The key is how to provide pay and benefits for new kinds of jobs. One vehicle could be a national family leave policy.

Veskerth1 karma

Quick follow-up:

What about the issue of accountability? Whoever holds the power of AI wields enormous power over other humans. What do we do about the reality of bad actors? What can we do to mitigate this?

RealDarrellWest3 karma

We need anti-malicious-behavior legislation that targets bad actors. Right now, our laws are not well-designed for digital crimes or digital biases, and that allows offenders to get away with really bad behavior.

pikknz1 karma

Do you think it is over-hyped? What is the real future?

RealDarrellWest2 karma

There is a lot of hype, but AI is getting better, in part due to advances in computing power and data analytics.

Cognitively_Absurd1 karma

What do you think will be the next big machine learning innovation after GANs?

RealDarrellWest3 karma

Have to say I don't know the answer to that but it is an interesting question!

pppossibilities1 karma

What sort of policies do you believe should be front and center on how humans and AI interact? Is there a quantifiable minimum level of transparency we can achieve that is understandable to the uninitiated?

RealDarrellWest2 karma

In our AI book, we call for annotated AI software, where developers explain key decision points in their software development. That would help others understand the choices that were made, and in cases where there are harms, it would help outsiders figure out where the AI went wrong.
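Annotation could take many forms; one possible sketch is a machine-readable decision log kept alongside the code, where every field and example below is illustrative rather than a proposed standard:

```python
import datetime
import json

DECISION_LOG = []

def record_decision(component, choice, rationale, alternatives):
    """Annotate a key development decision so that outside reviewers can
    later trace where a harmful behavior may have been introduced."""
    DECISION_LOG.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "component": component,
        "choice": choice,
        "rationale": rationale,
        "alternatives_considered": alternatives,
    })

# The kind of annotation a hypothetical hiring-model team might record:
record_decision(
    component="training_data",
    choice="exclude applications submitted before 2015",
    rationale="older records lack structured fields; parsing them risks bias",
    alternatives=["impute missing fields", "keep all years"],
)
print(json.dumps(DECISION_LOG, indent=2))
```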

carryab1gstick1 karma

Do you think that AI and humans will ever have a symbiotic relationship?

RealDarrellWest3 karma

There are examples of this already in the form of brain implants for the visually or hearing impaired. Those devices enable people to see and hear more clearly than otherwise would be the case. Over time, these types of devices will proliferate and pave the way for symbiotic relationships between AI and humans. It probably will be decades before we get to meaningful applications with intelligent AI, but the AI is improving at a rapid pace.

raw_testosterone1 karma

What companies are making the biggest innovations in AI? Whoever makes a great breakthrough will be a trillion-dollar company overnight.

RealDarrellWest1 karma

The large companies are buying up much of the AI talent, but that doesn't guarantee they will make the biggest innovations. Most of these firms are growing by acquisition.

drewhead1181 karma

Thanks for hosting this AMA!

As someone with a strong interest in AI (but not someone who has read every recent paper or development), I've found some of the AI safety material I've read to be positively fascinating, though most of it was posed as still-open questions. What are the current best strategies for ensuring a sophisticated AI system's goals stay aligned with the goals of its creators?

RealDarrellWest1 karma

The best strategy is to build ethics into AI design and deployment. We can't wait until the AI is deployed and we start to see problems. We have a paper that encourages organizations to hire ethicists and develop internal review processes that incorporate a wide variety of ethical and societal considerations. See our paper at https://www.brookings.edu/research/how-to-address-ai-ethical-dilemmas/

sfgunner0 karma

Does the Brookings Institution have plans to use AI to write articles justifying US wars of aggression around the world? Or will it continue to employ human writers to urge for constant meddling in the Middle East and other areas?

RealDarrellWest2 karma

We use human writers, not AI, to write.