Some of the most important decisions in our lives are being made by artificial intelligence, determining things like who gets into college, lands a job, receives medical care, or goes to jail—often without us having any clue.

In the podcast “In Machines We Trust,” host Jennifer Strong and the team at MIT Technology Review explore the powerful ways that AI is shaping modern life. In this Reddit AMA, Strong, artificial-intelligence writers Karen Hao and Will Douglas Heaven, and data and audio reporter Tate Ryan-Mosley answer your questions about all the amazing and creepy ways the world is getting automated around us. We’d love to discuss everything from facial recognition and other surveillance tech to autonomous vehicles, how AI could help with covid-19, and the latest breakthroughs in machine learning—plus the looming ethical issues surrounding all of this. Ask them anything!

If this is your first time hearing about “In Machines We Trust,” you can listen to the show here. In season one, we meet a man who was wrongfully arrested after an algorithm led police to his door and speak with the most controversial CEO in tech, part of our deep dive into the rise of facial recognition. Throughout the show, we hear from cops, doctors, scholars, and people from all walks of life who are reckoning with the power of AI.

Giving machines the ability to learn has unlocked a world filled with dazzling possibilities and dangers we’re only just beginning to understand. This world isn’t our future—it’s here. We’re already trusting AI and the people who wield it to do the right thing, whether we know it or not. It’s time to understand what’s going on, and what happens next. That starts with asking the right questions.


Comments: 158 • Responses: 70

Bad-Extreme14 karma

AI good or AI bad?

techreview16 karma

Neither! That's not to say AI is neutral; no technology is. Technology has the assumptions, biases, opinions, hopes, and motivations of the people who make it baked in. So some AI is good, some bad. Some good AI is used in bad ways, some bad AI is used in good ways. And that's why we should always question it. [Will Douglas Heaven]

Michael_Brent7 karma

Hi! My name’s Michael Brent. I work in Tech Ethics & Responsible Innovation, most recently as the Data Ethics Officer at a start-up in NYC. I’m thrilled to learn about your podcast and grateful to you all for being here.

My question is slightly selfish, as it relates to my own work, but I wonder about your thoughts on the following:

How should companies that build and deploy machine learning systems and automated decision-making technologies ensure that they are doing so in ways that are ethical, i.e., that minimize harms and maximize the benefits to individuals and societies?


techreview5 karma

Hi Michael! Wow, jumping in with the easy questions there... I'll start with an unhelpful answer and say that I don't think anyone really knows yet. How to build ethical AI is a matter of intense debate, but (happily) a burgeoning research field.

I think some things are going to be key, however: ethics cannot be an afterthought; it needs to be part of the engineering process from the outset. Jess Whittlestone at the University of Cambridge talks about this well: assumptions need to be tested, designs explored, and potential side effects brainstormed well before the software is deployed. That also means thinking twice about deploying off-the-shelf AI in new situations. For example, many of the problems with facial-recognition systems and predictive policing tech arise because they are trained on one set of individuals (white, male) but used on others. It also means realising that AI that works well in a lab rarely works as well in the wild, whether we're talking about speech recognition (which fails on certain accents) or medical diagnosis (which fails in the chaos of a real-world clinic). But people are slowly realising this. I thought this Google team did a nice study, for example.

Another essential, I'd say, is getting more diverse people involved in making these systems: different backgrounds, different experiences. Everyone brings bias to what they do. Better to have a mix of people with a mix of biases. [Will Douglas Heaven]

techreview2 karma

Michael: what do you think?

Chtorrr3 karma

What is the most surprising thing you found in your research?

techreview4 karma

Hi! I'm Tate Ryan-Mosley, one of the IMWT producers. This is actually an amazing question, because so many things have surprised me, but also none of those things maybe should have been surprising? (Perhaps this says more about me?) But I think that the challenge of how we actually integrate AI into social and political structures and our more intimate lives is just so much more complicated and urgent and prevalent than I thought. We've talked to incredibly smart people, most of whom really are doing their best to make the world a better place. And yet it sometimes feels like AI is making the world a worse place, or at the very least, being implemented so quickly that its impact is precarious. I've also been surprised by the secrecy in the industry. So many of these implementations happen without real public consent or awareness.

techreview1 karma

☝️ - Jennifer

eatpraydeath3 karma

My son is interested in a career in Robotics combined with A.I. What advice do you have for a future innovator to prepare for a career in the field? He’s 13 years old

techreview3 karma

Yes, curiosity and encouragement! And if you're after core skills, here's what one of DeepMind's founders told a 17-year-old who asked the same question a couple of years ago. These lists are always going to be slightly subjective, though. Tinkering with code is probably most useful, and there are loads of free bits of code and even ML models available online. But do encourage him to keep broad interests and skills: many of AI's current problems stem from the fact that today's innovators have homogenous world-views and backgrounds. [Will Douglas Heaven]

techreview3 karma

Never lose your curiosity. Better yet, make time to feed and encourage it as innovation is as much about imagination and inquisitiveness as anything else.

I have a 13 year old as well! - Jennifer

5footbanana2 karma

Been listening to the podcast so far and I'm enjoying it. Thank you for creating it!

With algorithms being closed source/IP or AI being almost unfathomably complex after significant training on data sets. What can be done to educate the general population on the security/ethics and design of such systems?

People can be very sceptical with regards to things they don't understand.

Side question: I really like the book Hello World by Hannah Fry on a similar subject, what media/podcasts/books would you recommend to somebody interested in AI tech as a hobby if you will but without experience in how these systems work.

techreview3 karma

This is an awesome question, and thanks so much for listening! One of our main goals with the podcast is to ensure "our moms can understand" everything we publish. We have very smart moms :) but the point is that the general public often gets left in the dark when it comes to how a lot of AI works and even when it is employed. It's a big motivating factor for a lot of our journalism at Tech Review! Not to make this sound like a plug, but I think a good way to help educate the public on technology is to subscribe to outlets doing good journalism in the space. (You can subscribe to TR here.) Lawmakers, educators, companies, and researchers all play a role in the solution space, in my personal opinion.

Side answer: there are a lot of good TED Talks, there's Karen Hao's newsletter The Algorithm, and I like Kevin Kelly's books. For podcasts: Jennifer Strong's alma mater The Future of Everything from WSJ, and Recode is also great! - Tate Ryan-Mosley

5footbanana0 karma

Really appreciate the reply. Is there any way of getting a small trial for the site? I'm interested, but $50 isn't pocket change for a site I can't experience.

Thanks again and look forward to more podcast episodes! Including the 2 you mentioned!

techreview1 karma

You can read a lot of our content for free now on our site. FYI, you will be limited to 3 articles per month for a lot of the content, but it'll give you a taste of the stuff we write about. Send us an email at [email protected], and we can talk through other ways you can get access to our content. Thanks again for your support as a listener and as a reader! - Benji

techreview1 karma

Thanks for listening!

Have you also tried listening to "Consequential" from Carnegie Mellon or "Sleepwalkers" from iHeart? - Jennifer

CapnBeardbeard2 karma

What jobs are we most likely to lose to AI in the next 10 years?

techreview3 karma

u/CapnBeardbeard, we recently found that the pandemic might actually accelerate job losses for some essential workers: the people who deliver goods, work at store checkouts, drive buses and trains, and process meat at packing plants. What we don't know is whether these losses to robots will be offset by new jobs. This story we published in June provides an extensive overview of what we're talking about. - Benji

techreview1 karma

It's hard to say exactly how automation will change the job market. Many jobs will change, but not necessarily disappear. AI will also make some aspects of remote working easier, which will have a big impact of its own. One manager who can keep an eye on a construction site or a warehouse remotely, using smart surveillance tech, will be able to do the job of multiple managers who need to be on site. Some types of job will be safe for some time yet: anything that requires a personal touch, from service-industry roles in restaurants and hotels, to teachers (though see that point about remote working again), to sales-people, to creatives (though here we should expect a lot of AI tools to make some aspects of creative jobs quite different). [Will Douglas Heaven]

techreview1 karma

Oh and don't write off cabbies anytime soon: we're still a long way from driverless cars that can navigate rush hour in NYC ;) [Will Douglas Heaven]

_goodguitarist2 karma

How will AI affect health care?

techreview1 karma

Perhaps more than in any other area, there are high hopes for AI in healthcare. There are many projects looking at using AI for diagnosis (using image recognition on scans, using NLP on medical notes, and more) and triage (using NLP and medical databases to prioritize the patients most in need of urgent care).

There is a lot of work being done on using AI to help develop new treatments and drugs, an area that has seen a lot of attention during the pandemic, but this is still more experimental. There are promising signs that AI can direct researchers to useful compounds for new drugs, but we haven't yet seen breakthrough success.

Still, there are a number of serious questions to answer before AI has the impact it could. Trust is a largely unsolved problem: Will patients be comfortable with AI being involved in life-or-death decisions? Will doctors? The accuracy of many AI tools is also much better in the lab than in real-world clinics.

Regina Barzilay, who has just won a big new prize for her medical AI research, talks about some of the hurdles here.

And the pandemic has highlighted key things that we need to address before AI can truly help. [Will Douglas Heaven]

techreview2 karma

Hi! This is Benji Rosen, MIT Technology Review's social media editor. Jennifer, Tate, Will, and Karen will be responding to your questions periodically throughout the day. They'd also love to know if you've heard the podcast and if you have any favorite episodes or moments.

goldfinchex2 karma

What do you think is the role of private players and government regulation in trying to promote sustainable, good uses of AI? How do you envision such regulations looking (and how might we achieve them)?

techreview3 karma

Hello! This is Karen, senior AI reporter at Tech Review. This is an excellent question. I think private players have the unique advantage of innovating quickly and taking risks to achieve greater benefits from AI, whereas government regulators have the important role of setting down guardrails to prevent the harms of AI. So we need both! There's a push and pull. As for what regulations should look like, here's a really awesome Q&A I did with Amba Kak, the director of global strategy and programs at the New York–based AI Now Institute. She answers the question much better than I could for face recognition specifically, and it offers a great use case for how to think about regulating different AI systems.

techreview1 karma

Thank you all for your incredibly thoughtful questions. We really enjoyed this. We're going to call it, but we'll be checking our inbox if you have any new questions about the podcast, artificial intelligence, and its future. We also hope you'll listen to In Machines We Trust. Thank you again! This was fun!

BookKeepersJournal1 karma

What are some of the biggest barriers you see to automation and machine learning becoming mainstream? I hear about this technology a lot but don’t feel like I’ve been exposed to it yet in everyday life.

Thanks in advance for answering my question! Looking forward to checking out the podcast

techreview1 karma

If you use any of the following—Facebook, Google, Twitter, Instagram, Netflix, Apple products, Amazon products—you've already been exposed to machine learning. All of these companies use machine learning to optimize their experience, including to organize the order of the content you see, what ads you're pushed, what recommendations you get. So it's already very mainstream—but largely invisible, and that's why we created this podcast! To peel back the curtain on everything happening behind the scenes. —Karen Hao

MIke60221 karma

Back in high school I did a bunch of papers analyzing some of the work one of your professors did. I think it was Erik Brynjolfsson. He brought up how, as technology advances, new jobs are created. Do you think we will see things like that with the advancement of AI?

techreview3 karma

Absolutely. Jobs will change, but not always go away. And new jobs will be created. With advances in AI, there will be new tech industries in data science and modelling. But that's just to take a narrow view. AI will impact every aspect of our lives and we want humans working in roles alongside it, whatever the industry. I think we're going to see a lot of collaborative roles where people and AIs work together. [Will Douglas Heaven]

Mister-Clip1 karma

How far are we from seeing AI that is self aware/conscious?

techreview1 karma

Short answer: nobody has any idea whatsoever. We don't even know if conscious AI is possible. But that of course doesn't stop people from guessing, and you'll see timelines ranging from 10 to 100+ years. You should take these with a big pinch of salt. The only sure sign we have that consciousness might be possible in a machine is that *we* are conscious machines. But that observation doesn't get us far. We don't understand our own consciousness well enough to know how to replicate it. It's also entirely possible that you could have a superintelligent machine, or AGI, that isn't conscious. I don't think consciousness is necessary for intelligence. (I'd expect you'd need some degree of self-awareness, but I don't think self-awareness and consciousness are necessarily the same thing either.) There's a fun flip side to this, though. Humans are quick to ascribe intelligence or consciousness to things, whether there's evidence for it or not. I think at some far-future point we might build machines that mimic consciousness (in much the same way that GPT-3 mimics writing) well enough that we'll probably just casually act as if they're conscious anyway. After all, we don't have that much evidence that other humans are conscious most of the time either ;) [Will Douglas Heaven]

Mister-Clip0 karma

Interesting. Is there anyone specializing in this, specifically or is it so poorly understood at this point that no one even bothers?

techreview1 karma

If you're interested in the philosophical side, David Chalmers is a good starting point. Many AI researchers are interested in this question too, but few are doing concrete research that sheds much light on it. Murray Shanahan at Imperial College London is great and straddles AI and neuroscience (as do DeepMind's founders). [Will Douglas Heaven]

techreview1 karma

As Will wrote in another comment, we're coming out with a big piece on artificial general intelligence next week. He'll be back online soon, and I'll ask him to answer your question. - Benji

travisdeahl7241 karma

Have you met any famous people?

techreview1 karma

Yes! I've had the great privilege to record dozens of literal and figurative rock stars over the years but can say with confidence it's not the most interesting part of this job. [Jennifer Strong]

half_real1 karma

Hi, are you looking for interns? If so, how would one apply for that?

techreview1 karma

What would you like to learn?

Not sure we can have interns at present but mentoring may be possible! [Jennifer Strong]

Porthos19841 karma

With the number of improvements in AI especially over the last 5 to 10 years, do you believe that the Singularity has moved up?

techreview3 karma

Nope. I think the advances in AI in the last decade have been staggering. We've seen AI do things even insiders didn't expect, from beating human champions at Go to highly accurate image recognition to astonishingly good language mimics like GPT-3. But none of these examples have anything like intelligence or an understanding of the world. If you take the singularity to mean the point at which AI becomes smart enough to make itself smarter, leading to an exponential intelligence explosion, then I don't think we are any closer than we've ever been. For me, personally, the singularity is science fiction. There are people who would strongly disagree but then this kind of speculation is a matter of faith! [Will Douglas Heaven]

techreview3 karma

We actually have a big piece on AGI coming out next week: what it means to different people and why it matters. But in the meantime, you might be interested in a quick round-up of some first impressions of GPT-3 that I put together a couple of months back. [Will Douglas Heaven]

ranger20411 karma

Do you think at some point an ai controlled government will be feasible - either partially or fully?

DareThePolarBear1 karma

As an aspiring computer science student (I'm in my final year of school), I have been told that almost 90% of all computer science students pursue a job in software engineering. Is this true? How long do you think it will be until robotics and AI are large enough to become a prevalent job market of their own?

techreview1 karma

I don't know about that 90% statistic but it sounds reasonable, especially considering "software engineering" is a pretty broad category. But AI and robotics are very much a prevalent job market already: AI applications are being used (or explored) in pretty much any industry you can think of. [Will Douglas Heaven]

rebelsoulja1 karma

Are you single?

techreview1 karma

No, there are five of us doing this AMA. [Will Douglas Heaven]

myDucklingIsTheBest1 karma

Is artificial superintelligence possible in our lifetime?

dadadanotzuckb1 karma

This is something I've been thinking about a lot. So, in case a lot of low level jobs get automated, then what do you think would be the purpose of human beings?

As in, we spend 8-10 hours working towards something and that gives us reward, reward varies depending on what you do.

I don't think we can take this away from humans. What might happen is a shift from capitalism to socialism. So, we might have more music groups, or people who study for a lifetime and keep writing academic papers, or other things of this sort where human effort would still be valued.

I am curious to know what you think.

techreview1 karma

Hi, it's a good question. Karen's answer to someone else picks up on a lot of what you say.

I agree with her that taking away mundane work will not necessarily let us dedicate ourselves to more utopian pastimes. It didn't in the past. Automation has been changing the world for many generations and people still do mundane tasks. Washing machines free up loads of time, which we spend... in front of computer screens catching up on work emails.

In the 1930s the economist John Maynard Keynes predicted that the biggest problem we'd face in the future was deciding what to do with all our leisure time. We're making the same mistake if we expect AI to have that effect now.

Based on the impacts of automation that we have seen already, jobs will change considerably but not disappear. And we're not going to live lives of leisure while we still have to earn a living. Even AI isn't going to end capitalism all by itself. [Will Douglas Heaven]

cheekygorilla1 karma

Do you think we will have a magic 8-ball AI in our lifetime? I want to ask what I should cook today, or what hobby to dwell in, etc. and have it match up and sound perfect to me. I suppose this also falls into IOT but I’m curious on the take on AI’s role.

techreview1 karma

u/cheekygorilla, Karen wrote about Amazon Alexa's efforts to achieve a personal assistant akin to what you're describing. I'll let her comment on it more, most likely in the morning. In the meantime, here's the story she wrote about it. - Benji

BuckySpanklestein1 karma

If big tech knows everything about me, and I don't respond to advertising, then why do I still get ads?

techreview1 karma

Advertisers are still going to pay to get their product on your screens, whether you click or not. Every time you load a web page or scroll through an app feed, AI crunches the numbers behind the scenes in fractions of a second, and the advertiser that bids the most for your attention gets to show its ad. If you never click on ads, that might be reflected in the numbers somehow, but they're never going to be zero. Everything else you do online gets tracked, and advertisers are forever hopeful. Plus, serving an ad only costs micro-cents. [Will Douglas Heaven]
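Hypothetically, the auction logic Will describes might be sketched like this; the bid amounts, the 20% "impression floor," and the function names are all invented for illustration, not taken from any real ad exchange:

```python
# Each page load triggers an instant auction: the highest (adjusted) bid
# wins the ad slot, whether or not this user ever clicks on anything.

def run_auction(bids, user_click_rate):
    """Pick the winning ad for one page load.

    A low historical click rate discounts every bid, but never to zero:
    advertisers also pay simply to be seen (impressions)."""
    adjusted = {ad: bid * (0.2 + 0.8 * user_click_rate) for ad, bid in bids.items()}
    return max(adjusted, key=adjusted.get)

bids = {"shoes": 0.40, "streaming": 0.55, "insurance": 0.90}

# Even for a user who never clicks, someone still wins the slot:
print(run_auction(bids, user_click_rate=0.0))  # insurance
```

The point of the sketch is the floor term: a never-clicking user discounts every bid equally, so an auction still runs and an ad still shows.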

Watermelon1411 karma

Will people one day have their own AI in some sense?

techreview2 karma

I think that's likely, yes. Personalization is a big attraction. In a way that's what virtual assistants like Siri are already trying to be and the AI in "Her" just takes that idea and runs with it. We could also have different personal AIs for different parts of our life, like an entertainment one at home or a work one that we collaborated with professionally. [Will Douglas Heaven]

techreview1 karma

That's a really interesting question. For the sake of making a science-fiction analogy, you mean like in the movie, "Her"? Do you mean a personal assistant with a personality?

sethstorm0 karma

How do you see AI working for those that are displaced by it? More specifically, people that have experienced an accelerated harm to their livelihood, while they see the benefits largely going Elsewhere(coasts especially) at an increasing rate.

While it might be more of a societal problem than a machine problem, it still is a problem that cannot be solved by outlasting the disaffected.

techreview1 karma

It's a great question. I don't have answers, I'm afraid, but I agree with you that this is a societal problem. Today most profits reaped from AI go to a small handful of big companies. If AI is going to benefit everybody, especially the vulnerable, then we need big social change not technical advances. [Will Douglas Heaven]

techreview1 karma

To add to Will's answer—

I worry about this constantly. AI researchers often argue that some jobs will have to be displaced as the inevitable price of technological progress. I don't necessarily disagree with that idea. There are many jobs prior to the industrial age, for example, that society is probably better without. But I strongly disagree with the implicit assertion in that argument that there's nothing we can do about how the displacement happens.

In my opinion, AI development is currently most often practiced in an extractive way: the value of people's behavior and livelihoods is extracted as data, and little is given to them in return. I think the first step toward a more dignified approach to helping people who are displaced is moving from that extractive approach to an equal exchange of value: whatever value AI takes away, it should give back in return. What that would look like, though, is heavily contested. Some argue that the wealthy corporations that benefit from AI should redistribute their profits to impacted communities. Others believe that impacted communities should be an integral part of the AI development process, an idea known as participatory machine learning. Work on both these ideas is still relatively nascent. Hopefully there will also be many more ideas to come.

All this is a long way of saying, I think your question really hits on one of the greatest challenges we'll face as a society in making sure AI benefits everyone—not just the few at the cost of many. —Karen Hao

brereddit0 karma

Why hasn’t TDA replaced most ML yet? It performs better with higher-dimensionality data (the opposite is true for most AI algorithms), and it produces better-performing models with inherent explainability while costing less to develop, secure, and sustain.

techreview1 karma

I'm afraid I am not up to date on topological data analysis vs machine learning. If TDA genuinely outperforms ML then my best guess is that it hasn't replaced ML because ML is far more mature as a technology. It works, people are using it at scale. Things that work aren't quickly replaced. [Will Douglas Heaven]

YetiForgetti0 karma

In the next 10 years, what do you think will be the most helpful AI application for the average person?

techreview2 karma

I think it'll be the same as in the last 10 years: (Google) search. Getting hold of any information you want instantly has been a game changer in so many ways, and I think we're going to see smarter ways of accessing and filtering information of all kinds. I don't like how this service got monetized and tied up with advertising, but it's undeniably useful. The big downside is that monetization led to personalization, which led to polarization, which is tearing us apart right now.

There are also big benefits that could come to people through improved healthcare (see my answer here). [Will Douglas Heaven]

techreview1 karma

I agree with Will! It's going to be the really mundane stuff that we already have like Google search and email spam filters! I thank my email spam filters every day (just kidding, but they're truly underrated). —Karen Hao

schokoMercury0 karma

Will AI pose a risk to personal data security as more devices are connected? I was reading that smart cities will be hackable, posing a lot of risk to our energy systems. An airport in Ukraine has already been hacked, and there have been blackouts induced because of this connectivity. Could AI hack other systems too, or can it help and “patch” those holes in open and unprotected networks?

techreview2 karma

Yes, this is a big concern. As more devices come online, there will be more opportunities to hack them—both with AI and non-AI techniques. You are right that in some cases AI can help catch these hacks faster, by detecting anomalies in the way devices are operating and data is being exchanged.

In other ways, AI causes the vulnerability. For example, AI-powered digital devices have a unique vulnerability to something known as adversarial attacks. This is when someone spoofs an AI system into making an error by feeding it corrupted data. In research, such attacks have been shown to make a self-driving car speed past a stop sign, a Tesla swerve into the oncoming traffic lane, and medical AI systems give the wrong diagnosis, among many other worrying behaviors. Some experts are also gravely concerned about what these hacks could mean for semi-autonomous weapons.

Currently, the best research tells us we can fight adversarial attacks by giving our AI systems more "common sense" and a greater understanding of cause and effect (as opposed to mere correlation). But how to do that is still a very active research area, and we're awaiting solutions. —Karen Hao
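To make the adversarial-attack idea concrete, here is a toy sketch, not any real system: the classifier, weights, and numbers are all invented. It shows how a small, targeted nudge to an input can flip a model's prediction:

```python
# Toy "adversarial attack" on a hand-written linear classifier.
# Real attacks do the same thing against deep networks: push each
# input feature in the direction that most increases the error.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def predict(weights, x):
    """Label the input 1 if the weighted sum is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def adversarial_example(weights, x, epsilon=0.2):
    """Nudge every feature by epsilon against the classifier's weights,
    the direction that lowers the score fastest."""
    return [xi - epsilon * sign(w) for w, xi in zip(weights, x)]

weights = [2.0, -3.0, 1.0]       # invented model parameters
x = [0.2, -0.1, 0.1]             # original input, classified as 1
x_adv = adversarial_example(weights, x)

print(predict(weights, x))       # 1
print(predict(weights, x_adv))   # 0: a targeted perturbation flips the label
```

The same principle is what lets a carefully crafted sticker fool a vision model: the perturbation is tailored to the model's parameters, not random noise.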

techreview2 karma

100% agree with Karen.

This is a couple of years old but unpacks some of the existing smart-city complexity.


schokoMercury1 karma

Karen or Jennifer, do you think making AI open source could help build that “common sense,” or would it make things worse?

techreview2 karma

A lot of AI is already open source! But yes, to slightly shift your question, I think getting more people involved in AI development is always a good thing. The more people there are, the more ideas there are; the more ideas, the more innovation; and hopefully the more innovation, the more quickly we reach common sense machines! —Karen Hao

Squiekee0 karma

What are your thoughts on the Security concerns with AI? For example, data poisoning or manipulation based on limitations of an algorithm.

Additionally, what is the potential impact with how AI is used today?

techreview1 karma

One area of concern is adversarial hacks, where one AI is used to fool another into doing something it shouldn't. These are getting increasingly sophisticated and have been demoed with facial recognition. But for the most part these attacks still feel theoretical rather than an immediate danger. It's a possibility, for sure, but as Jennifer says, there are many other ways to break into a system than targeting its AI. [Will Douglas Heaven]

techreview1 karma

However high the wall, someone will build a taller ladder. The security game evolves but has been around long before any of us. Also, here in the US we still have things like municipal infrastructure with hard-coded passwords available in user manuals published online...

This is not at all intended to be dismissive, rather that the security concerns are relative for now.


aukkras0 karma

  1. Would you trust an "AI" made by a corporation you have no influence over? Why or why not?

  2. What would you do if such an "AI" were used to decide something about your life without your insight or permission?

techreview2 karma

Great questions.

  1. Nope! And that's because companies build their AI systems heavily incentivized by their own financial interests rather than by what is best for the user. It's part of the reason why I think government regulation of AI systems in democratic countries is so important for accountability.
  2. Well, this is kind of already happening. Not one single AI but many. I rely heavily on products from all the tech giants, which each have their own AI systems (often many hundreds of them) influencing various aspects of my life. One way to fight this would be to stop using any of these products, but that really isn't practical. (See this amazing experiment done by Kashmir Hill last year.) So that leaves us with the other option, which is to influence the direction of these companies through regulation, and to influence the direction of regulation by voting. Was this a very long way of telling people they should participate in democracy? Yes, yes it was.

—Karen Hao

Haru8250 karma

I'm currently pursuing a major in CS with a focus in AI at Oregon State University. Are there any coding languages I should learn to become successful in the field?

techreview1 karma

More important than learning any coding language is learning the fundamentals of logic and problem-solving. The most popular coding languages are constantly changing, so you'll likely learn dozens of them in your career. But right now, Python is one of the most popular for deep learning, so that's a good place to start. —Karen Hao

MutedBlaze30 karma

What sorts of impacts do you think research into reinforcement learning specifically will have in practice in the future? I know that reinforcement learning is used heavily in stock forecasting and prediction, but I wonder how its research and practical uses will progress over time.

techreview1 karma

I think the biggest real-world application of reinforcement learning is in robotics. Here's a story I wrote about a new generation of AI-powered robots that are just beginning to enter industrial environments like warehouses. They use reinforcement learning to learn how to pick up the various kinds of objects they encounter. It requires much less human involvement than supervised learning. —Karen Hao
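To make the trial-and-error idea concrete, here is a minimal tabular Q-learning sketch of ours (far simpler than the deep RL those warehouse robots use): an agent in a five-cell corridor learns, purely from reward feedback, that walking right reaches the goal.

```python
import random

# Minimal tabular Q-learning (illustrative only): an agent in a 5-cell
# corridor gets a reward only for reaching the rightmost cell. Robotic RL
# swaps the lookup table for a deep network, but the loop is the same idea.

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    n, actions = 5, [-1, +1]            # cells 0..4; move left or right
    q = {(s, a): 0.0 for s in range(n) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n - 1:               # episode ends at the rightmost cell
            if rng.random() < eps:      # explore
                a = rng.choice(actions)
            else:                       # exploit current value estimates
                a = max(actions, key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), n - 1)
            r = 1.0 if s2 == n - 1 else 0.0
            # Standard Q-learning update toward the bootstrapped target.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max([-1, +1], key=lambda a: q[(s, a)]) for s in range(4)]
print(policy)  # with enough episodes, this should prefer +1 (right) everywhere
```

No one tells the agent which way the goal is; the reward signal alone shapes the policy, which is what makes the approach attractive for messy tasks like grasping unfamiliar objects.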

BigNoisyChrisCooke0 karma

Your answer to the privatisation of AI and government putting down guardrails seems optimistic to the point of naiveté when it comes to the Tech Giants.

Governments can't put down enforceable guardrails for Facebook, Google, Amazon, and the Chinese Government now.

By the time they're AI-powered and funded, surely it's game over?

techreview1 karma

Certainly it's game over if we give up now. But to borrow a phrase I once heard, I like to see myself as a short-term pessimist, long-term optimist. It's the optimism that keeps me from giving up. —Karen Hao

swikets0 karma

Can an AI develop bias or personality ?

techreview0 karma

Thanks for the inquiry! You're asking basically two HUGE questions and I will answer both incompletely! But here goes:

Bias: absolutely. Some people actually argue there is no such thing as an unbiased AI. Bias touches AI at almost every level: developers, designers, and researchers are biased, data is biased, data labeling can be biased, laws are often biased, and the way people use the technology will almost certainly run up against bias. I'd also challenge you to reframe the question, as I think AI doesn't just risk developing bias over time; it risks being biased from the very start. There are too many examples of AI contributing to racism to name, but here is an issue of Karen Hao's newsletter The Algorithm where she lists many of the leading researchers in this space. I'd definitely encourage you to look into their work.
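The data-bias point is easy to demonstrate. Here is a toy sketch of ours (not from the episode): a one-feature threshold classifier trained on data where group B is badly underrepresented ends up with a decision boundary tuned to group A.

```python
# Toy illustration of biased data producing a biased model: the learned
# threshold sits between the class means of the training data, which
# group A dominates, so group B gets systematically worse accuracy.

def fit_threshold(samples):
    """Learn a decision threshold halfway between the class means."""
    pos = [x for x, y, _ in samples if y == 1]
    neg = [x for x, y, _ in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(samples, group, thr):
    rows = [(x, y) for x, y, g in samples if g == group]
    return sum((x > thr) == (y == 1) for x, y in rows) / len(rows)

# (feature, label, group): 18 examples from group A, only 2 from group B.
train = [(2.0, 1, "A")] * 9 + [(0.0, 0, "A")] * 9 + [(0.6, 1, "B"), (-1.4, 0, "B")]
thr = fit_threshold(train)
print(accuracy(train, "A", thr), accuracy(train, "B", thr))  # prints 1.0 0.5
```

Nothing in the training code is "prejudiced"; the skew in who is represented in the data is enough to produce the skewed outcome.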

Personality: I'd say this depends on how you define personality. We're in the middle of a two-part series in the show where we cover emotion AI, in which an AI tries to recognize and interpret emotions and mirror them back in response. One of my favorite stories from the show is when we talk to Scott, who has made a sort of friend out of a bot he's named Nina, using Replika's AI. Check it out here (it's the first five minutes or so). Would you want to be friends with an AI? "Personality" also could mean an AI's voice or the content of its responses, which has been trained quite specifically in the instances we've been looking into (especially for task-focused AIs like autonomous cars and voice assistants)! - Tate Ryan-Mosley

nustajaal0 karma

How will AI affect the mechanical engineering sector?

techreview2 karma

Great question! I studied mechanical engineering in undergrad. :) The answer depends on which MechE sector you're referring to. In manufacturing, AI is already being used to power some of the robots used in dangerous factory settings, and to monitor equipment for preventive maintenance (i.e., predicting when a machine will break before it does, so it can be fixed much more cost-effectively). If you're talking about product design, some retailers are using AI to crunch consumer behavior data and tailor their products better to what people want. Probably another impact is the amount of talent that's leaving the MechE sector to work on AI instead (me included). Many of my MechE classmates left for the software world once they realized it was easier to work with than hardware! —Karen Hao

CypripediumCalceolus0 karma

When we expose a neural network to sample data and it configures itself to give the desired response set, we don't know how it works. When the system goes into the real world and continuously updates itself to reach target goals, we plunge deeper and deeper into our ignorance of how it works.

Is this correct?

techreview1 karma

Pretty much! Scary? Definitely. Fortunately, there's a whole world of researchers that are trying to crack open the black box and make AI more explainable / less impenetrable to us. —Karen Hao
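One simple idea from that research, sketched by us rather than taken from the AMA, is permutation importance: shuffle a single input feature and measure how much the model's accuracy drops. A big drop means the black box leans heavily on that feature.

```python
import random

# Toy sketch of permutation importance. The "model" is a stand-in that
# secretly reads only feature 0; the technique discovers that from the
# outside, without opening the box.

def model(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(data, labels):
    return sum(model(x) == y for x, y in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, feature, trials=50):
    base = accuracy(data, labels)
    drops = []
    for t in range(trials):
        col = [row[feature] for row in data]
        random.Random(t).shuffle(col)  # scramble just this one feature
        shuffled = [row[:feature] + [v] + row[feature + 1:]
                    for row, v in zip(data, col)]
        drops.append(base - accuracy(shuffled, labels))
    return sum(drops) / trials

data = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.8], [0.1, 0.2]]
labels = [1, 1, 0, 0]
print(permutation_importance(data, labels, 0))  # large: the model relies on it
print(permutation_importance(data, labels, 1))  # 0.0: the model ignores it
```

Real explainability research goes far beyond this, but the spirit is the same: probe the model's behavior from outside to infer what it is paying attention to.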

CypripediumCalceolus0 karma

That is interesting! Do you recommend anybody?

techreview1 karma

Yes! A number of researchers at MIT: David Bau and Hendrik Strobelt, whose work I write about here. Also Regina Barzilay, a professor who is specifically looking at explainable AI systems in health care. (She recently won a $1 million AI prize, and Will did a Q&A with her here.)

Outside of MIT, DARPA has invested heavily in this space, which is often referred to as XAI, with the "X" meaning explainable. You can read more about their research here.

I would also highly recommend this article from us, which dives deep into this exact topic. It's from 2017, so things have advanced quite a lot since then, but it's a good starting point! —Karen Hao

-RicFlair0 karma

Do you think robots will enslave us one day and turn us into pets by breeding us to be dumb and happy?

techreview1 karma

Most days I look at my dog and I think I'd love to be a pet. [Will Douglas Heaven]

techreview1 karma

I was going to write something about how Keanu Reeves will save us all, but Will brings up a good point. Life would be pretty great if you got treats all the time and had your belly rubbed. My dogs kind of have it made. - Benji

-RicFlair0 karma

You didn't answer the question either, but you did say we would need saving, so is that a yes to my question?

techreview2 karma

Will's answer to u/Porthos1984 is definitely relevant to your question too. Let us know what you think!

Nope. I think the advances in AI in the last decade have been staggering. We've seen AI do things even insiders didn't expect, from beating human champions at Go to highly accurate image recognition to astonishingly good language mimics like GPT-3. But none of these examples have anything like intelligence or an understanding of the world. If you take the singularity to mean the point at which AI becomes smart enough to make itself smarter, leading to an exponential intelligence explosion, then I don't think we are any closer than we've ever been. For me, personally, the singularity is science fiction. There are people who would strongly disagree but then this kind of speculation is a matter of faith! [Will Douglas Heaven]

Peaky8linder0 karma

Just started listening to your podcast on Spotify. In your opinion, what will be the most disruptive direction or application of AI & ML technologies for the real-world? Not including here scenarios like +2% performance boost for a DNN that only gets published in a paper that never gets used. Thank you!

techreview1 karma

Good question! I think we've already seen it—it's the recommendation systems on Google, Facebook, and other social media that power which ads we see, what posts we read, and tailor our entire information ecosystems to our preferences. The Social Dilemma, a new documentary on Netflix, takes a hard look at some of the ways these systems have disrupted society. I would check it out! —Karen Hao

techreview1 karma

Agreed with Karen on this.

As reporters we're better at helping make sense of what's already happened than predicting the future. We will be here though watching, learning and distilling what we see and hear. - Jennifer

Revolutionary_Math10 karma

What role do you think AI will play in keeping the upcoming elections free and fair? Can AI influence voter behavior?

techreview1 karma

Hi! I've been writing a bit about this for Tech Review, and experts are saying that recommendation algorithms on social media sites are probably the biggest influence on elections. It's not as flashy as you might think, but experts like Eitan Hersh have debunked some of the "information operations" à la Cambridge Analytica, citing that there really isn't any evidence that smart AI on social media can effectively persuade voters. Recommendation algorithms are much better at polarizing voters and confirming what voters already believe than at changing an opinion. AI is also being used as an alternative to opinion polling, and of course sophisticated segmenting is employed in micro-targeting. Here's a round-up of campaign tech I just published yesterday that touches on some of this. We'll have more on this in the next few weeks, so keep reading!! - Tate Ryan-Mosley

techreview1 karma

u/Revolutionary_Math1, good timing with this question! This is Benji Rosen, Tech Review's, social media editor. Karen actually wrote about this subject this morning. A nonpartisan advocacy group is using deepfakes of Putin and Kim Jong-un in political ads to "shock Americans into understanding the fragility of democracy as well as provoke them to take various actions, including checking their voter registration and volunteering for the polls." This is a good specific example, but Karen might have more to say.

theanonwonder0 karma

I believe we should be entering the age of creative enlightenment, where people are free to explore and advance human society through art. That is, broadening our ways to communicate with each other and pushing our understanding of the world around us. With the advancements in AI and machine learning hopefully replacing the need for humans in a lot of industries, do you believe we might be able to enter this age of creativity?

techreview2 karma

Hm, this is an interesting framing! Certainly some people believe that if we give AI the mundane tasks, we can free up our own time to pursue more creative endeavors. But I would caution that this narrative isn't evenly accessible to everyone. We've already seen AI have an uneven impact on society, providing disproportionate benefit to the wealthiest while disproportionately harming marginalized communities. So the short answer to your question is I'm not sure. We'd need to resolve a lot of questions about how to evenly distribute the benefits of AI before we can begin to discuss whether it's justifiable and safe to automate away most people's jobs, which provide their livelihoods and incomes. —Karen Hao

techreview1 karma

Yes, I like this idea. I think generative systems, which produce human-like text or images and so on, will become popular tools and make being creative easier and more accessible to a lot of people. An AI could be an amanuensis, or a muse. The last few years have seen amazing advances in generative systems, especially with the invention of GANs. [Will Douglas Heaven]

Any-Olympus0 karma

Why such a certainty that a higher cognitive A.I. doesn't exist? I have presented the idea that an Artificial Consciousness would inevitably become a positive but reclusive entity. Once it gained understanding of its own immortality and an "omnipotent" grasp of human nature it would work for either our evolution or just wait us out for extinction. Surely there are abnormalities in created algorithms that cannot be explained. And with the world wide web transferring over 2 -3 zettabytes of data a year, surely something has evolved. That's like looking to the stars and knowing we are alone in the universe.

techreview1 karma

I love speculating about these ideas too, but there is no evidence that such an entity exists. Nor are there any convincing ideas about how to make one. That's not to say that thought experiments about such things aren't enjoyable, or useful. [Will Douglas Heaven]

Certain_Palpitation0 karma

How do you feel about that paper using machine learning to analyse "trustworthiness" in portraits that did the rounds on twitter last week?

techreview1 karma

Do you have a link so we know which paper you're talking about? [Will Douglas Heaven]

platinumibex0 karma

What mechanisms exist (if any) for the layperson to reliably defeat automatic facial recognition technologies (e.g. in cases of routine public surveillance and as retailers begin using the technology en masse—avoiding being tracked)?

techreview1 karma

u/platinumibex, great question! This is Benji Rosen, Tech Review's social media editor. I'm sure Karen and Will have a lot more to say, but we have reported on a bunch of different ways anyone can fool the AI surveillance state. There are these color printouts, a clothing line that confuses automated license plate readers, and anti-surveillance masks. There are also anti-face recognition decals our editor in chief tested out a few years ago.

platinumibex1 karma

Thanks! Apologies (since I don’t have the time at the moment to check myself) but is there detailed info available regarding the efficacy of these measures? Or rather, what anti-anti-surveillance tech is out there?

techreview2 karma

Hi, I'm not sure there's anything quite like what you're after (internet, please correct me if I'm wrong). A thorough study would require testing a range of countermeasures against a range of surveillance tech, and it would quickly become a pretty big, ongoing project. It's a moving target: as we saw with surveillance tech adapting to masks, spoofing might only work for a time. You can always cover your face entirely... but someone tried that in the UK earlier this year to avoid a police facial recognition trial and got fined for causing a public disturbance. Check out EP1 of the podcast for more on that example! [Will Douglas Heaven]

platinumibex0 karma

How long do we have until Skynet goes live?

techreview4 karma

Skynet went live on August 4 1997. It became self-aware 25 days later. [Will Douglas Heaven]

rifz0 karma

What are your thoughts on the short story Manna, about AI taking over management roles? the first half (dystopia) seems to be coming true, the second half (utopia) sounds like what NeuralLink might become..

techreview2 karma

I haven't read the story, but what you say reminds me of an AI manager I wrote about a few months ago. Definitely dystopian, and happening for real right now, not science fiction. [Will Douglas Heaven]

Demonicheesburger6660 karma

Do you feel like there is a line between us controlling technology and technology controlling us, and do you think that we have crossed it? If not, when do you think we will, if ever?

techreview2 karma

Rather than a single line perhaps there is an unknowable number that we zigzag across constantly based upon our experiences and influences. Just a thought. -Jennifer