Highest Rated Comments


Michael_Brent · 7 karma

Hi! My name’s Michael Brent. I work in Tech Ethics & Responsible Innovation, most recently as the Data Ethics Officer at a start-up in NYC. I’m thrilled to learn about your podcast and grateful to you all for being here.

My question is slightly selfish, as it relates to my own work, but I wonder about your thoughts on the following:

How should companies that build and deploy machine learning systems and automated decision-making technologies ensure that they are doing so in ways that are ethical, i.e., that minimize harms and maximize the benefits to individuals and societies?

Cheers!

Michael_Brent · 2 karma

This is really helpful stuff! I agree that the responsible development and use of ML/AI systems must be built-in from the start, in all the ways you suggest.

Although of course the contexts of use vary across different ML/AI products, in my experience thus far, the ethical challenges tend to correspond to three categories:

• The data used to train models

• The models themselves

• The intended and unintended uses

To build responsibly, each category requires a series of questions aimed at clarifying the ethical risks involved. For example, we want to know the sources of our data, and whether it's complete and accurate, or limited and biased, etc. We also want to know how our models have been built and tested, which algorithms have been deployed, etc., so that our products are transparent. And we want to know the intended or ideal use-cases for our products, in order to anticipate how they might be abused or unintentionally bring about disastrous consequences. All of this work, and more, it seems to me, should be performed by as wide and diverse an array of people as possible. No easy task, but I'm optimistic.
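To make the checklist idea concrete, here is a minimal sketch of how a team might track answers to per-category risk questions. Everything here is illustrative (the class, the questions, and the category names are hypothetical, not a real tool or framework):

```python
# Hypothetical sketch of a risk-review checklist keyed to the three
# categories above: training data, models, and intended/unintended uses.
# All class names and questions are illustrative examples only.
from dataclasses import dataclass, field


@dataclass
class RiskReview:
    """Tracks yes/no answers to ethical-risk questions per category."""

    # Illustrative questions; a real review would be far more extensive.
    QUESTIONS = {
        "data": [
            "Are the data sources documented?",
            "Has the data been checked for completeness and bias?",
        ],
        "model": [
            "Are the model architecture and training process documented?",
            "Have test results been recorded for transparency?",
        ],
        "use": [
            "Are the intended use-cases written down?",
            "Have foreseeable misuse scenarios been reviewed?",
        ],
    }

    answers: dict = field(default_factory=dict)

    def record(self, category: str, question: str, answered_yes: bool) -> None:
        """Record an answer to one question in one category."""
        self.answers[(category, question)] = answered_yes

    def open_items(self):
        """Return every question not yet answered 'yes'."""
        return [
            (cat, q)
            for cat, qs in self.QUESTIONS.items()
            for q in qs
            if not self.answers.get((cat, q), False)
        ]


review = RiskReview()
review.record("data", "Are the data sources documented?", True)
print(len(review.open_items()))  # questions still open for review
```

The point of the sketch is just that each category gets its own explicit question list, and anything not affirmatively answered stays visible as an open item rather than being silently skipped.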

Michael_Brent · 2 karma

Thanks for the pointer!

Michael_Brent · 0 karma

An excellent resource, indeed.