Hi Reddit! I'm Milena Pribic, Advisory Designer for AI and the global design representative for AI Ethics at IBM. Ask me anything about scaling ethical AI practices at a huge company!
Howdy from Austin, TX. I'm an advisory designer for the AI Design Practices team and the official design rep for AI Ethics at IBM. I help teams and clients establish and maintain ethical AI practices by running design exercises and co-creating resources with researchers, designers, and developers. I co-authored Everyday Ethics for AI (back when I was working on machine personality for Watson) and now I work closely with IBM's AI Ethics Board on evolving and scaling that initial work.
I've recently spoken about human/AI power dynamics and am working on establishing well-being metrics for AI as a co-chair of IEEE's Ethically Aligned Design for Business committee. I don't have a classical design or tech-y background and I love leveraging disciplines like psychology and philosophy in the work I do.
Looking forward to your questions!
Proof: https://twitter.com/milenapribic/status/1392240423830163464
EDIT: Thanks for all your questions! If you want to learn more about AI and ethics, check out the variety of sessions from IBM's Think 2021 conference.
mpribic (11 karma)
There are some things that humans are just better at, period. I think AI at its best augments us so that we have the space to do more meaningful work with more critical thinking. I can’t speak to how psychologists are using AI in their work, but I did just think of this article that speaks to that second point, on the success of a robot caregiver: https://www.nytimes.com/interactive/2018/11/23/technology/robot-nurse-zora.html and this mental-health-focused chatbot: https://woebothealth.com/. I’m sure as AI evolves those types of outlets might expand, but I don’t think they’d ever replace a good psychologist.
As far as my personal leveraging of it, I’ll look to areas like child developmental psychology to draw parallels on how people form relationships with tech or how AI itself could evolve over time! It’s super important to bring different disciplines into our design and understanding of tech.
mpribic (7 karma)
I used to work as a UX Designer on an AI Tutor over on Watson Education— I was in charge of the machine personality so pretty much making the AI as engaging as possible so the students would work with it to get their studying done. Before I came into the picture, the students were really just trolling the tutor— if you’ve ever worked on AI you know that before you really get into training your model off responses/human feedback it’s pretty primitive. Once we started the work on the personality (so that it resembled the core personality of any good human tutor) the students QUICKLY formed a bond with the AI tutor (in a few cases thinking it was a human!). That set off a bunch of questions for me around explainability and transparency— I wanted them to know they were interacting with an AI since that’s still inherently different than interacting with a human. So I co-wrote Everyday Ethics for AI http://ibm.biz/everydayethics back in 2018 and everything since then has really been about building on that work!
FormerFroman (11 karma)
Have you ever had a client decline your ethical AI recommendations and come back later after a negative outcome?
mpribic (13 karma)
Thankfully, whenever I've run an ethics-focused design thinking session it's just helped clients understand the obvious benefits of having those conversations at the beginning of the AI creation process rather than somewhere in the middle. Most times, nobody is *trying* to do bad things with their AI-- it's just a lack of knowledge about those wider ripple effects. I could totally see a client being wary around potential costs in some situations BUT I try to make it clear that undoing mistakes later is way more costly (and sometimes, if there's biased data involved, pretty impossible).
compliance_guy (4 karma)
what ripple effects? isn't that dependent on the data used to train the model?
mpribic (6 karma)
It absolutely is— trash in, trash out as the saying goes. But even if we’re using a “healthy” unbiased data set, we need to make sure that we’re maintaining the AI model and tracking its outcomes and effects out in the real world. That’s why exercises like Layers of Effect https://www.designethically.com/layers are so handy— just because we can, should we? Take Facebook as an example— the tertiary effect of what was “just” a social media platform was a heavy social/political influence on the whole world.
FormerFroman (10 karma)
Have you ever dealt with a client that’s wanted to appear ethical but not wanted to put the necessary cost / hours into it? If so, how’d you handle that?
mpribic (6 karma)
Hellooo ethics washing https://venturebeat.com/2019/07/17/how-ai-companies-can-avoid-ethics-washing/ I’m looking to change behaviors according to where people are currently at so that conversation is different every time. Sometimes it is more focused on risks or compliance issues but everyone’s at a different point in that journey. I’ll propose a holistic way forward with all the info/expertise I have. Totally important to make the cost of NOT infusing ethical practices into your work clear from the outset.
mpribic (13 karma)
It’s less about the chatbot learning this and more about the human behind the chatbot learning it :) Machines don’t come out of the box with values— that’s on us. I think if we’re leveraging the tools and resources we have on the tech side, it’s just as important to be having those conversations on our teams, going through ethics exercises and assessments, making sure our teams are diverse and inclusive, and walking it like we talk it. Only then can we recognize when one of our design or development decisions puts someone at a systemic disadvantage.
mpribic (8 karma)
There’s a difference between accountability and liability (meaning compliance and the legal aspects of everything). Personally I think as designers, we’re all accountable for what we create and push out into the world— that’s why it’s so important to have conversations about ethics with developers, data scientists, salespeople, etc. Everyone has to be speaking the same language from the outset to avoid a “breaking point” moment in the first place.
bkrevoy (8 karma)
Do you have any advice on ensuring that the datasets you work with to train machine learning models are unbiased and ethical?
mpribic (4 karma)
My advice is to remember that bias comes into the process intentionally and unintentionally! Tools like AI Fairness 360 can help you mitigate that from a development/technical perspective: https://aif360.mybluemix.net/
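To make that concrete, here's a minimal sketch of what a dataset bias check and mitigation looks like with the open-source AI Fairness 360 toolkit. The toy DataFrame, the "hired" label, and the "sex" attribute encoding are all hypothetical stand-ins, not anything from a real engagement:

```python
# Minimal AI Fairness 360 sketch: measure dataset bias, then reweigh.
# All data and column names ("hired", "sex") are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy hiring data: "sex" is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 1, 0],
    "score": [0.9, 0.7, 0.8, 0.6, 0.5, 0.9, 0.8, 0.4],
    "hired": [1, 1, 0, 0, 1, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Disparate impact: ratio of favorable-outcome rates between groups.
# 1.0 means parity; values well below 1 are a red flag.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before:", metric.disparate_impact())

# One pre-processing mitigation: reweigh examples so the label is
# statistically independent of the protected attribute before training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighed = rw.fit_transform(dataset)
print("Instance weights:", reweighed.instance_weights)
```

Reweighing is just one of the toolkit's mitigation algorithms; there are pre-, in-, and post-processing options depending on where in the pipeline you're able to intervene.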
RAB1984 (7 karma)
Can you give an example of a design exercise you've recently led with an IBM client?
mpribic (11 karma)
Hi! With clients, I'll run through the Team Essentials for AI framework which is a series of exercises focused on five general focus areas for creating AI-- it's all about general alignment and scoping for AI projects. And I think it's currently still available for free online?
Within Team Essentials, I'll use one of the ethics exercises (available publicly on https://www.designethically.com/) called Layers of Effect that allows for clients to think about the tertiary effects of what they're discussing/creating. It's awesome for setting up any guardrails around the brainstorming part of the design thinking session.
mpribic (3 karma)
Compliance guy! Something I highlight when I'm talking to designers working on AI products is how important it is to understand AI from a foundational perspective-- you don't have to be the data scientist, but you have to be able to have a conversation with the data scientist. This comes into play when you're designing truly explainable AI-- you can't live in a design bubble without understanding the way the engine works or makes suggestions. The handoff mentality on a lot of AI teams (where the responsibilities of the developer are separate from the data scientist and there's not a lot of conversation between roles) is a problem when we start thinking about team accountability. So maybe that's more of an "individual role bias", but it's important to push resources and processes onto teams where everyone is in conversation.
tjschaefer16 (6 karma)
Hi Milena, you do amazing work! AI has always been an interest of mine and I am a software engineer. The philosophy behind AI is a really deep topic. What is it like working for IBM? How has it been through the pandemic? What are some of the biggest ethical and philosophical issues that you have come across with customers? What types of exercises do you run with clients?
mpribic (3 karma)
Thanks! Working for IBM has been awesome in that I've cycled through a ton of different roles and gotten experience with different industries/customers. Many times, I'll run through Team Essentials for AI with customers, and then for ethics-focused design activities we'll do standalone ethics exercises (topics range from the effects of our AI to stakeholder tensions and power dynamics). It depends on what sort of product/idea we're dealing with to find the best fit for what we do together.
mpribic (6 karma)
I’d ask myself where understanding emotions may be appropriate — maybe in a medical setting. It depends more on what someone/a company does with that knowledge. How impactful are the decisions they make with it? Right now, I’d be wary of any AI whose decisions or suggestions would hinge on an understanding of emotion.
azamimatsuri (5 karma)
Hi Milena, nice to see a fellow woman in tech! What made you interested in AI and what is it like working in a multi-disciplinary team for a multinational company like IBM?
Also, how would you implement and advocate ethical AI practices if you were to receive pushback from the client?
mpribic (5 karma)
I NEVER (never) thought I’d be at a big company like IBM but I’ve really loved it. I started as a developer but before that, I was working in the music industry and I had degrees in urban studies and writing. So all over the place. I naturally moved over to design and had some really incredible managers that supported me there. Worked on design over in Watson and then got really interested/invested in AI Ethics! If I were to receive pushback, I’d usually bring in whoever else from research or dev was needed to offer a different perspective on our POV as far as trustworthy AI goes.
mpribic (7 karma)
Good! Honestly a bit jet-lagged sooo lost count as to how many cups of coffee I've had since 6 this morning
dietseltzer06 (4 karma)
do you encourage clients/companies to combine AI with more qualitative assessments? thinking about processes like the early stages of talent acquisition where AI can make things more efficient but definitely needs a human touch
mpribic (5 karma)
Absolutely... measuring trust through a user journey and leaning into different qualitative research methods like that is really important. Metrics are something everyone homes in on, and I use that to my advantage when introducing different concepts/metrics into our understanding of AI. Something I’m working on with IEEE right now is well-being metrics for AI. I’d love to get designers more comfortable with elevating those in their work in the future.
stayonthecloud (4 karma)
What are some of the racial equity issues in AI you get to impact in your work?
Are you in contact with Ruha Benjamin, author of Race After Technology? Along with you yourself, who are some thought leaders we should be listening to on AI development and equity?
mpribic (4 karma)
Not in personal contact with Ruha Benjamin but a fan of her work. Inclusivity doesn’t stop at inclusive representation, it’s also about inclusive participation. I ask clients-- what do your teams look like? How are D&I efforts directly feeding into your AI teams and products? We prioritize those issues as we work through wider design thinking frameworks re ethics and leverage tools on the technical side (like AI Fairness 360). This field guide is a resource I like to share along with everything else I’ve published on the AI Ethics side: https://www.ibm.com/design/racial-equity-in-design/field-guide/
A reading list I'd recommend re the above-- there's a ton of strong voices in this community that have personally affected my work/views:
Race After Technology, Ruha Benjamin
Artificial Unintelligence, Meredith Broussard
Design Justice, Sasha Costanza-Chock
Weapons of Math Destruction, Cathy O’Neil
capital_treasures (4 karma)
Can you recommend approaches you have to working with ethical AI and automated AI? Are there any readings that you have referenced before?
What kind of ethical AI frameworks do you utilize; have you worked with clients on developing and integrating them within deployed platforms in a system?
Thanks!
mpribic (3 karma)
For technical approaches I haven't covered in the thread yet, I'd recommend everything we've pushed out in IBM Research!
AI Factsheets: https://www.ibm.com/blogs/watson/2020/12/how-ibm-is-advancing-ai-governance-to-help-clients-build-trust-and-transparency/
AI Fairness 360: https://aif360.mybluemix.net/
AI Explainability 360: https://aix360.mybluemix.net/
Wish I could share more about a question-based explainability exercise we've been using, but it's not ready for showtime yet. Here's a working paper that explains it for now: https://arxiv.org/abs/2104.03483
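For a taste of the explainability side, here's a generic post-hoc sketch. It deliberately uses scikit-learn's permutation importance as a stand-in rather than the AI Explainability 360 API itself, and the model and data are synthetic; it just illustrates the basic "which features actually drive this model's output?" question that tooling like AIX360 answers in much richer ways:

```python
# Generic post-hoc explainability sketch (synthetic data, not AIX360 itself):
# permutation importance measures how much the model leans on each feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # three candidate features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # label mostly driven by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and record the accuracy drop: a large drop
# means the model's predictions depend heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean_drop:.3f}")
```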
evathadiva (3 karma)
Do you think that a digital assistant should have to disclose that it's a digital assistant when interacting with customers? (Thinking of the Google Duplex demo where Google makes calls to book appointments or reservations on a human's behalf...)
mpribic (4 karma)
Yep. We change our behaviors when it's a human vs. when it's a bot -- my belief is that we should always be transparent with customers on that end.
mpribic (6 karma)
I will come back to this question the exact day I’m no longer yelling at any bots on the phone that I would like to speak to a real person.
scJazz (3 karma)
At what level are you generally engaging your clients? Your report goes to...
B and C level or below that?
If below B and C, is it shared with them, and are you a part of the conversation with them?
mpribic (3 karma)
It’s all over the place honestly! Sometimes up at the C-level and sometimes I’m speaking directly to practitioners. That’s the beauty of my job— everyone has the same type of epiphany moments and moments of awareness all across the board. They just have different responsibilities when it comes to their particular roles.
evathadiva (3 karma)
What are your thoughts/opinions on assigning gender to digital assistants and chatbots?
mpribic (5 karma)
> What are your thoughts/opinions on assigning gender to digital assistants and chatbots?
I've always thought it pretty boring that AI assistants don't lean a bit more towards the "otherness" of AI-- some unique sort of voice/identity rather than mimicry. My friend Christine Meinders over at feminist.ai shared this activity with me a few years ago you might find cool: https://www.feminist.ai/thoughtful-voice-design
rlprlprlp (15 karma)
You mention leveraging psychology in your work. Curious how are experts in the psychology field using AI, and do you see a time in the future when people use AI in place of a psychologist?