Highest Rated Comments


Mozilla-Foundation · 1281 karma

There were so many things that left us feeling like these apps were especially creepy. One of the things that really stood out for me was just how much sensitive, emotional, and personal information these apps can collect. And then you just have to trust them with all that information. And to be honest, I just don’t trust most of these companies. They seem to care about profit first and protecting their users’ privacy way down the line from that.

Another thing that really struck me as creepy was the issue of consent. Lots of the privacy policies we read said things that sounded quite nice, like, “We will never share or sell your personal information without your consent.” Hey, that sounds great. I just don’t give my consent and I’m all good, right? Well, maybe not. Because consent is confusing when it comes to these (and all) apps. For some of them, it appears that simply downloading and registering with the app counts as giving consent to use your personal information. Which probably isn’t what most people think of as consent. And then they tell you that to withdraw consent, you have to delete the app. Yuck.

And then there’s the idea that these companies can change their privacy policy whenever they want to change how they use/protect your personal information. The Verge wrote a great article about that after we published our *Privacy Not Included guide, which I really appreciated. https://www.theverge.com/2022/5/4/22985296/mental-health-app-privacy-policies-happify-cerebral-betterhealth-7cups

-Jen C

Mozilla-Foundation · 386 karma

That is a great point. As Misha and I were doing this research, we had this thought too: should mental health apps even exist? However, there is a way to do this better than many companies are doing it now. Non-profit organizations actually make a few of the apps on our list, and their privacy policies are generally really great. They don’t collect a lot of personal information, and they don’t share it all around for advertising or treat it like a business asset. That is a great approach: have non-profits create and run these apps. That would mean people would have to support those non-profits with donations, though. And we did see that non-profits just don’t have the same resources for security teams to think about securing these apps, which stinks.

The other thing we’d love to see is for policy and regulation to catch up with the use of mental health (and all health) apps. Right now there are not a lot of rules about what data these apps can collect, how they can use it, and how it must be protected. That needs to change. Stricter health privacy laws like HIPAA do cover things like the actual conversation you have with a human therapist, but not things like the fact that you’re talking with a therapist, how often, for how long, when, etc. That’s all data a company can use or share or sell.

Also, there’s a really interesting question about whether AI chatbots are covered by HIPAA. Talking to a human therapist means you have stricter privacy protections. Talking to an AI therapist doesn’t necessarily come with the same protections. -Jen C

Mozilla-Foundation · 342 karma

Absolutely! Be concerned. Ask questions to all your doctors and therapists about how they see your data being handled. Only share what is absolutely necessary. Opt out of data sharing where you can. Ask your health care provider to only share with the telehealth company what is absolutely necessary. Raise your concerns and have a conversation with your health care providers. -Jen C

Mozilla-Foundation · 291 karma

This is true. A mood/symptom tracker like Bearable does have to collect that data to provide its service. The concerns come in when you have to trust companies to protect that sensitive information, and when they say they could use some of your personal information (but not necessarily your sensitive health data) to target you with ads or to personalize the service to keep you using it even more. What we want to see are absolute assurances in the privacy policies of these companies that they are taking every step possible to protect and secure that data. We want to see things like:

  1. A clear statement promising to never sell that data.
  2. A clear statement showing what third parties they share this data with and for what purpose.
  3. When they say they will never share personal information without your consent, what does that consent look like? How clear is it?
  4. We want to see a promise to never collect data from third parties to combine it with data they already have on you unless there is an absolutely clear reason they need to do that to offer the service.
  5. And we want to see that all users, no matter where they live and what laws they live under, have the same rights to access and delete their data.

This is what we dream of seeing as privacy researchers. This, and for companies to stop using annoyingly vague and unclear language in their privacy policies that leaves them all sorts of wiggle room to use your personal information however they want.

-Jen C

Mozilla-Foundation · 248 karma

There are a couple of concerns we have. First, it’s not always that the data is compromised. That would mean it has been leaked or breached or hacked or snooped on by an employee who shouldn’t have access. That can and does happen, and YIKES! You really don’t want that to happen to your very sensitive chats with therapists, mood tracking info, or your conversations about your suicidal thoughts. This is also why nearly all privacy policies have a line that says something along the lines of, “Nothing on the internet is 100% safe and secure. We do our best but don’t make any guarantees. So, yeah, we don’t have legal liability if something bad happens to the data you share with us. If you don’t want to have your personal information potentially leaked or hacked, don’t share it with us.” This is paraphrasing, of course, but that’s what they mean.

Then there is the data that isn’t compromised but just shared. The companies tell you in their privacy policy that they will use it for interest-based ad targeting or personalization, or share it with third parties for more ad targeting, or combine your personal information with even more information they get from third parties like social media, public sources, or data brokers. That data isn’t compromised, as such, but it’s out there and they treat it like a business asset to make money. And that felt super gross to us: to target people at their most vulnerable, gather as much sensitive, personal info as possible, and then use that to make as much money as possible.

-Jen C