I am Danielle Citron, professor at Boston University School of Law, 2019 MacArthur Fellow, and author of Hate Crimes in Cyberspace. I am an internationally recognized privacy expert, advising federal and state legislators, law enforcement, and international lawmakers on privacy issues. I specialize in cyberspace abuses, information and sexual privacy, and the privacy and national security challenges of deepfakes. Deepfakes are hard-to-detect, highly realistic videos and audio clips that make people appear to say and do things they never did, and they can go viral. In June 2019, I testified at the House Intelligence Committee hearing on deepfakes and other forms of disinformation. In October 2019, I testified before the House Energy and Commerce Committee about the responsibilities of online platforms.

Ask me anything about:

  • What are deepfakes?
  • Who has been victimized by deepfakes?
  • How will deepfakes impact us on an individual and societal level – including politics, national security, journalism, social media and our sense/standard/perception of truth and trust?
  • How will deepfakes impact the 2020 election cycle?
  • What do you find to be the most concerning consequence of deepfakes?
  • How can we discern deepfakes from authentic content?
  • What does the future look like for combatting cyberbullying/harassment online? What policies/practices need to continue to evolve/change?
  • How do public responses to online attacks need to change to build a more supportive and trusting environment?
  • What is the most harmful form of cyber abuse? How can we protect ourselves against this?
  • What can social media and internet platforms do to stop the spread of disinformation? What should they be obligated to do to address this issue?
  • Are there primary targets for online sexual harassment?
  • How can we combat cyber sexual exploitation?
  • How can we combat cyber stalking?
  • Why is internet privacy so important?
  • What are best-practices for online safety?

I am the vice president of the Cyber Civil Rights Initiative, a nonprofit devoted to the protection of civil rights and liberties in the digital age. I also serve on the board of directors of the Electronic Privacy Information Center and Future of Privacy and on the advisory boards of the Anti-Defamation League’s Center for Technology and Society and Teach Privacy. In connection with my advocacy work, I advise tech companies on online safety. I serve on Twitter’s Trust and Safety Council and Facebook’s Nonconsensual Intimate Imagery Task Force.

Comments: 448 • Responses: 52

Gawkhimm568 karma

What's going to happen when large numbers of people all claim it's deepfakes, no matter the reality?

DanielleCitron427 karma

Great question. That is what Bobby Chesney and I call the Liar's Dividend--the likelihood that liars will leverage the phenomenon of deep fakes and other altered video and audio to escape accountability for their wrongdoing. We have already seen politicians try this. Recall that a year after the release of the Access Hollywood tape the US President claimed that the audio was not him talking about grabbing women by the genitals. So we need to fight against this possibility as well as the possibility that people will believe fakery.

SinisterCheese125 karma

Hello from Finland.

I'm sure that you have heard of Chris Vigorito and his famous neural network fake of the controversial Dr. Jordan Peterson's voice. Since that system was taken offline at his request, I'll provide a different example of the technique at work: https://youtu.be/3Xqar7OgiIA

Now what I want to ask is: now that the technology is available and proven to work, can be used for malicious purposes against private and public individuals, and the trend is toward voice and video recordings becoming unreliable, what should society do to combat this? It doesn't take wild imagination to think that in a heated political battle someone would start spreading lies or even fabricated controversial material about their opponents. The public can't tell the difference between fabricated and real material, and a lie has travelled around the world before the truth has its boots on. How would one defend oneself in a court case or police investigation against material like this?

Social media has already shaken society to its core when it comes to trust in private and public individuals. What is the estimated impact of something like this when it becomes widespread?

DanielleCitron113 karma

Great questions. The social risks of deep fakes are many and they include both believing fakery and the mischief that can ensue as well as disbelieving the truth and deepening distrust often to the advantage of those seeking to evade accountability, which Bobby Chesney and I call the Liar's Dividend. In court, one would have to debunk a deep fake with circumstantial evidence when (and I say when deliberately) we get to the point that we cannot as a technical matter tell the difference between fake and real. Hany Farid, my favorite technologist, says we are nearing that point. We can debunk the fakery but it will be expensive and time consuming. I have a feeling that you are really going to enjoy my coauthored work with Bobby Chesney on deep fakes.

cahaseler119 karma

Thanks for doing this AMA!

How can we keep deepfakes and other manipulated media out of our elections? Is this something we can legislate, or do we need to rely on private social media companies to take action?

DanielleCitron97 karma

Great question. We need both lawmakers and social media companies on the case. Social media companies should ban harmful manipulated or fabricated audio and video (deep fakes or shallow ones) showing people doing or saying things they never did or said. Companies should exempt parody and satire from their TOS bans. This will require human content moderators, an expensive proposition, but one worth the candle. Bobby Chesney and I have more to say on this front in our California Law Review article on deep fakes. Now for lawmakers. Mary Anne Franks and I have been working with House and Senate staff on prohibiting digital forgeries causing cognizable harm like defamation, privacy invasions, etc. Law needs to be carefully and narrowly drafted. It likely will not come in time to meet the 2020 moment so we also need to be much more careful consumers and spreaders of information.

R0nd165 karma

If you give some authority the power to moderate fakes, what would keep them from policing speech using the same power and tools?

DanielleCitron46 karma

Great question. This is why we must be very careful in our definitions of digital impersonations or forgeries, narrow enough to exclude speech of legitimate concern to the public including parody and satire. As Mill and Milton worried, government can be counted on to disfavor speech that would challenge its authority. Hence, any regulatory action must be narrowly tailored and circumscribed to harmful digital forgeries amounting to defamation, fraud, invasions of sexual privacy etc. and exclude matters of public importance including parody and satire.

slappysq49 karma

You talk a lot about governmental policy solutions. What technological solutions should we be working on as well?

Some intermediate ideas:

Deepfakes technological countermeasures: Security camera and police camera video has to have embedded video frame signing using their hardware embedded private key and their serial number. All cameras have serial number public keys available on the website of the manufacturer and verified by blockchain so manufacturers can't edit after the fact. Therefore video frames can trivially be shown to have been altered or not by examining the signature on each frame. Sizing / scaling the video of course breaks traceability. Could use mkv containers to contain rescaled versions of all frames that are all signed.

Cyberstalking / doxxing tech countermeasures: Automated fuzzing of personal data in online comments and ML that detects when you're posting something that could be used to trace you. You're not from Brooklyn, you're from Yonkers. You're not 34, you're 35. You don't work at Google, you work at Amazon.

DanielleCitron37 karma

There are businesses interested in creating authentication technologies just as you imagine. But the key is widespread adoption. If platforms allow any and all types of video and audio to be shared, then it shall be shared. Technologists like Hany Farid are skeptical that we will have such adoption any time soon. Bobby Chesney and I talk about the possibility of technical solutions in our work.
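The frame-signing idea proposed above can be sketched in a few lines. This is a hypothetical illustration, not any real camera vendor's scheme: an actual device would sign with an asymmetric hardware-embedded key (e.g., Ed25519) whose public half the manufacturer publishes, so anyone could verify. An HMAC over a device secret stands in here to keep the sketch self-contained.

```python
import hashlib
import hmac
import os

# Stand-in for the camera's embedded private key; a real device would hold
# an asymmetric key in hardware and publish only the public half.
DEVICE_KEY = os.urandom(32)

def sign_frame(frame_bytes: bytes, frame_index: int) -> bytes:
    """Sign a hash of the frame, bound to its position in the stream,
    so both content edits and frame reordering are detectable."""
    msg = frame_index.to_bytes(8, "big") + hashlib.sha256(frame_bytes).digest()
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, frame_index: int, sig: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    msg = frame_index.to_bytes(8, "big") + hashlib.sha256(frame_bytes).digest()
    expected = hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)
```

As the commenter notes, any rescaling or re-encoding changes the frame bytes and breaks verification, which is exactly why widespread adoption at the capture and distribution layers is the hard part.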

DanielleCitron26 karma

Dear friends, This has been an extraordinary conversation! Thank you for the insightful questions. I have to hop off now. Please know how much I appreciated our AMA!

IronOreBetty20 karma

What is "the line" when it comes to revenge porn? Legally, if no genitalia are shown, is it still illegal? How much of this falls under "I know it when I see it"?

DanielleCitron13 karma

My colleague Mary Anne Franks has written a model state and federal statute (followed by a number of states including Illinois) that carefully defines nonconsensual pornography. Our coauthored law review article also explores the boundaries of nonconsensual pornography. It is definitely not akin to Justice Potter Stewart's remark that one knows it when one sees it--we try to be as clear and narrow and specific as possible. Check out our CCRI website for the definition.

Refinnej-18 karma

Big fan of your book!

In terms of cyberspace abuse, particularly sexual abuse and harassment, what advice would you give for online safety and protecting yourself?

DanielleCitron42 karma

Great and difficult question. Sometimes, there is nothing someone could have done to prevent the sexual abuse. People doctor photos and create deep fake sex videos so there was literally nothing the victim could have done differently. I also don't want people to stop expressing themselves sexually. This generation shares nude photos and there is nothing wrong with that. The key is trust and confidentiality. We need to stress the importance of confidentiality, and law needs to protect against invasions of sexual privacy.

marinaraleader26 karma

This has been my life the past year and a half. Someone decided to post an old social media pic attached to my legal name online, and it spurred people into making fake hardcore pornography using Photoshop and one deepfake video, completely unprompted.

I don't know who the person is and the police haven't done anything. It's an absolute nightmare and I feel hopeless as far as options go. I really hope there is a solution.

DanielleCitron17 karma

I am incredibly sorry. Do get in touch with me.

dopebdopenopepope16 karma

Thank you for doing this AMA. It’s so valuable to the public to have access to specialists in a time when public dialogue often seems so distorted and counter-productive.

I wonder if we haven't simply entered a completely new paradigm, where how we conceptualize public/private spheres has so changed that we can't think ourselves back to an earlier state. The upshot for the law is that our legal practices are working in the old paradigm and are thus largely unable to operate as we need them to in this new paradigm. Doesn't the legal paradigm need to go through a revolutionary shift? Aren't the problems we are experiencing at least partially due to two paradigms talking past each other, if you will?

DanielleCitron11 karma

I just love this question. It is something my colleague Barry Friedman and I have been talking about quite a bit. Indeed, the collapse of the public/private divide throws a wrench in lots of ways that we once thought about the protection of central civil rights and civil liberties. For instance, the Bill of Rights largely applies (according to the Civil Rights Cases) to state actors. But now private actors act on behalf of state actors or have more power than state actors in some respects. Tearing down the state action doctrine would be a total mess and inadvisable, but we may need to rethink our commitments given that, as Chris Hoofnagle put it so well many years ago, companies are Big Brother's Little Helpers.

Der_Absender12 karma

How much privacy do we need to give up?

DanielleCitron6 karma

That is a broad question and one that requires context to think through. But I can say that we need to understand the value of privacy to individuals, groups, and society before we consider countervailing concerns and interests. That is how I view my work and the work of my beloved privacy colleagues.

zoekay9 karma

[deleted]

DanielleCitron8 karma

Thank you so much for asking this question. You raise such a crucial point. Even talking about privacy invasions like doxxing (the public disclosure of one's home address in an effort to terrorize and endanger folks) compounds the privacy invasion and hence the harm. Check out Zoe Quinn's website Crash Override for some practical advice.

PepperoniFire9 karma

  1. What are the first steps governments at any level should take when we talk about jurisdictional issues for online harassment (e.g., the harasser is in state A, the victim is in state B or country B, the server is in another place, the company is incorporated in another place...etc.)

  2. True threats often require an objective test. With so many men on the bench determining what constitutes 'objective,' do you think that more women on the bench would change rulings if more women were able to bring certain online harassment claims to court? Is there an appropriate cause of action for such a thing given the forum and jurisdiction issues above?

  3. Section 230 of the CDA arose out of free speech cases, so most of the protections make sense to me. However, from a product liability perspective - which is almost always strict liability - it's bizarre. For example, if an app is designed in such a way that there is reasonably foreseeable misuse, that is a claim that, in my eyes, should at least be able to survive summary judgment, but 230 is often successfully used at that stage. Should we be approaching 230 reform from a products liability perspective either instead of or in addition to a speech perspective? I hardly ever see it framed as such in conversations.

Thank you!

DanielleCitron11 karma

Love these questions. Let me answer them in turn.

  1. Jurisdictional hurdles can make enforcement difficult but not insurmountable. I worked with the CA AG's office as it helped pass a law that allows California courts to exercise personal jurisdiction over harassers targeting victims in the state. The CA legislature passed that law, which would in all likelihood withstand a DPC challenge. The next challenge is resources. And that is the big challenge. I have seen prosecutors who want to bring harassers from state A into their state, let's say B, have their requests for resources denied. Let's work on pressuring DAs to spend money on such requests.
  2. Let me take the second question first. As I explore in my book, there are tort claims that harassment victims can bring in the wake of terroristic threats (and often defamation and sexual privacy invasions accompany those threats). With pro bono counsel like K&L Gates or with independent funds, they could sue harassers for intentional infliction of emotional distress, for instance. Such tort claims are key for the recognition of wrongs and to empower victims. Again, resources are often the sticking point. Now for the first question: we have seen male and female judges get the problem. We do need more training of judges to educate them about the harms of online abuse, to be sure. I am not sure the objective standard for threats under Supreme Court doctrine is the problem.
  3. Fantastic question and a theory championed by Carrie Goldberg in her suit against Grindr and theorized by Olivier Sylvain in his scholarship. I do think the instinct is right though courts are not there yet. Section 230 should not apply if you are suing for something a provider itself has done, e.g. design of algorithms, rather than user-generated content. Let's keep pressing that argument in the lower courts.

PepperoniFire5 karma

Thanks for answering my question! Regarding (2), I remember one of the key issues in the Gamergate case(s) with Brianna Wu was how each FBI field office was stuck tossing around the reports and claims because there was no clear path forward for them.

DanielleCitron10 karma

Frankly the problem was lack of meaningful will.

ArchonUniverse8 karma

How much will deepfakes affect the world, in political, social and cultural terms?

DanielleCitron18 karma

Significantly. First let's take the social impact for individuals and businesses. A deep fake sex video can ruin someone's reputation and life. A deep fake of a CEO doing something outrageous can tank an IPO if released the night before a major stock offering. Now for the cultural. Deep fakes deepen the distrust that we already have in important institutions if deep fakes target those institutions. They compound the difficulties of having the truth overcome lies. And politically they can endanger elections if timed just right. Check out my coauthored law review article with Bobby Chesney for a lengthy discussion of all of these concerns. Thanks!

durpenhowser6 karma

Has revenge porn gotten better/gone down since Hunter Moore was locked up? Or have people just gotten smarter? What is a person's rights when it comes to it?

DanielleCitron4 karma

I wish I could say we have seen less of it since Is Anyone Up was shuttered and Hunter Moore was sent to prison. There are still thousands of sites devoted to nonconsensual pornography (NCP), and NCP appears on porn sites as well. People can ask sites to take NCP down, but those requests are often ignored. They may be able to sue the posters and report to law enforcement. My book Hate Crimes in Cyberspace goes into lots of detail here.

LividGrass6 karma

Thanks for sharing your time with us!

Much of the discussion I've heard surrounding this topic focuses on high-profile individuals, who despite being larger targets also have greater access to resources. However, I have encountered increasing anxiety in my interactions with high school teachers and college staff about the ways that these kinds of attacks can affect their students and propagate quickly through campus culture. As the tools necessary to enact cyber harassment and create convincing fakes become more accessible, what do you see as effective ways for groups with limited resources to become informed and combat this problem?

DanielleCitron8 karma

Education and conversation strike me as crucial here. We need parents, teachers, and staff to teach students about their responsibilities as digital citizens, that they can do tremendous harm with networked tools. And we have to hold kids accountable in ways that are meaningful and create teaching moments. Schools tend to sweep issues under the rug. That is cowardice and educational malpractice.

DISREPUTABLE6 karma

Can asking a question here be incriminating?

DanielleCitron8 karma

I suppose it depends on what you ask--stay clear of incorporating trade secrets or terroristic threats or defamation and you are likely good to go. I answer this in the spirit of satire in which it seems to be asked!

Qhjh6 karma

Kind of a basic question (sorry) but what exactly are deep fakes? I think I have a general idea but I’d like to hear a definition from a professional. Thank you!

DanielleCitron11 karma

Of course! Deep fakes are often described as manipulated or fabricated video and audio showing people doing or saying things that they never did or said. The state of the art is rapidly advancing so that it may soon be impossible to detect the fakery (it is an arms race, and one that the white hats are not likely to win in the near term), and the state of the art is democratizing rapidly. You can find tutorials on how to make deep fakes on YouTube.

lsahart5 karma

What advice do you have for law students who are interested in studying and working on the kinds of privacy issues that you're interested in?

DanielleCitron14 karma

Love this question. Take information privacy law and if your school does not list it, demand that it does so! Also try to take classes on free speech, intellectual property, antitrust, admin law, and compliance type classes. Volunteer at organizations like the Cyber Civil Rights Initiative. Urge your summer firms to take on pro bono matters involving sexual privacy invasions as K & L Gates has done in the Cyber Civil Rights Legal Project. Research for your privacy prof. Seek internships at EFF, EPIC, CDT, CCRI, ACLU, and the like. Welcome to the field!

blipblopdowntothetop4 karma

[deleted]

DanielleCitron7 karma

You are precisely right. The generative adversarial network technologies at the heart of deep fakes are growing in sophistication every day. Talking to technologists at universities and companies makes clear that soon, if not already, it will be impossible as a technical matter to tell the difference between the fake and the real. It is an arms race, and it is unclear if the white hats will beat the black hats in the near or mid term.

djbrax754 karma

Why do I get so much push back from people when I give them information that counters or debunks fake news?

DanielleCitron12 karma

Great question. The short answer is confirmation bias. People are attracted to information that accords with their views and beliefs. They are disturbed by information that contradicts their viewpoints. Hence it is hard to pierce filter bubbles, especially in these polarized times and especially when a major news outlet (Fox) is often spreading falsehoods. I strongly recommend reading Yochai Benkler, Rob Faris, and Hal Roberts's book Network Propaganda, showing that Fox News was responsible for the spread of the Pizzagate and Seth Rich conspiracy theories.

Disembowell4 karma

Is this a great question?

DanielleCitron5 karma

I am sure it is.

mamba_rojo4 karma

I’m currently writing my journal note on deepfakes and have become very familiar with your work! With that being said, what do you generally think about California and Texas’ recent legislation on the prohibition of deepfakes during elections? What challenges might other states face in enacting similar legislation in this area?

DanielleCitron8 karma

I'm skeptical about the efficacy of those laws, especially laws restricting the posting of deep fake videos for a certain time period. I'm concerned, as is Rick Hasen (who has a great piece about deep fakes and elections, so check it out!), about the likelihood that those laws will face serious constitutional challenge. I wonder if they can survive challenge given that elections are fundamentally matters of public interest. Thanks for your question and working on these issues!

jacksonbarnett4 karma

Hello dear professor!!! I’m enjoying following this AMA! I have a question re: best practices for online safety. What do you envision is the most effective way of actually educating everyone on best practices? How do we ensure that folks who are not attuned to the minute details of online safety are protected? Maybe your answer is that changes in laws have normative / educational effect, but are there other ways of mass public education that you think should accompany changes in laws?

DanielleCitron9 karma

It is such a joy to hear from you, my dear RA. You are spot on that law is our teacher and can serve to shape norms by educating us about what is wrongful behavior. Law cannot do the heavy lifting of changing behavior and attitudes alone. Indeed, we need education to play a major role here. Education must include parents and teachers. I cannot tell you the number of times I have heard a parent say to me--c'mon Johnny shared his slut classmate's nude photo because she was foolish to share it with him. Gosh, we have work to do with parents. And we also have work to do with anyone using online tools who may like, click, and share harmful content. Talk to your friends. Talk to your family and neighbors. Companies must also be involved in this work and that is why I have been working with folks at social media companies for years now. Thankfully you are in this cause with me. Let's get others on board.

workingatbeingbetter3 karma

Hi Danielle,

Thanks for the AMA! I'd like to ask you a question about controlling deepfakes and similar potentially problematic technologies from the research side.

Specifically, I'm a lawyer and engineer in charge of a large technology portfolio that consists of at least 50% ML and AI technologies, including deepfake technologies, at a large research university in the U.S. A number of the faculty and researchers here publish and, without consulting my office first, open source potentially problematic technologies, such as deepfake technologies, facial and emotion recognition technologies, and so forth. In a perfect world, they would consult our office first and we would put their technology out under something like a "Research and Education Use Only" License Agreement (a REULA) to limit problematic uses (i.e., Clearview AI, for example). But as far as I can tell, once a technology is put out under an open-source license (e.g., MIT, BSD, etc.), that bell cannot be unrung if even one person downloads that software. From the administration side, there is also a major hesitancy to do anything to the student/researcher who inappropriately releases under such a license because they want to respect that student/researcher's academic freedom. I took this job to help shape this field to be less dystopian, but I'm not sure if there is a better way to deal with this situation than simply biting the bullet and trying to educate the researchers in advance.

Do you have any advice/tips on how to deal with the above situation? Also, do you think the "open source" bell can be un-rung? I am not an expert on agency law, but I feel there might be an argument that the open-source license is invalid because the student/researcher lacked agency. However, I don't have the capacity/resources to research this theory deeply.

In addition to the above questions, if you're ever looking for a new research or paper topic, please feel free to PM me. I have endless useful and interesting law review article topics from my time here. Anyway, thanks again!

DanielleCitron5 karma

What a fascinating question. We are in seriously challenging ethical and legal times. We certainly need some stronger controls--pre-commitments to ethics and review before going headlong into creating and sharing new technologies. It is tech determinism run amok. I am going to think about this and loop in Ryan Calo as well as Evan Selinger who think quite a bit about ethics and AI. Thanks for this and yes would love to chat!

sary0073 karma

What is sexual privacy? And how can one maintain it?

DanielleCitron3 karma

As I have conceptualized it, sexual privacy concerns our ability to manage the boundaries around our intimate lives. It involves access to and information about our bodies and the parts of our bodies that we associate with sex, sexuality, and gender; activities, interactions, communications, searches, thoughts, and fantasies about sex, sexuality, and gender as well as sexual, gender, and reproductive health; and all of the personal decisions we make about intimate life. Sexual privacy matters for sexual autonomy, dignity, intimacy, and equality. We can and must protect sexual privacy with our commitments and actions. Individuals have responsibilities to one another--to respect those boundaries. Companies should be held responsible for protecting intimate information and at times for not collecting it in the first place. Governments have similar commitments. I will be writing about this in my next book. Lots of law review articles of mine to read on point if you are interested. Much thanks!

imranmalek2 karma

Do you think there's utility in creating a national database of "deep faked" videos and content akin to the National Child Victim Identification Program (NCVIP), under the idea that if that data is shared across social networks it would be easier to "sniff out" faked content before it goes viral?

DanielleCitron3 karma

Great question! I do, so long as the "deciders" have a meaningful and accountable vetting process to ensure that the fakery is indeed harmful and is not satire or parody. The hash approach with coordination among the major platforms can significantly slow down the spread of harmful digital impersonations. We need to make sure that what goes into those databases is in fact harmful digital impersonation rather than political dissent or parody and the like. Quinta Jurecic and I have written about the advantages and disadvantages of technical solutions to content moderation in our Platform Justice piece for the Hoover Institution. Thanks for this!
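The hash approach mentioned above can be illustrated with a toy example. This is a hedged sketch, not PhotoDNA or any platform's actual algorithm: a simple "average hash" over an 8x8 grayscale thumbnail, matched by Hamming distance against a shared database, shows how near-duplicate content can be flagged even after light re-encoding changes the raw bytes.

```python
# Toy perceptual hash: each of 64 downscaled pixels becomes one bit,
# depending on whether it is brighter than the image's average.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values 0-255; returns 64 bits."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_known_fake(candidate_hash, database, threshold=5):
    """Flag content whose hash is within `threshold` bits of a known entry."""
    return any(hamming(candidate_hash, known) <= threshold for known in database)
```

Because nearby pixel values map to the same bits, a lightly re-compressed copy of a flagged video frame still matches, while unrelated content does not; production systems use far more robust hashes, but the matching logic is the same shape.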

aymswick2 karma

Do you think that the recent EARN IT bill proposed in the US Congress - which seeks to weaken and restrict encryption, the basis for computer security and thus online privacy - is constitutional?

DanielleCitron2 karma

See above. I am not a fan of the law.

whooptywhoop2 karma

What podcasts do you listen to/recommend?

DanielleCitron4 karma

I love this question!

Strict Scrutiny

Rational Security

National Security Podcast

Slate's Amicus (Dahlia Lithwick!!!)

Lawfare

Slate's Political Gabfest (I love Emily Bazelon)

Slate's If/Then

The Ginsburg Tapes

Clear and Present Danger

virachoca2 karma

Hi Dan, thanks for doing this.

What is your take on the future of privacy if you consider that any video or sound material will be able to be modified and made indistinguishable from the original thanks to AI? For example, if Lady Gaga's face were used in a porn film although she wasn't any part of it, but AI made this very easy and convenient, what would be the other safeguards to protect her privacy? Or will AI render privacy useless in certain cases like this?

DanielleCitron4 karma

I'm certainly fighting against the idea that we have no privacy and ought to get over it. Yes, technology makes it all too easy to create a deep fake sex video that undermines one's sexual autonomy in exercising dominion over one's sexual identity without consent. That is why law and norms must respond. There are websites that cater to deep fake sex videos and host user-generated videos and make a pretty penny from advertising revenue. As federal law stands, those sites are shielded from liability for user-generated content despite the fact that they solicit and make money from these invasions of sexual privacy. We need to change that law. And we need to criminalize invasions of sexual privacy and companies need to ban them.

vinyljack2 karma

How can deepfakes etc. impact the current pandemic, both politically and in terms of false information being spread, and what are the best ways to combat this?

DanielleCitron7 karma

Thanks to you both for asking this. Indeed, I have a brilliant student working on just this issue for our free speech class. We have already seen disinformation spread virally about the virus. We need to pressure social media companies to remove the disinformation because it is a true blue health risk. The more people ignore the CDC's recommendations, the more likely the virus will spread, and the more likely it spreads, the deadlier it is, especially for vulnerable folks (the elderly, the immunocompromised, and those with preexisting conditions like Type one and Type two diabetes). Report the falsehoods. Combat the falsehoods. Share CDC's materials. We all have a role here and now is the time to play it.

KingOfTheBongos872 karma

What's the difference between Trump's numerous twitter threats (to opponents, dissidents, 16 year old environmental activists, etc.) and Schumer's threats to supreme court judges?

DanielleCitron2 karma

How one assesses threats depends on context and words. Some of the President's tweets seem designed to incite violence and abuse against particular individuals. As for Schumer's threat that Justices would pay for their decisions: in context it was arguably not suggesting violence or inciting abuse against them, but it was a terrible idea.

space_crafty2 karma

Danielle! I was an intern for Cynthia Lowen’s film, NETIZENS, and most of my work was with your interview! I was so disappointed that more of your interview wasn’t included in the final cut. I learned so much from listening to you - I must have spent 4 weeks with that segment alone!

I wanted to ask: how have things changed in the two years since the film came out? Are we at a standstill, or are things getting better? Thank you so much for all the work that you do. Your voice is such an important one.

DanielleCitron3 karma

Gosh thank you for working on her amazing film. I still have not seen it! My daughter went to a viewing as a CCRI intern and she told me it was brilliant.

Thanks so much for your generous comment.

Well, things are slowly, ever so slowly, improving. We have seen extraordinary legal change when it comes to nonconsensual porn thanks to the incredible work of Mary Anne Franks and the rest of the CCRI group. Of course, those laws need to be enforced, and that is proceeding at a snail's pace. We have seen incredible work by federal prosecutors like Mona Sedky. We have seen brave litigants and counsel who have earned my lifelong respect and admiration, like Elisa D'Amico and Dave Bateman from K&L Gates and Carrie Goldberg of Goldberg and Associates. And we have seen companies make great strides against nonconsensual porn. In particular I am thinking about Facebook's work on the hashing project and folks like Antigone Davis, Karina Narun, Nathaniel Gleicher, and Monika Bickert on safety issues. (I get no compensation for my work for FB; they often hate what I say, but I am glad they hear me out.) We have seen federal lawmakers and law enforcers step up to the plate, like then-AG and now US Senator Kamala Harris and Rep. Jackie Speier. But we have a long way to go.

Dark_Link_19962 karma

What made you go into this field?

DanielleCitron3 karma

Ever since law school I have been drawn to sexual privacy and reproductive choices. And the networked age seemed to raise vast and sundry issues involving sexual privacy. There is so much to explore and write about and work on.

NatSecGirlSquad1 karma

Hi Danielle!

You know we are such a fan of you :)

1) So many people think 'deep fakes' are a new problem. Is that really true? Who has been sounding the alarm on this - and why does that matter?

2) We've heard you talk about deep fakes in the context of national security and feminism, could you talk about that a little here?

DanielleCitron1 karma

And I am such a #NatSecGirlSquad fan myself!! Thank you for joining us here. Great questions, so let me answer them in turn.

  1. Lies are indeed nothing new. As long as we have had humanity, we have had people telling falsehoods. And doctored video and audio are not new either. What is new is the convergence of two trends. First, our cognitive biases: our tendency to believe what we see and hear, confirmation bias, and our inclination to share the negative and novel. Second, social media platforms whose business model only thrives if we like, click, and share. The combination--with an emphasis on the business model of social media platforms--enhances the likelihood of deep fakes going viral. And we have seen it in practice with the shallow fake of the Pelosi video. Bobby Chesney, Quinta Jurecic, and I wrote a piece for Lawfare about the Pelosi video; I think you would like it. Lots of folks have been sounding the alarm and talking really thoughtfully about deep fakes, including Sam Gregory from WITNESS, Mary Anne Franks from Miami Law, Ari Waldman from NY Law, Matt Ferraro of WilmerHale, and of course my coauthor Bobby Chesney!
  2. It is amazing how close the connection is between national security and feminism. The early attacks on women online (take the 2007 cyber mob attacks on various female journalists and law students) laid the groundwork, in terms of strategy, for hostile state actors. I called the online assaults on women "cyber mobs" in 2007. Now, Facebook calls hostile state actors mobbing journalists, political dissenters, and others cyber brigades. Same tactics, same strategies: discrediting folks with faked or real sex videos, defamation (including the suggestion that folks have HIV or herpes), and threats. Today, we see that China has bought Grindr. National security experts warn that the country will use the trove of intimate information to blackmail, extort, and harass opponents. I could go on and on, sadly.
  3. Thank you for all you do!

DanielleCitron1 karma

I am such a fan of #NatSecGirlSquad! Thanks so much for these questions!

  1. Lies are nothing new, and doctored videos and photos are nothing new. What is new is the convergence of human biases (our tendency to believe what we see and hear, confirmation bias, and our natural inclination towards the negative and novel) with the business model of social media companies based on likes, clicks, and shares. It makes rational sense for social media companies to amplify content that will earn them advertising fees and negative and novel deep fakes will surely do that. There are wonderful folks working in this space including Sam Gregory from WITNESS, law profs Mary Anne Franks and Ari Waldman, Matt Ferraro, and my amazing coauthors Bobby Chesney and Quinta Jurecic.
  2. The connection is profound. We have seen hostile state actors take note and follow the playbook of what I called in 2007 "cyber mobs." Now called cyber brigades, hostile state actors use trolls to mob political dissenters and critics with the same playbook as the cyber mobs of the mid-2000s (nude or doctored nude photos, threats, defamation). China just bought Grindr, likely to leverage intimate information to extort and harass opponents. The connection is alive and well.

GorillaWarfare_1 karma

In your opinion, what are the biggest problems with established First Amendment jurisprudence? I'd be interested to hear your general thoughts on a structural critique, or whether there are specific doctrinal developments that you find problematic.

In asking this, I am more interested in your opinion of established doctrines, as opposed to what new topics need to be regulated.

DanielleCitron3 karma

I love this question, and it is one I am frequently asked. My broader concern is that we are using analogies that may not meet this particular moment and the particularities of the technologies that we have in the here and now. Genevieve Lakier has a brilliant essay in the Knight First Amendment online series about how the problem isn't that we use analogies for free speech but which analogies we use. Are the internet and the different players in the internet infrastructure like the town square? No, of course not, but the Supreme Court has not shaken this view in 25 years. We need creative thinking, nuance, and care, and we will likely need more and different regulation for the different layers of the internet infrastructure. Neil Richards and I write about this in a piece we coauthored in the Wash U Law Review. The concept of the marketplace of ideas is under serious strain; so much speech is unanswerable (like a rape threat) or just so much noise. Mary Anne Franks has a great article here about Noisy Speech. In short, First Amendment doctrine must take into account these new and significantly different structural realities.

tellsatanbepatient1 karma

Hey what’d you get on your LSAT?

DanielleCitron3 karma

Gosh that is personal. :)

pussgurka1 karma

What are the best practices to train online content moderators on distinguishing hate speech and minimizing individual biases?

DanielleCitron1 karma

You might be interested in an article that I wrote with Helen Norton about how intermediaries might define hate speech and why they need to be clear about the harms they are trying to address and about their speech policies and processes.

Pokketts1 karma

What is the best way to stay informed about deepfakes? And is there a good way to identify deepfakes without much specialized knowledge?

DanielleCitron1 karma

Do please read the work that I have written with Bobby Chesney in the California Law Review, Foreign Affairs, and Lawfare. Matt Ferraro, a practicing lawyer, is busy writing updates on the laws around the country. I promise to keep writing myself!

jzimmerman19851 karma

What is your opinion of the Earn It Act currently being discussed in committee as it applies to privacy on the internet?

DanielleCitron1 karma

I am concerned for several reasons. It is a piecemeal approach and thus will need to be updated. Any best practices affirmed by an agency must be updated and remain nimble. And why tie this to encryption? I am very concerned about this law passing.

darthnut1 karma

Any relation to Joel?

DanielleCitron3 karma

Not to Joel Citron, but my late father's name was Joel so it is my favorite name.

bovaryschmovary1 karma

[deleted]

DanielleCitron1 karma

Yellow of course!

geschtonkenflapped1 karma

Hello Miss Citron.

Did life ever give you lemons?

DanielleCitron2 karma

I am definitely enjoying this question. As a lemon myself, yes.

LVman53-1 karma

Hate crimes are a bullshit, unconstitutional term that predetermines motives based on skin color. Kill a black person and you're white: hate crime. Kill a white person and you're black: normal everyday occurrence. Prove me wrong, wannabe lawyer. How do you legally defend such a standard that is out of touch with the Constitution?

DanielleCitron3 karma

Gosh well I am definitely a lawyer as well as a law professor. Hate crimes are proscribable because we are punishing the motive of singling out someone due to a protected characteristic. You will surely be interested in reading Supreme Court caselaw on point, notably Wisconsin v. Mitchell.

tungvu256-2 karma

how is it possible the president of USA is not held to any standard OR accountability?

DanielleCitron1 karma

This is a bit off topic for me, but I would say that whatever the President does is a matter of deep public interest. That, I think, is why Twitter has kept him on the platform despite his obvious violations of the company's TOS. We have an election coming up, and we can express our will.

LVman53-4 karma

Afraid of my Question eh?

DanielleCitron3 karma

I am not sure which question that is.