Hi Reddit!!!

TL;DR: My colleagues and I conducted the first empirical study on how to reduce aversion to shocking (yet informative) content, and we have developed a Google Chrome extension named Arkangel (wink wink, Black Mirror) that applies the concept to all images you browse on the web. AMA!

Full intro: Hi all! Lonni here. With my colleagues, I conducted the first empirical study on the use of colour manipulation and stylization to make surgical images and videos easier to watch. While aversion to such stimuli is natural, it limits the ability of many people to satisfy their curiosity, educate themselves, and make rational decisions. We selected a diverse set of image processing techniques and tested them on surgeons and on people who are not exposed to such stimuli on a daily basis (non-specialists/lay people). Through this study, we found a technique that is particularly effective at reducing the shocking aspect of such images while preserving the image's original information. Based on this result, we developed a Google Chrome plugin (Arkangel, in reference to Black Mirror) to automatically process any image that could be shocking.

An illustration of the technique that seems to work best is available here, in the header (don't worry, it's all safe for work).

We hope that this kind of technology can be generalized to help in many ways.

We also consider many other scenarios, which we explain in this TEDx talk, including how we could use this in augmented reality, for example. The plugin and the latest information about the project are available on the project page: https://www.aviz.fr/visualcensoring

We also announce news on Twitter; don't hesitate to follow me: @lonnibesancon

The proof for this AMA is available here, and just for fun, the Arkangel version is available here (if you think my smile is a little bit too creepy in the first image, that will help).

Here is the Open Access link to the two scientific articles published on the subject. They discuss the trade-off between the shock of the image and the need to keep details visible.

I think I've said it all! I hope to have interesting exchanges with all of you r/IAmA! So… AMA!

Edit: Obvious typo in the title: I meant 911 not 9/11

Edit2: It's 3:18 am and I need a short rest before answering all other questions you might have. Feel free to keep posting and commenting, I'll be back very soon

Edit3: I'm back :)

Comments: 434 • Responses: 119

HutSutRawlson313 karma

Since you reference Black Mirror, I'm sure you're already aware that technology like what you are developing has the potential for malicious use. One of John Oliver's recent shows focused on facial recognition software, a similar technology with similar potential for abuse, and the irresponsibility of those currently in control of it. What do you think about the potential for governments and other institutions of power to use this technology to censor images that aren't disturbing, but that they simply don't want people to see? What plans do you have for ensuring your technology doesn't get used for censorship?

lonnib146 karma

Very important and interesting question. Indeed we are aware of potential misuses and even mention them on the TEDx video about this.

First, the technology to process images existed before we thought of this use case (it was invented by researchers working on rendering techniques), so governments could have used it for a while. Our contribution is mainly the study: we found one technique that seems to reduce aversion more than the others. The Google Chrome plugin is just a small proof of concept (and we'd be happy to see people build on it; it's all on GitHub).

Second, the Google Chrome extension gives complete control to the user. You decide whether the extension is on or not, and you can decide how strong the processing of the image is going to be. A right click on the image gets you the original image back.

Third, as with any technology, abuse is impossible to avoid. The internet is a wonderful technology but has been used to spy and to do a lot of illegal things (by governments and individuals alike). As we did not even invent the technique, we have very little control over its use, but we make sure to explain that user control is necessary for this kind of technology to be useful.

Bammetje03352 karma

As a response to your third argument, do you think that this technology facilitates abuse more than average?

lonnib42 karma

I do not think so. But you might have something in mind that I have not thought of...?

TrollGoo47 karma

How about the way news is presented? One class of people or victim has the water color filter from 1996 you reinvented. The act of horrific violence doesn’t seem so bad. But our soldiers being injured are modified the other way... more juice and splatter.

lonnib1 karma

But our soldiers being injured are modified the other way... more juice and splatter.

Not sure what that would lead to.

smoozer56 karma

I assume continued indoctrination of young minds by the media. We've already been selectively presented western military actions as "good" for our entire lives.

lonnib16 karma

In our case it's all user-controlled :)


*for now

lonnib2 karma

How could that change? Also, you would see it and install a different browser, a VPN, or whatever...

Forestginge24 karma

But surely you must consider that if you lift the lid on such software then you're inviting manipulation of the media to become commonplace and normalised. It's naive to think that this will always stay in the hands of the user; people will deconstruct your software and create their own variations. Manipulating pictures in the press already happens, I realise, but this would allow for it on an unprecedented scale. Have you considered the potential side effects of the technology down the line and what form they may take?

lonnib26 karma

The technology to process pictures has been available for more than 20 years now... our contribution is nowhere near this... you don't need to reverse engineer anything we've done, the code for these techniques has been online for years, in this sense we haven't brought any more danger. The only thing we contributed is the study to show it can make shocking images more palatable...

We explain the side effects in the TEDx video, though, and how misusing this, if it were implemented for AR contexts, could be problematic.

GutzMurphy20997 karma

And you really can't see any potential downsides to making it easier for human beings to view disturbing sensory input and not feel bad about it? I mean, there's a reason why we have a strong visceral reaction to violent or upsetting imagery--ie. because it lets us know that something is not right with the input in question.

You remove that and you may be taking away a very important part of the human being's ability to correctly deduce when something may be very, very wrong. Feels like you're kind of playing with fire here.

lonnib29 karma

Again, there are two things you should consider.

1/ we do see downsides and have explored them. But if users have control, it should all be ok (although we also explore some downsides there even with user control).

2/ research is not about the use that people make of it. I explained this in another thread. Nuclear energy --> atomic bomb. Internet --> Sharing of murder videos and a lot of very illegal and morally-wrong content... should the people/researchers working on these have not done the research because of these potential downsides?

Edit: how about photoshop? this seems like it could do wayyyy more damage than our study results.

Forestginge0 karma

I appreciate your response. I think where I personally struggle with it is the automated nature of it: one could theoretically write an algorithm to censor a certain type of media and inject it into some sort of malicious virus program which, if undetected, would have the capacity to totally influence what a person sees without them knowing. I'm not accusing you guys of such, and I absolutely see the upsides of such work, but this is the sticking point I'm at.

lonnib4 karma

I can understand that, but TBH, you would automatically realise that the image has been processed. Look at the lasagna example here: https://www.aviz.fr/visualcensoring

i_never_get_mad24 karma

Agreed. We can’t fully control how the government will use existing technology. The only way to control them is by voting those who care about the people.

lonnib22 karma

In our case we really argue for the use of the technology with user control and explain why it is needed. I think that's the best we can do.

i_never_get_mad14 karma

I see that you are based in France. If the project is sponsored/funded by the government, does the French government own the patent/technology? How would you prevent that the tech won’t get bought out by the government, and hence you lose your control over how it gets implemented? I’d think that you (or your team) would sell the technology to someone/company at some point.

lonnib21 karma

It's all CC-BY-SA, the code for the plugin is on GitHub, the papers are online. Our contribution is the study of the image processing technique, not the coding of the image processing technique which existed before 2010 if I am not mistaken.

So the French government does not own this, but this was done in France yes (although I am now doing research in Sweden).

Also to help answer your question, this previous answer: https://www.reddit.com/r/IAmA/comments/hszfne/i_am_lonni_besançon_i_have_studied_automated/fydopdw/

mostnormal1 karma

So we need a third party, then.

lonnib20 karma

Who would that third party be? I fear that if you don't trust governments, no kind of third party or institution would be trustworthy enough. Don't you think?

IrrelevantLeprechaun-2 karma

No. We basically need to dissolve government and rebuild it from the ground up so that its structure doesn't overtly favour the elite.

lonnib15 karma

Really did not expect this AMA to get to that

trivial_sublime6 karma

Welcome to Reddit

lonnib3 karma

And I'm not even a reddit newbie...

spaderr2 karma

How would this protect people like content moderators if they have to click to know if it’s going to be illegal/offensive or not? They have to see it anyway, no?

lonnib5 karma

They would see a processed version of it that is less shocking and can hopefully still be used in most cases to decide whether it's against the TOS or not.

kovelandkrim-8 karma

Sounds very Cambridge Analytica-esque

lonnib8 karma

How so?

lambuscred20 karma

Want this answered bad. I can't wait for this AMA to be in the history holograms about the downfall of objective truth in video format

lonnib12 karma

Hope I answered the question well enough then :). Feel free to ask more information or raise more points :)

adviceKiwi1 karma

Yeah, this could never go wrong...

lonnib8 karma

Did you say so about internet? powder? TNT? Photography (creep shots)? Nuclear energy? Drones?

kovelandkrim2 karma

Because censorship tools and nuclear bombs are one in the same? Lol

lonnib8 karma

Have you seen what kind of censoring we provide or did you just decide that it's the crazy censoring that you're afraid of? https://www.aviz.fr/visualcensoring here it is.

And the point I made was about technology...

lonnib7 karma

Or more related: about photoshop ?

Titanruss85 karma

Why should we make atrocity more palatable?

Does this technology not create more shock and harm on the viewer once they actually do encounter real images or real life situations?

How is this technique better than education for understanding that reality is graphic and can be disturbing?

For people that need to censor internet videos, what benefit does this provide? If its an algorithm that decides what to censor... whats the point of the person?

thebraken37 karma

Because it makes information more accessible. It seems like it's just sort of a filter that takes the edge off, so to speak - not entirely unlike putting on sunglasses when it's too bright.

Couldn't it be used alongside education that reality is graphic and disturbing?

lonnib26 karma

That's exactly the point!

lonnib21 karma

Take the surgery example I have mentioned in the OP and several times here. Just because you need to undergo surgery and want to understand how it works does not mean that you should be shocked by the images you see when looking into it. In the case of Facebook moderators who have to sit through hours of disturbing content to see if it breaks the TOS or the law, this is also not necessary.

See this related answer on a related question here :)

ColonelBelmont21 karma

I dunno, man. You're literally automatically censoring reality. Did you name it after that Black Mirror thing as a tongue-in-cheek way of sorta preemptively owning/controlling the inevitable comparison to the atrocious, borderline-dystopian themes of that show? Like, if you use the name in a whimsical/self-deprecating sort of way, it'll maybe distract from how twisted this actually is?

Grithok36 karma

Have you seen any of the example images? I am not sure this censorship is in line with the censorship you are imagining.

It just cartoonizes outlines and blurs colors somewhat so that things don't look as gross. You still pretty clearly see the detail, as stated. Of course, the image I saw was the given lasagna example, but nothing was hidden except for the distinct color variations across surfaces.

Furthermore, they didn't produce this algorithm. They tested a number of existing ones and thought that this one had the potentially useful stated use.

Is it potentially useful? I don't really think so, but at the very least this team isn't advancing us towards a dystopian future.

lonnib29 karma

Sometimes I feel like people haven't checked out the example at all... Thanks for pointing it out

lonnib14 karma

Did you name it after that Black Mirror thing as a tongue-in-cheek way

Totally as a wink to Black Mirror.

Like, if you use the name in a whimsical/self-deprecating sort of way, it'll maybe distract from how twisted this actually is?

I really don't think it's twisted in any way. More on this in that quite-detailed comment

MegaTiny12 karma

Just because you need to undertake surgery and want to understand how it works does not mean that you should be shocked by the images you see

This is a really, really weird example case you keep bringing up. If you're having surgery you aren't shown a bunch of graphic images of what will happen to you. You get a diagram and a text description of what will happen alongside potential complications from the surgery.

Seeing a paint-daubs filtered image of someone being cut open is not better than that.

lonnib8 karma

Making these is time-consuming and very expensive. Our process is simple and cheap.

Wikipedia information seeking is also one use case. Adding to all the ones I mentioned in OP

bro_before_ho4 karma

Seeing a paint-daubs filtered image of someone being cut open is not better than that.

I mean they literally studied it and showed that it is?

lonnib4 karma

I mean they literally studied it and showed that it is?

Yep and we have quite good evidence of it.

lunaerisa2 karma

This has been upvoted a lot but I'll share an opposite view. I am having a type of brain surgery in two weeks and I wanted to know more about the procedure I will be having, so I looked it up on Youtube. There are 167,000 views and 125 comments, almost all of them are from others like me who wanted to know what would happen during the surgery.

So from my perspective it is not as rare or weird of an example as you think.

lonnib2 karma

Hey I did not get a notification for this but I’m happy to see that you provide evidence for it! Thanks!

Really hope the surgery goes ok! Please let us know :-)

SlowbeardiusOfBeard4 karma

How can they tell if the images genuinely break the TOS if they aren't seeing the unadulterated images? I can easily foresee false positives where the system censors an image and the mod bans it for "gore" or whatever reason without seeing the full context.

I watched a video of yours quite a while back and couldn't understand how you seem so oblivious to how insidious this technology is.

I genuinely can't think of a legitimate use case.

lonnib16 karma

How can they tell if the images genuinely break TOS if they aren't seeing the unadulterated images?

You don't need the original image to see if a murder is being recorded/shown in the video that they are watching. The same goes for rape, child pornography or other very disturbing and illegal kind of content. Maybe in other cases it is more difficult I agree, which is why we would like to conduct a study with Facebook moderators to find the perfect trade-off.

I genuinely can't think of a legitimate use case.

Explaining surgery to lay people, 911 video operators, and the Facebook example that I have just explained.

flexylol4 karma

Someone who is "shocked" by surgery has no business doing surgery.

I see your point tho re: mods and similar tasks.

TomAto31414 karma

It's for the person who is undergoing surgery to have more information about what is happening to them. Not necessarily for the person doing the surgery.

lonnib5 karma

yep :)

lonnib2 karma

Well, your general doctor (not the surgeon) might need to understand how specific parts of the body work, which might require some pretty graphic pictures, but they shouldn't be required to be able to withstand this per se.

ShonenBat8829 karma

Remembering back to when I was a child, there were many instances where I experienced or viewed something that would have been censored by today's standards. I do think these instances had a profound effect on the way I view the world (I'm sure both good and bad). Is there any concern about large-scale censoring technology like yours causing a separation of reality vs. expectation? Would the lack of exposure correlate with a lack of understanding?

awsomebro600018 karma

I stumbled across shock images when I was young; honestly, now I don't really feel anything if I go on a shock site. That fact does trouble me, though, because I know I am supposed to feel revulsion, yet I don't.

Grieve_Jobs16 karma

Nah, you just got used to seeing shocking things that you are removed from by at least 2 panes of glass. You will feel different about seeing it in real life. So don't feel too dead inside just yet, there's still hope you can be traumatised.

lonnib5 karma

Not sure if u/awsomebro6000 should be reassured or not now

lonnib7 karma

Interesting point

ParadisoBud5 karma

Seeing shit I shouldn't have seen at a young age did the opposite for me. I honestly still see some of the more disturbing shit clear as day in my head and it really fucked me up for a while. I still can't watch anything disturbing (real life events) bc I'm afraid of something I can't unsee. No matter how curious I might be.

lonnib2 karma

Perhaps you should try the extension then no?

lonnib10 karma

Very interesting question! We actually mention this in the TEDX talk about the project.

This is also in the Black Mirror episode. It is totally possible indeed that processing every single image on the internet, or, should this technology one day be available in AR, everything that can be shocking, could have a negative effect. To what extent is unclear, though. Your guess is really just as good as mine, I would say, or as "Arkangel"'s writer(s).

Vandechoz28 karma

would you say that your system errs more on the side of too much information loss, or on the side of too little difference in shock factor from the original image?

lonnib35 karma

So with the study in the papers, we tried to identify the ideal parameters for surgery. But the Google Chrome plugin actually gives you the possibility to set the values yourself. So you can decide 1/ whether you want the plugin to work at all, 2/ how strong you want the filtering of the image to be, so the trade-off is eventually done by you and you alone. We can only provide recommended values based on the limited pool of images that we have tried.

Edit: spelling
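To make the user-controlled trade-off concrete, here is a minimal sketch of the idea, not the extension's actual filter (which is an edge-preserving stylization, not simple posterization): a hypothetical `posterize` function maps a user-chosen strength to how much colour detail survives, with strength 0 leaving the original untouched.

```python
def posterize(pixels, strength):
    """Reduce colour detail as a crude stand-in for the stylization filter.

    pixels: flat list of 0-255 channel values.
    strength: 0.0 (filter off) to 1.0 (maximum effect).
    Higher strength keeps fewer distinct colour levels, flattening detail.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    if strength == 0.0:
        return list(pixels)  # filter off: original image untouched
    # Fewer quantization levels = stronger effect (at least 2 levels kept).
    levels = max(2, round(256 * (1.0 - strength)))
    step = 256 / levels
    # Snap each value to the centre of its quantization bucket.
    return [min(255, int(int(p / step) * step + step / 2)) for p in pixels]

original = [0, 37, 128, 255]
mild = posterize(original, 0.2)    # most detail survives
harsh = posterize(original, 0.95)  # almost flat shading
restored = original                # "right click" simply swaps the source back
```

The "right click to restore" behaviour then needs no inverse filter at all: the extension only has to keep (or re-fetch) the original image source.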

MediocrePigeon313 karma

Why did you decide to start this project?

lonnib15 karma

As with many fun projects (yes, it really was a fun project, and not at all the focus of my PhD), it was decided while talking over a beer. One of the senior researchers in my team saw a surgery procedure demonstrated via a video during a lecture, could not stand it, and thought "surely there are ways to make this palatable", and he came up with a couple of fun ones. We paired with researchers in rendering/image processing to try a couple. We tried very simple ones (black-and-white images or changing the blood colour) and more artsy or complicated ones.

I know where you're going with that question though (I think): the idea was before Black Mirror's Arkangel. But getting ethics approval for this research was a long and tedious process (which we understand, after all we basically asked lay people to look at shocking images in our lab).

CAMO_PEJB3 karma

what were some of the more 'artsy' ways you did it?

lonnib3 karma

You can check them all out here on page 6.

rolabond4 karma

I think your paper needs a more overt example to make the visual impact of your project clear. The lasagna is cute, but people who will be using this won't be looking at lasagna, they'll be looking at genuine surgery images. So I think omitting an image of an actual surgery (and how it looks through the filter) was a missed opportunity. Disgust is an immediate physical and emotional response, and it's hard to appreciate how well this technology can temper it without genuinely repulsive imagery to make the contrast evident.

lonnib9 karma

The problem is that we are not legally allowed to do so (privacy reasons) but also that it's likely to repel reviewers and readers.

rolabond3 karma

Have you considered using meat as an example? Anyone that goes to the grocery store or that cooks raw meat is familiar with what it looks like and have become accustomed to the gore of it so it’s unlikely to be so distasteful they won’t read but it’s the best approximation to human flesh.

lonnib10 karma

Well, meat is in lasagna, and we needed something that could represent skin/tissue and everything involved in a regular surgery.

freemason77712 karma

So do you still know what you're seeing with this plugin turned on, or is it censored like in the Black Mirror episode?

lonnib28 karma

Oh no no, you very much still see what you're looking at. An example is available here: https://www.aviz.fr/visualcensoring (don't worry, nothing shocking --> totally SFW).

Unhyper9 karma

Oh, I like that. That demonstration is much better than the doomsday scenario I expected.

lonnib8 karma

We had to make sure we had examples we could show to a wide audience. Lasagnas are very close to surgeries when you think about it :p

DamagedGenius5 karma

Next time cut open a jelly donut!

lonnib7 karma

In France donuts are not a thing, really. Lasagnas though... :)

DamagedGenius4 karma

I'm sure there's at least one wild Baker willing to put jam inside an éclair.

lonnib11 karma

No!!!! This is sacrilege level!

Scoundrelic12 karma


Is this safe enough for children to watch porn for the plot instead of the sex?

lonnib11 karma

I didn't expect that one. Well, in one of the papers we do mention studies showing that kids are sometimes annoyed or shocked by porn, and that they sometimes find it by mistake. We explain that the technology could work on nudity but we haven't been able to test that.

You probably meant it as a joke. I would love to try this on adults. But applying for ethics approval might be funny in this case really.

Scoundrelic5 karma

Thank you.

Parents will set the filter on the device, then walk away.

lonnib7 karma

Users can deactivate it and right-click to see the original content anyway. But legal reminder (I'm not the funny one, I guess): kids are not allowed to look at pornography.

dracapis10 karma

What’s a question about this project that you wish people would ask, but that it never gets asked?

lonnib15 karma

Ok one more, I wish more people would ask me if they could clone the GitHub repository to contribute to the extension :)

threePwny7 karma

I have zero software development experience, but for those who could contribute, may we clone the GitHub repository to contribute to the extension?

lonnib7 karma

Haha nice one!

Please do so

lonnib8 karma

Interesting question! And very meta!

I guess I would love to talk about the use of this to communicate about slaughterhouses. Would that help people see what happens behind closed doors? I don't know; more might come later :).

I love this meta-question though :)

Baccata642 karma

Interesting, I was thinking of asking about this. Don't you think industrial meat corporations would buy into this tech to prevent depression in people who work in slaughterhouses, and reduce the likelihood of getting exposed for terrible practices?

lonnib7 karma

That is a possibility I really did not think of and I hope they won't (vegetarian for years and newly vegan here)

stuffedfish9 karma

Do you forsee any possible downside/darkside uses of your technology in the future?

edit: oh nevermind, i saw u/HutSutRawlson's question and you answered it pretty well.

How long did it take for your project, that is from the first conception of the idea, to the stage where it's a workable program?

lonnib7 karma

Oops almost replied before your edit...

The idea was in 2014, and the first paper got accepted in 2018 (just after my PhD, though that was not my PhD topic). The second paper was accepted in 2019, with the Google Chrome extension alongside it.

greffedufois8 karma

Does it allow blocking of flashing images? A lot of my photosensitive friends on r/epilepsy would love that. Unfortunately, people like to post flashing gifs, sometimes in hopes of causing a seizure. Why, I have no idea.

lonnib5 karma

That is not something we detect, or that the technology could circumvent, I am afraid. We have only worked on shocking content so far.

bs136908 karma

So you do understand that Black Mirror tech is usually not a good thing to emulate? It worries me that you seem to be touting that as a positive. People need to see real graphic images to understand the horror, the severity, the importance, etc. I don't think censored images can help anything.

lonnib1 karma

People need to see real graphic images to understand the horror, the severity, the importance

First, I would like some evidence of this. While I can easily understand that you have this opinion there is no proof of this, and the Black Mirror's take on the technology is nothing but a work of fiction (so no evidence).

Second, our initial use case is for people who want to understand a surgery they might need to undergo, and for Facebook moderators who have to look at disturbing content for hours just to determine whether it breaks the law or the TOS. Would you argue that in either of these two cases people need to see horror? For Facebook moderators, my OP contains a link: https://www.theverge.com/2019/6/19/18681845/facebook-moderator-interviews-video-trauma-ptsd-cognizant-tampa

Do you think that PTSD-level trauma from online browsing is necessary for these people (who are just doing their job), or for people who just want to get information about a surgery? Or for 911 video operators?

I understand your take, and I hope that you don't take my response badly. We use Black Mirror as a background because 1/ almost everybody knows it, 2/ it therefore makes telling the story behind the research easier.

lonnib2 karma

Actually we might reference that paper. If not, we should have :)

HighFunctionalPsycho7 karma

What in your opinion are the upcoming developments in the near future?

lonnib11 karma

I'm assuming you mean for this technology. We are looking for a couple of things:

  • Being able to test this with Facebook/Twitter/YouTube moderators. Their job is hard: looking at very difficult content all day. But we could maybe make it more palatable with this while still allowing them to detect whether the image/video is against the TOS or illegal. Getting Facebook/Twitter/YouTube to see our idea and help us test it seems unlikely or very difficult, though (if you know someone, lemme know). We would also like to test this on 911 video operators to see the kind of trade-off needed for it to work for them.

  • We would like to test this on editors going through thousands of photos to choose one. Some of them are hard to look at. (Same here: if you know people, I'm interested :p.)

  • Using machine learning/deep learning to properly detect and process only the shocking content that the user wants processed. We have a small work in progress on this, but none of us researchers are paid for this project, so it's done mostly in our free time.

lonnib3 karma

Also, applying this with AR headsets would be amazing!

utopiah2 karma

It would be, but identifying objects in real time is not easy. I would start by exploring the Microsoft HoloLens 2 with YOLO v4 for fast object detection, then apply style transfer only to the portion of the image with the detected object, ideally using shaders.

lonnib3 karma

I have a hololens for work (or my student has) but it's not a related project. We did something for CERN with it though: https://hal.archives-ouvertes.fr/hal-02442690/file/revision.pdf
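The detect-then-stylize pipeline suggested above could be sketched as follows. This is only a toy illustration of the structure: `fake_detector` is a stub standing in for a real detector such as YOLO, and `blur_box` uses a mean blur as a hypothetical placeholder for the actual stylization or style-transfer step.

```python
def blur_box(image, box):
    """Apply a crude averaging filter inside the detected region only.

    image: 2D list of grey values; box: (top, left, bottom, right), ends exclusive.
    A real pipeline would run style transfer here instead of a mean blur.
    """
    top, left, bottom, right = box
    region = [row[left:right] for row in image[top:bottom]]
    count = max(1, len(region) * len(region[0]))
    mean = sum(sum(r) for r in region) // count
    out = [row[:] for row in image]   # pixels outside the box stay untouched
    for y in range(top, bottom):
        for x in range(left, right):
            out[y][x] = mean          # flatten detail inside the detection
    return out

def fake_detector(image):
    """Stub for an object detector: always 'detects' the central region."""
    h, w = len(image), len(image[0])
    return (h // 4, w // 4, 3 * h // 4, 3 * w // 4)

frame = [[(x * y) % 256 for x in range(8)] for y in range(8)]
processed = blur_box(frame, fake_detector(frame))
```

Limiting the filter to the detected bounding box is what keeps the rest of the frame fully legible, which matters in an AR context where the user still needs to see their surroundings.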

Duke_Newcombe6 karma

I'm sure you've probably answered this, but what is the concern among you and your peers who pursue this technology that it can further desensitize users to violence/abhorrence, further dehumanizing themselves and the subjects?

Case in point would be aerial UAV operators. The experience is already very "gameified", where "targets" such as other humans become blips on a screen.

Would such "ignorance" of a human being be beneficial in some instances (moderators on social media looking to eliminate shocking material) be detrimental when other uses become apparent (police and corrections in dealing with people, politicians and those in power facing constituents)?

Be interested in the discussion going on in your community regarding this issue.

lonnib5 karma

I'm sure you've probably answered this, but what is the concern among you and your peers who pursue this technology that it can further desensitize users to violence/abhorrence, further dehumanizing themselves and the subjects?

I have only partially :). In the TEDx talk about this we briefly touch on this at the end.

It is indeed a concern if the technology is adapted to real-time processing of real-time feed... but we're not there yet.

Would such "ignorance" of a human being be beneficial in some instances (moderators on social media looking to eliminate shocking material) be detrimental when other uses become apparent (police and corrections in dealing with people, politicians and those in power facing constituents)?

I do think so indeed. It really depends on the context and the use people make of it. In the cases that we mention here (and you highlighted), it definitely makes sense to have this technology. In others, it might be detrimental. In the TEDx talk, we mention slaughterhouses and animal associations and say exactly that "it could help them communicate to more people" but also that "it could hinder the message they are trying to convey".

assgravyjesus6 karma

Do you play a lot of borderlands video games? The visual filter looks like their shading style.

lonnib7 karma

I don't know the Borderlands games at all. I'm guessing they use cel shading... no?

Yet another proof that the technology existed a while ago, we just proved that it works well to make images palatable :)

assgravyjesus11 karma

Right on. It's cel shading. Looks a bit like the movie "A Scanner Darkly" as well.

lonnib4 karma

Did an AMA on r/france and someone said so too :). Never seen it though

double-you2 karma

Excellent movie, and book, by Philip K. Dick. The police use scramble suits to hide their identity by looking like everybody and nobody.

lonnib1 karma

Will give it a shot then :)

lonnib1 karma

Currently watching The Matrix trilogy for the first time, though.

pr3tex5 karma

Hi Lonni :) Very interesting project and of course I agree it will find many uses, especially the ones you mentioned. Most of the questions I would like to ask have already been asked, so I just have one question: how much does the university help you with this project? Do you have a scholarship or grants?

lonnib8 karma

Hey thanks a lot for the nice words!

how much does the university help you with this project? Do you have a scholarship or grants?

Well, nothing per se. The university in Paris paid for my PhD but this was unrelated; it's something that all of the researchers involved have done either in their free time (I was at the end of my PhD and waiting for my contract in Sweden to start, so technically unemployed) or as part of their usual research time. In France or Germany, where all of us were, you can be a permanent researcher and just get a salary based on state funding, no grants :)

pr3tex5 karma

Ok, i thought so. Thank you for your answer and good luck with further work :)

lonnib6 karma

Thanks a lot. Are you also a researcher?

pr3tex4 karma

I'm not. I was just curious to see what it looks like in your country and whether you have adequate support there for such initiatives so that you can focus only on work.

lonnib5 karma

Nice that you wondered :). Well I guess the situation in France is quite good if you manage to make it as a researcher :)

corsyadid5 karma

How do you pronounce your surname?

lonnib5 karma

I'm lucky (or not, as a kid) that my family name is the name of a city in France. So Wikipedia to the rescue for the pronunciation.

r00t14 karma

How would you stop China from abusing your technology?

lonnib3 karma

I think I have explained most of it in a comment on this thread here

phi_array4 karma

There are a lot of articles saying shady stuff about Facebook. How shady do you think the company and the employees are? Are there employees who are totally unrelated to that?

lonnib6 karma

How shady do you think the company and the employees are?

Tricky question. I actually do not know much about Facebook TBH. I know they have changed what is allowed to be posted many times. I don't think I can say whether or not the leadership at Facebook has any evil intention. I hope not.

Are there employees who are totally unrelated to that?

If you think they are shady (again, I don't really know about this), I am sure there are thousands of employees who are totally innocent in there: engineers, marketing teams...

hoopsterben3 karma

This is more of a philosophical question, so I doubt you have any interest.

Do you believe it possible for someone to desensitize themselves in order to commit atrocities or do you believe that regardless of said desensitization they would do it anyway?

While you would like to think people don't shoot the app designer, Marilyn Manson took heavy heat for someone having his poster in their room. Make sure to approach this responsibly.

lonnib2 karma

Do you believe it possible for someone to desensitize themselves in order to commit atrocities or do you believe that regardless of said desensitization they would do it anyway?

I would tend to go for the 2nd option.

Make sure to approach this responsibly.

I really hope to start interesting philosophical and moral/ethics discussions here :)

SgtMajorProblems3 karma

Two somewhat ignorant/weird questions:

  1. Could you see it used in therapeutic ways? (Sort of like EMDR maybe)

  2. Working off the idea of "desensitization" - could you see this used in the medical field for training? Any ideas on what the psychological effects are versus traditional exposure/observation desensitization? I guess the question is whether you would become desensitized to the point where the filter is not needed. (Thinking of the medical field and combat applications here.)

geldin4 karma

This post definitely got me thinking about therapeutic use, like for people with phobias or something.

Is it possible that this could be taught to detect for phobia-specific stimuli?

lonnib2 karma

The image detection algorithms can, for sure. But I'm not sure the processing technique we currently use will help for all phobias. I'd like to study this more :)

lonnib3 karma

Could you see it used in therapeutic ways? (Sort of like EMDR maybe)

I don't know EMDR at all, so not sure. But maybe that would help indeed.

Working off the idea of "desensitization" - could you see this used in the medical field for training?

Yes. I don't think your regular doctor (not doing surgeries) needs to see bloody images to be a good doctor.

Any ideas on what the psychological effects are versus traditional exposure/observation desensitization?

I wish I had. I'd love to be able to study this, but it is 1/ not an easy study and 2/ not easy to get ethics approval for this.

SgtMajorProblems2 karma

I could see ethics approval being difficult especially for a longitudinal study.

It's pretty incredible, thanks for your work!

lonnib3 karma

Really glad that you like it :)

moondes3 karma

So if the software deems the object is a hot dog, you can automatically make it look like not a hot dog?

lonnib4 karma

A not shocking hot dog (which I guess can be shocking for vegans XD)

thxxx13373 karma

What's the biggest risk to freedom in all this?

lonnib3 karma

I think I have answered this in this response: https://www.reddit.com/r/IAmA/comments/hszfne/i_am_lonni_besançon_i_have_studied_automated/fydopdw/

But TL;DR: the technology existed before, we just proved that it can help with making images palatable. Do you foresee another risk?

incognito_rito3 karma

How do you plan to monetise this ?

lonnib18 karma

There is no plan at all. The plugin is online, the code for the plugin is on GitHub, the data from the studies is online on OSF.io.

I am a researcher/academic, I don't try to monetise. I am just very happy to be able to conduct research on projects that I like or am interested in.

dracapis7 karma

As a general rule, European researchers rarely monetize side projects, which are often sponsored by universities/public institutions/governments

lonnib3 karma

I guess that's true :)

nadademais3 karma

Favourite Final Fantasy?

lonnib3 karma


Hope I can choose 3 :)

phi_array3 karma

How does fb manage to automatically detect if an image is offensive or fake news?

I mean I know somewhere there is:

if(isFake(image)) but how does the isFake function work?

lonnib7 karma

Guesstimating here: probably a deep-learning technique based on a model trained beforehand on a dataset.

We are currently experimenting with that so users can choose what they want processed and we only process images of that kind. But it's a little bit difficult and no one on the research team knows much about it.

IrrelevantLeprechaun2 karma

It isn't a function. It's a deep learning algorithm. It isn't as simple as injecting a line of code.

lonnib6 karma

But I guess once the deep learning is coded, you can just call a function :p
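To make that point concrete: once a model is trained and wrapped, callers really do see it as a plain function call. A hypothetical sketch (the dummy scorer below stands in for a real deep-learning model; none of these names are Facebook's actual API):

```python
def score_image(pixels):
    """Stand-in for a trained model's predict(): returns a probability
    that the image is offensive/fake. Here it is a dummy heuristic
    that flags very dark images, purely for illustration."""
    flat = [v for row in pixels for v in row]
    return 1.0 if sum(flat) / len(flat) < 64 else 0.0

def is_fake(pixels, threshold=0.5):
    # From the caller's side, the deep model is "just a function"
    # once it has been trained and wrapped like this.
    return score_image(pixels) >= threshold
```

All the complexity (architecture, training data, thresholds) hides behind that one call, which is why the `if(isFake(image))` mental model isn't entirely wrong.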

xXHacker69Xx2 karma

Hi, I haven’t seen Black Mirror and my English is also not exactly what you’d call amazing. So would you mind explaining what you did as if I’m 5?

blindeey3 karma

It makes things that would upset you look less real and more cartoony, so it makes your emotional reaction weaker and you can still see whatever the photo shows and learn from it (such as from surgery, or trying to find clues in it, or such).

xXHacker69Xx2 karma

Ohhh that’s interesting, so emotional manipulation?

blindeey3 karma

Yes, pretty much, if you wanna call it that. Photo manipulation to make your emotional reaction not as strong.

xXHacker69Xx2 karma

I need some in my life. God I would love to make myself less emotional.

blindeey3 karma

Have you looked into cognitive-behavioral therapy? It can be pretty effective.

But as far as this is concerned, they built a browser extension so you can totally download that and see what it does for you.

xXHacker69Xx2 karma

Wowie thanks Blindeey. I appreciate the efforts! :)

Have a nice day, mine just got a lot better.

lonnib2 karma

Thanks u/blindeey for explaining while I was sleeping.

And feel free to come back to me about the extension u/xXHacker69Xx

MossyDefinition2 karma

what kind of things does your artificial vision system recognize?

lonnib2 karma

The beta version of it (not sure we released it yet, will have to check) uses pre-existing models to detect nudity, surgery and blood.
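As a sketch of how such per-category detection could drive what the extension processes: category names, preferences and the threshold below are illustrative assumptions, not the actual plugin code.

```python
# Illustrative only: categories, preferences and threshold are made up.
USER_PREFS = {"nudity": True, "surgery": True, "blood": False}
THRESHOLD = 0.8  # minimum detector confidence before we stylize

def should_stylize(scores):
    """scores maps category -> detector confidence in [0, 1].
    Stylize when any category the user opted into scores high enough."""
    return any(USER_PREFS.get(cat, False) and conf >= THRESHOLD
               for cat, conf in scores.items())
```

The design choice here is that detection and stylization stay decoupled: the detector can be swapped out (or extended with new categories) without touching the filtering code.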

cousincrimp2 karma

Have you tested this in a longer time span to see if there is a sort of adaptation effect of seeing the censored images, and that the levels of disgust/disturbance response in viewers seeing them end up rebounding?

lonnib3 karma

Haven't done any studies of the like yet... but I would really love to, TBH.

strangethingtowield2 karma

When challenged, you keep mentioning "explaining surgery to lay people" as a use case for this. I do get squeamish looking at surgery images, but this doesn't strike me as a particularly frequent or crucial use case. Do you have any insight into how many people currently suffer any degree of harm due to an inability to have surgery explained to them using existing technology?

lonnib5 karma

No not at all, and this is not my go-to case either. Most of the potential use cases are mentioned in the OP, I just don't want to explain them all over again every time. But here they are again

  • helping non-specialists to inform themselves (e.g., on wikipedia)
  • helping online content moderators who have to deal with illegal and offensive content all the time
  • helping police officers who have to go through the gory files of some criminals
  • helping animal protection associations to disseminate, for example, images of slaughterhouses
  • ...

AllHailSundin2 karma

Do you also think that episode is terrible, and easily one of, if not the worst of the series?

lonnib6 karma

I actually really like it! And I loved the one after even more (if I recall correctly it's "Crocodile")

Nash_and_Gravy2 karma

Hey so this may be more of a personal question but I hope you don’t mind.

I’m curious as to how you arrived at the position to conduct this research, i realize you are in France and I’m an American so obviously the paths would be different. This is work that seems very fulfilling to me but I messed up a lot when I was younger in high school and ruined my chances for college. Currently going through a community college and then I’m going to try again for an actual school, I’m just kinda curious if you think this is the type of work (sociology? Social research, idk what the proper term would be) someone in my position could eventually be doing.

lonnib3 karma

I don’t mind personal and I’m happy that you ask. I was educated as a computer science engineer and ended up doing a PhD in data visualization and human computer interaction. Some of my work includes working with CERN recently for instance.

I do not know much about how community colleges work in the US, but I want to believe that everything is possible if you really try hard enough! If you want to get there, I'm sure you'll manage! I'll be happy to talk more through DMs if you need or want!

In any case I wanna applaud you for realizing that education is important and for trying hard already!

AdmiralHTH2 karma

So you’re softening the realities of the real world to cater to the fragile sensibilities of a generation of cowards?

lonnib4 karma

I believe you don’t know what Facebook moderators are going through... do you think not being able to look at videos or murders and rape to make sure the content should be removed from the platform is being a coward ?


One word: Government coverups and propaganda

What will you do about it?

In the past, governments have reduced the intensity of certain events while some have totally lied to their people.

Then you have people who cry "Fake News" who might make real news fake and fake news real.

How do you deal with ethical concerns during the design process?

lonnib3 karma

Well, we just studied an effect; we did not create the tech. Do you have the same concerns about Photoshop? After Effects? These tools literally make everything you are afraid of possible. In our case, we just found out that a specific image processing technique that already existed works to reduce how shocking images are.

I think I've made a lot of these points in this thread: https://www.reddit.com/r/IAmA/comments/hszfne/i_am_lonni_besançon_i_have_studied_automated/fydopdw/

Gliese-832-c2 karma

Does this mean that video evidence of crimes isn't valid anymore? For example, what if someone gets assaulted and there's video evidence, but the court dismisses that as fake and potentially lets off a murderer without punishment?

lonnib9 karma

No absolutely not :). The original video is still there on the server/machine. It's just processed for you to look at it so you might be less shocked.

choose_a_lawyer1 karma

How advanced would you say this automated system is if it was to find an unmarked NSFW post?

lonnib3 karma

We haven't made any contribution on the detection of shocking content at all, just on making it more palatable :). So not sure what to answer there.

choose_a_lawyer1 karma

Omg I'm so sorry, I meant to change the post after reading the comments cuz I didn't really catch on to what it was you did. I thought it was a censorship type of thing 😂😂. Thanks though ツ

lonnib2 karma

sure :)

nith_wct1 karma

I'm curious how you would use this as an individual if you could choose exactly what you wanted to block with perfect accuracy. I see the values you've listed, but it seems extremely unlikely that those applications will ever be the primary use. People will use it to block out anything they don't like, and then it becomes a tool to shelter yourself from information, not expose yourself to it. Where are the limits of what is healthy to block out of your life?

lonnib2 karma

Where are the limits of what is healthy to block out of your life?

I am afraid I have no answer to that (yet :p). I would love to know more about this, but at the moment I don't. TBH, you should try browsing Wikipedia about the human body, it can really be quite shocking. Or googling "crocodile"... it sometimes leads to images of the krokodil drug (not linking to it cause definitely NSFW).

nith_wct2 karma

I think my limit is that I would choose not to see some things a second time rather than never at all. Having seen the crocodile stuff before, that's one of them.

lonnib1 karma

Having seen the crocodile stuff before, that's one of them.

can relate

zdog2341 karma

Sorry if this is mentioned in your paper somewhere, but could your model be used as a starting point for transfer learning? If I wanted to go full Black Mirror and block a specific person's face, would I need a different model architecture, or could I just train on different data?

lonnib2 karma

We did not contribute to the detection of shocking content; we only relied on pre-existing techniques, I'm afraid. So by playing around with our code on GitHub, you would not be able to change that.

Snadams-1 karma


lonnib2 karma


Snadams-1 karma

lonnib1 karma

Do you need more explanations?

Snadams0 karma

Was just a joke man, I was saying it sounded complicated.

lonnib1 karma

Not so much, but can explain better if needed :)