workingatbeingbetter · 4 karma · 2020-03-27 03:18:36 UTC
Fortunately, all of my friends who do smoke are paranoid enough to have given it up for now, but I greatly appreciate your response. Fighting misinformation right now is vital, and you guys are doing great work by doing things like this. Thank you!
workingatbeingbetter · 3 karma · 2020-03-13 16:58:54 UTC
Thanks for the AMA! I'd like to ask you a question about controlling deepfakes and similar potentially problematic technologies from the research side.
Specifically, I'm a lawyer and engineer in charge of a large technology portfolio, at least 50% of which consists of ML and AI technologies, including deepfake technologies, at a large research university in the U.S. A number of the faculty and researchers here publish and, without consulting my office first, open-source potentially problematic technologies, such as deepfake technologies, facial and emotion recognition technologies, and so forth. In a perfect world, they would consult our office first and we would release their technology under something like a "Research and Education Use Only" License Agreement (a REULA) to limit problematic uses (e.g., uses like Clearview AI's). But as far as I can tell, once a technology is put out under an open-source license (e.g., MIT, BSD, etc.), that bell cannot be unrung if even one person downloads the software. On the administration side, there is also major hesitancy to sanction a student/researcher who inappropriately posts under such a license, because they want to respect that student/researcher's academic freedom. I took this job to help shape this field to be less dystopian, but I'm not sure there is a better way to deal with this situation than simply biting the bullet and trying to educate the researchers in advance.
Do you have any advice/tips on how to deal with the above situation? Also, do you think the "open source" bell can be unrung? I am not an expert on agency law, but I feel there might be an argument that the open-source license is invalid because the student/researcher lacked the authority to grant it. However, I don't have the capacity/resources to research this theory deeply.
In addition to the above questions, if you're ever looking for a new research or paper topic, please feel free to PM me. I have endless useful and interesting law review article topics from my time here. Anyway, thanks again!
workingatbeingbetter · 2 karma · 2020-03-27 03:10:02 UTC
My friends and I were discussing this the other day and couldn’t find an answer, but would smoking marijuana maybe once or twice a day increase a person’s susceptibility to the more pernicious symptoms of COVID-19? Or in other words, would you recommend people stop smoking marijuana or would you consider the additional risk (if any) negligible?
workingatbeingbetter · 1 karma · 2020-07-29 19:09:15 UTC
What policy proposals do you recommend for dealing with AI/ML technologies that act as a double-edged sword?
For example, there is technology that recognizes emotion through voice (see here for example), and one of the proposed use cases is helping the Coast Guard weed out prank calls, thereby allowing it to distribute its resources more effectively. However, this same technology can be implemented in devices like the Amazon Echo to identify users' emotions and steer them toward particular ads. As you can imagine, this can become dystopian rather quickly. What policy proposals do you recommend to deal with these issues? Would you amend the Bayh-Dole Act? Would you require FFRDCs to fundamentally change their structure? Would you require that particular licenses be used (and if so, which ones)?
I ask the question above because it's one I've dealt with directly as part of my job working in tech transfer at a major AI university. I have TONS of technologies on which I'm always trying to thread the needle between public release and more limited approaches, but there doesn't seem to be any real way to solve this issue under the current laws.
workingatbeingbetter · 1 karma · 2020-07-29 19:33:21 UTC
Beyond identifying this as a problem, does your book discuss any solutions? The only proposals I've seen so far seem impractical (e.g., an FDA for regulating AI) and/or too discretionary (e.g., internal policies for government-funded research).
Copyright © 2014 BestofAMA.com, All rights reserved.