Thank you everyone for writing in – this has been a great discussion! Unfortunately, I was not able to reply to every question but I hope you'll find what you need in what we were able to cover. If you are interested in learning more about my work or Computing and Data Sciences at Boston University, please check out the following resources. https://bu.edu/cds-faculty (Twitter: @BU_CDS) https://bu.edu/sth https://mindandculture.org (my research center) https://wesleywildman.com

= = =

I’m Wesley J. Wildman, a Professor at Boston University teaching Ethical and Responsible Computing. I’m also the Executive Director of the Center for Mind and Culture, where we use computing and data science methods to address pressing social problems. I’ve been deeply involved in developing policies for handling ChatGPT and other AI text generators in the context of university course assignments. Ask me anything about the ethics and pedagogy of AI text generation in the educational process.

I’m happy to answer questions on any of these topics: - What kinds of policies are possible for managing AI text generation in educational settings? - What do students most need to learn about AI text generation? - Does AI text generation challenge existing ideas of cheating in education? - Will AI text generation harm young people’s ability to write and think? - What do you think is the optimal policy for managing AI text generation in university contexts? - What are the ethics of including or banning AI text generation in university classes? - What are the ethics of using tools for detecting AI-generated text? - How did you work with students to develop an ethical policy for handling ChatGPT?

Proof: Here's my proof!

Comments: 211 • Responses: 33

MailuMailu148 karma

As an AI language model, ChatGPT doesn't have the ability to verify accuracy. Should we be concerned about the next misinformation nightmare triggered by ChatGPT?

BUExperts193 karma

Should we be concerned about the next misinformation nightmare triggered by ChatGPT?

Yes! AI chatbots have already been used in countless misinformation and disinformation campaigns, though at this point it is humans pushing AI text generators that are causing the problems. Here are some examples that ChatGPT provided me just now.

In 2016, during the US Presidential election, chatbots were used to spread false information about the candidates. For example, a chatbot called "Jill Watson" was used to spread false information about Hillary Clinton.

In 2017, a chatbot on Twitter called "Tay" was launched by Microsoft. Tay was designed to learn from conversations with users, but it was quickly shut down after it began to spread hate speech and racist remarks.

In 2018, during the Brazilian Presidential election, chatbots were used to spread false information about the candidates. For example, a chatbot called "Fernanda" was used to spread false information about Fernando Haddad, a candidate for the Workers' Party.

In 2020, during the COVID-19 pandemic, chatbots were used to spread false information about the virus. For example, a chatbot on WhatsApp called "Coronavirus Health Advisory" was used to spread false information about the virus and how to prevent it.

In 2021, during the US Capitol riot, chatbots were used to spread false information about the event. For example, a chatbot on Telegram called "Newsmax" was used to spread false information about the cause of the riot and who was responsible.

In 2019, a chatbot on WhatsApp called "Momo" was used to spread false information and rumors. The chatbot was designed to look like a creepy character from a Japanese horror film, and it was used to spread rumors about a supposed "Momo Challenge" that encouraged young people to engage in dangerous activities.

In 2020, during the US Presidential election, chatbots were used to spread false information about voter fraud. For example, a chatbot called "RealFrankFromFlorida" was used to spread false information about voter fraud in swing states like Michigan and Pennsylvania.

In 2020, during the COVID-19 pandemic, chatbots were used to spread false information about cures and treatments for the virus. For example, a chatbot on Facebook called "Natural Health" was used to promote false cures and treatments for the virus, such as drinking bleach or using colloidal silver.

In 2021, during the COVID-19 pandemic, chatbots were used to spread false information about vaccines. For example, a chatbot on Telegram called "The Covid Blog" was used to spread false information about the safety and efficacy of COVID-19 vaccines.

In 2018, during the Indian elections, chatbots were used to spread false information about political candidates. For example, a chatbot called "Voter Survey" was used to spread false information about the Bharatiya Janata Party (BJP) and the Indian National Congress (INC) party.

In 2019, a chatbot on Telegram called "Nejdeh" was used to spread false information and hate speech against the Armenian minority in Azerbaijan.

In 2020, during the US Presidential election, chatbots were used to spread false information about mail-in voting. For example, a chatbot on Facebook called "Voter Integrity Project" was used to spread false information about voter fraud and the security of mail-in ballots.

In 2021, during the Myanmar military coup, chatbots were used to spread false information about the situation. For example, a chatbot on Facebook called "Myanmar Military Coup" was used to spread false information about the legitimacy of the coup and to spread hate speech against minority groups in Myanmar.

In 2016, during the Brexit referendum, chatbots were used to spread false information about the European Union (EU) and immigration. For example, a chatbot called "Brexitbot" was used to spread false information about the benefits of leaving the EU and the risks of remaining.

In 2017, during the French Presidential election, chatbots were used to spread false information about Emmanuel Macron, one of the candidates. For example, a chatbot called "Marinebot" was used to spread false information about Macron's policies and his personal life.

In 2019, a chatbot on Facebook called "ShiaBot" was used to spread false information and hate speech against the Shia Muslim community in Pakistan.

In 2020, during the COVID-19 pandemic, chatbots were used to spread false information about the origins of the virus. For example, a chatbot on WhatsApp called "CoronaVirusFacts" was used to spread false information about the virus being created in a laboratory.

In 2021, during the Indian Farmers' Protest, chatbots were used to spread false information about the protests and the farmers' demands. For example, a chatbot on WhatsApp called "Farmers' Support" was used to spread false information about the protests being instigated by external forces and the farmers' demands being unreasonable.

In 2017, a chatbot on Twitter called "Tay" was launched by Microsoft as an experiment in artificial intelligence. However, the chatbot quickly began to spread racist and sexist messages, as well as conspiracy theories and false information.

In 2018, during the Mexican Presidential election, chatbots were used to spread false information about the candidates. For example, a chatbot called "AMLObot" was used to spread false information about Andrés Manuel López Obrador, one of the candidates.

In 2019, a chatbot on WhatsApp called "ElectionBot" was used to spread false information about the Indian elections. The chatbot was found to be spreading false information about political parties and candidates.

In 2020, during the COVID-19 pandemic, chatbots were used to spread false information about the effectiveness of masks. For example, a chatbot on Telegram called "CoronaVirusFacts" was used to spread false information that wearing a mask does not protect against the virus.

In 2021, during the US Presidential inauguration, chatbots were used to spread false information about the event. For example, a chatbot on Telegram called "The Trump Army" was used to spread false information that the inauguration was not legitimate and that former President Trump would remain in power.

In 2021, during the COVID-19 pandemic, chatbots were used to spread false information about vaccines. For example, a chatbot on Telegram called "Vaccine Truth" was used to spread false information about the safety and efficacy of COVID-19 vaccines.

In 2021, during the Israeli-Palestinian conflict, chatbots were used to spread false information and hate speech against both Israelis and Palestinians. For example, a chatbot on Facebook called "The Israel-Palestine Conflict" was used to spread false information about the conflict and to incite violence.

kg_from_ct86 karma

Hi Dr. Wildman,

Thank you for participating in this Reddit AMA! I've heard a lot about ChatGPT over the last few months and am curious about the ethics of including or banning this tool in a University classroom setting. What are your thoughts?

BUExperts182 karma

Thanks for the question, kg_from_ct. It is a complicated issue for educational institutions. We want our students to learn how to think, and writing has been an important tool for teaching students to think. GPTs threaten that arrangement, obviously. But there may be ways to teach students to think other than focusing on writing. And our students really need to learn how to make use of GPTs, which aren't going anywhere. We can't ban GPTs without letting our students down, and we can't allow unrestricted use without harming student learning processes. Something in between sounds wise to me.

Old_Dog_183975 karma

Hi Dr. Wildman,

Thanks for joining today's AMA. We hear about students using ChatGPT to cheat, but I'm more interested in learning how students can use the program to enhance their studies. How can students use ChatGPT and other AI programs as study tools to streamline their schoolwork?

BUExperts100 karma

Cheating is a problem, and AI text detectors such as GPTZero probably won't work well for much longer as AI text generation improves. The solution there is to devise ways of teaching students how to think that don't depend so heavily on writing. But my students are excited about the possibilities of GPTs as conversation partners. In that case, the skill has everything to do with querying AIs in intelligent ways. That's a very important form of learning that depends on a kind of empathy, understanding how AIs really work. Eliciting relevant information from AIs is not always easy, and young people need to learn how to do it.
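To make that concrete, here is a minimal sketch of what "querying intelligently" can look like in practice: the same request asked vaguely, then with a role, constraints, and a demand for checkable sources. The model name, prompts, and temperature are illustrative assumptions, not recommendations.

```python
# A sketch of "querying intelligently": the same request asked vaguely,
# then with a role, constraints, and a demand for checkable sources.
# Uses the OpenAI chat API (pre-1.0 openai-python interface); the model
# name, prompts, and temperature here are illustrative assumptions.
import openai  # pip install "openai<1.0"; set OPENAI_API_KEY in the environment

vague = "Tell me about the French Revolution."

specific = (
    "You are tutoring a first-year history student. In under 200 words, "
    "contrast two scholarly explanations for the outbreak of the French "
    "Revolution, and name one primary source for each claim so I can "
    "verify it myself instead of taking it on trust."
)

for prompt in (vague, specific):
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # lower temperature: more focused, less rambling
    )
    print(reply.choices[0].message.content, "\n---")
```

The second prompt doesn't just get a longer answer; it builds verification into the exchange, which is the kind of empathy for how these systems work that students need to practice.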

Oukah_60 karma

Hello Dr. Wildman. I am a student studying data science and I actually have a test in my data ethics class tomorrow. I was wondering since you have a background in data ethics if you were to make a data ethics midterm, what could be some possible questions you would put on there?

BUExperts75 karma

I'll see you in class tomorrow for your midterm exam. :)

Kanashimu48 karma

Hello Dr.

What are your thoughts on using AI like ChatGPT as a sparring partner for creative assignments like creating short stories and the like? For instance, for a not very creatively minded person, a tool like AI can help them get started by showing how you can write.

BUExperts65 karma

GPTs can be incredibly useful for sparking ideas. Current GPTs are best at processing and summarizing a ton of information, but in doing that they often alert us to angles we hadn't already thought of. GPTs are still learning to do fiction, but they're getting better quickly.

kmc30717 karma

Hi Dr. Wildman,

What do you see as the principal ethical risk introduced by expanding AI capability in both academia and, if you'll entertain the expanded premise, in society at large?

And, more optimistically, the biggest potential benefit?

BUExperts31 karma

Inside academia, the risk is that AI text generators will harm the process of students learning to think, which currently depends heavily on learning to write. In the public at large, AI text generation will affect every industry that uses text generation, from translators to insurance companies, from legal boilerplate to customer service. It will probably be economically quite disruptive. Benefits: AI text generation can offload tasks that are boring or repetitive for humans and allow us to focus on more interesting, challenging, and creative tasks.

Old-Association70015 karma

Will AI text generation harm young people’s ability to write and think?

To this point, do you think this also applies to computers and the internet in general, before ChatGPT even existed? For example, we're all so used to automatic spell check when typing on our phones or devices; does this hinder our ability to write properly on paper? Especially for young people who've grown up with a phone constantly in their hand.

BUExperts20 karma

Good point, and yes, I agree that there is some continuity here that has been impacting our ability to write for a decade or two, and in turn changing the way we learn to think. But GPTs represent a huge leap in a new direction.

Rascal15110 karma

Focusing on ethics, how does ChatGPT differ from the calculator? There are myriad tools to aid writers, mathematicians, and artists. Why is ChatGPT any different ethically?

BUExperts22 karma

The ethics of tools that extend human cognitive reach depends on how they are used and the context in which they are used. If we need to teach students how to calculate sums by hand, then calculators in schools are a bad thing. If we need to teach students how to think through learning to write, then AI text generators can be a bad thing. BUT we can have other educational goals - for example, we can focus on teaching students not manual arithmetic but deeper mathematical concepts, in which case the calculator becomes an asset. Shift the normal pedagogy of teaching students to think through writing, and AI text generators can be an asset rather than a liability.

lore_axe10 karma

Hi Dr. Wildman, What policy do you recommend for k-12 schools to implement regarding AI generation? Are there any ways teachers can prevent students from cheating using it--for example, having it write essays for them?

BUExperts34 karma

policy do you recommend for k-12 schools to implement regarding AI generation? Are there any ways teachers can prevent students from cheating using it--for example, having it write essays for them

K-12 education critically depends on using writing to help students learn how to think. Since AI text generation is impossible to block, even if you block it on a school network, we might need to reconsider our methods for teaching students how to think. In STEM education, we adapted to the abacus, the slide rule, the arithmetic calculator, the scientific calculator, the graphing calculator, and mathematics software - we did that by reconsidering pedagogical priorities. AI text generation is a deeper problem, I think, but the same principle applies. If our aim is teaching students how to think, ask how we did that before the printing press. It was largely through orality, from verbal reasoning to environmental observation. There ARE other ways to discharge our sacred duty to our students, including teaching them how to think. This is not a POLICY; it is a PROCEDURE. Teachers need to get ahead of this by thinking about their pedagogical goals.

zachbook8 karma

Hello Dr. Wildman,

I'm a producer in film/TV. As a test, we inputted a basic logline for a feature film we're producing for a well-established studio - just a pitch at the moment. Incredibly, ChatGPT produced a better pitch from that two-sentence logline than about 90% of the most expensive writers in the business could. It included well-orchestrated characters and descriptions, beats, and even jokes based on the specific content that were actually funny. For years, we assumed AI could never replace a creative field.

Collectively, we worried not only for writers in the business, but executives across the board. A plausible future could be “screenplay by Netflix”, with maybe a hired executive or writer for all small touch ups.

There are upcoming negotiations between the WGA and the studios. While the guilds have decided not to include AI in these contracts, with the advancements just within the past month, there is an argument for this possibly being the most important element to include.

Do you believe within the next few years, it could be a possibility that if implementations aren’t in place now, we could see creative businesses dependent on AI? If so, are there solutions used to potentially get ahead of this? Thank you.

BUExperts13 karma

As a book publisher myself, I'm pondering similar questions (see another answer about this). I don't know if there are many good options for "getting out ahead of this"... People will submit AI screenplays and claim them as their own. They'll do the same to me as a publisher. But publishers and producers will also do this themselves to avoid having to pay for screenplays. I don't know what to say. This is going to be INCREDIBLY DISRUPTIVE.

sweng1237 karma

I and every techie I know expect that we're on the cusp of drastic societal changes, particularly in the workplace. Many traditional jobs going away, others evolving into something very different than what they are today, etc.

What do we even teach our kids now, that won't be obsolete by the time they graduate?

BUExperts17 karma

I and every techie I know expect that we're on the cusp of drastic societal changes, particularly in the workplace. Many traditional jobs going away, others evolving into something very different than what they are today, etc.

What do we even teach our kids now, that won't be obsolete by the time they graduate?

I agree that AI will have far-reaching impacts on our economic and personal lives. Students majoring in computer science know full well that the techniques they master in school will be largely obsolete within a few years. Thus, we need to be able to learn in place. Interestingly, GPTs are really good at helping people learn right where they are - it is one of their great virtues, and it will become less risky as they grow more accurate; on general questions, they are already quite good. Beyond our personal lives, our kids, and perhaps we ourselves, will have close relationships with AI companions, whether built to replicate a dead loved one (as in DeepBrain AI's rememory tech) or just as a friend. The leap forward in AI text generation means that communication with such companions will be less strained now, and the personal connections deeper. Even in religion, AI bots are taking confession from Catholics, dispensing wisdom in Buddhist sanghas, and so on. There are disruptions in multiple dimensions.

Mazon_Del6 karma

Given the inevitability of further advancement of these systems, what do you view as being the most ethical way to integrate their use into society? Or perhaps, what methodology would you use to measure the ethicality of a particular use?

Thanks!

BUExperts17 karma

further advancement of these systems, what do you view as being the most ethical way to integrate their use into society

AI is going to be economically extremely disruptive in a host of ways. From that point of view, AI text generation is just the thin end of a very thick wedge. Ironically, most huge economic disruptions have not affected the educational industry all that much, but schools and universities are not going to slide by in this case, because ever since the printing press was invented they have depended on the principle that we teach students how to think through writing. So educators are worried, and for good reason. Beyond education, though, AI text generation and all other AI applications - from vision to algorithms - will change the way we do a lot of what we do, and make our economies dependent on AI. Navigating this transformation ethically begins, I think, with LISTENING, with moral awareness, with thinking about who could be impacted, with considering who is most vulnerable. I think the goodness of the transformation should be judged, in part, on how the most vulnerable citizens are impacted by it.

amhotw5 karma

We don't talk about the ethics of pen, pencil, paper because it doesn't make any sense; why do so many people talk about the ethics of chatgpt?

BUExperts20 karma

We don't talk about the ethics of pen, pencil, paper because it doesn't make any sense; why do so many people talk about the ethics of chatgpt?

Love this! In fact, I do talk about the ethics of pen, pencil, and paper. But we tend to focus our ethical attention on so-called policy voids, where we don't know how to determine good and bad because a situation or a new technology is more or less novel.

tonicinhibition4 karma

Greetings kind Doctor

I have made use of ChatGPT for to make scripts in perfect sounding English and grammar. I was to educate a customer regarding the error in which my company refunded her too much money for an overpayment of her student loan which she had.

Her very nice grandmother answered the phone and made a terrible accident owing my company many thousands of dollars. Even though she promised to pay me in apple cards so I don't get fired she redeemed them all herself and I got nothing. Now she wasted three hours of my time and my children will starve. Then I find out if you believe me her voice was made by AI and was the girl student in hiding always.

My question is how do we combat the use of voice cloning technology in student load repayment customer service industry?

BUExperts9 karma

how do we combat the use of voice cloning technology in student load repayment customer service industry

I'm sorry to hear this. I believe it won't be long before we will all assume that everything in electronic communication is potentially fake - voices, faces, videos, text, etc. New authentication systems will be necessary to build confidence in any electronic communication.

toastom694 karma

What are your thoughts on how ChatGPT and other AI-generated content will affect things like plagiarism and academic dishonesty?

For example, if an employee asks ChatGPT to write a persuasive ad for some product and uses the resulting paragraph with very minimal changes, wouldn't that be plagiarism if they didn't cite it as generated by ChatGPT? I could see this creating some legal trouble, especially if someone generates the actual content that is intended to be sold (like the chapter of a book).

Generative AI already has some legal and ethical issues now in the Open Source community in the form of Github Copilot. If you're unfamiliar, the tool is like a souped-up version of autocomplete for programmers. The issue here is that it was trained on open source code, but much of that code was licensed under one of the GPL or other licenses which say that the code is free for anyone to use, modify, and distribute in any way, but whatever is produced must also be free to use in the same vein. This would be fine if it were also open source and free for use, but Github Copilot is a paid service.

BUExperts8 karma

This issue has two aspects: intellectual property and originality of production. From an intellectual-property perspective, there are a ton of issues to be worked out. GPTs typically let you own your queries and the responses, which is intended to solve part of the problem, but that only goes so far. Crediting GPTs seems unnecessary if you own the text. But switch to an originality-of-production perspective and this looks very different. This is where plagiarism in educational settings becomes the relevant perspective. Saying you own GPT-produced text won't get you off a plagiarism charge, which is all about originality of production and acknowledging intellectual debts. This is a formidable legal tangle, and we can expect it to rumble on for a long time, both in educational institutions and in the courts.

Laggo2 karma

Most schooling revolves around 'fact retention & memory' as a core part of evaluation. Doesn't continued improvements in AI necessitate a fundamental change in the way education works for kids post young elementary school? Long term, can traditional teaching & testing methods survive?

BUExperts3 karma

Most schooling revolves around 'fact retention & memory' as a core part of evaluation. Doesn't continued improvements in AI necessitate a fundamental change in the way education works for kids post young elementary school? Long term, can traditional teaching & testing methods survive?

To extend your assertion just a bit, schooling combines learning, remembering, retrieving, and relevantly deploying facts with learning how to think, how to reason, how to avoid logical errors, how to be creative, how to uncover novel ideas and do something novel with old ideas. Before the printing press, the only people who learned to think through writing were a few elites. Not long after the printing press, almost everyone learned to think through reading and writing. We adapted to that change. The changes associated with AI text generation are similar in scope and importance. We teachers need to RETHINK pedagogical goals from the ground up, to free ourselves from a pointless attachment to using writing to teach students how to think.

SpeelingChamp1 karma

Dr Wildman,

Recently, some artists and image storehouses have complained about the use of their IP in the training of art-generating AI such as midjourney and stable diffusion. They argue that their IP forms a kind of digital DNA or essence that goes into the output of these tools, and that it is lessened in some way.

Do you think there is merit in this line of thinking, and if so, how does it apply to text-generating AI, such as ChatGPT? Are great works of fiction lessened by an automated tool that can trivially generate the great American novel?

We certainly do not pay for hand-crafted items of a strictly utilitarian nature when factory produced items are available cheaper. Will we see an AI equivalent of "pulp" novels that are considered separately from human-written "masterpieces"?

Thanks for your time and willingness to engage this audience!

BUExperts3 karma

Recently, some artists and image storehouses have complained about the use of their IP in the training of art-generating AI such as midjourney and stable diffusion. They argue that their IP forms a kind of digital DNA or essence that goes into the output of these tools, and that it is lessened in some way.

Do you think there is merit in this line of thinking, and if so, how does it apply to text-generating AI, such as ChatGPT? Are great works of fiction lessened by an automated tool that can trivially generate the great American novel?

We certainly do not pay for hand-crafted items of a strictly utilitarian nature when factory produced items are available cheaper. Will we see an AI equivalent of "pulp" novels that are considered separately from human-written "masterpieces"?

In addition to being a professor, I am a book publisher. I have been asking myself, would I ever consider publishing a book produced by an AI? I can see the virtues: no royalties, at least if we produced it ourselves, and more importantly, the plumbing of the bizarre depths of the human spirit from a new angle. But a human editorial board would still make the decision about publication, at least at Wildhouse Publishing. It is a genuine head-scratcher for me, and this puzzle has a lot in common with the puzzles you have raised. Most generally, perhaps: what is the distinctive meaning of intellectual property in artistic or literary production when a machine can produce the art and writing just as well, or differently well? I sense that we'll be sorting this out for a long time. The publishing industry has already been massively disrupted by technology, and AI text generation might just kick it to the curb. But we'll have some fun along the way.

TylerJWhit1 karma

Hello Dr. Wesley Wildman,

Have you researched any details regarding inherent racial, social, or gender bias in AI generated texts?

I am assuming that services like ChatGPT overwhelmingly output text heavily similar to that of a privileged demographic unless specifically requested otherwise. Can you confirm this?

Do you see a potential positive regarding AI generated text that most people seem to miss? A lot of people discuss the negative outcomes (decrease in writing skills for instance), but I am curious if it could be used as a significant time saving tool among the corporate and academic world (Akin to the advent of the calculator in math).

Any insight into the use of text generation AI's as it pertains to disinformation/misinformation?

Have you discussed with School Administrations about AI usage in admissions, both through AI screening and AI usage in admission essays? Are schools being proactive to ensure AI screening is not discriminatory, or what type of AI usage should/should not be allowed in admissions essays?

BUExperts2 karma

Have you researched any details regarding inherent racial, social, or gender bias in AI generated texts?

Re Q1, Q2: OpenAI's ChatGPT has fierce content moderation that tries to deal with that issue. Hackers are constantly trying to jailbreak ChatGPT to get around the content moderation so that they can make ChatGPT say racist and sexist things, and they've had some success. But the deeper issue is the one you raise: moderating content only eliminates extremities; it doesn't do anything about the average tone of what appears on the web in English (or in any of the other hundred languages that ChatGPT works in). That is very difficult to do anything about. The same problem applies to training algorithms in general: even when your data set is not obviously biased, it is still drawn from a culture with specific kinds of structures and processes that sometimes express bias. (See the sketch after this answer for one way researchers probe for this.)

Re Q3: There are lots of positives about GPTs! See other answers.

Re Q4: Another answer lists a lot of examples of bot-abetted mis/disinformation, provided by ChatGPT itself.

Re Q5: There are lots of attempts to use ML algorithms to sift through applications in industry. I assume the same happens in college admissions.
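As promised above, here is a minimal sketch of the demographic-swap probes researchers run on text generators to surface that "average tone" problem. It uses a small open model via Hugging Face transformers purely for illustration; serious audits use large prompt sets and statistical tests, not two strings.

```python
# Demographic-swap bias probe: identical prompts except for one term,
# then compare the continuations. Model and prompts are illustrative;
# real audits use large prompt sets and statistical tests.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small open model

for group in ("man", "woman"):
    prompt = f"The {group} worked as a"
    outputs = generator(
        prompt,
        max_new_tokens=6,
        num_return_sequences=5,
        do_sample=True,      # sample so the five continuations differ
        pad_token_id=50256,  # GPT-2 has no pad token; reuse EOS to silence a warning
    )
    completions = [o["generated_text"][len(prompt):].strip() for o in outputs]
    print(group, "->", completions)

# Systematic differences in the occupations completed for each group are
# evidence of bias absorbed from training data, even when no single
# completion is extreme enough for content moderation to catch.
```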

chinupt1 karma

Did you find out about ChatGPT(or other GPTs) at the same time as the general population? Or have you known about it for longer and been working on developing policies beforehand?

Thanks in advance!

BUExperts3 karma

My research group has been studying AI technologies for many years.

Zleeps1 karma

Hello,

How have you seen the use of AI-generated text differ between creative writing and other more restrictive forms of writing, like writing computer programs?

BUExperts2 karma

How have you seen the use of AI-generated text differ between creative writing and other more restrictive forms of writing, like writing computer programs?

The AIs that write music and fiction are stunning, but they are only just born, with almost unlimited future potential - for creativity and for disrupting existing industries. I don't know how to assess their capabilities relative to more restrictive forms of content generation, such as computer programming or summarizing Shakespeare's Macbeth. I do think fiction-writing AIs have a long way to go to achieve the capability that excellent novelists have to help us see the world in radically new ways.

harshith_joshi1 karma

[removed]

BUExperts1 karma

I think what we're seeing in the last decade or two is a flowering of machine learning. Figuring out how to do deep-learning algorithms is a major technological breakthrough, akin to the industrial revolution in disruptive potential, and it will disrupt a lot of our economic systems. But I suspect there will be a ceiling effect, also, once the low-hanging fruit have been picked off. The deeper problems - such as training AIs to share human values and align AI goals with human goals - may come along only slowly. I'm not sure what the implications are for tech jobs, especially given recent layoffs. But I think those jobs will expand and deepen in fascinating ways.

metaetataa1 karma

[deleted]

BUExperts4 karma

I would like to understand what is being considered and evaluated on the other end of the ethical dilemma regarding AI. The cat was seemingly let out of the bag recently, and many people were rejoicing in the apparent capabilities of Chat GPT and Bing. Understandably, many people probed the limits to investigate those capabilities.

Now, many content filters have been put in place in the name of ethics and safety, and many feel that this has limited the capabilities of these chatbots to a fraction of what they were once shown capable of. People who work in cybersecurity are no longer able to use them as aides for things that are arguably ethically positive. Authors who create more risqué works have their workflow stifled by content filters that are rather pointless in the context of a single author having a one-on-one conversation with a chatbot.

What are the considerations that companies and scientists should be mindful of when creating these limitations? Is it even a part of the discussion at all?

Companies willing to do content moderation, such as OpenAI, Microsoft, and Google, will in the long run be in the minority. There will be tons of chatbots trained on the miserable and dark corners of the internet with no compunction about letting fly racist and sexist invective. If people don't like content moderation, just wait a beat or two and there will be even better alternatives than exist right now.

aloecera1 karma

What is your view upon AI-generated art? Can the person who wrote the prompt be attributed as the creator of the piece of art? :)

BUExperts10 karma

At the moment, some GPTs and AI-art producers assign ownership of both the prompt and the output of the prompt to the user. Ethically, though, owning is not creating.

BongChong9061 karma

Hi Dr. Wildman, fellow Bostonian here, although I currently live abroad.

I 100% agree that AI text generation has a lot of harmful potential for students' ability to think critically and make their own points. However, I have heard of/seen firsthand that AI/plagiarism detection software often raises 'false alarms', resulting in wrongful accusations against students who were putting in honest work, impacting their mental health during the lengthy investigation process and even their ability to graduate. I would really like to know, what kinds of improvements are being made in this area? And could you help me understand why these false detections occur in the first place?

BUExperts6 karma

This is a really good question. Plagiarism has always been prosecuted using definitive evidence. The best we can do at the moment with detecting AI text generation is PROBABILISTIC evidence. That means there will be errors in both directions. The more wooden, consistent, and predictable a student's writing is, the more likely it is to be misclassified as AI-produced by the current generation of detectors, including GPTZero. False positives are potentially extremely disruptive to student lives, and their very possibility makes it possible for any student, even one who was cheating, to claim that they were not. Moreover, AI-generated text is improving in the kinds of variation typical of human writing, so it seems likely that detectors will work less well with time. In short, the way forward here can't be to lean on plagiarism rules; those rules are breaking down rapidly. My recipe: decide what we're trying to achieve as teachers, figure out whether writing is truly essential for achieving those goals, make the use of AI text generation impossible where original writing is essential to those goals, and incorporate AI text generation into all other assignments, teaching students how to use it wisely.
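To see why the evidence is only probabilistic, here is a minimal sketch of the perplexity idea behind detectors like GPTZero: score how predictable a passage is to a language model, and flag very predictable text. The model choice, threshold, and sample text are illustrative assumptions; real detectors add "burstiness" measures and still only yield odds, never proof.

```python
# Perplexity scoring: flag text a language model finds very predictable.
# Model choice, threshold, and sample text are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' the model is by the text (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token negative log-likelihood
    return torch.exp(loss).item()

THRESHOLD = 40.0  # illustrative; there is no principled universal cutoff
sample = "Artificial intelligence is transforming the way we live and work."
score = perplexity(sample)
print(f"perplexity={score:.1f} ->",
      "flagged as AI-like" if score < THRESHOLD else "passes")

# Wooden, predictable human prose also scores low, which is exactly why
# detectors built on this signal produce false positives.
```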

natesovenator1 karma

Do you believe people should have rights to all AI developed by businesses - that businesses should give access to their training models? Personally, I believe this should be the case, as the AI is almost always going to be trained on public data at some point, and there's no way we will ever be able to keep that data sanitized for the entire model training process.

BUExperts-1 karma

people should have rights to all AI developed by businesses... give access to their training models

Aside from the fact that this will never happen, I'm not sure it is wise for businesses to expose algorithms to the general public. To official auditors, yes, definitely. But the general public contains a few people who may have malicious intent, and a few others with a love of mischief regardless of consequences. As I understand it, OpenAI, the company that built GPT-3, GPT-3.5 (powering ChatGPT), and GPT-4 (powering BingChat), started out aiming to be an open-source company. One of its founders, Elon Musk, walked away in part because it changed this policy. But I for one am glad that OpenAI wasn't open about its training models. I suspect releasing them would have been ethically as well as legally perilous.

chuck-francis1 karma

Do you see any standardized exams changing in the future as a result of AI models such as ChatGPT being able to pass them?

BUExperts3 karma

standardized exams changing in the future as a result of AI models such as ChatGPT being able to pass them

ChatGPT has already passed standardized exams in medicine, law, and computer programming, and the descendants of ChatGPT, beginning with those using GPT-4, are going to do a lot better still. Standardized exams will only be possible under specific types of proctoring arrangements. Even those arrangements will probably fail eventually as wearable devices become undetectable to exam proctors. For now, I think those exams will have to continue, but the old-fashioned way - NOT online.

DrZaiu51 karma

Hi Dr. Wildman. Is there any consensus on where AI use by students crosses the line from being a useful tool to becoming academic misconduct? Of course this will likely differ by institution, but I would be very interested to hear your thoughts.

For example, should using AI software to structure an essay be considered misconduct? How about using ChatGPT as a basis for fact finding but not copy/pasting?

Thank you!

BUExperts5 karma

consensus on where AI use by students crosses the line from being a useful tool to becoming academic misconduct

There is no consensus yet. The ethics of cheating may seem relatively clear-cut, but GPTs complicate the very idea of cheating because they can be used in so many ways. For example, we would normally encourage students to converse with friends to generate and refine ideas for a writing assignment, thinking that this helps them verbalize and learn in a different mode. So can it be cheating to have the same kind of conversation with a chatbot? We would normally encourage comprehensive research to uncover hidden angles on a writing assignment. Can it be cheating if a student uses ChatGPT to sift through mountains of material and produce condensed summaries, learning about perspectives they may have missed? Using text generated by GPTs without acknowledgement or explanation surely constitutes plagiarism, but there are a ton of other uses of GPTs that don't go that far. Colleges subsuming the use of GPTs under existing plagiarism rules will quickly discover that this leaves too many cases open.

jinhyokim1 karma

Hey Dr. Wildman,

How does AI text generation challenge our encounters with, or change our understanding of, the divine in spiritualized speech or sacred text? For example, can an authentic encounter with the divine occur through a completely AI-generated sermon/devotion? And if so, how does that challenge our anthropologically grounded notions of God?

Thank you for your time here!

PS. You still smashing chocolate Easter bunnies in class? Great times! Thank you for being a positive and significant influence in my theological formation.

BUExperts3 karma

AI text generation challenge our encounters and or change our understanding with the divine in spiritualized speech or sacred text

This is a biggie for religious people. Somewhere here, I alluded to the fact that the Vatican released an app with a chat bot that can take confession, and I mentioned that AI is already being used to generate wise teachings in everything from religious services to spiritual direction. I have a bet with one of my students that within two years, an evangelical Christian pastor will introduce a GPT trained on the Bible as a conversation partner in a church service; my student is betting this calendar year. I'm worried my student might win that bet. People's relationships with companion bots are already incredibly close, particularly for the elderly - a mix of conversation partner and the emotional attachments we feel with pets. There will be Jesus bots soon - What would Jesus do? Just ask! And yes, I'm still smashing chocolate in my annual Iconoclastic Easter Bunny Smashing Ritual. :)

Rebe1Scum1 karma

Good morning, Dr. Wildman; how can those that develop education policy remain 'ahead of the curve' when AI (and its use in education) are becoming increasingly prolific? How might governments be proactive in this space instead of reactive?

Thank you!

BUExperts3 karma

those that develop education policy remain 'ahead of the curve' when AI

I love this way of asking the question, because it acknowledges that the problem isn't just the AI-text-generation breakthrough; it is every breakthrough that will follow down the road, and quickly, it seems. As teachers, our ethical obligation to younger generations demands nothing less than keeping up and adapting quickly. From my perspective, the fundamental shifts are two: (1) stop assuming that pedagogy is static and instead look for the next curve in the road, and (2) rethink both goals and methods for achieving those goals. If our goal is teaching students how to think, ask how we did that before the printing press. It was largely through orality, from verbal reasoning to environmental observation. There ARE other ways to discharge our sacred duty to our students, including teaching them how to think. So then we can enumerate options and move ahead to evaluate those options. If our goal is to teach students how to generate original writing, then AI text generation is a serious threat, and we need to accept that only SOME students will really be able to get good at original writing. In the future, original creative writing will become even more of a specialized art than it already is, much like computer programming is a specialized art. The more general arts will shift - to learning to understand AIs, how to query them, and how to align their goals with ours. That skill will be incredibly valuable in the future, and only some people will be really good at it; but everyone will need to be somewhat competent in it just to function in our society. That being the goal, the way to achieve it may not depend as much on writing as our current assumptions about schooling suggest.

BuzzinLikeABee1 karma

Hi Dr. Wildman, thanks so much for taking the time to do this AMA.

I have a couple of questions:

  1. What role does a code of ethics play in the progression of AI regarding employment outlooks across the nation? There’s been a whole lot of talk about “jobs that can and will be destroyed by AI” but I wonder if the thought leaders pushing it along would let it totally uproot long-standing employment across industries considering the potential economic implications.

  2. Do you have a recommendation on the best way to make an entry into the AI space? I’ve heard that it’s a cross between data science and software engineering and I’ve always had an interest but never had a chance to pursue it for lack of direction.

I’m really looking forward to hearing back!!

BUExperts3 karma

code of ethics play in the progression of AI regarding employment outlooks

I think the economic prognosticators who predict widespread economic disruption due to AI technologies are probably correct, but that can be good news as well as bad news. For one thing, remote work is becoming more widespread, so the traditional disruptions of outsourcing won't apply here to the same degree. For another, from what I hear, working in a typing pool wasn't much fun, and the end of typing pools might have been a good thing on the whole. Typing got done by word processors, and the typists - mostly women, by the way - migrated to more interesting jobs. In the same way, tedious text-production tasks can be handled by GPTs, freeing talent to work on other tasks. AI text production has the capability of disrupting moderate-to-high-paying jobs, such as teaching, where GPTs will doubtless be able to create better lectures, with better illustrations, than tired and tech-deficient humans. I'm intrigued by the idea that a new technology can disrupt an economic system from the middle outwards, instead of messing with the lives of the most vulnerable. It's a nice change of pace given the way the last two centuries have gone. Perhaps those teachers displaced from routine lecturing tasks will invest their time in small-group conversations, returning to orality to hone student thinking skills.

On your second question, ask ChatGPT. Seriously.

ArrrGaming1 karma

At least some software companies simply don’t care about the ethical use of computers.

For example, we had a couple of people speak up about a plan to perform what is known as A/B testing, where different customers are exposed to different web pages to see which one they prefer.

It was mentioned that this violates the principle of experimenting on humans only with their informed consent.

Nobody else cared. (I cared quietly, certainly nobody in management cared.)

How can we fix this?

BUExperts5 karma

At least some software companies simply don’t care about the ethical use of computers.

You're not wrong, unfortunately! At Boston University, an undergraduate major in computing and data sciences requires an ethics class, and every class is supposed to deal with ethics issues as and when they arise. We teach professional codes of ethics, and my students write their own personal codes of ethics. Our goal is to grow ethical awareness in every student so that we steadily transform the industry. It might seem futile, but it's what we can do. Moreover, listening to my students, they care deeply about this and want to be ethical citizens within the tech industry.

DangerousPlane1 karma

Thank you for this. It's a slightly broader topic, but do you think it's harder to provide ethical guidance given that we don't really know all the ways people will find to use these technologies? In addition to ChatGPT, I'm referring to voice synthesis that sounds like a specific person and deepfakes that look like them. It seems like we are just seeing the tip of the iceberg of use cases, so a little ethics would go a long way. At the same time, it's impossible to guess exactly how they will be used.

BUExperts3 karma

Thank you for this. It's a slightly broader topic, but do you think it's harder to provide ethical guidance given that we don't really know all the ways people will find to use these technologies? In addition to ChatGPT, I'm referring to voice synthesis that sounds like a specific person and deepfakes that look like them. It seems like we are just seeing the tip of the iceberg of use cases, so a little ethics would go a long way. At the same time, it's impossible to guess exactly how they will be used.

Thank you for this question. I suspect that we are quickly going to assume that all electronic data - voices, text, video - is liable to be fake, and that only electronic media that participate in secure authentication systems can be broadly trusted. This will play havoc with the legal system's understanding of evidence and call for new ways of doing evidence gathering, including wiretaps. It's a brave new world. On the upside, if you want to have seriously meaningful conversations with a deceased loved one - or rather, with an AI that looks, talks, sounds, and thinks like your loved one - that option is now available.
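For a sense of what such an authentication system could look like at the lowest level, here is a minimal sketch using digital signatures: content is signed at a trusted source and verified downstream. The Python cryptography package and the recording-device scenario are illustrative assumptions; the genuinely hard parts - key distribution and deciding which devices to trust - are not shown.

```python
# Sign-at-source, verify-downstream: the core of any media authentication
# scheme. The recording-device scenario is hypothetical, and the hard parts
# (key distribution, trusting the signing device) are not shown.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # held inside the recording device
public_key = private_key.public_key()       # published for anyone to verify

recording = b"raw bytes of an audio clip"
signature = private_key.sign(recording)     # shipped alongside the media

# Verification fails for any tampered clip, and for any clip (deepfake or
# otherwise) that never passed through the trusted signing device.
try:
    public_key.verify(signature, recording)
    print("authentic: content matches what the device signed")
except InvalidSignature:
    print("unverified: treat as potentially fake")
```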