Highest Rated Comments


BUExperts (193 karma)

Should we be concerned about the next misinformation nightmare triggered by ChatGPT?

Yes! AI chatbots have already been used in countless misinformation and disinformation campaigns, though at this point it is humans pushing AI text generators that are causing the problems. Here are some examples that ChatGPT provided me just now (see the sketch after this list for one way to reproduce such a query).

In 2016, during the US Presidential election, chatbots were used to spread false information about the candidates. For example, a chatbot called "Jill Watson" was used to spread false information about Hillary Clinton.

In 2017, a chatbot on Twitter called "Tay" was launched by Microsoft. Tay was designed to learn from conversations with users, but it was quickly shut down after it began to spread hate speech and racist remarks.

In 2018, during the Brazilian Presidential election, chatbots were used to spread false information about the candidates. For example, a chatbot called "Fernanda" was used to spread false information about Fernando Haddad, a candidate for the Workers' Party.

In 2020, during the COVID-19 pandemic, chatbots were used to spread false information about the virus. For example, a chatbot on WhatsApp called "Coronavirus Health Advisory" was used to spread false information about the virus and how to prevent it.

In 2021, during the US Capitol riot, chatbots were used to spread false information about the event. For example, a chatbot on Telegram called "Newsmax" was used to spread false information about the cause of the riot and who was responsible.

In 2019, a chatbot on WhatsApp called "Momo" was used to spread false information and rumors. The chatbot was designed to look like a creepy character from a Japanese horror film, and it was used to spread rumors about a supposed "Momo Challenge" that encouraged young people to engage in dangerous activities.

In 2020, during the US Presidential election, chatbots were used to spread false information about voter fraud. For example, a chatbot called "RealFrankFromFlorida" was used to spread false information about voter fraud in swing states like Michigan and Pennsylvania.

In 2020, during the COVID-19 pandemic, chatbots were used to spread false information about cures and treatments for the virus. For example, a chatbot on Facebook called "Natural Health" was used to promote false cures and treatments for the virus, such as drinking bleach or using colloidal silver.

In 2021, during the COVID-19 pandemic, chatbots were used to spread false information about vaccines. For example, a chatbot on Telegram called "The Covid Blog" was used to spread false information about the safety and efficacy of COVID-19 vaccines.

In 2018, during the Indian elections, chatbots were used to spread false information about political candidates. For example, a chatbot called "Voter Survey" was used to spread false information about the Bharatiya Janata Party (BJP) and the Indian National Congress (INC) party.

In 2019, a chatbot on Telegram called "Nejdeh" was used to spread false information and hate speech against the Armenian minority in Azerbaijan.

In 2020, during the US Presidential election, chatbots were used to spread false information about mail-in voting. For example, a chatbot on Facebook called "Voter Integrity Project" was used to spread false information about voter fraud and the security of mail-in ballots.

In 2021, during the Myanmar military coup, chatbots were used to spread false information about the situation. For example, a chatbot on Facebook called "Myanmar Military Coup" was used to spread false information about the legitimacy of the coup and to spread hate speech against minority groups in Myanmar.

In 2016, during the Brexit referendum, chatbots were used to spread false information about the European Union (EU) and immigration. For example, a chatbot called "Brexitbot" was used to spread false information about the benefits of leaving the EU and the risks of remaining.

In 2017, during the French Presidential election, chatbots were used to spread false information about Emmanuel Macron, one of the candidates. For example, a chatbot called "Marinebot" was used to spread false information about Macron's policies and his personal life.

In 2019, a chatbot on Facebook called "ShiaBot" was used to spread false information and hate speech against the Shia Muslim community in Pakistan.

In 2020, during the COVID-19 pandemic, chatbots were used to spread false information about the origins of the virus. For example, a chatbot on WhatsApp called "CoronaVirusFacts" was used to spread false information about the virus being created in a laboratory.

In 2021, during the Indian Farmers' Protest, chatbots were used to spread false information about the protests and the farmers' demands. For example, a chatbot on WhatsApp called "Farmers' Support" was used to spread false information about the protests being instigated by external forces and the farmers' demands being unreasonable.

In 2017, a chatbot on Twitter called "Tay" was launched by Microsoft as an experiment in artificial intelligence. However, the chatbot quickly began to spread racist and sexist messages, as well as conspiracy theories and false information.

In 2018, during the Mexican Presidential election, chatbots were used to spread false information about the candidates. For example, a chatbot called "AMLObot" was used to spread false information about Andrés Manuel López Obrador, one of the candidates.

In 2019, a chatbot on WhatsApp called "ElectionBot" was used to spread false information about the Indian elections. The chatbot was found to be spreading false information about political parties and candidates.

In 2020, during the COVID-19 pandemic, chatbots were used to spread false information about the effectiveness of masks. For example, a chatbot on Telegram called "CoronaVirusFacts" was used to spread false information that wearing a mask does not protect against the virus.

In 2021, during the US Presidential inauguration, chatbots were used to spread false information about the event. For example, a chatbot on Telegram called "The Trump Army" was used to spread false information that the inauguration was not legitimate and that former President Trump would remain in power.

In 2021, during the COVID-19 pandemic, chatbots were used to spread false information about vaccines. For example, a chatbot on Telegram called "Vaccine Truth" was used to spread false information about the safety and efficacy of COVID-19 vaccines.

In 2021, during the Israeli-Palestinian conflict, chatbots were used to spread false information and hate speech against both Israelis and Palestinians. For example, a chatbot on Facebook called "The Israel-Palestine Conflict" was used to spread false information about the conflict and to incite violence.
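For anyone who wants to try reproducing a query like the one above, here is a minimal sketch using the openai Python package. The model name, prompt wording, and API-key setup are my own assumptions for illustration, not the exact query behind this list.

```python
# Minimal sketch: asking a chat model for examples of chatbot-driven
# misinformation. Assumes the `openai` package is installed and the
# OPENAI_API_KEY environment variable is set; the model name and
# prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat model would do
    messages=[
        {
            "role": "user",
            "content": (
                "List examples of chatbots being used to spread "
                "misinformation or disinformation. For each, give "
                "the year, the platform, and what was spread."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```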

BUExperts (182 karma)

Thanks for the question, kg_from_ct. It is a complicated issue for educational institutions. We want our students to learn how to think, and writing has been an important tool for teaching students to think. GPTs threaten that arrangement, obviously. But there may be ways to teach students to think other than focusing on writing. And our students really need to learn how to make use of GPTs, which aren't going anywhere. We can't ban GPTs without letting our students down, and we can't allow unrestricted use without harming student learning processes. Something in between sounds wise to me.

BUExperts (117 karma)

Thank you for your question. In a nutshell, higher levels of optimism have been linked to lower risks of poor physical health outcomes, such as developing heart disease and dying from chronic diseases; higher optimism levels have also been linked to more favorable physical health outcomes, such as living longer and staying healthy in old age (defined as living beyond age 65 with no memory complaints, chronic disease, or major physical limitations).

Psychologically, more optimistic people tend to have better emotional well-being (that is, higher levels of positive emotions and lower levels of negative emotions), even when faced with stressful situations like a major medical diagnosis. When dealing with stressors, more optimistic people tend to think of the situation as challenging rather than threatening, and they are less likely to feel helpless or hopeless.

One caveat is that scientists cannot yet definitively say optimism *causes* good health, because most of the data have come from observational studies - that is, scientists compared more versus less optimistic people on their health outcomes. A more rigorous approach would involve, for example, randomized clinical trials that test the causal effect of raising optimism levels on health in the long run.
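To make the observational-versus-causal point concrete, here is a toy simulation (my own illustration, not drawn from the studies described above) in which a hidden common cause produces an optimism-health correlation even though optimism has no direct effect on health at all.

```python
# Toy confounding simulation: a hidden common cause (say, socioeconomic
# resources) raises both optimism and health, so the two correlate even
# though optimism has zero direct effect on health in this model.
# Variable names and effect sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

confounder = rng.normal(size=n)                 # hidden common cause
optimism = 0.7 * confounder + rng.normal(size=n)
health = 0.7 * confounder + rng.normal(size=n)  # no optimism term here

r = np.corrcoef(optimism, health)[0, 1]
print(f"correlation(optimism, health) = {r:.2f}")  # roughly 0.33
```

An observational study would see that correlation and could mistake it for a causal effect, which is why randomized trials that actually manipulate optimism levels are the stronger test.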

BUExperts (100 karma)

Cheating is a problem, and AI text detectors such as GPTZero probably won't work well for much longer as AI text generation improves. The solution there is to devise ways of teaching students how to think that don't depend so heavily on writing. But my students are excited about the possibilities of GPTs as conversation partners. In that case, the skill has everything to do with querying AIs in intelligent ways. That's a very important form of learning that depends on a kind of empathy: understanding how AIs really work. Eliciting relevant information from AIs is not always easy, and young people need to learn how to do it.
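As a concrete illustration of what intelligent querying looks like, here is a short sketch (again assuming the openai package and an API key; the prompts and model name are invented for the example) contrasting a vague question with one that supplies audience, format, and constraints.

```python
# Sketch: the same topic queried two ways. Prompts that specify the
# audience, structure, and failure modes tend to elicit far more
# useful answers than vague one-liners. Prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

vague = ask("Tell me about photosynthesis.")

specific = ask(
    "Explain photosynthesis to a first-year college biology student "
    "in three short paragraphs: (1) inputs and outputs, (2) the "
    "light-dependent reactions, (3) the Calvin cycle. Flag anything "
    "you are not certain about."
)

print(vague, specific, sep="\n\n---\n\n")
```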

BUExperts (75 karma)

I'll see you in class tomorrow for your midterm exam. :)