I'm Professor Toby Walsh, a leading artificial intelligence researcher investigating the impacts of AI on society. Ask me anything about AI, ChatGPT, technology and the future!
Hi Reddit, Prof Toby Walsh here, keen to chat all things artificial intelligence!
A bit about me - I’m a Laureate Fellow and Scientia Professor of AI here at UNSW. Through my research I’ve been working to build trustworthy AI and help governments develop good AI policy.
I’ve been an active voice in the campaign to ban lethal autonomous weapons which earned me an indefinite ban from Russia last year.
A topic I've been looking into recently is how AI tools like ChatGPT are going to impact education, and what we should be doing about it.
I’m jumping on this morning to chat all things AI, tech and the future! AMA!
EDIT: Wow! Thank you all so much for the fantastic questions, had no idea there would be this much interest!
I have to wrap up now but will jump back on tomorrow to answer a few extra questions.
If you’re interested in AI please feel free to get in touch via Twitter, I’m always happy to talk shop: https://twitter.com/TobyWalsh
I also have a couple of books on AI written for a general audience that you might want to check out if you're keen: https://www.blackincbooks.com.au/authors/toby-walsh
I’m a writer, how fucked am I?
If you’re not a very good writer, fucked is probably the correct adjective.
But if you’re any good, ChatGPT is not going to be much of a threat. Indeed you can use it to help brainstorm and even do the dull bits. Toby
How do I know it’s you responding, and not an AI writing responses for you?
Ha! Good question. But it will take a better question than that to catch me out. How do I know you’re a real person asking me a question?
I see a lot of people treating ChatGPT like a knowledge creation engine, for example, asking ChatGPT to give reasons to vote for a political party or to provide proof for some empirical or epistemic claim such as "reasons why 9/11 was an inside job."
My understanding of ChatGPT is that it's basically a fancy autocomplete-- it doesn't do research or generate new information, it simply mimics the things real people have already written on these topics and regurgitates them back to the user.
Is this a fair characterization of ChatGPT's capabilities?
100%. You have a good idea of what ChatGPT does. It doesn’t understand what it is saying. It doesn’t reason about what it says. It just says things that are similar to what others have already said. In many cases, that’s good enough. Most business letters are very similar, written to a formula. But it’s not going to come up with some novel legal argument. Or some new mathematics. It's repeating and synthesizing the content of the web.
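The “fancy autocomplete” idea above can be made concrete with a toy sketch. This is a bigram model over a tiny made-up corpus, not how ChatGPT actually works internally (LLMs use neural networks trained on vast corpora), but the core objective is the same: predict a plausible next token given what came before.

```python
import random
from collections import defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(word, length=5):
    """Generate a continuation by repeatedly sampling a likely next word."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the mat"
```

Nothing here “understands” cats or mats; it only echoes patterns already present in the training text, which is the point being made above.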
What are some important things AI will change that we don't yet realize?
We’re still working out what ChatGPT can and can’t do.
Large Language Models (LLMs) like ChatGPT have already surprised us. We didn’t expect them to write code. But they can. After all there is a lot of code out on the internet that ChatGPT and other LLMs have been trained on.
Hopefully AI will do the 4Ds – the dirty, dull, difficult and the dangerous. But equally it might change warfare, disrupt politics (not in a good way) and cause other harms to our society. It’s up to us to work out where to let AI into our lives and where not to let AI in.
I know a lot of people are freaking out about AI tools like ChatGPT and how it's going to put programmers, writers, etc out of a job, as well as making it extremely easy to cheat on essay questions and exams. I have two questions:
1) How do you think detection of cheating using ChatGPT would be handled? It seems like it would be hard to detect an essay if you were to use it as a starting point and then edit it significantly. And is this something we would want to discourage?
2) Do you think that people will be completely replaced by tools such as these, or will their roles be adjusted using these tools, similar to how we no longer have "calculator jobs" but we use the tool to make things quicker?
The only way to be sure someone is not cheating with ChatGPT is to put them in exam conditions. In a room without access to any technology.
Tools for “detecting” computer generated content are easily defeated. Reorder and reword a few sentences. Ask a different LLM to rephrase the content. Or to write it in the style of a 12 year old.
And yes, I do see this moment very much like the debate we had when I was a child about the use of calculators. And the calculator won that debate. We still learn the basics without calculators. But when you’ve mastered arithmetic, you then get to use a calculator whenever you want, in exams or in life. The same will be true I expect for these writing tools.
Now that the cat's out of the bag, future LLMs may unwittingly use training data "poisoned" by ChatGPT's predictions. What are the consequences of this?
If we’re not careful, much of the data on the internet will in the future be synthetic, generated by LLMs. And this will create dangerous feedback loops.
LLMs already reflect the human biases to be found on the web. And now we might amplify this by swamping human content with synthetic content and training the next generation of LLMs on this synthetic content.
We already saw this with bots on social media. I fear we’ll make a similar mistake here.
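The feedback loop described above can be simulated crudely. In this toy sketch (my own illustration, not from any real study), a “model” learns a word distribution from its corpus, then generates the corpus the next generation trains on. Finite sampling means rare words tend to drift out of the distribution over generations – a cartoon of how synthetic training data can erode diversity.

```python
import random
from collections import Counter

random.seed(0)

# Generation 0: a "human-written" corpus with common and rare words.
data = ["a"] * 50 + ["b"] * 30 + ["c"] * 15 + ["d"] * 5

def train_and_generate(corpus, n=100):
    """'Train' by counting word frequencies, then 'generate' n new samples."""
    counts = Counter(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=n)

# Each generation trains only on the previous generation's synthetic output.
for generation in range(10):
    data = train_and_generate(data)

print(Counter(data))  # rare words often vanish after a few generations
```

The real phenomenon involves far more than word frequencies, but the mechanism is the same: each generation can only amplify what the previous one emitted.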
What is likely the first profession to be automated by a system like Chat GPT?
We’re already seeing some surprises.
Computer programmers are already using tools like CoPilot https://github.com/features/copilot/
These won’t replace all computer programmers. But they greatly lift the productivity of competent programmers, which is bad news for weaker programmers.
I’d also be a bit worried if I wrote advertising copy, or answered complaint letters in a business.
How would you recommend University level professors embrace/regulate AI tools in the arts? Interested in any takes you have on pros and cons of integrating it deliberately vs acknowledging it. What is a safe way of approaching forming policies around it?
Thanks for your time!
On one level, you can see them as tools, to democratize art. I can make much better designs using Stable Diffusion than I could by hand.
But I don’t see these designs as art. Art is about exploring the human condition. Love, loss, mortality …. all these human issues that a machine will never experience because it will never fall in love, lose a loved one, or face the fear of death.
These tools will therefore never mean as much to us as human made creations.
What kind of ethical problems do you foresee with AI that trains off of publicly available data? Is it more/less ethical than a person studying trends and data then creating something from that training?
It’s not clear that the data used for training was used with proper consent, that it was fair use, and that the creators of that data are getting proper (or even any) rewards for their intellectual property.
Lately my mind is being blown by technology in a way I didn't think was possible five years ago. How do I keep from getting left behind? Is it possible to get a foot in the door to start gaining experience in this area with only basic coding experience and no quantitative background or industry/academic connections?
Reading my books!
The good news is that there are some great online courses you can do to get your hands dirty and learn more about the technology.
Here in Oz, we have Jeremy Howard’s fast.ai courses, free and online (and even face-to-face in Brisbane). Worth checking out.
Will future AI be strictly cloud based or will we be able to have a private on site home Jarvis?
We’re at the worst point in terms of privacy as so much of this needs to run on large data sets in the cloud.
But soon it will fit onto our own devices, and we’ll use ideas like federated learning to keep hold of our data and run the AI “on the edge”, on our own devices.
This will be essential when latency is important. Self-driving cars can’t drive into a tunnel and lose their connection. They need to keep driving. So the AI has to run on the car.
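The federated learning idea mentioned above can be sketched in a few lines. This is a toy scalar example of federated averaging (my own illustration, not a production framework): each device fits a model on its own local data, and only the fitted parameters, never the raw data, are combined centrally.

```python
def local_fit(xs):
    """Each device 'trains' locally -- here the model is just the mean of its data."""
    return sum(xs) / len(xs)

def federated_average(device_params, device_sizes):
    """Server combines device parameters, weighted by local dataset size."""
    total = sum(device_sizes)
    return sum(p * n for p, n in zip(device_params, device_sizes)) / total

# Three devices keep their data private; only parameters leave each device.
device_data = [[1.0, 2.0, 3.0], [10.0, 12.0], [4.0]]
params = [local_fit(xs) for xs in device_data]
sizes = [len(xs) for xs in device_data]

global_param = federated_average(params, sizes)
print(global_param)  # equals the mean over all the data: 32/6
```

The size-weighted average of the local means equals the global mean, so the server learns the same model it would have learned with all the data, without the data ever leaving the devices.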
A lot of education at universities these days is not about learning, but about getting an accreditation. People tend to learn a lot on the job too, and outside of universities on their own via other means (udemy, YouTube tutorials, freecodecamp, etc.).
It seems ChatGPT is exposing this fact, as so much assessment at university is still focused on essays and exams. What do you think about the future of universities in this new context? How can they restructure to put a focus back on "learning" vs. accreditation, and should they?
Universities need to equip people with the skills for the 21st century not the 20th.
We need to teach people how to learn lifelong... Your education isn’t going to finish when you leave university but will go on for as long as you work and new technologies arrive at ever-increasing rates.
We also need to return to the more old fashioned skills that ironically were often better taught in the humanities such as critical thinking and synthesis of ideas, along with other skills that will keep you ahead of the machines like creativity and adaptability.
But universities will also increasingly offer short courses, that you can take once you're out in the workforce.
How close are we to AI having an original thought, and how will we recognize it when that happens?
ChatGPT is just mashing together text (and ideas) on the internet.
But computers have already invented new things, new medicines, new materials. ….
I am scared, can you please reassure me that the future is not bleak?
The future is not fixed. Technology is not destiny. It’s up to us today to decide the future by the decisions we make now.
But apologies to all the young people here. We really have f*cked the climate, the economy and international security in the last few decades.
And it’s only by embracing the benefits of technologies like AI, and carefully avoiding the possible downsides, that we have any hope of fixing the planet.
How often is AI research done across international borders (and is it difficult to achieve) given its potential security restrictions? Are there any countries or regions leading the way in this field?
Are there any interesting companies or projects we should keep our eye on out of interest?
Australia punches well above its weight internationally. We’re easily in the top 10, perhaps in the top 5 in the world. It’s not well-known how innovative we’ve always been in computing. We had the 5th computer in the world, the first outside of the US and the UK.
US and China, and then Europe (if you count it as one) are leading the way.
What is remarkable is that China has gone from zero to the top 1 or 2 in the last decade. The best computer vision work is probably now in China. The best natural language work (like ChatGPT) is in the US. Though China has the biggest LLM anywhere.
Like my peers, I work with many colleagues in Europe, the US, and Singapore...
As for other companies to watch (beyond usual suspects like OpenAI, DeepMind, …), I’d keep an eye on companies like Stability AI, Anthropic...
Will ChatGPT be monetised? Surely it won't stay free forever. Imagine it being used in search engines, AI messaging services, call centre conversations, smarthome integration – will it be used in more contexts than a chat service?
There’s already a premium service you can sign up for.
I expect there will always be free tools like ChatGPT. Well, not free but free in the sense that you will be the product. The big tech giants will all offer them “free” like they offer you free search, free email … because your data and attention are being used and sold to advertisers, etc.
Will human artisan work (writing, painting, etc) become a sort of luxury for a few in the future?
Yes, we see this already, within hipster culture, and a return to hand made bread, artisan cheese...
Basic economics tells us that machine-produced goods will get cheaper and cheaper as we remove the expensive part of manufacturing: the human operators.
But artisan goods will be rarer and ultimately more expensive.
I’ve joked that one of the newest jobs on the planet – being an Uber driver – is one of the most precarious. We’ll soon have self-driving taxis.
But one of the oldest jobs on the planet – being a carpenter – will be one of the safest. We’ll always value the touch of the human hand, and the story the carpenter tells us about carving the piece we buy.
Work, culture ... there might be a large arc taking us back to the sorts of things we did hundreds of years ago.
My mind was blown when I first read Isaac Asimov's The Last Question. Do you see AI playing an exponential role in advancing technology through materials science? At some point, will humans simply think of ideas and let computers maximize efficiency for us?
AI is already inventing new materials, new drugs, new meta-materials...
It won’t stop with humans thinking of the ideas, and the machines inventing them. Ultimately the machines will be able to do both!
Do you think an outcome like in the plot of ‘terminator’ or ‘wargames’ has the potential to become reality as A.I technology improves?
Wargames is a better (worse?) possibility than Terminator. We know what happens when you put algorithms against each other in an adversarial setting. It’s called the stock market, and you get flash crashes when unexpected feedback loops happen. Now imagine those algorithms are in charge of weapons in the DMZ between North and South Korea. You’ve just started a war.
Thanks for the AmA!
What can be done and what should we do to prevent AIs negative impacts on society as we know it?
I could write a book on this.
Wait I have!
But in brief: education and regulation.
All of us need to be more aware, educated about risks, and to use our power, how we vote, where we spend our dollars, to encourage better outcomes.
And we need to better regulate the tech space so it is better aligned with societal good.
Given that humanity has seemingly lost its way politically, morally, economically and environmentally, do you think we should turn to AI to start solving our problems as a species?
We face a tsunami of wicked problems starting with the climate emergency, moving onto the broken economy, increasing inequality, and troubled international security.
Politics has failed us. The only hope now is to embrace technologies (like AI) to tackle these problems. We could have made some modest changes to our lives and avoided changing the climate. But it’s too late for that now. We are locked into at least 1.5 degrees of warming, perhaps 2.
We need then to use AI to live lighter on the planet. Use resources more efficiently. Make better decisions about the resources we do use.
If so, we can look forwards to a future where the machines do more of the sweat, and we hopefully spend more time on the finer things in life!
How do I get into AI?
I’m a cloud engineer without a lot of programming experience and no hands on machine learning experience.
I spend so much time daydreaming about the idea of artificial general intelligence and I’ve come to believe it’s the thing I’d want to do if I could pick anything imaginable… but I’m not sure I’m smart enough to get into such a difficult, bleeding edge area.
What would you say about my situation?
There are plenty of exciting jobs in AI, and we can’t turn out students fast enough.
Here in Oz, we have Jeremy Howard’s fast.ai courses, free and online (and even face-to-face in Brisbane). Worth checking out.
As to our future relationship with AI, I see us being quite “intimate” with machines, after all, they’ll know us as well if not better than our spouses.
I hope AI stops being Artificial Intelligence and ultimately becomes Augmented Intelligence, as we realise these are merely tools: not tools to amplify our muscles, as in the past, but tools to amplify our minds. And we are the ultimate tool users.