My short bio: I'm an MIT professor who loves thinking about life's big questions. My new book "Life 3.0" is about how to create an inspiring future with AI & my 1st book was about physics & big questions. I'm also president of the Future of Life Institute, which aims to ensure that we develop not only technology, but also the wisdom required to use it beneficially.

My Proof: https://www.facebook.com/Max-Tegmark-461616050561921/

Comments: 169 • Responses: 29

antiquark234 karma

If you were allowed to say crazy things without anybody judging you, what would you say consciousness "actually is?" Do you have any far-out ideas about consciousness that you haven't subjected to any scientific scrutiny yet?

MaxTegmark46 karma

I think that consciousness is the way information feels when being processed in certain complex ways. To me, the exciting remaining challenge is to clarify what precisely those "certain complex ways" are, so that we can predict which entities are conscious. :-)

pproct19 karma

What is your perspective on free will? Elaboration would be much appreciated!

MaxTegmark59 karma

Thanks for bringing up the fun topic of free will! The free will issue becomes very interesting as soon as you assume that our universe is purely physical, regardless of whether you also assume that it's totally mathematical. If your decisions are made by computations in your brain that correspond to elementary particles moving around according to the deterministic laws of physics (and making you feel conscious in the process), then how can we understand why it feels like we have free will?

Philosophers have of course argued over this famous question for ages without reaching consensus. Quantum physicist Seth Lloyd has made the interesting argument that a brain (or computer) will feel that it has free will if it can't figure out what it will do a minute later in less than a minute, i.e., if there's no "shortcut" way of getting to the answer of the decision-making computation without actually making the whole computation. This agrees well with how I feel when I decide: I consider the consequences of my various options, weigh the pros and cons, etc., and don't know what I'm going to decide until I've finished thinking it all through.
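[Editor's note: Lloyd's "no shortcut" idea can be illustrated with a toy contrast in code. This is purely an illustration, not anything from the AMA, and the function names are made up: summing 1..n has a closed-form shortcut, while iterating a chaotic map has no known shortcut in general, so the only way to learn the state at step n is to actually run all n steps.]

```python
def sum_with_shortcut(n):
    """Summing 1..n has a closed form: we can 'predict' the result
    of the whole computation without looping through all n terms."""
    return n * (n + 1) // 2

def logistic_iterate(x0, n, r=3.9):
    """Iterating the logistic map for generic r has no known closed form:
    the only way to learn the state at step n is to perform all n steps,
    analogous to a decision you can't foresee without thinking it through."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x
```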

lv4_multiverse_admin16 karma

Dear Professor Tegmark, I'm a young neuroscientist developing one of the local Effective Altruism chapters. How can I transfer from my field to AI safety and x-risk research? I would love to directly contribute my intellectual work to the focus areas which seem to be humanity's ultimate challenges, but it's extremely difficult for an "outsider" to reach the decisive circles and secure a high-impact position in the existential risk network.

As a side note, I have recently read "Our Mathematical Universe" and your deep considerations on the nature of reality significantly helped me in getting through difficult times by appreciating the uniqueness of life. Thank you!

MaxTegmark15 karma

Thanks for your encouraging words! Please get in contact with 80,000 Hours (https://80000hours.org), who give awesome advice for how to switch into such a career. :-)

ThinkAgainSunshine16 karma

Are you a chick magnet?

MaxTegmark52 karma

My wife is here next to me laughing, and says "our first date was pretty magnetic"... :)

mm_writer13 karma

What is your opinion on self-driving cars? Do you believe that current technology is adequate to make them safe for road-use and what developments do you foresee where they are concerned?

MaxTegmark35 karma

I think that self-driving cars will eliminate the vast majority of the 30,000+ annual road deaths in the US. Of course there will be problems, but on the whole fewer than with human drivers. They already drive better than I do on highways, which is, admittedly, not saying a lot... :-)

_jonyoung_13 karma

What books/papers have influenced your thinking the most and/or what books/papers do you think you've learned the most from?

MaxTegmark30 karma

The book that's blown me away most recently is "Sapiens". What got me into physics in the first place was "Surely you're joking, Mr. Feynman" and "The Feynman Lectures on Physics, Part 1". :-)

Hakiz11 karma

Do you agree with English being the Latin of our age, in the sense that it is the primary language for science and its frontiers?

Like you, I'm a native Swedish speaker. How do you view Swedish (or any other language) in comparison to English when it comes to expressing both old and new scientific thoughts with ease?

MaxTegmark24 karma

Pratar du svenska? I like learning other languages because I love travel and getting different perspectives. Different languages have different strengths, and whichever one I'm speaking, I'll often miss some cute idiom that only exists in another language. That said, I like English a lot, except for the horrible spelling rules where "a" can be pronounced in 6 different ways...

Xenoprimatology11 karma

Why are we living in a three-dimensional universe? Is there something special about having three spatial dimensions?

MaxTegmark51 karma

As described in https://arxiv.org/pdf/gr-qc/9702052.pdf and in my 1st book (http://mathematicaluniverse.org), you can't have stable atoms or solar systems if there are more than 3 spatial dimensions, you don't have gravitational attraction in fewer than 3, and you can't predict anything if there's more than 1 time dimension, making it pointless to have a brain. So if there's a multiverse where different parts have different dimensionality (as in many models with string theory + inflation, say), then you'd probably only have observers in parts with 3 space dimensions and 1 time dimension - and here we are! :-)
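[Editor's note: the orbital-stability half of this argument can be sketched with a standard effective-potential calculation. The notation below is mine, not the linked paper's: L is the orbital angular momentum, m the orbiting mass, and k a coupling constant.]

```latex
% In n spatial dimensions, Gauss's law gives a gravitational force
% F \propto r^{-(n-1)}, hence a potential V \propto -r^{-(n-2)} for n \ge 3.
% The effective radial potential for an orbiting body is then
V_{\mathrm{eff}}(r) = \frac{L^2}{2 m r^2} - \frac{k}{r^{\,n-2}},
% which has a stable minimum only if the centrifugal r^{-2} barrier
% dominates the attraction as r \to 0, i.e. only if n - 2 < 2, so n \le 3.
```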

reg1033611 karma

Dear professor Max Tegmark, I absolutely loved your first book, it was a very fun and interesting read. You are one of the reasons I study physics and I love it. Do you have any recommended readings? Also, I wonder what your favourite paradoxes are. Thank you very much for doing this AMA and I look forward to reading your book!

MaxTegmark16 karma

Thanks for your encouraging words - it makes my day to hear that it contributed to your decision to study physics! I put a long list of my favorite physics-related books at the end of "Our Mathematical Universe". I love all paradoxes, since I feel that it's precisely where our understanding breaks down that we're most likely to find helpful clues that help science progress.

CaptEntropy10 karma

What do you think of the "warnings" of writers like Nick Bostrom? Isn't being afraid of a future superintelligence taking over the world sort of like being afraid of our descendants? I for one welcome our future AI overlords :)

MaxTegmark28 karma

I'm optimistic that we can create an inspiring future with AI - but it won't happen automatically, so we need to plan and work for it! If we get it right, AI might become the best thing ever to happen to humanity. Everything I love about civilization is the product of intelligence, so if we can amplify our human intelligence with AI and solve today's greatest problems, humanity might flourish like never before. But the research needed to keep it beneficial might also take decades, so we should start it right away to make sure we have the answers when we need them. For example, we need to figure out how to make machines learn, adopt and retain our goals. And whose goals should they be? What sort of future do we want to create?

Your vision of intelligent machines as the descendants of our civilization indeed appeals to many, just as it appeals to many to have a child who carries on our legacy and values and goes on to do what we could only dream of. But you'd be less pleased if your child were the next Hitler who destroyed you and everything you cared about. That's why we put so much effort into raising our children well and teaching them our values. We need to do the same if we ever build superintelligent AI - not just be irresponsible parents by switching it on and hoping for the best!

slcnface7 karma

Pleased to meet you. What do you think is the most probable/soonest available way to probe the Planck scale physics for e.g. quantum gravity?

MaxTegmark22 karma

It's tough, since you need incredibly violent (high-energy) experiments to detect quantum gravity effects. My hunch is that our best bet is studying the most violent experiments that nature has already done for us: our Big Bang (studied via inflationary gravitational waves, say) and black holes (preferably in their final stages of Hawking evaporation). I'm also a big fan of theory work, since we're not in the situation of having many consistent quantum gravity theories to choose between and just needing experiments to tell us who's right. Rather, we have no fully fleshed-out and understood theories from which we can work out the key testable predictions - neither string theory nor loop quantum gravity is at that stage of development yet.

lakyberry4 karma

What is your prediction for ASI arrival? When do you think it's gonna happen?

MaxTegmark6 karma

When we polled top AI researchers at our recent Asilomar conference (https://futureoflife.org/bai-2017/), the median guess was that AI would be able to outperform humans at all intellectual tasks a few decades from now; I think that's plausible. I also think it's plausible that this could rapidly lead to superintelligence, but I try to keep a very open mind about all this. To me, the most interesting thing isn't guessing exactly what will happen, but asking what constructive steps we can take now to maximize the chances of a good outcome. #1 on my list is for governments to start seriously funding AI safety research, so that AI safety researchers at universities, MIRI, FHI, CSER, CSI, etc don't need to survive on just fumes & idealism. :-)

JimCui4 karma

Hello again Professor Tegmark,

I like that you included chapter 8 in your book!

What specific experiment are you currently most excited/interested/optimistic about for making progress on the EHP and/or PHP?

MaxTegmark8 karma

I describe an experimental setup in the book (http://space.mit.edu/home/tegmark/ai.html) that my group is trying to perform here at MIT, where the consciousness theory you're trying to test (IIT, say) is used to make a computer predict what you're subjectively experiencing at any one time. What I like about this is that it makes consciousness theories falsifiable: if the computer tells you that you're aware of something that you're not aware of, the theory goes in the trash bin of scientific history.

jarjarbinks1294 karma

Hi Dr. Tegmark, if you posit consciousness to be a state of matter, what determines which particular consciousness is experienced by which individual conscious entity? (i.e. why am I seeing the world through my eyes instead of through yours?)

MaxTegmark8 karma

The conscious visual information being processed in my brain comes from my eyes, not yours, which is why the corresponding subjective experience is of what my eyes (not your eyes) saw.

datadata4 karma

Hi Max, I greatly enjoyed your first book.

What might consciousness without the limitations of decoherence look like? For example, consider a quantum computer that is complex enough to have self-awareness and small enough to still be coherent (pick a different part of the Level 4 multiverse with a much larger value of Planck's constant or similar as needed...). This entity would not be forced to occupy a single observer moment and could in principle compute its entire future.

MaxTegmark5 karma

I love pondering what quantum intelligence (or "quintelligence", as my colleague Frank Wilczek calls it) would subjectively feel like. I've spent about a year working on this question, but don't feel that I have a satisfactory answer yet.

Jeroen19823 karma

Is having consciousness relevant to our survival? Does it somehow give us an advantage over an AI life form without it?

MaxTegmark7 karma

With my definition of consciousness (="subjective experience"), my answer is "no": what affects your survival is only what you do (which depends on your intelligence), not how you subjectively feel. But it's quite possible that the most evolutionarily efficient way to implement intelligence is with a computational architecture that is conscious as a side-effect.

Neurogence3 karma

Do you think strong artificial intelligence will be conscious? Or will it strictly be "intelligent"? It seems that most of the people working on this are working on artificial intelligence, but not on "artificial consciousness." The distinction between intelligence and consciousness is very important, but is, unfortunately, often ignored.

MaxTegmark8 karma

I agree that it's unfortunate that it's ignored – that's why I chose to write a whole chapter (chapter 8) about precisely this in my new book Life 3.0 (http://space.mit.edu/home/tegmark). Although most people I know think they know the answer to your question (half think "yes" and half think "no" :-), I think we don't, and explain what sort of theories and experiments can be pursued to find the answer.

that-is-classified3 karma

Hello, Professor Tegmark. You argue that it is the pattern in which elementary particles are placed that differentiates a human brain from other lumps of matter. Given this, could it be that in order to achieve consciousness, or sufficient informational processing ability, elementary particles must be organized in such a way as to resemble the wetware that is a human brain?

MaxTegmark6 karma

My guess is no: that what matters isn't the low-level implementation of the information processing, but the high-level structure of the information processing itself. But I try to keep an open mind about this, since Giulio Tononi argues that it might be the other way around.

franklintheweiner3 karma

Hi Prof. Tegmark, are you taking UROPs next year?

MaxTegmark5 karma

Perhaps! Please email me about this at [email protected]. :-)

Saladino933 karma

Hi professor Max!

I launch two questions to you:

1) From what I understand, today we have very specific AI that can perform much better than humans at very specific activities (like chess, driving cars, games, etc.), and it's easy to see why they do better (win/not win, more accurate/less accurate, etc.).

Then we have the next step of development, the general type of artificial intelligence (which I think is the one many people are afraid of). How can we know that this will perform better than us, as the specific kind does? I'm especially thinking about the definition of "better". If this will be some sort of human, how can we tell that it/he/she/*** is better than us? Among humans it is very difficult to define who is better than whom...

2) I remember some years ago an Italian physics professor, G. Parisi, said that we are becoming more aware of the fact that we cannot have intelligence without a body. Why is no one talking about a body when introducing AI? (If you can point me to some resources, please do, because I'm really ignorant about this.)

MaxTegmark12 karma

1) Intelligence is the ability to accomplish complex goals, so it can’t be quantified by a single number such as an IQ, since different organisms and machines are good at different things. To see this, imagine how you'd react if someone made the absurd claim that the ability to accomplish Olympic-level athletic feats could be quantified by a single number called the "athletic quotient", or "AQ" for short, so that the Olympian with the highest AQ would win the gold medals in all the sports. 2) AI can trivially have a "body" in the form of sensors, actuators, etc – or simply by being connected to the internet with enough money to buy the real-world goods and services it needs. I open my new book (http://space.mit.edu/home/tegmark) with a detailed thought experiment to explore this point.

Tiberivs_Septimvs3 karma

Mr. Tegmark, First of all, it's wonderful to have a chance to speak with you! I absolutely loved your "Our Mathematical Universe" book. I'm a Mechanical Engineering student who loves physics.

I'm very convinced about alien life in space. There are billions of planets that could contain life, and statistics says there must be life somewhere other than Earth. But intelligent life? How did we become intelligent in the first place? Why did evolution need to improve the human mind more than that of any other species? Most animals can communicate with voices; it's not unique to humankind. But we're the only ones that write it down and transfer information to our grandkids so much faster than the genetic methods of learning in evolution. We have aesthetic values, we love, we think about our place in the cosmos, we think about the main purpose of life, we have a huge passion to learn more. Maybe we are the exception. Or maybe other intelligent species have very different methods of living, communicating and storing information than us, and we cannot observe them yet. Do you think humankind is a major player in the cosmos, or are we just as important as the bacteria living right now on my keyboard?

MaxTegmark7 karma

Although human life is having an almost imperceptibly small impact on our universe now, I believe that we can have an enormous impact in the future. I'm in an uber in Boston right now reflecting on how life went from being a side show here to totally dominating the landscape. As I explain in chapter 6 of Life 3.0 (http://space.mit.edu/home/tegmark/ai.html), I think that life can similarly transform our cosmos once empowered by AI.

orionflyer122 karma

Hi Dr. Tegmark - What do you make of the contention that a theory that predicts multiple universes effectively loses its explanatory power since it: 1. predicts nothing, since everything can happen; 2. relies on unseen and impossible-to-ever-detect alternative universes. If a theory emerged that could explain our universe without relying on unseen additional universes, wouldn't that be more ideal?

MaxTegmark16 karma

It's crucial not to conflate theories with their predictions. Parallel universes are not a theory, but the prediction of certain theories, which are in turn scientific and testable because they make other predictions. For example, the theory of cosmological inflation (whose predictions agree well with recent measurements from the Planck satellite etc., helping explain why it's now our most popular theory for what put the bang into our Big Bang) predicts that space is larger than the part we can see (the Level I multiverse). Neither my book nor other recent books discussing the Level I multiverse (by Vilenkin, Susskind, Greene, etc.) claim that the Level I multiverse exists. Rather, they claim that inflation implies a Level I multiverse, so that if you take inflation seriously, then you're logically forced to take this multiverse seriously too. The logic is analogous for the other multiverse levels. Is inflation correct? We don't know yet, but this is a scientific question that upcoming experiments will shed more light on - and these experiments have nothing to do with personal beliefs or arm-waving.

dinochow992 karma

Didn't you get two different degrees from two different universities at the same time? I recall reading something about that in the past. How did that work out?

MaxTegmark5 karma

Yeah, I was indeed a confused youth... :) I ended up finishing both (econ in one place, physics in another), and you'll actually find some traces of my economics education snuck into my new book "Life 3.0" (http://space.mit.edu/home/tegmark/ai.html), especially in the prelude and chapter 3.

MarcusB45882 karma

Do you believe AI will take over the majority of "menial" jobs within the working world, and if so how will we as people adjust to support those who would have been employed within those positions?

MaxTegmark18 karma

Not only menial jobs, but also many jobs that require lots of training for us humans, such as analyzing radiology images to determine whether patients have cancer. To safeguard your career, go for jobs that machines are bad at, involving people, unpredictability and creativity. Avoid careers about to get automated away, involving repetitive or structured actions in a predictable setting: telemarketers, warehouse workers, cashiers, train operators, bakers and line cooks are on that list, and drivers of trucks, buses, taxis and Uber/Lyft cars are likely to follow soon. There are many more professions (including paralegals, credit analysts, loan officers, bookkeepers and tax accountants) that, although they aren't on the endangered list for full extinction, are getting most of their tasks automated and therefore demand far fewer humans. I give more detailed job advice in Chapter 3 of my new book.

If machines become able to do all our jobs in a few decades, that doesn't have to spell doom and gloom, as is commonly assumed. It could give everyone who wants one a life of leisure and play if we as a society share the vast new wealth produced by machines in a way such that nobody gets worse off. There'll be plenty enough resources to do this, but whether there's the political will is another matter, and currently I feel that things are moving in the opposite direction in the US and most western countries, with large groups of people getting steadily poorer in real terms – creating anger which helps explain the victories of Trump & Brexit.

mperhats2 karma

Apologies for the long question. If you'd touch on what you have time for that would be awesome! Thanks for having this discussion. Your work is fascinating!

In your recent interview with Sam Harris on his Waking Up podcast you discuss two schools of thought that should be taken into consideration when trying to morally assess the future tolerance for a superhuman artificial general intelligence. The first is to keep it boxed and restricted rather than the second school of thought that has an allowance for an autonomously functioning robot that is free to absorb and interpret information as it pleases. You also discuss how the second school of thought depicts a level of immorality of restricting any sort of intelligence to not be able to freely interpret information. This second school of thought suggests that the particular superhuman artificial general intelligence will have its own, individualistic, subjective experience. What is the evidence that an AI will have a subjective experience, a consciousness, and be able to experience a version of emotion? I find it much more likely that an AI, if we programmed it to be human-like, may have the illusion of a subjective experience. I find it difficult to imagine a human-like superhuman artificial general intelligence to be able to experience true pain, hate, love, happiness, anger, etc. as they are not biological evolutionary life forms. I have no doubt that these entities will have a profound impact on our proceedings as a species but why should we take into consideration their individual subjective experience rather than using them as an altruistic lever? And my last question is, if we were going to restrict this machine, why would it have any sort of incentive to break out of its restriction? I agree that with humans, complex biological organisms, prohibition, regardless of its extensiveness, does not work. I think that the reason for this has a lot more to do with our evolutionary desires to have sex, alter our states of consciousness, etc. 
But for a non-biological organism why would this AI have any incentive at all to break out of its confinement if it has no true biological motivation to do so?

MaxTegmark8 karma

First of all, we've traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. But from my perspective as a physicist, intelligence and consciousness are simply a certain kind of information processing performed by elementary particles moving around, and there's no law of physics saying that consciousness requires cells or carbon atoms. In fact, I dislike the "carbon chauvinism" suggesting otherwise. Second, as I explain in detail in Chapter 7 of "Life 3.0" (http://space.mit.edu/home/tegmark), almost any goal we give to a smart AI robot (say, buying us groceries) will lead it to develop subgoals that include self-preservation (since it realizes that if it lets itself be attacked & destroyed on the way to the supermarket, it won't accomplish its shopping goal). So such machine subgoals will be the rule, not the exception, regardless of how it subjectively feels to be the AI.

ubiq1er2 karma

Bonjour Pr Tegmark,

How do you reconcile the idea of emergent SuperAI with the Fermi Paradox? Let me rephrase (as I did here: https://www.reddit.com/r/singularity/comments/65dtol/ai_the_fermi_paradox/?st=j6zcexlb&sh=c6ccd173): "If E.T. Super AI has emerged somewhere in the galaxy (or in the universe) in the past billion years, shouldn't its auto-replicating, auto-exploring ships or technological structures be everywhere (a few million years should be enough to explore a galaxy for a technological being for which time is not an issue)?"

Thank you for this AMA, and for your books !

MaxTegmark5 karma

Thanks for your encouraging words! For the reasons I explain in chapter 13 of my 1st book (http://mathematicaluniverse.org), I'm of the minority view that we're probably the only civilization in our observable universe that's reached our level of technology. That's why distant AI hasn't taken over our solar system. In any case, if we do discover extraterrestrial intelligent life, I think it will almost certainly be postbiological AI. :)

Dogs_R_2_Big2 karma

Hi Max, I am just about to start university. I would like to go into Artificial Intelligence and am wondering what advice you would give to people my age?

MaxTegmark5 karma

Please get in contact with 80,000 Hours (https://80000hours.org), who give awesome career advice for idealistic people.

godelbrot2 karma

Hey Max, I spent a lot of time thinking about the Quantum Suicide thought experiment, and it got me wondering whether you could use it as a kind of replacement for a perfectly intelligent AI.

For example, instead of a perfectly intelligent AI used to play a particular stock perfectly for maximum profits (through buying and selling), would it work to have the Quantum Random Number Generator trigger a buying or selling action (buy if it outputs a 1, sell/hold if it outputs a 0) and then have the shotgun trigger on the outcome of that action (shotgun goes off if money is lost). Would you find yourself in the universe that the QRNG output the perfect buy/sell instructions? (assuming quantum suicide works in the first place!)

Or a password cracker, where the shotgun was programmed to go off if there is a failed password entry, and the QRNG binary is converted into standard character output. Would you find yourself alive only in the universe where the QRNG just happened to output the correct password?

With the standard experiment you find yourself (if it works!) in the universe that the QRNG only output Zeros (if the machine was programmed to trigger the shotgun if it output a 1), so essentially what I am asking is, do you think that it makes any difference to the possibility of the thought experiment working if the binary output is used to influence things in the world, and that the shotgun would trigger based on the outcome of those influences?

I wrote a little more in depth about the idea here:

https://medium.com/@godelbrot/the-deplorable-device-how-to-harness-quantum-suicide-to-become-insanely-rich-2820e98de1f6

MaxTegmark5 karma

As I explain in my 1st book (http://mathematicaluniverse.org), I don't think that quantum suicide works, so if you want to get rich, please try to do it through traditional means!
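[Editor's note: the post-selection at the heart of these thought experiments can be captured in a toy enumeration. This is purely illustrative and assumption-laden; it models the bookkeeping of branches, and says nothing about whether quantum suicide actually works.]

```python
from itertools import product

def surviving_branches(password_bits):
    """Enumerate every possible QRNG output of the right length (the
    'branches'), then keep only the branches where the guess matches the
    password, i.e. the branches in which the experimenter survives."""
    k = len(password_bits)
    branches = [list(bits) for bits in product([0, 1], repeat=k)]
    return [b for b in branches if b == password_bits]
```

Conditioning on survival leaves exactly one branch out of 2^k, carrying the correct password, which is why post-selection looks like magic from the inside even though, from the outside, almost all branches end badly.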