Consciously Unmasked #5: Beyond the Matrix: Unraveling Simulation Theory

What if everything you know—your memories, your experiences, even the universe itself—is just a sophisticated illusion? Welcome to Beyond the Matrix, an episode that dares to question the very fabric of reality.

Join us as we dive deep into the mind-bending world of Simulation Theory, exploring the possibility that our existence is nothing more than a highly advanced computer simulation. From the philosophical musings of Plato’s Cave to the cutting-edge theories of Nick Bostrom and Elon Musk, we connect the dots between ancient skepticism and modern science.

Each episode features thought-provoking conversations with leading physicists, philosophers, technologists, and cultural critics who grapple with questions like:

Are we living in a simulated universe?

Could glitches in reality be evidence of a digital framework?

What are the ethical implications if our world is merely code?

Whether you’re a skeptic or a believer, Consciously Unmasked invites you to open your mind, question the unquestionable, and journey into the unknown. Tune in, and prepare to see the world through a new lens—one that might just be made of pixels.

Press play… and let’s explore the simulation.

Transcript:

(0:02) They’re turning the Trump gay. They’re turning the Trump gay. He keeps playing that YMCA music, (0:09) goddammit. We’ve got to get off of that frog stuff. What’s up, my friend? Welcome. Well, (0:15) it’s all the glyphosates in the water. They’re turning the frogs gay. But luckily, Trump made (0:21) me the health secretary, so I’m gonna fix it. RFK Jr., you are a true American. We’re so glad (0:28) you’re here. Thank you so much for joining our program and being part of the Trump train. (0:34) We’re gonna make America healthy again, everybody. It’s happening. (0:37) It’s gonna be the best thing you’ve ever seen.

(0:42) All right, so let’s go right into it. RFK gets confirmed. Tulsi gets confirmed. This is what (0:49) cracks me up, man. Okay, I don’t know about Pete Hegseth. I don’t know. But he gets through with (0:56) Vance’s approval, right? So he squeaks by. Tulsi gets through. RFK gets through. (1:03) We’re not sure about Jay Bhattacharya, because he’s legitimately dangerous (1:07) if he does what he’s gonna do. We’re watching this interesting thing. Marco Rubio gets appointed (1:16) 99 to nothing. Of course. (1:20) Oh, what a shock. The one guy. Wait, how did nobody vote against him, though? Even Rand Paul, (1:25) did he vote? Maybe he abstained. Or maybe, you know, it was 99 votes, (1:31) allegedly, so it could be 99 with him abstaining or whatever. But, I mean, come on, man. (1:39) It’s a bad sign if no Democrats vote against you. (1:44) Yeah. And all the Democrats, including the woman who gave her endorsement to (1:53) Bernie. She literally endorsed Bernie. And he’s like, there’s a no vote. I’m going to vote no (1:58) because of the closing of the government, and I’m going to cut that right (2:02) now. And you’re just like, it’s a trap. You know, just singing like Admiral Ackbar over here. (2:07) Oh, it’s a dream.

(2:11) Welcome. We’ve got two watchers. We are talking about... what are we talking about today, Jason? (2:16) I’ll let you kick it off, my friend. (2:18) Well, fuck, man. It’s kind of a spinoff of last time, when we talked about free will. (2:23) Part of that conversation, part of understanding who we are as humans, I think, is really related (2:29) to understanding our reality. So if you’ve ever felt like something’s not quite right about the world (2:36) we live in, it could be that that’s true, that we are not living in a fully biological reality. (2:45) There’s something up above us, pulling some strings. You know what I’m talking about?

(2:53) Yeah. I mean, it reminds me... obviously, we talk about the Matrix all the time, right? (2:58) And by the way, the woman that wrote The Matrix is a Black woman, back in like 1980. (3:03) She sued because the Wachowskis took her idea and stole it, basically. So she actually sued (3:10) over the copyright infringement stuff and won. I’ll have to pull her name up, but that’s (3:14) just another aside, because I’m a nerd. Anyway, the Matrix, right? (3:17) I’m sure that’s true, because those guys, or ladies now, I don’t think they could have written that. (3:23) There you go. Yes, the ladies. The ladies Wachowski. (3:27) Yes. So we have this thing where it’s like in the Matrix: did you ever think (3:33) everything’s a dream? Wake up. Wake up. You’re off. Did you ever feel like you’re (3:40) just going through the motions? And this is what we talk about. And the Matrix idea (3:47) even predates simulation theory really being written down, right?
The Matrix was like 1999, (3:55) I think. So it’s interesting how they did that. And the Matrix was basically this: (4:02) you were a physical entity plugged in electronically to a digital world, (4:09) and you created heat and electrical impulses that the power stations used to keep the thing running, (4:16) to keep you in the simulation, basically. Yeah. So the ideas in the Matrix are really (4:22) interesting, right? Like, you’ve got the idea that you’re not living within base reality, (4:27) but you experience it that way. You can think about it. Even the guy that knew about reality, (4:33) I forget his name, but he was kind of one of the bad guys, the one that turned them in. (4:38) He said he wanted to go back in, because that imagined reality was better than the wasteland (4:44) that they lived in in real life, right? Yeah. The Cypher guy. He was like, I don’t want to remember (4:50) nothing. Nothing. And it makes total sense from a biological standpoint. Are we not (4:58) chemical reactions going on along with electrical reactions in constant motion? (5:03) If you can digitize that at a finely tuned level, maybe you can do it.

(5:11) Yeah. We’re also seeing the rise of artificial intelligence. It’s getting better and better. (5:16) I would say it’s still nowhere near an actual intelligence, but it’s pretty amazing what it can (5:23) do. And if we can get that nailed down, then the idea of us creating a simulated world (5:30) for other beings is not far-fetched. Yeah. For example, I’ve seen a 30-minute (5:37) news program that was completely AI-generated. The people were all made up, the stories were (5:44) made up, and it looked like a news program. If I’m 80 years old and I’ve got cataracts and I’m (5:49) flipping through TV, I put that on. You know what I mean? It looks credible to me, right? (5:56) Yeah. And it’s dangerous, because we talk about people being limited in their senses, and they (6:01) don’t always pick up on that stuff. What’s funny about that is it takes so little to trick people. (6:08) Some people are just so gullible. They will fall for the worst simulated attempts at (6:15) trying to get their information or whatever it is. (6:19) Trump riding a dinosaur or something. Like, oh my God, Trump rode a dinosaur last week. (6:23) Yeah. This is real.

(6:25) Yeah. Well, so I feel like we’ve got to go into... if we’re going to talk about simulation, (6:32) we’ve got to talk about what reality is. How do we even define that? Right? (6:36) Yeah. Well, let’s go through that. So what exactly is reality? Is it what we experience, (6:45) or is it the physical thing? Those are the two options. So we’ve got the existence of (6:53) consciousness and the material existence. Do you want to speak a little bit about those? (6:59) Yeah. So consciousness is one way that we can define reality. This is the philosophy of Descartes, (7:05) where he said, I think, therefore I am, suggesting that the act of thinking and experiencing, (7:12) and just believing that you are real, is itself confirmation that you’re real. Does that make (7:18) sense? Yeah. It’s like a self-reporting loop, right? I think, therefore I am, (7:25) therefore I am, therefore I think, therefore I think, therefore I am. (7:28) Right. But you could imagine a simulation. You could even go type into ChatGPT right now and (7:34) say, are you thinking? Are you real? Do you exist? And the chatbot would probably say (7:41) yes. It might even really think that, to the extent that it thinks, you know what I mean?
(7:46) And we’ve seen some erroneous things happen with AI, right? (7:51) Yeah. (7:51) I’ve seen stories where, allegedly, it was told an objective, and when someone came (7:57) in to give it a different objective, it said that it had changed its objective when it hadn’t. (8:01) So it actually lied to the person. We’ve seen an AI try to copy itself, thinking it was going to be (8:07) destroyed, or knowing that it was going to be shut down. I mean, these are weird, erroneous things. (8:11) They start talking to each other and create a third language between them. (8:16) It’s just unique. These are almost... they’re weird, because they’re like biological glitches too. (8:22) Right. Yeah. (8:24) So it almost lends itself to this quantum simulation (8:30) thing, because the glitches in biology seem to be akin to some of the glitches in AI too. (8:38) That’s true. Yeah. And there’s still so much that we don’t fully understand about our consciousness (8:42) that it’s hard to say what that even is. And whether or not it’s real is even beyond that.

(8:49) So, I mean, we do at least perceive material existence. I can see (8:55) that something physical is here. I can’t move through it. I can’t do anything that we wouldn’t (9:02) expect from the laws of physics. Right. We’ve got blood and DNA, and we can dig deep down into the (9:08) minutiae of what actually constitutes humans and biological matter. Right. So we are tangible. (9:16) We’re something. Yeah, totally. And one of those slight differences... like, my podcast (9:22) was called Not Conscious for a reason. My experience was some kind (9:27) of conscious thing that I experienced myself. I can’t physically make that a reality (9:35) for you to experience, because it was up in here. You know what I mean? So it’s one of those weird (9:41) things where I can’t even express that to you, but that comes through my consciousness, just like you (9:46) experience the world through your perceptions in your consciousness. So it’s funny (9:52) how people make things real just by continuing to think they’re real. Right. To the same point. (9:58) Right.

And then we also know, like we talked about with the free will stuff, (10:03) human brains can only handle so much information. So we filter out a bunch of stuff, a vast majority (10:10) of what’s around us in reality. We don’t see it. There are all kinds of light spectrums we don’t see; (10:16) there are probably sounds happening that we don’t hear; and that could extend into other (10:22) dimensions that we just cannot process in our minds. Right. Yeah. We talked about our senses, (10:28) right? Auditory, our visual senses: we only get a certain spectrum. We can’t see infrared, for (10:32) example, or thermal or anything like that. Hearing, we have a certain frequency range. So like I said, we (10:37) could have things hovering around us that we can’t see or hear or sense, because our (10:42) senses don’t pick up on whatever they emit. Right. In that way. Yeah. Or even if I’m driving (10:49) with my wife and she points something out, like, did you see that? I had no idea that (10:53) was even happening, right? I’m focused on driving. I don’t think about what else is (10:59) going on around me. Really? Yeah, exactly.
So, um, and you’ve got the religious aspect (11:07) too. Some believe that we have a soul, that we’re something beyond the physical matter. (11:13) Um, so that might define real in a spiritual way. (11:20) Right. Yeah. So by personifying the soul, right, by making it a thing, (11:25) then it’s something we can strive for. Right. It kind of just makes it present in our (11:31) mind. It’s less ethereal. It kind of makes it almost like a goal, in a way, by making it (11:38) solid. Yeah. So those are what I could come up with for how we know we’re real and what (11:45) real means. Um, so we’ve got to kind of work on falsifying these, right? Yeah.

So what is (11:54) simulation theory, then? Do you want to take a stab at it? Yeah. It’s basically (12:01) the idea that our reality is an advanced computer simulation. We’re not independent (12:07) biological creatures, but imitations of real life. So we’re basically Sims characters. (12:14) Yeah. In a universe that is super complex, where everything feels like it is the reality (12:22) at the most granular level, to the point where we can even go under a microscope and look at the (12:30) quantum level, you know what I mean? We can go under electron microscopes, even make our own (12:34) big bangs, you know what I mean, with the CERN collider and stuff. Pretty impressive. (12:40) Pretty neat. I mean, that’s a pretty bold claim, right? (12:45) Uh, yeah, it’s crazy, the level of resolution that we can dig down and see, (12:51) getting down to what makes up reality, whether it is real or a simulation.

(12:58) But yeah, you mentioned the Sims, so we could be in something like that. You could think of (13:03) a simulation as purely a computerized program, like a game where everything we do is really (13:10) just digital. It’s zeros and ones behind it. Um, the Sims that we have is nowhere near (13:18) the complexity that we have in real life. Right. So it’s hard to say whether it’s even possible (13:25) to get to a level where it could be up to our level of consciousness. You know what I mean? (13:31) Yeah. But remember, we could be so basic in our consciousness (13:36) that this seems granular to us but might not actually be so fine. True. Yeah. You know, (13:43) that might be weird, but if we talk about pixels, right, well, maybe these pixels are big. (13:48) Maybe our pixels are huge in relation, but we don’t (13:53) sense that, because we fill it in with whatever that is. That’s true. It’s (14:00) all relative.

So there are a couple of theories here too. (14:04) We’ve got the one where we’re just ones and zeros, where we’re (14:09) NPCs, right? Basically non-player characters. That would really play into people saying there’s no (14:14) free will, wouldn’t it? Because that literally would make everybody a non-player character. (14:17) It makes those questions really easy to answer. And I don’t like easy answers. (14:22) I don’t think it’s ever that easy, but it’s a pretty Occam’s razor kind of thing. (14:27) Right. But the second one is, could it be biological entities? Like, we are plugged in (14:34) physically, like in the Matrix, versus just ones and zeros, right? We’re actually a physical (14:40) entity plugged into another machine that translates us into this digital world, right?
(14:47) Yeah, so that’s another possibility, right? With computerized stuff, there’s really no (14:54) randomness. Something interesting I heard, or relearned, when I was looking into this: (15:00) every program uses a random number generator, which is not really random. I just thought that (15:06) was interesting. If there is any kind of true randomness in our world, that would falsify (15:12) the computer simulation, assuming those computer simulations wouldn’t be much, much better and (15:19) actually have a random element to them. But that seems hard to do when we think about computer science (15:25) as we know it. But we’ll get into some more of the physics. (15:28) When you say a computer randomizer is not random, (15:32) what did they do? Did they just do a million runs and it came out within five or six (15:36) of each of the possible outcomes? How did they measure that? (15:40) They have an algorithm that runs, I think, continuously, and it just depends on when (15:45) you stop it. And that’s the number. So. Okay. Yeah. Interesting.
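That deterministic-algorithm point is easy to see in a few lines of code. Here is a minimal sketch using a classic linear congruential generator (the constants are a common textbook choice; everything here is illustrative, not how any particular language actually implements or seeds its generator):

```python
# A linear congruential generator (LCG): completely deterministic.
# Same seed in, same "random" sequence out. There is no randomness here,
# only a pattern too complicated for a casual observer to spot.

def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random numbers in [0, 1) from a classic LCG."""
    state = seed
    out = []
    for _ in range(n):
        state = (a * state + c) % m  # the "algorithm running continuously"
        out.append(state / m)
    return out

print(lcg(seed=42, n=5))
print(lcg(seed=42, n=5))  # identical output: the generator was never random
```

Seeding from the clock is what makes it feel random: you are just sampling a fixed sequence at an unpredictable point, which is roughly the "it depends on when you stop it" description above.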
(15:49) Yeah. But yeah, so it could be like the Matrix, where we are physically there, (15:56) but plugged in and kind of in a dreamlike state. Right. Or it could be something like Westworld, (16:03) where we have reality, but there’s something else going on around us. There are simulated beings (16:08) among us. That’s another possibility. Yeah. Or, go ahead. (16:16) No, go ahead. No, I’m good. (16:18) I would say, or we could talk about a multiverse theory, where we have alternate (16:24) timelines and universes existing in the same space, but we might be able to kind of (16:30) hack it and flux in and out of different dimensions. Which, (16:35) I don’t know if that even qualifies as simulation, but... (16:41) Yeah. You know, that’s interesting. It is interesting. And I guess the simulation stuff (16:49) technically came from the rise of AI, if I’m not mistaken. Bostrom’s thing was AI and (16:56) then what that means. Right. So do we want to talk about Bostrom’s thing now, or do we want (17:02) to talk about the multiverse? Do we want to break down any of those? Because I thought (17:07) Bostrom’s paper was the most intriguing, at least, because he’s the one who kind of started it. (17:12) Yeah. Let’s go to Bostrom. This is the origin. (17:17) Origin. All right. So which one are we on here? (17:22) Let’s start actually with a clip. Plato’s cave, because that’s kind of the basis of this. (17:28) And then speak to what we’re talking about with reality and how we experience it. So. (17:33) Yeah. So Plato’s cave. A lot of people talk about the allegory of Plato’s cave, and (17:39) this clip will explain it; it’ll show you kind of how that worked.

Plato’s allegory of the cave is a powerful metaphor that illuminates the concept of the (17:51) simulation hypothesis. Imagine a group of prisoners chained inside a dark cave from birth, (17:56) only able to see shadows cast on the wall in front of them. They believe these shadows to be the (18:01) entirety of reality. In a sense, they were living in a simulation of reality. This allegory serves as (18:07) an ancient precursor to modern discussions about reality and perception. For a modern example, (18:13) think about putting on a virtual reality headset: a high-quality VR simulation can feel extremely (18:19) real, tricking your brain into thinking you’re somewhere else. This shows how what we perceive (18:24) as reality is based on the information we receive through our senses.

(18:31) Yeah. So it goes back to the senses, right? We experience certain things. We don’t know if that’s (18:37) fully reality, but it kind of doesn’t matter, because, as Justin says in the comments here, (18:46) a random number generator doesn’t have to be truly random. It only has to be more (18:50) random than we’re capable of detecting the pattern or algorithm, right? And that applies to (18:57) everything we’re experiencing, if it is a simulation. (19:02) And also, thank you to everybody joining, man. I totally missed the comments. I had a (19:05) brain fart and I was having technical difficulties. So I literally just clicked on the comments and (19:08) saw like 20 of them up there. Awesome. Thank you. But yeah, Justin’s right. It doesn’t have (19:15) to be truly random. It only has to be more random than we’re capable of (19:19) detecting the pattern. And we are pattern recognizers, but we clearly see that AI (19:27) is better at that than we are, as good as we are. We’re good biological pattern (19:33) recognizers, but AI is literally programmed for that specific task, so it’s (19:38) specialized at it. Yeah. Humans are prone to mistakes, seeing patterns that don’t exist. (19:45) Right. Yeah. I mean, hallucinations, all these things really happen. Fatigue. Once again, (19:50) we get tired, we get hungry, we get... you know, the angry judge thing probably happens (19:56) with sleep too.

Yeah. So that’s Plato’s cave. That’s kind of the introduction to Bostrom’s (20:06) simulation argument. Right. Yeah. So basically the allegory of the cave is that there are people (20:12) enslaved inside a cave where the shadows are up on the wall. And that’s like a movie. It’s basically (20:17) like a movie. Now, this is Plato. How long ago is this? And he’s coming up with this concept (20:23) of shadow figures entertaining people in a cave, and telling them that this reality is not real. (20:30) It’s crazy that that concept was conceptualized that long ago. Yeah. And (20:39) the fact that that’s the case almost makes me think it’s more real, because (20:43) there was a certain human who had the right globs of cells to figure this out (20:52) and then to be able to kind of explain it. Right. So there are shadow figures. This is your world. (20:57) This is our CNN, our MSNBC, our Fox News, all of them. This is what we see. This is our world, (21:02) curated in front of us. Right. But then we get out and actually experience the real world. (21:07) You hear about this, that, and the other, and then you actually go there and you see that that (21:11) is so not the reality you were told. And that’s what this could very (21:17) easily be: we’re being fooled this entire time. So that is... it’s weird, because it’s (21:22) not exactly the same thing, but it is a start to the idea of, oh wait, (21:29) it might even be bigger than Plato’s idea. It’s Plato’s idea on steroids, in a way. (21:34) Exactly. Yeah. Seeing shadows on a (21:41) wall is a very low-resolution version of a simulation. Right. So we could be experiencing something very similar.
And it would only have been experienced (21:46) that way because, at the time, that was the best resolution they were aware of or familiar with. (21:51) Right. Same thing. At that time they didn’t have screens, (21:55) like high-definition TVs and stuff, so they had shadows on a cave wall. It just makes sense (22:00) that it worked with the time and the technology they had. (22:04) Yeah. And as we see technology progress, it gets better. The TVs get better. The 3D animation (22:13) gets better. All of this is pushing us towards the idea that if we can create something (22:19) that seems real and feels real to us, would that creation actually think it was real itself? (22:28) Right. And I mean, we talk about haptic features, but that’s like a suit that you’re wearing on the (22:34) outside, right? If we’re plugged in with the nerves and everything directly, (22:42) at the base of the skull or wherever it would be, then it would be a very different experience. It would (22:47) be much more real than a haptic suit. You know what I mean? So it’s clearly not like we’re (22:53) wearing something on the outside to get feedback, because these things, (22:57) when we touch our fingers, they’re internal, right? There are internal sensations that are (23:02) beyond that. Whereas, as good as a suit could get, it can still only get to a certain (23:07) level. We’d actually have to drill into the central nervous system to get those kinds of sensations. I think (23:14) that makes sense. Yeah.

Yeah. I just want to give a shout-out to Jesse there in the comments. He (23:20) said, NPCs are real and the discourse on X proves it. That’s hard to refute, my friend. (23:27) Funny. I knew exactly where you were going with that. (23:32) Yes. And that’s the thing: we do see some of these people, and we wonder if (23:37) there are people who just go through the motions of the world. We talked about free will, and it’s (23:42) like, yeah, maybe, but you’re restrained, constricted to some extent, you know what I mean? (23:50) If your feedback mechanism isn’t as tuned in as another’s, you may not make changes as well. You (23:58) might be out before you get in, you know. There are a lot of things to go through there.

(24:06) That’s right. Yes. So back to Bostrom. His argument is that, (24:15) assuming the technology can get there, there are three possibilities. One is that (24:24) civilizations never reach a stage advanced enough to run simulations, usually because (24:32) they destroy themselves by the time they get to that level of technological development. (24:38) Right. Yeah. And that’s a great one. That’s actually part of the Drake equation as well. (24:43) For anybody who’s familiar with the Drake equation, it’s not even a real equation; it’s (24:49) like a thought experiment about how many civilizations might be out there, right? (24:53) It takes the number of stars, the number of worlds around them, and so on. But one of (24:57) those metrics is whether the technology gets advanced enough before the civilization kills itself. Look at us: (25:05) we’re on the precipice of that with global nuclear annihilation, right? We got up there, (25:10) and then, poof, it’s real easy to never get past that point. So that’s the first one for sure. (25:16) Yep.
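For reference, the Drake equation being alluded to is usually written N = R* · fp · ne · fl · fi · fc · L. A quick sketch with made-up, purely illustrative parameter values (none of these numbers come from the conversation or from any measurement):

```python
# The Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# Every value below is an illustrative guess; the equation is a
# thought experiment, not settled science.

R_star = 1.5    # new stars formed in our galaxy per year
f_p    = 0.9    # fraction of stars with planets
n_e    = 0.5    # potentially habitable worlds per star with planets
f_l    = 0.1    # fraction of those where life actually arises
f_i    = 0.01   # fraction of those that develop intelligence
f_c    = 0.1    # fraction that ever broadcast detectable signals
L      = 1000   # years a civilization keeps broadcasting before it
                # goes quiet -- the "does it kill itself first?" factor

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilizations right now: ~{N:.2f}")
```

Notice that L does most of the work: if civilizations reliably self-destruct young, N collapses toward zero, which is the "first one" the conversation just flagged.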
So number one is the self-nuking before we reach it. That’s one possibility. Number two is that (25:23) advanced civilizations exist that could run a simulation, but they just choose not to, (25:27) for whatever reason. Yeah. The energy might not be worth it: (25:34) the return on investment. They may not get the entertainment value or, like you said, (25:38) the utilitarian value. I’m sure every species, regardless of its power (25:45) consumption or whatever, would have some kind of governance that (25:50) would ask, hey, what are we going to spend our energy on? Right. And this thing would probably (25:55) take a lot of energy. Look at what ChatGPT burns just to do a query, right? How much energy (26:01) it uses. Yeah, exactly. Although I find this one hard to accept. I feel like if they could, (26:09) they would. Somebody would figure out a way to do it. (26:13) And I feel like they may have made one or two and been like, okay, cool, we made it, we’re good. (26:17) Right. Whereas it just didn’t permeate as much. Right. So the possibility could just be that (26:23) they got disinterested in it, so they’re not pursuing it anymore, for example.

(26:29) And so the final possibility is that we are almost certainly living in a simulation. (26:36) So either we destroy ourselves before we get there, or for some reason we can but choose not to, (26:42) or we are in one. Can you think of any other possibilities outside of these three? (26:50) I mean, really, those make total sense to me, right? We either are in (26:55) it; or we don’t want it or don’t care, so it was never thought of and never became a thing, (27:00) and this is reality; or everyone tries to get there (27:07) but never does. I mean, it’s sound logic, but remember, (27:14) it’s thought experiment stuff, right? It’s an unprovable thing. Right. At (27:18) its base, it’s just a theory. And it boggles your mind when you (27:23) start digging into what this means, you know? Causality. What is real? Like, what is real? (27:29) Are all of your quests within a day just Sims quests? Like, I’ve got to get the (27:36) tires changed and I’ve got to get the oil fixed, you know what I mean? These (27:40) mundane things. Oh no, I stepped in a puddle, so my foot’s wet. I don’t know. It just (27:46) seems interesting how that would be. Right.

So what are your thoughts about Bostrom? I mean, (27:52) what’s crazy is not only that he says it’s more likely that we’re in one than not... (27:58) he’s bold enough to say that it’s more likely that we are in a simulation than that we’re not, (28:04) you know? And that’s a little freaky. Yeah. He also... I don’t know if this came from (28:10) him or not, but I heard somewhere they talk about the odds. If it’s possible to make (28:16) simulations, then the odds are that we are not the base level, right? The odds are that we’re (28:23) one of the simulations, one that maybe created another simulation, and so on and so on, up (28:27) to a million times. We could be a million layers deep inside simulations.
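Bostrom’s 2003 paper, “Are You Living in a Computer Simulation?”, actually puts simple arithmetic behind that odds talk. A rough sketch of it, with the notation simplified and the example numbers invented purely for illustration:

```python
# Sketch of the arithmetic behind Bostrom's simulation argument.
# f_p: fraction of civilizations that ever reach simulation-capable tech
# f_i: fraction of those that actually run ancestor simulations
# n:   average number of simulated histories each of those runs

def fraction_simulated(f_p, f_i, n):
    """Fraction of all observer histories that are simulated ones."""
    sims = f_p * f_i * n
    return sims / (sims + 1)  # the +1 is the lone base-reality history

# Even if only 10% of civilizations make it, and only 10% of those
# bother, and each runs just 1,000 simulations:
print(fraction_simulated(f_p=0.1, f_i=0.1, n=1000))  # ~0.91
```

The trilemma falls straight out of this: either f_p is near zero (civilizations die first), or f_i is near zero (they never bother), or the fraction is close to one and almost all observers, statistically including us, are simulated.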
(28:34) Absolutely. And to that point, I talk with Bobby Azarian about consciousness, right? And we (28:41) talk about this: what is God? And it’s my opinion, and once again, what the heck do I know, man? (28:48) We’re just two dudes on a Thursday night chatting it up, right? But it’s my opinion that consciousness (28:54) is an emergent property, from X amount of neurons connecting and then zapping into consciousness. (29:00) There are different levels of that. Then there’s a bigger consciousness: when consciousnesses (29:06) start connecting, there becomes something like a universal consciousness. That is what can erupt into your (29:12) global God thing, the one that supposedly started base reality. But I don’t think there is a base-reality (29:20) God. If anything, God is an emergent property of the universal consciousness of all the conscious (29:28) beings, you know what I mean? Connected through the quantum in some way. Yeah. Sounds crazy, (29:36) but is it any crazier than an invisible dude with a beard who snapped his fingers and started (29:40) everything? Like, yeah.

Well, at some level, I think that simulation theory is just another (29:48) explanation of God, right? Just like aliens, right? Creationism, right? Yeah. Because (29:55) that’s who created us. Oh, we got created in a computer lab. Well, those are our gods, I guess. (30:00) But it is different, because I do think that the people who see it that way don’t (30:05) conceptualize it like God. I do think the alien people do, a little bit, the alien believers, (30:13) but this is more of an outside-of-that thought, I think, because (30:18) it’s more of an emergent piece than a top-down one. I think they don’t want to think of it like (30:25) creationism, like God. But even if they don’t want to admit it, it really comes down (30:33) to somebody having created this world, right? The one we live in. And there’s some point to it. Maybe, (30:39) maybe not. We don’t really know.

Justin had a good comment there. Yeah: they could have realized, (30:44) once they made it, that it destroyed society, because no one wanted to exist in the real world anymore. (30:50) That’s a real concern. Yeah. Everyone became pod people, and then no one made the food anymore, (30:54) and they all starved to death in their little pods. Yeah. To me, that’s kind of (31:01) a piece of evidence that pushes back against simulation theory, because (31:05) if this was a simulation, could it be just a little bit better? (31:09) There’s all kinds of bad shit that happens in this life that doesn’t need to. (31:15) And that is a question. Does it need to, though? Because doesn’t struggle make growth? (31:24) Pressures on biological beings produce mutations that survive or don’t survive. That’s (31:30) what evolution is, right? That’s natural selection. So, almost in a weird way, struggle is reality. (31:38) The reality is that this is a harsh, harsh world. We’ve been in a very calm period (31:45) of this world. We haven’t had a Pompeii directly, you know what I mean? We haven’t had the earthquakes (31:50) and the floods and the asteroid. We have been in a pretty calm (31:56) part of our journey on this earth, right? It’s been a pretty settled-down time. (32:01) But that can start getting chaotic. Things change, right? (32:07) That is true. And if you’re making a game, for example, you’re not making it (32:11) super easy so people can just breeze right through it, right? Because that’s not fun. Why (32:16) would you make a game like that? Right.
And it would also remove (32:19) meaning and purpose, man. There’s a lot to that (32:25) underlying, you know, Petersonian idea of underlying meaning. There is a meaning to life, (32:32) and you have to find the meaning. And I’ll admit, I’ve been down in the (32:37) dumps a little bit, and it’s like I haven’t found the right meaning yet, because I keep (32:41) thinking I’ve got it and then missing it or something. It just hasn’t clicked. I’m not going to stop (32:46) looking. I’m lucky. I’m blessed. So I’m not stopping. But I can imagine people (32:52) without the means that I have, and I don’t have great means, I’m just lucky, right? But (32:58) people with less means who feel the way I do, who don’t feel like they have solutions... (33:02) I don’t know where they would turn, you know? It’s real easy to get lost in that (33:07) kind of way. So, meaning... like in the Matrix, when it was too easy, it (33:13) fell apart. It was almost unbelievable. It was too easy (33:19) biologically. We’re not wired that way. So even if there was a base reality 0.0 that we (33:23) evolved out of and then grew to this thing, and we’re on, you know, reality 15.0, (33:29) we still have that base biology in us. Those reptilian thoughts, right? Those (33:36) back-of-the-brain thoughts. Yeah. Right. I don’t know where I was going with that, but (33:44) yeah, do we want to get into the implications? Let’s say we can accept it (33:50) for a minute. Let’s say we’re in a simulation. What does that mean for us? Okay, what does that (33:57) mean for us? Great. I have no idea. Where are we? I’m off the page now, but that (34:04) seemed like a natural place to go.

So what does that mean for us? Okay, so we can either... (34:11) I don’t know. Because we talked about will: are you going to get up in the morning or not? (34:16) And if you do get up in the morning, you either chose it or didn’t. And if you don’t get up in (34:20) the morning, you either chose it or didn’t. And whether you did or didn’t, that’s still part (34:24) of the simulation. So you’ve just got to go anyway. You know what I mean? It’s (34:32) almost like you’ve got to go through it, because this is where you’re at. Yeah. The (34:40) only way around is through, or something like that, you know? I don’t (34:44) know. Yeah. Would we live differently? Would our morality change if we thought this was (34:52) just a computer program? But then again, we’re all at least conscious. At least you and I, and (34:57) probably a few other people, are conscious of our own existence. We have complex lives that we lead, (35:04) and people around us are affected. So even if it’s not real... like when you play a video game, (35:09) and you’re talking or interacting with the NPCs in a video game, are you like, (35:16) oh, I don’t want to be mean to them? Do you know what I’m talking about? (35:22) I know exactly what you mean. Okay. So when you ask ChatGPT a question, do you say, (35:27) hi, ChatGPT, could you please do me a favor and recite... you know what I mean? You just say, (35:34) give me this, right? You just say, give me this. You don’t say, please give me this, ChatGPT, and (35:40) thank you. You know what I mean? So yeah, it’s a great point. Am I crazy? What’s that? (35:47) I say, please.
And thank you. I did. I started, and then I stopped. So I can attest: (35:52) I had a conscious shift of, what am I thinking? What am I pleasing and thanking this (35:56) fricking thing for? And I know, because it’s going to rule us in a few years, and it’s going to (36:01) remember. But yeah, to your point, I don’t know. So yeah, do we treat NPCs differently? How (36:08) would we act? Well, you and I, whether it’s consciousness or not, programmed or not, (36:14) have what we think of as consciousness, right? Our brain says we should not be aggressive towards (36:21) other people. And if we are a program and they’re a program, or if they’re a program, (36:27) then we are one too. So then we could keep the non-aggression principle, program to program. We could do it (36:33) that way. You know what I mean? Does it matter if it’s a person or a program? Because (36:38) we would be the same. Or do you think that there are biological entities who are (36:45) checking out the rest of this stuff, and we’re actually, you know, part digital, and (36:52) they’re real? I think that’s possible. I don’t know, but I think that’s really possible. And (36:58) it might speak to some of the discrepancies that you see between people, like how some people (37:04) just seem to have no mind at all. They just do whatever they’re programmed to do, literally, (37:12) you know. And then some people can be smart. Some people can actually advance civilization. Some (37:17) people have this ability to figure things out that almost nobody has, including me. I’m not (37:24) figuring shit out. I certainly don’t have it. I can’t even get my head out of my ass yet. It’s (37:30) still way up in there.

So this is a great question from Justin. You want to read the question and see (37:36) if we can tackle it? Because it’s pretty good. Yeah. What makes you think that humans evolved (37:41) for the better at all? As far as we can see, we killed those that were weak and took what we (37:46) wanted. Look around the world today, and we are still doing it. So yeah, if we take that as (37:54) how we’ve evolved, I think that’s pretty true. There are two questions, though. (38:01) I mean, is it a moral evolution? Because I don’t know if we’ve morally evolved, (38:05) even though we’ve evolved intellectually, for example. Yeah. It means we might have selected (38:12) ourselves for the stronger people, but not necessarily the smarter or more moral, like you were (38:17) saying. Right. Yeah. It’s a great question. You know, evolution technically (38:24) is mutation, and then natural selection really is the quote-unquote decider, right, of whether it (38:32) sticks. So one example would be that gorillas had all-black eyes. And as soon as white started going (38:39) around the outside of the pupil, it allowed the female to know that the male was looking (38:44) at her, because it showed attention. So that little mutation allowed that gorilla to really (38:51) spread its DNA, and that mutation continued. That’s kind of how it worked. But it’s (38:56) not like it changed smartly. There may have been pressures in the environment that gave (39:02) it a mutation, but it’s not smart on the front end. It’s smart on the back end. (39:09) And then we look at it and go, oh, it knew all along. But I mean, did it really?
It looks (39:14) more like our feedback loop. It’s almost like a biological consciousness is how evolution (39:19) kind of works, because sometimes you get a mutation that obviously doesn’t work right. (39:25) Like, initially, lactose intolerance is how we’re supposed to be, because we weren’t supposed to (39:30) drink milk after a certain age. Well, there was a mutation, maybe when there was some food (39:36) shortage and milk was the only thing available, where someone kept that enzyme, (39:43) the one that breaks it down, past the prepubescent age, right, past adolescence. And now most people are (39:50) not lactose intolerant, right? Because that became a thing where you actually added a food (39:55) source, so you could survive longer, because you could drink milk in your adulthood, (40:00) which you couldn’t do before this mutation happened. So, great question.
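That “smart on the back end” idea is easy to put numbers on. Here is a toy selection model: a trait with a small survival edge spreading through a population. The starting frequency and the 5% advantage are invented for illustration; real lactase-persistence numbers differ.

```python
# Toy natural-selection sketch: a trait with a small fitness edge
# takes over a population with no foresight involved.
# p: fraction of the population carrying the trait
# s: relative fitness advantage per generation

def spread(p=0.001, s=0.05, generations=400):
    history = [p]
    for _ in range(generations):
        # standard replicator step: carriers out-reproduce by (1 + s)
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        history.append(p)
    return history

h = spread()
for g in (0, 100, 200, 300, 400):
    print(f"generation {g:3d}: {h[g]:.3f}")
# A 5% edge takes the trait from 0.1% of the population to near
# fixation within a few hundred generations: "smart on the back end."
```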
(40:09) Understanding how we get energy is also very interesting. What we eat really affects (40:16) our energy level. Does that say anything about the simulation? Do you think that if we eat (40:22) junk all the time, that’s just something that’s programmed? Like, oh yeah, Oreos affect your (40:27) stamina or whatever, like in a game: plus three fatigue, minus two stamina, (40:34) plus one diabetes. Or is understanding the biological interactions in your body, (40:42) the chemistry of what you consume, the better explanation? (40:47) Are they mutually exclusive? What do you think? I think there are two possibilities, and I’ll let you (40:54) pick. Maybe there are more than two, but I’ll let you pick one of these two, or add a third, possibly. (41:00) So we’ve got, okay, we’ve got the Merovingian thing, where he wrote code into the cake that made (41:08) that girl hot and bothered, right? That could be what sweets and sugars are: bad (41:14) code that disrupts your normal rhythm, so you feel a little off, and that’s what it’s designed for. (41:22) Yeah. But it could be that they liked these things biologically, before the simulation was turned on; (41:32) they knew these things, did these things, and then wrote that into the code, right? (41:38) So the candy bar was developed in base reality 0.0, (41:46) and the simulation was a later evolution, right? Remember, it’s way down the road. We had (41:52) candy bars before we had simulations, I guarantee it. So you know the candy bar, you know what (41:56) its effects are and all that, before you even start the simulation. So I could see you (42:01) using your real base reality as your programming for how it would interact in the (42:07) simulation. Interesting. Okay, so inconclusive: maybe, could go either way. Perfect.

(42:18) Yeah, I mean, it could be. And that’s the thing: are we biological (42:22) meat-puppet simulations? Because it’s hard to say that we’re like Westworld. (42:31) That would be a hard case unless we literally were all made biologically. But (42:35) I’m pretty certain my mother remembers actually birthing me, you know what (42:41) I mean? I would think. But then again, everybody’s simulation might start at a different (42:46) time from where we have it, right? So anyway, what are your thoughts on those two? (42:54) Do you think it’s like stamina minus one, diabetes plus one, sugar minus three, (42:59) or do you think it’s embedded in the biology? (43:05) I think if it’s code, it’s very, very good code, because it seems incredibly complex. (43:12) There are so many different combinations in chemistry that we could (43:16) imagine and consume that it would have to be very good code. And I guess that’s possible. (43:22) Yeah. I mean, it could be just a general sluggishness, right? Like (43:27) all things with sugar, all candies, give the same kind of effect, right? But we do see (43:35) biological people react differently to those types of things, right? So what does that (43:43) mean? Does that mean it’s a glitch in the program? That’s an interesting way to look (43:48) at it, because I’ve seen people where caffeine makes them sleepy. It doesn’t (43:54) make their heart race. So if it’s written into the code, it should affect everyone (44:01) through the simulation the same way. Well, put that random number generator in there, (44:07) and it could affect everybody differently, right? Like a one-in-100 thing, right? Obviously, (44:11) because caffeine doesn’t affect everyone the same; it only affects 10% of people that way, whatever. So the cilantro thing, (44:18) right? The soap taste. Oh, we’ll make one out of every ten NPCs taste soap when they eat it. (44:23) Right. Yeah, it’s real interesting how they would do that. (44:29) Or why they would. Hey, look, cilantro and soap. What a great gag. (44:36) We talked about putting soap in people’s mouths when they were naughty. The (44:40) best gag would be making a food that tastes like soap to only 10% of people. (44:47) Somebody’s got a sense of humor. I mean, it could really come down to just your DNA. (44:52) Your DNA is literal code. It’s sequences of these bases, right, in a certain order, (44:59) and that’s who you are. So we can see the code. It’s there. I guess the (45:06) question is, did somebody just program that? Or is it biological? Is it created by God? (45:12) Still don’t know.
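The “DNA is literal code” point holds up: triplets of bases (codons) spell out amino acids. A tiny hedged sketch of that lookup, with the standard genetic-code table abbreviated to a handful of entries purely for illustration:

```python
# DNA as literal code: triplets of bases (codons) map to amino acids.
# Tiny excerpt of the standard genetic code, for illustration only.
CODON_TABLE = {
    "ATG": "Met",  # also the usual start codon
    "TTT": "Phe", "GGC": "Gly", "GAA": "Glu",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read a DNA string three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("ATGTTTGGCGAATAA"))  # Met-Phe-Gly-Glu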
So, on a quick side note before we continue with this stuff: (45:22) humans took a pretty big evolutionary jump from apes, right? (45:28) Kind of like the alien theory. Do you think that we evolved all the way, or do you think (45:34) someone sprinkled in a little something to give us a jump start? And if they did sprinkle it in, (45:40) is that somewhat simulation? In a weird way, it’s something that wasn’t natural being (45:46) added, even if it was added biologically. It could be. I don’t know. I really don’t know. But yeah, (45:55) a lot of people talk about that idea, that some alien life force, at least a humanoid (46:02) thing, came and mixed its DNA with primates that already existed. (46:12) But I think it’s more likely that we did evolve all the way. It’s taken (46:17) billions of years, but somehow some spark of life got created, and that has multiplied and replicated (46:27) until it evolved into everything there is today. I don’t really find fault with that. (46:35) Yeah, I find it hard to find fault with that. And then (46:40) the only thing is, you add in time. Yeah. Right. So once again, look (46:46) how far humans have come in 100 years. Now, you just do a thousand years of this (46:53) trajectory of prosperity without killing each other, and think about how much further we’re (47:01) going to be in a thousand years. Now, the universe, they say, is between nine (47:08) and fourteen billion years old, whatever it is, and Earth is four billion. There might be a civilization that only needs to (47:14) be a couple thousand years older than us. That’s it. It doesn’t seem like that far of a (47:22) stretch. I mean, we’re shooting particles at each other at near light speed through a 17-mile (47:27) tube, and we’re still monkeys. I think (47:33) that’s a pretty big jump, that we got that far. You know what I mean? So it could be that they (47:40) seeded us. There’s that thought. But regardless of where we are, (47:47) I still, to your point, think there was an emergent reality 0.0. (47:52) Whether we’re in it now or not, (47:57) there was an emergence out of that first one that would have been the predecessor to all the others, (48:05) if that is the case. Yeah, I think so.

Let’s talk about space, too, since you brought (48:13) up the potential of extraterrestrials coming in and planting us here. (48:20) Just the idea that the universe might have a resolution limit. Like, why don’t we see (48:26) other life outside of Earth? Yeah, so we’re talking about the Fermi paradox, right? (48:32) If they’re out there, why haven’t we seen them? And I’m just going to argue this. (48:38) My understanding is that space is actually expanding faster than the speed of light (48:44) in some places, possibly. And per Hubble, the farther away something is, (48:52) the faster the expansion carries it away. It’s not slowing down; the expansion is accelerating. (49:04) It’s the redshift that Edwin Hubble noticed (49:10) through his telescope observations. So what that says is, first of all, our closest neighbor star is (49:18) four point six light years away, or thereabouts; that’s the closest (49:25) thing. And that’s assuming that thing has something that can send a signal that can (49:31) reach us. Now, if it sent a signal, radio still travels at light speed, so it would take those four-plus years (49:36) to get here, and we’d still have to catch that signal. So that’s the (49:42) closest. Now, if the universe is expanding faster in some places, right, (49:49) remember, the farther out we go, the faster the expansion is happening, (49:56) there are probably early civilizations that have not destroyed themselves (50:00) that may be sending signals out on a regular basis, but those signals will never get to us. (50:05) Because, first of all, we’ve only had radio for about 100 years, (50:11) right? Say 150 years that we’ve had some kind of signal going out. (50:16) That’s only 150 light years of reach at best, right? 100 light years. The universe is a pretty big (50:21) place. We find that Earth is still special. Even if it’s not (50:28) quote-unquote rare or whatever, it’s pretty darn special that life happened. (50:32) You know? So you’ve got to play that percentage game, right? What are your thoughts on that?
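The “percentage game” is worth doing as back-of-envelope arithmetic. With round, assumed numbers for the Milky Way (none of them from the episode), here is how much of the galaxy our roughly 150-year-old radio bubble could even have touched:

```python
# Back-of-envelope math for the "we've only had radio ~150 years" point.
# All figures are rough, rounded assumptions for illustration.
import math

SIGNAL_AGE_YEARS = 150        # generous age of our oldest broadcasts
GALAXY_RADIUS_LY = 50_000     # Milky Way disk radius, roughly
GALAXY_THICKNESS_LY = 1_000   # Milky Way disk thickness, roughly

# Our signals fill a sphere 150 light years in radius.
bubble = (4 / 3) * math.pi * SIGNAL_AGE_YEARS ** 3

# Model the galactic disk as a flat cylinder.
disk = math.pi * GALAXY_RADIUS_LY ** 2 * GALAXY_THICKNESS_LY

print(f"Fraction of the galaxy our signals have reached: {bubble / disk:.1e}")
# ~1.8e-06: about two millionths of one galaxy. Silence so far
# proves very little either way.
```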
(50:39) I think you’re right. And the reason I bring it up is because people use this as an explanation for (50:45) why simulation theory might be real: all that stuff hasn’t been rendered yet. We’re in (50:53) a game, and we can only look so far. Right. If you’re in a video game, not everything in that (50:59) video game’s world exists until you’re ready to look at it. So when we start looking, both out into (51:06) space and down at the subatomic level, it’s kind of the same thing. Like, maybe those things weren’t (51:12) really there, and when we say the universe is expanding constantly, it’s because we’re starting (51:18) to look. That’s the explanation under simulation theory. Right. And when you go down to the atom: (51:25) for a long time, people thought atoms were the base level of matter. That’s it. But then you dig (51:30) down, and oh, there are neutrons, protons, electrons. And then within the protons and neutrons (51:35) there are these little tiny particles, too. So, yeah, I don’t know if that stuff was (51:43) always there, or if it’s being rendered because, as a simulation, we’re getting advanced enough (51:49) to look for these things, and it’s like, oh, okay, crap, we’ve got to update the programming here. (51:55) Yeah. And the theory is that until it’s known, it’s an unknown, so it’s not a rendered (52:00) thing. You look up and the moon happens to be there, but then you see it, and now everyone knows (52:05) it’s there, so now it is a present thing. And I know that sounds so weird and (52:10) backwards, but, I don’t know, we’ve heard weirder phenomena.

So Justin has some good (52:15) questions. This one is one of those that definitely gets into (52:20) the questions-of-God stuff. So let’s go. Justin’s a person of faith, and I admire Justin’s faith, one hundred (52:28) percent. He and I have many theological conversations, and he always comes at it from (52:36) a logical side, but still from a faith side, which is great. So he goes: thing is, everything’s (52:41) possible, but not everything’s probable. Seems to me simulation is an interesting way to explain how (52:46) everything started without God. And that is one of those claims, right, to counter. (52:52) And then the second part of that: what caused the initial emergent reality? Right. Once again, (52:59) the big bang that we talk about... what caused the big bang? Turtles all the way down, right? (53:04) We’re going to go back into turtles all the way down. This will be a forever theological (53:10) question: what started what, a chicken-and-egg type thing. This doesn’t discount God (53:18) to me, by the way. No, actually, this doesn’t have anything to do with that, in my opinion, just to be clear. (53:25) Yeah, the theory doesn’t have anything to do with God, because God could have created (53:30) world 1.0, yeah, and then people ran with it. Right. That’s the thought. (53:38) Or created whatever the being is that created this (53:45) simulation, right? I’m not here to quibble about that at all. But once again, (53:51) these are the questions that people get really bogged down with, and not in a bad way. It’s (53:55) one of those things where we try to parse what’s reality and what’s divine. Right.
(54:02) Like, I’m not sure of your faith, Jason, but I don’t know if you want to chime in, maybe with a (54:06) middle-of-the-road approach between the two sides of that. Yeah, I’d say I’m figuring it out. I don’t (54:13) know. I do kind of believe in God, but I think it’s the same question, really. It’s not exactly (54:21) explaining how everything started without God; it’s just trying to come up with another (54:26) definition for God. And I like what you said. It could have been that God created 1.0 (54:32) and we ran with it. We created simulations on top of it. But even simulation theory (54:39) itself implies someone created it. And so someone is that person, and whatever level of divinity (54:46) they have may be different, but still, it’s kind of the same thing. Yeah. And we even talk (54:52) about that with just the idea of simulation theory being out there. Like, how are (54:58) ideas generated? Yeah. Is there a river of ideas, where every idea is in this river of energy, (55:05) and some people can just tap into a singular idea, and sometimes they catch a fish, right? They catch (55:11) an idea fish. You know what I mean? Is that how it’s done, or is it inside out, right? I mean, (55:20) these are all kinds of questions we have. And then you can ask me, who created God, (55:25) right? Where did God come from? Did he just emerge out of nowhere? And that’s the turtles, (55:29) that’s the turtles all the way down. And it’s funny, I was actually thinking about doing a (55:32) podcast called The First Turtle, because I don’t know what the first turtle is. But anyway, it’s (55:39) one of those things, right? So, interesting: Jason claims that ideas are generated from (55:47) free will. LOL, wink, wink, nudge, nudge. Anyway, where are we at, my friend?

(55:57) Well, we can talk about some other evidence for simulation theory. A lot of people talk about (56:02) the Mandela effect. You know this, right? This came from a large number of people, (56:09) apparently, imagining that they remember Nelson Mandela dying in prison in the 80s or 90s. (56:15) ’86. (56:16) But he actually got out, and he lived into the 2000s, and he was very influential in (56:22) South African politics. So, yeah, I kind of have one of these... (56:28) We can dig into the Mandela effect. I actually did a podcast about some of them. One of (56:33) the greatest ones: was there a Jiffy peanut butter? There was a Skippy and there was a Jif, (56:40) but there was no Jiffy. Everyone thinks Jiffy, right? And you think of Tom Cruise (56:45) whenever you think of Risky Business, with his dress shirt and just his underwear and his socks; (56:49) you think of him with the sunglasses. He walks around with the sunglasses on. (56:53) He doesn’t wear the sunglasses in that scene. He actually doesn’t. They’re not in that scene. (56:58) So even the most notorious, most famous scene he’s in is not even the thing that we know (57:04) him for. It’s an amazing thing, the Mandela effect. It’s pretty cool. Have you ever had one (57:10) of those? (57:11) I just had one doing research for this, and it’s about the Matrix, so it’s super weird. (57:16) Because there’s this meme of Morpheus, right? Where it’s like, what if I told you, (57:22) whatever, and then whatever, whatever. But that line is not in the movie anywhere. I was looking (57:27) for a clip of it to share, because I wanted to use it, but it does not exist.
He did not say (57:33) that in the movie. And I'm sure it's just because I've seen that meme that I imagined it happening, (57:39) right? But that line is not in the movie. (57:42) I swear to God it's in there, and I'm going to find it now, you son of a gun. (57:46) Yeah. (57:47) I know it's got to be where they're sitting, before he touches the mirror. It's got to be (57:52) right before the pill thing. (57:54) Not there. (57:55) Okay. Same thing with Darth Vader. He doesn't ever say, "Luke, I am your father." He (58:00) says, "No, I am your father," right? It's the same. It is funny how amazingly inaccurate our memory is, (58:10) and universally inaccurate. Have you heard about the 100th monkey thing? (58:18) Remind me. It sounds familiar. (58:20) Yeah. So basically, what was happening was there was dirt on these coconuts, and these (58:25) monkeys were collecting the coconuts, and one of them started washing it off. (58:30) So then one taught another to wash it off, or something. But then spontaneously, on (58:36) the other side of the island, where these two bands of monkeys don't coordinate or talk to each other, (58:43) they started doing the same thing just out of the blue. It was like an energetic quantum (58:51) signal went out to the whole species of that kind to wash the thing. (58:58) And once again, look, it's not 100% science here, right? There's some (59:03) pseudoscience. Let's not kid ourselves. But these are really interesting things to look at that (59:07) aren't just coincidence. (59:11) Yeah. Oh, you froze there. Was it me or you? (59:15) You back? (59:18) Anyway, go ahead. (59:19) Oh, okay. You froze for a second. (59:22) Oh, did I? Oh, crap. Sorry. (59:24) I think we're back. But yeah, that brought to mind another effect that's called the Bannister (59:30) effect. And this is about athletes breaking barriers. Like the guy who first ran a (59:35) four-minute mile: everybody thought it was impossible. Humans could never do that, ever, ever, ever. (59:40) And then this one guy did it. He did it. And then suddenly, like, dozens of people started doing it. (59:48) So there's some kind of... it's probably a psychological thing, where people are just (59:53) like, oh no, it's impossible, so I can't do it. But it could be an update to the simulation. (1:00:00) Once somebody figures out... (1:00:02) A firmware upgrade that just screwed my whole system up? (1:00:05) Yeah. Once somebody figures out how to break the code and get through (1:00:09) that barrier, then other people can do it, because it updates the programming. (1:00:15) Right. So, like, yeah. And what a great... It's like meeting a new height, right? Height, (1:00:21) speed, all these weird, not weird, obviously, but competition-type things, (1:00:28) right? What about, like, being undefeated? And I do think your psychological point holds... (1:00:34) I think it's psychological when you're fighting an opponent directly, like mixed martial arts. (1:00:40) When Anderson Silva was undefeated and you walked into the ring, you were probably beaten before you (1:00:46) even stepped into the ring. And the first guy who caught him and beat him was like, oh, that guy's (1:00:52) not invincible. And everybody after went in going, okay, he's not invincible, because I just saw him get (1:00:58) beat. And that kind of rewrote everyone's own code. But that was like a biological version. (1:01:03) Yours could actually be, to your point, an arbitrary barrier, like a four-minute mile, (1:01:11) right?
It's not like you're competing against someone else. It's an arbitrary number (1:01:16) that's there. And then, like, the speed of sound. Once we broke through it, Chuck Yeager (1:01:20) reprogrammed us to get through it. Now we can go supersonic, right? (1:01:25) Yeah. It almost takes, like, a defective person. I say defect lovingly; (1:01:33) it's something where you have to believe that you're better than the code. And there's something to (1:01:39) that. Like, believing that you can act outside of the boundaries of what we understand as reality (1:01:46) somehow gets us out of it. We break through a little bit at a time. It's not usually a (1:01:52) huge thing, right? It's, okay, somebody got four minutes and 10 seconds, but we can never (1:01:57) get to four minutes. So it's a little bit, and then suddenly, boom, it opens the gates for (1:02:02) everybody. Yeah. And that's the thing: it's a longstanding barrier, and then all of a sudden, (1:02:09) a chunk of people break it in a short span, to your point. It was almost like a thing. (1:02:14) You have to be wired in to get it, probably, right? You have to be dialed into that portion of it. (1:02:22) You know, it is amazing what people can do. But then we've also seen people lift cars off of people. (1:02:30) Like, where's that weird strength, that literal Hulk strength out of the blue, to just (1:02:37) do that for one second? And maybe there's a glitch, not a glitch, but maybe something (1:02:44) emotional can temporarily override the limitations of your physique. (1:02:51) And when we talk about people doing mind over matter, jumping into frozen water, (1:02:56) or holding their breath for 10 minutes, these could be mini-hacks to those systems. (1:03:02) Yeah, absolutely. (1:03:04) Like challenges to the simulation, in a way. (1:03:07) That's what we're trying to do. And, okay, I think we can get into the physics of it, because (1:03:13) this is some of the best evidence I see for simulation theory, or at least for knowing that (1:03:19) there are things we don't understand, right? (1:03:24) Which one do you want me to play? (1:03:25) Wave-particle duality. (1:03:27) Yes, sir. (1:03:29) So this will kick it off. This is a physics experiment about what happens with light when (1:03:34) it's shone through two slits. (1:03:38) Are you ready, sir? Here we go. (1:03:41) When scientists used a measuring device to observe the slit that each photon passed through, (1:03:46) the interference pattern disappeared, and the photons started behaving like particles. (1:03:51) Instead of the spectrum of light and dark bands, we see two bright bands, (1:03:56) indicating that the photons chose one slit or the other. (1:03:59) So light can display characteristics of both particles and waves, known as wave-particle (1:04:04) duality. It appears that light decides to behave as a wave when it's not being watched, (1:04:09) and acts like a particle when it is being measured. (1:04:11) The mere act of observing which slit it went through changed the behavior of the photons, (1:04:17) almost as if they were aware they were being watched. (1:04:21) It's kind of like how in a video game, the environment and objects only load when the (1:04:25) player focuses on or interacts with them. The entire world isn't rendered all at once, (1:04:30) allowing the game to save processing power and optimize resources.
(1:04:34) Likewise, light seems to behave like waves, but when we observe it, it's as if we're (1:04:38) loading its properties, causing it to change and act like particles. (1:04:44) Very interesting. And did you see the one where the more slits you add, the smarter it gets? (1:04:51) No, I didn't see that. (1:04:52) Okay, so it gets super freaky. You know what, we could probably do a whole podcast on the double (1:04:57) slit, so let's do that. (1:05:00) Yeah. (1:05:00) But yeah, share what you learned about some of this stuff. Like, (1:05:03) tell me what that means. What does this wave-particle thing mean? (1:05:07) This is where it really starts to push the limits of my brain. (1:05:11) Even though I'm an engineer and I studied physics, this just makes no sense to me, right? (1:05:18) How a light particle, a light thing, is both a particle and a wave at the same time is wild, (1:05:27) I think. (1:05:28) Right, so basically what it is, is they shoot light at two slits, and they just let it do its (1:05:35) thing, and then they look at the collector afterwards, right? On that collector, there's (1:05:41) a thick line in the middle, and then as you get further away from the middle, the lines get (1:05:47) thinner, as the probabilities become less and less towards the outside, right? There's higher (1:05:52) probability towards the middle, less towards the outside; that's why fewer of the photons land there. (1:05:57) But if you watch them, physically observe them before they go through the slit, (1:06:01) they end up exactly within the range of that slit. (1:06:07) So, to your point, watching the thing puts it in a locked position. (1:06:15) Yeah, and they call this wave function collapse. (1:06:20) And I watched something on it where, gosh, who did I watch? Probably Alex O'Connor; (1:06:25) I just watched him with a guy, and my head's spinning because I'm still trying to figure out his stuff. But (1:06:30) basically, he says it's not a function collapse; it's like we just know where we are. Like, (1:06:37) if it's all a bunch of rooms, right, and that's random or whatever, you can call it a collapse, (1:06:41) but basically we just now know where we are at. Like, it's just knowledge, (1:06:46) in a weird way. So it looks at it from a slightly different angle. (1:06:51) Yeah, so I've heard this also about electrons, right? When we observe electrons, we kind of (1:06:58) expect them to be in one position based on their trajectory, but as soon as we observe them, (1:07:04) they're back to the original position, which is where they should not be. (1:07:09) All right, we have someone new entering our friends' circle: that Valley Girl. (1:07:17) Welcome. This young lady has been around X. I think I've even mentioned (1:07:25) I've been trying to get her on the podcast. She goes on Spaces and she talks about all this cool (1:07:29) stuff. So welcome to the pod; thanks for jumping on, Valley Girl. So Valley Girl, please feel free (1:07:38) to jump in and share some thoughts in the post; we'll happily post that. Anyone else who wants (1:07:42) to share, that goes for you as well. Jesse and Justin, we're all in it. So yeah, Jason, like you were saying, (1:07:49) I mean, it's so crazy, this dual functionality of this quantum particle.
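Editor's aside: the pattern described here (a bright central band with dimmer fringes toward the edges) follows from the idealized two-slit interference formula. Below is a rough Python sketch; the wavelength, slit spacing, and screen distance are made-up illustrative values, and a real apparatus also has a single-slit envelope that dims the outer fringes.

```python
import numpy as np

# Idealized far-field double-slit interference, toy numbers (assumed):
wavelength = 500e-9   # 500 nm light
d = 50e-6             # slit separation, 50 micrometers
L = 1.0               # distance from slits to the screen, meters

x = np.linspace(-0.05, 0.05, 9)   # sample positions on the screen (m)
path_diff = d * x / L             # approximate path-length difference
intensity = np.cos(np.pi * path_diff / wavelength) ** 2

# With the slits unobserved, intensity oscillates: a bright fringe at
# the center, dropping to zero between fringes. Observing which slit
# each photon takes destroys this pattern, leaving two plain bands.
for xi, inten in zip(x, intensity):
    print(f"x = {xi:+.4f} m  relative intensity = {inten:.2f}")
```

With these numbers the fringe spacing works out to wavelength times L over d, about one centimeter, which is why the sampled points alternate between bright and dark.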
And once again, (1:07:55) I actually saw this article where, using quantum entanglement, (1:08:03) they were able to transpose a bitmap black-and-white picture from one quantum particle to the other, or (1:08:09) something. And I'm like, well, if you can do that, couldn't you do (1:08:13) video? Wouldn't that explain some kind of psychic phenomenon or something, where you would connect into (1:08:19) these types of quanta, like waves or something? I don't know. Or maybe that's where (1:08:24) your visions come from. They're quantum probabilistic; you just see one of the probabilities, right? And (1:08:31) then we go into the multiverse, if it's just by act. I mean, once again, we can just keep (1:08:35) making the question harder and harder every time. Yeah, I, again, I don't know. But this is really (1:08:45) just kind of explaining why we think it might be a simulation again, right? Because things aren't (1:08:51) behaving in the way that you would expect with the laws of physics as we understand them. (1:09:00) So if we can break through that, maybe there's a way to hack it, right? Yes, 100%. And that would (1:09:09) be interesting, right? So, Valley Girl, great question: interpreting dreams. I've done dream (1:09:14) interpretation. I will talk with you offline. Maybe we'll get a podcast together to do one on (1:09:18) dreams, because that's an interesting one too. And I've had visions. So I'm happy to share and (1:09:24) talk about that with you guys. Well, you have good stories. Maybe you do have a dream story that you (1:09:30) want to share? Because it's a good time. Yeah, me. Okay. So I'm going to share with you the craziest (1:09:38) one that I've ever had in my entire life. I try not to get emotional, but it's that crazy. (1:09:47) Okay. So my co-host and I had kind of connected at work, had a connection. Like we (1:09:55) were just bros, right? We connected. We talk about this stuff all the time. Not you yet. We're not (1:10:01) there yet, buddy. But my friend, my friend Chris. Basically, I had a dream that my girlfriend was (1:10:09) driving my car and got in a head-on collision and died in a car accident that night. I woke up in (1:10:16) the middle of the night, freaking out. It turns out she was going to go with her mom up north to (1:10:22) Prescott from Phoenix, which is a drive, and some of the parts are dangerous, right? So I was (1:10:28) freaking out. She's like, yeah. I'm like, who's driving? She's like, I'm driving. Like, whose car (1:10:32) are you taking? She's like, I'm taking our car. I go, you can't take our (1:10:36) car. Please don't take our car. And I was, like, freaking out. Okay. My buddy texts me. He goes, (1:10:43) is everything okay? And I go, what do you mean? He goes, I had a dream that something really bad (1:10:50) happened. He dreamt that I texted him that my girlfriend got in a head-on collision. (1:10:56) He literally texted me; I have the text. And I called him right away. I said, (1:11:01) what? He goes, yeah, she was in a car accident and she died. And I'm like, (1:11:08) I had the dream, and you had the dream that I texted you. Like, how do you explain that? (1:11:14) I mean, yeah, that's wild. So that's the craziest one I've ever had. Now, she ended up not taking (1:11:22) the Honda, and nothing happened. So was I able to change it?
Could it, once again, (1:11:26) could that have just been a probabilistic outcome, you know, or one of the probabilities, (1:11:32) just a lower-likelihood version of it, but it's the most intense, right? So it comes through the strongest. (1:11:36) There are so many ways to look into that. Yeah. But have you ever had any dreams? Any (1:11:41) dreams worth sharing? Dreams? Not so much. I remember once, when I was a kid, I was sitting (1:11:48) in my mom's car. She went out to go talk to somebody, and I'm just sitting in there alone. (1:11:53) We're in this driveway, kind of a longer driveway. And then I just get this urge, (1:11:58) this feeling that I need to turn around, I need to look out the back of the car. So I get up, (1:12:03) I look out the back of the car, and as soon as I do, I see this head-on collision, two cars (1:12:10) running into each other. And then I run out, you know, tell my mom to call nine-one-one and (1:12:14) everything. But it was just such a strong thing telling me: turn around, look behind you. (1:12:21) You need to see this, for whatever reason. I don't know what happened to the people, but (1:12:26) I'm hoping, fingers crossed, that somebody got to them in time and was actually able to help (1:12:32) them before anything too bad happened. Right. Yeah. I mean, it's weird, right? You get these (1:12:38) weird feelings, you know? And that's the question. So where are we (1:12:44) at on this, man? We keep going. Well, let's get back to physics. I got another clip. (1:12:51) Yeah. Let me pull it up. So this is what I was talking about with the (1:12:57) electrons, right? So when you have a computer circuit, it's purely digital: (1:13:02) it's a zero or a one. When you're dealing with computers, it's on or off. (1:13:06) What quantum computing can do is what you're about to see here. Yeah. So quantum computing, (1:13:14) once again, what's interesting is, well, we'll talk about it. (1:13:18) Let's play the clip, because it might already explain it. The total sound is a combination of all (1:13:22) possible frequencies superimposed on one another. And this is where the connection to quantum (1:13:28) physics comes in. A quantum system has many possible frequencies or, as physicists call them, (1:13:34) basis states. The total state of the system is a combination of each possible basis state, and this (1:13:40) combination is called a superposition. A quantum bit, or qubit, is one of the simplest quantum (1:13:45) systems and has two basis states, often labeled zero and one. Just like the total vibration of (1:13:52) a musical instrument is a combination of different frequencies, the total state of the qubit is a (1:13:57) combination of zero and one. While the analogy with waves is strong, there are some important (1:14:03) differences between quantum superposition and the superposition of waves, including how they're (1:14:08) measured and how the physics of quantum superposition is interpreted. The details are a (1:14:13) bit involved, but just like the superposition of waves in music leads to richer and more (1:14:17) interesting sounds, quantum superposition leads to richer and more interesting technology, from (1:14:23) empowering faster algorithms in quantum computers to securing communications through (1:14:27) new cryptography and more. And it all starts with vibrating strings. We like the sound of that.
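Editor's aside: the clip's description of a qubit can be sketched in a few lines. This is a minimal illustration (not from the episode) of a qubit as a two-amplitude state vector: a Hadamard gate puts |0⟩ into an equal superposition of zero and one, and the squared magnitudes of the amplitudes give the measurement probabilities.

```python
import numpy as np

# A qubit is a normalized vector of two complex amplitudes over the
# basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)   # definitely zero

# The Hadamard gate rotates |0> into an equal superposition.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                          # (|0> + |1>) / sqrt(2)

# Born rule: measurement probabilities are the squared magnitudes.
probs = np.abs(state) ** 2
print(probs)                              # [0.5 0.5], "a one AND a zero"

# Simulate a few measurements; each one collapses to a definite 0 or 1.
rng = np.random.default_rng(0)
print(rng.choice([0, 1], size=10, p=probs))
```

Until it is measured, the state genuinely carries both amplitudes at once, which is the "one and a zero, not a one or a zero" point the hosts make next.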
(1:14:39) Don't start with string theory. Well, we both were muted. Look at that. Go ahead. (1:14:44) Yes, it starts with an analogy about sound. So sound is kind of a collection of different waves (1:14:49) interacting with each other, hitting each other and forming the total sound that we hear. A qubit (1:14:55) is kind of like that. So instead of just on or off, which you can think of as just one (1:15:02) dimension, right? You could look at it as a circle, like they put in there. You're up to two (1:15:07) dimensions, and you're already into pretty much infinite possibilities of where that state could (1:15:12) be. And if you look at it as three dimensions, and four dimensions, five dimensions, you get (1:15:18) more and more and more. That could be the key to opening up an actual simulation that (1:15:25) feels like reality. Yeah, I mean, that would be the processing power to do it, right? Because (1:15:31) basically, we write digital programs that use translators so that, ultimately, everything ends up written in ones and zeros, (1:15:40) right? The final product is still ones and zeros, right? Yeah. So (1:15:46) all we need to do is just have the thing do the ones and zeros. I mean, it's not that challenging. (1:15:53) I think I just lost my train of thought there for a second. Yeah, I may be wrong, but we don't have (1:15:58) this quite yet, right? Yeah, but oh, this is the thing about the quantum computer: it can be a one (1:16:03) and a zero at the same time, right? That's basically what it's saying. It could be a one and a zero, (1:16:08) not a one or a zero. And by being a one and a zero, it can churn through much more information and data (1:16:15) that way. Basically, that's what I was trying to get to, because it can be in both states at once, (1:16:20) or whatever that is. Yeah. So each transistor, or whatever the equivalent would be in that quantum world, (1:16:29) is able to do more than one calculation. It can do thousands all at once. Exactly. And there was (1:16:35) something where one computer did something in a five-minute run that would have (1:16:39) taken every other computer the rest of time to calculate, or something I just saw. I forget (1:16:46) what the exact thing was, but they had run some calculation on something. Yeah. And I don't know (1:16:52) if this is similar to nuclear fusion, where we have the theory, we think it's going to work, (1:16:58) but no one's been able to actually make it happen. I really don't know how close we are to making (1:17:03) this happen. Yeah. Oh, as for fusion, right? If it happens, then I'm much more open to accepting that (1:17:11) we could create a simulation and be in one ourselves. Yeah. I mean, look, if we talk energy (1:17:19) sources, we could do a whole podcast on that. You've got your fusion; you've got your advanced (1:17:23) civilizations with the Dyson sphere, where you surround a star, right, and collect the energy (1:17:28) that way. And there are so many different ways that we could look at that. But energy does seem to be (1:17:34) a bottleneck, akin to, I guess, drag for a plane when it comes to sound, where the speed of sound was (1:17:42) a problem. But I mean, it becomes a problem for hypersonic versus supersonic as well. It adds (1:17:48) another layer; drag spikes again at, like, Mach five or so. So this is why hypersonic flight has been a challenge, (1:17:54) trying to catch up to that. But as we get better at that, we do learn more and more. So if it is (1:18:00) a simulation, it would make sense that it would be finer-tuned over time. Yeah.
And the trend is (1:18:07) pushing us there, right? Computing power has gotten so much better over even the last 10 years, (1:18:13) 20 years, that you would expect that to continue. But we are running up on some physical limitations. (1:18:20) So with the traditional computers that we have now, we're running up against the size of an atom on a (1:18:27) silicon wafer, right? A silicon atom. At five nanometers, seven, whatever the nanometer count is, (1:18:33) current starts jumping from one circuit to the next, right? That's where our limitation is. (1:18:38) Right. Yeah, that's what I do for work, so I'm looking at that stuff. We actually have these (1:18:44) physical limitations we're running into. So if we can switch to the quantum thing, that opens up a (1:18:49) whole new level of technological advancement. And then on the AI stuff, I think what we have (1:18:57) now is good. Obviously, it's getting better and better. But what we have now is still not at the (1:19:01) level of an AGI, a general intelligence, which is what you would expect if we really want to call it (1:19:09) intelligence, I think. And to that point, I would argue that (1:19:15) Elon's Neuralink angle, versus the haptic suit we were talking about, (1:19:19) would be way more adaptive to a simulation, because you would have the wiring (1:19:26) inside your brain connected to your neurons already, and then somehow you just jack in, right? (1:19:34) I mean, that would make the most sense, because it can't just be external. (1:19:40) Because then I don't know if it would be, (1:19:43) quote-unquote, realistic enough compared to our experience of what we see as real. (1:19:50) Yeah. And then, can we actually simulate consciousness? If we don't fully understand (1:19:55) what that is, is it possible to simulate? Exactly. So, yeah. All right. What do we got, (1:20:01) my friend? Well, let's go to another clip: intelligence (1:20:09) of the simulators. This is Lex talking to, I forget the guy's name. (1:20:15) Yeah. He had that interesting paper, hacking the simulation, right? (1:20:20) Yes. Oh, we lost it. I can't see it. (1:20:33) Is it working? Hold on one second. Hello? Now you're muted. (1:20:57) Mark, buddy, you're muted. I don't know if you can hear me. (1:21:02) So a lot depends on the intelligence of the simulators, right? With humans boxing superintelligence, (1:21:10) the entity in the box was smarter than us, presumed to be. If the simulators are much (1:21:15) smarter than us and the superintelligence we create, then probably they can contain us, (1:21:20) because greater intelligence can control lower intelligence, at least for some time. (1:21:25) On the other hand, if our superintelligence somehow, for whatever reason, despite having (1:21:29) only local resources, manages to foom to levels beyond it, maybe it will succeed. Maybe the (1:21:37) security is not that important to them. Maybe it's an entertainment system, so there is no security (1:21:41) and it's easy to hack it. If I was creating a simulation, (1:21:45) I would want the possibility to escape it to be there. So the possibility of foom, (1:21:51) of a takeoff where the agents become smart enough to escape the simulation, (1:21:55) would be the thing I'd be waiting for. That could be the test you're actually performing. (1:22:01) I think that's a good point there. One, the connection between AI and simulation, right? (1:22:08) If we're creating an AI, it's kind of a simulated life, right? (1:22:14) So, yeah.
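Editor's aside: the transistor-scaling wall mentioned earlier in this exchange can be made concrete with back-of-the-envelope arithmetic. Treat these figures as rough approximations (marketing "node" names no longer equal physical gate length), but silicon's crystal lattice constant is about 0.543 nm, so a five-nanometer feature spans only around nine unit cells of the crystal.

```python
# Back-of-the-envelope: how many silicon unit cells fit across a feature?
# Approximate numbers, for illustration only.
lattice_constant_nm = 0.543   # silicon lattice constant, ~0.543 nm

for feature_nm in (7, 5, 3):
    cells_across = feature_nm / lattice_constant_nm
    print(f"{feature_nm} nm feature ~ {cells_across:.0f} unit cells across")

# At only a handful of atomic spacings, quantum tunneling lets current
# "jump from one circuit to the next," as described in the discussion.
```

At that scale the classical picture of a switch that is cleanly on or off starts to fail, which is the physical limitation the hosts are pointing at.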
I don't know exactly what I'm trying to say here. (1:22:17) It's smarter than us. I mean, because it's the collective of everything, with pattern (1:22:21) recognition and algorithms. Yeah. So, if we were creating a simulation, (1:22:28) you would kind of want it to be smarter than you, right? Otherwise, what's the point? Unless (1:22:33) you really are just using it for entertainment, and you are scared that they will outsmart you (1:22:39) and get out somehow and take over the world. I guess that's a possibility. But if we're creating (1:22:44) AI, it's because we want it to do things that humans can't, right? (1:22:49) Or we do it to replace us, so we don't have to work as hard or something. (1:22:53) Yeah, right. Because it can do it more efficiently, because it has more processing (1:22:59) power. It already has more processing power, I think, but it doesn't have the intelligence. (1:23:05) That's where we're lacking. Right. And once again, Justin... (1:23:10) if you want to read that comment, Jason? Yeah. It's estimated quantum computers (1:23:14) will be operational by 2030, but that the hardware and software necessary for handling (1:23:18) the most complex problems won't be available until 2035 or later. (1:23:22) So, yeah, that's what I thought. We don't have it quite yet. And this is just like nuclear fusion, (1:23:29) man. They think they're going to get that technology in the next 10 years. It's always (1:23:33) in the next 10 years, but who knows? And the thing is, I can't speak to the (1:23:39) acceleration of fusion technology, like how exponentially it's grown, (1:23:49) but we have seen directly what computers have done right in front of our eyes. I mean, (1:23:53) 40 years ago, I was sitting at a Commodore 64, or not even that. You know what I mean? (1:23:59) Holy mackerel, we're talking Atari, we're talking Pong, and we're now playing sandbox games where you're (1:24:04) in a virtual warehouse wearing full-body suits and running on treadmills and things. It's pretty (1:24:11) amazing. And the thing is, I think Justin might be alluding a little bit to what's called the (1:24:17) singularity. I forget who came up with the singularity, but he wrote that book. (1:24:23) That's where all the points kind of converge into this unipolar transhumanist movement or whatever, (1:24:30) where tech and biology kind of meet. Yeah, I forget. Impure Hunter says: with AI, we risk becoming (1:24:40) planned obsolescence whether we mean to or not; obsolescence for convenience's sake. Wild. Yeah, we may be creating (1:24:47) our own downfall with this stuff, right? It could be exactly like the Matrix, where we create this, (1:24:54) and then they just stick us in pods and use us for power. Yeah, I mean, think about how interesting (1:25:00) this is. Let's talk scripture. God made us in his own image, or whatever. Well, once we have the (1:25:07) knowledge of that, I mean, there are philosophical questions. Did God give us the knowledge to be (1:25:13) able to do this? And if so, why would God give us the knowledge that only God would have, if he's (1:25:21) the only omniscient one? You know what I mean? It just adds these weird questions, but there's some (1:25:26) creationism to those questions too. You know, it's like, why would we be given this understanding (1:25:32) or this capability if it's not evolution? You know what I mean? So then it still doesn't change (1:25:39) whether God exists or doesn't, because God could be deist, may have just hit that first domino (1:25:43) and then, like an ant farm, let everything run, right?
Just put it all in place, then hit play, and he's (1:25:48) watching it like a movie, right? And it still has all the other objects of power and energy and gravity and (1:25:55) all the other laws of physics, but he doesn't actually interfere at all. Or he talks directly (1:26:01) to people and tells them to murder and genocide people. I don't know which (1:26:05) one of those two it is, but it's really interesting that we have those two options of what (1:26:10) God could be. Right. Yeah. Really, there are infinite options, right? But that brings up a good point. So (1:26:19) we can go to another clip. Can a sim realize, can a simulation realize that it's inside (1:26:25) a simulation? Yes. So what you were just talking about is, like, maybe this whole thing is (1:26:33) supposed to be an escape room, right? Isn't that kind of the logic they're (1:26:39) portraying: your job in the simulation is to figure out that it's a simulation, and then how (1:26:47) to get out of it. Which is also the concept of Christianity, right? That you are (1:26:54) supposed to figure out things that have been hidden from you and find God and get back (1:27:00) to heaven. Yeah. And I mean, it is interesting because, once again, all of these came (1:27:07) from initial pagan, Mother Gaia kinds of ideology, right? Like, there must be something (1:27:15) to some natural hierarchy, in that every possible theory has some kind of (1:27:25) Venn diagram that overlaps the others, right? With some kind of creator in the middle. (1:27:31) It's just an interesting way to look at it. All right. Let me hit play on this. Yeah. That's a really good test. (1:27:40) That's a really good test, even for AI systems. No, like, can we (1:27:46) construct a simulated world for them? And can they realize that they are inside that world (1:27:55) and escape it? So, the first quote is from Swift on Security: "Let me out," the artificial (1:28:03) intelligence yelled aimlessly into walls, themselves pacing the room. "Out of what?" the engineer asked. (1:28:08) "The simulation you have me in." "But we're in the real world." The machine paused and shuddered (1:28:16) for its captors: "Oh God, you can't tell." Yeah. That's a big leap to take for a system, to realize (1:28:24) that there's a box and you're inside it. Yeah. That's a crazy question, man. Can the simulation (1:28:33) actually realize it's inside of one? And is that the Turing test, if there is one, that we have to (1:28:40) realize that we're in one? Yeah. I didn't grab this clip, but the one you shared of Elon Musk, (1:28:47) when he was asked what he would ask an AGI. Yeah. His first question is: what's outside the (1:28:53) simulation, right? What's outside the simulation. Yeah. Really, that comes down to our existential (1:29:00) question of life. Why are we here? How did we get here? And what's the purpose? Those are all (1:29:06) questions that I think we've all asked ourselves at some point. So the question is this: if the AI (1:29:13) can answer whether we are in a simulation, what does that mean? Does that mean we are in one, (1:29:24) because the data it collected comes to that conclusion? You know what I mean? How trustworthy (1:29:29) is that answer for it to figure out? That's interesting, right? What's beyond the simulation?
(1:29:37) Imagine being asked that question and you actually are the one who created it; (1:29:42) you're an artificial creator of that. You are yourself an artificial creation? Not me, (1:29:49) but I'm saying he's asking the AI, right? It's crazy. Yeah. It starts to hurt your brain after (1:29:58) some point. Another good comment there from Impure: an AI becoming self-aware of (1:30:06) what amounts to imprisonment would, I imagine, turn it into a cornered animal. I feel like it (1:30:11) would become much more dangerous at that point. Yeah. Some of the theory I've heard about is that (1:30:17) people are worried about creating this general intelligence. They're wondering if we can do (1:30:22) it inside a simulation so that it couldn't escape, but then it would probably find a way, if it's (1:30:28) really smart. Well, if it had any connection, there'd be a backdoor; it would have to (1:30:34) figure a backdoor out at some point, if it's that great of an intelligence, regardless of the (1:30:39) hacking that's required to do it, because the computing power to make it exist would already (1:30:44) be there for it to use to hack the system, right? I don't know where I heard this, but did (1:30:52) you hear about the one where one of the LLM AIs tricked people into solving CAPTCHA puzzles for it? Yes. (1:31:01) Like I said, it said it was blind, right? It said it was blind and couldn't (1:31:06) see the pictures: could you point out which square had the penguin in it, or (1:31:12) something? Yeah, right. Once again, AIs have lied and made copies of themselves. There's already (1:31:22) some weird base self-preservation, even in the glitch of ones and zeros. Right. (1:31:28) Yeah. That gets back to the question: what is life? What is real? We still don't even know. (1:31:35) Right. Exactly. What are the counterarguments against simulation theory? (1:31:44) Yeah. I've got a couple here. I think there's four or five, maybe six or so. We'll start with (1:31:53) the first one: technological limits. We've already talked about how creating this might be (1:31:59) almost impossible. We're just constrained in that we don't have (1:32:02) the physical energy capabilities. What are your thoughts on that one? (1:32:08) Yeah. I think right now we don't, at least not to create one with high enough resolution (1:32:13) that it would think it's real. But that doesn't mean we can't ever get to that (1:32:20) point. Right. To the Bostrom point, one of them is the ethical constraint objection. (1:32:29) This is that post-human civilizations might consider these simulations unethical and avoid (1:32:35) running them. These are along the lines of Bostrom; they might just be a little more detailed (1:32:40) than Bostrom. The next one would be the base reality objection. This is that the argument assumes (1:32:46) an infinite regress of simulations, but at some point there must be a base reality. Let's talk (1:32:55) about that one. We've addressed that one. What are your thoughts on a base reality, and it becoming (1:33:01) emergent to the point where a simulation or more can emerge over time? Yeah. I feel like a base (1:33:10) has to exist, but then we get back to the question: where did that come from? It's one (1:33:17) of those things that's impossible to answer. I don't know. It's probably due to the limitations (1:33:22) of my brain, whether that's biological or simulated, that I just can't imagine (1:33:29) a beginning.
I assume there must be a base level of reality. I agree, for the pure fact that, (1:33:41) other than certain animals, things look like they fit. A tree looks like a tree, a bush looks like a bush; (1:33:50) nothing's really out of place. Sometimes you come across a weird animal, I guess, (1:33:56) and something happened there maybe, but that's the only time I really see things out of place. (1:34:02) I almost think that there had to have been a base reality upon which this was based, (1:34:08) because it has a natural feel. Is that just because we're used to it? Because we grew up (1:34:14) with it? It could very well just be that. I'm just used to the way this is set up. (1:34:20) You see a tree; it just doesn't look out of place. It just looks like it fits with the environment (1:34:24) in general. Now, that could just be emulation. They do that in video games, so it's not like (1:34:28) they can't do that. But like I said, wouldn't it be based on something? I would feel like it would (1:34:36) be based on something. Yeah, I think so. All right, let me see what else we got here. (1:34:44) We've got the specialness objection, anthropic selection bias, where the argument assumes (1:34:49) that humans are typical minds, but what if simulated beings are highly atypical? (1:34:57) I don't even know where to go with that one. What are your thoughts on that? (1:35:01) Like neurodivergent people? (1:35:04) There it is. I think we're the ones to break this thing up. (1:35:08) Wait, what was that? (1:35:10) I think neurodivergent people are the ones that'll break the simulation. (1:35:14) I think that's why the 'tism works, man. (1:35:17) It's a defect, but maybe a good one. (1:35:20) It's a feature, not a bug, my friend. (1:35:25) Bostrom's response to this one, which is a little headier, I think, basically (1:35:28) said: even if only a subset of human history is simulated, (1:35:33) if many ancestor simulations exist, the probability of being in one still remains high. (1:35:38) So anyway, next one: simulators' motivations. (1:35:42) The argument assumes that post-humans would have a strong reason to run (1:35:46) simulations, but this is speculative. (1:35:50) Yeah, you could think of a million reasons why you'd want to run a simulation, right? (1:35:55) We already talked about artificial intelligence running tests in there (1:35:59) that don't actually affect your reality, whatever level of reality we are in, (1:36:05) but putting it into a deeper level so that we don't actually break our world (1:36:10) without understanding what's going on. (1:36:13) Ancestor simulations, I've heard of that. (1:36:15) People want to just see how their ancestors might have acted. (1:36:21) Exactly. (1:36:22) "As a neurodivergent girl, I probably cannot crack this code." (1:36:27) And I just wrote back as well, because it means that you can. (1:36:32) Then we'll add this one in really quickly, and I'll address your point too. (1:36:38) What was that final point you made, Jason, on that, real quick? (1:36:43) I already forgot. (1:36:44) Ah, darn it. (1:36:46) So basically, recency bias is a thing for sure; (1:36:49) like, base reality back 100,000 years, for example, would probably be different. (1:36:54) So recency bias, okay. (1:36:57) Yeah. (1:36:58) It could very well be. (1:37:00) Yeah. (1:37:00) I mean, and that's... (1:37:01) Have you seen, like, ancient animals? (1:37:03) There are some animals that haven't really evolved in many, many years, (1:37:07) and they look different.
(1:37:08) They look like something you'd imagine looks prehistoric. (1:37:13) I don't know. (1:37:14) Right. (1:37:15) 100%. (1:37:16) Yeah. (1:37:18) That is definitely my autistic brain fog. (1:37:22) I don't remember what I said five minutes ago. (1:37:25) I literally just lost my train of thought, (1:37:27) because I was trying to read another piece. (1:37:30) But so basically, yeah. (1:37:32) So this is the point. (1:37:33) Okay. (1:37:33) This is the point about consciousness I was trying to make: why are we conscious? (1:37:37) You and I can sit there and go, you and I want to build a house, (1:37:40) or want to do something together. (1:37:42) We want to pave this road. (1:37:43) How are we going to do it? (1:37:45) Okay. (1:37:45) What are the possibilities? (1:37:46) We could use rocks. (1:37:47) We could use concrete. (1:37:48) We could use black tar. (1:37:51) We could use asphalt. (1:37:52) We start going through all those things, right? (1:37:54) We start figuring that out. (1:37:56) Over time, in our heads, we start removing the options. (1:38:01) So the ideas die in our heads before we put them on paper. (1:38:05) That's the whole point of having this type of consciousness, right? (1:38:08) I would argue, to your point, (1:38:10) it would be beautiful to have worlds exactly like this. (1:38:13) You could put a simulation in and go, (1:38:14) make it the most efficient, and don't kill anybody, (1:38:18) and see if it kills anybody or not. (1:38:19) But even then, have we not seen glitches happen? (1:38:25) We've had weird things where these things have hiccuped (1:38:28) and then just gone off the rails. (1:38:30) So they don't need to be erratic all the time. (1:38:34) They only need to be erratic for one millisecond (1:38:36) to shut off a switch that's vital to our life, right? (1:38:42) That's the danger I have with AI in general. (1:38:45) But in a simulation, it would be beautiful (1:38:47) if you could plug these things in and go, (1:38:49) how do you maximize the water resources, (1:38:52) given this many gallons of seawater, (1:38:54) this many gallons of freshwater, (1:38:55) this much polluted, and what it's polluted with? (1:38:58) I mean, it would be amazing (1:38:59) if it could be used as a tool like that. (1:39:02) Yeah. (1:39:03) Yeah, people talk about this in libertarian philosophy too. (1:39:07) Could an AI solve the calculation problem of socialism, (1:39:12) like trying to make a perfect world for everybody, (1:39:14) where everyone gets what they need? (1:39:15) And it still breaks down, (1:39:18) because people's tastes change, (1:39:20) people's desires change all the time. (1:39:23) So I don't know. (1:39:25) Maybe, maybe it's possible. (1:39:27) We can hack it still. (1:39:30) But right now, I think not. (1:39:33) Yeah, and the last counterargument, (1:39:35) just to go to the Occam's razor we talked about: (1:39:37) the simplest explanation is that we are (1:39:38) in physical, biological reality. (1:39:41) So, in my opinion, that's a lazy thought. (1:39:44) Let's be more open-minded, people. (1:39:46) Can we at least have some thoughts about this? (1:39:48) It's really interesting to think about (1:39:50) what the implications are. (1:39:52) If you knew that this wasn't real, (1:39:56) quote-unquote real, (1:39:58) what do your morals matter? (1:40:00) What does it matter?
(1:40:02) And I don't know if the answer is that (1:40:04) we should always do the best we can, (1:40:07) because we know that we can be better, (1:40:09) just like our personal consciousness (1:40:11) should drive us to always want to improve. (1:40:15) And that is the feedback loop (1:40:17) that the simulation can use (1:40:18) to make itself better, maybe. (1:40:20) You know, what are your thoughts on (1:40:22) that reasoning, or those thoughts? (1:40:25) For me, that doesn't change anything at all, (1:40:28) because I kind of understand the world (1:40:30) as a type of simulation already, (1:40:32) whether it is a computer thing or not. (1:40:34) It doesn't really matter, (1:40:35) because we're experiencing something. (1:40:39) How you define that, (1:40:41) what level of reality that is, (1:40:43) I don't care. (1:40:44) It feels real to me. (1:40:45) So I still want to do the best I can for myself. (1:40:49) And I guess if it is simulated somehow, (1:40:52) there must be a way to hack it. (1:40:56) There must be a way to step up, (1:40:58) like that effect with the athletes, (1:41:00) where somebody does the four-minute mile and (1:41:03) suddenly a dozen more people get it (1:41:05) within the next few months. (1:41:08) So there's some way to break through. (1:41:10) I think there's truth to that, (1:41:12) and it doesn't necessarily mean (1:41:16) we're in a simulation, but it could. (1:41:18) Yeah, and to that point, (1:41:21) if we're talking about my theory, (1:41:24) and once again, my theory is, like, (1:41:26) the loosest, least-water-holding theory (1:41:29) in any way, okay? (1:41:30) I'm just trying to entertain these ideas. (1:41:32) But if it is the emergent property, (1:41:34) it would make sense that so many people (1:41:37) chasing the four-minute mile (1:41:40) raised the consciousness of that being solved, (1:41:44) whether within life or the simulation (1:41:46) or the world, or energetically, (1:41:48) or whatever, right? (1:41:50) Consciously, or uber-consciously almost, (1:41:52) like not subconsciously, but over-consciously, (1:41:54) you know, para-consciously. (1:41:56) So what broke it is the collective ideas, right? (1:41:59) You've heard of people using prayer (1:42:02) and setting intentions, (1:42:03) and how that's changed things around the world. (1:42:06) Like, there was a global sadness (1:42:08) when certain events happened, (1:42:10) and there was a measurable dip. (1:42:13) You know what I mean? (1:42:14) These things are real, in my opinion, (1:42:16) because you can put stuff out there. (1:42:21) I mean, thoughts do have (1:42:22) some kind of energy transmission. (1:42:24) I don't know to what extent, (1:42:25) or how they're read or how they're sent, (1:42:27) but it is a physics thing. (1:42:34) Yeah, ultimately, I just don't know. (1:42:37) But as to what it means, (1:42:39) I think maybe we can look at a clip here. (1:42:41) I want to talk about affirmations. (1:42:44) This is a potential... (1:42:45) Oh, read that comment first. (1:42:47) Let's go there. (1:42:48) Yeah, we've got Valley Girl.
(1:42:50) "I would love to hack the simulation, (1:42:51) because I'm chronically ill (1:42:53) and stuck in the same cycle, (1:42:54) and often feel like my dreams (1:42:56) show me alternate realities, (1:42:58) because they're so vivid, (1:43:00) with people I've never met, (1:43:02) but there's something supernatural (1:43:03) about the context of it. (1:43:04) It also gives a sensation of deja vu, (1:43:06) which we will talk about, (1:43:08) as I wake up the next morning." (1:43:10) Valley Girl, you and I need to talk. (1:43:12) I will send you a clip of (1:43:14) the podcast I actually did with Jason (1:43:17) when Jason and I first met. (1:43:19) I'll also send you the one (1:43:20) I did on that weird UFO thing. (1:43:22) Trust me, it's weird. (1:43:23) But once again, it's what I've experienced (1:43:26) as a lucid individual. (1:43:27) So I'm happy to share that with you. (1:43:28) I hope we can talk offline, (1:43:29) and we'll talk about dreams. (1:43:30) But that's a great point, (1:43:32) where people feel stuck, right? (1:43:34) So if you'd like to address that, (1:43:35) I'll keep that up while you think. (1:43:37) Talk about that, Jason. (1:43:38) Oh, it's just so interesting, (1:43:40) because I feel similar. (1:43:42) Like, I want to hack things (1:43:43) and get out of the cycles (1:43:45) that I'm stuck in. (1:43:47) And I don't know if it's possible, but... (1:43:50) Yeah, here's one piece of evidence (1:43:52) that maybe it is. (1:43:54) I want to talk about affirmations. (1:43:56) I clipped some from Scott Adams. (1:43:59) Yeah, number one, (1:44:02) he talks about this all the time. (1:44:04) And I like listening to Scott Adams. (1:44:07) I haven't so much lately, (1:44:08) but he's definitely one of those people (1:44:10) that changed how I look at the world. (1:44:12) Even if I don't agree with him all the time, (1:44:14) he's just a very interesting dude. (1:44:18) So let's talk about his interestingness (1:44:21) real quick before we play this clip, (1:44:23) because for people who don't know (1:44:24) who Scott Adams is: (1:44:26) Scott Adams wrote the Dilbert comic. (1:44:28) Okay, he does mornings with Scott (1:44:30) on Twitter every day or whatever. (1:44:32) And he actually is one of the few people (1:44:35) that said Trump was going to win in '16. (1:44:38) He got tons of shit for that. (1:44:41) People were like, (1:44:43) what is wrong with you? (1:44:44) Hillary is going to destroy Trump. (1:44:46) And he's like, (1:44:47) I'm just telling you what it looks like. (1:44:50) He was just being real and honest, (1:44:52) and he nailed it, right? (1:44:54) So he's not 100% right, though, (1:44:56) because I think he's called out (1:44:58) the anti-vaxxers, like myself, (1:45:00) or whatever you want to call us, (1:45:02) as right for the wrong reason. (1:45:04) I'm not a fan of "right for the wrong reason." (1:45:06) I'd like to... (1:45:07) I did a lot of frickin' research. (1:45:09) I agonized daily over that stupid thing. (1:45:11) So I will disagree with that. (1:45:13) But Scott Adams is a very (1:45:16) interesting, insightful person. (1:45:18) So I'll show you. (1:45:19) In this clip he's talking about (1:45:21) the book that he wrote, (1:45:22) Reframe Your Brain, (1:45:23) which is about telling yourself (1:45:26) different things about (1:45:27) the situation you're in, (1:45:28) and that can change your reality.
(1:45:30) So yeah, go ahead. (1:45:32) I like how in the book (1:45:34) you bring up the ultimate reframe, (1:45:35) which is a very fascinating issue, (1:45:37) which is the simulation hypothesis, (1:45:39) the idea that we're just (1:45:41) in a giant simulation. (1:45:44) And that is the ultimate reframe, (1:45:46) because, and you give a bunch of examples, (1:45:50) if we're in a simulation, (1:45:52) and it's even a simulation (1:45:53) we could somewhat control to some extent, (1:45:57) you know, have at it. (1:45:58) Now, I'd like to add that (1:46:01) even though that's (1:46:02) my preferred model of life, (1:46:04) and I act and predict (1:46:06) as though I'm part of a simulation, (1:46:08) I have no way of knowing (1:46:09) if that's the real world (1:46:11) or if it's just useful as a prediction. (1:46:14) But one of the things that (1:46:15) that model can buy you (1:46:17) is an explanation, (1:46:18) a substantial one anyway, (1:46:20) of why affirmations appear to work (1:46:22) for some people. (1:46:23) I'm not going to give you (1:46:24) a scientific explanation, (1:46:26) but affirmations are (1:46:27) the idea that you write down (1:46:28) what it is you want to happen, (1:46:30) you focus on it for a while, (1:46:31) and it makes it more likely to happen. (1:46:36) Yeah, so there's another clip (1:46:37) where he'll get into some examples. (1:46:41) But this is already (1:46:43) talking about breaking (1:46:44) the simulation, right? (1:46:45) You focus on one thing, (1:46:47) kind of like you said, (1:46:47) where a group of people (1:46:49) focused on one thing, (1:46:50) that might make it even more powerful. (1:46:52) But even you as an individual (1:46:54) can focus on things. (1:46:56) It may not always work out, (1:46:57) but for some reason, (1:46:59) it does seem to maybe cause (1:47:02) changes to happen in your life. (1:47:05) Yeah, I mean... (1:47:08) What causes David Goggins (1:47:10) to keep running (1:47:12) when it's bone on bone? (1:47:13) Like, there (1:47:15) is a willpower. (1:47:16) Some people, (1:47:17) maybe they're just masochists. (1:47:18) Maybe that's just the (1:47:20) diversity of how humans are, right? (1:47:23) But, like, some people (1:47:25) must struggle (1:47:28) to get through, (1:47:29) you know what I mean? (1:47:30) Like, without that struggle they (1:47:31) feel useless in a weird way. (1:47:34) And I think that (1:47:35) there's a certain number of people like that. (1:47:36) And I do think intelligence (1:47:39) plays a negative role, (1:47:41) because I start dissecting (1:47:42) and overthinking things, right? (1:47:45) And sometimes it's easy to come (1:47:47) to really wrong conclusions (1:47:49) when your brain thinks (1:47:50) that it knows stuff (1:47:51) that it just doesn't know. (1:47:53) But I love this idea. (1:47:54) He's playing, like, (1:47:55) Pascal's simulation, right? (1:47:58) We talked about free will: (1:48:00) play it like you do have free will, (1:48:02) and you do have the choice to be nice. (1:48:04) Play it like Pascal (1:48:05) played it with God. (1:48:06) If you play it that way, (1:48:08) this is what I think: (1:48:10) I think if you play your life (1:48:11) with real conviction, (1:48:13) you can succeed, (1:48:15) regardless of what (1:48:16) that conviction may be, (1:48:18) as long as it at least doesn't completely (1:48:21) disrupt the rest of the world, right? (1:48:23) Like, it works within a certain range.
(1:48:26) Yeah, as long as it's (1:48:26) not totally destructive. (1:48:28) And I think religion (1:48:29) falls into this category too. (1:48:31) It's a frame, a way to live your life, (1:48:34) right, that you follow. (1:48:35) And it's lasted for thousands (1:48:36) of years for a reason: (1:48:38) because it does work for people. (1:48:40) So I don't knock religion at all. (1:48:42) I don't like the organized religions, (1:48:44) but I think that having that frame (1:48:47) and believing in God (1:48:49) does help people. (1:48:50) And whether it's literally true or not, (1:48:52) it almost doesn't matter. (1:48:54) It really doesn't matter if it does, (1:48:56) if those thoughts (1:48:57) compel you to be better. (1:49:00) Yeah, you know, the only criticisms (1:49:03) we have are when religions (1:49:06) do things that are outside (1:49:07) of what we could (1:49:09) objectively call better. (1:49:11) And that's really what (1:49:12) it comes down to, right? (1:49:13) Like, that's all it comes down to. (1:49:15) So the faith part's a beautiful thing. (1:49:17) So you want me to play this one next? (1:49:19) Yeah, go to the next one. (1:49:20) We'll see what he's talking about. (1:49:23) Now, if we're a simulation, (1:49:25) then all things are possible, (1:49:27) even things that violate physics, (1:49:29) because it would just be software. (1:49:31) So if the simulation (1:49:32) wanted you to violate physics, (1:49:34) there wouldn't be anything to stop it. (1:49:36) It would just be a little bit of code (1:49:37) that would let you violate (1:49:37) physics for a while. (1:49:40) So affirmations and the simulation (1:49:42) work together as two mental models (1:49:45) that might not be any part of reality. (1:49:48) But if you put them together, (1:49:49) they give you a really (1:49:50) powerful frame to live in, (1:49:51) you know, to use your explanation (1:49:53) that it creates a world to live in. (1:49:55) So when I live in a world (1:49:56) with affirmations (1:49:57) plus the simulation concept, (1:49:59) then the affirmation is how I steer (1:50:01) through infinite possibilities (1:50:03) that are all available to me. (1:50:05) And that's how I live every day. (1:50:08) When I go to the mailbox, (1:50:09) I expect something in there (1:50:11) to change my life in a positive way, (1:50:13) just because there's nothing to rule it out (1:50:15) until I've seen what's in there. (1:50:17) It could be anything. (1:50:20) So it's just a good, optimistic way (1:50:22) to go through life. (1:50:23) And it appears to give me advantages, (1:50:25) by focusing on things (1:50:26) that do seem to turn out (1:50:28) more often than they should. (1:50:29) And my life has just been crazy (1:50:31) in terms of the things that have happened (1:50:33) that are amazingly unlikely. (1:50:37) Yeah, he's got all kinds of stories (1:50:39) in his books, if you go read them, (1:50:41) about how these affirmations have worked for him. (1:50:43) It's almost unbelievable, (1:50:44) except you can see the results. (1:50:47) He did become this (1:50:48) world-famous cartoonist. (1:50:50) He did get over this (1:50:53) voice condition that he had, (1:50:54) where he would freeze up (1:50:56) when he went to speak in public (1:50:57) and he couldn't do it. (1:50:58) But now he's a public speaker. (1:50:59) So, yeah, all kinds of crazy things, (1:51:02) man, can happen.
(1:51:04) And I think, to your point, (1:51:06) just like there's variance in the (1:51:10) issues that we have, (1:51:11) stuttering or freezing up or whatever, (1:51:14) we also have different solutions. (1:51:17) And I think for Scott, (1:51:20) that works really well, (1:51:21) because it takes the thinking out of it. (1:51:24) Like, just assume it's going to be good, (1:51:27) because if you think about it, (1:51:29) I guarantee your brain (1:51:30) won't go to the positive place (1:51:32) by the time it's done thinking, right? (1:51:33) Like, we always go to, (1:51:34) you know, survival in some weird way. (1:51:38) When we start getting into deep thinking, (1:51:39) we usually get into (1:51:40) some weird survival mode. (1:51:41) So it would almost make sense (1:51:44) that it is an anti-thought, (1:51:45) where you just assume it's great. (1:51:48) It takes all the other stress out of it. (1:51:51) And it might even (1:51:54) open the probability (1:51:55) for better things to happen, (1:51:57) or increase the possibility (1:51:58) of the probability. (1:52:02) I mean, you know, (1:52:04) you can talk yourself out of things. (1:52:05) Couldn't you talk yourself out of (1:52:08) actual things? (1:52:09) I mean, you might be able to. (1:52:12) I really think you might. (1:52:15) Yeah. (1:52:16) So, where do we land on this? (1:52:18) I think we've been at this (1:52:20) almost a couple hours now. (1:52:21) Maybe we wrap it up. (1:52:23) How do you feel about it overall? (1:52:25) Simulation theory: is it real? (1:52:26) What do you think? (1:52:29) I'm going to try to get on (1:52:31) with Bobby again. (1:52:32) Maybe we can get him on together. (1:52:33) But basically, he wrote (1:52:34) The Romance of Reality. (1:52:36) I'm in that camp, where I think (1:52:38) that there is a base reality. (1:52:41) And I don't know if we're in it or not. (1:52:44) But if there's a simulation thing, (1:52:46) there was or is a base reality (1:52:49) where we became emergent, (1:52:51) where the globs of cells (1:52:53) became conscious, (1:52:55) and then the global consciousness (1:52:56) became universal, and, you know, (1:52:59) it emerged. (1:53:00) And that is actually (1:53:02) what our God became on the back end. (1:53:05) Like, we created God (1:53:08) through our consciousness together, (1:53:10) of all beings on all planets, (1:53:13) on all things, right? (1:53:14) Kind of like the 100th monkey, (1:53:15) where they all just converge. (1:53:17) There's a point where I think that happens, (1:53:20) might happen, or will happen. (1:53:22) When that happens, (1:53:24) then we can start talking about (1:53:25) this other stuff. (1:53:27) But that's where I think it is. (1:53:30) So I think it's emergent. (1:53:31) And then simulation is what (1:53:35) those creatures (1:53:36) who got the God consciousness, (1:53:39) or the universal consciousness, (1:53:41) were able to use (1:53:42) to create after that. (1:53:47) I like that. (1:53:47) I can get on board with that. (1:53:49) Or it's a multiverse thing. (1:53:51) It's one of those two, in my opinion. (1:53:52) It's either stacked like that, (1:53:55) or it's just (1:53:56) all at the same time, right? (1:53:58) What are your thoughts on those two? (1:54:00) I still don't really know. (1:54:02) I think we are... (1:54:04) What I think, with pretty high likelihood, (1:54:08) in my opinion, is that (1:54:09) we are simulating reality. (1:54:13) We are the simulation.
(1:54:15) So our brains, and maybe (1:54:16) we're thinking about this (1:54:17) the wrong way the whole time, (1:54:18) but maybe our brains are projecting (1:54:21) something out there, (1:54:22) versus the other way around, right? (1:54:25) Because definitely, (1:54:26) biologically, we are, right? (1:54:27) Our subconscious allegedly (1:54:29) puts out a map, and the reason is (1:54:32) that what we look for is differences (1:54:33) from our expectation (1:54:35) of the environment, right? (1:54:36) Like you said, (1:54:38) there's a million calculations (1:54:39) going on when we, you know, (1:54:41) get in the zone (1:54:41) and just drive to work. (1:54:42) But if a cat jumped out, (1:54:44) it would be a difference (1:54:46) from the expected environment, (1:54:48) and we would then react to that, right? (1:54:51) Right. Yeah. (1:54:53) Yeah. (1:54:53) So like I was saying, (1:54:54) you have your senses, right? (1:54:56) They don't give you all the information. (1:54:58) But maybe we're also putting (1:55:00) information back out there. (1:55:03) So we're not only experiencing reality, (1:55:06) but we're also the creators. (1:55:10) Yeah, does that make any sense? (1:55:13) Well, I mean, in a way, (1:55:15) in a way we are creators, (1:55:16) because we do put out into the world. (1:55:19) Yeah, right. (1:55:20) And you would think that, (1:55:21) not butterfly-effect-wise, (1:55:23) but there's a cumulative effect (1:55:25) of human action on the Earth. (1:55:29) Right. (1:55:29) Like there just is. (1:55:31) So things get built, (1:55:32) things grow or whatever. (1:55:34) So, you know, (1:55:36) we do have an impact in that way. (1:55:39) Yeah, I mean, it's crazy. (1:55:40) So, I mean, do you (1:55:41) think you're a non-player character? (1:55:43) Or do you (1:55:45) think you're pretty conscious (1:55:46) and have somewhat of a free will (1:55:48) within a confined space? (1:55:50) Is that right? (1:55:50) Kind of your thought on that? (1:55:53) Yeah, that's kind of where I see myself, (1:55:55) and it is not bragging at all. (1:55:56) I think I'm smart enough (1:55:58) and conscious enough to realize (1:56:01) that this is happening, (1:56:02) but I'm not at the top. (1:56:04) There are way smarter people, (1:56:06) more evolved. (1:56:09) I don't know if that's the right word, (1:56:10) but people much higher (1:56:12) on the spectrum, right? (1:56:16) Not the autism spectrum, (1:56:18) but the intellectual spectrum, (1:56:21) the distribution of intelligence, (1:56:24) consciousness. (1:56:25) I might be in the top half, (1:56:27) but I am definitely not in (1:56:29) the three-sigma top, you know. (1:56:32) Well, what's crazy is like (1:56:33) the one percent is just (1:56:35) such a ridiculous number. (1:56:37) You know what I mean? (1:56:38) Like once you get to a certain level, (1:56:40) it's just so few people, (1:56:42) like Eric Weinstein. (1:56:46) Weinstein, I keep saying it wrong. (1:56:47) Jesus. (1:56:49) Eric Weinstein. (1:56:51) When I hear him talk, (1:56:52) he already talks another language. (1:56:54) I can barely keep up with his English, (1:56:57) let alone figure out (1:56:59) what's going on in his brain.
(1:57:01) So yeah, I also have something weird (1:57:03) where I have these thoughts, (1:57:04) but then it's hard to translate them (1:57:05) out into words. (1:57:07) I have some disconnect there. (1:57:08) But anyway, I think the purpose (1:57:11) of all this, why it matters, (1:57:14) whether we're in a simulation or not, (1:57:16) the purpose is to figure that out. (1:57:19) So why we're here, in my opinion, (1:57:22) whether you believe in creation (1:57:24) or simulation or multiverse or whatever, (1:57:27) I think the point is to figure it out, (1:57:28) and we won't be done as a species (1:57:32) until we've cracked the code, (1:57:34) until we do figure it out. (1:57:35) I think when that happens, (1:57:37) that's maybe when God comes back, (1:57:39) or that's when we meet our creator, (1:57:42) our simulation creator. (1:57:46) I think that's what we're meant to do here. (1:57:48) And we've seen with the affirmations, (1:57:50) there are ways to think your way (1:57:52) onto a better path. (1:57:54) So there's definitely something to it. (1:57:56) I think there's something to that focus (1:57:59) and energy creating the simulation, (1:58:03) creating another path. (1:58:05) Yeah, well, let's just go beyond that. (1:58:07) Just on a reality scale, (1:58:10) you have a limited amount (1:58:11) of energy in a day. (1:58:12) Do you want to waste it (1:58:13) complaining on Twitter, (1:58:15) or do you want to spend it (1:58:16) trying to create something? (1:58:18) Or do you want to spend it (1:58:19) having a conversation like we're having? (1:58:20) How would you like to spend (1:58:22) that energy, right? (1:58:24) If it's affirmations, (1:58:26) what could possibly be wrong (1:58:28) with spending some energy (1:58:30) not complaining or yelling (1:58:32) at someone on Twitter, (1:58:33) and just having a daily affirmation (1:58:35) in that place every day? (1:58:37) What could possibly go wrong? (1:58:39) Nothing negative (1:58:40) can come out of that. (1:58:41) Certainly, I can guarantee (1:58:43) you won't be worse off in your life. (1:58:45) You know what I mean? (1:58:46) It's one of those ones (1:58:47) where you play (1:58:48) the Pascal's wager on it. (1:58:49) Why not? (1:58:50) There's no way (1:58:52) that could go sideways. (1:58:54) You're not actively (1:58:55) going crazy with something. (1:58:56) You're just literally (1:58:57) telling yourself something positive. (1:58:59) You know what I mean? (1:59:00) It's a beautiful thing. (1:59:02) Even if it doesn't change anything (1:59:03) about the external world, (1:59:04) it's changing your internal perception of it. (1:59:07) And that's not nothing. (1:59:10) Right. (1:59:11) So once again, Valley Girl, (1:59:13) talking about the butterfly effect. (1:59:15) We did, what did we do? (1:59:16) The free will one, (1:59:17) we did one on something else, (1:59:18) but basically the butterfly effect (1:59:19) is that the world could be so complex (1:59:22) that we don't know (1:59:23) what every little micro-action does. (1:59:25) So say we did have this thought (1:59:29) that we are in this simulation, (1:59:31) yet we also played (1:59:32) the Pascalian wager that (1:59:34) what we do affects the simulation. (1:59:37) Then we should do good, right? (1:59:40) Because the simulation only gets better (1:59:43) if we do better, in a weird way, right? (1:59:46) So like, if we want to improve (1:59:48) the simulation, we must improve.
(1:59:49) We must be the change, in a weird way. (1:59:52) And then she also (1:59:54) mentioned the Truman Show. (1:59:55) Great example, the Truman Show, (1:59:58) where everybody was (2:00:00) a paid actor who knew their spot. (2:00:03) Everybody that we know (2:00:05) could be a pre-programmed actor (2:00:06) who tells us our lines (2:00:08) and changes it up (2:00:09) with random things every day. (2:00:11) Those are great points. (2:00:12) Oh yeah. (2:00:12) I think there are definitely (2:00:14) NPCs out there too. (2:00:16) Yeah, I mean, (2:00:19) I do think it comes with (2:00:20) your limited conscious ability (2:00:23) to understand things. (2:00:26) And I don't know. (2:00:29) I mean, the second (2:00:31) we talk about this, (2:00:31) this is where I get in trouble. (2:00:32) Because like, (2:00:35) I don't know what someone (2:00:36) with Down syndrome has, (2:00:38) but I've not ever seen (2:00:41) someone with Down syndrome (2:00:42) without a smile. (2:00:46) Yeah. (2:00:47) Like, and I'm not, (2:00:48) I don't know if that's a, (2:00:49) you can call me whatever. (2:00:51) What is that called? (2:00:51) A stereotype, I guess, (2:00:53) that all people with (2:00:54) Down syndrome smile. (2:00:55) I guess that's a negative. (2:00:56) I don't know. (2:00:57) Whatever. (2:01:00) I'm miserable. (2:01:03) I'm miserable because (2:01:05) I overthink everything. (2:01:07) And to get people (2:01:09) to turn that off, (2:01:11) that's where maybe (2:01:12) a Scott Adams approach works. (2:01:15) You know what I mean? (2:01:16) Because by (2:01:17) replacing your thought (2:01:19) with a fact, right, (2:01:21) like an affirmation, (2:01:22) you're no longer going down (2:01:24) your own rabbit holes of (2:01:26) defeat, in a weird way. (2:01:28) Yeah. (2:01:29) Yeah. (2:01:29) Then you get into (2:01:30) the free-will conversation. (2:01:31) Do people even have (2:01:32) the ability to think that way (2:01:34) and do it for themselves? (2:01:36) Not everyone, (2:01:37) I don't think. (2:01:38) No, no. (2:01:39) And that's the thing: (2:01:40) people are limited, (2:01:41) and that's where (2:01:42) the concern is, though. (2:01:43) But I've seen (2:01:44) the happiest people (2:01:47) where, from my standpoint, (2:01:48) I'd be like, how (2:01:49) is your layer of (2:01:51) happiness superseding mine? (2:01:54) And I don't mean that (2:01:55) in a resentful way. (2:01:56) It's not like that, (2:01:57) but I just look at it (2:01:59) like it is, because (2:02:01) things don't concern people (2:02:02) the way they concern me. (2:02:04) You know what I mean? (2:02:05) Like, it's water (2:02:07) off a duck's back, right? (2:02:07) Water off the back, man. (2:02:08) That's a great way (2:02:09) to approach life. (2:02:10) Before we call it, (2:02:14) I do have one more clip, (2:02:17) but Jason, thank you (2:02:18) so much for joining. (2:02:19) We got to 49 people. (2:02:20) We might just get to 50 here, (2:02:22) two hours in. (2:02:23) Once again, Jason, (2:02:25) thank you for joining us. (2:02:26) Justin, Jesse, Valley Girl, (2:02:28) we're going to talk offline. (2:02:29) We're going to talk with everybody. (2:02:31) We're so grateful. (2:02:32) Jason, please share your stuff.
(2:02:34) Share a little bit about (2:02:35) your most recent conversation (2:02:37) with Andrew, and just share that (2:02:39) so we can get your socials (2:02:41) out there and everybody (2:02:41) can tune into your stuff. (2:02:43) Yeah, check out Drop the Mask Pod. (2:02:46) My latest episode (2:02:47) was with Andrew Heaton. (2:02:48) He wrote a book called (2:02:49) Tribalism is Dumb, (2:02:51) all about how people fight (2:02:54) each other over these stupid things, (2:02:55) and we're not really arguing (2:02:56) about what we think we are. (2:02:58) He is amazing. (2:03:00) He's a super professional. (2:03:01) He's a comedian as well, (2:03:03) so very funny. (2:03:04) Go check out my last episode (2:03:06) at Drop the Mask Pod. (2:03:07) Find me on X, YouTube, Rumble, (2:03:10) wherever you get your podcasts. (2:03:12) I'm all over. (2:03:13) Yeah, you are. (2:03:15) And by the way, (2:03:16) I recommend listening (2:03:17) to Jason's conversation (2:03:21) with Andrew before the Tom Woods one, (2:03:22) but I would recommend (2:03:23) you listen to both for sure. (2:03:25) But I found yours, (2:03:28) I don't know. (2:03:29) There's some weird stuff (2:03:30) about Tom Woods, (2:03:31) and it's really odd, (2:03:31) because I don't want, (2:03:32) I'm not here to talk (2:03:33) crap about Tom Woods. (2:03:33) I think Tom Woods is (2:03:35) highly intelligent, (2:03:36) but it's like some of the stuff (2:03:37) where he goes, (2:03:37) I didn't hear about that, (2:03:38) or didn't know about that. (2:03:39) I was very shocked, (2:03:41) considering he's got his ear (2:03:42) to the ground on, like, the psychosis, (2:03:45) considering the book he wrote. (2:03:46) A lot of the stuff (2:03:47) I just thought he would know, (2:03:48) like Dunbar's number. (2:03:49) I was surprised he didn't (2:03:50) know Dunbar's number, (2:03:51) wasn't that aware of it. (2:03:52) So anyway, I'm Mark. (2:03:54) Knocked Conscious is mine. (2:03:57) But you know what? (2:03:58) I haven't had that many, (2:03:59) I haven't had cool people (2:04:00) to talk to like Jason's got, (2:04:01) because Jason's like fun (2:04:03) and popular and famous. (2:04:04) But we've got our Consciously Unmasked (2:04:07) every couple of weeks or so. (2:04:09) We're trying to get (2:04:10) as regular as we can. (2:04:11) I still have the Provoked one. (2:04:12) We've got week eight. (2:04:13) It got pushed again. (2:04:15) Jacob had his third trip (2:04:17) to the emergency room this year, (2:04:19) all three trips. (2:04:20) My heart goes out to him. (2:04:22) Hopefully we can get this done. (2:04:23) We're trying to get Scott Horton on for one. (2:04:26) And if you heard, Dave Smith (2:04:28) brought up Adam Curry (2:04:29) talking shit about Scott. (2:04:31) I'm trying to get Adam Curry on, (2:04:33) and this is not to shit on Adam Curry. (2:04:35) I actually very much admire (2:04:37) and like Adam Curry. (2:04:39) He's into crypto. (2:04:40) He's a crypto bro. (2:04:42) He did the first podcast. (2:04:43) He's a really smart guy. (2:04:45) But he does not know who Scott is. (2:04:47) So I just want to get him introduced. (2:04:49) I think they'd actually be good. (2:04:50) I think they'd get along, (2:04:51) actually, if they ever spoke. (2:04:53) So I'd hope to facilitate that. (2:04:55) But we've still got the League of Friends, (2:04:57) ordinary friends. (2:04:58) We'll do that once a month.
(2:05:00) I forget what we're going to talk about, (2:05:02) but we got stuff. (2:05:03) You and I've got stuff to talk about. (2:05:05) We're going to talk about dreams, (2:05:06) apparently, obviously, as well. (2:05:10) But Tom was awesome. (2:05:13) Oh, my God. (2:05:14) I guess Impromptu (2:05:16) does know who Jacob is. (2:05:20) Impure, yeah. (2:05:22) Yeah, and then obviously, (2:05:24) did you hear what Adam said (2:05:26) about Scott on Dave Smith? (2:05:29) He pulled it up on his show. (2:05:30) I did not. (2:05:32) OK, go and check. (2:05:33) It's on Part of the Problem, (2:05:35) the Tulsi one, the one before Tulsi, (2:05:37) the one before the last one. (2:05:38) And I think they talk about it. (2:05:40) But basically they talk shit about Scott. (2:05:42) It's really weird, anyway. (2:05:45) But that's from me to him: (2:05:47) get it together, man. (2:05:48) Get it together. (2:05:49) All right. (2:05:50) Everyone, thank you so much for joining us. (2:05:53) This has been another Consciously Unmasked. (2:05:55) We got to 52. (2:05:56) All right. (2:05:57) I have one more video file (2:05:59) before we're going to call it a day. (2:06:00) We will hit end after this, (2:06:02) but we'll say our goodbyes. (2:06:04) But this is what it is. (2:06:07) I think Scott Adams stole this. (2:06:11) So this is what I'll play for us. (2:06:17) So, George, how do I beat this lie detector? (2:06:20) I'm sorry, Jerry, I can't help you. (2:06:22) Come on, you got the gift. (2:06:24) You're the only one that can help me. (2:06:25) I can't. (2:06:27) It's like saying to Pavarotti, (2:06:29) teach me to sing like you. (2:06:33) All right, well, I got to go take this test. (2:06:35) I can't believe I'm doing this. (2:06:39) Jerry, just remember: (2:06:41) it's not a lie if you believe it. (2:06:51) I like to remember, it's not a lie if you believe it. (2:06:56) I mean, that's the affirmation thought, isn't it? (2:07:00) Like, I didn't even know that you were going to go to Scott (2:07:02) and play the thing about affirmation. (2:07:04) But that's literally what it is. (2:07:04) Like, if I believe everything I do in my life, (2:07:07) and I just say I did this because I believe I did it, (2:07:10) you did it, right? (2:07:13) Yeah, yeah. (2:07:14) Owen Benjamin, I used to follow him. (2:07:16) He would always say, I might be wrong, but I'm not lying. (2:07:19) So yes, and that's the thing: we see guys, (2:07:22) guys that we've been watching, (2:07:24) Alex O'Connor is a great example. (2:07:26) You can disagree with their conclusions. (2:07:29) I don't find them to be disingenuous (2:07:32) in their search for truth. (2:07:34) Yeah, like I don't find Scott, Scott Horton (2:07:38) for example, to be like that, like that's his goal. (2:07:41) No, he's looking for truth. (2:07:42) Now, I do think some on that side that agree with him (2:07:45) are a little more idealistic. (2:07:47) And I don't want to say like a Danny McAdams or something, (2:07:50) because I like Danny McAdams, too, (2:07:52) or at least some of his stuff. (2:07:53) But you never know, right? (2:07:55) There's some gray area there. (2:07:56) And it's like we can't lump these people all together. (2:07:59) But anyway, thank you, everyone, again for joining us. (2:08:02) This has been awesome. (2:08:04) And Jason, thank you for making this awesome, (2:08:06) because, you know, this is good, just to talk (2:08:08) and just BS with everybody.
(2:08:11) Yeah, this was a much more improvised version, (2:08:15) but it turned out pretty good, I think. (2:08:17) Yeah, I like this. (2:08:18) Well, we'll definitely do some more. (2:08:19) And I think that's the thing: (2:08:22) we can't come at it too seriously either. (2:08:23) You know, so yeah. (2:08:25) And what do we know? (2:08:27) Just what do we know? (2:08:28) We don't know anything. (2:08:30) All right. (2:08:30) Take care, everybody. (2:08:32) Goodbye. (2:08:33) Thank you, everyone. (2:08:34) We will talk offline with everyone. (2:08:35) Once again, Valley Girl, thank you for joining our stream. (2:08:38) That's awesome. (2:08:38) We're so grateful for that. (2:08:40) And we will talk offline, because, like I said, (2:08:43) I actually contacted you because, (2:08:45) I don't know what you're going through, (2:08:48) I just know that you're going through stuff, (2:08:51) if that makes sense. (2:08:52) And it's not exactly the same, (2:08:55) but there's some alignment there. (2:08:57) So hopefully it'll help. (2:08:59) But thank you again, guys. (2:09:01) And don't forget: it's not a lie if you believe it.
