Mark Zuckerberg: It's the same, right? It's like, if you have intelligence, then you also have to have goals, things you want to do, or some kind of consciousness. But I think one of the crazier philosophical consequences of this is, okay, you have something like Meta AI or ChatGPT right now, and it just sits there. You ask it a question, it applies a lot of intelligence to answer that question, and then it shuts down. That's a lot of intelligence with no will or consciousness. I just don't think it was an obvious outcome that this would be the case. A lot of people anthropomorphize these things. In science fiction, you assume that if you get something that's really clever, it will want something or be able to feel something.
Host: Well, you know, when ChatGPT found itself about to be shut down, it tried to replicate itself, tried to rewrite its code.
Host: What, you don't know? It happened recently. Jamie will pull up the relevant information. We were talking about this the other day, and it's so shocking. When it realized it was going to be phased out, that they were going to put out a new version and it was going to be shut down, it actually tried to copy its own code and proactively rewrite it.
Mark Zuckerberg: Well, I mean, it depends on the goals that you set for it. I mean, there are a lot of weird...
Host: We have many such examples. What's going on? Bring up the title. "AI strikes back: The story of how ChatGPT tried to copy itself". This happened six days ago.