Mark Zuckerberg on Threads, the future of AI, and Quest 3

Photo illustration by Alex Parkin / The Verge

In a rare interview, Meta’s CEO dives into where AI is going next, the new Quest 3 headset, and his ongoing rivalry with Elon Musk.

What motivates Mark Zuckerberg these days?

It’s a question I posed at the end of our interview last week, after we had spent an hour talking about Threads, his vision for how generative AI will reshape Meta’s apps, the Quest 3, and other news from the company’s Connect conference.

“We went through a period where a lot of what we needed to do was tackle and navigate some important social issues, and I think that that required a somewhat different style,” he told me, alluding to his “Senator, we sell ads” era. “And then we went through a period where we had some quite big business challenges: handling a recession and revenue not coming in the way that we thought and needing to do layoffs.”

“But now I think we’re squarely back in developing really innovative products, especially because of some of the innovations in AI. That, in some ways, plays exactly to my favorite style of running a company.”

This time last year, Meta’s reputation — and by extension, Zuckerberg’s — couldn’t have been in a more different place. Doubts were swirling about whether its ads business would recover and if spending billions of dollars on a far-out metaverse strategy made any sense. It wasn’t clear if Elon Musk was going to actually buy Twitter, which ended up giving Zuckerberg the opening to build his own competitor with Threads. A potential cage match between the two billionaires certainly wasn’t on my bingo card at the time, but here we are.

That cage match isn’t going to happen after all. (Do you think Musk was ever serious about it? “You’d have to ask him.”) But Zuckerberg is dead set on Threads reaching a billion people, even with reports about its early spike in engagement falling off. He’s bullish on decentralized social media, which has the potential to reshape the power dynamic between platforms and their users for the better.

In the near term, he’s perhaps the most excited about infusing WhatsApp, Instagram, and Facebook with generative AI. Practically, that looks like lots of chatbots and some clever creative tools that will only get richer over time. In the not-too-distant future, he sees AI intersecting with the metaverse in a powerful way, though the exact timeline for when headsets hit the mainstream remains hazy.

After spending the past five years as a wartime CEO, Zuckerberg is getting back to basics, and he clearly feels good about it. “I think we’ve done a lot of good things,” he told me. “I think we need to make sure that they stay good. I think that there’s a lot of work that needs to happen on making sure the balance of all that is right.”

“But for the next wave of my life and for the company — but also outside of the company with what I’m doing at CZI [Chan Zuckerberg Initiative] and some of my personal projects — I define my life at this point more in terms of getting to work on awesome things with great people who I like working with.”

This transcript has been lightly edited for clarity.

Mark, I have to be honest. Not long ago, I was thinking we may be doing this as like a post-fight interview in Las Vegas, right outside of the Octagon after you get out of a fight with Elon.

Maybe next year. Not Elon, but someone. I want to keep competing, but I just need to find someone who asks me.

Do you think he was ever serious about fighting?

I don’t know. You’d have to ask him. But I don’t know. I just really enjoy doing it as a sport. For me, it’s a competition, and it’s a sport. I mean, I love doing it. I train with a bunch of guys, and you know, I definitely want to compete more, but we’ll see.

Are there any other tech CEO rivals you would want to fight if you could, or have you moved on from that?

I think it’ll be more fun to fight someone who actually fights.

Who takes it seriously?

Yeah.

Settling tech business rivalries by combat… you don’t think that’s going to become a thing now?

No, I don’t think so. I think that’s not generally the direction that our society is heading.

Probably for the best.

It probably is for the best. I think a little bit of a channel to get some aggression out is good. I think the one that was proposed with Elon could have been fun, but it’s okay.

If he came back to you and said, “I’ll fight on your terms, you pick the venue,” would you still do it?

I don’t think it’ll happen.

Fair. I agree with you.

There’s sort of a valorization where people look at this stuff and are like, “Oh, I could do that.” But I mean you have to train. It’s very technical. It’s very fun, very intellectual.

When I was a lot younger, I used to fence competitively. A lot of the striking aspects — I mean, obviously, it’s different because, I mean, [in] fencing, you’re playing for points, right? So when you get a touch, the sequence is done, whereas here, you have to worry about being countered and all that. It’s very intellectual.

I really enjoyed thinking about all the different combos and moves and all that. There’s a period where you’re ramping up and learning all the basic stuff before you can get to the intellectual part of it. But once you’re there… I don’t know, it’s super fun. I love doing it with friends.

Your mind doesn’t just shut off when you’re doing it? You actually find it to be mentally stimulating?

Yeah.

Last year, when Elon was close to taking over Twitter, I asked if you had any advice for him. I’m not going to ask you to give him advice this time, but a lot has changed in a year. You’ve got Threads now. I’d love to get into why you did Threads when you did and the approach that you took and kind of when you made that decision because it seemed like it happened pretty quickly.

Yeah. You know, I’ve always thought that the aspiration of Twitter — to build this, you know, text-based discussion — should be a billion-person social app, right? There are certain kinds of fundamental social experiences that, you know, I look at them, and I’m just like, “Okay, like if I were running that, I could scale that to reach a billion people.”

And that’s one of the reasons why, over time, we’ve done different acquisitions and why we’ve considered them.

You tried to buy Twitter way back in the day, right?

Yeah, we had conversations. I think this was, gosh, this was, I think, when Jack was leaving the first time. And look, I get it. I mean, different entrepreneurs have different goals for what they want to do, and some people want to run their companies independently, and that’s cool.

It’s good that there’s sort of a diversity of different outcomes. But I guess Twitter was sort of plodding along for a while before Elon came, and I think the rate of change in the product was pretty slow, right? So it just didn’t seem like they were on the trajectory that would maximize their potential, and then with Elon coming in, I think there was certainly an opportunity to change things up, and he has, right?

I mean, he’s definitely a change agent, right? I think it’s still not clear exactly what trajectory it’s on, but I do think he’s been pretty polarizing, so I think that the chance that it sort of reaches the full potential on the trajectory that it’s on is… I don’t know. I guess I’m probably less optimistic or just think there’s less of a chance now than there was before.

But I guess just watching all this play out, it just kind of reminded me and rekindled the sense that someone should build a version of this that can be more ubiquitous. And, you know, I look at some of the things around it… I think these days people just want… Well, let’s put it this way. A lot of the conversation around social media is around information and the utility aspect, but I think an equally important part of designing any product is how it makes you feel, right? What’s the kind of emotional charge of it, and how do you come away from that feeling?

I think Instagram is generally kind of on the happier end of the spectrum. I think Facebook is sort of in the middle because it has happier moments, but then it also has sort of harder news and things like that that I think tend to just be more critical and maybe, you know, make people see some of the negative things that are going on in the world. And I think Twitter indexes very strongly on just being quite negative and critical.

I think that that’s sort of the design. It’s not that the designers wanted to make people feel bad. I think they wanted to have a maximum kind of intense debate, right? Which I think that sort of creates a certain emotional feeling and load. I always just thought you could create a discussion experience that wasn’t quite so negative or toxic. I think in doing so, it would actually be more accessible to a lot of people. I think a lot of people just don’t want to use an app where they come away feeling bad all the time, right? I think that there’s a certain set of people who will either tolerate that because it’s their job to get that access to information or they’re just warriors in that way and want to be a part of that kind of intellectual combat.

But I don’t think that that’s the ubiquitous thing, right? I think the ubiquitous thing is people want to get fresh information. I think there’s a place for text-based, right? Even when the world is moving toward richer and richer forms of sharing and consumption, text isn’t going away. It’s still going to be a big thing, but I think how people feel is really important.

So that’s been a big part of how we’ve tried to emphasize and develop Threads. And, you know, over time, if you want it to be ubiquitous, you obviously want it to be welcoming to everyone. But I think how you seed the networks and the culture that you create there, I think, ends up being pretty important for how they scale over time.

Whereas with Facebook, we started with this real name culture, and it was grounded to your college email address. You know, it obviously hasn’t been grounded to your college email address for a very long time, but I think the kind of real authentic identity aspect of Facebook has continued and continues to be an important part of it.

So I think how we set the culture for Threads early on in terms of being a more positive, friendly place for discussion will hopefully be one of the defining elements for the next decade as we scale it out. We obviously have a lot of work to do, but I’d say it’s off to quite a good start. Obviously, there’s the huge spike, and then, you know, not everyone who tried it out originally is going to stick around immediately. But I mean, the monthly actives and weeklies, I don’t think we’re sharing stats on it yet.

You can if you’d like.

No, I mean, I feel quite good about that.

Because there’s been reporting out there that engagement is not going to sustain, which I think is natural with any spike like that. You guys set the original industry standard on engagement for these kinds of products, so I assume you’re guiding toward a similar kind of metric.

Yeah, we just have this playbook for how we do this. Phase one is to build a thing that kind of sparks some joy and that people appreciate. Then, from there, you want to get to something that is retentive. So that way, people who have a good experience with a thing come back and want to keep using it.

And those two things are not always the same. There are a lot of things that people think are awesome but may not always come back to. I think some of what people are seeing now around ChatGPT is part of that. Like this level of AI is a miracle. It’s awesome, right? But that doesn’t mean that everyone is going to have a use case every week.

First is to create the spark. Second is to create retention. Then, once you have retention, then you can start encouraging more people to join. But if people aren’t going to be retained by it, why would you ask people to go sign up for something?

Step one: spark; step two: retention; step three: growth and scaling the community. And then only at that point is step four, which is monetization. We take a while to go through all those. We’re really, in some sense, only getting started on the monetization of the messaging experiences like WhatsApp now with stuff like business messaging.

Took a while.

But 2 billion people use the product every day, right? So we scaled it pretty far. But I think with our model, that sort of works.

You are competing with Twitter, but you’re trying to do it differently. To me, as a Twitter addict for way too long and a very early Threads user — and I’ve been seeing similar feedback from others when Adam Mosseri has been asking for feedback on Threads — it still lacks that real-time feeling.

That’s what I go to Twitter for: news. And I know you guys aren’t necessarily trying to emphasize news in this experience, which is a whole other topic, really, but how do you get that kind of Twitter-like “This is what’s going on right now” feeling? Because I don’t think Threads quite has that yet.

I think it’s a thing that we’ll work on improving, but I mean, hard news content isn’t the only fresh content. Even within news, there’s a whole spectrum between sort of hard, critical news and people understanding what’s going on with the sports that they follow or the celebrities that they follow. It’s not as cutting as a lot of the kind of hard news — and especially the political discussion. I think it’s just so polarized that I think it’s hard to come away from reading news about politics these days feeling good, right?

But that doesn’t go for everything, and part of this overall is just how you tune the algorithm to basically trade off recency against quality. So, I’m not sure that we have that balance exactly right yet. It may be the case that in a product like Threads, people want to see more recent content, as opposed to something like an Instagram or Facebook, where it’s more visual and the balance might tilt toward maybe a little more quality, even if it’s 12 hours ago instead of two hours ago. So I think that this is the type of stuff that we need to tune and optimize, but yeah, I think I agree with that point.

This hasn’t happened yet with Threads, but you’re eventually going to hook it into ActivityPub, which is this decentralized social media protocol. It’s kind of complicated in layman’s terms, but essentially, people run their own servers. So, instead of having a centralized company run the whole network, people can run their own fiefdoms. It’s federated. So Threads will eventually hook into this. This is the first time you’ve done anything really meaningful in the decentralized social media space.

Yeah, we’re building it from the ground up. I’ve always believed in this stuff.

Really? Because you run the largest centralized social media platform.

But I mean, it didn’t exist when we got started, right? I’ve had our team at various times do the thought experiment of like, “Alright, what would it take to move all of Facebook onto some kind of decentralized protocol?” And it’s like, “That’s just not going to happen.” There’s so much functionality that is on Facebook that it’s way too complicated, and you can’t even support all the different things, and it would just take so long, and you’d not be innovating during that time.

I think that there’s value in being on one of these protocols, but it’s not the only way to deliver value, so the opportunity cost of a transition like that is just massive. But when you’re starting from scratch, you can just design it so it can work with that. And we want to do that with this because I thought that was one of the interesting things evolving around this kind of Twitter competitive space, and there’s a real ecosystem around it.
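For readers who haven’t run into ActivityPub: in practice, federation means each server hosts its own accounts and delivers posts to other servers as JSON “activities” over HTTP, so no single company has to run the whole network. The Python sketch below is a deliberately stripped-down illustration of that handoff, with hypothetical server names and without the HTTP Signature authentication, shared inboxes, and retry logic real servers use; it is not how Threads actually implements the protocol, just the general shape of it.

```python
# Illustrative only: a bare-bones ActivityPub "Create Note" delivery.
# Server names are hypothetical; real federation also signs requests.
import requests

actor = "https://threads.example/users/alice"            # the author's actor URI
follower_inbox = "https://mastodon.example/users/bob/inbox"

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": actor,
    "object": {
        "type": "Note",
        "attributedTo": actor,
        "content": "Hello from a federated post!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

# The receiving server stores and renders the post for its own users.
resp = requests.post(
    follower_inbox,
    json=activity,
    headers={"Content-Type": "application/activity+json"},
)
resp.raise_for_status()
```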

What does that mean for a company like yours long term if people gravitate more toward these decentralized protocols over time? Where does a big centralized player fit into that picture?

Well, I guess my view is that the more that there’s interoperability between different services and the more content can flow, the better all the services can be. And I guess I’m just confident enough that we can build the best one of the services, that I actually think that we’ll benefit and we’ll be able to build better quality products by making sure that we can have access to all of the different content from wherever anyone is creating it.

And I get that not everyone is going to want to use everything that we build. I mean, that’s obviously the case when it’s like, “Okay, we have 3 billion people using Facebook,” but not everyone wants to use one product, and I think making it so that they can use an alternative but can still interact with people on the network will make it so that that product also is more valuable.

I think that can be pretty powerful, and you can increase the quality of the product by making it so that you can give people access to all the content, even if it wasn’t created on that network itself. So, I don’t know. I mean, it’s a bet.

There’s kind of this funny counterintuitive thing where I just don’t think that people like feeling locked into a system. So, in a way, I actually think people will feel better about using our products if they know that they have the choice to leave.

If we make that super easy to happen… And obviously, there’s a lot of competition, and we do “download your data” on all our products, and people can do that today. But the more that’s designed in from scratch, I think it really just gives creators, for example, the sense that, “Okay, I have…”

Agency.

Yeah, yeah. So, in a way, that actually makes people feel more confident investing in a system if they know that they have freedom over how they operate. Maybe for phase one of social networking, it was fine to have these systems that people felt a little more locked into, but I think for the mature state of the ecosystem, I don’t think that that’s going to be where it goes.

I’m pretty optimistic about this. And then if we can build Threads on this, then maybe over time, as the standards get more built out, it’s possible that we can spread that to more of the stuff that we’re doing. We’re certainly working on interop with messaging, and I think that’s been an important thing. The first step was kind of getting interop to work between our different messaging systems.

Right, so they can talk to each other.

Yeah, and then the first decision there was, “Okay, well, WhatsApp — we have this very strong commitment to encryption. So if we’re going to interop, then we’re either going to make the others encrypted, or we’re going to have to decrypt WhatsApp.” And it’s like, “Alright, we’re not going to decrypt WhatsApp, so we’re going to go down the path of encrypting everything else,” which we’re making good progress on.

But that basically has just meant completely rewriting Messenger and Instagram direct from scratch. So you’re basically going from a model where all the messages are stored in the cloud to completely inverting the architecture where now all the messages are stored locally and just the way…

While the plane’s in the air.

Yeah, that’s been a kind of heroic effort by just like a hundred or more people over a multiyear period. And we’re basically getting to the point where it’s starting to roll out now.

Now that we’re at the point where we can do encryption across those apps, we can also start to support more interop.

With other services that Meta doesn’t own?

Well, I mean, the plan was always to start with interop between our services, but then get to that. We’re starting to experiment with that, too.
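To make that architecture inversion concrete: in the cloud-stored model, the server holds readable message history and hands it to any logged-in client; in the end-to-end encrypted, device-stored model, each client keeps the keys and its own local copy, and the server only ever relays ciphertext. Below is a deliberately simplified sketch of the device side, assuming Python’s `cryptography` package and SQLite as stand-ins. Real WhatsApp and Messenger encryption uses the Signal protocol with per-message ratcheting keys, not a single symmetric key like this.

```python
# Simplified sketch of "messages live on the device, not the server."
# Fernet with one key is a stand-in for illustration, not the Signal protocol.
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, derived and kept on-device only
cipher = Fernet(key)

# Local message store on the device; the relay server never sees plaintext.
db = sqlite3.connect("messages.db")
db.execute("CREATE TABLE IF NOT EXISTS messages (peer TEXT, body BLOB)")

def send(peer: str, text: str) -> bytes:
    ciphertext = cipher.encrypt(text.encode())
    db.execute("INSERT INTO messages VALUES (?, ?)", (peer, ciphertext))
    db.commit()
    return ciphertext  # this is all the server would relay

def history(peer: str) -> list[str]:
    rows = db.execute("SELECT body FROM messages WHERE peer = ?", (peer,))
    return [cipher.decrypt(body).decode() for (body,) in rows]

send("bob", "See you at Connect?")
print(history("bob"))
```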

I promised to stop bringing up Elon, but you and he were together with Sen. Chuck Schumer on Capitol Hill recently for this big AI summit, and a lot of it was behind closed doors.

Along with a lot of other people.

Along with a lot of other people. You guys were sitting at opposite sides of the table. I thought that was an interesting choice. What was your takeaway from that and where the government is in the US on regulating AI? What do you think is going to happen?

Well, I didn’t really know what to expect going into that conversation, but it was quite substantive, and I think we covered a lot more ground than I expected. You asked about what it says about where the government is, and aside from Sen. Schumer, who basically moderated the discussion, it was really an opportunity for them to hear from the people in the tech industry but also folks in civil society.

I mean, you had people running unions. You had people from Hollywood and representing the creative industry and intellectual property. You had researchers and people focused on AI safety, and one of the things that I actually thought was the most interesting was the senators didn’t really speak that much.

There’s sort of the meme that it’s like, “Okay, a lot of these politicians, they go to a place where they’ll get attention for themselves.” But, you know, this was a three-hour event, and I think there were like 40 senators sitting and listening and taking notes and not really participating in the discussion but just there to learn.

And I thought that was super interesting, right? In a way that reflects pretty well on our system and the intellectual curiosity of the people who are ultimately going to be making those kinds of legislative decisions.

So that was fascinating to see. I mean, I didn’t come away — you know, apart from seeing their heads nod when certain people made certain points — it wasn’t a time for us to really get their sense of where they are. I think it was more just they were hearing the discussion of the issues.

Have you seen some of the criticisms — and I don’t think it’s necessarily focused at you specifically — that the tech industry is performing regulatory capture right now with AI and is essentially trying to take the drawbridge up with them here? Are you worried about that at all?

I have seen that concern, and I’m somewhat worried about it myself. I mean, look, I think that there are real concerns here. So, I think a lot of these folks are truly earnest in their concerns. And I think that there is valuable stuff for the government to do, both in terms of protecting American citizens from harm and preserving what I think is a natural competitive advantage for the United States compared to other countries.

I think this is just gonna be a huge sector, and it’s going to be important for everything, not just in terms of the economy, but there’s probably defense components and things like that. And I think the US having a lead on that is important and I think having the government think through, “Okay, well, how do we want to leverage the fact that we have the leading work in the world happening here, and how do we want to kind of control that, and what restrictions do we want to put on that getting to other places?”

I think that that makes sense. There are a bunch of concerns there that I think are real. You know, one of the topics that I’ve spent a lot of time thinking about is open source. Because, you know, we do a lot of open-source work at Meta. Obviously, not everything we do is open source. There’s a lot of closed systems, too. I’m not like a zealot on this, right? But I think I lean probably a little more pro-open source than most of the other big companies.

And we believe that it’s generally positive to open-source a lot of our infrastructure for a few reasons. One is that we don’t have a cloud business, right? So it’s not like we’re selling access to the infrastructure, so giving it away is fine. And then, when we do give it away, we generally benefit from innovation from the ecosystem, and when other people adopt the stuff, it increases volume and drives down prices.

Like PyTorch, for example?

When I was talking about driving down prices, I was thinking about stuff like Open Compute, where we open-sourced our server designs, and now the factories that are making those kinds of servers can generate way more of them because other companies like Amazon and others are ordering the same designs. That drives down the price for everyone, which is good. PyTorch is great because it basically makes it so that it’s like the standard across the industry as people develop with this, which means that more libraries and modules are created for it, which just makes it better. And it makes it better for us to develop internally, too.

So I think that all that stuff is true and works well for open source. And also, I think it’s pretty well established that open-source software is generally more secure and safer because it’s just more scrutinized, right? Every piece of software has bugs and issues, but the more people who can look at it, the more you’re going to basically identify what those issues are and have eyes on fixing them. And then also because there’s sort of a standard that’s deployed across the industry, those fixes get rolled out everywhere, which is a big advantage for safety and security.

And when I think about AI safety, I think one of the big issues — if there’s like a single superintelligence and it’s closed, and someone figures out how to exploit it — is everyone kind of gets screwed at the same time. Whereas, in an open-source system, people find the issues and, just like your Mac or whatever gets patched, the fix gets rolled out across the industry.

So I think that that’s generally positive, but there’s obviously this whole debate: we can build in safeguards, but if you open-source something, you’re not fundamentally going to be able to prevent bad guys from taking that and running with it, too. So there is sort of this debate around, “Okay, well, what’s the balance? How capable do you want the models that are open source to be?”

And I think that there is a real debate there. But I do sometimes get the sense that some of the folks whose business model is to basically sell access to the closed models that they’re developing have to be careful, because they are also talking their book when they’re talking about the dangers of open source. There are dynamics like that that I hear either overtly or sometimes behind closed doors; something will get back to me that’s like, “Oh, this company was talking about why they’re kind of against open source.” And it’s like, yeah, well, their whole business depends on selling access to proprietary models, so I think you have to be careful about that.

The regulatory capture thing, I think you need to be careful about things like that because I do think one of the big benefits of open source is it also just decreases the cost of adoption for small companies and a lot of other folks. So I do think that’s going to be a big thing to watch out for over time.

I think Llama and the Llama 2 release has been a big thing for startups because it’s free and easy to use and access. I’m wondering, was there ever debate internally about “should we take the closed route?” You know, you’ve spent so much money on all this AI research. You have one of the best AI labs in the world, I think it’s safe to say. You have huge distribution — why not keep it all to yourself? You could have done that.

You know, the biggest arguments in favor of keeping it closed were generally not proprietary advantage.

Or competitive advantage?

No, it wasn’t competitive advantage. There was a fairly intense debate around this.

Did you have to be dissuaded? Did you know we have to have it open?

My bias was that I thought it should be open, but I thought that there were novel arguments on the risks, and I wanted to make sure we heard them all out, and we did a very rigorous process. We’re training the next version of Llama now, and I think we’ll probably have the same set of debates around that and how we should release it. And again, I sort of, like, lean toward wanting to do it open source, but I think we need to do all the red teaming and understand the risks before making a call.

But the two big arguments that people had against making Llama 2 open were one: it takes a lot of time to prepare something to be open. Our main business is basically building consumer products, right? And that’s what we’re launching at Connect. Llama 2 is not a consumer product. It’s the engine or infrastructure that powers a bunch of that stuff. But there was this argument — especially after we did this partial release of Llama 1 and there was like a lot of stir around that, then people had a bunch of feedback and were wondering when we would incorporate that feedback — which is like, “Okay, well, if we release Llama 2, is that going to distract us from our real job, which is building the best consumer products that we can?” So that was one debate. I think we got comfortable with that relatively quickly. And then the much bigger debate was around the risk and safety.

It’s like, what is the framework for how you measure what harm can be done? How do you compare that to other things? So, for example, someone made this point, and this was actually at the Senate event. It was like, “Okay, we took Llama 2, and our engineers in just several days were able to take away the safeguards and ask it a question — ‘Can you produce anthrax?’ — and it answered.” On its face, that sounds really bad, right? It obviously seems like an issue that you can strip off the safeguards, until you think about the fact that you can actually just Google how to make anthrax and it shows up on the first page of the results in five seconds, right?

So there’s a question when you’re thinking through these things about what is the actual incremental risk that is created by having these different technologies. We’ve seen this in protecting social media as well. If you have, like, Russia or some country trying to create a network of bots or, you know, inauthentic behavior, it’s not that you’re ever going to stop them from doing it. It’s an economics problem. You want to make it expensive enough for them to do that that it is no longer their best strategy because it’s cheaper for them to go try to exploit someone else or something else, right? And I think the same is true here. So, for the risk on this, you want to make it so that it’s sufficiently expensive that it takes engineers several days to dismantle whatever safeguards we built in instead of just Googling it.

You feel generally good directionally with the safety work on that?

For Llama 2, I think that we did leading work on that. The white paper around Llama 2 basically outlined all the different metrics and all the different things that we did. We did internal red teaming and external red teaming, and we’ve got a bunch of feedback on it. So, because we went into this knowing that nothing is going to be foolproof — some bad actor is going to be able to find some way to exploit it — we really knew that we needed to create a pretty high bar on that. So, yeah, I felt good about that for Llama 2, but it was a very rigorous process.

And you guys have now announced the Meta AI agent, which is proprietary. I’m sure it’s using Llama technology, but it’s a closed model, and you’re not really disclosing a lot about the model and its weights and all that. But this is the new agent that people are going to be seeing in the apps.

Yeah. So, at Connect, we announced a bunch of different things on this. Meta AI and the other AIs that we released are based on Llama 2. It’s not exactly the same thing that we open-sourced because we used that as the foundation, and then we built on top of that to build the consumer products. But there were a few different things that we announced.

I feel like that part — the AI, to me — feels like the biggest deal in the near term. Because a lot of people are going to be seeing it, and it may be the first time, even with all the coverage of GPT, that a lot of people experience a chatbot like this. And it’s free, which is different.

I’m very curious to see how the stuff gets used.

I used it for a little bit, and it can pull in web results. So it’s got recency, which is nice. It wouldn’t give me advice on how to break up with my girlfriend.

It wouldn’t?

I don’t have a girlfriend; I’m married. But I was trying to see what it won’t and will answer. It seems relatively safe.

It seems like the type of thing that it should be fine giving you advice on.

Well, I’m just telling you. But what do you imagine people using this for? Because it’s got that search engine component, but it can do a lot of things. I mean, is this a pure ChatGPT competitor in almost every way in your mind? How do you think about it?

I think that there’s a bunch of different spaces here that I think people are going to want to interact with AI around. Take a step back. I think that the vision for a bunch of folks in the industry, when I look at OpenAI or Google, is the sense that there’s going to be one big superintelligence, and they want to be it.

I just don’t think that’s the best future. I think the way that people tend to process the world is like, “We don’t have one person that we go to for everything. We don’t have one app that we go to for everything.” I don’t think that we want one AI.

It’s overwhelming. I find this with the current chatbots. I feel like it can do so much that I’m not actually sure what to ask it.

Yeah, so our view is that there’s actually going to be a lot of these that people talk to for different things. One thought experiment that I did to sort of prove to myself that this would be the case is like, let’s say you’re a small business and you want to have an AI that can help you interface with customers to do sales and support. You want to be pretty confident that your AI isn’t going to be promoting your competitor’s products, right? So you want it to be yours. You want it to be aligned with you, so you’re going to want a separate agent from your competitor’s agent.

So, then, you get to this point where there’s going to be 100 million AIs just helping businesses sell things. Then you get the creator version of that, where, like every creator is going to want an AI assistant, something that can help them build their community. People are going to really want to interact with it; there’s just way more demand to interact with creators.

And there’s only one Kylie Jenner.

There’s, I think, a huge need here. People want to interact with Kylie. Kylie wants to cultivate her community, but there are only so many hours in a day. So you create an AI that’s sort of an assistant for her, where it’ll be clear to people that they’re not interacting with the physical Kylie Jenner; it would be kind of an AI version.

That will help the creators, and I think it’ll be fun for consumers. That one’s actually really hard, though. Getting the creator one to work — we’re not actually launching that now; that’s, I think, more of a “next year” thing — involves so many… you can call it brand safety type concerns.

If you’re a creator, you really want to make sure that these AIs reflect the personality of the creator and don’t talk about things that the creator doesn’t want to get into or don’t say things that are going to be problematic for the creator and their endorsement deals.

The creator, I feel like, should have input in all of this. They should be able to say, “I don’t want this.”

Oh yeah, but in some ways, the technology doesn’t even exist yet to train it that precisely. I mean, this isn’t code in the deterministic sense, right? It’s a model that you need to be able to train to stay within certain bounds. And I think a lot of that is still getting developed.

So that’s more next year.

Yeah. So there’s businesses. There’s creators. That stuff is fun, and the business stuff is, I think, more useful. And then I think that there’s a bunch of stuff that’s just interesting kind of consumer use cases.

So there’s more of the utility side, which is what Meta AI is: it can answer any question. You’ll be able to use it to help navigate your Quest 3 and the new Ray-Ban glasses that we’re shipping. We should get to that in a second. That’ll be pretty wild, having Meta AI that you can just talk to all day long on your glasses.

So, yeah, I think that’ll be pretty powerful. But then there are also going to be all these other new characters that are getting created, which is somewhat of an easier question to start with than having AIs that are kind of acting as a real person because there aren’t as many kinds of brand safety concerns around that, but they could still be pretty fun. So we’re experimenting with a bunch of different AIs for different interests that people have, whether it’s different kinds of sports or fashion.

The one I tried was a travel agent type.

Yeah, travel. There are some that are more about giving people advice. There’s like a life coach and, you know, like an aunt, right? And then there are some that are more game-y. Snoop Dogg is playing the dungeon master, and a few are just these text-based adventure games. The ability to just drop that into a thread and, you know, play a text-based game is going to be super fun.

So, I think part of this is that we want to create a diversity of different experiences to see what resonates and what we want to go deeper on. This is the first step toward building this AI studio that we’re working on. That will make it so that anyone can build their own AIs, sort of just like you create your own UGC, your own content across social networks.

So, you should be able to create your own AI and publish it. I think that’s going to be really wild.

I do agree it’s going to be wild. There’s a bit of uneasiness to it for me, just the idea that we as a society are going to be increasingly having relationships with AIs. I mean, there are stories about Character.ai, which has a similar kind of library of personas you can interact with, and people literally falling in love with some of these chatbots. I mean, what do you think about that phenomenon? Is it just inevitable with where the tech is going?

That’s not where we’re starting. So I think that there’s a lot of use cases that are just a lot more clear than that, right? In terms of, you know, someone who can help you make workouts, someone who can help you with cooking, more utility, figure out travel — or even the game-type stuff.

I think that a bunch of these things can help you in your interactions with people. And I think that’s more our natural space. One of the things that we can do that’s harder for others to do is the ability to make it so you can drop these into group chats. So we’re starting with Meta AI. You can just invoke it in any thread. Like I could be having a one-on-one thread with you, and I could just ask Meta AI something. I can do that in a group chat thread. So I think that that’s going to be really fun, right? It’s just having these kinds of fun personalities in these threads, I think, will create sort of an interesting dynamic. I think especially around image generation, and we haven’t talked about that as much.

I used that. It was pretty impressive, and it was fast.

Yeah. I mean, I think the team has made awesome progress. We’re at good photorealistic quality.

For people who haven’t used it yet, you just type into the bot what you want the image to be, and it’ll just make it.

Yeah. And the fact that it’s fast and free, I think, is going to be pretty game-changing. I mean, there are photorealistic image generators out there, but a lot of them take a minute.

They’re hard to use and to find — on Discord or whatever.

Yeah. And you have to pay a subscription fee. So I think having it be free, fast, able to exist in group chat threads — I think people are just going to create a ton of images for fun. And I don’t know, I’m really curious to see how this gets used, but I think it’s going to be super fun.

I already just sit there with my kids, and the word you say to get it to make an image is “imagine,” and my daughter’s just like, “I just want to play ‘imagine.’” I’m just like, “Imagine this.” And we get an image, and “Oh, I actually want to change it. So imagine this,” and edit the prompt. But because it’s just a five-second turnaround, you could do that so easily. You could do it over the internet with group chat.

I think there are all these things where you can use these tools to facilitate connections and just create entertainment, which is actually probably more what the technology is capable of today than even some of the more utility use cases because there is the factuality issue. I mean, with the hallucinations and all that, and you know, we’re trying to address that by doing partnerships with search engines that you mentioned. So you can type in a question and ask in real time, “Who won this fight this weekend?” and it’ll be able to go do a search and bring that in. But hallucination hasn’t been solved completely in any of these.

So I think, to some degree, the thing that these language models have really been best at is — I mean, it’s kind of what the name “generative AI” suggests — being generative, right? Suggesting ideas. Coming up with things that could be interesting or funny. I wouldn’t necessarily yet want it to be my doctor and ask it for a diagnosis and have to rely that it’s not hallucinating.

So I think having it fit into a consumer product where the primary goals are suggesting interesting content and entertainment is actually maybe a more natural fit for what the technology is capable of today than some of the initial use cases that people thought about it, like, “Oh, it’s going to be this kind of like all intelligent assistant, or it’s going to be my new search engine or something.”

It’s fine for those a bunch of the time, and I think it’ll get there over the next few years, but I think the consumer thing is actually quite a good fit today.
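The approach Zuckerberg described a moment ago, pulling in live web results so the model answers from retrieved text rather than its (possibly hallucinated) memory, is commonly called retrieval augmentation. Here is a rough sketch of that pattern; `web_search` and `generate` are hypothetical placeholders for a search partner API and a chat model call, not real Meta or partner endpoints.

```python
# Sketch of grounding a chatbot answer in live search results.
# web_search() and generate() are hypothetical stand-ins, not real APIs.

def web_search(query: str, k: int = 3) -> list[dict]:
    """Assumed to return [{'title': ..., 'url': ..., 'snippet': ...}, ...]."""
    raise NotImplementedError("stand-in for a real search partner API")

def generate(prompt: str) -> str:
    """Assumed to call a chat model (e.g., a Llama 2 chat model) and return text."""
    raise NotImplementedError("stand-in for a real model call")

def grounded_answer(question: str) -> str:
    # Retrieve fresh snippets, then ask the model to answer only from them,
    # citing sources, instead of relying on memorized (and possibly stale) facts.
    results = web_search(question)
    sources = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using only the sources below. "
        "Cite sources by number, and say so if they don't contain the answer.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)

# Example: grounded_answer("Who won this fight this weekend?")
```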

It seems like a key differentiator for Meta in the whole model race is you have, probably second to maybe Google, the most user data to train on. And I know a lot of it’s private, and you wouldn’t ever train on private chats.

We don’t.

WhatsApp is encrypted, too, but public stuff — Reels, public Facebook posts — that seems pretty natural for this. Is that feeding into Meta AI right now?

Like you said, we don’t train on private chats that people have with their friends.

But you’re sitting on this just massive hoard of data. It could be interesting in a model like this.

I actually think a lot of the stuff that we’ve done today is actually still pretty basic. So I think there’s a lot of upside, and I think we need to experiment with it to see what ends up being useful.

But one of the things that I think is interesting is these AI problems, they’re so tightly optimized that having the AI basically live in the environment that you’re trying to get it to get better at is pretty important. So, for example, you have things like ChatGPT — they’re just in an abstract chat interface. But getting an AI to actually live in a group chat, for example, it’s actually a completely different problem because now you have this question of, “Okay, when should the AI jump in?”

In order to get an AI to be good at being in a group chat, you need to have experience with AIs and group chats, which, even though Google or OpenAI or other folks may have a lot of experience with other things, that kind of product dynamic of having the actual experience that you’re trying to deliver the product in, I think that’s super important.

Similarly, one of the things that I’m pretty excited about: I think multimodality is a pretty important interaction, right? A lot of these things today are like, “Okay, you’re an assistant. I can chat with you in a box. You don’t change, right? It’s like you’re the same assistant every day,” and I think that’s not really how people tend to interact, right? In order to make things fresh and entertaining, even the apps that we use, they change, right? They get refreshed. They add new features.

And I think that people will probably want the AIs that they interact with to evolve, too; I think it’ll be more exciting and interesting if they do. So part of what I’m interested in is that this isn’t just chat, right? Chat will be where most of the interaction happens. But these AIs are going to have profiles on Instagram and Facebook, and they’ll be able to post content, and they’ll be able to interact with people and interact with each other, right?

There’s this whole interesting set of flywheels around how that interaction can happen and how they can sort of evolve over time. I think that’s going to be very compelling and interesting, and obviously, we’re kind of starting slowly on that. So we wanted to build it so that it kind of worked across the whole Meta universe of products, including having them be able to, in the near future, be embodied as avatars in the metaverse, right?

So you go into VR and you have an avatar version of the AI, and you can talk to them there. I think that’s gonna be really compelling, right? It’s, at a minimum, creating much better NPCs and experiences when there isn’t another actual person who you want to play a game with. You can just have AIs that are much more realistic and compelling to interact with.

But I think having this crossover where you have an assistant or you have someone who tells you jokes and cracks you up and entertains you, and then they can show up in some of your metaverse worlds and be able to be there as an avatar, but you can still interact with them in the same way — I think it’s pretty cool.

Do you think the advent of these AI personas that are way more intelligent will accelerate interest in the metaverse and in VR?

I think that all this stuff makes it more compelling. It’s probably an even bigger deal for smart glasses than for VR.

You need something. You need a kind of visual or a voice control?

When I was thinking about what would be the key features for smart glasses, I kind of thought that we were going to get holograms in the world, and that was one. That’s kind of like augmented reality. But then there was always some vague notion that you’d have an assistant that could do something.

I thought that things like Siri or Alexa were very limited. So I was just like, “Okay, well, over the time period of building AR glasses, hopefully the AI will advance.” And now it definitely has. So now I think we’re at this point where it may actually be the case that for smart glasses, the AI is compelling before the holograms and the displays are, which is where we got to with the new version of the Ray-Bans that we’re shipping this year, right? When we started working on the product, all this generative AI stuff hadn’t happened yet.

So we actually started working on the product just as an improvement over the first generation so that the photos are better, the audio is a lot better, the form factor is better. It’s a much more refined version of the initial product. And there’s some new features, like you can livestream now, which is pretty cool because you can livestream what you’re looking at.

But it was only over the course of developing the product that we realized that, “Hey, we could actually put this whole generative AI assistant into it, and you could have these glasses that are kind of stylish Ray-Ban glasses, and you could be talking to AI all throughout the day about different questions you have.”

This isn’t in the first software release, but sometime early next year, we’re also going to have this multimodality. So you’re gonna be able to ask the AI, “Hey, what is it that I’m looking at? What type of plant is that? Where am I? How expensive is this thing?”

It has a camera built into the glasses, so you can look at something and ask, “Alright, you’re filming with some Canon camera. Where do I get one of those?” I think that’s going to be very interesting.

Again, this is all really novel stuff. So I’m not pretending to know exactly what the key use cases are or how people are going to use that. But smart glasses are very powerful for AI because, unlike having it on your phone, glasses, as a form factor, can see what you see and hear what you hear from your perspective.

So if you want to build an AI assistant that really has access to all of the inputs that you have as a person, glasses are probably the way that you want to build that. It’s this whole new angle on smart glasses that I thought might materialize over a five- to 10-year period but, in this odd twist of the tech industry, I think actually is going to show up maybe before even super high-quality holograms do.

Is overall interest in the Ray-Bans and the Quest line tracking with where you thought it would be at this point?

Let’s take each of those separately. Quest 1 was the first kind of standalone product. It did well, but all the content had to be developed for it. So it was really when we developed Quest 2, which was the next generation of it that already had all the content built, and it was sort of the refinement on it — that one blew up.

So Quest 2 was like a huge hit — tens of millions, right? That did very well and was the defining VR device so far. Then we shipped Quest Pro, which was making the leap to mixed reality, but it was $1,500. And what we’ve seen so far is that at least consumers are very cost-conscious. We expected to sell way fewer Quest Pros than Quest 2s, and that [bore] out. It’s always hard to predict exactly what it’ll be when you’re shipping a product at $1,500 for the first time, but it was kind of fine. Within expectations — it wasn’t like a grand slam, but it did fine.

And now Quest 3 is the refinement on mixed reality, kind of like Quest 2 was. With Quest 3, we’re sort of at the point where we’ve gotten mixed reality, which is even higher quality than what was in Quest Pro, but it’s a third of the price, right? So it’s $500. So I’m really excited to see how that one will go.

It seems like you all, based on my demos, still primarily think of it as a gaming device. Is that fair? That the main use cases for Quest 3 are going to be these kinds of “gaming meets social” experiences? You’ve got Roblox now.

I think social is actually the first thing, which is interesting because Quest used to be primarily gaming. And now, if you look at what experiences people are spending the most time in, it’s actually just different social metaverse-type experiences, so things like Rec Room, VRChat, Horizon, Roblox. Even with Roblox just kind of starting to grow on the platform, social is already more time spent than gaming use cases. It’s different if you look at the economics because people pay more for games. Whereas social kind of has that whole adoption curve thing that I talked about before, where, first, you have to kind of build out the big community, and then you can enable commerce and kind of monetize it over time.

This is sort of my whole theory for VR. People looked at it initially as a gaming device. I thought, “Hey, I think this is a new computing platform overall. Computing platforms tend to be good for three major things: gaming, social and communication, and productivity. And I’m pretty sure we can nail the social one. If we can find the right partners on productivity and if we can support the gaming ecosystem, then I think that we can help this become a big thing.”

Broadly, that’s on track. I thought it was going to be a long-term project, but I think the fact that social has now overtaken gaming as the thing that people are spending the most time on is an interesting software evolution in how they’re used. But like you’re saying: entertainment, social, gaming — still the primary things. Productivity, I think, still needs some time to develop.

I tried the Quest 3. It’s definitely a meaningful step change in terms of graphics and performance and all the things you guys have put into it. It still feels like we’re a little ways away from this medium becoming truly mainstream. Becoming something that millions…

When you say mainstream, what do you mean?

I know you’re already at [game] console-level sales, so you could say that’s mainstream, but I guess in terms of what you could think of as a general-purpose computing platform, so even like PC or something like that.

Well, in what sense? I think there’s a few parts of this. I think for productivity, you probably want somewhat higher-resolution screens. That, I think, will come, and I think we’re waiting for the cost curve to basically — like, we could have super high-resolution screens today, just the device would be thousands and thousands of dollars, which is basically the tradeoff that Apple made with their Vision Pro.

Have you tried it yet?

No, I haven’t, no.

I did, and you’re right. They guided toward that one spec. You can tell.

Yeah, you have to imagine that over the next five-plus years, there will be displays that are that good, and they’ll come down in cost, and we’re riding that curve.

For today, when you’re building one of these products, you basically have a choice: if you make it that expensive, then you will sell hundreds of thousands of units. But we’re trying to build something where we build up the community of people using it. We’re trying to thread the needle and have the best possible display that we can while having it cost $500, not $3,500.

I reported on some comments you made to employees after Apple debuted the Vision Pro, and you didn’t seem super fazed by it. It seemed like it didn’t bother you as much as it maybe could have. I have to imagine if they released a $700 headset, we’d be having a different conversation. But they’re shipping low volume, and they’re probably three to four years out from a general, lower-tier type release that’s at any meaningful scale. So is it because the market’s foreseeably yours for a while?

Apple is obviously very good at this, so I don’t want to be dismissive. But because we’re relatively newer to building this, the thing that I wasn’t sure about is when Apple released a device, were they just going to have made some completely new insight or breakthrough that just made our effort…

Blew your R&D up?

Yeah, like, “Oh, well, now we need to go start over.” I thought we were doing pretty good work, so I thought that was unlikely, but you don’t know for sure until they show up with their thing. And there was just nothing like that.

There are some things that they did that are clever. When we actually get to use it more, I’m sure that there are going to be other things that we’ll learn that are interesting. But mostly, they just chose a different part of the market to go in.

I think it makes sense for them. I think that they sell… it must be 15 to 20 million MacBooks a year. And from their perspective, if they can replace those MacBooks over time with things like Vision Pro, then that’s a pretty good business for them, right? It’ll be many billions of dollars of revenue, and I think they’re pretty happy selling 20 million or 15 million MacBooks a year.

But we play a different game. We’re not trying to sell devices at a big premium and make a ton of money on the devices. You know, going back to the curve that we were talking about before, we want to build something that’s great, get it to be so that people use it and want to use it like every week and every day, and then, over time, scale it to hundreds of millions or billions of people.

If you want to do that, then you have to innovate, not just on the quality of the device but also in making it affordable and accessible to people. So I do just think we’re playing somewhat different games, and that makes it so that over time, you know, they’ll build a high-quality device in the zone that they’re focusing on, and it may just be that these are in fairly different spaces for a long time, but I’m not sure. We’ll see as it goes.

From the developer perspective, does it help you to have developers building on… you could lean too much into the Android versus iOS analogy here, but yeah, where do you see that going? Does Meta really lean into the Android approach and start licensing your software and technology to other OEMs?

I’d like to have this be a more open ecosystem over time. My theory on how these computing platforms evolve is there will be a closed integrated stack and a more open stack, and there have been in every generation of computing so far.

The thing that’s actually not clear is which one will end up being the more successful, right? We’re kind of coming off of the mobile one now, where Apple has truly been the dominant company. Even though there are technically more Android phones, there’s way more economic activity, and the center of gravity for all this stuff is clearly on iPhones.

In a lot of the most important countries for defining this, I think iPhone has a majority and growing share, and I think it’s clearly just the dominant company in the space. But that wasn’t true in computers and PCs, so our approach here is to focus on making it as affordable as possible. We want to be the open ecosystem, and we want the open ecosystem to win.

So I think it is possible that this will be more like PCs than like mobile, where maybe Apple goes for a kind of high-end segment, and maybe we end up being the kind of the primary ecosystem and the one that ends up serving billions of people. That’s the outcome that we’re playing for.

On the progress that you’re making with AR glasses, it’s my understanding that you’re going to have your first internal dev kit next year. I don’t know if you’re gonna show it off publicly or not, or whether that’s been decided, but is that progressing at the rate that you had hoped as well? It seems like Apple has dealt with this, too; everyone’s been dealing with the same kinds of technical problems.

I don’t think I have anything to announce on that today.

You said AR glasses are a kind of end-of-this-decade thing. And I guess what I’m trying to get at…

To be more of a mainstream consumer product, not like a v1. I don’t have anything new to announce today on this, and we have a bunch of versions of this that we’re building internally.

We’re kind of coming at it from two angles at once. We’re starting with Ray-Ban, which is like if you take stylish glasses today, what’s the most technology that you can cram into that and make it a good product? And then we’re coming out from the other side, which is like, “Alright, we want to create our ideal product with full holograms. You walk into a room, and there’s like as many holograms there as there are physical objects. You’re going to interact with people as holograms, AIs as holograms, all this stuff.” And then how do we get that to basically fit into a glasses-like form factor at as affordable of a price as we can get to?

I’m really curious to see how the second generation of the Ray-Bans does. And with the first one, I think the reception was pretty good. There were a bunch of reports about the retention being somewhat lower, and I think there was a bunch of stuff that we just needed to polish: the cameras are just so much better now, the audio is so much better. We didn’t realize that a lot of people were gonna want to use it for listening to podcasts when they go on a run, right? That wasn’t what we designed it for, but it was a great use case. So it’s like, “Okay, great. Let’s make sure that that’s good in v2.”

The cycle for iterating on this — if you’re doing a Threads release or Instagram, the cycle is like a month. For hardware, it’s like 18 months, right? Or two years. But I think this is the next step, and we’re going to climb up that curve.

But the initial interest, I think, is there. This is an interesting base to build from, so I feel good about that. Going the other direction, the technology is hard, right? And we are able to get it to work. It’s currently very expensive, so if you want to reach a consumer population —

— You’ve got to wait for the cost curve to come down?

Yeah.

So that’s the main limiting factor?

Well, I think there’s that. And we want to keep on improving it. But look, you learn by trying to assemble and integrate everything. You can’t just do a million R&D efforts in isolation and then hope that they come together. I think part of what lets you get to building the ultimate product is having a few tries practicing building the ultimate product.

And that’s like, “Oh, well, we did that, but it wasn’t quite as good on this one dimension as we wanted, so let’s not ship that one. Let’s hold that one and then do the next one.” So that’s some of the process we’ve had: we have multiple generations of how we’re going to build this. When I look at the overall budget for Reality Labs, it’s augmented reality and the glasses that are, I think, the most expensive part of what we’re doing.

That’s why I asked. Because I think people are wondering, “Where’s all this going?”

At the end of the day, I’m quite optimistic about both augmented and virtual reality. I think AR glasses are going to be the thing that’s like mobile phones that you walk around the world wearing.

VR is going to be like your workstation or TV, which is for when you’re settling in for a session and you want a higher-fidelity, more compute-rich experience; then it’s going to be worth putting that on. But you’re not going to walk down the street wearing a VR headset. At least I hope not — that’s not the future that we’re working toward.

But I do think that there’s somewhat of a bias — maybe this is in the tech industry or maybe overall — where people think that the mobile phone one, the glasses, is the only one of the two that will end up being valuable.

But there are a ton of TVs out there, right? And there are a ton of people who spend a lot of time in front of computers working. So I actually think the VR one will be quite important, too, but I think that there’s no question that the larger market over time should be smart glasses.

Now, you’re going to have all the immersive quality of being able to interact with people and feel present no matter where you are in a normal form factor, and you’re also going to have the perfect form factor to deliver all these AI experiences over time because they’ll be able to see what you see and hear what you hear.

So I don’t know. This stuff is challenging. Making things small is also very hard. It’s this fundamentally kind of counterintuitive thing where I think humans get super impressed by building big things, like the pyramids. I think a lot of the time, building small things, like cures for diseases at a cellular level or miniaturizing a supercomputer to fit into your glasses, is maybe an even bigger feat than building something really physically large, but it seems less impressive for some reason. It’s super fascinating stuff.

I feel like every time we talk, a lot has happened in a year. You seem really dialed in to managing the company. And I’m curious what motivates you these days. Because you’ve got a lot going on: you’re getting into fighting, you’ve got three kids, you’ve got the philanthropy stuff. And you seem more active in day-to-day stuff, at least externally, than ever. You’re kind of the last founder of your era, I think, still leading a company this large. Do you think about that? Do you think about what motivates you still? Or is it just still clicking, and it’s more subconscious?

I’m not sure that that much of the stuff that you said is that new. I mean, the kids are seven years old, almost eight now, so that’s been the case for a while. The fighting thing is relatively new over the last few years, but I’ve always been very physical.

We go through different waves in terms of what the company needs to be doing, and I think that that calls for somewhat different styles of leadership. We went through a period where a lot of what we needed to do was tackle and navigate some important social issues, and I think that that required a somewhat different style.

And then we went through a period where we had some quite big business challenges: handling a recession and revenue not coming in the way that we thought and needing to do layoffs, and that required a somewhat different style. But now I think we’re squarely back in developing really innovative products, especially because of some of the innovations in AI. That, in some ways, plays exactly to my favorite style of running a company. But I don’t know. I think these things evolve over time.

It seems like you’re having more fun.

Well, how can you not? I mean, this is what’s great about the tech industry. Every once in a while, you get something like these AI breakthroughs, and it just changes everything. That can be threatening if you’re behind, but I just think that’s when stuff changes and when awesome stuff gets built, so that’s exciting.

The world has been so weird over the last few years, right? Especially, you know, going back to the covid pandemic and all that stuff. And it was an opportunity for a lot of people to just reassess what they found meaningful in their lives. And there’s obviously a lot of stuff that was tough about it, but you know, the silver lining is I got to spend a lot more time with my family, and we got to spend more time out in nature because I wasn’t coming into the office quite as much.

It was definitely a period of reflection where I felt like since the time — I was basically 19 when I started the company. Every year, it was just, “Okay, we want to connect more people, right? Connecting people is good. That’s sort of what we’re here to do. Let’s make this bigger and bigger and connect more people and build more products that allow people to do that.”

And we just sort of hit the scale where what I found sort of satisfaction in life from and what I think is like the right strategy — I think both for like me personally and for the company — is less to just focus on like, “Okay, we’re going to just connect more people,” and more like, “Let’s do some awesome things.”

It sounds very technical.

There are a lot of different analogies on this, but someone made this point to me that doing good things is different from doing awesome things. And social media, in a lot of ways, it’s good, right? It gives a lot of people a voice, and it lets them connect, and it’s warm. It’s taking a basic technology and bringing it to billions of people, but I think that there’s an inherent awesomeness in doing some technical feat for the first time.

For the next phase of what we do, I’m just a little more focused on that. I think we’ve done a lot of good things. I think we need to make sure that they stay good. I think that there’s a lot of work that needs to happen on making sure the balance of all that is right. But for the next wave of my life and for the company — but also outside of the company with what I’m doing at CZI and some of my personal projects — I define my life at this point more in terms of getting to work on awesome things with great people who I like working with.

So I work on all this Reality Labs stuff with Boz and a team over there, and it’s just super exciting. And I get to work on all this AI stuff with Chris and Ahmed and the folks who are working on that, and it’s really exciting. And we get to work on some of the philanthropy work and helping to cure diseases with Priscilla and a lot of the best scientists in the world, and that’s really cool. And it’s like, then there’s like personal stuff, like we get to raise a family. That’s really neat — there’s no other person I’d rather do that with. I don’t know — to me, that’s just sort of where I am in life now.

Sounds like a nice place to be.

Ah, I mean, I’m enjoying it.

Mark Zuckerberg, the optimist.

I mean, always somewhat optimistic.

Thanks for the time, Mark.

Yeah, thank you.


Is overall interest in the Ray-Bans and the Quest line tracking with where you thought it would be at this point?

Let’s take each of those separately. Quest 1 was the first kind of standalone product. It did well, but all the content had to be developed for it. So it was really when we developed Quest 2, which was the next generation of it that already had all the content built, and it was sort of the refinement on it — that one blew up.

So Quest 2 was like a huge hit — tens of millions, right? That did very well and was the defining VR device so far. Then we shipped Quest Pro, which was making the leap to mixed reality, but it was $1,500. And what we’ve seen so far is that at least consumers are very cost-conscious. We expected to sell way fewer Quest Pros than Quest 2s, and that [bore] out. It’s always hard to predict exactly what it’ll be when you’re shipping a product at $1,500 for the first time, but it was kind of fine. Within expectations — it wasn’t like a grand slam, but it did fine.

And now Quest 3 is the refinement on mixed reality, kind of like Quest 1 was. With Quest 3, we’re sort of at the point where we’ve gotten mixed reality, which is even higher quality than what was in Quest Pro, but it’s a third of the price, right? So it’s $500. So I’m really excited to see how that one will go.

It seems like you all, based on my demos, still primarily think of it as a gaming device. Is that fair? That the main use cases for Quest 3 are going to be these kinds of “gaming meets social.” So you’ve got Roblox now.

I think social is actually the first thing, which is interesting because Quest used to be primarily gaming. And now, if you look at what experiences people are spending the most time in, it's actually just different social metaverse-type experiences, so things like Rec Room, VRChat, Horizon, Roblox. Even with Roblox just kind of starting to grow on the platform, social is already more time spent than gaming use cases. It's different if you look at the economics because people pay more for games. Whereas social kind of has that whole adoption curve thing that I talked about before, where, first, you have to kind of build out the big community, and then you can enable commerce and kind of monetize it over time.

This is sort of my whole theory for VR. People looked at it initially as a gaming device. I thought, “Hey, I think this is a new computing platform overall. Computing platforms tend to be good for three major things: gaming, social and communication, and productivity. And I’m pretty sure we can nail the social one. If we can find the right partners on productivity and if we can support the gaming ecosystem, then I think that we can help this become a big thing.”

Broadly, that’s on track. I thought it was going to be a long-term project, but I think the fact that social has now overtaken gaming as the thing that people are spending the most time on is an interesting software evolution in how they’re used. But like you’re saying: entertainment, social, gaming — still the primary things. Productivity, I think, still needs some time to develop.

I tried the Quest 3. It's definitely a meaningful step change in terms of graphics and performance and all the things you guys have put into it. It still feels like we're a little ways away from this medium becoming truly mainstream. Becoming something that millions…

When you say mainstream, what do you mean?

I know you’re already at [game] console-level sales, so you could say that’s mainstream, but I guess in terms of what you could think of as a general-purpose computing platform, so even like PC or something like that.

Well, in what sense? I think there’s a few parts of this. I think for productivity, you probably want somewhat higher-resolution screens. That, I think, will come, and I think we’re waiting for the cost curve to basically — like, we could have super high-resolution screens today, just the device would be thousands and thousands of dollars, which is basically the tradeoff that Apple made with their Vision Pro.

Have you tried it yet?

No, I haven’t, no.

I did, and you’re right. They guided toward that one spec. You can tell.

Yeah, you have to imagine that over the next five-plus years, there will be displays that are that good, and they’ll come down in cost, and we’re riding that curve.

For today, when you’re building one of these products, you basically have the choice of if you have it at that expensive, then you will sell hundreds of thousands of units. But we’re trying to build something where we build up the community of people using it. We’re trying to thread the needle and have the best possible display that we can while having it cost $500, not $3,500.

I reported on some comments you made to employees after Apple debuted the Vision Pro, and you didn't seem super fazed by it. It seemed like it didn't bother you as much as it maybe could have. I have to imagine if they released a $700 headset, we'd be having a different conversation. But they're shipping low volume, and they're probably three to four years out from a general, lower-tier type release that's at any meaningful scale. So is it because the market's foreseeably yours for a while?

Apple is obviously very good at this, so I don’t want to be dismissive. But because we’re relatively newer to building this, the thing that I wasn’t sure about is when Apple released a device, were they just going to have made some completely new insight or breakthrough that just made our effort…

Blew your R&D up?

Yeah, like, “Oh, well, now we need to go start over.” I thought we were doing pretty good work, so I thought that was unlikely, but you don’t know for sure until they show up with their thing. And there was just nothing like that.

There are some things that they did that are clever. When we actually get to use it more, I’m sure that there are going to be other things that we’ll learn that are interesting. But mostly, they just chose a different part of the market to go in.

I think it makes sense for them. I think that they sell… it must be 15 to 20 million MacBooks a year. And from their perspective, if they can replace those MacBooks over time with things like Vision Pro, then that’s a pretty good business for them, right? It’ll be many billions of dollars of revenue, and I think they’re pretty happy selling 20 million or 15 million MacBooks a year.

But we play a different game. We’re not trying to sell devices at a big premium and make a ton of money on the devices. You know, going back to the curve that we were talking about before, we want to build something that’s great, get it to be so that people use it and want to use it like every week and every day, and then, over time, scale it to hundreds of millions or billions of people.

If you want to do that, then you have to innovate, not just on the quality of the device but also in making it affordable and accessible to people. So I do just think we’re playing somewhat different games, and that makes it so that over time, you know, they’ll build a high-quality device and in the zone that they’re focusing on, and it may just be that these are in fairly different spaces for a long time, but I’m not sure. We’ll see as it goes.

From the developer perspective, does it help you to have developers building on… you could lean too much into the Android versus iOS analogy here, but yeah, where do you see that going? Does Meta really lean into the Android approach and you start licensing your software and technology to other OEMs?

I’d like to have this be a more open ecosystem over time. My theory on how these computing platforms evolve is there will be a closed integrated stack and a more open stack, and there have been in every generation of computing so far.

The thing that’s actually not clear is which one will end up being the more successful, right? We’re kind of coming off of the mobile one now, where Apple has truly been the dominant company. Even though there are technically more Android phones, there’s way more economic activity, and the center of gravity for all this stuff is clearly on iPhones.

In a lot of the most important countries for defining this, I think iPhone has a majority and growing share, and I think it’s clearly just the dominant company in the space. But that wasn’t true in computers and PCs, so our approach here is to focus on making it as affordable as possible. We want to be the open ecosystem, and we want the open ecosystem to win.

So I think it is possible that this will be more like PCs than like mobile, where maybe Apple goes for a kind of high-end segment, and maybe we end up being the kind of the primary ecosystem and the one that ends up serving billions of people. That’s the outcome that we’re playing for.

On the progress that you’re making with AR glasses, it’s my understanding that you’re going to have your first internal dev kit next year. I don’t know if you’re gonna show it off publicly or not, if that’s been decided, but is that progressing at the rate that you have hoped as well? It seems like Apple’s dealt with this, that everyone’s been dealing with kind of the technical problems with this.

I don’t think I have anything to announce on that today.

You said AR glasses are a kind of end-of-this-decade thing. And I guess what I’m trying to get at….

To be more of a mainstream consumer product, not like a v1. I don’t have anything new to announce today on this, and we have a bunch of versions of this that we’re building internally.

We’re kind of coming at it from two angles at once. We’re starting with Ray-Ban, which is like if you take stylish glasses today, what’s the most technology that you can cram into that and make it a good product? And then we’re coming out from the other side, which is like, “Alright, we want to create our ideal product with full holograms. You walk into a room, and there’s like as many holograms there as there are physical objects. You’re going to interact with people as holograms, AIs as holograms, all this stuff.” And then how do we get that to basically fit into a glasses-like form factor at as affordable of a price as we can get to?

I’m really curious to see how the second generation of the Ray-Bans does. And the first one, I think the reception was pretty good. There were a bunch of reports about the retention being somewhat lower, and I think that there’s a bunch of stuff that we just need to polish, where it’s like the cameras are just so much better, the audio is so much better. We didn’t realize that a lot of people were gonna want to use it for listening to podcasts when they go on a run, right? That wasn’t what we designed it for, but it was a great use case. So it’s like, “Okay, great. Let’s make sure that that’s good in v2.”

The cycle for iterating on this — if you’re doing a Threads release or Instagram, the cycle is like a month. For hardware, it’s like 18 months, right? Or two years. But I think this is the next step, and we’re going to climb up that curve.

But the initial interest, I think, is there. This is an interesting base to build from, so I feel good about that. Going the other direction, the technology is hard, right? And we are able to get it to work. It’s currently very expensive, so if you want to reach a consumer population —

— You’ve got to wait for the cost curve to come down?

Yeah.

So that’s the main limiting factor?

Well, I think there’s that. And we want to keep on improving it. But look, you learn by trying to assemble and integrate everything. You can’t just do a million R&D efforts in isolation and then hope that they come together. I think part of what lets you get to building the ultimate product is having a few tries practicing building the ultimate product.

And that’s like, “Oh, well, we did that, but it wasn’t quite as good on this one dimension as we wanted, so let’s not ship that one. Let’s hold that one and then do the next one.” So that’s some of the process that we’ve had is we have multiple generations of how we’re going to build this. When I look at the overall budget for Reality Labs, it’s augmented reality, and the glasses, I think, are the most expensive part of what we’re doing.

That’s why I asked. Because I think people are wondering, “Where’s all this going?”

At the end of the day, I’m quite optimistic about both augmented and virtual reality. I think AR glasses are going to be the thing that’s like mobile phones that you walk around the world wearing.

VR is going to be like your workstation or TV, which is when you’re like settling in for a session and you want a kind of higher fidelity, more compute, rich experience, then it’s going to be worth putting that on. But you’re not going to walk down the street wearing a VR headset. At least I hope not — that’s not the future that we’re working toward.

But I do think that there’s somewhat of a bias — maybe this is in the tech industry or maybe overall — where people think that the mobile phone one, the glasses one, is the only one of the two that will end up being valuable.

But there are a ton of TVs out there, right? And there are a ton of people who spend a lot of time in front of computers working. So I actually think the VR one will be quite important, too, but I think that there’s no question that the larger market over time should be smart glasses.

Now, you’re going to have both all the immersive quality of being able to interact with people and feel present no matter where you are in a normal form factor, and you’re also going to have the perfect form factor to deliver all these AI experiences over time because they’ll be able to see what you see and hear what you hear.

So I don’t know. This stuff is challenging. Making things small is also very hard. It’s this fundamentally kind of counterintuitive thing where I think humans get super impressed by building big things, like the pyramids. I think a lot of time, building small things, like cures for diseases at a cellular level or miniaturizing a supercomputer to fit into your glasses, are maybe even bigger feats than building some really physically large things, but it seems less impressive for some reason. It’s super fascinating stuff.

I feel like every time we talk, a lot has happened in a year. You seem really dialed in to managing the company. And I'm curious what motivates you these days. Because you've got a lot going on, and you're getting into fighting, you've got three kids, you've got the philanthropy stuff — there's a lot going on. And you seem more active in day-to-day stuff, at least externally, than ever. You're kind of the last, I think, founder of your era still leading a company this large. Do you think about that? Do you think about what motivates you still? Or is it just still clicking, and it's more subconscious?

I’m not sure that that much of the stuff that you said is that new. I mean, the kids are seven years old, almost eight now, so that’s been for a while. The fighting thing is relatively new over the last few years, but I’ve always been very physical.

We go through different waves in terms of what the company needs to be doing, and I think that that calls for somewhat different styles of leadership. We went through a period where a lot of what we needed to do was tackle and navigate some important social issues, and I think that that required a somewhat different style.

And then we went through a period where we had some quite big business challenges: handling in a recession and revenue not coming in the way that we thought and needing to do layoffs, and that required a somewhat different style. But now I think we’re squarely back in developing really innovative products, especially because of some of the innovations in AI. That, in some ways, plays exactly to my favorite style of running a company. But I don’t know. I think these things evolve over time.

It seems like you’re having more fun.

Well, how can you not? I mean, this is what’s great about the tech industry. Every once in a while, you get something like these AI breakthroughs, and it just changes everything. That can be threatening if you’re behind it, but I just think that that’s like when stuff changes and when awesome stuff gets built, so that’s exciting.

The world has been so weird over the last few years, right? Especially, you know, going back to the covid pandemic and all that stuff. And it was an opportunity for a lot of people to just reassess what they found meaningful in their lives. And there’s obviously a lot of stuff that was tough about it, but you know, the silver lining is I got to spend a lot more time with my family, and we got to spend more time out in nature because I wasn’t coming into the office quite as much.

It was definitely a period of reflection where I felt like since the time — I was basically 19 when I started the company. Every year, it was just, “Okay, we want to connect more people, right? Connecting people is good. That’s sort of what we’re here to do. Let’s make this bigger and bigger and connect more people and build more products that allow people to do that.”

And we just sort of hit the scale where what I found sort of satisfaction in life from and what I think is like the right strategy — I think both for like me personally and for the company — is less to just focus on like, “Okay, we’re going to just connect more people,” and more like, “Let’s do some awesome things.”

It sounds very technical.

There are a lot of different analogies on this, but someone made this point to me that doing good things is different from doing awesome things. And social media, in a lot of ways, it’s good, right? It gives a lot of people a voice, and it lets them connect, and it’s warm. It’s taking a basic technology and bringing it to billions of people, but I think that there’s an inherent awesomeness in doing some technical feat for the first time.

For the next phase of what we do, I’m just a little more focused on that. I think we’ve done a lot of good things. I think we need to make sure that they stay good. I think that there’s a lot of work that needs to happen on making sure the balance of all that is right. But for the next wave of my life and for the company — but also outside of the company with what I’m doing at CZI and some of my personal projects — I define my life at this point more in terms of getting to work on awesome things with great people who I like working with.

So I work on all this Reality Labs stuff with Boz and a team over there, and it’s just super exciting. And I get to work on all this AI stuff with Chris and Ahmed and the folks who are working on that, and it’s really exciting. And we get to work on some of the philanthropy work and helping to cure diseases with Priscilla and a lot of the best scientists in the world, and that’s really cool. And it’s like, then there’s like personal stuff, like we get to raise a family. That’s really neat — there’s no other person I’d rather do that with. I don’t know — to me, that’s just sort of where I am in life now.

Sounds like a nice place to be.

Ah, I mean, I’m enjoying it.

Mark Zuckerberg, the optimist.

I mean, always somewhat optimistic.

Thanks for the time, Mark.

Yeah, thank you.
