Alphabet CEO Sundar Pichai took the stage on Wednesday at a Stanford event held by the university’s business school, offering some small insights into how he thinks about running one of the world’s most valuable tech companies.
It was a notable appearance because Pichai’s been having a bit of a rough go lately. Google is widely perceived to have gotten a late start on generative AI, trailing behind Microsoft-funded OpenAI. That’s despite the fact that the company under Pichai has been focusing on AI for the better part of the last decade, and Google researchers wrote the formative paper on transformer models that really kicked off the generative AI revolution. More recently, Alphabet’s Gemini LLM was excoriated for generating bizarrely inaccurate images of historical situations, such as depicting America’s founding fathers as Black or Native American, rather than white English men, suggesting an overcorrection for certain types of bias.
The interviewer, Stanford Graduate School of Business Dean Jonathan Levin, wasn’t exactly a hostile inquisitor — at the end, he revealed that the two men’s sons had once played in a middle school band together — and Pichai is deft at deflecting difficult questions by reframing them as reflections on how he thinks, rather than answering them directly. But there were a couple of nuggets of interest during the talk.
At one point, Levin asked what Pichai tried to do to keep a company of 200,000 people innovative against all the startups battling to disrupt its business. It’s obviously something Pichai worries about.
“Honestly, it’s a question which has always kept me up at night through the years,” he started. “One of the inherent characteristics of technology is you can always develop something amazing with a small team from the outside. And history has shown that. Scale doesn’t always give you — regulators may not agree, but at least running the company, I’ve always felt you’re always susceptible to someone in a garage with a better idea. So I think, I think how do you as a company move fast? How do you have the culture of risk-taking? How do you incent for that? These are all things which you actually have to work at a lot. I think at least larger organizations tend to default. One of the most counter-intuitive things I’ve seen is, the more successful things are, the more risk averse people become. It’s so counter-intuitive. You would often find smaller companies almost make decisions which bet the company, but the bigger you are, it’s true for a large university, it’s true for a large company, you have a lot more to lose, or you perceive you have a lot more to lose. And so you find you don’t take as many ambitious risk-taking initiatives. So you have to consciously do that. You have to push teams to do that.”
He didn’t offer any specific tactics that have proven successful at Google, but instead noted how difficult it is to create the proper incentives.
“One example of this that I think a lot about is how do you reward effort and risk-taking and good execution, and not always outcomes. It’s easy to think you should reward outcomes. But then people start gaming it, right? People take conservative things in which you will get a good outcome.”
He hearkened back to an earlier time in which Google was more willing to take weird risks, pointing in particular to the firm’s ill-fated Google Glass; it didn’t work out, but it was one of the first devices to experiment with augmented reality.
“We recently said, we went back to a notion we had in early Google of Google Labs. And so we’re setting a thing up by which it’s easier to put out something without always worrying about, you know, the full brand and the weight of building a Google product. How can you put out something in the easy way, the lighter weight way? How do you allow people to prototype more easily internally and get it out to people?”
Later, Levin asked what advances Pichai was most excited about this year.
First, he cited the multimodality of Google’s latest LLM — that is, its ability to process different kinds of inputs, such as video and text, simultaneously.
“All our AI models now already are using Gemini 1.5 Pro; that’s a 1 million context window and it’s multimodal. The ability to process huge amounts of information in any type of modality on the input side and give it on the output side, I think it’s mind-blowing in a way that we haven’t fully processed.”
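Pichai was describing a capability rather than code, but for readers who want a concrete sense of what a long-context, multimodal request looks like in practice, here is a minimal sketch using Google’s google-generativeai Python SDK. The file name, prompt and API key are placeholders, and the exact SDK surface shown here is an assumption on our part, not something Pichai referenced.

```python
import time
import google.generativeai as genai

# Assumes the google-generativeai SDK and a valid API key; the model name,
# file name and prompt below are illustrative placeholders.
genai.configure(api_key="YOUR_API_KEY")

# Upload a video; the File API processes large media asynchronously,
# so poll until it is ready before referencing it in a prompt.
video = genai.upload_file("lecture_recording.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")

# One request mixing text and video parts; the long context window is what
# lets an hour of footage plus instructions fit into a single call.
response = model.generate_content([
    "Summarize this recording and list any product names mentioned.",
    video,
])
print(response.text)
```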
Second, he highlighted the potential of chaining discrete answers together to drive smarter workflows. “Where today you’re using the LLMs as just an information-seeking thing, but chaining them together in a way that you can kind of tackle workflows, that’s going to be extraordinarily powerful. It could maybe make your billing system in Stanford Hospital a bit easier,” he joked.
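To make the “chaining” idea less abstract, here is a toy sketch, again assuming the google-generativeai SDK, in which the answer from one model call becomes the input to the next. The hospital-billing flavor, file name and prompts are purely hypothetical, echoing Pichai’s joke rather than describing any real system.

```python
import google.generativeai as genai

# A toy illustration of chaining: each step feeds the previous model answer
# into the next prompt, turning single lookups into a small workflow.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

def ask(prompt: str) -> str:
    """One discrete LLM call; chaining means wiring several of these together."""
    return model.generate_content(prompt).text

# Step 1: extract structured facts from an unstructured note (placeholder file).
codes = ask(
    "List the billable procedure codes mentioned in this visit note:\n"
    + open("visit_note.txt").read()
)

# Step 2: use the first answer as the input to a second, different task.
summary = ask(
    "Draft a plain-language explanation of these charges for a patient:\n"
    + codes
)

print(summary)
```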
You can watch the entire interview, along with an interview with Fed Chairman Jerome Powell that happened prior to it, on YouTube. Levin and Pichai start around 1 hour and 18 minutes in.