Part 2: Top AI Leaders Missed in TIME’s 100 AI 2023 List


For the first time, TIME magazine has released a list dedicated to the 100 most influential people in AI. Yet amidst the grand spectacle, some important figures in AI were absent from this illustrious list.

Continuing from Part 1, published earlier, here is Part 2 of the polymaths who couldn’t make it to TIME’s list:

Erik Brynjolfsson

Erik Brynjolfsson, an economist and visionary scholar, has been a guiding star in digital economics. He has been a driving force in the study of the digital economy, emphasizing the profound transformation brought about by technological advancements. While the rise of AI sparks concerns about job displacement, Brynjolfsson offers a more pragmatic perspective.

Nearly every supersmart person he has met, from doctors to CEOs, ponders the same first question: how can generative AI replace the work humans are doing?

In an NYT piece, Brynjolfsson encouraged us to shift our gaze: “The other thing that I wish people would do more of is think about what new things could be done now that were never done before. Obviously that’s a much harder question.” It is also, he added, “where most of the value is.”

Rodney Brooks

Then there’s Rodney Brooks, a seasoned technologist who knows the difference between real progress and baseless hype; the majority of his predictions have been spot-on.

His expertise in robotics and AI is unparalleled: he co-founded iRobot and long directed MIT’s computer science and AI labs. In his annual predictions, Brooks reminds us to temper our expectations, believing that the integration of robots into our lives will be a gradual, symbiotic process.

In his fifth annual scorecard, published in 2023, he confessed to having allowed hype to make him too optimistic about some developments. “My current belief is that things will go, overall, even slower than I thought five years ago,” he wrote.

Brooks expects “robots that will roam our homes and workplaces … to emerge gradually and symbiotically with our society” even as “a wide range of advanced sensory devices and prosthetics” emerge to enhance and augment our own bodies: “As our machines become more like us, we will become more like them. And I’m an optimist. I believe we will all get along.”

Yejin Choi

Despite AI breakthroughs in previously human-dominated fields such as language and visual art, our gravest concerns should probably be tempered, believes Yejin Choi.

The computer scientist, who is also a 2022 recipient of the MacArthur “genius” grant, has been doing groundbreaking research on developing common sense and ethical reasoning in AI. 

In an interview with the NYT earlier this year, she explained how some people naïvely think that if we teach AI “Don’t kill people while maximising paper-clip production,” that will take care of it. But the machine might then kill all the plants. That’s why it also needs common sense: it’s common sense not to go with extreme, degenerate solutions, she explained.

She reminds us that simply instructing AI not to commit certain actions is insufficient; AI must also possess the wisdom to make sensible decisions and consider the broader implications of its actions.

Jeff Dean

Jeff Dean has long headed AI research at Google with the ethos of a university, encouraging researchers to actively publish academic papers. Impressively, the team has officially published nearly 500 studies since 2019, according to Google Research’s website.

On the one hand, there are concerns about AI development and its associated risks; on the other, this is natural technological progress, and innovation happens quickly. It’s not an either/or but a both/and: to Dean’s point, society can mitigate risk and still be bold. Time and again, Dean has reminded us that the rapid development of AI is both exhilarating and worrisome, emphasising the need to balance innovation and risk mitigation.

Sergey Levine

Sergey Levine is an associate professor of electrical engineering and computer sciences and the leader of the Robotic AI & Learning (RAIL) Lab at UC Berkeley. An advocate of reinforcement learning who also holds an appointment with the Robotics at Google program, he recently published, along with fellow researchers Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, and Peter Pastor, a review titled “How to Train Your Robot with Deep Reinforcement Learning: Lessons We’ve Learned”.

In the second of four Distinguished Lectures on the Status and Future of AI he has delivered, Levine spoke extensively about algorithmic advances that can help ML systems retain both discernment and flexibility.

He emphasised the relationship between data and optimization in problem-solving. Without adequate data, researchers cannot address challenges innovatively; without effective optimization, the data alone struggles to translate into real-world applications. By combining both elements effectively, we can inch closer to creating a space-exploring robot capable of devising solutions to unexpected problems, Levine believes.

Pieter Abbeel

Pieter Abbeel has had a long and ascending career in robotics, from learning-based methods that significantly improved robot manipulation to receiving the 2021 ACM Prize in Computing for pioneering work in robot learning.

Abbeel has journeyed from teaching robots to learn from humans to pioneering learning-through-trial-and-error techniques. His groundbreaking work forms the bedrock of the next generation of robotics, showcasing the potential of AI to evolve and adapt.
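The “learning through trial and error” that Abbeel and Levine champion is reinforcement learning. As a rough flavour of the idea (a toy example of our own, not drawn from their research), here is tabular Q-learning on a five-state corridor: the agent stumbles around, collects experience (the data), and repeatedly nudges its value estimates toward a target (the optimization) until walking right becomes its policy.

```python
import random

# Toy corridor: states 0..4, reward 1 for reaching state 4 (episode ends).
N_STATES = 5
ACTIONS = [-1, +1]                # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: expected discounted return for taking action a in state s.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Trial and error: explore occasionally, otherwise act greedily.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Optimization step: move Q toward the target r + gamma * max_a' Q.
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy in every non-terminal state is "go right".
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

No robot needs a table this small, of course; deep RL replaces the table with a neural network, but the loop of acting, collecting experience, and optimizing against it is the same.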

The post Part 2: Top AI Leaders Missed in TIME’s 100 AI 2023 List appeared first on Analytics India Magazine.


