DeepMind’s Alpha series of AIs, known for its accomplishments in game-playing, is now demonstrating its versatility in other domains. The systems in the line, AlphaGo, AlphaGo Zero, AlphaZero, and MuZero, have moved from mastering games to tackling real-world tasks with surprising success.
AlphaGo was initially trained on human gameplay, but its successors learned solely by playing against themselves, mastering complex games such as Go, chess, and shogi. MuZero went a step further, learning to play without being given the rules at all. This approach let the AIs develop problem-solving methods of their own.
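To make the self-play idea concrete, here is a deliberately tiny sketch (not DeepMind's algorithm; every name and number below is invented for illustration): a single agent plays both sides of the game of Nim, improving its move values using nothing but the outcomes of its own games.

```python
import random

def train(heap_size=10, episodes=20000, eps=0.2, lr=0.1, seed=0):
    """Self-play on Nim: take 1-3 stones; whoever takes the last stone wins."""
    rng = random.Random(seed)
    Q = {}  # Q[(heap, action)] = learned value of the move for the mover

    for _ in range(episodes):
        heap, history = heap_size, []
        while heap > 0:
            actions = [a for a in (1, 2, 3) if a <= heap]
            if rng.random() < eps:        # explore occasionally
                a = rng.choice(actions)
            else:                          # otherwise play the best-known move
                a = max(actions, key=lambda x: Q.get((heap, x), 0.0))
            history.append((heap, a))
            heap -= a

        # Whoever took the last stone won: walk the game backwards,
        # crediting +1 / -1 to the alternating movers.
        reward = 1.0
        for heap, a in reversed(history):
            old = Q.get((heap, a), 0.0)
            Q[(heap, a)] = old + lr * (reward - old)
            reward = -reward
    return Q

Q = train()
```

Both "players" share the same value table, so every game improves the policy for both sides. For this game, perfect play means always leaving the opponent a multiple of four stones, and nothing about that strategy was coded in.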
At Google, AlphaZero was applied to Borg, the system that schedules tasks across the company's data centers. Borg relied on manually coded scheduling rules, which grew inefficient as workload distributions evolved. Analyzing Borg's data, AlphaZero identified usage patterns and proposed better ways to predict and allocate resources. Once deployed, its recommendations reduced underused hardware by up to 19%, a significant saving at Google's scale.
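As a cartoon of why usage prediction matters for scheduling (a toy sketch with invented numbers, not Borg's actual policy): tasks typically reserve more CPU than they ever use, so a scheduler that packs machines by predicted peak usage rather than by reservation needs fewer machines for the same work.

```python
# Toy first-fit scheduler: tasks reserve 4 CPUs but typically peak at 2.5.
# Packing by reservation strands capacity; packing by predicted usage
# (typical peak plus a safety margin) leaves less hardware idle.

def schedule(tasks, capacity, demand):
    """Greedy first-fit: place each task on the first machine with room."""
    machines = []  # remaining free capacity per machine
    for t in tasks:
        need = demand(t)
        for i, free in enumerate(machines):
            if free >= need:
                machines[i] = free - need
                break
        else:
            machines.append(capacity - need)  # spin up a new machine
    return len(machines)

tasks = [{"reserved": 4.0, "typical_peak": 2.5} for _ in range(20)]

by_reservation = schedule(tasks, 16.0, lambda t: t["reserved"])
by_prediction = schedule(tasks, 16.0, lambda t: t["typical_peak"] * 1.2)
```

With these made-up numbers the reservation-based packing needs five 16-CPU machines while the prediction-based packing fits the same tasks on four.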
MuZero, meanwhile, was put to work optimizing video compression for YouTube, where it reduced bitrates by an average of 4%, an impactful saving given the platform's enormous volume of content. MuZero has also been applied to other parts of the compression pipeline, such as frame grouping.
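For intuition on why frame grouping affects bitrate, here is a toy model (illustrative only; real codecs and MuZero's rate controller are far more sophisticated, and all costs below are arbitrary units). A keyframe is self-contained and expensive; a delta frame stores only the change from the previous frame, so it is cheap when frames are similar. Placing keyframes adaptively at scene cuts, rather than at a fixed interval, spends fewer bits overall.

```python
# Compare total bits for fixed-interval vs content-adaptive keyframe placement.
KEYFRAME_COST = 100.0

def total_bits(diffs, is_keyframe):
    bits = KEYFRAME_COST  # the first frame is always a keyframe
    for i, d in enumerate(diffs, start=1):
        # a delta frame's cost scales with how much the frame changed
        bits += KEYFRAME_COST if is_keyframe(i, d) else 10.0 * d
    return bits

# Per-frame difference scores: mostly static video with one scene cut.
diffs = [0.5] * 30 + [9.5] + [0.5] * 30

fixed = total_bits(diffs, lambda i, d: i % 15 == 0)    # keyframe every 15 frames
adaptive = total_bits(diffs, lambda i, d: d > 5.0)     # keyframe only at cuts
```

The fixed policy wastes keyframes on static stretches and pays a large delta cost at the cut; the adaptive policy spends its one keyframe exactly where it is needed.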
Furthermore, AlphaDev, a descendant of AlphaZero, discovered faster sorting algorithms for short sequences and a more efficient hashing function for small byte ranges, the latter running roughly 30% faster than its predecessor.
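For a flavor of what a short-sequence sort looks like, here is a classic three-element sorting network in Python. Only the structure is illustrative: AlphaDev's actual gains came from shaving individual instructions at the assembly level, which Python cannot show.

```python
# A sorting network is a fixed sequence of compare-exchange steps:
# the same three comparisons run for every input, with no data-dependent
# branching in the network itself.

def sort3(a, b, c):
    a, b = min(a, b), max(a, b)  # order the first pair
    b, c = min(b, c), max(b, c)  # push the largest to the end
    a, b = min(a, b), max(a, b)  # order the remaining pair
    return a, b, c
```

Because the comparison sequence is fixed, such networks are friendly to branch predictors and to vectorization, which is why tiny instruction-level improvements to them pay off in heavily used library code.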
While these improvements may seem incremental, they highlight the adaptability of these systems. General-purpose AI remains a distant goal, but the ability to carry problem-solving techniques across such different domains is promising: it widens the range of tasks these AIs can tackle and suggests their skills are robust beyond the games they were built for. As the field evolves, results like these pave the way for further progress and innovation.