Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence. The Information broke the news today, citing an internal post it had seen.
According to the report, most RAI members will move to the company’s generative AI product team, while others will work on Meta’s AI infrastructure. The company regularly says it wants to develop AI responsibly and even has a page devoted to the promise, where the company lists its “pillars of responsible AI,” including accountability, transparency, safety, privacy, and more.
Meta AI communications representative Nisha Deo told The Verge in an email that the change is intended to aid in the development of AI features, and that the company will "continue to prioritize and invest in safe and responsible AI development." Deo added that though members of the RAI team are now in the generative AI organization, they will "continue to support relevant cross-Meta efforts on responsible AI development and use."
The team already saw a restructuring earlier this year, which Business Insider wrote included layoffs that left RAI “a shell of a team.” That report went on to say the RAI team, which had existed since 2019, had little autonomy and that its initiatives had to go through lengthy stakeholder negotiations before they could be implemented.
RAI was created to identify problems with Meta's AI training approaches, including whether the company's models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms. Automated systems on Meta's social platforms have led to problems like a Facebook translation issue that caused a false arrest, WhatsApp AI sticker generation that produced biased images when given certain prompts, and Instagram algorithms that helped people find child sexual abuse materials.
Update November 25th, 2023, 12:37PM ET: Updated with statement from Meta.