Rubrik’s IPO filing reveals an AI governance committee. Get used to it.

Tucked into Rubrik’s IPO filing this week — between the parts about employee count and cost statements — was a nugget that reveals how the data management company is thinking about generative AI and the risks that accompany the new tech: Rubrik has quietly set up a governance committee to oversee how artificial intelligence is implemented in its business.

According to the Form S-1, the new AI governance committee includes managers from Rubrik’s engineering, product, legal and information security teams. Together, they will evaluate the potential legal, security and business risks of using generative AI tools and ponder “steps that can be taken to mitigate any such risks,” the filing reads.

To be clear, Rubrik is not an AI business at its core — its sole AI product, a chatbot called Ruby that it launched in November 2023, is built on Microsoft and OpenAI APIs. But like many others, Rubrik (and its current and future investors) is considering a future in which AI will play a growing role in its business. Here’s why having AI governance could become the new normal.

Growing regulatory scrutiny

Some companies are adopting AI best practices on their own initiative, but others will be pushed to do so by regulations such as the EU AI Act.

Dubbed “the world’s first comprehensive AI law,” the landmark legislation — expected to become law across the bloc later this year — bans some AI use cases deemed to carry “unacceptable risk” and defines other “high risk” applications. The bill also lays out governance rules aimed at reducing risks that could scale harms such as bias and discrimination. This risk-rating approach is likely to be broadly adopted by companies looking for a reasoned way to move forward with AI.

Privacy and data protection lawyer Eduardo Ustaran, a partner at Hogan Lovells International LLP, expects the EU AI Act and its myriad of obligations to amplify the need for AI governance, which will in turn require committees. “Aside from its strategic role to devise and oversee an AI governance program, from an operational perspective, AI governance committees are a key tool in addressing and minimizing risks,” he said. “This is because collectively, a properly established and resourced committee should be able to anticipate all areas of risk and work with the business to deal with them before they materialize. In a sense, an AI governance committee will serve as a basis for all other governance efforts and provide much-needed reassurance to avoid compliance gaps.”

In a recent policy paper on the EU AI Act’s implications for corporate governance, ESG and compliance consultant Katharina Miller concurred, recommending that companies establish AI governance committees as a compliance measure.

Legal scrutiny

Compliance isn’t only meant to please regulators. The EU AI Act has teeth, and “the penalties for non-compliance with the AI Act are significant,” British-American law firm Norton Rose Fulbright noted.

Its scope also goes beyond Europe. “Companies operating outside the EU territory may be subject to the provisions of the AI Act if they carry out AI-related activities involving EU users or data,” the law firm warned. If it is anything like GDPR, the legislation will have an international impact, especially amid increased EU-U.S. cooperation on AI.

AI tools can land a company in trouble beyond AI-specific legislation. Rubrik declined to comment to TechCrunch, likely because of its IPO quiet period, but the company’s filing notes that its AI governance committee evaluates a wide range of risks.

The selection criteria and analysis include consideration of how use of generative AI tools could raise issues relating to confidential information, personal data and privacy, customer data and contractual obligations, open source software, copyright and other intellectual property rights, transparency, output accuracy and reliability, and security.

Keep in mind that Rubrik’s desire to cover its legal bases could also stem from a variety of other motivations. The committee, for example, signals that the company is responsibly anticipating issues, which is critical given that Rubrik has previously dealt with not only a data leak and hack but also intellectual property litigation.

A matter of optics

It goes without saying that companies won’t look at AI solely through the lens of risk prevention. There will be opportunities they won’t want to miss, and neither will their clients. That’s one reason generative AI tools are being adopted despite obvious flaws like “hallucination” (i.e., a tendency to fabricate information).

It will be a fine balance for companies to strike. On one hand, boasting about their use of AI could boost their valuations, no matter how real said use is or what difference it makes to their bottom line. On the other hand, they will have to put minds at rest about potential risks.

“We’re at this key point of AI evolution where the future of AI highly depends on whether the public will trust AI systems and companies that use them,” Adomas Siudika, privacy counsel at privacy and security software provider OneTrust, wrote in a blog post on the topic.

Establishing AI governance committees will likely be one way for companies to help build that trust.


