IBM disagrees with closed LLM approach adopted by Big Tech: Christina Montgomery

IBM does not agree with the closed large language model (LLM) approach adopted by several global companies, including OpenAI, Microsoft, and Google, the company’s chief privacy and trust officer Christina Montgomery said.

The best way to develop artificial intelligence (AI) models is to be inclusive and transparent about the datasets used to train them, she told ET.

“If you look at models, like ChatGPT, it is a very closed model. You do not see the inputs and they do not disclose a lot about the data that is being used to train. From an IBM perspective, we think that is wrong. The future is an open one, not a closed set of licensing machines,” Montgomery said.

Apart from holding the makers of foundational AI models responsible for how those models are trained, the companies and individuals deploying them should also be accountable for the responses they generate, she said.

Makers of generative AI models, for example, should give their enterprise customers more information about the data that went into training their models, enabling end customers to assure themselves that there is no bias. Such disclosures, especially around the datasets used for training and the labels applied to those datasets, can also be handed over to regulators as proof in high-risk use cases, she said.
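To make the idea concrete, the following is a minimal, hypothetical sketch of what a machine-readable training-data disclosure might look like; every field name here is illustrative and does not reflect any actual IBM product or regulatory schema.

```python
# A minimal, hypothetical sketch of a machine-readable training-data
# disclosure of the kind described above; field names are illustrative,
# not any actual IBM or regulatory schema.
from dataclasses import dataclass, field


@dataclass
class DatasetDisclosure:
    name: str                        # dataset identifier
    source: str                      # where the data came from
    collection_period: str           # when the data was gathered
    label_taxonomy: list[str]        # labels applied to the data
    known_limitations: list[str] = field(default_factory=list)


disclosure = DatasetDisclosure(
    name="loan-applications-v2",
    source="anonymized partner-bank records (licensed)",
    collection_period="2018-2022",
    label_taxonomy=["approved", "denied"],
    known_limitations=["under-represents applicants under 25"],
)
print(disclosure)
```

A record like this, attached to each training dataset, is the kind of artifact an enterprise customer or a regulator could inspect in a high-risk use case.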

“IBM has been spending so much time on governance products and the right management because our enterprise customers are going to need to have life-cycle management, governance over the AI that they deploy in high-risk contexts,” she said.

IBM is part of a 75-member alliance of companies, including AMD, Meta, Oracle, Sony, and Uber, that has committed to “accelerate open innovation across the AI technology landscape that responsibly benefits people and society everywhere”.

The regulation of AI models, including LLMs, whether open or closed, should be based on the specific use cases of the models rather than on the technology as a whole, she said. Attempts by regulators and governments to regulate the technology wholesale will never be successful, as such guardrails will never keep up with the pace at which these models change.

“For example, you are training a model for credit determination that obviously has demographic and personal information. You want to make sure it is not biased. Such use cases should have requirements such as privacy impact assessments around them,” Montgomery said.
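The bias check Montgomery alludes to can be illustrated with a simple demographic-parity comparison. The sketch below is purely hypothetical, not IBM’s tooling; real audits would use richer fairness metrics (such as equalized odds) and proper statistical testing.

```python
# Minimal illustrative sketch of a demographic-parity check for a credit
# model; the data and threshold are hypothetical, and real audits use far
# richer fairness metrics and statistical testing.
from collections import defaultdict


def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per demographic group from (group, approved) pairs."""
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}


decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic-parity gap = {gap:.2f}")
```

A gap above a chosen tolerance would flag the model for review before deployment in a high-risk context such as credit determination.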

Akin to privacy regulations, guardrails for AI should be interoperable and consistent across jurisdictions worldwide. This will help companies build common compliance programs around the baseline requirements of governments globally, which can then be tweaked to address local laws, rules, and regulations, she said.

“I think countries need to be aware, as they revise their data privacy and data protection frameworks, of the challenges associated with implementation. The last thing countries want to do is to make something so impossible that they are never going to have enough regulators and bodies to enforce against companies that are not complying,” Montgomery said.




Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
