The best way to develop artificial intelligence (AI) models, including large language models (LLMs), was to be inclusive and transparent about the datasets used to train them, she told ET.
“If you look at models, like ChatGPT, it is a very closed model. You do not see the inputs and they do not disclose a lot about the data that is being used to train. From an IBM perspective, we think that is wrong. The future is an open one, not a closed set of licensing machines,” Montgomery said.
Apart from holding the makers of foundational AI models responsible for how those models are trained, companies and individuals deploying these models should also be accountable for the responses they generate, she said.
Providers of generative AI models, for example, should give their enterprise customers more information about the data that went into training the model, enabling end customers to be assured that there is no bias. Such disclosures, especially around the datasets used for training and the labels on those datasets, can also be handed over to the regulator as proof in high-risk use cases, she said.
“IBM has been spending so much time on governance products and the right management because our enterprise customers are going to need to have life-cycle management, governance over the AI that they deploy in high-risk contexts,” she said.
IBM is part of a 75-member alliance of companies, including AMD, Meta, Oracle, Sony, and Uber, that have committed to “accelerate open innovation across the AI technology landscape that responsibly benefits people and society everywhere”.

The regulation of AI models, including LLMs, whether open or closed, should be based on the specific use cases of the models rather than on the technology as a whole. Attempts by regulators and governments to regulate the technology altogether will never succeed, as such guardrails will never be able to keep up with the pace at which these models change, she said.
“For example, you are training a model for credit determination that obviously has demographic and personal information. You want to make sure it is not biased. Such use cases should have requirements such as a privacy impact assessment around them,” Montgomery said.
Akin to privacy regulations, the guardrails for AI should also be interoperable and consistent across jurisdictions worldwide. This would help companies build common compliance programs around baseline requirements from governments globally, which can then be tweaked to address local laws, rules, and regulations, she said.
“I think countries need to be aware, as they revise their data privacy and data protection frameworks, of the challenges associated with implementation. The last thing countries want to do is to make something so impossible that they are never going to have enough regulators and bodies to enforce against companies that are not complying,” Montgomery said.