Andrew Ng Launches A New Course on LLM Quality and Security 

Share via:

DeepLearning.ai’s Andrew Ng recently launched a new course that focuses on quality and safety for LLM applications, in collaboration with WhyLabs (an AI Fund portfolio company). 

This one hour long course would be led by Bernease Herman, a senior data scientist at WhyLabs, where she will be focusing on the best practices to monitor LLM systems, alongside showcasing how you can mitigate hallucinations, data leakage, and jailbreaks among others. 

You can join the course here

New course with WhyLabs: Quality and Safety for LLM Applications

With the open source community booming, developers can prototype LLM applications quickly. In the introductory video Andrew Ng explained,“One huge barrier to the practical deployment has been quality and safety.” 

For a company that aims to launch a chatbot or a QA system, there is a good possibility that the LLM would hallucinate or mislead users. “It can say something inappropriate or can open up a new security loophole where a user can input a tricky prompt called the prompt injection that makes the LLM do something bad,” Andrew elaborated.

The course explains what can go wrong and the best practices to mitigate the problems including prompt injections, hallucinations, leakage of confidential PII like personal identifiable information, such as email or government ID numbers, and toxic or other inappropriate outputs. Bernease said, “This course is designed to help you discover and create metrics needed to monitor your LLM systems. For both safety and quality issues.”

Andrew Ng has consistently released courses on his DeepLearning.ai over the past year on Generative AI and its applications. These courses have helped learners increase their knowledge and expertise in AI and deep learning.

The post Andrew Ng Launches A New Course on LLM Quality and Security  appeared first on Analytics India Magazine.

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

Popular

More Like this

Andrew Ng Launches A New Course on LLM Quality and Security 

DeepLearning.ai’s Andrew Ng recently launched a new course that focuses on quality and safety for LLM applications, in collaboration with WhyLabs (an AI Fund portfolio company). 

This one hour long course would be led by Bernease Herman, a senior data scientist at WhyLabs, where she will be focusing on the best practices to monitor LLM systems, alongside showcasing how you can mitigate hallucinations, data leakage, and jailbreaks among others. 

You can join the course here

New course with WhyLabs: Quality and Safety for LLM Applications

With the open source community booming, developers can prototype LLM applications quickly. In the introductory video Andrew Ng explained,“One huge barrier to the practical deployment has been quality and safety.” 

For a company that aims to launch a chatbot or a QA system, there is a good possibility that the LLM would hallucinate or mislead users. “It can say something inappropriate or can open up a new security loophole where a user can input a tricky prompt called the prompt injection that makes the LLM do something bad,” Andrew elaborated.

The course explains what can go wrong and the best practices to mitigate the problems including prompt injections, hallucinations, leakage of confidential PII like personal identifiable information, such as email or government ID numbers, and toxic or other inappropriate outputs. Bernease said, “This course is designed to help you discover and create metrics needed to monitor your LLM systems. For both safety and quality issues.”

Andrew Ng has consistently released courses on his DeepLearning.ai over the past year on Generative AI and its applications. These courses have helped learners increase their knowledge and expertise in AI and deep learning.

The post Andrew Ng Launches A New Course on LLM Quality and Security  appeared first on Analytics India Magazine.

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

Website Upgradation is going on for any glitch kindly connect at office@startupnews.fyi

More like this

DeFi lending protocol Nexo allocates $12M for ecosystem incentives

Crypto lending protocol Nexo is allocating 10 million...

Global crypto firms turn to Hong Kong for refuge...

The city’s web3 carnival this year drew noticeably...

Apple rolling out RC builds of iOS 17.5, watchOS...

Following the Let Loose event this morning, Apple...

Popular

Upcoming Events

Startup Information that matters. Get in your inbox Daily!