Andrew Ng Launches A New Course on LLM Quality and Security 

Share via:

DeepLearning.ai’s Andrew Ng recently launched a new course that focuses on quality and safety for LLM applications, in collaboration with WhyLabs (an AI Fund portfolio company). 

This one hour long course would be led by Bernease Herman, a senior data scientist at WhyLabs, where she will be focusing on the best practices to monitor LLM systems, alongside showcasing how you can mitigate hallucinations, data leakage, and jailbreaks among others. 

You can join the course here

New course with WhyLabs: Quality and Safety for LLM Applications

With the open source community booming, developers can prototype LLM applications quickly. In the introductory video Andrew Ng explained,“One huge barrier to the practical deployment has been quality and safety.” 

For a company that aims to launch a chatbot or a QA system, there is a good possibility that the LLM would hallucinate or mislead users. “It can say something inappropriate or can open up a new security loophole where a user can input a tricky prompt called the prompt injection that makes the LLM do something bad,” Andrew elaborated.

The course explains what can go wrong and the best practices to mitigate the problems including prompt injections, hallucinations, leakage of confidential PII like personal identifiable information, such as email or government ID numbers, and toxic or other inappropriate outputs. Bernease said, “This course is designed to help you discover and create metrics needed to monitor your LLM systems. For both safety and quality issues.”

Andrew Ng has consistently released courses on his DeepLearning.ai over the past year on Generative AI and its applications. These courses have helped learners increase their knowledge and expertise in AI and deep learning.

The post Andrew Ng Launches A New Course on LLM Quality and Security  appeared first on Analytics India Magazine.

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

Popular

More Like this

Andrew Ng Launches A New Course on LLM Quality and Security 

DeepLearning.ai’s Andrew Ng recently launched a new course that focuses on quality and safety for LLM applications, in collaboration with WhyLabs (an AI Fund portfolio company). 

This one hour long course would be led by Bernease Herman, a senior data scientist at WhyLabs, where she will be focusing on the best practices to monitor LLM systems, alongside showcasing how you can mitigate hallucinations, data leakage, and jailbreaks among others. 

You can join the course here

New course with WhyLabs: Quality and Safety for LLM Applications

With the open source community booming, developers can prototype LLM applications quickly. In the introductory video Andrew Ng explained,“One huge barrier to the practical deployment has been quality and safety.” 

For a company that aims to launch a chatbot or a QA system, there is a good possibility that the LLM would hallucinate or mislead users. “It can say something inappropriate or can open up a new security loophole where a user can input a tricky prompt called the prompt injection that makes the LLM do something bad,” Andrew elaborated.

The course explains what can go wrong and the best practices to mitigate the problems including prompt injections, hallucinations, leakage of confidential PII like personal identifiable information, such as email or government ID numbers, and toxic or other inappropriate outputs. Bernease said, “This course is designed to help you discover and create metrics needed to monitor your LLM systems. For both safety and quality issues.”

Andrew Ng has consistently released courses on his DeepLearning.ai over the past year on Generative AI and its applications. These courses have helped learners increase their knowledge and expertise in AI and deep learning.

The post Andrew Ng Launches A New Course on LLM Quality and Security  appeared first on Analytics India Magazine.

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

Website Upgradation is going on for any glitch kindly connect at office@startupnews.fyi

More like this

AI tokens market cap falls 28% from December $70B...

AI cryptocurrencies have dropped nearly 30% in value,...

India’s Shiprocket to raise $25.6m

The funding follows approval from the Competition Commission...

Campa Cola against the world: Reliance’s aggressive pricing disrupts...

With the return of Campa Cola and the price...

Popular

Upcoming Events

Startup Information that matters. Get in your inbox Daily!