DeepLearning.ai’s Andrew Ng recently launched a new course that focuses on quality and safety for LLM applications, in collaboration with WhyLabs (an AI Fund portfolio company).
This one hour long course would be led by Bernease Herman, a senior data scientist at WhyLabs, where she will be focusing on the best practices to monitor LLM systems, alongside showcasing how you can mitigate hallucinations, data leakage, and jailbreaks among others.
You can join the course here.
New course with WhyLabs: Quality and Safety for LLM Applications
With the open source community booming, developers can prototype LLM applications quickly. In the introductory video Andrew Ng explained,“One huge barrier to the practical deployment has been quality and safety.”
For a company that aims to launch a chatbot or a QA system, there is a good possibility that the LLM would hallucinate or mislead users. “It can say something inappropriate or can open up a new security loophole where a user can input a tricky prompt called the prompt injection that makes the LLM do something bad,” Andrew elaborated.
The course explains what can go wrong and the best practices to mitigate the problems including prompt injections, hallucinations, leakage of confidential PII like personal identifiable information, such as email or government ID numbers, and toxic or other inappropriate outputs. Bernease said, “This course is designed to help you discover and create metrics needed to monitor your LLM systems. For both safety and quality issues.”
Andrew Ng has consistently released courses on his DeepLearning.ai over the past year on Generative AI and its applications. These courses have helped learners increase their knowledge and expertise in AI and deep learning.
The post Andrew Ng Launches A New Course on LLM Quality and Security appeared first on Analytics India Magazine.