Knowledge
Blog
April 4, 2024
5
mn
read
Blanca Rivera Campos

New course with DeepLearningAI: Red Teaming LLM Applications

Our new course in collaboration with DeepLearningAI team provides training on red teaming techniques for Large Language Model (LLM) and chatbot applications. Through hands-on attacks using prompt injections, you'll learn how to identify vulnerabilities and security failures in LLM systems.
Red Teaming LLM Applications course

Hi there,

The Giskard team hopes you're having a good week!

This month we have the pleasure to announce our new course on Red Teaming LLM Applications in collaboration with Andrew Ng and the DeepLearning.ai team!

Learn how to make safer LLM apps. Enroll for free 👉 here.

What you’ll learn in this course 🤓

In this course, you'll attack various chatbot applications using prompt injections to see how the system reacts and understand security failures. LLM failures can lead to legal liability, reputational damage, and costly service disruptions. This course helps you mitigate these risks proactively.

Learn industry-proven red teaming techniques to proactively test, attack, and improve the robustness of your LLM applications, and:

  • Explore the nuances of LLM performance evaluation, and understand the differences between benchmarking foundation models and testing LLM applications.
  • Get an overview of fundamental LLM application vulnerabilities and how they affect real-world deployments.
  • Gain hands-on experience with both manual and automated LLM red-teaming methods.
  • See a full demonstration of red-teaming assessment, and apply the concepts and techniques covered throughout the course.

Get a sneak peek of the course in this video 🎥

👉 Enroll for free here

Happy LLM evaluation!

Thank you so much, and see you soon! ❤️

The Giskard Team 🐢

Continuously secure LLM agents, preventing hallucinations and security issues.
Book a demo

You will also like

Giskard's LLM Red Teaming

LLM Red Teaming: Detect safety & security breaches in your LLM apps

Introducing our LLM Red Teaming service, designed to enhance the safety and security of your LLM applications. Discover how our team of ML Researchers uses red teaming techniques to identify and address LLM vulnerabilities. Our new service focuses on mitigating risks like misinformation and data leaks by developing comprehensive threat models.

View post
OWASP Top 10 for LLM 2023

OWASP Top 10 for LLM 2023: Understanding the Risks of Large Language Models

In this post, we introduce OWASP's first version of the Top 10 for LLM, which identifies critical security risks in modern LLM systems. It covers vulnerabilities like Prompt Injection, Insecure Output Handling, Model Denial of Service, and more. Each vulnerability is explained with examples, prevention tips, attack scenarios, and references. The document serves as a valuable guide for developers and security practitioners to protect LLM-based applications and data from potential attacks.

View post
Giskard team at DEFCON31

AI Safety at DEFCON 31: Red Teaming for Large Language Models (LLMs)

DEFCON, one of the world's premier hacker conventions, this year saw a unique focus at the AI Village: red teaming of Large Language Models (LLMs). Instead of conventional hacking, participants were challenged to use words to uncover AI vulnerabilities. The Giskard team was fortunate to attend, witnessing firsthand the event's emphasis on understanding and addressing potential AI risks.

View post
Stay updated with
the Giskard Newsletter