WEBINAR

February 20, 10 AM PT

Are LLMs Teaching Developers to Hardcode API Keys?

Available On-Demand!

AI coding assistants like ChatGPT, GitHub Copilot, and others are changing how developers write code, but they might also be teaching dangerous habits. In our latest research, we found that most large language models (LLMs) recommend insecure practices, like hardcoding API keys and passwords.
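To make the risk concrete, here is a minimal Python sketch of the pattern at issue and the safer alternative. The API endpoint, parameter names, and the OPENWEATHER_API_KEY variable are illustrative assumptions, not examples taken from our research:

```python
import os

import requests

# Insecure pattern an LLM may suggest: the secret is embedded in source code
# and ends up in version control, logs, and shared snippets.
# API_KEY = "sk-1234567890abcdef"  # hardcoded -- avoid this

# Safer pattern: load the secret from the environment (or a secrets manager)
# so it never lives in the codebase itself.
API_KEY = os.environ["OPENWEATHER_API_KEY"]

response = requests.get(
    "https://api.openweathermap.org/data/2.5/weather",
    params={"q": "Portland", "appid": API_KEY},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```

Both versions run the same request; the difference is where the credential lives, which is exactly the habit this webinar examines.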

This webinar will break down:

  • The results of our testing across 10 popular LLMs

  • Why these models perpetuate insecure coding patterns

  • How LLM usage in IDEs like VS Code impacts code security

  • Practical tips for identifying and mitigating these risks

Join us to explore why this matters and how developers can stay secure while leveraging LLMs.