Senior Security Researcher - GenAI
The Security Research team at Datadog tracks the evolving threat landscape and develops impactful detection content for our Security platform. As a Security Researcher focused on Generative AI, you’ll explore the security assumptions of foundational AI models, uncover vulnerabilities, and share your findings through detection content, product improvements, open-source contributions, and public research. You’ll work alongside a diverse group of hackers and builders, collaborate across teams, and help drive our leadership in the security community through publications, vulnerability disclosures, and conference presentations. This is a high-impact opportunity to help shape the security posture of AI technologies across Datadog and our customers.
At Datadog, we place value on our office culture - the relationships and collaboration it builds and the creativity it brings to the table. We operate as a hybrid workplace to ensure our Datadogs can create the work-life harmony that best fits them.
What You’ll Do:
- Lead research initiatives to identify novel vulnerabilities and threats in Generative AI and large language model (LLM) technologies
- Develop and demonstrate proof-of-concept attacks and adversarial techniques for educational and defensive purposes
- Translate findings into actionable detection content, product improvements, and engineering backlogs
- Collaborate with product managers, detection engineers, and software teams to integrate research into Datadog’s platform
- Share research through blog posts, conference talks, and open-source contributions to promote customer trust and industry awareness
- Engage with external stakeholders—including cloud providers and open-source communities—for responsible disclosure and remediation
Who You Are:
- Experienced with Generative AI systems, with a deep understanding of vulnerabilities such as prompt injection and model poisoning
- Familiar with the OWASP Top 10 for LLMs and the application of traditional security principles (e.g., input handling, access control) to AI environments
- Background in offensive security, penetration testing, or vulnerability research in cloud or SaaS environments
- Able to independently plan and execute research projects with stakeholder alignment
- Skilled in communicating complex findings clearly, whether through documentation, internal sharing, or public forums
- Proficient in programming languages such as Go, Python, or Rust for building research tools and systems
Bonus Points:
- Experience working in a Security Research organization
- A proven track record of discovering, disclosing, and documenting vulnerabilities
- Experience presenting your research at large conferences
- Experience working in an OKR-driven environment
Datadog values people from all walks of life. We understand not everyone will meet all the above qualifications on day one. That's okay. If you’re passionate about technology and want to grow your skills, we encourage you to apply.
Benefits and Growth:
- New hire stock equity (RSUs) and employee stock purchase plan (ESPP)
- Continuous professional development, product training, and career pathing
- Intradepartmental mentor and buddy program for in-house networking
- An inclusive company culture and the ability to join our Community Guilds (Datadog employee resource groups)
- Access to Inclusion Talks, our internal panel discussions
- Free, global mental health benefits for employees and dependents age 6+
- Competitive global benefits