AI Security Engineer - Red Team (United States, Remote)
Lakera is seeking an AI Security Engineer to join its Red Team and enhance AI security. The role involves leading security assessments, developing testing methodologies, and engaging with enterprise clients to secure their AI systems.
Artificial Intelligence (AI) · Cyber Security · Security · Software
Responsibilities
Lead end-to-end delivery of AI red teaming security assessment engagements with enterprise customers
Collaborate with clients to scope projects, define testing requirements, and establish success criteria
Conduct comprehensive security assessments of AI systems, including text-based LLM applications and multimodal agentic systems
Author detailed security assessment reports with actionable findings and remediation recommendations
Present findings and strategic recommendations to technical and executive stakeholders through report readouts
Build upon and improve our established processes and playbooks to scale AI red teaming service delivery
Develop frameworks to ensure consistent, high-quality service delivery
Identify tedious, repetitive work and automate it; you don't need to be a world-class developer, just someone who can build tools that make the team more effective
Develop novel red teaming methodologies for emerging modalities: image, video, audio, autonomous systems
Stay ahead of the latest AI security threats, attack vectors, and defense mechanisms
Translate cutting-edge academic and industry research into practical testing approaches
Collaborate with our research and product teams to continuously level up our methodologies
Qualifications
Required
3+ years of experience in cybersecurity with focus on red teaming, penetration testing, or security assessments
Experience with web application and API penetration testing preferred
Deep understanding of LLM vulnerabilities including prompt injection, data poisoning, and jailbreaking techniques
Practical experience with threat modeling complex systems and architectures
Proficiency in developing automated tooling to enable and enhance testing capabilities, improve workflows, and deliver deeper insights
Proven track record of leading client-facing security assessment projects from scoping through delivery
Excellent technical writing skills with experience creating executive-level security reports
Strong presentation and communication skills for diverse audiences
Experience building processes, documentation, and tooling for service delivery teams
Understanding of AI/ML model architectures, training processes, and deployment patterns
Familiarity with AI safety frameworks and alignment research
Knowledge of emerging AI attack surfaces including multimodal systems and AI agents
Preferred
Relevant security certifications (OSCP, OSWA, BSCP, etc.)
Hands-on experience performing AI red teaming assessments, with a strong plus for experience targeting agentic systems
Demonstrated experience designing LLM jailbreaks
Active participation in security research and tooling communities
Background in threat modeling and risk assessment frameworks
Previous speaking experience at security conferences or industry events
Benefits
Competitive compensation package with equity participation
Company
Lakera
Lakera is a real-time GenAI security company that utilizes AI to protect enterprises from LLM vulnerabilities.
H1B Sponsorship
Lakera has a track record of offering H-1B sponsorships. Please note that this does not guarantee sponsorship for this specific role. Additional information is provided below for reference. (Data powered by the US Department of Labor)
[Charts: distribution of job fields receiving sponsorship; trend of total sponsorships — 2024: 1]
Funding
Current Stage: Early Stage
Total Funding: $30M
Key Investors: Atomico, redalpine, Fly Ventures
2025-09-16: Acquired
2024-07-24: Series A · $20M
2023-10-11: Seed · $10M
Company data provided by Crunchbase