Anthropic · 1 day ago
Research Engineer / Scientist, Frontier Red Team (Cyber)
Anthropic is a public benefit corporation focused on creating reliable and safe AI systems. As a Research Scientist on the Frontier Red Team, you will develop frameworks and tools to defend against advanced AI-enabled cyber threats, collaborating with various teams to shape the company's cyberdefense research program.
Artificial Intelligence (AI) · Foundational AI · Generative AI · Information Technology · Machine Learning
Responsibilities
Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting
Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios
Design and build infrastructure for evaluating and enabling AI systems to operate in security environments
Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public
Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions
Senior candidates will also set research strategy, define what problems are worth solving, own the technical roadmap, and manage relationships with cross-functional partners
Qualifications
Required
Have deep expertise in cybersecurity or security research
Are driven to find solutions to complex, high-stakes problems
Have experience doing technical research with LLM-based agents or autonomous systems
Have strong software engineering skills, particularly in Python
Can own entire problems end-to-end, including both technical and non-technical components
Design and run experiments quickly, iterating fast toward useful results
Thrive in collaborative environments
Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI
Are comfortable working on sensitive projects that require discretion and integrity
Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience
Preferred
Experience with offensive security research, vulnerability research, or exploit development
Research or professional experience applying LLMs to security problems
Track record in competitive CTFs, bug bounties, or other security-related competitions
Experience building security tools or automation
Track record of building demos or prototypes that communicate complex technical ideas
Experience working with external stakeholders (policymakers, government, researchers)
Familiarity with AI safety research and threat modeling for advanced AI systems
Benefits
Competitive compensation and benefits
Optional equity donation matching
Generous vacation and parental leave
Flexible working hours
A lovely office space in which to collaborate with colleagues
Company
Anthropic
Anthropic is an AI research company that focuses on the safety and alignment of AI systems with human values.
H1B Sponsorship
Anthropic has a track record of offering H1B sponsorship. Please note that this does not guarantee sponsorship for this specific role. Additional information is provided below for reference. (Data powered by the US Department of Labor)
Trends of Total Sponsorships: 2025 (105), 2024 (13), 2023 (3), 2022 (4), 2021 (1)
Funding
Current Stage: Late Stage
Total Funding: $33.74B
Key Investors: Lightspeed Venture Partners, Google, Amazon
2025-09-02 · Series F · $13B
2025-05-16 · Debt Financing · $2.5B
2025-03-03 · Series E · $3.5B
Company data provided by crunchbase