Great Value Hiring · 1 hour ago
AI Red-Teamer — Adversarial AI Testing
Great Value Hiring is seeking an AI Red-Teamer specializing in Adversarial AI Testing. The role involves red-teaming AI models and agents, generating high-quality human data, and applying structured testing methodologies to ensure consistent results across various projects.
Staffing & Recruiting
Responsibilities
Red-team AI models and agents: jailbreaks, prompt injections, misuse cases, exploits
Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent Document reproducibly: produce reports, datasets, and attack cases customers can act on
Flex across projects: support different customers, from LLM jailbreaks to socio-technical abuse testing
Qualification
Required
prior red-teaming experience (AI adversarial work, cybersecurity, socio-technical probing)
curious and adversarial: you instinctively push systems to breaking points
structured: you use frameworks or benchmarks, not just random hacks
communicative: you explain risks clearly to technical and non-technical stakeholders
adaptable: thrive on moving across projects and customers
Preferred
Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
Cybersecurity: penetration testing, exploit development, reverse engineering
Socio-technical risk: harassment/disinfo probing, abuse analysis
Creative probing: psychology, acting, writing for unconventional adversarial thinking
Company
Great Value Hiring
We started "Great Value Hiring" with a simple idea: to make meaningful connections.
Funding
Current Stage
Early StageCompany data provided by crunchbase