Capital One · 5 hours ago
Applied Researcher II (AI Foundations)
Capital One is a leading financial services company focused on creating trustworthy and reliable AI systems. The Applied Researcher II role involves partnering with cross-functional teams to develop AI-powered products, leveraging advanced technologies to analyze large datasets and improve customer interactions.
Banking · Credit Cards · Finance · Financial Services
Responsibilities
Partner with a cross-functional team of data scientists, software engineers, machine learning engineers and product managers to deliver AI-powered products that change how customers interact with their money
Leverage a broad stack of technologies, including PyTorch, AWS UltraClusters, Hugging Face, Lightning, and vector databases, to reveal the insights hidden within huge volumes of numeric and textual data
Build AI foundation models through all phases of development, from design through training, evaluation, validation, and implementation
Engage in high-impact applied research to take the latest AI developments and push them into the next generation of customer experiences
Flex your interpersonal skills to translate the complexity of your work into tangible business goals
Qualifications
Required
Currently holds, or is in the process of obtaining, a PhD in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or a related field (with the required degree to be obtained on or before the scheduled start date), plus 2 years of experience in Applied Research; or an M.S. in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or a related field, plus 4 years of experience in Applied Research
Has a deep understanding of the foundations of AI methodologies
Experience building large deep learning models, whether on language, images, events, or graphs, as well as expertise in one or more of the following: training optimization, self-supervised learning, robustness, explainability, RLHF
An engineering mindset as shown by a track record of delivering models at scale both in terms of training data and inference volumes
Experience delivering libraries, platform-level code, or solution-level code to existing products
A track record of proposing new ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first-author publications or projects
Possess the ability to own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects
Preferred
PhD in Computer Science, Machine Learning, Computer Engineering, Applied Mathematics, Electrical Engineering or related fields
LLM
PhD focus on NLP, or a Master's with 5 years of industrial NLP research experience
Multiple publications on topics related to the pre-training of large language models (e.g. technical reports of pre-trained LLMs, SSL techniques, model pre-training optimization)
Member of a team that has trained a large language model from scratch (10B+ parameters, 500B+ tokens)
Publications in deep learning theory
Publications at ACL, NAACL, EMNLP, NeurIPS, ICML, or ICLR
PhD focus on topics in geometric deep learning (Graph Neural Networks, Sequential Models, Multivariate Time Series)
Multiple papers on topics relevant to training models on graph and sequential data structures at KDD, ICML, NeurIPS, or ICLR
Worked on scaling graph models to more than 50M nodes
Experience with large scale deep learning based recommender systems
Experience with production real-time and streaming environments
Contributions to common open-source frameworks (e.g., PyTorch Geometric, DGL)
Proposed new methods for inference or representation learning on graphs or sequences
Worked with datasets with 100M+ users
PhD focused on topics related to optimizing training of very large deep learning models
Multiple years of experience and/or publications on one of the following topics: Model Sparsification, Quantization, Training Parallelism/Partitioning Design, Gradient Checkpointing, Model Compression
Experience optimizing training for a 10B+ model
Deep knowledge of deep learning algorithm and/or optimizer design
Experience with compiler design
PhD focused on topics related to guiding LLMs with further tasks (Supervised Finetuning, Instruction-Tuning, Dialogue-Finetuning, Parameter Tuning)
Demonstrated knowledge of principles of transfer learning, model adaptation and model guidance
Experience deploying a fine-tuned large language model
Benefits
Capital One offers a comprehensive, competitive, and inclusive set of health, financial and other benefits that support your total well-being.
Company
Capital One
Capital One is a financial services company that provides banking, credit card, auto loan, savings, and commercial banking services.
H1B Sponsorship
Capital One has a track record of offering H1B sponsorships. Please note that this does not guarantee sponsorship for this specific role. The information below is provided for reference. (Data powered by US Department of Labor)
Distribution of Different Job Fields Receiving Sponsorship
Trends of Total Sponsorships
2025 (723)
2024 (488)
2023 (545)
2022 (909)
2021 (672)
2020 (944)
Funding
Current Stage: Public Company
Total Funding: $5.45B
Key Investors: Berkshire Hathaway
2025-09-11 · Post-IPO Debt · $2.75B
2025-01-30 · Post-IPO Debt · $1.75B
2023-05-15 · Post-IPO Equity · $954M
Company data provided by Crunchbase