NeurIPS 2025 Career Opportunities
Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.
Search Opportunities
Postdoctoral Scholar: Computational Medicine Research Group, University of California, Irvine (NIH Funded)
The Computational Medicine Research Group directed by Prof. Pratik Shah at the University of California, Irvine, invites applications for an NIH-funded Postdoctoral Scholar position. We seek outstanding Ph.D. candidates in computer science, biomedical informatics, statistics, or related fields to develop novel deep learning and AI technologies for digital biopsies from medical images and clinical decision-making from non-imaging datasets. Research areas include:
- Generative AI for Medical Imaging & Digital Biopsies: Developing and interpreting DNNs for automated tissue analyses using high-parameter images (pathology, MRI, CT, RGB) and validating these models in collaboration with hospitals nationwide.
- Generative & Predictive AI for Clinical Decision Support: Developing biologically informed statistical methods and uncertainty estimation generative models for explainable clinical decision-making from EMRs and genetic data.
Responsibilities include data preprocessing, training and real-world validation of generative deep learning models (GANs, diffusion models, Transformers), developing novel statistical models, and publishing research in leading journals and conferences. Comprehensive training in publication, fellowship and grant writing, and career development for roles in academia, industry, or government will be provided. More information about the lab can be found at https://faculty.sites.uci.edu/pratikshahlab/
San Jose, CA, USA
Join Adobe as a skilled and proactive Machine Learning Ops Engineer to drive the operational reliability, scalability, and performance of our AI systems! This role is foundational in ensuring our AI systems operate seamlessly across environments while meeting the needs of both developers and end users. You will lead efforts to automate and optimize the full machine learning lifecycle—from data pipelines and model deployment to monitoring, governance, and incident response.
What You'll Do
- Model Lifecycle Management: Manage model versioning, deployment strategies, rollback mechanisms, and A/B testing frameworks for LLM agents and RAG systems. Coordinate model registries, artifacts, and promotion workflows in collaboration with ML Engineers.
- Monitoring & Observability: Implement real-time monitoring of model performance (accuracy, latency, drift, degradation). Track conversation quality metrics and user feedback loops for production agents.
- CI/CD for AI: Develop automated pipelines for model/agent testing, validation, and deployment. Integrate unit/integration tests into model and workflow updates for safe rollouts.
- Infrastructure Automation: Provision and manage scalable infrastructure (Kubernetes, Terraform, serverless stacks). Enable auto-scaling, resource optimization, and load balancing for AI workloads.
- Data Pipeline Management: Craft and maintain data ingestion pipelines for both structured and unstructured sources. Ensure reliable feature extraction, transformation, and data validation workflows.
- Performance Optimization: Monitor and optimize AI stack performance (model latency, API efficiency, GPU/compute utilization). Drive cost-aware engineering across inference, retrieval, and orchestration layers.
- Incident Response & Reliability: Build alerting and triage systems to identify and resolve production issues. Maintain SLAs and develop rollback/recovery strategies for AI services.
- Compliance & Governance: Enforce model governance, audit trails, and explainability standards. Support documentation and regulatory frameworks (e.g., GDPR, SOC 2, internal policy alignment).
What you need to succeed
- 3–5+ years in MLOps, DevOps, or ML platform engineering
- Strong experience with cloud infrastructure (AWS/GCP/Azure), container orchestration (Kubernetes), and IaC tools (Terraform, Helm)
- Familiarity with ML model serving tools (e.g., MLflow, Seldon, TorchServe, BentoML)
- Proficiency in Python and CI/CD automation (e.g., GitHub Actions, Jenkins, Argo Workflows)
- Experience with monitoring tools (Prometheus, Grafana, Datadog, ELK, Arize AI, etc.)
Preferred Qualifications
- Experience supporting LLM applications, RAG pipelines, or AI agent orchestration
- Understanding of vector databases, embedding workflows, and model retraining triggers
- Exposure to privacy, safety, and responsible AI principles in operational contexts
- Bachelor's or equivalent experience in Computer Science, Engineering, or a related technical field
New York
The D. E. Shaw group seeks exceptional software engineers with expertise in applied AI, AI agents, and agentic systems to join the firm. This role offers the chance to work directly with a variety of groups at the firm on innovative, greenfield projects that transform how teams operate—leveraging quantitative and programming skills to design, build, and deploy AI solutions that drive efficiency, enhance analytical capabilities, and accelerate decision-making across the firm.
What you’ll do day-to-day
You’ll join a dynamic team, with the potential to:
- Collaborate directly with internal groups and end users across various functions to build bespoke AI agents and applications tailored to nuanced, real-world business needs.
- Lead and contribute to greenfield AI projects, taking ownership from concept through production and helping shape internal AI strategy and adoption.
- Experiment with emerging AI tools and model capabilities, rapidly prototyping and integrating them across platforms to enhance usability, scalability, and effectiveness.
- Scale the adoption of AI tools firmwide by developing best practices, frameworks, and reusable components that drive innovation and productivity.
- Build foundational AI components, such as agent frameworks, reusable “skills,” and large-scale retrieval systems, to support AI tools and applications.
- Design, develop, and maintain shared AI infrastructure and agentic applications, ensuring firmwide data integration and enhancing software development efficiency.
Who we’re looking for
- A bachelor's degree in any field is required, along with an extensive background in software development and hands-on experience building and scaling AI solutions at the product, system, or company level.
- Solid understanding of AI technologies and an interest in developing advanced AI applications and frameworks.
- Demonstrated ability to thrive in technical or entrepreneurial environments, along with the capability to solve complex challenges and lead projects from inception to deployment.
- A record of strong academic or professional achievement, with analytical depth and creativity in AI-related projects.
- We welcome outstanding candidates at all experience levels who are excited to work in a collegial, collaborative, and fast-paced environment.
- The expected annual base salary for this position is USD 200,000 to USD 250,000. Our compensation and benefits package includes variable compensation in the form of a year-end bonus, guaranteed in the first year of hire, and benefits including medical and prescription drug coverage, 401(k) contribution matching, wellness reimbursement, family building benefits, and a charitable gift match program.
Work Location: Toronto, Ontario, Canada
Job Description
We are currently seeking talented individuals for a variety of positions, ranging from mid to senior levels, and will evaluate your application in its entirety.
Layer 6 is the AI research centre of excellence for TD Bank Group. We develop and deploy industry-leading machine learning systems that impact the lives of over 27 million customers, helping more people achieve their financial goals. Our research spans the field of machine learning, with areas such as deep learning and generative AI, time series forecasting, and the responsible use of AI. We have access to massive financial datasets and actively collaborate with world-renowned academic faculty. We are always looking for people driven to be at the cutting edge of machine learning in research, engineering, and impactful applications.
Day-to-day as a Technical Product Owner:
- Translate broad business problems into sharp data science use cases, and craft those use cases into product visions
- Own machine learning products from vision to backlog: prioritize features, define minimum viable releases, and maximize both the value your products generate and the ROI of your pod
- Guide Agile pods on continuous improvement, ensuring that each sprint is delivered better than the last
- Work closely with stakeholders to identify, refine, and (occasionally) reject opportunities to build machine learning products; collaborate with support functions such as risk, technology, and model risk management, and incorporate interfacing features
- Facilitate the professional and technical development of your colleagues through mentorship and feedback
- Anticipate resource needs as solutions move through the model lifecycle, scaling pods up and down as models are built, perform, degrade, and need to be rebuilt
- Champion model development standards, industry best practices, and rigorous testing protocols to ensure model excellence
- Self-direct, with the ability to identify meaningful work in quiet periods and effectively prioritize in busy ones
- Drive value through product, feature, and release prioritization, maximizing ROI and modelling velocity
- Be an exceptional collaborator in a high-interaction environment
Job Requirements
- Minimum five years of experience delivering major data science projects in large, complex organizations
- Strong communication, business acumen, and stakeholder management competencies
- Strong technical skills: machine learning, data engineering, MLOps, cloud solution architecture, software development practices
- Strong coding proficiency: Python, R, SQL, and/or Scala
- Certified Scrum Product Owner and/or Certified Scrum Master, or equivalent experience
- Familiarity with cloud solution architecture; Azure a plus
- Master's degree in data science, artificial intelligence, computer science, or equivalent experience
Bala Cynwyd (Philadelphia Area), Pennsylvania United States
Overview
Susquehanna is expanding the Machine Learning group and seeking exceptional researchers to join our dynamic team. As a Machine Learning Researcher, you will apply advanced ML techniques to a wide range of forecasting challenges, including time series analysis, natural language understanding, and more. Your work will directly influence our trading strategies and decision-making processes.
This is a unique opportunity to work at the intersection of cutting-edge research and real-world impact, leveraging one of the highest-quality financial datasets in the industry.
What You’ll Do
- Conduct research and develop ML models to enhance trading strategies, with a focus on deep learning and scalable deployment
- Collaborate with researchers, developers, and traders to improve existing models and explore new algorithmic approaches
- Design and run experiments using the latest ML tools and frameworks
- Develop automation tools to streamline research and system development
- Apply rigorous scientific methods to extract signals from complex datasets and shape our understanding of market behavior
- Partner with engineering teams to implement and test models in production environments
What we're looking for
We're looking for research scientists with a proven track record of applying deep learning to solve complex, high-impact problems. The ideal candidate will have a strong grasp of diverse machine learning techniques and a passion for experimenting with model architectures, feature engineering, and hyperparameter tuning to produce resilient, high-performing models.
- PhD in Computer Science, Machine Learning, Mathematics, Physics, Statistics, or a related field
- Strong track record of applying ML in academic or industry settings, with 5+ years of experience building impactful deep learning systems
- A strong publication record in top-tier conferences such as NeurIPS, ICML, or ICLR
- Strong programming skills in Python and/or C++
- Practical knowledge of ML libraries and frameworks, such as PyTorch or TensorFlow, especially in production environments
- Hands-on experience applying deep learning to time series data
- Strong foundation in mathematics, statistics, and algorithm design
- Excellent problem-solving skills with a creative, research-driven mindset
- Demonstrated ability to work collaboratively in team-oriented environments
- A passion for solving complex problems and a drive to innovate in a fast-paced, competitive environment
San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; Remote, US
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.
With more than 600 million users around the world and 400 billion ideas saved, Pinterest Machine Learning engineers build personalized experiences to help Pinners create a life they love. With just over 4,000 global employees, our teams are small, mighty, and still growing. At Pinterest, you'll get hands-on access to an incredible vault of data and contribute to large-scale recommendation systems in ways you won't find anywhere else.
We are seeking talented Staff Machine Learning Engineers for multiple openings across our Core Engineering organization, including teams such as Search, Notifications, and Content & User Engineering. In these roles, you will drive the development of state-of-the-art applied machine learning systems that power core Pinterest experiences.
What you’ll do:
- Design features and build large-scale machine learning models to improve user ads action prediction with low latency
- Develop new techniques for inferring user interests from online and offline activity
- Mine text, visual, and user signals to better understand user intention
- Work with product and sales teams to design and implement new ad products
What we’re looking for:
- Degree in computer science, machine learning, statistics, or related field
- 6+ years of industry experience building production machine learning systems at scale in data mining, search, recommendations, and/or natural language processing
- 2+ years of experience leading projects/teams
- Strong mathematical skills with knowledge of statistical methods
- Cross-functional collaborator and strong communicator
London
Flow Traders is looking for a Senior Research Engineer to join our Hong Kong office. This is a unique opportunity to join a leading proprietary trading firm with an entrepreneurial and innovative culture at the heart of its business. We value quick-witted, creative minds and challenge them to make full use of their capacities.
As a Senior Research Engineer, you will help lead the development of our trading model research framework and use it to conduct research and develop models for production trading. You'll expand the framework into the global standard way of training, consuming, combining, and transforming any data source in a data-driven, systematic way. You will then partner with Quantitative Researchers to build the trading models themselves.
What You Will Do
- Help to lead the development and global rollout of our research framework for defining and training models through various optimization procedures (supervised learning, backtesting etc.), as well as its integration with our platform for deploying and running those models in production
- Partner with Quantitative Researchers to conduct research: test hypotheses and tune/develop data-driven systematic trading strategies and alpha signals
What You Need to Succeed
- Advanced degree (Master's or PhD) in Machine Learning, Statistics, Physics, Computer Science or similar
- 8+ years of hands-on experience in MLOps, Research Engineering, or ML Research
- A strong background in mathematics and statistics
- Strong proficiency in programming languages such as Python, with experience in libraries like numpy, pytorch, polars, pandas, and ray.
- Demonstrated experience in designing and implementing end-to-end machine learning pipelines, including data preprocessing, model training, deployment, and monitoring
- Understanding of and experience with modern software development practices and tools (e.g. Agile, version control, automated testing, CI/CD, observability)
- Understanding of cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes)
Preference for on-site candidates in San Mateo, but remote possible.
BigHat is hiring a Principal ML Scientist. We've got an awesome high-throughput wetlab that pumps proprietary data into custom ETL and ML Ops infra to power our weekly design-build-train loop. Come solve hard-enough-to-be-fun problems in protein engineering in service of helping patients!
About Handshake AI Handshake is building the career network for the AI economy. Our three-sided marketplace connects 18 million students and alumni, 1,500+ academic institutions across the U.S. and Europe, and 1 million employers to power how the next generation explores careers, builds skills, and gets hired. Handshake AI is a human data labeling business that leverages the scale of the largest early career network. We work directly with the world’s leading AI research labs to build a new generation of human data products. From PhDs in physics to undergrads fluent in LLMs, Handshake AI is the trusted partner for domain-specific data and evaluation at scale. This is a unique opportunity to join a fast-growing team shaping the future of AI through better data, better tools, and better systems—for experts, by experts.
Now's a great time to join Handshake. Here's why:
- Leading the AI Career Revolution: Be part of the team redefining work in the AI economy for millions worldwide.
- Proven Market Demand: Deep employer partnerships across Fortune 500s and the world's leading AI research labs.
- World-Class Team: Leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, just to name a few.
- Capitalized & Scaling: $3.5B valuation from top investors including Kleiner Perkins, True Ventures, Notable Capital, and more.
About the Role Handshake AI builds the data engines that power the next generation of large language models. Our research team works at the intersection of cutting-edge model post-training, rigorous evaluation, and data efficiency. Join us for a focused Summer 2026 internship where your work can ship directly into our production stack and become a publishable research contribution. The internship starts between May and June 2026.
Projects You Could Tackle
- LLM Post-Training: Novel RLHF/GRPO pipelines, instruction-following refinements, reasoning-trace supervision.
- LLM Evaluation: New multilingual, long-horizon, or domain-specific benchmarks; automatic vs. human preference studies; robustness diagnostics.
- Data Efficiency: Active-learning loops, data value estimation, synthetic data generation, and low-resource fine-tuning strategies.
Each intern owns a scoped research project, mentored by a senior scientist, with the explicit goal of a publication-ready manuscript or top-tier conference submission.
Desired Capabilities
- Current PhD student in CS, ML, NLP, or a related field.
- Publication track record at top venues (NeurIPS, ICML, ACL, EMNLP, ICLR, etc.).
- Hands-on experience training and experimenting with LLMs (e.g., PyTorch, JAX, DeepSpeed, distributed training stacks).
- Strong empirical rigor and a passion for open-ended AI questions.
Extra Credit
- Prior work on RLHF, evaluation tooling, or data selection methods.
- Contributions to open-source LLM frameworks.
- Public speaking or teaching experience (we often host internal reading groups).