NeurIPS 2025 Career Opportunities
Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.
Redwood City, CA
Biohub is leading the new era of AI-powered biology to cure or prevent disease through its 501(c)(3) medical research organization, with the support of the Chan Zuckerberg Initiative.
The Team
Biohub supports the science and technology that will make it possible to help scientists cure, prevent, or manage all diseases by the end of this century. While this may seem like an audacious goal, in the last 100 years, biomedical science has made tremendous strides in understanding biological systems, advancing human health, and treating disease.
Achieving our mission will only be possible if scientists are able to better understand human biology. To that end, we have identified four grand challenges that will unlock the mysteries of the cell and how cells interact within systems — paving the way for new discoveries that will change medicine in the decades that follow:
- Building an AI-based virtual cell model to predict and understand cellular behavior
- Developing state-of-the-art imaging systems to observe living cells in action
- Instrumenting tissues to better understand inflammation, a key driver of many diseases
- Engineering and harnessing the immune system for early detection, prevention, and treatment of disease
As a Senior Data Scientist, you'll lead the creation of groundbreaking datasets that power our AI/ML efforts within and across our scientific grand challenges. Working at the intersection of data science, biology, and AI, you will focus on creating large, AI-ready datasets spanning single-cell sequencing, immune receptor profiling, and mass spectrometry peptidomics data. You will define data needs, format standards, analysis approaches, and quality metrics, and build pipelines to ingest, transform, and validate the data products that form the foundation of our experiments.
Our Data Ecosystem:
These efforts will form part of, and interoperate with, our larger data ecosystem. We are generating unprecedented scientific datasets that drive biological innovation:
- Billions of standardized cells of single-cell transcriptomic data, with a focus on measuring genetic and environmental perturbations
- Tens of thousands of donor-matched DNA & RNA samples
- Tens of petabytes of static and dynamic imaging datasets
- Hundreds of terabytes of mass spectrometry datasets
- Diverse, large multi-modal biological datasets that enable biological bridges across measurement types and facilitate multi-modal model training to define how cells act
When analysis of a dataset is complete, you will help publish it through public resources like CELLxGENE Discover, the CryoET Portal, and the Virtual Cell Platform, used by tens of thousands of scientists monthly to advance understanding of genetic variants, disease risk, drug toxicities, and therapeutic discovery.
You'll collaborate with cross-functional teams to lead dataset definition, ingestion, transformation, and delivery for AI modeling and experimental analysis. Success means delivering high-quality, usable datasets that directly address modeling challenges and accelerate scientific progress. Join us in building the data foundation that will transform our understanding of human biology and move us along the path to curing, preventing, and managing all disease.
New York
Description - Bloomberg’s Engineering AI department has 350+ AI practitioners building highly sought-after products and features that often require novel innovations. We are investing in AI to build better search, discovery, and workflow solutions using technologies such as transformers, gradient boosted decision trees, large language models, and dense vector databases. We are expanding our group and seeking highly skilled individuals who will be responsible for contributing to the team (or teams) of Machine Learning (ML) and Software Engineers bringing innovative solutions to AI-driven, customer-facing products.
At Bloomberg, we believe in fostering a transparent and efficient financial marketplace. Our business is built on technology that makes news, research, financial data, and analytics on over 35 million financial instruments searchable, discoverable, and actionable across the global capital markets.
Bloomberg has been building Artificial Intelligence applications that offer solutions to these problems with high accuracy and low latency since 2009. We build AI systems to help process and organize the ever-increasing volume of structured and unstructured information needed to make informed decisions. Our use of AI uncovers signals, helps us produce analytics about financial instruments in all asset classes, and delivers clarity when our clients need it most.
We are looking for Senior LLM Research Engineers with a strong expertise and passion for Large Language Modeling research and applications to join our team.
The advent of large language models (LLMs) presents new opportunities for expanding our NLP capabilities with new products. This would allow our clients to ask complex questions in natural language and receive insights extracted across our vast collection of Bloomberg APIs or from potentially millions of structured and unstructured information sources.
Broad areas of application and interest include: application and fine-tuning methods for LLMs, efficient methods for training, multimodal models, learning from feedback and human preferences, retrieval-augmented generation, summarization, semantic parsing and tool use, domain adaptation of LLMs to financial domains, dialogue interfaces, evaluation of LLMs, model safety, and responsible AI.
What's in it for you:
- Collaborate with colleagues on building and applying LLMs for production systems and applications
- Write, test, and maintain production-quality code
- Train, tune, evaluate, and continuously improve LLMs using large amounts of high-quality data to develop state-of-the-art financial NLP models
- Demonstrate technical leadership by owning cross-team projects
- Stay current with the latest research in AI, NLP, and LLMs and incorporate new findings into our models and methodologies
- Represent Bloomberg at scientific and industry conferences and in open-source communities
- Publish product and research findings in documentation, whitepapers, or publications at leading academic venues
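For a concrete flavor of the day-to-day tooling, here is a minimal, hedged sketch of supervised fine-tuning of a small causal language model with Hugging Face Transformers and PyTorch (frameworks named in the requirements below). The model checkpoint and dataset are illustrative placeholders, not Bloomberg systems or data.

```python
# Minimal sketch: supervised fine-tuning of a small causal LM with Hugging Face
# Transformers + PyTorch. Model name and dataset are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Illustrative text corpus; replace with domain-specific (e.g., financial) data.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
raw = raw.filter(lambda x: len(x["text"].strip()) > 0)  # drop empty lines

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

args = TrainingArguments(
    output_dir="ft-out",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=2e-5,
    logging_steps=50,
)
Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```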
You'll need to have:
- Practical experience with Natural Language Processing problems and familiarity with Machine Learning, Deep Learning, and Statistical Modeling techniques
- A Ph.D. in ML, NLP, or a relevant field, or an MSc in CS, ML, Math, Statistics, Engineering, or a related field plus 2+ years of relevant work experience
- Experience with Large Language Model training and fine-tuning frameworks such as PyTorch, Hugging Face, or DeepSpeed
- Proficiency in software engineering
- An understanding of Computer Science fundamentals such as data structures and algorithms, and a data-oriented approach to problem-solving
- Excellent communication skills and the ability to collaborate with engineering peers as well as non-engineering stakeholders
- A track record of authoring publications in top conferences and journals is a strong plus
Noumenal Labs | Remote-friendly | Full-time
Noumenal's Thermodynamic Computing Lab is building the foundations of physical AI at the intersection of robotics and novel hardware. As a Research Engineer, you will help to define, design, and deploy the hybrid computing stack powering a paradigm shift in which stochastic thermodynamic dynamics become the substrate of intelligence itself. The goal: robots that learn from tens of demonstrations instead of thousands and run an order of magnitude longer on the same battery.
What You’ll Do
~ Architect hybrid software–hardware systems that implement probabilistic frameworks using energy-based algorithms on thermodynamic chips.
~ Build sampling-based inference systems (e.g., MCMC, Gibbs sampling, variational inference) optimized for thermodynamic computing substrates (a minimal sketch follows this list).
~ Co-design algorithms jointly with hardware teams to map computation efficiently onto novel physical architectures.
~ Deploy, evaluate, and iterate on these systems in real robotic environments.
~ Collaborate closely with physicists, AI researchers, hardware engineers, and product teams to drive real-time adaptive computation.
~ Contribute to publications, patents, and open-source frameworks advancing the field of physical AI and intelligent thermodynamic systems.
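As referenced above, here is a minimal sketch of one of the named sampling-based inference methods: a random-walk Metropolis-Hastings MCMC sampler in plain NumPy. The target density is an illustrative toy; in practice, the proposal/accept loop is the kind of computation a thermodynamic substrate would replace.

```python
# Minimal sketch: random-walk Metropolis-Hastings over a toy 2D Gaussian target.
import numpy as np

def log_prob(x):
    # Unnormalized log-density of an illustrative 2D correlated Gaussian.
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * x @ np.linalg.inv(cov) @ x

def metropolis_hastings(log_prob, x0, n_steps=5000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_prob(x)
    samples = []
    for _ in range(n_steps):
        proposal = x + step * rng.standard_normal(x.shape)
        lp_new = log_prob(proposal)
        if np.log(rng.uniform()) < lp_new - lp:  # accept with prob min(1, ratio)
            x, lp = proposal, lp_new
        samples.append(x.copy())
    return np.array(samples)

samples = metropolis_hastings(log_prob, x0=[0.0, 0.0])
print("posterior mean estimate:", samples[2000:].mean(axis=0))  # discard burn-in
```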
Required Skills
~ Strong coding ability in Python and at least one ML framework (PyTorch, JAX, or TensorFlow).
~ Experience with probabilistic inference (MCMC, variational inference, or energy-based models).
~ Solid understanding of machine learning fundamentals, especially deep learning, Bayesian methods, and Maximum Entropy Inverse RL.
~ Enthusiasm for both non-traditional hardware (e.g., neuromorphic, analog, quantum, thermodynamic) and how algorithms map to computation beyond GPUs.
~ Interest in developing within the active inference framework.
~ A systems mindset focused on performance, energy efficiency, and robustness.
Ideal Background
~ Experience with diffusion/score-based models or generative world models.
~ Interest in control as inference.
~ Robotics experience (simulators or physical robots).
What We Offer
~ Early access to thermodynamic computing hardware.
~ Collaboration with leading researchers in active inference, generative modeling, and novel computing.
~ Real robotic platforms for prototyping and deployment.
~ Remote-friendly culture with periodic on-site collaboration.
~ Strong support for research, publication, and open-source contributions.
~ Salary of $100,000 to $150,000 USD + equity.
New York
Quantitative Analyst Ph.D. Intern (New York) – Summer 2026
The D. E. Shaw group seeks talented Ph.D. candidates with impressive records of academic and/or professional achievement to join the firm as quantitative analyst interns. Ph.D. interns explore how the analytical skills gained from their graduate programs may relate to the work done at the firm while interacting with fellow interns and employees of similar academic backgrounds in a collegial working environment. This 12-week program will take place in New York and is expected to run from June to August 2026.
What you'll do day-to-day
You’ll spend the summer working on a research project that typically involves exploring a variety of statistical modeling techniques and writing software to analyze financial data. You’ll have a dedicated mentor in one of our quantitative research groups and will be encouraged to attend our academic speaker series and to track academic progress in areas that may be of interest.
Who we're looking for
- Individuals with impressive records of academic achievement, including advanced coursework in fields such as math, statistics, physics, engineering, computer science, or other technical and quantitative programs.
- Applicants should have notable research productivity in their respective areas of study as well as a track record of creativity in their field(s).
- Interest or experience working in a data-driven research environment, including manipulation of data using high-level programming languages such as Python, is preferred.
- An exceptional aptitude for abstract reasoning, problem solving, and quantitative thinking, in addition to prior probability or statistics knowledge, is a plus.
- No previous finance experience is necessary, though candidates should have an interest in learning about quantitative finance.
- Students who apply to this internship are usually approaching their final year of full-time study.
- The position offers a monthly base salary of USD 25,000, overtime pay, a sign-on bonus of USD 25,000, travel coverage to and from the internship, and a choice of furnished summer housing or a USD 10,000 housing allowance. It also includes a USD 3,300 stipend for self-study materials and a USD 4,000 stipend for personal technology equipment. If you have any questions about the compensation, please ask one of our recruiters.
JR2003228
NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society — from gaming to robotics, self-driving cars to life-saving healthcare, climate change to virtual worlds where we can all connect and create.
Our internships offer an excellent opportunity to expand your career and get hands-on experience with one of our industry-leading Generative AI teams. We’re seeking strategic, ambitious, hard-working, and creative individuals who are passionate about helping us tackle challenges no one else can solve.
What you will be doing: Design and implement algorithms that push the boundaries of generative AI, computer vision, robotics, and other technology domains relevant to NVIDIA’s business.
Collaborate with other team members, teams, and/or external researchers.
Transfer your research to product groups to enable new products or types of products. Deliverables include prototypes, patents, products, and/or published original research.
What we need to see: Must be actively enrolled in a university pursuing a PhD degree in Computer Science, Electrical Engineering, or a related field, for the entire duration of the internship.
Depending on the internship, prior experience or knowledge requirements could include the following programming skills and technologies: Python, C++, CUDA, and deep learning frameworks (PyTorch, JAX, TensorFlow, etc.)
Strong background in research with publications at top conferences.
Excellent communication and collaboration skills.
Experience with large-scale model training is a plus.
Potential internships require research experience in at least one of the following areas: Multimodal Foundation Models
Diffusion Models
World Models
Image, Video, or Audio Generation
Large Language Models
Vision-Language Models
Action-Based Transformers
Long Context Methods
Physics-Based Simulation
Flow-Based Generative Models
Synthetic Data Generation
AI for Science
Protein/Molecule Generation
Climate Modeling and Weather Forecasting
Partial Differential Equations (PDEs)
Our internship hourly rates are standard rates based on the position, your location, year in school, degree, and experience. The hourly rate for our interns ranges from 30 USD to 94 USD.
You will also be eligible for Intern benefits.
Applications are accepted on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
USA – Austin, Seattle
Job Overview
At Arm, we’re enabling the next wave of AI innovation - from cloud to edge, data center to device. Our AI Product Managers play a pivotal role in turning cutting-edge research and engineering into real-world solutions that scale across billions of devices. As part of a globally trusted ecosystem, you’ll define and shape products that power the future of intelligent, energy-efficient computing.
We’re looking for AI-focused Product Managers who thrive at the intersection of technology, strategy, and customer need - individuals who can align market trends with technical innovation, and help bring transformative AI products to life.
Responsibilities
As an AI Product Manager at Arm, your role may include:
- Defining and owning product roadmaps for AI/ML software, hardware, tools, or platforms
- Identifying emerging AI market opportunities and customer needs across domains
- Working closely with engineering, research, and design teams to guide product development
- Collaborating with business development and partner teams to support go-to-market strategy
- Ensuring delivery of impactful, scalable solutions aligned with Arm’s long-term vision
Required Skills and Experience
- Demonstrated experience in product management, technical program management, or product strategy
- Familiarity with AI/ML technologies, platforms, or development workflows
- Strong ability to synthesize market trends, customer feedback, and technical input into clear product direction
- Excellent cross-functional collaboration and communication skills
- Ability to work across a range of stakeholders, from engineers to executives
- A strategic mindset with a drive to build products that solve real problems at scale
“Nice to Have” Skills and Experience
- Experience with AI deployment in edge, embedded, cloud, or mobile environments
- Exposure to AI frameworks (e.g., TensorFlow, PyTorch), ML compilers, or hardware accelerators
- Background in developer tooling, ML model optimization, or platform product management
- Prior involvement in launching or scaling AI or infrastructure products
Various locations available
Adobe is looking for a Machine Learning intern who will apply AI and machine learning techniques to big-data problems to help Adobe better understand, lead and optimize the experience of its customers.
By using predictive models, experimental design methods, and optimization techniques, you will be working on the research and development of exciting projects like real-time online media optimization, sales operation analytics, customer churn scoring and management, customer understanding, product recommendation and customer lifetime value prediction.
All 2026 Adobe interns will be co-located hybrid. This means that interns will work between their assigned office and home. Interns will be based in the office where their manager and/or team are located, where they will get the most support to ensure collaboration and the best employee experience. Managers and their organization will determine the frequency they need to go into the office to meet priorities.
What You’ll Do
- Develop predictive models on large-scale datasets to address various business problems with statistical modeling, machine learning, and analytics techniques.
- Develop and implement scalable, efficient, and interpretable modeling algorithms that can work with large-scale data in production systems
- Collaborate with product management and engineering groups to develop new products and features.
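To illustrate one of the example projects named above (customer churn scoring), here is a minimal, hedged baseline using scikit-learn, one of the tools listed below. The features and labels are synthetic placeholders, not Adobe data.

```python
# Minimal sketch: a churn-scoring baseline with scikit-learn on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([
    rng.poisson(5, n),          # e.g., sessions in last 30 days (illustrative)
    rng.exponential(20.0, n),   # e.g., days since last purchase (illustrative)
    rng.uniform(0, 500, n),     # e.g., lifetime spend (illustrative)
])
# Synthetic churn label: more likely with low activity and long inactivity.
p = 1 / (1 + np.exp(-(-0.4 * X[:, 0] + 0.05 * X[:, 1] - 0.002 * X[:, 2] + 0.5)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]  # churn probability per customer
print("holdout AUC:", round(roc_auc_score(y_te, scores), 3))
```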
What You Need to Succeed
- Currently enrolled full time and pursuing a Master’s or PhD degree in Computer Science or Computer Engineering (or equivalent experience), with an expected graduation date between December 2026 and June 2027
- Good understanding of statistical modeling, machine learning, deep learning, or data analytics concepts.
- Proficient in one or more programming languages such as Python, Java, and C
- Familiar with one or more machine learning or statistical modeling tools such as R, MATLAB, and scikit-learn
- Strong analytical and quantitative problem-solving ability.
- Excellent communication and relationship skills; a team player
- Ability to participate in a full-time internship between May and September
Remote - Americas
Machine Learning Engineer - HSTU
Join Shopify's innovative team as we work on the development and implementation of state-of-the-art HSTU (Hierarchical Sequential Transduction Unit) models to recommend the best growth drivers and actions for merchants and buyers. You'll play a pivotal role in solving high-impact data problems that directly improve merchant success and consumer experience. As a Machine Learning Engineering (MLE) lead or individual contributor, you'll be at the forefront of building AI solutions that anticipate merchant needs and deliver personalization for 100M+ shoppers.
Key Responsibilities:
- Develop and deploy Generative AI, natural language processing, and HSTU-based recommendation models at scale
- Design and implement scalable AI/ML system architectures supporting these models
- Build sophisticated inference pipelines that process billions of events and deliver real-time recommendations
- Implement data pipelines for model training, fine-tuning, and evaluation across diverse data sources (merchant events, consumer interactions, payment sequences)
- Experiment with novel architectures
- Optimize for production through advanced techniques like negative sampling, ANN search, and distributed GPU training (see the sketch after this list)
- Collaborate cross-functionally with product teams, data scientists, and infrastructure engineers to deliver measurable business impact
- Communicate effectively with both technical and non-technical audiences, translating complex ML concepts into actionable insights
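As referenced in the production-optimization item above, here is a minimal sketch of in-batch negative sampling for a retrieval-style recommender in PyTorch. The toy embeddings stand in for the outputs of an HSTU-style sequence encoder, and the function name is illustrative.

```python
# Minimal sketch: in-batch negative sampling for a retrieval-style recommender.
import torch
import torch.nn.functional as F

def in_batch_softmax_loss(user_emb, item_emb, temperature=0.1):
    """user_emb, item_emb: (B, D) embeddings for B (user, positive item) pairs.
    Every other item in the batch serves as a negative for each user."""
    user_emb = F.normalize(user_emb, dim=-1)
    item_emb = F.normalize(item_emb, dim=-1)
    logits = user_emb @ item_emb.T / temperature  # (B, B) similarity matrix
    labels = torch.arange(logits.size(0))         # diagonal entries are positives
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for encoder outputs.
B, D = 32, 64
user_emb = torch.randn(B, D, requires_grad=True)
item_emb = torch.randn(B, D, requires_grad=True)
loss = in_batch_softmax_loss(user_emb, item_emb)
loss.backward()
print("loss:", float(loss))
```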
Qualifications:
- Mastery of recommendation systems, generative AI, or LLMs
- End-to-end experience in training, evaluating, testing, and deploying machine learning products at scale.
- Experience in building data pipelines and driving ETL design decisions using disparate data sources.
- Proficiency in Python, shell scripting, streaming and batch data pipelines, vector databases, dbt, BigQuery, Bigtable (or equivalents), and orchestration tools.
- Experience with running machine learning in parallel environments (e.g., distributed clusters, GPU optimization).
At Shopify, we pride ourselves on moving quickly—not just in shipping, but in our hiring process as well. If you’re ready to apply, please be prepared to interview with us within the week. Our goal is to complete the entire interview loop within 30 days. You will be expected to complete a pair programming interview, using your own IDE. This role may require on-call work.
Ready to redefine e-commerce through AI innovation? Join the team that’s making commerce better for everyone.
We are Bagel Labs - a distributed machine learning research lab working toward open-source superintelligence.
Role Overview
You will design and optimize distributed diffusion model training and serving systems.
Your mission is to build scalable, fault-tolerant infrastructure that serves open-source diffusion models across multiple nodes and regions with efficient adaptation support.
Key Responsibilities
- Design and implement distributed diffusion inference systems for image, video, and multimodal generation.
- Architect high-availability clusters with failover, load balancing, and dynamic batching for variable resolutions.
- Build monitoring and observability systems for denoising steps, memory usage, generation latency, and CLIP score tracking.
- Integrate with open-source frameworks such as Diffusers, ComfyUI, and InvokeAI.
- Implement and optimize rectified flow, consistency distillation, and progressive distillation.
- Design distributed systems for ControlNet, IP-Adapter, and multimodal conditioning at scale.
- Build infrastructure for LoRA/LyCORIS adaptation with hot-swapping and memory-efficient merging (see the sketch after this list).
- Optimize VAE decoding pipelines and implement tiled/windowed generation for ultra-high-resolution outputs.
- Document architectural decisions, review code, and publish technical deep-dives on blog.bagel.com.
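As referenced in the LoRA/LyCORIS item above, here is a minimal sketch of LoRA hot-swapping on a diffusion pipeline, assuming a recent Diffusers release with the PEFT-backed LoRA API. The checkpoint id, adapter paths, and adapter names are placeholders, not Bagel Labs assets.

```python
# Minimal sketch: LoRA hot-swapping on a diffusion pipeline with diffusers.
# Assumes a recent diffusers release with the PEFT-backed LoRA API.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load two adapters once; each gets a name so it can be toggled without reloading.
pipe.load_lora_weights("path/to/style_a_lora", adapter_name="style_a")  # placeholder
pipe.load_lora_weights("path/to/style_b_lora", adapter_name="style_b")  # placeholder

# Serve a request with adapter A only...
pipe.set_adapters(["style_a"], adapter_weights=[0.8])
image_a = pipe("a watercolor fox", num_inference_steps=30).images[0]

# ...then hot-swap to a weighted blend of A and B for the next request,
# without rebuilding or re-downloading the pipeline.
pipe.set_adapters(["style_a", "style_b"], adapter_weights=[0.3, 0.7])
image_b = pipe("a watercolor fox", num_inference_steps=30).images[0]

pipe.disable_lora()  # fall back to the base model for un-adapted traffic
```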
Who You Might Be
You understand distributed systems and diffusion architectures deeply.
You’re excited about the evolution from DDPM to flow matching to consistency models, and you enjoy building infrastructure that handles complex, variable compute workloads.
Required Skills
- 5+ years in distributed systems or production ML serving.
- Hands-on experience with Diffusers, ComfyUI, or similar frameworks in production.
- Deep understanding of diffusion architectures (U-Net, DiT, rectified flows, consistency models).
- Experience with distributed GPU orchestration for high-memory workloads.
- Proven record of optimizing generation latency (CFG, DDIM/DPM solvers, distillation).
- Familiarity with attention optimization (Flash Attention, xFormers, memory-efficient attention).
- Strong grasp of adaptation techniques (LoRA, LyCORIS, textual inversion, DreamBooth).
- Skilled in variable-resolution generation and dynamic batching strategies.
Bonus Skills
- Contributions to open-source diffusion frameworks or research.
- Experience with video diffusion models and temporal consistency optimization.
- Knowledge of quantization techniques (INT8, mixed precision) for diffusion models.
- Experience with SDXL, Stable Cascade, Würstchen, or latent consistency models.
- Distributed training using EDM, v-prediction, or zero-terminal SNR.
- Familiarity with CLIP guidance, perceptual loss, and aesthetic scoring.
- Experience with real-time diffusion inference (consistency or adversarial distillation).
- Published work or talks on diffusion inference optimization.
What We Offer
- Top-of-market compensation
- A deeply technical culture where bold ideas are built, not just discussed
- Remote flexibility within North American time zones
- Ownership of work shaping decentralized AI
- Paid travel to leading ML conferences worldwide
Apply now - help us build the infrastructure for open-source superintelligence.
Amsterdam
As a Quantitative Research Intern, you will get to work with our research team of mathematicians, scientists, and technologists to help develop the models that drive Optiver’s trading. You will tackle a practical research challenge that has impact and directly influences Optiver’s trading decisions. In our business, where the markets are always evolving, you will use your skills to predict their movements.
What you’ll do
Led by our in-house education team, you will delve into trading fundamentals and engage in research projects that make a real difference. You will enhance your understanding of trading principles and gain hands-on experience by trading on live markets using real Optiver technology, with simulated capital. During the ten-week internship, you will get support from experienced researchers on your research project, providing exposure to a variety of areas, including:
• Deep dives into trading and research fundamentals, from theoretical concepts to financial markets, strategies and cutting-edge technology
• Using statistical models and machine learning to develop trading algorithms
• Leveraging big data technologies to analyse trading strategies and financial instruments to identify trading opportunities
• Combining quantitative analysis and high-performance implementation to ensure the efficiency and accuracy of your models
• Gaining exposure to various trading and research desks and experiencing the financial markets first-hand
Based on your performance during the internship, you could receive an offer to join our firm full-time after your studies.
What you’ll get
You’ll join a culture of collaboration and excellence, where you’ll be surrounded by curious thinkers and creative problem solvers. Motivated by a passion for continuous improvement, you’ll thrive in a supportive, high-performing environment alongside talented colleagues, working collectively to tackle the toughest problems in the financial markets. In addition, you’ll receive:
• A highly competitive internship compensation package
• Optiver-covered flights and accommodation in the city centre for the duration of the internship
• Extensive office perks, including breakfast and lunch, world-class barista coffee and Friday afternoon drinks
• The opportunity to participate in sports and leisure activities, along with social events exclusively organised for your intern cohort
Who you are
• Penultimate-year student in Mathematics, Statistics, Computer Science, Physics or a related STEM field, with the ability to work full time upon graduation in 2027
• Solid foundation in mathematics, probability and statistics
• Excellent research, analytical and modelling skills
• Independent research experience
• Proficiency in any programming language
• Knowledge of machine learning, time-series analysis and pattern recognition is a plus
• Strong interest in working in a fast-paced, collaborative environment
• Fluent in English with strong written and verbal communication skills
Diversity statement
Optiver is committed to diversity and inclusion. We encourage applications from candidates from any and all backgrounds, and we welcome requests for reasonable adjustments during the process to ensure that you can best demonstrate your abilities. Please let us know if you would like to request any reasonable adjustments by contacting the Recruitment team via the contact form, selecting “Reasonable Adjustments” as the subject of your inquiry.
For answers to some of our most frequently asked questions, refer to our Campus FAQs.
For applicants based in India, our entry route is via the placement office internship hiring season (July/August).
*We accept one application per role per year. If you have previously applied to this position during this season and have been unsuccessful, you can reapply once the next recruitment season begins in 2026.