

NeurIPS 2025 Career Opportunities

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.


San Jose, CA, USA


Join Adobe as a skilled and proactive Machine Learning Ops Engineer to drive the operational reliability, scalability, and performance of our AI systems! This role is foundational in ensuring our AI systems operate seamlessly across environments while meeting the needs of both developers and end users. You will lead efforts to automate and optimize the full machine learning lifecycle—from data pipelines and model deployment to monitoring, governance, and incident response.

What You'll Do

  • Model Lifecycle Management: Manage model versioning, deployment strategies, rollback mechanisms, and A/B testing frameworks for LLM agents and RAG systems. Coordinate model registries, artifacts, and promotion workflows in collaboration with ML Engineers.

  • Monitoring & Observability: Implement real-time monitoring of model performance (accuracy, latency, drift, degradation). Track conversation quality metrics and user feedback loops for production agents. (A minimal illustrative sketch of one drift check follows this list.)

  • CI/CD for AI: Develop automated pipelines for model/agent testing, validation, and deployment. Integrate unit/integration tests into model and workflow updates for safe rollouts.

  • Infrastructure Automation: Provision and manage scalable infrastructure (Kubernetes, Terraform, serverless stacks). Enable auto-scaling, resource optimization, and load balancing for AI workloads.

  • Data Pipeline Management: Craft and maintain data ingestion pipelines for both structured and unstructured sources. Ensure reliable feature extraction, transformation, and data validation workflows.

  • Performance Optimization: Monitor and optimize AI stack performance (model latency, API efficiency, GPU/compute utilization). Drive cost-aware engineering across inference, retrieval, and orchestration layers.

  • Incident Response & Reliability: Build alerting and triage systems to identify and resolve production issues. Maintain SLAs and develop rollback/recovery strategies for AI services.

  • Compliance & Governance: Enforce model governance, audit trails, and explainability standards. Support documentation and regulatory frameworks (e.g., GDPR, SOC 2, internal policy alignment).
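By way of illustration only, here is a minimal sketch of one common drift check relevant to the Monitoring & Observability bullet above: the Population Stability Index (PSI) compares a live feature distribution against a training-time reference. All names, data, and thresholds are hypothetical and do not reflect Adobe's actual stack.

```python
# Hypothetical drift check: Population Stability Index (PSI).
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    o_counts, _ = np.histogram(observed, bins=edges)
    e_frac = np.clip(e_counts / e_counts.sum(), 1e-6, None)  # avoid log(0)
    o_frac = np.clip(o_counts / o_counts.sum(), 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

# Toy data standing in for training-time vs. production feature samples.
reference = np.random.normal(0.0, 1.0, 10_000)
live = np.random.normal(0.3, 1.1, 10_000)

# A common rule of thumb treats PSI > 0.2 as significant drift.
if psi(reference, live) > 0.2:
    print("ALERT: feature drift detected; consider retraining or rollback")
```

In practice a check like this would run on a schedule and feed the alerting systems described under Incident Response & Reliability.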

What you need to succeed

  • 3–5+ years in MLOps, DevOps, or ML platform engineering.
  • Strong experience with cloud infrastructure (AWS/GCP/Azure), container orchestration (Kubernetes), and IaC tools (Terraform, Helm).
  • Familiarity with ML model serving tools (e.g., MLflow, Seldon, TorchServe, BentoML).
  • Proficiency in Python and CI/CD automation (e.g., GitHub Actions, Jenkins, Argo Workflows).
  • Experience with monitoring tools (Prometheus, Grafana, Datadog, ELK, Arize AI, etc.).

Preferred Qualifications

  • Experience supporting LLM applications, RAG pipelines, or AI agent orchestration.
  • Understanding of vector databases, embedding workflows, and model retraining triggers.
  • Exposure to privacy, safety, and responsible AI principles in operational contexts.
  • Bachelor's or equivalent experience in Computer Science, Engineering, or a related technical field.

Location Beijing, China


Description

About Us: The Beijing Academy of Artificial Intelligence (BAAI), established in November 2018, is a non-profit research institute dedicated to becoming a global leader in AI innovation. We strive to create the world's premier ecosystem for academic and technological advancement, tackling the most fundamental and critical challenges in the field. BAAI aims to be the source of academic thought, foundational theory, top talent, industrial innovation, and policy for artificial intelligence, fostering sustainable development for humanity, our environment, and intelligence itself.

Open Research Tracks:

  • Multimodal Large Model Researcher: Focus on exploring next-generation vision and multimodal foundation models (e.g., the Emu series). You will research novel algorithms and data systems, dedicated to solving core challenges in multimodal perception and generation.

  • Embodied AI Researcher: Research and develop Vision-Language-Action (VLA) models and hierarchical architectures. You will work on the full pipeline from simulation and synthetic data to real-world deployment, aiming to build powerful embodied AI base models with exceptional generalization capabilities, enabling robots to understand and execute long-horizon, complex instructions in novel environments.

  • Researcher (AI for Science): Leverage AI methods to solve cutting-edge problems in life sciences. You will design and develop new models and algorithms, participate in world-class scientific collaborations, and pioneer breakthroughs from 0 to 1 in the field of biological computation.

We Are Looking For:

  • A Ph.D. or outstanding Master's degree in Computer Science, Artificial Intelligence, Electronic Engineering, Life Sciences, or related fields.
  • Solid foundation and research experience in at least one of the following areas:
    - Multimodal: Deep understanding of mainstream large models and strong algorithm implementation skills.
    - Embodied AI: Familiarity with VLA models, mainstream simulators, and experience with pre-training, fine-tuning, or real-world deployment.
    - AI for Science: Strong mathematical foundation and machine learning knowledge, with a passion for solving life science problems.
  • Proven Research Excellence: A track record of publications at top-tier conferences such as NeurIPS, ICML, ICLR, CVPR, ICRA, or RSS, or experience leading high-impact open-source projects.

What We Offer:

  • Work on the Cutting Edge: Confront the field's most challenging problems. Your work will directly contribute to breakthroughs in next-generation AI.
  • Mentorship & Collaboration: Work alongside and receive guidance from renowned scientists and senior researchers within a world-class team.
  • Freedom & Resources: Enjoy an atmosphere of academic freedom and access to abundant, state-of-the-art computational resources to support your ambitious research ideas.
  • Global Impact: Publish your research at leading global conferences and see it potentially transformed into projects that advance industry and science.

How to Apply: Please send your CV, representative papers, or project portfolio to Zstar@baai.ac.cn. Use the email subject line: "NeurIPS - Z star - [Your Desired Track] - [Your Name]" (e.g., NeurIPS - Z star - Multimodal Large Model - Xiao Zhi).

Location United States


Description

As part of the OCI Applied Science group, our objective is to create innovations that power internal and external solutions. The Multimodal AI team is working on developing cutting-edge AI solutions using Oracle's industry-leading GPU-based AI clusters to disrupt industry verticals and push the state of the art in Multimodal AI research. As a Principal Applied Scientist in the Applied Science team, you will be architecting, building, and deploying cutting-edge, high-quality AI models and solutions at scale. You will work with a team of world-class scientists in exploring new frontiers of Generative AI and collaborate with cross-functional teams, including software engineers and product managers, to deploy these globally for real-world enterprise use cases at the largest scale. You will play a key role in shaping the future of Generative AI & Analytics at Oracle and across the industry. Your contributions will be pivotal in delivering our new Generative AI-powered services for large-scale enterprise customers.

Responsibilities:

  • Research and Development: Conduct in-depth research on image and video generation techniques, including diffusion models, flow-based models, generative adversarial networks (GANs), and other emerging approaches (a minimal illustrative sketch follows this list).
  • Model Development: Design, develop, and train state-of-the-art image generation models that meet the highest quality standards.
  • Team Leadership: Build and mentor a high-performing team of scientists and engineers.
  • Collaboration: Work closely with cross-functional teams to integrate video generation capabilities into various applications and products.
  • Innovation: Identify new opportunities for image generation and explore emerging technologies.
  • Stay Updated: Maintain a deep understanding of industry trends and advancements in video generation.
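For readers unfamiliar with the diffusion models named above, here is a minimal, hypothetical PyTorch sketch of the standard DDPM training objective: predict the noise added to an image at a random diffusion timestep. The `model` is a stand-in for an arbitrary noise-prediction network; this is illustrative only, not Oracle's implementation.

```python
# Hypothetical sketch of the DDPM noise-prediction objective.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # cumulative alpha-bar

def diffusion_loss(model, x0):
    """x0: clean images (B, C, H, W); model predicts noise given (x_t, t)."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise  # forward noising
    return F.mse_loss(model(x_t, t), noise)                 # epsilon-prediction loss
```

Training then reduces to minimizing this loss over batches of clean images with any standard optimizer.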

Qualifications and Experience:

PhD in Computer Science, Mathematics, Statistics, Physics, Linguistics, or a related field (with a dissertation, thesis, or final project centered on Machine Learning and Deep Learning) and 4+ years of relevant experience is preferred but not a must; OR a Master's or Bachelor's degree in a related field with 8+ years of relevant experience.

  • Strong publication record, including as a lead author or reviewer, in top-tier journals or conferences.
  • Extensive experience in image generation, computer vision, and deep learning.
  • Proven track record of leading research and development projects.
  • Strong understanding of machine learning algorithms and architectures.
  • Excellent problem-solving and analytical skills.
  • Strong leadership and communication abilities.

If you are passionate about pushing the boundaries of image generation and have a proven track record of success, we encourage you to apply.

Disclaimer:

Certain US customer or client-facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates.

Range and benefit information provided in this posting is specific to the stated locations only.

US: Hiring Range in USD from: $120,100 to $251,600 per annum. May be eligible for bonus, equity, and compensation deferral.

Oracle maintains broad salary ranges for its roles in order to account for variations in knowledge, skills, experience, market conditions and locations, as well as reflect Oracle’s differing products, industries and lines of business. Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.

Oracle US offers a comprehensive benefits package which includes the following:
1. Medical, dental, and vision insurance, including expert medical opinion
2. Short term disability and long term disability
3. Life insurance and AD&D
4. Supplemental life insurance (Employee/Spouse/Child)
5. Health care and dependent care Flexible Spending Accounts
6. Pre-tax commuter and parking benefits
7. 401(k) Savings and Investment Plan with company match

San Francisco


About this role

We’re looking for a Data Engineer to help design, build, and scale the data infrastructure that powers our analytics, reporting, and product insights. As part of a small but high-impact Data team, you’ll define the architectural foundation and tooling for our end-to-end data ecosystem.

You’ll work closely with engineering, product, and business stakeholders to build robust pipelines, scalable data models, and reliable workflows that enable data-driven decisions across the company. If you are passionate about data infrastructure, and solving complex data problems, we want to hear from you!

Tech stack

Core tools: Snowflake, BigQuery, dbt, Fivetran, Hightouch, Segment
Periphery tools: AWS DMS, Google Datastream, Terraform, GitHub Actions

What you’ll do

Data infrastructure:
  • Design efficient and reusable data models optimized for analytical and operational workloads.
  • Design and maintain scalable, fault-tolerant data pipelines and ingestion frameworks across multiple data sources.
  • Architect and optimize our data warehouse (Snowflake/BigQuery) to ensure performance, cost efficiency, and security.
  • Define and implement data governance frameworks: schema management, lineage tracking, and access control.

Data orchestration:
  • Build and manage robust ETL workflows using dbt and orchestration tools (e.g., Airflow, Prefect); a minimal illustrative sketch follows this list.
  • Implement monitoring, alerting, and logging to ensure pipeline observability and reliability.
  • Lead automation initiatives to reduce manual operations and improve data workflow efficiency.
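As a rough sketch of the orchestration pattern above (assuming Airflow 2.4+ and a dbt project at a hypothetical path), a daily DAG might chain a dbt build to its tests so that failures block downstream consumers:

```python
# Hypothetical Airflow DAG: run dbt models daily, then gate on dbt tests.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_build",          # placeholder name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/analytics",   # hypothetical path
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/analytics",
    )
    dbt_run >> dbt_test  # tests only run after a successful build
```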

Data quality:
  • Develop comprehensive data validation, testing, and anomaly detection systems.
  • Establish SLAs for key data assets and proactively address pipeline or data quality issues.
  • Implement versioning, modularity, and performance best practices within dbt and SQL.

Collaboration & leadership:
  • Partner with product and engineering teams to deliver data solutions that align with downstream use cases.
  • Establish data engineering best practices and serve as a subject matter expert on our data pipelines, models, and systems.

What we’re looking for

  • 5+ years of hands-on experience in a data engineering role, ideally in a SaaS environment.
  • Expert-level proficiency in SQL, dbt, and Python.
  • Strong experience with data pipeline orchestration (Airflow, Prefect, Dagster, etc.) and CI/CD for data workflows.
  • Deep understanding of cloud-based data architectures (AWS, GCP) — including networking, IAM, and security best practices.
  • Experience with event-driven systems (Kafka, Pub/Sub, Kinesis) and real-time data streaming is a plus.
  • Strong grasp of data modeling principles, warehouse optimization, and cost management.
  • Passionate about data reliability, testing, and monitoring — you treat pipelines like production software.
  • Thrive in ambiguous, fast-moving environments and enjoy building systems from the ground up.

Shanghai

Our Research Summer Internship program will give you real insights into how data and research are used to improve global financial markets. Expand your knowledge of the financial markets and solve challenging problems that could impact the way we trade. Plus, if you've excelled over the summer and shown us your potential, you could receive an offer to join us as a graduate quantitative researcher. With Optiver's internship program, your work improving the market starts today.

Who we are: Optiver is a global market maker founded in Amsterdam, with offices in London, Chicago, Austin, New York, Sydney, Shanghai, Hong Kong, Singapore, Taipei and Mumbai. Established in 1986, today we are a leading liquidity provider, with close to 2,000 employees in offices around the world, united in our commitment to improve the market through competitive pricing, execution and risk management. By providing liquidity on multiple exchanges across the world in various financial instruments we participate in the safeguarding of healthy and efficient markets. We provide liquidity to financial markets using our own capital, at our own risk, trading a wide range of products: listed derivatives, cash equities, ETFs, bonds and foreign currencies.

What you'll do: As a Quantitative Research Intern, you'll work with our researchers and traders on real-life research projects that directly impact the way we trade. Our quantitative researchers are responsible for the accuracy of our core pricing models. They work closely with our traders to analyse and improve all facets of our trading strategies. As part of the internship, you'll get to:
  • Perform extensive analysis in order to implement new algorithms that support and improve our existing models.
  • Develop risk management and portfolio optimisation tools to improve our execution algorithm.
  • Work with petabytes of low-latency, high-frequency market data sets.
  • Collaborate with our developers to test and drive changes to our trading system that will improve our ability to make successful trades.
  • Keep up to date on the latest developments in new models and technologies.
  • No previous experience in trading or financial markets? You bring the passion and we'll have the training to support you along the way.

Who you are:
  • PhD student who will graduate in 2028.
  • Major in a highly quantitative field.
  • Strong knowledge of probability and statistics; experience in machine learning and time-series analysis is a big plus.
  • Programming experience in any language (C, C++, Python, Java, etc.), ideally with a preference for Python.
  • Ability to carry a project on your own in a structured way within a short timeframe.
  • Experience working with large datasets.
  • Both a self-motivated contributor and a team player, with an entrepreneurial attitude and hunger for success.
  • Interest in the trading/quantitative finance industry.

What you'll get:
  • The chance to work alongside diverse and intelligent peers in a rewarding environment.
  • Competitive remuneration, including an attractive bonus structure and additional leave entitlements.
  • Training, mentorship and personal development opportunities.
  • Daily breakfast, lunch and snacks.
  • Gym membership, sports and leisure activities, plus weekly in-house chair massages.
  • Regular social events, clubs and Friday afternoon drinks.

How to apply: If you're interested in taking your career to the next level and working on one of the most exciting trading floors in mainland China, apply now via the form below. While we love how bilingual our teams are, be sure to submit the following application materials in English:
  • Resume
  • Academic transcripts, including Bachelor's, Master's, and PhD if applicable

For any other inquiries, please email chinacareers@optiver.com.au.

Location United States


Description

At Oracle Cloud Infrastructure (OCI), we are building the future of cloud computing—designed for enterprises, engineered for performance, and optimized for AI at scale. We are a fast-paced, mission-driven team within one of the world's largest cloud platforms. The Multimodal AI team in OCI Applied Science is working on developing cutting-edge AI solutions using Oracle's industry-leading GPU-based AI clusters to disrupt industry verticals and push the state of the art in Multimodal and Video GenAI research. You will work with a team of world-class scientists in exploring new frontiers of Generative AI and collaborate with cross-functional teams, including software engineers and product managers, to deploy these globally for real-world enterprise use cases at the largest scale.

Responsibilities:
  • Contribute to the development and optimization of distributed multi-node training infrastructure (a minimal illustrative sketch follows this list).
  • Stay Updated: Maintain a deep understanding of industry trends and advancements in video generation, multimodal understanding, and pretraining workflows and paradigms.
  • Model Development: Design, develop, and train state-of-the-art image and video generation models that meet the highest quality standards.
  • Collaborate with cross-functional teams to support scalable and secure deployment pipelines.
  • Assist in diagnosing and resolving production issues, improving observability and reliability.
  • Write maintainable, well-tested code and contribute to documentation and design discussions.
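As a hedged illustration of the multi-node training work in the first bullet, a minimal PyTorch DistributedDataParallel setup, as typically launched with torchrun, looks roughly like this; the Linear model is a placeholder, and none of this is Oracle's code.

```python
# Hypothetical multi-node training skeleton, launched e.g. via:
#   torchrun --nnodes=2 --nproc-per-node=8 train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")      # reads RANK/WORLD_SIZE set by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])  # all-reduces gradients across ranks

    # ... data loading and training loop elided ...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```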

Minimum Qualifications
  • BS in Computer Science or a related technical field.
  • 6+ years of experience in backend software development with cloud infrastructure.
  • Strong proficiency in at least one language such as Go, Java, Python, or C++.
  • Experience building and maintaining distributed services in a production environment.
  • Familiarity with Kubernetes, container orchestration, and CI/CD practices.
  • Solid understanding of computer science fundamentals such as algorithms, operating systems, and networking.

Preferred Qualifications
  • MS in Computer Science.
  • Experience in large-scale multi-node distributed training of LLMs and multimodal models.
  • Knowledge of cloud-native observability tools and scalable service design.
  • Interest in compiler or systems-level software design is a plus.

Why Join Us
  • Build mission-critical AI infrastructure with real-world impact.
  • Work closely with a collaborative and experienced global team.
  • Expand your knowledge in AI, cloud computing, and distributed systems.
  • Contribute to one of Oracle's most innovative and fast-growing initiatives.

Disclaimer:

Certain US customer or client-facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates.

Range and benefit information provided in this posting is specific to the stated locations only.

US: Hiring Range in USD from: $96,800 to $223,400 per annum. May be eligible for bonus and equity.

Oracle maintains broad salary ranges for its roles in order to account for variations in knowledge, skills, experience, market conditions and locations, as well as reflect Oracle’s differing products, industries and lines of business. Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.

Oracle US offers a comprehensive benefits package which includes the following:
1. Medical, dental, and vision insurance, including expert medical opinion
2. Short term disability and long term disability
3. Life insurance and AD&D
4. Supplemental life insurance (Employee/Spouse/Child)
5. Health care and dependent care Flexible Spending Accounts
6. Pre-tax commuter and parking benefits
7. 401(k) Savings and Investment Plan with company match
8. Paid time off: Flexible Vacation is provided to all eligible employees assigned to a salaried (non-overtime eligible) position. Accrued Vacation is provided to all other employees.

Bala Cynwyd (Philadelphia Area), Pennsylvania United States


Overview

We’re looking for a Machine Learning Systems Engineer to strengthen the performance and scalability of our distributed training infrastructure. In this role, you'll work closely with researchers to streamline the development and execution of large-scale training runs, helping them make the most of our compute resources. You’ll contribute to building tools that make distributed training more efficient and accessible, while continuously refining system performance through careful analysis and optimization. This position is a great fit for someone who enjoys working at the intersection of distributed systems and machine learning, values high-performance code, and has an interest in supporting innovative machine learning efforts.

What You’ll Do

  • Collaborate with researchers to enable them to develop systems-efficient models and architectures
  • Apply the latest techniques to achieve strong hardware efficiency for our internal training runs
  • Create tooling to help researchers distribute their training jobs more effectively
  • Profile and optimize our training runs (a minimal illustrative sketch follows this list)
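As a minimal sketch of the profiling work above, torch.profiler can attribute a training step's time to CPU versus GPU kernels and suggest where optimization effort should go; the model here is a toy placeholder, not Susquehanna's.

```python
# Hypothetical profiling of a few training steps with torch.profiler.
import torch
from torch.profiler import ProfilerActivity, profile

model = torch.nn.Linear(4096, 4096).cuda()
x = torch.randn(64, 4096, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for _ in range(10):
        model(x).sum().backward()   # toy forward + backward pass

# Top GPU-time consumers; gaps between kernels often indicate input stalls.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```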

What we're looking for
  • Experience with large-scale ML training pipelines and distributed training frameworks
  • Strong software engineering skills in Python
  • Passion for diving deep into systems implementations and understanding fundamentals to improve their performance and maintainability
  • Experience improving resource efficiency across distributed computing environments through profiling, benchmarking, and system-level optimizations

Why Join Us?

Susquehanna is a global quantitative trading firm that combines deep research, cutting-edge technology, and a collaborative culture. We build most of our systems from the ground up, and innovation is at the core of everything we do. As a Machine Learning Systems Engineer, you’ll play a critical role in shaping the future of AI at Susquehanna — enabling research at scale, accelerating experimentation, and helping unlock new opportunities across the firm.

Bala Cynwyd (Philadelphia Area), Pennsylvania United States


Overview

We’re looking for a Machine Learning Systems Engineer to help build the data infrastructure that powers our AI research. In this role, you'll develop reliable, high-performance systems for handling large and complex datasets, with a focus on scalability and reproducibility. You’ll partner with researchers to support experimental workflows and help translate evolving needs into efficient, production-ready solutions. The work involves optimizing compute performance across distributed systems and building low-latency, high-throughput data services. This role is ideal for someone with strong engineering instincts, a deep understanding of data systems, and an interest in supporting innovative machine learning efforts.

What You’ll Do

  • Design and implement high-performance data pipelines for processing large-scale datasets, with an emphasis on reliability and reproducibility
  • Collaborate with researchers to translate their requirements into scalable, production-grade systems for AI experimentation
  • Optimize resource utilization across our distributed computing infrastructure through profiling, benchmarking, and systems-level improvements
  • Implement low-latency, high-throughput sampling for models (a minimal illustrative sketch follows this list)
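As a rough illustration of the low-latency, high-throughput sampling concern above, a PyTorch DataLoader can be configured to overlap preprocessing with GPU compute; the dataset below is a synthetic stand-in, and the specific settings are hypothetical.

```python
# Hypothetical high-throughput input pipeline with worker prefetching.
import torch
from torch.utils.data import DataLoader, Dataset

class RandomTensors(Dataset):
    """Synthetic stand-in for a real sharded dataset."""
    def __len__(self):
        return 100_000
    def __getitem__(self, i):
        return torch.randn(3, 224, 224)

loader = DataLoader(
    RandomTensors(),
    batch_size=256,
    num_workers=8,            # parallel decode/preprocess processes
    pin_memory=True,          # enables fast async host-to-GPU copies
    prefetch_factor=4,        # batches queued ahead per worker
    persistent_workers=True,  # avoid per-epoch worker startup cost
)

for batch in loader:
    batch = batch.cuda(non_blocking=True)  # copy overlaps with next-batch prefetch
    break  # one batch is enough for the sketch
```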

What we're looking for

  • Experience building and maintaining data pipelines and ETL systems at scale
  • Experience with large-scale ML infrastructure and familiarity with training and inference workflows
  • Strong understanding of best practices in data management and processing
  • Knowledge of systems-level programming and performance optimization
  • Proficiency in software engineering in Python
  • Understanding of AI/ML workloads, including data preprocessing, feature engineering, and model evaluation

Why Join Us?

Susquehanna is a global quantitative trading firm that combines deep research, cutting-edge technology, and a collaborative culture. We build most of our systems from the ground up, and innovation is at the core of everything we do. As a Machine Learning Systems Engineer, you’ll play a critical role in shaping the future of AI at Susquehanna — enabling research at scale, accelerating experimentation, and helping unlock new opportunities across the firm.

The Chan Zuckerberg Institute for Advanced Biological Imaging (CZ Imaging Institute) is building the next generation of imaging technologies to transform our understanding of biology in health and disease. Over the next decade, we aim to create breakthrough systems — spanning hardware, software, probes, and computational tools — that will empower scientists worldwide.

As part of the Chan Zuckerberg Initiative’s Imaging Program, the CZ Imaging Institute (https://czii.org/) combines engineering, computation, and biology to tackle grand challenges in biological imaging. Our work is shared broadly with the global scientific community through open science, direct collaborations, and partnerships.

The CZ Imaging Institute will create breakthrough technologies — hardware, software, biological probes, data, and platforms — that will be made available to the scientific community and adopted worldwide through a combination of direct access to the institute, open sharing of advances, and commercial partnerships. Researchers will collaboratively develop breakthrough biological imaging systems centered around grand challenges that push the boundaries of what we can see and measure.

We are seeking a creative and motivated Data Scientist to develop and apply cutting-edge computational methods for complex imaging problems. This role is ideal for candidates with expertise in applied mathematics, computational science, or physics, combined with modern machine learning approaches. You will design algorithms, build scalable tools, and collaborate across disciplines to advance scientific discovery.

This position is on-site in Redwood City, CA.

What You'll Do
  • Develop and apply algorithms for solving inverse problems in imaging and related computational challenges (a minimal illustrative sketch follows this list).
  • Use optimization, applied mathematics, and physics-inspired modeling to extract insights from high-dimensional data.
  • Incorporate modern machine learning and deep learning techniques to improve reconstruction, denoising, and feature detection.
  • Build robust, scalable pipelines for large-scale biological datasets.
  • Collaborate with biologists, microscopists, and engineers to design solutions aligned with scientific goals.
  • Contribute to technical documentation, publications, and presentations.
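To make the inverse-problem bullet concrete, here is a small, self-contained sketch: recovering a signal from noisy linear measurements y = Ax + noise by gradient descent on a Tikhonov-regularized least-squares objective, min_x (1/2)||Ax - y||^2 + (lambda/2)||x||^2. The forward operator A, sizes, and constants are all hypothetical.

```python
# Hypothetical linear inverse problem solved by regularized gradient descent.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 400))        # toy forward (measurement) operator
x_true = np.zeros(400)
x_true[::40] = 1.0                     # sparse ground-truth signal
y = A @ x_true + 0.01 * rng.normal(size=200)   # noisy measurements

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the spectral norm of A
x = np.zeros(400)
for _ in range(500):
    grad = A.T @ (A @ x - y) + lam * x  # gradient of the regularized objective
    x -= step * grad

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Real imaging problems swap the toy A for a physical forward model (e.g., a projection operator in tomography) and often add structure-aware regularizers.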

What You'll Bring
  • M.S. or Ph.D. in Applied Mathematics, Computer Science, Physics, Engineering, or a related field.
  • 1–5 years of relevant experience.
  • Strong foundation in inverse problems, optimization, or computational modeling.
  • Experience in machine learning and deep learning (e.g., PyTorch, TensorFlow).
  • Proficiency in Python or C++, and familiarity with scientific computing libraries.
  • Strong analytical, problem-solving, and communication skills.
  • Experience with imaging data (e.g., cryo-EM, tomography, or related modalities).
  • Familiarity with convex optimization, variational methods, or numerical PDEs.
  • Knowledge of GPU computing and high-performance environments.
  • Track record of scientific publications or open-source contributions.

NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society — from gaming to robotics, self-driving cars to life-saving healthcare, climate change to virtual worlds where we can all connect and create.

Our internships offer an excellent opportunity to expand your career and get hands-on experience with one of our industry-leading Deep Learning Computer Architecture teams. We're seeking strategic, ambitious, hard-working, and creative individuals who are passionate about helping us tackle challenges no one else can solve.

Throughout the 12-week minimum full-time internship, students will work on projects that have a measurable impact on our business. We're looking for students pursuing a Bachelor's, Master's, or PhD degree in a relevant or related field.

What we need to see: Must be actively enrolled in a university pursuing a Bachelor's, Master's, or PhD degree in Electrical Engineering, Computer Engineering, or a related field, for the entire duration of the internship.

Course or internship experience related to the following areas could be required:
  • Computer Architecture experience in one or more of these focus areas: GPU Architecture, CPU Architecture, Deep Learning, GPU Computing, Parallel Programming, or High-Performance Computing Systems
  • GPU Computing (CUDA, OpenCL, OpenACC), GPU Memory Systems, Deep Learning Frameworks (PyTorch, TensorFlow, Keras, Caffe), HPC (MPI, OpenMP)
  • Modelling/Performance Analysis, Parallel Processing, Neural Network Architectures, GPU Acceleration, Deep Learning Neural Networks, Compiler Programming
  • Performance Modeling, Profiling, Optimizing, and/or Analysis

Depending on the internship role, prior experience or knowledge requirements could include the following programming skills and technologies:
  • C, C++, Python, Perl, GPU Computing (CUDA, OpenCL, OpenACC), Deep Learning Frameworks (PyTorch, TensorFlow, Caffe), HPC (MPI, OpenMP)

Our internship hourly rates are standard pay based on the position, your location, year in school, degree, and experience. The hourly rate for our interns is 20 USD - 71 USD.

You will also be eligible for Intern benefits.

Applications are accepted on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.