

NeurIPS 2025 Career Opportunities

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.


San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; New York, NY, US


About Pinterest:

Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.

Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.

Within the Monetization ML Engineering organization, we connect the dots between the aspirations of Pinners and the products offered by our partners. As a Distinguished Machine Learning Engineer, you will be responsible for developing and executing a vision for the evolution of the machine learning technology stack for Monetization. You will tackle new challenges in machine learning and deep learning to advance the statistical models that power ads performance and ads delivery, bringing Pinners and partners together in this unique marketplace.


What you'll do:

  • Lead user-facing projects involving end-to-end engineering development across frontend, backend, and ML.
  • Improve relevance and increase long-term value for Pinners, Partners, Creators, and Pinterest through efficient Ads Delivery.
  • Improve our engineering systems' latency, capacity, and stability while reducing infrastructure cost.
  • Collaborate with product managers and designers to develop engineering solutions for user-facing product improvements.
  • Collaborate with other engineering teams (infra, user modeling, content understanding) to leverage their platforms and signals.
  • Champion engineering excellence and a data-driven culture, mentor senior technical talent, and represent Pinterest externally in the tech and AI communities.

What we’re looking for:

  • Degree in computer science, machine learning, statistics, or a related field.
  • 15+ years of working experience in engineering teams that build large-scale, ML‑driven, user‑facing products.
  • Experience leading cross‑team engineering efforts that improve user experience in products.
  • Understanding of an object‑oriented programming language such as Go, Java, C++, or Python.
  • Experience with large‑scale data processing (e.g., Hive, Scalding, Spark, Hadoop, MapReduce).
  • Strong software engineering and mathematical skills, with knowledge of statistical methods.
  • Experience working across frontend, backend, and ML systems for large‑scale user‑facing products, with a good understanding of how they all work together.
  • Hands‑on experience with large‑scale online e‑commerce systems.
  • Background in computational advertising is preferred.
  • Excellent cross‑functional collaboration and stakeholder communication skills, with strong execution in project management.

New York


Machine Learning Research Engineer

The D. E. Shaw group seeks a machine learning research engineer to creatively apply their knowledge of ML and software engineering to design and build computational architectures for high-performance, large-scale knowledge discovery in financial data. In this dynamic role, the engineer will leverage cutting-edge ML research to turn new ideas into proof-of-concept implementations, solve tough low-level engineering problems, and set up infrastructure for broader, longer-term impact. This position will play a key role in improving the efficiency, scalability, and reliability of the firm’s ML efforts, and will directly impact the firm’s systematic research through ML engineering contributions, all within a collaborative and engaging environment.

What you'll do day-to-day

  • Rapidly prototype, implement, and evaluate state-of-the-art machine learning techniques.
  • Drive the computational agenda for ongoing and future ML projects.
  • Tackle complex engineering problems across software and hardware layers, setting technical direction and anticipating architectural needs.
  • Deploy ML models into real-world systems where they have direct, measurable impact on decision-making and trading.
  • Create compelling proof-of-concept systems, demonstrate them internally, and collaborate with others for development.
  • Partner with researchers to design and implement efficient training workflows, enabling rapid experimentation with deep learning models.

Who we're looking for

  • Bachelor’s degree or higher is required.
  • Proven track record of collaborating with researchers to translate ML ideas into high-performance solutions.
  • Experience driving computational and architectural innovation by rapidly prototyping and demonstrating novel ML ideas within a high-performance environment.
  • Interest in staying current with ML research and swift application of new techniques.
  • Expertise in performance optimization, low-level engineering, and GPU programming and libraries (e.g., PyTorch, JAX, CUDA, XLA, Triton, or PTX).
  • Demonstrated ability to quickly solve complex computational problems, create inspiring technical demos, and transition work to broader teams.
  • Proactive approach in driving agendas and anticipating engineering bottlenecks in large systems.
  • Proficiency in modern ML frameworks, facility with deep learning tooling, and a solid understanding of hardware and architectural challenges.
  • The expected annual base salary for this position is 250,000 to 350,000 USD. Our compensation and benefits package includes substantial variable compensation in the form of a year-end bonus (guaranteed in the first year of hire), a sign-on bonus, and benefits including medical and prescription drug coverage, 401(k) contribution matching, wellness reimbursement, family building benefits, and a charitable gift match program.

London

Description - Bloomberg’s Engineering AI department has 350+ AI practitioners building highly sought-after products and features that often require novel innovations. We are investing in AI to build better search, discovery, and workflow solutions using technologies such as transformers, gradient-boosted decision trees, large language models, and dense vector databases. We are expanding our group and seeking highly skilled individuals to contribute to the team (or teams) of Artificial Intelligence (AI) and Software Engineers bringing innovative solutions to AI-driven, customer-facing products.

At Bloomberg, we believe in fostering a transparent and efficient financial marketplace. Our business is built on technology that makes news, research, financial data, and analytics on over 1 billion proprietary and third-party data points published daily -- across all asset classes -- searchable, discoverable, and actionable.

Bloomberg has been building Artificial Intelligence applications that offer solutions to these problems with high accuracy and low latency since 2009. We build AI systems to help process and organize the ever-increasing volume of structured and unstructured information needed to make informed decisions. Our use of AI uncovers signals, helps us produce analytics about financial instruments in all asset classes, and delivers clarity when our clients need it most.

We are looking for Senior GenAI Platform Engineers with strong expertise and passion for building platforms, especially for GenAI systems.

As a Senior GenAI Platform Engineer, you will have the opportunity to create a more cohesive, integrated, and managed GenAI development life cycle to enable the building and maintenance of our ML systems. Our teams make extensive use of open source technologies such as Kubernetes, KServe, MCP, Envoy AI Gateway, Buildpacks and other cloud-native and GenAI technologies. From technical governance to upstream collaboration, we are committed to enhancing the impact and sustainability of open source.

Join the AI Group as a Senior GenAI Platform Engineer and you will have the opportunity to:

  • Architect, build, and diagnose multi-tenant GenAI platform systems
  • Work closely with GenAI application teams to design seamless workflows for continuous model training, inference, and monitoring
  • Interface with GenAI experts to understand workflows, pinpoint and resolve inefficiencies, and inform the next set of features for the platforms
  • Collaborate with open-source communities and GenAI application teams to build a cohesive development experience
  • Troubleshoot and debug user issues
  • Provide operational and user-facing documentation

We are looking for a Senior GenAI Platform Engineer with:

  • Proven years of experience working with an object-oriented programming language (Python, Go, etc.)
  • Experience with GenAI technologies like MCP, A2A, Langgraph, LlamaIndex, Pydantic AI, and OpenAI APIs and SDKs
  • A degree in Computer Science, Engineering, Mathematics, a similar field of study, or equivalent work experience
  • An understanding of Computer Science fundamentals such as data structures and algorithms
  • An honest approach to problem-solving, and the ability to collaborate with peers, stakeholders, and management

UK


Research Engineer - Novel AI applications and Next Generation Hardware

Mission: You will join the hardware team with the goal of supporting novel application areas and AI modes beyond current use cases. Responsibilities include researching the evolving landscape of AI applications and models, analyzing underlying model architectures, and building implementations on Groq. Further responsibilities include analyzing mappings to existing and future hardware, modeling performance, and working cross-functionally with the hardware design team on novel hardware features (e.g., functional units, numeric modes, interconnect, system integration) to unlock novel application areas for Groq. There will be opportunities to participate in a wider range of R&D activities, either internally or externally with key Groq partners.

Responsibilities & opportunities in this role:

  • AI application and model research
  • Performance modeling
  • Cross-functional work with hardware and software teams
  • Next generation hardware architecture development
  • Support for internal and outward-facing R&D

Ideal candidates have/are:

  • Strong foundation in computer science
  • Experience with AI models and applications
  • Knowledge of LLMs and other Gen AI applications
  • Strong foundation in computer architecture and computer arithmetic
  • Python and common ML frameworks such as PyTorch & TensorFlow
  • Experience with performance analysis / modelling
  • Problem-solving mindset

Nice to Have:

  • Experience with scientific computing & HPC
  • Experience in optimizing applications on specialized accelerators (GPU, FPGA, or other custom accelerators)
  • Experience with compiler tools and MLIR
  • Experience in delivering complex projects in a fast-moving environment

Attributes of a Groqster:

  • Humility - Egos are checked at the door
  • Collaborative & Team Savvy - We make up the smartest person in the room, together
  • Growth & Giver Mindset - Learn it all versus know it all, we share knowledge generously
  • Curious & Innovative - Take a creative approach to projects, problems, and design
  • Passion, Grit, & Boldness - No-limit thinking, fueling informed risk taking

Compensation: At Groq, a competitive base salary is part of our comprehensive compensation package, which includes equity and benefits. For this role, the salary range is determined by your location, skills, qualifications, experience, and internal benchmarks. Compensation for candidates outside the USA will depend on the local market.

This position may require access to technology and/or information subject to U.S. export control laws and regulations, as well as applicable local laws and regulations, including the Export Administration Regulations (EAR). To comply with these requirements, candidates for this role must meet all relevant export control eligibility criteria.

Rochester, Minnesota, USA

Mayo Clinic seeks a highly motivated individual to advance the development, validation, and real-world implementation of generative AI systems for clinical decision support in Gastroenterology and Hepatology. This role bridges research and translation into clinical workflows, focusing on building trustworthy AI systems that augment human presence and put the needs of the patient first. Research Fellows will work within a multidisciplinary team of data scientists, physicians, and engineers to design novel generative agentic architectures, develop useful benchmarks, and work together with clinical teams to decrease time to diagnosis and time to treatment. Contact shung.dennis@mayo.edu if interested.

Remote US or Canada


Mission: Join the team that builds and operates Groq’s real-time, distributed inference system delivering large-scale inference for LLMs and next-gen AI applications at ultra-low latency. As a Low-Level Production Engineer, your mission is to ensure reliability, fault tolerance, and operational excellence in Groq’s LPU-powered infrastructure. You’ll work deep in the stack—bridging distributed runtime systems with the hardware—to keep Groq systems fast, stable, and production-ready at scale.

Responsibilities & opportunities in this role:

  • Production Reliability: Operate and harden Groq’s distributed runtime across thousands of LPUs, ensuring uptime and resilience under dynamic global workloads.
  • Low-Level Debugging: Diagnose and resolve hardware-software integration issues in live environments, from datacenter-level events to single component failures.
  • Observability & Diagnostics: Build tools and infrastructure to improve real-time system monitoring, fault detection, and SLO tracking.
  • Automation & Scale: Automate deployment workflows, failover systems, and operational playbooks to reduce overhead and accelerate reliability improvements.
  • Performance & Optimization: Profile and tune production systems for throughput, latency, and determinism; every cycle counts.
  • Cross-Functional Collaboration: Partner with compiler, hardware, infra, and data center teams to deliver robust, fault-tolerant production systems.

Ideal candidates have/are:

  • Proven experience in production engineering across the stack and operating large-scale distributed systems.
  • Deep knowledge of computer architecture, operating systems, and hardware-software interfaces.
  • Skilled in low-level systems programming (C/C++ or Rust), with scripting fluency (Python, Bash, or Go).
  • Comfortable debugging complex issues close to the metal: kernels, firmware, or hardware-aware code paths.
  • Strong background in automation, CI/CD, and building reliable systems that scale.
  • Thrive across environments, from kernel internals to distributed runtimes to data center operations.
  • Communicate clearly, make pragmatic decisions, and take ownership of long-term outcomes.

Nice to have:

  • Experience operating high-performance, real-time systems at scale (ML inference, HPC, or similar).
  • Familiarity with GPUs, FPGAs, or ASICs in production environments.
  • Prior exposure to ML frameworks (e.g., PyTorch) or compiler tooling (e.g., MLIR).
  • Track record of delivering complex production systems in high-impact environments.

Attributes of a Groqster:

  • Humility – Egos are checked at the door
  • Collaborative & Team Savvy – We make up the smartest person in the room, together
  • Growth & Giver Mindset – Learn it all versus know it all, we share knowledge generously
  • Curious & Innovative – Take a creative approach to projects, problems, and design
  • Passion, Grit, & Boldness – No-limit thinking, fueling informed risk taking

About Kumo.ai

Kumo.ai is redefining enterprise AI with foundation models for relational data, enabling organizations to predict, optimize, and act with speed and confidence.
Our mission is simple yet ambitious: make the world’s most important data also its most useful.

At Kumo, we are committed to building cutting-edge products that are also intuitive and easy to use. Our work blends deep technical innovation with thoughtful user-centric design.


Our Culture

We foster an inclusive, collaborative culture where every individual contributes to our shared mission.
We value:

  • Diversity of thought
  • Open and transparent communication
  • Working together to solve meaningful problems
  • Serving our customers with excellence
  • Building a supportive and thriving community

We’re Hiring

We are looking for ML/AI Engineers with experience in one or more of the following:

  • Graph Neural Networks (GNNs)
  • Graph Transformers
  • Agentic Frameworks
  • Applied Machine Learning

If you're excited about building the next generation of AI for relational data, we’d love to talk.

NVIDIA is developing the NVIDIA DRIVE AV Solution (NDAS), powered by the latest advancements in AI and accelerated computing. We are seeking a highly motivated software expert to join our Autonomous Vehicles (AV) Drive-Alpha team in Santa Clara, US. You will drive the engineering execution of feature development, meeting or exceeding the meaningful metric requirements, especially for L2++ and L3/L4.

Drive-Alpha consists of proficient domain experts spanning the full stack of autonomous driving, including perception, fusion, prediction, planning and control, and autonomous models, many with proven development experience for the highly competitive market in key functions like Highway NOA (Navigation on Autopilot) and Urban NOA, as well as point-to-point driving (including parking). The team is responsible for integrating and signing off on NDAS component teams' merge requests/change lists, promoting validated changes into the stable branch, analyzing the root causes of identified regressions, and driving the corrective actions taken by component engineering teams for a productive CI/CD software development process. Team members also integrate tightly with component teams' development, acting as dependency resolvers to deliver the cross-functional improvements that are most impactful to the NDAS product. We nurture teamwork among component teams' engineers and establish positive relationships and communications with partner organizations.

What you’ll be doing:

  • Provide in-depth and insightful technical feedback on the quality of the NDAS L2++/L3/L4 SW stack, based on performance metrics proven through offline-replay and in-car testing.
  • Identify the weak links of the L2++/L3/L4 SW stack and strengthen them.
  • Integrate, test, and sign off on the SW stack's code changes and model updates, and drive the Root-Cause-Corrective-Action (RCCA) process to continuously improve the quality of NDAS SW.
  • Decompose complicated cross-functional problems into actionable items and coordinate a concerted effort among multiple collaborators.
  • Join forces with component team developers when necessary, provide your domain-expert input on solutions to hard problems, and contribute production-quality code to the component code base.

What we need to see:

  • BS/MS in Electrical Engineering, Computer Science, or related fields, or equivalent experience.
  • 5+ years of related experience in software development, with hands-on development experience in AD for automotive.
  • Great coding skills in modern C++ and scripting languages like Python.
  • Deep understanding of L2++/L3/L4 product features in the market.
  • Hands-on experience in debugging AD SW problems.
  • Excellent communication and interpersonal skills, with the ability to thrive in a cross-disciplinary environment.

Ways To Stand Out From The Crowd:

  • Experience working as a hands-on tech lead for one or more autonomous driving components.
  • Hands-on development experience with an SOP-ed AD and/or ADAS product.
  • Rich experience with in-car testing and great intuition for first-level triaging (from symptom to component).
  • Familiarity with CI/CD processes, test automation, Jenkins, and Log-Sim replay.

NVIDIA has some of the most forward-thinking and hardworking people in the world. If you're creative and autonomous, we want to hear from you! Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 136,000 USD - 212,750 USD.

You will also be eligible for equity and benefits.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

New York


Quantitative Researchers (QRs) specialize in a variety of areas, including but not limited to: using sophisticated data analysis skills to build predictive models, driving the construction of complex multi-asset portfolios using large-scale portfolio optimization techniques, and developing sophisticated optimization algorithms. Researchers with an exceptional record of achievement in their respective fields and a drive to apply quantitative techniques to investing are encouraged to apply to GQS.