

NeurIPS 2025 Career Opportunities

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.

Search Opportunities

San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; Remote, US


About Pinterest:

Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.

Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.

At Pinterest Labs, you'll join a world-class team of research scientists and machine learning engineers to tackle cutting-edge challenges in machine learning and artificial intelligence. This role places you at the intersection of applied research and scalable infrastructure, focusing heavily on ML framework and efficiency.

You will conduct research that can be applied across Pinterest engineering teams, engaging in external collaborations and mentoring. Your research focus will specifically target ML efficiency and large-scale infrastructure challenges within high-impact areas such as: generative recommender systems, post-training, reinforcement learning, multi-modality representation learning, and graph neural networks.


What you’ll do:

  • Design, develop, maintain, and enhance advanced machine learning solutions across various key business areas.
  • Lead the technical strategy for optimizing and improving the efficiency of large-scale ML infrastructure.
  • Lead high-impact machine learning projects, overseeing priorities, deadlines, and deliverables while providing technical guidance.
  • Drive alignment and clarity on goals, outcomes, and timelines across teams.

What we’re looking for:

  • MS/PhD degree in Computer Science or a related field.
  • 10+ years of industry experience.
  • Experience in distributed systems, ML frameworks (e.g. PyTorch), and scaling laws.
  • Experience in research and in solving analytical problems.
  • Cross-functional collaborator and strong communicator.
  • Comfortable solving ambiguous problems and adapting to a dynamic environment.

As a Machine Learning Researcher at IMC, your work will directly impact our global trading strategies. You will leverage your superior analytical, mathematical, and computing skills to improve existing models and develop new ones. We will empower you to discover your unique niche and excel, taking on responsibility and ownership from the start. Machine Learning Researchers work closely with Traders and Developers in an environment where problem solving, innovation and teamwork are recognized and rewarded.

Toronto or Remote from US


Mission: As Senior Staff Compiler Engineer, you will be responsible for defining and developing compiler optimizations for our state-of-the-art compiler, targeting Groq's revolutionary LPU, the Language Processing Unit.

In this role you will drive the future of Groq's LPU compiler technology. You will be in charge of architecting new passes, developing innovative scheduling techniques, and developing new front-end language dialects to support the rapidly evolving ML space. You will also be required to benchmark and monitor key performance metrics to ensure that the compiler is producing efficient mappings of neural network graphs to the Groq LPU.

Ideal candidates have experience with LLVM and MLIR; knowledge of functional programming languages is an asset. Knowledge of ML frameworks such as TensorFlow and PyTorch, and of portable graph models such as ONNX, is also desired.

Responsibilities & opportunities in this role:

  • Compiler Architecture & Optimization: Lead the design, development, and maintenance of Groq’s optimizing compiler, building new passes and techniques that push the performance envelope on the LPU.
  • IR Expansion & ML Enablement: Extend Groq’s intermediate representation dialects to capture emerging ML constructs, portable graph models (e.g., ONNX), and evolving deep learning frameworks.
  • Performance & Benchmarking: Benchmark compiler outputs, diagnose inefficiencies, and drive enhancements to maximize quality-of-results on LPU hardware.
  • Cross-Disciplinary Collaboration: Partner with hardware architects and software leads to co-design compiler and system improvements that deliver measurable acceleration gains.
  • Leadership & Mentorship: Mentor junior engineers, review contributions, and guide large-scale, multi-geo compiler projects to completion.
  • Innovation & Impact: Publish novel compilation techniques and contribute thought leadership to top-tier ML, compiler, and computer architecture conferences.
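To make the "passes" vocabulary concrete for readers outside compilers: below is a minimal, illustrative sketch of an optimization pass — constant folding over a toy nested-tuple IR. It is not Groq's compiler or MLIR; the IR shape, the `fold` function, and the operator names are invented for illustration only.

```python
# Toy constant-folding pass over a tiny expression IR (hypothetical,
# for illustration only). IR nodes are: an int literal, a string
# variable name, or a tuple ("add" | "mul", lhs, rhs).

def fold(node):
    """Recursively replace constant sub-expressions with their values."""
    if not isinstance(node, tuple):
        return node  # literal or variable: nothing to fold
    op, lhs, rhs = node
    lhs, rhs = fold(lhs), fold(rhs)
    if isinstance(lhs, int) and isinstance(rhs, int):
        return lhs + rhs if op == "add" else lhs * rhs
    return (op, lhs, rhs)  # partially folded: a variable blocks evaluation

expr = ("add", ("mul", 2, 3), ("add", "x", ("mul", 4, 5)))
print(fold(expr))  # ('add', 6, ('add', 'x', 20))
```

A production pass would operate on a real IR (e.g., an MLIR dialect) and preserve types and side effects, but the rewrite-the-tree structure is the same.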

Ideal candidates have/are:

  • 8+ years of experience in computer science/engineering or a related area
  • 5+ years of direct experience with C/C++ and LLVM or other compiler frameworks
  • Knowledge of spatial architectures such as FPGAs or CGRAs is an asset
  • Knowledge of functional programming is an asset
  • Experience with ML frameworks such as TensorFlow or PyTorch is desired
  • Knowledge of ML IR representations such as ONNX, and of deep learning, is desired

Additionally nice to have:

  • Strong initiative and personal drive; able to self-motivate and drive projects to closure
  • Keen attention to detail and high levels of conscientiousness
  • Strong written and oral communication; ability to write clear and concise technical documentation
  • Team-first attitude, no egos
  • Leadership skills and the ability to motivate peers
  • Optimistic outlook; coaching and mentoring ability

Attributes of a Groqster:

  • Humility - Egos are checked at the door
  • Collaborative & Team Savvy - We make up the smartest person in the room, together
  • Growth & Giver Mindset - Learn it all versus know it all; we share knowledge generously
  • Curious & Innovative - Take a creative approach to projects, problems, and design
  • Passion, Grit, & Boldness - No-limit thinking, fueling informed risk taking

New York, New York


WRITER is experiencing an incredible market moment as generative AI has taken the world by storm. We're looking for a cloud platform engineer to establish our cloud platform team, focusing on building and scaling our multi-cloud architecture across AWS, GCP and Azure regions. In this founding role, you'll architect and implement highly scalable systems that handle complex multi-tenant workloads while ensuring proper tenant isolation and security.

As a cloud platform engineer, you'll work closely with our development teams to build robust, automated solutions for environment buildout, tenant management, and cross-region capabilities. You'll design and implement systems that ensure proper tenant isolation while enabling efficient environment lifecycle management. This is a unique opportunity to establish our cloud platform team and have a direct impact on our platform's scalability and security, working with cutting-edge technologies and solving complex distributed systems challenges.

Your responsibilities:

  • Establish and lead the cloud platform team
  • Architect and implement multi-cloud infrastructure across AWS, GCP and Azure regions
  • Design and build highly scalable, distributed systems for multi-tenant workloads
  • Define and implement best practices for automated infrastructure across cloud providers
  • Architect and implement tenant isolation mechanisms to ensure data security and compliance
  • Create and manage environment lifecycle automation for dev/qa/demo environments
  • Design and implement cross-region capabilities and active-active deployments
  • Develop tenant migration and pod management solutions
  • Collaborate with development teams to understand tenant requirements and constraints
  • Implement infrastructure as code for region-level deployments
  • Monitor and optimize tenant isolation and security
  • Document tenant management processes and best practices
  • Stay current with industry trends in multi-tenant architectures
  • Participate in on-call rotation for critical platform services
  • Contribute to technical decisions around tenant isolation and region management
  • Mentor and grow the cloud platform team
  • Implement and maintain deployment automation and CI/CD pipelines
  • Design and optimize Kubernetes infrastructure for multi-tenant workloads
  • Drive infrastructure cost optimization and efficiency

Is this you?

  • Have 8+ years of experience in cloud platform engineering or a related role (e.g., site reliability engineering)
  • Have 3+ years of experience leading engineering teams
  • Are passionate about building secure, scalable multi-tenant platforms
  • Have extensive experience with cloud platforms (AWS, GCP, or Azure)
  • Are proficient in infrastructure as code (Terraform, CloudFormation, etc.)
  • Have deep experience with multi-tenant architectures and tenant isolation
  • Can write clean, maintainable code in Python, Go, or similar languages
  • Understand containerization and orchestration (Docker, Kubernetes)
  • Have proven experience with cross-region deployments and active-active architectures
  • Are comfortable working with multiple development teams
  • Can communicate technical concepts clearly to both technical and non-technical audiences
  • Take ownership of projects and drive them to completion
  • Are excited about building automated infrastructure solutions
  • Have a strong focus on security and tenant isolation
  • Are comfortable with on-call responsibilities
  • Have experience with agile development methodologies
  • Have experience establishing new teams and best practices
  • Can balance technical leadership with hands-on implementation
  • Are excited about solving complex distributed systems challenges
  • Have experience with multi-cloud architectures and hybrid deployments

About Handshake AI Handshake is building the career network for the AI economy. Our three-sided marketplace connects 18 million students and alumni, 1,500+ academic institutions across the U.S. and Europe, and 1 million employers to power how the next generation explores careers, builds skills, and gets hired. Handshake AI is a human data labeling business that leverages the scale of the largest early career network. We work directly with the world’s leading AI research labs to build a new generation of human data products. From PhDs in physics to undergrads fluent in LLMs, Handshake AI is the trusted partner for domain-specific data and evaluation at scale. This is a unique opportunity to join a fast-growing team shaping the future of AI through better data, better tools, and better systems—for experts, by experts.

Now’s a great time to join Handshake. Here’s why:

  • Leading the AI Career Revolution: Be part of the team redefining work in the AI economy for millions worldwide.
  • Proven Market Demand: Deep employer partnerships across Fortune 500s and the world’s leading AI research labs.
  • World-Class Team: Leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, just to name a few.
  • Capitalized & Scaling: $3.5B valuation from top investors including Kleiner Perkins, True Ventures, Notable Capital, and more.

About the Role Handshake AI builds the data engines that power the next generation of large language models. Our research team works at the intersection of cutting-edge model post-training, rigorous evaluation, and data efficiency. Join us for a focused Summer 2026 internship where your work can ship directly into our production stack and become a publishable research contribution. To start between May and June 2026.

Projects You Could Tackle

  • LLM Post-Training: Novel RLHF / GRPO pipelines, instruction-following refinements, reasoning-trace supervision.
  • LLM Evaluation: New multilingual, long-horizon, or domain-specific benchmarks; automatic vs. human preference studies; robustness diagnostics.
  • Data Efficiency: Active-learning loops, data value estimation, synthetic data generation, and low-resource fine-tuning strategies.

Each intern owns a scoped research project, mentored by a senior scientist, with the explicit goal of an arXiv-ready manuscript or top-tier conference submission.
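As a concrete flavor of the active-learning loops mentioned above: the simplest strategy is uncertainty sampling, which labels next whichever unlabeled item the current model is least sure about. The sketch below is illustrative only — the `most_uncertain` helper and the hand-written scoring dictionary stand in for a real model and pool.

```python
# Uncertainty sampling, minimally sketched (hypothetical names, toy "model").
# For binary classification, the most uncertain item is the one whose
# predicted P(positive) is closest to 0.5.

def most_uncertain(pool, predict_proba):
    """Return the pool item whose predicted probability is nearest 0.5."""
    return min(pool, key=lambda x: abs(predict_proba(x) - 0.5))

# A stand-in for a trained model's probability estimates:
scores = {"a": 0.95, "b": 0.52, "c": 0.10}
pick = most_uncertain(["a", "b", "c"], scores.get)
print(pick)  # b
```

In a real loop you would label the picked item, retrain, and repeat; research variants replace the 0.5-distance heuristic with entropy, margin, or learned data-value estimates.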

Desired Capabilities

  • Current PhD student in CS, ML, NLP, or a related field.
  • Publication track record at top venues (NeurIPS, ICML, ACL, EMNLP, ICLR, etc.).
  • Hands-on experience training and experimenting with LLMs (e.g., PyTorch, JAX, DeepSpeed, distributed training stacks).
  • Strong empirical rigor and a passion for open-ended AI questions.

Extra Credit

  • Prior work on RLHF, evaluation tooling, or data selection methods.
  • Contributions to open-source LLM frameworks.
  • Public speaking or teaching experience (we often host internal reading groups).

The Department of Materials Science and Engineering (DMSE) together with the Schwarzman College of Computing (SCC) at Massachusetts Institute of Technology (MIT) in Cambridge, MA, seeks candidates at the level of tenure-track Assistant Professor to begin July 1, 2026 or on a mutually agreed date thereafter.

Materials engineering has always benefitted from theoretical and computational approaches to unveil relationships between structure, properties, processing, and performance. Recent advances in computing, including but not limited to artificial intelligence, are poised to dramatically advance the understanding and design of complex matter. DMSE and SCC jointly seek candidates with experience and interest in combining fundamental scientific principles with algorithmic innovations to empower discovery, understanding, and synthesis of materials with applications across critical societal domains --- healthcare, manufacturing, energy, sustainability, climate, and next-generation computing. This search encompasses all materials classes and scales, and is open to candidates with industry and start-up experience. Candidates are expected to develop research programs that target innovation in computational approaches well-suited to materials science and engineering research.

The successful candidate will have a shared appointment in both the Department of Materials Science and Engineering and SCC in either the Department of Electrical Engineering and Computer Science (EECS) or the Institute for Data, Systems, and Society (IDSS), depending on best fit.

Faculty duties include teaching at the undergraduate and graduate levels, advising students, conducting original scholarly research, and developing course materials at the graduate and undergraduate levels. Candidates are expected to teach in both the Department of Materials Science and Engineering and in the educational programs of SCC. The normal teaching load is two subjects per year.

Candidates should hold a Ph.D. in Materials Science and Engineering, Computer Science, Physics, Chemical Engineering, Chemistry, Applied Mathematics, or a related field. A PhD is required by the start of employment. The pay range for a 9-month academic appointment at the entry-level Assistant Professor rank (excluding summer salary): $140,000 - $150,000. The pay offered to a selected candidate during hiring will be based on factors such as (but not limited to) the scope and responsibilities of the position, the individual's work experience and education/training, internal peer equity, and applicable legal requirements. These factors impact where an individual's pay falls within a range. Employment is contingent upon the completion of a satisfactory background check, including verifying any finding of misconduct (or pending investigation) from prior employers.

Applications should include: (a) curriculum vitae, (b) research statement, (c) a teaching and mentoring plan. Each candidate should also include the names and contact information of 3 reference letter writers, who should upload their letters of recommendation by November 30, 2025.

Please submit online applications to https://faculty-searches.mit.edu/dmse_scc/register.tcl. To receive full consideration, completed applications must be submitted by November 30, 2025.

MIT is an equal opportunity employer. We value diversity and strongly encourage applications from individuals from all identities and backgrounds. All qualified applicants will receive equitable consideration for employment based on their experience and qualifications and will not be discriminated against on the basis of race, color, sex, sexual orientation, gender identity, pregnancy, religion, disability, age, genetic information, veteran status, or national or ethnic origin. See MIT's full policy on nondiscrimination. Know your rights.

Location United States


Description At Oracle Cloud Infrastructure (OCI), we are building the future of cloud computing—designed for enterprises, engineered for performance, and optimized for AI at scale. We are a fast-paced, mission-driven team within one of the world’s largest cloud platforms. The Multimodal AI team in OCI Applied Science is working on developing cutting-edge AI solutions using Oracle's industry leading GPU-based AI clusters to disrupt industry verticals and push the state-of-the-art in Multimodal and Video GenAI research. You will work with a team of world-class scientists in exploring new frontiers of Generative AI and collaborate with cross-functional teams including software engineers and product managers to deploy these globally for real-world enterprise use-cases at the largest scale.

Responsibilities:

  • Contribute to the development and optimization of distributed multi-node training infrastructure.
  • Stay Updated: Maintain a deep understanding of industry trends and advancements in video generation, multimodal understanding, and pretraining workflows and paradigms.
  • Model Development: Design, develop, and train state-of-the-art image and video generation models that meet the highest quality standards.
  • Collaborate with cross-functional teams to support scalable and secure deployment pipelines.
  • Assist in diagnosing and resolving production issues, improving observability and reliability.
  • Write maintainable, well-tested code and contribute to documentation and design discussions.

Minimum Qualifications

  • BS in Computer Science or a related technical field.
  • 6+ years of experience in backend software development with cloud infrastructure.
  • Strong proficiency in at least one language such as Go, Java, Python, or C++.
  • Experience building and maintaining distributed services in a production environment.
  • Familiarity with Kubernetes, container orchestration, and CI/CD practices.
  • Solid understanding of computer science fundamentals such as algorithms, operating systems, and networking.

Preferred Qualifications

  • MS in Computer Science.
  • Experience in large-scale multi-node distributed training of LLMs and multimodal models.
  • Knowledge of cloud-native observability tools and scalable service design.
  • Interest in compiler or systems-level software design is a plus.

Why Join Us

  • Build mission-critical AI infrastructure with real-world impact.
  • Work closely with a collaborative and experienced global team.
  • Expand your knowledge in AI, cloud computing, and distributed systems.
  • Contribute to one of Oracle’s most innovative and fast-growing initiatives.

Disclaimer:

Certain US customer or client-facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates.

Range and benefit information provided in this posting is specific to the stated locations only.

US: Hiring Range in USD from: $96,800 to $223,400 per annum. May be eligible for bonus and equity.

Oracle maintains broad salary ranges for its roles in order to account for variations in knowledge, skills, experience, market conditions and locations, as well as reflect Oracle’s differing products, industries and lines of business. Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.

Oracle US offers a comprehensive benefits package which includes the following:

1. Medical, dental, and vision insurance, including expert medical opinion
2. Short-term disability and long-term disability
3. Life insurance and AD&D
4. Supplemental life insurance (Employee/Spouse/Child)
5. Health care and dependent care Flexible Spending Accounts
6. Pre-tax commuter and parking benefits
7. 401(k) Savings and Investment Plan with company match
8. Paid time off: Flexible Vacation is provided to all eligible employees assigned to a salaried (non-overtime eligible) position. Accrued Vacation is provided to all other employees.

San Francisco


About this role

We’re looking for a Data Engineer to help design, build, and scale the data infrastructure that powers our analytics, reporting, and product insights. As part of a small but high-impact Data team, you’ll define the architectural foundation and tooling for our end-to-end data ecosystem.

You’ll work closely with engineering, product, and business stakeholders to build robust pipelines, scalable data models, and reliable workflows that enable data-driven decisions across the company. If you are passionate about data infrastructure, and solving complex data problems, we want to hear from you!

Tech stack

Core tools: Snowflake, BigQuery, dbt, Fivetran, Hightouch, Segment
Periphery tools: AWS DMS, Google Datastream, Terraform, GitHub Actions

What you’ll do

Data infrastructure:

  • Design efficient and reusable data models optimized for analytical and operational workloads.
  • Design and maintain scalable, fault-tolerant data pipelines and ingestion frameworks across multiple data sources.
  • Architect and optimize our data warehouse (Snowflake/BigQuery) to ensure performance, cost efficiency, and security.
  • Define and implement data governance frameworks: schema management, lineage tracking, and access control.

Data orchestration:

  • Build and manage robust ETL workflows using dbt and orchestration tools (e.g., Airflow, Prefect).
  • Implement monitoring, alerting, and logging to ensure pipeline observability and reliability.
  • Lead automation initiatives to reduce manual operations and improve data workflow efficiency.

Data quality:

  • Develop comprehensive data validation, testing, and anomaly detection systems.
  • Establish SLAs for key data assets and proactively address pipeline or data quality issues.
  • Implement versioning, modularity, and performance best practices within dbt and SQL.
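To give a feel for the data-validation work described above, here is a minimal sketch in plain Python. The schema (a `user_id`/`amount` row shape) and the rules are invented for illustration; in practice these checks would live in dbt tests or a validation framework rather than hand-rolled code.

```python
# Minimal row-level validation sketch (hypothetical schema and rules).
# Rows that violate the schema or the non-negative-amount rule are
# quarantined instead of being loaded downstream.

def validate_rows(rows):
    """Split rows into (valid, rejected) by simple schema rules."""
    valid, rejected = [], []
    for row in rows:
        ok = (
            isinstance(row.get("user_id"), int)
            and isinstance(row.get("amount"), (int, float))
            and row["amount"] >= 0  # negative amounts count as anomalies here
        )
        (valid if ok else rejected).append(row)
    return valid, rejected

rows = [
    {"user_id": 1, "amount": 9.99},
    {"user_id": 2, "amount": -5.0},   # fails the non-negative rule
    {"user_id": "x", "amount": 1.0},  # fails the type rule
]
valid, rejected = validate_rows(rows)
print(len(valid), len(rejected))  # 1 2
```

The same split-and-quarantine pattern underlies dbt's `tests:` blocks and most anomaly-detection pipelines; the SLA work then amounts to alerting when the rejected fraction exceeds a threshold.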

Collaboration & leadership:

  • Partner with product and engineering teams to deliver data solutions that align with downstream use cases.
  • Establish data engineering best practices and serve as a subject matter expert on our data pipelines, models, and systems.

What we’re looking for

  • 5+ years of hands-on experience in a data engineering role, ideally in a SaaS environment.
  • Expert-level proficiency in SQL, dbt, and Python.
  • Strong experience with data pipeline orchestration (Airflow, Prefect, Dagster, etc.) and CI/CD for data workflows.
  • Deep understanding of cloud-based data architectures (AWS, GCP) — including networking, IAM, and security best practices.
  • Experience with event-driven systems (Kafka, Pub/Sub, Kinesis) and real-time data streaming is a plus.
  • Strong grasp of data modeling principles, warehouse optimization, and cost management.
  • Passionate about data reliability, testing, and monitoring — you treat pipelines like production software.
  • Thrive in ambiguous, fast-moving environments and enjoy building systems from the ground up.

New York


Software Developer: Generative AI Product Development

The D. E. Shaw group seeks exceptional software developers with expertise in generative AI (GAI) to join a small, fast-moving team building greenfield GAI products that directly transform how our teams operate. In this hands-on, entrepreneurial role you’ll partner with users across the firm to design, build, and deploy bespoke GAI solutions that drive efficiency, enhance analytical capabilities, and accelerate decision-making. This position offers the chance to lead projects from concept to production, and shape internal GAI strategy in a collaborative environment.

What you'll do day-to-day

You’ll join a dynamic team, with the potential to:

  • Lead and contribute to greenfield projects, driving innovation and defining the future of GAI at the firm through full-cycle ownership, from exploration to deployment.
  • Collaborate directly with internal groups and end users to build GAI applications tailored to nuanced, real-world business needs, and deliver solutions with immediate impact.
  • Experiment with emerging AI tools and applications, rapidly prototyping and integrating them across platforms to enhance usability and effectiveness firmwide.
  • Scale GAI tool adoption and improve integration with internal systems, with a focus on enabling seamless workflows and efficiency gains.

Who we're looking for

  • An extensive background in software and product development and a solid understanding of GAI technologies, demonstrated through hands-on experience building and scaling AI solutions at the product or company level.
  • Expertise in technical or entrepreneurial environments, with a record of solving complex challenges and taking projects from inception to deployment.
  • We welcome outstanding candidates at all experience levels who are excited to work in a collegial, collaborative, and fast-paced environment.
  • The expected annual base salary for this position is $200,000 to $250,000 USD. Our compensation and benefits package includes variable compensation in the form of a year-end bonus, guaranteed in the first year of hire, and benefits including medical and prescription drug coverage, 401(k) contribution matching, wellness reimbursement, family building benefits, and a charitable gift match program.

Job Description At Emmi AI, we are redefining how industries innovate. Traditional simulations are slow, expensive, and computationally heavy. We make them fast, scalable, and intelligent! Our AI-powered physics architecture and models unlock real-time interaction, slashing simulation times from days to seconds.

The Opportunity Work directly with our founding team as the strategic right hand to our CEO, COO, and Chief Scientist. You'll help navigate Emmi AI's growth trajectory in the specialized AI-physics simulation landscape through a blend of strategic insight and hands-on execution.

Your Mission

  • Strategic Intelligence & Decision-Making: Conduct market analysis of the specialized physics AI landscape, translating technical developments into strategic positioning.
  • Lead Cross-Functional Projects: Orchestrate a wide range of high-impact special projects.
  • VC & Investor Relations: Lead preparation for funding rounds, create compelling investor materials, and manage relationships with our European and international investors.
  • High-Stakes Execution: Drive critical initiatives from conception to completion, adapting quickly as our strategic landscape evolves.

Job Requirements

  • Venture Experience: 5+ years in VC, deep tech startups, consulting, or similar analytical roles.
  • Strategic Thinking: Proven ability to analyze complex markets and translate insights into actionable recommendations.
  • Project Management: Project management expertise across multiple workstreams.
  • International Perspective: Experience working across European markets and innovation ecosystems.
  • Exceptional Communication: Ability to translate complex technical concepts for diverse stakeholders.

What Sets You Apart

  • Analytical Depth: You excel at quantitative and qualitative analysis, uncovering insights others miss.
  • Technical Curiosity: While not necessarily a developer, you understand technology well enough to engage meaningfully with our research team.
  • Growth Mindset: You thrive in ambiguity and see challenges as opportunities to push boundaries.
  • Continental Vision: You understand Europe's unique deep tech ecosystem and can help position Emmi for success across multiple markets.
  • Execution Excellence: You deliver consistently high-quality work and can adapt quickly as priorities evolve.