

NeurIPS 2025 Career Opportunities

Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.


Location: UC Berkeley, Berkeley, CA, US


Description: The Bakar Institute of Digital Materials for the Planet (BIDMaP) is an institute in UC Berkeley’s new College of Computing, Data Science, and Society (CDSS), bringing together AI, machine learning and data science with the natural sciences to address the planet’s most urgent challenges. BIDMaP is focused on developing new techniques in AI that will enhance and accelerate discovery in the experimental natural sciences and the development of novel materials to address planetary challenges. To this end, BIDMaP promotes collaboration between world-renowned AI/ML experts, chemists, physicists and other physical scientists. By combining cutting-edge chemistry with artificial intelligence, machine learning, and robotics, BIDMaP is reimagining how materials can be designed and optimized for clean energy, clean air, clean water, advanced batteries, and sustainable chemical production.

Amsterdam

If you enjoy mathematical challenges and writing computer programs, you could be instrumental in the success of Optiver’s dynamic trading floor as our next Graduate Quantitative Researcher. With your statistics knowledge and top-tier analytical abilities, you’ll create the insights that drive our trading strategies. Get ready to collaborate with world-class Traders and Software Engineers from more than 50 countries to improve financial markets across the globe. This is your chance to get involved and see how valuable research and data are to the future of electronic trading.

WHAT YOU’LL DO: Quantitative Research acts as the foundation upon which Optiver’s trading activities are built. Our research teams – experts in a variety of STEM subjects – utilise a scientific approach to research and design our world-class trading algorithms. This means applying and developing state-of-the-art stochastic models to price options and predict market volatility, as well as utilising Monte Carlo methods. It also means developing statistical arbitrage strategies by working with petabytes of low latency, high-frequency market data sets, an extensive high-powered computing back-testing framework and much more. Optiver Researchers believe in academic discourse, and therefore invite their teammates and Traders to challenge each hypothesis. Constant testing, analysis, refinement and innovation ensures our quantitative models remain at the cutting-edge of constantly evolving capital markets – you will play a key role in keeping us there.
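As a rough illustration of the Monte Carlo pricing methods mentioned above (a minimal sketch, not Optiver’s models; the parameter values are hypothetical), the snippet below prices a European call under a geometric Brownian motion assumption:

    import numpy as np

    def mc_european_call(spot, strike, rate, vol, maturity, n_paths=100_000, seed=0):
        """Monte Carlo price of a European call under geometric Brownian motion."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_paths)
        # Simulate terminal prices under the risk-neutral measure.
        s_t = spot * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)
        payoff = np.maximum(s_t - strike, 0.0)
        # Discount the average payoff back to today.
        return np.exp(-rate * maturity) * payoff.mean()

    if __name__ == "__main__":
        # Hypothetical inputs: spot 100, strike 105, 2% rate, 20% vol, 1-year expiry.
        print(round(mc_european_call(100.0, 105.0, 0.02, 0.20, 1.0), 3))

In practice the same structure extends to variance-reduction techniques and richer dynamics; this sketch only shows the basic estimator.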

WHO YOU ARE: We’re looking for aspiring Quantitative Researchers who are versatile and creative in innovating and suggesting new solutions. In return, we’ll give you the freedom to pursue your ideas and implement them right into our production systems. In terms of skills and qualifications, we’re looking for:

  • An academic degree in Engineering, Physics, Maths, Econometrics, Computer Science or equivalent, with outstanding academic achievements
  • Programming experience in any language (preferably Python, but C, C++, Basic, Java, etc. are also a plus)
  • The ability to apply concepts of probability, calculus and linear algebra
  • A competitive attitude and eagerness to constantly improve
  • The ability to learn quickly
  • Excellent verbal and written English language skills

WHAT YOU’LL GET: You’ll join a culture of collaboration and excellence, where you’ll be surrounded by curious thinkers and creative problem solvers. Motivated by a passion for continuous improvement, you’ll thrive in a supportive, high-performing environment alongside talented colleagues, working collaboratively to tackle the toughest problems in the financial markets. In addition, you’ll receive:

  • A performance-based bonus structure, enabling all of our employees to benefit from our global profit pool
  • The opportunity to work alongside best-in-class professionals from over 50 countries
  • 25 paid vacation days in your first year, increasing to 30 from your second year onwards
  • Training opportunities, discounts on health insurance, and fully paid first-class commuting expenses
  • Extensive office perks, including breakfast, lunch and dinner, a world-class barista, in-house physio and chair massages, organised sports and leisure activities, and Friday afternoon drinks
  • Training and continuous learning opportunities, including access to conferences and tech events
  • Competitive relocation packages and visa sponsorship where necessary for expats

HOW TO APPLY: Are you interested in furthering your career on one of the most dynamic and exciting trading floors in Europe? Apply directly via the form below for the position of Graduate Quantitative Researcher. Please provide us with a CV in English. Unfortunately we cannot accept applications via email for data protection reasons.

San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US


About Pinterest:

Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.

Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.

Within the Ads Delivery team, we try to connect the dots between the aspirations of Pinners and the products offered by our partners. We are looking for a Machine Learning Engineer/Economist with a strong theoretical and data analysis background who understands market design concepts and has the engineering skills to bring them to market. We are looking for an economist who can get their hands dirty and work side by side with other engineers to advance the efficiency of the Pinterest Marketplace. The nature of projects within this team requires a deep understanding of trade-offs, founded on both economic theory and data analysis, from the ideation phase all the way to launch review.


What you’ll do:

  • Build statistical models and production systems to improve marketplace design and operations for Pinners, Partners, and Pinterest.
  • Tune marketplace parameters (e.g., utility function), optimize ad diversity and load, implement auctions (see the simplified sketch after this list), and model long-term effects to reduce ad fatigue and improve advertiser outcomes.
  • Define and implement experiments to understand long term Marketplace effects.
  • Develop strategies to balance long and short term business objectives.
  • Drive multi-functional collaboration with peers and partners across the company to improve knowledge of marketplace design and operations.
  • Work across application areas such as marketplace performance analysis, advertiser churn/retention modeling, promotional bandwidth allocation, ranking/pricing/mechanism design, bidding/budgeting innovation, and anticipating second‑order effects for new ad offerings.
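As a toy illustration of the auction and mechanism-design work listed above (a minimal sketch; the scoring rule, names, and numbers are assumptions, not Pinterest’s production logic), a bid-times-quality ranking with second-price-style charging might look like:

    from dataclasses import dataclass

    @dataclass
    class Bid:
        advertiser: str
        bid: float      # willingness to pay per click (hypothetical units)
        quality: float  # predicted relevance/CTR proxy in [0, 1]

    def run_auction(bids, n_slots=2):
        """Rank ads by bid * quality; charge each winner the minimum bid that
        would have kept its slot (a simplified second-price-style rule)."""
        ranked = sorted(bids, key=lambda b: b.bid * b.quality, reverse=True)
        results = []
        for i, winner in enumerate(ranked[:n_slots]):
            if i + 1 < len(ranked):
                runner_up = ranked[i + 1]
                price = runner_up.bid * runner_up.quality / winner.quality
            else:
                price = 0.0  # no competition below this slot
            results.append((winner.advertiser, round(price, 2)))
        return results

    if __name__ == "__main__":
        bids = [Bid("a", 2.0, 0.10), Bid("b", 1.5, 0.20), Bid("c", 1.0, 0.15)]
        print(run_auction(bids))  # [('b', 1.0), ('a', 1.5)]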

What we’re looking for:

  • Degree in Computer Science, Machine Learning, Economics, Operations Research, Statistics or a related field.
  • Industry experience in applying economics or machine learning to real products (e.g., ads auctions, pricing, marketplaces, or large‑scale recommendation/search systems).
  • Knowledge in auction theory, market design, and econometrics with excellent data analysis skills.
  • Strong software engineering and mathematical skills and proficiency with statistical methods.
  • Experience with online experimentation and causal inference (A/B testing, long‑running experiments, or similar) in large‑scale systems.
  • Practical understanding of machine learning algorithms and techniques.
  • Impact‑driven, highly collaborative, and an effective communicator; prior ads or two‑sided marketplace experience strongly preferred.

Senior Research Fellow

Job Reference: 521361
Employment Type: Full Time (Fixed Term, 2 Years)
Location: Perth, Western Australia

Remuneration

Base salary: Level C, $144,143–$165,809 p.a. (pro-rata) plus 17% superannuation

The Research Centre

The Planning and Transport Research Centre (PATREC) at UWA conducts research with direct application to transport planning and road safety. RoadSense Analytics (RSA) is a video analytics platform for traffic analysis, developed through seven years of sustained R&D. The platform translates Australian research into a market-ready product for transport planning applications.

The Role

You will lead advanced research and development of computer vision and AI/ML models for traffic video analytics, focusing on detection, tracking, trajectory analysis, and robustness in complex conditions. You will conduct large-scale benchmarking, optimisation, and deployment of AI models, ensuring research innovations translate into real-world applications within the RoadSense Analytics platform. You will mentor junior researchers, collaborate with engineers, and contribute to knowledge building while pioneering state-of-the-art methods in multi-object tracking, trajectory reconstruction, and error reduction.
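To give a flavour of the multi-object tracking work described above (a generic tracking-by-detection sketch under assumed thresholds, not the RoadSense Analytics implementation), frame-to-frame association by IoU could be written as:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(box_a, box_b):
        """Intersection over union of two [x1, y1, x2, y2] boxes."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def associate(tracks, detections, min_iou=0.3):
        """Match existing track boxes to new detections by maximising total IoU."""
        if not tracks or not detections:
            return []
        cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]

    if __name__ == "__main__":
        tracks = [[0, 0, 10, 10], [20, 20, 30, 30]]
        detections = [[21, 19, 31, 29], [1, 1, 11, 11]]
        print(associate(tracks, detections))  # [(0, 1), (1, 0)]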

Selection Criteria

Essential:

  • Tertiary degree in Computer Science, Applied Mathematics/Statistics, Robotics, Physics, or related discipline, with excellent academic record
  • Demonstrated expertise and leadership in computer vision and machine learning research, including object detection, multi-object tracking, and segmentation
  • Evidence of leading research projects, teams, or collaborations, with measurable outcomes
  • Strong record of publications or equivalent applied research outputs in AI/ML or computer vision
  • Experience translating AI/ML research into real-world applications or systems

Further Information

Position Description: PD [Senior Research Fellow] [521361].pdf

Contact: Associate Professor Chao Sun
Email: chao.sun@uwa.edu.au

Work Location: Toronto, Ontario, Canada

Job Description

We are currently seeking talented individuals for a variety of positions, ranging from mid to senior levels, and will evaluate your application in its entirety.

Layer 6 is the AI research centre of excellence for TD Bank Group. We develop and deploy industry-leading machine learning systems that impact the lives of over 27 million customers, helping more people achieve their financial goals and meet their needs. Our research broadly spans the field of machine learning, with areas such as deep learning and generative AI, time series forecasting, and the responsible use of AI. We have access to massive financial datasets and actively collaborate with world-renowned academic faculty. We are always looking for people driven to be at the cutting edge of machine learning in research, engineering, and impactful applications.

Day-to-day as a Technical Product Owner:

  • Translate broad business problems into sharp data science use cases, and craft use cases into product visions

  • Own machine learning products from vision to backlog: prioritize features, define minimum viable releases, and maximize the value your products generate and the ROI of your pod

  • Guide Agile pods on continuous improvement, ensuring that the next sprint is delivered better than the previous

  • Work closely with stakeholders to identify, refine and (occasionally) reject opportunities to build machine learning products; collaborate with support functions such as risk, technology, model risk management and incorporate interfacing features

  • Facilitate the professional & technical development of your colleagues through mentorship and feedback

  • Anticipate resource needs as solutions move through the model lifecycle, scaling pods up and down as models are built, perform, degrade, and need to be rebuilt

  • Champion model development standards, industry best practices and rigorous testing protocols to ensure model excellence

  • Self-direct, with the ability to identify meaningful work in down times and effectively prioritize in busy times

  • Drive value through product, feature & release prioritization, maximizing ROI & modelling velocity

  • Be an exceptional collaborator in a high-interaction environment

Job Requirements

  • Minimum five years of experience delivering major data science projects in large, complex organizations

  • Strong communication, business acumen and stakeholder management competencies

  • Strong technical skills: machine learning, data engineering, MLOps, cloud solution architecture, software development practices

  • Strong coding proficiency: Python, R, SQL and/or Scala; cloud architecture

  • Certified Scrum Product Owner and/or Certified Scrum Master, or equivalent experience

  • Familiarity with cloud solution architecture; Azure is a plus

  • Master’s degree in data science, artificial intelligence, computer science or equivalent experience

The role
Nebius is hiring a driven and industry-savvy Lifesciences Solutions Partner - US to join our growing Healthcare & Life Sciences (HCLS) team. As a strategic connector between the Global Head of HCLS and regional Account Executives (AEs), you will play a pivotal role in accelerating go-to-market execution, deepening client engagement, and ensuring our cloud and AI solutions align with the business, scientific, and regulatory needs of the life sciences ecosystem. You will manage strategic client relationships, identify and develop new business opportunities, and collaborate with partners - with a strong focus on the Pharmaceutical, Biotechnology, Drug Development, and Genomics segments. Your ability to understand complex scientific and business challenges, craft tailored solutions, and thrive in a fast-moving, innovation-led environment will define your success. This role combines consultative selling, industry expertise, and commercial execution, helping customers unlock the full potential of the Nebius platform.

You’re welcome to work remotely from the United States.

Your responsibilities will include:
  • Demonstrate a deep understanding of Nebius and the value we deliver to our customers.
  • Own and grow your territory: maintain and deliver against a strategic plan for your region/territory, help AEs qualify and prioritise opportunities through an HCLS and compliance lens, and lead and support strategic discussions with pharma and biotech.
  • Client Engagement: Develop deep relationships with key stakeholders across the enterprise, positioning our AI and cloud solutions to address client-specific challenges. Act as a trusted advisor to pharma and biotech clients, driving engagement and long-term relationships. Identify opportunities to apply AI/ML, HPC, and data platforms in drug discovery and clinical operations.
  • Deal Support & Sales Acceleration: Partner with Account Executives to shape account strategy, value messaging, and proposal content that will secure deals to meet revenue targets. Help qualify and prioritise opportunities through an HCLS and compliance lens. Support complex deal cycles where domain credibility and regulatory insight are critical.
  • Solution Selling: Demonstrate the value of AI and cloud solutions through consultative selling, product demonstrations, and presentations.
  • Regional Representation: Represent Nebius AI at regional and industry events and customer meetings.
  • Market Knowledge: Stay up to date on industry trends, emerging technologies, and the competitive landscape to position our solutions effectively.
  • Forecast with accuracy; progress deals through the Salesforce sales process and deliver against ACV/activity targets.

We expect you to have:
  • Proven Experience: 8+ years of experience in B2B sales, particularly in AI, cloud, or data infrastructure, with a clear hunter track record.
  • Passion and desire to work in a startup culture, directly impacting the growth of the company.
  • Comfort selling cloud platforms (AWS, Azure, Google Cloud), AI solutions, and related technologies.
  • Strong commercial acumen: value mapping, negotiation, multi-year deals, and exec-level storytelling.
  • High energy, enthusiasm, and evidence of consistent growth vs. quota.
  • CRM Proficiency: Experience with CRM tools such as Salesforce, HubSpot, or similar.
  • Ability to travel as needed.

It will be an added bonus if you have:

  • 5–10 years in pharma, biotech, or life sciences, ideally in consulting, GTM, product, or pre-sales roles.
  • A deep understanding of drug discovery and development processes, scientific data workflows, and regulatory frameworks.
  • Proven ability to communicate complex scientific and technical concepts to non-technical stakeholders.
  • Previous experience in a high-growth, start-up environment, ideally selling cloud, AI/ML or HPC solutions.
  • Exposure to SaaS models or cloud infrastructure sales.
  • Experience selling to mid-market or enterprise-level clients.

Successful hires will expand the group's efforts applying machine learning to drug discovery, biomolecular simulation, and biophysics. Areas of focus include generative models to help identify novel molecules for drug discovery targets, predict PK and ADME properties of small molecules, develop more accurate approaches for molecular simulations, and understand disease mechanisms. Ideal candidates will have strong Python programming skills. Relevant areas of experience might include deep learning techniques, systems software, high performance computation, numerical algorithms, data science, cheminformatics, medicinal chemistry, structural biology, molecular physics, and/or quantum chemistry, but specific knowledge of any of these areas is less critical than intellectual curiosity, versatility, and a track record of achievement and innovation in the field of machine learning. For more information, visit www.DEShawResearch.com.
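As a purely illustrative sketch of the kind of small-molecule property prediction mentioned above (not D. E. Shaw Research’s methods; the toy data, descriptor choice, and model are assumptions), one could fit a baseline regressor on Morgan fingerprints:

    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestRegressor

    def featurize(smiles, radius=2, n_bits=2048):
        """Morgan fingerprint bit vector for a SMILES string."""
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        arr = np.zeros((n_bits,), dtype=np.float32)
        DataStructs.ConvertToNumpyArray(fp, arr)
        return arr

    # Hypothetical toy data: SMILES strings paired with a measured property
    # (e.g., an ADME endpoint); real work would use curated datasets.
    smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
    targets = [0.2, 1.9, 1.2, 0.8]

    X = np.stack([featurize(s) for s in smiles])
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, targets)
    print(model.predict(X[:1]))  # in-sample sanity check only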

Please apply using this link: https://apply.deshawresearch.com/careers/Register?pipelineId=597&source=NeurIPS_1

The expected annual base salary for this position is USD 300,000 - USD 800,000. Our compensation package also includes variable compensation in the form of sign-on and year-end bonuses, and generous benefits, including relocation and immigration assistance. The applicable annual base salary paid to a successful applicant will be determined based on multiple factors including the nature and extent of prior experience and educational background. We follow a hybrid work schedule, in which employees work from the office on Tuesday through Thursday, and have the option of working from home on Monday and Friday.

D. E. Shaw Research, LLC is an equal opportunity employer.

UK


Research Engineer - Novel AI applications and Next Generation Hardware

Mission: You will join the hardware team with the goal of supporting novel application areas and AI modes beyond current use cases. Responsibilities include researching the evolving landscape of AI applications and models, analyzing underlying model architectures, and building implementations on Groq. Further responsibilities include analyzing mappings to existing and future hardware, modeling performance, and working cross-functionally with the hardware design team on novel hardware features (e.g., functional units, numeric modes, interconnect, system integration) to unlock novel application areas for Groq. There will be opportunities to participate in a wider range of R&D activities, either internally or externally with key Groq partners.

Responsibilities & opportunities in this role:

  • AI application and model research
  • Performance modeling
  • Cross-functional work with hardware and software teams
  • Next-generation hardware architecture development
  • Support for internal and outward-facing R&D
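As a toy illustration of the performance-modeling responsibility above (a generic roofline-style estimate with made-up hardware numbers, not a description of Groq hardware):

    def roofline_time(flops, bytes_moved, peak_flops, peak_bandwidth):
        """Estimate kernel time as the max of compute time and memory time
        (a simple roofline-style bound; ignores overlap inefficiencies)."""
        compute_time = flops / peak_flops
        memory_time = bytes_moved / peak_bandwidth
        bound = "compute-bound" if compute_time >= memory_time else "memory-bound"
        return max(compute_time, memory_time), bound

    if __name__ == "__main__":
        # Hypothetical accelerator: 100 TFLOP/s peak compute, 1 TB/s memory bandwidth.
        peak_flops, peak_bw = 100e12, 1e12
        # GEMM: C = A @ B with M = N = K = 4096 in fp16 (2 bytes per element).
        m = n = k = 4096
        flops = 2 * m * n * k
        bytes_moved = 2 * (m * k + k * n + m * n)
        t, bound = roofline_time(flops, bytes_moved, peak_flops, peak_bw)
        print(f"{t * 1e6:.1f} us, {bound}")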

Ideal candidates have/are:

  • A strong foundation in computer science
  • Experience with AI models and applications
  • Knowledge of LLMs and other generative AI applications
  • A strong foundation in computer architecture and computer arithmetic
  • Python and common ML frameworks such as PyTorch and TensorFlow
  • Experience with performance analysis/modelling
  • A problem-solving mindset

Nice to Have:

  • Experience with scientific computing and HPC
  • Experience optimizing applications on specialized accelerators (GPU, FPGA, or other custom accelerators)
  • Experience with compiler tools and MLIR
  • Experience delivering complex projects in a fast-moving environment

Attributes of a Groqster:

  • Humility – egos are checked at the door
  • Collaborative & Team Savvy – together, we make up the smartest person in the room
  • Growth & Giver Mindset – learn it all versus know it all; we share knowledge generously
  • Curious & Innovative – take a creative approach to projects, problems, and design
  • Passion, Grit, & Boldness – no-limit thinking, fueling informed risk taking

Compensation: At Groq, a competitive base salary is part of our comprehensive compensation package, which includes equity and benefits. For this role, salary range is determined by your location, skills, qualifications, experience and internal benchmarks. Compensation for candidates outside the USA will be dependent on the local market.

This position may require access to technology and/or information subject to U.S. export control laws and regulations, as well as applicable local laws and regulations, including the Export Administration Regulations (EAR). To comply with these requirements, candidates for this role must meet all relevant export control eligibility criteria.

Join Shopify's Machine Learning Post-Grad Internship: Lead the Next Wave of AI Innovation in E-commerce

At Shopify, we're not just building models; we're redefining the e-commerce landscape with AI. As a Masters/PhD Research Intern, you'll engage with petabyte-scale data, leveraging cutting-edge ML/AI methods to develop and deploy models that impact millions. You'll push boundaries using technologies like LLM post-training, reinforcement learning, and model quantization.
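As a small, generic illustration of one technique named above, model quantization (a PyTorch dynamic-quantization sketch on a toy model, not Shopify infrastructure):

    import torch
    import torch.nn as nn

    # Toy model standing in for a much larger production network.
    model = nn.Sequential(
        nn.Linear(512, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    )
    model.eval()

    # Post-training dynamic quantization: Linear weights are stored in int8 and
    # dequantized on the fly, shrinking the model and often speeding up CPU inference.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    x = torch.randn(1, 512)
    with torch.no_grad():
        print(model(x).shape, quantized(x).shape)  # both torch.Size([1, 10])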

Pair your research with real-world problems and data:

Engage in research that ties to real-world problems today that impact our merchants and customers worldwide.

Collaborate and Deliver:

Work side by side with engineers, applied scientists, and other teams on your research experiments and prototypes, with the goal of getting your work into production over the longer term.

Create and Learn:

Solve tangible problems that require longer-term research and that call on you to design, build, and deploy models, either in production or as a proof of concept.

Share and Grow:

Stay up to date with the latest techniques, learn the real-world trade-offs between models and techniques, and share what you learn with others at Shopify to make everyone better.

About You:

  • You're pursuing or have completed a Master’s or Doctorate in Computer Science, Computer Engineering, or a relevant technical field.
  • Your research experience spans areas like Machine Learning, Search, NLP, Recommendation Systems, Pattern Recognition, Agents, LLM or Gen AI.
  • You have hands-on experience with ML frameworks such as PyTorch, TensorFlow, or equivalent.
  • You're adept at translating insights into business recommendations and have experience with systems software or algorithms.
  • You have a proven track record of building and shipping high-quality, reliable work.
  • You excel in programming languages like Python, R, or MATLAB and can independently identify, design, and complete medium to large features.
  • You have demonstrated experience through internships, work, conferences, papers, coding competitions, or open-source contributions.
  • You enjoy solving complex problems and comparing alternative solutions to determine the best path forward.

Our Internship Experience:

  • Unique Matching Process: We pair you with teams where you'll thrive, aligned with both your skills and Shopify's needs.
  • Flexible Work Environment: Post-onboarding, work in-office three days a week on days of your choice, coordinating with your Manager and Mentor for optimal team synergy.
  • Legally Eligible: You must be authorized to work in Canada or the US for the internship duration. We don't offer immigration support for interns.
  • Locations: Interns work from our offices in Bellevue, NYC, or Toronto. Relocation to the closest office is necessary if not already based there.

Application Essentials:

  • Prepare Your Resume: Include it in your application.
  • Complete All Application Questions: Ensure nothing is left unanswered.
  • Engage with Assessments: If you advance, complete mandatory assessments to showcase your technical prowess.

Why Apply?

  • Shape the future of e-commerce with cutting-edge AI solutions.
  • Work in a dynamic, innovative environment that values creativity and experimentation.
  • Enhance your skills with hands-on experience and professional growth opportunities.

Application timeline: Applications are open from Tuesday, December 2, 2025, through December 5, 2025.

At Shopify, we're not just offering internships; we're crafting the future. Join us, and let's build what's next.

San Francisco


About this role

We’re looking for a Data Engineer to help design, build, and scale the data infrastructure that powers our analytics, reporting, and product insights. As part of a small but high-impact Data team, you’ll define the architectural foundation and tooling for our end-to-end data ecosystem.

You’ll work closely with engineering, product, and business stakeholders to build robust pipelines, scalable data models, and reliable workflows that enable data-driven decisions across the company. If you are passionate about data infrastructure, and solving complex data problems, we want to hear from you!

Tech stack

Core tools: Snowflake, BigQuery, dbt, Fivetran, Hightouch, Segment
Periphery tools: AWS DMS, Google Datastream, Terraform, GitHub Actions

What you’ll do

Data infrastructure:

  • Design efficient and reusable data models optimized for analytical and operational workloads.
  • Design and maintain scalable, fault-tolerant data pipelines and ingestion frameworks across multiple data sources.
  • Architect and optimize our data warehouse (Snowflake/BigQuery) to ensure performance, cost efficiency, and security.
  • Define and implement data governance frameworks: schema management, lineage tracking, and access control.

Data orchestration:

  • Build and manage robust ETL workflows using dbt and orchestration tools (e.g., Airflow, Prefect).
  • Implement monitoring, alerting, and logging to ensure pipeline observability and reliability.
  • Lead automation initiatives to reduce manual operations and improve data workflow efficiency.
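As a minimal sketch of the kind of orchestration described above (assuming Airflow 2.4+ and a dbt project invoked via its CLI; the project path, schedule, and task layout are hypothetical), a daily build-and-test DAG might look like:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily pipeline: run dbt models, then run dbt tests.
    with DAG(
        dag_id="daily_dbt_build",
        start_date=datetime(2025, 1, 1),
        schedule="0 6 * * *",  # 06:00 UTC daily
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/analytics/dbt_project && dbt run --target prod",
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/analytics/dbt_project && dbt test --target prod",
        )
        dbt_run >> dbt_test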

Data quality:

  • Develop comprehensive data validation, testing, and anomaly detection systems.
  • Establish SLAs for key data assets and proactively address pipeline or data quality issues.
  • Implement versioning, modularity, and performance best practices within dbt and SQL.
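To illustrate the lightweight end of the anomaly detection mentioned above (a generic z-score check on daily row counts; the thresholds and numbers are assumptions, not this team’s actual checks):

    import statistics

    def row_count_anomaly(history, today, z_threshold=3.0):
        """Flag today's row count if it deviates from the recent mean by more
        than z_threshold standard deviations (a simple volume check)."""
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history)
        if stdev == 0:
            return today != mean  # any change from a constant history is suspicious
        return abs(today - mean) / stdev > z_threshold

    if __name__ == "__main__":
        # Hypothetical daily row counts for one table over the past week.
        history = [10_120, 10_340, 9_980, 10_050, 10_400, 10_210, 10_305]
        print(row_count_anomaly(history, today=10_250))  # False: within normal range
        print(row_count_anomaly(history, today=2_000))   # True: likely a broken load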

Collaboration & leadership: * Partner with product and engineering teams to deliver data solutions that align with downstream use cases. * Establish data engineering best practices and serve as a subject matter expert on our data pipelines, models and systems.

What we’re looking for

  • 5+ years of hands-on experience in a data engineering role, ideally in a SaaS environment.
  • Expert-level proficiency in SQL, dbt, and Python.
  • Strong experience with data pipeline orchestration (Airflow, Prefect, Dagster, etc.) and CI/CD for data workflows.
  • Deep understanding of cloud-based data architectures (AWS, GCP) — including networking, IAM, and security best practices.
  • Experience with event-driven systems (Kafka, Pub/Sub, Kinesis) and real-time data streaming is a plus.
  • Strong grasp of data modeling principles, warehouse optimization, and cost management.
  • Passionate about data reliability, testing, and monitoring — you treat pipelines like production software.
  • Thrive in ambiguous, fast-moving environments and enjoy building systems from the ground up.