

NeurIPS 2025 Career Opportunities

Here we highlight career opportunities submitted by our exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.


Abu Dhabi, UAE


The Department of Machine Learning at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) has faculty openings at all ranks (full/associate/assistant professor). Qualified candidates are invited to apply. Applicants are expected to conduct outstanding research and be committed to teaching. Successful applicants will be offered an attractive remuneration package and generous annual research funding. Long-term research on big problems is particularly encouraged.

More details about the positions and the application process are available at https://apply.interfolio.com/176242

Those attending NeurIPS are welcome to talk to us to learn more about the university and the positions.

Contact: Chih-Jen Lin (chihjen.lin@mbzuai.ac.ae)

AI Scientist - AI Retrieval Systems and Knowledge Graphs

Location: Boston (US) / Barcelona (Spain)

About us:

Axiomatic_AI is dedicated to accelerating R&D by developing Automated Interpretable Reasoning, the next generation of verifiably truthful AI models built for reasoning in science and engineering, with the goal of empowering engineers in hardware design and Electronic Design Automation (EDA). Our mission is to revolutionize hardware design and simulation in the photonics and semiconductor industries as a first step towards automated and reliable scientific reasoning. We seek highly motivated professionals to help us bring these innovations to life, driving the evolution from research and development to commercial products.

Position overview:

As an AI Scientist specializing in retrieval systems and knowledge graphs, you will play a key role in developing Axiomatic’s verifiable scientific reasoning. Your responsibilities will include designing, prototyping, developing, testing, and iterating on the core architecture. You will also manage data curation, conduct benchmarking to evaluate performance, analyze reasoning flaws, and propose solutions. Close collaboration with our cross-functional team of AI engineers, software engineers, physicists, and AI scientists, and regular alignment of development with customer and business needs, will be essential to the project's success.

Your mission:

  • AI Research and Development: Contribute to the development of validated AI reasoning models and architectures, focusing on automated reasoning techniques and their application to scientific fields where rigour and reliability are fundamental.
  • Data & Benchmarking: Supervise dataset curation, run benchmarks, and analyze performance results to guide improvements.
  • Collaboration: Work closely with a cross-functional team of engineers and scientists, collaborating on solving challenging problems at the intersection of AI, physics and engineering.
  • Documentation and Reporting: Develop detailed technical documentation and present research findings to internal teams and external stakeholders.
  • Research & Publication: Contribute to cutting-edge research and publish results in top AI conferences and journals, helping advance the global AI research community whenever opportunities arise.

Key requirements:

  • PhD in Data Science, Computer Science, Information Technology, Artificial Intelligence, Physics, or a related field.
  • 1–2 years of experience, preferably in a mathematical, engineering, scientific, or technical setting.
  • Relevant experience in knowledge graphs and retrieval systems.
  • Strong communication skills.
  • Ability to collaborate effectively within a multidisciplinary and multicultural environment.
  • Curiosity and a proactive, solution-oriented mindset.
  • Excitement to work in a dynamic, fast-paced environment and the ability to thrive in ambiguity.

Technical skills:

  • Proficiency in Python
  • Understanding of fundamental computer science principles
  • Solid understanding of machine learning principles and architectures
  • Fundamentals of statistics
  • Excellent research and analytical skills
  • Experience in ontology engineering and semantic modeling
  • Experience in designing and developing RAG systems
  • Familiarity with Neo4j
  • Contributions to research (publications in top-tier conferences) or open-source projects

Preferred Qualifications (Nice to Have):

  • Proven excellence in relevant areas (e.g., awards, competition wins)
  • Proven ability to independently solve complex problems or lead challenging projects
  • Academic or practical background in physics or other natural sciences / engineering
  • Experience with good coding practices and software development standards
  • Proficiency in agentic and deep learning frameworks
  • Hands-on experience with large language models and/or other state-of-the-art models

San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; New York, NY, US


About Pinterest:

Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.

Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.

Within the Monetization ML Engineering organization, we work to connect the dots between the aspirations of Pinners and the products offered by our partners. As a Distinguished Machine Learning Engineer, you will be responsible for developing and executing a vision for the evolution of the machine learning technology stack for Monetization. You will tackle new challenges in machine learning and deep learning to advance the statistical models that power ads performance and ads delivery, bringing together Pinners and partners in this unique marketplace.


What you'll do:

  • Lead user-facing projects that involve end-to-end engineering development across frontend, backend, and ML.
  • Improve relevance and increase long term value for Pinners, Partners, Creators, and Pinterest through efficient Ads Delivery.
  • Improve our engineering systems to reduce latency and infra cost while increasing capacity and stability.
  • Collaborate with product managers and designers to develop engineering solutions for user-facing product improvements.
  • Collaborate with other engineering teams (infra, user modeling, content understanding) to leverage their platforms and signals.
  • Champion engineering excellence and a data-driven culture, mentor senior tech talent, and represent Pinterest externally in the tech and AI communities.

What we’re looking for:

  • Degree in computer science, machine learning, statistics, or a related field.
  • 15+ years of experience working in engineering teams that build large-scale, ML‑driven, user‑facing products.
  • Experience leading cross‑team engineering efforts that improve user experience in products.
  • Understanding of an object‑oriented programming language such as Go, Java, C++, or Python.
  • Experience with large‑scale data processing (e.g., Hive, Scalding, Spark, Hadoop, MapReduce).
  • Strong software engineering and mathematical skills, with knowledge of statistical methods.
  • Experience working across frontend, backend, and ML systems for large‑scale user‑facing products, with a good understanding of how they all work together.
  • Hands‑on experience with large‑scale online e‑commerce systems.
  • Background in computational advertising is preferred.
  • Excellent cross‑functional collaboration and stakeholder communication skills, with strong execution in project management.

New York

Bloomberg’s Engineering AI department has 350+ AI practitioners building highly sought-after products and features that often require novel innovations. We are investing in AI to build better search, discovery, and workflow solutions using technologies such as transformers, gradient-boosted decision trees, large language models, and dense vector databases. We are expanding our group and seeking highly skilled individuals who will contribute to the team (or teams) of Machine Learning (ML) and Software Engineers bringing innovative solutions to AI-driven, customer-facing products.

At Bloomberg, we believe in fostering a transparent and efficient financial marketplace. Our business is built on technology that makes news, research, financial data, and analytics on over 35 million financial instruments searchable, discoverable, and actionable across the global capital markets.

Since 2009, Bloomberg has been building Artificial Intelligence applications that offer solutions to these problems with high accuracy and low latency. We build AI systems to help process and organize the ever-increasing volume of structured and unstructured information needed to make informed decisions. Our use of AI uncovers signals, helps us produce analytics about financial instruments in all asset classes, and delivers clarity when our clients need it most.

We are looking for Senior LLM Research Engineers with strong expertise in, and a passion for, Large Language Modeling research and applications to join our team.

The advent of large language models (LLMs) presents new opportunities for expanding our NLP capabilities with new products. This would allow our clients to ask complex questions in natural language and receive insights extracted across our vast number of Bloomberg APIs or from potentially millions of structured and unstructured information sources.

Broad areas of applications and interest include: application and fine-tuning methods for LLMs, efficient methods for training, multimodal models, learning from feedback and human preferences, retrieval-augmented generation, summarization, semantic parsing and tool use, domain adaptation of LLMs to financial domains, dialogue interfaces, evaluation of LLMs, model safety and responsible AI.

What's in it for you:

  • Collaborate with colleagues on building and applying LLMs for production systems and applications
  • Write, test, and maintain production-quality code
  • Train, tune, evaluate, and continuously improve LLMs using large amounts of high-quality data to develop state-of-the-art financial NLP models
  • Demonstrate technical leadership by owning cross-team projects
  • Stay current with the latest research in AI, NLP, and LLMs, and incorporate new findings into our models and methodologies
  • Represent Bloomberg at scientific and industry conferences and in open-source communities
  • Publish product and research findings in documentation, whitepapers, or publications at leading academic venues

You'll need to have:

  • Practical experience with Natural Language Processing problems, and familiarity with Machine Learning, Deep Learning, and Statistical Modeling techniques
  • A Ph.D. in ML, NLP, or a relevant field, or an MSc in CS, ML, Math, Statistics, Engineering, or related fields plus 2+ years of relevant work experience
  • Experience with Large Language Model training and fine-tuning frameworks such as PyTorch, Huggingface, or Deepspeed
  • Proficiency in software engineering
  • An understanding of Computer Science fundamentals such as data structures and algorithms, and a data-oriented approach to problem-solving
  • Excellent communication skills and the ability to collaborate with engineering peers as well as non-engineering stakeholders
  • A track record of authoring publications in top conferences and journals is a strong plus

Who we are:

Peripheral is developing spatial intelligence, starting in live sports and entertainment. Our models generate interactive, photorealistic 3D reconstructions of sporting events, building the future of live media. We’re solving key research challenges in 3D computer vision, creating the foundations for the next generation of robotic perception and embodied intelligence.

We’re backed by Tier-1 investors and working with some of the biggest names in sports. Our team includes top robotics and machine learning researchers from the University of Toronto, advised by Dr. Steven Waslander and Dr. Igor Gilitshenski.

Our team is ambitious and looking to win. We’re seeking a Machine Learning engineer to develop our motion capture models through synthetic data curation, model training, and inference-time optimization.

What you’ll be doing:

  • Developing our data flywheel to autolabel and generate synthetic data.
  • Improving our motion capture accuracy by fine-tuning existing models on our domain.
  • Optimizing inference time through model distillation and quantization.

What we’d want to see:

  • Prior experience with 3D computer vision and training new ML models.
  • Strong understanding of GPU optimization methods (profiling, quantization, model distillation).
  • Proficiency in Python and real-time ML inference backends.

Ways to stand out from the crowd:

  • Previous experience in architecting and optimizing 3D computer vision systems.
  • Strong understanding of CUDA and kernel programming.
  • Familiarity with state-of-the-art research in VLMs.
  • Top publications at conferences such as NeurIPS, ICLR, ICML, CVPR, WACV, CoRL, and ICRA.

Why join us:

  • Competitive equity as an early team member.
  • $80-120K CAD + bonuses, flexible based on experience.
  • Exclusive access to the world’s biggest sporting events and venues.
  • Work on impactful projects, developing the future of 3D media and spatial intelligence.

To explore additional roles, please visit: www.peripheral.so

Location: Toronto, ON, Canada

AI Platform Engineer

Location: Boston (US) / Barcelona (Spain)

Position Overview

As an AI Platform Engineer, you are the bridge between AI research and production software. You will:

  • Build and maintain AI infrastructure: model serving, vector databases, embedding pipelines
  • Enable AI developers to deploy their work reproducibly and safely
  • Design APIs for AI inference, prompt management, and evaluation
  • Implement MLOps pipelines: versioning, monitoring, logging, experimentation tracking
  • Optimize performance: latency, cost, throughput, reliability
  • Collaborate with backend engineers to integrate AI capabilities into the product

Key Responsibilities

AI Infrastructure

  • Deploy and serve LLMs (OpenAI, Anthropic, HuggingFace, fine-tuned models)
  • Optimize inference latency and costs
  • Implement caching, rate limiting, and retry strategies (a minimal sketch follows this list)
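
For illustration only, here is a minimal sketch of the caching and retry/backoff pattern mentioned in the last bullet, written against a generic, hypothetical call_model client rather than any specific provider SDK:

    import hashlib
    import random
    import time

    _CACHE: dict[str, tuple[float, str]] = {}   # prompt hash -> (timestamp, completion)
    CACHE_TTL_S = 300                           # seconds a cached completion stays valid
    MAX_RETRIES = 4


    class TransientError(Exception):
        """Stand-in for rate-limit or timeout errors raised by a real client."""


    def call_model(prompt: str) -> str:
        # Placeholder: swap in the real provider call (OpenAI, Anthropic, a hosted model, ...).
        return f"echo: {prompt}"


    def cached_completion(prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        hit = _CACHE.get(key)
        if hit and time.time() - hit[0] < CACHE_TTL_S:
            return hit[1]                       # serve from cache, skip the model call

        for attempt in range(MAX_RETRIES):
            try:
                result = call_model(prompt)
                _CACHE[key] = (time.time(), result)
                return result
            except TransientError:
                if attempt == MAX_RETRIES - 1:
                    raise
                # exponential backoff with jitter: ~1s, 2s, 4s, ...
                time.sleep(2 ** attempt + random.random())
        raise RuntimeError("unreachable")

A production version would typically add per-caller rate limiting and move the cache out of process (e.g., into Redis), but the control flow stays the same.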

MLOps & Pipelines

  • Version models, prompts, datasets, and evaluation results
  • Implement experiment tracking (Weights & Biases)
  • Build CI/CD pipelines for model deployment
  • Monitor model performance and drift
  • Set up logging and observability for AI services

API Development

  • Design and implement APIs (FastAPI); see the sketch after this list
  • Create endpoints for prompt testing, model selection, and evaluation
  • Integrate AI services with backend application
  • Ensure API reliability, security, and performance
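
As a rough illustration of the kind of endpoint this work involves, here is a minimal FastAPI sketch; the route name, request fields, and the stubbed run_model() function are hypothetical, not a description of Axiomatic's actual API:

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="ai-inference-sketch")


    class GenerateRequest(BaseModel):
        prompt: str
        model: str = "default"      # e.g. choose between hosted and fine-tuned models
        max_tokens: int = 256


    class GenerateResponse(BaseModel):
        model: str
        completion: str


    def run_model(prompt: str, model: str, max_tokens: int) -> str:
        # Placeholder for the real model-serving call behind this endpoint.
        return f"[{model}] {prompt[:max_tokens]}"


    @app.post("/v1/generate", response_model=GenerateResponse)
    async def generate(req: GenerateRequest) -> GenerateResponse:
        if not req.prompt.strip():
            raise HTTPException(status_code=422, detail="prompt must not be empty")
        return GenerateResponse(
            model=req.model,
            completion=run_model(req.prompt, req.model, req.max_tokens),
        )

Served locally with uvicorn (e.g., uvicorn app_module:app), this gives a typed, self-documenting endpoint that prompt-testing and evaluation tooling can call.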

Collaboration & Enablement

  • Work with AI developers to productionize their experiments aimed at improving user workflows
  • Define workflows: notebook/test repository → PR → staging → production
  • Document AI infrastructure and best practices
  • Review code and mentor AI developers on software practices

Required Skills & Experience

Must-Have

  • 7+ years of software engineering experience (Python preferred)
  • Experience with LLMs and AI/ML in production: OpenAI API, HuggingFace, LangChain, or similar
  • Understanding of vector databases (Pinecone, Chroma, Weaviate, FAISS)
  • Cloud infrastructure experience: GCP (Vertex AI preferred) or AWS (SageMaker)
  • API development: FastAPI, REST, async programming
  • CI/CD and DevOps: Docker, Terraform, GitHub Actions
  • Monitoring and observability
  • Problem-solving mindset: comfortable debugging complex distributed systems
  • Experience operating AI deployments in enterprise environments

Nice-to-Have

  • Experience fine-tuning or training models
  • Familiarity with LangChain, Pydantic AI or similar frameworks
  • Knowledge of prompt engineering and evaluation techniques
  • Experience with real-time inference and streaming responses
  • Background in data engineering or ML engineering
  • Understanding of RAG architectures
  • Contributions to open-source AI/ML projects

Tech Stack

Current Stack:

  • Languages: Python (primary), Bash
  • AI/ML: OpenAI API, Anthropic, HuggingFace, LangChain, Pydantic AI
  • Vector DBs: Pinecone, Chroma, Weaviate, or FAISS
  • Backend: FastAPI, SQLAlchemy, Pydantic
  • Cloud: GCP (Vertex AI, Cloud Run), Terraform
  • CI/CD: GitHub Actions
  • Experiment Tracking: MLflow, Weights & Biases, or custom
  • Containers: Docker, Kubernetes (optional)

What we offer:

  • Competitive compensation
  • Stock Options Plan: Empowering you to share in our success and growth.
  • Cutting-Edge Tools: Access to state-of-the-art tools and collaborative opportunities with leading experts in artificial intelligence, physics, hardware, and electronic design automation.
  • Work-Life Balance: Flexible work arrangements in one of our offices with potential options for remote work.
  • Professional Growth: Opportunities to attend industry conferences, present research findings, and engage with the global AI research community.
  • Impact-Driven Culture: Join a passionate team focused on solving some of the most challenging problems at the intersection of AI and hardware.

San Jose, CA, USA


Adobe is looking for a Senior Applied Researcher to use Generative AI and Machine Learning techniques to help Adobe better understand, lead, and optimize the experience of Adobe’s Digital Experience customers. Partnering with Adobe Research and other business units, the candidate will be building products that transform the way companies approach audience creation, journey optimization, and personalization at scale. You will join a diverse, lively group of engineers and scientists long established in the ML space. The work is dynamic, fast-paced, creative, collaborative and data-driven.

NOTE: This role is based in the San Jose office. You must be in San Jose or willing to relocate for this position.

What you'll do:

  • Partner with Adobe Research to develop cutting-edge models.
  • Design and build applications powered by generative AI, including traditional engineering work such as defining APIs, integrating with UIs, deploying cloud services, and CI/CD, as well as implementing ML- and LLM-Ops best practices.
  • Engage in the product lifecycle: design, deployment, and production operations.
  • Provide technical leadership in everything from architectural design and technology choices to holistic evaluation of ML models.

What you need to succeed:

The ideal candidate will have the following background:

  • PhD or MS degree in Computer Science, Data Science, or a related field (required).
  • 10+ years of applied research experience in industry or academic research, with 5+ years of demonstrated experience developing, evaluating, and deploying ML models into production.
  • Deep understanding of statistical modeling, machine learning, or analytics concepts, and a track record of solving problems with these methods; ability to quickly learn new skills and work in a fast-paced team.
  • Proficiency in one or more programming languages such as Python, Scala, Java, or SQL; familiarity with cloud development on Azure/AWS.
  • Fluency in at least one deep learning framework such as TensorFlow or PyTorch.
  • Experience with LLMs and the emerging area of prompt engineering.
  • Recognition as a technical leader in a related domain.
  • Experience working with both research and product teams.
  • Excellent problem-solving and analytical skills.
  • Excellent communication and relationship-building skills.

Boston/NYC/LA/SF

About Suno

At Suno, we are building a future where anyone can make music. You can make a song for any moment with just a few short words. Award-winning artists use Suno, but our core user base consists of everyday people making music — often for the first time.

We are a team of musicians and AI experts, including alumni from Spotify, TikTok, Meta and Kensho. We like to ship code, make music and drink coffee. Our company culture celebrates music and experimenting with sound — from lunchroom conversations to the studio in our office.

Over the last two years, nearly 100 million people have made music on Suno – many for the first time in their lives, discovering a passion they never knew they had. And this isn’t just a story about new creators: top producers and songwriters have integrated Suno into their daily workflows, and new artists emerging on Suno are being recognized by the industry’s most important charts. Suno has become a platform where imagination meets reality, at every level of the creative journey.

Recently, Suno announced its $250M Series C at a $2.45B post-money valuation.

About the Role

We’re looking for research scientists and engineers to build foundation models for music and audio. Our research focuses not only on generative tasks but also on understanding tasks such as source separation, captioning, lyrics transcription, MIDI transcription, and alignment. We have roles open for pre-training, post-training, multimodal architectures, data, distributed training, and inference optimization. We are a small research team with a very large cluster, serving millions of users daily.

Benefits:

  • Healthcare for you and your dependents, with vision and dental
  • 401(k) with match
  • Generous commuter benefit
  • Flexible PTO

AI Scientist

The Role

This AI Scientist position will drive the development and optimization of Aizen's generative AI-based peptide drug discovery platform, DaX™. You will be responsible for incorporating state-of-the-art neural network architectures and high-performance computational biology software to improve the accuracy and throughput of our drug discovery efforts. Your work will be critical in translating experimental data and scientific insights into scalable, robust models.

Our Ideal Candidate

You are passionate about the company’s mission and a self-starter with an inextinguishable fire to compete and succeed. You thrive in an environment that requires crisp judgment, pragmatic decision-making, rapid course-corrections, and comfort with market ambiguity. You discharge your duties within a culture of mutual team respect, high performance, humility, and humor.

Key Responsibilities

  • Incorporate state-of-the-art neural network architectures and training methods to improve accuracy and throughput of DaX™, Aizen's generative AI-based peptide drug discovery platform.
  • Develop, test, deploy, and maintain high-performance computational biology software according to the needs and feedback of experimentalists at Aizen.
  • Orchestrate new and existing Aizen software tools into scalable, highly-available, and easy-to-use cloud pipelines.
  • Work closely with experimental scientists at Aizen to manage storage and access of Aizen's experimental data.

Candidate Skills and Experience

  • Ph.D. and/or postdoctoral studies in Computer Science, Computational Biology, Bioinformatics, or a related field.
  • Deep, demonstrated expertise in advanced Generative Models (e.g., Flow Matching, Diffusion Models) for de novo design in discrete and continuous spaces.
  • Experience integrating and leveraging data from physics-based simulations (e.g., Molecular Dynamics) into machine learning models.
  • Experience collecting, sanitizing, and training on biological property datasets, with a preference for prior experience with peptides.
  • Proficiency with Python, shell scripting, and a high-performance compiled language.
  • Entrepreneurial spirit, self-starter with proper balance of scientific creativity and disciplined execution.
  • Preferred: Experience designing and maintaining high-availability cloud architectures for hosting high-performance biological analysis software.
  • Preferred: Experience in chemical featurization, representation, and model application for peptide chemistry, non-canonical amino acids (NCAAs), and complex peptide macrocycles.
  • Preferred: Experience in protein/peptide folding dynamics, protein structural analysis, and resultant data integration to improve computation/design.

About Aizen

Aizen is an AI-driven biotechnology company pioneering Mirror Peptides, a novel class of biologic medicines. Mirror Peptides are synthetic, fully D-amino acid peptides that represent a vast, unexplored therapeutic chemical space. Aizen is backed by life science venture capital and based in the biotech hub of San Diego, CA.

Location & Compensation

  • Reports to: Principal AI Scientist
  • Location: This position offers fully remote work with monthly/quarterly trips to company facilities in California.
  • Compensation: Competitive base salary, stock options, and a benefits package including medical coverage.

Contact

To apply, please contact us at jobs@aizentx.com.

Aizen is an equal opportunity employer.

Various locations available


Adobe seeks a Machine Learning Engineer to enhance customer experiences through AI and generative technologies. This is an exciting internship opportunity inside Adobe Firefly’s applied research organization. You will be surrounded by talented people who build the Firefly family of models from research inception all the way to production. We offer internship roles situated at different stages of the development pipeline, from fundamental research to advanced development to production engineering, directly shaping the training and integration of Firefly production models. Your role will center on pioneering data, models, applications, and scientific evaluation that shape the future of technology in the realms of images, videos, language, and multimodal models. Join us in reshaping the future of technology and customer experiences at Adobe!

What You’ll Do

  • Work towards results-oriented research goals while identifying intermediate achievements.
  • Contribute to research and advanced development that can be applied to Adobe product development.
  • Help integrate novel research work into the Adobe Firefly product.
  • Lead and collaborate on projects across different teams.

What You Need to Succeed

  • Currently enrolled full time and pursuing a Master's or PhD in Computer Science, Computer Engineering, Electrical Engineering, or related fields.
  • 1+ years of experience in computer vision, natural language processing, or machine learning.
  • Some experience in Generative AI.
  • Experience communicating research to public audiences of peers.
  • Experience working in teams.
  • Knowledge of Python and typical machine learning development toolkits.
  • Ability to participate in a full-time internship between May and September.