

NeurIPS 2025 Career Opportunities

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.


Successful hires will expand the group's efforts applying machine learning to drug discovery, biomolecular simulation, and biophysics. Areas of focus include developing generative models to help identify novel molecules for drug discovery targets, predicting PK and ADME properties of small molecules, developing more accurate approaches for molecular simulations, and understanding disease mechanisms. Ideal candidates will have strong Python programming skills. Relevant areas of experience might include deep learning techniques, systems software, high-performance computing, numerical algorithms, data science, cheminformatics, medicinal chemistry, structural biology, molecular physics, and/or quantum chemistry, but specific knowledge of any of these areas is less critical than intellectual curiosity, versatility, and a track record of achievement and innovation in the field of machine learning. For more information, visit www.DEShawResearch.com.

Please apply using this link: https://apply.deshawresearch.com/careers/Register?pipelineId=597&source=NeurIPS_1

The expected annual base salary for this position is USD 300,000 – USD 800,000. Our compensation package also includes variable compensation in the form of sign-on and year-end bonuses, and generous benefits, including relocation and immigration assistance. The applicable annual base salary paid to a successful applicant will be determined based on multiple factors including the nature and extent of prior experience and educational background. We follow a hybrid work schedule, in which employees work from the office on Tuesday through Thursday, and have the option of working from home on Monday and Friday.

D. E. Shaw Research, LLC is an equal opportunity employer.

San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; Remote, US


About Pinterest:

Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.

Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.

With more than 600 million users around the world and 400 billion ideas saved, Pinterest Machine Learning engineers build personalized experiences to help Pinners create a life they love. With just over 4,000 global employees, our teams are small, mighty, and still growing. At Pinterest, you’ll experience hands-on access to an incredible vault of data and contribute to large-scale recommendation systems in ways you won’t find anywhere else.

We are seeking talented Staff Machine Learning Engineers for multiple openings across our Core Engineering organization, including teams such as Search, Notifications, and Content & User Engineering. In these roles, you will drive the development of state-of-the-art applied machine learning systems that power core Pinterest experiences.


What you’ll do:

  • Design features and build large-scale machine learning models to improve prediction of user ad actions with low latency
  • Develop new techniques for inferring user interests from online and offline activity
  • Mine text, visual, and user signals to better understand user intention
  • Work with product and sales teams to design and implement new ad products

What we’re looking for:

  • Degree in computer science, machine learning, statistics, or related field
  • 6+ years of industry experience building production machine learning systems at scale in data mining, search, recommendations, and/or natural language processing
  • 2+ years of experience leading projects/teams
  • Strong mathematical skills with knowledge of statistical methods
  • Cross-functional collaborator and strong communicator

You are investigating the smartest way for an AI agent to interact with a browser page. You have hammered on a DOM-based approach and sometimes dreamt of going visual. You have tried a pure visual approach and missed the good old DOM ways. You are now at the forefront. You know systems thinking and how it applies to a complex environment like a browser agent platform. If that's you, please get in touch; we have a great opportunity waiting for you in San Francisco!

Successful candidates will contribute to building and deploying AI-powered systems, including automated code generation, smart agents, retrieval-augmented generation (RAG) frameworks, and tools that integrate cutting-edge AI with scientific software and machine learning research. These systems aim to support drug discovery programs, increase research productivity, and improve the quality and efficiency of ML model training through intelligent data workflows and feedback loops. Candidates should have a strong interest in artificial intelligence (specifically, generative and agentic AI), with responsibilities spanning end-to-end system design: from idea conception and rapid prototyping to production-scale deployment. They should be comfortable working in a fast-paced environment where innovation, experimentation, and rigorous software engineering are all valued, but specific knowledge of any of these areas is less critical than intellectual curiosity, versatility, and a track record of achievement and innovation in the field of AI. For more information, visit www.DEShawResearch.com.

Please apply using this link:

https://apply.deshawresearch.com/careers/Register?pipelineId=923&source=NeurIPS_1

The expected annual base salary for this position is USD 250,000 – USD 600,000. Our compensation package also includes variable compensation in the form of sign-on and year-end bonuses, and generous benefits, including relocation and immigration assistance. The applicable annual base salary paid to a successful applicant will be determined based on multiple factors including the nature and extent of prior experience and educational background. We follow a hybrid work schedule, in which employees work from the office on Tuesday through Thursday, and have the option of working from home on Monday and Friday.

D. E. Shaw Research, LLC is an equal opportunity employer.

Pinely is a privately owned algorithmic trading firm specializing in high-frequency and mid-frequency trading. We’re based in Amsterdam, Cyprus, and Singapore, and we’re experiencing rapid growth. We are seeking a Staff Deep Learning Scientist to drive advanced AI research. This senior individual contributor role focuses on leading technical innovation and shaping research direction across the team. The ideal candidate has deep curiosity, hands-on expertise in neural networks, and prior experience at top AI labs, contributing directly to building and deploying models.

Responsibilities:

  • Develop AI models powering every component of end-to-end trading strategies across global markets;
  • Tackle the hardest real-world AI problem — predicting financial markets — by understanding deep networks in extremely noisy, diverse, and ever-changing environments;
  • Shape research direction and elevate team capabilities through your insights;
  • Lead all stages of research from ideation to deployment, ensuring full production integration.

Requirements:

  • Senior/Staff/Principal Researcher at a top AI lab or faculty member at a leading institution (Stanford, Berkeley, MIT, CMU, ETH, Mila, UofT, Oxford, UCL, NYU, Princeton, etc.);
  • Preferably experienced in competitive AI domains: LLMs, reasoning architectures, generative models (e.g., video), mechanistic interpretability;
  • Motivated by deep research and meaningful impact on both the team and the field.

What we offer:

  • Significant impact across the company’s entire trading portfolio;
  • Competitive compensation with exceptional upside through profit-sharing;
  • A research-driven environment where deep technical insight directly influences outcomes;
  • Option to work part-time alongside an academic lab;
  • A culture that supports initiative, exploration, and high performance;
  • Flexible work location: Amsterdam office or fully remote, with optional business travel.

New York, NY

Applications are invited for postdoctoral Flatiron Research Fellowships (FRFs) at the Center for Computational Mathematics (CCM) in the Flatiron Institute. FRF positions are initially two-year appointments, renewable for a third year contingent on performance. Fellows will be based, and have a principal office or workspace, at the Simons Foundation’s offices in New York City. Fellows may also be eligible for subsidized housing within walking distance of the Flatiron Institute. The start date is between July and October 2026.

To apply and for more details: https://apply.interfolio.com/173401

San Jose, CA, USA


Join Adobe as a skilled and proactive Machine Learning Ops Engineer to drive the operational reliability, scalability, and performance of our AI systems! This role is foundational in ensuring our AI systems operate seamlessly across environments while meeting the needs of both developers and end users. You will lead efforts to automate and optimize the full machine learning lifecycle—from data pipelines and model deployment to monitoring, governance, and incident response.

What you'll do

  • Model Lifecycle Management: Manage model versioning, deployment strategies, rollback mechanisms, and A/B testing frameworks for LLM agents and RAG systems. Coordinate model registries, artifacts, and promotion workflows in collaboration with ML Engineers

  • Monitoring & Observability: Implement real-time monitoring of model performance (accuracy, latency, drift, degradation). Track conversation quality metrics and user feedback loops for production agents.

  • CI/CD for AI: Develop automated pipelines for timely agent testing, validation, and deployment. Integrate unit/integration tests into model and workflow updates for safe rollouts.

  • Infrastructure Automation: Provision and manage scalable infrastructure (Kubernetes, Terraform, serverless stacks). Enable auto-scaling, resource optimization, and load balancing for AI workloads.

  • Data Pipeline Management: Craft and maintain data ingestion pipelines for both structured and unstructured sources. Ensure reliable feature extraction, transformation, and data validation workflows.

  • Performance Optimization: Monitor and optimize AI stack performance (model latency, API efficiency, GPU/compute utilization). Drive cost-aware engineering across inference, retrieval, and orchestration layers.

  • Incident Response & Reliability: Build alerting and triage systems to identify and resolve production issues. Maintain SLAs and develop rollback/recovery strategies for AI services.

  • Compliance & Governance: Enforce model governance, audit trails, and explainability standards. Support documentation and regulatory frameworks (e.g., GDPR, SOC 2, internal policy alignment).

What you need to succeed

  • 3–5+ years in MLOps, DevOps, or ML platform engineering.
  • Strong experience with cloud infrastructure (AWS/GCP/Azure), container orchestration (Kubernetes), and IaC tools (Terraform, Helm).
  • Familiarity with ML model serving tools (e.g., MLflow, Seldon, TorchServe, BentoML).
  • Proficiency in Python and CI/CD automation (e.g., GitHub Actions, Jenkins, Argo Workflows).
  • Experience with monitoring tools (Prometheus, Grafana, Datadog, ELK, Arize AI, etc.).

Preferred Qualifications

  • Experience supporting LLM applications, RAG pipelines, or AI agent orchestration.
  • Understanding of vector databases, embedding workflows, and model retraining triggers.
  • Exposure to privacy, safety, and responsible AI principles in operational contexts.
  • Bachelor's or equivalent experience in Computer Science, Engineering, or a related technical field.

Various locations available


Adobe Firefly is redefining creativity by bringing the power of generative AI to millions of users worldwide. The Evaluation Systems team builds the ML foundation that ensures Firefly’s creations are safe, high-quality, and aligned with evolving human needs.

We are seeking a Machine Learning Engineer with a passion for vision and multimodal understanding to help us advance the frontier of evaluating generative content. You will design, train, and deploy models that assess the quality, aesthetics, and safety of images and videos generated by foundation models. Your work will directly shape how creators engage with AI responsibly and at scale.

This is an opportunity to work at the intersection of state-of-the-art research, large-scale data, and production systems, in a team that values human-in-the-loop learning and model alignment as core principles.

What You’ll Do

  • Model Development: Build and fine-tune models (e.g., ViTs, VLMs, multimodal encoders) to evaluate generative content across quality, safety, and user alignment dimensions.
  • Human-in-the-Loop Training: Leverage large-scale, noisy human feedback data to train robust evaluation and reward models.
  • Production Deployment: Ship models as real-time services that gate content and provide quality guardrails, continuously monitoring and improving their performance.
  • Collaboration: Partner with product, research, and engineering teams to integrate evaluation signals into Firefly products and new creative experiences.
  • Exploration: Stay on top of the latest ML research (e.g., diffusion models, alignment methods, multimodal evaluation) and translate advances into practical solutions.

What You Need to Succeed

  • MS or PhD in Computer Science, Statistics, Electrical Engineering, Applied Math, Operations Research, Econometrics, or equivalent experience required.
  • Strong understanding of machine learning and deep learning concepts, especially in vision and multimodal domains.
  • Experience with model training, fine-tuning, and evaluation. Proficiency in Python and familiarity with frameworks like PyTorch. Familiarity with large-scale data pipelines and distributed training is a plus.
  • Ability to translate research concepts into scalable, production-ready systems. Prior exposure to vision-language models or human feedback training is a plus.
  • Strong analytical and quantitative problem-solving ability.
  • Excellent communication and relationship skills, and a strong team player.

San Francisco / New York / Toronto

About Ideogram

Ideogram’s mission is to make world-class design accessible to everyone, multiplying human creativity. We build proprietary generative media models and AI-native creative workflows, tackling unsolved challenges in graphic design. Our team includes builders with a track record of technology breakthroughs including early research in Diffusion Models, Google’s Imagen, and Imagen Video. We care about design, taste, and craft as much as research and engineering – shipping experiences that creatives actually love.

We’ve raised nearly $100M, led by Andreessen Horowitz and Index Ventures. Headquartered in Toronto with a growing team in NYC, we're scaling fast, aiming to triple over the next year. We're a flat team with a culture of high ownership, collaboration, and mentorship.

Explore Ideogram 3.0, Canvas, and Character blog posts, and try Ideogram at ideogram.ai.

The Opportunity

In this role, you will develop the post-training pipeline for our text-to-image foundation models end to end, from data strategy to deployment, advancing techniques such as RLHF and RLAIF and working on personalization/customization. You will contribute to post-training research that drives measurable gains, and implement and maintain high-throughput fine-tune/eval pipelines. You'll work with a creative and ambitious team of engineers and researchers who are building the future of the creative economy.

What We're Looking For

  • 5+ years of experience in developing machine learning models in JAX, PyTorch, or TensorFlow.

  • Experience in implementing Machine Learning foundations (e.g., Transformer, VAE, Denoising Diffusion models) from scratch.

  • Track record in machine learning innovation and familiarity with Deep Learning and advanced Machine Learning.

  • End-to-end understanding of generative media applications and excitement for pushing the state-of-the-art in generative AI.

  • Ability to debug machine learning models to iteratively improve model quality and performance.

  • Nice to have: Familiarity with Kubernetes and Docker.

  • Optional: Experience in low-level machine learning optimization, e.g., writing CUDA kernel code.

Our Culture

We’re a team of exceptionally talented, curious builders who love solving tough problems and turning bold ideas into reality. We move fast, collaborate deeply, and operate without unnecessary hierarchy, because we believe the best ideas can come from anyone.

Everyone at Ideogram rolls up their sleeves to make our products and our customers successful. We thrive on curiosity, creativity, and shared ownership. We believe that small, dedicated teams working together with trust and purpose can move faster, think bigger, and create amazing things.

Ideogram is committed to welcoming everyone — regardless of gender identity, orientation, or expression. Our mission is to create belonging and remove barriers so everyone can create boldly.

What We Offer

💸 Competitive compensation and equity designed to recognize the value and impact of your contributions to Ideogram’s success.
🌴 4 weeks of vacation to recharge and explore.
🩺 Comprehensive health, vision, and dental coverage starting on day one.
💰 RRSP/401(k) with employer match up to 4% to invest in your future from the moment you join.
💻 Top-of-the-line tools and tech to fuel your creativity and productivity.
🔍 Autonomy to explore and experiment — whether you’re testing new ideas, running large-scale experiments, or diving into research, you’ll have access to the compute/resources you need when there’s a clear business or creative use case. We encourage curiosity and bold thinking.
🌱 A culture of learning and growth, where curiosity is encouraged and mentorship is part of the journey.
🏡 Fully remote flexibility across North America, with regular in-person team meetups and collaboration opportunities.

London

Description - Bloomberg’s Engineering AI department has 350+ AI practitioners building highly sought-after products and features that often require novel innovations. We are investing in AI to build better search, discovery, and workflow solutions using technologies such as transformers, gradient-boosted decision trees, large language models, and dense vector databases. We are expanding our group and seeking highly skilled individuals to contribute to the teams of Artificial Intelligence (AI) and Software Engineers that are bringing innovative solutions to AI-driven customer-facing products.

At Bloomberg, we believe in fostering a transparent and efficient financial marketplace. Our business is built on technology that makes news, research, financial data, and analytics, spanning over 1 billion proprietary and third-party data points published daily across all asset classes, searchable, discoverable, and actionable.

Bloomberg has been building Artificial Intelligence applications that offer solutions to these problems with high accuracy and low latency since 2009. We build AI systems to help process and organize the ever-increasing volume of structured and unstructured information needed to make informed decisions. Our use of AI uncovers signals, helps us produce analytics about financial instruments in all asset classes, and delivers clarity when our clients need it most.

We are looking for Senior GenAI Platform Engineers with strong expertise and passion for building platforms, especially for GenAI systems.

As a Senior GenAI Platform Engineer, you will have the opportunity to create a more cohesive, integrated, and managed GenAI development life cycle to enable the building and maintenance of our ML systems. Our teams make extensive use of open source technologies such as Kubernetes, KServe, MCP, Envoy AI Gateway, Buildpacks and other cloud-native and GenAI technologies. From technical governance to upstream collaboration, we are committed to enhancing the impact and sustainability of open source.

Join the AI Group as a Senior GenAI Platform Engineer and you will have the opportunity to:

  • Architect, build, and diagnose multi-tenant GenAI platform systems
  • Work closely with GenAI application teams to design seamless workflows for continuous model training, inference, and monitoring
  • Interface with GenAI experts to understand workflows, pinpoint and resolve inefficiencies, and inform the next set of features for the platforms
  • Collaborate with open-source communities and GenAI application teams to build a cohesive development experience
  • Troubleshoot and debug user issues
  • Provide operational and user-facing documentation

We are looking for a Senior GenAI Platform Engineer with:

  • Proven experience working with an object-oriented programming language (Python, Go, etc.)
  • Experience with GenAI technologies like MCP, A2A, LangGraph, LlamaIndex, Pydantic AI, and OpenAI APIs and SDKs
  • A degree in Computer Science, Engineering, Mathematics, a similar field of study, or equivalent work experience
  • An understanding of Computer Science fundamentals such as data structures and algorithms
  • An honest approach to problem-solving and the ability to collaborate with peers, stakeholders, and management