

NeurIPS 2025 Career Opportunities

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.

Search Opportunities

Machine Learning Systems Engineer

Bala Cynwyd (Philadelphia Area), Pennsylvania, United States


Overview

We’re looking for a Machine Learning Systems Engineer to help build the data infrastructure that powers our AI research. In this role, you'll develop reliable, high-performance systems for handling large and complex datasets, with a focus on scalability and reproducibility. You’ll partner with researchers to support experimental workflows and help translate evolving needs into efficient, production-ready solutions. The work involves optimizing compute performance across distributed systems and building low-latency, high-throughput data services. This role is ideal for someone with strong engineering instincts, a deep understanding of data systems, and an interest in supporting innovative machine learning efforts.

What You’ll Do

  • Design and implement high-performance data pipelines for processing large-scale datasets, with an emphasis on reliability and reproducibility
  • Collaborate with researchers to translate their requirements into scalable, production-grade systems for AI experimentation
  • Optimize resource utilization across our distributed computing infrastructure through profiling, benchmarking, and systems-level improvements
  • Implement low-latency, high-throughput sampling for models
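Reproducibility at this scale usually comes down to making each pipeline stage deterministic. As a minimal sketch (the function and record names here are illustrative, not part of this role's actual stack), a stable hash can assign records to shards so every rerun of a pipeline produces identical splits:

```python
import hashlib

def shard_for(record_id: str, num_shards: int) -> int:
    """Assign a record to a shard via a stable hash, so the same
    record lands in the same shard on every rerun of the pipeline."""
    digest = hashlib.sha256(record_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Shard assignment is a pure function of the record id: pipelines can
# be re-run or resumed without reshuffling previously processed data.
ids = [f"sample-{i}" for i in range(10)]
shards = [shard_for(record_id, 4) for record_id in ids]
```

Unlike Python's built-in `hash()`, which is salted per process, a cryptographic digest gives the same assignment across machines and runs.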

What We're Looking For

  • Experience building and maintaining data pipelines and ETL systems at scale
  • Experience with large-scale ML infrastructure and familiarity with training and inference workflows
  • Strong understanding of best practices in data management and processing
  • Knowledge of systems-level programming and performance optimization
  • Proficiency in software engineering in Python
  • Understanding of AI/ML workloads, including data preprocessing, feature engineering, and model evaluation

Why Join Us?

Susquehanna is a global quantitative trading firm that combines deep research, cutting-edge technology, and a collaborative culture. We build most of our systems from the ground up, and innovation is at the core of everything we do. As a Machine Learning Systems Engineer, you’ll play a critical role in shaping the future of AI at Susquehanna — enabling research at scale, accelerating experimentation, and helping unlock new opportunities across the firm.

AI Platform Engineer

Location: Boston (US) / Barcelona (Spain)

Position Overview

As an AI Platform Engineer, you are the bridge between AI research and production software. You will:

  • Build and maintain AI infrastructure: model serving, vector databases, embedding pipelines
  • Enable AI developers to deploy their work reproducibly and safely
  • Design APIs for AI inference, prompt management, and evaluation
  • Implement MLOps pipelines: versioning, monitoring, logging, experimentation tracking
  • Optimize performance: latency, cost, throughput, reliability
  • Collaborate with backend engineers to integrate AI capabilities into the product

Key Responsibilities

AI Infrastructure

  • Deploy and serve LLMs (OpenAI, Anthropic, HuggingFace, fine-tuned models)
  • Optimize inference latency and costs
  • Implement caching, rate limiting, and retry strategies
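The caching and retry strategies above are commonly built from small primitives. A minimal sketch, assuming a hypothetical flaky model-call function (none of these names come from the posting): exponential backoff with jitter for retries, plus a response cache so repeated prompts skip the API entirely:

```python
import random
import time

def call_with_retries(fn, max_attempts=4, base_delay=0.5):
    """Retry a flaky callable with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # back off 0.5s, 1s, 2s, ... with jitter to avoid thundering herds
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

_cache: dict = {}

def cached(key, fn):
    """Memoize successful responses so repeated prompts skip the API call."""
    if key not in _cache:
        _cache[key] = call_with_retries(fn)
    return _cache[key]
```

A production version would also bound the cache, key on the full request (model, prompt, parameters), and retry only on transient errors rather than all exceptions.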

MLOps & Pipelines

  • Version models, prompts, datasets, and evaluation results
  • Implement experiment tracking (Weights & Biases)
  • Build CI/CD pipelines for model deployment
  • Monitor model performance and drift
  • Set up logging and observability for AI services
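One common way to version models, prompts, and evaluation configs, as the first bullet above asks, is content-addressing: hash a canonical serialization so any change yields a new id. A minimal sketch (the artifact names are illustrative, not this team's actual scheme):

```python
import hashlib
import json

def version_of(artifact: dict) -> str:
    """Derive a stable version id from an artifact's content: any edit
    to a prompt, config, or dataset description yields a new id."""
    canonical = json.dumps(artifact, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:12]

# Two prompt revisions get distinct ids; dict key order does not matter.
prompt_v1 = {"template": "Summarize: {text}", "model": "some-model"}
prompt_v2 = {"template": "Summarize briefly: {text}", "model": "some-model"}
```

Because the id is derived from content rather than assigned manually, evaluation results logged against it can always be traced back to the exact prompt that produced them.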

API Development

  • Design and implement APIs (FastAPI)
  • Create endpoints for prompt testing, model selection, and evaluation
  • Integrate AI services with backend application
  • Ensure API reliability, security, and performance

Collaboration & Enablement

  • Work with AI developers to productionize their experiments and improve user workflows
  • Define workflows: notebook/test repository → PR → staging → production
  • Document AI infrastructure and best practices
  • Review code and mentor AI developers on software practices

Required Skills & Experience

Must-Have

  • 7+ years of software engineering experience (Python preferred)
  • Experience with LLMs and AI/ML in production: OpenAI API, HuggingFace, LangChain, or similar
  • Understanding of vector databases (Pinecone, Chroma, Weaviate, FAISS)
  • Cloud infrastructure experience: GCP (Vertex AI preferred) or AWS (SageMaker)
  • API development: FastAPI, REST, async programming
  • CI/CD and DevOps: Docker, Terraform, GitHub Actions
  • Monitoring and observability
  • Problem-solving mindset: comfortable debugging complex distributed systems
  • Operating experience with AI deployment in enterprise environments

Nice-to-Have

  • Experience fine-tuning or training models
  • Familiarity with LangChain, Pydantic AI or similar frameworks
  • Knowledge of prompt engineering and evaluation techniques
  • Experience with real-time inference and streaming responses
  • Background in data engineering or ML engineering
  • Understanding of RAG architectures
  • Contributions to open-source AI/ML projects

Tech Stack

Current Stack:

  • Languages: Python (primary), Bash
  • AI/ML: OpenAI API, Anthropic, HuggingFace, LangChain, Pydantic AI
  • Vector DBs: Pinecone, Chroma, Weaviate, or FAISS
  • Backend: FastAPI, SQLAlchemy, Pydantic
  • Cloud: GCP (Vertex AI, Cloud Run), Terraform
  • CI/CD: GitHub Actions
  • Experiment Tracking: MLflow, Weights & Biases, or custom
  • Containers: Docker, Kubernetes (optional)

What we offer:

  • Competitive compensation
  • Stock Options Plan: Empowering you to share in our success and growth.
  • Cutting-Edge Tools: Access to state-of-the-art tools and collaborative opportunities with leading experts in artificial intelligence, physics, hardware, and electronic design automation.
  • Work-Life Balance: Flexible work arrangements in one of our offices, with potential options for remote work.
  • Professional Growth: Opportunities to attend industry conferences, present research findings, and engage with the global AI research community.
  • Impact-Driven Culture: Join a passionate team focused on solving some of the most challenging problems at the intersection of AI and hardware.

The Deep Learning for Precision Health Lab (www.montillolab.org), part of the Biodata Engineering Program in the Biomedical Engineering Department at the University of Texas Southwestern in Dallas, TX, seeks a talented and motivated Computational Research Scientist to support large-scale multimodal neuroimaging and biomedical data analysis initiatives and advanced AI development. The successful candidate will play a key role in curating and analyzing multimodal datasets, preparing resources for foundation model development, and supporting NIH-funded projects at the intersection of machine learning, medical image analysis, neuroscience, and oncology. This is a full-time, long-term staff scientist position focused on technical excellence, reproducible data management, and collaborative research in a dynamic academic environment. The successful candidate’s work will directly inform AI-driven discovery in neurological and oncologic diseases.

With cutting-edge computational infrastructure, access to leading neurology, neuroscience, and cancer experts, and an unparalleled trove of high-dimensional imaging and multi-omic data, our machine learning lab is poised for success in these research endeavors.

Primary Responsibilities

  • Configure/develop and run existing foundation or large-scale deep learning models for benchmarking.
  • Contribute to manuscript writing and code documentation.
  • Curate and manage large neuroimaging & bioimaging datasets that include structural, diffusion, and functional MRI, dynamic PET, EEG, fluorescence microscopy, and multi-omic or clinical data drawn from NIH-supported consortia.
  • Develop and maintain automated pipelines for data quality control and reproducibility.
  • Clean and prepare datasets for downstream ML and deep-learning workflows.
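Automated quality-control pipelines like those described above often codify checks as small composable functions that return human-readable failures. A pure-Python sketch on a toy 2-D slice (the specific checks, names, and shapes are illustrative, not the lab's actual pipeline):

```python
import math

def qc_checks(slice_2d, expected_shape):
    """Run basic QC on a 2-D image slice (nested lists) and return a
    list of human-readable failures; an empty list means it passed."""
    failures = []
    shape = (len(slice_2d), len(slice_2d[0]) if slice_2d else 0)
    if shape != expected_shape:
        failures.append(f"shape {shape} != expected {expected_shape}")
    flat = [v for row in slice_2d for v in row]
    if any(isinstance(v, float) and math.isnan(v) for v in flat):
        failures.append("NaN voxels present")
    return failures
```

Real neuroimaging QC would operate on NIfTI/DICOM volumes with array libraries and add checks for intensity range, orientation, and motion, but the pattern of accumulating named failures per scan is the same.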

Qualifications

  • M.S. or Ph.D. in Computer Science, Biomedical Engineering, Electrical Engineering, Physics, Statistics, or a closely related computational field.
  • Candidates must have extensive neuroimaging and biomedical image analysis experience.
  • Candidates must have existing mastery of one or more mainstream DL frameworks (e.g., PyTorch, TensorFlow) and be able to explain intricacies of the DL models they have constructed.
  • Experience running and managing batch jobs on SLURM or similar HPC systems.
  • Preferred: familiarity with neuroimaging data formats (DICOM, NIfTI, HDF5, MP4, EEG) and web-scraping or data-discovery scripting.

Compensation and Appointment

  • Term and Location: Full-time, On-site in Dallas, TX (5 days/week)
  • Salary: Highly competitive and commensurate with experience.
  • Work Authorization: Must be legally authorized to work in the U.S.
  • Mentorship: Direct supervision by Dr. Albert Montillo with opportunities for co-authorship and professional growth in mentoring junior team members and leading publications.

For consideration:

Reach out for an in-person meeting in San Diego at NeurIPS 2025 (or virtually afterwards) via email to Albert.Montillo@UTSouthwestern.edu with the subject “ComputationalResearchScientist-Applicant-NeurIPS” and include: (1) CV, (2) contact information for 3 references, (3) up to three representative publications, and (4) your start window. Positions are open until filled; review begins immediately.

Faculty Positions in Electrical and Electronics Engineering – Koç University, Istanbul, Türkiye

Koç University invites exceptional candidates to apply for full-time faculty positions in Electrical and Electronics Engineering. We seek outstanding researchers in all areas of electrical and electronics engineering, including artificial intelligence, machine learning, computational neuroscience, intelligent systems, and signal processing.

Applicants should have a bold, interdisciplinary research vision capable of making transformative impacts across multiple domains. Successful candidates will leverage Koç University’s state-of-the-art research ecosystem, including the Koç University İş Bank Artificial Intelligence Research Center (KUIS AI), the Translational Medicine Research Center (KUTTAM), and the Nanofabrication and Nanocharacterization Center (n2STAR). KUIS AI provides a high-performance computation facility and scholarship support for KUIS AI graduate fellows, fostering close collaboration between faculty and students.

Koç University is a leading private, nonprofit institution in Istanbul, supported by the Vehbi Koç Foundation, with English as the medium of instruction. It hosts the highest number of ERC grant recipients in Türkiye and offers exceptional opportunities for interdisciplinary collaboration across engineering, medicine, and natural sciences. We offer competitive salaries, housing support, K–12 education assistance, private health insurance, and research startup funds.

We will be attending NeurIPS 2025 — interested candidates are welcome to reach out and schedule an informal discussion during the conference at alperdogan@ku.edu.tr.

Application materials: CV, research statement, teaching statement, and three references. Deadline: March 20, 2026 (applications reviewed on a rolling basis). Apply at: https://ee.ku.edu.tr/open-positions/faculty-positions/

Miami, Florida


As an ML/Research Engineer at Citadel Securities, you will work closely with researchers to design and build the firm's next-generation deep learning library. You will combine the best available open-source tools with deep internal expertise in modelling and predicting financial markets. Your work will empower 100+ researchers to iterate faster on their agendas and perform experiments that were not possible before. Opportunities may be available from time to time in any location in which the business is based for suitable candidates. If you are interested in a career with Citadel, please share your details and we will contact you if there is a vacancy available.

Pinely is a privately owned algorithmic trading firm specializing in high-frequency and mid-frequency trading. We are based in Amsterdam, Cyprus, and Singapore, and we are growing rapidly. We develop robust and adaptive strategies across diverse markets and actively support the Olympiad movement; many team members are award-winning mathematicians, researchers, and engineers.

Researchers work in a fast-paced HFT environment where ideas quickly reach production. They are supported by a strong infrastructure team enabling large-scale experiments and reliable deployment. Our flat structure encourages autonomy, creativity, and direct impact. We value an informal, idea-driven culture.

We are opening a position for a Junior Deep Learning Researcher in our Amsterdam office.

Responsibilities:

  • Conduct research in AI, machine learning, and related quantitative fields
  • Develop and experiment with modern deep learning architectures
  • Analyze large, unstructured, noisy datasets
  • Collaborate with developers and researchers on optimizing trading strategies
  • Explore new methods and technologies to improve research outcomes

Requirements:

  • Publications in ICML, NeurIPS, ICLR, CVPR, ICCV
  • Degree in mathematics, physics, computer science, or another quantitative field (or expected within a year)
  • Knowledge of ML, probability theory, and statistics
  • Strong Python skills
  • Some C++ experience
  • Practical experience with modern DL architectures
  • Background in working with large noisy datasets

What we offer:

  • High base salary with substantial biannual bonuses
  • Relocation package to Amsterdam with flexible terms
  • Flexible workflow and schedule
  • Team of top mathematics and programming competition winners
  • Cutting-edge hardware, strong engineering support, and fast idea implementation
  • Internal training, comprehensive health insurance, sports reimbursement, and biannual corporate events

The role

We are seeking a highly skilled and customer-focused professional to join our team as a Cloud Solutions Architect specializing in cloud infrastructure and MLOps. As a Cloud Solutions Architect, you will play a pivotal role in designing and implementing cutting-edge solutions for our clients, leveraging cloud technologies for ML/AI teams and becoming a trusted technical advisor for building their pipelines.

You’re welcome to work remotely from the US or Canada.

Your responsibilities will include:

  • Act as a trusted advisor to our clients, providing technical expertise and guidance throughout the engagement
  • Conduct PoCs, workshops, presentations, and training sessions to educate clients on GPU cloud technologies and best practices
  • Collaborate with clients to understand their business requirements and develop solution architectures that align with their needs: design and document Infrastructure-as-Code solutions, documentation, and technical how-tos in collaboration with support engineers and technical writers
  • Help customers optimize pipeline performance and scalability to ensure efficient utilization of cloud resources and services powered by Nebius AI
  • Act as a single point of expertise on customer scenarios for the product, technical support, and marketing teams
  • Assist the Marketing department during events (hackathons, conferences, workshops, webinars, etc.)

We expect you to have:

  • 5-10+ years of experience as a cloud solutions architect, system/network engineer, developer, or a similar technical role with a focus on cloud computing
  • Strong hands-on experience with IaC and configuration management tools (preferably Terraform/Ansible) and Kubernetes, plus the ability to write code in Python
  • Solid understanding of GPU computing practices for ML training and inference workloads, and of GPU software stack components, including drivers and libraries (e.g., CUDA, OpenCL)
  • Excellent communication skills
  • Customer-centric mindset

It will be an added bonus if you have:

  • Hands-on experience with HPC/ML orchestration frameworks (e.g., Slurm, Kubeflow)
  • Hands-on experience with deep learning frameworks (e.g., TensorFlow, PyTorch)
  • Solid understanding of the cloud ML tools landscape from industry leaders (NVIDIA, AWS, Azure, Google)

Waddle Labs:

  • We are an early-stage startup
  • We build robotics models to solve physical bottlenecks in science (e.g., wet-lab experiments)
  • YC W26

The other companies on this career site know what they're doing. We don't. Do you want to help us figure it out?

If you want to find out more, reach out to wave@waddlelabs.ai with 1 sentence about what you’re interested in.

Johns Hopkins University

We invite applications for Postdoctoral Fellow positions in the broad areas of data science and AI, with a focus on developing and applying novel data science approaches, computational tools, and statistical methods to advance health and biomedical research. Johns Hopkins University has recently made a transformative investment in launching a new Data Science and AI Institute, which will serve as the hub for interdisciplinary data collaborations with faculty and students from across Johns Hopkins and aims to build the nation’s foremost destination for the emerging applications, opportunities, and challenges presented by data science, machine learning, and AI.

The role

We seek an experienced AI/ML Specialist Solutions Architect to support AI-focused customers leveraging Nebius services. In this role, you will be a trusted advisor, collaborating with clients to design scalable AI solutions, resolve technical challenges, and manage large-scale AI deployments involving hundreds to thousands of GPUs.

You’re welcome to work remotely from the United States or Canada.

Your responsibilities will include:

  • Designing customer-centric solutions that maximize business value and align with strategic goals.
  • Building and maintaining long-term relationships to foster trust and ensure customer satisfaction.
  • Delivering technical presentations, producing whitepapers, creating manuals, and hosting webinars for audiences with varying technical expertise.
  • Collaborating with engineering and product teams to effectively prioritize and relay customer feedback.

We expect you to have:

  • 7-10+ years of experience with cloud technologies in MLOps engineering, machine learning engineering, or similar roles.
  • Strong understanding of ML ecosystems, including models, use cases, and tooling.
  • Proven experience in setting up and optimizing distributed training pipelines across multi-node and multi-GPU environments.
  • Hands-on knowledge of frameworks like PyTorch or JAX.
  • Excellent verbal and written communication skills.

It will be an added bonus if you have:

  • Expertise in deploying inference infrastructure for production workloads.
  • Ability to transition ML pipelines from PoC to scalable production systems.

Preferred tooling:

  • Programming Languages: Python, Go, Java, C++
  • Orchestration: Kubernetes (K8s), Slurm
  • DevOps Tools: Git, Docker, Helm
  • Infrastructure as Code (IaC): Terraform
  • ML Frameworks and Libraries: PyTorch, TensorFlow, JAX, HuggingFace, Scikit-learn