NeurIPS 2025 Career Opportunities
Here we highlight career opportunities submitted by our Exhibitors and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.
Location: Aalto University, Finland
Topic: Generative Models, Geometric Deep Learning, Neurosymbolic Methods
Applications: LLMs and Drug Discovery
Ideal background: Strong mathematical/theoretical training, along with experience and comfort with programming for deep learning
Contact: Send an email with your CV to Vikas Garg (vgarg@csail.mit.edu)
San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; Los Angeles, CA, US
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.
Pinterest is one of the fastest growing online ad platforms, and our success depends on mining rich user interest data that helps us connect users with highly relevant advertisers and products. We’re looking for a leader with experience in machine learning, data mining, and information retrieval to lead a team that develops new data-driven techniques to show the most engaging and relevant promoted content to users. You’ll be leading a world-class ML team that is growing quickly and laying the foundation for Pinterest’s business success.
What you’ll do:
- Manage and grow the engineering team, providing technical vision and long-term roadmap.
- Design and implement algorithms for real-time bidding, ad scoring/ranking, inventory selection and yield optimization on the DSP.
- Hire, mentor and grow a team of engineers. Set technical direction, manage project roadmaps, and actively guide team execution. Collaborate across Product, Engineering, Marketing and Sales – translating business goals into ML requirements and ensuring solutions meet cross-functional needs.
- Drive product strategy and advocacy: define and evangelize the ML roadmap for programmatic products, present findings and roadmaps to senior leadership, and work with stakeholders to align analytics projects with company goals (e.g. privacy-first personalization, new DSP features).
What we’re looking for:
- Degree in Computer Science, Machine Learning, Statistics or related field.
- 10+ years of industry experience building production machine learning systems at scale in areas such as data mining, search, recommendations, and/or natural language processing.
- Experience in programmatic advertising or information retrieval contexts (RTB, DSP/SSP, search), including familiarity with ad campaign metrics, auction mechanics, and yield optimization.
- Excellent cross-functional collaboration and stakeholder communication skills.
Location: Perth, Australia
Research Associate
Job Reference: 521359
Employment Type: Full Time (Fixed Term, 2 Years)
Categories: Arts, Business, Education, Law
Remuneration
Base salary: Level A, $83,499–$112,371 p.a. (pro-rata) plus 17% superannuation
The Research Centre
The Planning and Transport Research Centre (PATREC) at UWA conducts research with direct application to transport planning and road safety. RoadSense Analytics (RSA) is a video analytics platform for traffic analysis, developed through seven years of sustained R&D. The platform translates Australian research into a market-ready product for transport planning applications.
The Role
You will design, test, and refine computer vision models for traffic video analytics, including detection, tracking, segmentation, and post-processing tasks. You will prepare and manage datasets, benchmark emerging frameworks, and contribute to deployment testing and optimisation across varied environments. Working within a small team, you will document findings, produce technical outputs, and contribute to research that influences road safety and transport planning.
Selection Criteria
Essential:
- Tertiary degree in Computer Science, Applied Mathematics/Statistics, Robotics, Physics or related discipline, with excellent academic record
- Strong foundations in applied mathematics, computer vision and machine learning, particularly object detection and tracking
- Proficiency in deep learning frameworks (e.g., PyTorch, TensorFlow) and Python ML libraries (e.g., NumPy, OpenCV, scikit-learn)
- Experience with dataset preparation, training pipelines, and evaluation methods
- Ability to work independently and collaboratively in a research team
Further Information
Position Description: PD [Research Associate] [521359].pdf
Contact: Associate Professor Chao Sun
Email: chao.sun@uwa.edu.au
Palo Alto
Mission: Design and build the real-time data infrastructure that powers GroqCloud’s global revenue engine, processing hundreds of billions of events each day, sustaining millions of writes per second, and enabling a multi-billion-dollar business to operate in real time. Drive the intelligence layer that fuels global billing, analytics, and real-time business operations at worldwide scale.
Responsibilities & opportunities in this role: Architect high-performance data pipelines to ingest, process, and transform millions of structured and semi-structured events daily. Build distributed, fault-tolerant frameworks for streaming data from diverse sources. Create data services and APIs that make usage and billing data easily accessible across the platform. Develop lightweight tools and dashboards to monitor and visualize data ingestion, throughput, and system health.
Ideal candidates have/are: Strong background in real-time data processing, distributed systems, and analytics infrastructure. Hands-on experience with streaming technologies such as Kafka, Flink, Spark Streaming, or Redpanda and real-time analytics databases such as Clickhouse, Druid, or Pinot. Deep understanding of serialization, buffering, and data flow optimization in high-throughput systems.
Bonus points: Experience deploying and managing workloads on Kubernetes. A passion for systems performance, profiling, and low-latency optimization. Familiarity with gRPC and RESTful API design.
Attributes of a Groqster:
- Humility – Egos are checked at the door
- Collaborative & Team Savvy – We make up the smartest person in the room, together
- Growth & Giver Mindset – Learn it all versus know it all; we share knowledge generously
- Curious & Innovative – Take a creative approach to projects, problems, and design
- Passion, Grit, & Boldness – No-limit thinking, fueling informed risk taking
London
Flow Traders is committed to leveraging the most recent advances in machine learning, computer science, and AI to generate value in the financial markets. We are looking for Quantitative Researchers to join us in this challenge.
As a Quantitative Researcher at Flow Traders, you are an expert in mathematics and statistics. You are passionate about translating challenging problems into equations and models, and have the ability to optimize them using cutting-edge computational techniques. You collaborate with a global team of researchers and engineers to design, build, and optimize our next generation of models and trading strategies.
Are you at the top of your quantitative, modeling, and coding game, and excited by the prospect of demonstrating these skills in competitive live markets? Then this opportunity is for you.
AI Platform Engineer
Location: Boston (US) / Barcelona (Spain)
Position Overview
As an AI Platform Engineer, you are the bridge between AI research and production software. You will:
- Build and maintain AI infrastructure: model serving, vector databases, embedding pipelines
- Enable AI developers to deploy their work reproducibly and safely
- Design APIs for AI inference, prompt management, and evaluation
- Implement MLOps pipelines: versioning, monitoring, logging, experimentation tracking
- Optimize performance: latency, cost, throughput, reliability
- Collaborate with backend engineers to integrate AI capabilities into the product
Key Responsibilities
AI Infrastructure
- Deploy and serve LLMs (OpenAI, Anthropic, HuggingFace, fine-tuned models)
- Optimize inference latency and costs
- Implement caching, rate limiting, and retry strategies
MLOps & Pipelines
- Version models, prompts, datasets, and evaluation results
- Implement experiment tracking (Weights & Biases)
- Build CI/CD pipelines for model deployment
- Monitor model performance and drift
- Set up logging and observability for AI services
API Development
- Design and implement APIs (FastAPI)
- Create endpoints for prompt testing, model selection, and evaluation
- Integrate AI services with backend application
- Ensure API reliability, security, and performance
Collaboration & Enablement
- Work with AI Developers to productionize their experiments aimed at improving user workflows
- Define workflows: notebook/test repository → PR → staging → production
- Document AI infrastructure and best practices
- Review code and mentor AI developers on software practices
Required Skills & Experience
Must-Have
- 7+ years of software engineering experience (Python preferred)
- Experience with LLMs and AI/ML in production: OpenAI API, HuggingFace, LangChain, or similar
- Understanding of vector databases (Pinecone, Chroma, Weaviate, FAISS)
- Cloud infrastructure experience: GCP (Vertex AI preferred) or AWS (SageMaker)
- API development: FastAPI, REST, async programming
- CI/CD and DevOps: Docker, Terraform, GitHub Actions
- Monitoring and observability
- Problem-solving mindset: comfortable debugging complex distributed systems
- Operating experience with AI deployments in enterprise environments
Nice-to-Have
- Experience fine-tuning or training models
- Familiarity with LangChain, Pydantic AI or similar frameworks
- Knowledge of prompt engineering and evaluation techniques
- Experience with real-time inference and streaming responses
- Background in data engineering or ML engineering
- Understanding of RAG architectures
- Contributions to open-source AI/ML projects
Tech Stack
Current Stack:
- Languages: Python (primary), Bash
- AI/ML: OpenAI API, Anthropic, HuggingFace, LangChain, Pydantic AI
- Vector DBs: Pinecone, Chroma, Weaviate, or FAISS
- Backend: FastAPI, SQLAlchemy, Pydantic
- Cloud: GCP (Vertex AI, Cloud Run), Terraform
- CI/CD: GitHub Actions
- Experiment Tracking: MLflow, Weights & Biases, or custom
- Containers: Docker, Kubernetes (optional)
What we offer:
Competitive compensation
Stock Options Plan: Empowering you to share in our success and growth.
Cutting-Edge Tools: Access to state-of-the-art tools and collaborative opportunities with leading experts in artificial intelligence, physics, hardware and electronic design automation.
Work-Life Balance: Flexible work arrangements in one of our offices with potential options for remote work.
Professional Growth: Opportunities to attend industry conferences, present research findings, and engage with the global AI research community.
Impact-Driven Culture: Join a passionate team focused on solving some of the most challenging problems at the intersection of AI and hardware.
Location: Hybrid (2-3 days a week on-site) in San Mateo, CA.
BigHat is opening an ML Fellowship. We've got an awesome high-throughput wetlab that pumps proprietary data into custom data and ML Ops infra to power our weekly design-build-train loop. Come solve hard-enough-to-be-fun problems in protein engineering in service of helping patients!
Pittsburgh, PA
US Citizenship required (green card or visa does not suffice)
Work with the world leaders in computational game theory on software products for real problems of importance! Positions are available for working on the nation's best fighter pilot AI, on wargaming, on command and control, on missile defense, and on optimizing the world's nuclear stability. Work on the most important problems in the world! The work leverages the leading course-of-action generation and execution AI system, which we have developed.
Required qualifications:
- Degree as indicated in the position announcement for each role
- Strong software development skills
- Excitement to change the world with AI products
- Desire to work with the world's leading experts in a fast-moving environment
- US citizenship (green card or visa does not suffice), and eligibility to obtain Top Secret clearance
Why apply?
- The company is the world leader in computational game theory AI
- Unique opportunity to apply game theory-based software products to the real world
- Ability to work directly with world-leading AI experts
- The company is already profitable
- CMU startup in close proximity to CMU
- Competitive compensation, including equity in a fast-moving, profitable startup
- The company has a no-jerks policy
** Our Founder, President, and CEO, Dr. Tuomas Sandholm, will be available to conduct interviews personally at NeurIPS between December 4th and 7th, 2025, and additional positions will be available thereafter as well. **
Global - United States, Europe, Asia
Quantitative Researchers play a key role in this mission by developing next-generation models and trading approaches for a range of investment strategies. You’ll get to challenge the impossible in quantitative research by applying sophisticated statistical techniques to financial market data, some of the most complex data sets in the world.
AI Scientist
The Role
This AI Scientist position will drive the development and optimization of Aizen's generative AI-based peptide drug discovery platform, DaX™. You will be responsible for incorporating state-of-the-art neural network architectures and high-performance computational biology software to improve the accuracy and throughput of our drug discovery efforts. Your work will be critical in translating experimental data and scientific insights into scalable, robust models.
Our Ideal Candidate
You are passionate about the company’s mission and a self-starter with an inextinguishable fire to compete and succeed. You thrive in an environment that requires crisp judgment, pragmatic decision-making, rapid course-corrections, and comfort with market ambiguity. You discharge your duties within a culture of mutual team respect, high performance, humility, and humor.
Key Responsibilities
- Incorporate state-of-the-art neural network architectures and training methods to improve accuracy and throughput of DaX™, Aizen's generative AI-based peptide drug discovery platform.
- Develop, test, deploy, and maintain high-performance computational biology software according to the needs and feedback of experimentalists at Aizen.
- Orchestrate new and existing Aizen software tools into scalable, highly-available, and easy-to-use cloud pipelines.
- Work closely with experimental scientists at Aizen to manage storage and access of Aizen's experimental data.
Candidate Skills and Experience
- Ph.D. and/or postdoctoral studies in Computer Science, Computational Biology, Bioinformatics, or a related field.
- Deep, demonstrated expertise in advanced Generative Models (e.g., Flow Matching, Diffusion Models) for de novo design in discrete and continuous spaces.
- Experience integrating and leveraging data from physics-based simulations (e.g., Molecular Dynamics) into machine learning models.
- Experience collecting, sanitizing, and training on biological property datasets, with a preference for prior experience with peptides.
- Proficiency with Python, shell scripting, and a high-performance compiled language.
- Entrepreneurial spirit, self-starter with proper balance of scientific creativity and disciplined execution.
- Preferred: Experience designing and maintaining high-availability cloud architectures for hosting high-performance biological analysis software.
- Preferred: Experience in chemical featurization, representation, and model application for peptide chemistry, non-canonical amino acids (NCAAs), and complex peptide macrocycles.
- Preferred: Experience in protein/peptide folding dynamics, protein structural analysis, and resultant data integration to improve computation/design.
About Aizen
Aizen is an AI-driven biotechnology company pioneering Mirror Peptides, a novel class of biologic medicines. Mirror Peptides are synthetic, fully D-amino acid peptides that represent a vast, unexplored therapeutic chemical space. The company is backed by life science venture capital and based in the biotech hub of San Diego, CA.
Location & Compensation
- Reports to: Principal AI Scientist
- Location: This position offers fully remote work with monthly/quarterly trips to company facilities in California.
- Compensation: Competitive base salary, stock options, and a benefits package including medical coverage.
Contact
To apply, please contact us at jobs@aizentx.com.
Aizen is an equal opportunity employer.