

NeurIPS 2025 Career Opportunities

Here we highlight career opportunities submitted by our Exhibitors, and other top industry, academic, and non-profit leaders. We would like to thank each of our exhibitors for supporting NeurIPS 2025.

Search Opportunities

London

Description - Bloomberg’s Engineering AI department has 350+ AI practitioners building highly sought-after products and features that often require novel innovations. We are investing in AI to build better search, discovery, and workflow solutions using technologies such as transformers, gradient-boosted decision trees, large language models, and dense vector databases. We are expanding our group and seeking highly skilled individuals to contribute to the teams of Machine Learning (ML) and Software Engineers bringing innovative solutions to AI-driven, customer-facing products.

At Bloomberg, we believe in fostering a transparent and efficient financial marketplace. Our business is built on technology that makes news, research, financial data, and analytics on over 35 million financial instruments searchable, discoverable, and actionable across the global capital markets.

Bloomberg has been building Artificial Intelligence applications that offer solutions to these problems with high accuracy and low latency since 2009. We build AI systems to help process and organize the ever-increasing volume of structured and unstructured information needed to make informed decisions. Our use of AI uncovers signals, helps us produce analytics about financial instruments in all asset classes, and delivers clarity when our clients need it most.

We are looking for Senior AI Engineers with expertise and a passion for Information Retrieval, Search technologies, Natural Language Processing and Generative AI to join our AI Experiences team. Our teams are working on exciting initiatives such as:

- Developing and deploying robust Retrieval-Augmented Generation (RAG) systems, curating high-quality data for model training and evaluation, and building evaluation frameworks to enable rapid iteration and continuous improvement based on real-world user interactions.
- Designing and implementing tools that enable LLM-powered search agents to effectively handle complex client queries, shaping Bloomberg's generative AI ecosystem, and scaling these innovative solutions to support thousands of users.
- Leveraging both traditional ML approaches and Generative AI to prototype, build, and maintain high-performing, client-facing search and streaming applications that deliver timely and relevant financial insights.
- Building robust APIs to facilitate search across diverse collections of data, ensuring highly relevant results and maintaining system stability and reliability.
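The retrieval-augmented generation pattern mentioned above can be sketched in a few lines: embed documents, retrieve the closest ones for a query, and assemble a grounded prompt for a language model. This is an illustrative toy only; the bag-of-words "embedding" stands in for a real dense encoder, and all names and documents here are hypothetical, not Bloomberg's systems.

```python
# Minimal RAG skeleton: embed, retrieve top-k by cosine similarity,
# then build a context-grounded prompt. Toy embedding = word counts.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lower-cased bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

docs = [
    "Quarterly earnings for ACME rose 12 percent.",
    "The bond market rallied on rate-cut hopes.",
    "ACME announced a new CEO this quarter.",
]
print(build_prompt("What happened to ACME earnings this quarter?", docs))
```

A production system would add the evaluation loop the listing describes: logging which retrieved passages the model actually used, and feeding that back into retriever and prompt improvements.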

You'll have the opportunity to:
- Collaborate closely with cross-functional teams, including product managers and engineers, to integrate AI solutions into client-facing products, enhance analytical capabilities, and improve user experience.
- Architect, develop, and deploy production-quality search systems powered by LLMs, emphasizing both ML innovation and solid software engineering practices.
- Continuously identify areas for improvement within our search systems, proactively experiment with new ideas, and rapidly implement promising solutions, even when improvements rely purely on engineering without direct ML involvement.
- Design, train, test, and iterate on models and algorithms while taking ownership of the entire lifecycle, from idea inception to robust deployment and operationalization.
- Stay at the forefront of research in IR, NLP, and Generative AI, incorporating relevant innovations into practical, impactful solutions.
- Represent Bloomberg at industry events, scientific conferences, and within open-source communities.

Various locations available


Adobe is looking for a Machine Learning intern who will apply AI and machine learning techniques to big-data problems to help Adobe better understand, lead and optimize the experience of its customers.

By using predictive models, experimental design methods, and optimization techniques, you will be working on the research and development of exciting projects like real-time online media optimization, sales operation analytics, customer churn scoring and management, customer understanding, product recommendation and customer lifetime value prediction.

All 2026 Adobe interns will be co-located hybrid. This means that interns will work between their assigned office and home. Interns will be based in the office where their manager and/or team are located, where they will get the most support to ensure collaboration and the best employee experience. Managers and their organization will determine the frequency they need to go into the office to meet priorities.  

What You’ll Do
- Develop predictive models on large-scale datasets to address various business problems with statistical modeling, machine learning, and analytics techniques.
- Develop and implement scalable, efficient, and interpretable modeling algorithms that can work with large-scale data in production systems.
- Collaborate with product management and engineering groups to develop new products and features.

What You Need to Succeed
- Currently enrolled full-time and pursuing a Master’s or PhD degree in Computer Science or Computer Engineering (or equivalent experience), with an expected graduation date of December 2026 – June 2027
- Good understanding of statistical modeling, machine learning, deep learning, or data analytics concepts
- Proficiency in one or more programming languages such as Python, Java, and C
- Familiarity with one or more machine learning or statistical modeling tools such as R, MATLAB, and scikit-learn
- Strong analytical and quantitative problem-solving ability
- Excellent communication and relationship skills; a team player
- Ability to participate in a full-time internship between May and September

Redwood City, CA


Biohub is leading the new era of AI-powered biology to cure or prevent disease through its 501(c)(3) medical research organization, with the support of the Chan Zuckerberg Initiative.

The Team

Biohub supports the science and technology that will make it possible to help scientists cure, prevent, or manage all diseases by the end of this century. While this may seem like an audacious goal, in the last 100 years biomedical science has made tremendous strides in understanding biological systems, advancing human health, and treating disease.

Achieving our mission will only be possible if scientists are able to better understand human biology. To that end, we have identified four grand challenges that will unlock the mysteries of the cell and how cells interact within systems — paving the way for new discoveries that will change medicine in the decades that follow:

- Building an AI-based virtual cell model to predict and understand cellular behavior
- Developing state-of-the-art imaging systems to observe living cells in action
- Instrumenting tissues to better understand inflammation, a key driver of many diseases
- Engineering and harnessing the immune system for early detection, prevention, and treatment of disease

As a Senior Data Scientist, you'll lead the creation of groundbreaking datasets that power our AI/ML efforts within and across our scientific grand challenges. Working at the intersection of data science, biology, and AI, you will focus on creating large, AI-ready datasets spanning single-cell sequencing, immune receptor profiling, and mass spectrometry peptidomics data. You will define data needs, format standards, analysis approaches, and quality metrics, and build pipelines to ingest, transform, and validate the data products that form the foundation of our experiments.

Our Data Ecosystem:

These efforts will form a part of, and interoperate with, our larger data ecosystem. We are generating unprecedented scientific datasets that drive biological innovation:

- Billions of standardized cells of single-cell transcriptomic data, with a focus on measuring genetic and environmental perturbations
- Tens of thousands of donor-matched DNA & RNA samples
- Tens of petabytes of static and dynamic imaging datasets
- Hundreds of terabytes of mass spectrometry datasets
- Diverse, large multi-modal biological datasets that enable biological bridges across measurement types and facilitate multi-modal model training to define how cells act

When analysis of a dataset is complete, you will help publish it through public resources like CELLxGENE Discover, the CryoET Portal, and the Virtual Cell Platform, used by tens of thousands of scientists monthly to advance understanding of genetic variants, disease risk, drug toxicities, and therapeutic discovery.

You'll collaborate with cross-functional teams to lead dataset definition, ingestion, transformation, and delivery for AI modeling and experimental analysis. Success means delivering high-quality, usable datasets that directly address modeling challenges and accelerate scientific progress. Join us in building the data foundation that will transform our understanding of human biology and move us along the path to curing, preventing, and managing all disease.

New York

Description - Bloomberg’s Engineering AI department has 350+ AI practitioners building highly sought-after products and features that often require novel innovations. We are investing in AI to build better search, discovery, and workflow solutions using technologies such as transformers, gradient-boosted decision trees, large language models, and dense vector databases. We are expanding our group and seeking highly skilled individuals to contribute to the teams of Machine Learning (ML) and Software Engineers bringing innovative solutions to AI-driven, customer-facing products.

At Bloomberg, we believe in fostering a transparent and efficient financial marketplace. Our business is built on technology that makes news, research, financial data, and analytics on over 35 million financial instruments searchable, discoverable, and actionable across the global capital markets.

Bloomberg has been building Artificial Intelligence applications that offer solutions to these problems with high accuracy and low latency since 2009. We build AI systems to help process and organize the ever-increasing volume of structured and unstructured information needed to make informed decisions. Our use of AI uncovers signals, helps us produce analytics about financial instruments in all asset classes, and delivers clarity when our clients need it most.

We are looking for Senior LLM Research Engineers with strong expertise in, and a passion for, Large Language Modeling research and applications to join our team.

The advent of large language models (LLMs) presents new opportunities for expanding our NLP capabilities with new products. This would allow our clients to ask complex questions in natural language and receive insights extracted across our vast number of Bloomberg APIs or from potentially millions of structured and unstructured information sources.

Broad areas of applications and interest include: application and fine-tuning methods for LLMs, efficient methods for training, multimodal models, learning from feedback and human preferences, retrieval-augmented generation, summarization, semantic parsing and tool use, domain adaptation of LLMs to financial domains, dialogue interfaces, evaluation of LLMs, model safety and responsible AI.

What's in it for you:
- Collaborate with colleagues on building and applying LLMs for production systems and applications
- Write, test, and maintain production-quality code
- Train, tune, evaluate, and continuously improve LLMs using large amounts of high-quality data to develop state-of-the-art financial NLP models
- Demonstrate technical leadership by owning cross-team projects
- Stay current with the latest research in AI, NLP, and LLMs and incorporate new findings into our models and methodologies
- Represent Bloomberg at scientific and industry conferences and in open-source communities
- Publish product and research findings in documentation, whitepapers, or publications at leading academic venues

You'll need to have:
- Practical experience with Natural Language Processing problems and familiarity with Machine Learning, Deep Learning, and Statistical Modeling techniques
- A Ph.D. in ML, NLP, or a relevant field, or an MSc in CS, ML, Math, Statistics, Engineering, or related fields with 2+ years of relevant work experience
- Experience with Large Language Model training and fine-tuning frameworks such as PyTorch, Hugging Face, or DeepSpeed
- Proficiency in software engineering
- An understanding of Computer Science fundamentals such as data structures and algorithms, and a data-oriented approach to problem-solving
- Excellent communication skills and the ability to collaborate with engineering peers as well as non-engineering stakeholders
- A track record of authoring publications in top conferences and journals is a strong plus

San Jose, CA, USA


Adobe is looking for a Senior Software Engineer to contribute to building the platform that powers Adobe Experience Platform’s Generative AI capabilities. Partnering with other business units, you will be building products that transform the way companies approach audience creation, journey optimization, and personalization at scale. You will join a diverse, lively group of engineers and scientists long established in the ML space. The work is dynamic, fast-paced, creative, collaborative and data-driven.

What You'll Do
- Architect solutions to implement functionality across multiple services and teams.
- Design and build solutions for comprehensive monitoring and alerting of anomalies.
- Design and build highly available services that scale horizontally.
- Participate in all aspects of software development activities, including design, coding, code review, unit/integration/end-to-end testing, refactoring, bug fixing, and documentation.
- Work in multi-functional teams to ensure timely delivery of high-quality product features.
- Rapidly prototype ideas and concepts and research the latest industry trends.
- Experiment with upcoming technologies in a fast-paced environment.

What You Need to Succeed
The ideal candidate will have the following background:
- Bachelor's degree or higher in Computer Science, or equivalent experience in the field.
- 10+ years of experience in web technologies.
- Proven programming skills with extensive experience in languages such as Java and Python.
- Proven expertise building large-scale distributed systems.
- Experience building, deploying, and managing infrastructure in public clouds (Azure/AWS).
- Demonstrated ownership of the entire SDLC, including designing, building, testing, deploying, and supporting production microservices in a fast-paced environment.
- Strong problem-solving and analytical abilities.
- A self-starter requiring minimal direction, able to learn quickly and adapt to changing priorities and requirements.
- Willingness to accept challenges outside one's comfort zone and deliver viable solutions within defined time boundaries.
- Ability to think through solutions from both a short-term and a long-term lens in an iterative development cycle.
- A dedication to learning and sharing ideas with fellow engineers.
- Mastery of breaking down, discussing, and communicating abstract technical concepts.
- Familiarity with agile development methodologies.
- Real-world experience working with Generative AI.
- Experience with Machine Learning infrastructure and applications.

Remote - Americas

Applied Machine Learning Engineer - Search

Every day, millions of people search for products across Shopify's ecosystem. That's not just queries—that's dreams, businesses, and livelihoods riding on whether someone finds the perfect vintage jacket or the exact drill bit they need. As a Machine Learning Engineer specializing in Search Recommendations, you'll be the one making that magic happen. With a search index unifying over a billion products, you're tackling one of the hardest search problems at unprecedented scale. We're building cutting-edge product search from the ground up using the latest LLM advances and vector matching technologies to create search experiences that truly understand what people are looking for.
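The vector-matching approach described above boils down to scoring every product embedding against a query embedding and returning the closest matches. A minimal sketch under stated assumptions: the three-dimensional vectors and product names here are made up for illustration, and a real system at billion-product scale would replace the linear scan with an approximate nearest-neighbor index.

```python
# Toy dense-vector product search: cosine similarity via a linear scan.
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def search(query_vec, catalog, k=2):
    """Return the k catalog items closest to the query by cosine similarity."""
    q = normalize(query_vec)
    scored = []
    for name, vec in catalog.items():
        v = normalize(vec)
        # Dot product of unit vectors == cosine similarity.
        score = sum(a * b for a, b in zip(q, v))
        scored.append((score, name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:k]]

catalog = {
    "vintage leather jacket": [0.9, 0.1, 0.0],
    "cordless drill": [0.0, 0.2, 0.9],
    "denim jacket": [0.8, 0.3, 0.1],
}
print(search([1.0, 0.2, 0.0], catalog))  # jackets outrank the drill
```

In practice the query vector would come from an LLM-based encoder, and relevance engineering layers re-ranking, business rules, and personalization signals on top of this raw similarity score.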

Key Responsibilities:

  • Design and implement AI-powered features to enhance search recommendations and personalization
  • Collaborate with data scientists and engineers to productionize data products through rigorous experimentation and metrics analysis
  • Build and maintain robust, scalable data pipelines for search and recommendation systems
  • Develop comprehensive tools for evaluation and relevance engineering, following high-quality software engineering practices
  • Mentor engineers and data scientists while fostering a culture of innovation and technical excellence

Qualifications:

  • Expertise in relevance engineering and recommendation systems, with hands-on experience in Elasticsearch, Solr, or vector databases
  • Strong proficiency in Python with solid object-oriented programming skills
  • Proven ability to write optimized, low-latency code for high-performance systems
  • Experience deploying machine learning, NLP, or generative AI products at scale (strong plus)
  • Familiarity with statistical methods and exposure to Ruby, Rails, or Rust (advantageous)
  • Track record of shipping ML solutions that real users depend on

This role may require on-call work.

Ready to connect merchants with their perfect customers? Join the team that's making commerce better for everyone.


At Shopify, we pride ourselves on moving quickly—not just in shipping, but in our hiring process as well. If you're ready to apply, please be prepared to interview with us within the week. Our goal is to complete the entire interview loop within 30 days. You will be expected to complete a live pair programming session; come prepared with your own IDE.


Location: UC Berkeley, Berkeley, CA, US


Description - The Bakar Institute of Digital Materials for the Planet (BIDMaP) is an institute in UC Berkeley’s new College of Computing, Data Science, and Society (CDSS), bringing together AI, machine learning, and data science with the natural sciences to address the planet’s most urgent challenges. BIDMaP is focused on developing new techniques in AI that will enhance and accelerate discovery in the experimental natural sciences and the development of novel materials to address planetary challenges. To this end, BIDMaP promotes collaboration between world-renowned AI/ML experts, chemists, physicists, and other physical scientists. By combining cutting-edge chemistry with artificial intelligence, machine learning, and robotics, BIDMaP is reimagining how materials can be designed and optimized for clean energy, clean air, clean water, advanced batteries, and sustainable chemical production.

NVIDIA is developing the NVIDIA DRIVE AV Solution (NDAS), powered by the latest advancements in AI and accelerated computing. We are seeking a highly motivated software expert to join our Autonomous Vehicles (AV) Drive-Alpha team in Santa Clara, US. You will drive the engineering execution of feature development, meeting or exceeding key metric requirements, especially for L2++ and L3/L4.

Drive-Alpha consists of proficient domain experts spanning the full stack of autonomous driving, including perception, fusion, prediction, planning and control, and autonomous models, many with proven development experience for the highly competitive market in key functions like Highway NOA (Navigation on Autopilot), Urban NOA, and point-to-point driving (including parking). The team is responsible for integrating and signing off on NDAS component teams' merge requests/change lists, promotes validated changes to merge into the stable branch, analyzes the root cause of identified regressions, and drives the corrective actions taken by component engineering teams for a productive CI/CD software development process. Team members also integrate tightly into component teams' development, acting as dependency resolvers to deliver the cross-functional improvements that are most impactful to the NDAS product. We nurture teamwork among component teams' engineers and establish positive relationships and communications with partner organizations.

What you’ll be doing:
- Provide in-depth and insightful technical feedback on the quality of the NDAS L2++/L3/L4 SW stack, based on performance metrics proven through offline replay and in-car testing.
- Identify the weak links of the L2++/L3/L4 SW stack and strengthen them.
- Integrate, test, and sign off on SW stack code changes and model updates, and drive the Root-Cause-Corrective-Action (RCCA) process to continuously improve the quality of NDAS SW.
- Decompose complicated cross-functional problems into actionable items and coordinate a concerted effort among multiple collaborators.
- Join forces with component team developers when necessary, provide domain-expert input toward solutions for hard problems, and contribute production-quality code to component code bases.

What we need to see:
- BS/MS in Electrical Engineering, Computer Science, or related fields, or equivalent experience.
- 5+ years of related experience in software development, with hands-on development experience in autonomous driving (AD) for automotive.
- Great coding skills in modern C++ and scripting languages like Python.
- Deep understanding of L2++/L3/L4 product features in the market.
- Hands-on experience debugging AD SW problems.
- Excellent communication and interpersonal skills, with the ability to thrive in a cross-disciplinary environment.

Ways To Stand Out From The Crowd:
- Experience working as a hands-on tech lead for one or more autonomous driving components.
- Hands-on development experience with an SOP-ed AD and/or ADAS product.
- Rich experience with in-car testing and great intuition for first-level triaging (from symptom to component).
- Familiarity with CI/CD processes, test automation, Jenkins, and Log-Sim replay.

NVIDIA has some of the most forward-thinking and hardworking people in the world. If you're creative and autonomous, we want to hear from you! Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 136,000 USD - 212,750 USD.

You will also be eligible for equity and benefits.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

Research Fellow

Job Reference: 521360
Employment Type: Full Time (Fixed Term, 2 Years)
Location: Perth, Western Australia

Remuneration

Base salary: Level B, $118,150–$139,812 p.a. (pro-rata) plus 17% superannuation

The Research Centre

The Planning and Transport Research Centre (PATREC) at UWA conducts research with direct application to transport planning and road safety. RoadSense Analytics (RSA) is a video analytics platform for traffic analysis, developed through seven years of sustained R&D. The platform translates Australian research into a market-ready product for transport planning applications.

The Role

You will lead research and development of advanced computer vision models, multi-object tracking, and post-processing methods to improve traffic video analytics in complex environments. You will drive benchmarking, evaluation, and deployment optimisation of AI models, ensuring scalability and real-world performance. You will publish research, mentor junior staff, and collaborate with engineers and partners to translate innovations into production-ready solutions.

Selection Criteria

Essential:

  • Tertiary degree in Computer Science, Applied Mathematics/Statistics, Robotics, Physics, or related discipline, with excellent academic record
  • Demonstrated expertise in computer vision and machine learning, including object detection, segmentation, and multi-object tracking in challenging conditions such as occlusions, crowded scenes, and object re-identification
  • Proficiency in deep learning frameworks (e.g., PyTorch, TensorFlow) and Python ML libraries (e.g., NumPy, OpenCV, scikit-learn)
  • Experience implementing and evaluating state-of-the-art tracking algorithms such as DeepSORT, ByteTrack, and Transformer-based approaches
  • Proven ability to design and run rigorous experimental frameworks, including benchmarking, ablation studies, and field validation
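The multi-object tracking requirement above centers on associating detections across frames. A minimal sketch of the greedy IoU-based matching step at the core of SORT-style trackers (DeepSORT adds appearance features and ByteTrack additionally recovers low-score detections); the boxes and threshold here are illustrative, not from the RSA platform:

```python
# Greedy IoU association between existing tracks and new detections.
# Boxes are axis-aligned (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_thresh=0.3):
    """Greedily match each track to its best unclaimed detection.

    Returns (track_index, detection_index) pairs; unmatched tracks/detections
    are left for track-deletion or track-birth logic.
    """
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_thresh:
            break  # remaining pairs overlap too little to match
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
detections = [(21, 19, 31, 29), (1, 1, 11, 11)]
print(associate(tracks, detections))  # → [(1, 0), (0, 1)]
```

Production trackers typically replace the greedy loop with Hungarian assignment and gate matches with motion predictions (e.g. a Kalman filter), which is where the occlusion and re-identification challenges named in the criteria arise.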

Further Information

Position Description: PD [Research Fellow] [521360].pdf

Contact: Associate Professor Chao Sun
Email: chao.sun@uwa.edu.au