

Tutorial

Application Development using Large Language Models

Andrew Ng · Isa Fulford

Hall E2 (level 1)
Mon 11 Dec 11:45 a.m. PST — 2:15 p.m. PST

Abstract:

The rise of large language models (LLMs) offers a new approach for quickly building AI applications. While LLMs such as ChatGPT, Bard, and Bing Chat are widely understood as consumer tools, the best practices for developers to use these models effectively through API calls remain poorly understood. This tutorial will share with the NeurIPS audience best practices for building AI applications using LLMs.
This course will include, but also go significantly beyond, “prompt engineering.” We will share best practices for integrating LLMs into more complex software systems, evaluating and continually improving their performance, and enhancing their safety. We will discuss best practices for using LLMs in common operations such as summarizing, making inferences, transforming text, and expanding text, as well as in-context learning, fine-tuning, and the utilization of both open-source and proprietary cloud-hosted LLMs.
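As a minimal sketch of one technique named above, in-context learning: a few labeled examples are placed directly in the prompt so the model can infer the task without any fine-tuning. The task, example texts, and helper name below are illustrative assumptions, not taken from the tutorial.

```python
# Illustrative sketch of in-context (few-shot) prompting.
# The text-transformation task and all example strings are assumptions.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: an instruction, labeled examples, then the query."""
    parts = ["Rewrite each informal sentence in a formal register.\n"]
    for informal, formal in examples:
        parts.append(f"Informal: {informal}\nFormal: {formal}\n")
    # The trailing "Formal:" cues the model to complete the final example.
    parts.append(f"Informal: {query}\nFormal:")
    return "\n".join(parts)

examples = [
    ("gotta bounce, see ya", "I must leave now; goodbye."),
    ("that movie was meh", "The film was unremarkable."),
]
prompt = build_few_shot_prompt(examples, "this code is kinda busted")
print(prompt)
```

The resulting string would be sent as-is to any LLM completion endpoint; the same pattern extends to summarizing, inference, and text-expansion tasks by swapping the instruction and examples.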
LLMs are transforming the development process of AI applications. For example, a sentiment classifier that used to take weeks to build, via a process of collecting and labeling training examples, tuning a supervised model, and then finally deploying the model to make inferences, can now be built in hours by prompting an LLM API.
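The sentiment-classifier workflow described above can be sketched in a few lines. This is a hedged illustration, not the tutorial's own code: `call_llm` stands in for any chat-completion API call (e.g. an OpenAI or open-source model endpoint), and the prompt wording is an assumption.

```python
# Sketch: a sentiment classifier built by prompting an LLM API instead of
# training a supervised model. `call_llm` is a placeholder for any
# completion endpoint, passed in so the example runs without an API key.

def classify_sentiment(review: str, call_llm) -> str:
    """Classify a review as 'positive' or 'negative' via a single prompt."""
    prompt = (
        "Classify the sentiment of the following product review as "
        "exactly one word, 'positive' or 'negative'.\n\n"
        f"Review: {review}\nSentiment:"
    )
    # Normalize the model's free-text reply into a clean label.
    return call_llm(prompt).strip().lower()

# Usage with a stubbed model, so the sketch is self-contained:
stub = lambda prompt: "positive"
print(classify_sentiment("Works great, highly recommend!", stub))  # positive
```

In a real deployment, `call_llm` would wrap an actual API request; the point of the example is that the entire "model" is one prompt, replacing the collect-label-train-deploy pipeline.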
Through this tutorial, we hope to connect research and practice, and also inspire researchers to pursue new directions relevant to how LLMs are being used today.
