Poster

Avoiding Undesired Future with Minimal Cost in Non-Stationary Environments

Wen-Bo Du · Tian Qin · Tian-Zuo Wang · Zhi-Hua Zhou

East Exhibit Hall A-C #4008
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Machine learning (ML) has achieved remarkable success in prediction tasks. In many real-world scenarios, however, the crucial concern is not merely to predict an outcome with an ML model but to make decisions that prevent undesired outcomes from occurring, known as the avoiding-undesired-future (AUF) problem. To this end, a framework called rehearsal learning was recently proposed; it works effectively in stationary environments by exploiting the influence relations among variables. In real tasks, however, environments are usually non-stationary: influence relations change over time, which can cause the existing method to fail at AUF. In this paper, we present a novel sequential methodology that handles the non-stationarity and maintains estimates of the dynamic influence relations, which are essential for rehearsal learning to avoid undesired outcomes. We further take the cost of decision actions into account and formulate the AUF problem with minimal action cost in non-stationary environments. We prove that in linear cases the formulated problem can be transformed into a well-studied convex quadratically constrained quadratic program (QCQP), thereby establishing the first polynomial-time rehearsal-based approach to the AUF problem. We provide theoretical guarantees for our method, and experimental results validate its effectiveness and efficiency.
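To make the QCQP reduction concrete, here is a minimal sketch of what a linear-case AUF decision problem of this shape can look like, solved with cvxpy. It is not the paper's construction: the cost matrix C, influence matrix B, offset b, desired-outcome region (y_des, Q, r), and all dimensions are hypothetical placeholders standing in for the quantities the method would estimate.

```python
# Hypothetical sketch: choose a minimal-cost action whose linearly modeled
# outcome lands in a desired region -- a convex QCQP. All data below are
# illustrative placeholders, not the paper's estimated influence relations.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_act, n_out = 4, 2                      # action and outcome dimensions

C = np.eye(n_act)                        # quadratic action-cost matrix (PSD)
B = rng.standard_normal((n_out, n_act))  # assumed linear influence relations
b = rng.standard_normal(n_out)           # predicted outcome under no action
y_des = np.zeros(n_out)                  # center of the desired-outcome region
Q = np.eye(n_out)                        # shape of the desired region (PSD)
r = 0.5                                  # radius of the desired region

a = cp.Variable(n_act)                   # decision action
outcome = B @ a + b                      # linear outcome model

# Minimize quadratic action cost subject to a quadratic constraint that keeps
# the outcome inside the desired region: a convex QCQP, solvable in
# polynomial time by off-the-shelf conic solvers.
prob = cp.Problem(
    cp.Minimize(cp.quad_form(a, C)),
    [cp.quad_form(outcome - y_des, Q) <= r**2],
)
prob.solve()
print("minimal-cost action:", a.value)
```

Because both the objective and the constraint are convex quadratics in the action variable, any standard conic solver handles the problem efficiently; the paper's contribution lies in showing that the AUF formulation with dynamic influence estimates admits a transformation into this well-studied class.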
