
Poster
in
Workshop: Trustworthy and Socially Responsible Machine Learning

Is the Next Winter Coming for AI? The Elements of Making Secure and Robust AI

Josh Harguess


Abstract:

While the recent boom in Artificial Intelligence (AI) has driven the technology's use and popularity across many domains, the same boom has exposed vulnerabilities that could cause the next "AI winter". AI is no stranger to "winters", or periods of reduced funding and interest in the technology and its applications. Many in the field consider the early 1970s the first AI winter, with another following in the late 1990s and early 2000s. There is some consensus that another AI winter is all but inevitable in some shape or form; however, current thinking on the next winter does not consider secure and robust AI and the implications of success or failure in these areas. The emergence of AI as an operational technology introduces potential vulnerabilities that threaten AI's longevity. The National Security Commission on AI (NSCAI) report outlines recommendations for building secure and robust AI, particularly in government and Department of Defense (DoD) applications. However, are they enough to fully secure AI systems and prevent the next "AI winter"? An approaching "AI winter" would have a tremendous impact on DoD systems as well as those of our adversaries. Understanding and analyzing the potential of this event would better prepare us for such an outcome and help us identify the tools needed to counter and prevent this "winter" by securing and robustifying our AI systems. In this paper, we introduce four pillars of AI assurance that, if implemented, will help us avoid the next AI winter: security, fairness, trust, and resilience.
