

Demonstration

Deep Neural Net implementations with FPGAs

Thomas Boser · Paolo Calafiura · Ian Johnson

Pacific Ballroom Concourse #D10

Abstract:

With recent increases in the luminosity of Large Hadron Collider (LHC) collisions producing more tracking data, an efficient track reconstruction solution has become necessary. As it currently stands, the level-1 trigger must identify 50 million particle tracks per second with less than 5 microseconds of latency per track. This requires a low-latency, highly parallel implementation of a connect-the-dots track reconstruction algorithm. Current algorithms are implemented on ASIC chips or FPGAs and scale as O(N²) or worse; with these implementations, a roughly 10x resource shortage is projected.
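To illustrate why connect-the-dots approaches scale as O(N²), the toy sketch below enumerates every pair of detector hits when building track seeds. It is a hypothetical, simplified stand-in (the hit representation, the pair-selection cut, and the function name are all assumptions), not the ASIC or FPGA algorithms used in the actual trigger.

```python
import itertools

def pairwise_seeds(hits, max_dphi=0.1):
    """Toy connect-the-dots seeding: test every pair of hits.

    `hits` is a list of (layer, phi) tuples; the adjacency and angular
    cuts are placeholders. The loop over all hit pairs is what makes
    classical seeding scale as O(N^2) in the number of hits N.
    """
    seeds = []
    for (l1, phi1), (l2, phi2) in itertools.combinations(hits, 2):
        if l2 == l1 + 1 and abs(phi2 - phi1) < max_dphi:
            seeds.append(((l1, phi1), (l2, phi2)))
    return seeds
```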

Simultaneously, deep learning has become a standard technique in computer vision, and we explore the viability of a deep learning solution for track reconstruction. We have investigated various DNN implementations applied to the tracking problem, including CNNs, RNNs, LSTMs, and Deep Kalman Filters, with promising preliminary results. Popular deep learning libraries rely heavily on Graphics Processing Units (GPUs) to shoulder the bulk of the computation and show impressive results with rapidly improving throughput. Unfortunately, this does not carry over to latency-sensitive applications such as our track reconstruction problem, because GPUs cannot guarantee low latency.
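The abstract does not detail the explored architectures, so the sketch below is only a rough illustration of the RNN/LSTM-style approach it mentions: a small LSTM that extrapolates a track candidate's next hit position from the hits already assigned to it. The layer sizes, input features, and synthetic training data are all hypothetical.

```python
import numpy as np
import tensorflow as tf

# Toy LSTM track follower: given the (x, y, z) coordinates of the hits
# already assigned to a track candidate, predict where the next hit
# should appear on the following detector layer.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(None, 3)),  # variable-length hit sequence
    tf.keras.layers.Dense(3),                         # predicted (x, y, z) of next hit
])
model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in data: 1024 track stubs of 5 hits each.
hit_sequences = np.random.randn(1024, 5, 3).astype("float32")
next_hits = np.random.randn(1024, 3).astype("float32")
model.fit(hit_sequences, next_hits, epochs=1, batch_size=64, verbose=0)
```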
