Deep Neural Net implementations with FPGAs
Thomas Boser · Paolo Calafiura · Ian Johnson

Tue Dec 05 07:00 PM -- 10:30 PM (PST) @ Pacific Ballroom Concourse #D10

With recent increases in the luminosity of Large Hadron Collider (LHC) collisions producing more tracking data, an efficient track reconstruction solution has become necessary. As it currently stands, the level-1 trigger must identify 50 million particle tracks per second with lower than 5-microsecond latency per track. This requires a low-latency, highly parallel implementation of a connect-the-dots track reconstruction algorithm. Current algorithms are implemented on ASIC chips or FPGAs and scale as O(N²) or worse; it is projected that current implementations will face an O(10x) resource shortage.
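To see why connect-the-dots reconstruction scales as O(N²), consider the seeding step, which tests every pair of detector hits for geometric compatibility. The sketch below is purely illustrative (the hit representation and the distance-threshold compatibility test are assumptions, not the trigger's actual criteria), but the pair loop alone already does N(N-1)/2 checks, so doubling the hit count roughly quadruples the work:

```python
from itertools import combinations
import math

def pairwise_seed_candidates(hits, max_gap):
    """Naive connect-the-dots seeding: test every pair of hits for
    compatibility. Hits are (x, y) points and the distance threshold
    stands in for a real geometric test; the pair loop is O(N^2)."""
    seeds = []
    for (x1, y1), (x2, y2) in combinations(hits, 2):
        if math.hypot(x2 - x1, y2 - y1) <= max_gap:
            seeds.append(((x1, y1), (x2, y2)))
    return seeds

# N hits always incur N*(N-1)/2 compatibility checks,
# regardless of how many seeds survive the cut.
```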

Simultaneously, deep learning has become a standard technique in computer vision, and we explore the viability of a deep learning solution for track reconstruction. We have explored various DNN implementations applied to the tracking problem, including CNNs, RNNs, LSTMs, and Deep Kalman Filters, with promising preliminary results. Current popular deep learning libraries all rely heavily on Graphics Processing Units (GPUs) to shoulder the bulk of the computation, and they show impressive results with rapidly improving throughput. Unfortunately, this does not carry over to latency-sensitive applications such as our track reconstruction problem, because GPUs cannot guarantee low latency.
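The recurrent approaches above share one idea: treat a track as a sequence of detector-layer hits and predict the next hit from the hits seen so far. The NumPy sketch below is a minimal illustration of that idea with a vanilla RNN cell (the weight shapes, hit encoding, and single-layer structure are our assumptions, not the authors' actual architecture; LSTMs and Deep Kalman Filters refine the same sequence-modelling pattern). Because the per-step work is a handful of small fixed-size matrix multiplies, this is the kind of dataflow that maps naturally onto FPGA pipelines:

```python
import numpy as np

def rnn_track_follower(hits, Wx, Wh, Wo, h0=None):
    """Illustrative track follower (not the authors' model): run a
    vanilla RNN over a sequence of hit coordinates, emitting a
    predicted next-hit position at every detector layer.

    hits : (T, d_in) array of hit coordinates, one row per layer
    Wx   : (d_h, d_in) input weights        Wh : (d_h, d_h) recurrent weights
    Wo   : (d_out, d_h) readout weights     h0 : optional initial state
    """
    h = np.zeros(Wh.shape[0]) if h0 is None else h0
    preds = []
    for x in hits:                       # one detector layer at a time
        h = np.tanh(Wx @ x + Wh @ h)     # recurrent state update
        preds.append(Wo @ h)             # predicted next-hit coordinates
    return np.array(preds)
```

In a real system the weights would be trained (e.g. to minimise the distance between each prediction and the true next hit); here they are left as inputs so the sketch stays self-contained.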

Author Information

Thomas Boser (UCSC/LBNL)

Computer Science student at the University of California, Santa Cruz; summer student at Lawrence Berkeley National Laboratory.

Paolo Calafiura (LBNL)
Ian Johnson (Lawrence Berkeley National Laboratory)