Identifying the top-K frequent items is one of the most common and important operations in large data processing systems. As a result, several solutions have been proposed to solve this problem approximately. In this paper, we identify that in modern distributed settings with both multi-node and multi-core parallelism, existing algorithms, although theoretically sound, are suboptimal from a performance perspective. In particular, for identifying top-K frequent items, Count-Min Sketch (CMS) has excellent update time but lacks the important property of reducibility, which is needed for exploiting the available massive data parallelism. At the other end, the popular Frequent algorithm (FA) produces reducible summaries, but its update costs are significant. In this paper, we present Topkapi, a fast and parallel algorithm for finding top-K frequent items that gives the best of both worlds: it is reducible and also has efficient update time similar to CMS. Topkapi possesses strong theoretical guarantees and leads to significant performance gains, due to increased parallelism, relative to past work.
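To make the contrast in the abstract concrete, below is a minimal, illustrative Python sketch of the two building blocks it refers to: a Count-Min Sketch with constant-time updates, and a Misra-Gries (Frequent algorithm) summary whose per-shard outputs can be merged (reduced). This is not the Topkapi algorithm itself; all class names, parameters, and the merge helper are illustrative assumptions.

```python
# Illustrative sketches of the two building blocks contrasted in the abstract.
# NOT the Topkapi algorithm; names and parameters are chosen for exposition only.
import hashlib
from collections import Counter


class CountMinSketch:
    """Count-Min Sketch: O(depth) updates, but two sketches over different
    data shards do not directly reduce into a top-K summary."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _hash(self, item, row):
        digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._hash(item, row)] += count

    def estimate(self, item):
        return min(self.table[row][self._hash(item, row)]
                   for row in range(self.depth))


def misra_gries(stream, k):
    """Frequent algorithm (Misra-Gries): summaries are reducible,
    but a single update may have to decrement up to k counters."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            for key in list(counters):  # the costly decrement step
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters


def merge_misra_gries(a, b, k):
    """Reduce step: combine two Misra-Gries summaries from different shards."""
    merged = Counter(a) + Counter(b)
    if len(merged) > k:
        # Subtract the (k+1)-th largest count and keep the strictly positive ones.
        cutoff = sorted(merged.values(), reverse=True)[k]
        merged = {x: c - cutoff for x, c in merged.items() if c > cutoff}
    return dict(merged)


if __name__ == "__main__":
    shard1 = ["a"] * 50 + ["b"] * 30 + ["c"] * 5
    shard2 = ["a"] * 40 + ["d"] * 20 + ["c"] * 5

    s1, s2 = misra_gries(shard1, 3), misra_gries(shard2, 3)
    print(merge_misra_gries(s1, s2, 3))  # heavy hitters across both shards

    cms = CountMinSketch()
    for x in shard1 + shard2:
        cms.update(x)
    print(cms.estimate("a"))             # fast point query; may overestimate
```

The usage at the bottom highlights the trade-off: the Misra-Gries summaries from the two shards merge cleanly (reducibility), while the Count-Min Sketch answers point queries quickly but does not by itself yield a mergeable top-K list.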
Author Information
Ankush Mandal (Georgia Institute of Technology)
He Jiang (Rice University)
Anshumali Shrivastava (Rice University)
Vivek Sarkar (Georgia Institute of Technology)
More from the Same Authors
- 2021 Spotlight: Practical Near Neighbor Search via Group Testing »
  Joshua Engels · Benjamin Coleman · Anshumali Shrivastava
- 2021 : PISTACHIO: Patch Importance Sampling To Accelerate CNNs via a Hash Index Optimizer »
  Zhaozhuo Xu · Anshumali Shrivastava
- 2022 : Adaptive Sparse Federated Learning in Large Output Spaces via Hashing »
  Zhaozhuo Xu · Luyang Liu · Zheng Xu · Anshumali Shrivastava
- 2022 Poster: The trade-offs of model size in large recommendation models : 100GB to 10MB Criteo-tb DLRM model »
  Aditya Desai · Anshumali Shrivastava
- 2022 Poster: Retaining Knowledge for Learning with Dynamic Definition »
  Zichang Liu · Benjamin Coleman · Tianyi Zhang · Anshumali Shrivastava
- 2022 Poster: Graph Reordering for Cache-Efficient Near Neighbor Search »
  Benjamin Coleman · Santiago Segarra · Alexander Smola · Anshumali Shrivastava
- 2021 Poster: Breaking the Linear Iteration Cost Barrier for Some Well-known Conditional Gradient Methods Using MaxIP Data-structures »
  Zhaozhuo Xu · Zhao Song · Anshumali Shrivastava
- 2021 Poster: Practical Near Neighbor Search via Group Testing »
  Joshua Engels · Benjamin Coleman · Anshumali Shrivastava
- 2021 Poster: Locality Sensitive Teaching »
  Zhaozhuo Xu · Beidi Chen · Chaojian Li · Weiyang Liu · Le Song · Yingyan Lin · Anshumali Shrivastava
- 2021 Poster: Raw Nav-merge Seismic Data to Subsurface Properties with MLP based Multi-Modal Information Unscrambler »
  Aditya Desai · Zhaozhuo Xu · Menal Gupta · Anu Chandran · Antoine Vial-Aussavy · Anshumali Shrivastava
- 2020 Poster: Adaptive Learned Bloom Filter (Ada-BF): Efficient Utilization of the Classifier with Application to Real-Time Information Filtering on the Web »
  Zhenwei Dai · Anshumali Shrivastava
- 2020 Session: Orals & Spotlights Track 03: Language/Audio Applications »
  Anshumali Shrivastava · Dilek Hakkani-Tur
- 2019 Poster: Fast and Accurate Stochastic Gradient Estimation »
  Beidi Chen · Yingchen Xu · Anshumali Shrivastava
- 2019 Poster: Extreme Classification in Log Memory using Count-Min Sketch: A Case Study of Amazon Search with 50M Products »
  Tharun Kumar Reddy Medini · Qixuan Huang · Yiqiu Wang · Vijai Mohan · Anshumali Shrivastava
- 2016 Poster: Simple and Efficient Weighted Minwise Hashing »
  Anshumali Shrivastava
- 2014 Poster: Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS) »
  Anshumali Shrivastava · Ping Li
- 2014 Oral: Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS) »
  Anshumali Shrivastava · Ping Li
- 2013 Poster: Beyond Pairwise: Provably Fast Algorithms for Approximate $k$-Way Similarity Search »
  Anshumali Shrivastava · Ping Li
- 2011 Poster: Hashing Algorithms for Large-Scale Learning »
  Ping Li · Anshumali Shrivastava · Joshua L Moore · Arnd C König