Poster
Depth is More Powerful than Width with Prediction Concatenation in Deep Forest
Shen-Huan Lyu · Yi-Xiao He · Zhi-Hua Zhou
Random Forest (RF) is an ensemble learning algorithm proposed by Breiman (2001) that independently constructs a large number of randomized decision trees and aggregates their predictions by naive averaging. Zhou and Feng (2019) further propose the Deep Forest (DF) algorithm with multi-layer feature transformation, which significantly outperforms random forest in various application fields. The prediction concatenation (PreConc) operation is crucial for the multi-layer feature transformation in deep forest, yet little is known about its theoretical properties. In this paper, we analyze the influence of PreConc on the consistency of deep forest. In particular, when the individual trees are inconsistent (as in practice, where each tree is often fully grown, i.e., each leaf node contains only one sample), we find that the convergence rate of a two-layer DF w.r.t. the number of trees $M$ can reach $\mathcal{O}(1/M^2)$ under some mild conditions, while the convergence rate of RF is $\mathcal{O}(1/M)$. Therefore, with the help of PreConc, a deeper DF is more powerful than a shallower one. Experiments confirm these theoretical advantages.
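As a rough illustration of PreConc (a minimal sketch, not the authors' implementation), the code below builds a two-layer deep forest from scikit-learn random forests: layer-1 class probabilities, produced out-of-fold as in deep forest, are concatenated to the original features before training the layer-2 forest. The dataset, fold count, and tree counts are arbitrary placeholders.

```python
# Minimal two-layer deep forest sketch showing prediction concatenation
# (PreConc). Variable names and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Layer 1: a forest of M randomized trees. Out-of-fold class probabilities
# approximate the cross-validated predictions used in deep forest, so the
# augmented training features are not contaminated by in-sample fitting.
layer1 = RandomForestClassifier(n_estimators=100, random_state=0)
proba_tr = cross_val_predict(layer1, X_tr, y_tr, cv=3, method="predict_proba")
layer1.fit(X_tr, y_tr)  # refit on all training data for test-time use

# PreConc: concatenate the original features with layer-1 predictions.
X_tr_aug = np.hstack([X_tr, proba_tr])
X_te_aug = np.hstack([X_te, layer1.predict_proba(X_te)])

# Layer 2: another forest trained on the augmented representation.
layer2 = RandomForestClassifier(n_estimators=100, random_state=1)
layer2.fit(X_tr_aug, y_tr)
print("two-layer DF test accuracy:", layer2.score(X_te_aug, y_te))
```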
Author Information
Shen-Huan Lyu (Hohai University)
Yi-Xiao He (Nanjing University)
Zhi-Hua Zhou (Nanjing University)
More from the Same Authors
- 2022 Spotlight: Real-Valued Backpropagation is Unsuitable for Complex-Valued Neural Networks
  Zhi-Hao Tan · Yi Xie · Yuan Jiang · Zhi-Hua Zhou
- 2022 Spotlight: Lightning Talks 3A-2
  shuwen yang · Xu Zhang · Delvin Ce Zhang · Lan-Zhe Guo · Renzhe Xu · Zhuoer Xu · Yao-Xiang Ding · Weihan Li · Xingxuan Zhang · Xi-Zhu Wu · Zhenyuan Yuan · Hady Lauw · Yu Qi · Yi-Ge Zhang · Zhihao Yang · Guanghui Zhu · Dong Li · Changhua Meng · Kun Zhou · Gang Pan · Zhi-Fan Wu · Bo Li · Minghui Zhu · Zhi-Hua Zhou · Yafeng Zhang · Yingxueff Zhang · shiwen cui · Jie-Jing Shao · Zhanguang Zhang · Zhenzhe Ying · Xiaolong Chen · Yu-Feng Li · Guojie Song · Peng Cui · Weiqiang Wang · Ming GU · Jianye Hao · Yihua Huang
- 2022 Spotlight: Pre-Trained Model Reusability Evaluation for Small-Data Transfer Learning
  Yao-Xiang Ding · Xi-Zhu Wu · Kun Zhou · Zhi-Hua Zhou
- 2022 Poster: Adapting to Online Label Shift with Provable Guarantees
  Yong Bai · Yu-Jie Zhang · Masashi Sugiyama · Zhi-Hua Zhou
- 2022 Poster: Theoretically Provable Spiking Neural Networks
  Shao-Qun Zhang · Zhi-Hua Zhou
- 2022 Poster: Pre-Trained Model Reusability Evaluation for Small-Data Transfer Learning
  Yao-Xiang Ding · Xi-Zhu Wu · Kun Zhou · Zhi-Hua Zhou
- 2022 Poster: Sound and Complete Causal Identification with Latent Variables Given Local Background Knowledge
  Tian-Zuo Wang · Tian Qin · Zhi-Hua Zhou
- 2022 Poster: Efficient Methods for Non-stationary Online Learning
  Yan-Feng Xie · Lijun Zhang · Zhi-Hua Zhou
- 2022 Poster: Real-Valued Backpropagation is Unsuitable for Complex-Valued Neural Networks
  Zhi-Hao Tan · Yi Xie · Yuan Jiang · Zhi-Hua Zhou
- 2017 Poster: Improved Dynamic Regret for Non-degenerate Functions
  Lijun Zhang · Tianbao Yang · Jinfeng Yi · Rong Jin · Zhi-Hua Zhou
- 2017 Poster: Learning with Feature Evolvable Streams
  Bo-Jian Hou · Lijun Zhang · Zhi-Hua Zhou
- 2017 Poster: Subset Selection under Noise
  Chao Qian · Jing-Cheng Shi · Yang Yu · Ke Tang · Zhi-Hua Zhou
- 2015 Poster: Subset Selection by Pareto Optimization
  Chao Qian · Yang Yu · Zhi-Hua Zhou