
Distillation of Knowledge in Boosting Models

Journal Pattern Recognition and Image Analysis
ISSN: 1054-6618, E-ISSN: 1555-6212
Output data: Year: 2025, Volume: 35, Number: 3, Pages: 313-318, Pages count: 6, DOI: 10.1134/S1054661825700221
Tags knowledge distillation, machine learning boosting, overfitting
Authors Nedel'ko V.M 1
Affiliations
1 Sobolev Inst Math, Novosibirsk 630090, Russia

Funding (1)

1 Sobolev Institute of Mathematics FWNF-2022-0015

Abstract: The paper explores the possibility of applying the idea of knowledge distillation to the boosting method. The rationale for this approach is that, in many cases, the best forecast quality is achieved by ensembles that use trees of excess depth. In such cases, it may be worthwhile to train an ensemble of shallower trees using the deeper model as a "teacher." This makes it possible, in particular, to assess the real "depth" of the dependencies between variables in a problem, as well as to obtain clearer visualizations of the solutions. The study also provides material for understanding the mechanisms behind the effectiveness of the knowledge distillation procedure.
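The abstract describes distilling a boosting ensemble of deep trees into an ensemble of shallower trees. The paper's exact procedure is not given on this page; the following is only a minimal illustrative sketch of the general idea, assuming a regression setting, scikit-learn's GradientBoostingRegressor, and arbitrarily chosen depths, estimator counts, and synthetic data.

```python
# Hypothetical sketch of teacher-student distillation for boosting (not the paper's exact method).
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic regression data (assumption: any tabular dataset could be used).
X, y = make_friedman1(n_samples=2000, noise=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Teacher": boosting over deep (possibly excessively deep) trees.
teacher = GradientBoostingRegressor(max_depth=6, n_estimators=300, random_state=0)
teacher.fit(X_train, y_train)

# "Student": boosting over shallow trees, trained on the teacher's predictions
# (soft targets) instead of the original labels.
soft_targets = teacher.predict(X_train)
student = GradientBoostingRegressor(max_depth=2, n_estimators=300, random_state=0)
student.fit(X_train, soft_targets)

# Baseline: the same shallow ensemble trained directly on the original labels.
baseline = GradientBoostingRegressor(max_depth=2, n_estimators=300, random_state=0)
baseline.fit(X_train, y_train)

# Compare test errors of teacher, distilled student, and shallow baseline.
for name, model in [("teacher", teacher), ("student", student), ("baseline", baseline)]:
    print(name, mean_squared_error(y_test, model.predict(X_test)))
```

Comparing the student against the shallow baseline gives a rough sense of whether distillation transfers some of the deep teacher's quality to shallower trees, which relates to the paper's question about the real "depth" of the dependencies in a problem.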
Cite: Nedel'ko V.M.
Distillation of Knowledge in Boosting Models
Pattern Recognition and Image Analysis. 2025. V. 35. N 3. P. 313-318. DOI: 10.1134/S1054661825700221
Dates:
Submitted: Mar 25, 2025
Accepted: Apr 9, 2025
Published print: Oct 23, 2025
Published online: Oct 23, 2025
Identifiers:
Web of science: WOS:001597069500018
Scopus: 2-s2.0-105019389334
Elibrary: 83051313
Citing: No citations yet