ISSN: 2582-9793

Ensemble Bayesian Inference: Leveraging Small Language Models to Achieve LLM-level Accuracy in Profile Matching Tasks

Original Research (Published on: 07-Aug-2025)
DOI: https://doi.org/10.54364/AAIML.2025.53232

Haru-Tada Sato, Fuka Matsuzaki and Jun-ichiro Takahashi

Adv. Artif. Intell. Mach. Learn., 5 (3):4154-4173

1. Haru-Tada Sato: Department of Data Science, i's Factory Corporation, Ltd.

2. Fuka Matsuzaki: Department of Data Science, i's Factory Corporation, Ltd.

3. Jun-ichiro Takahashi: Department of Data Science, i's Factory Corporation, Ltd.

Article History: Received on: 25-Apr-2025, Accepted on: 01-Aug-2025, Published on: 07-Aug-2025

Corresponding Author: Haru-Tada Sato

Email: satoh@isfactory.co.jp

Citation: Haru-Tada Sato, et al. Ensemble Bayesian Inference: Leveraging Small Language Models to Achieve LLM-Level Accuracy in Profile Matching Tasks. Advances in Artificial Intelligence and Machine Learning. 2025;5(3):232.


Abstract


This study explores the potential of small language model (SLM) ensembles to achieve accuracy comparable to proprietary large language models (LLMs). We propose Ensemble Bayesian Inference (EBI), a novel approach that applies Bayesian estimation to combine judgments from multiple SLMs, allowing them to exceed the performance limitations of individual models. Our experiments on diverse tasks—aptitude assessments and consumer profile analysis in both Japanese and English—demonstrate EBI's effectiveness. Notably, we analyze cases where incorporating models with negative Lift values into ensembles improves overall performance, and we examine the method's efficacy across different languages. These findings suggest new possibilities for constructing high-performance AI systems with limited computational resources and for effectively utilizing models with individually lower performance. Building on existing research on LLM performance evaluation, ensemble methods, and open-source LLM utilization, we discuss the novelty and significance of our approach.
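
To make the core idea concrete, the sketch below combines binary match/no-match judgments from several SLMs through a naive-Bayes posterior update over the log-odds of a true match. This is a minimal illustration of Bayesian judgment combination, not the paper's exact procedure: the model names, the per-model reliability figures, and the conditional-independence assumption are all hypothetical placeholders.

```python
import math

# Hypothetical per-model reliability estimates (placeholders, not values
# from the paper):
#   p_tp = P(model votes "match" | profiles truly match)
#   p_fp = P(model votes "match" | profiles do not match)
MODELS = [
    {"name": "slm_a", "p_tp": 0.82, "p_fp": 0.25},
    {"name": "slm_b", "p_tp": 0.74, "p_fp": 0.30},
    {"name": "slm_c", "p_tp": 0.40, "p_fp": 0.55},  # anti-correlated: negative Lift
]

def ebi_posterior(votes: dict[str, bool], prior: float = 0.5) -> float:
    """Combine binary match judgments via a naive-Bayes log-odds update.

    votes maps model name -> True ("match") / False ("no match").
    Returns P(true match | all votes) assuming the models' judgments are
    conditionally independent given the true label.
    """
    log_odds = math.log(prior / (1.0 - prior))
    for m in MODELS:
        vote = votes[m["name"]]
        # Likelihood of this model's vote under each hypothesis.
        p_if_match = m["p_tp"] if vote else 1.0 - m["p_tp"]
        p_if_nonmatch = m["p_fp"] if vote else 1.0 - m["p_fp"]
        log_odds += math.log(p_if_match) - math.log(p_if_nonmatch)
    return 1.0 / (1.0 + math.exp(-log_odds))

if __name__ == "__main__":
    votes = {"slm_a": True, "slm_b": True, "slm_c": False}
    print(f"P(match | votes) = {ebi_posterior(votes):.3f}")
```

Note how this weighting handles the weak member: because slm_c's "match" votes are anti-correlated with the truth, the likelihood ratio for its votes simply flips below or above one, so its judgments still carry usable evidence rather than pure noise. That is one plausible reading of the abstract's observation that incorporating models with negative Lift values can improve, rather than degrade, ensemble performance.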
