Can We Predict Innovation? Narrow Experts versus Competent Generalists

Amir Hazem, Motohashi Kazuyuki


Abstract
In this paper, we investigate the role of large language models in predicting innovation. We contrast two main paradigms: i) narrow experts, i.e., supervised and semi-supervised models trained or fine-tuned on a specific task, and ii) competent generalists, i.e., large language models used with zero-shot and few-shot learning. We define the task of innovation modeling and present the first attempt to understand the transformation from research to innovation. We focus on product innovation, which can be defined as the process of transforming a technology into a product or service and bringing it to market. Our extensive empirical evaluation shows that most existing pretrained models are not well suited to the innovation modeling task and perform poorly on it. We also show that injecting research information helps improve the alignment from technology to the market. Finally, we propose a new methodology and fine-tuning strategies that achieve significant performance gains over the baselines.
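To make the two paradigms concrete, here is a minimal sketch (not the authors' code) of how each might score the same technology description for market potential. It assumes a binary "commercialized product" vs. "research only" framing; the off-the-shelf NLI checkpoint stands in for a competent generalist, and "path/to/innovation-classifier" is a hypothetical fine-tuned checkpoint of the kind the paper's fine-tuning strategies would produce.

```python
# Sketch only: contrasting a competent generalist with a narrow expert
# on a toy innovation-prediction query.
from transformers import pipeline

abstract = (
    "A lithium-ion battery electrode material with improved cycle life, "
    "described in patent filings and a related research publication."
)

# Competent generalist: an off-the-shelf NLI model applied zero-shot,
# with candidate labels standing in for the innovation outcome.
generalist = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
print(generalist(abstract,
                 candidate_labels=["commercialized product", "research only"]))

# Narrow expert: a classifier fine-tuned on the innovation modeling task.
# "path/to/innovation-classifier" is a hypothetical checkpoint, not a
# released model from the paper.
expert = pipeline("text-classification", model="path/to/innovation-classifier")
print(expert(abstract))
```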
Anthology ID:
2025.ranlp-1.50
Volume:
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Galia Angelova, Maria Kunilovskaya, Marie Escribe, Ruslan Mitkov
Venue:
RANLP
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
413–422
URL:
https://aclanthology.org/2025.ranlp-1.50/
Cite (ACL):
Amir Hazem and Motohashi Kazuyuki. 2025. Can We Predict Innovation? Narrow Experts versus Competent Generalists. In Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era, pages 413–422, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Can We Predict Innovation? Narrow Experts versus Competent Generalists (Hazem & Kazuyuki, RANLP 2025)
PDF:
https://aclanthology.org/2025.ranlp-1.50.pdf