
PEOPLE@HES-SO
Directory and Skills inventory

Sternfeld Alexander

HES Scientific Collaborator

Main skills

  • Large Language Models
  • Machine Learning
  • Big Data Analytics
  • Python
  • Teaching
  • Data Science
  • Cybersecurity

Main contract

HES Scientific Collaborator

Office: FOY

HES-SO Valais-Wallis - Haute Ecole de Gestion
Route de la Plaine 2, Case postale 80, 3960 Sierre, CH
HEG - VS

Publications

2025

PromptSight: Forecasting Emerging Technologies via Iterative Self-Prompting in Large Language Models.
Scientific paper

Alexander Sternfeld, Andrei Kucharavy, Dimitri Percia David, Alain Mermoud, Julian Jang-Jaccard

EEKE, 2025

Summary:

Forecasting emerging technologies is essential for guiding innovation and policy, yet traditional methods often struggle to keep up with the fast pace of technological change. Recent advances in machine learning (ML) and large language models (LLMs) open new possibilities for technology forecasting by accelerating the review and summarization of technical expertise. However, effective prompting strategies for realizing these benefits remain largely underexplored. In this paper, we introduce PromptSight, a novel agentic self-prompting framework that enables LLMs to autonomously generate and refine prompts over multiple iterations, improving forecasting accuracy and granularity. Our results demonstrate that the technologies predicted through our framework are more specific than those generated directly from an initial prompt. Additionally, we show that iterative prompting yields forecasts that are more structured, coherent, and comprehensive than baseline methods.
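
The abstract describes the loop but not the code; below is a minimal sketch of the iterative self-prompting idea, assuming a hypothetical complete() wrapper around any chat-completion API. The prompt wording and iteration count are illustrative, not taken from the paper.

```python
# Minimal sketch of an iterative self-prompting loop in the spirit of the
# PromptSight framework described above. `complete` is a hypothetical
# placeholder for a single chat-completion call; wire it to an LLM provider.

def complete(prompt: str) -> str:
    """Placeholder for one LLM chat-completion call."""
    raise NotImplementedError("connect this to an LLM API")

def iterative_forecast(topic: str, iterations: int = 3) -> str:
    prompt = f"List emerging technologies in {topic} likely to mature within five years."
    forecast = complete(prompt)
    for _ in range(iterations):
        # Ask the model to refine its own prompt, then regenerate the
        # forecast with the sharper prompt -- the core self-prompting step.
        prompt = complete(
            "Rewrite the following prompt so that the resulting forecast "
            f"is more specific, structured, and comprehensive:\n{prompt}"
        )
        forecast = complete(prompt)
    return forecast
```

The loop makes the claimed trade-off concrete: each iteration spends extra LLM calls on refining the prompt itself, which is where the reported gains in specificity and structure would come from.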

2024

LLM-Resilient Bibliometrics: Factual Consistency Through Entity Triplet Extraction.
Scientific paper

Alexander Sternfeld, Dimitri Percia David, Andrei Kucharavy, Alain Mermoud, Julian Jang-Jaccard

EEKE, 2024

Summary:

The increase in the power and availability of Large Language Models (LLMs) since late 2022 has raised concerns about their use to automate academic paper mills. This, in turn, threatens bibliometrics-based technology monitoring and forecasting in rapidly moving fields. We propose to address this issue by leveraging semantic entity triplets. Specifically, we extract factual statements from scientific papers and represent them as (subject, predicate, object) triplets before validating the factual consistency of statements within and between papers. This approach heavily penalizes the blind use of stochastic text generators such as LLMs while not penalizing authors who used LLMs solely to improve the readability of their paper. Here, we present a pipeline to extract and compare such triplets. While our pipeline is promising and sensitive enough to detect inconsistencies between papers from different domains, intra-paper entity reference resolution needs to be improved so that triplets become more specific. We believe our pipeline will be useful to the broader research community working on the factual consistency of scientific texts.
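
As a rough illustration of the inter-paper consistency check described above (the extraction step itself is omitted), here is a minimal sketch; the data structures and the exact-match rule are assumptions, not the paper's implementation.

```python
# Sketch of a coarse inter-paper consistency check over extracted
# (subject, predicate, object) triplets, assuming extraction happens
# upstream. Two papers asserting different objects for the same
# (subject, predicate) pair are flagged as potentially inconsistent.

from collections import defaultdict

Triplet = tuple[str, str, str]  # (subject, predicate, object)

def find_conflicts(papers: dict[str, list[Triplet]]):
    """Return (paper_a, paper_b, triplet_a, triplet_b) tuples for
    conflicting claims made by different papers."""
    by_key = defaultdict(list)
    for paper_id, triplets in papers.items():
        for s, p, o in triplets:
            by_key[(s.lower(), p.lower())].append((paper_id, (s, p, o)))

    conflicts = []
    for claims in by_key.values():
        for i, (pid_a, t_a) in enumerate(claims):
            for pid_b, t_b in claims[i + 1:]:
                if pid_a != pid_b and t_a[2].lower() != t_b[2].lower():
                    conflicts.append((pid_a, pid_b, t_a, t_b))
    return conflicts

# Example: the two papers disagree on the object of the same claim.
papers = {
    "paper_A": [("BERT", "was introduced in", "2018")],
    "paper_B": [("BERT", "was introduced in", "2020")],
}
print(find_conflicts(papers))
```

Exact string matching on (subject, predicate) is a deliberate simplification; the abstract's point about improving intra-paper entity reference resolution is precisely about making such keys more robust.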

Extracting Semantic Entity Triplets by Leveraging LLMs.
Scientific paper

Alexander Sternfeld, Andrei Kucharavy, Dimitri Percia David, Julian Jang-Jaccard, Alain Mermoud

GTM, 2024

Summary:

As Large Language Models (LLMs) become increasingly powerful and accessible, concerns are growing about the automatic generation of academic papers. Several instances of undeniable LLM usage in reputable journals have been reported, and likely many more articles written partially or entirely by LLMs remain undetected, posing a threat to the veracity of academic journals. The current consensus among researchers is that detecting LLM-generated text in a general setting is ineffective or easy to evade. We therefore explore an alternative approach that targets the stochastic nature of LLMs by extracting semantic entity triplets. Such triplets can be used to assess a text's factual accuracy and to filter the publication corpus accordingly. However, such extraction is far from trivial, and prior work has reported poor suitability of both LLMs and embedding-based methods. Here, we show that these issues can be alleviated by few-shot prompting on recent LLMs, notably Meta-Llama-3-8B-Instruct. We show that the extracted triplets are more specific and that hallucinations are undetectable in our setting.
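
A minimal sketch of the few-shot extraction step with Hugging Face transformers, using the Meta-Llama-3-8B-Instruct model named above. The few-shot examples, prompt format, and decoding settings are assumptions, and the model is gated on Hugging Face and needs substantial GPU memory.

```python
# Few-shot triplet extraction with an instruction-tuned LLM, as described
# above. The in-context examples and output format are illustrative.

from transformers import pipeline

FEW_SHOT = (
    "Extract (subject, predicate, object) triplets from the sentence.\n"
    "Sentence: BERT improves accuracy on GLUE benchmarks.\n"
    "Triplets: (BERT, improves, accuracy on GLUE benchmarks)\n"
    "Sentence: Transformers rely on self-attention.\n"
    "Triplets: (Transformers, rely on, self-attention)\n"
)

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
)

def extract_triplets(sentence: str) -> str:
    prompt = FEW_SHOT + f"Sentence: {sentence}\nTriplets:"
    # Greedy decoding keeps the output deterministic for a fixed model.
    out = generator(prompt, max_new_tokens=64, do_sample=False)
    return out[0]["generated_text"][len(prompt):].strip()

print(extract_triplets("GPT-4 outperforms GPT-3.5 on reasoning tasks."))
```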

2023

Document Knowledge Transfer for Aspect-Based Sentiment Classification Using a Left-Center-Right Separated Neural Network with Rotatory Attention.
Scientific paper

Emily Fields, Robbert Rog, Alexander Sternfeld, Gonem Lau

NLDB, 2023

Summary:

Hybrid Aspect-Based Sentiment Classification (ABSC) methods rely on domain-specific, costly ontologies to compensate for the scarcity of aspect-level training data. This paper proposes two forms of transfer learning to exploit the plentiful document-level data available for sentiment classification. Specifically, two forms of document knowledge transfer, pretraining (PRET) and multi-task learning (MULT), are considered in various combinations to extend the state-of-the-art LCR-Rot-hop++ model. For both the SemEval 2015 and 2016 datasets, we find an improvement over the LCR-Rot-hop++ neural model. Overall, the pure MULT model performs well across both datasets. Additionally, there is an optimal amount of document knowledge that can be injected, after which performance deteriorates due to the extra focus on the auxiliary task. We observe that with transfer learning and L1 and L2 loss regularisation, the LCR-Rot-hop++ model outperforms the HAABSA++ hybrid model on the (larger) SemEval 2016 dataset. We conclude that transfer learning is a feasible and computationally cheap substitute for the ontology step of hybrid ABSC models.
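
The MULT idea lends itself to a compact illustration: a shared encoder feeds both an aspect-level head (the main task) and a document-level head (the auxiliary task), with a mixing weight controlling how much document knowledge is injected. The sketch below is a drastic simplification of LCR-Rot-hop++, which actually uses rotatory attention over left, center, and right contexts; dimensions and the weight are assumptions.

```python
# Simplified multi-task (MULT) setup: shared encoder, two sentiment heads,
# weighted joint loss. This is NOT the LCR-Rot-hop++ architecture, only a
# minimal stand-in showing how document knowledge is injected.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSentiment(nn.Module):
    def __init__(self, emb_dim: int = 300, hidden: int = 300, classes: int = 3):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.aspect_head = nn.Linear(2 * hidden, classes)    # main task
        self.document_head = nn.Linear(2 * hidden, classes)  # auxiliary task

    def forward(self, embeddings: torch.Tensor):
        encoded, _ = self.encoder(embeddings)
        pooled = encoded.mean(dim=1)  # mean-pool over the token dimension
        return self.aspect_head(pooled), self.document_head(pooled)

def multitask_loss(aspect_logits, aspect_y, doc_logits, doc_y,
                   weight: float = 0.3):
    # A larger weight injects more document knowledge; past some optimum the
    # auxiliary task dominates, matching the degradation reported above.
    return (F.cross_entropy(aspect_logits, aspect_y)
            + weight * F.cross_entropy(doc_logits, doc_y))
```

PRET, by contrast, would pretrain the shared encoder on the document-level task alone and then fine-tune on aspect-level data, rather than optimizing both losses jointly.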
