PEOPLE@HES-SO – Directory and Skills inventory

Chanel Guillaume

HES Associate Professor

Main skills

Artificial Intelligence (AI)

Machine Learning

Signal processing

Wearables & Computer Vision

Human-Machine Interaction

Human factors

Social and affective computing


Main contract

HES Associate Professor

Office: I308

Haute école du paysage, d'ingénierie et d'architecture de Genève
Rue de la Prairie 4, 1202 Genève, CH
hepia
Faculty
Technique et IT
Main Degree Programme
Informatique et systèmes de communication

Guillaume Chanel holds a Ph.D. in Computer Science from the University of Geneva (2009), where he worked on machine learning for the automatic assessment of emotions based on EEG and peripheral signals. From 2009 to 2010 he was at the KML (Knowledge Media Laboratory), Aalto University, Helsinki, Finland, studying the physiological correlates of social processes taking place between players during video gaming. Now head of the SIMS group and a senior lecturer jointly affiliated with the Swiss Center for Affective Sciences and the Computer Science Department of the University of Geneva, he investigates how machines can learn to behave in a social and affective environment. He is particularly interested in the use of multimodal and physiological measures for entertainment and for improving human-machine and remote human interactions. Examples of his research include: dynamic adjustment of game mechanics based on players' emotions, inclusion of physiological emotional cues in mediated social interactions, movie highlight detection based on spectators' social reactions, and adaptation of human social behaviors through machines.


Teaching

BSc en Informatique et systèmes de communication - Haute école du paysage, d'ingénierie et d'architecture de Genève
  • Algorithmie avancée
  • Programmation système
  • Systèmes d'exploitation
  • Systèmes distribués

Publications

2025

Developing an AI-powered wound assessment tool: a methodological approach to data collection and model optimization
Scientific paper ArODES

Alessio Stefanelli, Sofia Zahia, Guillaume Chanel, Rania Niri, Swann Pichon, Sebastian Probst

BMC Medical Informatics and Decision Making, 2025, 25, 297

Link to the publication

Summary:

Background: Chronic wounds (CWs) represent a significant and growing challenge in healthcare due to their prolonged healing times, complex management, and associated costs. Inadequate wound assessment by healthcare professionals (HCPs), often due to limited training and high clinical workload, contributes to suboptimal treatment and an increased risk of complications. This study aimed to develop an artificial intelligence (AI)-powered wound assessment tool, integrated into a mobile application, to support HCPs in diagnosis, monitoring, and clinical decision-making. Methods: A multicenter observational study was conducted across three healthcare institutions in Western Switzerland. Researchers compiled a hybrid dataset of approximately 4,000 wound images through both retrospective extraction from clinical records and prospective collection using a standardized mobile application. The prospective data included high-resolution images, short videos, and 3D scans, along with structured clinical metadata. Retrospective data were anonymized and manually annotated by wound care experts. All images were labeled for wound segmentation and tissue classification to train and validate deep learning models. Results: The resulting dataset represented a broad spectrum of wound types (acute and chronic), anatomical locations, skin tones, and healing stages. The AI-based wound segmentation model, developed using the DeepLabv3+ architecture with a ResNet50 backbone, achieved a Dice score of 92% and an Intersection-over-Union (IoU) score of 85%. Tissue classification yielded a preliminary mean Dice score of 78%, although accuracy varied across tissue types, especially fibrin and necrosis. The models were optimized for mobile implementation through quantization, achieving real-time inference with an average processing time of 0.3 seconds and only a 0.3% performance reduction. The dual approach to data collection, prospective and retrospective, ensured both image standardization and real-world variability, enhancing the model's generalizability. Conclusions: This study laid the foundation for an AI-driven digital tool to assist clinical wound assessment and education. The integration of robust datasets and AI models demonstrated the potential to improve diagnostic precision, support personalized care, and reduce wound-related healthcare costs. Although challenges remain, particularly in tissue classification, this work highlights the promise of AI in transforming wound care and advancing clinical training.
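The Dice and IoU scores reported in this abstract compare a predicted segmentation mask against an expert annotation. A minimal NumPy sketch of both metrics for binary masks, with illustrative toy masks (not data from the study):

```python
import numpy as np

def dice_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Compute the Dice score and IoU between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

# Toy example: two 8-pixel masks on a 4x4 grid, overlapping in 4 pixels
pred = np.zeros((4, 4), dtype=int)
target = np.zeros((4, 4), dtype=int)
pred[:2, :] = 1      # rows 0-1
target[1:3, :] = 1   # rows 1-2
d, i = dice_iou(pred, target)
print(round(d, 3), round(i, 3))  # Dice = 2*4/16 = 0.5, IoU = 4/12 ≈ 0.333
```

Note that Dice weights the overlap twice relative to the mask sizes, so it is always at least as large as IoU, which is why the paper's 92% Dice pairs with an 85% IoU.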

Wound segmentation with U-Net using a dual attention mechanism and transfer learning
Scientific paper ArODES

Rania Niri, Sofia Zahia, Alessio Stefanelli, Kaushal Sharma, Sebastian Probst, Swann Pichon, Guillaume Chanel

Journal of Imaging Informatics in Medicine, 2025, 38, 3351–3365

Link to the publication

Summary:

Accurate wound segmentation is crucial for the precise diagnosis and treatment of various skin conditions through image analysis. In this paper, we introduce a novel dual attention U-Net model designed for precise wound segmentation. Our proposed architecture integrates two widely used deep learning models, VGG16 and U-Net, incorporating dual attention mechanisms to focus on relevant regions within the wound area. We initially trained the model on diabetic foot ulcer images, fine-tuned it on acute and chronic wound images, and conducted a comprehensive comparison with other state-of-the-art models. The results highlight the superior performance of our proposed dual attention model, which achieves a Dice coefficient of 94.1% and an IoU of 89.3% on the test set. This underscores the robustness of our method and its capacity to generalize effectively to new data.
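An attention mechanism in a U-Net skip connection can be illustrated with an additive attention gate of the kind popularized by attention U-Nets: skip features are reweighted by a per-pixel map computed from the skip and a gating signal. The NumPy sketch below uses random weights, same-resolution gating, and hypothetical shapes; it is a simplified illustration of the general technique, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate for a U-Net skip connection.
    x: skip features (C, H, W); g: gating features (C, H, W).
    Returns x scaled by a per-pixel attention map in (0, 1)."""
    # 1x1 "convolutions" expressed as channel-mixing matrix products
    q = np.einsum('ij,jhw->ihw', W_x, x) + np.einsum('ij,jhw->ihw', W_g, g)
    q = np.maximum(q, 0.0)                            # ReLU
    alpha = sigmoid(np.einsum('j,jhw->hw', psi, q))   # attention map (H, W)
    return x * alpha                                  # gate the skip features

# Hypothetical sizes: 8 skip channels, 16 intermediate channels, 4x4 maps
C, F, H, W = 8, 16, 4, 4
x = rng.standard_normal((C, H, W))
g = rng.standard_normal((C, H, W))
out = attention_gate(x, g,
                     W_x=rng.standard_normal((F, C)),
                     W_g=rng.standard_normal((F, C)),
                     psi=rng.standard_normal(F))
print(out.shape)  # (8, 4, 4), same shape as the skip input
```

Because the attention map lies strictly between 0 and 1, the gate can only attenuate skip features, steering the decoder toward the regions the gating signal marks as relevant.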

Conferences

2025

Noise detection in electrodermal activity using attention unet for wearable devices
Conference ArODES

Damian Spycher, Kaushal Sharma, Guillaume Chanel

AI days HES-SO '25

Link to the conference

Summary:

This paper proposes an attention U-Net model to detect noise in electrodermal activity (EDA). Three databases containing EDA signals collected from 78 participants, together with sample-based expert annotations, are used for training and performance evaluation. The results demonstrate that adding an attention mechanism in the skip connections of the U-Net improves performance. In addition, the proposed attentional model outperformed the state of the art, achieving a kappa score of 56% and demonstrating that noise can be detected at the sample level.
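The kappa score used here is Cohen's kappa, which measures sample-level agreement between the model and the expert annotations while correcting for chance agreement. A minimal self-contained implementation on toy binary labels (0 = clean, 1 = noise; the labels are illustrative, not from the study):

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa for binary sample-level labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    po = np.mean(y_true == y_pred)                  # observed agreement
    # Expected chance agreement from the marginal label frequencies
    p1t, p1p = np.mean(y_true == 1), np.mean(y_pred == 1)
    pe = p1t * p1p + (1 - p1t) * (1 - p1p)
    return (po - pe) / (1 - pe)

# Toy annotations over 10 samples: 8/10 agreement, balanced labels
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 1])
y_pred = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1])
print(round(cohen_kappa(y_true, y_pred), 2))  # (0.8 - 0.5) / (1 - 0.5) = 0.6
```

Unlike raw accuracy, kappa stays near zero for a model that guesses at the marginal rates, which makes it a stricter measure when one class (here, clean samples) dominates.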

PPG denoising using maximum-mean discrepancy based variational autoencoder with data from multiple datasets
Conference ArODES

Kaushal Sharma, Damian Spycher, Guillaume Chanel

AI days HES-SO '25

Link to the conference

Summary:

In this study, we implemented a maximum-mean discrepancy based variational autoencoder (MMD-VAE) for the denoising of photoplethysmogram (PPG) signals, using data from multiple datasets. We applied random masking to generate noisy counterparts for clean 10-second segments. We report evaluation results on PPG-DaLiA and WESAD. Using only PPG data, our approach outperforms existing methods on WESAD, and achieves performance similar to the state-of-the-art on PPG-DaLiA. The results highlight the importance of leveraging multiple datasets for effective model training. Overall, the findings validate the suitability of the MMD-VAE for PPG denoising.
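The MMD term in an MMD-VAE replaces the usual KL divergence: it penalizes the discrepancy between the encoder's latent codes and samples from the prior. A NumPy sketch of the (biased) RBF-kernel MMD estimator on toy Gaussian data; the bandwidth, sample sizes, and latent dimension are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared maximum-mean discrepancy with an RBF kernel (biased estimator).
    x: (n, d) samples, e.g. latent codes q(z); y: (m, d) prior samples p(z)."""
    def kernel(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

rng = np.random.default_rng(0)
prior = rng.standard_normal((256, 8))          # N(0, I) prior samples
matched = rng.standard_normal((256, 8))        # latents that match the prior
shifted = rng.standard_normal((256, 8)) + 3.0  # latents far from the prior
print(rbf_mmd2(matched, prior) < rbf_mmd2(shifted, prior))  # True
```

Minimizing this term during training pushes the aggregate latent distribution toward the prior without constraining each individual code, which is the design choice that distinguishes the MMD-VAE (InfoVAE family) from a standard VAE.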
