
PEOPLE@HES-SO
Directory of staff and competencies

Chanel Guillaume

Associate HES Professor

Main competencies

  • Artificial Intelligence (AI)
  • Machine Learning
  • Signal processing
  • Wearables & Computer Vision
  • Human-Machine Interaction
  • Human factors
  • Social and affective computing

  • Contact
  • Teaching
  • Publications
  • Conferences

Main contract

Associate HES Professor

Office: I308

Haute école du paysage, d'ingénierie et d'architecture de Genève
Rue de la Prairie 4, 1202 Genève, CH
hepia
Field
Technique et IT
Main study programme
Informatique et systèmes de communication
BSc en Informatique et systèmes de communication - Haute école du paysage, d'ingénierie et d'architecture de Genève
  • Algorithmie avancée
  • Programmation système
  • Systèmes d'exploitation
  • Systèmes distribués

2025

Developing an AI-powered wound assessment tool: a methodological approach to data collection and model optimization
Scientific article ArODES

Alessio Stefanelli, Sofia Zahia, Guillaume Chanel, Rania Niri, Swann Pichon, Sebastian Probst

BMC Medical Informatics and Decision Making, 2025, 25, 297

Link to publication

Abstract:

Background: Chronic wounds (CWs) represent a significant and growing challenge in healthcare due to their prolonged healing times, complex management, and associated costs. Inadequate wound assessment by healthcare professionals (HCPs), often due to limited training and high clinical workload, contributes to suboptimal treatment and increased risk of complications. This study aimed to develop an artificial intelligence (AI)-powered wound assessment tool, integrated into a mobile application, to support HCPs in diagnosis, monitoring, and clinical decision-making.

Methods: A multicenter observational study was conducted across three healthcare institutions in Western Switzerland. Researchers compiled a hybrid dataset of approximately 4,000 wound images through both retrospective extraction from clinical records and prospective collection using a standardized mobile application. The prospective data included high-resolution images, short videos, and 3D scans, along with structured clinical metadata. Retrospective data were anonymized and manually annotated by wound care experts. All images were labeled for wound segmentation and tissue classification to train and validate deep learning models.

Results: The resulting dataset represented a broad spectrum of wound types (acute and chronic), anatomical locations, skin tones, and healing stages. The AI-based wound segmentation model, developed using the DeepLabv3+ architecture with a ResNet50 backbone, achieved a Dice score of 92% and an Intersection-over-Union (IoU) score of 85%. Tissue classification yielded a preliminary mean Dice score of 78%, although accuracy varied across tissue types, especially fibrin and necrosis. The models were optimized for mobile implementation through quantization, achieving real-time inference with an average processing time of 0.3 seconds and only a 0.3% performance reduction. The dual approach to data collection (prospective and retrospective) ensured both image standardization and real-world variability, enhancing the model's generalizability.

Conclusions: This study laid the foundation for an AI-driven digital tool to assist clinical wound assessment and education. The integration of robust datasets and AI models demonstrated the potential to improve diagnostic precision, support personalized care, and reduce wound-related healthcare costs. Although challenges remained, particularly in tissue classification, this work highlighted the promise of AI in transforming wound care and advancing clinical training.
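
To make the segmentation setup above concrete, here is a minimal sketch, not the authors' code: it assumes the third-party segmentation_models_pytorch package to build a DeepLabv3+ model with a ResNet50 encoder and computes Dice and IoU on a binary wound mask. Data loading, training, and the quantization step mentioned in the abstract are omitted.

```python
# Illustrative sketch only (not the study's code): DeepLabv3+ with a ResNet50
# encoder for binary wound segmentation, plus Dice and IoU computed on a
# predicted mask. Assumes the segmentation_models_pytorch package is installed.
import torch
import segmentation_models_pytorch as smp

model = smp.DeepLabV3Plus(
    encoder_name="resnet50",      # backbone reported in the abstract
    encoder_weights="imagenet",   # transfer learning from ImageNet
    in_channels=3,
    classes=1,                    # single foreground class: wound
)

def dice_iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """Dice and IoU for binary masks of shape [B, 1, H, W] with values in {0, 1}."""
    pred, target = pred.float(), target.float()
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice.item(), iou.item()

# Toy forward pass: one 256x256 RGB image and a dummy ground-truth mask.
image = torch.rand(1, 3, 256, 256)
gt_mask = (torch.rand(1, 1, 256, 256) > 0.5).long()
with torch.no_grad():
    logits = model(image)                           # [1, 1, 256, 256]
    pred_mask = (torch.sigmoid(logits) > 0.5).long()
print(dice_iou(pred_mask, gt_mask))
```

The mobile optimization reported above (quantization, roughly 0.3 s inference) would be a separate export step and is not reproduced here.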

Wound segmentation with U-Net using a dual attention mechanism and transfer learning
Scientific article ArODES

Rania Niri, Sofia Zahia, Alessio Stefanelli, Kaushal Sharma, Sebastian Probst, Swann Pichon, Guillaume Chanel

Journal of Imaging Informatics in Medicine, 2025, 38, 3351–3365

Link to publication

Abstract:

Accurate wound segmentation is crucial for the precise diagnosis and treatment of various skin conditions through image analysis. In this paper, we introduce a novel dual attention U-Net model designed for precise wound segmentation. Our proposed architecture integrates two widely used deep learning models, VGG16 and U-Net, incorporating dual attention mechanisms to focus on relevant regions within the wound area. The model was initially trained on diabetic foot ulcer images, then fine-tuned on acute and chronic wound images, and we conducted a comprehensive comparison with other state-of-the-art models. The results highlight the superior performance of our proposed dual attention model, achieving a Dice coefficient and IoU of 94.1% and 89.3%, respectively, on the test set. This underscores the robustness of our method and its capacity to generalize effectively to new data.
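
The abstract does not spell out the dual attention design, so the sketch below is only one plausible illustration: a channel-plus-spatial attention block applied to a VGG16 skip feature before it would be merged into a U-Net decoder. The module names and the choice of the conv3_3 stage as the skip source are assumptions made for the example, not details from the paper.

```python
# Illustrative sketch only: one possible "dual attention" block (channel
# attention followed by spatial attention) applied to a VGG16 skip feature.
# The paper's actual architecture may differ; a full U-Net decoder is omitted.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class DualAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze (global average pool) and excite (small MLP).
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: 1-channel map from mean- and max-pooled features.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                           # re-weight channels
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True).values], dim=1
        )
        return x * self.spatial_conv(pooled)                  # re-weight locations

# VGG16 encoder (transfer learning): use the stages up to conv3_3 as an example
# skip-connection source.
encoder = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16]
attn = DualAttention(channels=256)

x = torch.rand(1, 3, 224, 224)
skip = encoder(x)            # [1, 256, 56, 56] feature map
skip = attn(skip)            # attended skip feature, same shape
print(skip.shape)
```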

2025

Noise detection in electrodermal activity using attention unet for wearable devices
Conference ArODES

Damian Spycher, Kaushal Sharma, Guillaume Chanel

AI days HES-SO '25

Link to conference

Abstract:

This paper proposes an attention Unet model to detect noise in electrodermal activity (EDA). Three databases containing EDA signals collected from 78 participants, together with sample-based expert annotations, are used for training and performance evaluation. The results demonstrate that adding an attentional mechanism in the skip connections of the Unet improves performance. In addition, the proposed attentional model surpassed the state of the art with a kappa score of 56%, demonstrating the feasibility of detecting noise at the sample level.
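
As a rough illustration of the two ideas in this abstract, the sketch below shows an additive attention gate (Attention U-Net style) on a 1-D skip connection and a sample-level Cohen's kappa computed with scikit-learn. Channel sizes, the toy segment length, and the assumption that the gating signal has the same temporal resolution are arbitrary choices for the example; the authors' exact model is not reproduced.

```python
# Illustrative sketch only: an additive attention gate applied to a 1-D skip
# connection, plus sample-level evaluation with Cohen's kappa.
import torch
import torch.nn as nn
from sklearn.metrics import cohen_kappa_score

class AttentionGate1d(nn.Module):
    """Gates an encoder skip feature x using a decoder gating signal g."""
    def __init__(self, x_ch: int, g_ch: int, inter_ch: int):
        super().__init__()
        self.theta_x = nn.Conv1d(x_ch, inter_ch, kernel_size=1)
        self.phi_g = nn.Conv1d(g_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv1d(inter_ch, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x and g are assumed to have the same temporal length in this sketch.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g))))
        return x * attn   # suppress time steps the gate considers irrelevant

# Toy example: gate a skip feature for a short EDA segment of 64 samples.
x = torch.rand(1, 32, 64)   # encoder skip feature: [batch, channels, time]
g = torch.rand(1, 64, 64)   # decoder gating feature at the same resolution
gated = AttentionGate1d(x_ch=32, g_ch=64, inter_ch=16)(x, g)

# Sample-level evaluation: one binary noise label per EDA sample.
y_true = [0, 0, 1, 1, 1, 0, 0, 1]
y_pred = [0, 0, 1, 0, 1, 0, 1, 1]
print(cohen_kappa_score(y_true, y_pred))
```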

PPG denoising using maximum-mean discrepancy based variational autoencoder with data from multiple datasets
Conference ArODES

Kaushal Sharma, Damian Spycher, Guillaume Chanel

AI days HES-SO '25

Link to conference

Abstract:

In this study, we implemented a maximum-mean discrepancy based variational autoencoder (MMD-VAE) for the denoising of photoplethysmogram (PPG) signals, using data from multiple datasets. We applied random masking to generate noisy counterparts for clean 10-second segments. We report evaluation results on PPG-DaLiA and WESAD. Using only PPG data, our approach outperforms existing methods on WESAD, and achieves performance similar to the state-of-the-art on PPG-DaLiA. The results highlight the importance of leveraging multiple datasets for effective model training. Overall, the findings validate the suitability of the MMD-VAE for PPG denoising.
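
A minimal sketch of the two ingredients named here, random masking of clean segments and an MMD penalty between encoded latents and prior samples, is given below. The segment length (assumed to be 10 s at 64 Hz), the tiny fully connected encoder/decoder, and the Gaussian kernel bandwidth are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch only: random masking of clean PPG segments and a
# maximum-mean-discrepancy (MMD) penalty between encoded latents and prior
# samples. The encoder/decoder are deliberately tiny placeholders.
import torch
import torch.nn as nn

SEG_LEN = 640          # e.g. 10 s of PPG at an assumed 64 Hz sampling rate
LATENT_DIM = 32

encoder = nn.Sequential(nn.Linear(SEG_LEN, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, SEG_LEN))

def random_mask(clean: torch.Tensor, n_chunks: int = 4, chunk_len: int = 40) -> torch.Tensor:
    """Zero out a few random chunks to create a noisy counterpart of a clean segment."""
    noisy = clean.clone()
    for row in noisy:
        for _ in range(n_chunks):
            start = torch.randint(0, SEG_LEN - chunk_len, (1,)).item()
            row[start:start + chunk_len] = 0.0
    return noisy

def gaussian_kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    d2 = (a.unsqueeze(1) - b.unsqueeze(0)).pow(2).sum(-1)   # pairwise squared distances
    return torch.exp(-d2 / a.shape[1])

def mmd(z: torch.Tensor, z_prior: torch.Tensor) -> torch.Tensor:
    return (gaussian_kernel(z, z).mean()
            + gaussian_kernel(z_prior, z_prior).mean()
            - 2.0 * gaussian_kernel(z, z_prior).mean())

# One toy training step on random "clean" segments.
clean = torch.randn(8, SEG_LEN)
noisy = random_mask(clean)
z = encoder(noisy)
recon = decoder(z)
loss = nn.functional.mse_loss(recon, clean) + mmd(z, torch.randn_like(z))
loss.backward()
print(float(loss))
```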
