
PEOPLE@HES-SO
Directory of staff and competencies

Atzori Manfredo

Adjoint-e scientifique HES A

Main competencies

Applied Machine Learning

Computer Vision

Signal processing

Scientific project writing and management


Main contract

Adjoint-e scientifique HES A

Phone: +41 58 606 89 09

Office: TP115

HES-SO Valais-Wallis - Haute Ecole de Gestion
Route de la Plaine 2, Case postale 80, 3960 Sierre, CH
HEG - VS

2024

Toward improving reproducibility in neuroimaging deep learning studies
Scientific article ArODES

Federico Del Pup, Manfredo Atzori

Frontiers in Neuroscience,  2024, 18

Link to publication

A multi-scale CNN for transfer learning in sEMG-based hand gesture recognition for prosthetic devices
Scientific article ArODES

Riccardo Fratti, Niccolò Marini, Manfredo Atzori, Henning Müller, Cesare Tiengo, Franco Bassetto

Sensors,  2024, 24, 22, 7147

Link to publication

Abstract:

Advancements in neural network approaches have enhanced the effectiveness of surface Electromyography (sEMG)-based hand gesture recognition when measuring muscle activity. However, current deep learning architectures struggle to achieve good generalization and robustness, often demanding significant computational resources. The goal of this paper was to develop a robust model that can quickly adapt to new users using Transfer Learning. We propose a Multi-Scale Convolutional Neural Network (MSCNN), pre-trained with various strategies to improve inter-subject generalization. These strategies include domain adaptation with a gradient-reversal layer and self-supervision using triplet margin loss. We evaluated these approaches on several benchmark datasets, specifically the NinaPro databases. This study also compared two different Transfer Learning frameworks designed for user-dependent fine-tuning. The second Transfer Learning framework achieved a 97% F1 Score across 14 classes with an average of 1.40 epochs, suggesting potential for on-site model retraining in cases of performance degradation over time. The findings highlight the effectiveness of Transfer Learning in creating adaptive, user-specific models for sEMG-based prosthetic hands. Moreover, the study examined the impacts of rectification and window length, with a focus on real-time accessible normalizing techniques, suggesting significant improvements in usability and performance.
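The self-supervision strategy mentioned in the abstract relies on a triplet margin loss. The following is a minimal sketch of that objective in plain Python; the embedding size and margin value are illustrative assumptions, not the paper's settings.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a,p) - d(a,n) + margin): the loss is zero once the negative
    is at least `margin` farther from the anchor than the positive."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# A well-separated triplet incurs no loss:
print(triplet_margin_loss([0.0, 0.0], [0.1, 0.0], [5.0, 0.0]))  # 0.0
```

Minimizing this loss pulls embeddings of the same subject (or gesture) together while pushing different ones apart, which is what enables quick user-specific fine-tuning.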

Multimodal representations of biomedical knowledge from limited training whole slide images and reports using deep learning
Scientific article ArODES

Niccolò Marini, Stefano Marchesin, Marek Wodzinski, Alessandro Caputo, Damian Podareanu, Bryan Cardenas Guevara, Svetla Boytcheva, Simona Vatrano, Filippo Fraggetta, Francesco Ciompi, Gianmaria Silvello, Henning Müller, Manfredo Atzori

Medical Image Analysis,  2024, 97, 103303

Link to publication

Abstract:

The increasing availability of biomedical data creates valuable resources for developing new deep learning algorithms to support experts, especially in domains where collecting large volumes of annotated data is not trivial. Biomedical data include several modalities containing complementary information, such as medical images and reports: images are often large and encode low-level information, while reports include a summarized high-level description of the findings identified within data and often only concerning a small part of the image. However, only a few methods can effectively link the visual content of images with the textual content of reports, preventing medical specialists from properly benefitting from the recent opportunities offered by deep learning models. This paper introduces a multimodal architecture creating a robust biomedical data representation encoding fine-grained text representations within image embeddings. The architecture aims to tackle data scarcity (combining supervised and self-supervised learning) and to create multimodal biomedical ontologies. The architecture is trained on over 6,000 colon whole slide images (WSI), paired with the corresponding reports, collected from two digital pathology workflows. The evaluation of the multimodal architecture involves three tasks: WSI classification (on data from the pathology workflows and from public repositories), multimodal data retrieval, and linking between textual and visual concepts. Noticeably, the latter two tasks are available by architectural design without further training, showing that the multimodal architecture can be adopted as a backbone to solve peculiar tasks. The multimodal data representation outperforms the unimodal one on the classification of colon WSIs and halves the data needed to reach accurate performance, reducing the computational power required and thus the carbon footprint. The combination of images and reports exploiting self-supervised algorithms makes it possible to mine databases without new annotations provided by experts, extracting new information. In particular, the multimodal visual ontology, linking semantic concepts to images, may pave the way to advancements in medicine and biomedical analysis domains, not limited to histopathology.

Improving quality control of whole slide images by explicit artifact augmentation
Scientific article ArODES

Artur Jurgas, Marek Wodzinski, Marina D’Amato, Jeroen van der Laak, Manfredo Atzori, Henning Müller

Scientific Reports,  2024, 14, 17847

Link to publication

Abstract:

The problem of artifacts in whole slide image acquisition, prevalent in both clinical workflows and research-oriented settings, necessitates human intervention and re-scanning. Overcoming this challenge requires developing quality control algorithms, an effort hindered by the limited availability of relevant annotated data in histopathology. The manual annotation of ground truth for artifact detection methods is expensive and time-consuming. This work addresses the issue by proposing a method dedicated to augmenting whole slide images with artifacts. The tool seamlessly generates and blends artifacts from an external library into a given histopathology dataset. The augmented datasets are then utilized to train artifact classification methods. The evaluation shows their usefulness in the classification of artifacts, with an improvement of 0.01 to 0.10 AUROC depending on the artifact type. The framework, model, weights, and ground-truth annotations are freely released to facilitate open science and reproducible research.
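The core augmentation step, blending an artifact patch into an image, can be sketched as simple alpha blending. Images are nested lists of grayscale values here purely for illustration; the published tool operates on whole slide images and is not reproduced by this sketch.

```python
def blend_patch(image, patch, top, left, alpha=0.5):
    """Alpha-blend `patch` into `image` at (top, left); returns a new image."""
    out = [row[:] for row in image]  # copy so the input stays untouched
    for i, prow in enumerate(patch):
        for j, p in enumerate(prow):
            y, x = top + i, left + j
            out[y][x] = (1 - alpha) * out[y][x] + alpha * p
    return out

img = [[100.0] * 4 for _ in range(4)]          # uniform tissue background
artifact = [[0.0, 0.0], [0.0, 0.0]]            # dark 2x2 artifact patch
aug = blend_patch(img, artifact, top=1, left=1, alpha=0.5)
print(aug[1][1], aug[0][0])  # 50.0 100.0
```

Training a classifier on such augmented images gives it artifact examples without any manual annotation of real artifacts.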

BIDSAlign: a library for automatic merging and preprocessing of multiple EEG repositories
Scientific article ArODES

Andrea Zanola, Federico Del Pup, Camille Porcaro, Manfredo Atzori

Journal of Neural Engineering,  2024, 21, 4, 046050

Link to publication

Abstract:

Objective. This study aims to address the challenges associated with data-driven electroencephalography (EEG) data analysis by introducing a standardised library called BIDSAlign. This library efficiently processes and merges heterogeneous EEG datasets from different sources into a common standard template. The goal of this work is to create an environment that allows to preprocess public datasets in order to provide data for the effective training of deep learning (DL) architectures. Approach. The library can handle both Brain Imaging Data Structure (BIDS) and non-BIDS datasets, allowing the user to easily preprocess multiple public datasets. It unifies the EEG recordings acquired with different settings by defining a common pipeline and a specified channel template. An array of visualisation functions is provided inside the library, together with a user-friendly graphical user interface to assist non-expert users throughout the workflow. Main results. BIDSAlign enables the effective use of public EEG datasets, providing valuable medical insights, even for non-experts in the field. Results from applying the library to datasets from OpenNeuro demonstrate its ability to extract significant medical knowledge through an end-to-end workflow, facilitating group analysis, visual comparison and statistical testing. Significance. BIDSAlign solves the lack of large EEG datasets by aligning multiple datasets to a standard template. This unlocks the potential of public EEG data for training DL models. It paves the way to promising contributions based on DL to clinical and non-clinical EEG research, offering insights that can inform neurological disease diagnosis and treatment strategies.
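The channel-template idea described above can be illustrated as follows. This is a toy sketch of the concept, not BIDSAlign's actual API (the library itself is not reproduced here): known channels are reordered to match a common template and channels a given dataset did not record are zero-filled.

```python
def align_to_template(recording, template):
    """recording: dict of channel name -> list of samples.
    Returns channels in template order; missing channels become zeros."""
    n_samples = len(next(iter(recording.values())))
    return [recording.get(ch, [0.0] * n_samples) for ch in template]

# One dataset recorded only Cz and Pz; the template also expects Fz.
rec = {"Cz": [1.0, 2.0], "Pz": [3.0, 4.0]}
aligned = align_to_template(rec, ["Fz", "Cz", "Pz"])
print(aligned)  # [[0.0, 0.0], [1.0, 2.0], [3.0, 4.0]]
```

Aligning every dataset to the same channel order and count is what makes heterogeneous recordings usable as one training set for deep learning models.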

A systematic comparison of deep learning methods for Gleason grading and scoring
Scientific article ArODES

Juan P. Dominguez-Morales, Lourdes Duran-Lopez, Niccolò Marini, Saturnino Vicente-Diaz, Alejandro Linares-Barranco, Manfredo Atzori, Henning Müller

Medical Image Analysis,  2024, 95, 103191

Link to publication

Abstract:

Prostate cancer is the second most frequent cancer in men worldwide after lung cancer. Its diagnosis is based on the identification of the Gleason score that evaluates the abnormality of cells in glands through the analysis of the different Gleason patterns within tissue samples. The recent advancements in computational pathology, a domain aiming at developing algorithms to automatically analyze digitized histopathology images, lead to a large variety and availability of datasets and algorithms for Gleason grading and scoring. However, there is no clear consensus on which methods are best suited for each problem in relation to the characteristics of data and labels. This paper provides a systematic comparison on nine datasets with state-of-the-art training approaches for deep neural networks (including fully-supervised learning, weakly-supervised learning, semi-supervised learning, Additive-MIL, Attention-Based MIL, Dual-Stream MIL, TransMIL and CLAM) applied to Gleason grading and scoring tasks. The nine datasets are collected from pathology institutes and openly accessible repositories. The results show that the best methods for Gleason grading and Gleason scoring tasks are fully supervised learning and CLAM, respectively, guiding researchers to the best practice to adopt depending on the task to solve and the labels that are available.
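One of the compared families, attention-based multiple instance learning, pools patch embeddings into a slide-level representation. A minimal sketch of the pooling step in plain Python; in the real methods the per-patch scores come from a small learned network, whereas here they are passed in directly.

```python
import math

def attention_pool(embeddings, scores):
    """Softmax the per-instance scores into weights, then form the
    weighted sum of instance embeddings (the bag/slide embedding)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(embeddings[0])
    pooled = [sum(w * emb[d] for w, emb in zip(weights, embeddings))
              for d in range(dim)]
    return pooled, weights

pooled, weights = attention_pool([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
print(pooled)  # [0.5, 0.5] — equal scores give equal weights
```

Because only the slide label supervises training, the attention weights let the model focus on the diagnostically relevant patches without patch-level annotations.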

RegWSI: whole slide image registration using combined deep feature- and intensity-based methods: winner of the ACROBAT 2023 challenge
Scientific article ArODES

Marek Wodzinski, Niccolò Marini, Manfredo Atzori, Henning Müller

Computer Methods and Programs in Biomedicine,  2024, 250, no 108187

Link to publication

Abstract:

Background and objective The automatic registration of differently stained whole slide images (WSIs) is crucial for improving diagnosis and prognosis by fusing complementary information emerging from different visible structures. It is also useful to quickly transfer annotations between consecutive or restained slides, thus significantly reducing the annotation time and associated costs. Nevertheless, the slide preparation is different for each stain and the tissue undergoes complex and large deformations. Therefore, a robust, efficient, and accurate registration method is highly desired by the scientific community and hospitals specializing in digital pathology. Methods We propose a two-step hybrid method consisting of (i) deep learning- and feature-based initial alignment algorithm, and (ii) intensity-based nonrigid registration using the instance optimization. The proposed method does not require any fine-tuning to a particular dataset and can be used directly for any desired tissue type and stain. The registration time is low, allowing one to perform efficient registration even for large datasets. The method was proposed for the ACROBAT 2023 challenge organized during the MICCAI 2023 conference and scored 1st place. The method is released as open-source software. Results The proposed method is evaluated using three open datasets: (i) Automatic Nonrigid Histological Image Registration Dataset (ANHIR), (ii) Automatic Registration of Breast Cancer Tissue Dataset (ACROBAT), and (iii) Hybrid Restained and Consecutive Histological Serial Sections Dataset (HyReCo). The target registration error (TRE) is used as the evaluation metric. We compare the proposed algorithm to other state-of-the-art solutions, showing considerable improvement. Additionally, we perform several ablation studies concerning the resolution used for registration and the initial alignment robustness and stability. 
The method achieves the most accurate results for the ACROBAT dataset, the cell-level registration accuracy for the restained slides from the HyReCo dataset, and is among the best methods evaluated on the ANHIR dataset. Conclusions The article presents an automatic and robust registration method that outperforms other state-of-the-art solutions. The method does not require any fine-tuning to a particular dataset and can be used out-of-the-box for numerous types of microscopic images. The method is incorporated into the DeeperHistReg framework, allowing others to directly use it to register, transform, and save the WSIs at any desired pyramid level (resolution up to 220k x 220k). We provide free access to the software. The results are fully and easily reproducible. The proposed method is a significant contribution to improving the WSI registration quality, thus advancing the field of digital pathology.
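The evaluation metric named above, the target registration error (TRE), is simply the mean distance between landmarks mapped by the estimated transform and their annotated targets. A minimal sketch with hypothetical 2D landmarks:

```python
import math

def tre(warped_landmarks, target_landmarks):
    """Mean Euclidean distance between corresponding landmark pairs."""
    dists = [math.dist(p, q) for p, q in zip(warped_landmarks, target_landmarks)]
    return sum(dists) / len(dists)

# One landmark already matches; the other is off by a 3-4-5 triangle.
print(tre([(0.0, 0.0), (3.0, 4.0)], [(0.0, 0.0), (0.0, 0.0)]))  # 2.5
```

Lower TRE means the registration brings corresponding tissue structures closer together across the differently stained slides.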

SelfEEG: a Python library for self-supervised learning in electroencephalography
Scientific article ArODES

Federico Del Pup, Andrea Zanola, Louis Fabrice Tshimanga, Paolo Emilio Mazzon, Manfredo Atzori

Journal of Open Source Software,  2024, 9, 95, 6224

Link to publication

Abstract:

SelfEEG is an open-source Python library developed to assist researchers in conducting Self- Supervised Learning (SSL) experiments on electroencephalography (EEG) data. Its primary objective is to offer a user-friendly but highly customizable environment, enabling users to efficiently design and execute self-supervised learning tasks on EEG data. SelfEEG covers all the stages of a typical SSL pipeline, ranging from data import to model design and training. It includes modules specifically designed to: split data at various granularity levels (e.g., session-, subject-, or dataset-based splits); effectively manage data stored with different configurations (e.g., file extensions, data types) during mini-batch construction; provide a wide range of standard deep learning models, data augmentations and SSL baseline methods applied to EEG data. Most of the functionality offered by selfEEG can be executed both on GPUs and CPUs, expanding its usability beyond the self-supervised learning area. Additionally, selfEEG can be employed for the analysis of other biomedical signals often coupled with EEGs, such as electromyography or electrocardiography data. These features make selfEEG a versatile deep learning tool for biomedical applications and a useful resource in SSL, one of the currently most active fields of artificial intelligence.
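The subject-based split mentioned in the abstract can be illustrated with a toy sketch (not selfEEG's actual API): every recording of a subject lands in the same partition, so no subject leaks between training and test data.

```python
def split_by_subject(files, test_subjects):
    """files: list of (subject_id, path) pairs.
    Returns (train_paths, test_paths) with no subject in both."""
    train = [path for subj, path in files if subj not in test_subjects]
    test = [path for subj, path in files if subj in test_subjects]
    return train, test

files = [("s1", "s1_a.edf"), ("s1", "s1_b.edf"), ("s2", "s2_a.edf")]
train, test = split_by_subject(files, {"s2"})
print(train, test)  # ['s1_a.edf', 's1_b.edf'] ['s2_a.edf']
```

Session- or dataset-based splits follow the same pattern with a different grouping key; the granularity determines which kind of generalization the evaluation measures.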

2023

Improving the classification of veterinary thoracic radiographs through inter-species and inter-pathology self-supervised pre-training of deep learning models
Scientific article ArODES

Weronika Celniak, Marek Wodzinski, Artur Jurgas, Silvia Burti, Alessandro Zotti, Manfredo Atzori, Henning Müller, Tommaso Banzato

Scientific Reports,  13, 19518

Link to publication

Abstract:

The analysis of veterinary radiographic imaging data is an essential step in the diagnosis of many thoracic lesions. Given the limited time that physicians can devote to a single patient, it would be valuable to implement an automated system to help clinicians make faster but still accurate diagnoses. Currently, most of such systems are based on supervised deep learning approaches. However, the problem with these solutions is that they need a large database of labeled data. Access to such data is often limited, as it requires a great investment of both time and money. Therefore, in this work we present a solution that allows higher classification scores to be obtained using knowledge transfer from inter-species and inter-pathology self-supervised learning methods. Before training the network for classification, pretraining of the model was performed using self-supervised learning approaches on publicly available unlabeled radiographic data of human and dog images, which allowed substantially increasing the number of images for this phase. The self-supervised learning approaches included the Beta Variational Autoencoder, the Soft-Introspective Variational Autoencoder, and a Simple Framework for Contrastive Learning of Visual Representations. After the initial pretraining, fine-tuning was performed for the collected veterinary dataset using 20% of the available data. Next, a latent space exploration was performed for each model after which the encoding part of the model was fine-tuned again, this time in a supervised manner for classification. Simple Framework for Contrastive Learning of Visual Representations proved to be the most beneficial pretraining method. Therefore, it was for this method that experiments with various fine-tuning methods were carried out. We achieved a mean ROC AUC score of 0.77 and 0.66, respectively, for the laterolateral and dorsoventral projection datasets. 
The results show significant improvement compared to using the model without any pretraining approach.

Modelling digital health data: the ExaMode ontology for computational pathology
Scientific article ArODES

Laura Menotti, Gianmaria Silvello, Manfredo Atzori, Svetla Boytcheva, Francesco Ciompi, Giorgio Maria Di Nunzio, Filippo Fraggetta, Fabio Giachelle, Ornella Irrera, Stefano Marchesin, Niccolò Marini, Henning Müller, Todor Primov

Journal of pathology informatics,  14, 100332

Link to publication

Abstract:

Computational pathology can significantly benefit from ontologies to standardize the employed nomenclature and help with knowledge extraction processes for high-quality annotated image datasets. The end goal is to reach a shared model for digital pathology to overcome data variability and integration problems. Indeed, data annotation in such a specific domain is still an unsolved challenge and datasets cannot be steadily reused in diverse contexts due to heterogeneity issues of the adopted labels, multilingualism, and different clinical practices. Material and methods: This paper presents the ExaMode ontology, modeling the histopathology process by considering 3 key cancer diseases (colon, cervical, and lung tumors) and celiac disease. The ExaMode ontology has been designed bottom-up in an iterative fashion with continuous feedback and validation from pathologists and clinicians. The ontology is organized into 5 semantic areas that define an ontological template to model any disease of interest in histopathology. Results: The ExaMode ontology is currently being used as a common semantic layer in: (i) an entity linking tool for the automatic annotation of medical records; (ii) a web-based collaborative annotation tool for histopathology text reports; and (iii) a software platform for building holistic solutions integrating multimodal histopathology data. Discussion: The ExaMode ontology is a key means to store data in a graph database according to the RDF data model. The creation of an RDF dataset can help develop more accurate algorithms for image analysis, especially in the field of digital pathology. This approach allows for seamless data integration and a unified query access point, from which we can extract relevant clinical insights about the considered diseases using SPARQL queries.
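The graph-database idea behind the abstract can be sketched in miniature: RDF stores data as (subject, predicate, object) triples, and a SPARQL query is essentially pattern matching over them. Below, `None` plays the role of a SPARQL variable; the triples are invented for illustration and are not taken from the ExaMode ontology.

```python
def match(triples, s=None, p=None, o=None):
    """Return all triples matching the pattern; None is a wildcard,
    analogous to a variable in a SPARQL basic graph pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

triples = [
    ("report42", "diagnoses", "colon_adenocarcinoma"),
    ("report42", "describes", "slide7"),
    ("colon_adenocarcinoma", "locatedIn", "colon"),
]
print(match(triples, s="report42", p="diagnoses"))
# [('report42', 'diagnoses', 'colon_adenocarcinoma')]
```

A real SPARQL endpoint adds joins, filters, and inference on top, but the unified query access point the abstract mentions rests on exactly this triple model.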

Functional synergies applied to a publicly available dataset of hand grasps show evidence of kinematic-muscular synergistic control
Scientific article ArODES

Alessandro Scano, Néstor Jarque-Bou, Cristina Brambilla, Manfredo Atzori, Andrea D’Avella, Henning Müller

IEEE Access,  11, 108544-108560

Link to publication

Abstract:

Hand grasp patterns are the results of complex kinematic-muscular coordination and synergistic control might help reducing the dimensionality of the motor control space at the hand level. Kinematic-muscular synergies combining muscle and kinematic hand grasp data have not been investigated before. This paper provides a novel analysis of kinematic-muscular synergies from kinematic and EMG data of 28 subjects, performing 20 hand grasps. Kinematic-muscular synergies were extracted from combined kinematic and muscle data with the recently introduced Mixed Matrix Factorization (MMF) algorithm. Seven synergies were first extracted from each subject, accounting on average for >75 % of the data variation. Then, cluster analysis was used to group synergies across subjects, with the aim of summarizing the coordination patterns available for hand grasps, and investigating relevant aspects of synergies such as inter-individual variability. Twenty-one clusters were needed to group the entire set of synergies extracted from 28 subjects, revealing high inter-individual variability. The number of kinematic-muscular motor modules required to perform the motor tasks is a reduced subset of the degrees of freedom to be coordinated; however, probably due to the variety of tasks, poor constraints and the large number of variables considered, we noted poor inter-individual repeatability. The results generalize the description of muscle and hand kinematics, better clarifying several limits of the field and fostering the development of applications in rehabilitation and assistive robotics.

Spatial and temporal muscle synergies provide a dual characterization of low-dimensional and intermittent control of upper-limb movements
Scientific article ArODES

Cristina Brambilla, Manfredo Atzori, Henning Müller, Andrea D'Avella, Alessandro Scano

Neuroscience,  2023, vol. 514, pp. 100-122

Link to publication

Abstract:

Muscle synergy analysis investigates the neurophysiological mechanisms that the central nervous system employs to coordinate muscles. Several models have been developed to decompose electromyographic (EMG) signals into spatial and temporal synergies. However, using multiple approaches can complicate the interpretation of results. Spatial synergies represent invariant muscle weights modulated with variant temporal coefficients; temporal synergies are invariant temporal profiles that coordinate variant muscle weights. While non-negative matrix factorization allows to extract both spatial and temporal synergies, the comparison between the two approaches was rarely investigated targeting a large set of multi-joint upper-limb movements. Spatial and temporal synergies were extracted from two datasets with proximal (16 subjects, 10M, 6F) and distal upper-limb movements (30 subjects, 21M, 9F), focusing on their differences in reconstruction accuracy and inter-individual variability. We showed the existence of both spatial and temporal structure in the EMG data, comparing synergies with those from a surrogate dataset in which the phases were shuffled preserving the frequency content of the original data. The two models provide a compact characterization of motor coordination at the spatial or temporal level, respectively. However, a lower number of temporal synergies are needed to achieve the same reconstruction R2: spatial and temporal synergies may capture different hierarchical levels of motor control and are dual approaches to the characterization of low-dimensional coordination of the upper-limb. Last, a detailed characterization of the structure of the temporal synergies suggested that they can be related to intermittent control of the movement, allowing high flexibility and dexterity. These results improve neurophysiology understanding in several fields such as motor control, rehabilitation, and prosthetics.

2022

Mapping of the upper limb work-space: benchmarking four wrist smoothness metrics
Scientific article ArODES

Alessandro Scano, Cristina Brambilla, Henning Müller, Manfredo Atzori

Applied sciences,  12, 24, 12643

Link to publication

Abstract:

Smoothness is a commonly used measure of motion control. Physiological motion is characterized by high smoothness in the upper limb workspace. Moreover, there is evidence that smoothness-based models describe effectively skilled motion planning. Typical smoothness measures are based on wrist kinematics. Despite smoothness being often used as a measure of motor control and to evaluate clinical pathologies, so far, a smoothness map is not available for the whole workspace of the upper limb. In this work, we provide a map of the upper limb workspace comparing four smoothness metrics: the normalized jerk, the speed metric, the spectral arc length, and the number of speed peaks. Fifteen subjects were enrolled, performing several reaching movements in the upper limb workspace in multiple directions in five planes (frontal, left, right, horizontal and up). Smoothness of the wrist of each movement was computed and a 3D workspace map was reconstructed. The four smoothness metrics were in general accordance. Lower smoothness was found in the less dexterous sectors (up and left sectors), with respect to the frontal, horizontal, and right sectors. The number of speed peaks, frequently used for evaluating motion in neurological diseases, was instead not suitable for assessing movements of healthy subjects. Lastly, strong correlation was found especially between the normalized jerk and speed metric. These results can be used as a benchmark for motor control studies in various fields as well as clinical studies.
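Two of the four metrics compared above can be sketched directly from a wrist speed profile. The number of speed peaks counts local maxima, and jerk is the second finite difference of speed; this is a simplified mean-squared jerk, not the paper's fully normalized variant (which also normalizes by movement duration and amplitude).

```python
def speed_peaks(speed):
    """Count strict local maxima of the speed profile."""
    return sum(1 for i in range(1, len(speed) - 1)
               if speed[i - 1] < speed[i] > speed[i + 1])

def mean_squared_jerk(speed, dt=1.0):
    """Mean squared second finite difference of speed (simplified jerk)."""
    accel = [(b - a) / dt for a, b in zip(speed, speed[1:])]
    jerk = [(b - a) / dt for a, b in zip(accel, accel[1:])]
    return sum(j * j for j in jerk) / len(jerk)

bell = [0.0, 1.0, 2.0, 1.0, 0.0]    # smooth, single-peaked profile
jagged = [0.0, 2.0, 0.5, 2.0, 0.0]  # hesitant, multi-peaked profile
print(speed_peaks(bell), speed_peaks(jagged))  # 1 2
```

Smooth, well-planned reaching produces a single bell-shaped speed peak; extra peaks and higher jerk indicate corrective submovements, which is why these metrics discriminate workspace sectors by dexterity.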

Empowering digital pathology applications through explainable knowledge extraction tools
Scientific article ArODES

Stefano Marchesin, Fabio Giachelle, Niccolò Marini, Manfredo Atzori, Svetla Boytcheva, Genziana Buttafuoco, Francesco Ciompi, Giorgio Maria Di Nunzio, Filippo Fraggetta, Ornella Irrera, Henning Müller, Todor Primov, Simona Vatrano, Gianmaria Silvello

Journal of pathology informatics,  2022, vol. 13, article no. 100139, pp. 1-14

Link to publication

Abstract:

Exa-scale volumes of medical data have been produced for decades. In most cases, the diagnosis is reported in free text, encoding medical knowledge that is still largely unexploited. In order to allow decoding medical knowledge included in reports, we propose an unsupervised knowledge extraction system combining a rule-based expert system with pre-trained Machine Learning (ML) models, namely the Semantic Knowledge Extractor Tool (SKET). Combining rule-based techniques and pre-trained ML models provides high accuracy results for knowledge extraction. This work demonstrates the viability of unsupervised Natural Language Processing (NLP) techniques to extract critical information from cancer reports, opening opportunities such as data mining for knowledge extraction purposes, precision medicine applications, structured report creation, and multimodal learning. SKET is a practical and unsupervised approach to extracting knowledge from pathology reports, which opens up unprecedented opportunities to exploit textual and multimodal medical information in clinical practice. We also propose SKET eXplained (SKET X), a web-based system providing visual explanations about the algorithmic decisions taken by SKET. SKET X is designed/developed to support pathologists and domain experts in understanding SKET predictions, possibly driving further improvements to the system.
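The rule-based side of such a system can be illustrated as lexicon matching: map surface forms found in a free-text report to controlled concepts. This is a toy sketch of the idea; the lexicon entries are invented for illustration and are not SKET's actual rules or vocabulary.

```python
# Hypothetical lexicon: surface form -> controlled concept
LEXICON = {
    "adenocarcinoma": "Colon_Adenocarcinoma",
    "tubular adenoma": "Tubular_Adenoma",
    "dysplasia": "Dysplasia",
}

def extract_concepts(report):
    """Return the sorted set of concepts whose surface forms occur in the report."""
    text = report.lower()
    return sorted({concept for form, concept in LEXICON.items() if form in text})

print(extract_concepts("Tubular adenoma with low-grade dysplasia."))
# ['Dysplasia', 'Tubular_Adenoma']
```

Combining such deterministic rules with pre-trained ML models, as the abstract describes, covers both the predictable terminology and its noisier variants.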

Unleashing the potential of digital pathology data by training computer-aided diagnosis models without human annotations
Scientific article ArODES

Niccolò Marini, Stefano Marchesin, Sebastian Otálora, Marek Wodzinski, Alessandro Caputo, Mart Van Rijthoven, Witali Aswolinskiy, John-Melle Bokhorst, Damian Podareanu, Edyta Petters, Svetla Boytcheva, Genziana Buttafuoco, Simona Vatrano, Filippo Fraggetta, Maristella Agosti, Francesco Ciompi, Gianmaria Silvello, Henning Müller, Manfredo Atzori

npj Digital Medicine,  2022, vol. 5, article no. 102, pp. 1-18

Link to publication

Abstract:

The digitalization of clinical workflows and the increasing performance of deep learning algorithms are paving the way towards new methods for tackling cancer diagnosis. However, the availability of medical specialists to annotate digitized images and free-text diagnostic reports does not scale with the need for large datasets required to train robust computer-aided diagnosis methods that can target the high variability of clinical cases and data produced. This work proposes and evaluates an approach to eliminate the need for manual annotations to train computer-aided diagnosis tools in digital pathology. The approach includes two components, to automatically extract semantically meaningful concepts from diagnostic reports and use them as weak labels to train convolutional neural networks (CNNs) for histopathology diagnosis. The approach is trained (through 10-fold cross-validation) on 3’769 clinical images and reports, provided by two hospitals and tested on over 11’000 images from private and publicly available datasets. The CNN, trained with automatically generated labels, is compared with the same architecture trained with manual labels. Results show that combining text analysis and end-to-end deep neural networks allows building computer-aided diagnosis tools that reach solid performance (micro-accuracy = 0.908 at image-level) based only on existing clinical data without the need for manual annotations.

Evaluation of methods for the extraction of spatial muscle synergies
Scientific article ArODES

Kunkun Zhao, Haiying Wen, Zhisheng Zhang, Manfredo Atzori, Henning Müller, Zhongqu Xie, Alessandro Scano

Frontiers in neuroscience,  2022, vol. 16, article 732156

Link to publication

Abstract:

Muscle synergies have been largely used in many application fields, including motor control studies, prosthesis control, movement classification, rehabilitation, and clinical studies. Due to the complexity of the motor control system, the full repertoire of the underlying synergies has been identified only for some classes of movements and scenarios. Several extraction methods have been used to extract muscle synergies. However, some of these methods may not effectively capture the nonlinear relationship between muscles and impose constraints on input signals or extracted synergies. Moreover, other approaches such as autoencoders (AEs), an unsupervised neural network, were recently introduced to study bioinspired control and movement classification. In this study, we evaluated the performance of five methods for the extraction of spatial muscle synergy, namely, principal component analysis (PCA), independent component analysis (ICA), factor analysis (FA), nonnegative matrix factorization (NMF), and AEs using simulated data and a publicly available database. To analyze the performance of the considered extraction methods with respect to several factors, we generated a comprehensive set of simulated data (ground truth), including spatial synergies and temporal coefficients. The signal-to-noise ratio (SNR) and the number of channels (NoC) varied when generating simulated data to evaluate their effects on ground truth reconstruction. This study also tested the efficacy of each synergy extraction method when coupled with standard classification methods, including K-nearest neighbors (KNN), linear discriminant analysis (LDA), support vector machines (SVM), and Random Forest (RF). The results showed that both SNR and NoC affected the outputs of the muscle synergy analysis. Although AEs showed better performance than FA in variance accounted for and PCA in synergy vector similarity and activation coefficient similarity, NMF and ICA outperformed the other three methods. 
Classification tasks showed that classification algorithms were sensitive to synergy extraction methods, while KNN and RF outperformed the other two methods for all extraction methods; in general, the classification accuracy of NMF and PCA was higher. Overall, the results suggest selecting suitable methods when performing muscle synergy-related analysis.
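To make the comparison concrete, here is a minimal, self-contained sketch of NMF-based spatial synergy extraction on simulated data (the dimensions, noise level and variable names are invented for illustration; this is not code from the study):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Simulated ground truth: 3 spatial synergies over 10 muscles,
# activated by nonnegative temporal coefficients (500 samples).
true_W = rng.random((10, 3))          # muscles x synergies
true_H = rng.random((3, 500))         # synergies x time
emg = true_W @ true_H + 0.01 * rng.random((10, 500))  # noisy envelopes

# NMF factorizes the nonnegative envelope matrix into
# spatial synergies (W) and activation coefficients (H).
model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(emg)          # 10 x 3 spatial synergies
H = model.components_                 # 3 x 500 activations

# Variance accounted for (VAF): how well the factorization
# reconstructs the original signal.
recon = W @ H
vaf = 1.0 - np.sum((emg - recon) ** 2) / np.sum(emg ** 2)
print(f"VAF with 3 synergies: {vaf:.3f}")
```

The other linear extraction methods compared in the study (PCA, ICA, FA) can be swapped in with scikit-learn's `PCA`, `FastICA` and `FactorAnalysis` on the same matrix layout.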

Improving robotic hand prosthesis control with eye tracking and computer vision :
Wissenschaftlicher Artikel ArODES
a multimodal approach based on the visuomotor behavior of grasping

Matteo Cognolato, Manfredo Atzori, Roger Gassert, Henning Müller

Frontiers in artificial intelligence,  January 2022, vol. 4, article no. 744476, pp. 1-12

Link zur Publikation

Zusammenfassung:

The complexity and dexterity of the human hand make the development of natural and robust control of hand prostheses challenging. Although a large number of control approaches were developed and investigated in the last decades, limited robustness in real-life conditions often prevented their application in clinical settings and in commercial products. In this paper, we investigate a multimodal approach that exploits the use of eye-hand coordination to improve the control of myoelectric hand prostheses. The analyzed data are from the publicly available MeganePro Dataset 1, which includes multimodal data from transradial amputees and able-bodied subjects while grasping numerous household objects with ten grasp types. A continuous grasp-type classification based on surface electromyography served as both intent detector and classifier. At the same time, the information provided by eye-hand coordination parameters, gaze data and object recognition in first-person videos allowed to identify the object a person aims to grasp. The results show that the inclusion of visual information significantly increases the average offline classification accuracy by up to 15.61 ± 4.22% for the transradial amputees and by up to 7.37 ± 3.52% for the able-bodied subjects, allowing transradial amputees to reach average classification accuracy comparable to intact subjects and suggesting that the robustness of hand prosthesis control based on grasp-type recognition can be significantly improved with the inclusion of visual information extracted by leveraging natural eye-hand coordination behavior and without placing additional cognitive burden on the user.
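The multimodal idea can be sketched with a toy fusion step (the numbers, the four-grasp setup and the multiplicative fusion rule are illustrative assumptions, not the paper's actual pipeline):

```python
import numpy as np

# Hypothetical setup: 4 grasp types. The sEMG classifier outputs a
# posterior over grasps; vision identifies the target object and
# provides a prior over the grasps that are feasible for it.
semg_posterior = np.array([0.35, 0.40, 0.15, 0.10])  # slightly favors grasp 1

# Object recognition says the object affords grasps 0 and 2 only.
visual_prior = np.array([0.5, 0.0, 0.5, 0.0])

# Bayesian-style fusion: multiply and renormalize. The visual prior
# rules out grasps that are implausible for the recognized object.
fused = semg_posterior * visual_prior
fused /= fused.sum()

print("fused posterior:", fused)
print("predicted grasp:", int(np.argmax(fused)))
```

Here the visual information flips the decision from grasp 1 (sEMG alone) to grasp 0, illustrating how object recognition can correct ambiguous muscle signals.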

2021

Questioning domain adaptation in myoelectric hand prostheses control :
Wissenschaftlicher Artikel ArODES
an inter- and intra-subject study

Giulio Marano, Cristina Brambilla, Robert Mihai Mira, Alessandro Scano, Henning Müller, Manfredo Atzori

Sensors,  2021, vol. 21, no. 22, article no. 7500, pp. 1-13

Link zur Publikation

Zusammenfassung:

One major challenge limiting the use of dexterous robotic hand prostheses controlled via electromyography and pattern recognition relates to the important efforts required to train complex models from scratch. To overcome this problem, several studies in recent years proposed to use transfer learning, combining pre-trained models (obtained from prior subjects) with training sessions performed on a specific user. Although a few promising results were reported in the past, it was recently shown that the use of conventional transfer learning algorithms does not increase performance if proper hyperparameter optimization is performed on the standard approach that does not exploit transfer learning. The objective of this paper is to introduce novel analyses on this topic by using a random forest classifier without hyperparameter optimization and to extend them with experiments performed on data recorded from the same patient, but in different data acquisition sessions. Two domain adaptation techniques were tested on the random forest classifier, allowing us to conduct experiments on healthy subjects and amputees. Differently from several previous papers, our results show that there are no appreciable improvements in terms of accuracy, regardless of the transfer learning techniques tested. The lack of adaptive learning is also demonstrated for the first time in an intra-subject experimental setting when using as a source ten data acquisitions recorded from the same subject but on five different days.
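A minimal sketch of the comparison the paper questions, using synthetic data and plain pooling of source-subject data with a random forest (the feature model and the `make_subject` helper are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical sEMG features: target subject (small calibration set)
# and source subjects (large, slightly shifted distribution).
def make_subject(n, shift, rng):
    X = rng.normal(size=(n, 8)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X_tgt, y_tgt = make_subject(200, 0.0, rng)
X_src, y_src = make_subject(2000, 0.5, rng)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_tgt, y_tgt, test_size=0.5, random_state=0)

# Baseline: train only on the target subject's calibration data.
rf_base = RandomForestClassifier(n_estimators=100, random_state=0)
rf_base.fit(X_tr, y_tr)
acc_base = rf_base.score(X_te, y_te)

# Naive "transfer": pool source-subject data with the target data.
rf_pool = RandomForestClassifier(n_estimators=100, random_state=0)
rf_pool.fit(np.vstack([X_src, X_tr]), np.concatenate([y_src, y_tr]))
acc_pool = rf_pool.score(X_te, y_te)

print(f"target-only accuracy: {acc_base:.3f}")
print(f"pooled accuracy:      {acc_pool:.3f}")
```

The paper's finding is that, with a well-tuned baseline like `rf_base`, adding source-subject data in this way yields no appreciable accuracy gain.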

Multi_Scale_Tools :
Wissenschaftlicher Artikel ArODES
a Python library to exploit multi-scale whole slide images

Niccolò Marini, Sebastian Otálora, Damian Podareanu, Mart van Rijthoven, Jeroen van der Laak, Francesco Ciompi, Henning Müller, Manfredo Atzori

Frontiers in computer science,  2021, vol. 3, article no. 684521, pp. 1-12

Link zur Publikation

Zusammenfassung:

Algorithms proposed in computational pathology can allow to automatically analyze digitized tissue samples of histopathological images to help diagnosing diseases. Tissue samples are scanned at a high-resolution and usually saved as images with several magnification levels, namely whole slide images (WSIs). Convolutional neural networks (CNNs) represent the state-of-the-art computer vision methods targeting the analysis of histopathology images, aiming for detection, classification and segmentation. However, the development of CNNs that work with multi-scale images such as WSIs is still an open challenge. The image characteristics and the CNN properties impose architecture designs that are not trivial. Therefore, single scale CNN architectures are still often used. This paper presents Multi_Scale_Tools, a library aiming to facilitate exploiting the multi-scale structure of WSIs. Multi_Scale_Tools currently include four components: a pre-processing component, a scale detector, a multi-scale CNN for classification and a multi-scale CNN for segmentation of the images. The pre-processing component includes methods to extract patches at several magnification levels. The scale detector allows to identify the magnification level of images that do not contain this information, such as images from the scientific literature. The multi-scale CNNs are trained combining features and predictions that originate from different magnification levels. The components are developed using private datasets, including colon and breast cancer tissue samples. They are tested on private and public external data sources, such as The Cancer Genome Atlas (TCGA). The results of the library demonstrate its effectiveness and applicability. The scale detector accurately predicts multiple levels of image magnification and generalizes well to independent external data. The multi-scale CNNs outperform the single-magnification CNN for both classification and segmentation tasks. 
The code is developed in Python and it will be made publicly available upon publication. It aims to be easy to use and easy to be improved with additional functions.
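As a rough illustration of multi-magnification patch extraction (the library itself operates on real WSI formats; here a random array stands in for a slide and `patch_at_magnification` is a hypothetical helper, not a Multi_Scale_Tools function):

```python
import numpy as np

# Hypothetical stand-in for a WSI: a large grayscale array.
rng = np.random.default_rng(0)
wsi = rng.random((1024, 1024))

def patch_at_magnification(img, center, size, level):
    """Extract a size x size patch centered at `center`, where
    level 0 is full resolution and each level halves the
    magnification (the patch covers 2**level more tissue)."""
    half = size * (2 ** level) // 2
    r, c = center
    region = img[r - half:r + half, c - half:c + half]
    # Downsample by striding: a crude stand-in for pyramid levels.
    step = 2 ** level
    return region[::step, ::step]

# Aligned patches at three magnification levels around one point,
# as a multi-scale CNN would consume them.
center = (512, 512)
patches = [patch_at_magnification(wsi, center, 64, lvl) for lvl in range(3)]
for lvl, p in enumerate(patches):
    print(f"level {lvl}: patch shape {p.shape}")
```

All patches share the same pixel size but cover increasingly large tissue regions, which is the structure the multi-scale CNNs combine.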

Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations :
Wissenschaftlicher Artikel ArODES
an experiment on prostate histopathology image classification

Niccolò Marini, Sebastian Otálora, Henning Müller, Manfredo Atzori

Medical image analysis,  2021, vol. 73, article no. 102165, pp. 1-16

Link zur Publikation

Zusammenfassung:

Convolutional neural networks (CNNs) are state-of-the-art computer vision techniques for various tasks, particularly for image classification. However, there are domains where the training of classification models that generalize on several datasets is still an open challenge because of the highly heterogeneous data and the lack of large datasets with local annotations of the regions of interest, such as histopathology image analysis. Histopathology concerns the microscopic analysis of tissue specimens processed in glass slides to identify diseases such as cancer. Digital pathology concerns the acquisition, management and automatic analysis of digitized histopathology images that are large, having on the order of 100,000 × 100,000 pixels per image. Digital histopathology images are highly heterogeneous due to the variability of the image acquisition procedures. Creating locally labeled regions (required for the training) is time-consuming and often expensive in the medical field, as physicians usually have to annotate the data. Despite the advances in deep learning, leveraging strongly and weakly annotated datasets to train classification models is still an unsolved problem, mainly when data are very heterogeneous. Large amounts of data are needed to create models that generalize well. This paper presents a novel approach to train CNNs that generalize to heterogeneous datasets originating from various sources and without local annotations. The data analysis pipeline targets Gleason grading on prostate images and includes two models in sequence, following a teacher/student training paradigm. The teacher model (a high-capacity neural network) automatically annotates a set of pseudo-labeled patches used to train the student model (a smaller network). The two models are trained with two different teacher/student approaches: semi-supervised learning and semi-weakly supervised learning. For each of the two approaches, three student training variants are presented.
The baseline is provided by training the student model only with the strongly annotated data. Classification performance is evaluated on the student model at the patch level (using the local annotations of the Tissue Micro-Arrays Zurich dataset) and at the global level (using the TCGA-PRAD, The Cancer Genome Atlas-PRostate ADenocarcinoma, whole slide image Gleason score). The teacher/student paradigm allows the models to better generalize on both datasets, despite the inter-dataset heterogeneity and the small number of local annotations used. The classification performance is improved both at the patch-level (up to κ = 0.6127 ± 0.0133 from κ = 0.5667 ± 0.0285), at the TMA core-level (Gleason score) (up to κ = 0.7645 ± 0.0231 from κ = 0.7186 ± 0.0306) and at the WSI-level (Gleason score) (up to κ = 0.4529 ± 0.0512 from κ = 0.2293 ± 0.1350). The results show that with the teacher/student paradigm, it is possible to train models that generalize on datasets from entirely different sources, despite the inter-dataset heterogeneity and the lack of large datasets with local annotations.
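The teacher/student paradigm can be sketched as follows, with logistic regression standing in for the high-capacity teacher and the smaller student networks (the data, confidence threshold and model choices are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Hypothetical data: a small strongly annotated set and a large
# unlabeled set drawn from the same distribution.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_lab, y_lab = X[:100], y[:100]        # strong (local) annotations
X_unlab = X[100:1500]                  # unlabeled patches
X_test, y_test = X[1500:], y[1500:]

# Teacher: trained on the strongly annotated data only.
teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# Keep only confident pseudo-labels for the student.
proba = teacher.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.9
X_pseudo = X_unlab[confident]
y_pseudo = proba.argmax(axis=1)[confident]

# Student: trained on strong labels plus confident pseudo-labels.
student = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_lab, X_pseudo]), np.concatenate([y_lab, y_pseudo]))

print(f"teacher accuracy: {teacher.score(X_test, y_test):.3f}")
print(f"student accuracy: {student.score(X_test, y_test):.3f}")
```

The pseudo-labeling step is what lets the student exploit data that never received local annotations, which is the core of the paper's strategy for heterogeneous datasets.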

Combining weakly and strongly supervised learning improves strong supervision in Gleason pattern classification
Wissenschaftlicher Artikel ArODES

Sebastian Otálora, Niccolò Marini, Henning Müller, Manfredo Atzori

BMC Medical Imaging,  2021, vol. 21, article no. 77, pp. 1-14

Link zur Publikation

Zusammenfassung:

One challenge to train deep convolutional neural network (CNN) models with whole slide images (WSIs) is providing the required large number of costly, manually annotated image regions. Strategies to alleviate the scarcity of annotated data include: using transfer learning, data augmentation and training the models with less expensive image-level annotations (weakly-supervised learning). However, it is not clear how to combine the use of transfer learning in a CNN model when different data sources are available for training or how to leverage the combination of large amounts of weakly annotated images with a set of local region annotations. This paper aims to evaluate CNN training strategies based on transfer learning to leverage the combination of weak and strong annotations in heterogeneous data sources. The trade-off between classification performance and annotation effort is explored by evaluating a CNN that learns from strong labels (region annotations) and is later fine-tuned on a dataset with less expensive weak (image-level) labels.

2020

Variability of muscle synergies in hand grasps :
Wissenschaftlicher Artikel ArODES
analysis of intra- and inter-session data

Una Pale, Manfredo Atzori, Henning Müller, Alessandro Scano

Sensors,  2020, vol. 20, no. 15, article no. 4297

Link zur Publikation

Zusammenfassung:

Background. Muscle synergy analysis is an approach to understand the neurophysiological mechanisms behind the hypothesized ability of the Central Nervous System (CNS) to reduce the dimensionality of muscle control. The muscle synergy approach is also used to evaluate motor recovery and the evolution of the patients’ motor performance both in single-session and longitudinal studies. Synergy-based assessments are subject to various sources of variability: natural trial-by-trial variability of performed movements, intrinsic characteristics of subjects that change over time (e.g., recovery, adaptation, exercise, etc.), as well as experimental factors such as different electrode positioning. These sources of variability need to be quantified in order to resolve challenges for the application of muscle synergies in clinical environments. The objective of this study is to analyze the stability and similarity of extracted muscle synergies under the effect of factors that may induce variability, including inter- and intra-session variability within subjects and inter-subject variability differentiation. The analysis was performed using the comprehensive, publicly available hand grasp NinaPro Database, featuring surface electromyography (EMG) measures from two EMG electrode bracelets. Methods. Intra-session, inter-session, and inter-subject synergy stability was analyzed using the following measures: variance accounted for (VAF) and number of synergies (NoS) as measures of reconstruction stability quality and cosine similarity for comparison of spatial composition of extracted synergies. Moreover, an approach based on virtual electrode repositioning was applied to shed light on the influence of electrode position on inter-session synergy similarity. Results. Inter-session synergy similarity was significantly lower with respect to intra-session similarity, both considering coefficient of variation of VAF (approximately 0.2–15% for inter vs. approximately 0.1% to 2.5% for intra, depending on NoS) and coefficient of variation of NoS (approximately 6.5–14.5% for inter vs. approximately 3–3.5% for intra, depending on VAF) as well as synergy similarity (approximately 74–77% for inter vs. approximately 88–94% for intra, depending on the selected VAF). Virtual electrode repositioning revealed that a slightly different electrode position can lower similarity of synergies from the same session and can increase similarity between sessions. Finally, the similarity of inter-subject synergies has no significant difference from the similarity of inter-session synergies (both on average approximately 84–90% depending on selected VAF). Conclusion. Synergy similarity was lower in inter-session conditions with respect to intra-session. This finding should be considered when interpreting results from multi-session assessments. Lastly, electrode positioning might play an important role in the lower similarity of synergies over different sessions.
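The cosine-similarity comparison of spatial synergies can be illustrated as follows (the greedy matching, the data and the `synergy_similarity` helper are invented for illustration; the study's exact matching procedure may differ):

```python
import numpy as np

def synergy_similarity(W1, W2):
    """Mean cosine similarity between spatial synergies (columns),
    after greedily matching each column of W1 to its most similar
    unused column of W2."""
    sims, used = [], set()
    for i in range(W1.shape[1]):
        best, best_j = -1.0, None
        for j in range(W2.shape[1]):
            if j in used:
                continue
            c = np.dot(W1[:, i], W2[:, j]) / (
                np.linalg.norm(W1[:, i]) * np.linalg.norm(W2[:, j]))
            if c > best:
                best, best_j = c, j
        used.add(best_j)
        sims.append(best)
    return float(np.mean(sims))

rng = np.random.default_rng(0)
W_sess1 = rng.random((12, 4))                   # 12 muscles, 4 synergies
W_sess2 = W_sess1 + 0.05 * rng.random((12, 4))  # slightly perturbed session
W_other = rng.random((12, 4))                   # unrelated subject

print(f"intra-subject similarity: {synergy_similarity(W_sess1, W_sess2):.3f}")
print(f"inter-subject similarity: {synergy_similarity(W_sess1, W_other):.3f}")
```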

Gaze, behavioral, and clinical data for phantom limbs after hand amputation from 15 amputees and 29 controls
Wissenschaftlicher Artikel ArODES

Gianluca Saetta, Matteo Cognolato, Manfredo Atzori, Diego Faccio, Katia Giacomino, Anne-Gabrielle Mittaz Hager, Cesare Tiengo, Franco Bassetto, Henning Müller, Peter Brugger

Scientific Data,  2020, vol. 7, article 60, pp. 1-14

Link zur Publikation

Zusammenfassung:

Despite recent advances in prosthetics, many upper limb amputees still use prostheses with some reluctance. They often do not feel able to incorporate the artificial hand into their bodily self. Furthermore, prosthesis fitting is not usually tailored to accommodate the characteristics of an individual’s phantom limb sensations. These are experienced by almost all persons with an acquired amputation and comprise the motor and postural properties of the lost limb. This article presents and validates a multimodal dataset including an extensive qualitative and quantitative assessment of phantom limb sensations in 15 transradial amputees, surface electromyography and accelerometry data of the forearm, and measurements of gaze behavior during exercises requiring pointing or repositioning of the forearm and the phantom hand. The data also include acquisitions from 29 able-bodied participants, matched for gender and age. Special emphasis was given to tracking the visuo-motor coupling between eye-hand/eye-phantom during these exercises.

A large calibrated database of hand movements and grasps kinematics
Wissenschaftlicher Artikel ArODES

Néstor J. Jarque-Bou, Manfredo Atzori, Henning Müller

Scientific data,  2020, vol. 7, article 12, pp. 1-10

Link zur Publikation

Zusammenfassung:

Modelling hand kinematics is a challenging problem, crucial for several domains including robotics, 3D modelling, rehabilitation medicine and neuroscience. Currently available datasets are few and limited in the number of subjects and movements. The objective of this work is to advance the modelling of hand kinematics by releasing and validating a large publicly available kinematic dataset of hand movements and grasp kinematics. The dataset is based on the harmonization and calibration of the kinematics data of three multimodal datasets previously released (Ninapro DB1, DB2 and DB5, that include electromyography, inertial and dynamic data). The novelty of the dataset is related to the high number of subjects (77) and movements (40 movements, each repeated several times) for which we release for the first time calibrated kinematic data, resulting in the largest available kinematic dataset. Differently from the previous datasets, the data are also calibrated to avoid sensor nonlinearities. The validation confirms that the data are not affected by experimental procedures and that they are similar to data acquired in real-life conditions.

2019

PaWFE :
Wissenschaftlicher Artikel ArODES
fast signal feature extraction using parallel time windows

Manfredo Atzori, Henning Müller

Frontiers in Neurorobotics,  2019, vol. 13, article 74

Link zur Publikation

Zusammenfassung:

Motivation: Hand amputations can dramatically affect the quality of life of a person. Researchers are developing surface electromyography and machine learning solutions to control dexterous and robotic prosthetic hands; however, long computational times can slow down this process. Objective: This paper aims at creating a fast signal feature extraction algorithm that can extract widely used features and allow researchers to easily add new ones. Methods: PaWFE (Parallel Window Feature Extractor) extracts the signal features from several time windows in parallel. The MATLAB code is publicly available and supports several time domain and frequency features. The code was tested and benchmarked using 1, 2, 4, 8, 16, 32, and 48 threads on a server with four Xeon E7-4820 CPUs and 128 GB RAM using the first 5 datasets of the Ninapro database, which are recorded with different acquisition setups. Results: The parallel time window analysis approach allows to reduce the computational time up to 20 times when using 32 cores, showing very good scalability. Signal features can be extracted in a few seconds from an entire data acquisition and in <100 ms from a single time window, reducing the duration of the feature extraction procedure by over 15 times in comparison to traditional approaches. The code allows users to easily add new signal feature extraction scripts, which can be added to the code and on the Ninapro website upon request. Significance: The code allows researchers in machine learning and biosignal data analysis to easily and quickly test modern machine learning approaches on big datasets and it can be used as a resource for real-time data analysis too.
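PaWFE itself is MATLAB code; the parallel-window idea can be sketched in Python as follows (synthetic signal, a hypothetical feature set of MAV, RMS and waveform length, and thread-based parallelism as a stand-in for the original implementation):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
signal = rng.normal(size=(100_000, 12))   # samples x channels, synthetic sEMG

def window_features(args):
    """Time-domain features for one window: mean absolute value,
    root mean square, and waveform length, per channel."""
    start, length = args
    w = signal[start:start + length]
    mav = np.mean(np.abs(w), axis=0)
    rms = np.sqrt(np.mean(w ** 2, axis=0))
    wl = np.sum(np.abs(np.diff(w, axis=0)), axis=0)
    return np.concatenate([mav, rms, wl])

# Sliding windows: 200-sample windows with a 100-sample increment.
length, inc = 200, 100
jobs = [(s, length) for s in range(0, signal.shape[0] - length + 1, inc)]

# Each window is independent, so windows can be processed in parallel.
with ThreadPoolExecutor(max_workers=8) as pool:
    features = np.array(list(pool.map(window_features, jobs)))

print("feature matrix shape:", features.shape)
```

Because windows never depend on each other, the speedup reported in the paper comes essentially for free from distributing windows across workers.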

An augmented reality environment to provide visual feedback to amputees during sEMG Data Acquisitions
Buchkapitel ArODES

Francesca Palermo, Matteo Cognolato, Ivan Eggel, Manfredo Atzori, Henning Müller

In Althoefer, Kaspar, Konstantinova, Jelizaveta, Zhang, Ketao, Towards autonomous robotic systems : 20th annual conference, TAROS 2019, London, UK, July 3–5, 2019, Proceedings, Part II  (12 p.). 2019,  Cham : Springer

Link zur Publikation

Zusammenfassung:

Myoelectric hand prostheses have the potential to improve the quality of life of hand amputees. Still, the rejection rate of functional prostheses in the adult population is high. One of the causes is the long time needed to fit the prosthesis and the lack of feedback during training. Moreover, prosthesis control is often unnatural and requires mental effort during the training. Virtual and augmented reality devices can help to mitigate these difficulties and reduce phantom limb pain. Amputees can start training the residual limb muscles with a weightless virtual hand earlier than possible with a real prosthesis. When activating the muscles related to a specific grasp, the subjects receive visual feedback from the virtual hand. To the best of our knowledge, this work presents one of the first portable augmented reality environments for transradial amputees that combines two devices available on the market: the Microsoft HoloLens and the Thalmic Labs Myo. In the augmented environment, rendered by the HoloLens, the user can control a virtual hand with surface electromyography. By using the virtual hand, the user can move objects in augmented reality and train to activate the right muscles for each movement through visual feedback. The environment presented represents a resource for rehabilitation and for scientists. It helps hand amputees to train using prosthetic hands right after surgery. Scientists can use the environment to develop real-time control experiments, without the logistical disadvantages related to dealing with a real prosthetic hand but with the advantages of realistic visual feedback.

Deep learning-based retrieval system for gigapixel histopathology cases and the open access literature
Wissenschaftlicher Artikel ArODES

Roger Schaer, Oscar Alfonso Jiménez del Toro, Sebastian Otálora, Manfredo Atzori, Henning Müller

Journal of pathology informatics,  1 July 2019

Link zur Publikation

Zusammenfassung:

Background: The introduction of digital pathology into clinical practice has led to the development of clinical workflows with digital images, in connection with pathology reports. Still, most of the current work is time‑consuming manual analysis of image areas at different scales. Links with data in the biomedical literature are rare, and a need for search based on visual similarity within whole slide images (WSIs) exists. Objectives: The main objective of the work presented is to integrate content‑based visual retrieval with a WSI viewer in a prototype. Another objective is to connect cases analyzed in the viewer with cases or images from the biomedical literature, including the search through visual similarity and text. Methods: An innovative retrieval system for digital pathology is integrated with a WSI viewer, allowing to define regions of interest (ROIs) in images as queries for finding visually similar areas in the same or other images and to zoom in/out to find structures at varying magnification levels. The algorithms are based on a multimodal approach, exploiting both text information and content‑based image features. Results: The retrieval system allows viewing WSIs and searching for regions that are visually similar to manually defined ROIs in various data sources (proprietary and public datasets, e.g., scientific literature). The system was tested by pathologists, highlighting its capabilities and suggesting ways to improve it and make it more usable in clinical practice. Conclusions: The developed system can enhance the practice of pathologists by enabling them to use their experience and knowledge to control artificial intelligence tools for navigating repositories of images for clinical decision support and teaching, where the comparison with visually similar cases can help to avoid misinterpretations. 
The system is available as open source, allowing the scientific community to test, ideate and develop similar systems for research and clinical practice.

A quantitative taxonomy of human hand grasps
Wissenschaftlicher Artikel ArODES

Francesca Stival, Stefano Michieletto, Matteo Cognolato, Enrico Pagello, Henning Müller, Manfredo Atzori

Journal of neuroengineering and rehabilitation,  2019, vol. 16, no. 28

Link zur Publikation

Zusammenfassung:

A proper modeling of human grasping and of hand movements is fundamental for robotics, prosthetics, physiology and rehabilitation. The taxonomies of hand grasps that have been proposed in scientific literature so far are based on qualitative analyses of the movements and thus they are usually not quantitatively justified. Methods This paper presents to the best of our knowledge the first quantitative taxonomy of hand grasps based on biomedical data measurements. The taxonomy is based on electromyography and kinematic data recorded from 40 healthy subjects performing 20 unique hand grasps. For each subject, a set of hierarchical trees are computed for several signal features. Afterwards, the trees are combined, first into modality-specific (i.e. muscular and kinematic) taxonomies of hand grasps and then into a general quantitative taxonomy of hand movements. The modality-specific taxonomies provide similar results despite describing different parameters of hand movements, one being muscular and the other kinematic. Results The general taxonomy merges the kinematic and muscular description into a comprehensive hierarchical structure. The obtained results clarify what has been proposed in the literature so far and they partially confirm the qualitative parameters used to create previous taxonomies of hand grasps. According to the results, hand movements can be divided into five movement categories defined based on the overall grasp shape, finger positioning and muscular activation. Part of the results appears qualitatively in accordance with previous results describing kinematic hand grasping synergies. Conclusions The taxonomy of hand grasps proposed in this paper clarifies with quantitative measurements what has been proposed in the field on a qualitative basis, thus having a potential impact on several scientific fields.
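The hierarchical-tree construction at the core of such a taxonomy can be sketched with agglomerative clustering (the grasp descriptors below are synthetic; the study's combination of trees across subjects and signal modalities is omitted here):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical descriptors: 20 grasps x 16 features (e.g. averaged
# muscular and kinematic parameters per grasp).
grasps = np.vstack([
    rng.normal(0.0, 0.3, size=(10, 16)),   # e.g. power-type grasps
    rng.normal(2.0, 0.3, size=(10, 16)),   # e.g. precision-type grasps
])

# Agglomerative clustering builds the hierarchical tree (dendrogram)
# that plays the role of a quantitative taxonomy of grasps.
tree = linkage(grasps, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")

print("cluster labels:", labels)
```

Cutting the tree at different depths yields the movement categories; the paper reports five such categories when merging muscular and kinematic trees.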

2018

Muscle synergy analysis of a hand-grasp dataset :
Wissenschaftlicher Artikel ArODES
a limited subset of motor modules may underlie a large variety of grasps

Alessandro Scano, Andrea Chiavenna, Lorenzo Molinari Tosatti, Henning Müller, Manfredo Atzori

Frontiers in Neurorobotics,  2018, vol. 12

Link zur Publikation

Zusammenfassung:

Kinematic and muscle patterns underlying hand grasps have been widely investigated in the literature. However, the identification of a reduced set of motor modules, generalizing across subjects and grasps, may be valuable for increasing the knowledge of hand motor control, and provide methods to be exploited in prosthesis control and hand rehabilitation. Methods: Motor muscle synergies were extracted from a publicly available database including 28 subjects, executing 20 hand grasps selected for daily-life activities. The spatial synergies and temporal components were analyzed with a clustering algorithm to characterize the patterns underlying hand-grasps. Results: Motor synergies were successfully extracted on all 28 subjects. Clustering orders ranging from 2 to 50 were tested. A subset of ten clusters, each one represented by a spatial motor module, approximates the original dataset with a mean maximum error of 5% on reconstructed modules; however, each spatial synergy might be employed with different timing and recruited at different grasp stages. Two temporal activation patterns are often recognized, corresponding to the grasp/hold phase, and to the pre-shaping and release phase. Conclusions: This paper presents one of the biggest analysis of muscle synergies of hand grasps currently available. The results of 28 subjects performing 20 different grasps suggest that a limited number of time dependent motor modules (shared among subjects), correctly elicited by a control activation signal, may underlie the execution of a large variety of hand grasps. However, spatial synergies are not strongly related to specific motor functions but may be recruited at different stages, depending on subject and grasp. This result can lead to applications in rehabilitation and assistive robotics.

Image magnification regression using DenseNet for exploiting histopathology open access content
Buchkapitel ArODES

Sebastian Otálora, Manfredo Atzori, Vincent Andrearczyk, Henning Müller

In Stoyanov, Danail, Computational pathology and ophthalmic medical image analysis  (pp. 148-155). 2018,  Cham : Springer

Link zur Publikation

Zusammenfassung:

Open access medical content databases such as PubMed Central and TCGA offer possibilities to obtain large amounts of images for training deep learning models. Nevertheless, accurate labeling of large-scale medical datasets is not available and poses challenging tasks for using such datasets. Predicting unknown magnification levels and standardizing staining procedures are necessary preprocessing steps for using these data in retrieval and classification tasks. In this paper, a CNN-based regression approach to learn the magnification of histopathology images is presented, comparing two deep learning architectures tailored to regress the magnification. A comparison of the performance of the models is done on a dataset of 34,441 breast cancer patches with several magnifications. The best model, a fusion of DenseNet-based CNNs, obtained a kappa score of 0.888. The methods are also evaluated qualitatively on a set of images from biomedical journals and TCGA prostate patches.

Head-mounted eye gaze tracking devices :
Wissenschaftlicher Artikel ArODES
an overview of modern devices and recent advances

Matteo Cognolato, Manfredo Atzori, Henning Müller

Journal of rehabilitation and assistive technologies engineering,  2018, vol. 5, pp. 1-13

Link zur Publikation

Zusammenfassung:

An increasing number of wearable devices performing eye gaze tracking have been released in recent years. Such devices can lead to unprecedented opportunities in many applications. However, staying updated regarding the continuous advances and gathering the technical features that allow to choose the best device for a specific application is not trivial. The last eye gaze tracker overview was written more than 10 years ago, while more recent devices are substantially improved both in hardware and software. Thus, an overview of current eye gaze trackers is needed. This review fills the gap by providing an overview of the current level of advancement for both techniques and devices, leading finally to the analysis of 20 essential features in six head-mounted eye gaze trackers commercially available. The analyzed characteristics represent a useful selection providing an overview of the technology currently implemented. The results show that many technical advances were made in this field since the last survey. Current wearable devices allow to capture and exploit visual information unobtrusively and in real time, leading to new applications in wearable technologies that can also be used to improve rehabilitation and enable a more active living for impaired persons.

Tumor proliferation assessment of whole slide images
Wissenschaftlicher Artikel ArODES

Mikael Rousson, Martin Hedlund, Mats Andersson, Ludwig Jacobsson, Gunnar Lathen, Bjorn Norell, Oscar Alfonso Jiménez del Toro, Henning Müller, Manfredo Atzori

Medical Imaging 2018 (SPIE) : Digital Pathology,  March 2018, vol. 10581

Link zur Publikation

Zusammenfassung:

Grading whole slide images (WSIs) from patient tissue samples is an important task in digital pathology, particularly for diagnosis and treatment planning. However, this visual inspection task, performed by pathologists, is inherently subjective and has limited reproducibility. Moreover, grading of WSIs is time consuming and expensive. Designing a robust and automatic solution for quantitative decision support can improve the objectivity and reproducibility of this task. This paper presents a fully automatic pipeline for tumor proliferation assessment based on mitosis counting. The approach consists of three steps: i) region of interest selection based on tumor color characteristics, ii) mitosis counting using a deep network based detector, and iii) grade prediction from ROI mitosis counts. The full strategy was submitted and evaluated during the Tumor Proliferation Assessment Challenge (TUPAC) 2016. TUPAC is the first digital pathology challenge grading whole slide images, thus mimicking more closely a real case scenario. The pipeline is extremely fast and obtained the 2nd place for the tumor proliferation assessment task and the 3rd place in the mitosis counting task, among 17 participants. The performance of this fully automatic method is similar to the performance of pathologists and this shows the high quality of automatic solutions for decision support.

2017

Comparison of six electromyography acquisition setups on hand movement classification tasks
Wissenschaftlicher Artikel ArODES

Stefano Pizzolato, Luca Tagliapietra, Matteo Cognolato, Monica Reggiani, Henning Müller, Manfredo Atzori

PLOS ONE,  2017

Link zur Publikation

Zusammenfassung:

Hand prostheses controlled by surface electromyography are promising due to the non-invasive approach and the control capabilities offered by machine learning. Nevertheless, dexterous prostheses are still scarcely widespread due to control difficulties, low robustness and often prohibitive costs. Several sEMG acquisition setups are now available, with costs ranging from a few hundred to several thousand dollars. The objective of this paper is the relative comparison of six acquisition setups on an identical hand movement classification task, in order to help researchers choose the proper acquisition setup for their requirements. The acquisition setups are based on four different sEMG electrodes (including Otto Bock, Delsys Trigno, Cometa Wave + Dormo ECG and two Thalmic Myo armbands) and were used to record more than 50 hand movements from intact subjects with a standardized acquisition protocol. The relative performance of the six sEMG acquisition setups is compared on 41 identical hand movements with a standardized feature extraction and data analysis pipeline aimed at hand movement classification. Comparable classification results are obtained with three acquisition setups, including the Delsys Trigno, the Cometa Wave and the affordable setup composed of two Myo armbands. The results suggest that practical sEMG tests can be performed even when costs are relevant (e.g. in small laboratories, developing countries or use by children). All the presented datasets can be used for offline tests and their quality can easily be compared, as the datasets are publicly available.

Deep multimodal case–based retrieval for large histopathology datasets
Buchkapitel ArODES

Oscar Alfonso Jiménez del Toro, Sebastian Otálora, Manfredo Atzori, Henning Müller

Patch-based techniques in medical imaging : third International Workshop, Patch-MI 2017, held in conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, 2017, Proceedings  (pp. 149-157). 2017,  Cham : Springer

Link zur Publikation

Zusammenfassung:

The current gold standard for interpreting patient tissue samples is the visual inspection of whole-slide histopathology images (WSIs) by pathologists. They generate a pathology report describing the main findings relevant for diagnosis and treatment planning. Searching for similar cases through repositories for differential diagnosis is often not done due to a lack of efficient strategies for medical case-based retrieval. A patch-based multimodal retrieval strategy that retrieves similar pathology cases from a large data set, fusing both visual and text information, is explained in this paper. By fine-tuning a deep convolutional neural network, an automatic representation is obtained for the visual content of weakly annotated WSIs (using only a global cancer score and no manual annotations). The pathology text report is embedded into a category vector of the pathology terms, also in a non-supervised approach. A publicly available data set of 267 prostate adenocarcinoma cases with their WSIs and corresponding pathology reports was used to train and evaluate each modality of the retrieval method. A MAP (Mean Average Precision) of 0.54 was obtained with the multimodal method on a previously unseen test set. The proposed retrieval system can help in the differential diagnosis of tissue samples and during the training of pathologists, exploiting the large amount of pathology data already existing in digital hospital repositories.

Analysis of histopathology images :
Buchkapitel ArODES
from traditional machine learning to deep learning

Oscar Alfonso Jiménez del Toro, Sebastian Otálora, Mats Andersson, Kristian Eurén, Martin Hedlund, Mikael Rousson, Henning Müller, Manfredo Atzori

Biomedical texture analysis : fundamentals, tools and challenges  (pp. 281–314). 2017,  [S. l.] : Elsevier

Link zur Publikation

Zusammenfassung:

Digitizing pathology is a current trend that makes large amounts of visual data available for automatic analysis. It makes it possible to visualize and interpret pathologic cell and tissue samples in high-resolution images with the help of computer tools. This opens the possibility to develop image analysis methods that help pathologists and support their image descriptions (i.e., staging, grading) with objective quantification of image features. Numerous detection, classification and segmentation algorithms for the underlying tissue primitives in histopathology images have been proposed in this respect. To better select the most suitable algorithms for histopathology tasks, biomedical image analysis challenges have evaluated and compared both traditional feature extraction with machine learning and deep learning techniques. This chapter provides an overview of methods addressing the analysis of histopathology images, as well as a brief description of the tasks they aim to solve. It focuses on histopathology images containing textured areas of different types.

Semi-automatic training of an object recognition system in scene camera data using gaze tracking and accelerometers
Buchkapitel ArODES

Matteo Cognolato, Mara Graziani, Francesca Giordaniello, Gianluca Saetta, Franco Bassetto, Peter Brugger, Barbara Caputo, Henning Müller, Manfredo Atzori

Computer Vision Systems : 11th International Conference, ICVS 2017, Shenzhen, China, July 10-13, 2017  (pp. 175-184). 2017,  Cham : Springer

Link zur Publikation

Zusammenfassung:

Object detection and recognition algorithms usually require large, annotated training sets. The creation of such datasets requires expensive manual annotation. Eye tracking can help in the annotation procedure. Humans use vision constantly to explore the environment and plan motor actions, such as grasping an object. In this paper we investigate the possibility to semi-automatically train object recognition with eye tracking, accelerometer and scene camera data, learning from the natural hand-eye coordination of humans. Our approach involves three steps. First, sensor data are recorded using eye tracking glasses, used in combination with the accelerometers and surface electromyography that are usually applied when controlling prosthetic hands. Second, a set of patches is extracted automatically from the scene camera data while grasping an object. Third, a convolutional neural network is trained and tested using the extracted patches. Results show that the parameters of eye-hand coordination can be used to train an object recognition system semi-automatically. These can be exploited with proper sensors to fine-tune a convolutional neural network for object detection and recognition. This approach opens interesting options to train computer vision and multi-modal data integration systems and lays the foundations for future applications in robotics. In particular, this work targets the improvement of prosthetic hands by recognizing the objects that a person may wish to use. However, the approach can easily be generalized.

Elettromiografia, protesica e robotica in rapido progresso verso l'amputazione funzionale :
Wissenschaftlicher Artikel ArODES
i risultati del progetto Ninapro [Electromyography, prosthetics and robotics in rapid progress toward functional amputation: the results of the Ninapro project]

Manfredo Atzori, Cesare Tiengo, Franco Bassetto, Henning Müller

Rivista di chirurgia della mano,

Link zur Publikation

Zusammenfassung:

Hand amputation can dramatically affect the capabilities of a person. Improving the functionality of robotic prosthetic hands is thus a challenge. The integration of advanced prosthetic and robotic technologies with functional amputations may make the non-invasive, natural control of robotic hand prostheses a reality in the near future. Scientific research and the prosthetic market are rapidly advancing towards the natural control of dexterous robotic prosthetic hands. Myoelectric hand prostheses with many degrees of freedom are commercially available, and recent advances in scientific research suggest that their natural control can be performed in real life through pattern recognition and the integration of multimodal data. However, robustness is still not sufficient to transfer scientific results to real life. In this work we describe the Ninapro (Non Invasive Adaptive Prosthetics) database, which aims to study the relationships between sEMG, hand movement, force and clinical parameters. The data are publicly available to research groups worldwide. The Ninapro database allowed several important results to be obtained, including: showing that up to 11 hand movements can be recognized without any training in amputated subjects; showing that multimodal data can strongly improve movement recognition; and showing that several clinical parameters (including remaining forearm percentage and phantom limb sensation) are related to the capability of amputees to control the remnant muscles in the stump. The Ninapro results, in combination with other achievements in the scientific literature, suggest that future "functional amputation" surgery procedures may better integrate with prosthetic robotic limbs and contribute to solving natural control problems.

2016

Deep learning with convolutional neural networks applied to electromyography data :
Wissenschaftlicher Artikel ArODES
a resource for the classification of movements for prosthetic hands

Manfredo Atzori, Matteo Cognolato, Henning Müller

Frontiers in Neurorobotics,  September 2016, vol. 10

Link zur Publikation

Zusammenfassung:

Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for the natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to run several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results, comparable to the average of the classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate whether larger networks can increase sEMG classification accuracy too.

Effect of clinical parameters on the control of myoelectric robotic prosthetic hands
Wissenschaftlicher Artikel ArODES

Manfredo Atzori, Henning Müller, et al.

Journal of rehabilitation research and development,  2016, vol. 53, no. 3, pp. 345-358

Link zur Publikation

Zusammenfassung:

Improving the functionality of prosthetic hands with noninvasive techniques is still a challenge. Surface electromyography (sEMG) currently gives limited control capabilities; however, the application of machine learning to the analysis of sEMG signals is promising and has recently been applied in practice, but many questions still remain. In this study, we recorded the sEMG activity of the forearm of 11 male subjects with transradial amputation who were mentally performing 40 hand and wrist movements. The classification performance and the number of independent movements (defined as the subset of movements that could be distinguished with >90% accuracy) were studied in relationship to clinical parameters related to the amputation. The analysis showed that classification accuracy and the number of independent movements increased significantly with phantom limb sensation intensity, remaining forearm percentage, and time since amputation. The classification results suggest the possibility of naturally controlling up to 11 movements of a robotic prosthetic hand with almost no training. Knowledge of the relationship between classification accuracy and clinical parameters adds new information regarding the nature of phantom limb pain as well as other clinical parameters, and it can lay the foundations for future "functional amputation" procedures in surgery.

2015

Control capabilities of myoelectric robotic prostheses by hand amputees :
Wissenschaftlicher Artikel ArODES
a scientific research and market overview

Manfredo Atzori, Henning Müller

Frontiers in systems neuroscience,

Link zur Publikation

Zusammenfassung:

Hand amputation can dramatically affect the capabilities of a person. Cortical reorganization occurs in the brain, but the motor and somatosensory cortex can interact with the remnant muscles of the missing hand even many years after the amputation, making it possible to restore the capabilities of hand amputees through myoelectric prostheses. Myoelectric hand prostheses with many degrees of freedom are commercially available, and recent advances in rehabilitation robotics suggest that their natural control can be performed in real life. The first commercial products exploiting pattern recognition to recognize the movements have recently been released; however, the most common control systems are still usually unnatural and must be learned through long training. Dexterous and naturally controlled robotic prostheses can become a reality in the everyday life of amputees, but the path still requires many steps. This mini-review aims to improve the situation by giving an overview of the advancements in the commercial and scientific domains, in order to outline the current and future chances in this field and to foster the integration between market and scientific research.

Combining unsupervised feature learning and Riesz wavelets for histopathology image representation: application to identifying anaplastic medulloblastoma
Buchkapitel ArODES

Manfredo Atzori, Henning Müller, et al.

Medical image computing and computer-assisted intervention – MICCAI 2015  (pp. 581-588). 2015,  Cham : Springer International Publishing

Link zur Publikation

Zusammenfassung:

Medulloblastoma (MB) is a type of brain cancer that represents roughly 25% of all brain tumors in children. In the anaplastic medulloblastoma subtype, it is important to identify the degree of irregularity and lack of organization of cells, as this correlates with disease aggressiveness and is of clinical value when evaluating patient prognosis. This paper presents an image representation to distinguish these subtypes in histopathology slides. The approach combines learned features from (i) an unsupervised feature learning method using topographic independent component analysis that captures scale, color and translation invariances, and (ii) learned linear combinations of Riesz wavelets calculated at several orders and scales, capturing the granularity of multiscale rotation-covariant information. The contribution of this work is to show that the combination of two complementary approaches for feature learning (unsupervised and supervised) improves the classification performance. Our approach outperforms the best methods in the literature with statistical significance, achieving 99% accuracy over region-based data comprising 7,500 square regions from 10 patient studies diagnosed with medulloblastoma (5 anaplastic and 5 non-anaplastic).

2014

Electromyography data for non-invasive naturally controlled robotic hand prostheses
Wissenschaftlicher Artikel ArODES

Manfredo Atzori, Arjan Gijsberts, Claudio Castellini, Anne-Gabrielle Mittaz Hager, Barbara Caputo, Simone Elsig, Giorgio Giatsidis, Franco Bassetto, Henning Müller

Scientific data,  vol. 1, no. 140053, pp. 2-13

Link zur Publikation

Zusammenfassung:

Recent advances in rehabilitation robotics suggest that it may be possible for hand-amputated subjects to recover at least a significant part of the lost hand functionality. The control of robotic prosthetic hands using non-invasive techniques is still a challenge in real life: myoelectric prostheses give limited control capabilities, the control is often unnatural and must be learned through long training times. Meanwhile, scientific literature results are promising but they are still far from fulfilling real-life needs. This work aims to close this gap by allowing worldwide research groups to develop and test movement recognition and force control algorithms on a benchmark scientific database. The database is targeted at studying the relationship between surface electromyography, hand kinematics and hand forces, with the final goal of developing non-invasive, naturally controlled, robotic hand prostheses. The validation section verifies that the data are similar to data acquired in real-life conditions, and that recognition of different hand tasks by applying state-of-the-art signal features and machine-learning algorithms is possible.

Electromyography low pass filtering effects on the classification of hand movements in amputated subjects
Wissenschaftlicher Artikel ArODES

Manfredo Atzori, Henning Müller

Scientific data,  vol. 1, no. 140053, pp. 1-13

Link zur Publikation

Zusammenfassung:

People with transradial hand amputations can control prosthetic hands via surface electromyography (sEMG), but the control systems are limited and usually not natural. In the scientific literature, the application of pattern recognition techniques to classify hand movements in sEMG has led to remarkable results, but the evaluations are usually far from real-life applications with all their uncertainties and noise. Therefore, there is a need to improve the movement classification accuracy in real settings. Smoothing the signal with a low-pass filter is a common pre-processing procedure to remove high-frequency noise. However, the filtering frequency modifies the signal strongly and can therefore affect the classification results. In this paper we analyze the dependence of the classification accuracy on the pre-processing low-pass filtering frequency in 3 hand-amputated subjects performing 50 different movements. The results highlight two main interesting aspects. First, the filtering frequency strongly affects the classification accuracy, and choosing the right frequency between 1 Hz and 5 Hz can improve the accuracy by up to 5%. Second, different subjects obtain the best classification performance at different frequencies. Theoretically, these facts could affect all similar classification procedures, reducing the classification uncertainty. They therefore help bring the field closer to real-life applications, which could deeply change the life of hand-amputated subjects.
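The low-pass pre-processing the paper varies can be sketched with a simple first-order filter (exponential smoothing). This is a minimal stand-in, not the paper's exact filter; the sampling rate and test signal are illustrative assumptions:

```python
import numpy as np

def lowpass(signal, cutoff_hz, fs):
    """First-order low-pass filter (exponential smoothing); a simplified
    stand-in for the pre-processing filter varied between 1 Hz and 5 Hz."""
    rc = 1.0 / (2 * np.pi * cutoff_hz)   # filter time constant
    dt = 1.0 / fs
    alpha = dt / (rc + dt)               # smoothing factor from cutoff
    out = np.empty(len(signal), dtype=float)
    out[0] = signal[0]
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

fs = 2000.0                              # illustrative sEMG sampling rate
t = np.arange(0, 1, 1 / fs)
# slow 2 Hz "envelope" plus fast 300 Hz "noise" component
noisy = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
smooth = lowpass(noisy, cutoff_hz=5.0, fs=fs)
```

With a 5 Hz cutoff the 300 Hz component is strongly attenuated while the 2 Hz envelope largely passes through; sweeping `cutoff_hz` between 1 and 5 Hz would reproduce the kind of frequency sweep analyzed in the paper.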

The movement error rate for evaluation of machine learning methods for sEMG-based hand movement classification
Wissenschaftlicher Artikel ArODES

Arjan Gijsberts, Manfredo Atzori, Claudio Castellini, Henning Müller, Barbara Caputo

IEEE Transactions on Neural Systems and Rehabilitation Engineering,  July 2014, vol. 22, issue 4, pp. 735-744

Link zur Publikation

Zusammenfassung:

There has been increasing interest in applying learning algorithms to improve the dexterity of myoelectric prostheses. In this work, we present a large-scale benchmark evaluation on the second iteration of the publicly released NinaPro database, which contains surface electromyography data for 6 DOF force activations as well as for 40 discrete hand movements. The evaluation involves a modern kernel method and compares the performance of three feature representations and three kernel functions. Both the force regression and movement classification problems can be learned successfully when using a non-linear kernel function, while the exp-χ2 kernel outperforms the more popular Radial Basis Function kernel in all cases. Furthermore, combining surface electromyography and accelerometry in a multimodal classifier results in significant increases in accuracy as compared to when either modality is used individually. Since window-based classification accuracy should not be considered in isolation to estimate prosthetic controllability, we also provide results in terms of classification mistakes and prediction delay. To this end, we propose the Movement Error Rate as an alternative to the standard window-based accuracy. This error rate is insensitive to prediction delays and therefore allows mistakes and delays to be quantified as independent performance characteristics. This type of analysis confirms that the inclusion of accelerometry is superior, as it results in fewer mistakes while at the same time reducing prediction delay.

2024

A full pipeline to analyze lung histopathology images
Konferenz ArODES

Lluis Borràs Ferrís, Simon Püttmann, Niccolò Marini, Simona Vatrano, Filippo Fragetta, Alessandro Caputo, Francesco Ciompi, Manfredo Atzori, Henning Müller

Proceedings of Medical Imaging 2024 : Digital and Computational Pathology

Link zur Konferenz

Zusammenfassung:

Histopathology involves the analysis of tissue samples to diagnose several diseases, such as cancer. The analysis of tissue samples is a time-consuming procedure performed manually by medical experts, namely pathologists. Computational pathology aims to develop automatic methods to analyze Whole Slide Images (WSI), which are digitized histopathology images, showing accurate performance in terms of image analysis. Although the number of available WSIs is increasing, the capacity of medical experts to manually analyze samples is not expanding proportionally. This paper presents a fully automatic pipeline to classify lung cancer WSIs, considering four classes: Small Cell Lung Cancer (SCLC), non-small cell lung cancer divided into LUng ADenocarcinoma (LUAD) and LUng Squamous cell Carcinoma (LUSC), and normal tissue. The pipeline includes a self-supervised algorithm for pre-training the model and Multiple Instance Learning (MIL) for WSI classification. The model is trained with 2,226 WSIs and obtains an AUC of 0.8558 ± 0.0051 and a weighted f1-score of 0.6537 ± 0.0237 for the 4-class classification on the test set. The capability of the model to generalize was evaluated by testing it on the public The Cancer Genome Atlas (TCGA) dataset on LUAD and LUSC classification. In this task, the model obtained an AUC of 0.9433 ± 0.0198 and a weighted f1-score of 0.7726 ± 0.0438.

2023

Robust multiresolution and multistain background segmentation in whole slide images
Konferenz ArODES

Artur Jurgas, Marek Wodzinski, Manfredo Atzori, Henning Müller

The Latest Developments and Challenges in Biomedical Engineering

Link zur Konferenz

Zusammenfassung:

Background segmentation is an important step in the analysis of histopathological images. It allows one to remove irrelevant regions and focus on the tissue of interest. However, background segmentation is challenging due to the variability of stain colors and intensity levels across different images, modalities, and magnification levels. In this paper, we present a learning-based model for histopathology background segmentation based on convolutional neural networks. We compare two multiresolution approaches to deal with the variability of magnification in histopathology images: (i) a model that uses upscaling of smaller patches of the image, and (ii) a model simultaneously trained on multiple resolution levels. Our model is characterized by solid performance across both resolutions and stains (H&E and IHC), achieving good performance on a publicly available dataset. The quantitative scores, in terms of the Dice score, are close to 94.71. The qualitative analysis shows strong performance on previously unseen cases from different distributions and various dyes. We freely release the model, weights, and ground-truth annotations to promote open science and reproducible research.

Artifact augmentation for learning-based quality control of whole slide images
Konferenz ArODES

Artur Jurgas, Marek Wodzinski, Weronika Celniak, Manfredo Atzori, Henning Müller

Proceedings of the 45th Annual International Conference of the IEEE Engineering in Medicine and Biology Society

Link zur Konferenz

Zusammenfassung:

The acquisition of whole slide images is prone to artifacts that can require human control and re-scanning, both in clinical workflows and in research-oriented settings. Quality control algorithms are a first step to overcome this challenge, as they limit the use of low-quality images. Developing quality control systems in histopathology is not straightforward, partly due to the limited availability of data related to this topic. We address the problem by proposing a tool to augment data with artifacts. The proposed method seamlessly generates artifacts from an external library and blends them into a given histopathology dataset. The datasets augmented with the blended artifacts are then used to train an artifact detection network in a supervised way. We use the YOLOv5 model for artifact detection with a slightly modified training pipeline. The proposed tool can be extended into a complete framework for the quality assessment of whole slide images.
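The core blending idea can be sketched as an alpha composite of an artifact patch onto a clean tile. This is a simplified illustration of the augmentation concept, not the released tool; the tile, artifact and opacity mask below are toy assumptions:

```python
import numpy as np

def blend_artifact(image, artifact, alpha_mask, y, x):
    """Alpha-blend an artifact patch into an image tile at (y, x).
    A simplified sketch of artifact augmentation; not the paper's tool."""
    out = image.astype(float).copy()
    h, w = artifact.shape[:2]
    a = alpha_mask[..., None]                    # broadcast over channels
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (1 - a) * region + a * artifact
    return out.clip(0, 255).astype(np.uint8)

tile = np.full((8, 8, 3), 255, dtype=np.uint8)   # white background tile
blob = np.zeros((4, 4, 3), dtype=np.uint8)       # dark artifact (e.g. ink)
mask = np.full((4, 4), 0.5)                      # 50% opacity everywhere
aug = blend_artifact(tile, blob, mask, 2, 2)
```

In a full pipeline, the artifact's bounding box would also be recorded as a label so that a detector such as YOLOv5 can be trained in a supervised way.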

A dexterous hand prosthesis based on additive manufacturing
Konferenz ArODES

Manfredo Atzori, Henning Müller, Alessandro Buosi, F. Reggiani, J. Lazzaro, G. Alberti, C. Tiengo, F. Bassetto, N. Petrone

Eighth national congress of bioengineering proceedings (GNB 2023)

Link zur Konferenz

Zusammenfassung:

Upper limb amputation is a major injury that can strongly affect the daily life of a person. Prosthetic hands that can execute multiple movements are available, but they are expensive and difficult to control. Natural control via pattern recognition is promising, but it is applied in real-life prosthetics only in limited ways. Additive manufacturing and machine learning can revolutionize prosthetics with affordable, open-source solutions that can include 3D-printed prosthetic hands, sockets and dexterous, highly functional control. Nevertheless, there are still intermediate steps to take in this direction. The objective of this paper is to introduce an ongoing project aimed at the development of low-cost, dexterous prosthetic hands to be used in real-life conditions, based on open 3D models, additive manufacturing and machine learning. The results at the current stage of the project include several versions of the prosthetic hand (powered by six servomotors and based on an open design), of the control system (based on open electronic prototyping platforms) and of the socket. Preliminary tests of the hand demonstrate its dexterity and potential, as well as the requirements to improve force. Once fully completed and released, the presented 3D-printed, dexterous, open-source prosthetic hand has the potential to improve the life of hand amputees worldwide and to foster improvements in research and in future commercial prosthetic hands.

2022

Unsupervised method for intra-patient registration of brain magnetic resonance images based on objective function weighting by inverse consistency :
Konferenz ArODES
contribution to the BraTS-Reg challenge

Marek Wodzinski, Artur Jurgas, Niccolò Marini, Manfredo Atzori, Henning Müller

Proceedings of the Brain Lesion (BrainLes) workshop 2022 (held in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI))

Link zur Konferenz

Zusammenfassung:

Registration of brain scans with pathologies is a difficult yet important research area. The importance of this task motivated researchers to organize the BraTS-Reg challenge, jointly with the IEEE ISBI 2022 and MICCAI 2022 conferences. The organizers introduced the task of aligning pre-operative to follow-up magnetic resonance images of glioma. The main difficulties are connected with the missing data, leading to large, nonrigid, and noninvertible deformations. In this work, we describe our contributions to both editions of the BraTS-Reg challenge. The proposed method is based on combined deep learning and instance optimization approaches. First, instance optimization enriches the state-of-the-art LapIRN method to improve generalizability and fine-detail preservation. Second, an additional objective function weighting is introduced, based on inverse consistency. The proposed method is fully unsupervised and exhibits high registration quality and robustness. The quantitative results on the external validation set, in terms of the mean of median absolute error and robustness respectively, are: (i) IEEE ISBI 2022 edition: 1.85 and 0.86; (ii) MICCAI 2022 edition: 1.71 and 0.86. Future work could transfer the inverse consistency-based weighting directly into the deep network training.

A multi-task multiple instance learning algorithm to analyze large whole slide images from bright challenge 2022
Konferenz ArODES

Niccolò Marini, Marek Wodzinski, Manfredo Atzori, Henning Müller

Proceedings of the 2022 IEEE International Symposium on Biomedical Imaging Challenges (ISBIC)

Link zur Konferenz

Zusammenfassung:

Malignant lesions in breast tissue specimen whole slide images (WSIs) may indicate a dangerous diagnosis, such as cancer. However, WSI analysis is time-consuming and expensive, requiring the work of expert pathologists. This paper presents a method for the 2022 BRIGHT Challenge, which involves the analysis of breast WSIs. The organizers provided over 550 breast WSIs and over 3,900 regions of interest (ROIs) to develop and validate methods for breast cancer images. The method presented in this work is based on a Multiple Instance Learning instance-based Convolutional Neural Network (CNN), allowing the combination of strongly annotated data (from ROIs) and weakly annotated data (from WSIs) via the optimization of a multi-task loss function. Furthermore, during CNN training, the input patches are clustered and filtered according to their entropy, to reduce the non-informative content used to train the model. The CNN reaches an averaged F1-score of 0.63±0.02 on the 3-class classification task and of 0.39±0.08 on the 6-class classification task on the validation partition, and an averaged F1-score of 0.65 on the cancer risk classification task and of 0.45 on the cancer risk sub-typing task, considering the best result achieved on the test partition. These results show that Multiple Instance Learning instance-based CNNs may be a good resource to tackle this kind of problem.
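The entropy-based patch filtering mentioned above can be sketched as follows: compute the Shannon entropy of each patch's intensity histogram and discard near-uniform (background-like) patches. The bin count and threshold are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def patch_entropy(patch, bins=16):
    """Shannon entropy (bits) of a grayscale patch's intensity histogram.
    Low entropy ~ near-uniform patch; bin count/threshold are illustrative."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                         # drop empty bins (log2 undefined)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
blank = np.full((32, 32), 240)                 # background-like: low entropy
textured = rng.integers(0, 256, (32, 32))      # tissue-like: high entropy
keep = [p for p in (blank, textured) if patch_entropy(p) > 1.0]
```

Only the textured patch passes the threshold, so uninformative white-background patches are excluded from training batches.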

2021

Multi-scale task multiple instance learning for the classification of digital pathology images with global annotations
Konferenz ArODES

Niccoló Marini, Sebastian Otálora, Francesco Ciompi, Gianmaria Silvello, Stefano Marchesin, Simona Vatrano, Genziana Buttafuoco, Manfredo Atzori, Henning Müller

Proceedings of the third MICCAI workshop on Computational Pathology (COMPAY 2021)

Link zur Konferenz

Zusammenfassung:

Whole slide images (WSIs) are high-resolution digitized images of tissue samples, stored at different magnification levels. WSI datasets often include only global annotations, available thanks to pathology reports. Global annotations refer to global findings in the high-resolution image and do not include information about the location of the regions of interest or the magnification levels used to identify a finding. This fact can limit the training of machine learning models, as WSIs are usually very large and each magnification level includes different information about the tissue. This paper presents a Multi-Scale Task Multiple Instance Learning (MuSTMIL) method, allowing data paired with global labels to be better exploited and combining contextual and detailed information identified at several magnification levels. The method is based on a multiple instance learning framework and on a multi-task network that combines features from several magnification levels and produces multiple predictions (a global one and one for each magnification level involved). MuSTMIL is evaluated on colon cancer images, on binary and multi-label classification. MuSTMIL shows an improvement in performance in comparison to both single-scale and another multi-scale multiple instance learning algorithm, demonstrating that MuSTMIL can help to better deal with global labels targeting full and multi-scale images. Keywords: Multi-Scale Multiple Instance Learning, Multiple Instance Learning, Multiscale approach, Computational pathology.

Classification of noisy free-text prostate cancer pathology reports using natural language processing
Konferenz ArODES

Anjani Dhrangadhariya, Sebastian Otálora, Manfredo Atzori, Henning Müller

Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10–15, 2021, Proceedings, Part I

Link zur Konferenz

Zusammenfassung:

Free-text reporting has been the main approach in clinical pathology practice for decades. Pathology reports are an essential information source to guide the treatment of cancer patients and for cancer registries, which process high volumes of free-text reports annually. Information coding and extraction are usually performed manually, which is an expensive and time-consuming process, since reports vary widely between institutions, usually contain noise and do not have a standard structure. This paper presents strategies based on natural language processing (NLP) models to classify noisy free-text pathology reports of high and low-grade prostate cancer from the open-source repository TCGA (The Cancer Genome Atlas). We used paragraph vectors to encode the reports and compared them with n-grams and TF-IDF representations. The best representation, based on a distributed bag of words of paragraph vectors, obtained an f1-score of 0.858 and an AUC of 0.854 using a logistic regression classifier. We investigate the classifier's most relevant words in each case using the LIME interpretability tool, confirming the classifier's usefulness for selecting relevant diagnostic words. Our results show the feasibility of using paragraph embeddings to represent and classify pathology reports.
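As an illustration of the TF-IDF baseline representation mentioned above, here is a minimal pure-Python sketch. The study itself used paragraph vectors and a logistic regression classifier; the function name, the toy reports and the smoothed-IDF scheme here are hypothetical:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weights for a small corpus of whitespace-tokenized reports:
    term frequency scaled by a smoothed inverse document frequency."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                       # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (tf[t] / len(toks)) * math.log((1 + n) / (1 + df[t]))
                        for t in tf})
    return vectors

# Toy "pathology reports" (invented for illustration)
reports = ["high grade carcinoma infiltrating tissue",
           "low grade benign tissue"]
vecs = tfidf_vectors(reports)
# Terms shared by all reports ("grade", "tissue") get zero weight under the
# smoothed IDF, while discriminative terms ("carcinoma", "benign") stay positive.
```

In practice a library vectorizer (e.g. scikit-learn's `TfidfVectorizer`) would replace this hand-rolled version before feeding a classifier.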

Semi-supervised learning with a teacher-student paradigm for histopathology classification: a resource to face data heterogeneity and lack of local annotations
Konferenz ArODES

Niccolò Marini, Sebastian Otálora, Henning Müller, Manfredo Atzori

Pattern Recognition. ICPR International Workshops and Challenges : Virtual Event, January 10–15, 2021, Proceedings, Part I

Link zur Konferenz

Zusammenfassung:

Training classification models in the medical domain is often difficult due to data heterogeneity (related to acquisition procedures) and due to the difficulty of getting sufficient amounts of annotations from specialized experts. This is particularly true in digital pathology, where models do not generalize easily. This paper presents a novel approach for the generalization of models in conditions where heterogeneity is high and annotations are few. The approach relies on a semi-supervised teacher/student paradigm applied to different datasets and annotations. The paradigm combines a small amount of strongly-annotated data with a large amount of unlabeled data, for training two Convolutional Neural Networks (CNN): the teacher and the student model. The teacher model is trained with strong labels and used to generate pseudo-labeled samples from the unlabeled data. The student model is trained combining the pseudo-labeled samples and a small amount of strongly-annotated data. The paradigm is evaluated on the student model performance of Gleason pattern and Gleason score classification in prostate cancer images and compared with a fully-supervised learning approach for training the student model. In order to evaluate the capability of the approach to generalize, the datasets used are highly heterogeneous in visual characteristics and are collected from two different medical institutions. The models trained with the teacher/student paradigm show an improvement in performance above the fully-supervised training. The models generalize better on both datasets, despite the inter-dataset heterogeneity, alleviating overfitting. The classification performance shows an improvement both in the classification of Gleason patterns at patch level and in Gleason score classification evaluated at WSI level.
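The teacher step of such a teacher/student paradigm can be sketched as follows. The confidence threshold is an assumption (a common pseudo-labeling variant, not stated in the abstract), and the toy teacher stands in for a trained CNN:

```python
import numpy as np

def pseudo_label(teacher_predict, unlabeled, threshold=0.9):
    """Teacher step: keep only unlabeled samples whose top predicted class
    probability exceeds a confidence threshold; use the argmax as label."""
    probs = teacher_predict(unlabeled)          # shape (n_samples, n_classes)
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return unlabeled[keep], probs[keep].argmax(axis=1)

# Toy teacher: 1-D "images", two classes separated at 0.5
def toy_teacher(x):
    p1 = 1.0 / (1.0 + np.exp(-20 * (x[:, 0] - 0.5)))   # class-1 probability
    return np.stack([1 - p1, p1], axis=1)

unlabeled = np.array([[0.05], [0.5], [0.95]])
x_pl, y_pl = pseudo_label(toy_teacher, unlabeled, threshold=0.9)
# The ambiguous sample at the decision boundary (0.5) is filtered out.
```

The student would then be trained on `x_pl`/`y_pl` together with the small strongly-annotated set, as described in the abstract.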

2020

Effect of movement type on the classification of electromyography data for the control of dexterous prosthetic hands
Konferenz ArODES

Manfredo Atzori, Elisa Rosanda, Giorgio Pajardi, Franco Bassetto, Henning Müller

Proceedings of the 8th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob)

Link zur Konferenz

Zusammenfassung:

Hand amputations can dramatically affect the capabilities of a person. Machine learning is often applied to Surface Electromyography (sEMG) to control dexterous prosthetic hands. However, it can be affected by low robustness in real-life conditions, mainly due to data variability depending on various factors (such as the position of the limb, of the electrodes or the characteristics of the subject). This paper aims at improving the understanding of sEMG for prosthesis control by introducing the type of hand movement as a variable that influences classification performance in both intact subjects and hand amputees. Five hand amputees and five matched intact subjects were selected from the publicly available NinaPro database. The subjects were recorded while repeating 40 hand movements. Movement classification was performed on the sEMG data with a window-based approach (concatenating several signal features) and a Random Forest classifier. The results show that some hand movements are classified significantly better than others (p<0.001) and there is a correspondence in how well the same hand movements are classified in intact subjects and hand amputees. This work leads to advancements in the domain, highlighting the importance of the acquisition protocol for sEMG studies and suggesting that specific movements can lead to better performance for the control of prosthetic hands.
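The window-based feature concatenation described above can be sketched as follows. The specific features (mean absolute value, RMS, waveform length) and window sizes are assumptions typical of sEMG pipelines rather than taken from the paper, and the Random Forest step is omitted:

```python
import numpy as np

def semg_windows(signal, win=200, step=100):
    """Slide a fixed-length window over a multi-channel sEMG recording.
    signal: array of shape (n_samples, n_channels)."""
    return [signal[s:s + win] for s in range(0, len(signal) - win + 1, step)]

def window_features(w):
    """Concatenate per-channel features: mean absolute value, RMS and
    waveform length -- a typical hand-crafted sEMG feature set."""
    mav = np.abs(w).mean(axis=0)
    rms = np.sqrt((w ** 2).mean(axis=0))
    wl = np.abs(np.diff(w, axis=0)).sum(axis=0)
    return np.concatenate([mav, rms, wl])

rng = np.random.default_rng(1)
sig = rng.standard_normal((1000, 8))      # toy 8-channel recording
X = np.stack([window_features(w) for w in semg_windows(sig)])
# Each row of X is one window's feature vector, ready for a classifier
# such as a Random Forest.
```

With a 200-sample window and 100-sample step over 1000 samples, this yields 9 windows of 24 features each (3 features × 8 channels).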

Semi-weakly supervised learning for prostate cancer image classification with teacher-student deep convolutional networks
Konferenz ArODES

Sebastian Otálora, Niccolò Marini, Henning Müller, Manfredo Atzori

Proceedings of the Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3ID 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020 : Interpretable and Annotation-Efficient Learning for Medical Image Computing

Link zur Konferenz

Zusammenfassung:

Deep Convolutional Neural Networks (CNN) are at the backbone of the state-of-the-art methods to automatically analyze Whole Slide Images (WSIs) of digital tissue slides. One challenge to train fully-supervised CNN models with WSIs is providing the required amount of costly, manually annotated data. This paper presents a semi-weakly supervised model for classifying prostate cancer tissue. The approach follows a teacher-student learning paradigm that allows combining a small amount of annotated data (tissue microarrays with regions of interest traced by pathologists) with a large amount of weakly-annotated data (whole slide images with labels extracted from the diagnostic reports). The task of the teacher model is to annotate the weakly-annotated images. The student is trained with the pseudo-labeled images annotated by the teacher and fine-tuned with the small amount of strongly annotated data. The methods are evaluated on the classification of four Gleason patterns and the Gleason score in prostate cancer images. Results show that the teacher-student approach significantly improves the performance of the fully-supervised CNN, both at the Gleason pattern level in tissue microarrays (respectively κ=0.594±0.022 and κ=0.559±0.034) and at the Gleason score level in WSIs (respectively κ=0.403±0.046 and κ=0.273±0.12). Our approach opens the possibility of transforming large weakly-annotated (and unlabeled) datasets into valuable sources of supervision for training robust CNN models in computational pathology.

Training deep neural networks for small and highly heterogeneous MRI datasets for cancer grading
Konferenz ArODES

Marek Wodzinski, Tommaso Banzato, Manfredo Atzori, Vincent Andrearczyk, Yashin Dicente Cid, Henning Müller

Proceedings of the 42nd Annual International Conferences of the IEEE Engineering in Medicine and Biology Society

Link zur Konferenz

Zusammenfassung:

Using medical images recorded in clinical practice has the potential to be a game-changer in the application of machine learning for medical decision support. Thousands of medical images are produced in daily clinical activity. The diagnosis of medical doctors on these images represents a source of knowledge to train machine learning algorithms for scientific research or computer-aided diagnosis. However, the requirement of manual data annotations and the heterogeneity of images and annotations make it difficult to develop algorithms that are effective on images from different centers or sources (scanner manufacturers, protocols, etc.). The objective of this article is to explore the opportunities and the limits of highly heterogeneous biomedical data, since many medical data sets are small and entail a challenge for machine learning techniques. Particularly, we focus on a small data set targeting meningioma grading. Meningioma grading is crucial for patient treatment and prognosis. It is normally performed by histological examination, but recent articles showed that it is possible to do it also on magnetic resonance images (MRI), thus non-invasively. Our data set consists of 174 T1-weighted MRI images of patients with meningioma, divided into 126 benign and 48 atypical/anaplastic cases, acquired using 26 different MRI scanners and 125 acquisition protocols, which shows the enormous variability in the data set. The performed preprocessing steps include tumor segmentation, spatial image normalization and data augmentation based on color and affine transformations. The preprocessed cases are passed to a carefully trained 2-D convolutional neural network. Accuracy above 74% was obtained, with the high-grade tumor recall above 74%. The results are encouraging considering the limited size and high heterogeneity of the data set. The proposed methodology can be useful for other problems involving classification of small and highly heterogeneous data sets.

Systematic comparison of deep learning strategies for weakly supervised Gleason grading
Konferenz ArODES

Sebastian Otálora, Manfredo Atzori, Amjad Khan, Oscar Alfonso Jiménez del Toro, Vincent Andrearczyk, Henning Müller

Proceedings of the SPIE Medical Imaging 2020

Link zur Konferenz

Zusammenfassung:

Prostate cancer (PCa) is one of the most frequent cancers in men. Its grading is required before initiating its treatment. The Gleason Score (GS) aims at describing and measuring the regularity in gland patterns observed by a pathologist on the microscopic or digital images of prostate biopsies and prostatectomies. Deep Learning based (DL) models are the state-of-the-art computer vision techniques for Gleason grading, learning high-level features with high classification power. However, for obtaining robust models with clinical-grade performance, a large number of local annotations are needed. Previous research showed that it is feasible to detect low and high-grade PCa from digitized tissue slides relying only on the less expensive report-level (weakly) supervised labels, thus global rather than local labels. Despite this, few articles focus on classifying the finer-grained GS classes with weakly supervised models. The objective of this paper is to compare weakly supervised strategies for classification of the five classes of the GS from the whole slide image, using the global diagnostic label from the pathology reports as the only source of supervision. We compare different models trained on handcrafted features, shallow and deep learning representations. The training and evaluation are done on the publicly available TCGA-PRAD dataset, comprising 341 whole slide images of radical prostatectomies, where small patches are extracted within tissue areas and assigned the global report label as ground truth. Our results show that DL networks and class-wise data augmentation outperform other strategies and their combinations, reaching a kappa score of κ = 0.44, which could be further improved with a larger dataset or combining both strong and weakly supervised models.

Exploiting biomedical literature to mine out a large multimodal dataset of rare cancer studies
Konferenz ArODES

Anjani Dhrangadhariya, Oscar Alfonso Jiménez del Toro, Vincent Andrearczyk, Manfredo Atzori, Henning Müller

Proceedings of Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications

Link zur Konferenz

Zusammenfassung:

The overall lower survival rate of patients with rare cancers can be explained, among other factors, by the limitations resulting from the scarce available information about them. Large biomedical data repositories, such as PubMed Central Open Access (PMC-OA), have been made freely available to the scientific community and could be exploited to advance the clinical assessment of these diseases. A multimodal approach using visual deep learning and natural language processing methods was developed to mine out 15,028 light microscopy human rare cancer images. The resulting data set is expected to foster the development of novel clinical research in this field and help researchers to build resources for machine learning.

Generalizing convolution neural networks on stain color heterogeneous data for computational pathology
Konferenz ArODES

Amjad Khan, Manfredo Atzori, Sebastian Otálora, Vincent Andrearczyk, Henning Müller

Proceedings of medical imaging 2020 : digital pathology

Link zur Konferenz

Zusammenfassung:

Hematoxylin and Eosin (H&E) is one of the main tissue stains used in histopathology to discriminate between nuclei and extracellular material while performing a visual analysis of the tissue. However, histopathology slides are often characterized by stain color heterogeneity, due to different tissue preparation settings at different pathology institutes. Stain color heterogeneity poses challenges for machine learning-based computational analysis, increasing the difficulty of producing consistent diagnostic results and systems that generalize well. In other words, it is challenging for a deep learning architecture to generalize on stain color heterogeneous data, when the data are acquired at several centers, and particularly if test data are from a center not present in the training data. In this paper, several methods that deal with stain color heterogeneity are compared regarding their capability to solve center-dependent heterogeneity. Systematic and extensive experimentation is performed on a normal versus tumor tissue classification problem. Stain color normalization and augmentation procedures are used while training a convolutional neural network (CNN) to generalize on unseen data from several centers. The performance is compared on an internal test set (test data from the same pathology institutes as the training set) and an external test set (test data from institutes not included in the training set). This also allows to measure generalization performance. An improved performance is observed when the predictions of the two best-performing stain color normalization methods with augmentation are aggregated. An average AUC and F1-score on the external test set are observed as 0.892±0.021 and 0.817±0.032, compared to the baseline 0.860±0.027 and 0.772±0.024 respectively.
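A minimal sketch of a stain color augmentation step, in the spirit of the procedures compared above. The per-channel RGB jitter here is a simple stand-in (the normalization methods actually compared in such studies are more involved), and all names are hypothetical:

```python
import numpy as np

def stain_color_jitter(img, alpha=0.05, beta=0.01, rng=None):
    """Randomly scale and shift each color channel of an RGB patch --
    a simple stand-in for stain augmentation, perturbing color to
    simulate inter-laboratory staining variation."""
    if rng is None:
        rng = np.random.default_rng()
    x = img.astype(np.float64) / 255.0
    scale = 1.0 + rng.uniform(-alpha, alpha, size=3)   # per-channel gain
    shift = rng.uniform(-beta, beta, size=3)           # per-channel offset
    out = np.clip(x * scale + shift, 0.0, 1.0)
    return (out * 255).astype(np.uint8)

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)   # toy H&E patch
aug = stain_color_jitter(patch, rng=np.random.default_rng(1))
```

Applying such jitter during training exposes the CNN to color variation it will meet in data from unseen centers, which is the generalization problem the abstract addresses.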

Studying public medical images from the Open Access literature and social networks for model training and knowledge extraction
Konferenz ArODES

Henning Müller, Vincent Andrearczyk, Oscar Alfonso Jiménez del Toro, Anjani Dhrangadhariya, Roger Schaer, Manfredo Atzori

Proceedings of the 26th International Conference on Multimedia Modeling (MMM2020)

Link zur Konferenz

Zusammenfassung:

Medical imaging research has long suffered from problems getting access to large collections of images due to privacy constraints and to the high costs that annotating images by physicians causes. With public scientific challenges and funding agencies fostering data sharing, repositories, particularly on cancer research in the US, are becoming available. Still, data and annotations are most often available on narrow domains and specific tasks. The medical literature (particularly articles contained in MedLine) has been used for research for many years as it contains a large amount of medical knowledge. Most analyses have focused on text, for example creating semi-automated systematic reviews, aggregating content on specific genes and their functions, or allowing for information retrieval to access specific content. The amount of research on images from the medical literature has been more limited, as MedLine abstracts are available publicly but no images are included. With PubMed Central, all the biomedical open access literature has become accessible for analysis, with images and text in structured format. This makes the use of such data easier than extracting it from PDF. This article reviews existing work on analyzing images from the biomedical literature and develops ideas on how such images can become useful and usable for a variety of tasks, including finding visual evidence for rare or unusual cases. These resources offer possibilities to train machine learning tools, increasing the diversity of available data and thus possibly the robustness of the classifiers. Examples with histopathology data available on Twitter already show promising possibilities. This article adds links to other sources that are accessible, for example via the ImageCLEF challenges.

2019

Analyzing the trade-off between training session time and performance in myoelectric hand gesture recognition during upper limb movement
Konferenz ArODES

Matteo Cognolato, Lorenzo Brigato, Yashin Dicente Cid, Manfredo Atzori, Henning Müller

Proceedings of the 16th International Conference on Rehabilitation Robotics (ICORR) IEEE 2019

Link zur Konferenz

Zusammenfassung:

Although remarkable improvements have been made, the natural control of hand prostheses in everyday life is still challenging. Changes in limb position can considerably affect the robustness of pattern recognition-based myoelectric control systems, even though various strategies have been proposed to mitigate this effect. In this paper, we investigate the possibility of selecting a set of training movements that is robust to limb position change, performing a trade-off between training time and accuracy. Four able-bodied subjects were recorded while following a training protocol for myoelectric hand prostheses control. The protocol is composed of 210 combinations of arm positions, forearm orientations, wrist orientations and hand grasps. To the best of our knowledge, it is among the most complete including changes in limb positions. A training reduction paradigm was used to select subsets of training movements from a group of subjects that were tested on the left-out subject's data. The results show that a reduced training set (30 to 50 movements) allows a substantial reduction of the training time while maintaining reasonable performance, and that the trade-off between performance and training time appears to depend on the chosen classifier. Although further improvements can be made, the results show that properly selected training sets can be a viable strategy to reduce the training time while maximizing the performance of the classifier against variations in limb position.

2018

Hand gesture classification in transradial amputees using the myo armband classifier
Konferenz ArODES

Matteo Cognolato, Manfredo Atzori, Diego Faccio, Cesare Tiengo, Franco Bassetto, Roger Gassert, Henning Müller

Proceedings of the 7th IEEE RAS/EMBS International Conference on BioRob, 2018

Link zur Konferenz

Visual cues to improve myoelectric control of upper limb prostheses
Konferenz ArODES

Andrea Gigli, Arjan Gijsberts, Valentina Gregori, Matteo Cognolato, Manfredo Atzori, Barbara Caputo

Proceedings of the 7th IEEE International Conference on BioRob 2018

Link zur Konferenz

Zusammenfassung:

The instability of myoelectric signals over time complicates their use to control poly-articulated prosthetic hands. To address this problem, studies have tried to combine surface electromyography with modalities that are less affected by the amputation and the environment, such as accelerometry and gaze information. In the latter case, the hypothesis is that a subject looks at the object he or she intends to manipulate, and that the visual characteristics of that object help to better predict the desired hand posture. The method we present in this paper automatically detects stable gaze fixations and uses the visual characteristics of the fixated objects to improve the performance of a multimodal grasp classifier. Particularly, the algorithm identifies online the onset of a prehension and the corresponding gaze fixations, obtains high-level feature representations of the fixated objects by means of a Convolutional Neural Network, and combines them with traditional surface electromyography in the classification stage. Tests have been performed on data acquired from five intact subjects who performed ten types of grasps on various objects during both static and functional tasks. The results show that the addition of gaze information increases the grasp classification accuracy, that this improvement is consistent for all grasps and concentrated during the movement onset and offset.
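The abstract does not detail the fixation detection algorithm, so the sketch below uses a standard dispersion-threshold (I-DT style) approach as a hedged illustration; thresholds, names and the toy gaze trace are assumptions:

```python
import numpy as np

def dispersion(pts):
    """Spatial spread of a window of gaze points: (max-min) in x plus in y."""
    return (pts[:, 0].max() - pts[:, 0].min()) + (pts[:, 1].max() - pts[:, 1].min())

def detect_fixations(gaze, max_disp=1.0, min_len=5):
    """Dispersion-threshold fixation detection: grow a window while the
    points stay within max_disp, emit [start, end) sample indices once it
    is at least min_len samples long."""
    fixations, i, n = [], 0, len(gaze)
    while i + min_len <= n:
        if dispersion(gaze[i:i + min_len]) <= max_disp:
            j = i + min_len
            while j < n and dispersion(gaze[i:j + 1]) <= max_disp:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1                      # slide past non-fixation samples
    return fixations

# Toy trace: a fixation at (0,0), a 3-sample saccade, a fixation at (5,5)
gaze = np.concatenate([np.zeros((20, 2)),
                       np.array([[1.25, 1.25], [2.5, 2.5], [3.75, 3.75]]),
                       np.full((20, 2), 5.0)])
fixations = detect_fixations(gaze)
```

In the multimodal pipeline described above, image patches around each detected fixation would then be fed to the CNN feature extractor.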

Quantitative hierarchical representation and comparison of hand grasps from electromyography and kinematic data
Konferenz ArODES

Francesca Stival, Stefano Michieletto, Enrico Pagello, Henning Müller, Manfredo Atzori

Proceedings of the Learning Applications for Intelligent Autonomous Robots (LAIAR) 2018

Link zur Konferenz

Zusammenfassung:

Modeling human grasping and hand movements is important for robotics, prosthetics and rehabilitation. Several qualitative taxonomies of hand grasps have been proposed in the scientific literature. However, it is not clear how well they correspond to subjects' movements.

Determining the scale of image patches using a deep learning approach
Konferenz ArODES

Sebastian Otálora, Manfredo Atzori, Oscar Perdomo, Mats Andersson

Proceedings of IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)

Link zur Konferenz

Zusammenfassung:

Detecting the scale of histopathology images is important because it allows exploiting various sources of information to train deep learning (DL) models to recognise biological structures of interest. Large open access databases with images exist, such as The Cancer Genome Atlas (TCGA) and PubMed Central, but very few models can use such datasets because of the variability of the data in color and scale and a lack of metadata. In this article, we present and compare two deep learning architectures to detect the scale of histopathology image patches. The approach is evaluated on a patch dataset from whole slide images of the prostate, obtaining a Cohen's kappa coefficient of 0.9897 in the classification of patches with a scale of 5×, 10× and 20×. The good results represent a first step towards magnification detection in histopathology images that can help to solve the problem on more heterogeneous data sources.

2017

Convolutional neural networks for an automatic classification of prostate tissue slides with high–grade Gleason score
Konferenz ArODES

Oscar Alfonso Jiménez del Toro, Manfredo Atzori, Sebastian Otálora, Mats Andersson, Kristian Eurén, Martin Hedlund, Peter Rönnquist, Henning Müller

Proceedings of SPIE Medical Imaging 2017 : Digital Pathology

Link zur Konferenz

Zusammenfassung:

The Gleason grading system was developed for assessing prostate histopathology slides. It is correlated to the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the daily generated large amounts of pathology data. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures, training them with patches extracted from automatically generated regions of interest rather than from manually segmented ones. Relevant parameters for training the deep learning model, such as the size and number of patches as well as the inclusion or not of data augmentation, are compared between the tested deep learning architectures. 235 prostate tissue WSIs with their pathology report from the publicly available TCGA data set were used. An accuracy of 78% was obtained in a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7–8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques. Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks for big data sets and to guide the visual inspection of these images.

2015

Live ECG readings using Google Glass in emergency situations
Konferenz ArODES

Henning Müller, Roger Schaer, Fanny Salamin, Oscar Alfonso Jiménez del Toro, Manfredo Atzori, Antoine Widmer

Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 2015

Link zur Konferenz

Zusammenfassung:

Most sudden cardiac problems require rapid treatment to preserve life. In this regard, electrocardiograms (ECG) shown on vital parameter monitoring systems help medical staff to detect problems. In some situations, such monitoring systems may display information in a less than convenient way for medical staff. For example, vital parameters are displayed on large screens outside the field of view of a surgeon during cardiac surgery. This may lead to losing time and to mistakes when problems occur during cardiac operations. In this paper we present a novel approach to display vital parameters such as the second derivative of the ECG rhythm and heart rate close to the field of view of a surgeon using Google Glass. As a preliminary assessment, we ran an experimental study to verify the possibility for medical staff to identify abnormal ECG rhythms from Google Glass. This study compares readings of 6 ECG rhythms from a 13.3-inch laptop screen and from the prism of Google Glass. Seven medical residents in internal medicine participated in the study. The preliminary results show that there is no difference between identifying these 6 ECG rhythms from the laptop screen versus Google Glass. Both allow close to perfect identification of the 6 common ECG rhythms. This shows the potential of connected glasses such as Google Glass to be useful in selected medical applications.

Effects of prosthesis use on the capability
Konferenz ArODES

Manfredo Atzori, Giorgio Giatsidis, Franco Bassetto, Henning Müller, Anne-Gabrielle Mittaz Hager, Simone Elsig

Proceedings of 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

Link zur Konferenz

Zusammenfassung:

The natural control of robotic prosthetic hands with non-invasive techniques is still a challenge: myoelectric prostheses currently give some control capabilities; the application of pattern recognition techniques is promising and recently started to be applied in practice, but still many questions are open in the field. In particular, the effects of clinical factors on movement classification accuracy and the capability to control myoelectric prosthetic hands are analyzed in very few studies. The effect of regularly using prostheses on movement classification accuracy has been previously studied, showing differences between users of myoelectric and cosmetic prostheses. In this paper we compare users of myoelectric and body-powered prostheses and intact subjects. 36 machine-learning methods are applied to data from 6 amputees and 40 intact subjects performing 40 movements. Then, statistical analyses are performed in order to highlight significant differences between the groups of subjects. The statistical analyses do not show significant differences between the two groups of amputees, while significant differences are obtained between amputees and intact subjects. These results constitute new information in the field and suggest new interpretations of previous hypotheses, thus adding precious information towards natural control of robotic prosthetic hands.

The Ninapro database: a resource for sEMG naturally controlled robotic hand prosthetics
Konferenz ArODES

Manfredo Atzori, Henning Müller

Proceedings of 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

Link zur Konferenz

Zusammenfassung:

The dexterous natural control of robotic prosthetic hands with non-invasive techniques is still a challenge: surface electromyography gives some control capabilities, but these are limited, often not natural and require long training times; pattern recognition techniques recently started to be applied in practice. While results in the scientific literature are promising, they have to be improved to meet real needs. The Ninapro database aims to improve the field of naturally controlled robotic hand prosthetics by permitting worldwide research groups to develop and test movement recognition and force control algorithms on a benchmark database. Currently, the Ninapro database includes data from 67 intact subjects and 11 amputated subjects performing approximately 50 different movements. The data are aimed at permitting the study of the relationships between surface electromyography, kinematics and dynamics. The Ninapro acquisition protocol was created in order to be easy to reproduce. Currently, the number of datasets included in the database is increasing thanks to the collaboration of several research groups.

Advancements towards non invasive, naturally controlled robotic hand prostheses
Konferenz ArODES

Franco Bassetto, Giorgio Giatsidis, Henning Müller, Manfredo Atzori

Proceedings of the XX Congress of the Federation of European Societies for Surgery of the Hand 2015

Link zur Konferenz

Advancements towards a functional amputation of the hand
Konferenz ArODES

Henning Müller, Manfredo Atzori, Cesare Tiengo, Giorgio Giatsidis, Franco Bassetto

Proceedings of the 26th European Association of Plastic Surgeons (EURAPS) Annual Meeting 2015

Link zur Konferenz

Zusammenfassung:

The natural control of prosthetic robotic hands via surface electromyography (sEMG) remains a challenge, even though the flexor-extensor muscular system of the fingers is usually partially preserved in patients with trans-radial amputations. In this work we analyze the Ninapro database (Non Invasive Adaptive Hand Prosthetics, http://ninapro.hevs.ch), which is currently the largest sEMG database of hand movements. The aim of the work is to identify relationships between clinical parameters of the amputation and movement recognition accuracy, in order to foster the integration between amputation surgery and innovative robotic hand prostheses.

Errungenschaften

2018

Electromyography control of the 3D printed prosthesis HANDi Hand

2018; Prototype

Collaborateurs: Atzori Manfredo

Link zur Errungenschaft

