PEOPLE@HES-SO – Directory and Skills Repository

Eggel Ivan

Scientific associate HES A

Main skills

Container technologies

Backend development

Web frontend development

Server administration

Cloud Computing

Continuous Integration

Continuous Deployment


Main contract

Scientific associate HES A

Phone: +41 58 606 90 59

Office: TP118

HES-SO Valais-Wallis - Haute Ecole de Gestion
Route de la Plaine 2, Case postale 80, 3960 Sierre, CH
HEG - VS
Publications

2021

Breast histopathology with high-performance computing and deep learning
Scientific article ArODES

Mara Graziani, Ivan Eggel, François Deligand, Martin Bobák, Vincent Andrearczyk, Henning Müller

Computing and informatics,  2020, vol. 39, no. 4, pp. 780-807

Link to the publication

Abstract:

The increasingly intensive collection of digitized images of tumor tissue over the last decade has made histopathology a demanding application in terms of computational and storage resources. With images containing billions of pixels, the need for optimizing and adapting histopathology to large-scale data analysis is compelling. This paper presents a modular pipeline with three independent layers for the detection of tumorous regions in digital specimens of breast lymph nodes with deep learning models. Our pipeline can be deployed either on local machines or on high-performance computing resources with a containerized approach. The need for expertise in high-performance computing is removed by the self-sufficient structure of Docker containers, whereas a large possibility for customization is left in terms of deep learning models and hyperparameter optimization. We show that by deploying the software layers on different infrastructures we optimize both the data preprocessing and the network training times, further increasing the scalability of the application to datasets of approximately 43 million images. The code is open source and available on GitHub.
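As an illustration of the containerized deployment described in the abstract, the sketch below drives a two-stage pipeline (preprocessing, then training) through Docker from a host machine; the image names, paths and command-line arguments are hypothetical and not taken from the paper's open-source code.

import subprocess

def run_stage(image: str, args: list, data_dir: str, out_dir: str) -> None:
    # Run one pipeline layer in an isolated container, with the input data mounted read-only.
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{data_dir}:/data:ro",   # input slides, read-only
        "-v", f"{out_dir}:/output",     # stage results
        image, *args,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Layer 1 (hypothetical image): tile the whole-slide images into patches.
    run_stage("histo-preprocess:latest", ["--patch-size", "224"], "/scratch/slides", "/scratch/patches")
    # Layer 2 (hypothetical image): train the deep learning model on the extracted patches.
    run_stage("histo-train:latest", ["--epochs", "10"], "/scratch/patches", "/scratch/model")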

2019

An augmented reality environment to provide visual feedback to amputees during sEMG Data Acquisitions
Book chapter ArODES

Francesca Palermo, Matteo Cognolato, Ivan Eggel, Manfredo Atzori, Henning Müller

In Althoefer, Kaspar, Konstantinova, Jelizaveta, Zhang, Ketao, Towards autonomous robotic systems: 20th annual conference, TAROS 2019, London, UK, July 3–5, 2019, Proceedings, Part II (12 p.). 2019, Cham: Springer

Link to the publication

Abstract:

Myoelectric hand prostheses have the potential to improve the quality of life of hand amputees. Still, the rejection rate of functional prostheses in the adult population is high. One of the causes is the long time needed to fit the prosthesis and the lack of feedback during training. Moreover, prosthesis control is often unnatural and requires mental effort during training. Virtual and augmented reality devices can help to address these difficulties and reduce phantom limb pain. Amputees can start training the residual limb muscles with a weightless virtual hand earlier than is possible with a real prosthesis. When activating the muscles related to a specific grasp, the subjects receive visual feedback from the virtual hand. To the best of our knowledge, this work presents one of the first portable augmented reality environments for transradial amputees that combines two devices available on the market: the Microsoft HoloLens and the Thalmic Labs Myo. In the augmented environment, rendered by the HoloLens, the user can control a virtual hand with surface electromyography. By using the virtual hand, the user can move objects in augmented reality and train to activate the right muscles for each movement through visual feedback. The environment presented is a resource for rehabilitation and for scientists. It helps hand amputees to train using prosthetic hands right after surgery. Scientists can use the environment to develop real-time control experiments, without the logistical disadvantages of dealing with a real prosthetic hand but with the advantages of realistic visual feedback.

2018

Evaluation-as-a-service for the computational sciences: overview and outlook
Scientific article ArODES

Frank Hopfgartner, Allan Hanbury, Henning Müller, Ivan Eggel, Krisztian Balog, Torben Brodt, Gordon V. Cormack, Jimmy Lin, Jayashree Kalpathy-Cramer, Noriko Kando, Makoto P. Kato, Anastasia Krithara, Tim Gollub, Martin Potthast, Evelyne Viegas, Simon Mercer

Journal of data and information quality,  October 2018, vol. 10, no. 4, pp. 1-32

Link to the publication

Abstract:

Evaluation in empirical computer science is essential to show progress and assess the technologies developed. Several research domains such as information retrieval have long relied on systematic evaluation to measure progress: here, the Cranfield paradigm of creating shared test collections, defining search tasks, and collecting ground truth for these tasks has persisted up until now. In recent years, however, several new challenges have emerged that do not fit this paradigm very well: extremely large data sets, confidential data sets as found in the medical domain, and rapidly changing data sets as often encountered in industry. Also, crowdsourcing has changed the way that industry approaches problem-solving, with companies now organizing challenges and handing out monetary awards to incentivize people to work on their challenges, particularly in the field of machine learning. This white paper is based on discussions at a workshop on Evaluation-as-a-Service (EaaS). EaaS is the paradigm of not providing data sets to participants and having them work on the data locally, but keeping the data central and allowing access via Application Programming Interfaces (API), Virtual Machines (VM) or other possibilities to ship executables. The objectives of this white paper are to summarize and compare the current approaches and to consolidate the experiences made with them in order to outline the next steps of EaaS, particularly towards sustainable research infrastructures. The white paper summarizes several existing approaches to EaaS, analyzes their usage scenarios as well as their advantages and disadvantages, reviews the many factors influencing EaaS, and describes the motivations of the various stakeholders, from funding agencies to challenge organizers, researchers and participants, to industry interested in supplying real-world problems for which they require solutions.

2017

Using the cloud as a platform for evaluation and data preparation
Book chapter ArODES

Ivan Eggel, Roger Schaer, Henning Müller

Cloud-based benchmarking of medical image analysis (pp. 15-30). 2017, Cham: Springer

Link to the publication

Abstract:

This chapter gives a brief overview of the VISCERAL Registration System that is used for all the VISCERAL Benchmarks and is released as open source on GitHub. The system can be accessed by both participants and administrators, reducing the direct participant–organizer interaction and handling the documentation available for each of the benchmarks organized by VISCERAL. The upload of the VISCERAL usage and participation agreements is also integrated, as well as the attribution of virtual machines that allow participation in the VISCERAL Benchmarks. The second part summarizes the various steps of the continuous evaluation chain, mainly consisting of the submission, execution and storage of algorithms as well as the evaluation of results. The final part details the cloud infrastructure, describing the process of defining requirements, selecting a cloud solution provider, setting up the infrastructure and running the benchmarks. The chapter concludes with a short experience report outlining the encountered challenges and lessons learned.

2016

Cloud-based evaluation of anatomical structure segmentation and landmark detection algorithms: VISCERAL anatomy benchmarks
Scientific article ArODES

Oscar Alfonso Jiménez del Toro, Henning Müller, Ivan Eggel, Roger Schaer, Yashin Dicente Cid

IEEE transactions on medical imaging, 2016, vol. 35, no. 11, pp. 2459-2475

Link to the publication

Abstract:

Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help automate parts of this manual process. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data; the benchmark administrators then run the virtual machines privately to objectively compare performance on an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated with the fusion of the participant algorithms on a larger set of non-manually-annotated medical images, are available to the research community.

2015

Analyzing image search behaviour of radiologists: semantics and prediction of query results
Scientific article ArODES

Maria De-Arteaga, Ivan Eggel, Charles Kahn, Henning Müller

Journal of digital imaging,  October 2015, vol. 28, Issue 5, pp. 537-546

Link to the publication

Abstract:

Log files of information retrieval systems that record user behavior have been used to improve the outcomes of retrieval systems, understand user behavior, and predict events. In this article, a log file of the ARRS GoldMiner search engine containing 222,005 consecutive queries is analyzed. Time stamps are available for each query, as well as masked IP addresses, which makes it possible to identify queries from the same person. This article describes the ways in which physicians (or Internet searchers interested in medical images) search and proposes potential improvements by suggesting query modifications. For example, many queries contain only a few terms and therefore are not specific; others contain spelling mistakes or non-medical terms that likely lead to poor or empty results. One of the goals of this report is to predict the number of results a query will have, since such a model allows search engines to automatically propose query modifications in order to avoid result lists that are empty or too large. This prediction is made based on characteristics of the query terms themselves. Prediction of empty results has an accuracy above 88%, and thus can be used to automatically modify the query to avoid empty result sets for a user. The semantic analysis and data on reformulations done by users in the past can aid the development of better search systems, particularly to improve results for novice users. This paper therefore gives important ideas to better understand how people search and how to use this knowledge to improve the performance of specialized medical search engines.
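For illustration, a toy version of such an empty-result predictor could be trained on simple features of the query terms, as sketched below; the features and example data are invented and are not the model or log data used in the paper.

from sklearn.linear_model import LogisticRegression

def query_features(query: str) -> list:
    # Simple characteristics of the query terms (illustrative, not the paper's feature set).
    terms = query.lower().split()
    return [
        len(terms),                                        # number of terms
        sum(len(t) for t in terms) / max(len(terms), 1),   # average term length
        float(any(len(t) > 15 for t in terms)),            # unusually long term, often a typo
    ]

# Toy training data: 1 means the query returned an empty result list.
queries = ["pneumothorax", "lft rib fractur xray pediatric", "mri brain", "asdkjhqweoiuzxc"]
empty = [0, 1, 0, 1]

model = LogisticRegression().fit([query_features(q) for q in queries], empty)
print(model.predict([query_features("chest x ray nodule")]))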

Using MapReduce for large-scale medical image analysis
Scientific article ArODES

Dimitrios Markonis, Roger Schaer, Ivan Eggel, Henning Müller, Adrien Depeursinge

arXiv

Link to the publication

Abstract:

The growth of the amount of medical image data produced on a daily basis in modern hospitals forces the adaptation of traditional medical image analysis and indexing approaches towards scalable solutions. The number of images and their dimensionality have increased dramatically during the past 20 years. We propose solutions for large-scale medical image analysis based on parallel computing and algorithm optimization. The MapReduce framework is used to speed up and make possible three large-scale medical image processing use-cases: (i) parameter optimization for lung texture segmentation using support vector machines, (ii) content-based medical image indexing, and (iii) three-dimensional directional wavelet analysis for solid texture classification. A cluster of heterogeneous computing nodes was set up in our institution using Hadoop, allowing for a maximum of 42 concurrent map tasks. The majority of the machines used are desktop computers that are also used for regular office work. The cluster proved to be minimally invasive and stable. The runtimes of each of the three use-cases were significantly reduced compared to a sequential execution. Hadoop provides an easy-to-employ framework for data analysis tasks that scales well for many tasks but requires optimization for specific tasks.
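To illustrate the MapReduce pattern mentioned above, here is a minimal Hadoop Streaming-style mapper and reducer in Python; the task shown (counting descriptor labels) is a stand-in example, not one of the paper's three use-cases.

import sys

def mapper():
    # Each input line: "<image_id>\t<descriptor_label>"; emit "<label>\t1".
    for line in sys.stdin:
        parts = line.rstrip("\n").split("\t")
        if len(parts) == 2:
            print(f"{parts[1]}\t1")

def reducer():
    # Hadoop sorts mapper output by key; sum the counts for each label.
    current, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current and current is not None:
            print(f"{current}\t{count}")
            count = 0
        current = key
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()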

Comparing image search behaviour in the ARRS GoldMiner search engine and a clinical PACS/RIS
Scientific article ArODES

Henning Müller, Maria De-Arteaga, Ivan Eggel, Bao Do, Daniel Rubin, Charles Kahn Jr.

Journal of Biomedical Informatics,  2015, vol. 56, pp. 57-64

Link to the publication

Abstract:

Information search has changed the way we manage knowledge, and the ubiquity of information access has made search a frequent activity, whether via Internet search engines or increasingly via mobile devices. Medical information search is in this respect no different, and much research has been devoted to analyzing the way in which physicians aim to access information. Medical image search is a much smaller domain but has gained much attention as it has different characteristics than search for text documents. While web search log files have been analysed many times to better understand user behaviour, the log files of hospital internal systems for search in a PACS/RIS (Picture Archival and Communication System, Radiology Information System) have rarely been analysed. Such a comparison between a hospital PACS/RIS search and a web system for searching images of the biomedical literature is the goal of this paper. The objectives are to identify similarities and differences in the search behaviour of the two systems, which could then be used to optimize existing systems and build new search engines. Log files of the ARRS GoldMiner medical image search engine (freely accessible on the Internet) containing 222,005 queries, and log files of Stanford's internal PACS/RIS search called radTF containing 18,068 queries were analysed. Each query was preprocessed and all query terms were mapped to the RadLex (Radiology Lexicon) terminology, a comprehensive lexicon of radiology terms created and maintained by the Radiological Society of North America, so that the semantic content in the queries and the links between terms could be analysed and synonyms for the same concept could be detected. RadLex was mainly created for use in radiology reports, to aid structured reporting and the preparation of educational material (Langlotz, 2006) [1]. In standard medical vocabularies such as MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System), specific radiology terms are often underrepresented; therefore RadLex was considered the best option for this task. The results show a surprising similarity between the usage behaviour in the two systems, but several subtle differences can also be noted. The average number of terms per query is 2.21 for GoldMiner and 2.07 for radTF, and the RadLex axes used (anatomy, pathology, findings, ...) have almost the same distribution, with clinical findings being the most frequent and the anatomical entity the second; also, combinations of RadLex axes are extremely similar between the two systems. Differences include a longer length of the sessions in radTF than in GoldMiner (3.4 and 1.9 queries per session on average). Several frequent search terms overlap but some strong differences exist in the details. In radTF the term "normal" is frequent, whereas in GoldMiner it is not. This makes intuitive sense, as in the literature normal cases are rarely described, whereas in clinical work the comparison with normal cases is often a first step. The general similarity in many points is likely due to the fact that users of the two systems are influenced by their daily behaviour in using standard web search engines and follow this behaviour in their professional search. This means that many results and insights gained from standard web search can likely be transferred to more specialized search systems. Still, specialized log files can be used to find out more about reformulations and the detailed strategies users employ to find the right content.
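A toy illustration of the term-normalization step described above, in which free-text query terms are mapped to a controlled vocabulary so that synonyms collapse to one concept; the mini-lexicon below is invented and far smaller than RadLex.

# Map query terms to a controlled vocabulary so synonyms collapse to one concept.
# The mini-lexicon is invented for illustration; RadLex itself is much larger.
LEXICON = {
    "chest": "thorax",
    "thorax": "thorax",
    "xray": "radiography",
    "x-ray": "radiography",
    "radiograph": "radiography",
}

def normalize(query: str) -> list:
    # Keep unknown terms as-is so they can still be counted separately.
    return [LEXICON.get(term, term) for term in query.lower().split()]

print(normalize("Chest X-ray pneumonia"))  # ['thorax', 'radiography', 'pneumonia']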

Conferences

2024

Overview of LifeCLEF 2024: challenges on species distribution prediction and identification
Conference ArODES

Alexis Joly, Lukáš Picek, Stefan Kahl, Hervé Goëau, Vincent Espitalier, Christophe Botella, Diego Marcos, Joaquim Estopinan, Cesar Leblanc, Théo Larcher, Milan Šulc, Marek Hrúz, Maximilien Servajean, Hervé Glotin, Robert Planqué, Willem-Pier Vellinga, Holger Klinck, Tom Denton, Ivan Eggel, Pierre Bonnet, Henning Müller

Experimental IR Meets Multilinguality, Multimodality, and Interaction (CLEF 2024)

Link to the conference

Abstract:

Biodiversity monitoring using machine learning and AI-based approaches is becoming increasingly popular. It provides detailed information on species distribution and ecosystem health at a large scale and contributes to informed decision-making on environmental protection. Species identification based on images and sounds, in particular, is invaluable for facilitating biodiversity monitoring efforts and enabling prompt conservation actions to protect threatened and endangered species. The multiplicity of methods developed, however, makes it important to evaluate their performance on realistic datasets and using standardized evaluation protocols. The LifeCLEF lab has been setting up such evaluations since 2011, encouraging machine learning researchers to work on this topic and promoting the adoption of the technologies developed by stakeholders. The 2024 edition proposes five data-oriented challenges related to the identification and prediction of biodiversity: (i) BirdCLEF: bird call identification in soundscapes, (ii) FungiCLEF: revisiting fungi species recognition beyond 0-1 cost, (iii) GeoLifeCLEF: remote sensing based prediction of species, (iv) PlantCLEF: multi-species identification in vegetation plot images, and (v) SnakeCLEF: revisiting snake species identification in medically important scenarios. This paper overviews the motivation, methodology, and main outcomes of those five challenges.

2023

Overview of LifeCLEF 2023: evaluation of AI models for the identification and prediction of birds, plants, snakes and fungi
Conference ArODES

Alexis Joly, Christophe Botella, Lukáš Picek, Stefan Kahl, Hervé Goëau, Benjamin Deneu, Diego Marcos, Joaquim Estopinan, Cesar Leblanc, Théo Larcher, Rail Chamidullin, Milan Šulc, Marek Hrúz, Maximilien Servajean, Hervé Glotin, Robert Planqué, Willem-Pier Vellinga, Holger Klinck, Tom Denton, Ivan Eggel, Pierre Bonnet, Henning Müller

Experimental IR Meets Multilinguality, Multimodality, and Interaction

Link to the conference

Abstract:

Biodiversity monitoring through AI approaches is essential, as it enables the efficient analysis of vast amounts of data, providing comprehensive insights into species distribution and ecosystem health and aiding in informed conservation decisions. Species identification based on images and sounds, in particular, is invaluable for facilitating biodiversity monitoring efforts and enabling prompt conservation actions to protect threatened and endangered species. The LifeCLEF virtual lab has been promoting and evaluating advances in this domain since 2011. The 2023 edition proposes five data-oriented challenges related to the identification and prediction of biodiversity: (i) BirdCLEF: bird species recognition in long-term audio recordings (soundscapes), (ii) SnakeCLEF: snake identification in medically important scenarios, (iii) PlantCLEF: very large-scale plant identification, (iv) FungiCLEF: fungi recognition beyond 0–1 cost, and (v) GeoLifeCLEF: remote sensing-based prediction of species. This paper overviews the motivation, methodology, and main outcomes of these five challenges.

LifeCLEF 2023 teaser: species identification and prediction challenges
Conference ArODES

Alexis Joly, Hervé Goëau, Stefan Kahl, Lukáš Picek, Christophe Botella, Diego Marcos, Milan Šulc, Marek Hrúz, Titouan Lorieul, Sara Si Moussi, Maximilien Servajean, Benjamin Kellenberger, Elijah Cole, Andrew Durso, Hervé Glotin, Robert Planqué, Willem-Pier Vellinga, Holger Klinck, Tom Denton, Ivan Eggel, Pierre Bonnet, Henning Müller

Advances in Information Retrieval

Link to the conference

Abstract:

Building accurate knowledge of the identity, the geographic distribution and the evolution of species is essential for the sustainable development of humanity, as well as for biodiversity conservation. However, the difficulty of identifying plants, animals and fungi is hindering the aggregation of new data and knowledge. Identifying and naming living organisms is almost impossible for the general public and is often difficult, even for professionals and naturalists. Bridging this gap is a key step towards enabling effective biodiversity monitoring systems. The LifeCLEF campaign, presented in this paper, has been promoting and evaluating advances in this domain since 2011. The 2023 edition proposes five data-oriented challenges related to the identification and prediction of biodiversity: (i) PlantCLEF: very large-scale plant identification from images, (ii) BirdCLEF: bird species recognition in audio soundscapes, (iii) GeoLifeCLEF: remote sensing based prediction of species, (iv) SnakeCLEF: snake recognition in medically important scenarios, and (v) FungiCLEF: fungi recognition beyond 0–1 cost.

2022

Overview of LifeCLEF 2022: an evaluation of machine-learning based species identification and species distribution prediction
Conference ArODES

Alexis Joly, Hervé Goëau, Stefan Kahl, Lukáš Picek, Titouan Lorieul, Elijah Cole, Benjamin Deneu, Maximilien Servajean, Andrew Durso, Hervé Glotin, Robert Planqué, Willem-Pier Vellinga, Amanda Navine, Holger Klinck, Tom Denton, Ivan Eggel, Pierre Bonnet, Milan Šulc, Marek Hrúz

Experimental IR Meets Multilinguality, Multimodality, and Interaction: 13th International Conference of the CLEF Association, CLEF 2022, Bologna, Italy, September 5–8, 2022, Proceedings

Link to the conference

Abstract:

Building accurate knowledge of the identity, the geographic distribution and the evolution of species is essential for the sustainable development of humanity, as well as for biodiversity conservation. However, the difficulty of identifying plants, animals and fungi is hindering the aggregation of new data and knowledge. Identifying and naming living organisms is almost impossible for the general public and is often difficult even for professionals and naturalists. Bridging this gap is a key step towards enabling effective biodiversity monitoring systems. The LifeCLEF campaign, presented in this paper, has been promoting and evaluating advances in this domain since 2011. The 2022 edition proposes five data-oriented challenges related to the identification and prediction of biodiversity: (i) PlantCLEF: very large-scale plant identification, (ii) BirdCLEF: bird species recognition in audio soundscapes, (iii) GeoLifeCLEF: remote sensing based prediction of species, (iv) SnakeCLEF: snake species identification on a global scale, and (v) FungiCLEF: fungi recognition as an open set classification problem. This paper overviews the motivation, methodology and main outcomes of these five challenges.

2020

Overview of LifeCLEF 2020: a system-oriented evaluation of automated species identification and species distribution prediction
Conference ArODES

Alexis Joly, Hervé Goëau, Stefan Kahl, Benjamin Deneu, Maximilien Servajean, Elijah Cole, Lukáš Picek, Rafael Ruiz de Castañeda, Isabelle Bolon, Andrew Durso, Titouan Lorieul, Christophe Botella, Hervé Glotin, Julien Champ, Ivan Eggel, Willem-Pier Vellinga, Pierre Bonnet, Henning Müller

Proceedings of International conference of the cross-language evaluation forum for European languages (CLEF 2020)

Link to the conference

Abstract:

Building accurate knowledge of the identity, the geographic distribution and the evolution of species is essential for the sustainable development of humanity, as well as for biodiversity conservation. However, the difficulty of identifying plants and animals in the field is hindering the aggregation of new data and knowledge. Identifying and naming living plants or animals is almost impossible for the general public and is often difficult even for professionals and naturalists. Bridging this gap is a key step towards enabling effective biodiversity monitoring systems. The LifeCLEF campaign, presented in this paper, has been promoting and evaluating advances in this domain since 2011. The 2020 edition proposes four data-oriented challenges related to the identification and prediction of biodiversity: (i) PlantCLEF: cross-domain plant identification based on herbarium sheets, (ii) BirdCLEF: bird species recognition in audio soundscapes, (iii) GeoLifeCLEF: location-based prediction of species based on environmental and occurrence data, and (iv) SnakeCLEF: snake identification based on image and geographic location.

2018

Distributed container-based evaluation platform for private/large datasets
Conference ArODES

Ivan Eggel, Roger Schaer, Henning Müller

Proceedings of the 17th IEEE International Symposium on Parallel and Distributed Computing (ISPDC 2018)

Link to the conference

Abstract:

The rise of big data and artificial intelligence techniques such as deep learning has led to an exponential increase in stored data in various fields, including medical imaging, genetics and financial trading. Sharing these increasing amounts of data for research is challenging, as privacy risks increase with the size of the data. Physically moving very large datasets to researchers is inconvenient, as downloading them or sending physical hard disks is not optimal. Research on sensitive data is often not possible, as sharing is not legal. The popularity of container-based technologies such as Docker has revolutionized the way applications are deployed, due to their self-sufficient, light-weight and portable nature. In this paper, we propose a novel distributed platform using containers for simple execution and evaluation of research applications on the data owner's infrastructure, bringing the algorithms to the data. This approach avoids the cumbersome transfer of large datasets and can help circumvent problems linked to non-shareable data by providing a sandboxed execution environment with read-only access to the data. At no point do the data leave the data owner's site: researchers have access only to their evaluation results, not to the data themselves. The presented proof-of-concept confirms the feasibility of a distributed container-based evaluation platform for large and/or sensitive data. This has several advantages, including execution of code instead of submission of result files and availability of otherwise inaccessible data. The container architecture allows for minimal computational overhead, no software dependency management on the infrastructure, a distributed runtime environment and isolation of processes from the underlying host system. A version addressing the various identified architectural and security-related challenges has the potential to be deployed in a production setting and would therefore allow researchers to gain insights from previously inaccessible data. One goal is to target hospitals with increasingly strong local infrastructure for storage and computation, needed for artificial intelligence based decision support (genetics and imaging).
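The core idea of bringing the algorithm to the data could look roughly like the sketch below, in which a participant's container runs with read-only data access and no network, and only the metrics it writes are returned; the image name, paths and output format are assumptions, not the platform's actual interface.

import json
import subprocess

def evaluate(participant_image: str, dataset_dir: str, results_dir: str) -> dict:
    # Run the participant's container sandboxed on the data owner's infrastructure.
    subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",               # no network: the data cannot be exfiltrated
            "-v", f"{dataset_dir}:/data:ro",   # dataset mounted read-only, never copied out
            "-v", f"{results_dir}:/results",
            participant_image,
        ],
        check=True,
    )
    # Only the aggregated metrics written by the container leave the site.
    with open(f"{results_dir}/metrics.json") as fh:
        return json.load(fh)

if __name__ == "__main__":
    print(evaluate("participant/algorithm:latest", "/srv/private-dataset", "/srv/run-42"))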

2015

Report on the Evaluation-as-a-Service (EaaS) Expert Workshop
Conference ArODES

Henning Müller, Frank Hopfgartner, Allan Hanbury, Noriko Kando, Simon Mercer, Jayashree Kalpathy-Cramer, Martin Potthast, Tim Gollub, Anastasia Krithara, Jimmy Lin, Krisztian Balog, Ivan Eggel

ACM SIGIR Forum

Link to the conference

Abstract:

In this report, we summarize the outcome of the "Evaluation-as-a-Service" workshop that was held on the 5th and 6th March 2015 in Sierre, Switzerland. The objective of the meeting was to bring together initiatives that use cloud infrastructures, virtual machines, APIs (Application Programming Interface) and related projects that provide evaluation of information retrieval or machine learning tools as a service.

VISCERAL-VISual concept extraction challenge in radiology: organ segmentation: overview, insights and first results
Conference ArODES

Henning Müller, Oscar Alfonso Jiménez del Toro, Marc André Weber, Ivan Eggel, Roger Schaer, Marianne Winterstein, Katharina Grünberg, Allan Hanbury, Georgios Kontokotsios, Abdel Aziz Taha, Georg Langs, Orcun Göksel, Bjoern Menze, Markus Holzer, Markus Krenn

Proceedings Deutscher Röntgenkongress (DRK) 2015; 187 - WISS101_5

Link to the conference

Abstract:

Since only a small portion of the increasing amounts of medical imaging data is accessible during clinical routine, this project aims to provide the necessary data for clinical image assessment in a short time and to conduct competitions to identify successful computational strategies.

Overview of the VISCERAL Challenge at ISBI 2015
Conference ArODES

Henning Müller, Orcun Göksel, Antonio Foncubierta Rodríguez, Oscar Alfonso Jiménez del Toro, Georg Langs, Marc André Weber, Bjoern Menze, Ivan Eggel, Katharina Gruenberg, Marianne Winterstein, Markus Holzer, Markus Krenn, Georgios Kontokotsios, Sokratis Metallidis, Roger Schaer, Abdel Aziz Taha, András Jakab, Tomas Salas Fernández, Allan Hanbury

Proceedings of the Visual Concept Extraction Challenge in Radiology Anatomy3 Organ Segmentation Challenge co-located with IEEE International Symposium on Biomedical Imaging (VISCERAL@ISBI) 2015

Link to the conference

Abstract:

This is an overview paper describing the data and evaluation scheme of the VISCERAL Segmentation Challenge at ISBI 2015. The challenge was organized on a cloud-based virtual-machine environment, where each participant could develop and submit their algorithms. The dataset contains up to 20 anatomical structures annotated in a training and a test set consisting of CT and MR images with and without contrast enhancement. The test set is not accessible to participants, and the organizers run the virtual machines with the submitted segmentation methods on the test data. The results of the evaluation are then presented to the participant, who can opt to make them public on the challenge leaderboard displaying 20 segmentation quality metrics per organ and per modality. The Dice coefficient and mean surface distance are presented herein as representative quality metrics. As a continuous evaluation platform, our segmentation challenge leaderboard will remain open beyond the duration of the VISCERAL project.
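For reference, the Dice coefficient mentioned above compares two binary segmentation masks as 2|A ∩ B| / (|A| + |B|); a small NumPy sketch for illustration, not the challenge's evaluation code.

import numpy as np

def dice(mask_a, mask_b) -> float:
    # Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|).
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True      # 4 ground-truth voxels
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True  # 6 predicted voxels, 4 overlap
print(dice(gt, pred))  # 0.8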

VISCERAL-VISual concept extraction challenge in radiology: segmentation challenge: overview, insights and preliminary results
Conference ArODES

Henning Müller, Katharina Grünberg, Marc André Weber, Oscar Alfonso Jiménez del Toro, Orcun Goksel, Bjoern Menze, Georg Langs, Ivan Eggel, Markus Holzer, Georgios Kontokotsios, Markus Krenn, Roger Schaer, Abdel Aziz Taha, Marianne Winterstein, Allan Hanbury

Proceedings of the 9th European Congress of Radiology (ECR) 2015

Link to the conference

Abstract:

Since only a small portion of the increasing amounts of medical imaging data is accessible during clinical routine, this project aims to provide the necessary data for research and to conduct competitions to identify successful computational strategies.
