Auto-WCEBleedGen Challenge Version V2
Automatic Classification between Bleeding and Non-Bleeding frames and further Detection and Segmentation of the Bleeding Region
IMPORTANT UPDATES
- • Winners Revealed. Check Results.
- • Github links of the top 3 teams released. Check Key Links.
- • We extend our heartfelt gratitude to everyone for the overwhelming responses. The top three teams have been selected and notified via email.
- • Submissions have been CLOSED.
- • Test dataset has been released. Check KEY LINKS. Download version 2 only! Please note that submission guidelines have been updated.
- • Training dataset has been released. Check KEY LINKS.
- • Registrations are CLOSED.
- • We sincerely acknowledge and thank the 31st IEEE International Conference on Image Processing (IEEE ICIP 2024) for this amazing opportunity. Heartfelt gratitude for their support and guidance in conducting this challenge!
OVERVIEW
Gastrointestinal (GI) bleeding is a medical condition characterized by bleeding in the GI tract, which encompasses the oesophagus, stomach, small intestine, large intestine (colon), rectum, and anus. When blood flows into the GI tract, a cascade of risks emerges, ranging from immediate dangers to potential long-term consequences. Excessive blood loss from GI bleeding may lead to a drop in blood pressure, reduced oxygen delivery to organs and tissues, and potentially life-threatening organ dysfunction.
According to the World Health Organization (WHO), GI bleeding is responsible for approximately 300,000 deaths every year globally. These statistics serve as a catalyst for research, propelling innovative treatment modalities and diagnostic advancements aimed at mitigating the dangers posed by GI bleeding. In the last decade, the availability of advanced diagnostic innovations like Wireless Capsule Endoscopy (WCE) has led to a better understanding of GI bleeding in the GI tract. The disposable capsule-shaped device travels through the GI tract via peristalsis and comprises an optical dome, a battery, an illuminator, an imaging sensor, and a radio-frequency transmitter. During the 8-12 hour WCE procedure, a video of the GI tract trajectory is recorded on a device attached to the patient’s belt, producing about 57,000-100,000 frames, which are analysed afterwards by experienced gastroenterologists.
Presently, an experienced gastroenterologist takes approximately 2-3 hours to inspect the captured video of one patient through a frame-by-frame analysis, which is not only time-consuming but also susceptible to human error. In view of the poor patient-to-doctor ratio across the globe, there arises a need for the investigation and development of robust, interpretable, and generalized state-of-the-art Artificial Intelligence (AI) models. These will help reduce the burden on gastroenterologists and save their valuable time through computer-aided classification between bleeding and non-bleeding frames and further detection and segmentation of the bleeding region in that frame.
Auto-WCEBleedGen Challenge Version V1 was a huge success with more than 1200 participants across the globe. It was organized virtually by MISAHUB (Medical Imaging and Signal Analysis Hub), in collaboration with the 8th International Conference on Computer Vision and Image Processing (CVIP 2023), IIT Jammu, India, from August 15 to October 14, 2023. It focused on automatic detection and classification of bleeding and non-bleeding frames in WCE.
Following its success, we bring to you Auto-WCEBleedGen Challenge Version V2, which focuses on automatic classification of bleeding and non-bleeding frames and further detection and segmentation of the bleeding region in that frame. We have updated the annotations of the multiple bleeding sites present in the training dataset (WCEBleedGen). We have also updated the annotations and class labels of the testing dataset (Auto-WCEBleedGen Test) and provided unmarked images of dataset 1.
CHALLENGE
- • The aim of the Auto-WCEBleedGen Challenge Version V2 is to provide an opportunity for the development, testing and evaluation of AI models for automatic classification of bleeding and non-bleeding frames and further detection and segmentation of bleeding region in that frame.
- • The challenge consists of distinct training and test datasets and promotes the development of vendor-independent, interpretable, and generalized AI models.
- • The training dataset consists of 2618 bleeding and non-bleeding WCE frames collected from multiple internet resources and datasets, covering a wide variety of GI bleeding types throughout the GI tract, along with medically validated binary masks and bounding boxes in three formats (txt, XML and YOLO txt). The test dataset is an independently collected WCE dataset containing 564 bleeding and non-bleeding frames of 30 patients suffering from acute, chronic and occult GI bleeding, referred to the Department of Gastroenterology and HNU, All India Institute of Medical Sciences, New Delhi, India.
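For illustration only, the snippet below is a minimal Python sketch of how YOLO-format txt annotations could be parsed into pixel-space bounding boxes, assuming the standard YOLO convention of one `class x_center y_center width height` line per box with values normalized to [0, 1]; the file name and frame size are hypothetical and should be checked against the released dataset.

```python
from pathlib import Path


def load_yolo_boxes(txt_path, img_width, img_height):
    """Return a list of (class_id, x_min, y_min, x_max, y_max) in pixel coordinates."""
    boxes = []
    for line in Path(txt_path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines
        cls, xc, yc, w, h = line.split()
        # De-normalize the YOLO centre/size values to pixels.
        xc, w = float(xc) * img_width, float(w) * img_width
        yc, h = float(yc) * img_height, float(h) * img_height
        boxes.append((int(cls), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2))
    return boxes


# Hypothetical usage for a 224x224 WCE frame:
# boxes = load_yolo_boxes("annotations/img_0001.txt", 224, 224)
```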
IMPORTANT DATES
Events | Dates |
---|---|
Launch of the challenge | January 20, 2024 |
E-Registration | January 20 – February 10, 2024 |
Release of Training Dataset | January 20, 2024 |
Release of Testing Dataset | February 11, 2024 |
Result submission | February 22 – February 24, 2024 |
Announcement of top three winning teams | March 06, 2024 |
Paper Submission (optional) | April 03, 2024 |
Presentation by the winning team (1st position only) – ICIP 2024 | October 27 – October 30, 2024 |
Registration and Rules
Rules for Participation:
- • This challenge will be open to all students (B. Tech/ M. Tech/ Ph.D. of all branches) and professionals across the globe, free of charge.
- • Participants can either register as solo participants or form a team.
- • The results will be submitted in a specific format over email to misahub2023@gmail.com.
- • In the case of ties, the organizing committee may rank teams based on the method's novelty and readability of codes. The organizing committee's decision in this regard will be final.
- • Paper submission in the main conference (ICIP 2024) is highly encouraged.
Rules for Team Formation:
- • A team can have a maximum of 4 participants.
- • Team members can be from the same or different organizations/affiliations.
- • A participant can only be a part of a single team.
- • Only one member from the team has to register for the challenge.
- • One team can only have one registration. Multiple registrations can lead to disqualification.
- • There is no limitation on the number of teams from the same organization/affiliation. (However, a participant can only be part of one team.)
Rules for use of Training Dataset:
- • Download the training dataset and randomly split it in an 80:20 ratio into training and validation sets (a minimal split-and-evaluation sketch is given after this list).
- • Develop a model to first classify bleeding and non-bleeding frames followed by detection and segmentation of bleeding region in that frame.
- • Store the model, associated weights and files.
- • Perform the necessary evaluation for the developed model. Preferred evaluation metrics include: Accuracy, Precision, Recall, F1-Score, and AUC-ROC.
- • Evaluation metrics for Classification: Accuracy, Recall, F1-Score.
- • Evaluation metrics for Detection: Average Precision, Intersection over Union (IoU).
- • Evaluation metrics for Segmentation: Intersection over Union (IoU), Dice Coefficient.
- • Any ONE interpretability plot: CAMs, LIME, SHAP, feature importance, partial dependence, occlusion, model explanations, fairness, etc.
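As a non-authoritative reference, the sketch below shows one way the 80:20 split and the metrics listed above could be computed with scikit-learn and NumPy; the frame names, labels, and predictions are placeholders, and teams are free to compute these metrics with any tooling.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score

# Random 80:20 split of the frame list into training and validation subsets.
all_frames = [f"frame_{i:04d}.png" for i in range(2618)]     # placeholder file names
all_labels = np.random.randint(0, 2, size=len(all_frames))   # placeholder labels
train_x, val_x, train_y, val_y = train_test_split(
    all_frames, all_labels, test_size=0.20, random_state=42, stratify=all_labels
)

# Classification metrics on the validation split (y_pred / y_prob come from your model).
y_pred = np.random.randint(0, 2, size=len(val_y))   # placeholder predictions
y_prob = np.random.rand(len(val_y))                 # placeholder probabilities
print("Accuracy :", accuracy_score(val_y, y_pred))
print("Precision:", precision_score(val_y, y_pred))
print("Recall   :", recall_score(val_y, y_pred))
print("F1-Score :", f1_score(val_y, y_pred))
print("AUC-ROC  :", roc_auc_score(val_y, y_prob))


def iou(pred_mask, true_mask, eps=1e-7):
    """Intersection over Union for binary masks (detection/segmentation metric)."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(inter / (union + eps))


def dice(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient for binary segmentation masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return float(2 * inter / (pred_mask.sum() + true_mask.sum() + eps))
```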
Rules for use of Testing Dataset:
- • Download the testing dataset.
- • Load the stored model, associated weights and files. Classify, detect, and segment all the frames in testing datasets 1 and 2 (separately)*.
- • Perform the necessary evaluation for testing datasets 1 and 2 (separately)*.
- • Prepare an Excel sheet as per the sample submission attached in the zip file of the released testing dataset (a minimal inference and sheet-writing sketch is given after this list).
- • Any ONE interpretability plot: CAMs, LIME, SHAP, feature importance, partial dependence, occlusion, model explanations, fairness, etc.
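The following is a minimal sketch, not a prescribed pipeline, of looping a stored model over the test frames and writing per-frame results to an Excel sheet with pandas; the directory name, the model call, and the column names are hypothetical placeholders, and the actual required columns are defined by the sample submission sheet in the released testing dataset zip.

```python
from pathlib import Path
import pandas as pd

rows = []
for frame_path in sorted(Path("test_dataset_1").glob("*.png")):        # assumed directory name
    # predicted_label, boxes = model.predict(frame_path)               # your stored model here
    predicted_label, boxes = "bleeding", [(0, 10.0, 20.0, 120.0, 140.0)]  # placeholders
    rows.append({
        "Image name": frame_path.name,
        "Predicted class": predicted_label,
        "Bounding boxes": "; ".join(
            f"{b[1]:.0f},{b[2]:.0f},{b[3]:.0f},{b[4]:.0f}" for b in boxes
        ),
    })

# Write one sheet per test dataset, mirroring the "separately" requirement above.
pd.DataFrame(rows).to_excel("results_dataset_1.xlsx", index=False)
```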
Submission Format:
Each team is required to submit their results via EMAIL to misahub2023@gmail.com with the following rules in mind.
- The email should contain
- • Challenge name and Team name as the SUBJECT LINE.
- • Team members' names and affiliations in the BODY OF THE EMAIL.
- • Contact number and email address in the BODY OF THE EMAIL.
- • A link of the GITHUB Repository (PUBLIC) in the BODY OF THE EMAIL.
- • Excel sheet (in xlsx format).
- • Readme file (in PDF format).
- The GITHUB Repository (PUBLIC) should contain the following:
- • Developed code for training, validation, and testing in .py/.mat etc., in a readable format with comments.
- • Stored model, associated weights or files (optional).
- • Any utils, assets, configs, or checkpoints.
- • Excel sheet (in xlsx format).
- • Readme file (in PDF format).
- Note: A sample readme file and excel sheet have been attached in the zip file of the released testing dataset.
Important Notes:
- • The participating teams are requested NOT to utilize any other dataset while training their model.
- • Real-time analysis is highly encouraged.
- • One interpretability plot is a MUST for submission. Submissions without an interpretability plot will be disqualified and NOT evaluated (a minimal occlusion-based sketch is given after this list).
- • Submitted pictures MUST be of high resolution: at least 600 dots per inch (DPI).
- • Generic YOLO-based or CNN-based submissions will be disqualified and NOT evaluated.
- • Separate classification/detection/segmentation-based submissions will be disqualified and NOT evaluated.
- • Participants are allowed to use any number of models, ensembles, combinations, etc. to perform automatic classification, detection, and segmentation of WCE frames.
- • Participants can use any AI model. The model evaluation will be done on ‘uniqueness’, ‘reproducibility’, ‘interpretability’, and ‘feasibility of the model for real-time analyses’.
- • Model files must be uploaded to GITHUB or shared via a Google Drive link to check the ‘reproducibility’ of the code.
- • The GITHUB repository should be PUBLIC. Repositories which require access will NOT be considered for evaluation.
- • Submitted repository MUST be readable, documented properly, and interactive.
- • Participants are requested to STRICTLY follow the submission format.
- • The readme file (in PDF format), excel sheet, and GITHUB repository code will be evaluated for the challenge leaderboard.
- • Participants are requested to acknowledge and cite our datasets for any research purposes. Use of the datasets for commercial products or any purpose other than research is STRICTLY NOT ALLOWED.
- • The participants are requested not to retrain their AI models on the test datasets or modify the true labels. Such entries will be disqualified.
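For reference only, here is a minimal occlusion-sensitivity sketch (one of the accepted interpretability options) that assumes a classifier exposing a hypothetical `predict_proba(batch)` method and images scaled to [0, 1]; any of the other listed methods (CAMs, LIME, SHAP, etc.) is equally acceptable, and the figure is saved at 600 DPI to meet the resolution requirement above.

```python
import numpy as np
import matplotlib.pyplot as plt


def occlusion_heatmap(model, image, patch=16, stride=8):
    """Drop in the predicted bleeding probability when a gray patch hides each region."""
    h, w, _ = image.shape
    base = model.predict_proba(image[None])[0, 1]        # baseline "bleeding" probability
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5      # gray patch (image assumed in [0, 1])
            heat[i, j] = base - model.predict_proba(occluded[None])[0, 1]
    return heat


# Hypothetical usage: overlay the heatmap on the frame and export at 600 DPI.
# heat = occlusion_heatmap(model, frame)
# plt.imshow(frame)
# plt.imshow(heat, alpha=0.5, cmap="jet", extent=(0, frame.shape[1], frame.shape[0], 0))
# plt.axis("off")
# plt.savefig("interpretability_frame_0001.png", dpi=600, bbox_inches="tight")
```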
Criteria of judging a submission:
- • All emails received till February 24, 2024 (result submission closing date) will be considered for evaluation.
- • The GITHUB repository, readme file (in PDF format), and excel sheet (in xlsx format) received for all entries will be downloaded by the organizing team.
- • A common data file will be prepared to compare the evaluation metrics achieved for datasets 1 and 2 (separately)*.
- • A semi-automated python script will be used to determine the best evaluation metrics among all entries.
- • The following checklist will be used to select the top three winning teams:
- • Best evaluation metrics on testing datasets 1 and 2 (separately)*.
- • Best interpretability plots achieved on testing datasets 1 and 2 (separately)*.
- • Model 'uniqueness', 'reproducibility', 'interpretability', and 'feasibility of model for real-time analyses'.
- • Readability of the readme file.
- • Readability of the GITHUB repository.
Datasets To Be Used
Training dataset: WCEBleedGen Version V2
The training dataset consists of 2618 bleeding and non-bleeding WCE frames collected from multiple internet resources and datasets, covering a wide variety of GI bleeding types throughout the GI tract, along with medically validated binary masks and bounding boxes in three formats (txt, XML and YOLO txt). Version V1 of this dataset was utilized as the training dataset in Auto-WCEBleedGen Challenge Version V1.
After the challenge, Version V2 was released on Nov 19, 2023. In this version, the frames with multiple bleeding sites present in Version V1 were re-annotated, and their new XML and YOLO-TXT annotations were added. This dataset is first-of-its-kind, promotes generalized comparison with existing state-of-the-art methods, and aims to contribute to better interpretability and reproducibility of such automated systems.
Testing dataset: AutoWCEBleedGen-Test Dataset Version V2
The AutoWCEBleedGen-Test dataset is an independently collected WCE dataset containing bleeding and non-bleeding frames of 30 patients suffering from acute, chronic and occult GI bleeding, referred to the Department of Gastroenterology and HNU, All India Institute of Medical Sciences (AIIMS), New Delhi, India. It was utilized as the testing dataset in the AutoWCEBleedGen Challenge Version V1. It was only accessible to challenge participants and was shared through a Google Drive link throughout the challenge.
It consists of a total of 564 frames and is divided into datasets 1 and 2. Dataset 1 contains 49 frames which were randomly collected from seven different patients' data at AIIMS. The frames were then annotated by a group of experienced gastroenterologists at AIIMS, and the annotations were marked on the frames. Dataset 2 contains 515 frames which were collected from twenty-three different patients' data; the annotations were NOT marked on the frames. A list of image names with respect to each patient was released for the challenge participants. Dataset 1 was developed for non-sequential, random-frame analysis, and dataset 2 for sequential-frame analysis.
After the challenge, we developed an improved version of the test dataset and released it on Nov 14, 2023 on the Zenodo platform. In the improved version, we updated the annotations (binary masks) of datasets 1 and 2, validated them with the team of gastroenterologists at AIIMS, and provided unmarked images of dataset 1. The testing dataset of the AutoWCEBleedGen Challenge Version V1 was also released as part of it.
For the AutoWCEBleedGen Challenge Version V2, we will release Version V2 of the AutoWCEBleedGen-Test Dataset on February 11, 2024. This version will include medically annotated bounding boxes in three different formats (txt, XML and YOLO txt).
PRIZES
- • Presentation by the winning team (1st position only) at ICIP 2024.
- • E-presentation-video and e-certificates to top three winning teams.
- • E-certificate to each team for participating in the challenge.
- Note: Participation will only be counted if a team submits their work as per the relevant dates decided for the challenge.
- • Challenge paper writing collaboration with the top three winning teams.* (Subject to discussion after the challenge)
RESULTS
S.No. | Position | Team Name | Affiliation |
---|---|---|---|
1 | First Position | ColonNet | Indian Institute of Information Technology, Ranchi |
2 | Second Position | ACVLab | Institute of Data Science, National Cheng Kung University, Taiwan |
3 | Third Position | Failed Wizards | Indian Institute of Technology, Tirupati |
MEET THE TEAM
ORGANISERS
Prof. Nidhi Goel
Dept. of ECE
IGDTUW, Delhi
Palak Handa
Dept. of ECE
DTU, Delhi
Dr. Deepak Gunjan
Dept. of Gastroenterology and HNU
AIIMS Delhi
MISAHUB MEMBERS
Deepti Chhabra
IGDTUW, Delhi
(Website Management)
Anushka Saini
IGDTUW, Delhi
(Registrations, E-mail and Certificates)
Advika Thakur
IGDTUW, Delhi
(Social Media)
Divyansh Nautiyal
GGSIPU, New Delhi
(Dataset Development, Evaluation)
Manas Dhir
GGSIPU, New Delhi
(Evaluation)
Nishu Pandey
IGDTUW, Delhi
(Miscellaneous)
CONTACT
- • For queries, please contact misahub2023@gmail.com.