Capsule Vision 2024 Challenge: Multi-Class Abnormality Classification for Video Capsule Endoscopy
Important Updates
- Result submissions for the Capsule Vision 2024 Challenge are now CLOSED.
- Registration for the Capsule Vision 2024 Challenge is now officially CLOSED.
- Participants must ensure that their final results are validated using the Sanity Checker prior to submission in order to be considered. Check KEY LINKS.
- The testing dataset has been released. Check KEY LINKS.
- The result submission period is October 11, 2024 - October 25, 2024. All participants are requested to follow the submission instructions and rules strictly.
- The final deadline for all submissions is October 25, 2024; no extensions will be granted.
- The challenge website and GitHub repository have been updated accordingly. The arXiv document will be updated on November 25, 2024, upon the official announcement of the results.
- Upon completion of the challenge, the benchmarking pipeline and its performance results based on the validation and testing datasets will be released.
- Participants are not required to compare their results with the benchmarking pipeline, as this will be carried out by the organizing team.
- Balanced accuracy will also be considered a goal metric, along with mean AUC. Please check the updated code on GitHub.
- The training dataset has been released. Check KEY LINKS.
Goal
The aim of the challenge is to provide an opportunity for the development, testing, and evaluation of AI models for the automatic classification of abnormalities captured in video capsule endoscopy (VCE) video frames. It promotes the development of vendor-independent and generalized AI-based pipelines for automatic abnormality classification across 10 class labels:
- Angioectasia
- Bleeding
- Erosion
- Erythema
- Foreign body
- Lymphangiectasia
- Polyp
- Ulcer
- Worms
- Normal
Data
Training and Validation Data
The training and validation datasets have been released and are accessible here.
Testing Data
The testing dataset has been released and is accessible here.
Evaluation
Goal Metrics
- Balanced Accuracy
- Mean AUC (see the computation sketch after the metric lists below)
Other Metrics
- AUC-ROC
- Specificity
- Mean Specificity
- F1 Score
- Mean F1 Score
- Average Precision
- Mean Average Precision
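For reference, below is a minimal sketch of how the two goal metrics could be computed with scikit-learn. It is not the official evaluation code (see the sample code on GitHub for that); the function name, class ordering, and input format are illustrative assumptions.

```python
# Minimal sketch (not the official evaluation code): computing the two goal
# metrics with scikit-learn. The class ordering and input shapes are assumptions.
import numpy as np
from sklearn.metrics import balanced_accuracy_score, roc_auc_score

# Assumed alphabetical-style ordering of the 10 challenge classes.
CLASS_LABELS = [
    "Angioectasia", "Bleeding", "Erosion", "Erythema", "Foreign Body",
    "Lymphangiectasia", "Normal", "Polyp", "Ulcer", "Worms",
]

def goal_metrics(y_true: np.ndarray, y_prob: np.ndarray) -> dict:
    """y_true: (N,) integer class indices; y_prob: (N, 10) predicted class probabilities."""
    y_pred = y_prob.argmax(axis=1)
    # Balanced accuracy: recall averaged over the 10 classes.
    bal_acc = balanced_accuracy_score(y_true, y_pred)
    # Mean AUC: one-vs-rest ROC AUC, macro-averaged over the 10 classes.
    mean_auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
    return {"balanced_accuracy": bal_acc, "mean_auc": mean_auc}
```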
Submission Instructions and Rules
- The detailed submission instructions and challenge rules are available on arXiv.
- The sample code is available on GitHub for your reference.
- The sample submission report is available on Overleaf.
Prizes
- Cash prizes sponsored by the 9th International Conference on Computer Vision & Image Processing (CVIP 2024):
- 1st Prize: 200 €
- 2nd Prize: 150 €
- 3rd Prize: 100 €
- E-certificates for the top three winning teams.
- Co-authorship in the challenge summary paper for the top 4 teams.
- Opportunity to showcase work at CVIP 2024.
Meet The Team
ORGANISERS
Dr. Palak Handa
Research Centre for MIAAI, DPU, Austria
Dr. Amirreza Mahbod
Research Centre for MIAAI, DPU, Austria
Dr. Florian Schwarzhans
Research Centre for MIAAI, DPU, Austria
Dr. Ramona Woitek
Research Centre for MIAAI, DPU, Austria
Dr. Nidhi Goel
Dept. of ECE, IGDTUW, Delhi, India
Dr. Deepak Gunjan
Dept. of Gastroenterology and HNU, AIIMS Delhi, India
Dr. Jagadeesh Kakarla
Dept. of CSE, IIITDM Kancheepuram, India
Dr. Balasubramanian Raman
Dept. of CSE, IIT Roorkee, India
MISAHUB MEMBERS
Deepti Chhabra
Dept. of AI & DS
IGDTUW, Delhi, India
Shreshtha Jha
Dept. of ECE
IGDTUW, Delhi, India
Manas Dhir
Dept. of AI-ML
USAR, GGSIPU, Delhi, India
Pallavi Sharma
Dept. of ECE
IGDTUW, Delhi, India
Vijay Thakur
Dept. of ECE
DTU, Delhi, India
Supported By
Contact
For any query, please contact ask.misahub@gmail.com.