Dr. Tongue: Sign-Oriented Multi-label Detection for Remote Tongue Diagnosis

Authors: Yiliang Chen, Steven SC Ho, Cheng Xu, Yao Jie Xie, Wing-Fai Yeung, Shengfeng He, Jing Qin

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To validate our methodology, we developed an extensive tongue image dataset specifically designed for telemedicine. Unlike existing datasets, ours is tailored for remote diagnosis, with a comprehensive set of attribute labels. This dataset will be openly available, providing a valuable resource for research. Initial tests have shown improved accuracy in detecting various tongue attributes, highlighting our framework's potential as an essential tool for remote medical assessments."
Researcher Affiliation | Academia | (1) School of Nursing, The Hong Kong Polytechnic University, Hong Kong; (2) Singapore Management University, Singapore. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Tongue Image Upright Orientation Process
Open Source Code | Yes | Resources: https://github.com/tonguedx/tonguedx
Open Datasets | Yes | "To validate our methodology, we developed an extensive tongue image dataset specifically designed for telemedicine. ... This dataset will be openly available, providing a valuable resource for research."
Dataset Splits | Yes | "Quantitative comparison among different methods using five-fold cross-validation (values in mean% ± std%)."
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU models, CPU models, or memory details.
Software Dependencies | No | The paper mentions several software components, such as Grounding DINO, SAM, YOLOv5-MobileNetV3, MobileSAM, ResNet50, ViT, and TransFG, but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | No | The paper describes the loss function (L_total = w_color * L_color + w_fur * L_fur + L_attr) and mentions sigmoid cross-entropy losses and frequency-based weights. However, it does not provide specific numerical values for hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings, which are crucial for a reproducible experimental setup.
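The loss described above, L_total = w_color * L_color + w_fur * L_fur + L_attr, combines per-branch sigmoid cross-entropy terms with frequency-based class weights. A minimal NumPy sketch of such a weighted multi-label objective follows; the branch weights, label counts, and the inverse-frequency weighting scheme are illustrative assumptions, since the paper does not report the actual values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def weighted_bce(logits, targets, class_weights=None):
    """Per-label sigmoid cross-entropy, optionally weighted per class."""
    p = sigmoid(logits)
    eps = 1e-12  # guard against log(0)
    loss = -(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))
    if class_weights is not None:
        loss = loss * class_weights  # broadcast over the batch dimension
    return loss.mean()

def frequency_weights(label_matrix):
    """Inverse-frequency class weights, normalized to mean 1 (one common choice)."""
    freq = label_matrix.mean(axis=0).clip(min=1e-6)
    w = 1.0 / freq
    return w / w.mean()

# Hypothetical head outputs for one batch of 4 images.
rng = np.random.default_rng(0)
color_logits = rng.normal(size=(4, 3))  # e.g. 3 tongue-color labels
fur_logits = rng.normal(size=(4, 2))    # e.g. 2 fur labels
attr_logits = rng.normal(size=(4, 5))   # e.g. 5 other attribute labels
color_y = (rng.random((4, 3)) > 0.5).astype(float)
fur_y = (rng.random((4, 2)) > 0.5).astype(float)
attr_y = (rng.random((4, 5)) > 0.5).astype(float)

w_color, w_fur = 1.0, 1.0  # hypothetical branch weights; not reported in the paper
L_total = (w_color * weighted_bce(color_logits, color_y, frequency_weights(color_y))
           + w_fur * weighted_bce(fur_logits, fur_y, frequency_weights(fur_y))
           + weighted_bce(attr_logits, attr_y))
```

The frequency-based weights up-weight rare attribute labels so that common labels do not dominate the gradient, which is the usual motivation for such weighting in multi-label medical imaging tasks.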