Detection Model Outcomes in the Medical Field: Diabetic Foot Ulcer
by Emmi Manak / August 11, 2022
The skin covers approximately 2 square meters, which makes it the largest organ in the human body, and it is a good indicator of our general health. The skin has three layers: the epidermis, dermis, and hypodermis. Common skin diseases include acne, ulcers, atopic dermatitis (eczema), rosacea, psoriasis, and skin cancer, as well as conditions such as the diabetic ulcer on the foot.
Diabetes is one of the first documented human diseases and one of the world's fastest-growing chronic conditions. Ancient Egyptian physicians described a condition of "excessive emptying of the urine" in the Ebers Papyrus, written around 1500 BC, and ancient Indian physicians called the condition "madhumeha", or "honey urine". An open wound on the lower leg that occurs in approximately 15% of patients with diabetes is called a diabetic foot ulcer.
Of the people who develop a foot ulcer, 6% will be hospitalized because of infection and other ulcer-related complications. Ulcers form when skin tissue breaks down and exposes the layers underneath. They are most common under the big toes and the balls of the feet, and they can affect the feet down to the bone. In the United States, diabetes is the leading cause of non-traumatic lower-extremity amputations: 14% to 24% of diabetic patients who develop a foot ulcer will require an amputation, and 85% of diabetes-related amputations are preceded by a foot ulcer.
Moreover, the diabetic foot ulcer (DFU) is common throughout the world, and many people suffer from it. Detecting DFU at an early stage is a challenging task, but if it is caught early it is easier for doctors to treat, and patients can be spared amputation. My aim is to detect the disease faster using object detection models such as YOLOv5s, Detectron2, and Faster R-CNN. A large amount of research has been done on these techniques, and the models have improved considerably.
Object detection refers to determining the presence of an object in a picture or image using bounding boxes and class labels; in any field, detection is the task of finding instances of objects within an image. Ulcer detection has been done with many object detection models, such as YOLO, YOLOX, YOLOR, and Detectron2. These models both classify and localize the position of an ulcer on the foot.
With the help of these models, real-time detection runs quite smoothly and easily. By detecting the position of an ulcer (or any other object), doctors can treat diabetic foot ulcers more easily. Early detection of such foot complications can protect diabetic patients from the dangerous later stages that may require foot amputation.
One of the most popular algorithms to date for real-time object detection is YOLO (You Only Look Once), initially proposed by Redmon et al. YOLO uses neural networks to provide real-time object detection and is a state-of-the-art object detection algorithm. It is so fast that it has become almost the standard way of detecting objects in computer vision.
YOLO algorithm works using the following three techniques:
- Residual blocks or Grids
- Bounding box regression
- Intersection Over Union (IOU)
YOLO divides the image into a grid of cells with dimensions S x S. It is an extremely fast object detection technique that processes images in real time at 45 frames per second, and it uses the Darknet framework, which is trained on many datasets.
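As a concrete illustration, the short Python sketch below (our own helper, not YOLO source code) maps an object's center to the grid cell responsible for predicting it:

# Sketch: map an object's center to the responsible cell of an S x S grid.
def grid_cell(x_center, y_center, img_w, img_h, S=7):
    col = min(int(x_center / img_w * S), S - 1)
    row = min(int(y_center / img_h * S), S - 1)
    return row, col

# Example: a 640 x 480 image with an ulcer centered at pixel (298, 160)
print(grid_cell(298, 160, 640, 480))  # -> (2, 3) on the 7 x 7 grid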
A bounding box highlights an object in an image by defining its outline. Every bounding box in the image consists of the following attributes:
- Width (bw)
- Height (bh)
- Class (c) (for example, person, car, traffic light, etc.)
- Bounding box center (bx,by)
IOU (intersection over union) describes how boxes overlap in object detection. YOLO uses IOU to produce an output box that tightly surrounds the object.
Grid cells predict bounding boxes along with confidence scores for those boxes. When a predicted bounding box exactly matches the real bounding box, the IOU equals 1. This mechanism eliminates bounding boxes that do not match the real box.
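The computation itself is straightforward. The sketch below (our own helper, not from any particular library) computes IOU for two boxes given as (xmin, ymin, xmax, ymax):

# Sketch: IOU of two boxes in (xmin, ymin, xmax, ymax) format.
def iou(box_a, box_b):
    # intersection rectangle
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((50, 50, 150, 150), (100, 100, 200, 200)))  # ~0.143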
The following image shows how the three techniques are applied to produce the final detection results.
YOLOv5 is a newer PyTorch implementation and is available to everyone through GitHub. YOLOv5 is very close to YOLOv4 and gets much of its performance from PyTorch training. The YOLO algorithm first divides the given image into an N x N grid, and each grid cell is responsible for detecting objects whose centers fall inside it. Each box is described by five parameters: the (x, y) center coordinates, width, height, and a confidence level for the probability of object detection. YOLOv5 raised the standard for real-time object detection. YOLOv5 has three important parts:
- Backbone: CSPDarknet
- Neck: PANet
- Head: YOLO layer
The model backbone is mainly used to extract important features from the given input image. In YOLOv5, Cross Stage Partial networks (CSP) are used as the backbone to extract rich, informative features from an input image. YOLOv5 formulates its model configuration in .yaml files, as opposed to the .cfg files used in Darknet. The YAML file concisely specifies the different layer types in the network and then scales them by depth factors to set the number of layers in each block. The CSP models are based on DenseNet, which was designed to connect layers in convolutional neural networks with the motivation of encouraging the network to reuse features and reducing the number of network parameters.
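For reference, here is an abridged sketch of the top of models/yolov5s.yaml, paraphrased from memory of the Ultralytics repository; check your local copy for the exact, current contents:

nc: 80                 # number of classes (overridden by our data.yaml at train time)
depth_multiple: 0.33   # scales the number of layers in each block
width_multiple: 0.50   # scales the number of channels in each layer
backbone:
  # [from, number, module, args] -- 'number' is scaled by depth_multiple
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],    # 1-P2/4
   [-1, 3, C3, [128]]]
# ...remaining backbone, neck (PANet), and head layers omitted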
The model neck is mainly used to generate feature pyramids. Feature pyramids help models generalize well across object scales: they help the model identify the same object at different sizes and scales, and they are very useful for performing well on unseen data.
The model head is mainly used to perform the final detection step. It applies anchor boxes to the features and generates the final output vectors with class probabilities, objectness scores, and bounding boxes.
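For our single-class ulcer detector, the size of that output vector is easy to work out. A quick Python sketch, assuming YOLOv5's default of three anchors per detection scale:

num_anchors = 3   # anchors per detection scale (YOLOv5 default)
num_classes = 1   # our dataset has the single class 'ulcer'
# each anchor predicts 4 box values + 1 objectness score + per-class probabilities
channels = num_anchors * (4 + 1 + num_classes)
print(channels)   # 18 output values per grid cell at each scale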
Content:
- Diabetic Foot Ulcer Dataset
- Labeling the Foot Ulcer Dataset In YOLO Format
- Data Directories Structure
- Configuration File Foot Ulcer Dataset
- Training in Google Colab
- Training in Anaconda (Jupyter Notebook)
- Result
- Code
Diabetic Foot Ulcer Dataset
Foot images displaying DFU were collected from Lancashire Teaching Hospitals over the past few years. Three digital cameras were used for capturing the foot images: Kodak DX4530 (5 megapixels), Nikon D3300 (24.2 megapixels), and Nikon COOLPIX P100 (10.3 megapixels).
The DFUC 2020 dataset consists of 4,000 images: 2,000 used for training, 200 used for validation, and 2,000 used for testing. All training, validation, and test cases were annotated with the location of foot ulcers in xmin, ymin, xmax, and ymax coordinates.
The Makesense annotation tool was used to annotate the images with bounding boxes indicating the ulcer locations. In this dataset, the size of the foot images was 640 x 480 pixels; for the released dataset, all images were resized to 480 x 480 pixels to reduce computational costs during training.
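If you need to reproduce that resizing step yourself, a minimal Python sketch using Pillow could look like this (the folder names images_raw and images_480 are hypothetical placeholders):

# Sketch: resize every .jpg in a folder to 480 x 480 (requires Pillow).
from pathlib import Path
from PIL import Image

src, dst = Path("images_raw"), Path("images_480")  # hypothetical folder names
dst.mkdir(exist_ok=True)
for img_path in src.glob("*.jpg"):
    Image.open(img_path).resize((480, 480)).save(dst / img_path.name)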
Labeling the Foot Ulcer Dataset In YOLO Format
There are a number of annotation platforms that support exporting to the YOLO labeling format, which produces one annotation text file per image. For each object in the image, a bounding-box (BBox) annotation is included in the text file. The annotations are normalized to the image size, so they lie within the range of 0 to 1, and they are represented in the following format:
<object-class-ID> <x-center> <y-center> <box-width> <box-height>
0 0.465862 0.333103 0.206897 0.257931
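Since the DFUC annotations are given as xmin, ymin, xmax, ymax coordinates (see above), a small conversion produces these label lines. The helper below is our own sketch of that arithmetic, not part of any annotation tool:

# Sketch: convert a pixel-space (xmin, ymin, xmax, ymax) box into a normalized
# YOLO label line: <object-class-ID> <x-center> <y-center> <box-width> <box-height>
def to_yolo(xmin, ymin, xmax, ymax, img_w, img_h, class_id=0):
    x_center = (xmin + xmax) / 2 / img_w
    y_center = (ymin + ymax) / 2 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

print(to_yolo(174, 98, 273, 222, img_w=480, img_h=480))
# -> 0 0.465625 0.333333 0.206250 0.258333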
Data Directories Structure
The dataset is organized like the structure below.
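What follows is a sketch of that layout, assuming the standard YOLOv5 convention of parallel images/ and labels/ folders (adapt the folder names to your own setup):

YOLO_dataset/
├── train/
│   ├── images/   # training images (.jpg)
│   └── labels/   # one .txt label file per image
├── val/
│   ├── images/
│   └── labels/
└── test/
    ├── images/
    └── labels/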
Configuration File Foot Ulcer Dataset
The data configuration file describes the dataset parameters. Since we are training on our Diabetic Foot Ulcer (DFU) dataset, we edit this file to provide: the paths to the train, validation, and test (optional) sets; the number of classes (nc); and the names of the classes in the same order as their indices. Here we have only one class, named 'ulcer'. We named our data configuration file 'data.yaml' and placed it under the 'data' directory. The content of this YAML file is as follows:
The paths below are set for a dataset placed on Google Drive; when training on a local GPU, set the paths according to the location of your own files.
train: /content/drive/MyDrive/YOLO_dataset/train
val: /content/drive/MyDrive/YOLO_dataset/val
test: /content/drive/MyDrive/YOLO_dataset/test
nc: 1
names: ['ulcer']
Training in Google Colab
For training, we have to define the following options:
- img: define the input image size
- batch: determine the batch size
- epochs: define the number of training epochs
- data: set the path to our yaml file
- cfg: specify our model configuration
- weights: specify a custom path to weights
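Put together, a minimal training invocation might look like the sketch below (yolov5s.pt is the standard pretrained checkpoint released by Ultralytics; the exact commands we used appear in the Code section at the end):

!python train.py --img 256 --batch 14 --epochs 100 --data data.yaml --cfg models/yolov5s.yaml --weights yolov5s.pt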
For training the dataset on Google Colab:
- First, upload the train, validation, and test sets and the data.yaml file to Google Drive.
- Second, open the notebook and run the code to mount the drive, as shown below.
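Mounting Google Drive in Colab uses the standard snippet:

from google.colab import drive
drive.mount('/content/drive')  # follow the authorization prompt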
Alternatively, upload the dataset to Roboflow, generate a download link, copy the link into the notebook, and unzip the dataset there. All other steps are the same.
The code below is used for downloading the training results file from Colab:
%cd /content/
!zip -r ./six_train_exp.zip /content/yolov5/runs/train/exp2  # archive the experiment folder
from google.colab import files
files.download('/content/six_train_exp.zip')  # save the archive to your machine
You can adapt the paths in the code above to your own setup.
Training in Anaconda (Jupyter Notebook)
To train your images on your local GPU using Anaconda, follow these steps:
- First, download the YOLOv5 master code from https://github.com/ultralytics/yolov5.
- Second, unzip the YOLOv5 master code file, create a folder named 'dataset' inside the unzipped folder with subfolders (train, valid, test), and place all of your dataset in these subfolders.
- Third, install Python 3.7 and then install all requirements (from the requirements file in the unzipped YOLO folder) in the command prompt or Anaconda prompt.
- Fourth, if you want to run the code in a Jupyter notebook, install all the libraries in the Anaconda prompt, after which you can easily write and run the code in the notebook; if you want to run the code in the command prompt instead, first change to the yolov5-master folder directory, then install the above libraries in the command prompt.
- To run the code from a command prompt, first open the unzipped yolov5-master folder and open a prompt there (for example, by typing cmd in the Explorer address bar).
- When the command prompt window opens, write the training command in it and run it.
To run the code in a Jupyter notebook, first open Jupyter and create a new Python 3 file, but it must be opened in the yolov5-master folder (or in whatever folder all the requirements are installed).
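In either case, the core setup commands are the same. A minimal sketch, run from a prompt in the folder containing the unzipped code (assuming it is named yolov5-master):

cd yolov5-master
pip install -r requirements.txt   # installs PyTorch and the other dependencies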
Result
After training the dataset on a local GPU, results like the following appear:
Code
For local GPU
Training code
# paths are relative to the unzipped yolov5-master folder
%cd yolov5-master
!python train.py --img 256 --batch 14 --epochs 100 --data data/data.yaml --cfg models/yolov5s.yaml --weights best.pt --cache --hyp data/hyps/hyp.Objects365.yaml
Testing code
python detect.py --weights best.pt --data data/data.yaml --img 256 --source dataset/test/images
For Google Colab
!git clone https://github.com/ultralytics/yolov5 # clone
%cd /content/yolov5
%pip install -qr requirements.txt # install
import torch
import utils
display = utils.notebook_init()  # environment checks
Unzip the dataset folder if you uploaded a zip file; otherwise, this step is not needed.
%cd /content
!curl -L "https://app.roboflow.com/ds/oLJXdHwDMN?key=Zscs9HWRZd" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
Training code
%cd /content/yolov5/
!python train.py --img 256 --batch 14 --epochs 100 --data '/content/data.yaml' --cfg /content/yolov5/models/yolov5s.yaml --weights '/content/best.pt' --cache --hyp /content/yolov5/data/hyps/hyp.Objects365.yaml
Testing code
!python /content/yolov5/detect.py --weights '/content/best.pt' --data '/content/data.yaml' --img 256 --source /content/test/images