DL_Track_US.gui_helpers package
Submodules
DL_Track_US.gui_helpers.calculate_architecture module
Description
This module contains functions to automatically or manually analyse muscle architecture in longitudinal ultrasonography images of human lower limb muscles. The scope of the automatic method is limited to the vastus lateralis, tibialis anterior, gastrocnemius medialis and soleus muscles due to training data availability. The scope of the manual method is not limited to specific muscles. The module was specifically designed to be executed from a GUI. When used from the GUI, the module saves the analysis results in a .xlsx file to a given directory. The user needs to provide paths to the image, model, and flipflag file directories.
Functions scope
- importAndReshapeImage
Function to import and reshape an image. Moreover, based upon user specification the image might be flipped.
- importImageManual
Function to import an image.
- getFlipFlagsList
Function to retrieve flip values from a .txt file.
- compileSaveResults
Function to save the analysis results to a .xlsx file.
- IoU
Function to compute the intersection over union score (IoU), a measure of prediction accuracy. This is sometimes also called Jaccard score.
- calculateBatch
Function to calculate muscle architecture in longitudinal ultrasonography images of human lower limb muscles. The values computed are fascicle length (FL), pennation angle (PA), and muscle thickness (MT).
- calculateBatchManual
Function used for manual calculation of fascicle length, muscle thickness and pennation angles in longitudinal ultrasonography images of human lower limb muscles.
Notes
Additional information and usage examples can be found in the respective function documentations.
- DL_Track_US.gui_helpers.calculate_architecture.IoU(y_true, y_pred, smooth: int = 1) float
Function to compute the intersection over union score (IoU), a measure of prediction accuracy. This is sometimes also called the Jaccard score.
The IoU can be used as a loss metric during binary segmentation when convolutional neural networks are applied. The IoU is calculated for both the training and validation sets.
Parameters
- y_true : tf.Tensor
True positive image segmentation label predefined by the user. This is the mask that is provided prior to model training.
- y_pred : tf.Tensor
Predicted image segmentation by the network.
- smooth : int, default = 1
Smoothing operator applied during final calculation of IoU. Must be non-negative and non-zero.
Returns
- iou : tf.Tensor
IoU representation in the same shape as y_true, y_pred.
Notes
The IoU is usually calculated as IoU = intersection / union. The intersection is calculated as the overlap of y_true and y_pred, whereas the union is the sum of y_true and y_pred.
Examples
>>> IoU(y_true=Tensor("IteratorGetNext:1", shape=(1, 512, 512, 1), dtype=float32), y_pred=Tensor("VGG16_U-Net/conv2d_8/Sigmoid:0", shape=(1, 512, 512, 1), dtype=float32), smooth=1) Tensor("truediv:0", shape=(1, 512, 512), dtype=float32)
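The calculation described in the notes can be sketched in plain NumPy (a minimal illustration only; the actual implementation operates on tf.Tensor objects inside the training graph, and here the union is computed as the sum of the masks minus their intersection, the common Jaccard formulation):

```python
import numpy as np

def iou(y_true, y_pred, smooth=1):
    """Smoothed intersection over union for binary masks of shape (B, H, W, C)."""
    # Intersection: overlap between ground-truth and predicted masks.
    intersection = np.sum(y_true * y_pred, axis=(1, 2, 3))
    # Union: total labeled area minus the doubly counted overlap.
    total = np.sum(y_true, axis=(1, 2, 3)) + np.sum(y_pred, axis=(1, 2, 3))
    union = total - intersection
    # The smoothing term avoids division by zero for empty masks.
    return (intersection + smooth) / (union + smooth)

# Identical masks give an IoU of exactly 1.0 (the smoothing term cancels out).
mask = np.ones((1, 4, 4, 1), dtype=np.float32)
print(iou(mask, mask))  # [1.]
```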
- DL_Track_US.gui_helpers.calculate_architecture.calculateBatch(rootpath: str, apo_modelpath: str, fasc_modelpath: str, flip_file_path: str, file_type: str, scaling: str, spacing: int, filter_fasc: bool, apo_treshold: float, apo_length_tresh: int, fasc_threshold: float, fasc_cont_thresh: int, min_width: int, min_pennation: int, max_pennation: int, gui) None
Function to calculate muscle architecture in longitudinal ultrasonography images of human lower limb muscles. The values computed are fascicle length (FL), pennation angle (PA), and muscle thickness (MT).
The scope of this function is limited. Images of the vastus lateralis, tibialis anterior, soleus and gastrocnemius muscles can be analyzed. This is due to the limited amount of training data for our convolutional neural networks. This function makes extensive use of several other functions and was designed to be executed from a GUI.
Parameters
- rootpath : str
String variable containing the path to the folder where all images to be analyzed are saved.
- apo_modelpath : str
String variable containing the absolute path to the aponeurosis neural network.
- fasc_modelpath : str
String variable containing the absolute path to the fascicle neural network.
- flip_flag_path : str
String variable containing the absolute path to the flip flag .txt file containing the flip flags. Flipping is necessary as the models were trained on images with a specific fascicle orientation.
- filetype : str
String variable containing the respective type of the images. This is needed to select only the relevant image files in the root directory.
- scaling : {“bar”, “manual”, “No scaling”}
String variable determining the image scaling method. There are three types of scaling available:
- scaling = “manual” (user must scale images manually)
- scaling = “bar” (images are scaled automatically by detecting scaling bars on the right side of the image)
- scaling = “No scaling” (image is not scaled)
Scaling is necessary to compute measurements in centimeters; if “No scaling” is chosen, the results are in pixel units.
- spacing : {10, 5, 15, 20}
Distance (in millimeters) between two scaling bars in the image. This is needed to compute the pixel/cm ratio and therefore report the results in centimeters rather than pixel units.
- filter_fasc : bool
If True, fascicles will be filtered so that no crossings are included. This may reduce the total number of detected fascicles.
- apo_threshold : float
Float variable containing the threshold applied to predicted aponeurosis pixels by our neural networks. By varying this threshold, different structures will be classified as aponeurosis as the threshold for classifying a pixel as aponeurosis is changed. Must be non-zero and non-negative.
- apo_length_tresh : int
Integer variable containing the threshold applied to predicted aponeurosis length in pixels. By varying this threshold, different structures will be classified as aponeurosis depending on their length. Must be non-zero and non-negative.
- fasc_threshold : float
Float variable containing the threshold applied to predicted fascicle pixels by our neural networks. By varying this threshold, different structures will be classified as fascicle as the threshold for classifying a pixel as fascicle is changed.
- fasc_cont_threshold : float
Float variable containing the threshold applied to predicted fascicle segments by our neural networks. By varying this threshold, different structures will be classified as fascicle. By increasing it, longer fascicle segments will be considered; by lowering it, shorter segments. Must be non-zero and non-negative.
- min_width : int
Integer variable containing the minimal distance between aponeuroses to be detected. The aponeuroses must be at least this distance apart to be detected. The distance is specified in pixels. Must be non-zero and non-negative.
- min_pennation : int
Integer variable containing the minimal (physiological) acceptable pennation angle occurring in the analyzed image/muscle. Fascicles with lower pennation angles will be excluded. The pennation angle is calculated as the angle of insertion between extrapolated fascicle and detected aponeurosis. Must be non-negative.
- max_pennation : int
Integer variable containing the maximal (physiological) acceptable pennation angle occurring in the analyzed image/muscle. Fascicles with higher pennation angles will be excluded. The pennation angle is calculated as the angle of insertion between extrapolated fascicle and detected aponeurosis. Must be non-negative.
- gui : tk.TK
A tkinter.TK class instance that represents a GUI. By passing this argument, interaction with the GUI is possible, i.e., stopping the calculation process after each image.
See Also
do_calculations.py for exact description of muscle architecture parameter calculation.
Notes
For specific explanations of the included functions see the respective function docstrings in this module. To see an exemplary PDF output and .xlsx file, take a look at the examples provided in the “examples” directory. This function is called by the GUI. Note that the function was specifically designed to be called from the GUI. Thus, a tk.messagebox will pop up when errors are raised, even if the GUI is not started.
Examples
>>> calculateBatch(rootpath="C:/Users/admin/Dokuments/images", apo_modelpath="C:/Users/admin/Dokuments/models/apo_model.h5", fasc_modelpath="C:/Users/admin/Dokuments/models/fasc_model.h5", flip_flag_path="C:/Users/admin/Dokuments/flip_flags.txt", filetype="/**/*.tif", scaling="bar", spacing=10, filter_fasc=False, apo_threshold=0.1, apo_length_tresh=600, fasc_threshold=0.05, fasc_cont_thresh=40, min_width=3, min_pennation=10, max_pennation=35, gui=<__main__.DL_Track_US object at 0x000002BFA7528190>)
- DL_Track_US.gui_helpers.calculate_architecture.calculateBatchManual(rootpath: str, filetype: str, gui)
Function used for manual calculation of fascicle length, muscle thickness and pennation angles in longitudinal ultrasonography images of human lower limb muscles.
This function is not restricted to any specific muscles. However, its use is restricted to a specific method for assessing muscle thickness, fascicle length and pennation angles.
- Muscle thickness:
Exactly one segment reaching from the superficial to the deep aponeurosis of the muscle must be drawn. If multiple measurements are drawn, these are averaged. Drawing can be started by clicking the left mouse button and keeping it pressed until it is no longer required to draw the line (i.e., the other aponeurosis border is reached). Only the respective y-coordinates of the points where the cursor was clicked and released are considered for calculation of muscle thickness.
- Fascicle length:
Exactly three segments along the fascicle of the muscle must be drawn. If multiple fascicles are drawn, their lengths are averaged. Drawing can be started by clicking the left mouse button and keeping it pressed until one segment is finished (mostly where fascicle curvature occurs or the other aponeurosis border is reached). Using the Euclidean distance, the total fascicle length is computed as the sum of the segments.
- Pennation angle:
Exactly two segments, one along the fascicle orientation, the other along the aponeurosis orientation, must be drawn. The line along the aponeurosis must be started where the line along the fascicle ends. If multiple angles are drawn, they are averaged. Drawing can be started by clicking the left mouse button and keeping it pressed until it is no longer required to draw the line (i.e., the aponeurosis border is reached by the fascicle). The angle is calculated using the arctan function.
In order to scale the image, it is required to draw a line of length 10 millimeters somewhere in the image. The line can be drawn in the same fashion as, for example, the muscle thickness. Here however, the Euclidean distance is used to calculate the pixel/centimeter ratio. This has to be done for every image.
We also provide the functionality to extend the muscle aponeuroses to more easily extrapolate fascicles. The lines can be drawn in the same fashion as, for example, the muscle thickness.
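The manual measurements described above reduce to a few geometric operations. A minimal sketch under those descriptions (the helper names are illustrative, not the module's actual functions):

```python
import math

def segment_length(p1, p2):
    # Euclidean distance in pixels between two drawn points.
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def fascicle_length(points):
    # Total fascicle length is the sum of the drawn segments.
    return sum(segment_length(a, b) for a, b in zip(points, points[1:]))

def pennation_angle(fasc_start, fasc_end, apo_start, apo_end):
    # Angle of insertion between the fascicle line and the aponeurosis
    # line, computed from their orientations via arctan.
    fasc = math.atan2(fasc_end[1] - fasc_start[1], fasc_end[0] - fasc_start[0])
    apo = math.atan2(apo_end[1] - apo_start[1], apo_end[0] - apo_start[0])
    return abs(math.degrees(fasc - apo))

def pixels_per_cm(scale_p1, scale_p2, known_mm=10):
    # The 10 mm reference line converts pixel measurements to centimeters.
    return segment_length(scale_p1, scale_p2) / (known_mm / 10)

# Three fascicle segments of 100 px each give a length of 300 px;
# with a 10 mm line drawn over 99 px, that is 300 / 99 cm (about 3.03 cm).
pts = [(0, 0), (100, 0), (200, 0), (300, 0)]
print(fascicle_length(pts) / pixels_per_cm((0, 0), (0, 99)))
```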
Parameters
- rootpath : str
String variable containing the path to the folder where all images to be analyzed are saved.
- gui : tk.TK
A tkinter.TK class instance that represents a GUI. By passing this argument, interaction with the GUI is possible, i.e., stopping the calculation process after each image.
Notes
For specific explanations of the included functions see the respective function docstrings in this module. The function outputs an .xlsx file in rootpath containing the (averaged) analysis results for each image.
Examples
>>> calculateBatchManual(rootpath="C:/Users/admin/Dokuments/images", filetype="/**/*.tif", gui=gui)
- DL_Track_US.gui_helpers.calculate_architecture.compileSaveResults(rootpath: str, dataframe: DataFrame) None
Function to save the analysis results to a .xlsx file.
A pd.DataFrame object must be inputted. The results included in the dataframe are saved to an .xlsx file. The .xlsx file is saved to the specified rootpath.
Parameters
- rootpath : str
String variable containing the path to where the .xlsx file should be saved.
- dataframe : pd.DataFrame
Pandas dataframe variable containing the image analysis results for every image analyzed.
Examples
>>> compileSaveResults(rootpath="C:/Users/admin/Dokuments/images", dataframe=[['File', "image1"], ['FL', 12], ['PA', 17], ...])
- DL_Track_US.gui_helpers.calculate_architecture.getFlipFlagsList(flip_flag_path: str) list
Function to retrieve flip values from a .txt file.
The flip flags decide whether an image should be flipped or not. The flags can be 0 (image not flipped) or 1 (image is flipped). The flags must be specified in the .txt file and can either be on a separate line for each image or on a separate line for each folder. The number of flip flags must equal the number of images analyzed.
Parameters
- flip_flag_path : str
String variable containing the absolute path to the flip flag .txt file containing the flip flags.
Returns
- flip_flags : list
A list variable containing all flip flags included in the specified .txt file.
Examples
>>> getFlipFlagsList(flip_flag_path="C:/Desktop/Test/FlipFlags/flags.txt") [1, 0, 1, 0, 1, 1, 1]
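Reading the flags amounts to parsing one integer per line of the .txt file. A minimal sketch of the behavior described above (not the module's exact implementation):

```python
def get_flip_flags(flip_flag_path):
    # Each non-empty line of the .txt file holds one flag:
    # 0 (image is not flipped) or 1 (image is flipped).
    flip_flags = []
    with open(flip_flag_path, "r") as flag_file:
        for line in flag_file:
            line = line.strip()
            if line:
                flip_flags.append(int(line))
    return flip_flags
```

The number of parsed flags can then be checked against the number of images before the batch analysis starts.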
- DL_Track_US.gui_helpers.calculate_architecture.importAndReshapeImage(path_to_image: str, flip: int)
Function to import and reshape an image. Moreover, based upon user specification the image might be flipped.
Usually flipping is only required when the imported image is of a specific structure that is incompatible with the trained models provided here.
Parameters
- path_to_image : str
String variable containing the image path. This should be an absolute path.
- flip : {0, 1}
Integer value defining whether an image should be flipped. This can be 0 (image is not flipped) or 1 (image is flipped).
Returns
- img : np.ndarray
The loaded image converted to an np.ndarray. This is done using the img_to_array Keras functionality. The input image is further flipped (if selected), resized, reshaped and normalized.
- img_copy : np.ndarray
A copy of the input image.
- non_flipped_img : np.ndarray
A copy of the input image. This copy is made prior to image flipping.
- height : int
Integer value containing the image height of the input image.
- width : int
Integer value containing the image width of the input image.
- filename : str
String value containing the name of the input image, not the entire path.
Examples
>>> importAndReshapeImage(path_to_image="C:/Users/Dokuments/images/img1.tif", flip=0) [[[[0.10113753 0.09391343 0.09030136] ... [0 0 0]]], [[[28 26 25] ... [ 0 0 0]]], [[[28 26 25] ... [ 0 0 0]]], 512, 512, img1.tif
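The preprocessing steps can be sketched with NumPy alone (an illustration of the flip/normalize/batch logic only; the module itself additionally resizes the image and uses the Keras img_to_array utility):

```python
import numpy as np

def prepare_image(img, flip):
    # Keep an untouched copy before any flipping is applied.
    non_flipped_img = img.copy()
    if flip == 1:
        # Mirror the image horizontally to match the fascicle
        # orientation the models were trained on.
        img = np.fliplr(img)
    height, width = img.shape[0], img.shape[1]
    # Normalize pixel intensities to [0, 1] and add a batch axis,
    # as expected by the segmentation networks.
    batched = (img.astype(np.float32) / 255.0)[np.newaxis, ...]
    return batched, non_flipped_img, height, width

# A 512x512 grayscale image becomes a (1, 512, 512, 1) float array.
dummy = np.zeros((512, 512, 1), dtype=np.uint8)
print(prepare_image(dummy, flip=0)[0].shape)  # (1, 512, 512, 1)
```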
- DL_Track_US.gui_helpers.calculate_architecture.importImageManual(path_to_image: str, flip: int)
Function to import an image.
This function is used when manual analysis of the image is selected in the GUI. For manual analysis, it is not necessary to resize, reshape and normalize the image. The image may be flipped.
Parameters
- path_to_image : str
String variable containing the image path. This should be an absolute path.
- flip : {0, 1}
Integer value defining whether an image should be flipped. This can be 0 (image is not flipped) or 1 (image is flipped).
Returns
- img : np.ndarray
The loaded image as an np.ndarray in grayscale.
- filename : str
String value containing the name of the input image, not the entire path.
Examples
>>> importImageManual(path_to_image="C:/Desktop/Test/Img1.tif", flip=0) [[[28 26 25] [28 26 25] [28 26 25] ... [[ 0 0 0] [ 0 0 0] [ 0 0 0]]], Img1.tif
DL_Track_US.gui_helpers.calculate_architecture_video module
Description
This module contains functions to automatically or manually analyse muscle architecture in longitudinal ultrasonography videos of human lower limb muscles. The scope of the automatic method is limited to the vastus lateralis, tibialis anterior, gastrocnemius medialis and soleus muscles due to training data availability. The scope of the manual method is not limited to specific muscles. The module was specifically designed to be executed from a GUI. When used from the GUI, the module saves the analysis results in a .xlsx file to a given directory. The user needs to provide paths to the video, model, and flipflag file directories. With both methods, every frame is analyzed separately and the results for each frame are saved.
Functions scope
- importAndReshapeImage
Function to import and reshape an image. Moreover, based upon user specification the image might be flipped.
- importImageManual
Function to import an image.
- getFlipFlagsList
Function to retrieve flip values from a .txt file.
- compileSaveResults
Function to save the analysis results to a .xlsx file.
- calculateBatch
Function to calculate muscle architecture in longitudinal ultrasonography images of human lower limb muscles. The values computed are fascicle length (FL), pennation angle (PA), and muscle thickness (MT).
- calculateBatchManual
Function used for manual calculation of fascicle length, muscle thickness and pennation angles in longitudinal ultrasonography images of human lower limb muscles.
Notes
Additional information and usage examples can be found at the respective functions documentations.
See Also
calculate_architecture.py
- DL_Track_US.gui_helpers.calculate_architecture_video.calculateArchitectureVideo(rootpath: str, apo_modelpath: str, fasc_modelpath: str, filetype: str, scaling: str, flip: str, spacing: int, step: int, filter_fasc: bool, apo_treshold: float, apo_length_thresh: int, fasc_threshold: float, fasc_cont_thresh: int, min_width: int, min_pennation: int, max_pennation: int, gui)
Function to calculate muscle architecture in longitudinal ultrasonography videos of human lower limb muscles. The values computed are fascicle length (FL), pennation angle (PA), and muscle thickness (MT).
The scope of this function is limited. Videos of the vastus lateralis, tibialis anterior, soleus and gastrocnemius muscles can be analyzed. This is due to the limited amount of training data for our convolutional neural networks. This function makes extensive use of several other functions and was designed to be executed from a GUI.
Parameters
- rootpath : str
String variable containing the path to the folder where all videos to be analyzed are saved.
- apo_modelpath : str
String variable containing the absolute path to the aponeurosis neural network.
- fasc_modelpath : str
String variable containing the absolute path to the fascicle neural network.
- flip : str
String variable determining whether all frames of a video are flipped vertically. Flipping is necessary as the models were trained on images with a specific fascicle orientation.
- filetype : str
String variable containing the respective type of the videos. This is needed to select only the relevant video files in the root directory.
- scaling : str
String variable determining the image scaling method. There are three types of scaling available:
- scaling = “manual” (user must scale the video manually. This only needs to be done in the first frame.)
- scaling = “bar” (frames are scaled automatically by detecting scaling bars on the right side of the image.)
- scaling = “No scaling” (video frames are not scaled.)
Scaling is necessary to compute measurements in centimeters; if “No scaling” is chosen, the results are in pixel units.
- spacing : int
Integer variable containing the distance (in millimeters) between two scaling bars in the image. This is needed to compute the pixel/cm ratio and therefore report the results in centimeters rather than pixel units.
- step : int
Integer variable containing the step for the range of video frames. If step != 1, frames are skipped according to the size of step. This might decrease processing time but also accuracy.
- filter_fasc : bool
If True, fascicles will be filtered so that no crossings are included. This may reduce the total number of detected fascicles.
- apo_threshold : float
Float variable containing the threshold applied to predicted aponeurosis pixels by our neural networks. By varying this threshold, different structures will be classified as aponeurosis as the threshold for classifying a pixel as aponeurosis is changed. Must be non-zero and non-negative.
- apo_length_tresh : int
Integer variable containing the threshold applied to predicted aponeurosis length in pixels. By varying this threshold, different structures will be classified as aponeurosis depending on their length. Must be non-zero and non-negative.
- fasc_threshold : float
Float variable containing the threshold applied to predicted fascicle pixels by our neural networks. By varying this threshold, different structures will be classified as fascicle as the threshold for classifying a pixel as fascicle is changed. Must be non-zero and non-negative.
- fasc_cont_threshold : float
Float variable containing the threshold applied to predicted fascicle segments by our neural networks. By varying this threshold, different structures will be classified as fascicle. By increasing it, longer fascicle segments will be considered; by lowering it, shorter segments. Must be non-zero and non-negative.
- min_width : int
Integer variable containing the minimal distance between aponeuroses to be detected. The aponeuroses must be at least this distance apart to be detected. The distance is specified in pixels. Must be non-zero and non-negative.
- min_pennation : int
Integer variable containing the minimal (physiological) acceptable pennation angle occurring in the analyzed image/muscle. Fascicles with lower pennation angles will be excluded. The pennation angle is calculated as the angle of insertion between extrapolated fascicle and detected aponeurosis. Must be non-negative.
- max_pennation : int
Integer variable containing the maximal (physiological) acceptable pennation angle occurring in the analyzed image/muscle. Fascicles with higher pennation angles will be excluded. The pennation angle is calculated as the angle of insertion between extrapolated fascicle and detected aponeurosis. Must be non-negative and larger than min_pennation.
- gui : tk.TK
A tkinter.TK class instance that represents a GUI. By passing this argument, interaction with the GUI is possible, i.e., stopping the calculation process after each image.
See Also
do_calculations_video.py for exact description of muscle architecture parameter calculation.
Notes
For specific explanations of the included functions see the respective function docstrings in this module. To see an exemplary video output and .xlsx file, take a look at the examples provided in the “examples” directory. This function is called by the GUI. Note that the function was specifically designed to be called from the GUI. Thus, a tk.messagebox will pop up when errors are raised, even if the GUI is not started.
Examples
>>> calculateArchitectureVideo(rootpath="C:/Users/admin/Dokuments/videos", apo_modelpath="C:/Users/admin/Dokuments/models/apo_model.h5", fasc_modelpath="C:/Users/admin/Dokuments/models/fasc_model.h5", flip="Flip", filetype="/**/*.avi", scaling="manual", spacing=10, filter_fasc=False, apo_threshold=0.1, fasc_threshold=0.05, fasc_cont_thresh=40, min_width=3, min_pennation=10, max_pennation=35, gui=<__main__.DLTrack object at 0x000002BFA7528190>)
- DL_Track_US.gui_helpers.calculate_architecture_video.calculateArchitectureVideoManual(videopath: str, gui)
Function used for manual calculation of fascicle length, muscle thickness and pennation angles in longitudinal ultrasonography videos of human lower limb muscles.
This function is not restricted to any specific muscles. However, its use is restricted to a specific method for assessing muscle thickness, fascicle length and pennation angles. Moreover, each video frame is analyzed separately.
- Muscle thickness:
Exactly one segment reaching from the superficial to the deep aponeurosis of the muscle must be drawn. If multiple measurements are drawn, these are averaged. Drawing can be started by clicking the left mouse button and keeping it pressed until it is no longer required to draw the line (i.e., the other aponeurosis border is reached). Only the respective y-coordinates of the points where the cursor was clicked and released are considered for calculation of muscle thickness.
- Fascicle length:
Exactly three segments along the fascicle of the muscle must be drawn. If multiple fascicles are drawn, their lengths are averaged. Drawing can be started by clicking the left mouse button and keeping it pressed until one segment is finished (mostly where fascicle curvature occurs or the other aponeurosis border is reached). Using the Euclidean distance, the total fascicle length is computed as the sum of the segments.
- Pennation angle:
Exactly two segments, one along the fascicle orientation, the other along the aponeurosis orientation, must be drawn. The line along the aponeurosis must be started where the line along the fascicle ends. If multiple angles are drawn, they are averaged. Drawing can be started by clicking the left mouse button and keeping it pressed until it is no longer required to draw the line (i.e., the aponeurosis border is reached by the fascicle). The angle is calculated using the arctan function.
In order to scale the frame, it is required to draw a line of length 10 millimeters somewhere in the image. The line can be drawn in the same fashion as, for example, the muscle thickness. Here however, the Euclidean distance is used to calculate the pixel/centimeter ratio. This has to be done for every frame.
We also provide the functionality to extend the muscle aponeuroses to more easily extrapolate fascicles. The lines can be drawn in the same fashion as, for example, the muscle thickness.
Parameters
- videopath : str
String variable containing the absolute path to the video to be analyzed.
- gui : tk.TK
A tkinter.TK class instance that represents a GUI. By passing this argument, interaction with the GUI is possible, i.e., stopping the calculation process after each frame.
Notes
For specific explanations of the included functions see the respective function docstrings in this module. The function outputs an .xlsx file in rootpath containing the (averaged) analysis results for each image.
Examples
>>> calculateArchitectureVideoManual(videopath="C:/Users/admin/Dokuments/videos", gui=gui)
- DL_Track_US.gui_helpers.calculate_architecture_video.exportToEcxel(path: str, filename: str, fasc_l_all: list, pennation_all: list, x_lows_all: list, x_highs_all: list, thickness_all: list)
Function to save the analysis results to a .xlsx file.
A list of each variable to be saved must be inputted. The inputs are included in a dataframe and saved to an .xlsx file. The .xlsx file is saved to the specified rootpath and contains each analyzed frame. Estimates of fascicle length, pennation angle, muscle thickness and intersections of fascicles with aponeuroses are saved.
Parameters
- path : str
String variable containing the path to where the .xlsx file should be saved.
- filename : str
String value containing the name of the input video, not the entire path. The .xlsx file is named accordingly.
- fasc_l_all : list
List variable containing all fascicle length estimates from a single frame that was analyzed.
- pennation_all : list
List variable containing all pennation angle estimates from a single frame that was analyzed.
- x_lows_all : list
List variable containing all x-coordinate estimates from the intersection of the fascicle with the lower aponeurosis of a single frame that was analyzed.
- x_highs_all : list
List variable containing all x-coordinate estimates from the intersection of the fascicle with the upper aponeurosis of a single frame that was analyzed.
- thickness_all : list
List variable containing all muscle thickness estimates from a single frame that was analyzed.
Examples
>>> exportToExcel(path="C:/Users/admin/Dokuments/videos", filename="video1.avi", fasc_l_all=[7.8, 6.4, 9.1], pennation_all=[20, 21.1, 24], x_lows_all=[749, 51, 39], x_highs_all=[54, 739, 811], thickness_all=[1.85])
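Compiling the per-frame lists into a frame-indexed table can be sketched with pandas (the column names are illustrative, not the module's actual output format; saving with DataFrame.to_excel additionally requires an Excel writer such as openpyxl):

```python
import pandas as pd

def compile_results(fasc_l_all, pennation_all, thickness_all):
    # One row per analyzed frame; each cell holds the list of
    # per-fascicle estimates for that frame.
    df = pd.DataFrame({
        "Fascicle length": fasc_l_all,
        "Pennation angle": pennation_all,
        "Muscle thickness": thickness_all,
    })
    df.index.name = "Frame"
    return df

results = compile_results(
    fasc_l_all=[[7.8, 6.4, 9.1]],
    pennation_all=[[20, 21.1, 24]],
    thickness_all=[[1.85]],
)
# results.to_excel("video1.xlsx") would then write the table to disk.
print(results.shape)  # (1, 3)
```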
- DL_Track_US.gui_helpers.calculate_architecture_video.importVideo(vpath: str)
Function to import a video. Video file types should be common ones like .avi or .mp4.
Parameters
- vpath : str
String variable containing the path to the video. This should be an absolute path.
Returns
- cap : cv2.VideoCapture
Object that contains the video in an np.ndarray format. In this way, separate frames can be accessed.
- vid_len : int
Integer variable containing the number of frames present in cap.
- width : int
Integer variable containing the frame width of the input video.
- filename : str
String variable containing the name of the input video, not the entire path.
- vid_out : cv2.VideoWriter
Object that is stored in the vpath folder. Contains the analyzed video frames and is titled “…_proc.avi”. The name can be changed but must be different from the input video.
Examples
>>> importVideo(vpath="C:/Users/Dokuments/videos/video1.avi")
- DL_Track_US.gui_helpers.calculate_architecture_video.importVideoManual(vpath: str)
Function to import a video. Video file types should be common ones like .avi or .mp4. This function is used for manual analysis of videos.
Here, no processed video is saved subsequent to analysis.
Parameters
- vpathstr
String variable containing the video. This should be an absolute path.
Returns
- cap : cv2.VideoCapture
Object that contains the video in an np.ndarray format. In this way, separate frames can be accessed.
- vid_len : int
Integer variable containing the number of frames present in cap.
- vid_width : int
Integer value containing the frame width of the input video.
- vid_height : int
Integer value containing the frame height of the input video.
- filename : str
String value containing the name of the input video, not the entire path.
Examples
>>> importVideoManual(vpath="C:/Users/Dokuments/videos/video1.avi")
DL_Track_US.gui_helpers.calibrate module
Description
This module contains functions to automatically or manually scale images. The scope of the automatic method is limited to scaling bars being present on the right side of the image. The scope of the manual method is not limited to specific scaling types in images. However, the distance between two selected points in the image required for the scaling must be known.
Functions scope
- mclick
Instance method to detect mouse click coordinates in image.
- calibrateDistanceManually
Function to manually calibrate an image to convert measurements in pixel units to centimeters.
- calibrateDistanceStatic
Function to calibrate an image to convert measurements in pixel units to centimeters.
- DL_Track_US.gui_helpers.calibrate.calibrateDistanceManually(img: ndarray, spacing: int)
Function to manually calibrate an image to convert measurements in pixel units to centimeters.
The function calculates the distance in pixel units between two points on the input image. The points are determined by clicks of the user. The distance (in millimeters) is determined by the value contained in the spacing variable. Then the pixel/centimeter ratio is calculated. To get the distance, the Euclidean distance between the two points is calculated.
Parameters
- img : np.ndarray
Input image to be analysed as a numpy array. The image must be loaded prior to calibration; specifying a path is not valid.
- spacing : {10, 5, 15, 20}
Integer variable containing the known distance in millimeters between the two points placed by the user. This can be 5, 10, 15 or 20 millimeters.
Returns
- calib_dist : int
Integer variable containing the distance between the two specified points in pixel units.
- scale_statement : str
String variable containing a statement of how many millimeters correspond to how many pixels.
Examples
>>> calibrateDistanceManually(img=([[[[0.22414216 0.19730392 0.22414216] ... [0.2509804 0.2509804 0.2509804 ]]]), 5) 99, 5 mm corresponds to 99 pixels
- DL_Track_US.gui_helpers.calibrate.calibrateDistanceStatic(img: ndarray, spacing: str)
Function to calibrate an image to convert measurements in pixel units to centimeters.
The function calculates the distance in pixel units between two scaling bars on the input image. The bars should be positioned on the right side of the image. The distance (in millimeters) between two bars must be specified by the spacing variable. It is the known distance between two bars in millimeters. Then the pixel/centimeter ratio is calculated. To get the distance, the median distance between two detected bars is calculated.
Parameters
- imgnp.ndarray
Input image to be analysed as a numpy array. The image must be loaded prior to calibration; specifying a path is not valid.
- spacing{10, 5, 15, 20}
Integer variable containing the known distance in millimeters between the two scaling bars. This can be 5, 10, 15 or 20 millimeters.
Returns
- calib_distint
Integer variable containing the distance between the two specified points in pixel units.
- scale_statementstr
String variable containing a statement of how many millimeters correspond to how many pixels.
Examples
>>> calibrateDistanceStatic(img=([[[[0.22414216 0.19730392 0.22414216] ... [0.2509804 0.2509804 0.2509804 ]]]), 5) 99, 5 mm corresponds to 99 pixels
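Assuming the scaling-bar positions have already been detected in the image, the median-spacing step described above can be sketched as follows (`static_calibration` and the bar-position input are illustrative; the actual function also performs the bar detection itself):

```python
import numpy as np

def static_calibration(bar_positions, spacing_mm):
    """Sketch of the static approach: the calibration distance is the
    median spacing between neighbouring detected scaling bars."""
    positions = np.sort(np.asarray(bar_positions))
    # Median of the gaps between neighbouring bars, in pixels
    calib_dist = int(np.median(np.diff(positions)))
    scale_statement = f"{spacing_mm} mm corresponds to {calib_dist} pixels"
    return calib_dist, scale_statement
```

Using the median makes the estimate robust to a single mis-detected bar.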
- DL_Track_US.gui_helpers.calibrate.mclick(event, x_val, y_val, flags, param)
Instance method to detect mouse click coordinates in image.
This instance is used when the image to be analyzed should be cropped. Upon clicking the mouse button, the coordinates of the cursor position are stored in the instance attribute self.mlocs.
Parameters
- event
Event flag specified as the cv2 mouse event "left mouse button down".
- x_val
Value of x-coordinate of mouse event to be recorded.
- y_val
Value of y-coordinate of mouse event to be recorded.
- flags
Specific condition whenever a mouse event occurs. This is not used here but needs to be specified as an input parameter.
- param
User input data. This is not required here but needs to be specified as an input parameter.
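The callback pattern described here can be sketched without opening a window. `ClickRecorder` is a hypothetical stand-in for the class holding `self.mlocs`, and the `cv2.EVENT_LBUTTONDOWN` constant is hard-coded so the sketch runs without OpenCV installed:

```python
# Integer value of cv2.EVENT_LBUTTONDOWN, hard-coded here so the sketch
# does not require OpenCV.
EVENT_LBUTTONDOWN = 1

class ClickRecorder:
    """Minimal stand-in showing how mclick stores cursor coordinates."""

    def __init__(self):
        self.mlocs = []  # instance attribute holding recorded coordinates

    def mclick(self, event, x_val, y_val, flags, param):
        # Only left-button presses are recorded; flags and param are part
        # of the OpenCV callback signature but unused here.
        if event == EVENT_LBUTTONDOWN:
            self.mlocs.append((x_val, y_val))
```

With OpenCV available, such a callback would be registered via `cv2.setMouseCallback(window_name, recorder.mclick)`.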
DL_Track_US.gui_helpers.calibrate_video module
Description
This module contains functions to manually scale video frames. The scope of the manual method is not limited to specific scaling types in images. However, the distance between two selected points in the image required for the scaling must be known. We assume that the scaling does not change throughout the video, which is why only the first frame is used for calibration.
Functions scope
- mclick
Instance method to detect mouse click coordinates in image.
- calibrateDistanceManually
Function to manually calibrate an image to convert measurements in pixel units to centimeters.
See Also
calibrate.py
- DL_Track_US.gui_helpers.calibrate_video.calibrateDistanceManually(cap, spacing: int)
Function to manually calibrate an image to convert measurements in pixel units to centimeters.
The function calculates the distance in pixel units between two points on the input image. The points are determined by clicks of the user. The distance (in millimeters) is determined by the value contained in the spacing variable. Then the pixel / centimeter ratio is calculated. The distance is obtained as the Euclidean distance between the two points.
Parameters
- capcv2.VideoCapture
Object that contains the video in np.ndarray format. In this way, separate frames can be accessed.
- spacing{10, 5, 15, 20}
Integer variable containing the known distance in millimeters between the two points placed by the user. This can be 5, 10, 15 or 20 millimeters.
Returns
- calib_distint
Integer variable containing the distance between the two specified points in pixel units.
- scale_statementstr
String variable containing a statement of how many millimeters correspond to how many pixels.
Examples
>>> calibrateDistanceManually(cap=VideoCapture 000002A261ADC590, 5) 99, 5 mm corresponds to 99 pixels
- DL_Track_US.gui_helpers.calibrate_video.mclick(event, x_val, y_val, flags, param)
Instance method to detect mouse click coordinates in image.
This instance is used when the image to be analyzed should be cropped. Upon clicking the mouse button, the coordinates of the cursor position are stored in the instance attribute self.mlocs.
Parameters
- event
Event flag specified as the cv2 mouse event "left mouse button down".
- x_val
Value of x-coordinate of mouse event to be recorded.
- y_val
Value of y-coordinate of mouse event to be recorded.
- flags
Specific condition whenever a mouse event occurs. This is not used here but needs to be specified as an input parameter.
- param
User input data. This is not required here but needs to be specified as an input parameter.
DL_Track_US.gui_helpers.do_calculations module
Description
This module contains functions to calculate muscle architectural parameters based on binary segmentations by convolutional neural networks. The parameters include muscle thickness, pennation angle and fascicle length. First, input images are segmented by the CNNs. Then the predicted aponeuroses and fascicle fragments are thresholded and filtered. Fascicle fragments and aponeuroses are extrapolated and the intersections determined. This module is specifically designed for single image analysis. The architectural parameters are calculated and the results are plotted.
Functions scope
- sortContours
Function to sort detected contours from proximal to distal.
- contourEdge
Function to find only the coordinates representing one edge of a contour.
- doCalculations
Function to compute muscle architectural parameters based on convolutional neural network segmentation.
Notes
Additional information and usage examples can be found at the respective functions documentations.
- DL_Track_US.gui_helpers.do_calculations.contourEdge(edge: str, contour: list) ndarray
Function to find only the coordinates representing one edge of a contour.
Either the upper or lower edge of the detected contours is calculated. From the contour detected lower in the image, the upper edge is searched. From the contour detected higher in the image, the lower edge is searched.
Parameters
- edge{“T”, “B”}
String variable defining the type of edge that is searched. The variable can be either “T” (top) or “B” (bottom).
- contourlist
List variable containing sorted contours.
Returns
- xnp.ndarray
Array variable containing all x-coordinates from the detected contour.
- ynp.ndarray
Array variable containing all y-coordinates from the detected contour.
Examples
>>> contourEdge(edge="T", contour=[[[195 104]] ... [[196 104]]]) [196 197 198 199 200 ... 952 953 954 955 956 957], [120 120 120 120 120 ... 125 125 125 125 125 125]
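The edge extraction can be sketched as keeping, for each x-coordinate of the contour, only the topmost ("T") or bottommost ("B") point (`contour_edge_sketch` is an illustrative name; the actual implementation may differ in detail):

```python
import numpy as np

def contour_edge_sketch(edge, contour):
    """For each x-coordinate of a contour, keep only the topmost ("T",
    smallest y) or bottommost ("B", largest y) point, yielding one edge."""
    pts = np.asarray(contour).reshape(-1, 2)  # OpenCV contours are (N, 1, 2)
    xs = np.unique(pts[:, 0])
    pick = np.min if edge == "T" else np.max
    ys = np.array([pick(pts[pts[:, 0] == x, 1]) for x in xs])
    return xs, ys
```

Note that in image coordinates the y-axis points downwards, so the "top" edge corresponds to the minimum y-value.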
- DL_Track_US.gui_helpers.do_calculations.doCalculations(img: ndarray, img_copy: ndarray, h: int, w: int, calib_dist: int, spacing: int, filename: str, model_apo, model_fasc, scale_statement: str, dictionary: dict, filter_fasc: bool)
Function to compute muscle architectural parameters based on convolutional neural network segmentation in images.
First, images are segmented by the networks. Then, predictions are thresholded and filtered. The aponeurosis edges are computed and the fascicle length and pennation angle calculated. This is done by extrapolating fascicle segments above a threshold length. Then the intersections between the aponeurosis edge and fascicle structures are computed. Returns None when no more than one aponeurosis contour is detected in the image.
Parameters
- imgnp.ndarray
Normalized, reshaped and rescaled grayscale image to be analysed as a numpy array. The image must be loaded prior to model inputting; specifying a path is not valid.
- img_copynp.ndarray
A copy of the input image.
- hint
Integer variable containing the height of the input image (img).
- wint
Integer variable containing the width of the input image (img).
- calib_distint
Integer variable containing the distance between the two specified points in pixel units. This value was either computed automatically or manually. Must be non-negative. If "None", the values are outputted in pixel units.
- spacing{10, 5, 15, 20}
Integer variable containing the known distance in millimeters between the two points placed by the user or the scaling bars present in the image. This can be 5, 10, 15 or 20 millimeters. Must be non-negative and non-zero.
- filenamestr
String value containing the name of the input image, not the entire path.
- model_apo
Loaded aponeurosis neural network used for aponeurosis segmentation.
- model_fasc
Loaded fascicle neural network used for fascicle segmentation.
- scale_statementstr
String variable containing a statement of how many millimeters correspond to how many pixels. If calib_dist is "None", scale_statement will also be "None".
- dictionarydict
Dictionary variable containing analysis parameters. These must include apo_threshold, apo_length_tresh, fasc_threshold, fasc_cont_threshold, min_width, max_pennation and min_pennation.
- filter_fascbool
If True, fascicles will be filtered so that no crossings are included. This may reduce the total number of detected fascicles.
Returns
- fasc_llist
List variable containing the estimated fascicle lengths based on the segmented fascicle fragments, in pixel units as float. If calib_dist is specified, the length is computed in centimeters.
- pennationlist
List variable containing the estimated pennation angles based on the segmented fascicle fragments and aponeuroses as float.
- x_low1list
List variable containing the estimated x-coordinates of the lower edge from the upper aponeurosis as integers.
- x_high1list
List variable containing the estimated x-coordinates of the upper edge from the lower aponeurosis as integers.
- midthickfloat
Float variable containing the estimated distance between the lower and upper aponeurosis in pixel units. If calib_dist is specified, the distance is computed in centimeters.
- figmatplotlib.figure
Figure including the input image, the segmented aponeurosis and the extrapolated fascicles.
Notes
For more detailed documentation, see the respective functions documentation.
Examples
>>> doCalculations(img=[[[[0.10113753 0.09391343 0.09030136] [0.10878626 0.10101581 0.09713058] [0.10878634 0.10101589 0.09713066] ... [0. 0. 0. ] [0. 0. 0. ] [0. 0. 0. ]]]], img_copy=[[[[0.10113753 0.09391343 0.09030136] [0.10878626 0.10101581 0.09713058] [0.10878634 0.10101589 0.09713066] ... [0. 0. 0. ] [0. 0. 0. ] [0. 0. 0. ]]]], h=512, w=512,calib_dist=None, spacing=10, filename=test1, apo_modelpath="C:/Users/admin/Documents/DL_Track/Models_DL_Track/Final_models/model-VGG16-fasc-BCE-512.h5", fasc_modelpath="C:/Users/admin/Documents/DL_Track/Models_DL_Track/Final_models/model-apo-VGG-BCE-512.h5", scale_statement=None, dictionary={'apo_treshold': '0.2', 'apo_length_tresh': '600', fasc_threshold': '0.05', 'fasc_cont_thresh': '40', 'min_width': '60', 'min_pennation': '10', 'max_pennation': '40'}, filter_fasc = False) [1030.1118966321328, 1091.096002143386, ..., 1163.07073327008, 1080.0001937069776, 976.6099281240987] [19.400700671533016, 18.30126098122986, ..., 18.505345607096586, 18.727693601171197, 22.03704574228162] [441, 287, 656, 378, 125, 15, ..., -392, -45, -400, -149, -400] [1410, 1320, 1551, 1351, 1149, ..., 885, 937, 705, 869, 507] 348.1328577
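The extrapolation and intersection step described above can be sketched with straight-line fits. This is a deliberate simplification: `fascicle_parameters` is a hypothetical helper, and the real code additionally handles curvature, length thresholds and filtering:

```python
import numpy as np

def fascicle_parameters(fasc_x, fasc_y, apo_x, apo_y):
    """Sketch of the extrapolation step: fit a line through a fascicle
    fragment and through the aponeurosis edge, solve their intersection,
    and take the pennation angle as the angle between the two lines."""
    # First-order polynomial fits (slope, intercept)
    f_m, f_b = np.polyfit(fasc_x, fasc_y, 1)
    a_m, a_b = np.polyfit(apo_x, apo_y, 1)
    # Intersection of the two extrapolated lines
    x_int = (a_b - f_b) / (f_m - a_m)
    y_int = f_m * x_int + f_b
    # Pennation angle between fascicle and aponeurosis, in degrees
    pennation = abs(np.degrees(np.arctan(f_m) - np.arctan(a_m)))
    return (x_int, y_int), pennation
```

For instance, a fascicle fragment with slope 1 meeting a horizontal aponeurosis gives a pennation angle of 45 degrees.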
- DL_Track_US.gui_helpers.do_calculations.filter_fascicles(df: DataFrame) DataFrame
Filters out fascicles that intersect with their neighboring fascicles based on their x_low and x_high values.
Parameters
- dfpd.DataFrame
A DataFrame containing the fascicle data. Expected columns include ‘x_low’, ‘y_low’, ‘x_high’, and ‘y_high’.
Returns
- pd.DataFrame
A DataFrame with the fascicles that do not intersect with their neighbors.
Example
>>> data = {'x_low': [1, 3, 5], 'y_low': [1, 2, 3], 'x_high': [4, 6, 7], 'y_high': [4, 5, 6]}
>>> df = pd.DataFrame(data)
>>> print(filter_fascicles(df))
   x_low  y_low  x_high  y_high
0      1      1       4       4
2      5      3       7       6
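The neighbor-overlap idea can be sketched as a greedy pass over the fascicles. This sketch happens to reproduce the example above, but the actual intersection test in the package may differ:

```python
import pandas as pd

def filter_fascicles_sketch(df):
    """Greedy sketch: keep a fascicle only if its [x_low, x_high] span
    does not overlap the span of the previously kept fascicle."""
    kept = []
    last_high = None
    for row in df.itertuples():
        if last_high is None or row.x_low >= last_high:
            kept.append(row.Index)
            last_high = row.x_high
    return df.loc[kept]
```

On the example data, the middle fascicle (span 3-6) overlaps the first (span 1-4) and is dropped, while rows 0 and 2 survive.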
- DL_Track_US.gui_helpers.do_calculations.sortContours(cnts: list)
Function to sort detected contours from proximal to distal.
The input contours belong to the aponeuroses and are sorted based on their coordinates, from smallest to largest. Moreover, a bounding box is built for each detected contour. The bounding boxes are sorted as well; they are, however, not needed for further analyses.
Parameters
- cntslist
List of arrays containing the detected aponeurosis contours.
Returns
- cntstuple
Tuple containing arrays of sorted contours.
- bounding_boxestuple
Tuple containing tuples with sorted bounding boxes.
Examples
>>> sortContours(cnts=[array([[[928, 247]], ... [[929, 247]]], dtype=int32), ((array([[[228, 97]], ... [[229, 97]]], dtype=int32), (array([[[228, 97]], ... [[229, 97]]], dtype=int32), (array([[[928, 247]], ... [[929, 247]]], dtype=int32)), ((201, 97, 747, 29), (201, 247, 750, 96))
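The sorting can be sketched without OpenCV by deriving each bounding box from the contour's extreme coordinates and ordering contours by the box's top edge (`sort_contours_sketch` is an illustrative name):

```python
import numpy as np

def sort_contours_sketch(cnts):
    """Sort contours (and their bounding boxes) from proximal to distal,
    i.e. by the topmost y-coordinate of each contour's bounding box."""
    boxes = []
    for c in cnts:
        pts = np.asarray(c).reshape(-1, 2)  # OpenCV contours are (N, 1, 2)
        x, y = pts[:, 0].min(), pts[:, 1].min()
        w, h = pts[:, 0].max() - x, pts[:, 1].max() - y
        boxes.append((int(x), int(y), int(w), int(h)))
    order = sorted(range(len(cnts)), key=lambda i: boxes[i][1])
    return tuple(cnts[i] for i in order), tuple(boxes[i] for i in order)
```

With OpenCV available, the per-contour box would typically come from `cv2.boundingRect` instead.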
DL_Track_US.gui_helpers.do_calculations_video module
Description
This module contains functions to calculate muscle architectural parameters based on binary segmentations by convolutional neural networks. The parameters include muscle thickness, pennation angle and fascicle length. First, input images are segmented by the CNNs. Then the predicted aponeuroses and fascicle fragments are thresholded and filtered. Fascicle fragments and aponeuroses are extrapolated and the intersections determined. This module is specifically designed for video analysis and is predisposed for execution from a tk.TK GUI instance. The architectural parameters are calculated. The results are plotted and converted to an output video displaying the segmentations. Each frame is evaluated separately, independently from the previous frames.
Functions scope
- doCalculations
Function to compute muscle architectural parameters based on convolutional neural network segmentation.
Notes
Additional information and usage examples can be found at the respective functions documentations. See specifically do_calculations.py.
See Also
do_calculations.py
- DL_Track_US.gui_helpers.do_calculations_video.doCalculationsVideo(vid_len: int, cap, vid_out, flip: str, apo_model, fasc_model, calib_dist: int, dic: dict, step: int, filter_fasc: bool, gui)
Function to compute muscle architectural parameters based on convolutional neural network segmentation in videos.
First, images are segmented by the networks. Then, predictions are thresholded and filtered. The aponeurosis edges are computed and the fascicle length and pennation angle calculated. This is done by extrapolating fascicle segments above a threshold length. Then the intersections between the aponeurosis edge and fascicle structures are computed. Returns None when no more than one aponeurosis contour is detected in the image.
Parameters
- vid_lenint
Integer variable containing the number of frames present in cap.
- capcv2.VideoCapture
Object that contains the video in np.ndarray format. In this way, separate frames can be accessed.
- vid_outcv2.VideoWriter
Object that is stored in the vpath folder. It contains the analyzed video frames and is titled "…_proc.avi". The name can be changed but must differ from the input video name.
- flip{“no_flip”, “flip”}
String variable defining whether an image should be flipped. This can be "no_flip" (video is not flipped) or "flip" (video is flipped).
- apo_model
Aponeurosis neural network.
- fasc_model
Fascicle neural network.
- calib_distint
Integer variable containing the distance between the two specified points in pixel units. The points must be 10 mm apart. Must be non-negative. If "None", the values are outputted in pixel units.
- dicdict
Dictionary variable containing analysis parameters. These must include apo_threshold, fasc_threshold, fasc_cont_threshold, min_width, max_pennation and min_pennation.
- stepint
Integer variable containing the step for the range of video frames. If step != 1, frames are skipped according to the size of step. This might decrease processing time but also accuracy.
- filter_fascbool
If True, fascicles will be filtered so that no crossings are included. This may reduce the total number of detected fascicles.
- guitk.TK
A tkinter.TK class instance that represents a GUI. By passing this argument, interaction with the GUI is possible i.e., stopping the calculation process after each image.
Returns
- fasc_l_alllist
List of arrays containing the estimated fascicle lengths based on the segmented fascicle fragments, in pixel units as float. If calib_dist is specified, the length is computed in centimeters. This is computed for each frame in the video.
- pennation_alllist
List of lists containing the estimated pennation angles based on the segmented fascicle fragments and aponeuroses as float. This is computed for each frame in the video.
- x_lows_alllist
List of lists containing the estimated x-coordinates of the lower edge from the upper aponeurosis as integers. This is computed for each frame in the video.
- x_highs_alllist
List of lists containing the estimated x-coordinates of the upper edge from the lower aponeurosis as integers. This is computed for each frame in the video.
- midthick_alllist
List variable containing the estimated distance between the lower and upper aponeurosis in pixel units. If calib_dist is specified, the distance is computed in centimeters. This is computed for each frame in the video.
Examples
>>> doCalculations(vid_len=933, cap=< cv2.VideoCapture 000002BFAD0560B0>, vid_out=< cv2.VideoWriter 000002BFACEC0130>, flip="no_flip", apo_modelpath="C:/Users/admin/Documents/DL_Track/Models_DL_Track/Final_models/model-VGG16-fasc-BCE-512.h5", fasc_modelpath="C:/Users/admin/Documents/DL_Track/Models_DL_Track/Final_models/model-apo-VGG-BCE-512.h5", calib_dist=98, dic={'apo_treshold': '0.2', 'fasc_threshold': '0.05', 'fasc_cont_thresh': '40', 'min_width': '60', 'min_pennation': '10', 'max_pennation': '40'}, filter_fasc = False, gui=<__main__.DL_Track_US object at 0x000002BFA7528190>) [array([60.5451731 , 58.86892027, 64.16011534, 55.46192704, 63.40711356]), ..., array([64.90849385, 60.31621836])] [[19.124207107383114, 19.409753216521565, 18.05706763600641, 20.54453899050867, 17.808652286488794], ..., [17.26241882195032, 16.284803480359543]] [[148, 5, 111, 28, -164], [356, 15, 105, -296], [357, 44, -254], [182, 41, -233], [40, 167, 42, -170], [369, 145, 57, -139], [376, 431, 32], [350, 0]] [[725, 568, 725, 556, 444], [926, 572, 516, 508], [971, 565, 502], [739, 578, 474], [554, 766, 603, 475], [1049, 755, 567, 430], [954, 934, 568], [968, 574]] [23.484416057267826, 22.465452189555794, 21.646971767045816, 21.602856412413924, 21.501286239714894, 21.331137350026623, 21.02446763240188, 21.250352548097883]
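The frame-stepping behaviour of the `step` parameter can be sketched independently of OpenCV. Here `process_video_sketch` and the `analyze` callable are hypothetical placeholders for the per-frame segmentation and parameter calculation:

```python
def process_video_sketch(frames, step, analyze):
    """Sketch of the frame loop: every `step`-th frame is passed to the
    per-frame analysis and the per-frame results are accumulated."""
    fasc_l_all, pennation_all = [], []
    for idx in range(0, len(frames), step):
        fasc_l, pennation = analyze(frames[idx])  # per-frame results
        fasc_l_all.append(fasc_l)
        pennation_all.append(pennation)
    return fasc_l_all, pennation_all
```

With step=3, only every third frame is analyzed, which shortens processing time at the cost of temporal resolution.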
DL_Track_US.gui_helpers.manual_tracing module
Description
This module contains a class to manually annotate longitudinal ultrasonography images and videos. When the class is initiated, a graphical user interface is opened. There, the user can annotate muscle fascicle length, pennation angle and muscle thickness. Moreover, the images can be scaled in order to get measurements in centimeters rather than pixels. By clicking the respective buttons in the GUI, the user can switch between the different parameters to analyze. The analysis is not restricted to any specific muscles. However, its use is restricted to a specific method for assessing muscle thickness, fascicle length and pennation angles. Moreover, each video frame is analyzed separately. An .xlsx file is returned containing the analysis results for muscle fascicle length, pennation angle and muscle thickness.
Functions scope
For scope of the functions see class documentation.
Notes
Additional information and usage examples can be found at the respective functions docstrings.
- class DL_Track_US.gui_helpers.manual_tracing.ManualAnalysis(img_list: str, rootpath: str)
Bases:
object
Python class to manually annotate longitudinal muscle ultrasonography images/videos of human lower limb muscles. An analysis tkinter GUI is opened upon initialization of the class. By clicking the buttons, the user can switch between different parameters to analyze in the images.
- Muscle thickness:
Exactly one segment reaching from the superficial to the deep aponeurosis of the muscle must be drawn. If multiple measurements are drawn, they are averaged. Drawing is started by clicking the left mouse button and keeping it pressed until the line is complete (i.e., the other aponeurosis border is reached). Only the y-coordinates of the points where the cursor was clicked and released are considered for the calculation of muscle thickness.
- Fascicle length:
Exactly three segments along the fascicle of the muscle must be drawn. If multiple fascicles are drawn, their lengths are averaged. Drawing is started by clicking the left mouse button and keeping it pressed until one segment is finished (mostly where fascicle curvature occurs or the other aponeurosis border is reached). Using the Euclidean distance, the total fascicle length is computed as the sum of the segments.
- Pennation angle:
Exactly two segments must be drawn: one along the fascicle orientation, the other along the aponeurosis orientation. The line along the aponeurosis must start where the line along the fascicle ends. If multiple angles are drawn, they are averaged. Drawing is started by clicking the left mouse button and keeping it pressed until the line is complete (i.e., the aponeurosis border is reached by the fascicle). The angle is calculated using the arc tangent function.
In order to scale the image/video frame, a line of 10 millimeters length must be drawn somewhere in the image. The line can be drawn in the same fashion as, for example, the muscle thickness. Here, however, the Euclidean distance is used to calculate the pixel / centimeter ratio. This has to be done for every frame. We also provide the functionality to extend the muscle aponeuroses to more easily extrapolate fascicles. These lines can be drawn in the same fashion as, for example, the muscle thickness. During the analysis process, care must be taken not to accidentally click the left mouse button, as those coordinates might corrupt the results, given that calculations are based on a strict analysis protocol.
Attributes
- self.image_listlist
A list variable containing the absolute paths to all images / video to be analyzed.
- self.rootpathstr
Path to root directory where images / videos for the analysis are saved.
- self.lineslist
A list variable containing all lines that are drawn upon the image by the user. The list is emptied each time the analyzed parameter is changed.
- self.scale_coordslist
A list variable containing the xy-coordinates of the scaling line start- and endpoints used to calculate the distance between them. The list is emptied each time a new image is scaled.
- self.thick_coordslist
A list variable containing the xy-coordinates of the muscle thickness line start- and endpoints used to calculate the distance between them. The list is emptied each time a new image is scaled. Only the y-coordinates are used for further analysis.
- self.fasc_coordslist
A list variable containing the xy-coordinates of the fascicle length line segment start- and endpoints used to calculate the total length of the fascicle. The list is emptied each time a new image is analyzed.
- self.pen_coordslist
A list variable containing the xy-coordinates of the pennation angle line segment start- and endpoints used to calculate the angle of the fascicle. The list is emptied each time a new image is analyzed.
- self.coordsdict
Dictionary variable storing the xy-coordinates of mouse events during analysis. Mouse events are clicking and releasing of the left mouse button as well as dragging of the cursor.
- self.countint, default = 0
Index of the image / video frame currently analyzed in the list of image file / video frame paths. The default is 0, as the first image / frame analyzed always has index 0 in the list.
- self.dataframepd.DataFrame
Pandas dataframe that stores the analysis results such as file name, fascicle length, pennation angle and muscle thickness. This dataframe is then saved to an .xlsx file.
- self.headtk.TK
tk.Toplevel instance opening a window containing the manual image analysis options.
- self.modetk.StringVar, default = thick
tk.StringVar variable containing the current parameter analysed by the user. The parameters are: muscle thickness (thick), fascicle length (fasc), pennation angle (pen), scaling (scale) and aponeurosis drawing (apo). The variable is updated upon selection by the user. The default is muscle thickness.
- self.canvastk.Canvas
tk.Canvas variable representing the canvas the image is plotted on in the GUI. The canvas is used to draw on the image.
- self.imgImageTk.PhotoImage
ImageTk.PhotoImage variable containing the current image that is analyzed. It is necessary to load the image in this way in order to plot the image.
- self.distint
Integer variable containing the length of the scaling line in pixel units. This variable is then used to scale the image.
Methods
- __init__
Instance method to initialize the class.
- calculateBatchManual
Instance method creating the GUI for manual image analysis.
- calculateBatchManual()
Instance method creating a GUI for manual annotation of longitudinal ultrasound images of human lower limb muscles.
The GUI contains several analysis options for the currently opened image. The user is able to switch between analysis of muscle thickness, fascicle length and pennation angle. Moreover, the image can be scaled and the aponeuroses can be extended to ease fascicle extrapolation. When one image is finished, the GUI is updated by clicking "next image". The results can be saved by clicking "save results". It is possible to save each image separately. The GUI can be closed by clicking "break analysis" or by simply closing the window.
Notes
The GUI can be initiated from the main DL_Track GUI. It is also possible to initiate the ManualAnalysis class from the command prompt and interact with its GUI as a standalone. To do so, lines 231 (self.head = tk.Tk()) and 314 (self.head.mainloop()) must be uncommented and line 233 (self.head = tk.Toplevel()) must be commented.
- calculateFascicles(fasc_list: list) list
Instance method to calculate muscle fascicle length as the sum of three annotated segments.
The length of the three line segments is calculated by summing their individual lengths. Here, the length of a single annotated fascicle is calculated considering the three drawn segments of the respective fascicle. The Euclidean distance between the start and endpoint of each segment is calculated and summed. The fascicle length is then outputted in pixel units.
Parameters
- fasc_listlist
List variable containing the xy-coordinates of the first mouse click event and the mouse release event of each annotated fascicle segment.
Returns
- fascicleslist
List variable containing the fascicle length in pixel units. This value is calculated by summing the euclidean distance of each drawn fascicle segment. If multiple fascicles are drawn, multiple fascicle length values are outputted.
Example
>>> calculateFascicles(fasc_list=[[392.0, 622.0], [632.0, 544.0], [632.0, 544.0], [1090.0, 415.0], [1090.0, 415.0], [1274.0, 381.0], [449.0, 627.0], [748.0, 541.0], [748.0, 541.0], [1109.0, 429.0], [1109.0, 429.0], [1297.0, 390.0]]) [915.2921723246823, 881.0996335404545]
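The segment-summing described above can be sketched directly; this sketch reproduces the example values. `fascicle_lengths_sketch` is an illustrative name, and the six-points-per-fascicle grouping (three segments, each with a click and a release point) is assumed from the class documentation:

```python
import math

def fascicle_lengths_sketch(fasc_list):
    """Sum the Euclidean lengths of consecutive point pairs; every six
    points (three drawn segments) make up one annotated fascicle."""
    lengths = []
    for i in range(0, len(fasc_list), 6):
        points = fasc_list[i:i + 6]
        # Pair up (start, end) points of the three segments
        segments = zip(points[0::2], points[1::2])
        lengths.append(sum(math.dist(p, q) for p, q in segments))
    return lengths
```

Applied to the example coordinates above, this yields approximately 915.29 and 881.10 pixel units, matching the documented output.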
- calculatePennation(pen_list: list) list
Instance method to calculate muscle pennation angle between three points.
The angle between three points is calculated using the arc tangent. Here, the points used for calculation are the start and endpoint of the line segment drawn alongside the fascicle as well as the endpoint of the segment drawn along the aponeurosis. The pennation angle is outputted in degrees.
Parameters
- pen_listlist
List variable containing the xy-coordinates of the first mouse click event and the mouse release event of each annotated pennation angle segment.
Returns
- pen_angleslist
List variable containing the pennation angle in degrees. If multiple pennation angles are drawn, multiple angle values are outputted.
See Also
self.getAngle()
Example
>>> calculatePennation(pen_list=[[760.0, 579.0], [620.0, 629.0], [620.0, 629.0], [780.0, 631.0], [533.0, 571.0], [378.0, 627.0], [378.0, 627.0], [558.0, 631.0]] ) [20.369984003523715, 21.137327423466722]
- calculateThickness(thick_list: list) list
Instance method to calculate distance between deep and superficial muscle aponeuroses, also known as muscle thickness.
The length of the segment is calculated by determining the absolute distance of the y-coordinates of two points. Here, the muscle thickness is calculated considering only the y-coordinates of the start and end points of the drawn segments. In this way, incorrect placement of the segments by drawing skewed lines can be accounted for. Then the thickness is outputted in pixel units.
Parameters
- thick_listlist
List variable containing the xy-coordinates of the first mouse click event and the mouse release event of the muscle thickness annotation. This must be the points at the beginning and end of the thickness segment.
Returns
- thicknesslist
List variable containing the muscle thickness in pixel units. This value is calculated using only the difference of the y-coordinates of the two points. If multiple segments are drawn, multiple thickness values are outputted.
Example
>>> calculateThickness(thick_list=[[450, 41.0], [459, 200]]) [159]
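A minimal sketch of the y-only distance calculation (`thickness_sketch` is an illustrative name):

```python
def thickness_sketch(thick_list):
    """Muscle thickness uses only the y-coordinates of each segment's
    start and end point, so skewed lines do not inflate the value."""
    return [abs(end[1] - start[1])
            for start, end in zip(thick_list[0::2], thick_list[1::2])]
```

For the example above, the points (450, 41.0) and (459, 200) give a thickness of 159 pixel units, ignoring the 9-pixel horizontal offset.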
- click(event: str)
Instance method to record mouse clicks on canvas.
When the left mouse button is clicked on the canvas, the xy-coordinates are stored for further analysis. When the button is clicked multiple times, multiple values are stored.
Parameters
- eventstr
tk.Event variable containing the mouse event that is bound to the instance method.
Examples
>>> self.canvas.bind("<ButtonPress-1>", self.click)
- closeWindow()
Instance method to close window upon call.
This method is evoked by clicking the button “Break Analysis”. It is necessary to use a different function, otherwise the window would be destroyed upon starting.
- compileSaveResults()
Instance method to save the analysis results.
A pd.DataFrame object must be used. The results included in the dataframe are saved to an .xlsx file. Depending on the form of class initialization, the .xlsx file is saved either to self.root (GUI) or self.out (command prompt).
Notes
An .xlsx file is saved to a specified location containing all analysis results.
- drag(event: str)
Instance method to record mouse cursor dragging on canvas.
When the cursor is dragged on the canvas, the xy-coordinates are stored and updated for further analysis. This is used to draw the line that follows the cursor on the screen. Coordinates are only recorded when the left mouse button is pressed.
Parameters
- eventstr
tk.Event variable containing the mouse event that is bound to the instance method.
Examples
>>> self.canvas.bind("<B1-Motion>", self.drag)
- getAngle(a: list, b: list, c: list) float
Instance method to calculate angle between three points.
The angle is calculated using the arc tangent. The arc tangent is used to find the slope in radians when the y- and x-coordinates are given. The output is the arc tangent of y/x in radians, which lies between −π and π. The output is then converted to degrees.
Parameters
- alist
List variable containing the xy-coordinates of the first mouse click event of the pennation angle annotation. This must be the point at the beginning of the fascicle segment.
- blist
List variable containing the xy-coordinates of the second mouse click event of the pennation angle annotation. This must be the point at the intersection between fascicle and aponeurosis.
- clist
List variable containing the xy-coordinates of the fourth mouse event of the pennation angle annotation. This must be the endpoint of the aponeurosis segment.
Returns
- angfloat
Float variable containing the pennation angle in degrees between the segments drawn on the canvas by the user. Only the angle smaller than 180 degrees is returned.
Example
>>> getAngle(a=[904.0, 41.0], b=[450,380], c=[670,385]) 25.6
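The three-point arc-tangent calculation can be sketched with `math.atan2`. `get_angle_sketch` is an illustrative name; it reproduces the `calculatePennation` example values above, though the exact formula in the package may differ:

```python
import math

def get_angle_sketch(a, b, c):
    """Angle at point b between the segments b->a and b->c, computed via
    atan2 and converted to degrees; the angle below 180 deg is returned."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang
```

Using the first pennation annotation from the calculatePennation example (a=[760.0, 579.0], b=[620.0, 629.0], c=[780.0, 631.0]) gives roughly 20.37 degrees.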
- release(event: str)
Instance method to record mouse button releases on canvas.
When the left mouse button is released on the canvas, the xy-coordinates are stored for further analysis. When the button is released multiple times, multiple values are stored.
Parameters
- eventstr
tk.Event variable containing the mouse event that is bound to the instance method.
Examples
>>> self.canvas.bind("<ButtonRelease-1>", self.release)
- saveResults()
Instance Method to save the analysis results to a pd.DataFrame.
A list of each variable to be saved must be used. The list must contain the coordinates of the recorded events during parameter analysis. Then, the respective parameters are calculated using the coordinates in each list and inculded in a dataframe. Estimates or fascicle length, pennation angle, muscle thickness are saved.
Notes
In order to correctly calculate the muscle parameters, the number of coordinates must be specific. See the class documentation or the documentation of the functions used to calculate the parameters.
See Also
self.calculateThickness, self.calculateFascicle, self.calculatePennation
- setAngles()
Instance method to display the instructions for pennation angle analysis.
This function is bound to the “Pennation Angle” radiobutton and appears each time the button is selected. In this way, the user is reminded on how to analyze pennation angle.
- setApo()
Instance method to display the instructions for aponeuroses extending.
This function is bound to the “Draw Aponeurosis” radiobutton and appears each time the button is selected. In this way, the user is reminded on how to draw aponeurosis extension on the image.
- setFascicles()
Instance method to display the instructions for muscle fascicle analysis.
This function is bound to the “Muscle Fascicles” radiobutton and appears each time the button is selected. In this way, the user is reminded on how to analyze muscle fascicles.
- setScale()
Instance method to display the instructions for image scaling.
This function is bound to the “Scale Image” radiobutton and appears each time the button is selected. In this way, the user is reminded on how to scale the image.
- setThickness()
Instance method to display the instructions for muscle thickness analysis.
This function is bound to the “Muscle Thickness” radiobutton and appears each time the button is selected. In this way, the user is reminded on how to analyze muscle thickness.
- stopAnalysis()
Instance method to stop the current analysis on the canvas in the GUI.
The analysis is stopped upon clicking the “Break Analysis” button in the GUI. The current analysis results are saved when the analysis is terminated. The analysis can be terminated at any timepoint during manual annotation. Prior to terminating, the user is asked for confirmation.
- updateImage()
Instance method to update the current image displayed on the canvas in the GUI.
The image is updated upon clicking the “Next Image” button in the GUI. Images can be updated until all images have been analyzed. Even when the image is updated, the analysis results are stored.
DL_Track_US.gui_helpers.model_training module
Description
This module contains functions to train a VGG16 encoder U-net decoder CNN. The module was specifically designed to be executed from a GUI. When used from the GUI, the module saves the trained model and weights to a given directory. The user needs to provide paths to the image and label/mask directories. Instructions for correct image labelling can be found in the Labelling directory.
Functions scope
- conv_block
Function to build a convolutional block for the U-net decoder path of the network. The block is built using several keras.layers functionalities.
- decoder_block
Function to build a decoder block for the U-net decoder path of the network. The block is built using several keras.layers functionalities.
- build_vgg16_model
Function that builds a convolutional network consisting of a VGG16 encoder path and a U-net decoder path.
- IoU
Function to compute the intersection over union score (IoU), a measure of prediction accuracy. This is sometimes also called Jaccard score.
- dice_score
Function to compute the Dice score, a measure of prediction accuracy.
- focal_loss
Function to compute the focal loss, a measure of prediction accuracy.
- load_images
Function to load images and manually labeled masks from a specified directory.
- train_model
Function to train a convolutional neural network with VGG16 encoder and U-net decoder. All the steps necessary to properly train a neural network are included in this function.
Notes
Additional information and usage examples can be found at the respective functions documentations.
- DL_Track_US.gui_helpers.model_training.IoU(y_true, y_pred, smooth: int = 1) float
Function to compute the intersection over union score (IoU), a measure of prediction accuracy. This is sometimes also called Jaccard score.
The IoU can be used as a loss metric during binary segmentation when convolutional neural networks are applied. The IoU is calculated for both the training and validation set.
Parameters
- y_true : tf.Tensor
True positive image segmentation label predefined by the user. This is the mask that is provided prior to model training.
- y_pred : tf.Tensor
Predicted image segmentation by the network.
- smooth : int, default = 1
Smoothing operator applied during final calculation of IoU. Must be non-negative and non-zero.
Returns
- iou : tf.Tensor
IoU representation in the same shape as y_true, y_pred.
Notes
The IoU is usually calculated as IoU = intersection / union. The intersection is calculated as the overlap of y_true and y_pred, whereas the union is the sum of y_true and y_pred.
Examples
>>> IoU(y_true=Tensor("IteratorGetNext:1", shape=(1, 512, 512, 1), dtype=float32), y_pred=Tensor("VGG16_U-Net/conv2d_8/Sigmoid:0", shape=(1, 512, 512, 1), dtype=float32), smooth=1)
Tensor("truediv:0", shape=(1, 512, 512), dtype=float32)
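The smoothed IoU described in the Notes can be sketched with NumPy. This is a stand-in for the TensorFlow version; computing the union as the sum of both masks minus their intersection is an assumption based on the standard Jaccard definition:

```python
import numpy as np

def iou(y_true, y_pred, smooth=1):
    """Smoothed intersection over union for binary masks.

    NumPy stand-in for the TensorFlow implementation; `smooth`
    prevents division by zero for empty masks.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    intersection = np.sum(y_true * y_pred)          # overlap of both masks
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return (intersection + smooth) / (union + smooth)

mask = np.ones((4, 4))
print(iou(mask, mask))  # identical masks -> 1.0
```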
- DL_Track_US.gui_helpers.model_training.build_vgg16_unet(input_shape: tuple)
Function that builds a convolutional network consisting of a VGG16 encoder path and a U-net decoder path.
The model is built using several Tensorflow.Keras functions. First, the whole VGG16 model is imported and built using pretrained imagenet weights and the input shape. Then, the encoder layers are pulled from the model, including the bridge. Subsequently, the decoder path from the U-net is built. Lastly, a 1x1 convolution is applied with sigmoid activation to perform binary segmentation on the input.
Parameters
- input_shape : tuple
Tuple describing the input shape. Must be of shape (…,…,…). Here we used (512,512,3) as input shape. The image size (512,512,) can be easily adapted. The channel number (,,3) is given by the model and the pretrained weights. We advise the user not to change the image size, as segmentation results were best with the predefined size.
Returns
- model
The built VGG16 encoder U-net decoder convolutional network for binary segmentation on the input. The model can subsequently be used for training.
Notes
See our paper () and references for a more detailed model description.
References
[1] VGG16: Simonyan, Karen, and Andrew Zisserman. “Very deep convolutional networks for large-scale image recognition.” arXiv preprint arXiv:1409.1556 (2014).
[2] U-net: Ronneberger, O., Fischer, P. and Brox, T. “U-Net: Convolutional Networks for Biomedical Image Segmentation.” arXiv preprint arXiv:1505.04597 (2015).
- DL_Track_US.gui_helpers.model_training.conv_block(inputs, num_filters: int)
Function to build a convolutional block for the U-net decoder path of the network to be built. The block is built using several keras.layers functionalities.
Here, we decided to use ‘padding = same’ and a convolutional kernel size of 3. This is adaptable in the code but will influence the model outcome. The convolutional block consists of two convolutional layers. Each creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs.
Parameters
- inputs : KerasTensor
Concatenated Tensorflow.Keras Tensor output from the previous layer. The Tensor can be altered by adapting, e.g., the filter numbers, but this will change the model training output. The input is then convolved using the built kernel.
- num_filters : int
Integer variable determining the number of filters used during model training. Here, we started with ‘num_filters = 512’. The filter number is halved in each layer. The number of filters can be adapted in the code. Must be non-negative and non-zero.
Returns
- x : KerasTensor
Tensorflow.Keras Tensor used during model training. The Tensor can be altered by adapting the input parameters to the function or the upsampling, but this will change the model training. The number of filters is halved.
Example
>>> conv_block(inputs=KerasTensor(type_spec=TensorSpec(shape=(None, 256, 256, 128), dtype=tf.float32, name=None)), num_filters=128)
KerasTensor(type_spec=TensorSpec(shape=(None, 256, 256, 64), dtype=tf.float32, name=None))
- DL_Track_US.gui_helpers.model_training.decoder_block(inputs, skip_features, num_filters)
Function to build a decoder block for the U-net decoder path of the network to be built. The block is built using several keras.layers functionalities.
The block is built by applying a deconvolution (Keras.Conv2DTranspose) to upsample the input by a factor of 2. A concatenation with the skipped features from the mirrored vgg16 convolutional layer follows. Subsequently, a convolutional block (see conv_block function) is applied to convolve the input with the built kernel.
Parameters
- inputs : KerasTensor
Concatenated Tensorflow.Keras Tensor output from the previous layer. The Tensor can be altered by adapting, e.g., the filter numbers, but this will change the model training output.
- skip_features : KerasTensor
Skip connections to the encoder path of the vgg16 encoder.
- num_filters : int
Integer variable determining the number of filters used during model training. Here, we started with ‘num_filters = 512’. The filter number is halved in each layer. The number of filters can be adapted in the code. Must be non-negative and non-zero.
Returns
- x : KerasTensor
Tensorflow.Keras Tensor used during model training. The tensor is upsampled using Keras.Conv2DTranspose with a kernel of (2,2), ‘stride=2’ and ‘padding=same’. The upsampling increases image size by a factor of 2. The number of filters is halved. The Tensor can be altered by adapting the input parameters to the function or the upsampling, but this will change the model training.
Example
>>> decoder_block(inputs=KerasTensor(type_spec=TensorSpec(shape=(None, 64, 64, 512), dtype=tf.float32, name=None)), skip_features=KerasTensor(type_spec=TensorSpec(shape=(None, 64, 64, 512), dtype=tf.float32, name=None)), num_filters=256)
KerasTensor(type_spec=TensorSpec(shape=(None, 128, 128, 256), dtype=tf.float32, name=None))
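The shape arithmetic of the decoder path can be illustrated without TensorFlow. The sketch below only tracks tensor shapes through the decoder blocks described above (spatial size doubled per block, filter count halved); the starting bridge shape is an assumed example, not taken from the package:

```python
def decoder_shape(shape, num_filters):
    """Shape bookkeeping for one decoder block: Conv2DTranspose with
    kernel (2, 2) and stride 2 doubles height and width, while the
    following convolutional block sets the channel count to num_filters."""
    h, w, _ = shape
    return (h * 2, w * 2, num_filters)

shape = (32, 32, 512)  # assumed bridge output for a 512x512 input
for filters in (512, 256, 128, 64):  # filters halved each decoder block
    shape = decoder_shape(shape, filters)
print(shape)  # back to full resolution: (512, 512, 64)
```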
- DL_Track_US.gui_helpers.model_training.dice_score(y_true, y_pred) float
Function to compute the Dice score, a measure of prediction accuracy.
The Dice score can be used as a loss metric during binary segmentation when convolutional neural networks are applied. The Dice score is calculated for both the training and validation set.
Parameters
- y_true : tf.Tensor
True positive image segmentation label predefined by the user. This is the mask that is provided prior to model training.
- y_pred : tf.Tensor
Predicted image segmentation by the network.
Returns
- score : tf.Tensor
Dice score representation in the same shape as y_true, y_pred.
Notes
The Dice score is usually calculated as Dice = 2 * intersection / union. The intersection is calculated as the overlap of y_true and y_pred, whereas the union is the sum of y_true and y_pred.
Examples
>>> dice_score(y_true=Tensor("IteratorGetNext:1", shape=(1, 512, 512, 1), dtype=float32), y_pred=Tensor("VGG16_U-Net/conv2d_8/Sigmoid:0", shape=(1, 512, 512, 1), dtype=float32))
Tensor("dice_score/truediv:0", shape=(1, 512, 512), dtype=float32)
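The Dice formula from the Notes can be sketched with NumPy. This is a stand-in for the TensorFlow version; the smoothing term is an assumption added to avoid division by zero on empty masks:

```python
import numpy as np

def dice_score(y_true, y_pred, smooth=1):
    """Dice = 2 * intersection / union, where the union is the sum
    of both masks; `smooth` (an assumption) guards empty masks."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    intersection = np.sum(y_true * y_pred)
    return (2 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

mask = np.ones((4, 4))
print(dice_score(mask, mask))  # identical masks -> 1.0
```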
- DL_Track_US.gui_helpers.model_training.focal_loss(y_true, y_pred, alpha: float = 0.8, gamma: float = 2) float
Function to compute the focal loss, a measure of prediction accuracy.
The focal loss can be used as a loss metric during binary segmentation when convolutional neural networks are applied. The focal loss score is calculated for both the training and validation set. The focal loss is specifically applicable when class imbalances, i.e. between foreground (muscle aponeurosis) and background (no muscle aponeurosis), exist.
Parameters
- y_true : tf.Tensor
True positive image segmentation label predefined by the user. This is the mask that is provided prior to model training.
- y_pred : tf.Tensor
Predicted image segmentation by the network.
- alpha : float, default = 0.8
Coefficient used on positive examples, must be non-negative and non-zero.
- gamma : float, default = 2
Focusing parameter, must be non-negative and non-zero.
Returns
- f_loss : tf.Tensor
Tensor containing the calculated focal loss score.
Examples
>>> focal_loss(y_true=Tensor("IteratorGetNext:1", shape=(1, 512, 512, 1), dtype=float32), y_pred=Tensor("VGG16_U-Net/conv2d_8/Sigmoid:0", shape=(1, 512, 512, 1), dtype=float32), alpha=0.8, gamma=2)
Tensor("focal_loss/Mean:0", shape=(), dtype=float32)
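A NumPy sketch of the binary focal loss, following the standard formulation of Lin et al.; the package's exact TensorFlow implementation may differ in detail (e.g. clipping behaviour, which is assumed here):

```python
import numpy as np

def focal_loss(y_true, y_pred, alpha=0.8, gamma=2):
    """Binary focal loss: down-weights easy examples so training
    focuses on hard, misclassified pixels (useful under class imbalance)."""
    y_true = np.asarray(y_true, dtype=float)
    # clip predictions to avoid log(0) (assumed safeguard)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), 1e-7, 1 - 1e-7)
    # per-pixel binary cross-entropy
    bce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    # probability the model assigns to the true class
    p_t = y_true * y_pred + (1 - y_true) * (1 - y_pred)
    # modulating factor (1 - p_t)^gamma shrinks the loss of easy pixels
    return np.mean(alpha * (1 - p_t) ** gamma * bce)

# confident, mostly correct predictions give a loss close to zero
loss = focal_loss(np.array([1.0, 0.0]), np.array([0.9, 0.1]))
```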
- DL_Track_US.gui_helpers.model_training.loadImages(img_path: str, mask_path: str) list
Function to load images and manually labeled masks from a specified directory.
The images and masks are loaded, resized and normalized in order to be suitable and usable for model training. The specified directories must lead to the images and masks. The number of images and masks must be equal. The images and masks can be in any common image format. The names of the images and masks must match. The image and corresponding mask must have the same name.
Parameters
- img_path : str
Path that leads to the directory containing the training images. Images must be in RGB format.
- mask_path : str
Path that leads to the directory containing the mask images. Masks must be binary.
Returns
- train_imgs : np.ndarray
Resized, normalized training images stored in a numpy array.
- mask_imgs : np.ndarray
Resized, normalized training masks stored in a numpy array.
Notes
See the labelling instructions for correct mask creation and use, if needed, the supplied ImageJ script to label your images.
Example
>>> loadImages(img_path="C:/Users/admin/Dokuments/images", mask_path="C:/Users/admin/Dokuments/masks")
train_imgs([[[[0.22414216 0.19730392 0.22414216] ... [0.22414216 0.19730392 0.22414216]]])
mask_imgs([[[[0.] ... [0.]]])
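The resize-and-normalize step described above can be sketched with NumPy arrays standing in for loaded image files. The nearest-neighbour resize is a stand-in for whatever resize call the package uses, and the (512, 512) target size follows the input shape mentioned for the model:

```python
import numpy as np

def preprocess(img, size=(512, 512)):
    """Resize via nearest-neighbour index sampling (a stand-in for a
    library resize call) and normalize pixel values to [0, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(float) / 255.0       # uint8 [0, 255] -> float [0, 1]

img = np.full((600, 800, 3), 255, dtype=np.uint8)   # stand-in RGB image
mask = np.zeros((600, 800), dtype=np.uint8)         # stand-in binary mask
train_img, train_mask = preprocess(img), preprocess(mask)
print(train_img.shape, train_img.max())  # (512, 512, 3) 1.0
```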
- DL_Track_US.gui_helpers.model_training.trainModel(img_path: str, mask_path: str, out_path: str, batch_size: int, learning_rate: float, epochs: int, loss: str, gui) None
Function to train a convolutional neural network with VGG16 encoder and U-net decoder. All the steps necessary to properly train a neural network are included in this function.
This function builds upon all the other functions included in this module. Given that all input parameters are correctly specified, the images and masks are loaded and split into test and training sets, the model is compiled according to user specification and the model is trained.
Parameters
- img_path : str
Path that leads to the directory containing the training images. Images must be in RGB format.
- mask_path : str
Path that leads to the directory containing the mask images. Masks must be binary.
- out_path : str
Path that leads to the directory where the trained model is saved.
- batch_size : int
Integer value that determines the batch size per iteration through the network during model training. Although a larger batch size has advantages during model training, the images used here are large. Thus, the larger the batch size, the more compute power is needed or the longer the training duration. Must be non-negative and non-zero.
- learning_rate : float
Float value determining the learning rate used during model training. Must be non-negative and non-zero.
- epochs : int
Integer value that determines the number of epochs that the model is trained before training is aborted. The total number of epochs is only used if early stopping does not happen. Must be non-negative and non-zero.
- loss : {"BCE"}
String variable that determines the loss function used during training. So far, only one type is supported here: binary cross-entropy (loss == "BCE").
- gui : tk.TK
A tkinter.TK class instance that represents a GUI. By passing this argument, interaction with the GUI is possible, e.g., stopping the model training process.
Notes
For specific explanations of the included functions see the respective function docstrings in this module. This function can either be run from the command prompt or is called by the GUI. Note that the function was specifically designed to be called from the GUI. Thus, tk.messagebox will pop up when errors are raised even if the GUI is not started.
Examples
>>> trainModel(img_path="C:/Users/admin/Dokuments/images", mask_path="C:/Users/admin/Dokuments/masks", out_path="C:/Users/admin/Dokuments/results", batch_size=1, learning_rate=0.005, epochs=3, loss="BCE", gui=gui)
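The shuffle-and-split step mentioned above can be sketched with NumPy; the 10% test fraction and the fixed seed are assumptions for illustration, not values taken from the package:

```python
import numpy as np

def train_test_split(imgs, masks, test_fraction=0.1, seed=42):
    """Shuffle image/mask pairs jointly and split off a test set.

    Images and masks share one permutation so each mask stays
    paired with its image.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(imgs))
    n_test = int(len(imgs) * test_fraction)
    test, train = idx[:n_test], idx[n_test:]
    return imgs[train], masks[train], imgs[test], masks[test]

imgs = np.zeros((20, 512, 512, 3))   # stand-in loaded training images
masks = np.zeros((20, 512, 512, 1))  # stand-in binary masks
x_tr, y_tr, x_te, y_te = train_test_split(imgs, masks)
print(len(x_tr), len(x_te))  # 18 2
```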