Human visualization of brain tumor classification using a deep CNN: Xception + BiGRU
by Ashley Seong
Abstract – Brain tumors have become a global medical priority as the number of people suffering from this malignant disease continues to grow. In computer science, researchers have been working to use MRI scans to their fullest potential, applying convolutional neural networks to recognize signs of tumors early and to process large volumes of patient data at once, in hopes of saving lives. This investigation examines how MRI scans can be visualized and how convolutional filters and layers are used to identify lethal tumors in the brain. As one of our main methods, we used a pre-trained model, Xception, to improve accuracy; in contrast to previously existing models, fully connected layers were appended to the back of the existing network. Our main proposed model, Xception + Bidirectional GRU, achieved the highest accuracy, 82%, among the 7 models compared. In the proposed model, convolutional layers extract specific features of an image and process similar images in the same way. Using 3 repeated stages of convolution, activation, and max pooling, the network learned to distinguish patterns in the images and focus on the tumor regions themselves, producing visual representations of its attention. The principal contribution of this research is the ability to visualize abnormal features of brain scan images, filtering and layering regions to bring attention to tumors in the brain.
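The abstract names the architecture but not its exact wiring. The sketch below is one plausible Keras realization of an Xception + Bidirectional GRU classifier: the Xception feature map is reshaped so each spatial row becomes a timestep for the BiGRU, and fully connected layers are appended at the back as the abstract describes. The class count, input size, layer widths, and the row-as-timestep reshape are all assumptions, not details from the paper; the paper also uses pre-trained weights, whereas this sketch builds the network uninitialized (`weights=None`) so it runs offline.

```python
# Hedged sketch of an Xception + Bidirectional GRU classifier.
# Assumptions (not stated in the abstract): 4 tumor classes, 299x299 RGB
# input, GRU width 128, dense width 256, rows-of-the-feature-map as timesteps.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_xception_bigru(num_classes=4, input_shape=(299, 299, 3)):
    # Xception backbone without its classification head; the paper uses
    # pre-trained weights, but weights=None keeps this sketch offline.
    base = tf.keras.applications.Xception(
        include_top=False, weights=None, input_shape=input_shape)

    inputs = layers.Input(shape=input_shape)
    x = base(inputs)  # spatial feature map, e.g. (10, 10, 2048) for 299x299

    # Treat each spatial row as one timestep so the BiGRU can scan the
    # feature map in both directions.
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    x = layers.Reshape((h, w * c))(x)
    x = layers.Bidirectional(layers.GRU(128))(x)

    # Fully connected layers appended to the back of the existing network,
    # as described in the abstract.
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)
```

In use, the model would be compiled with a categorical cross-entropy loss and trained on labeled MRI scans; the reshape step is one common way to feed a 2-D CNN feature map into a recurrent layer, though the paper may have pooled or flattened the features differently.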