

        A Fusion of Residual Blocks and Stack Auto Encoder Features for Stomach Cancer Classification

Computers, Materials & Continua, December 2023

Abdul Haseeb, Muhammad Attique Khan, Majed Alhaisoni, Ghadah Aldehim, Leila Jamel, Usman Tariq, Taerang Kim and Jae-Hyuk Cha

1Department of Computer Science, HITEC University, Taxila, 47080, Pakistan

2Department of Computer Science and Mathematics, Lebanese American University, Beirut, 1100, Lebanon

3College of Computer Science and Engineering, University of Ha’il, Ha’il, 81451, Saudi Arabia

4Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, Saudi Arabia

5Department of Management Information Systems, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia

6Department of Computer Science, Hanyang University, Seoul, 04763, Korea

ABSTRACT Diagnosing gastrointestinal cancer by classical means is a hazardous procedure. Recent years have witnessed several computerized solutions for stomach disease detection and classification. However, the existing techniques face challenges such as irrelevant feature extraction, high similarity among different disease symptoms, and least-important features drawn from a single source. This paper designs a new deep learning-based architecture based on the fusion of two models, residual blocks and an Auto-Encoder. First, the Hyper-Kvasir dataset was employed to evaluate the proposed work. A pre-trained convolutional neural network (CNN) model was selected and improved with several residual blocks; this aims to improve the learning capability of deep models and lessen the number of parameters. In addition, an Auto-Encoder-based network was designed, consisting of five convolutional layers in the encoder stage and five in the decoder stage. The global average pooling and convolutional layers were selected for feature extraction, optimized by a hybrid Marine Predator optimization and Slime Mould optimization algorithm. The features of both models are fused using a novel fusion technique and later classified using an Artificial Neural Network classifier. The experiments were performed on the Hyper-Kvasir dataset, which consists of 23 stomach-infected classes. The proposed method obtained an improved accuracy of 93.90% on this dataset. A comparison with recent techniques shows that the proposed method's accuracy is improved.

KEYWORDS Gastrointestinal cancer; contrast enhancement; deep learning; information fusion; feature selection; machine learning

        1 Introduction

Gastrointestinal cancer, also known as digestive system cancer, refers to a group of cancers that occur in the digestive system or gastrointestinal tract, which includes the esophagus, stomach, small intestine, colon, rectum, liver, gallbladder, and pancreas [1,2]. These cancers develop when cells in the digestive system grow abnormally and uncontrollably, forming a tissue mass known as a tumor [3]. Depending on the type and stage of the disease, the symptoms of gastrointestinal cancer might include stomach discomfort, nausea, vomiting, changes in bowel habits, weight loss, and exhaustion [4]. According to the National Institutes of Health, one out of twelve cancer-related deaths is due to gastrointestinal cancer, and each year more than one million new cases of gastrointestinal cancer are diagnosed. Gastrointestinal tract cancer may be treated by surgery, chemotherapy, radiation therapy, or a combination of these. Detection and treatment at an early stage can enhance survival chances and minimize the risk of complications [5]. Despite a gradual decrease in gastric cancer incidence and mortality rates over the past 50 years, it remains the second most frequent cause of cancer-related deaths globally. However, from 2018 to 2020, both colorectal and stomach cancer showed an upward trend in their rates [6]. Global Cancer Statistics show that 26.3 percent of total cancer cases are gastrointestinal cancers, whereas the mortality rate is 35.4 percent among all cancers [7].


Identifying and categorizing gastrointestinal disorders subjectively is time-consuming and difficult, requiring much clinical knowledge and skill [8]. Yet the development of effective computer-aided diagnosis (CAD) technologies that can identify and categorize numerous gastrointestinal disorders in a fully automated manner might reduce these diagnostic obstacles to a great extent [9]. Computer-aided diagnosis technologies can be of great value by aiding medical personnel in making accurate diagnoses and identifying appropriate therapies for serious medical diseases in their early stages [10,11]. Over the past few years, the performance of diagnostic artificial intelligence (AI) computer-aided diagnosis tools in various medical fields has been significantly improved with the use of deep learning algorithms, particularly artificial neural networks (ANNs) [12]. Generally, these ANNs are trained using optimization algorithms such as stochastic gradient descent [13] to achieve the most accurate representation of the training dataset.

Deep learning (DL) is a statistical approach that enables computers to automatically detect features from raw inputs such as structured information, images, text, and audio [14,15]. Many areas of clinical practice have been profoundly influenced by the significant advances made in AI based on DL [16,17]. Computer-aided diagnosis systems are frameworks that use computational support to detect disease. CAD systems in gastroenterology increasingly use artificial intelligence (AI) to improve the identification and characterization of abnormalities during endoscopy [18]. The CNN, a neural network inspired by the visual cortex of living organisms, uses convolutional layers with shared two-dimensional weight sets. This enables the algorithm to recognize spatial information and employ layer pooling to filter out less significant information, eventually conveying the most pertinent and focused elements [19]. However, these classifiers face a challenge in interpretability because they are often seen as “black boxes” that deliver accurate outcomes without explaining them [20]. Despite technological developments, image classification for lesions of the gastrointestinal system remains difficult due to a lack of databases containing sufficient images to build models. In addition, the quality of accessible images has impeded the application of CNN models [21].

        1.1 Major Challenges

In this work, Artificial Neural Networks (ANN) and Deep Neural Networks (DNN) extract the features of images from the Hyper-Kvasir dataset. The dataset contains twenty-three gastrointestinal tract classes with images in each class. However, some classes have only a few images, creating a class-imbalance problem. Data augmentation techniques are used for classes with fewer images to address this issue. Furthermore, feature selection techniques are applied to obtain the best features among the feature sets.

        1.2 Major Contributions

Overall, researchers have steadily improved categorization of the Hyper-Kvasir dataset, yet a significant gap in the subject matter remains. A hybrid strategy incorporating deep learning and machine learning methodologies is therefore needed: automated deep feature extraction, combined with machine learning approaches for identifying the key characteristics, can help increase classification accuracy.

The major contributions of the proposed method are described as follows:

– A new CNN architecture is designed based on the pretrained Nasnetmobile. Several residual blocks have been added to increase the learning capability and reduce the number of parameters.

– A stacked Auto-Encoder-Decoder network is designed that consists of five convolutional layers in the encoder phase and five in the decoder phase.

– The extracted features have been optimized using improved Marine Predator optimization and Slime Mould optimization algorithms.

– A new parallel fusion technique is proposed to combine the important information of both deep learning models.


– A detailed experimental process in terms of accuracy, confusion matrix, and t-test-based analysis has been conducted to show the significance of the proposed framework.

The rest of the manuscript is structured as follows: Section 2 describes the significant related work relevant to the study. Section 3 outlines the methodology utilized in the research, including the tools, methods, and resources employed. Section 4 discusses the findings of the study. Section 5 provides the conclusions of the research.

        2 Related Work

Gastrointestinal tract classification is a hot topic in research, and in recent years researchers have achieved important milestones in this domain [22]. Borgli et al. introduced the Hyper-Kvasir dataset, a large collection of gastrointestinal endoscopy images and videos from Baerum Hospital in Norway. The labeled images in this dataset can be used to train neural networks for classification purposes. The authors conducted experiments to train and evaluate classification models using two commonly used families of neural networks, ResNet and DenseNet, for the image classification problem. The labeled data in the Hyper-Kvasir dataset consist of twenty-three classes of gastrointestinal disorders. While the authors achieved the best results by combining ResNet-152 and DenseNet-161, the overall performance was still unsatisfactory due to imbalanced development sets [23]. Igarashi et al. employed the AlexNet architecture to classify more than 85,000 input images from Hirosaki University Hospital.

Moreover, the input images were categorized into 14 groups based on pattern classification of significant anatomical organs with manual classification. To train the model, the researchers used 49,174 images from patients with gastric cancer who had undergone upper gastrointestinal tract endoscopy, while the remaining 36,000 images were employed to test the model's performance. The outcome indicated an impressive overall accuracy of 96.5%, suggesting its potential usefulness in routine endoscopy image classification [24]. Gómez-Zuleta [25] developed a deep learning (DL) methodology to detect polyps in colonoscopy procedures automatically. For this task, three models were used, namely Inception-v3, ResNet-50, and VGG-16. Knowledge transfer through transfer learning was adopted for classification, and the resultant weights were used to commence a fresh training process utilizing the fine-tuning technique on colonoscopy images. The training data consisted of a combined dataset of five databases comprising more than 23,000 images with polyps and more than 47,000 images without polyps for validation. The data was split into a 70/30 ratio for training and testing purposes. Different metrics, such as accuracy, F1 score, and the receiver operating characteristic (ROC) curve, were employed to evaluate the performance. The pretrained Inception-v3, VGG-16, and ResNet-50 models achieved accuracy rates of 81%, 73%, and 77%, respectively. The authors described that pretrained network models demonstrated an effective generalization ability towards the high irregularity of endoscopy videos, and their methodology may potentially serve as a valuable tool in the future [25]. The authors of [26] employed three networks to classify medical images from the Kvasir database. They began with a preprocessing step to eliminate noise and improve image quality. They then utilized data augmentation methods to improve the network's training and a dropout method to prevent overfitting, though they acknowledged that this technique doubled the training time. The researchers also implemented the Adam optimizer to minimize the loss, and transfer learning and fine-tuning techniques were applied. The resulting models were used to categorize 5,000 images into five distinct categories, with eighty percent of the database allocated for training and twenty percent for validation. The accuracy rates achieved by the models were 96.7% for GoogLeNet, 95% for ResNet-50, and 97% for AlexNet [26].

The Kvasir-Capsule dataset, presented in [27], includes 117 videos captured using video capsule endoscopy (VCE). The dataset comprises fourteen different categories of images and a total of more than 47,000 labeled images. VCE technology involves a small capsule with a camera, battery, and other components. To validate the labeled dataset, two convolutional neural networks (CNNs), DenseNet-161 and ResNet-152, were used for training. The study utilized a cross-validation technique with a cross-entropy-based loss to validate the models. The technique was implemented both with and without class weights, and weight-based sampling was used to balance the dataset by removing or adding images for every class. After evaluating the models, the best results were obtained by averaging the outcomes of both CNNs. The resulting accuracy rates were 73.66% for the micro average and 29.94% for the macro average.

In addition to the classification pipeline, this work also proposes a fusion-based contrast enhancement technique, called Duo-contrast, based on the mathematical formulation of local and global information-enhanced filters.

        3 Proposed Methodology

The dataset used in this manuscript is highly imbalanced, as some classes have few images. To resolve this problem, data augmentation techniques are adopted. Nasnetmobile and Stacked Auto-Encoders are used as feature extractors. Furthermore, the extracted feature vectors eV1 from Nasnetmobile and eV2 from the Stacked Auto-Encoder are reduced by applying feature optimization techniques. eV1 is fed to the Marine Predator Algorithm (MPA) [28], while eV2 is given as input to the Slime Mould Algorithm (SMA) [29], to extract the selected feature vectors S(eV1) and S(eV2), respectively. The selected feature vectors S(eV1) and S(eV2) are then fused. Finally, artificial neural networks are used as classifiers. Fig. 1 shows the proposed methodology used in this paper.

Figure 1: Proposed methodology of stomach cancer classification and polyp detection

        3.1 Dataset Description

The Hyper-Kvasir dataset used in this study is a public dataset collected from Baerum Hospital in Norway [23]. The dataset contains 10,662 gastrointestinal endoscopy images categorized into 23 classes. Among the twenty-three classes, sixteen belong to the lower gastrointestinal area, while seven are related to the upper gastrointestinal segment. Table 1 illustrates the class-imbalance problem, as some of the classes have very few images. To mitigate the issue, data augmentation techniques are applied. Fig. 2 shows sample images for each class.

Table 1: Classes of the Hyper-Kvasir dataset and the number of images in each class

Figure 2: Sample images of each class of the Hyper-Kvasir dataset

        3.2 Proposed Contrast Enhancement

Data is augmented by applying different image enhancement techniques to the whole Hyper-Kvasir dataset, as these techniques change spatial properties but do not affect the image orientation. Brightness Preserving Histogram Equalization (BPHE) [30] and Dualistic Sub-Image Histogram Equalization (DSIHE) [31] are used in preprocessing.

BPHE is a method employed in image processing to enhance an image's visual quality by improving its contrast. This approach involves adjusting the distribution of intensity levels to generate a more uniform histogram. Unlike conventional histogram equalization techniques, brightness-preserving histogram equalization considers both bright and dark regions in an image. It independently adjusts the histograms of each region to retain the details in both bright and dark areas while enhancing the overall contrast. This technique is particularly useful in applications such as medical imaging, where preserving the details in both bright and dark regions is crucial. The input image is divided into two subparts; the first consists of pixels with low intensity values, while the second consists of pixels with high intensity values. Mathematically, it is denoted as:


Moreover, a probability density function for both subparts is derived as:

The transform function for the subparts is as follows:

The best performance obtained using the fused features is shown in Table 6. The fused features are given to the ANN classifiers and the results are analyzed. The analysis shows that the highest accuracy of 93.60% is achieved by the Wide Neural Network (WNN). Again, the time cost is highest for WNN and lowest for the Narrow Neural Network; however, the Narrow Neural Network also obtains the lowest accuracy. The confusion matrix for WNN is depicted in Fig. 8.

where Wlast is the weight matrix connecting the last hidden layer to the output layer, and blast is the bias vector for the output layer. The stacked autoencoder is trained by minimizing the reconstruction error between the input and the output. A feature vector named Feat_AEvec, consisting of 1024 features, is extracted through the Stacked Auto-Encoder.

In the above equation, ImgBPHE is the brightness-preserved histogram-equalized image.
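The BPHE procedure described above can be sketched as follows. This is a minimal illustrative implementation in NumPy (the paper's experiments were run in MATLAB), assuming 8-bit grey-scale input and a mean-based split into low- and high-intensity subparts; the paper's exact formulation may differ in detail.

```python
import numpy as np

def bphe(img):
    """Brightness-Preserving Bi-Histogram Equalization (sketch).

    Splits the image at its mean grey level and equalizes the two
    sub-histograms independently, each mapped back into its own range,
    so the overall mean brightness is approximately preserved.
    """
    img = np.asarray(img, dtype=np.uint8)
    mean = int(img.mean())            # threshold separating the two subparts

    out = np.empty_like(img)
    for lo, hi, mask in ((0, mean, img <= mean),
                         (mean + 1, 255, img > mean)):
        vals = img[mask]
        if vals.size == 0:
            continue
        # probability density and cumulative distribution of the subpart
        hist = np.bincount(vals, minlength=256).astype(float)
        cdf = np.cumsum(hist) / vals.size
        # transform maps the subpart onto its own range [lo, hi]
        mapping = lo + (hi - lo) * cdf
        out[mask] = np.clip(mapping[vals], 0, 255).astype(np.uint8)
    return out
```

Because each subpart is stretched only within its own intensity range, bright pixels stay bright and dark pixels stay dark, which is what distinguishes BPHE from plain global histogram equalization.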


DSIHE is an image enhancement approach that increases an image's contrast by separating it into two sub-images based on a threshold value and then applying histogram equalization independently to each sub-image. The significance of DSIHE resides in its capacity to improve the contrast of images containing both dark and light areas. Classical histogram equalization enhances contrast globally, which can result in over-enhancement of bright parts and under-enhancement of dark regions in an image. DSIHE tackles this issue by separating the picture into two sub-images based on a threshold value that distinguishes between the light and dark regions. Afterward, histogram equalization is applied separately to each sub-image, which helps to balance the contrast enhancement across the two regions. The DSIHE technique has been demonstrated to enhance the visual quality of medical images, and it is a simple, computationally efficient strategy to implement in image processing systems.

Let MInp be an input image to which DSIHE is applied, and let the grey level of that image be MInp = Mgrey. The sub-images are denoted by MS1 and MS2. The centre pixel index is denoted by Cpx.


        Aggregation of the grey-level original image is as follows:

        The aggregated PDF for the grey levels of the original image will be:

For both sub-images, the transformation function is given by:

        The output image is mathematically denoted by:
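The DSIHE steps above can be sketched in the same style as the BPHE example. The key difference, reflected in this illustrative NumPy sketch, is that the split threshold is chosen where the cumulative distribution reaches 0.5, so the two sub-images contain roughly equal numbers of pixels; this is a simplified reading of the method, not the authors' exact code.

```python
import numpy as np

def dsihe(img):
    """Dualistic Sub-Image Histogram Equalization (sketch).

    Picks the threshold at the grey level where the cumulative
    distribution reaches 0.5 (equal-area split), then equalizes each
    sub-image independently within its own intensity range.
    """
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    cdf = np.cumsum(hist) / img.size
    thr = int(np.searchsorted(cdf, 0.5))      # equal-area threshold

    out = np.empty_like(img)
    for lo, hi, mask in ((0, thr, img <= thr), (thr + 1, 255, img > thr)):
        vals = img[mask]
        if vals.size == 0:
            continue
        sub_cdf = np.cumsum(np.bincount(vals, minlength=256)) / vals.size
        out[mask] = np.clip(lo + (hi - lo) * sub_cdf[vals], 0, 255).astype(np.uint8)
    return out
```

Choosing the median grey level rather than the mean is what gives DSIHE its balanced treatment of dark and light regions when the histogram is heavily skewed.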

3.3 Novelty: Designed CNN Model

Feature extraction is the process of extracting a subset of relevant features from raw data that are useful for solving a particular machine-learning task [32]. In deep learning, feature extraction involves taking a raw input, such as an image or audio signal, and automatically extracting relevant features or patterns using a series of mathematical transformations. Deep learning relies on feature extraction to help the network concentrate on the essential data and simplify the input, making it simpler to train and more accurate. In some cases, feature extraction can also help to reduce overfitting and improve generalization performance. In many deep learning applications, the network performs feature extraction automatically, typically using convolutional layers for image processing or recurrent layers for natural language processing. However, in some cases, manual feature extraction may be necessary, particularly when working with smaller datasets or trying to achieve high levels of accuracy on a specific task. In this study, two feature extractors are used: the Stacked Auto-Encoder and Nasnetmobile.

CNNs have become a popular tool in the field of medical image processing. A neural network can be classified as a CNN if it contains at least one layer that performs convolution operations. During a convolution operation, a filter with multiple parameters of a specific size is applied to an input image using a sliding window approach. The resulting image is then passed on to the next layer for further processing. This operation can be represented mathematically as follows:

Above, Mout is the output matrix with Horzout rows and Vertout columns. Furthermore, the rectified linear unit function is applied to set negative feature values to zero, which can be represented by the equation below:

Furthermore, a pooling operation reduces the computational complexity and improves the processing time. This operation extracts the maximum or average value from a specific region of the input and replaces that region with it. A fully connected layer then flattens the features to produce a one-dimensional vector. Mathematically, this can be represented as:
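The three building blocks just described (sliding-window convolution, ReLU, and pooling) can be illustrated with a minimal NumPy sketch. The function names are illustrative; the paper's actual models are built with full deep learning frameworks, and this loop-based convolution is for clarity, not efficiency.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation form) via a sliding window."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: negative responses are clipped to zero."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the strongest response per region."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

Chaining `max_pool(relu(conv2d(img, k)))` reproduces, in miniature, the conv-activation-pool pattern that a CNN layer stack applies repeatedly.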

        3.3.1 Stacked Auto-Encoder

A type of neural network known as a stacked autoencoder utilizes unsupervised learning to develop a condensed representation of input data. The architecture consists of multiple layers, each learning a compressed representation, called a “hidden layer”, of the input data. The output of one layer is used as input for the subsequent layer, and the final output layer generates the reconstructed data. To create a deeper architecture capable of learning more complex and abstract representations, hidden layers are added to the network. During training, the difference between the input and the reconstructed output, known as the reconstruction error, is minimized using backpropagation to adjust the neural network's weights [33]. Stacked autoencoders are used in various applications, including speech and image recognition, anomaly detection, and data compression.

Let Xinp be the input data and Yout be the reconstructed data. Let the stacked autoencoder have Llast layers, with the hidden layers denoted as h_1, h_2, ..., h_(Llast-1) and the output layer denoted as h_Llast. Each layer of the stacked autoencoder can be represented by a transformation function ftrans that maps the input to the output; the transformation function for the l-th layer is denoted ftrans_l. The input data is fed into the first layer, which learns a compressed representation of the input. The output of the first layer is then passed as input to the next layer, which learns a compressed representation of the output of the first layer. This process continues until the final layer produces the reconstructed data Yout. The compressed representation learned by each hidden layer can be represented as follows:

where h_k is the output of the k-th hidden layer, W_k is the weight matrix connecting the input to the k-th hidden layer, and b_k is the bias vector for the k-th hidden layer. The reconstructed output Yout can be calculated by passing the compressed representation of the input through the decoder network, which is essentially the reverse of the encoder network:
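The encoder/decoder recursion above can be sketched as a forward pass. This is a simplified illustration: the decoder mirrors the encoder with tied (transposed) weights, which is a common convention but an assumption not stated in the paper, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class StackedAutoencoder:
    """Forward pass of a stacked autoencoder (sketch).

    Each hidden layer computes h_k = sigma(W_k h_{k-1} + b_k); the
    decoder mirrors the encoder with tied (transposed) weights, and the
    bottleneck activation serves as the extracted feature vector.
    """
    def __init__(self, sizes):
        # sizes, e.g. [784, 256, 64]: 784-dim input, 64-dim feature code
        self.W = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes, sizes[1:])]
        self.b_enc = [np.zeros(m) for m in sizes[1:]]
        self.b_dec = [np.zeros(n) for n in sizes[:-1]]

    def encode(self, x):
        for W, b in zip(self.W, self.b_enc):
            x = sigmoid(W @ x + b)
        return x                      # feature vector (cf. Feat_AEvec)

    def reconstruct(self, x):
        code = self.encode(x)
        for W, b in zip(reversed(self.W), reversed(self.b_dec)):
            code = sigmoid(W.T @ code + b)
        return code

    def reconstruction_error(self, x):
        # the quantity minimized by backpropagation during training
        return float(np.mean((x - self.reconstruct(x)) ** 2))
```

Training would repeatedly lower `reconstruction_error` by gradient descent; after training, `encode` alone is kept as the feature extractor.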


        3.3.2 Feature Extraction Using Proposed CNN

The weight of the slime mould is calculated mathematically as:


Figure 3: Generalization through the transfer learning technique

3.4 Novelty: Proposed Features Selection

Feature selection is the operation of identifying a subset of appropriate features from a dataset's larger set of features [35]. Feature selection improves model performance and data interpretation and reduces computational resources. Two feature selection algorithms are used to tackle the curse of dimensionality. The Slime Mould Algorithm (SMA) is used to select the important-feature vector S(Feat_AEvec) from Feat_AEvec, extracted through the Stacked Auto-Encoder, while the Marine Predator Algorithm (MPA) is used to extract the selected-feature vector S(Feat_NNMobilevec) from Feat_NNMobilevec, obtained through Nasnetmobile. S(Feat_AEvec) consists of 535 features, whereas S(Feat_NNMobilevec) has 366 features.

The Slime Mould Algorithm is a nature-inspired feature selection technique centered on slime mould behaviour. The method employs a system of artificial particles that interact with one another to identify the optimal solution. SMA approaches food according to the strength of the odour the food source spreads. The following equations describe the behaviour of the slime mould:


Nasnetmobile is a pretrained neural network model [34] adapted here using transfer learning. Transfer learning is a method that transfers the knowledge learned by a pretrained model to a new task. Nasnetmobile has been trained on the ImageNet dataset; to adapt it for a new task, the transfer learning principles shown in Fig. 3 are used to refine the model. However, since the pretrained model has been trained on a different set of classes, it is not directly applicable to a medical image classification task, so the network needs to be trained on the Hyper-Kvasir dataset. To train the network, the augmented Hyper-Kvasir dataset is divided into 70% training and 30% testing images. Furthermore, the classification layer, softmax layer, and the last fully connected layer of the Nasnetmobile model are replaced with new layers called “new_classification,” “new_softmax,” and “new_Prediction,” respectively. This allows the model to learn to classify medical images using the features extracted from the original pretrained model. Features are then extracted through the trained network, yielding a deep feature vector Feat_NNMobilevec containing 1056 features. The layer used for feature extraction is “global_average_pooling2d_1”.
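The head-replacement idea described above (freeze the pretrained feature extractor, train only a new classification head) can be sketched in miniature. In the paper this is done in MATLAB with Nasnetmobile; the sketch below substitutes a hypothetical random frozen projection as a stand-in backbone, producing a 1056-dimensional feature vector like the “global_average_pooling2d_1” layer, and trains a fresh softmax head on top. Everything here is illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen pretrained backbone: a fixed feature map phi(x).
# (Hypothetical random projection; in the paper this role is played by
# NASNetMobile's global-average-pooling output of 1056 features.)
W_frozen = rng.normal(size=(1056, 64))

def backbone_features(x):
    return np.tanh(x @ W_frozen)          # frozen weights: never updated

def train_new_head(X, y, n_classes, lr=0.5, epochs=300):
    """Train only the replacement classification head (softmax regression)."""
    F = backbone_features(X)
    W = np.zeros((F.shape[1], n_classes))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        logits = F @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y)), y] -= 1                  # dCE/dlogits
        W -= lr * F.T @ p / len(y)
        b -= lr * p.mean(axis=0)
    return W, b
```

Because only `W` and `b` are updated while `W_frozen` stays fixed, the pretrained representation is reused and the new task needs far fewer labeled images, which is exactly why transfer learning suits the class-imbalanced Hyper-Kvasir setting.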


q is a random number in the range zero to one. b_d is the best fitness in the current iteration, whereas ω_d is the worst fitness in the current iteration. The position update is derived as:

The Marine Predator Algorithm (MPA) is a metaheuristic optimization algorithm based on the foraging strategies of aquatic predators. MPA replicates the searching and preying behavior of deep-sea predatory animals such as sharks, orcas, and other ocean animals. Like most metaheuristic algorithms, MPA is a population-based approach in which the initial solutions are distributed uniformly over the search space. Mathematically, this is denoted by:

Here, Amin is the lower bound, whereas Amax is the upper bound of the variables, and Rand stands for a randomly chosen vector ranging from zero to one. Based on the notion of survival of the fittest, it is assumed that the most efficient hunters in nature are the strongest predators. As a result, the top predator is used to generate an elite matrix. These elite matrices are meant to detect and track prey by leveraging their location data. Each element in the elite matrix denotes a predator in a position to search for food. The second matrix is called the prey matrix, where each element represents the prey looking for food. Both matrices have r × c dimensions, where r is the number of searching agents and c represents the number of dimensions. At each iteration, the fittest predator substitutes the previous fittest predator.

There are three phases in MPA. Phase one applies when the prey is moving faster than the predator, with a high velocity ratio (v ≥ 10). In this scenario, the best strategy is for the predators to stop updating their positions. Mathematically, it can be represented as:

Phase two, the unit velocity ratio, applies when the prey and the predator move at the same velocity (v ≈ 1). In this phase, the prey is in exploitation mode with Lévy motion, while the predator is in exploration mode with Brownian motion. For half of the population, this can be denoted by:

In phase three, the prey has a low velocity compared to the predator's, with a low velocity ratio (v = 0.1). In this scenario, the best motion for the predator is Lévy motion, as shown in Eq. (46).

The changes in marine predators' behaviour are driven by environmental effects incorporated in the algorithm, such as eddy formation and the Fish Aggregating Device (FAD) effect. These two effects are denoted by:

Here, FADs = 0.20 represents the likelihood of the FADs' influence in the optimization procedure. A binary vector U is created by randomly generating a vector in the interval [0, 1] and replacing its elements with zero if they are less than 0.2 and with one if they are greater than 0.2. The subscript r denotes a uniformly random number in the interval [0, 1]. The vectors Amin and Amax contain the minimum and maximum bounds of the dimensions. u_rand1 and u_rand2 denote random indices of the prey matrix.
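The pieces of MPA described above — uniform initialization over the search space, Lévy-distributed steps, and the FADs jump — can be sketched as follows. This is a simplified reading of the standard MPA components, not the authors' exact implementation; the fitness evaluation and elite-matrix bookkeeping are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def initialize_population(n_agents, dim, a_min, a_max):
    """X0 = A_min + rand * (A_max - A_min): uniform start over the search area."""
    return a_min + rng.random((n_agents, dim)) * (a_max - a_min)

def levy_step(shape, beta=1.5):
    """Mantegna's method for Levy-distributed step lengths (heavy-tailed)."""
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, shape)
    v = rng.normal(0, 1, shape)
    return u / np.abs(v) ** (1 / beta)

def fads_effect(prey, a_min, a_max, fads=0.2):
    """Fish Aggregating Device jump: with probability FADs, agents take a
    long random relocation masked by a binary vector U; otherwise they move
    along the difference of two randomly chosen prey positions."""
    n, d = prey.shape
    if rng.random() < fads:
        u = (rng.random((n, d)) < fads)           # binary mask vector U
        return prey + fads * (a_min + rng.random((n, d)) * (a_max - a_min)) * u
    r = rng.random()
    i1, i2 = rng.integers(0, n, n), rng.integers(0, n, n)
    return prey + (fads * (1 - r) + r) * (prey[i1] - prey[i2])
```

The FADs jump is what lets the population escape local optima late in the run, which matters when the selected feature subset gets stuck on a locally good but globally suboptimal combination.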

        3.5 Novelty:Proposed Feature Fusion

        The significance of feature fusion resides in its capacity to extract more meaningful information from multiple sources, which can improve accuracy in classification, identification, and prediction [36]. By merging complementary information from several sources, feature fusion can increase the resilience and reliability of machine learning systems, especially when data are scarce or noisy. As stated before, two feature vectors, S(Feat_AEvec) and S(Feat_NNMobilevec), are retrieved from the two networks used in this process; hence, it is important to merge both vectors into a larger, more informative feature vector. A correlation-extended serial technique is utilized to combine both vectors, which can be mathematically represented as follows:

        With this procedure, the features with a positive correlation (+1) are placed into a new vector labeled Vec3, and the features with a correlation value of 0 or −1 are added to Vec4. Then, the mean value of Vec4 is calculated as follows:

        Both vectors Vecupd and Vec4 are fused using the following formulation:


        The final fused vector VecFused has 901 features.
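        The fusion procedure above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the per-feature correlation proxy (the sign of the element-wise product), the mean-threshold rule for Vecupd, and the trim to 901 features are our reading of the description, and the paper's exact formulation may differ.

        ```python
        import numpy as np

        def correlation_serial_fusion(v1, v2, target_len=901):
            """Hedged sketch of the correlation-extended serial fusion:
            positively correlated feature pairs go to Vec3; the rest go to
            Vec4, whose entries above the mean form Vecupd; Vec3 and Vecupd
            are then concatenated serially and trimmed to target_len."""
            n = min(v1.size, v2.size)
            corr_sign = np.sign(v1[:n] * v2[:n])          # per-feature correlation proxy
            vec3 = v1[:n][corr_sign > 0]                  # positively correlated features
            vec4 = np.concatenate([v1[:n][corr_sign <= 0],
                                   v2[:n][corr_sign <= 0]])
            # keep Vec4 entries above its mean (mean-threshold update)
            vec_upd = vec4[vec4 > vec4.mean()] if vec4.size else vec4
            fused = np.concatenate([vec3, vec_upd])       # serial fusion
            return fused[:target_len]                     # cap at the target length

        # toy usage with two random 1024-dimensional feature vectors
        rng = np.random.default_rng(0)
        fused = correlation_serial_fusion(rng.normal(size=1024), rng.normal(size=1024))
        ```

        With the paper's selected vectors (366 Nasnetmobile features and 535 auto-encoder features), a procedure of this shape yields the reported 901-dimensional fused vector.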

        4 Results and Discussion

        The Hyper-Kvasir dataset is used for the results and analysis. The dataset contains 10,662 images categorized into twenty-three classes. The data are highly imbalanced, so they are augmented to address this issue. The augmented dataset contains 24,000 training images, while 520 images are held out for testing. The implementation uses a system with a quad-core Intel Core i7 processor, 16 GB of RAM, and a graphics card with 4 GB of VRAM. MATLAB R2021a is used to obtain the results.

        4.1 Numerical Results

        Results are shown in tabular and graphical form. Table 2 presents the results for the features extracted through Nasnetmobile and given as input to the classifiers. The analysis shows that the Wide Neural Network (WNN) gives the best overall accuracy of 93.90 percent, while the Narrow, Bilayered, and Trilayered Neural Networks have the lowest accuracy of 93.10 percent. The time taken by WNN is also the highest among all classifiers, while the Narrow Neural Network has the lowest time cost. The confusion matrix for WNN is shown in Fig. 4.

        Figure 4:Confusion matrix for WNN using Nasnetmobile features

        Similarly, Table 3 shows the results obtained by feeding the features extracted by the Stacked Auto-Encoder to the classifiers. The analysis shows that WNN performs best with 80.50 percent accuracy, although its time cost is again the highest, with the Narrow Neural Network being the fastest. Moreover, the lowest accuracy is also achieved by the Narrow Neural Network. The confusion matrix for WNN is shown in Fig. 5.

        Table 3:Performance of ANN classifiers using autoencoder features(1024 features)

        Figure 5:Confusion matrix for WNN using auto-encoder features

        Feature selection reduces the feature vector extracted through Nasnetmobile. Table 4 shows the results for the features selected using the Marine Predator Algorithm (MPA) and given to the classifiers. The analysis shows that WNN has the highest accuracy, 93.40 percent, along with the highest time cost. The lowest accuracy is obtained by the Trilayered Neural Network, while the Narrow Neural Network has the best time cost among all classifiers. The confusion matrix for WNN is shown in Fig. 6.

        Table 4:Performance of ANN classifiers using selected Nasnetmobile features(366 features)

        Table 5 shows the results achieved using the features selected from the Stacked Auto-Encoder output by the Slime Mould Algorithm. WNN performs best with an accuracy of 78.40 percent. The time cost is highest for WNN and lowest for the Narrow Neural Network. In addition, the Trilayered Neural Network gives the lowest accuracy. The confusion matrix for WNN is shown in Fig. 7.


        Table 5:Performance of ANN classifier autoencoder selected features(535 features)

        Figure 6:Confusion matrix for WNN using Nasnetmobile selected features

        Figure 7:Confusion matrix for WNN using auto-encoder selected features

        Finally, Table 6 presents the classification results obtained using the fused feature vector (901 features), where WNN again achieves the best accuracy of 93.80 percent. The confusion matrix for WNN is shown in Fig. 8.

        Table 6:Performance of ANN classifiers using fused features(901 features)

        Figure 8:Confusion matrix for WNN using fused features

        4.2 Graphical Results

        This section shows the graphical representation of the results. Fig. 9 shows the accuracy bar chart for all classifiers using the proposed fusion approach. In this figure, each classifier's accuracy is plotted in a different color, and the Wide Neural Network shows the best accuracy of 93.8%, higher than the other classifiers. Fig. 10 shows the bar chart of the time cost for all classifiers after the final step of the proposed approach. The Wide Neural Network (WNN) consumed the most time, 772.93 s, whereas the Trilayered Neural Network took the minimum time of 372.53 s. Based on Figs. 9 and 10, it is observed that the Wide Neural Network gives better accuracy but consumes more time due to additional hidden layers. Fig. 11 shows the time-based comparison of the proposed method. This figure shows that the time is significantly reduced after the feature selection step; however, a small increase occurs when the fusion step is performed. Overall, the reduction of features lowers the computational time, which is a strength of this work.

        Figure 9:Accuracy bar for all selected classifiers using the proposed method

        Figure 10:Time bar for classifiers used in the proposed methodology

        Figure 11:Overall time-based comparison among all classifiers using the proposed method

        A detailed comparison is also conducted among all classifiers across the intermediate steps of the proposed method. Fig. 12 gives an inside view of this comparison. It shows that the original accuracy of the fine-tuned NasNet Mobile model is better, reaching a maximum of 93.9%; however, this experiment consumes more time. After the selection process, the accuracy is slightly reduced, but the time drops significantly. After the fusion process, the difference in the classification accuracy of the Wide Neural Network is just 0.1%, which is almost the same, while the time is significantly reduced, which is a strength of this work.

        Figure 12:Accuracy comparison of all classifiers using all middle steps of the proposed method

        LIME-based Visualization: Local Interpretable Model-Agnostic Explanations (LIME) [37] is a well-known technique for explainable artificial intelligence (XAI). It is a model-independent technique that can explain the predictions of any machine learning algorithm, including sophisticated models such as deep neural networks. LIME produces locally interpretable models that approximate the predictions of the original machine learning model in a limited part of the input space. These local models are simpler and easier to comprehend than the original model and can be used to explain individual predictions. The LIME approach generates a large number of perturbed versions of the input data and trains a local model on each perturbed version. The local models are trained to predict the output of the original model for each perturbed version and are then weighted according to their performance and resemblance to the original input. The final explanation offered by LIME combines the weights of the local models and the most significant characteristics of each. The explanation can be presented to the user as a heatmap or other visualization, as shown in Fig. 13, indicating which characteristics of the input data were most influential in the prediction.
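        The perturb-weight-fit idea behind LIME can be sketched in a few lines. The following is a minimal, self-contained illustration of the principle rather than the lime library used for Fig. 13: the Gaussian perturbation scheme, proximity kernel, and weighted linear surrogate are our illustrative choices.

        ```python
        import numpy as np

        def lime_explain(f, x, n_samples=500, sigma=0.5, seed=0):
            """LIME-style local explanation of a black-box scorer f at point x.
            Perturbs x with Gaussian noise, weights samples by proximity to x,
            and fits a weighted linear surrogate; its coefficients rank the
            local importance of each input feature."""
            rng = np.random.default_rng(seed)
            X = x + rng.normal(scale=sigma, size=(n_samples, x.size))  # perturbed inputs
            y = np.array([f(z) for z in X])                            # black-box outputs
            d = np.linalg.norm(X - x, axis=1)
            w = np.exp(-(d ** 2) / (2 * sigma ** 2 * x.size))          # proximity kernel
            A = np.hstack([X, np.ones((n_samples, 1))])                # linear model + bias
            sw = np.sqrt(w)
            coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
            return coef[:-1]                                           # per-feature importance

        # toy black box: only the first two of five features matter
        imp = lime_explain(lambda z: 3 * z[0] - 2 * z[1], np.zeros(5))
        ```

        For the toy linear black box, the surrogate recovers the true local weights (about +3 and −2 on the first two features, near zero elsewhere); for images, the same idea is applied over superpixels rather than raw pixels.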

        Fig. 14 shows the results of the fine-tuned Nasnetmobile deep model employed for infected-region segmentation. The segmentation process uses the polyp images with their corresponding ground-truth images. The fine-tuned model is trained with static hyperparameters on the original and ground-truth images. After that, testing is performed and a few images are visualized in binary form, as presented in Fig. 14. For the segmentation process, the weights of the second convolutional layer are plotted and then converted into binary form.
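        The conversion from a convolutional layer's response to a binary mask can be sketched as below. This is an assumed, simplified version of the visualization step: the channel-averaging, min-max normalization, and mean threshold are our illustrative choices, not the paper's exact procedure.

        ```python
        import numpy as np

        def layer_response_to_mask(act):
            """Hedged sketch: collapse a layer's H x W x C response to a heatmap
            by channel averaging, normalize it to [0, 1], and threshold at the
            mean to obtain a binary infected-region mask."""
            heat = act.mean(axis=-1)                           # average over channels
            rng_span = heat.max() - heat.min()
            heat = (heat - heat.min()) / (rng_span + 1e-8)     # min-max normalization
            return (heat > heat.mean()).astype(np.uint8)       # binarize at the mean

        # toy usage on a random 8 x 8 response with 4 channels
        mask = layer_response_to_mask(np.random.default_rng(0).random((8, 8, 4)))
        ```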

        Figure 13:Explanation of network’s predictions using LIME

        Figure 14:Proposed infection segmentation using fine-tuned Nasnetmobile deep model

        Table 7 compares the results achieved in this article with recent state-of-the-art works. Reference [38] used self-supervised learning to classify the Hyper-Kvasir dataset; the authors used six classes and achieved a highest accuracy of 87.45 percent. Reference [27] used the Hyper-Kvasir dataset to classify the gastrointestinal tract and obtained 73.66 percent accuracy using only fourteen classes. In addition, reference [23] achieved 63 percent macro accuracy using all 23 classes. It is clear that the proposed method outperforms the recent state-of-the-art methodologies, achieving the best accuracy of 93.80 percent. Moreover, the computational complexity of the proposed framework is O(T · K · C), where T denotes the middle steps, K the parameters of the deep learning architectures, and C the constant values.


        Table 7:Comparison of the proposed framework accuracy with state-of-the-art(SOTA)techniques

        5 Conclusion

        Gastrointestinal tract cancer is one of the most severe cancers in the world. In this work, deep learning models are used to diagnose gastrointestinal cancer. The proposed model uses Nasnetmobile and an Auto-Encoder to extract deep features, which serve as input to Artificial Neural Network classifiers. Moreover, two feature selection techniques, the Marine Predator Algorithm and the Slime Mould Algorithm, are applied in a hybrid fashion to address the curse of dimensionality. The selected features are then fused and fed to the classifiers. The results analysis shows that classification with features extracted from Nasnetmobile gives the best overall validation accuracy of 93.90 percent. Overall, we conclude the following:


        – Data augmentation using contrast enhancement techniques can improve the learning of deep learning models more than flip- and rotation-based approaches.

        – Extracting encoder and deep learning features gives better information on the selected disease classes.


        – The selection of features in a hybrid fashion impacts the classification accuracy and reduces the time.

        – The fusion process improved the classification accuracy.

        The drawbacks of this work are: i) segmentation of infected regions is challenging due to changes in lesion shape and boundary location; ii) manual assignment of the hyperparameters of deep learning models is not ideal and affects the learning process of a network. In the future, the proposed framework will be extended to infected-region segmentation using deep learning and saliency-based techniques. Moreover, we will opt for a Bayesian optimization technique for hyperparameter selection. Although the proposed methodology has achieved the best outcomes, better accuracy may be achieved through different approaches in the future.

        Acknowledgement: This work is supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

        Funding Statement: This work was supported by the “Human Resources Program in Energy Technology” of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090), and by the Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R387), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

        Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: A. Haseeb, M. A. Khan, M. Alhaisoni; data collection: A. Haseeb, M. A. Khan, L. Jamel, G. Aldehim, and U. Tariq; analysis and interpretation of results: M. A. Khan, J. Cha, T. Kim, and U. Tariq; draft manuscript preparation: A. Haseeb, M. A. Khan, L. Jamel, G. Aldehim, and J. Cha; validation: J. Cha, T. Kim, and U. Tariq; funding: J. Cha, T. Kim, L. Jamel, and G. Aldehim. All authors reviewed the results and approved the final version of the manuscript.

        Availability of Data and Materials: The Kvasir dataset used in this work is publicly available at https://datasets.simula.no/kvasir/.

        Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
