Speaker Naming in Arabic TV Programs

Mohamed Lazhar Bellagha

Higher Institute of Computer Science and Communication Techniques ISITCom, University of Sousse, Tunisia

Mounir Zrigui

Research Laboratory in Algebra, Numbers Theory and Intelligent Systems RLANTIS, University of Monastir, Tunisia

Abstract: Automatic speaker identification is the problem of identifying speakers by their real identities. Previous approaches use textual information as a source of names and try to associate names with neighbouring speaker segments using linguistic rules. However, these approaches have limitations that hinder their application to spoken text. Deep learning approaches for natural language processing have recently reached state-of-the-art results, but deep learning requires a lot of annotated data, which is difficult to obtain for the speaker identification task. In this paper, we present two contributions towards integrating deep learning for identifying speakers in news broadcasts. First, we construct a dataset in which the names of mentioned speakers are related to the previous, next, current, or other speaker turns. Second, we present our approach to solving the speaker identification problem using information obtained from the transcription: a Long-term Recurrent Convolutional Network assigns names, and integer linear programming propagates the names into the different segments. We evaluate our model on both the assignment and propagation tasks on the test part of the Arabic multi-genre broadcast dataset, which consists of 17 TV programs from Aljazeera. Performance is analysed using evaluation metrics such as the Estimated Global Error Rate (EGER) and the Diarization Error Rate (DER). The proposed method ensures better performance, achieving a lower EGER of 32.3% and a DER of 8.3%.
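
A minimal sketch (PyTorch) of a Long-term Recurrent Convolutional Network of the kind used for name assignment: a 1-D convolution extracts local context around a name mention and an LSTM summarizes the sequence before classifying the mention into one of the four turn labels (previous, next, current, other). The vocabulary size and layer widths are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LRCNNameAssigner(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=128, conv_ch=64,
                 lstm_hidden=64, n_labels=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # 1-D convolution over the token sequence extracts local context
        self.conv = nn.Conv1d(emb_dim, conv_ch, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_ch, lstm_hidden, batch_first=True)
        self.fc = nn.Linear(lstm_hidden, n_labels)

    def forward(self, token_ids):               # (batch, seq_len)
        x = self.emb(token_ids)                 # (batch, seq_len, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))
        _, (h, _) = self.lstm(x.transpose(1, 2))
        return self.fc(h.squeeze(0))            # logits over the four labels

logits = LRCNNameAssigner()(torch.randint(0, 20000, (2, 50)))
print(logits.shape)   # torch.Size([2, 4])
```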

Keywords: Speaker naming, speaker identification, name assignment, name propagation and CNN-LSTM.

Received May 12, 2020; accepted May 27, 2021

https://doi.org/10.34028/iajit/19/6/1

Full text

VoxCeleb1: Speaker Age-Group Classification using Probabilistic Neural Network

Ameer Badr

Department of Computer Science, University of Technology, Iraq

Alia Abdul-Hassan

Department of Computer Science, University of Technology, Iraq

Abstract: Human voice speech essentially includes paralinguistic information used in many voice recognition applications. Classifying speakers according to their age group is a valuable tool in various applications, such as issuing different levels of permission to different age groups. In the presented research, a text-independent automatic system to classify speaker age groups is proposed. The Fundamental Frequency (F0), jitter, shimmer, and Spectral Sub-Band Centroids (SSCs) are used as features, while a Probabilistic Neural Network (PNN) is utilized as a classifier to sort speaker utterances into eight age groups. Experiments are carried out on the VoxCeleb1 dataset to demonstrate the proposed system's performance, and this is considered the first effort of its kind. The suggested system has an overall accuracy of roughly 90.25%, and the findings reveal that it is clearly superior to a variety of base classifiers in terms of overall accuracy.
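
As an illustration of the classifier, a Probabilistic Neural Network is essentially a Parzen-window density classifier; the sketch below implements one in NumPy. The random vectors stand in for pre-extracted F0/jitter/shimmer/SSC features, and the smoothing parameter sigma is an illustrative assumption.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Sum Gaussian kernels over each class's training vectors and
    pick the class with the largest estimated density."""
    classes = np.unique(y_train)
    scores = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores[:, j] = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)
    return classes[scores.argmax(axis=1)]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(80, 10))     # stand-ins for fused F0/jitter/shimmer/SSC vectors
y_train = rng.integers(0, 8, size=80)   # eight age-group labels
print(pnn_predict(X_train, y_train, rng.normal(size=(5, 10))))
```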

Keywords: Speaker age-group recognition, features fusion, SSC, F0, jitter and shimmer.

Received May 23, 2020; accepted October 21, 2021

https://doi.org/10.34028/iajit/19/6/2

Full text

Dictionary Based Arabic Text Compression and Encryption Utilizing Two-Dimensional Random Binary Shuffling Operations

Ahmad Al-Jarrah

Applied Science Department, Al-Balqa Applied University, Jordan

Mohammad Al-Jarrah

Computer Engineering Department, Yarmouk University, Jordan

Amer Albsharat

Computer Engineering Department, Yarmouk University, Jordan

Abstract: This paper develops Arabic text encryption and compression based on a dictionary indexing algorithm. The proposed algorithm encodes the Arabic text using an Arabic word dictionary, maps the encoded binary stream to a two-dimensional binary matrix, utilizes a randomized variable-size encryption key, applies random binary shuffling functions to the two-dimensional matrix, and maps the two-dimensional binary matrix back into a sequential binary stream. The decryption algorithm at the receiver side implements the encryption steps in reverse, utilizing the encryption key and the shared Arabic word dictionary. In this dictionary, the words of the formal Arabic language are classified into four categories according to word length and sorted alphabetically. Each dictionary category is given an index size large enough to fit all words in that category. The proposed algorithm shuffles adjacent bits away from each other in a random fashion by utilizing a randomized variable-length encryption key, two-dimensional shuffling functions, and a repetition loop. Moreover, the index size is selected not to be a multiple of a byte, to destroy any statistical feature that could be exploited to break the algorithm. Analysis of the proposed algorithm concluded that it could only be broken after an estimated 3.215×10^9 years. Moreover, the proposed algorithm achieved a compression ratio of less than 30%.
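
A minimal sketch of the two-dimensional shuffling stage, assuming the dictionary-indexing step has already produced a bit stream: the bits are reshaped into a matrix, rows and columns are permuted with key-seeded randomness, and the inverse permutation restores them. The key derivation and the repetition loop are simplified illustrations, not the paper's exact construction.

```python
import numpy as np

def shuffle_2d(bits, key, rows=16):
    m = np.asarray(bits, dtype=np.uint8).reshape(rows, -1)
    rng = np.random.default_rng(key)         # key-seeded randomness (illustrative)
    rp, cp = rng.permutation(m.shape[0]), rng.permutation(m.shape[1])
    return m[rp][:, cp].ravel(), (rp, cp)

def unshuffle_2d(bits, perms, rows=16):
    rp, cp = perms
    m = np.asarray(bits, dtype=np.uint8).reshape(rows, -1)
    out = np.empty_like(m)
    out[np.ix_(rp, cp)] = m                   # invert both permutations at once
    return out.ravel()

bits = np.random.randint(0, 2, 16 * 8)
cipher, perms = shuffle_2d(bits, key=1234)
assert (unshuffle_2d(cipher, perms) == bits).all()
```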

Keywords: Encryption algorithm, decryption algorithm, compression algorithm, Arabic text encryption, Arabic text decryption, Arabic text encoding, two-dimensional binary shuffling.

Received December 23, 2020; accepted January 9, 2022

https://doi.org/10.34028/iajit/19/6/3

Full text

An Improved Quantile-Point-Based Evolutionary Segmentation Representation Method of Financial Time Series

Lei Liu

School of Computer and Software Engineering, Xihua University, China

Zheng Pei

School of Computer and Software Engineering, Xihua University, China

Peng Chen*

School of Computer and Software Engineering, Xihua University, China

Zhisheng Gao

School of Computer and Software Engineering, Xihua University, China

Zhihao Gan

School of Computer and Software Engineering, Xihua University, China

Kang Feng

School of Computer and Software Engineering, Xihua University, China

Abstract: Effective and concise feature representation is crucial for time series mining. However, traditional time series feature representation approaches are inadequate for Financial Time Series (FTS) due to FTS' complex, highly noisy, dynamic, and non-linear characteristics. Thus, we propose an improved linear segmentation method named MS-BU-GA in this work. Critical data points that can represent the financial time series are added to the feature representation result. Specifically, we first propose a division criterion based on quantile segmentation points. On the basis of this criterion, we segment the time series under the constraint of a maximum segment fitting error. Then, a bottom-up mechanism is adopted to merge the above segmentation results under the maximum segment fitting error. Next, we apply a Genetic Algorithm (GA) to the merged results for further optimization, which reduces both the overall segment representation fitting error and the integrated factor of segment representation error and number of segments. The experimental results show that MS-BU-GA outperforms existing methods in segment number and representation error: the overall average representation error is decreased by 21.73%, and the integrated factor of the number of segments and the segment representation error is reduced by 23.14%.
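
A minimal sketch of the two non-GA stages under illustrative assumptions (a random-walk series standing in for real FTS data and an arbitrary maximum fitting error): the series is first cut where it crosses its quantile levels, then adjacent segments are merged bottom-up while the merged linear-fit error stays within the bound. The GA refinement stage is omitted.

```python
import numpy as np

def fit_error(y):
    """Max absolute residual of a straight-line fit to segment y."""
    if len(y) < 3:
        return 0.0
    x = np.arange(len(y))
    a, b = np.polyfit(x, y, 1)
    return np.abs(y - (a * x + b)).max()

def segment(y, n_quantiles=10, max_err=0.5):
    # phase 1: cut where the series crosses its quantile levels
    qs = np.quantile(y, np.linspace(0, 1, n_quantiles + 1)[1:-1])
    cuts = sorted({0, len(y)} | {i for i in range(1, len(y))
                                 if any((y[i - 1] - q) * (y[i] - q) < 0 for q in qs)})
    segs = [(cuts[k], cuts[k + 1]) for k in range(len(cuts) - 1)]
    merged = True
    while merged and len(segs) > 1:          # phase 2: bottom-up merging
        merged = False
        for k in range(len(segs) - 1):
            s, e = segs[k][0], segs[k + 1][1]
            if fit_error(y[s:e]) <= max_err:
                segs[k:k + 2] = [(s, e)]
                merged = True
                break
    return segs

y = np.cumsum(np.random.default_rng(1).normal(size=200))  # random-walk "price"
print(len(segment(y)), "segments")
```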
Keywords: Time series, feature representation, quantile segmentation points, linear segmentation, genetic algorithm.

Received January 31, 2021; accepted January 9, 2022

https://doi.org/10.34028/iajit/19/6/4

Full text

Neuroevolution of Augmenting Topologies for Artificial Evolution: A Case Study of Kinesis of Creature in the Various Mediums

Sunil Kumar Jha

Department of Physics, Adani University, India

Filip Josheski

University of Information Science and Technology “St. Paul the Apostle,” North Macedonia

Xiaorui Zhang

School of Computer and Software, Nanjing University of Information Science and Technology, China

Zulfiqar Ahmad

Institute of Hydrobiology, Chinese Academy of Sciences, China

Abstract: The motivation of the present study is to evolve virtual creatures in a diverse simulated 3D environment. The proposed scheme is based on artificial evolution using the NeuroEvolution of Augmenting Topologies (NEAT) algorithm to derive a neural network that controls the muscle forces of the artificial creatures. The morphologies of the creatures are established using the Genetic Algorithm (GA) based on a distance-metric fitness function. The concepts of damaging crossover of neural networks and of a genetic language for creature morphology have been considered in the morphologies of the artificial creatures. Creatures with certain morphological traits require a long time to optimize their kinetics, so they are placed in a separate species to limit the search. The simulation yields significant kinetics of artificial creatures (2-5 limbs) in virtual mediums with varying dynamic and static coefficients of friction (0.0-4.0). The motion of the artificial creatures in the simulated medium was determined at different angles and demonstrated in 3D space.
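
The sketch below illustrates only the selection loop with a distance-style fitness, using a fixed-length parameter genome and a stub in place of the physics simulation; the actual study evolves network topologies with NEAT and evaluates fitness in a 3D simulation, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_distance(genome):
    # stand-in for the physics engine: reward strong but smooth "muscle" gains
    return genome.sum() - 0.1 * np.abs(np.diff(genome)).sum()

pop = rng.normal(size=(50, 8))                  # 50 genomes, 8 controller parameters
for generation in range(30):
    fitness = np.array([simulate_distance(g) for g in pop])
    parents = pop[np.argsort(fitness)[-10:]]    # keep the 10 fittest
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.1, (40, 8))
    pop = np.vstack([parents, children])        # elitism plus mutated offspring
print("best distance:", max(simulate_distance(g) for g in pop))
```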

Keywords: Artificial evolution, kinetics, NEAT algorithm, artificial neural network, genetic algorithm.

Received February 5, 2021; accepted October 31, 2021

https://doi.org/10.34028/iajit/19/6/5

Full text

FaceSwap based DeepFakes Detection

Marriam Nawaz

Department of Software Engineering, University of Engineering and Technology, Pakistan

Momina Masood

Department of Computer Science, University of Engineering and Technology, Pakistan

Ali Javed

Department of Software Engineering, University of Engineering and Technology, Pakistan

Tahira Nazir

Faculty of Computing, Riphah International University, Islamabad, Pakistan

Abstract: The progression of Machine Learning (ML) has introduced new trends in the area of image processing. Moreover, ML enables lightweight applications capable of running with minimal computational resources, such as Deepfakes, which generate widely manipulated multimedia data. Deepfakes pose a serious danger to human privacy and cause extensive religious, sectarian, and political anxiety. FaceSwap-based deepfakes are difficult for people to identify due to their realism, so researchers face serious challenges in detecting such visual manipulations. In the presented approach, we propose a novel technique for recognizing FaceSwap-based deepfakes. Initially, landmarks are computed from the input videos by employing the Dlib library. In the next step, the computed landmarks are used to train two classifiers, namely a Support Vector Machine (SVM) and an Artificial Neural Network (ANN). The reported results demonstrate that the SVM classifies the manipulated samples better than the ANN due to its ability to resist over-fitting on the training data.
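
A minimal sketch of the described pipeline, assuming Dlib's standard 68-landmark predictor model is available locally and that labelled real/fake frames are supplied elsewhere: landmarks are normalised and flattened into a feature vector for an SVM.

```python
import dlib
import numpy as np
from sklearn.svm import SVC

detector = dlib.get_frontal_face_detector()
# assumed local copy of Dlib's standard 68-landmark model
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_vector(gray_frame):
    """Return a normalised 136-D landmark vector for the first detected face."""
    faces = detector(gray_frame)
    if not faces:
        return None
    shape = predictor(gray_frame, faces[0])
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)], float)
    pts -= pts.mean(axis=0)                              # translation-normalise
    return (pts / (np.linalg.norm(pts) + 1e-9)).ravel()  # scale-normalise, flatten

# With X (stacked landmark vectors) and y (0 = real, 1 = FaceSwap) supplied:
# clf = SVC(kernel="rbf").fit(X, y)
# clf.predict(landmark_vector(some_gray_frame)[None])
```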

Keywords: Deepfakes, faceswap, ANN, SVM.

Received February 15, 2021; accepted January 23, 2022

https://doi.org/10.34028/iajit/19/6/6

Full text

Analysis of Video Steganography in Military Applications on Cloud

Umadevi Ramamoorthy

Vivekanandha College for Women, Namakkal District, India

Aruna Loganathan

Vivekanandha College for Women, Namakkal District, India

Abstract: The analysis of secure video file transfer for military applications using video steganography on cloud computing plays an essential role. Video steganography is the process of hiding secret data within a video and is based on reversible and irreversible schemes. A reversible scheme can embed the secret data into a video and then recover the video without any loss of information when the secret data is extracted. Irreversible methods of video steganography often deal with sensitive information, making the embedded payload an important concern in the design of these data-hiding systems. In video steganography, irreversible contrast mapping is considered for extracting the secret data during the data-hiding process, and high-quality data hiding is carried out during this extraction. The analysis results of the proposed Video Steganography Cloud Security (VSCS) method show that the structure supports secure communication and augments confidentiality and security in the cloud. The results of the proposed method show a better security level.
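
For illustration only, the sketch below hides bits in the least significant bits of a frame's luminance values; this is a generic irreversible data-hiding scheme, not the paper's contrast-mapping method.

```python
import numpy as np

def embed(frame, bits):
    flat = frame.ravel().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(frame.shape)

def extract(frame, n_bits):
    return frame.ravel()[:n_bits] & 1

frame = np.random.randint(0, 256, (120, 160), dtype=np.uint8)  # stand-in gray frame
secret = np.random.randint(0, 2, 64, dtype=np.uint8)
assert (extract(embed(frame, secret), 64) == secret).all()
```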

Keywords: Video steganography, secure communication, information security, secret data, video code streams, data hiding.

Received April 7, 2021; accepted December 2, 2021

https://doi.org/10.34028/iajit/19/6/7

Full text

An ML-Based Classification Scheme for Analyzing the Social Network Reviews of Yemeni People

Emran Al-Buraihy

Faculty of Information Technology, Beijing University of Technology, China

Wang Dan*

Faculty of Information Technology, Beijing University of Technology, China

Rafi Ullah Khan

Institute of Computer Science and Information Technology, The University of Agriculture Peshawar, Pakistan

Mohib Ullah

Institute of Computer Science and Information Technology, The University of Agriculture Peshawar, Pakistan

Abstract: The social network allows individuals to create public and semi-public web-based profiles to communicate with other users in the network and serves as a source of online interaction. Social media sites such as Facebook, Twitter, etc., are prime examples of the social network, enabling people to express their ideas, suggestions, views, and opinions about a particular product, service, political entity, or affair. This research introduces a Machine Learning-based (ML-based) classification scheme for analyzing the social network reviews of Yemeni people using data mining techniques. A constructed dataset consisting of 2000 Modern Standard Arabic (MSA) and Yemeni dialect records is used for training and testing, along with a test dataset consisting of 300 MSA and Yemeni dialect records used to demonstrate the capacity of our scheme. Four supervised machine learning algorithms were applied, and their performance was compared based on accuracy, recall, precision, and F-measure. The results show that the Support Vector Machine algorithm outperformed the others in terms of accuracy on both the training and testing datasets, with 90.65% and 90.00%, respectively. It is further noted that the accuracy of the selected algorithms was influenced by noisy and sarcastic opinions.
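
A minimal sketch of an SVM text classifier of the kind reported above, under illustrative assumptions: character n-gram tf-idf features with a linear SVM, and two placeholder reviews standing in for the 2000-record dataset.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = ["الخدمة ممتازة وسريعة", "تجربة سيئة جدا"]   # placeholder reviews
labels = ["positive", "negative"]

# character n-grams are robust to Arabic dialectal spelling variation
model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                      LinearSVC())
model.fit(texts, labels)
print(model.predict(["خدمة سيئة"]))
```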

Keywords: Social network, sentiment analysis, Arabic sentiment analysis, MSA, data mining, supervised machine learning.

Received March 18, 2020; accepted October 31, 2021

https://doi.org/10.34028/iajit/19/6/8

Full text

Text Mining Approaches for Dependent Bug Report Assembly and Severity Prediction

Bancha Luaphol

Department of Digital Technology, Kalasin University, Thailand

Jantima Polpinij

Department of Computer Science, Mahasarakham University, Thailand

Manasawee Kaenampornpan

Department of Computer Science, Mahasarakham University, Thailand

Abstract: In general, most existing bug report studies focus only on solving a single specific issue. Considering multiple issues at once is required for a more complete and comprehensive bug-fixing process. We took up this challenge and propose a method to analyze two issues of bug reports based on text mining techniques. Firstly, dependent bug reports are assembled into individual clusters, and then the bug reports in each cluster are analyzed for their severity. The dependent bug report assembly method is evaluated with threshold-based similarity analysis: cosine similarity and BM25 are compared with term frequency (tf) weighting to obtain the most appropriate method. Meanwhile, four classification algorithms, namely Random Forest (RF), Support Vector Machines (SVM) with the RBF kernel function, Multinomial Naïve Bayes (MNB), and k-Nearest Neighbour (k-NN), are utilized to model the bug severity predictor with four term weighting schemes, i.e., tf, term frequency-inverse document frequency (tf-idf), term frequency-inverse class frequency (tf-icf), and term frequency-inverse gravity moment (tf-igm). After the experimentation, BM25 was found to be the most appropriate for dependent bug report assembly, while for severity prediction, tf-icf weighting with the RF method yielded the best performance.
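
A minimal self-contained BM25 scorer of the kind used for the assembly step; k1, b, whitespace tokenization, and the toy reports are illustrative choices, and a threshold on the returned scores would decide cluster membership.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    toks = [d.split() for d in docs]
    avgdl = sum(map(len, toks)) / len(toks)
    df = Counter(t for d in toks for t in set(d))   # document frequencies
    scores = []
    for d in toks:
        tf, s = Counter(d), 0.0
        for t in query.split():
            if tf[t] == 0:
                continue
            idf = math.log(1 + (len(docs) - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

reports = ["crash on save dialog", "save dialog crashes app", "font renders wrong"]
print(bm25_scores("crash when opening save dialog", reports))
```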

Keywords: Bug report, dependent bug report assembly, bug severity prediction, threshold-based similarity analysis, cosine similarity, BM25, term weighting, classification algorithm.

Received April 28, 2020; accepted February 13, 2022

https://doi.org/10.34028/iajit/19/6/9

Full text

Implementation of Multimode Fiber in Eight Channels Mimo Multiplexing System by Eliminating Dispersion with Optical Grating Methodologies

Guruviah Karpagarajesh

Department of Electronics and Communication, Government College of Engineering, India

Abstract: In recent years, with the swift development of the internet industry, people urgently require extra capacity and speedy network systems. Under this circumstance, optical fibers are becoming the most favourable transmission media, as they play a major role in the information business with their huge bandwidth and excellent transmission performance. Dispersion compensation is the most important attribute required in an optical fiber communication system, because dispersion leads to pulse spreading that causes the output signal pulses to overlap. For a long-haul transmission system, an 8x1 Wavelength Division Multiplexing Multi-Input Multi-Output (WDM MIMO) channel with a novel dispersion compensation system has been designed. Various dispersion compensation techniques are explored in this work, including dispersion-compensating fibers, fiber Bragg gratings, ideal-compensation Bragg gratings, and opti-gratings, with a 2.5 Gbit/s data rate for each channel. With a power of 20 dBm, the proposed model is simulated with 32768 samples. The length of the optical fibre ranges from 5 to 100 kilometres, and the operating frequency is 193.1 THz throughout the simulation. The OptiSystem 17.0 software was used to design and implement the system. The 8-channel device was used to simulate and calculate metrics such as the Q-factor, Bit Error Rate (BER), and Signal-to-Noise Ratio (SNR). The proposed model enhances performance in terms of BER and quality factor at the receiver end.
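
For reference, the receiver-side Q-factor and BER are related by the textbook formula BER = 0.5·erfc(Q/√2); the snippet below evaluates it for a few illustrative Q values (this is the standard relation, not output from the OptiSystem simulation).

```python
from math import erfc, sqrt

for q in (4, 6, 7):                # illustrative Q-factor values
    print(f"Q = {q}:  BER ≈ {0.5 * erfc(q / sqrt(2)):.2e}")
```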

Keywords: Multimode fiber, dispersion compensation, optical grating, Q-factor, bit error rate.

Received August 28, 2020; accepted October 31, 2021

https://doi.org/10.34028/iajit/19/6/10

Full text

Context Aware Mobile Application Pre-Launching Model using KNN Classifier

Malini Alagarsamy

Department of Computer Science, Thiagarajar College of Engineering, India

Ameena Sahubar Sathik

Infosys Pvt Ltd, India

Abstract: Mobile applications are application software executed on mobile devices, and the performance of a mobile application is a major factor to consider while developing the software. Usually, the user uses a sequence of applications continuously, so pre-launching is an effective methodology for reducing the launch time of a mobile application. The Android Operating System (OS) uses cache policies to improve launch times, but whenever a new application enters the cache, it evicts an existing application, even one repeatedly used by the user, so the evicted application needs to be re-launched again. To rectify this, we suggest K applications for pre-launching by calculating the affinity between applications, because the user may use a set of applications together more than once. We discover those applications from the usage pattern based on Launch Delay (LD), Power Consumption (PC), app affinity, and spatial and temporal relations; a K-Nearest Neighbour (KNN) classifier is also used to increase the accuracy of prediction.
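
A minimal sketch of the KNN prediction step under an assumed feature encoding (previous app and hour of day drawn from a hypothetical usage log); the real model would also fold in launch delay, power consumption, and affinity features.

```python
from sklearn.neighbors import KNeighborsClassifier

# (previous_app_id, hour_of_day) -> next_app_id, from a hypothetical usage log
X = [[0, 8], [0, 9], [1, 9], [2, 20], [2, 21], [1, 10]]
y = [1, 1, 2, 3, 3, 2]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("pre-launch candidate:", knn.predict([[0, 8]])[0])
```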

Keywords: Mobile application, launch time, app affinity, pre-launch, context-aware.

Received October 10, 2020; accepted December 14, 2021

https://doi.org/10.34028/iajit/19/6/11

Full text

New Language Models for Spelling Correction

Saida Laaroussi

IT, Logistics and Mathematics, Ibn Tofail University, Morocco

Si Lhoussain Aouragh

IT and Decision Support System, Mohamed V University, Morocco

Abdellah Yousfi

Department of Economics and Management, Mohamed V University, Morocco

Mohamed Nejja

Department of Software Engineering, Mohamed V University, Morocco

Hicham Geddah

Department of Computer Science, Mohamed V University, Morocco

Said Ouatik El Alaoui

IT, Logistics and Mathematics, Ibn Tofail University, Morocco

Abstract: Correcting spelling errors based on context is a fairly significant problem in Natural Language Processing (NLP) applications. The majority of the work carried out to introduce context into the spelling correction process uses n-gram language models. However, these models fail in several cases to give adequate probabilities for the suggested corrections of a misspelled word in a given context. To resolve this issue, we propose two new language models inspired by stochastic language models combined with edit distance. A first phase consists of finding the words of the lexicon orthographically close to the erroneous word, and a second phase consists of ranking and limiting these suggestions. We have applied the new approach to Arabic, taking into account its specificity of having strong contextual connections between distant words in a sentence. To evaluate our approach, we developed textual data processing applications, namely the extraction of distant transition dictionaries. The correction accuracy obtained exceeds 98% for the first 10 suggestions. Our approach has the advantage of simplifying the parameters to be estimated while achieving higher correction accuracy compared to n-gram language models.
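
A minimal sketch of the two phases, with a four-word placeholder lexicon and a stub contextual scorer standing in for the proposed language models: phase one collects lexicon words within a small edit distance of the misspelled word, and phase two ranks them against the context.

```python
from difflib import SequenceMatcher

LEXICON = ["كتاب", "كاتب", "كتب", "مكتب"]          # placeholder Arabic lexicon

def edit_distance(a, b):
    """Standard Levenshtein distance via a one-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def suggest(word, context, k=3):
    cands = [w for w in LEXICON if edit_distance(word, w) <= 2]   # phase 1
    score = lambda w: sum(SequenceMatcher(None, w, c).ratio() for c in context)
    return sorted(cands, key=score, reverse=True)[:k]             # phase 2 (stub ranking)

print(suggest("كتتاب", context=["قرأت", "مفيدا"]))
```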

Keywords: Spelling correction, contextual correction, n-gram language models, edit distance, NLP.

Received January 1, 2021; accepted January 19, 2022

https://doi.org/10.34028/iajit/19/6/12

Full text

Improved Superpixels Generation Algorithm for Qualified Graph-Based Technique

Asma Fejjari

MARS Research Laboratory, ISITCom, 4011, Hammam Sousse, University of Sousse, Tunisia

Karim Saheb Ettabaa

IMT Atlantique, ITI Department, Telecom Bretagne, France

Ouajdi Korbaa

MARS Research Laboratory, ISITCom, 4011, Hammam Sousse, University of Sousse, Tunisia

Abstract: Hyperspectral Images (HSIs) represent an important source of information in the remote sensing field. Indeed, HSIs, which collect data in many spectral bands, are more easily interpretable and provide detailed information about areas of interest. However, hyperspectral imaging systems generate a huge amount of redundant data and a significant level of noise. Dimensionality reduction is an important task that attempts to reduce dimensionality and remove noise so as to enhance the accuracy of remote sensing applications. The first dimensionality reduction approaches date back to the 1970s, and various model-based methods have been proposed since then. The field has received increasing attention with the suggestion of graph-based models, which have yielded promising results. While graph-based approaches produce considerable results, these models often require substantial processing time to handle the data. In this work, we aim to reduce the computational burden of a promising graph-based method called Modified Schroedinger Eigenmap Projections (MSEP). In this respect, we suggest an efficient superpixel algorithm, called Improved Simple Linear Iterative Clustering (Improved SLIC), to lessen the heavy computational load of the MSEP method. The proposed approach exploits superpixels as inputs instead of pixels and then runs the MSEP algorithm. While respecting the properties of HSIs, the proposed scheme illustrates that the MSEP method can be performed with computational efficiency.
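
A minimal sketch of the superpixel-as-input idea, using scikit-image's standard SLIC (a recent version with channel_axis) in place of the paper's Improved SLIC, and a random cube in place of a real HSI: each superpixel's mean spectrum becomes one sample for the downstream MSEP step.

```python
import numpy as np
from skimage.segmentation import slic

hsi = np.random.rand(64, 64, 30)                   # placeholder hyperspectral cube
labels = slic(hsi, n_segments=100, compactness=0.1, channel_axis=-1)

# one mean spectrum per superpixel: far fewer samples than pixels
superpixel_means = np.stack([hsi[labels == l].mean(axis=0)
                             for l in np.unique(labels)])
print(superpixel_means.shape)   # (n_superpixels, n_bands)
```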

Keywords: Hyperspectral images, dimensionality reduction, graph, MSEP, superpixels, improved SLIC.

Received April 14, 2021; accepted March 27, 2022

https://doi.org/10.34028/iajit/19/6/13

Full text

Hybrid User Acceptance Test Procedure to Improve the Software Quality

Natarajan Sowri Raja Pillai

Department of Information Technology, Raak College of Engineering and Technology, India

Ranganathan Rani Hemamalini

Department of Electrical and Electronics Engineering, St. Peters Institute of Higher Education and Research, India

Abstract: Fast-growing software needs result in technical and time challenges in developing quality software, and they impact the cost and scarcity of resources faced by companies. Thus, this research focuses on the optimal implementation of User Acceptance Testing (UAT) and its integration into the development process. The Software Development Life Cycle (SDLC) was adapted to develop software and introduce the UAT process from the initial phase of software development. Additionally, it is devised to maximise time reduction by implementing client testing in all three processes. A High Capability to Detect (HCD) procedure has been incorporated in the problem formulation to optimally identify sensitive bugs, and a Modified Reuse of Code (MRC) is proposed as a feasible time-saving solution. The proposed UAT provides an optimal solution in the software testing phases, implemented earlier than black-box testing, and it achieves significantly better production time, development cost, and software quality in comparison to traditional UATs. The study's findings were corroborated by the output data from the UAT cases. The proposed UAT ensures product quality in the early phases of project development and implementation, which minimises the risk of bugs during and after implementation and meets the target audience's needs.

Keywords: Black box testing, high capability to detect, modified reuse of code, user acceptance test.

Received April 18, 2021; accepted October 21, 2021

https://doi.org/10.34028/iajit/19/6/14

Full text

A Hybrid Deep Learning Based Assist System for Detection and Classification of Breast Cancer from Mammogram Images

Lakshmi Narayanan

Department of Electronics and Communication Engineering, Francis Xavier Engineering College, India

Santhana Krishnan

Department of Electronics and Communication Engineering, SCAD College of Engineering and Technology, India

Harold Robinson

School of Information Technology and Engineering, Vellore Institute of Technology, India

Abstract: The most common cancer among women is breast cancer. This disease is caused by genetic mutations, ageing, and lack of awareness. The tumour may be benign, which is non-dangerous, or malignant, which is dangerous. Mammography is utilized for the early detection of breast cancer. A novel deep learning technique that combines deep convolutional neural networks and a random forest classifier is proposed to detect and categorize breast cancer. Feature extraction is carried out by the AlexNet model of the deep convolutional neural network, and classification precision is increased by the random forest classifier. The images are collected from various mammogram images in predefined datasets. The performance results confirm that the projected scheme has enhanced performance compared with state-of-the-art schemes.
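
A minimal sketch of the hybrid under stated assumptions (a recent torchvision with downloadable pretrained AlexNet weights; random tensors standing in for preprocessed mammograms and placeholder benign/malignant labels): AlexNet's convolutional stack supplies features, and a random forest classifies them.

```python
import torch
import numpy as np
from torchvision.models import alexnet
from sklearn.ensemble import RandomForestClassifier

net = alexnet(weights="DEFAULT").eval()       # pretrained backbone (recent torchvision)

def features(imgs):                           # imgs: (N, 3, 224, 224) tensor
    with torch.no_grad():
        x = net.avgpool(net.features(imgs))   # convolutional features only
        return torch.flatten(x, 1).numpy()

imgs = torch.rand(8, 3, 224, 224)             # stand-ins for preprocessed mammograms
y = np.random.randint(0, 2, 8)                # 0 = benign, 1 = malignant (placeholder)
clf = RandomForestClassifier(n_estimators=100).fit(features(imgs), y)
print(clf.predict(features(torch.rand(2, 3, 224, 224))))
```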

Keywords: Breast cancer, mammogram, alexnet, deep convolutional neural networks, random forest.

Received May 13, 2021; accepted February 24, 2022

https://doi.org/10.34028/iajit/19/6/15

Full text
