Online ISSN: 2515-8260

Keywords: Deep learning


Dr. M. Rajaiah, Prof. V. Sreenatha Sarma, Mr. S. Satwik, Ms. T. Thanusha, Ms. V. Hemalatha, Ms. D. Rachana Pravalika

European Journal of Molecular & Clinical Medicine, 2023, Volume 10, Issue 2, Pages 915-925

General traffic administration as well as infrastructure design may benefit from real-time vehicle surveillance on motorways, roads, and streets. This study introduces Traffic Detector, a system that uses deep learning methods to automatically monitor and classify vehicles on roads using a fixed, stable camera. Although vehicle detection is a well-established area of computer-vision research, recent improvements in neural networks for object recognition and classification have made it even more intriguing owing to the efficacy of these methods. The rapidly expanding domain of supervised learning approaches to vehicle identification is surveyed here, concentrating on region-based approaches such as R-CNN (Region-based Convolutional Neural Network) and regression-based methods such as YOLO (You Only Look Once), along with the enhanced versions of each. Finally, a traffic-offence detection module examines traffic patterns and identifies various traffic offences in real time. The Deep Neural Network (DNN) module of OpenCV was used to implement the complete system. Using YOLOv4, vehicles on the roads were located with excellent accuracy, and a fast YOLOv4-tiny model was used to detect motorcycle riders without helmets. Real-time vehicle tracking is accomplished using the Deep SORT algorithm. For vehicle detection, YOLOv4 achieves a precision of 89%; for helmet detection, YOLOv4-tiny achieves an accuracy of 96%, and YOLOv6 reaches 97%. The backbone, neck, and head are the three crucial components of the more recent YOLOv5, and YOLOv7 is anticipated to overtake YOLOv4. This review paper aims to advance sophisticated deep learning frameworks for real-time vehicle detection.
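All the YOLO-family detectors mentioned above rely on non-maximum suppression (NMS) to prune overlapping candidate boxes before tracking. A minimal plain-Python sketch of that post-processing step (the box format and the 0.45 overlap threshold are illustrative assumptions, not the paper's settings):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union of two boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.45):
    # Keep the highest-scoring box; drop the rest that overlap it too much.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

In practice OpenCV's DNN module provides an equivalent built-in (`cv2.dnn.NMSBoxes`), so a sketch like this mainly serves to show what that call does.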


Gaurav D Saxena, Dr. Shaik Jumlesha, K. Susmitha, Mekala R, Abhay R. Shirode, Dr. Amit Chauhan, Mr. Shailendra Singh Bhadauria

European Journal of Molecular & Clinical Medicine, 2023, Volume 10, Issue 1, Pages 4561-4570

Colorectal cancer is a well-known tumour that affects both men and women across the world and is quite common. According to a study published by the World Health Organization in 2018, colorectal cancer ranked third, with 1.80 million people afflicted; it is the second most frequent cause of cancer in women and the third most common in men. Colorectal cancer is thought to be caused by a loss of control over the integrity of epithelial cells in the intestine, which may progress to malignancy. A reliable method of detecting colon cancer at an early stage, followed by intensive treatment, has the potential to significantly lower the resulting mortality rates. A gastroenterologist may resort to diagnostic tests on pathological images in order to screen the morphology of malignant tumour cells in the colon during a colonoscopy. Owing to the vast number of glands in the gastrointestinal system, any histology procedure requires a large amount of time, and the results may be inconsistent. Diagnosis using computer algorithms can deliver practical and beneficial outcomes. To obtain trustworthy and useful morphological imaging data, correct gland segmentation is a critical pre-processing step that must be completed first. In recent years, researchers have applied deep learning algorithms to pathological image analysis in order to improve the accuracy of cancer detection. According to our findings, diagnostic image features provided as input to a deep learning architecture used in conjunction with a semantic segmentation algorithm may give more accurate results than conventional image segmentation methods. This paper presents an in-depth examination of deep learning architectures used for semantic segmentation of histological images of the colon, as well as their applications.

IoT Research On Healthcare Data Aimed At Preventing And Treating Oncology-Related Illnesses

Dr. Selvia Arokiya Mary Amalanathan, Dr. Emaan Elsayed Hussein Mohammad

European Journal of Molecular & Clinical Medicine, 2023, Volume 10, Issue 1, Pages 4354-4359

IoT enables developers to improve prediction even before all the required data is available. With access to so much data, machines can be trained to make more accurate predictions about how they will operate and when they will need maintenance. According to the WHO, cancer is the second biggest cause of death worldwide, and those afflicted by cancer are more vulnerable during the present pandemic. A round-the-clock monitoring system is essential since the disease's prevalence rises steadily over time. The IoT may be used as a cancer monitoring system, allowing for the detection of early cancer indications, the ongoing monitoring of people who have cancer, and the follow-up testing of those deemed cancer-free after treatment. This work lays out a comprehensive strategy for a disease monitoring and control system based on the Internet of Things, which might form the backbone of cancer diagnosis and management in remote locations.


Visu P; Smitha P S; Muruganathan V

European Journal of Molecular & Clinical Medicine, 2022, Volume 9, Issue 8, Pages 1037-1045

Diagnosis is an area of utmost importance in medical treatment. A person can be cured only when the doctor properly diagnoses the disease and gives the appropriate treatment. A wrong diagnosis, or the wrong treatment even when the disease is correctly diagnosed, can cause side effects and delay the cure; sometimes it can be life-threatening. It is important to understand how allopathic doctors, now called modern doctors, diagnose disease. They first note down the patient's complaints in order. In this paper, an innovative disease-identification approach is proposed using a medical deep learning model. Complete and correct patient information helps in proper diagnosis. Only after that do the doctors examine the patient's body. Testing is not just about checking pulse and blood pressure: all body parts are examined, including the abdomen, the nervous system and brain function, the muscles and the skeletal system; the urinary tract and sexual organs are also examined by the appropriate doctor. Thus, after a full body examination, they find out what the patient is suffering from.

A Study on Alzheimer's Disease Detection using Machine Learning and Deep Learning

E. Semmalar, Dr. R. Shobarani, Dr. M. J. Bharathi, Dr. T. Suganth

European Journal of Molecular & Clinical Medicine, 2022, Volume 9, Issue 7, Pages 9354-9362

Alzheimer's disease (AD), a form of dementia, is a serious neurodegenerative condition that kills brain cells and causes irreversible memory loss. The global burden of disease from AD is enormous. In order to stop the course of Alzheimer's disease, early detection is essential, and an early diagnosis is very helpful for the patient and their family. In this publication, we examined previous research on detecting Alzheimer's disease using Machine Learning (ML) and Deep Learning (DL) techniques. We reviewed numerous machine learning and deep learning approaches in this survey to compare them and determine which performs better.

Protein Classification using CNN

Arun Kumar, Vishal Verma

European Journal of Molecular & Clinical Medicine, 2022, Volume 9, Issue 7, Pages 9158-9162

Living organisms have a variety of macromolecules which play a crucial role in biological functions. Proteins are composed of amino acid chains, and the structure and folding patterns of these chains determine a protein's functions and features. Knowledge of protein function plays an important role in identifying biological processes, disorders, therapeutics and medicines, and accordingly measures to prevent biological disorders can be taken. In pursuing these goals, the classification of proteins is vital. This research demonstrates the relationship between a protein sequence and its classification type. The objective is achieved by applying a 1D convolutional neural network model to a dataset comprising more than 300,000 records. The results of this method are compared using learning-rate charts. In the current work, the researchers focused on understanding the structural protein sequences dataset accessed from the Protein Data Bank, available on the website of the Research Collaboratory for Structural Bioinformatics. After a detailed analysis of the protein dataset, the classification of proteins is demonstrated using deep learning approaches (a convolutional neural network model).
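Before a 1D CNN can consume a protein sequence, each amino-acid letter must be mapped to an integer and the sequence truncated or padded to a fixed length. A minimal sketch of that encoding step (the alphabet ordering and padding length are illustrative assumptions, not details from the paper):

```python
# Canonical 20 amino-acid letters; index 0 is reserved for padding/unknown.
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i + 1 for i, aa in enumerate(ALPHABET)}

def encode_sequence(seq, max_len=12):
    # Map letters to integers, then truncate or zero-pad to max_len,
    # so every sequence becomes a fixed-length vector for the Conv1D input.
    ids = [AA_INDEX.get(aa, 0) for aa in seq[:max_len]]
    return ids + [0] * (max_len - len(ids))
```

The resulting integer vectors would typically pass through an embedding layer before the convolutional layers.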

Machine Learning Trained Edge Computing Device for the Physically Disabled

U. Vijaya Laxmi, V. Vijaya Ramaraju, P. Srividya Devi

European Journal of Molecular & Clinical Medicine, 2022, Volume 9, Issue 7, Pages 8994-9001

Biomedical devices play a crucial role in the community, as they are revolutionizing, with breathtaking approaches, both medication and the detection of many diseases. This paper aims to design edge-based home automation using the ESP-32 for physically disabled people. Edge computing is a suitable way to meet the heavy computation and low-latency requirements of deep learning on edge devices, with added benefits in privacy, bandwidth efficiency, and scalability. The ESP-32 receives data from a sound sensor, recognizes a voice command on which it has already been trained, and triggers the relay; this automated system controls home appliances by voice command. The paper mainly focuses on disabled people, providing an easy-to-use integrated system built with machine learning techniques. The home automation system allows one to control household appliances through a centralized wireless control unit, offering handy, economical, effortless installation for physically disabled people.
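The command-to-relay step described above amounts to a small dispatch table from a recognized phrase to a GPIO write. The sketch below is hypothetical: the command names and pin numbers are invented for illustration, and on an actual ESP-32 this logic would live in the Arduino or MicroPython firmware rather than host Python.

```python
# Hypothetical mapping: recognized voice command -> (relay GPIO pin, state).
RELAY_PINS = {
    "light on": (23, True), "light off": (23, False),
    "fan on": (22, True), "fan off": (22, False),
}

def handle_command(command, set_pin):
    # set_pin(pin, state) is the platform's GPIO write function.
    # Returns True if the command was recognized and dispatched.
    action = RELAY_PINS.get(command.strip().lower())
    if action is None:
        return False
    set_pin(*action)
    return True
```

Passing the GPIO writer in as a callable keeps the mapping testable off-device.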

A novel ensemble deep learning model for COVID-19 Twitter sentiment analysis

Srikanth Jatla, Damodarma Avula

European Journal of Molecular & Clinical Medicine, 2022, Volume 9, Issue 7, Pages 7882-7901

Recent years have seen a rise in the significance of sentiment analysis as a direct result of the explosion in the amount of material available online. Sentiment analysis is the practice of analyzing textual data created on social media sites such as Facebook and Twitter using natural language processing approaches. Since the beginning of the COVID-19 pandemic, many posts, including videos and text messages, have been uploaded to social media platforms to provide real-time updates on the progression of the pandemic across the world's nations.

Cascaded CNN with Haar Wavelet Feature based Brain Tumor Detection Technique

G. Dheepa, S. Uma Shankari

European Journal of Molecular & Clinical Medicine, 2022, Volume 9, Issue 7, Pages 8395-8405

Identifying abnormal tumor images in brain Magnetic Resonance Images (MRI) is essential for medical diagnostics. In this research, a Cascaded Convolutional Neural Network (CCNN) with Haar wavelet features is proposed for automatic identification of brain tumor images. The significant LL sub-band features are first extracted from all image slices, which are then processed by the CCNN architecture for tumor detection. In this architecture, each image slice is convolved with three different kernels (7 x 7, 3 x 3 and 5 x 5) to produce three separate feature maps. These feature maps are cascaded and processed through a hierarchy of convolutional, pooling and softmax layers to predict whether an image contains a tumor. The proposed algorithm is implemented using the BRATS-2018 training dataset and achieves 96% accuracy, 97% F1-score, 97% precision, 97% specificity and 96% sensitivity.
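The multi-kernel idea above, convolving one slice with several kernel sizes and stacking the resulting maps channel-wise, can be sketched in plain Python. Fixed averaging kernels stand in here for the learned weights, and 'same' zero-padding keeps all maps the same spatial size so they can be cascaded; both choices are illustrative assumptions.

```python
def conv2d_same(img, kernel):
    # Zero-padded 'same' 2D convolution on a nested-list image.
    h, w = len(img), len(img[0])
    k = len(kernel)
    pad = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in range(k):
                for dx in range(k):
                    iy, ix = y + dy - pad, x + dx - pad
                    if 0 <= iy < h and 0 <= ix < w:
                        s += img[iy][ix] * kernel[dy][dx]
            out[y][x] = s
    return out

def cascaded_maps(img):
    # One feature map per kernel size, stacked channel-wise: 3 x H x W.
    maps = []
    for k in (3, 5, 7):
        box = [[1.0 / (k * k)] * k for _ in range(k)]  # averaging kernel
        maps.append(conv2d_same(img, box))
    return maps
```

In a real CCNN the stacked maps would then feed the convolutional, pooling and softmax layers described in the abstract.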


M. Bommy, Paritala Jhansi Rani, Thomas Abraham J V, Gunjan Chhabra, Suresh Kumar Sharma, MK Jayanthi Kannan

European Journal of Molecular & Clinical Medicine, 2022, Volume 9, Issue 7, Pages 8546-8555

With the growth of the Internet, cyberattacks are evolving quickly, and the state of cyber security is not promising. Cyber security is often an extension of traditional information technology (IT) security that aims to safeguard systems, applications, and data that are vulnerable to various online assaults, such as data theft and espionage as well as data manipulation and denial-of-service attacks. Owing to the losses incurred by countries, companies, and people as a result of numerous cybercrime assaults, there is a need for increased cyber security research. This study examines deep learning (DL) and watermarking techniques for cyber security applications, illustrates how deep learning and watermarking are used in cybersecurity, and shows how state-of-the-art solutions may be outperformed by deep learning ones. We advise professionals to consider integrating deep learning into security systems.

Convolutional Neural Network Architecture for Skin Cancer Diagnosis

Michael Cabanillas-Carbonell; Randy Verdecia-Peña

European Journal of Molecular & Clinical Medicine, 2021, Volume 8, Issue 3, Pages 2819-2833

In recent years, malignant melanoma has increased exponentially among human diseases; for this reason, it is essential to detect it in its early stages. Deep learning is one of the most widely applied technologies for the analysis of medical images, facilitating the diagnosis of diseases in patients and allowing accurate decisions about their health. In this paper, we propose a convolutional neural network architecture derived from the evaluation of different convolutional neural networks, with the objective of extracting more precise information from the acquired image. The model treats the problem as a binary classification, 1 for malignant and 0 for benign, so that melanoma can be detected early; for this we used 2 different datasets with a total of 2650 images for training the architecture. Finally, a comparison with the results obtained in other research has been made, where the metrics of our project are considerably improved by having 3 layers. This new architecture is a proposed solution for optimizing the training and validation of images.

Implementation of Deep Learning for Automatic Classification of Covid-19 X-Ray Images

Muhammad Shofi Fuad; Choirul Anam; Kusworo Adi; Muhammad Ardhi Khalif; Geoff Dougherty

European Journal of Molecular & Clinical Medicine, 2021, Volume 8, Issue 2, Pages 1650-1662

Background: Reading radiographic images for Covid-19 identification by an expert radiologist requires significant time; therefore, the development of an automated analysis system to assist and save time in diagnosing Covid-19 is important.
Objective: The purpose of this study is to implement the GoogleNet architecture with various numbers of epochs in the hope of achieving a higher level of accuracy in Covid-19 detection.
Methods: We retrospectively used 813 images, i.e. 409 images indicating Covid-19 and 404 normal images. A deep transfer learning (TL) model with the GoogleNet architecture was implemented. The training was carried out several times to obtain the best result, with a learning rate of 0.0001 for all levels. The network training was carried out with different numbers of epochs, i.e. 12, 18, and 24, each epoch comprising 65 iterations.
Results: It was found that accuracy varied with the number of epochs. The classification accuracy was 96.9% at epoch 12, 98.2% at epoch 18, and 99.4% at epoch 24.
Conclusion: An increase in the number of epochs increases the accuracy of Covid-19 detection. In this study, the accuracy of the method reached 99.4%. These results are promising for the automation of Covid-19 detection from radiographic images.

Automatic Classification of the Severity of COVID-19 Patients Based on CT Scans and X-rays Using Deep Learning

Sara Bhatti; Dr. Asif Aziz; Dr. Naila Nadeem; Irfan Usmani; Prof. Dr. Muhammad Aamir; Dr. Irum Khan

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 10, Pages 1436-1455

The 2019 novel coronavirus (COVID-19), which originated in China, has been declared a pandemic by the World Health Organization (WHO), having surpassed eighty-three million cases worldwide, with nearly two million deaths. The unexpected exponential increase in positive cases and the limited number of ventilators, personal protective equipment and COVID-19 test kits, especially in Low to Middle Income Countries (LMIC), have put undue pressure on medical staff, first responders and overall health care systems. The Real-Time Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR) test is the decisive test for the diagnosis of COVID-19, but a significant percentage of positive cases return a false negative result. For patients in LMICs, the availability and affordability of routine Computerized Tomography (CT) scanning and chest X-rays are better than those of RT-PCR tests, especially in rural areas. Chest X-rays and CT scans can aid in the prognosis and management of COVID-19 positive patients, but are not recommended for diagnostic purposes. Using Deep Convolutional Neural Networks (CNN), three pre-trained models (AlexNet, GoogleNet and ResNet50) were used for the automatic classification of positive COVID-19 chest X-rays and CT scans by severity into three classes: normal, mild/moderate, and severe. This classification can aid health care workers in performing expeditious analysis of large numbers of thoracic CT scans and chest X-rays of COVID-19 positive patients, and aid in their prognosis and management. The images were obtained from public repositories, and were verified and classified by a trained and highly experienced radiologist from Agha Khan University Hospital prior to the experiments. The images were augmented and used for training, and ResNet50 achieved the highest accuracy. This research can be extended to other lung infections, and can aid the authorities in preparing for future pandemics.

A Study Of Preprocessing Techniques And Features For Ovarian Cancer Using Ultrasound Images

Ms. Arathi Boyanapalli; Dr. Shanthini A

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 10, Pages 293-303

Ovarian cancer is the third leading cancer among women in India, and its early detection rate is very low [1]. Transvaginal ultrasound is the most common screening test to detect the presence of tumors, but since adnexal masses are very common in patients, the challenging part is discriminating whether a mass is benign or malignant. This distinction is essential for optimal surgical management, but reliable pre-surgical differentiation is sometimes difficult using clinical features, ultrasound examination, or tumor markers alone [2]. Recent trends in medical imaging facilitate the detection of most cancers at a very early stage; still, ovarian cancer diagnosis is not accurate, and patients have to undergo painful procedures such as biopsies or surgeries even for benign nodules. Applying deep learning techniques to ultrasound images of ovarian cysts helps diagnose whether a cyst is benign or malignant at a very early stage without any surgery. This method not only cuts the patient's medical expenses but also reduces their mental stress.

Segmentation on Brain Cancer Disease using Deep Learning Techniques

J. Josphin Mary; R. Charanya; V. Shanthi; G. Sridevi; Meda Srinivasa Rao

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 9, Pages 1439-1446
DOI: 10.31838/ejmcm.07.09.153

Segmenting brain tumors is a major challenge in medical image analysis. Early detection of brain tumors plays an important part in maximizing care outcomes and increasing treatment success rates. Manual segmentation of brain tumors from the large quantities of MRI images produced in clinical routine is a challenging and time-consuming job, so automatic brain tumor segmentation is desirable. This article aims to analyze strategies for MRI-based segmentation of brain tumors. Automatic segmentation using deep learning approaches has recently become popular because these approaches achieve state-of-the-art results and solve this problem better than other methods. Deep learning approaches also allow efficient analysis and objective interpretation of vast volumes of MRI-based image data. There are many papers on MRI-based brain tumor segmentation which focus on traditional methods; in contrast, we concentrate on the recent trend of deep learning. First, brain tumors and techniques for segmenting them are introduced. Then, the new architectures are explored with an emphasis on recent developments in deep learning methods. Finally, an evaluation is presented and further improvements are discussed for standardizing MRI-based brain tumor segmentation procedures in day-to-day clinical practice.


Dr. C. Annadurai; Dr. I. Nelson

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 11, Pages 5228-5241

Underwater image processing has always been a promising yet challenging task, because natural conditions and lighting effects mean that capturing images requires good artificial lights. When taking underwater images, photographers face many difficulties such as shadows, non-uniform lighting and color shading, and recognizing objects underwater is very difficult owing to these environmental conditions. Man-made object recognition using underwater optical sensors to capture underwater images has gained increasing attention. Deep learning methods have demonstrated impressive performance in object recognition from natural images; however, it is hard to collect enough labelled underwater optical images for training a model, whereas labelled in-air images can be acquired in sufficient quantity. Based on this assumption, the proposed work leverages a combination of deep learning and transfer learning to develop a novel recognition system for man-made objects in underwater optical images. The features extracted by the proposed network have high representative power and demonstrate robustness in both in-air and underwater imaging modalities; therefore, our framework can recognize underwater man-made objects using only labelled in-air images. Experiments on simulated data demonstrate that the proposed method outperforms traditional deep learning methods in the task of underwater man-made object recognition.


D. Raghu Raman; D. Saravanan; R. Parthiban; Dr. U. Palani; Dr. D. Stalin David; S. Usharani; D. Jayakumar

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 9, Pages 2531-2557

In today’s world, digitization plays an extremely prominent role in day-to-day applications. Its future deployment needs the Internet of Things (IoT) to embrace automation, remote monitoring and predictive analysis. An IoT device is connected to the Internet and combines embedded technology including actuator and sensor devices; it also encompasses wired and wireless communication devices and real-world physical objects connected to the Internet. IoT is used in diversified fields such as smart classrooms, smart banking, smart homes, smart agriculture, smart healthcare applications, etc. Typically, IoT requires intelligence to achieve the automation process efficiently in many applications, and Artificial Intelligence (AI) paves the way to make IoT smarter and more efficient. Owing to the enormous amount of data being generated in various applications, IoT combined with Machine Learning (ML) and Deep Learning (DL) models is used to enhance functionality in complex applications. In this survey, the applications of AI, ML and DL models deployed in IoT are deeply explored.


S. Praveen; Dr. R. Priya

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 9, Pages 2832-2847

Text clustering is an important method for effectively organising, summarising, and navigating text information. The purpose of clustering is to identify similarity among text instances and assign labels accordingly. However, in the absence of labels, text data to be clustered cannot be used to train a deep-learning text representation model, as it contains high-dimensional data with complex latent distributions. To address this problem, a new unified deep learning framework for text clustering based on deep representation learning is proposed, using deep adaptive fuzzy clustering to provide a soft partition of the data. Initially, the original data is reconstructed into a feature space using a word embedding process: a learnt representation of the text that maps words, characters and word N-grams into vectors. Clustering of the feature vectors is then carried out with a max-pooling layer to determine inter-cluster separability and intra-cluster compactness, and the feature space is learnt with gradient descent. The feature vectors are further fine-tuned on the basis of discriminant information using hyper-parameter optimization with fewer epochs. Finally, representation learning and soft clustering are achieved using deep adaptive fuzzy clustering, and quantum-annealing-based optimization is employed. The results demonstrate that the clustering approach is more stable and accurate than the traditional FCM clustering algorithm when evaluated with k-fold validation. The experimental results demonstrate that the proposed technique outperforms state-of-the-art approaches in terms of set-based measures (Precision, Recall and F-measure) and rank-based measures (Mean Average Precision and Cumulative Gain).
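The soft partition mentioned above comes from the classic fuzzy c-means membership update, which the deep adaptive variant builds on: each point receives a membership in every cluster, inversely weighted by relative distance. A minimal sketch (the fuzzifier m = 2 is a common default, not necessarily the paper's choice):

```python
def fcm_memberships(points, centers, m=2.0):
    # u[i][j]: membership of point i in cluster j; each row sums to 1.
    def dist(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5

    u = []
    for p in points:
        # Clamp distances to avoid division by zero at a center.
        d = [max(dist(p, c), 1e-12) for c in centers]
        row = []
        for j in range(len(centers)):
            denom = sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                        for k in range(len(centers)))
            row.append(1.0 / denom)
        u.append(row)
    return u
```

In a full FCM loop, these memberships would alternate with a weighted-mean update of the cluster centers until convergence.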

Survey On Aspect Based Sentiment Analysis Using Machine Learning Techniques

Syam Mohan E; R. Sunitha

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 10, Pages 1664-1684

Web 2.0 facilitates the expression of views through diverse Internet applications which serve as a rich source of information. The textual expressions have latent information that when processed and analysed reveal the sentiment of the user/people. This is known as sentiment analysis, which is the process of computationally extracting the opinions and viewpoints from textual data and it is also known as opinion mining, review mining or attitude mining, etc. Aspect-level sentiment analysis is one among the three main types of sentiment analysis, where granule level processing takes place in which the different aspects of entities are harnessed to identify the sentiment orientations. The emergence of machine learning and deep learning techniques has made a striking mark towards aspect-oriented sentiment analysis. This paper presents a survey and review of different works from the recent literature on aspect-based sentiment analysis done using machine learning techniques.


J. Josphin Mary; R. Charanya; V. Shanthi; G. Sridevi

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 9, Pages 1447-1453
DOI: 10.31838/ejmcm.07.09.154

Glaucoma is a chronic, irreversible eye disease that leads to loss of vision and quality of life. In this paper we build a deep learning system for the automatic diagnosis of glaucoma using a convolutional neural network (CNN). Deep learning algorithms such as CNNs infer a hierarchical representation of images to differentiate between glaucoma and non-glaucoma (NG) patterns for diagnostic decisions. The proposed DL architecture contains six layers: four convolutional layers and two fully connected layers. Dropout and data augmentation strategies were implemented to further enhance glaucoma detection. Extensive validation on the ORIGA and SCES databases was carried out. The findings show that the area under the receiver operating characteristic curve (AUC), at 0.831 and 0.887 on the two databases respectively, is significantly higher than that of state-of-the-art algorithms for glaucoma identification. The method may be used for the detection of glaucoma.


Dr. Vikas Jain; Dr. S. Kirubakaran; Dr. G. Nalinipriya; Binny S; Dr. M. Maragatharajan

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 3294-3301

Brain-computer interface (BCI) decoding connects the human nervous system to the external world by translating people's brain signals into commands that computing devices can detect. Deep learning has recently improved the performance of brain-computer interface systems. In this article, we systematically investigate the brain signal types used for BCI and explore the deep learning concepts relevant to brain signal analysis. We compare different traditional classification algorithms with newer deep learning methods, exploring two types of deep learning models: conventional neural network architectures and recurrent neural networks with Long Short-Term Memory. We check the classification accuracy on a recent 5-class steady-state visual evoked potentials dataset. The results demonstrate that the deep learning methods outperform traditional classification approaches.

A Comparative Study On Performance Of Pre-Trained Convolutional Neural Networks In Tuberculosis Detection

Ms. Sweety Bakyarani E; Dr. H. Srimathi; Dr. P. J. Arul Leena Rose

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 3, Pages 4852-4858

India accounts for 26% of the world's tuberculosis population. The WHO's Global TB Program states that in India, the number of people newly diagnosed with TB increased by 74% compared to other countries, from 1.2 million to 2.2 million, between 2013 and 2019. Tuberculosis was and still remains a disease that causes high death rates in the country, and many of these deaths could easily be prevented with early diagnosis. The easiest, most cost-effective and non-invasive method of detecting tuberculosis is a frontal chest X-ray (CXR). But this requires a radiologist to manually examine and analyse each X-ray; considering the heavy patient count, this puts a great burden on the available resources. A computer-aided diagnosis system can mitigate this problem and greatly help in reducing cost. In recent times, deep learning has made great progress in image classification and has produced remarkable results in various domains, but there is still scope for improvement in tuberculosis detection. The aim of this study is to apply pre-trained convolutional neural networks with a proven record in image classification to a publicly available CXR dataset, classify CXRs that manifest tuberculosis, and compare their performances. The CNN models used on our CXR image dataset are VGG-16, VGG-19, AlexNet, Xception and ResNet-50. Visualization techniques have also been applied to help understand the features whose weights played a role in the classification process. With the help of this system, we can easily classify CXRs that show active TB and even CXRs that show mild abnormalities, ensuring that high-risk patients get the help they require on time.

Assessment of Patient Health Condition based on Speech Emotion Recognition (SER) using Deep Learning Algorithms

Dr. DNVSLS Indira; B. Lakshmi Hari Prasanna; Chunduri Pavani; Ganta Vandana

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 1135-1147

Human emotion detection, whether from the face or from speech, has become a relatively nascent research area. Speech Emotion Recognition concerns the task of recognizing a speaker's emotions from their speech recordings. Recognizing emotions from speech can go a long way in determining an individual's physical and mental well-being, and these emotions can be used for further assessment of a patient's status for better diagnosis. This paper aims to categorize emotions in speech into four different categories: happy, sad, angry and neutral. For this analysis, four different algorithms are developed: the Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), Random Forest (RF) and one-dimensional Convolutional Neural Network (CNN-1D). Detecting emotion from an individual's speech can be difficult because of dynamic changes in the voice signal of the same person within a very subtle period of time. So features such as MFCC, chroma, tonnetz, spectral contrast and mel spectrogram were extracted and given to the models in order to detect the emotions. These features were given as input to the algorithms, and the empirical results indicate that CNN-1D performs comparatively well. The RAVDESS database is chosen for the categorization. A good recognition rate of 89% was obtained from CNN-1D.
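The feature-extraction stage described above (frame the waveform, take a windowed power spectrum, log-compress, then decorrelate) can be sketched without audio libraries. This is a simplified MFCC-style pipeline on a synthetic tone, not the paper's actual extractor; real pipelines (e.g. librosa) also insert a mel filterbank before the log:

```python
import numpy as np

def mfcc_like(signal, frame=400, hop=160, n_coef=13):
    """Toy MFCC-style features: frame -> Hann window -> power
    spectrum -> log -> DCT-II (computed explicitly)."""
    starts = range(0, len(signal) - frame + 1, hop)
    window = np.hanning(frame)
    feats = []
    for i in starts:
        power = np.abs(np.fft.rfft(signal[i:i + frame] * window)) ** 2
        logp = np.log(power + 1e-10)
        n = len(logp)
        k = np.arange(n_coef)[:, None]
        dct = np.cos(np.pi * k * (2 * np.arange(n)[None, :] + 1) / (2 * n))
        feats.append(dct @ logp)               # keep first n_coef coefficients
    return np.array(feats)

# One second of a 440 Hz tone at 16 kHz as a stand-in utterance.
t = np.arange(16000) / 16000
feats = mfcc_like(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (n_frames, 13): one 13-dim vector per 25 ms frame
```

A sequence of such per-frame vectors is exactly the 2-D input a CNN-1D convolves over along the time axis.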

Identification and Detection of Abnormal Human Activities using Deep Learning Techniques


European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 408-417

In recent years it has become common to use surveillance cameras for continuous monitoring of public and private spaces because of increasing crime. Most current surveillance systems need a human operator to constantly watch them and are ineffective, as the amount of video data is increasing day by day. Surveillance cameras would be more useful tools if, instead of passively recording, they generated warnings or real-time actions when unusual activity is detected. But recognizing and classifying human activity as normal or abnormal from a live video stream is a challenging task in the field of computer vision. There is a need for a smart surveillance system for the automatic identification of abnormal human behaviour in a specific scene. The present paper gives an overview of the different machine learning methods used in recent years to develop such a model. It also gives an exposure to recent works in the field of anomaly detection in surveillance video and its applications.



European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 3, Pages 2271-2285

COVID-19 is a rapidly growing pandemic infectious disease caused by the coronavirus. It was first identified in Wuhan in December 2019, expanded its circle all over the world, and finally spread to India. The whole world is fighting against the spread of this deadly disease; cases in India have also been gradually increasing day by day since May, after the lockdown. This article proposes how to utilize machine learning and deep learning models with the aim of understanding the disease's everyday exponential behaviour, along with predicting the future reach of COVID-19 across nations, by utilizing real-time information from Johns Hopkins. This paper studies the COVID-19 dataset and explores the data through visualization with different libraries available in Python. The paper also discusses the current situation in India while tackling the COVID-19 pandemic, and how the ongoing development of AI and ML has significantly improved treatment, medication, screening tests, prediction, forecasting, contact tracing, and the drug/vaccine development process for the pandemic while reducing human intervention in medical practice. However, most of the models are not deployed enough to show their real-world operation, but they are still up to the mark. Within this paper, we present Exploratory Data Analysis, Data Preprocessing, Data Cleaning and Manipulation, Machine Learning Algorithms, a Pandemic Analyzing Engine GUI, and Deep Learning. We performed linear regression, Decision Tree, SVM and Random Forest, and for forecasting we used the FBProphet and ARIMA models to predict the next 15 days' pandemic situation.
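The "everyday exponential behaviour" plus "next 15 days" forecast described above can be illustrated with a minimal log-linear least-squares fit. This is a crude stand-in for the paper's FBProphet/ARIMA forecasters, run on synthetic counts rather than the Johns Hopkins series:

```python
import numpy as np

# Synthetic cumulative case counts growing exponentially at rate 0.08/day.
days = np.arange(30)
cases = 100 * np.exp(0.08 * days)

# Exponential growth is linear in log space: fit log(cases) = a + b*day.
b, a = np.polyfit(days, np.log(cases), 1)

# Forecast the next 15 days, as the paper does.
future = np.arange(30, 45)
forecast = np.exp(a + b * future)

print(round(b, 3))               # recovered daily growth rate, ~0.08
print(forecast[0] > cases[-1])   # the forecast continues the upward trend
```

Real case curves bend as interventions take effect, which is precisely why the paper reaches for ARIMA and FBProphet (trend plus seasonality components) rather than a single fixed growth rate.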

Multi-Stage Classification Technique for Breast Cancer Detection in Histopathology Images using Deep Learning

Nagamani Gonthina; C. Jagadeeswari; Prabhavathi V; Sneha B

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 1104-1110

Over the past decade, substantial improvement in computational ability and better algorithms for image analysis have gained vast fame in resolving challenges in the area of medical diagnosis. Subsequently, computerized tissue histopathology is now becoming tractable for digitized image analysis and deep learning methods. Cancer is a cluster of disorders involving irregular cell maturation with the capability to invade or proliferate to other organs of the body. Detection of cancer in its earlier stages is an exacting task, due to which many people are prone to death. Treatment of cancer benefits from the pace and precision of deep-learning-assisted diagnosis. Deep learning techniques are utilized to diagnose the features of advanced carcinoma with enhanced precision compared to an individual pathologist. This paper proposes a deep convolutional neural network that first categorizes a tissue as malignant, then segregates the tissue, and ultimately performs multi-class detection and classification of breast cancer and its stages in histopathology images.
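The multi-stage routing the abstract describes (first a binary malignant/benign decision, then a multi-class subtype/stage prediction only for malignant tissue) can be sketched with stub classifiers. The threshold rule and the subtype names below are hypothetical placeholders for the paper's two CNN stages:

```python
import numpy as np

def stage1_is_malignant(feature):
    # Stub for the stage-1 CNN: a toy threshold on an "atypia" score.
    return feature["atypia_score"] > 0.5

def stage2_subtype(feature):
    # Stub for the stage-2 multi-class CNN head.
    subtypes = ["ductal", "lobular", "mucinous"]
    return subtypes[int(np.argmax(feature["subtype_logits"]))]

def classify_tissue(feature):
    """Cascade: benign tissue exits early; only malignant tissue
    reaches the finer-grained second stage."""
    if not stage1_is_malignant(feature):
        return "benign"
    return "malignant:" + stage2_subtype(feature)

samples = [
    {"atypia_score": 0.2, "subtype_logits": np.array([0.1, 0.1, 0.1])},
    {"atypia_score": 0.9, "subtype_logits": np.array([0.2, 1.5, 0.3])},
]
print([classify_tissue(s) for s in samples])
# → ['benign', 'malignant:lobular']
```

The cascade keeps the common benign case cheap and lets each stage train on a narrower, better-balanced problem.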

Deep Learning in Tuberculosis Diagnosis: A Survey

B. Sandhiya; Dr.R. Punniyamoorthy; Saravanan. B; Vijay Prabhu. R; Subhash. V

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 2736-2740

Tuberculosis is a contagious syndrome that leads to death worldwide. In the majority of developing countries, access to diagnostic tools and test usage is relatively poor. Recent advancements in the field of Artificial Intelligence may help fill this technology gap. Computer-aided detection and diagnosis helps in diagnosing diseases through clinical symptoms as well as X-ray images of patients. Nowadays many strategies are formulated to increase the classification accuracy of tuberculosis diagnosis using AI and deep learning approaches. Our survey paper aims to describe the wide range of AI and deep learning approaches employed in the diagnosis of tuberculosis.

Human Activity Recognition using SVM and Deep Learning

V. Parameswari; S. Pushpalatha

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 1984-1990

Human activity recognition is one of the most vital emerging technologies. Principal components of the body-part regions are utilized for human movement recognition to reduce dimensionality. A multi-scale representation of the human action is computed to preserve the discriminative information before dimensionality reduction. This paper presents a human action recognition system for identifying a person. It takes as input a video of COVID-19 patients and searches for a match within the stored images. The method is based on Gabor feature extraction using Gabor filters. For feature extraction, the input image is convolved with Gabor filters, and a personal sample generation algorithm is further employed to select a set of informative and non-redundant Gabor features. A DNN (deep learning model) is used to match the input human-action image against the stored images. This method can be used in hospital management applications for detecting COVID-19 patient activity from surveillance cameras. Using SVM and deep learning, human activity is recognized with the MATLAB tool.
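A Gabor filter, the feature extractor named above, is an oriented sinusoid under a Gaussian envelope; it responds strongly to image structure at its tuned orientation and frequency. Below is a NumPy sketch (the abstract's system is in MATLAB) showing that a filter tuned to variation along x responds to vertical stripes but barely to horizontal ones:

```python
import numpy as np

def gabor_kernel(size=21, theta=0.0, lam=5.0, sigma=4.0, gamma=0.5):
    """Real part of a Gabor kernel: g = exp(-(x'^2 + (gamma*y')^2)
    / (2 sigma^2)) * cos(2 pi x' / lam), with (x', y') the rotated axes."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# Test patterns: intensity varying along x ("vertical stripes") vs along y.
half = 10
y, x = np.mgrid[-half:half + 1, -half:half + 1]
stripes_v = np.cos(2 * np.pi * x / 5.0)
stripes_h = np.cos(2 * np.pi * y / 5.0)

k = gabor_kernel(theta=0.0)          # tuned to variation along x
resp_v = np.sum(k * stripes_v)       # strong, matched response
resp_h = np.sum(k * stripes_h)       # near-zero, mismatched response
print(abs(resp_v) > abs(resp_h))     # → True
```

A bank of such kernels at several orientations and scales yields the feature vector that the paper's selection algorithm then prunes.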

Video Based Fall Detection Using Deep Convolutional Neural Network

Gangireddy Prabhakar Reddy; M. Kalaiselvi Geetha

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 5542-5551

Falling often causes deadly conditions such as unconsciousness and related injuries among the elderly population if the fallen person is not provided with aid by caretakers nearby. In this context, an automatic fall monitoring system gains popularity by solving the problem with immediate prompting, thereby allowing the caretakers and other persons to be activated with an alarm message. It assists older adults in living without fear of falling and in being independent in society. In recent decades, vision-based fall monitoring has been receiving attention among research communities for its diversified features. It helps identify the human in the intended regions, and by using the phenomena collected from the area it trains the fall recognition classifiers. Besides, human detection errors and the lack of massive-scale datasets make vision-based fall monitoring face challenges such as robustness and efficiency when generalizing to unseen regions. Hence a robust learning and classification system is needed to combat these challenges. In this proposed system, automatic fall detection using deep learning is modeled using RGB images gathered from a single-camera source. More significantly, it handles the sensitive details present in the original images and ensures privacy, widely considered for safety and protection. Various experiments are carried out using real-world fall datasets. The results show that the system enhances fall recognition awareness and achieves a high F-score by performing highly accurate fall detection in real-world environments.

An Empirical Study of Deep Learning Strategies for Spatial Data Mining

K. Sivakumar; A.S. Prakaash

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 5124-5132

Growing volumes of collected data have triggered the emergence of scalable machine learning frameworks to efficiently analyse and derive valuable insights from these data. Large spatial data frameworks cover a wide variety of priorities, including tracking of infectious diseases, simulation of climate change, etc. Conventional mining techniques, especially statistical frameworks for handling these data, are becoming exhausted due to the rise in the number, volume and quality of spatio-temporal datasets. Various machine learning tasks have recently shown efficiency with the development of deep learning methods. We therefore include in this paper a detailed survey of important contributions in the application of deep learning techniques to spatial data mining.


Gouri Nandan; Dr. Neeba E A

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 8, Pages 5467-5475

Sign languages are languages that solely utilize gestures to convey meaning. Communication based on sign language is a mix of manual explanations and non-manual elements. A sign language recognition framework positively reflects communication between the person who is hard of hearing and the world around them. It also helps in communicating with machines. One of the most utilized types of gesture-based communication is American Sign Language (ASL). In the proposed work, the letters are detected from a video frame using a convolutional neural network (CNN) and then converted into speech using Google Text-to-Speech (gTTS). The system is trained with 75% of the images and tested with 25% of the images from the database.
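The step between the CNN and gTTS, turning per-frame letter predictions into text to speak, can be sketched as follows. The repeat-collapsing rule and the toy logits are assumptions for illustration, not the paper's decoding scheme:

```python
import string
import numpy as np

LETTERS = string.ascii_uppercase  # 26 ASL letter classes

def decode_frames(frame_logits):
    """Map per-frame CNN logits to letters via argmax, then collapse
    consecutive repeats (a held sign spans many frames)."""
    letters = [LETTERS[int(np.argmax(l))] for l in frame_logits]
    word = [letters[0]]
    for ch in letters[1:]:
        if ch != word[-1]:
            word.append(ch)
    return "".join(word)

def one_hot(i):
    v = np.zeros(26)
    v[i] = 5.0
    return v

# Toy logits for three frames spelling "HI" (class 7 = H, class 8 = I).
word = decode_frames([one_hot(7), one_hot(7), one_hot(8)])
print(word)  # → HI

# The recognized text would then be spoken, e.g. with gTTS
# (needs network access, so shown as a comment):
#   from gtts import gTTS
#   gTTS(word).save("out.mp3")
```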

Helmet Violation Detection Using Deep Learning

Sherin Eliyas; K. Swaathi; Dr.P. Ranjana; A. Harshavardhan

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 5173-5178

Road incidents are among the significant causes of human death. The majority of deaths in accidents are due to damage to the head of two-wheeler riders. Among the various sorts of street accidents, motorcycle accidents are common and cause severe injuries. To reduce the risk to motorcycle riders, it is extremely important to wear a helmet. The helmet is the motorcyclist's primary protection. Many countries require the use of helmets by motorcyclists, yet numerous individuals fail to comply with the law for various reasons. We present the development of a framework utilizing deep convolutional neural networks (CNNs) for discovering riders who are violating helmet rules. The system involves motorcycle detection, helmet vs. no-helmet classification, and violation counting. A Faster R-CNN model with a ResNet-50 backbone is implemented for the motorcycle detection stage. A CNN classification model is proposed to classify helmet vs. no-helmet. Finally, an alarm sound is made to alert the officer and prevent motorcycle accidents. We assess the framework in terms of accuracy and speed.
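The detect-classify-count-alert flow described above can be sketched end-to-end with stubs. The brightness rule, crop format, and box coordinates are hypothetical stand-ins for the paper's Faster R-CNN detector and CNN classifier:

```python
def classify_crop(crop):
    # Stub for the helmet-vs-no-helmet CNN: a toy mean-brightness rule.
    return "helmet" if sum(crop) / len(crop) > 0.5 else "no-helmet"

def count_violations(detections):
    """Run the (stubbed) classifier over every detected rider crop
    and count the no-helmet cases."""
    labels = [classify_crop(d["crop"]) for d in detections]
    return sum(1 for label in labels if label == "no-helmet")

# Stand-in for one frame's detector output: boxes plus cropped pixels.
frame_detections = [
    {"box": (10, 10, 60, 120), "crop": [0.9, 0.8, 0.7]},   # bright: helmet
    {"box": (80, 15, 130, 125), "crop": [0.1, 0.2, 0.1]},  # dark: no helmet
]
violations = count_violations(frame_detections)
if violations:
    print(f"ALERT: {violations} rider(s) without a helmet")  # sound alarm here
```

Keeping detection and classification as separate stages, as the paper does, lets a fast detector run on every frame while the classifier only sees the small rider crops.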

IoT based urban surveillance using Raspberry Pi and deep learning with MobileNet pre-trained model

Sathya Vignesh R; Vaishnavi.R. G; G. Aravind; G. SreeHarsha; B. HariKrishnaReddy; Yogapriya J

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 2473-2477

Object detection is required to provide stronger protection in surveillance areas. Some surveillance systems use CCTV cameras to monitor an area, but this needs someone to check the output of a particular area without rest. It is a difficult process for people who have to secure distant areas such as fields, homes, roads and restricted areas, which cannot be monitored continuously by a person. Object detection using a Raspberry Pi and deep learning with a pre-trained model can secure the place even without a person. It continuously monitors the area, identifies whether any unwanted presence is detected, and immediately sends an alert message to the respective device. The setup is fed with many sample images of classes such as person, dog, cat, etc. The system checks the unwanted object against the sample images using MobileNet single-shot detection by determining the accuracy of common features. Thus it helps to detect unwanted presence with more accuracy than previous existing systems.
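The alerting rule layered on top of the detector, raise an alert only for configured "unwanted" classes above a confidence threshold, can be sketched independently of the camera and model. The hard-coded detections below are stand-ins for MobileNet-SSD output on a Raspberry Pi frame, and the class list and threshold are assumed values:

```python
# Classes that should trigger an alert, and the minimum confidence
# a detection needs before we trust it (both configurable).
UNWANTED = {"person"}
THRESHOLD = 0.6

def alerts(detections):
    """Filter (class, confidence) detections down to alert messages."""
    return [f"ALERT: {cls} detected ({conf:.2f})"
            for cls, conf in detections
            if cls in UNWANTED and conf >= THRESHOLD]

# Stand-in for one frame's detector output.
detections = [("cat", 0.8), ("person", 0.92), ("person", 0.4)]
for msg in alerts(detections):
    print(msg)   # would be sent to the owner's device
```

Separating this thresholding logic from the detector makes it easy to tune the sensitivity per deployment without retraining anything.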

Moving Towards Non-AI To AI

Nargis A Vakil; S.B. Goyal

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 5638-5646

A large amount of research has been conducted in the field of AI. This paper is all about the enhancements made in this popular field. Making a machine that is able to understand the background ideas of words is very essential, as it can increase the chances of better translation and can carry out conversations as humans do. In particular, this paper states the difference between AI and non-AI tasks. The work is intended for new candidates coming into the area of AI, and some issues related to AI are also discussed.

Glioma Tumor Detection Through Faster Region-Based Convolutional Neural Networks Using Transfer Learning.

Shrwan Ram; Anil Gupta

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 2, Pages 4789-4815

Glioma tumors are generally found in the brain and spinal cord. This tumor begins in the glial cells that surround nerve cells and support their function. A glioma tumor is classified based on the glial cells involved in its formation. The tumor affects the normal activity of patients, causing loss of memory, difficulties in speech, confusion in identifying objects, and difficulty maintaining the balance of the body. Early detection of a glioma tumor helps healthcare practitioners suggest a suitable treatment for the disease. Detecting a glioma tumor is a challenging task; many approaches have been proposed by researchers and academicians for accurately detecting glioma tumors, yet accurately detecting brain tumors remains a big challenge. Because of recent advances in image processing and computer vision, healthcare professionals are using sophisticated diagnostic tools for disorder/disease prediction. Neurosurgeons and neuro-physicians use magnetic resonance imaging to identify multiple brain tumors. Computer vision approaches play a significant role in the automated identification of different brain tumors. This research paper explores a convolutional-neural-network-based Faster R-CNN approach for glioma tumor detection using four pre-trained deep networks: AlexNet, ResNet18, ResNet50, and GoogLeNet. Compared with other R-CNN approaches, the proposed object detection approach is more efficient and accurate, with higher precision. The proposed model detects the glioma tumor with 99.9% accuracy. Compared to the AlexNet, ResNet18, and GoogLeNet deep networks, the ResNet50 pre-trained network performed best, with higher detection accuracy.
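The precision comparison the abstract reports rests on matching predicted boxes to ground-truth boxes by intersection-over-union (IoU). A minimal sketch of that evaluation step, with toy boxes standing in for detector output:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision(pred_boxes, gt_boxes, thresh=0.5):
    """Fraction of predicted boxes that match some ground-truth box
    at the given IoU threshold."""
    hits = sum(1 for p in pred_boxes
               if any(iou(p, g) >= thresh for g in gt_boxes))
    return hits / len(pred_boxes)

gt = [(10, 10, 50, 50)]             # one annotated tumor region
preds = [(12, 12, 52, 52),          # near-perfect hit (IoU ~0.82)
         (100, 100, 140, 140)]      # false positive elsewhere
print(round(precision(preds, gt), 2))  # → 0.5
```

Swapping backbones (AlexNet, ResNet18, ResNet50, GoogLeNet) changes the detector's predictions but not this metric, which is what makes the four-way comparison fair.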

Vision Based Alert System for Road Signs Detection

K. Hemalatha; D.Uma Nandhini; Karthika S

European Journal of Molecular & Clinical Medicine, 2020, Volume 7, Issue 4, Pages 1872-1877

Over recent years there has been a huge increase in road accidents, which makes us take more surveillance actions to reduce them. In recent years, thanks to research, there has been huge improvement in the fields of deep learning and computer vision. Our project is mainly focused on developing a vision-based alert system for drivers. We built the model with the help of convolutional neural networks, a subfield of deep learning and computer vision. We took road-sign data and trained the model to detect 32 different road signs. The data was collected from the German road-sign dataset, which consists of 20,000 images. We developed the learning model with the Keras framework, a high-level API. Keras runs on the TensorFlow backend, which is developed by Google. The Keras framework enables us to build a state-of-the-art model to detect road signs. For developing the model and preprocessing the images, we used the Python language, which has a vast number of libraries for image computations and for building deep neural networks. The main aim of our project is to develop a vision-based alert system for drivers that will help improve road safety. Our model will also help new learners improve their driving experience.
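Training a 32-class sign classifier like the one described comes down to minimizing softmax cross-entropy over the classes. A NumPy sketch of that loss (the paper's model itself is built in Keras), including the standard sanity check that uninformative logits give a loss of ln(32):

```python
import numpy as np

N_CLASSES = 32  # number of road-sign classes the model predicts

def softmax_cross_entropy(logits, label):
    """Numerically stable log-softmax cross-entropy: subtract the max
    logit before exponentiating to avoid overflow."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

# With completely uninformative (all-equal) logits, every class gets
# probability 1/32, so the loss is ln(32) ~ 3.4657. Seeing this value
# at the start of training confirms the wiring is correct.
uniform = np.zeros(N_CLASSES)
print(round(softmax_cross_entropy(uniform, 0), 4))  # → 3.4657
```

In Keras this same quantity is what `sparse_categorical_crossentropy` computes per example; watching the first-epoch loss start near ln(32) and fall is a quick check that labels and outputs line up.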