Email

[email protected]

Phone number

6163

Maytham Nabeel Meqdad Ajam

Scopus Publications — Maytham Nabeel Meqdad Ajam

Computer Science • Software

42 total publications
467 total citations
2025 most recent publication
3 publication types
2025
7 publications
Sepahvand M.; Meqdad M.N.; Abdali-Mohammadi F.
International Journal of Computers and Applications, Vol. 47 (1), pp. 1-16
2 citations • Article • English • ISSN: 1206-212X
Department of Computer Engineering, Arak University, Markazi, Iran; Intelligent Medical Systems Department, Al-Mustaqbal University, Babil, Hillah, Iraq; Department of Computer Engineering and Information Technology, Razi University, Kermanshah, Iran
Human activity recognition using wearable sensors is an important problem in pervasive computing, with applications in domains such as healthcare, context-aware and pervasive computing, sports, surveillance and monitoring, and the military. Three approaches can be considered for activity recognition: video sensor-based, physical sensor-based, and environmental sensor-based. This paper surveys related work on physical sensor-based approaches to motion processing. A wide range of artificial intelligence models, from single classifiers to deep learning-based methods, is reviewed. Achieving accurate human activity detection under both natural and experimental conditions poses several challenges, which affect the accuracy and applicability of the proposed methods. This paper analyzes the methods, challenges, approaches, and future work, with the goal of establishing a clear picture of the field of motion detection using inertial sensors. © 2024 Informa UK Limited, trading as Taylor & Francis Group.
Keywords: deep learning; human activity recognition; inertial motion unit; machine learning; wearable computing
Meqdad M.N.; Abdali-Mohammadi F.; Kadry S.
International Journal on Engineering Applications, Vol. 13 (2), pp. 141-147
1 citation • Article • English • ISSN: 2281-2881
Intelligent Medical Systems Department, College of Sciences, Al-Mustaqbal University, Hillah, 51001, Iraq; Department of Computer Engineering and Information Technology, Razi University, Kermanshah, 6714414971, Iran; Department of Applied Data Science, Noroff University College, Kristiansand, 4612, Norway
The unique structure of the iris has established this biometric trait as an effective basis for developing robust and reliable identification systems. However, it is crucial to extract features that thoroughly describe an individual's iris, as identity recognition is a significant security concern that many businesses must implement correctly. In this study, a combined method is developed for iris segmentation, and a genetic algorithm-based approach is presented to compute optimal features in terms of separability. This approach encompasses three tasks: feature selection, feature weighting through a genetic algorithm, and learning new features through feature combination. The goal of this combined method is to extract features related to iris morphology and texture. Thus, three feature extraction methods, including local binary patterns and Gabor filters, were applied. Subsequently, the weighted genetic algorithm is employed to reduce the dimensionality of the features while improving their discrimination ability. In the final detection stage, a single classification algorithm, the support vector machine, is used to implement lightweight classification, facilitating the method's implementation on devices with hardware limitations. Numerical evaluations demonstrate acceptable accuracy compared to neural network-based methods. Experiments conducted on two datasets, IITD and CASIA Interval, resulted in detection rates of 99.55% and 93.50%, respectively, a meaningful improvement over state-of-the-art approaches. © 2025 Praise Worthy Prize S.r.l. - All rights reserved.
Keywords: Feature Reduction; Gabor Features; Genetic Algorithm; Human Identification; Iris Recognition; Local Binary Pattern (LBP)
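The abstract above describes weighting features with a genetic algorithm so that classes become more separable before a lightweight SVM. As a rough illustration only (not the authors' implementation), the sketch below evolves a feature-weight vector against a Fisher-style separability score; all function names, parameters, and data are hypothetical.

```python
import random

def fisher_score(weights, class_a, class_b):
    """Separability of weighted 1-D projections: (mean gap)^2 / (var_a + var_b)."""
    def project(samples):
        return [sum(w * x for w, x in zip(weights, s)) for s in samples]
    pa, pb = project(class_a), project(class_b)
    ma, mb = sum(pa) / len(pa), sum(pb) / len(pb)
    va = sum((x - ma) ** 2 for x in pa) / len(pa)
    vb = sum((x - mb) ** 2 for x in pb) / len(pb)
    return (ma - mb) ** 2 / (va + vb + 1e-9)

def evolve_weights(class_a, class_b, dim, pop_size=20, generations=40, seed=1):
    """Toy genetic algorithm: keep the fitter half, refill via crossover + mutation."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda w: -fisher_score(w, class_a, class_b))
        parents = population[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(dim)                 # mutate one gene, clamp to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + rng.uniform(-0.2, 0.2)))
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: fisher_score(w, class_a, class_b))
```

The real method additionally learns new combined features and feeds the weighted features to an SVM; this sketch shows only the weighting-by-separability idea.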
Kumar S.S.; Kriplani K.; Riadhusin R.; Sahoo N.P.; Srinivas V.; Salaman Z.N.; Meqdad M.N.; Abushraida A.A.J.
ICCR 2025 - 3rd International Conference on Cyber Resilience
Conference paper • English
New Prince Shri Bhavani College of Engineering and Technology, Department of EEE, Tamil Nadu, Chennai, 600073, India; IES College of Technology, Department of Computer Science & Engineering, Madhya Pradesh, Bhopal, 462044, India; Islamic University in Najaf, College of Technical Engineering, Department of Computers Techniques Engineering, Najaf, Iraq; Kalinga University, Department of Mathematics, Raipur, India; Gokaraju Rangaraju Institute of Engineering and Technology, Department of CSE, Telangana, Hyderabad, India; University of Hilla, Faculty of Sciences, Medical Physics Department, Babylon, 51011, Iraq; Al-Mustaqbal University, College of Sciences, Intelligent Medical Systems Department, Babylon, 51001, Iraq; University of Al-Ameed, College of Dentistry, PO Box 198, Karbala, Iraq
Medical imaging is fundamental for early illness diagnosis and tumor identification, providing essential visual information on anatomical and pathological features. The increasing need for automated, precise, and scalable diagnostic tools has propelled the development of advanced deep-learning (DL) models for image classification and object recognition. Though proficient in spatial feature extraction, traditional Convolutional Neural Networks (CNNs) are fundamentally constrained in their capability to model long-range relationships and multi-scale contextual information. These limitations result in inadequate performance in intricate medical situations characterized by nuanced textural differences, uneven tumor margins, and diverse imaging techniques. Moreover, current models often have elevated false positive rates and limited generalization across datasets. This study introduces a novel design, the Swin Transformer-Based Medical Image Diagnosis and Detection Network (Swin-MedNet), a hierarchical Vision Transformer architecture that employs self-attention mechanisms for detecting and classifying brain tumors. The architecture effectively integrates local and global features with linear computational complexity, and its multi-stage encoder gradually merges patches, facilitating scalable representation learning. The design incorporates a Feature Pyramid Network (FPN) and a Region Proposal Network (RPN) to improve semantic localization and tumor segmentation precision for object detection. Experimental validation on benchmark datasets, including BraTS (brain tumor), showed enhanced performance, with higher classification accuracy and mean Average Precision (mAP) and reduced false detection rates compared to existing methodologies. © 2025 IEEE.
Keywords: Attention Mechanism; Brain Tumor Detection; Image Classification; Medical Imaging; Object Detection; Swin Transformer
Salwadkar M.; Al-Fatlawy R.R.; Bharadwaj V.Y.; Anita Sofia Liz D.R.; Singh K.R.; Meqdad M.N.; Ali Alwash M.
ICCR 2025 - 3rd International Conference on Cyber Resilience
Conference paper • English
Kalinga University, Department of Electrical and Electronics Engineering, Raipur, India; The Islamic University, College of Technical Engineering, Department of Computers Techniques Engineering, Najaf, Iraq; Gokaraju Rangaraju Institute of Engineering and Technology, Department of AIML, Telangana, Hyderabad, India; New Prince Shri Bhavani College of Engineering and Technology, Department of CSE, Tamil Nadu, Chennai, 600073, India; Karpagam Academy of Higher Education, Department of Computer Science, Coimbatore, 641021, India; Al-Mustaqbal University, College of Sciences, Intelligent Medical Systems Department, Babylon, 51001, Iraq; University of Al-Ameed, College of Medicine, PO Box 198, Karbala, Iraq
Money laundering poses a significant challenge for financial institutions such as banks, as it harms the economy and facilitates criminal activity. Traditional detection methods are difficult to apply because they are hard to comprehend and require specialized expertise. This article presents an explainable ensemble learning model for Anti-Money Laundering (AML) that uses SHAP (SHapley Additive exPlanations) to make the model's decisions accessible to reviewers. Several classifiers, including Random Forest, XGBoost, and LightGBM, are combined to leverage their predictive capabilities. By examining real bank transaction data, the system learns to recognize suspicious behavior. SHAP values help in understanding significant aspects, such as how long and how frequently an account has been used, and provide both local and global explanations of the model's decisions. The findings show that the ensemble performs more effectively than any single model, with a recall of 91% and an accuracy of 94%. Domain experts can use SHAP to determine why warnings were raised and how significant they are. The straightforward design ties clear expectations to dependable models, ultimately leading to more responsible anti-money laundering measures. © 2025 IEEE.
Keywords: Anti-Money Laundering (AML); Ensemble Learning; Explainable AI (XAI); Financial Fraud Detection; SHAP Interpretability
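The ensemble-plus-explanation idea above can be illustrated in miniature with a majority vote over toy rule-based detectors and a crude "swap one feature for a baseline value and see if the decision flips" contribution score as a stand-in for SHAP. All rules, thresholds, and feature names here are invented for illustration; the paper itself uses Random Forest, XGBoost, LightGBM, and real SHAP values.

```python
def classifier_votes(tx):
    """Three hypothetical rule-based detectors voting on one transaction dict."""
    return [
        tx["amount"] > 10_000,                                  # large-amount rule
        tx["daily_count"] > 20,                                 # high-frequency rule
        tx["account_age_days"] < 30 and tx["amount"] > 1_000,   # new-account rule
    ]

def ensemble_flag(tx):
    """Majority vote: flag the transaction if at least 2 of 3 detectors fire."""
    return sum(classifier_votes(tx)) >= 2

def feature_contribution(tx, feature, baseline):
    """Crude local explanation: does replacing one feature with a 'typical'
    baseline value change the ensemble decision? (+1 means the feature's actual
    value is what tipped the transaction into being flagged.)"""
    altered = dict(tx, **{feature: baseline[feature]})
    return int(ensemble_flag(tx)) - int(ensemble_flag(altered))
```

Unlike SHAP, this flip test ignores feature interactions and attribution fairness; it only conveys the shape of a local explanation.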
Akhatova V.; Nosirova D.; Zakiryayeva P.; Normamatov B.; Kuvandikov G.; Eman M.; Meqdad M.N.; Ali Zearah S.
ICCR 2025 - 3rd International Conference on Cyber Resilience
Conference paper • English
Samarkand State Medical University, Department of Internal Medicine, Samarkand, Uzbekistan; Samarkand State Medical University, Samarkand, Uzbekistan; Department of Human Anatomy, Samarkand State Medical University, Samarkand, Uzbekistan; University of Hilla, Computer Center, Babylon, 51011, Iraq; Al-Mustaqbal University, College of Sciences, Intelligent Medical Systems Department, Babylon, 51001, Iraq; Al-Ayen University, Technical Engineering College, Department of Computer Technology Engineering, Thi-Qar, Iraq
Satellite images are important for studying the environment, managing disasters, and analyzing climate change. However, many of these images suffer from low resolution, noise, and distortions, making them hard to interpret accurately. Traditional methods, like interpolation and deep learning-based super-resolution, often fail to recover fine details or keep image quality consistent. This study investigates low-quality satellite image improvement and restoration using a Stable Diffusion Model (SDM), which produces better, more detailed images by progressively eliminating noise over several stages. The model was trained on a range of satellite images and evaluated using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Fréchet Inception Distance (FID). By preserving fine details and reducing errors, the Stable Diffusion Model is shown to outperform conventional approaches. The resulting high-quality images benefit other remote sensing projects, enhance disaster response planning, and enable experts to study climate patterns more precisely. This study shows that satellite images can be effectively improved using a stable diffusion model. © 2025 IEEE.
Keywords: Artificial Intelligence; Climate Monitoring; Disaster Management; Image Enhancement; Satellite Images; Stable Diffusion Model; Super-Resolution
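One of the evaluation metrics named above, PSNR, is simple enough to state exactly: PSNR = 10·log10(peak² / MSE). A minimal sketch over flattened pixel lists (an illustration of the metric, not the study's evaluation code):

```python
import math

def psnr(reference, restored, peak=255.0):
    """Peak Signal-to-Noise Ratio between two equal-length pixel sequences.
    Higher is better; identical images give infinity."""
    if len(reference) != len(restored):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)
```

For example, two 8-bit images that differ by 10 gray levels at every pixel score about 28.1 dB.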
Karthika D.; Radhika C.; Kiruthika S.; Mohanasathiya K.S.; Meqdad M.N.; Abd D.I.; Abushraida A.A.J.; Karim S.M.
ICCR 2025 - 3rd International Conference on Cyber Resilience
Conference paper • English
Vet Institute of Arts and Science College, Erode, India; P.K.R. Arts College for Women, Department of Computer Science, Gobichettipalayam, India; Vet Institute of Arts and Science (Co-Education) College, Department of Information Technology, Erode, India; Al-Mustaqbal University, College of Sciences, Intelligent Medical Systems Department, Babylon, 51001, Iraq; University of Hilla, Faculty of Sciences, AI Department, Babylon, 51011, Iraq; University of Al-Ameed, College of Dentistry, PO Box 198, Karbala, Iraq; Northern Technical University, College of Health and Medical Techniques, Al-Dour, Department of Optics Techniques, Salahaldin, Iraq
In computer-aided diagnosis systems, medical image segmentation and disease prediction are essential processes that demand both efficient use of computing resources and a high degree of accuracy. This work develops a novel Hybrid Clustering and Classification Algorithm (HCCA) that combines unsupervised and supervised learning techniques in a complementary way to improve medical image analysis. A pre-segmentation stage based on fuzzy C-means (FCM) clustering consistently identifies and isolates regions of interest (ROIs) across several medical imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI) scans, despite challenges such as noise, intensity inhomogeneity, and the overlapping borders of blood vessels. The segmented ROIs are then passed to a deep neural network (DNN) classifier with convolutional layers for disease categorization. Experimental evaluation uses benchmark medical datasets such as BraTS and LIDC-IDRI, with particular focus on lung nodules and brain tumors. Compared to conventional approaches, the results show notable gains in segmentation accuracy (measured by the Dice and Jaccard coefficients) and classification performance (precision, recall, and area under the curve), with the improvements especially clear in segmentation. The proposed HCCA architecture is a promising candidate for real-time clinical deployment: automatic, accurate segmentation and prediction provide a scalable, robust, and interpretable solution for medical diagnostics. © 2025 IEEE.
Keywords: Deep Neural Network (DNN); Disease Prediction; Fuzzy C-Means (FCM); Healthcare Analytics; Hybrid Clustering; Medical Image Segmentation
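The FCM pre-segmentation step above can be sketched in miniature. Below is a plain 1-D fuzzy C-means (memberships u proportional to 1/d^(2/(m-1)), centres as u^m-weighted means) with deterministic, evenly spread initial centres. This is an illustrative toy on scalar intensities, far from a full image-segmentation pipeline.

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means returning (centres, memberships)."""
    lo, hi = min(points), max(points)
    # deterministic start: centres evenly spread over the data range
    centres = [lo + i * (hi - lo) / (c - 1) for i in range(c)]
    for _ in range(iters):
        # membership update: u[k][i] = 1 / sum_j (d_i / d_j)^(2/(m-1))
        u = []
        for x in points:
            d = [abs(x - ctr) or 1e-12 for ctr in centres]   # guard zero distance
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                      for i in range(c)])
        # centre update: mean of the points weighted by u^m
        centres = [sum((u[k][i] ** m) * points[k] for k in range(len(points)))
                   / sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return centres, u
```

On two well-separated intensity clusters the centres settle near each cluster's middle, and each point's memberships sum to one by construction.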
Sepahvand M.; Meqdad M.N.; Abdali-Mohammadi F.
Computer Methods in Biomechanics and Biomedical Engineering
Article • English • ISSN: 1025-5842
Department of Computer Engineering, Arak University, Markazi, Iran; Intelligent Medical Systems Department, Al-Mustaqbal University, Babil, Hillah, Iraq; Department of Computer Engineering and Information Technology, Razi University, Kermanshah, Iran
When an arrhythmia occurs in the heart, all electrocardiogram (ECG) leads show evidence of it, but it is more prominent in some leads. This medical fact serves as the foundation for the knowledge distillation (KD) model proposed in this paper, which aims to enhance weak leads by leveraging information from stronger ones. The model employs single-lead signals for the student network and twelve-lead signals for the teacher network. Tucker decomposition is used in this KD model to decompose the teacher's feature maps. According to evaluations, the student model achieves an accuracy of 96.48% on the Chapman ECG dataset classification task. © 2025 Informa UK Limited, trading as Taylor & Francis Group.
Keywords: arrhythmia classification; feature representation; knowledge distillation; multi-teacher learning; single-lead ECG; tensor decomposition
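The knowledge-distillation setup above pairs a twelve-lead teacher with a single-lead student. The standard distillation objective (Hinton-style soft targets blended with the hard label loss; not the paper's exact Tucker-decomposition variant) can be sketched as:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label, T=3.0, alpha=0.5):
    """Blend of soft cross-entropy against the teacher's tempered outputs and
    hard cross-entropy against the ground-truth label."""
    p_teacher = softmax(teacher_logits, T)
    p_student_T = softmax(student_logits, T)
    soft = -sum(t * math.log(s) for t, s in zip(p_teacher, p_student_T))
    hard = -math.log(softmax(student_logits)[true_label])
    # T^2 rescaling keeps soft-target gradients comparable across temperatures
    return alpha * (T ** 2) * soft + (1 - alpha) * hard
```

A student whose logits match the teacher's while ranking the true class first incurs a lower loss than one that contradicts both, which is the pressure that transfers the stronger leads' knowledge.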
2024
7 publications
Zhang Y.; Aldosky A.J.; Goyal V.; Meqdad M.N.; Nutakki T.U.K.; Alsenani T.R.; Nguyen V.N.; Dahari M.; Nguyen P.Q.P.; Ali H.E.
Process Safety and Environmental Protection, Vol. 182, pp. 1171-1184
16 citations • Article • English • ISSN: 0957-5820
Shandong Provincial University Laboratory for Protected Horticulture, Weifang University of Science and Technology, Shouguang, 262700, China; Ambassador of Peace in the Field of Environmental Protection Duhok Governorate, Duhok, Kurdistan Region, Iraq; Department of Electronics and Communication Engineering, GLA University, Mathura, 281406, India; Intelligent Medical Systems Department, Al-Mustaqbal University College, Babil, Hillah, 51001, Iraq; Department of Chemical Engineering, American University of Ras Al Khaimah, United Arab Emirates; Department of Electrical Engineering, College of Engineering in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj, 11942, Saudi Arabia; Institute of Engineering, HUTECH University, Ho Chi Minh City, Viet Nam; Deparment of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, 50603, Malaysia; PATET Research Group, Ho Chi Minh City University of Transport, Ho Chi Minh City, Viet Nam; Department of Physics, Faculty of Science, King Khalid University, P.O. Box 9004, Abha, Saudi Arabia
Municipal solid waste (MSW)-to-energy systems have gained significant attention in recent years for their potential to produce renewable energy from waste. These systems convert MSW into electricity, heat, or fuel. One of the most promising applications of MSW-to-energy systems is the production of hydrogen, which is considered a clean and sustainable fuel. Machine learning algorithms have the potential to revolutionize the way MSW-to-energy systems are managed, and their integration could significantly improve the sustainability and profitability of this industry. In this study, a novel integrated MSW-to-energy system is modeled to produce hydrogen, power, and oxygen, with additional capacity for heating water and air. Hydrogen production, power production, oxygen storage, hot water, hot air, and system emissions are predicted using machine learning algorithms based on regression models with high validity, R2 values above 99.8%, and errors smaller than 1%. Reduced regression models are then developed by eliminating the insignificant variables from the full algorithms using analysis of variance. The findings reveal high accuracy for the reduced regression models as well, with their errors growing only slightly, to 2%. This suggests that machine learning algorithms can serve as an effective tool to further improve MSW-to-energy systems. © 2024 The Institution of Chemical Engineers
Keywords: Emission; Environmental sustainability; Hydrogen fuel; Machine learning; Waste treatment; Waste-to-energy system
Mishra J.S.; Meqdad M.N.; Sharma A.; Deepak A.; Gupta N.K.; Bajaj R.; Pokhariya H.S.; Shrivastava A.
International Journal of Intelligent Systems and Applications in Engineering, Vol. 12 (5s), pp. 163-173
5 citations • Article • English • ISSN: 2147-6799
Department of Computer Science and Information Technology, Vaugh Institute of Agricultural Engineering and Technology (VIAET), Uttar Pradesh, Prayagraj, India; Intelligent Medical Systems Department, Al-Mustaqbal University, Babil, Hillah, 51001, Iraq; Department of Computer Engineering and Applications, GLA University, U.P., Mathura, India; Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Tamil Nadu, Chennai, India; NB Global University, Rajasthan, Bikaner, India; Department of Computer Science & Engineering, Graphic Era Deemed to be University, Uttarakhand, Dehradun, India; Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Tamil Nadu, Chennai, India
Heart disease is one of the major causes of death worldwide, so it is crucial to discover health problems as early as possible. To assess the efficacy of heart disease prediction models in accurately identifying individuals at risk, a performance analysis of these algorithms was conducted. A comprehensive dataset was gathered, encompassing patients both with and without cardiac disease and incorporating diverse clinical and demographic variables. A number of machine learning methods, including logistic regression, decision trees, random forests, support vector machines, and artificial neural networks, were used to develop predictive models. Additionally, receiver operating characteristic (ROC) curves were employed to examine the trade-off between specificity and sensitivity. The analysis's findings showed that all examined models performed well in predicting heart disease, though certain models exhibited superior performance on specific metrics. This information is crucial for healthcare professionals, as it enables informed decision-making when selecting prediction models based on the desired balance between correctly identifying positive cases and minimizing false positives. The insights gained from this performance analysis offer valuable guidance on the strengths and limitations of different heart disease prediction models. They can inform future research endeavors and assist healthcare practitioners in implementing effective and accurate prediction systems that identify individuals at risk and facilitate timely interventions. © 2024, Ismail Saritas. All rights reserved.
Keywords: Dataset; Heart Disease Prediction; Machine Learning; Performance; ROC curves
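Since the abstract leans on ROC analysis, the area under the ROC curve has a compact rank-statistic form worth showing: AUC equals the probability that a randomly chosen positive case scores above a randomly chosen negative one, with ties counting half. A hedged sketch of that identity (not the study's evaluation code):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney rank statistic: fraction of positive/negative
    pairs in which the positive outranks the negative (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking scores 1.0, random scoring hovers near 0.5, and a fully inverted ranking scores 0.0, which is why AUC summarizes the sensitivity/specificity trade-off in one number.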
Sepahvand M.; Abdali-Mohammadi F.; Meqdad M.N.
Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, Vol. 12 (1)
2 citations • Article • Open Access • English • ISSN: 2168-1163
Department of Computer Engineering and Information Technology, Razi University, Kermanshah, Iran; Intelligent Medical Systems Department, Al-Mustaqbal University, Babil, Hillah, Iraq
Advancements in technology have accelerated the evolution of bone age assessment (BAA) methodologies, one of which is deep learning algorithms, which overcome the drawbacks of conventional approaches. Despite the excellent effectiveness of deep neural networks in detecting the correct bone-age class, they have a significant degree of complexity due to the numerous parameters they employ for each region of interest (ROI). In this paper, we propose a BAA method using a hybrid knowledge distillation (KD) paradigm that conquers this difficulty by mapping different ROIs onto a single ROI. In this scheme, a teacher network pre-trained on six ROIs (the bones of the five fingers and the wrist) transfers the knowledge of its final response layer and internal layers to the student. Six student models are then constructed, each based on just one of these ROIs, while receiving the teacher model's information. Empirical results on the Digital Hand Atlas show that our student model trained on one ROI obtains 95% accuracy on 19 classes of bone age. © 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
Keywords: Bone age assessment; deep learning; edge device; knowledge distillation; region of interest extraction
Ghasemi F.; Sepahvand M.; Meqdad M.N.; Abdali Mohammadi F.
Journal of Medical Engineering and Technology, Vol. 48 (6), pp. 223-235
2 citations • Article • English • ISSN: 0309-1902
Department of Computer Engineering and Information Technology, Razi University, Kermanshah, Iran; Department of Computer Engineering, Arak University, Markazi, Iran; Intelligent Medical Systems Department, Al-Mustaqbal University, Babil, Iraq
Nowadays, photoplethysmography (PPG) technology is increasingly used in smart devices and mobile phones, thanks to advancements in information and communication technology in the health field, particularly for monitoring cardiac activity. Developing generative models for synthetic PPG signals requires overcoming challenges such as data diversity and the limited data available for training deep learning models. This paper proposes a generative model that adopts a genetic programming (GP) approach to generate increasingly diversified and accurate data from an initial PPG signal sample. Unlike conventional regression, the GP approach automatically determines the structure and combinations of the mathematical model. With a mean square error (MSE) of 0.0001, a root mean square error (RMSE) of 0.01, and a correlation coefficient of 0.999, the proposed approach outperformed other approaches and proved effective in terms of efficiency and applicability in resource-constrained environments. © 2024 Informa UK Limited, trading as Taylor & Francis Group.
Keywords: generative model; genetic programming; mathematical model; photoplethysmogram; scalability
Sepahvand M.; Meqdad M.N.; Abdali-Mohammadi F.
Journal of Medical Engineering and Technology, Vol. 48 (2), pp. 48-63
1 citation • Article • English • ISSN: 0309-1902
Department of Computer Engineering and Information Technology, Razi University, Kermanshah, Iran; Department of Intelligent Medical Systems, Al-Mustaqbal University, Babil, Hillah, Iraq
Wearable computers can be used in different domains, including healthcare. However, challenges such as faults may limit their applications in real practice, so designers of wearable devices must take fault tolerance techniques into account. This study investigates the challenging issues of fault tolerance in wearable computing. Different aspects of fault tolerance in wearable computing, namely hardware, software, energy, and communication, are studied, and the state-of-the-art research in each category is analysed. In this analysis, the works using fault tolerance techniques are organized into 25 components, referred to as a “fault tolerance plan”. Using this fault tolerance plan with an appropriate profile, the fault tolerance of any wearable system can be evaluated. In this article, the fault tolerance of several of the most prominent works in wearable computing was evaluated. The results obtained with the medical profile showed that only one wearable system had a fault tolerance of 91%, with the other systems at 24% or less; with the military profile, only one wearable system reached 76%, with the others at 19% or less. This indicates that few studies have addressed the fault tolerance of wearable computing. © 2024 Informa UK Limited, trading as Taylor & Francis Group.
Keywords: fault tolerance; fault tolerance plan; reliability; vital applications; wearable computing
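The "fault tolerance plan" evaluation described above reduces to a weighted checklist: the score is the weight of the components a system implements divided by the total weight under a chosen profile. A sketch with invented component names and weights (the paper's actual 25 components and its medical/military profile weights are not reproduced here):

```python
def fault_tolerance_score(implemented, profile_weights):
    """Score a wearable system (0-100) against a weighted fault-tolerance
    checklist. `implemented` is the set of plan components the system provides;
    `profile_weights` maps each component to its weight under a given profile
    (e.g. a medical profile weights hardware redundancy more heavily)."""
    total = sum(profile_weights.values())
    covered = sum(w for comp, w in profile_weights.items() if comp in implemented)
    return 100.0 * covered / total
```

The same system scores differently under different profiles simply by swapping the weight table, which mirrors how the paper reports separate medical and military results.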
Meqdad M.N.; Al-Qudsy Z.N.; Kadry S.; Haleem A.S.
Ingenierie des Systemes d'Information, Vol. 29 (4), pp. 1461-1468
1 citation • Article • Open Access • English • ISSN: 1633-1311
Intelligent Medical Systems Department, College of Sciences, Al-Mustaqbal University, Babil, 51001, Iraq; Intelligent Medical Systems Department, Biomedical Informatics College, University of Information Technology and Communications, Baghdad, 10011, Iraq; Applied Science Research Center, Applied Science Private University, Amman, 11937, Jordan; MEU Research Unit, Middle East University, Amman, 11831, Jordan
Predicting the secondary structure of proteins continues to be a significant hurdle in the field of bioinformatics. This anticipation plays a crucial role as an intermediary stage in addressing the challenge of predicting the tertiary structure of proteins, which is instrumental in determining their functions. This prediction holds the potential to facilitate drug development and contribute to the identification of viral diseases. One can forecast the secondary structure of a protein by examining its primary components, including the amino acid sequence and various additional factors. Through the examination of established sequences and recognized protein types, it becomes feasible to anticipate unfamiliar sequences. The objective of this article is to enhance the forecast accuracy of protein secondary structure by adjusting the current code, aiming to reach an 80% accuracy rate. Copyright: ©2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license.
Keywords: detection of the second type of protein; neural networks; pattern recognition; protein configuration
Darbandi M.; Meqdad M.N.; Hammoud A.; Nazif H.
Scientific Reports, Vol. 14 (1)
Article • Open Access • English • ISSN: 2045-2322
Pôle Universitaire Léonard de Vinci, Paris, France; Intelligent Medical Systems Department, College of Sciences, Al-Mustaqbal University, Babil, 51001, Iraq; Department of Medical and Technical Information Technology, Bauman Moscow State Technical University, Moscow, Russian Federation; Department of Mathematics and Natural Sciences, Gulf University for Science and Technology, Mishref Campus, Mubarak Al-Abdullah, Kuwait; Department of Mathematics, Payame Noor University, Tehran, Iran
In the realm of petroleum extraction, well productivity declines as reservoirs deplete, eventually reaching a point where continued extraction becomes economically unfeasible. To counteract this, artificial lift techniques are employed, with gas injection being a prevalent method. Ideally, unrestricted gas injection could maximize oil output; however, gas scarcity necessitates judicious resource management to optimize oil production while minimizing gas usage. Gas injection alleviates hydrostatic pressure within wells, thereby enhancing oil recovery. Conventional gas allocation strategies often prove inadequate when confronted with the complex, non-linear constraints of real-world scenarios, particularly under gas supply limitations. This research introduces an innovative approach to gas allocation optimization, leveraging Internet of Things (IoT) technology in conjunction with advanced computational methods. The study melds two optimization algorithms: Particle Swarm Optimization (PSO) and Atom Search Optimization (ASO). This hybrid technique harnesses IoT capabilities for real-time data acquisition and processing, enabling more precise and adaptive optimization. The proposed methodology incorporates PSO’s individual and collective learning mechanisms into the ASO framework, accelerating the solution refinement process. Additionally, it introduces dynamic parameters to balance broad exploration with focused exploitation of the solution space. The algorithm’s efficacy is further enhanced by an adaptive force constant for each “atom” (solution candidate), which evolves based on the atom’s performance over successive iterations. Empirical evaluation of this novel approach demonstrated significant improvements in both energy efficiency and gas utilization: the hybrid method achieved average reductions of 12.12% in energy consumption and 18.05% in gas injection volume compared to existing techniques. The results also showed that battery life and cost are better than with the other methods, improved by an average of 7.67% and 9.48%, respectively. © The Author(s) 2024.
Keywords: Atom search optimization; Fuzzy; Gas lift allocation; Internet of things; Multi-objective optimization
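The hybrid optimizer above builds on PSO's velocity update: inertia plus a pull toward each particle's personal best plus a pull toward the swarm's global best. A minimal standalone PSO sketch (the PSO half only, with hypothetical hyper-parameters; not the paper's IoT-integrated PSO-ASO hybrid or its adaptive force constants):

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=60, seed=42):
    """Minimize `objective` over [lo, hi]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On a smooth test function such as the sphere, the swarm collapses onto the minimum within a few dozen iterations; the paper's contribution is feeding these updates into the ASO framework with real-time IoT data.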
2023
9 publications
Mohammadiounotikandi A.; Fakhruldeen H.F.; Meqdad M.N.; Ibrahim B.F.; Jafari Navimipour N.; Unal M.
Fire, Vol. 6 (4)
28 citations • Article • Open Access • English • ISSN: 2571-6255
Department of Computer and IT Engineering, Faculty of Engineering, South Tehran Branch, Islamic Azad University, Tehran, 1584743311, Iran; Computer Techniques Engineering Department, Faculty of Information Technology, Imam Ja’afar Al-Sadiq University, Baghdad, 10011, Iraq; Electrical Engineering Department, College of Engineering, University of Kufa, Kufa, 540011, Iraq; Computer Technical Engineering Department, College of Technical Engineering, The Islamic University, Najaf, 54001, Iraq; Intelligent Medical Systems Department, Al-Mustaqbal University, Hillah, 51001, Iraq; Department of Information Technology, College of Engineering and Computer Science, Lebanese French University, Erbil, 44001, Iraq; Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Kadir Has University, Istanbul, 34083, Turkey; Future Technology Research Center, National Yunlin University of Science and Technology, Douliou, 64002, Taiwan; Department of Computer Engineering, Nisantasi University, Istanbul, 34485, Turkey
Concerns about fire risk reduction and rescue tactics have been raised in light of recent incidents involving flammable cladding systems and fast fire spread in high-rise buildings worldwide. Thus, governments, engineers, and building designers should prioritize fire safety. During a fire event, an emergency evacuation system is indispensable in large buildings, guiding evacuees to exit gates as fast as possible by dynamic and safe routes. Evacuation plans should evaluate whether paths inside the structures are appropriate for evacuation, considering the building's electric power, electric controls, energy usage, and fire/smoke protection. On the other hand, the Internet of Things (IoT) is emerging as a catalyst for creating and optimizing the supply and consumption of intelligent services to achieve an efficient system. Smart buildings use IoT sensors to monitor indoor environmental parameters such as temperature, humidity, luminosity, and air quality. This research proposes a new IoT-based smart-building fire evacuation and control system that efficiently directs individuals along an evacuation route during fire incidents. It utilizes a hybrid nature-inspired optimization approach, Emperor Penguin Colony with Particle Swarm Optimization (EPC-PSO). The EPC algorithm is regulated by the penguins' body-heat radiation and spiral-like movement inside their colony; the emperor penguins' behavior improves the PSO algorithm's convergence speed. The method also uses the particle concept of PSO to update the penguins' positions. Experimental results showed that the proposed method handled cost, energy consumption, and execution-time challenges accurately and effectively, ensuring minimal loss of life and resources. The method decreased execution time and cost by 10.41% and 25%, respectively, compared to other algorithms.
Moreover, to achieve a sustainable system, the proposed method has decreased energy consumption by 11.90% compared to other algorithms. © 2023 by the authors.
Keywords: emergency rescue; energy consumption; fire; fire evacuation system; Internet of Things; metaheuristic algorithms; smart buildings
Ge Y.; Zhang G.; Meqdad M.N.; Chen S.
Artificial Intelligence in Medicine , Vol. 146
27 citations Article English ISSN: 09333657
Key Laboratory of Intelligent Informatics for Safety & Emergency of Zhejiang Province, Wenzhou University, Wenzhou, 325100, China; School of Computer Science and Technology, Zhejiang Normal University, Jinhua, 321019, China; Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, 310018, China; The Key Laboratory of Computer Vision and Systems (Ministry of Education), Tianjin University of Technology, Tianjin, 300384, China; College of Engineering, Ocean University of China, Qingdao, 266100, China; Intelligent Medical Systems Department, Al-Mustaqbal University, Babil, 51001, Iraq; Department of Breast Surgery, The Fifth Affiliated Hospital of Wenzhou Medical University, LiShui Municipal Central Hospital, Zhejiang, Lishui, 323000, China
Healthcare needs in rural areas differ significantly from those in urban areas. Addressing the healthcare challenges in rural communities is of paramount importance, as these regions often lack access to adequate healthcare facilities. Moreover, technological advancements, particularly in the realm of the Internet of Things (IoT), have brought about significant changes in the healthcare industry. IoT involves connecting real-world objects to digital devices, opening up various possibilities for improving healthcare delivery. One promising application of IoT is its use in monitoring the spread of diseases in remote villages through interconnected sensors and devices. Surprisingly, there has been a noticeable absence of comprehensive research on this topic. Therefore, the primary objective of this study is to conduct a thorough and systematic review of intelligent IoT-based healthcare systems in rural communities and their governance. The analysis covers research papers published until December 2022 to provide valuable insights for future researchers. The selected articles have been categorized into three main groups: monitoring, intelligent services, and body sensor networks. The findings indicate that IoT research has garnered significant attention within the healthcare community. Furthermore, the results illustrate the potential benefits of IoT for governments, especially in rural areas, in improving public health and strengthening economic ties. It is worth noting that establishing a robust security infrastructure is essential for implementing IoT effectively, given its innovative operational principles. In summary, this review enhances scholars' understanding of the current state of IoT research in rural healthcare settings while highlighting areas that warrant further investigation. Additionally, it keeps healthcare professionals informed about the latest advancements and applications of IoT in rural healthcare. © 2023 Elsevier B.V.
Keywords: Body sensor networks; Healthcare systems; Intelligent IoT; Intelligent service; Monitoring; Rural governance; Rural society
Arivazhagu U.V.; Ilanchezhian P.; Meqdad M.N.; Prithivirajan V.
Ad-Hoc and Sensor Wireless Networks , Vol. 56 (3-4), pp. 223-252
8 citations Article English ISSN: 15519899
Kingston Engineering College, Vellore, India; Department of Information Technology, Sona College of Technology, Salem, India; Computer Techniques Engineering Department, Al-Mustaqbal University College, Babil, Hillah, 51001, Iraq; Department of ECE, CMR Engineering College, Hyderabad, India
Wireless communication technology is developing rapidly, increasing the number of Internet-based reconfigurable wireless gadgets. This has created a huge revolution in people's lives and in economies worldwide. These innovative devices can perform sensing operations, data processing, and communication among nodes. As Internet-enabled gadgets multiply, the network also faces a growing variety of cyber-attacks. An intelligent framework is essential to handle and defend against these attacks and to improve security. Wireless gadgets are commonly battery-driven, while IDS implementations consume considerable energy, which weakens attack detection accuracy. Consequently, an IDS design is required that establishes a good trade-off between energy and accuracy. This paper proposes the novel Gated Capsule Networks (GCN) to improve the detection accuracy of abnormal behaviour in wireless networks. The Spotted Hyena Optimizer (SHO) and Gated Recurrent Unit (GRU) have been considered for an effective IDS design that achieves a good trade-off between energy and accuracy. Experimentation was carried out on real-time datasets under different attacks to measure the performance of the proposed method, followed by a comparative analysis with existing learning models. For different attack predictions in WSN-IoT, the proposed framework achieved exemplary performance in terms of accuracy (99.99%), precision (99.98%), recall (99.99%), sensitivity (99.98%), and F1-score (99.98%). As a result, this framework provides extensive support for resource-constrained IP-enabled wireless devices. © 2023 Old City Publishing, Inc.
Keywords: Capsule Network; Gated Recurrent Unit (GRU); real time dataset; Reconfigurable Wireless gadgets; Spotted Hyena Optimizer
Meqdad M.N.; Hussein A.H.; Husain S.O.; Jawad A.M.
Indonesian Journal of Electrical Engineering and Computer Science , Vol. 30 (2), pp. 936-943
5 citations Article Open Access English ISSN: 25024752
Computer Techniques Engineering Department, Al-Mustaqbal University College, Hillah, Iraq; Computer Engineering Techniques Department, Imam Al-kadhum University College, Najaf, Iraq; Computer Technical Engineering Department, College of Technical Engineering, The Islamic University, Najaf, Iraq; Department of Medical Instrumentation Techniques Engineering, Al-Mustaqbal University College, Hillah, Iraq
Categorizing cardiac abnormalities received from several centers cannot be done quickly because of privacy and security restrictions. Today, individual privacy is considered one of the most important research topics across the sciences. This study provides a novel approach for the detection of cardiac abnormalities based on federated learning (FL). This approach addresses the challenge of accessing data from remote centers and makes learning possible without transferring data to a main center. We present a novel aggregation approach in FL for addressing the challenge of imbalanced data, using the stochastic weight averaging (SWA) optimizer and a multivariate Gaussian to enable better and more accurate detection. The advantage of the proposed approach is robust and secure aggregation of unbalanced electrocardiogram (ECG) data from heterogeneous clients. We achieved 87.98% test accuracy with the robust VGG19 architecture. © 2023 Institute of Advanced Engineering and Science. All rights reserved.
Keywords: Averaging stochastic weights; Electrocardiogram; Federated learning; Imbalanced data
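The paper's SWA-plus-Gaussian aggregation is not spelled out in the abstract. As background, the plain FedAvg-style weighted aggregation over heterogeneous clients, which such schemes refine, looks like this (an illustrative sketch, not the paper's exact rule):

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation).

    Clients with more local samples contribute proportionally more to the
    aggregated global model, which matters when data is imbalanced.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    agg = [0.0] * n_params
    for w, n in zip(client_weights, client_sizes):
        for j in range(n_params):
            agg[j] += (n / total) * w[j]
    return agg

# Two clients with imbalanced data: 90 vs 10 samples
global_w = fed_avg([[1.0, 2.0], [3.0, 6.0]], [90, 10])
# ≈ [1.2, 2.4]: dominated by the larger client
```

The aggregation described in the abstract would replace this simple weighted mean with SWA-smoothed weights and a multivariate-Gaussian model of client updates.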
de Oliveira G.G.; Moghadamnia E.; Radfar R.; Khordehbinan M.W.; Sabzalian M.H.; Meqdad M.N.
Lecture Notes in Electrical Engineering , Vol. 1077, pp. 117-163
4 citations Book chapter English ISSN: 18761100
School of Electrical and Computer, Engineering (FEEC), University of Campinas (Unicamp), Campinas-SP, Brazil; Department of Technology Management, Faculty of Management and Economics, Science and Research Branch, Islamic Azad University, Tehran, Iran; Department of Industrial Management, Faculty of Management and Economics, Science and Research Branch, Islamic Azad University, Tehran, Iran; Department of Civil Engineering, K.N. Toosi University of Technology, Tehran, Iran; Avenida Libertador Bernardo O’Higgins 3363, University of Santiago of Chile (USACH), Santiago, Chile; Intelligent Medical Systems Department, Al-Mustaqbal University, Babil, Hillah, 51001, Iraq
The apple is one of the most important and most widely used agricultural products produced in Iran; unfortunately, the crop sometimes suffers from surface defects. Quality control of apples before export is of special importance for this strategic product. Machine vision is one of the newer methods for automatic classification of apples, and its algorithms include three stages: cutting the image of the apple from the background, extracting features from the image, and finally examining the cut apple image for defects. In existing methods for apple image segmentation, assuming the image background is known in advance, segmentation is easily done with simple thresholding algorithms. Various image processing and machine vision methods have been presented for quality control of this product, each with its advantages and disadvantages. Soft computing was used to detect apple defects. The main goal of this chapter is to design a suitable system for detecting surface defects on apples. To analyze and check the accuracy of the proposed system, the above-mentioned parameters were applied to the COFILAB database of 100 images. The efficiency of the proposed method was evaluated by comparing it with two high-efficiency methods based on three measurement parameters. The results indicated that the proposed method, with an 87% correct detection rate, improved on the SVM-PSO method (80%) and Features of Color (84%). © The Author(s), under exclusive license to Springer Nature Switzerland AG. 2023.
Keywords: Apple-tree; Fault detection; Fuzzy system
Meqdad M.N.; Husain S.O.; Jawad A.M.; Kadry S.; Khekan A.R.
International Journal of Electrical and Computer Engineering , Vol. 13 (4), pp. 4692-4699
3 citations Article Open Access English ISSN: 20888708
Computer Techniques Engineering Department, Al-Mustaqbal University College, Hillah, Iraq; Computer Technical Engineering Department, College of Technical Engineering, The Islamic University, Najaf, Iraq; Department of Medical Instrumentation Techniques Engineering, Al-Mustaqbal University College, Hillah, Iraq; Department of Applied Data Science, Noroff University College, Kristiansand, Norway; Artificial Intelligence Research Center, Ajman University, Ajman, United Arab Emirates; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon; Bio Medical Informatics College, University of Information Technology and Communications, Baghdad, Iraq
Modern technologies are widely used today to diagnose epilepsy, neurological disorders, and brain tumors. However, collecting large amounts of electroencephalography (EEG) data from different centers in a central server for processing and analysis is not cost-effective in terms of time or money. Collecting these data correctly is challenging, and organizations avoid sharing their own and their clients' information due to data privacy protection, which makes transferring the data to research centers difficult. In this regard, collaborative learning is an extraordinary approach that paves the way for using information repositories in research without transferring the original data to central servers. This study focuses on a heterogeneous client-balancing technique with an interval selection approach and classification of EEG signals with the ResNet50 deep architecture. The test results achieved an accuracy of 99.14%, which compares favorably with similar methods. © 2023 Institute of Advanced Engineering and Science. All rights reserved.
Keywords: Client balancing; Cooperative learning; Electroencephalography; ResNet50
Li Y.; Sedeh S.N.; Alizadeh A.; Meqdad M.N.; Hussien Alawadi A.; Nasajpour-Esfahani N.; Toghraie D.; Hekmatifar M.
Ain Shams Engineering Journal , Vol. 14 (11)
3 citations Article Open Access English ISSN: 20904479
Wuhan Third Hospital, Wuhan, Hubei, 430070, China; Department of Mechanical Engineering, Khomeinishahr Branch, Islamic Azad University, Khomeinishahr, Iran; Department of Civil Engineering, College of Engineering, Cihan University-Erbil, Erbil, Iraq; Intelligent Medical Systems Department, Al-Mustaqbal University, Babil, 51001, Iraq; College of Technical Engineering, The Islamic University, Najaf, Iraq; College of Technical Engineering, The Islamic University of Al Diwaniyah, Iraq; College of Technical Engineering, The Islamic University of Babylon, Iraq; Department of Material Science and Engineering, Georgia Institute of Technology, Atlanta, 30332, United States
Nowadays, cardiovascular illnesses are among the leading causes of death in the world, and many studies have been performed to diagnose and prevent these diseases. Studies show that the computational hemodynamic method (CHD) is a very effective way to control and prevent the progression of this type of disease. In this computational paper, the influence of five non-Newtonian viscosity models (nNVMs) on cerebral blood vessels (CBV) is investigated by CHD. In this simulation, blood flow is assumed steady, laminar, incompressible, and non-Newtonian. The Nusselt number (Nu), dimensionless temperature (θ), pressure drop (Δp), and dimensionless average wall shear stress (DAWSS) are also investigated, considering the effects of heat generated by the body. Utilizing the FVM and the SIMPLE scheme for pressure–velocity coupling is a good approach to investigating CBVs for the five viscosity models. The results show that θ and Δp+ increase with increasing Reynolds number (Re) in the CBVs. Enhancing Re from 90 to 120 in the Cross viscosity model changes Δp+ by about 1.391 times. The DAWSS grows with increasing Re in all viscosity models; this increase leads to a rising velocity gradient close to the cerebral vessel wall. © 2023 THE AUTHORS
Keywords: Cerebral blood vessel; Dimensionless pressure; Non-Newtonian blood flow; Nusselt number; Thermal effect; Viscosity model
Meqdad M.N.; Rauf H.T.; Kadry S.
Applied System Innovation , Vol. 6 (1)
2 citations Article Open Access English ISSN: 25715577
Computer Techniques Engineering Department, Al-Mustaqbal University College, Hillah, 51001, Iraq; Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, ST4 2DE, United Kingdom; Department of Applied Data Science, Noroff University College, Kristiansand, 4612, Norway; Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, 346, United Arab Emirates; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, 1102-2801, Lebanon
The most suitable method for assessing bone age is to check the degree of maturation of the ossification centers in radiographs of the left wrist, so much effort has been made to help radiologists with reliable automated methods using these images. This study designs and tests AlexNet and GoogLeNet methods and a new architecture to assess bone age. All these methods are implemented fully automatically on the DHA dataset, which includes 1400 wrist images of healthy children aged 0 to 18 years from Asian, Hispanic, Black, and Caucasian races. For this purpose, the images are first segmented, and four different regions of the images are then separated. Bone age in each region is assessed by a separate network whose architecture is new and obtained by trial and error. The final assessment of bone age is performed by an average-based ensemble of the four CNN models. In the results and model evaluation section, various tests are performed, including pre-trained network tests. The results of all tests confirm the better performance of the designed system compared to other methods. The proposed method achieves an accuracy of 83.4% and an average error rate of 0.1%. © 2023 by the authors.
Keywords: bone anomaly detection; CNN; ensemble method; image segmentation
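The final bone-age estimate above comes from an average-based ensemble over the four per-region CNNs. The aggregation step itself reduces to a simple mean of the per-region estimates (values below are made up for illustration):

```python
def ensemble_average(region_predictions):
    """Average-ensemble of per-region bone-age estimates, one value per CNN."""
    return sum(region_predictions) / len(region_predictions)

# Hypothetical per-region estimates (years) from the four regional networks
age = ensemble_average([10.2, 10.8, 9.9, 10.5])
```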
Meqdad M.N.; Hussein A.H.; Husain S.O.; Jawad A.M.; Kadry S.
IAES International Journal of Artificial Intelligence , Vol. 12 (3), pp. 1459-1467
Article Open Access English ISSN: 20894872
Department of Computer Techniques Engineering, Al-Mustaqbal University College, Babil, Hillah, Iraq; Department of Computer Techniques Engineering, Imam Al-kadhum University College, Najaf, Iraq; Department of Computer Technical Engineering, College of Technical Engineering, The Islamic University, Najaf, Iraq; Department of Medical Instrumentation Techniques Engineering, Al-Mustaqbal University College, Babil, Hillah, Iraq; Department of Applied Data Science, Noroff University College, Kristiansand, Norway; Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, United Arab Emirates; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
Many studies have been conducted on human activity recognition (HAR) in the last decade. Accordingly, deep learning algorithms have received more attention for the classification of human daily activities. Deep neural networks (DNNs) compute and extract complex features from voluminous data through hidden layers that require large memory and powerful graphics processing units (GPUs). So, this study proposes a new joint learning (JL) approach to classify human activities using inertial sensors. To this end, a large, complex donor model based on a convolutional neural network (CNN) is used to transfer knowledge to a smaller CNN-based model referred to as the acceptor model. The acceptor model can be deployed on mobile devices and low-power hardware thanks to its decreased computing costs and memory consumption. The wireless sensor data mining (WISDM) dataset is used to test the proposed model. According to the experimental results, the HAR system based on the JL algorithm outperforms other methods. © 2023, Institute of Advanced Engineering and Science. All rights reserved.
Keywords: Convolutional neural network; Deep neural network; Graphics processing unit; Human activity recognition; Joint learning
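The donor/acceptor transfer described above is a knowledge-distillation setup. A common way to realize it, though not necessarily the paper's exact loss, is cross-entropy between the student's predictions and temperature-softened teacher outputs:

```python
import math

def softmax(logits, T=1.0):
    """Softmax over raw logits; temperature T > 1 softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against softened teacher soft targets."""
    p = softmax(teacher_logits, T)   # donor (teacher) soft targets
    q = softmax(student_logits, T)   # acceptor (student) predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# The loss is smallest when the acceptor mimics the donor's output distribution
matched = distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
mismatched = distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0])
```

In practice this soft-target term is usually mixed with the ordinary hard-label cross-entropy when training the smaller acceptor network.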
2022
9 papers
Vaiyapuri T.; Srinivasan S.; Sikkandar M.Y.; Balaji T.S.; Kadry S.; Meqdad M.N.; Nam Y.
Computers, Materials and Continua , Vol. 73 (3), pp. 5543-5557
23 citations Article Open Access English ISSN: 15462218
College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia; Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha Nagar, Thandalam, Chennai, 602105, India; Department of Medical Equipment Technology, College of Applied Medical Sciences, Majmaah University, Al Majmaah, 11952, Saudi Arabia; Department of Electronics and Communication Engineering, College of Engineering & Technology, SRM Institute of Science and Technology, Vadapalani Campus, Chennai, 600026, India; Department of Applied Data Science, Noroff University College, Kristiansand, 4608, Norway; Department of Computer Techniques Engineering, Al-Mustaqbal University College, Babil, Hillah, 51001, Iraq; Department of Computer Science and Engineering, Soonchunhyang University, Asan, 31538, South Korea
In past decades, retinal diseases have become more common, affecting people of all age groups across the globe. For examining retinal eye disease, an artificial intelligence (AI)-based multi-label classification model is needed for automated diagnosis. To analyze retinal maladies, the system proposes a multi-class and multi-label arrangement method. Classification frameworks based on features explicitly described by ophthalmologists using domain knowledge tend to be time-consuming, have weak generalization ability, and are unfeasible on massive datasets. Therefore, automated diagnosis of multiple retinal diseases becomes essential, which can be solved by deep learning (DL) models. With this motivation, this paper presents an intelligent deep learning-based multi-retinal disease diagnosis (IDL-MRDD) framework using fundus images. The proposed model aims to classify color fundus images into different classes, namely AMD, DR, Glaucoma, Hypertensive Retinopathy, Normal, Others, and Pathological Myopia. Besides, the artificial flora algorithm with Shannon's function (AFA-SF)-based multi-level thresholding technique is employed for image segmentation so that infected regions can be properly detected. In addition, a SqueezeNet-based feature extractor is employed to generate a collection of feature vectors. Finally, the stacked sparse autoencoder (SSAE) model is applied as a classifier to distinguish the input images into distinct retinal diseases. The efficacy of the IDL-MRDD technique is assessed on a benchmark multi-retinal disease dataset comprising data instances from different classes. The experimental values pointed out a superior outcome over the existing techniques, with a maximum accuracy of 0.963. © 2022 Tech Science Press. All rights reserved.
Keywords: computer aided diagnosis; deep learning; fundus images; intelligent models; Multi-retinal disease; segmentation
Cheng X.; Kadry S.; Meqdad M.N.; Crespo R.G.
Journal of Supercomputing , Vol. 78 (15), pp. 17114-17131
19 citations Article Open Access English ISSN: 09208542
Department of Computer Science, Middlesex University, London, United Kingdom; Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, Norway; Al-Mustaqbal University College, Babil, Hillah, Iraq; Computer Science Department, School of Engineering and Technology, Universidad Internacional de La Rioja, Logrono, 26006, Spain
Skin cancer is one of the acute diseases listed among the top five groups in the 2020 report of the World Health Organisation. This research aims to propose a Convolutional Neural Network framework to extract and evaluate the suspicious skin region. The framework consists of the following phases: (i) image collection and resizing, (ii) suspicious skin section extraction using VGG-UNet, (iii) deep-feature extraction, (iv) handcrafted feature mining from the suspicious skin section, (v) serial feature integration, and (vi) classifier training and validation. This research considered dermoscopy images of the International Skin Imaging Collaboration benchmark dataset for the experimental assessment, and the results of the proposed framework are analysed separately for the segmentation and classification tasks. In this work, benign and malignant class images are considered for examination, and during the classification task the deep and handcrafted features are integrated. The experimental results of this study present a segmentation accuracy of > 98% with UNet and a classification accuracy of > 98% with VGG16 combined with a Random Forest classifier. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Keywords: Benign; Dermoscopy; Malignant; Skin cancer; UNet; VGG16
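Phase (v) above, serial feature integration, amounts to concatenating each image's deep and handcrafted feature vectors into one vector before classifier training. A minimal sketch (shapes and values illustrative):

```python
def serial_fuse(deep_feats, handcrafted_feats):
    """Concatenate per-sample deep and handcrafted feature vectors (serial integration)."""
    return [list(d) + list(h) for d, h in zip(deep_feats, handcrafted_feats)]

# Two samples: 2 deep features each, fused with 1 handcrafted feature each
fused = serial_fuse([[0.1, 0.9], [0.4, 0.6]], [[7.0], [3.0]])
```

The fused vectors would then be fed to a classifier such as the Random Forest mentioned in the abstract.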
Arokiaraj Jovith A.; Mathapati M.; Sundarrajan M.; Gnanasankaran N.; Kadry S.; Meqdad M.N.; Aslam S.M.
Computers, Materials and Continua , Vol. 71 (2), pp. 3375-3392
13 citations Article Open Access English ISSN: 15462218
Department of Networking and Communications College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, 603203, India; Department of Computer Science and Engineering, Rajarajeswari College of Engineering, Bengaluru, 560074, India; Department of Computer Science and Engineering, K. Ramakrishnan College of Engineering, Tiruchirappalli, 621112, India; Department of Computer Science, Thiagarajar College, Madurai, 625019, India; Department of Applied Data Science, Noroff University College, Kristiansand, Norway; Al-Mustaqbal University College, Hillah, Iraq; Department of Information Technology, College of Computing and Information Sciences, Majmaah University, Al Majmaah, 11952, Saudi Arabia
In recent times, the Internet of Things (IoT) has become a hot research topic; it aims at interlinking several sensor-enabled devices, mainly for data gathering and tracking applications. Wireless Sensor Network (WSN) has been an important component of the IoT paradigm since its inception and has become the most preferred platform to deploy several smart city applications such as home automation, smart buildings, intelligent transportation, disaster management, and other IoT-based applications. Clustering methods are widely employed energy-efficient techniques whose primary purpose is to balance the energy among sensor nodes. Clustering and routing are considered Non-Polynomial (NP) hard problems, and bio-inspired techniques have long been employed to resolve such problems. The current research paper designs an Energy Efficient Two-Tier Clustering with Multi-hop Routing Protocol (EETTC-MRP) for IoT networks. The presented EETTC-MRP technique operates in different stages, namely tentative Cluster Head (CH) selection, final CH selection, and routing. In the first stage of the proposed EETTC-MRP technique, a type II fuzzy logic-based tentative CH (T2FL-TCH) selection is used. Subsequently, a Quantum Group Teaching Optimization Algorithm-based Final CH selection (QGTOA-FCH) technique is deployed to derive an optimum group of CHs in the network. Besides, a Political Optimizer based Multihop Routing (PO-MHR) technique is employed to derive an optimal selection of routes between CHs in the network. To validate the efficacy of the EETTC-MRP method, a series of experiments was conducted and the outcomes were examined under distinct measures. The experimental analysis infers that the proposed EETTC-MRP technique is superior to other methods under different measures. © 2022 Tech Science Press. All rights reserved.
Keywords: Clustering; Energy efficiency; Internet of things; Metaheuristics; Multi-hop routing; Wireless networks
Joshua Samuel Raj R.; Varalatchoumy M.; Helen Josephine V.L.; Jegatheesan A.; Kadry S.; Meqdad M.N.; Nam Y.
Computers, Materials and Continua , Vol. 71 (1), pp. 1095-1109
7 citations Article Open Access English ISSN: 15462218
Department of Information Science and Engineering, CMR Institute of Technology, Bengaluru, 560037, India; Department of Computer Science and Engineering, Cambridge Institute of Technology, Bengaluru, 560036, India; Department of Computer Applications, CMR Institute of Technology, Bengaluru, 560037, India; Department of Computer Science and Engineering, Swarnandhra College of Engineering and Technology, Narasapur, 534280, India; Department of Applied Data Science, Noroff University College, Kristiansand, 4612, Norway; Department of Computer Techniques Engineering, Al-Mustaqbal University College, Hillah, 51001, Iraq; Department of Computer Science and Engineering, Soonchunhyang University, 31538, South Korea
Internet of Things (IoT) is transforming the technical setting of conventional systems and finds applicability in smart cities, smart healthcare, smart industry, etc. In addition, application areas relating to IoT-enabled models are resource-limited and necessitate crisp responses, low latencies, and high bandwidth, which are beyond their abilities. Cloud computing (CC) is treated as a resource-rich solution to the above-mentioned challenges, but the intrinsic high latency of CC makes it nonviable: the longer latency degrades the outcome of IoT-based smart systems. CC is an emergent, dispersed, inexpensive computing pattern with a massive assembly of heterogeneous autonomous systems. The effective use of task scheduling minimizes the energy utilization of the cloud infrastructure and raises the income of service providers by minimizing the processing time of user jobs. With this motivation, this paper presents an intelligent Chaotic Artificial Immune Optimization Algorithm for Task Scheduling (CAIOA-RS) in an IoT-enabled cloud environment. The proposed CAIOA-RS algorithm solves the issue of resource allocation in the IoT-enabled cloud environment. It also satisfies the makespan by carrying out the optimum task scheduling process with distinct strategies for incoming tasks. The design of the CAIOA-RS technique incorporates the concept of chaotic maps into the conventional AIOA to enhance its performance. A series of experiments was carried out on the CloudSim platform. The simulation results demonstrate that the CAIOA-RS technique outperforms the original version, as well as other heuristics and metaheuristics. © 2022 Tech Science Press. All rights reserved.
Keywords: Cloud computing; Internet of things; Metaheuristics; Resource allocation; Task scheduling
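The abstract does not say which chaotic map CAIOA-RS uses; the logistic map is the standard choice for injecting chaotic sequences into a metaheuristic in place of uniform random draws, sketched here (parameters illustrative):

```python
def logistic_map_sequence(x0=0.7, r=4.0, n=10):
    """Chaotic logistic-map values in [0, 1].

    With r = 4 the map x -> r*x*(1-x) is fully chaotic; the resulting
    sequence can replace uniform random numbers inside a metaheuristic
    to improve exploration of the search space.
    """
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

chaos = logistic_map_sequence()
```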
Al Attar F.; Kadry S.; Manic K.S.; Meqdad M.N.
Journal of Physics: Conference Series , Vol. 2318 (1)
3 citations Conference paper Open Access English ISSN: 17426588
Department of Electrical and Communication Engineering, National University of Science and Technology, Muscat, Oman; Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, 94612, Norway; Al-Mustaqbal University College, Babil, Hillah, Iraq
The vital organ in human physiology is the brain, and abnormality in the brain can cause various behavioural problems. Ischemic stroke is a medical emergency, and early detection and action will help the patient recover quickly. This scheme aims to implement a Convolutional Neural Network (CNN) segmentation method to extract and evaluate the infected portion of an MRI slice of the brain. In our study the pre-trained UNet scheme is adopted to extract the stroke region from Flair-modality MRI slices in the axial, coronal, and sagittal planes. In this work, the ISLES2015 database is used for the experimental investigation. The segmented portion is then evaluated against the ground truth, and metrics such as Jaccard, Dice, and Accuracy are computed. The experimental investigation is implemented using Python software. The experimental outcome of this research proves that the proposed CNN scheme improves segmentation accuracy on axial-plane images compared with other images. The performance of the CNN segmentation scheme is then validated against other related results in the literature. The outcome of this study confirms that the UNet-supported technique helps extract the stroke lesion from the MRI slice with higher accuracy. © Published under licence by IOP Publishing Ltd.
الكلمات المفتاحية: Accuracy Flair modality Segmentation Stroke lesion UNet
Meqdad M.N.; Kadry S.; Rauf H.T.
Future Internet , Vol. 14 (10)
3 استشهاد Article Open Access English ISSN: 19995903
Computer Techniques Engineering Department, Al-Mustaqbal University College, Hillah, 51001, Iraq; Department of Applied Data Science, Noroff University College, Kristiansand, 4612, Norway; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, 1102, Lebanon; Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman, 20550, United Arab Emirates; Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, ST4 2DE, United Kingdom
Things receive digital intelligence by being connected to the Internet and by adding sensors. With the use of real-time data and this intelligence, things may communicate with one another autonomously. The environment surrounding us will become more intelligent and reactive, merging the digital and physical worlds thanks to the Internet of things (IoT). In this paper, an optimal methodology has been proposed for distinguishing outlier sensors of the Internet of things based on a developed design of a dragonfly optimization technique. Here, a modified structure of the dragonfly optimization algorithm is utilized for optimal area coverage and energy consumption reduction. This paper uses four parameters to evaluate its efficiency: the minimum number of nodes in the coverage area, the lifetime of the network, including the time interval from the start of the first node to the shutdown time of the first node, and the network power. The results of the suggested method are compared with those of some other published methods. The results show that by increasing the number of steps, the energy of the live nodes will eventually run out and turn off. In the LEACH method, after 350 steps, the RED-LEACH method, after 750 steps, and the GSA-based method, after 915 steps, the nodes start shutting down, which occurs after 1227 steps for the proposed method. This means that the nodes are turned off later. Simulations indicate that the suggested method achieves better results than the other examined techniques according to the provided performance parameters. © 2022 by the authors.
الكلمات المفتاحية: improved dragonfly optimization algorithm Internet of things sensor detection
Kadry S.; Taniar D.; Meqdad M.N.; Srivastava G.; Rajinikanth V.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) , Vol. 13119 LNAI, pp. 47-56
2 استشهاد Conference paper English ISSN: 03029743
Department of Applied Data Science, Noroff University College, Kristiansand, Norway; Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon; Faculty of Information Technology, Monash University, Monash, Australia; Al-Mustaqbal University College, Babil, Hillah, Iraq; Department of Mathematics and Computer Science, Brandon University, 270 18th Street, Brandon, R7A 6A9, Canada; Department of Electronics and Instrumentation, St. Joseph’s College of Engineering, Tamilnadu, Chennai, 600 119, India
Medical image assessment plays a vital role in hospitals during the disease assessment and decision making. Proposed work aims to develop an image processing procedure to appraise the brain tumor fragment from Flair modality recorded MRI slice. The proposed technique employs joint thresholding and segmentation practice to extract the infected part from the chosen image. Initially, a tri-level thresholding based on Mayfly Algorithm and Kapur’s Entropy (MA + KE) is implemented to improve the tumor and then the tumor area is mined using the automated Watershed Segmentation Scheme (WSS). The merit of the employed procedure is verified on various 2D MRI planes, such as axial, coronal and sagittal and the experimental outcome confirmed that this technique helps to mine the tumor area with better accuracy. In this work, the necessary images are collected from BRATS2015 dataset and 30 patient’s information (10 slices per patient) is considered for the examination. The experimental investigation is implemented using MATLAB® and 300 images from every 2D plane are examined. The proposed technique helps to get better values of Jaccard-Index (>85%), Dice-coefficient (>91%) and Accuracy (98%) on the considered MRI slices. © 2022, Springer Nature Switzerland AG.
الكلمات المفتاحية: Brain tumor Flair modality Kapur Mayfly algorithm Watershed algorithm
Achuthan G.; Kadry S.; Suresh Manic K.; Meqdad M.N.
Journal of Physics: Conference Series , Vol. 2318 (1)
1 استشهاد Conference paper Open Access English ISSN: 17426588
Department of Electrical and Communication Engineering, National University of Science and Technology, Oman; Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, 94612, Norway; Al-Mustaqbal University College, Babil, Hillah, Iraq
Deep-Learning-Scheme (DLS) based medical data assessment has been widely employed in recent years due to its improved accuracy. Our goal is to study the performance of the pre-trained DLS on RGB-scale breast-histology images. The implemented idea holds these phases; (i) Data collection, pre-processing and resizing, (ii) Training the DLS with chosen test-pictures, (iii) Testing and validating the performance of the DLS with 5-fold cross-validation. This investigation considered the breast-histology pictures for the study and binary classification is employed to achieve Normal/Cancer class grouping of images. The proposed work compared the classification performance of AlexNet, VGG16 and VGG19.The experimental outcome of this study authenticates that the AlexNet with the Random-Forest (RF) classifier helps to get a higher classification accuracy (>87%) compared to VGG16 and VGG19. © Published under licence by IOP Publishing Ltd.
الكلمات المفتاحية: AlexNet Breast Cancer Classification Histology Random Forest
Kadry S.; Rajinikanth V.; Srivastava G.; Meqdad M.N.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) , Vol. 13119 LNAI, pp. 57-66
1 استشهاد Conference paper English ISSN: 03029743
Department of Applied Data Science, Noroff University College, Kristiansand, Norway; Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon; Department of Electronics and Instrumentation, St. Joseph’s College of Engineering, Tamilnadu, Chennai, 600 119, India; Department of Mathematics and Computer Science, Brandon University, 270 18th Street, Brandon, R7A 6A9, Canada; Al-Mustaqbal University College, Babil, Hillah, Iraq
Incidence rate of Breast Cancer (BC) is rising globally and the early detection is important to cure the disease. The detection of BC consist different phases from verification to clinical level diagnosis. Confirmation of the cancer and its stage is performed normally with breast biopsy. This research aims to develop a framework to identify Benign/Malignant class images from the Breast Histology Slide (BHS). This technique consist the following phases; (i) Cropping and resizing the image slice, (ii) Deep-feature extraction using pre-trained network, (iii) Discrete Wavelet Transform (DWT) feature mining, (iv) Optimal feature selection with Mayfly algorithm, (v) Serial feature concatenation, and (vi) Binary classification and validation. This work considered the test image with dimension 896 × 768 × 3 pixels. During the investigation, every picture is cropped into 25 slices and resized to 224 × 224 × 3 pixels. This work implements the following stages; (i) BC detection with deep-features and (ii) BC recognition with concatenated features. In both the cases, a 5-fold cross validation is employed and the experimental investigation of this research confirms that the proposed work helped to achieve an accuracy of 91.39% with deep-feature and 95.56% with concatenation features. © 2022, Springer Nature Switzerland AG.
الكلمات المفتاحية: Breast cancer Classification DWT features Histology slide ResNet18
2021
4 بحث
Ramasamy L.K.; Kadry S.; Nam Y.; Meqdad M.N.
International Journal of Electrical and Computer Engineering , Vol. 11 (3), pp. 2275-2284
47 استشهاد Article Open Access English ISSN: 20888708
Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India; Department of Mathematics and Computer Science, Faculty of Science, Beirut Arab University, Beirut, Lebanon; Department of Computer Science and Engineering, Soonchunhyang University, Asan, 31538, South Korea; Al-Mustaqbal University College, Hillah, Babil, Iraq
Sentiment analysis is a current research topic by many researches using supervised and machine learning algorithms. The analysis can be done on movie reviews, twitter reviews, online product reviews, blogs, discussion forums, Myspace comments and social networks. The Twitter data set is analyzed using support vector machines (SVM) classifier with various parameters. The content of tweet is classified to find whether it contains fact data or opinion data. The deep analysis is required to find the opinion of the tweets posted by the individual. The sentiment is classified in to positive, negative and neutral. From this classification and analysis, an important decision can be made to improve the productivity. The performance of SVM radial kernel, SVM linear grid and SVM radial grid was compared and found that SVM linear grid performs better than other SVM models. © 2021 Institute of Advanced Engineering and Science. All rights reserved.
الكلمات المفتاحية: Machine learning Sentiment analysis SVM model Twitter dataset
Khan F.; Kumar R.L.; Kadry S.; Nam Y.; Meqdad M.N.
International Journal of Electrical and Computer Engineering , Vol. 11 (4), pp. 3013-3021
40 استشهاد Article Open Access English ISSN: 20888708
Higher Colleges of Technology, United Arab Emirates; Hindusthan College of Engineering and Technology, Coimbatore, India; Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, Norway; Department of Computer Science and Engineering, Soonchunhyang University, South Korea; Al-Mustaqbal University College, Hillah, Iraq
Autonomous vehicles have been invented to increase the safety of transportation users. These vehicles can sense their environment and make decisions without any external aid to produce an optimal route to reach a destination. Even though the idea sounds futuristic and if implemented successfully, many current issues related to transportation will be solved, care needs to be taken before implementing the solution. This paper will look at the pros and cons of implementation of autonomous vehicles. The vehicles depend highly on the sensors present on the vehicles and any tampering or manipulation of the data generated and transmitted by these can have disastrous consequences, as human lives are at stake here. Various attacks against the different type of sensors on-board an autonomous vehicle are covered. © 2021 Institute of Advanced Engineering and Science. All rights reserved.
الكلمات المفتاحية: Autonomous vehicles Cooperative driving LiDAR Security Ultrasonic sensors
Khan F.; Lakshmana Kumar R.; Kadry S.; Nam Y.; Meqdad M.N.
International Journal of Electrical and Computer Engineering , Vol. 11 (4), pp. 3609-3616
40 استشهاد Article Open Access English ISSN: 20888708
Higher Colleges of Technology, United Arab Emirates; Hindusthan College of Engineering and Technology, Coimbatore, India; Department of Mathematics and Computer Science, Faculty of Science, Beirut Arab University, Lebanon, Lebanon; Department of Computer Science and Engineering, Soonchunhyang University, South Korea; Al-Mustaqbal University College, Hillah, Babil, Iraq
Cyber-physical system (CPS) is a terminology used to describe multiple systems of existing infrastructure and manufacturing system that combines computing technologies (cyber space) into the physical space to integrate human interaction. This paper does a literature review of the work related to CPS in terms of its importance in today's world. Further, this paper also looks at the importance of CPS and its relationship with internet of things (IoT). CPS is a very broad area and is used in variety of fields and some of these major fields are evaluated. Additionally, the implementation of CPS and IoT is major enabler for smart cities and various examples of such implementation in the context of Dubai and UAE are researched. Finally, security issues related to CPS in general are also reviewed. © 2021 Institute of Advanced Engineering and Science. All rights reserved.
الكلمات المفتاحية: Cyber physical systems internet of things Intelligent transportation Smart building Smart cities Smart grid Smart manufacturing
Alferov G.; Efimova P.; Shymanchuk D.; Kadry S.; Meqdad M.N.
Telkomnika (Telecommunication Computing Electronics and Control) , Vol. 19 (6), pp. 1962-1974
2 استشهاد Article Open Access English ISSN: 16936930
Faculty of Applied Mathematics and Control Processes, St. Petersburg State University, St.-Petersburg, Russian Federation; Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, Norway; Al-Mustaqbal University College, Hillah, Iraq
The main obstacle of the construction of efficient remote-control systems for space robots is a significant delay in transmissions of control signals to robots from the earth-based control center and receiving feedback signals. This significantly complicates the solution of control problem, especially if robot’s manipulators move objects that have mechanical constraints. Our work describes a method for bilateral control of a space robot with large delays. The uniqueness of this method lies in the special structure of the control algorithm. Bilateral control implies force feedback necessary for the interaction of a space robot with objects that have holonomic connections. We present a new mathematical model of the elements of the bilateral control system and their computer implementation using specific examples. © 2020. All Rights Reserved.
الكلمات المفتاحية: Bilateral control Force feedback Nonlinear control systems Remote control Space robots
2020
5 بحث
Sekaran K.; Meqdad M.N.; Kumar P.; Rajan S.; Kadry S.
Telkomnika (Telecommunication Computing Electronics and Control) , Vol. 18 (3), pp. 1275-1284
101 استشهاد Article Open Access English ISSN: 16936930
Vignan Institute of Technology and Science, India; Al-Mustaqbal University College, Iraq; G. H. Raisoni College of Engineering, India; Department of Mathematics and Computer Science, Beirut Arab University, Lebanon
In the world of digital era, an advance development with internet of things (IoT) were initiated, where devices communicate with each other and the process are automated and controlled with the help of internet. An IoT in an agriculture framework includes various benefits in managing and monitoring the crops. In this paper, an architectural framework is developed which integrates the internet of things (IoT) with the production of crops, different measures and methods are used to monitor crops using cloud computing. The approach provides real-time analysis of data collected from sensors placed in crops and produces result to farmer which is necessary for the monitoring the crop growth which reduces the time, energy of the farmer. The data collected from the fields are stored in the cloud and processed in order to facilitate automation by integrating IoT devices. The concept presented in the paper could increase the productivity of the crops by reducing wastage of resources utilized in the agriculture fields. The results of the experimentation carried out presents the details of temperature, soil moisture, humidity and water usage for the field and performs decision making analysis with the interaction of the farmer. © 2019 Universitas Ahmad Dahlan.
الكلمات المفتاحية: Internet of things (IoT) Management system Smart agriculture
Sekaran K.; Chandana P.; Jeny J.R.V.; Meqdad M.N.; Kadry S.
Telkomnika (Telecommunication Computing Electronics and Control) , Vol. 18 (3), pp. 1268-1274
31 استشهاد Article Open Access English ISSN: 16936930
Department of Computer Science and Engineering, Vignan Institute of Technology and Science, India; Al-Mustaqbal University College, Iraq; Department of Mathematics and Computer Sciecne, Beirut Arab Univeristy, Lebanon
Natural language processing is the trending topic in the latest research areas, which allows the developers to create the human-computer interactions to come into existence. The natural language processing is an integration of artificial intelligence, computer science and computer linguistics. The research towards natural Language Processing is focused on creating innovations towards creating the devices or machines which operates basing on the single command of a human. It allows various Bot creations to innovate the instructions from the mobile devices to control the physical devices by allowing the speech-tagging. In our paper, we design a search engine which not only displays the data according to user query but also performs the detailed display of the content or topic user is interested for using the summarization concept. We find the designed search engine is having optimal response time for the user queries by analyzing with number of transactions as inputs. Also, the result findings in the performance analysis show that the text summarization method has been an efficient way for improving the response time in the search engine optimizations. © 2019 Universitas Ahmad Dahlan.
الكلمات المفتاحية: Artificial intelligence Bot creation Natural language processing Search engine Text summarization
Vijayalaxmi B.; Anuradha C.; Sekaran K.; Meqdad M.N.; Kadry S.
Bulletin of Electrical Engineering and Informatics , Vol. 9 (3), pp. 1189-1197
13 استشهاد Article Open Access English ISSN: 20893191
Electronics& Communication Engineering Department, Vignan Institute of Technology and Science, India; Department of BS&H, Vignan Institute of Technology and Science, India; Department of Computer Science and Engineering, Vignan Institute of Technology & Science, India; Al-Mustaqbal University College, Iraq; Department of Mathematics and Computer Science, Faculty of Science, Beirut Arab University, Lebanon
Lately, many of the road accidents have been attributed to the driver stupor. Statistics revealed that about 32% of the drivers who met with such accidents demonstrated the symptoms of tiredness before the mishap though at vary ing levels. The purpose of this research paper is to revisit the various interventions that have been devised to provide for assistance to the vehicle users to avert unwarranted contingencies on the roads. The paper tries to make a sincere attempt to encapsulate the body of work that has been initiated so far in this direction. As is evident, there are numerous ways in which one can identify the fatigue of the driver, namely biotic or physiological gauges, vehicle type and more importantly the analysis of the face in terms of its alignment and other attributes. © 2020, Institute of Advanced Engineering and Science. All rights reserved.
الكلمات المفتاحية: Driver behavior Eye detection Face detection Fatigue Skin
Meqdad M.N.; Al-Akam R.; Kadry S.
Telkomnika (Telecommunication Computing Electronics and Control) , Vol. 18 (6), pp. 3331-3338
6 استشهاد Article Open Access English ISSN: 16936930
Al-Mustaqbal University College, Iraq; Koblenz-Landau University, Germany; Beirut Arab University, Lebanon
Information diffusion prediction is the study of the path of dissemination of news, information, or topics in a structured data such as a graph. Research in this area is focused on two goals, tracing the information diffusion path and finding the members that determine future the next path. The major problem of traditional approaches in this area is the use of simple probabilistic methods rather than intelligent methods. Recent years have seen growing interest in the use of machine learning algorithms in this field. Recently, deep learning, which is a branch of machine learning, has been increasingly used in the field of information diffusion prediction. This paper presents a machine learning method based on the graph neural network algorithm, which involves the selection of inactive vertices for activation based on the neighboring vertices that are active in a given scientific topic. Basically, in this method, information diffusion paths are predicted through the activation of inactive vertices by active vertices. The method is tested on three scientific bibliography datasets: The Digital Bibliography and Library Project (DBLP), Pubmed, and Cora. The method attempts to answer the question that who will be the publisher of the next article in a specific field of science. The comparison of the proposed method with other methods shows 10% and 5% improved precision in DBLP and Pubmed datasets, respectively. © 2020. All rights reserved.
الكلمات المفتاحية: Data spreading Machine learning Prediction Social network
Vijayalaxmi B.; Sekaran K.; Neelima N.; Chandana P.; Meqdad M.N.; Kadry S.
Bulletin of Electrical Engineering and Informatics , Vol. 9 (2), pp. 785-791
4 استشهاد Article Open Access English ISSN: 20893191
Vignan Institute of Technology & Science, Hyderabad, India; CMR Institute of Technology, Hyderabad, India; Al-Mustaqbal University College, Hillah, Babil, Iraq; Department of Mathematics and Computer Science, Faculty of Science, Beirut Arab University, Lebanon
Driver Assistance system is significant in drriver drowsiness to avoid on road accidents. The aim of this research work is to detect the position of driver’s eye for fatigue estimation. It is not unusual to see vehicles moving around even during the nights. In such circumstances there will be very high probability that a driver gets drowsy which may lead to fatal accidents. Providing a solution to this problem has become a motivating factor for this research, which aims at detecting driver fatigue. This research concentrates on locatingthe eye region failing which a warning signal is generated so as to alert the driver. In this paper, an efficient algorithm is proposed for detecting the location of an eye, which forms an invaluable insight for driver fatigue detection after the face detection stage. After detecting the eyes, eye tracking for input videos has to be achieved so that the blink rate of eyes can be determined. © 2020, Institute of Advanced Engineering and Science. All rights reserved.
الكلمات المفتاحية: Driver assistance system Eye region Fatigue detection Image Processing Tracking
2018
1 بحث
Meqdad M.N.; Majdi H.S.
9th International Symposium on Telecommunication: With Emphasis on Information and Communication Technology, IST 2018 , pp. 457-459
1 استشهاد Conference paper English
Al-Mustaqbal University College, Hillah, Babil, Iraq
This paper is concerning with the combination of two enhanced techniques to investigate the system efficiency of non-coherent spectral amplitude coding optical code division multiple access (SAC-OCDMA) that based upon zero cross correlation (ZCC) codes. These techniques are: The usage of semiconductor optical amplifier (SOA) method and the utilization of two code keying scheme. The outcomes obtained from OptiSystem simulator prove that the combination of these approaches enables a 5-channel non-coherent SAC-OCDMA system to transmit a data rate of 10 Gbps over 93 km distance at acceptable bit error rate (BER). © 2018 IEEE.
الكلمات المفتاحية: Optical Code Division Multiple Access Semiconductor Optical Amplifier Spectral Amplitude Coding Two Code Keying Scheme