Email

[email protected]

Phone number

6163

Asst. Prof. Dr. Ali Kadhum Mohammed Al-Qurabat

Scopus Publications — Asst. Prof. Dr. Ali Kadhum Mohammed Al-Qurabat

Computer Science • Wireless Sensor Networks

32 total publications
504 total citations
2026 most recent publication
3 publication types
Showing 32 publications
2026
1 publication
Al-Qurabat A.K.M.; Lateef H.M.; Matloob A.Z.K.; Mohammed A.K.
Telecommunication Systems , Vol. 89 (1)
1 citation Article English ISSN: 10184864
Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Hillah, Babylon, 51001, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Hillah, Babylon, 51002, Iraq; Department of Cybersecurity, College of Information Technology, University of Babylon, Hillah, Babylon, 51002, Iraq; College of Dentistry, Al-Mustaqbal University, Hillah, Babylon, 51001, Iraq
Wireless Sensor Networks (WSNs) play a vital role in applications ranging from smart cities to environmental monitoring, yet their performance is often limited by inefficient cluster head (CH) selection. This paper introduces OCHSAT, a novel clustering framework that integrates Analytic Hierarchy Process (AHP) and Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to achieve robust and adaptive CH selection. Unlike prior Multi-Attribute Decision-Making (MADM)-based approaches, OCHSAT dynamically considers residual energy, spatial centrality, and distance to the base station, ensuring balanced energy consumption and scalability. Extensive simulations demonstrate that OCHSAT significantly improves network performance, extending lifetime by up to 68%, reducing delay by 42%, and enhancing throughput and reliability by up to 58% and 79%, respectively, compared to state-of-the-art protocols. These results are statistically validated (p<0.05), underscoring OCHSAT’s robustness. By enabling more sustainable and scalable WSN operations, OCHSAT contributes to applications aligned with global goals, including smart cities, clean water monitoring, and climate action. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2026.
Keywords: AHP–TOPSIS; Cluster Head Selection; Energy Efficiency; Environmental Monitoring; Multi-Attribute Decision Making; Smart Cities; Wireless Sensor Networks
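The TOPSIS ranking step at the core of the OCHSAT pipeline above can be illustrated with a minimal sketch in Python. The decision matrix, the AHP-style weights, and the benefit/cost directions below are invented for illustration only; the paper's actual criterion values and weighting are not reproduced here.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidate nodes, columns =
# (residual energy, spatial centrality, distance to base station).
X = np.array([
    [0.9, 0.6, 120.0],
    [0.7, 0.8,  80.0],
    [0.5, 0.9,  60.0],
], dtype=float)

weights = np.array([0.5, 0.3, 0.2])      # e.g. derived from an AHP pairwise matrix
benefit = np.array([True, True, False])  # distance to the base station is a cost

def topsis(X, weights, benefit):
    # 1. Vector-normalize each criterion column, then apply the weights.
    V = weights * X / np.linalg.norm(X, axis=0)
    # 2. Ideal best/worst per criterion (max for benefits, min for costs).
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 3. Euclidean distances of each alternative to both ideals.
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    # 4. Closeness coefficient in [0, 1]; higher = better CH candidate.
    return d_worst / (d_best + d_worst)

scores = topsis(X, weights, benefit)
ch = int(np.argmax(scores))  # index of the elected cluster head
```

The same ranking re-runs each round with updated residual energies, which is what makes the selection adaptive.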
2025
12 publications
Saeedi I.D.I.; Al-Qurabat A.K.M.
Physical Communication , Vol. 72
5 citations Review Open Access English ISSN: 18744907
Department of Information Networks, College of Information Technology, University of Babylon, Babylon, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, 51001, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq
Unmanned aerial vehicle (UAV)-based wireless networks have received increasing research interest in recent years and are gradually being utilized in various aspects of our society. Due to the growing demand for UAV applications such as disaster management, plant protection, and environment monitoring, mobile edge computing (MEC) was introduced to ease the burden on the restricted resources of Internet of Things (IoT) devices. Note that UAV support is crucial for establishing reliable connections in regions lacking, or with inadequate, communication infrastructure. Combining UAV-assisted communication with MEC has been seen as a potential paradigm shift for handling the increasing demands for big data processing from UAV-aided IoT applications. In this paper, the overall performance of MEC is determined via offloading modeling. We provide a synopsis of all the relevant research on offloading modeling, including both historical developments and more recent breakthroughs. First, we present some key aspects of edge computing architecture and classify previous works on computation offloading into different categories. Second, we give an overview of offloading and its metrics, and discuss UAVs, MEC, collaboration between UAVs and MEC, and offloading strategies, methodologies, and factors. The two main categories of offloading strategies are full and partial offloading. Finally, a discussion and future research directions related to offloading by UAVs are presented. © 2025 Elsevier B.V.
Keywords: Computation offloading; Improve energy efficiency; IoT; Mobile edge computing; Smart cities; Unmanned aerial vehicle
Matloob A.Z.K.; Aksoy M.; Al-Qurabat A.K.M.; AL lawndi N.A.
Telecommunication Systems , Vol. 88 (2)
1 citation Article English ISSN: 10184864
Department of Cybersecurity, College of Information Technology, University of Babylon, Babylon, Hillah, 51001, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, Hillah, 51001, Iraq; Computer Information Systems Department, Ahmed Bin Mohammed Military College, Doha, 22988, Qatar; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Hillah, 51001, Iraq; Technical Institute of Babylon, Al-Furat Al-Awsat Technical University, Babylon, Hillah, 51015, Iraq
This paper critically analyzes the influence of non-terrestrial networks (NTN) on the random access mechanism of 5G New Radio (NR). The use of NTN in 5G enables widespread connectivity but presents technological issues such as heightened propagation delays, differential delays, and Doppler shifts. This work investigates the effect of NTN on Physical Random Access Channel (PRACH) preamble configurations, random access response window lengths, and uplink timing advance techniques. We present a novel method that optimizes these values to improve NR random access efficiency in NTN environments. Key considerations include switching from static to adaptive timing advance models, as well as Doppler-resilient PRACH preamble designs and adaptive response window approaches. These improvements lower latency and increase synchronization accuracy, thereby enhancing NTN-supported 5G NR implementations. The results of the research are vital for increasing the dependability and user experience of next-generation wireless communication systems coupled with NTN. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
Keywords: 5G NR; Non-terrestrial network; Propagation delay; Random access
Saeedi I.D.I.; Al-Qurabat A.K.M.
Journal of Supercomputing , Vol. 81 (4)
Article English ISSN: 09208542
Department of Information Networks, College of Information Technology, University of Babylon, Babylon, Hillah, 51002, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, Hillah, 51001, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Hillah, 51002, Iraq
Energy efficiency and prolonged network lifetime are of high importance for managing Internet of Things (IoT) sensor nodes in the context of Space-Air-Ground Integrated Networks (SAGINs). This work presents an Energy-Efficient Cluster Head Selection using the Osprey Optimization Algorithm (EECHOOA), utilizing clustering as the first layer in SAGINs to group IoT sensor nodes into clusters and thereby optimize data aggregation and communication. Existing methods, such as ZFO-SHO, PUAG, NCOGA, and MMABC, often struggle with limited adaptability to dynamic network conditions and suboptimal energy efficiency. The proposed EECHOOA addresses these shortcomings by introducing dynamic cluster head (CH) selection and scalable clustering techniques optimized for dense IoT environments. At the individual node level, each sensor consumes energy. A CH controls the process and relays information to the upper layers, minimizing total energy consumption. We further enhance the clustering process with the osprey optimization algorithm (OOA) for intelligent CH selection. The OOA employs the unique behaviors of ospreys to identify optimal CHs dynamically as a function of node energy levels and proximity. Simulation results indicate that our proposed clustering approach, integrated with OOA for CH selection, achieves 56.25-76.35% energy savings and extends network lifetime by 50-100% compared to the state of the art, such as ZFO-SHO, PUAG, NCOGA, and MMABC. This study demonstrates that clustering techniques can be combined with capable optimization algorithms in SAGINs to enable more sustainable and effective IoT-based networks that are responsive to a variety of applications. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
Keywords: Improve energy efficiency; IoT; Osprey optimization; Smart city; Space-air-ground integrated networks
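The CH-selection objective described above (high residual energy, good proximity to members) can be sketched as a fitness function driving a small population search. The node data, the fitness weights, and the replacement loop below are illustrative stand-ins: the abstract does not give the actual OOA update equations, so this is a generic population-based search, not the osprey algorithm itself.

```python
import random
import math

# Hypothetical sensor field: (x, y, residual_energy), values illustrative.
random.seed(1)
nodes = [(random.uniform(0, 100), random.uniform(0, 100), random.uniform(0.2, 1.0))
         for _ in range(30)]

def fitness(ch_idx, nodes, w_energy=0.6, w_dist=0.4):
    """Score a CH candidate: reward residual energy, penalize mean member distance."""
    cx, cy, ce = nodes[ch_idx]
    mean_d = sum(math.hypot(x - cx, y - cy) for x, y, _ in nodes) / len(nodes)
    max_d = math.hypot(100, 100)  # field diagonal, for normalization
    return w_energy * ce + w_dist * (1.0 - mean_d / max_d)

# Simple replace-the-worst population search standing in for the OOA updates.
population = random.sample(range(len(nodes)), 5)
for _ in range(20):
    candidate = random.randrange(len(nodes))
    worst = min(population, key=lambda i: fitness(i, nodes))
    if candidate not in population and fitness(candidate, nodes) > fitness(worst, nodes):
        population[population.index(worst)] = candidate

best_ch = max(population, key=lambda i: fitness(i, nodes))
```

In the paper's setting this election would be repeated per round as energies drain, which is what balances consumption across the cluster.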
Mohammed A.K.; Al-Attar B.; Pokale N.B.; Fallah D.; Ahmad A.H.; Mahdi S.A.; Azize S.A.; Divekar N.; Sekhar R.
ICCR 2025 - 3rd International Conference on Cyber Resilience
Conference paper English
Al- Mustaqbal University, College of Engineering and Technologies, Babylon, 51001, Iraq; University of Al-Ameed, College of Medicine, Karbala, PO Box 198, Iraq; Dr. D. Y. Patil Institute of Technology, Department of Artificial Intelligence and Data Science, Pimpri, Maharashtra, Pune, 411018, India; Al-Turath University, Baghdad, Iraq; Al-Esrra University, Baghdad, Iraq; University of Hilla, Computer Center, Babylon, 51011, Iraq; Al-Ma'moon University College, Department of Computer Techniques Engineering, Al-Washash, Baghdad, Iraq; Symbiosis International (Deemed University) (SIU), Symbiosis Institute of Technology (SIT), Pune Campus, Maharashtra, Pune, 412115, India
This work presents a high-performance Transformer-based model for cross-language semantic code clone detection, leveraging multilingual token embeddings, structural abstraction fusion, and contrastive learning objectives. Utilizing fine-tuned CodeBERT as the backbone and combining AST/CFG-based semantic hints, the framework was evaluated on heterogeneous language pairs such as Java-Python, C++-JavaScript, Python-Ruby, and Java-C#. The model consistently achieved clone detection accuracy above 97.4%, with F1-scores reaching as high as 0.915 and Top-1 retrieval rates of more than 93.2%, confirming its semantic accuracy for both Type-3 and Type-4 clone types. In comparison with conventional lexical and syntactic clone detectors, this architecture achieved a 27-32% decrease in false positives and posted a 5.2% average improvement in accuracy through structure-aware encoding. The incorporation of InfoNCE contrastive loss and hybrid semantic-path alignment allowed strong inter-language representation learning even under obfuscated identifiers and permutations in logic. With an average query latency of under 180 ms for more than a million indexed functions, the model facilitates real-time, zero-shot inference across unseen repositories. Its encoder-agnostic architecture guarantees flexibility toward changing programming languages and integration into CI/CD pipelines, independent of handcrafted rules or language-specific parsing engines. © 2025 IEEE.
Keywords: AST Fusion; Code Clone Detection; CodeBERT; Contrastive Learning; Cross-Language Embeddings; Semantic Similarity; Software Reuse; Transformer Models
Oleiwi W.K.; Hussein A.M.; Gheni H.Q.; Al-Qurabat A.K.M.
International Journal of Safety and Security Engineering , Vol. 15 (10), pp. 2093-2102
Article Open Access English ISSN: 20419031
Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Hillah, 51002, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, Hillah, 51001, Iraq
This paper presents a new framework to address the vulnerabilities at the East-West interface in distributed Software Defined Networking (SDN) environments, increasing privacy and security. Current decentralized SDN architectures are vulnerable to exposing sensitive data during controller-to-controller communication, which makes them very prone to cyberattacks. The proposed framework therefore uses state-of-the-art cryptographic methods, e.g., homomorphic encryption and Zero-Knowledge Proofs (ZKPs), as well as a smart-contract consensus mechanism that surpasses conventional consensus protocols such as Practical Byzantine Fault Tolerance (PBFT) and Raft. The framework is based on three major layers: the privacy layer, securing data confidentiality through privacy-preserving schemes like robust encryption; the consensus mechanism, for secure and efficient transaction validation; and its enhancement according to security needs through mutual authentication and periodic key rotation, to further harden it against attacks. A comprehensive mathematical model is developed for quantifying the key performance indicators, including privacy leakage, attack success rate, latency, and throughput. Experimental evaluations performed in controlled environments using Mininet, OpenDayLight, and GNS3 show significant improvements: a 100% reduction in privacy leakage, a 90% reduction in attack success rate, a 30% reduction in latency, and a 25% increase in throughput compared to existing solutions. As a net result, the proposed model secures the East-West interface in distributed SDN environments, protecting sensitive information while preserving performance.
The experimental results are encouraging not only in terms of feasibility but also as a basis for further research on incorporating machine-learning threat detection systems and adapting the framework to ultra-large-scale networks. ©2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).
Keywords: consensus model; East-West interface; privacy protection; Software Defined Networking
Idan Saeedi I.; Al-Qurabat A.
International Journal of Communication Systems , Vol. 38 (10)
Article English ISSN: 10745351
Department of Information Networks, College of Information Technology, University of Babylon, Hillah, Babylon, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Hillah, Babylon, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Hillah, Babylon, Iraq
The integration of space–air–ground networks necessitates effective strategies for managing the energy consumption and operational longevity of IoT sensor nodes. Clustering, implemented as the foundational tier in SAGINs, is instrumental in organizing sensor nodes into efficient communication groups, thereby optimizing data aggregation and minimizing redundant transmissions. This study introduces an energy-efficient cluster head (CH) selection using the osprey optimization algorithm (EECHOOA) to minimize the total energy expenditure of each sensor at the individual node level. These sensors are controlled by a CH that transmits data to the top levels. With the osprey optimization algorithm (OOA), we further improve the clustering procedure for accurate CH election. By emulating the unique strategies of ospreys, the OOA adaptively determines the best CHs, taking into account both the distance between nodes and their energy reserves. In comparison with cutting-edge techniques like MMABC, NCOGA, PUAG, and ZFO-SHO, the simulation findings show that our suggested clustering strategy, combined with OOA for CH election, extends network lifetime by 50%–100% and reduces network energy consumption by 56.25%–76.35% relative to existing protocols. By enabling more sustainable and streamlined IoT-based networks capable of responding to a range of application fields, this study shows how clustering techniques may be used in combination with expert optimization algorithms in SAGINs. © 2025 John Wiley & Sons Ltd.
Keywords: improve energy efficiency; IoT; osprey optimization; smart city; space–air–ground integrated networks
Mohammed A.K.; Al-Attar B.; Alzamily H.; Almaiah M.A.; Hasan T.S.; Ataalla A.F.; Solke N.; Shah P.; Sekhar R.
ICCR 2025 - 3rd International Conference on Cyber Resilience
Conference paper English
Al- Mustaqbal University, College of Engineering and Technologies, Babylon, 51001, Iraq; University of Al-Ameed, College of Medicine, PO Box 198, Karbala, Iraq; University of Hilla, Computer Center, Babylon, 51011, Iraq; The University of Jordan, King Abdullah the Ii It School, Department of Computer Science, Amman, 11942, Jordan; Al-Ma'moon University, College Al-Washash, Department of Cyber Security and Cloud Computing, Baghdad, Iraq; University of Al Maarif, College of Technical Engineering, Department of Computer Engineering Techniques, Al Anbar, 31001, Iraq; Symbiosis International (Deemed University) (SIU), Symbiosis Institute of Technology (SIT), Pune Campus, Maharashtra, Pune, 412115, India
This research presents a high-performance AI framework for predictive failure detection in cloud infrastructure, integrating temporal convolutional networks (TCNs), attention-based deep learning, and reinforcement-guided optimization. Leveraging multivariate telemetry logs and real-time orchestration metrics, the system was evaluated across diverse cloud layers including compute nodes, container orchestration, storage I/O, and network subsystems. The model consistently achieved detection accuracies exceeding 98.1%, with F1-scores reaching 0.946 and forecast alignment scores up to 0.94. Compared to traditional threshold-based systems, the proposed framework reduced missed anomaly events by over 27% and significantly improved interpretability through SHAP and TCAV-based explanations. The fusion of contextual telemetry embeddings, subsystem indicators, and causal failure propagation modeling enabled precise, real-time anomaly forecasting and early fault localization. Unlike heavyweight recurrent models, the framework maintained sub-2.1 second training cycles per batch, ensuring operational feasibility in large-scale, production-grade cloud environments. The architecture generalizes effectively across heterogeneous cloud platforms and is readily adaptable to evolving infrastructure configurations without requiring architectural reengineering. © 2025 IEEE.
Keywords: Anomaly Detection; Cloud Failure Prediction; Explainable AI; Infrastructure Resilience; Multivariate Telemetry; Reinforcement Learning; Temporal Convolution
Khalaf Q.M.; Al-Attar B.; Pokale N.B.; Mohammed A.K.; Aljanabi Y.I.H.; Fadhil R.; Alrazaq H.A.; Divekar N.; Sekhar R.
ICCR 2025 - 3rd International Conference on Cyber Resilience
Conference paper English
University of Fallujah, Construction and Projects Department, Fallujah, 31002, Iraq; University of Al-Ameed, College of Medicine, PO Box 198, Karbala, Iraq; Dr. D. Y. Patil Institute of Technology, Department of Artificial Intelligence and Data Science, Pimpri Maharashtra, Pune, 411018, India; Al-Mustaqbal University, College of Engineering and Technologies, Babylon, 51001, Iraq; Al-Turath University, Baghdad, Iraq; Al-Esrra University, College of Dentistry, Baghdad, Iraq; University of Hilla, Faculty of Science, Ai Department, Babylon, 51011, Iraq; Symbiosis International (Deemed University) (SIU), Symbiosis Institute of Technology (SIT), Pune Campus, Maharashtra, Pune, 412115, India
This research introduces a high-efficiency AI framework for real-time detection of multi-stage cyber-attacks in Industrial IoT (IIoT) networks, utilizing a fusion of Graph Attention Networks (GAT) and bi-directional LSTM encoders with spatio-temporal attention. By exploiting dynamic communication graphs and sequential flow telemetry from heterogeneous IIoT protocols, the model was benchmarked across attack stages such as privilege escalation, lateral traversal, beaconing, and exfiltration. It achieved up to 96.3% detection accuracy, with F1-scores reaching 0.946 and threat contextualization scores surpassing 0.92. Compared to signature-based IDS and CNN-RNN hybrids, the system reduced false positives by 47% and boosted explainability via GNN-Explainer and SHAP integration. Through spatiotemporal representation learning and attention-based fusion, the architecture delivered real-time stage-specific classification and prioritized alerting. With average inference latency under 420ms and support for 82K flows/sec on edge nodes, it outperforms traditional models in low-resource IIoT setups. Its protocol-agnostic design ensures seamless deployment across manufacturing, utilities, and SCADA environments without retraining or architecture redesign. © 2025 IEEE.
Keywords: Edge AI; Explainable AI; Graph Attention Networks; Industrial IoT Security; LSTM; Multi-Stage Cyber Attack Detection; Threat Intelligence
Kareem M.I.; Matloob A.Z.K.; Ogaili K.I.; Al-Qurabat A.K.M.
International Journal of Intelligent Engineering and Systems , Vol. 18 (10), pp. 184-196
Article Open Access English ISSN: 2185310X
Department of Cybersecurity, College of Information Technology, University of Babylon, Hillah, 51002, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, Hillah, 51001, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Hillah, 51002, Iraq
The growth of cyber threats in 5G infrastructures requires intelligent and scalable intrusion detection systems. This paper proposes AttnDEC-PPR, a hybrid framework combining Attention-based Deep Embedded Clustering with a graph-based inference mechanism using Personalized PageRank for anomaly detection in 5G networks. The model employs deep self-attention within an autoencoder to learn latent traffic patterns while preserving contextual dependencies. Personalized PageRank enhances interpretability and decision consistency by modeling inter-sample influence in the embedding space. Evaluated on a real-world 5G attack dataset, AttnDEC-PPR achieved 99.48 percent accuracy, 99.23 percent precision, 99.92 percent recall, and 99.57 percent F1-score, surpassing conventional classifiers. Feature selection via the chi-square test reduced dimensionality without performance loss. Tests on the InSDN dataset yielded 99.71 percent accuracy, confirming adaptability across network environments. Comparative analysis, ROC and PR-AUC metrics, and SHAP-based interpretation highlight the model’s effectiveness and transparency in addressing evolving attack patterns. Cross-dataset evaluation further demonstrated robustness, with 99.95 percent recall and 99.63 percent F1-score on InSDN, and 99.77 percent precision on CICIDS2017 PortScan. © This article is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. License details: https://creativecommons.org/licenses/by-sa/4.0/
Keywords: 5G security; Deep clustering; InSDN dataset; Intrusion detection system (IDS); Unsupervised learning
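The Personalized PageRank step that AttnDEC-PPR uses to model inter-sample influence can be sketched as a standard power iteration over a similarity graph. The toy adjacency matrix and seed below are invented for illustration, and the sketch assumes every node has at least one edge (no dangling-node handling).

```python
import numpy as np

def personalized_pagerank(A, seed, alpha=0.85, iters=100):
    """Power iteration for Personalized PageRank on adjacency matrix A.

    `seed` is the restart distribution that concentrates influence on
    the query sample(s); alpha is the usual damping factor.
    """
    # Column-normalize A into a column-stochastic transition matrix.
    P = A / A.sum(axis=0, keepdims=True)
    r = seed.astype(float).copy()
    for _ in range(iters):
        # Walk with probability alpha, restart at the seed otherwise.
        r = alpha * (P @ r) + (1 - alpha) * seed
    return r

# Toy 4-node similarity graph over embedded samples (illustrative weights).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
seed = np.array([1.0, 0.0, 0.0, 0.0])  # restart mass on the query sample
scores = personalized_pagerank(A, seed)
```

Scores remain a probability distribution, so nearby samples in the embedding space accumulate influence proportional to their connectivity to the seed.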
Salman H.M.; Al-Qurabat A.K.M.; Finjan A.A.R.
International Journal of Computing and Digital Systems , Vol. 17 (1)
Article Open Access English ISSN: 2210142X
College of Material Engineering, University of Babylon, Babylon, 51002, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Hillah, Babylon, 51001, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, 51002, Iraq; Supreme Commission for Hajj and Umrah, Baghdad, Iraq
There are a number of modern disciplines in digital signal processing (DSP) that deal with so-called blind images. The core of this problem is that two images are mixed into one image, which requires separating them and recovering the originals. There are many methods and strategies used to solve this problem. One of these solutions is unsupervised machine learning, as in Independent Component Analysis (ICA), which uses the statistical properties of the latent images. This method essentially depends upon the statistical characteristics of the observation signals and the non-Gaussianity constraints between the mixed image conditions. For all applications, the ICA needs to be enhanced; therefore, many optimization methods are used for that purpose. Swarm intelligence methods are one of many techniques utilized to enhance the ICA's efficiency. To this end, in this paper, three swarm optimization methods are used: Quantum Particle Swarm Optimization (QPSO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). These methods are implemented separately on nine gray-scale images with seven mixing cases. The results are evaluated under three assessment metrics: the Structural Similarity Index Measure, Peak Signal-to-Noise Ratio, and Normalized Cross-Correlation. Applying this system gave optimal results under the specified measurements. © 2025 University of Bahrain. All rights reserved.
Keywords: Blind Image Separation; BSS; Cocktail Party problem; ICA
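One of the swarm optimizers named above, PSO, is simple enough to sketch in full. The abstract does not give the exact fitness used to tune ICA (typically a non-Gaussianity measure such as kurtosis or negentropy), so the sphere function below is an illustrative stand-in objective; the particle-update rule itself is the standard PSO formulation.

```python
import random

def pso(f, dim=2, n_particles=15, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (minimization)."""
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-known position
    gbest = min(pbest, key=f)[:]         # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (pbest) + social pull (gbest).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stand-in objective: sphere function (minimum 0 at the origin); in the
# paper's setting f would instead score the quality of the unmixing matrix.
sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
```

Swapping `sphere` for an ICA separation score is all that changes conceptually when PSO is used to tune the unmixing.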
Al-Barmani Z.; Yousif A.Y.; Gheni H.Q.; Al-Qurabat A.K.M.
International Journal of Computing and Digital Systems , Vol. 18 (1)
Article Open Access English ISSN: 2210142X
Department of Computer Science, College of Science for Women, University of Babylon, Babylon, 51002, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Hillah, Babylon, 51001, Iraq
People can recognize a speaker by listening to their voice. The topic of speaker recognition research is fascinating, since there are still a lot of unanswered questions and gaps in the literature that need further research. Many techniques utilizing deep learning (DL) and machine learning (ML) have been employed to address issues with speaker recognition, particularly in research using large volumes of voice data. At the moment, the volume of data is growing at an incredibly rapid rate. Around the world, there will inevitably be a data explosion. Numerous sources produce hundreds of petabytes of data, such as social media, mobile devices, financial market data, astronomy, personal archives, health data, and cameras. As a result, finding the right technique for converting huge amounts of data into information that improves people's lives is a difficult field of study. In this work, a convolutional neural network (CNN)-based deep learning model for speaker identification is proposed. The suggested CNN-based methodology employs the standard Mel Frequency Cepstral Coefficients (MFCCs)-based feature extraction method, which is the most widely used feature selection method for audio and voice signals. The speaker identification system is presented in brief in this research work, after which the general architecture of the system utilizing the CNN model is discussed. Comparing the results of this study to others is challenging due to variations in methodologies and implementation contexts, which influence accuracy rates and other evaluation metrics. Despite this, it remains necessary to observe the variations in a brief summary to understand the distinctive features of each strategy when compared to the proposed one. © 2025 University of Bahrain. All rights reserved.
Keywords: Convolutional Neural Network; deep learning with CNN; speaker identification
Al-Qurabat A.K.M.; Mohammed A.K.; Matloob A.Z.K.; Abdulzahra S.A.
Cluster Computing , Vol. 28 (7)
Article English ISSN: 13867857
Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, Hillah, 51001, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Hillah, 51002, Iraq; College of Dentistry, Al-Mustaqbal University, Babylon, Hillah, 51001, Iraq; Department of Cybersecurity, College of Information Technology, University of Babylon, Babylon, Hillah, 51002, Iraq
The neurological disorder known as epilepsy has an ongoing negative impact on the brain. Identification of seizures is essential to the clinical care of individuals with epilepsy. Expert doctors frequently detect epileptic seizures through visual analysis of electroencephalography (EEG) data, a method for observing the nonlinear electrical activity of the brain's nerve cells and a standard diagnostic tool for epilepsy. In this paper, we suggest an Internet of Things (IoT) framework for precise and effective seizure detection and monitoring of epileptic patients utilizing machine learning techniques. Three layers make up the proposed IoT framework: the things/devices, fog, and cloud tiers. The proposed method transmits the collected data from the things layer to the fog layer, where a number of critical steps are carried out, starting from segmenting the EEG data by converting it into a 2-D table format and creating a Weighted Visibility Graph (WVG) from the EEG data. Our suggested method extracts nine features from the WVG and an additional ten statistical features from the original EEG dataset. All these features are fed to the machine learning methods to classify the obtained signal as normal or abnormal. Two actions are taken depending on the classification state: either sending a notification to a predetermined caretaker if a seizure occurs, or reducing the data using a threshold-based method in its absence. In both cases, the data is uploaded to the cloud layer to be reviewed later by a specialized medical team. Four scenarios were used to evaluate our proposed method using performance evaluation metrics. The proposed strategy yields 100% in the fourth scenario, which uses ML models with hyper-parameters, balanced EEG data, and extracted features, demonstrating the power of the provided methods.
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
Keywords: Epileptic seizure; Health care; Improve energy efficiency; IoMT; Mental health; Smart cities; Social safety; Weighted visibility graph
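The Weighted Visibility Graph construction central to the method above follows a well-defined rule: two samples are connected if every intermediate sample lies below the straight line joining them. The sketch below uses the absolute view angle as the edge weight, which is one common WVG weighting; the paper's exact weighting, and the toy data, are assumptions for illustration.

```python
import math

def weighted_visibility_graph(y):
    """Natural visibility graph of a time series, edges weighted by view angle."""
    n = len(y)
    edges = {}
    for a in range(n):
        for b in range(a + 1, n):
            # (a, b) are mutually visible if every intermediate sample lies
            # strictly below the line segment connecting (a, y[a]) and (b, y[b]).
            visible = all(
                y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                # One common weighting: absolute slope angle between the samples.
                edges[(a, b)] = abs(math.atan2(y[b] - y[a], b - a))
    return edges

# Toy "EEG segment" (illustrative values, not real data).
segment = [1.0, 3.0, 2.0, 4.0, 1.5]
g = weighted_visibility_graph(segment)
```

Graph features such as degree distribution or weighted degree are then computed per segment and passed to the classifiers, as the abstract describes.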
2024
10 publications
Mohammed Z.A.; Gheni H.Q.; Hussein Z.J.; Al-Qurabat A.K.M.
Engineering, Technology and Applied Science Research , Vol. 14 (1), pp. 12694-12701
30 citations Article Open Access English ISSN: 22414487
Department of Computer Science, College of Science for Women, University of Babylon, Iraq; Department of Cyber Security, College of Science, Al-Mustaqbal University, Iraq
The dominance of communication systems and the internet in our society has made image security a matter of paramount concern. Cryptography involves encrypting data to protect information exchange between senders and receivers, establishing a foundation for secure communication. The Advanced Encryption Standard (AES) is an exceptional algorithm that plays a pivotal role in this area because of its ability to consistently transform plain data into cipher data using the same encryption key. This algorithm engages intricate encryption techniques, harnessing a variety of algorithms and transformations to ensure robust data security. This study introduces an image encryption technique to comprehensively address security requirements. The proposed approach uses encryption to provide high reliability and security, effectively protecting sensitive media from unauthorized access. The sender's file is divided into multiple pieces to maximize confidentiality, using an advanced algorithm. Upon proper decryption, these pieces seamlessly reconstruct the original file. The suggested technique enables customers to securely keep information in cloud storage, addressing concerns about possible leakage, damage, or corruption. By integrating cloud storage and digital signatures, this method ensures protection and reliability for sensitive information. © 2024, Dr D. Pylarinos. All rights reserved.
Keywords: AES algorithm; cloud computing; cryptography; decryption; digital signature; encryption; openSSL
Lateef H.M.; Al-Qurabat A.K.M.
International Journal of Computing and Digital Systems , Vol. 16 (1), pp. 797-812
17 citations · Article · Open Access · English · ISSN: 2210-142X
Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, 51001, Iraq
Wireless sensor networks (WSNs) can effectively address the sink-hole or hot-spot problem caused by multi-hop routing toward a static sink by collecting data with mobile sinks (MS). Designing the optimal path, however, is a well-known NP-hard problem. The architecture's overall performance is determined by successful and efficient data transmission from source nodes to the base station while reducing energy consumption and data loss. In WSNs, data collection is done via mobile sinks or static sinks (SS). MS-based data collection methods are more effective than static-sink approaches because they can gather sensor node data efficiently. Nevertheless, MS-based data collection methods have a number of shortcomings and restrictions, such as energy usage, complexity, cost implications, and scalability issues; designing a trajectory is therefore an NP-hard task. In this work, we present a survey of path optimization techniques such as swarm intelligence, ant colony optimization (ACO), machine learning and artificial intelligence. We also give an overview of different approaches for collecting data from a sensor network using SS- and MS-based techniques, as well as the different kinds of MS data collection and some of the difficulties it encounters. Lastly, we offer a level-based categorization of the various trajectory techniques employed to gather the data. At the first level, we divide schemes into three categories: static, dynamic, and hybrid. © 2024 University of Bahrain. All rights reserved.
Keywords: Energy-efficiency; Improve energy efficiency; Mobile sink; Path planning; Static sink; WSNs
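The trajectory-design problem this survey covers is TSP-like. A minimal nearest-neighbour heuristic, one of the simplest path-planning baselines rather than a technique from the survey itself, can sketch how a mobile sink's tour is built over hypothetical rendezvous points:

```python
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def nearest_neighbour_tour(depot, stops):
    """Greedy trajectory for the mobile sink: always visit the closest
    unvisited rendezvous point next, then return to the depot."""
    tour, pos, todo = [depot], depot, set(stops)
    while todo:
        nxt = min(todo, key=lambda p: dist(pos, p))
        todo.remove(nxt)
        tour.append(nxt)
        pos = nxt
    tour.append(depot)          # close the tour at the base station
    return tour

def tour_length(tour):
    return sum(dist(a, b) for a, b in zip(tour, tour[1:]))

stops = [(2, 0), (5, 0), (1, 0), (8, 0)]       # illustrative coordinates
tour = nearest_neighbour_tour((0, 0), stops)
```

The surveyed metaheuristics (ACO, swarm intelligence) improve on exactly this kind of greedy tour, since nearest-neighbour can be far from optimal on less regular layouts.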
Abdulzahra S.A.; Al-Qurabat A.K.M.
Journal of Supercomputing , Vol. 80 (13), pp. 19845-19897
14 citations · Article · English · ISSN: 0920-8542
Department of Information Networks, College of Information Technology, University of Babylon, Babylon, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, Hillah, 51001, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq
The Internet of Things (IoT) has developed into a new area of study that promises to elevate human culture to a higher level of sophistication. The network is essential in IoT since it is responsible for relaying information from sensors to the sink. In the IoT, where many devices share finite resources, extending the lifespan of the network is a difficult challenge. The lifespan of a network can be prolonged by the use of clustering. However, the energy of initial network nodes might be quickly depleted by incorrectly selecting cluster heads (CHs). This research aims to provide a solution by suggesting a fuzzy-based optimized nature-inspired clustering technique (FONIC) to choose the best CH to sustain the network over time. When dealing with unreliable network conditions, the precise solution provided by fuzzy logic (FL) is invaluable. Therefore, in order to calculate a fitness value, FL is applied to network metrics such as energy, distance, degree, and centrality. In the end, the right CH is chosen with the help of the Penguin Search Optimization Algorithm (PeSOA). Python is utilized to run extensive simulations that confirm the effectiveness of the suggested FONIC protocol. The proposed FONIC protocol is compared with other protocols, including FIGWO, HMGWO, LEACH-PRO, FGWSTERP, and SSMOECHS. The suggested FONIC protocol was shown to outperform these top-tier protocols, improving the packet transmission ratio by 10% and network lifespan by 10–15%. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
Keywords: Cluster head; Fuzzy logic; Improve energy efficiency; IoT; Network lifetime; PeSOA
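The multi-metric CH selection idea can be sketched with a crisp weighted score standing in for the paper's fuzzy inference and PeSOA search; the weights and node values below are hypothetical, and all inputs are assumed pre-normalized to [0, 1]:

```python
def ch_fitness(node, weights=(0.4, 0.3, 0.15, 0.15)):
    """Toy crisp surrogate for the fuzzy fitness: higher residual energy,
    shorter distance to sink, and higher degree/centrality are better."""
    w_e, w_d, w_deg, w_c = weights
    return (w_e * node["energy"]
            + w_d * (1.0 - node["dist_to_sink"])
            + w_deg * node["degree"]
            + w_c * node["centrality"])

def select_cluster_head(nodes):
    """Pick the node with the best fitness as cluster head."""
    return max(nodes, key=ch_fitness)

nodes = [
    {"id": 1, "energy": 0.9, "dist_to_sink": 0.2, "degree": 0.5, "centrality": 0.6},
    {"id": 2, "energy": 0.4, "dist_to_sink": 0.1, "degree": 0.9, "centrality": 0.8},
    {"id": 3, "energy": 0.7, "dist_to_sink": 0.8, "degree": 0.3, "centrality": 0.4},
]
best = select_cluster_head(nodes)
```

In FONIC the crisp score above is replaced by Mamdani-style fuzzy rules over the same four metrics, and PeSOA searches over candidate CHs instead of a simple `max`.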
Abdulzahra S.A.; Al-Qurabat A.K.M.
International Journal of Computing and Digital Systems , Vol. 15 (1), pp. 1565-1581
14 citations · Article · Open Access · English · ISSN: 2210-142X
Department of Information Networks, College of Information Technology, University of Babylon, Babylon, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, 51001, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq
UAVs (unmanned aerial vehicles) and WSNs (wireless sensor networks) are now two well-established technologies for monitoring, target tracking, event detection, and remote sensing. Typically, a WSN is made up of thousands or even millions of tiny, battery-operated devices that measure, gather, and send information from their surroundings to a base station or sink. Within the realm of wireless positioning and communication, UAVs have garnered a lot of interest because of their remarkable mobility and simple deployment, tackling the problems of imprecise sensor placement, inadequate infrastructure coverage, and the massive quantity of sensing data that WSNs collect. A crucial prerequisite for many position-based WSN applications is node location, or localization. The use of UAVs for localization is preferable to permanent terrestrial anchor nodes due to their high accuracy and minimal implementation complexity. Possible interference or signal blockage in such an operating environment, however, might cause the Global Positioning System (GPS) to become ineffective or unobtainable. In these conditions, the need for innovative UAV-based sensor node location technologies has become essential. Radio frequency (RF)-based localization techniques are reviewed in the current paper. We examine the available RF features for localization and look into the current approaches that work well for unmanned vehicles. The most recent research on RF-based UAV localization is reviewed, along with potential avenues for future investigation. © 2024 University of Bahrain. All rights reserved.
Keywords: GPS; Localization; Radio Frequency; Unmanned Aerial Vehicles; Wireless Sensor Networks
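Range-based RF localization of the kind this review covers typically combines a path-loss model with trilateration. A minimal sketch follows; the calibration constants `rssi0` and `n` are assumed demo values, not figures from the paper, and the three anchors play the role of UAV waypoints:

```python
import math

def log_distance(rssi, rssi0=-40.0, n=2.0):
    """Log-distance path-loss model: estimated range in metres from an
    RSSI reading, given rssi0 (RSSI at 1 m) and path-loss exponent n."""
    return 10 ** ((rssi0 - rssi) / (10 * n))

def trilaterate(anchors, dists):
    """Node position from 3 anchor positions and ranges: subtracting the
    first circle equation from the other two yields a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # UAV measurement points
true_pos = (3.0, 4.0)
dists = [math.dist(a, true_pos) for a in anchors]   # noiseless ranges
x, y = trilaterate(anchors, dists)
```

With noisy RSSI-derived ranges, a least-squares fit over more than three anchor positions replaces this exact solve.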
Fanfakh A.; Abduljalil N.; Al-Qurabat A.K.M.
International Journal of Safety and Security Engineering , Vol. 14 (3), pp. 843-852
4 citations · Article · Open Access · English · ISSN: 2041-9031
Department of Computer Science, University of Babylon, Babylon, 51002, Iraq; Department of Air Conditioning and Refrigeration, University of Warith Al-Anbiyaa, Karbala, 56001, Iraq; Department of Cyber Security, College of Science, Al-Mustaqbal University, Babylon, 51001, Iraq
Lightweight cryptographic algorithms like Speck, a family of block ciphers developed by the US National Security Agency (NSA), have become popular because of their efficient performance and small operational size. This paper introduces the execution of an optimized version of the Speck cipher on a parallel multi-core processor, fulfilling the increased demand for quick and ultra-lightweight ciphers. In this work, the Speck128/128 cipher is optimized by reducing its number of rounds to five. The optimization is accomplished by adding a dynamic substitution layer to increase the randomness of the cipher, which allows us to reduce the number of Speck rounds. We conducted statistical, randomness, and cryptanalysis tests for linear and differential attacks on the optimized Speck. The security results show that the optimized Speck exceeds the original Speck's security level. The conducted experiments show that the new version of Speck runs faster than the original in terms of execution time and throughput. Parallel execution over a multicore processor is applied, and its speedup ratio equals 2.64 when compared to the parallel execution of the original Speck. Different message sizes and thread configurations are used in this work. The sequential execution of both Speck ciphers is measured in terms of execution time and throughput, and the acceleration ratio of the optimized Speck in this case equals 2.63. © 2024 The authors.
Keywords: multi-core CPU; parallel computing; rounds reduction; Speck cryptography
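For reference, the standard 32-round Speck128/128 that the paper optimizes fits in a few lines. This is the unmodified specification, not the paper's 5-round variant with the dynamic substitution layer:

```python
MASK = (1 << 64) - 1

def ror(x, r): return ((x >> r) | (x << (64 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (64 - r))) & MASK

def expand_key(key, rounds=32):
    """Round keys for Speck128/128; key is two 64-bit words (l0, k0)."""
    l, k = key
    ks = []
    for i in range(rounds):
        ks.append(k)
        l = ((k + ror(l, 8)) & MASK) ^ i    # reuse the round function
        k = rol(k, 3) ^ l
    return ks

def encrypt(pt, ks):
    x, y = pt
    for k in ks:
        x = ((ror(x, 8) + y) & MASK) ^ k    # rotate, modular add, key mix
        y = rol(y, 3) ^ x
    return x, y

def decrypt(ct, ks):
    x, y = ct
    for k in reversed(ks):                  # invert each round
        y = ror(x ^ y, 3)
        x = rol(((x ^ k) - y) & MASK, 8)
    return x, y

# Test vector from the Speck specification
ks = expand_key((0x0f0e0d0c0b0a0908, 0x0706050403020100))
pt = (0x6c61766975716520, 0x7469206564616d20)
assert encrypt(pt, ks) == (0xa65d985179783265, 0x7860fedf5c570d18)
assert decrypt(encrypt(pt, ks), ks) == pt
```

The paper's variant would shorten `ks` to five round keys and insert its dynamic substitution layer into the round; those details are not reproduced here.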
Lateef H.M.; Al-Qurabat A.K.M.
2024 21st International Multi-Conference on Systems, Signals and Devices, SSD 2024 , pp. 33-42
4 citations · Conference paper · English
College of Sciences, Al-Mustaqbal University, Department of Cyber Security, Babylon, 51001, Iraq; College of Science for Women, University of Babylon, Department of Computer Science, Babylon, Iraq
Wireless sensor networks (WSNs) can effectively address the hot-spot or sink-hole problem caused by multi-hop routing toward a static sink by collecting data with mobile sinks (MS). Designing the optimal path, however, is a well-known NP-hard problem. The architecture's overall performance is determined by successful and efficient data transmission from source nodes to the base station while reducing energy consumption and data loss. In WSNs, data collection is done via mobile sinks or static sinks (SS). The effectiveness of MS-based data collection techniques is higher than that of static-sink approaches. Nevertheless, MS-based data collection methods have several drawbacks and limitations; designing a trajectory is therefore an NP-hard task. In this work, we have developed a survey of path optimization techniques. We also give an overview of different approaches for collecting data from a sensor network using SS- and MS-based techniques, as well as the different kinds of MS data collection and some of the difficulties it encounters. Lastly, we suggest a level-based categorization of the various trajectory techniques employed to gather the data. At the first level, we divide schemes into two categories: static and dynamic. © 2024 IEEE.
Keywords: Energy-efficiency; Mobile sink; Path planning; Static sink; WSN
Salman H.M.; Al-Qurabat A.K.M.; Riyadh Finjan A.A.
International Journal of Computing and Digital Systems , Vol. 15 (1), pp. 595-604
3 citations · Article · Open Access · English · ISSN: 2210-142X
College of Material Engineering, University of Babylon, Babylon, Iraq; Cyber Security Science Department, College of Science, Al-Mustaqbal University, Babylon, 51001, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq; Supreme Commission for Hajj and Umrah, Baghdad, Iraq
One of the most intractable issues in contemporary digital signal processing, particularly with regard to blind source separation (BSS) methods, is the cocktail party problem. In this problem, many sensors record many signals at the same time, producing multiple mixed signals. One of the important methods used to solve it is Independent Component Analysis (ICA), which separates mixed signals without any prior knowledge of the source signals, relying on the statistical features of the mixed signals. This work introduces a novel method to solve the cocktail party problem, using a hybrid of the Quantum Particle Swarm Optimization (QPSO) method and the Bell-Sejnowski neural method to enhance the performance of ICA. In addition, the proposed method uses the negentropy function as the objective function of the optimization process. The proposed algorithm was applied to two cases of three real signals sampled at 8 kHz. The results of the separation process were measured in two ways: first, by comparing the results with other methods such as Particle Swarm Optimization and Quantum Particle Swarm Optimization, where the proposed method achieved markedly better results; and second, by using standard metrics such as Absolute Value Correlation Coefficient, Signal to Distortion Ratio, and Signal to Noise Ratio. © 2024 University of Bahrain. All rights reserved.
Keywords: Bell-Sejnowski (InfoMax) method; BSS; Cocktail-Party Problem; ICA; QPSO
Alwan E.H.; Al-Qurabat A.K.M.
International Journal of Computational Methods and Experimental Measurements , Vol. 12 (3), pp. 281-287
2 citations · Article · Open Access · English · ISSN: 2046-0546
Department of Computer Science, College of Science for Women, University of Babylon, Babylon, 51002, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, 51001, Iraq
Loop unrolling is a well-known code transformation that can enhance program efficiency at runtime. The fundamental advantage of unrolling a loop is that the unrolled loop frequently executes faster than the original. Choosing a large unroll factor might initially save execution time by reducing loop overhead and improving parallelism, but excessive unrolling can result in increased cache misses, register pressure, and memory inefficiencies, eventually slowing down the program. Identifying the optimal unroll factor is therefore essential. This paper introduces three ensemble-learning techniques (XGBoost, Random Forest (RF), and Bagging) for predicting the efficient unroll factor for specific programs. The dataset comprises programs drawn from several benchmarks, including PolyBench, Shootout, and others. More than 220 examples, drawn from 20 benchmark programs with different loop iteration counts, were used to train the three ensemble-learning methods. The unroll factor with the biggest reduction in program execution time is added to the dataset, and ultimately becomes the candidate for unseen programs. Our empirical results reveal that the XGBoost and RF methods outperform the Bagging algorithm, with a final accuracy of 99.56% in detecting the optimal unroll factor. © 2024 The authors.
Keywords: Bagging; compiler optimization; ensemble learning; loop unroll; Random Forest; XGBoost
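The transformation whose factor is being predicted can be illustrated directly. A loop unrolled by a factor of 4, with a remainder loop for leftover iterations, computes the same result with fewer loop-control iterations (the dot product here is just a convenient example body):

```python
def dot(a, b):
    """Rolled reference loop."""
    s = 0.0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s

def dot_unrolled4(a, b):
    """Same loop unrolled by a factor of 4: a quarter of the loop-control
    overhead, plus a remainder loop when len(a) is not a multiple of 4."""
    s, n, i = 0.0, len(a), 0
    while i + 4 <= n:
        s += (a[i] * b[i] + a[i + 1] * b[i + 1]
              + a[i + 2] * b[i + 2] + a[i + 3] * b[i + 3])
        i += 4
    while i < n:                 # remainder iterations
        s += a[i] * b[i]
        i += 1
    return s

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
ys = [7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
assert dot(xs, ys) == dot_unrolled4(xs, ys) == 84.0
```

The paper's ensemble models learn, from program features, which factor (4 here, chosen arbitrarily) gives the biggest runtime reduction for a given loop.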
Alwan E.H.; Al-Qurabat A.K.M.
Ingenierie des Systemes d'Information , Vol. 29 (4), pp. 1611-1617
1 citation · Article · Open Access · English · ISSN: 1633-1311
Department of Computer Science, College of Science for Women, University of Babylon, Babylon, 51002, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal University, Babylon, 51001, Iraq
Recently, the number of smaller and smarter embedded devices has rapidly increased. This growth puts more pressure on compiler developers to produce dedicated application programs for these devices. Modern compilers (like LLVM) offer standard optimization levels (flags) aimed at reducing code size, named the Os and Oz flags. The question raised in this paper is: is it possible to find a sequence that delivers smaller code compared to the standard flags? The method introduced in this paper is a Sign Table, which can suggest an optimization sequence that reduces the code size of a set of unseen programs. Initially, two thousand optimization sequences are generated randomly. Each sequence is compiled with 50 programs, and the programs that yield smaller code size compared with the Os or Oz flags are extracted. After building the signs table, which contains the sequences that give average program sizes smaller than the Os or Oz flags, similarity is quantified between the unseen program and the programs contained within the signs table. The sequences belonging to the most similar programs are selected to compile the unseen program. The proposed methodology is assessed through an empirical investigation employing three benchmark suites, namely PolyBench, Shootout, and Stanford. The experiments show that the proposed method reduces unseen program size by about 9% compared to the standard optimization flags. © 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license.
Keywords: code size reduction; LLVM; optimization sequence
Abdulazeez Z.A.; Abduljalil N.; Fanfakh A.B.M.; Al-Qurabat A.K.M.; Alwan E.H.
Journal of Intelligent Systems and Internet of Things , Vol. 13 (2), pp. 129-140
Article · English · ISSN: 2769-786X
College of Education for Human Sciences, University of Karbala, 56001, Iraq; Department of Air Conditioning and Refrigeration, University of Warith Al-Anbiyaa, Karbala, 56001, Iraq; Department of Computer Science, College of science for women, University of Babylon, Babylon, 51002, Iraq; Department of Cyber Security, College of Sciences, Al-Mustaqbal university, Babylon, 51001, Iraq
Dynamic voltage and frequency scaling (DVFS) is a tool used primarily to decrease a computer processor's energy consumption by lowering its operational frequency. Its main downside is that it degrades the performance of parallel applications running on parallel platforms. In this work, a genetic algorithm is implemented on a heterogeneous cluster architecture to model the best trade-off between energy saving and parallel application performance degradation. The proposed algorithm selects the best frequency vector to accomplish both objectives simultaneously: the objective function of the genetic algorithm jointly limits energy consumption and minimizes the decrease in performance. The SimGrid simulator is used for all experiments. The suggested algorithm saves 20% of energy on average while limiting the application performance degradation to 0.15%. © 2024, American Scientific Publishing Group (ASPG). All rights reserved.
Keywords: energy consumption; frequency scaling; Genetic algorithm; heterogeneous cluster
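A toy version of the GA's frequency-vector search, under a deliberately simplified energy/time model, might look like the following. All constants (frequencies, workloads, the 10% slowdown bound, GA parameters) are hypothetical, not values from the paper:

```python
import random

random.seed(0)
FREQS = [1.0, 1.5, 2.0, 2.5, 3.0]      # selectable CPU frequencies (GHz)
WORK = [3.0, 6.0, 4.5, 9.0]            # per-task work on the cluster

def exec_time(fv):  return max(w / f for w, f in zip(WORK, fv))
def energy(fv):     return sum(f ** 2 * (w / f) for w, f in zip(WORK, fv))

T_MAX = exec_time([max(FREQS)] * len(WORK)) * 1.10   # allow 10% slowdown

def fitness(fv):
    """Single scalar objective: infeasible vectors (too slow) are heavily
    penalized; among feasible ones, lower energy is better."""
    return -1e9 if exec_time(fv) > T_MAX else -energy(fv)

def evolve(pop_size=30, gens=60, pm=0.2):
    # Seed the population with the all-max vector so a feasible
    # solution always exists and elitism preserves it.
    pop = [[max(FREQS)] * len(WORK)] + [
        [random.choice(FREQS) for _ in WORK] for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]              # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(WORK))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:              # point mutation
                child[random.randrange(len(WORK))] = random.choice(FREQS)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Because the elite always contains the best-so-far vector, the result is guaranteed to be feasible and no worse in energy than running everything at the maximum frequency.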
2023
4 papers
Abdulzahra A.M.K.; Al-Qurabat A.K.M.; Abdulzahra S.A.
Internet of Things (Netherlands) , Vol. 22
119 citations · Article · English · ISSN: 2542-6605
Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq; Department of Dentistry, Al-Mustaqbal University College, Babylon, Iraq
Wireless Sensor Networks (WSNs) are the main data collection tools used by Internet of Things (IoT) devices. The WSN-based IoT is a collection of several small, geographically dispersed, battery-powered sensors that are devoted to carrying out a certain activity in a collaborative manner. In a dense WSN-based IoT network, numerous sensors that are near one another simultaneously collect the same data about an occurrence. Even though WSN-based IoT networks have opened up previously unimaginable possibilities in a variety of application areas, they are still susceptible to resource limitations. The energy of nodes, which is needed to run well for extended periods of time in many activities, is the most crucial resource in a given WSN-based IoT. Increasing the lifetime of the network is a major focus of research in the field of WSN-based IoT because it is impossible to replace or recharge batteries in remote, harsh or dangerous environments. In this article, an energy-efficient fuzzy-based unequal clustering with a sleep scheduling (EFUCSS) protocol for WSN-based IoT is proposed. This protocol makes the network last longer and uses less energy by combining clustering, scheduling, and data transmission. Unequal clusters based on Fuzzy C-Means are formed using this protocol to balance the energy used by reducing the distance that data travels. The selection of the cluster head is carried out using a fuzzy logic system. The gateway's (GW) distance, remaining energy, and centrality are the input variables; the output fuzzy variable is the chance. Fuzzy inference is performed using the Mamdani technique. The sleep scheduling strategy is used between coupled nodes to reduce the number of transmitting nodes. Extensive Python-based simulation experiments are run in order to evaluate the performance of the proposed EFUCSS protocol.
While taking into account different WSN-based IoT scenarios and several criteria, such as network stability, network lifetime, and energy efficiency, the proposed EFUCSS protocol is compared with other well-known conventional protocols. The results show that the proposed EFUCSS improves remaining energy by 26.92%–213.4% and network lifespan by 39.58%–408.13%, a greater improvement in network lifespan than comparable algorithms achieve. © 2023 Elsevier B.V.
Keywords: Clustering; Energy-efficiency; Fuzzy logic; IoT; Scheduling; WSN
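The Fuzzy C-Means step used to form the clusters can be sketched in plain Python. The deterministic initialization (first and last point) and the toy sensor coordinates are assumptions for illustration, not the paper's setup:

```python
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def fcm(points, c=2, m=2.0, iters=30):
    """Plain Fuzzy C-Means with fuzzifier m; returns the centres and the
    membership matrix u[i][j] of point i in cluster j."""
    centres = [points[0], points[-1]]        # simple deterministic init
    for _ in range(iters):
        u = []
        for p in points:
            d = [max(dist(p, ck), 1e-12) for ck in centres]  # avoid /0
            u.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1))
                                for k in range(c))
                      for j in range(c)])
        # Centres: membership-weighted means of the points.
        centres = [tuple(
            sum(u[i][j] ** m * points[i][t] for i in range(len(points)))
            / sum(u[i][j] ** m for i in range(len(points)))
            for t in range(2)) for j in range(c)]
    return centres, u

sensors = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centres, u = fcm(sensors)
```

Unlike hard K-means, every node keeps a graded membership in every cluster; EFUCSS additionally makes the cluster sizes unequal and layers fuzzy CH selection and sleep scheduling on top.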
Nedham W.B.; Al-Qurabat A.K.M.
International Journal of Computer Applications in Technology , Vol. 72 (2), pp. 139-160
47 citations · Article · English · ISSN: 0952-8091
Department of Dentistry, Al-Mustaqbal University College, Babylon, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq
Wireless Sensor Networks (WSNs) have become more popular in recent years due to their vast range of applications. The utilisation of WSNs is an absolute requirement for future revolutionary domains such as smart cities, the Internet of Things, or ecological fields, where hundreds or thousands of sensor nodes are placed. Moreover, because WSNs are energy-constrained networks, implementing energy-aware protocols is critical. Hierarchical techniques enhance network performance and extend network lifetime in large-scale WSNs. Within a WSN, hierarchy is achieved by dividing the network into sub-networks known as clusters, each directed by a Cluster Head (CH). Clustering is the most common energy-efficient approach, and it offers several benefits, such as reduced latency, scalability, longer lifetime and energy efficiency. This study presents a detailed assessment of several clustering techniques, together with their aims, features, etc. Furthermore, clustering techniques are classified and evaluated based on numerous cluster features, cluster head attributes and clustering procedures. © 2023 Inderscience Enterprises Ltd.
Keywords: clustering techniques; energy consumption; energy-efficiency; IoT; WSNs
Nedham W.B.; Al-Qurabat A.K.M.
International Journal of Computer Applications in Technology , Vol. 71 (4), pp. 352-362
27 citations · Review · English · ISSN: 0952-8091
Department of Dentistry, Al-Mustaqbal University College, Babylon, Iraq; Department of Computer Science, University of Babylon, Babylon, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq
The possibility of broad usage of Wireless Sensor Networks (WSNs) in many different sectors, such as environmental monitoring, security, home automation and many others, has increased research interest in WSNs. Despite their successes, the broad proliferation of WSNs, especially in the distant and inhospitable areas where their use is most advantageous, is hindered by the primary obstacle of limited energy, as they are often battery operated. To give these energy-hungry sensor nodes a longer life expectancy, one technique is to reduce the frequency of data transfer: a portion of the observed data can be predicted, avoiding transmissions that might overwhelm the wireless channel. In this paper, we classify and analyse current prediction-based data reduction strategies for WSNs. Our key contribution is a systematic technique for choosing a prediction model in WSNs based on WSN limitations, prediction technique features and the observed data. Copyright © 2023 Inderscience Enterprises Ltd.
Keywords: prediction models; time series models; wireless sensor networks
Abdulzahra S.A.; Al-Qurabat A.K.M.
International Journal of Computing and Digital Systems , Vol. 13 (1)
6 citations · Article · Open Access · English · ISSN: 2210-142X
Department of Dentistry, Al-Mustaqbal University College, Babylon, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq
In numerous Internet of Things contexts, there is increasing interest in using wireless sensor technologies. One of the most difficult problems is gathering and analyzing commodity data, given the enormous rise of smart objects and their applications. Sensor nodes are battery-powered, so energy-efficient operation is important. To that end, before transmitting the final data to the central station, it is beneficial for sensors to remove redundancy from the data collected by neighbouring nodes. Data aggregation is one of the main strategies for reducing data redundancy and improving energy efficiency; it also extends the lifetime of wireless sensor networks. Moreover, network traffic can be minimized by an efficient data aggregation protocol. A particular event occurring in a particular area may be sensed by more than one sensor. This article provides an overview of different data aggregation methods and protocols, taking into account the key problems and facets of data aggregation in wireless sensor networks. The structures of data aggregation are grouped into four key classes, namely cluster-based, tree-based, chain-based and grid-based. The thorough comparison of the important approaches of each class offers a basis for further research. © 2023 University of Bahrain. All rights reserved.
Keywords: Data Aggregation; Energy Consumption; IoT; WSN
2022
2 papers
Nedham W.B.; Al-Qurabat A.K.M.
2022 International Conference for Natural and Applied Sciences, ICNAS 2022 , pp. 23-28
46 citations · Conference paper · English
Al-Mustaqbal University College, Dept. of Dentistry, Babylon, Iraq; Dept. of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq
The energy resources available to nodes in wireless sensor networks are limited, so they must be used wisely. Clustering is a useful technique for reducing energy consumption and extending the life of a network. In this study, we present an Energy-Saving Clustering Algorithm (ESCA) to reduce energy consumption and increase the network's lifetime. The clustering phase is based on centralized cluster construction and distributed cluster heads. The clustering is stationary and determined using a centralized K-means method, with the created clusters remaining static throughout the process. Subsequently, according to the varying amounts of energy in the nodes, ESCA chooses and rotates the cluster heads (CHs) within those clusters to reduce energy expenditure before the data transmission phase to the base station (BS). The suggested ESCA is compared to two current clustering methods, MOFCA and IGHND, using a Python-based custom simulator. As a result, the suggested ESCA efficiently tackles the energy use issue while also greatly extending the network's lifetime. © 2022 IEEE.
Keywords: Clustering; Energy consumption; K-Means; WSN
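ESCA's two ingredients, centralized K-means cluster construction and energy-based CH rotation, can be sketched as follows. The node coordinates, energy values, and deterministic K-means initialization are illustrative choices, not the paper's simulation setup:

```python
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def kmeans(points, k, iters=20):
    """Centralized K-means run once at the BS; clusters then stay static."""
    centres = list(points[:k])               # simple deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist(p, centres[j]))
            clusters[nearest].append(p)
        for j, cl in enumerate(clusters):
            if cl:                           # recompute each centroid
                centres[j] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return clusters

def rotate_ch(cluster, energy):
    """Distributed CH choice: the member with the most residual energy.
    Re-running this each round rotates the CH role as energy drains."""
    return max(cluster, key=lambda p: energy[p])

nodes = [(0, 0), (1, 1), (0, 2), (9, 9), (10, 8), (9, 10)]
energy = dict(zip(nodes, [0.5, 0.9, 0.7, 0.6, 0.4, 0.8]))
clusters = kmeans(nodes, 2)
chs = [rotate_ch(c, energy) for c in clusters]
```

Calling `rotate_ch` again after updating the energy map models the per-round rotation that spreads the CH burden across cluster members.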
Al-Qurabat A.K.M.; Abdulzahra S.A.; Idrees A.K.
Journal of Supercomputing , Vol. 78 (16), pp. 17844-17890
36 citations · Article · English · ISSN: 0920-8542
Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq; Department of Dentistry, Al-Mustaqbal University College, Babylon, Iraq
The Internet of things (IoT) is an omnipresent system that can be accessed from a long distance, linking a variety of devices (things), including wireless sensor networks (WSNs). Cyber-physical systems monitor things from a distance and control them. Because of its widespread usage in a variety of applications, the WSN is among the most essential contributors to the IoT and plays a key part in the daily lives of people. The battery's energy is a vital resource in the sensor node, impacting the lifespan of the WSN. Energy scarcity is a serious concern in WSNs, as a large volume of redundant data is gathered and transferred on a regular basis. As a result, efficient energy consumption is the fundamental approach to maximizing network lifetime. This article proposes a two-level data reduction approach for use at two network levels: sensor nodes and gateways (GWs). A novel Compression-Based Data Reduction (CBDR) technique and an effective data transmission strategy derived from data correlation are developed at the sensor node level. These strategies are designed to more efficiently compress data readings from IoT devices. CBDR compresses data in two stages: lossy SAX quantization and lossless LZW compression. The suggested approaches function as filters at the GW level, allowing the GW to discover and subsequently delete groups of data that are duplicated and provided by surrounding nodes. At this level, two strategies are advised: the first is based on the data compression concept, and the second is to identify all couples of member nodes that produce duplicated sets so that redundancy may be eliminated before they are delivered to the sink. The proposed solutions are evaluated using extensive simulation tests made available by the network's OMNeT++ simulator. The proposed methodologies' efficiency is tested against four related works: the PFF protocol, the ATP protocol, the AVMDA protocol, and the PIP-DA protocol.
According to the results, the proposed solution reduces remaining data, transmitted data, energy consumption, and data loss by up to 79%, 80%, 90%, and 6%, respectively. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Keywords: Data compression; Data reduction; IoT; LZW; Network lifetime; SAX quantization; Sensor networks
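The two CBDR stages can be sketched in miniature: SAX quantization of a reading series, followed by LZW over the resulting symbol string. The alphabet size of 4 and the sample temperature-like readings are illustrative choices, not the paper's parameters:

```python
def sax(series, alphabet="abcd"):
    """Lossy stage: z-normalise and quantise each reading into one of
    len(alphabet) symbols using Gaussian breakpoints (quartiles here)."""
    mu = sum(series) / len(series)
    sigma = (sum((x - mu) ** 2 for x in series) / len(series)) ** 0.5 or 1.0
    breakpoints = [-0.6745, 0.0, 0.6745]          # quartiles of N(0, 1)
    def symbol(x):
        z = (x - mu) / sigma
        return alphabet[sum(z > b for b in breakpoints)]
    return "".join(symbol(x) for x in series)

def lzw_compress(text):
    """Lossless stage: classic LZW over the SAX symbol string."""
    table = {ch: i for i, ch in enumerate(sorted(set(text)))}
    w, out = "", []
    for ch in text:
        if w + ch in table:
            w += ch                      # extend the current phrase
        else:
            out.append(table[w])         # emit code, learn new phrase
            table[w + ch] = len(table)
            w = ch
    if w:
        out.append(table[w])
    return out

readings = [20.1, 20.2, 20.1, 20.3, 24.8, 24.9, 25.0, 24.7, 20.2, 20.1]
symbols = sax(readings)                  # lossy: 10 floats -> 10 symbols
codes = lzw_compress(symbols)            # lossless: fewer codes than symbols
```

The quantization collapses near-equal readings onto the same symbol, which is exactly what lets LZW find the long repeated phrases that drive the compression ratio.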
2021
2 papers
Abdulzahra S.A.; Al-Qurabat A.K.M.; Idrees A.K.
Baghdad Science Journal , Vol. 18 (1), pp. 184-198
41 citations · Article · Open Access · English · ISSN: 2078-8665
Department of Dentistry, Al-Mustaqbal University College, Babylon, Iraq; Department of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq
Energy saving is a central concern in IoT sensor networks because IoT sensor nodes operate on their own limited batteries. Data transmission in IoT sensor nodes is very costly and consumes much of the energy, while the energy usage for data processing is considerably lower. There are several energy-saving strategies and principles, mainly dedicated to reducing the transmission of data. Therefore, minimizing data transfers in IoT sensor networks can conserve a considerable amount of energy. In this research, a Compression-Based Data Reduction (CBDR) technique is suggested which works at the level of IoT sensor nodes. The CBDR includes two stages of compression: a lossy SAX quantization stage which reduces the dynamic range of the sensor data readings, followed by a lossless LZW compression of the quantized output. Quantizing the sensor node data readings down to the SAX alphabet size lowers their dynamic range, which contributes to greater compression at the LZW stage. Another improvement to the CBDR technique is also suggested: adding Dynamic Transmission (DT-CBDR) to decrease both the total amount of data sent to the gateway and the processing required. The OMNeT++ simulator, along with real sensory data gathered at the Intel Lab, is used to show the performance of the proposed technique. The simulation experiments illustrate that the proposed CBDR technique provides better performance than the other techniques in the literature. © 2021 University of Baghdad. All rights reserved.
Keywords: Data Compression; IoT; LZW; SAX Quantization; Sensor Networks
Abdulzahra S.A.; Al-Qurabat A.K.M.; Idrees A.K.
Karbala International Journal of Modern Science , Vol. 7, pp. 340-351
20 citations · Article · Open Access · English · ISSN: 2405-609X
Department of Dentistry, Al-Mustaqbal University College, Babylon, Iraq; Dept. of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq
Wireless Sensor Networks (WSNs) are among the most important contributors to the IoT and play a significant role in people's lives due to their extensive use in many applications. Energy saving is essential since sensor nodes run on restricted batteries. In this article, a data reduction method is proposed to work at the gateway (GW) level of the network. At the GW, the proposed method acts as a filter, enabling the GW to identify and then remove redundant sets of data produced by neighboring nodes. The principal idea of the method recommended at this level is to exploit the spatial correlation between sensors to minimize energy depletion. © 2021 University of Kerbala.
Keywords: Energy-saving; IoT; Leader clustering; Network lifetime; Wireless Sensor Networks (WSNs)
2020
1 paper
Al-Qurabat A.K.M.; Abdulzahra S.A.
IOP Conference Series: Materials Science and Engineering , Vol. 928 (3)
66 citations · Conference paper · Open Access · English · ISSN: 1757-8981
Dept. of Computer Science, College of Science for Women, University of Babylon, Babylon, Iraq; Al Mustaqbal University College, Babylon, Iraq
Through developments in digital electronics and wireless technology, a variety of tiny devices have come into use in many aspects of everyday life. These devices can sense, compute and communicate, and typically consist of low-power radios, smart sensors, and integrated CPUs. Such devices are used to establish a wireless sensor network (WSN), essential for delivering sensing services and monitoring conditions such as weather. The concept of the Internet of Things (IoT) is formed in conjunction with WSNs, where the IoT can be described as an interconnection of identifiable devices performing sensing and monitoring within internet networks. This paper offers a general description of periodic WSNs (PWSNs), together with an overview of PWSN applications and challenges. © 2020 Published under licence by IOP Publishing Ltd.
Keywords: IoT; Periodic WSN; Sensor Node; Wireless Sensor Networks