About the Journal

The Journal of Telecommunications and Information Technology is published quarterly. It comprises original contributions dealing with a wide range of topics related to telecommunications and information technology. All papers are peer-reviewed. The articles presented in JTIT focus primarily on experimental research results advancing scientific and technological knowledge about telecommunications and information technology.

Current Issue

Vol. 102 No. 4 (2025)

Explore the current issue of JTIT

The current issue of the Journal of Telecommunications and Information Technology (JTIT) offers high-quality original articles and showcases the results of key research projects conducted by recognized scientists, dealing with a variety of topics involving telecommunications and information technology, with particular emphasis placed on current literature, theory, research, and practice.
The articles published in this issue are available under the open access (OA), “publish-as-you-go” scheme. Four issues of JTIT are published each year.
The Journal of Telecommunications and Information Technology is the official publication of the National Institute of Telecommunications - the leading government organization focusing on advances in telecommunications technologies.

We encourage you to sign up for free email alerts keeping you up to date with all of the latest articles by registering here.

Published: 2025-12-31

Full Issue

ARTICLES FROM THIS ISSUE

  • Aware Node Localization in Wireless Sensor Networks Using Harris Hawks Optimization

    Abstract

    Precise and efficient localization is a key enabler for context-aware operations in emerging 6G cognitive semantic communication (CSC) systems. In AI-native and semantic-aware networks, precise node positioning improves semantic compression, context-driven routing, and adaptive spectrum allocation, positively affecting communication reliability and resource utilization efficiency. This paper addresses the problem of localization in wireless sensor networks (WSNs) in the broader context of 6G CSC, formulating it as an optimization task. Building on previous research, we explore the application of bio-inspired metaheuristic algorithms to achieve robust, high-accuracy positioning. Specifically, we propose the use of the Harris hawks optimization (HHO) algorithm to develop a semantic-aware, stable, and efficient localization framework. The proposed approach is implemented and tested within the Matlab simulation environment. Performance evaluation is conducted through comparative experiments with two widely used optimization algorithms: particle swarm optimization (PSO) and cuckoo search optimization (CSO). The simulation results demonstrate that the proposed HHO-based localization method not only improves positioning accuracy by up to 25% compared to the benchmarks, but also provides enhanced stability, enabling its integration with CSC architectures for intelligent resource management in next-generation networks.

    Seddik Rabhi
    1-10
  • Low-complexity Optimized Version of AOR Algorithm for Signal Precoding in Large-scale MIMO Systems

    Abstract

    In recent years, there has been a growing focus on research concerning wireless communication technologies, with a particular emphasis placed on the emerging field of massive MIMO systems. In these systems, precoding performed at the base station (BS) is a crucial signal processing task which ensures reliable downlink transmission. In this paper, we propose a new modified accelerated overrelaxation (AOR) approach to enhance signal precoding in large-scale MIMO downlink systems. This approach uses distinctive matrix decompositions along with optimally selected relaxation and acceleration parameters. Specifically, the proposed method, termed "optimized symmetric accelerated over-relaxation (OSAOR)", exhibits two key advantages: low complexity (compared to the near-optimal zero-forcing (ZF) precoder) and iterative nature, with its parameters optimized by means of the particle swarm optimization (PSO) algorithm, which boosts convergence and improves precoding precision. Simulation results are given to confirm the superiority of the proposed algorithm, as it may outperform conventional AOR and other existing solutions.

    Naceur Aounallah, Smail Labed
    11-19
  • Hybrid Approach for Detection and Mitigation of DDoS Attacks Using Multi-feature Selection, Unsupervised Learning, and Game Theory

    Abstract

    Software-defined networking (SDN) is now widely used in modern network infrastructures, but its centralized control design makes it vulnerable to distributed denial of service (DDoS) attacks targeting the SDN controller. These attacks are capable of disrupting the operation of the network and reducing its availability for genuine users. Existing detection and mitigation methods often suffer from numerous drawbacks, such as high computational costs and frequent false alarms, especially with standard machine learning or basic unsupervised approaches. To address these issues, a new framework is proposed that relies on multistep feature selection methods, including SelectKBest, ANOVA-F, and random forest to select the most important network features, to detect anomalies in an unsupervised manner using agglomerative clustering in order to identify suspicious hosts, and to mitigate adverse impacts by relying on posterior probability and game theory. An evaluation conducted using benchmark datasets and validated through Mininet emulation demonstrates that the approach achieves strong clustering performance, with silhouette scores of 0.86 for InSDN and 0.95 for Mininet. The framework efficiently computes reputation scores to distinguish malicious hosts, thus enabling adaptive defense against evolving attack patterns while maintaining minimal computational overhead.

    Amit Kachavimath, Narayan D.G.
    20-32
  • Virtual Machine Placement in Cloud Environments Using a Hybrid Cuckoo Search and Bat Algorithm

    Abstract

    The growing popularity of on-demand pay-as-you-go subscription models for online cloud computing requires increasing amounts of resources to ensure adequate quality of services. However, to satisfy the strong demand for these services, cloud infrastructure providers continue to scale up their data centers. This scaling often lacks an optimal resource management approach, thus leading to inefficiencies, excessive energy consumption, and higher costs. This creates challenges in the virtual machine placement (VMP) process, which focuses on identifying efficient ways of assigning virtual machines to physical hardware. This paper introduces a hybrid cuckoo search bat algorithm (HCS-BA) to solve VMP in heterogeneous cloud environments. The suitability of the cuckoo search algorithm for global searches is combined with the local refinement capacity of the bat algorithm, thereby optimizing both energy consumption and resource utilization. The results of simulations carried out in Matlab and CloudSim for scalability testing demonstrate that HCS-BA outperforms both individual algorithms. It reduces energy consumption and improves resource utilization.

    Sifeddine Benflis, Sonia-Sabrina Bendib, Sedrati Maamar, Fatima Z. Cherhabil, Hanane Merouani
    33-42
  • A Hybrid Algorithm for the Synthesis of Distributed Antenna Arrays with Excitation Range Control

    Abstract

    Excitation coefficients with a low dynamic range ratio (DRR) are advantageous in controlling mutual coupling between the elements of an antenna array. Their use also reduces the output power loss and simplifies the design of the feeding network. In this paper, a hybrid algorithm based on invasive weed optimization and convex optimization for the synthesis of distributed arrays with two subarrays is proposed. Arrays of this type are used in numerous applications, e.g. in aircraft. A constraint is added to the optimization problem to control the DRR of the array's excitation vector. Numerical results are presented for position-only, as well as for position and excitation control approaches. The trade-off between the peak sidelobe ratio and the obtained DRR is illustrated by numerical examples.

    Magdy A. Abdelhay
    43-49
  • Blockchain-implied Architecture for Secure and Energy Efficient Processing of IoT Data in Pervasive WSNs

    Abstract

    Pervasive wireless sensor networks (PWSNs) are essential for real-time data transmission in Internet of Things (IoT) environments. However, conventional centralized models, while energy efficient, often face challenges related to data integrity and security. This paper proposes a decentralized blockchain-based architecture aimed at enhancing secure IoT data processing at the base station while preserving energy efficiency. The system utilizes a blockchain network among sink nodes and its operation is divided into four stages: deployment of a virtual machine on leaf nodes for real-time data collection, generation of hash keys to ensure secure transmission to sink nodes, implementation of a universal virtual machine (UVM) at the sink layer for block formation, and development of an integrated authentication and consensus module within the UVM. The proposed framework ensures efficient and verifiable data handling. Performance is evaluated using sensor node energy efficiency (SNEN), blockchain energy consumption level (BCLE), blockchain transmission efficiency (BCTE), and packet delivery in sink nodes (PDSN). Experimental results demonstrate improved energy efficiency in the sensor zone, reduced blockchain latency, and improved throughput, establishing a robust and secure model for data handling in PWSNs.

    Sushovan Das, Uttam Kr. Mondal
    50-60
  • Babai-guided Interference-aware Adaptive QRD-M Detection in MIMO-OFDM Communication Systems

    Abstract

    This paper presents an adaptive QRD-M detection algorithm designed to reduce the computational complexity of MIMO systems while maintaining near-maximum likelihood detection (near-MLD) performance. The proposed method introduces a dynamic threshold mechanism based on a breadth-first tree search, where pruning is guided by both symbol reliability and interlayer interference derived from the upper-triangular structure of the QR-decomposed channel matrix. The threshold is further refined using a Babai estimate obtained from Lenstra-Lenstra-Lovász (LLL) lattice reduction, allowing the algorithm to adaptively adjust the candidate set at each detection stage. The simulation results across 4 × 4 and 8 × 8 MIMO systems using 16-QAM and 64-QAM modulation schemes demonstrate that the proposed Babai-guided interference-aware adaptive QRD-M (BIA-QRD-M) algorithm achieves near-MLD performance. The proposed method achieves a reduction of up to 49% in the average number of branch metric computations at high SNR and an approximately 29% reduction over the entire 0-25 dB SNR range, compared to conventional QRD-M in an 8 × 8 MIMO-OFDM system with 16-QAM modulation.

    Mar Mar Lwin, Mohd Fadzli Mohd Salleh
    61-68
  • Half-duplex Two-way Relaying for Wireless Sensor Networks with Adaptive Coding Rate: A Performance Optimization Framework

    Abstract

    In this paper, a novel framework is proposed to enhance the reliability of wireless sensor networks (WSNs) by addressing the high outage probability (OP) resulting from limited energy resources and unreliable channels. The framework integrates three techniques: half-duplex two-way relaying (HD-TWR), digital network coding (DNC), and rateless codes. Although these techniques have been extensively studied in isolation, a comprehensive analysis of their joint performance is provided as the main contribution. The proposed scheme leverages the energy efficiency of HD-TWR, the transmission reduction capability of DNC, and the retransmission-free resilience of rateless codes. Simulation results show that the integrated framework significantly reduces OP, offering a robust and practical solution for enhancing reliability. Furthermore, the impact of optimal relay node placement is investigated through parameter adjustments in the simulation stage to maximize performance gains.

    The-Anh Ngo, Viet-Thanh Le, Thien P. Nguyen, Duy-Hung Ha
    69-76
  • AI-based Violent Incident Detection in Surveillance Videos to Enhance Public Safety

    Abstract

    Acts of violence may occur at any moment, even in densely populated areas, making it important to monitor human activities to ensure public safety. Although surveillance cameras are capable of detecting the activity of people, around-the-clock monitoring still requires human support. As such, an automated framework capable of detecting violence, issuing early alerts, and facilitating quick reactions is required. However, automation of the entire process is challenging due to issues such as low video resolution and blind spots. This study focuses on detecting acts of violence using three video datasets (movies, hockey games, and crowds) by applying and comparing advanced ResNet architectures (ResNet50V2, ResNet101V2, ResNet152V2) combined with the bidirectional gated recurrent unit (BiGRU) algorithm. Spatial features of each video frame sequence are extracted using these pre-trained deep transfer learning models and classified by means of an optimized BiGRU model. The experimental results were then compared with those achieved by wavelet feature extraction approaches and other classification models, including CNN and LSTM. This analysis indicates that the combination of ResNet152V2 and BiGRU offers the strongest performance in terms of accuracy, recall, precision, and F1 score across the different datasets. Furthermore, the results indicate that deeper ResNet models significantly improve overall violence detection performance relative to shallower ResNet models, with ResNet152V2 found to be the best-performing model across the datasets in terms of accuracy in detecting acts of violence.

    Khaled Merit, Mohammed Beladgham
    77-89
  • Lightweight Flow-based Anomaly Detection for IoT Using HC-MTDNN: A Hierarchically Cascaded Multitask Deep Neural Network

    Abstract

    In this article, we propose a lightweight, hierarchical multitask learning framework designed for detecting both high-level and fine-grained threats in IoT traffic. The developed model focuses on anomalies detectable through flow-level metadata. The deliberate choice to prioritize computational efficiency by excluding content analysis scopes the approach to payload-independent threats, while still enabling robust detection of key attack classes. To further enhance efficiency within this metadata-driven paradigm, we introduce HC-MTDNN, a hierarchical multitask model that integrates a gated feature mechanism and feature reuse to significantly reduce redundancy and computational overhead, improving upon previous hierarchical architectures and achieving high performance while dealing with volumetric and protocol-based attacks. The model is evaluated on four benchmark datasets: CICIoT2023, N-BaIoT, Bot-IoT, and Edge-IIoTset. It demonstrates strong performance in both binary and multiclass classification tasks, with an average inference time of 122 µs per sample and a compact model size of 2.4 MB. The proposed framework effectively balances accuracy and computational efficiency, offering a practical and scalable solution for securing resource-constrained IoT environments.

    Mohamed Amine Beghoura, Younes Belouche
    90-102
  • Deep Learning-based Compensation for Doppler Shifts in Hybrid Beamforming for mmWave Communication

    Abstract

    Millimeter-wave (mmWave) communication is a key enabler of 5G and future wireless systems, providing vast bandwidth for high-speed data transfers. However, high user mobility leads to significant Doppler shifts, which can severely degrade the performance of beamforming - an essential technology for mmWave systems. The traditional hybrid beamforming (HBF) technique faces challenges in adapting to rapid channel variations caused by Doppler effects. Therefore, this paper introduces a deep learning framework to mitigate Doppler-induced channel distortions in hybrid beamforming. We propose a long short-term memory (LSTM)-based neural network that predicts Doppler shifts and dynamically adjusts the hybrid beamforming vectors to compensate for these variations. This approach proactively addresses channel distortion, enhancing both spectral and energy efficiency. The simulation results and the performance comparison of the proposed model against conventional beamforming and state-of-the-art techniques demonstrate the superiority of the deep learning-based solution in maintaining robust communication links under high-mobility conditions, showcasing its potential to improve performance in next-generation wireless networks.

    Kartik Ramesh Patel, Sanjay Dasrao Deshmukh
    103-111
View All Issues