A MULTI-MODEL REGRESSION APPROACH FOR PREDICTING RESOURCE ALLOCATION EFFICIENCY
IN IOT-DRIVEN 6G NETWORKS
Hussain AlSalman
Department of Computer Science, College of Computer and Information Sciences, King
Saud University, Riyadh 11543, Saudi Arabia
ABSTRACT
Enabling healthcare services over emerging Sixth Generation (6G) networks and the Internet of Things (IoT) introduces a strict requirement for the timely and reliable allocation of medical resources. Predicting resource allocation efficiency with rule-based or manual policies often fails to adapt to the heterogeneous demands and dynamic conditions of IoT networks. To address this challenge, a multi-model regression-based approach is proposed to predict the efficiency of resource allocation for optimizing the medical resource (MR) infrastructures of IoT and 6G networks. The approach consists of data pre-processing, exploratory data analysis, multi-model regression learning, and operational factors interpretation. First, the dataset is loaded and non-informative identifier attributes are removed to reduce noise and improve generalization. Correlation analysis is then performed through a heat map plot of the numerical features to identify features that are strongly related to the target variable. Extensive experiments are conducted on a publicly available dataset to evaluate the proposed approach according to a number of performance metrics, such as the root mean square error (RMSE), the coefficient of determination (R-squared), and the mean absolute error (MAE). Experimental results show that the best regression model of the proposed approach attains the highest prediction performance compared with other models and state-of-the-art work. In addition to predictive superiority, an interpretation of the best model's outputs with respect to network throughput and utilization is reported to show the association between predicted efficiency, network speed, and utilization status, which helps to design an actionable plan for deploying intelligent allocation policies.
KEYWORDS
Medical Resource Allocation, Internet of Things (IoT), Sixth Generation (6G) Networks, Multi-model Regression, Coefficient of Determination.
1. INTRODUCTION
Globally, with rapid technical and informatics advances, healthcare systems are undergoing drastic shifts as a result of the integration of advanced computing technologies, wireless communications, and sensors customized for clinical and complex clinical environments [1]. One of the most prominent aspects of this transformation is the healthcare application of the Internet of Things (IoT), known as the Internet of Medical Things (IoMT) [2]. This technology has enabled real-time collection and sharing of data from diagnostic tools, wearable sensors, and connected hospital systems. The IoMT provides patient monitoring, predictive analysis, and remote therapeutic interventions, resulting in remarkable improvements in operational efficiency, patient outcomes, and the accessibility of healthcare systems and services [2]. As the size and complexity of IoT networks expand, traditional health systems face significant challenges in processing the enormous amounts of data these networks produce while responding quickly with the resources they require [3].
Recently, as Internet-connected devices have multiplied, the development of the fifth-generation (5G) and sixth-generation (6G) networks has opened the door to new horizons and addressed existing challenges [4]. The 6G network makes unique opportunities available in this field, as it enables a large number of devices to be connected and provides highly reliable communications with the lowest level of delay, as well as safe and smart services for all parties [5]. These features are essential in the healthcare sector, as they are closely related to patient safety, continuity of services, and the quality of care provided. However, the presence of high-performance networks alone is not sufficient to ensure the effectiveness of health operations. Decisions are still required on how to allocate resources under uncertainty, using various indicators such as patient health status, wireless and sensor data, service priorities, and the status of linked networks and devices. All of this shows that the allocation of medical resources is well suited to being treated as a data-driven decision-making task. Some studies have demonstrated the growing role of intelligent resource allocation in wireless and IoT-enabled networks. Specifically, a deep reinforcement learning approach has been used in Massive Multiple-Input Multiple-Output (MIMO)-Non-Orthogonal Multiple Access (NOMA) systems to minimize the computing complexity of dynamic resource allocation and preserve high throughput under a variety of channel conditions [6]. Furthermore, energy efficiency and service performance have significantly improved with Quality-of-Service (QoS)-aware load balancing algorithms for 5G-enabled IoT sensor networks, demonstrating the need to balance usage efficiency against QoS requirements in connected environments [7].
In the healthcare sector, IoT systems still face a range of restrictions and limitations. One of these constraints is the reliance of many of these systems on fixed structures and slow, centralized decision-making processes, making them unsuitable for dealing with rapidly changing clinical needs and extreme emergencies [5]. Usually, the allocation of medical resources that must be managed in real time, such as the reservation of intensive care beds and prioritized access to imaging equipment or wireless communications, is handled through manual procedures or fixed instructions. These methods cannot adapt quickly enough, and it is difficult to scale them, especially during periods of network congestion and increasing incoming requests. Moreover, resource allocation and management strategies usually do not account for the reliability of the underlying networks, which may vary significantly in crowded hospital environments [8]. Healthcare providers may experience ineffective or late reactions if there are no flexible and effective predictive models for resource allocation and management, which may have negative impacts on patient care and health. Machine learning provides outstanding tools for learning allocation patterns from previous data and the current context [9]. Machine learning models can detect non-linear relationships between the variables that control and affect allocation efficiency, reducing the need to develop complex and ineffective policies manually. The outputs of the models can also be evaluated using standard forecasting metrics, together with operational factors such as utilization status and associations with network variables and network speed [10].
This study focuses on a machine learning methodology based on a dataset related to the allocation of medical resources for IoT sensors, in order to anticipate the efficiency of resource exploitation. The research follows a distinct approach that includes removing features that are unhelpful for learning, analysing statistical characteristics and relationships, and explaining and clarifying the predictions through the study of relationships and the use of analytical graphs. The results indicate that data-driven models, especially regression-based learning, can help improve the consistency and efficiency of the allocation decision-making process in future healthcare settings that utilize IoT technology. The main contributions of this research study are summarized in the following points.
Proposing a multi-model regression approach for predicting medical resource allocation efficiency in IoT-enabled 6G networks that consists of data pre-processing, data analysis, model learning, and operational factors interpretation.
Conducting correlation analysis through visualizing a heat map plot to understand feature relationships and identify features that are strongly related to the target variable.
Benchmarking multiple regression models, including Linear Regression (LR), Ridge (R), Random Forest (RF), and Gradient Boosting (GB) on a public dataset and under a consistent 80:20 train-test split and 5-fold cross-validation procedure.
Evaluating the adopted models with performance metrics, such as the root mean square error (RMSE), the coefficient of determination (R-squared), and the mean absolute error (MAE), and selecting the best regression model based on the minimum RMSE value, which prioritizes reducing the risk of large errors.
Validating learning generalization on a holdout test set and supporting the experimental results with diagnostic plots of the residual distribution and of predicted vs. actual resource utilization efficiency.
Adding an interpretation step for model outputs that provides analytical intuition about the networks' operational factors by relating predicted resource utilization efficiency to network speed and utilization status, enabling an actionable plan for designing intelligent allocation policies.
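The benchmarking and selection protocol summarized in the contributions above (a consistent 80:20 train-test split, 5-fold cross-validation, evaluation with RMSE, MAE, and R-squared, and selection of the best model by minimum RMSE) can be sketched with scikit-learn as follows. This is a minimal illustration, not the study's exact configuration: synthetic regression data stands in for the real IoT/6G dataset, and all hyperparameters shown are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Synthetic stand-in for the dataset (the real features are described in Section 3).
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=42)

# Consistent 80:20 train-test split, as stated in the contributions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# The four benchmarked model families; hyperparameters are illustrative only.
models = {
    "LR": LinearRegression(),
    "Ridge": Ridge(alpha=1.0),
    "RF": RandomForestRegressor(n_estimators=50, random_state=42),
    "GB": GradientBoostingRegressor(random_state=42),
}

results = {}
for name, model in models.items():
    # 5-fold cross-validation on the training split (negated MSE -> RMSE).
    cv_rmse = np.sqrt(-cross_val_score(
        model, X_train, y_train, cv=5,
        scoring="neg_mean_squared_error")).mean()
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    results[name] = {
        "cv_rmse": cv_rmse,
        "rmse": np.sqrt(mean_squared_error(y_test, pred)),
        "mae": mean_absolute_error(y_test, pred),
        "r2": r2_score(y_test, pred),
    }

# Select the best model by minimum holdout RMSE, mirroring the selection rule.
best = min(results, key=lambda m: results[m]["rmse"])
```

Selecting by RMSE rather than MAE penalizes occasional large errors more heavily, which matches the stated goal of lowering large-error risk in medical allocation decisions.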
The rest of this paper is structured as follows: Section 2 reviews the methods and frameworks of related studies. Section 3 outlines the proposed approach in detail. Section 4 includes the experimental analysis, findings, and discussion. Section 5 offers the conclusions and summarizes the possible future research directions for the proposed work.
2. RELATED WORKS
The related works section reviews the frameworks and approaches of medical developments in the field of IoT and 6G networks, with a focus on the challenges and difficulties associated with these ongoing advancements. The significant shift of 6G networks and IoT toward connected healthcare systems is substantially dependent on medical equipment, remote monitoring, and wireless sensors. This shift increases the variety and diversity of data and makes traditional medical resource allocation methods and strategies more difficult and complex to implement than smart adaptive decisions [11]. The spread of IoT usage supported by the 6G network is one of the most important expected objectives, achieved through key advantages that include improving the quality of network connection and providing high data transmission speeds [12]. At the global level, substantial standardization is needed to ensure the effectiveness of these advantages [13]. However, there are still some challenges in the design of standard protocols that achieve reliable performance for IoT-based medical services. Most previous studies have focused on the specific, theoretical background of the 6G network, IoT, and Vehicle-to-Everything (V2X) communication, which are widespread and adaptable [14]. Methodologies and frameworks for modern 6G networks have discussed the main trends and challenges in managing and utilizing resources [9]. Integrating 6G technologies with the IoT has attained an efficient use of resources by predicting expected faults in equipment.
The 6G networks and IoT sensors have enabled real-time patient monitoring and the provision of appropriate medical services to reduce unused resources and hospital services [15]. Moreover, the logistics sector has improved inventory tracking and management, which has reduced the need for excess inventory and increased the efficiency of storage space. The ability to support a large number of devices is one of the main benefits of integrating 6G networks and IoT devices, which has contributed to improving resource management in various sectors [9]. In addition, data collection and processing in record time is one of the advantages offered by the 6G predictive service, which enhances effective resource management [9]. Alhashimi et al. [8] provided an in-depth study focused on heterogeneous networks and resource management methods in 6G networks and mobile communications systems. This study reflected current knowledge and identified promising areas for future research. One of the most prominent features of this work is its comprehensive review of spectrum and interference management techniques, which are vital for improving service quality. Shen et al. [16] also presented an innovative approach to managing wireless resources for dense IoT services within 6G networks. The authors created a comprehensive simulation platform that includes various wireless resource management techniques, allowing effective simulation of a variety of 6G networks and IoT services.
Other studies related to 6G network structures and requirements have shown that future scenarios, including the health sector, underscore the necessity of dynamic resource management, given the diverse service needs of a range of heterogeneous devices, such as sensors, peripherals, cloud platforms, and terminals [17]. There is a real challenge in IoT data processing and decision-making that occurs close to the source, reducing delay and easing the burden on the core network, which reflects an increased interest in edge medical devices in hospitals and health centres [18]. The effectiveness of resource management is vital in distributed systems, and this topic has been a recurring focus in academic research. In the field of sixth-generation networks, effective resource management includes the discovery and identification of all available resources, as well as the selection, arrangement, and distribution of appropriate resources to enhance overall performance [19]. This improvement may involve a wide range of variables, including overall performance, cost efficiency, energy efficiency, data accuracy, coverage, reliability, and more. Although important studies are being conducted in various fields of computing, resource management in 6G environments is still a major challenge that requires innovative solutions. As fifth- and sixth-generation networks continue to develop, the issue of resource partitioning and management has become among the most researched topics, especially with the increasing reliance on learning algorithms for dynamic resource coordination [20].
Some works have explicitly argued that machine learning solutions are suitable for addressing the complexity of allocation in IoT systems within 5G and 6G environments [21]. Innovative machine learning and statistical models can provide predictive tools that enable highly accurate forecasting of network load. The benefit of learning depends on the network design and on the ability of the management system to adjust the allocation of resources in response to expected changes [22]. This may be complicated in cases where network conditions change quickly or unexpectedly; in such situations, the assigned resources may not match the actual requirements. Sheng et al. [23] conducted a study to develop a dual approach to improve the wireless coverage of new 6G Satellite-Terrestrial Integrated Networks (STINs) through the use of large satellite constellations. This approach focuses on network design analysis to meet the needs of 6G services, which helps in planning STINs and improving resource scheduling to achieve better coverage performance. The sophisticated and innovative aspects of 6G technology can be challenging to implement, and it can be difficult to manage smart resource planning efficiently.
Few studies have explored learning-based resource allocation methods that focus on service quality in next-generation networks. An approach using deep reinforcement learning has been proposed for dynamic resource allocation in Massive MIMO-NOMA systems [6]. This approach achieved a reduction in computational complexity while maintaining good performance even as the channel changes over time. Dey et al. [7] presented two simple, efficient distributed methods for IoT-based wireless sensor networks: a Load Balanced Greedy Cluster Assignment (LBGCA) for large-scale IoT networks and a Multi-modal Load Balanced Greedy Cluster Assignment (MLBGCA) for QoS-aware applications. Simulation findings on several deployment patterns demonstrated major improvements in load balance and significant reductions in energy consumption compared with previous techniques. In addition, Gad-Elrab et al. [24] proposed an adaptive fog-cloud resource allocation technique based on multi-criteria decision methods for improving response time and resource utilization efficiency in latency-sensitive applications. Almadhor et al. [25] provided an artificial intelligence-based 6G-IoT framework that contains an adaptive medical resource allocation component. They used the eXtreme Gradient Boosting (XGBoost) method to predict the utilization efficiency of resources, achieving an R-squared of 0.988 and an RMSE of 0.0323. However, there is still a need to improve these metric values. Recent research suggests that managing resources in the healthcare sector, which relies on advanced IoT and networking technologies, has become more important, with increasing reliance on machine learning and data-driven improvement methods. Previous studies focused on theoretical models or solutions tailored to specific systems; they lack a comprehensive assessment of resource allocation efficiency prediction using available IoT-driven 6G network data and a unified regression framework.
This study contributes to the existing literature by adopting a data-driven approach and focusing on both prediction accuracy and a practical understanding of the efficient allocation of medical resources.
3. MATERIALS AND METHODS
This section describes the data and the proposed approach in terms of the materials and methods of the study. The study aims to analyze and predict the efficiency of allocating medical resources in a healthcare setting supported by IoT technology and 6G networks. It begins by describing the dataset, covering its attributes and target variable, and ends by explaining the approach's phases.
a. Dataset Description
To implement and assess the developed prediction approach of resource allocation efficiency for 6G network-enabled IoT medical sensors, a public dataset, namely "IoT-Driven MR Allocation for 6G Network", available on the Kaggle platform, is used [26]. Table 1 describes the attributes of the dataset.
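As a minimal sketch of the pre-processing and correlation-analysis steps the approach applies to this kind of dataset: the rows, column names, and the linear relationship below are hypothetical stand-ins fabricated for illustration (the actual Kaggle attributes are those listed in Table 1).

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in records; real column names and values will differ.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "Device_ID": [f"dev_{i}" for i in range(n)],       # non-informative identifier
    "Network_Speed_Mbps": rng.uniform(100, 1000, n),
    "Utilization_Rate": rng.uniform(0.1, 1.0, n),
    "Latency_ms": rng.uniform(1, 20, n),
})
# Assumed target relationship, purely for illustration.
df["Resource_Utilization_Efficiency"] = (
    0.6 * df["Utilization_Rate"]
    + 0.0004 * df["Network_Speed_Mbps"]
    + rng.normal(0, 0.02, n)
)

# Pre-processing step: drop non-informative identifier attributes
# to reduce noise and improve generalization.
df = df.drop(columns=["Device_ID"])

# Correlation analysis of numerical features against the target; this matrix
# is what a heat map plot (e.g. seaborn.heatmap(corr)) would visualize.
corr = df.corr(numeric_only=True)
target_corr = corr["Resource_Utilization_Efficiency"].drop(
    "Resource_Utilization_Efficiency")
strong = target_corr[target_corr.abs() > 0.3].index.tolist()
```

The `strong` list then identifies the features most associated with the target, which is the information the paper's heat map analysis extracts before model learning.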