International Journal of Computer Networks & Communications (IJCNC)


Analysis of LTE Radio Load and User Throughput

Jari Salo1 and Eduardo Zacarías B.2

Nokia Networks, 1Taguig City, Philippines and 2Ulm, Germany

Abstract


A recurring topic in LTE radio planning is the maximum acceptable LTE radio interface load up to which a targeted user data rate can be maintained. We explore this topic by using queuing theory elements to express the downlink user throughput as a function of the LTE Physical Resource Block (PRB) utilization. The resulting formulas are expressed in terms of standardized 3GPP KPIs and can be readily evaluated from network performance counters. Examples from live networks are given to illustrate the results, and the suitability of a linear decrease model is quantified using data from a commercial LTE network.

Keywords


LTE, Traffic Model, Processor Sharing, Network Measurements

1. Introduction


A key topic in radio network planning concerns mapping statistics generated in the radio network layer to end-user experienced performance. In practical operation, a question relating planning and current conditions often arises: “what is the maximum LTE radio load at which the user throughput is still at a given level?”. Somewhat surprisingly, there are no practically useful engineering analyses available for this question in the literature. This is the main motivation for this study. A simple answer to the question is available by exploiting results from the IP networking literature and basic queuing theory. In order to make those results practical, this paper states the existing theoretical results in terms of LTE radio utilization metrics, which can be easily computed from network performance statistics available in commercial LTE base station products.

Earlier work: The user throughput for elastic traffic transmitted over fixed-bandwidth, non-wireless links has been thoroughly investigated in the engineering literature. An overview and references can be found in [1]. Wireless CDMA network user throughput has been analyzed in [2] and in other works by the same authors. Radio interface flow-level scheduler analysis from a relatively theoretical viewpoint has been presented in [3] and [4]. An overview of LTE scheduling has been presented in [5]. The idea of modelling the LTE radio scheduler using M/G/1 is obviously not unheard of; see for example [6]. However, none of the aforementioned contributions validate the theoretical results against real-world network data, or they present the results in terms of statistics that are not available in real-world networks.

In this paper, the focus is on the average user throughput over the LTE radio interface. In particular, the so-called M/G/1 Processor Sharing (PS) approach is used to express user throughput in terms of LTE radio interface Physical Resource Block (PRB) utilization, the statistics of which are available in every commercial LTE system. For details on the LTE system, the reader is referred to the well-established literature, such as [7, 8, 9].

Contributions: The contributions of this paper include the following.

• Formulation of LTE user throughput in terms of the M/G/1 PS model by using PRB utilization as the load metric.
• Verification of the validity of the M/G/1 PS approach by comparing to live network measurements.
• Extension to rate-capped user throughput by using the M/G/R PS model for non-integer 𝑅.
• Application to load balancing between frequency layers, including closed-form formula for the optimal traffic balancing ratio for the two-layer case.

The results can be used for both FDD and TDD variants of LTE. Although in principle the general approach is applicable to both downlink and uplink, the uplink case is dependent on the power control and link adaptation implementation, which leaves more degrees of freedom to be considered. For this reason, throughout the paper ‘throughput’ refers to the downlink case, with the uplink applicability left for further study.

In Section 2, the average user throughput is formulated as a function of the number of active data flows sharing the radio scheduler resources. In Section 3, these results are related to physical resource block (PRB) utilization by means of the well-known M/G/1 processor sharing formula, resulting in a degradation of user throughput by a factor of 1/(1 − 𝜌), with 𝜌 denoting the PRB utilization of the serving cell. An extension to the case where user throughput is externally rate-limited is also given. In Section 4, an application of the result to traffic balancing between frequency layers is given. Section 5 discusses some aspects of the impact of cell interference coupling. In Section 6, live network measurement examples are provided to validate the results. Finally, conclusions are presented.

2. Throughput versus Number of Active Users


2.1 Basic Assumptions

The following assumptions are made:

• Full buffer traffic model so that, in the absence of other users, a single user obtains all available radio resources. For example, constant bit rate streaming traffic would not satisfy this condition.
• If there is more than one user, the scheduler shares radio resources equally, on average, between users. This fair sharing principle is assumed independently of the radio conditions of the UEs sharing the scheduler resources.

The first assumption will be relaxed in Section 3.3, and the second assumption will be discussed in Section 5. The notion of “radio resource” could be defined in various ways, but for the LTE radio interface it is convenient to choose Physical Resource Blocks (PRBs) as the resource being shared. For LTE downlink, PRB utilization can be equated with transmit power utilization as long as physical layer and common channel overhead are properly taken into account. With the assumptions above, the instantaneous user throughput with 𝑥 active users downloading simultaneously would be 1/𝑥 of the maximum throughput. A UE is said to be active if there are data remaining in the transmit buffer. As different UEs in the cell start and finish their data transfers, the number of active UEs (𝑥), and hence also the instantaneous user throughput, changes over time. An interesting metric is the average throughput experienced by UEs in the cell, which is discussed in the next section.

2.2 Two user throughput metrics

Consider a UE located somewhere in a cell and experiencing certain radio quality. In the absence of any other users, the UE is allocated all available PRBs and receives some throughput 𝑇1, where the subscript ‘1’ emphasizes that there is one active user (i.e., 𝑥 = 1). If there were 𝑥 > 1 active UEs in the cell, the throughput of the user would be 𝑇1/𝑥 instead. The maximum achievable user throughput, 𝑇1, depends on the user location in the cell, radio conditions, number of transmit antennas, and so on. The average user throughput 𝑇𝑢𝑒 is defined as the expected value of 𝑇1/𝑥 for positive integer 𝑥, in other words
𝑇𝑢𝑒 = 𝐸[𝑇1/𝑥],  𝑥 = 1, 2, 3, …   (1)
where 𝐸[⋅] denotes expected value and 𝑇1 and 𝑥 ≥ 1 are the random variables being averaged. The user radio conditions, and hence the maximum throughput 𝑇1, can be assumed statistically independent of the number of active UEs in the cell, and (1) can thus be written as
𝑇𝑢𝑒 = 𝐸[𝑇1] ⋅ 𝐸[1/𝑥].   (2)
The term 𝐶 = 𝐸[𝑇1] will be called the cell capacity in this paper. Unfortunately, the second term 𝐸[1/𝑥], which is the average of the inverse of the number of active UEs, cannot always be computed since it is not usually available as a radio counter. On the other hand, the average number of active UEs, 𝐸[𝑥], is a standardized KPI defined in 3GPP TS 32.425 and thus commonly implemented in commercial products. Therefore, a more practical metric results if 𝐸[1/𝑥] in (2) is replaced by 1/𝐸[𝑥], or
𝑇𝑠𝑐ℎ = 𝐸[𝑇1]/𝐸[𝑥] = 𝐶/𝐸[𝑥].   (3)
In 3GPP TS 32.425 this is called scheduled “IP throughput”. It should be emphasized that 𝐸[1/𝑥] ≠ 1/𝐸[𝑥], and for this reason 𝑇𝑠𝑐ℎ and 𝑇𝑢𝑒 are different throughput metrics and not equal in value. The scheduled throughput can also be written as [10]

𝑇𝑠𝑐ℎ = 𝑆/𝑊,   (4)

where 𝑆 is the average flow size and 𝑊 is the average flow transfer time. The scheduled throughput is often used in fixed-line IP network throughput analysis, where it goes by the name “flow throughput”. An application to the wireless network setting can be found in [2] and many others published since. The scheduled throughput 𝑇𝑠𝑐ℎ is always lower than the user throughput 𝑇𝑢𝑒. This is a direct result of the convexity of 1/𝑥 and Jensen’s inequality: 𝐸[1/𝑥] > 1/𝐸[𝑥] and subsequently 𝑇𝑢𝑒 > 𝑇𝑠𝑐ℎ.

While typically one would be more interested in the user throughput 𝑇𝑢𝑒, in this paper the focus is on the scheduled throughput 𝑇𝑠𝑐ℎ because 𝑇𝑢𝑒 is not usually computable from LTE base station performance counters. A discussion on the differences between different throughput metrics can be found in [10, 11].
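
As a simple numerical illustration of the difference between the two metrics, the following Python snippet evaluates 𝑇𝑢𝑒 = 𝐶 ⋅ 𝐸[1/𝑥] and 𝑇𝑠𝑐ℎ = 𝐶/𝐸[𝑥] for an assumed cell capacity and an assumed (illustrative) distribution of the number of active UEs:

    # Illustrative sketch: compare the two throughput metrics for an assumed
    # distribution of the number of active UEs x (conditioned on x >= 1).
    C = 50.0                               # assumed cell capacity E[T1], Mbps
    pmf = {1: 0.5, 2: 0.3, 4: 0.2}         # assumed P(x = k), k >= 1

    E_x = sum(k * p for k, p in pmf.items())        # E[x]
    E_inv_x = sum(p / k for k, p in pmf.items())    # E[1/x]

    T_ue = C * E_inv_x       # average user throughput
    T_sch = C / E_x          # scheduled ("IP") throughput

    print(T_ue, T_sch)       # 35.0 vs ~26.3: T_ue > T_sch, as Jensen's inequality implies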

3. Throughput versus PRB Utilization 


In the previous section, it was shown that the average scheduled throughput is inversely proportional to the average number of active UEs. However, the number of active UEs is perhaps not the most intuitive KPI for characterizing network load. A more commonly employed yardstick of network load is the fraction of utilized PRBs. It would therefore be of practical interest to have a rule that relates PRB utilization to user throughput. This is the topic of this section.
3.1 Definition of Radio Load

In fixed link throughput analysis, the link utilization 𝜌 is usually expressed in terms of average flow
size 𝑆 (bits) and flow arrival rate 𝜆 (1/sec) as
𝜌 = 𝜆𝑆/𝐶,   (5)
where 𝐶 is the link bandwidth (bits per second). The resource shared between users, in this case, is the link bandwidth 𝐶. For the LTE radio interface, another option is to use LTE physical resource blocks (PRBs) instead. In this context, the LTE cell scheduler can be considered a “processor” that shares the PRBs equally between active UEs. Furthermore, the PRB utilization is available from system counters in any practical LTE system.
To map bytes to PRBs, the file size 𝑆 needs to be converted to PRBs. For example, given an end-user spectral efficiency of 𝜂 = 1 bit per second per Hertz, one PRB can fit 180 bits of end-user data after channel coding and physical layer overhead. Hence a one-megabyte web page would generate a scheduler PRB load of 10⁶ × 8/180 ≈ 44000 PRBs. More generally,
𝜌 = 𝜆𝑆/(180 ⋅ 𝜂 ⋅ 𝑅prb),   (6)
where 𝜂 is the average cell spectral efficiency in bits per second per Hz, 𝑅prb is the PRB rate (e.g., 10⁵ PRBs per second for a 20 MHz cell), and the scaling factor 180 comes from the standardized PRB bandwidth of 180 kHz.
In practice, it is not necessary to know the traffic parameters 𝜆 and 𝑆 since the PRB utilization 𝜌 can be extracted from network statistics directly. This is in contrast to cell capacity 𝐶, which is less straightforward to estimate from counters.
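
The byte-to-PRB mapping above can be sketched in a few lines of Python. The 1 MB page and 𝜂 = 1 bit/s/Hz reproduce the numerical example given earlier, while the flow arrival rate used below is an assumed illustrative value:

    # Byte-to-PRB mapping and PRB utilization rho (cf. (6), flow size given here in bytes).
    BITS_PER_PRB = 180.0     # one 180 kHz PRB over 1 ms at 1 bit/s/Hz

    def prbs_per_flow(flow_size_bytes, eta):
        """PRBs needed for one flow at average spectral efficiency eta (bit/s/Hz)."""
        return 8.0 * flow_size_bytes / (BITS_PER_PRB * eta)

    def prb_utilization(lam, flow_size_bytes, eta, prb_rate):
        """rho = offered PRBs per second divided by available PRBs per second."""
        return lam * prbs_per_flow(flow_size_bytes, eta) / prb_rate

    print(prbs_per_flow(1e6, 1.0))              # ~44 000 PRBs for a 1 MB page
    print(prb_utilization(1.0, 1e6, 1.0, 1e5))  # rho ~ 0.44 for 1 flow/s in a 20 MHz cell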

3.2 Throughput versus PRB utilization via M/G/1 PS model

To relate the number of active UEs, 𝐸[𝑥] or 1/𝐸[𝑥], in (1)–(3) to PRB utilization, some statistical assumptions on the arrival process of the user data flows need to be made. In the IP engineering literature, it is common practice to model TCP flow throughput on wireline links using the so-called M/G/1 Processor Sharing model. The M/G/1 PS model assumes that data flows arrive according to a Poisson process with rate 𝜆. This assumption is typically considered well-justified, not least because it leads to simple formulas. In the sequel, the same approach is applied to LTE radio throughput.

It is well known from basic textbooks [12] that, under the Poisson arrival assumption, the proportion of time with 𝑥 active UEs is 𝜋𝑥 = (1 − 𝜌)𝜌^𝑥, where 𝑥 is a non-negative integer and 𝜌 is the load defined in (6). With this information, the expected values 𝐸[1/𝑥] and 𝐸[𝑥] in (2) and (3) can be computed. The details can be found in a number of references and the result for the user throughput is [12, 10, 11, 2]
𝑇𝑢𝑒 = 𝐶 (1 − 𝜌) ln(1/(1 − 𝜌)) / 𝜌,   (7)
while the scheduled throughput is simply
𝑇𝑠𝑐ℎ = 𝐶 (1 − 𝜌).   (8)
As mentioned, the measurement of user throughput 𝑇𝑢𝑒 is not very straightforward and not typically implemented in commercial base station products, while the scheduled throughput 𝑇𝑠𝑐ℎ is simpler to compute and has been standardized by 3GPP [13].
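
For reference, the two M/G/1 PS formulas can be evaluated as follows; the cell capacity 𝐶 below is an assumed value, and the expression for 𝑇𝑢𝑒 follows from the geometric distribution 𝜋𝑥 given above:

    # M/G/1 PS throughput versus PRB utilization rho, eqs. (7)-(8).
    import math

    def t_sch(C, rho):
        """Scheduled (3GPP 'IP') throughput, eq. (8): linear decrease with load."""
        return C * (1.0 - rho)

    def t_ue(C, rho):
        """Average user throughput, eq. (7), from E[1/x] under pi_x = (1-rho)*rho**x."""
        return C if rho == 0.0 else C * (1.0 - rho) * math.log(1.0 / (1.0 - rho)) / rho

    C = 50.0                                  # assumed cell capacity, Mbps
    for rho in (0.25, 0.5, 0.75):
        print(rho, t_sch(C, rho), t_ue(C, rho))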

3.3 Throughput versus PRB utilization via M/G/R PS model

Consider the case where, due to some throughput-limiting mechanism, a UE is able to use only a 1/𝑅th portion of the cell PRB resources even when there are no other active UEs. A typical example is the mobile operator capping the rates according to contractual conditions. A suitable model in this case is the M/G/R PS, where 1/𝑅 is the fraction of PRBs the UE can be allocated. The M/G/R processor sharing version of the scheduled throughput in (8) can be shown [14] to become

𝑇𝑠𝑐ℎ = (𝐶/𝑅) / (1 + 𝐸2(𝑅, 𝑦)/(𝑅(1 − 𝜌))),   (9)

where 𝐸2(𝑅, 𝑦) is Erlang’s second formula
𝐸2(𝑅, 𝑦) = [(𝑦^𝑅/𝑅!) ⋅ 𝑅/(𝑅 − 𝑦)] / [Σ_{𝑘=0}^{𝑅−1} 𝑦^𝑘/𝑘! + (𝑦^𝑅/𝑅!) ⋅ 𝑅/(𝑅 − 𝑦)].   (10)
Here 𝑅 is a positive integer and 𝑦 = 𝑅𝜌 > 0 is used for brevity. Setting 𝑅 = 1 results in the special
case of (8).
It can be observed that the seemingly innocent constraint of external throughput limitation results in a considerably more involved formula that can no longer be evaluated with pencil and paper. Another unpleasant finding is that 𝑅 is forced to be an integer, which restricts the single-user PRB share 1/𝑅 to 1/2, 1/3, …, which is unnecessarily coarse for practical use. Fortunately, it is possible to generalize (10) to real-valued 𝑅. For example, [15] gives the formula

with 𝑔(𝛾, 𝑥) and 𝐺(𝛾, 𝑥) denoting the probability density function and cumulative distribution function of the gamma distribution, respectively. In (12), 𝑅 ≥ 1 is a real number.

Fig. 1 illustrates the formula (9). In the figure, the user throughput on the vertical axis has been normalized with the cell throughput 𝐶 = 𝐸[𝑇1]. It can be seen that with increasing 𝑅, the user throughput becomes increasingly limited by the external constraint and less impacted by the load from other users. For example, for 𝑅 = 5 the user throughput is one-fifth of the cell throughput until it starts to decrease at around 𝜌 ≈ 0.5.
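
For the integer-𝑅 case, the rate-capped throughput can be evaluated with a short script. The expression below uses the commonly cited delay-factor form of the M/G/R PS result (an assumption about the exact notation of (9)); it reduces to 𝐶(1 − 𝜌) for 𝑅 = 1, and with 𝑅 = 5 it reproduces the behaviour seen in Fig. 1:

    # M/G/R PS scheduled throughput for integer R, via Erlang's second formula.
    from math import factorial

    def erlang_c(R, y):
        """Erlang's second formula E2(R, y) for integer R and offered load y < R."""
        top = (y ** R / factorial(R)) * (R / (R - y))
        return top / (sum(y ** k / factorial(k) for k in range(R)) + top)

    def t_sch_mgr(C, rho, R):
        """Scheduled throughput with a 1/R per-user rate cap (delay-factor form)."""
        y = R * rho
        delay_factor = 1.0 + erlang_c(R, y) / (R * (1.0 - rho))
        return (C / R) / delay_factor

    C = 50.0                                  # assumed cell capacity, Mbps
    for rho in (0.1, 0.5, 0.9):
        print(rho, t_sch_mgr(C, rho, 1), t_sch_mgr(C, rho, 5))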

 

Figure 1: Normalized scheduled throughput versus PRB utilization, M/G/R PS model with different
rate capping settings.

4. Application: Traffic Balancing between Frequency Layers


Considering two cells on different carrier frequencies that cover the same physical area (e.g., a ‘radio sector’), an interesting question is how the cell loads are related to the average user throughput of the sector. From the earlier discussion, the throughput in the case of a single cell is defined as 𝑇𝑠𝑐ℎ = 𝑆/𝑊,
where 𝑆 is the average file size and 𝑊 is the average time needed to transmit the file to the UE. To average the user throughput over two cells, a traffic splitting ratio can be introduced. The fraction of arriving data flows assigned to the first cell is denoted by 𝛾, which leaves the portion 1 − 𝛾 for the second cell. For a given average file size 𝑆, the average sector user throughput, using (8), can be written as a weighted sum

𝑇𝑠𝑒𝑐 = 𝛾 𝐶1 (1 − 𝜌1) + (1 − 𝛾) 𝐶2 (1 − 𝜌2),

where 𝐶𝑖 and 𝜌𝑖 are the average cell throughput and load of the 𝑖th cell, respectively. The traffic splitting ratio is assumed to apply to all traffic, and thus the cell loads are

𝜌1 = 𝛾𝜆𝑆/𝐶1,  𝜌2 = (1 − 𝛾)𝜆𝑆/𝐶2.
Fig. 2 illustrates this result in the case of 10 Mbps offered traffic and 𝐶1 = 20 Mbps. The average user throughput is shown for different traffic splits between the layers, for a few selected values of 𝐶2. It can be seen that an optimum traffic split maximizing user throughput exists. Interestingly, for 𝐶2 = 40 Mbps all traffic should be carried by the second layer in order to maximize average user throughput.



Figure 2: Average sector user throughput as a function of traffic split. 𝐶1 = 20 Mbps, sector total offered traffic is 10 Mbps.

Fig. 2 invites the following question: if the sector offered traffic 𝜆𝑆 and the cell capacities are fixed, what is the optimum traffic balancing factor that maximizes the sector user throughput 𝑇𝑠𝑒𝑐 in (18)? Skipping some straightforward calculations, the optimum splitting ratio turns out to be
where 𝜌̂1 = 𝜆𝑆/𝐶1, i.e., the load of the first cell if the second cell did not exist. The optimum traffic split depends only on the ratio of the cell capacities, not on their actual values. When 𝐶1 = 𝐶2, the even split 𝛾opt = 1/2 is optimal, as expected. Fig. 3 illustrates the optimum 𝛾 for different values of 𝐶2/𝐶1 and 𝜌̂1. It can be seen that if the capacity ratio 𝐶2/𝐶1 is higher than a certain threshold, which depends on the total sector traffic via 𝜌̂1, no positive 𝛾opt exists and, to maximize user throughput, the first cell should not carry any traffic at all.
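
The following sketch evaluates the two-layer split numerically under the weighted-sum model above, with cell loads 𝜌1 = 𝛾𝜆𝑆/𝐶1 and 𝜌2 = (1 − 𝛾)𝜆𝑆/𝐶2. The closed-form optimum used in the code is derived from that model rather than quoted from (19), but it reproduces the behaviour described for Figs. 2 and 3 (an even split for 𝐶1 = 𝐶2, and 𝛾opt = 0 for 𝐶2 = 40 Mbps in the Fig. 2 setup):

    # Two-layer traffic balancing under the linear throughput model C*(1 - rho).
    def sector_throughput(gamma, offered, C1, C2):
        """Average sector user throughput (Mbps) for traffic split gamma."""
        rho1 = gamma * offered / C1
        rho2 = (1.0 - gamma) * offered / C2
        return gamma * C1 * (1.0 - rho1) + (1.0 - gamma) * C2 * (1.0 - rho2)

    def optimal_split(offered, C1, C2):
        """Maximizer of the concave quadratic above, clipped to [0, 1]."""
        rho1_hat = offered / C1            # load of cell 1 if it carried all traffic
        gamma = 0.5 + (1.0 - C2 / C1) / (4.0 * rho1_hat)
        return min(max(gamma, 0.0), 1.0)

    # Fig. 2 setup: 10 Mbps offered traffic, C1 = 20 Mbps.
    for C2 in (10.0, 20.0, 40.0):
        g = optimal_split(10.0, 20.0, C2)
        print(C2, g, sector_throughput(g, 10.0, 20.0, C2))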

  

Figure 3: Optimum traffic splitting factor 𝛾 for two cells of the same sector.

5. Discussion


In (8), 𝜌 and 𝑇𝑠𝑐ℎ denote the PRB utilization and user throughput of the serving cell. Nothing is said about the load of the surrounding cells. This raises the question of how 𝑇𝑠𝑐ℎ = 𝐶(1 − 𝜌) behaves when the neighbor cell load increases together with 𝜌. Interference received from neighbor cells degrades the spectral efficiency 𝜂 in the serving cell; hence it is expected that the cell capacity 𝐶 decreases as the neighbor cell load increases. Decreasing 𝜂 also increases the PRB utilization (6), since the number of bits per PRB decreases (for a fixed traffic volume 𝑆). However, such an increase in 𝜌 due to other-cell interference is indistinguishable from an increase due to traffic volume, and in this sense it does not directly impact the throughput calculation. Cell capacity 𝐶, on the other hand, is expected to decrease with neighbor cell interference, and hence the user throughput should decrease superlinearly. Such a phenomenon has, however, not been observed in measurements of several live networks.

Increased load can also have a positive impact on spectral efficiency, namely in the form of multi-user diversity gain, where the scheduler exploits the frequency selectivity of the wideband radio channel. UEs are opportunistically scheduled on the frequency subband that has the highest relative channel gain for that UE. The multi-user diversity gain increases with the number of UEs scheduled per TTI, which was related to PRB utilization in Section 3. Cell capacity gains of up to 50% have been reported in simulations [16]. The impact on the present discussion is that if the serving cell load increases at the same time as the neighbor cell load, the degradation in spectral efficiency is partially compensated by the multi-user scheduling gain. Depending on the implementation, the radio scheduler may also trade off spectral efficiency for latency by using free PRBs to transmit with a lower channel coding rate. This improves latency since the probability of retransmission is reduced, but it also reduces spectral efficiency at low load. On the other hand, at high load most PRBs tend to be in use and the packet scheduler is thus forced to operate at a higher spectral efficiency. Other variations of the scheduling policies include prioritization according to radio quality or quality-of-service considerations.

6. Live Network Examples


6.1 Average Active UEs versus User Throughput

Fig. 4 shows an example of two cells serving a large number of smartphone users. The horizontal axis is the average number of active UEs, 𝐸[𝑥], extracted from hourly operations support system (OSS) performance counters over a period of two weeks. The hourly averages of 𝑥 have been binned to integers and, for each bin, the average flow throughput is plotted. The flow throughput shown on the vertical axis is also obtained from the OSS counters and computed according to the scheduled throughput (𝑇𝑠𝑐ℎ) definition in 3GPP TS 32.425. Each dot in the figure is the average UE throughput for the binned value of 𝐸[𝑥] on the horizontal axis. It can be seen that the scheduled throughput scales approximately as the inverse of the average number of active UEs, as predicted by the theory.

6.2 PRB Utilization versus User Throughput, M/G/1 PS model

Fig. 5 illustrates the scheduled throughput 𝑇𝑠𝑐ℎ for six cells from three different networks. PRB utilization is extracted from hourly counter measurements collected over a period of two weeks and binned to 2-PRB granularity. Each dot represents the average scheduled throughput of the PRB bin, computed based on the 3GPP method [13]. It can be seen that the theoretical model (8) fits the measurements fairly well, and throughput falls approximately linearly as a function of radio utilization. For example, at 50% utilization the user throughput has dropped to half of the cell throughput, while at 75% radio load a single user receives on average only 25% of the maximum throughput. Such simple rules of thumb can provide useful capacity management guidance for LTE networks, including traffic steering between different frequency layers.
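
The per-bin averaging used for Figs. 4 and 5 can be sketched as follows; the DataFrame and its column names are placeholders for the hourly OSS counter export, not actual counter names:

    # Bin hourly (PRB utilization, scheduled throughput) samples to 2-PRB granularity
    # and average the throughput within each bin.
    import pandas as pd

    def bin_throughput(df, n_prb=100, bin_prb=2):
        """Per-bin average scheduled throughput; 'prb_util' in [0, 1], 't_sch_mbps' in Mbps."""
        prb_used = df["prb_util"] * n_prb                        # utilization -> PRBs used
        df = df.assign(prb_bin=(prb_used // bin_prb) * bin_prb)  # 2-PRB bins
        return df.groupby("prb_bin")["t_sch_mbps"].mean()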

6.3 Applicability of the M/G/1 PS model across cells

We now study the suitability of a linear decrease model between the scheduled throughput and the cell load (such as the one in eq. 8) across the cells of one example live network. The following results are based on hourly, cell-level performance measurement counters (PMC) for one



Figure 4: Scheduled throughput 𝑇𝑠𝑐ℎ versus average number of active UEs, two examples from
live network.

Let 𝜓 denote the correlation coefficient for a given cell, calculated from the hourly scheduled throughput (𝑇𝑠𝑐ℎ) and PRB utilization (𝜌) samples in the PMC. Let also 𝜌𝑚𝑎𝑥 denote the maximum load observed for the cell. The values of 𝜌 have been quantized to a resolution of 2 PRBs, and 𝑇𝑠𝑐ℎ has been computed by aggregating the per-bin samples. At first glance, 72.4% of the cells in this example network have 𝜓 < −0.5 and could therefore be considered as exhibiting a linear decrease of 𝑇𝑠𝑐ℎ as a function of 𝜌. This can be further sliced according to 𝜌𝑚𝑎𝑥, since lightly loaded cells often exhibit bursty traffic patterns that do not fit the M/G/1 PS assumptions. Indeed, 90% of cells with 𝜌𝑚𝑎𝑥 > 0.7 have 𝜓 < −0.5. This is about 37% of the total cells.


The two-dimensional histogram of the per-cell tuples (𝜌𝑚𝑎𝑥, 𝜓) is shown in Fig. 6. We observe that most of the cells with good linearity (e.g., 𝜓 < −0.5) also exhibit a high maximum load (e.g., 𝜌𝑚𝑎𝑥 > 0.7). In contrast, lightly loaded cells (for example, 𝜌𝑚𝑎𝑥 < 0.2) show a somewhat uniform-looking distribution of 𝜓, suggesting that the model does not apply to them because one or more of its assumptions are violated.
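
The per-cell screening can be reproduced with a few lines of code; the container holding the per-cell (𝜌, 𝑇𝑠𝑐ℎ) sample arrays is an assumed placeholder structure:

    # Per-cell linearity screening: correlation psi and maximum observed load rho_max.
    import numpy as np

    def cell_stats(rho, t_sch):
        """Return (psi, rho_max) for one cell's hourly samples."""
        psi = float(np.corrcoef(rho, t_sch)[0, 1])
        return psi, float(np.max(rho))

    def share_linear_high_load(cells, psi_thr=-0.5, load_thr=0.7):
        """Fraction of cells with psi < psi_thr and rho_max > load_thr."""
        stats = [cell_stats(rho, t) for rho, t in cells.values()]
        hits = sum(1 for psi, rmax in stats if psi < psi_thr and rmax > load_thr)
        return hits / len(stats)
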
Finally, we evaluate the mean absolute error when fitting 𝑇𝑠𝑐ℎ and 𝜌 to the linear model in eq. 8. For simplicity, the cell capacity 𝐶 is estimated via a simple least-squares (LS) fit:

𝐶𝐿𝑆 = argmin_𝐶 Σ𝑖 (𝑇𝑠𝑐ℎ,𝑖 − 𝐶(1 − 𝜌𝑖))² = Σ𝑖 𝑇𝑠𝑐ℎ,𝑖 (1 − 𝜌𝑖) / Σ𝑖 (1 − 𝜌𝑖)².   (20)
The error associated with fitting the model for a given cell, on the other hand, is measured as the average of the absolute value of the relative error over the 𝑁𝑠 throughput samples (after binning 𝜌):

𝑒𝑟 = (1/𝑁𝑠) Σ𝑖 |𝑇𝑠𝑐ℎ,𝑖 − 𝑇̂𝑠𝑐ℎ,𝑖| / 𝑇𝑠𝑐ℎ,𝑖,   (21)



Figure 5: Scheduled throughput versus PRB utilization, six cells from three different networks.
The correlation coefficient 𝜓 and mean absolute error of the fit are given for each cell.
where 𝑇𝑠̂ 𝑐ℎ,𝑖 = 𝐶𝐿𝑆(1 − 𝜌𝑖) is the value calculated based on the linear model in (8). Figure 7 shows that the average of 𝑒𝑟 among cells for the high-load, high linearity network slice is between 5 and 27%.

7. Conclusion


This paper discussed the mapping of LTE user throughput to radio utilization. The average scheduled throughput, measured using the 3GPP method, was shown to be fairly accurately predicted by the cell capacity divided by the average number of active UEs. Adding the usual assumption that user data flow arrivals are Poisson distributed, this result was expressed in terms of Physical Resource Block utilization, the outcome being the well-known M/G/1 Processor Sharing formula that predicts a linear decrease of user throughput with PRB utilization. An extension to external rate limitation was given, and an application to load balancing between frequency layers was discussed. Measurement data from real-world networks indicate that the scheduled throughput degrades approximately linearly with the serving cell PRB utilization, and therefore the linear degradation predicted by 𝐶(1 − 𝜌) is a useful approximation for practical network operations purposes.



Figure 6: Joint distribution of (𝜌𝑚𝑎𝑥, 𝜓) across cells in a live network



Figure 7: Average of 𝑒𝑟 from eq. 21, in the case of the linear model in eq. 8 and the LS fit for 𝐶 from eq. 20.

References


[1] J. W. Roberts, “A survey on statistical bandwidth sharing,” Comput. Netw., vol. 45, no. 3, pp. 319–332, Jun. 2004. [Online]. Available: http://dx.doi.org/10.1016/j.comnet.2004.03.010

[2] T. Bonald and A. Proutière, “Wireless downlink data channels: user performance and cell dimensioning,” in Proc. ACM MOBICOM, 2003, pp. 339–352. [Online]. Available: http://doi.acm.org/10.1145/938985.939020

[3] J. Melasniemi, P. Lassila, and S. Aalto, “Minimizing file transfer delays using SRPT in HSDPA with terminal constraints,” in 4th Workshop on Network Control and Optimization, Ghent, Belgium, 2010.

[4] G. Arvanitakis and F. Kaltenberger, “PHY and MAC layer modeling of LTE and WiFi RATs,” Eurecom, Tech. Rep. EURECOM+4879, Mar. 2016. [Online]. Available: http://www.eurecom.fr/publication/4879
[5] F. Capozzi, G. Piro, L. A. Grieco, G. Boggia, and P. Camarda, “Downlink packet scheduling in LTE cellular networks: Key design issues and a survey,” IEEE Communications Surveys and Tutorials, vol. 15, no. 2, pp. 678–700, 2013.

[6] X. Li, U. Toseef, T. Weerawardane, W. Bigos, D. Dulas, C. Görg, A. Timm-Giel, and A. Klug, “Dimensioning of the LTE S1 interface,” in Proc. IFIP WMNC, 2010.

[7] H. Holma and A. Toskala, LTE for UMTS – OFDMA and SC-FDMA Based Radio Access. Wiley Publishing, 2009.

[8] F. Khan, LTE for 4G Mobile Broadband: Air Interface Technologies and Performance, 1st ed. New York, NY, USA: Cambridge University Press, 2009.

[9] S. Sesia, I. Toufik, and M. Baker, LTE, The UMTS Long Term Evolution: From Theory to Practice. Wiley Publishing, 2009.

[10] N. Chen and S. Jordan, “Throughput in processor-sharing queues,” IEEE Trans. Automat. Contr., vol. 52, pp. 299–305, 2007.

[11] A. A. Kherani and A. Kumar, “Stochastic models for throughput analysis of randomly arriving elastic flows in the Internet,” in Proc. IEEE INFOCOM, 2002.

[12] L. Kleinrock, Queueing Systems. Wiley Interscience, 1975, vols. 1–2.

[13] 3GPP, “Performance measurements Evolved Universal Terrestrial Radio Access Network (E-UTRAN),” 3rd Generation Partnership Project (3GPP), TS 32.425, 2016.

[14] K. Lindberger, “Balancing quality of service, pricing and utilisation in multiservice networks with stream and elastic traffic,” in Proc. ITC 16, 1999, pp. 1127–1136.

[15] V. Naumov and O. Martikainen, “Queueing systems with fractional number of servers,” The Research Institute of the Finnish Economy, Discussion Papers 1268, 2012. [Online]. Available: https://EconPapers.repec.org/RePEc:rif:dpaper:1268

[16] A. Pokhariyal, T. E. Kolding, and P. E. Mogensen, “Performance of downlink frequency domain packet scheduling for the UTRAN Long Term Evolution,” in Proc. IEEE PIMRC, Helsinki, Sep. 2006.
