Rate detection, or measuring the intensity of an incoming event stream in the decay model
Within the class of streaming algorithms, consider the subclass that solves the problem of heavy hitter detection. In general, this problem is stated as “revealing the most frequently occurring events in the incoming stream and measuring their rate.” In this publication, Qrator Labs employee Artem Shvorin proposes an efficient algorithm for solving this problem.
Introduction
Heavy hitter detection algorithms help to solve such problems as network congestion mitigation, identification of network anomalies and attacks, and management of dynamic routing. For example, the well-known web server NGINX allows limiting the rate of requests to a particular resource, and to do this, the rate must be measured quantitatively.
In this publication, we introduce the reader to one more approach to measuring the rate of an event flow in the case of a multitude of different (i.e. non-identical) events. Let there be a set of event types; we need to evaluate how often an event of a particular type occurs, as well as to recognize the cases when a certain event type repeats “too often.”
An important characteristic of this tool is its high efficiency, so that it does not become a bottleneck in the entire traffic analysis and filtering system. In our case, this means processing one event in just a few dozen cycles of a modern CPU.
In addition to measuring the actual rate, it is usually required to single out the event types that occur most often. Their rate should be measured more accurately, while rare event types play a less important role, and their rate estimates need not be as accurate.
In this work, we propose the newly developed Decay-based Count Algorithm, which solves the heavy hitter detection problem with high efficiency. The name of the algorithm refers to the physical process of decay of a radioactive substance.
Heavy hitter detection problem
Problem classification
The problems described above are divided into several subclasses:
- Threshold-t. The flows having a rate higher than a given fraction t of the rate of total incoming traffic are to be recognized.
- Top-k. For a given number k, the k flows with the highest rates are to be selected.
- The flows having a rate higher than a given absolute value.
The proposed algorithm belongs to the latter subclass.
Target architecture
Depending on the platform on which the streaming algorithm is supposed to run, a number of hardware limitations arise. Sometimes it is required to integrate the necessary functionality into network equipment (a switch, router, etc.). The proposed algorithm is intended to run on a general-purpose processor, but for some parameter values, it could be embedded in network equipment.
Problem parameters
- The absolute value of the flow rate, which is considered dangerous. The problem is to identify such flows.
- The size of the key which identifies the event type. For instance, if packets with the same source IP address are declared to be the same, then the key is defined as the 4-byte value src.ip. In this implementation, as in many other algorithms, key values are stored in a table, so the key size affects memory costs.
- A method for calculating the rate of a single flow from the timestamps of events belonging to the same type. In essence, this is an algorithmic definition of the rate of the flow. In our case, an exponential moving average is calculated, which has a single parameter: the typical time τ during which the weight of an event, after its arrival, is taken into account.
Solution accuracy
The quality of the algorithm can be evaluated by the relative or absolute error in estimating the rate of the flows. As a measure of accuracy, the (ε, δ)-approximation may be used: if the error is at most ε with probability 1 − δ, then the algorithm is said to have the accuracy measure (ε, δ).
If the error is of a qualitative rather than quantitative nature, such as the inclusion or non-inclusion of a flow in the list of the highest-rate flows in the top-k problem, then the probabilities of false positives and false negatives are taken into account.
Overheads
As a rule, the main characteristic of the algorithm is the maximum rate of incoming events that it is still capable of processing. That is, primarily the time of processing one event matters. However, in practice, other hardware limitations, such as the size of the memory used to store the accumulated information about the flows, also matter. Especially severe hardware limitations occur when integrating the functionality into network equipment. But even on ordinary computers, the presence of a large amount of memory does not always help, since access to DRAM can take an unacceptably long time. To achieve the required performance, one has to trade off measurement accuracy and limit the size of the auxiliary data so that it fits into the processor cache. In this implementation, with meaningful parameter values, the table fits into the L2 cache of the CPU. As a result, it becomes possible to ensure that processing one event takes only a few dozen CPU clock cycles.
Methods for estimating the rate of the event flow
To determine the rate of a given event type, it is required to count the number of events over some period of time. To do this, you need to record the occurrence time of each event and somehow store it. Typically, a counter is used for this purpose, which is incremented when the corresponding event occurs. The rate is then estimated as the ratio of the counter value to the time interval over which the measurement is performed. More accurate methods of measuring the actual rate use variations of the moving average. For example, the exponential moving average (EMA) is a kind of weighted moving average with exponentially decreasing weights.
In this work, we propose a new method for computing the EMA. It is based on the exponential decay model (see the Decay model section), which describes such a physical phenomenon as radioactive decay. This model has several advantages over the traditional approach. First, the data representation is more memory-efficient. Second, the decay model allows an efficient implementation of the counter update operation: it requires only a few integer arithmetic operations and a single cached memory access.
Methods of multiple flows accounting
In the problem of heavy hitter detection, the difficulty is not only the high rate of the input traffic but also the large number of different flows (event types) that have to be monitored. A naive implementation involves allocating a separate counter for each flow, which results in significant memory consumption. In this case, the risk is not only exhausting all available memory but also a significant slowdown due to cache misses. Therefore, instead of solving the heavy hitter detection problem exactly, it is possible to drop some of the accumulated information about the flows. Many methods are known for limiting the amount of memory used and reducing the processing time of an event; some of them are listed below:
- Packet sampling
- Space saving algorithm
- HashParallel
- HashPipe
- Count-min sketch
Problem formalization
A sequence of similar events is given in the form of a set of elementary events {event_k}, where the index k runs through a finite or infinite discrete interval I ⊂ ℤ, k ∈ I.
In the simplest case, the event is determined only by the time of its occurrence: event_k = t_k, and the events are ordered in time: t_i < t_j for i < j. In most implementations of event accounting systems, time is considered to be a discrete value: t_k ∈ ℤ, but for theoretical reasoning, it is convenient to generalize and consider the time to be continuous: t_k ∈ ℝ.
The main question that event accounting systems should answer is an estimate of the rate of the flow. The rate can be strictly formalized for a uniform flow of events. The uniform flow {t_k} is defined as follows: t_k = φ + p⋅k, k ∈ ℤ, where p > 0 and φ ∈ [0, p) are the flow parameters, pace and phase, respectively. The rate of such a flow is then, by definition, r = 1/p.
For non-uniform flows, the formal definition of the rate r = r({t_k}) may vary depending on the problem formulation. The decay model remains consistent and useful in the case of non-uniform flows, however, an estimate of the quality of true rate calculation given below is valid only for uniform flows.
In other models, some extra characteristics of the event are to be taken into account, for example, the weight c_k defined as the value of the contribution of this event to the measured rate. In this case event_k = (t_k, c_k).
A typical implementation of an event accounting system has a counter s that in some way accumulates information about a certain event flow; at any instant one can read the counter value and obtain an estimate r̂ of the flow rate as a function of s, such that r̂ ≈ r. The counter is updated at the arrival of the next event_k:
s ← update(s, event_k),
where update() is some function defined by the implementation. Below are some variants of implementing the function update():
- Counting the number of events:
update(s, event_k) = s + 1
- Counting the number of events taking the weight into account:
update(s, event_k) = s + c_k
- Calculation of the exponential moving average (EMA) with parameter β. Here the counter s = (v, t) stores two values: the current EMA value and the time of the last update (a minimal C sketch of this variant is given after the formula below).
update((v, t), event_k) = (v′, t′),
where v′ = β + v⋅(1 − β)^(t_k − t), t′ = t_k.
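For illustration, here is a minimal sketch of the last variant in C (the test code referenced at the end of this article is also in C); struct ema and ema_update are our names, introduced only for this example:

#include <math.h>
#include <stdint.h>

/* Traditional EMA counter: the state is the pair (v, t), i.e. the
 * current average and the time of the last update. */
struct ema {
    double  v;  /* accumulated EMA value */
    int64_t t;  /* time of the last update */
};

/* update((v, t), event_k): v' = beta + v * (1 - beta)^(t_k - t), t' = t_k */
static void ema_update(struct ema *c, int64_t t_k, double beta)
{
    c->v = beta + c->v * pow(1.0 - beta, (double)(t_k - c->t));
    c->t = t_k;
}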
In some cases it is required to distinguish event flows. Consider a number of distinct event types indexed by i = 1, …, N, where the problem is to account for each event type separately. Then the event is described as event_k = (t_k, i_k) (or, if the weight of the event is taken into account, event_k = (t_k, i_k, c_k)), where i_k is the event type. The set of indices k related to a given event type i is denoted by I_i = {k ∣ i_k = i}.
The decay model
Here we introduce the decay model. It is described as follows:
v(t) = v0⋅exp(−λt),
where v0 is the “quantity of matter” at time zero, v(t) is the quantity at time t, and λ is the parameter of the model (the so-called exponential decay constant).
Also, we introduce the following notation:
τ = 1/λ, α = exp(λ).
In our case, it is more convenient to use τ instead of λ, as τ has the dimension of time and informally can be defined as the characteristic time interval during which the history of events is accumulated.
By definition, we will assume that each event type corresponds to a value v, which has the physical meaning of a “quantity of matter” and depends on time in such a way that it increases sharply by one when the event occurs and decreases the rest of the time in accordance with the exponential law shown above.
The figure below shows how the value of v changes with time for uniform flows with different rates and for a non-uniform flow.
The value of v is not explicitly stored in memory but can be computed at any time. Instead, the value s is stored, such that at time tNow the value v is expressed as follows:
v = α^(s − tNow). This representation corresponds to the decay model.
The calculation of the counter s in the decay model is equivalent to calculating the exponential moving average (EMA), which differs just by a normalizing factor (constant). An important feature of this representation is that the counter stores just a single value in contrast to the traditional approach where a pair of values (the time of the last event and the accumulated value) is stored.
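To see the equivalence, note that the quantity v accumulated by a flow of unit-weight events is the sum of the decayed contributions of all past events:

v(tNow) = Σ_{t_k ≤ tNow} α^(−(tNow − t_k)),

which is a weighted sum with exponentially decreasing weights, i.e. the EMA from the previous section with α corresponding to 1/(1 − β), up to the normalizing factor β.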
Counter update
A non-trivial operation is updating the counter value s when an event occurs. In this case, s must be replaced with a new value s′, which is determined by the following equation:
α^(s′−tNow) = α^(s − tNow) + 1.
Here, the left-hand side of the equation is the value of v immediately after the event, and the right-hand side is the value of v immediately before the event, increased by one, which is the contribution of the event. In the “Update value operation” section, efficient methods for calculating the result are proposed.
In terms of the update() function defined earlier, the update operation is expressed as follows:
update(s, event_k) = t_k + log_α(1 + α^(s − t_k)).
Here, the execution moment tNow coincides with the arrival moment t_k of the event.
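As a worked example, consider an empty counter (v = 0) that receives two events at the same time t_k. The first event sets v = 1, i.e. s = t_k. The second one gives s′ = t_k + log_α(1 + α^0) = t_k + τ⋅ln 2, so for τ = 1000 the counter moves roughly 693 units ahead of the current time.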
Rate definition
Let there be a uniform flow of events with rate r, which means that events occur with pace p = 1/r, as in the definition of a uniform flow given above.
If the measurement is made immediately after an event, i.e. tNow = t_n for some n, then for a flow that has been running for a long time the accumulated value of the counter s satisfies:
α^(s − tNow) = Σ_{j ≥ 0} α^(−p⋅j) = 1 / (1 − α^(−p)).
Whence
r = 1/p = −1 / log_α(1 − α^(−Δs)),
where Δs = s − tNow is the relative value of the counter.
In the general case, the measurement time can fall between events: tNow = t_n + ψ, ψ ∈ [0, p). In this case, the counter is shifted by the value of ψ:
Δs = −ψ + log_α(1 / (1 − α^(−p))).
The rate measuring problem is to estimate the rate from the value of the counter. Assuming that the flow is uniform, it is possible to obtain estimates of the true rate of the flow from above and below by substituting the extreme values ψ = 0 and ψ = p into the latter equation:
r− ≤ r < r+,
where
r− = −1 / log_α(1 − α^(−Δs)),  r+ = 1 / log_α(1 + α^(−Δs)).
Both the lower bound r− and the upper bound r+ depend monotonically on the value of the counter s (see figure r-plus-minus). Thus, for example, comparing the current value of the counter with a given threshold does not require any additional calculations.
In the decay model, the rate of an arbitrary (not necessarily uniform) flow is defined by the bounds r− and r+. If the measurement occurs immediately after the event processing, then the rate is considered to be exactly equal to the lower bound.
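A minimal sketch of this estimation in C, assuming the bounds derived above and using log_α(y) = τ⋅ln y and α^x = exp(x/τ) to reduce everything to the standard math library; the name rate_bounds is ours, and the treatment of Δs ≤ 0 follows the remark in the next section:

#include <math.h>
#include <stdint.h>

/* Estimate the flow rate from the counter s at time t_now:
 *   r- = -1 / log_alpha(1 - alpha^(-ds)),
 *   r+ =  1 / log_alpha(1 + alpha^(-ds)),  where ds = s - t_now. */
static void rate_bounds(int64_t s, int64_t t_now, double tau,
                        double *r_lo, double *r_hi)
{
    double ds = (double)(s - t_now);  /* relative counter value */
    double a  = exp(-ds / tau);       /* alpha^(-ds) */

    /* for ds <= 0 the lower bound degenerates to zero */
    *r_lo = (ds > 0.0) ? -1.0 / (tau * log(1.0 - a)) : 0.0;
    *r_hi = 1.0 / (tau * log1p(a));
}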
Applicability limitations of the decay model
There are some limitations to the rate value that could be correctly measured with the decay model.
First, if the occurrence time of events is measured as a discrete quantity, then the flow pace cannot be less than 1. That is, the rate of the flow should not exceed r_max = 1. This implies a restriction on the relative value of the counter Δs = s − tNow: it should not exceed the value σ_max, which is determined by the formula:
r−(σ_max) = r_max.
The value of σ_max is estimated as follows:
σ_max = τ⋅ln τ + ω(τ),
where 0 ≤ ω(τ) ≤ 1/2. For example, for τ = 1000 this gives σ_max ≈ 6908.
Second, it is difficult to estimate the rate of a weak (low-rate) flow, since at small Δs the accuracy of the bounds r+ and r− deteriorates, and for negative Δs the lower bound of the rate degenerates to zero.
Also, there is a limitation on the time scale used in the decay model, related to counter overflow. Since the value of the counter can exceed tNow by no more than σ_max, the system’s maximum runtime is determined by the counter capacity and the duration of one processor clock cycle. So, in some implementations, a mechanism for resetting the counters should be provided.
Multiple flows accounting algorithms
Decay model features
Using the EMA as a counter value gives some benefits. After the event flow stops, the accumulated value quickly (exponentially in time) degrades and becomes indistinguishable from zero. In the decay model, this fact is used to reset the counter automatically. The time T_vanish over which any previously accumulated value degrades to nearly zero depends on the parameter τ and the required accuracy. It is expressed as T_vanish = T_min + σ_max, where T_min is the decay time of the value accumulated as a result of a single event. In the table implementation section, the exact expression for T_min is given, but here it suffices to bear in mind the following fact:
T_vanish = O(τ⋅ln τ) for fixed accuracy.
This gives an upper bound on the amount of memory used by the counter storage when multiple flows are taken into account while keeping all the information about the flows: a storage of T_vanish ⋅ r_max cells is sufficient. In many applications of heavy hitter detection, large values of τ are not required, and the whole storage can be placed in RAM or even in the L3 or L2 processor cache. In this case, a short access time to the storage is ensured, so the problem remains feasible even at extremely high rates of the input stream.
To implement the counter storage, a hash table is suitable, where the key is the event type. In this case, cells with a counter value s satisfying the condition s − tNow ≤ −T_min are assumed to be empty.
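For illustration, a minimal sketch of such a storage in C, assuming open addressing with linear probing and a 32-bit key; the table size, the hash multiplier, and all names here are our assumptions, not a fixed part of the algorithm:

#include <stdint.h>

#define TABLE_SIZE (1u << 16)  /* assumed; chosen so the table fits the cache */

struct cell {
    uint32_t key;  /* event type */
    int64_t  s;    /* counter in the decay-model representation */
};
static struct cell table[TABLE_SIZE];

/* Find the cell for `key`, or a reusable cell: a cell is treated as
 * empty when s - t_now <= -T_min, i.e. its value has decayed to zero. */
static struct cell *lookup(uint32_t key, int64_t t_now, int64_t t_min)
{
    uint32_t h = key * 2654435761u;  /* multiplicative hashing (assumed) */

    for (uint32_t i = 0; i < TABLE_SIZE; i++) {
        struct cell *c = &table[(h + i) % TABLE_SIZE];
        if (c->key == key || c->s - t_now <= -t_min)
            return c;
    }
    return 0;  /* table is full: the caller has to drop the event */
}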
Numerical implementation of the decay-based count
Update value operation
Consider the following notation:
ρ_τ(x) = τ⋅ln(1 + exp(x/τ)) = log_α(1 + α^x),
So the update operation is expressed as follows:
s′ = tNow + ρ_τ(s − tNow).
Thus, the problem reduces to an efficient calculation of an approximation of ρ_τ. It is convenient to represent time as an integer, for example, measuring it in CPU cycles, so it is required to construct an integer mapping R_τ: ℤ → ℤ such that R_τ(T) ≈ ρ_τ(T).
For this problem, only absolute accuracy matters, not relative accuracy, since operations with time values are mostly additive. Obviously, due to the integer representation of time, an error of less than 0.5 cycles is unattainable.
Also, the time measurement operation, as a rule, introduces some error. For example, if time can be measured with an accuracy of 10 cycles, then it is sufficient to require that R_τ gives an approximation of the same accuracy:
|R_τ(T)−ρ_τ(T)| ≤ 10.
One could suggest several ways to calculate R_τ, differing in complexity and efficiency.
Calculating Rτ: FPU
The easiest way is to use floating-point arithmetic and directly evaluate ρ_τ by its definition. The execution time of this operation is about 100 CPU cycles, which is quite a lot compared to the methods suggested below.
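A minimal sketch of this variant in C; it uses the identity ρ_τ(x) = x + ρ_τ(−x), derived in the next section, to keep exp() from overflowing for positive arguments (the function names are ours):

#include <math.h>
#include <stdint.h>

/* R_tau(T) via direct FPU evaluation of rho_tau(T) = tau * ln(1 + exp(T/tau)) */
static int64_t rho_fpu(int64_t T, double tau)
{
    if (T > 0)  /* rho(x) = x + rho(-x) avoids overflow in exp() */
        return T + rho_fpu(-T, tau);
    return llround(tau * log1p(exp((double)T / tau)));
}

/* update(): absorb an event arriving at time t_now into the counter s */
static int64_t rate_update_fpu(int64_t s, int64_t t_now, double tau)
{
    return t_now + rho_fpu(s - t_now, tau);
}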
Calculating Rτ: table implementation
The idea of a table implementation is to store the pre-calculated values of a function in a table.
First, using the identity
ρ_τ(−x) = − x + ρ_τ(x),
it is sufficient to construct R_τ(T) only for T ≤ 0.
Secondly, since
ρ_τ(x) → 0 as x → −∞,
there exists T_min > 0 such that R_τ(T) = 0 for T ≤ −T_min. The value of T_min can be found from the following considerations:
T_min = ⌈T̃_min⌉, where ρ_τ(−T̃_min) = 1/2,
whence T_min = ⌈−log_α(α^(1/2) − 1)⌉ = ⌈−τ⋅ln(exp(1/(2τ)) − 1)⌉. For example, for τ = 1000 this gives T_min = 7601.
Thus, it suffices to define R_τ(T) for T_min < T ≤ 0.
The graph of the function ρ_τ(x) is shown below. A peculiarity of this function is that a change of the parameter τ scales the graph uniformly along both axes.
The straightforward implementation of R_τ is to build a table of T_min cells storing the pre-calculated values. Since all values of the function on the given interval lie between 0 and ρ_τ(0) = τ⋅ln 2, the total size of the table is T_min⋅log₂(τ⋅ln 2) bits. For example, for τ = 1000 this is about 7601 cells of 10 bits each, i.e. roughly 9.3 KB.
The algorithm for updating the value then reduces to one comparison, one addition, and one table lookup; a sketch is given below.
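A minimal sketch of the table-based update in C, under the same notation; the table is filled once at start-up using the FPU, after which update() needs no floating point. All names and the choice of τ here are our assumptions:

#include <math.h>
#include <stdint.h>
#include <stdlib.h>

#define TAU 1000.0            /* model parameter tau (assumed value) */

static int64_t  t_min;        /* R_tau(T) == 0 for T <= -t_min */
static int32_t *rho_table;    /* rho_table[-T] = round(rho_tau(T)), -t_min < T <= 0 */

/* Pre-compute the table; done once, so the FPU cost does not matter here. */
static void rho_init(void)
{
    t_min = (int64_t)ceil(-TAU * log(exp(1.0 / (2.0 * TAU)) - 1.0));
    rho_table = malloc((size_t)t_min * sizeof(*rho_table));
    for (int64_t T = 0; T > -t_min; T--)
        rho_table[-T] = (int32_t)llround(TAU * log1p(exp((double)T / TAU)));
}

static int64_t rho_int(int64_t T)
{
    if (T > 0)
        return T + rho_int(-T);  /* identity: rho(x) = x + rho(-x) */
    if (T <= -t_min)
        return 0;                /* the accumulated value has decayed away */
    return rho_table[-T];
}

/* update(): absorb an event arriving at time t_now into the counter s */
static int64_t rate_update(int64_t s, int64_t t_now)
{
    return t_now + rho_int(s - t_now);
}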
The absolute accuracy of this method is 1/2, since it gives the best integer approximation of the real-valued function.
Results and Conclusion
We compared three different test implementations of the EMA:
- Naive implementation of EMA;
- Decay-based model with FPU calculations, i.e. with exp() and log() function calls from the standard math library;
- Decay-based model implemented via table.
The C source code of the test: pastebin.com/N1Xg9dud.
The execution time of one update() call at τ = 100000 (in which case the lookup table fits into the L1 cache) is 100, 125, and 11 clock cycles for implementations 1, 2, and 3, respectively.
Using an exponential moving average is a convenient way of counting events and estimating the rate. However, this operation is computationally expensive, which discourages the use of the EMA in real applications. Our implementation of the EMA algorithm based on the decay model is, first, mathematically elegant and, second, much more efficient than the naive implementation of the EMA calculation. The efficiency is provided by two factors: a table-based calculation of the transcendental function and a more memory-efficient counter representation.
Acknowledgements
This publication has been prepared as part of a pilot project describing the mechanisms of operation of the Qrator Labs filtering network. We thank Anton Orlov, Artem Gavrichenkov, Eugene Nagradov, and Nikita Danilov for fruitful discussions and ideas.