
Distributed rate limiting with redis-cell

Redis has supported extension modules since version 4.0. redis-cell is a rate-limiting module based on the token bucket algorithm, written in Rust. It provides atomic rate limiting and allows for bursts of traffic, which makes it easy to use in a distributed environment.

The token bucket algorithm works as follows: define a bucket that generates tokens at a fixed rate, and have each request take a token from the bucket; if there are not enough tokens, the request fails, otherwise it succeeds. When traffic is light, the bucket stays close to full, so if a burst of traffic arrives, requests are not rejected immediately. This is why the algorithm tolerates a certain amount of burstiness.
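The algorithm described above can be sketched in Python. This is a minimal single-process illustration, not what redis-cell itself does internally (redis-cell performs the equivalent bookkeeping atomically on the Redis server); the class and parameter names are my own:

```python
import time


class TokenBucket:
    """Minimal token bucket: refills at `rate` tokens/second up to `capacity`."""

    def __init__(self, capacity, rate, now=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.now = now
        self.tokens = capacity  # the bucket starts full
        self.last = now()

    def acquire(self, n=1):
        """Try to take n tokens; return True on success, False on rejection."""
        t = self.now()
        # Refill according to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

For example, a bucket with capacity 5 refilling at 1 token/second will serve an initial burst of 5 tokens, then reject further requests until enough time has passed for the bucket to refill.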

These steps can be implemented with native Redis commands, but under high concurrency the data becomes inconsistent. redis-cell makes the whole process atomic, which solves the data-consistency problem in a distributed environment.

Officially, two installation methods are provided: a prebuilt package and compiling from source. Compiling from source requires setting up a Rust environment and is more involved, so here we install from the prebuilt package:
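A typical installation from a prebuilt release looks roughly like this. The archive name and paths below are illustrative; check the redis-cell releases page for the artifact matching your platform:

```shell
# Unpack the prebuilt release (filename varies by version and platform)
tar -xzf redis-cell-*.tar.gz

# Load the module when starting the server ...
redis-server --loadmodule /path/to/libredis_cell.so

# ... or add this line to redis.conf instead:
# loadmodule /path/to/libredis_cell.so

# Verify that the module is loaded
redis-cli MODULE LIST
```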

After performing the above steps, you can use the rate-limiting features it provides.

The module provides only one command: CL.THROTTLE

CL.THROTTLE test 100 400 60 3

test: the Redis key

100: officially called max_burst; its value is the token bucket's capacity minus 1, and the bucket starts full the first time the command is executed

400: together with the next parameter, the number of operations allowed within the specified time window

60: the time window, in seconds

3: the number of tokens requested this time; defaults to 1 if omitted

The command above requests 3 tokens from a token bucket with an initial value of 100 and a rate limit of 400 accesses per 60 seconds.

The command returns five values:

1: whether the request is rejected; 0 means allowed, 1 means rejected

2: the capacity of the token bucket (max_burst + 1)

3: the number of tokens currently available in the bucket

4: if the request is rejected, the number of seconds until tokens are next added to the bucket, which can be used as a retry interval; -1 if the request was allowed

5: the number of seconds until the token bucket is full again
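Putting the five fields together, a client might interpret the reply like this. The helper function name is my own, and the reply values in the example are hypothetical (a real client would get the list back from executing CL.THROTTLE):

```python
def interpret_throttle_reply(reply):
    """Map the 5-element CL.THROTTLE reply to named fields."""
    rejected, capacity, remaining, retry_after, reset_after = reply
    return {
        "allowed": rejected == 0,    # 0 = allowed, 1 = rejected
        "capacity": capacity,        # max_burst + 1
        "remaining": remaining,      # tokens left in the bucket
        "retry_after": retry_after,  # seconds until retry is possible (-1 if allowed)
        "reset_after": reset_after,  # seconds until the bucket is full again
    }


# Hypothetical reply for: CL.THROTTLE test 100 400 60 3
info = interpret_throttle_reply([0, 101, 98, -1, 1])
```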

Here's an example using a token bucket with a slower refill rate, to make the behavior easier to observe; execute the following commands in quick succession:
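The demonstration might look something like this (the key name and parameters are illustrative; the exact replies depend on timing and prior state, so they are not shown):

```shell
# Bucket of capacity 6 (max_burst 5), refilling 1 token per 10 seconds
redis-cli CL.THROTTLE demo 5 1 10 3   # take 3 tokens
redis-cli CL.THROTTLE demo 5 1 10 3   # take 3 more tokens
redis-cli CL.THROTTLE demo 5 1 10 3   # likely rejected: bucket nearly empty
```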

As the commands show, 3 tokens are taken from the bucket each time, and once there are not enough tokens left, the request is rejected.

For business reasons (more requests than usual on weekends), the company's services have been hiccuping on weekends lately, keeping the firefighting group busy. Several times there were Redis-related service avalanches, after which the architecture team recommended that the business groups reduce their dependence on other services.

On the one hand, other services are unreliable; on the other, some core business cannot be degraded. The company is also growing, there are too many services, and the cost of troubleshooting is too high. For these reasons, anything that can be resolved within a service should not depend on other services.

Personally, I feel that if the project is small and maintenance costs are low, you can use redis-cell directly; otherwise, consider fine-grained rate limiting at each service node, combined with an appropriate load-balancing strategy. The above is my personal understanding, for reference only.