Sunday, February 10, 2008

Throttling requests - Java

A few days ago I posted a problem: ensure that a service/API/resource is not called more than N times in any fixed interval. Here's the solution in Java.

Note that unlike the STL code, I don't use queues here; this eliminates a lot of unnecessary allocations and makes the code similar to the C-ish implementation I wrote first.

For more discussion (and for a Perl implementation, though I don't read Perl, so I can't vouch for its correctness), see this post and its comments: http://1-800-magic.blogspot.com/2008/02/throttling-service-requests-puzzle.html.


/**
 * Class that throttles requests. Ensures that
 * StartRequest cannot be called more than a
 * given number of times within any given interval.
 *
 * StartRequest blocks until this requirement
 * is satisfied.
 *
 * TryStartRequest returns 0 if the request was
 * cleared, or a non-0 number of milliseconds to
 * sleep before the next attempt.
 *
 * Simple usage, 10 requests per second:
 *
 * Throttler t = new Throttler(10, 1000);
 * ...
 * ServiceRequest(Throttler t, ...) {
 *     t.StartRequest();
 *     .. do work ..
 * }
 *
 * @author sergey@solyanik.com (Sergey Solyanik)
 */
class Throttler {
    /**
     * The interval within which we're ensuring the
     * max number of calls.
     */
    private long interval;

    /**
     * The maximum number of calls that can be made
     * within the interval.
     */
    private int max_calls;

    /**
     * Timestamps of previous calls within the interval
     * (a circular buffer).
     */
    private long[] ticks;

    /**
     * Available element at the insertion point
     * (back of the queue).
     */
    private int tick_next;

    /**
     * Element at the removal point (front of
     * the queue).
     */
    private int tick_last;

    /**
     * Constructor.
     * @param max_calls Max number of calls that
     *        can be made within the interval.
     * @param interval The interval, in milliseconds.
     */
    public Throttler(int max_calls, long interval) {
        this.interval = interval;
        // One extra slot so a full buffer can be told apart from an empty one.
        this.max_calls = max_calls + 1;
        this.ticks = new long[this.max_calls];
        this.tick_last = this.tick_next = 0;
    }

    /**
     * Returns the next element in the queue.
     * @param index The element for which to
     *        compute the next.
     * @return The index that follows the given one.
     */
    private int Next(int index) {
        index += 1;
        return index < max_calls ? index : 0;
    }

    /**
     * Attempts to clear the request.
     * @return Returns 0 if successful, or a
     *         time hint (ms) in which we should
     *         attempt to clear it again.
     */
    public synchronized long TryStartRequest() {
        long result = 0;
        long now = System.currentTimeMillis();
        // Drop timestamps that have fallen out of the interval.
        while (tick_last != tick_next) {
            if (now - ticks[tick_last] < interval)
                break;
            tick_last = Next(tick_last);
        }

        int next = Next(tick_next);
        if (next != tick_last) {
            // There is room in the buffer: record this call.
            ticks[tick_next] = now;
            tick_next = next;
        } else {
            // Buffer is full: tell the caller when the oldest call expires.
            result = interval - (now - ticks[tick_last]);
        }
        return result;
    }

    /**
     * Clears the request. Blocks until the
     * request can execute.
     */
    public void StartRequest() {
        long sleep;
        while ((sleep = TryStartRequest()) > 0) {
            try {
                Thread.sleep(sleep);
            } catch (InterruptedException ex) {
                ex.printStackTrace();
            }
        }
    }
}
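
For concreteness, here is a minimal usage sketch with a single Throttler instance shared by several threads. The DemoService class and the thread setup are illustrative only, not part of the original code:

// Sketch only: one Throttler shared by several worker threads.
class DemoService {
    // 10 requests per 1000 ms, shared by all threads.
    private final Throttler throttler = new Throttler(10, 1000);

    public void serviceRequest(int id) {
        throttler.StartRequest();   // blocks until the request is cleared
        System.out.println("request " + id + " cleared at " +
            System.currentTimeMillis());
    }

    public static void main(String[] args) {
        final DemoService service = new DemoService();
        for (int i = 0; i < 5; ++i) {
            final int worker = i;
            new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < 10; ++j) {
                        service.serviceRequest(worker * 100 + j);
                    }
                }
            }).start();
        }
    }
}

Because TryStartRequest is synchronized, the same Throttler instance can safely be shared by any number of threads; they simply queue up behind each other when the budget for the interval is exhausted.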

6 comments:

Alex Efros said...

Sadly, while you don't read Perl, I don't write Java/C++. :( I tried to rewrite your Java my way, but found I have no idea how to work with code refs or threads in Java (my previous and only Java code was "hello world" many years ago).

I could write this task with threads using the concurrent language Limbo, but I expect you don't read Limbo either, so there's no sense in doing that.

So I'll try to explain the differences between your solution and mine, because I believe mine is much simpler.

The main difference is in the interface. I try to write simpler code, so I decided to control execution of the user's requests from the Throttler, i.e. it is the Throttler that executes the user's code, unlike your interface where the user's code calls the Throttler. In Perl, which doesn't really support threads, this can be done by initializing the Throttler with a code ref to the user's function that does the real work, and then running a main loop in the Throttler code, which executes the user's function when appropriate. In Limbo this can be done by spawning the Throttler in a separate thread and having it work like a signal generator, which sends "1" over a channel every interval, while the user's code reads (blocking) from this channel before executing the next request.

The second difference is using the current time (mod array size) to index the circular array instead of keeping last/next indexes in separate variables. This is possible because my Throttler's code is executing all the time (as a main loop or in a separate thread). This allows us to remove a few lines of code that are liable to off-by-one errors; a rough Java sketch of the channel idea follows.
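
A rough Java analogue of the channel-based design described in this comment might look like the sketch below. The PermitGenerator name and the one-permit-every-interval/maxCalls pacing are assumptions made for illustration, not taken from the comment:

import java.util.concurrent.SynchronousQueue;

// Sketch: a generator thread plays the role of the Limbo channel writer,
// and callers block on acquire() before each request.
class PermitGenerator implements Runnable {
    private final SynchronousQueue<Integer> channel =
        new SynchronousQueue<Integer>();
    private final long pauseMs;   // assumed pacing: interval / maxCalls

    PermitGenerator(int maxCalls, long intervalMs) {
        this.pauseMs = intervalMs / maxCalls;
    }

    public void run() {
        try {
            while (true) {
                channel.put(1);         // hand a permit to one waiting caller
                Thread.sleep(pauseMs);  // then pause before the next one
            }
        } catch (InterruptedException ex) {
            // exit the generator thread
        }
    }

    /** Blocks until the generator hands out a permit. */
    public void acquire() throws InterruptedException {
        channel.take();
    }
}

Usage would be something like: PermitGenerator gen = new PermitGenerator(10, 1000); new Thread(gen).start(); and each caller invokes gen.acquire() before its request. Because put() blocks until a caller takes the permit, requests come out evenly spaced rather than in bursts.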

Anonymous said...

It turns out that you can solve the problem without using any arrays of previous activity at all.

If you're trying to send at most N queries in any interval of size T, then pick a positive integer k and have a timer that fires every T/k. In each T/k interval, send no more than N/(k+1) queries, and you'll be safe. Regardless of clock skew and query loading in the intervals, the +1 in the denominator saves you.

Of course, your maximum sustained rate is N * k / (k+1) queries per T, but you can make this arbitrarily close to N per T by making k large. (Without knowing N or T, I'd suspect that k in the range 5..10 is a good place to start.)
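
A minimal Java sketch of this bucketed-timer idea: a timer fires every T/k ms and refills a budget of N/(k+1) sends. The BucketThrottler name and the non-blocking tryStartRequest interface are illustrative, not from the comment:

import java.util.Timer;
import java.util.TimerTask;

// Sketch of the bucketed-timer approach; assumes maxPerInterval >= k + 1.
class BucketThrottler {
    private final int budgetPerTick;   // N / (k + 1)
    private int remaining;

    BucketThrottler(int maxPerInterval, long intervalMs, int k) {
        this.budgetPerTick = maxPerInterval / (k + 1);
        this.remaining = budgetPerTick;
        Timer timer = new Timer(true);  // daemon timer thread
        timer.scheduleAtFixedRate(new TimerTask() {
            public void run() {
                refill();
            }
        }, intervalMs / k, intervalMs / k);
    }

    private synchronized void refill() {
        remaining = budgetPerTick;
    }

    /** Returns true if the caller may send now, false to retry later. */
    public synchronized boolean tryStartRequest() {
        if (remaining > 0) {
            --remaining;
            return true;
        }
        return false;
    }
}

Any window of length T overlaps at most k + 1 timer ticks, each contributing at most N/(k+1) sends, which is why the scheme stays within N per T.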

Alex Efros said...

Why not simplify it even more? You don't need k and you know T is 1 second:

> A service can process N requests per second.

So, fire a timer every 1 second, and send no more than N requests from the queue.

But this solution doesn't take into account the time needed to send a request, the possibility that different requests may need different amounts of time to send, and the needless delay of up to 1 second before sending a request in some cases.

Sergey Solyanik said...

To Jimbo:

Welcome to my blog!

I think firing a frequent timer is probably more expensive than maintaining the array of integers (for 1000 qps I waste 4k). The timer can of course be virtualized to just check whether the time has expired when you're processing an event. But the queries would still need to block unnecessarily: if I get a burst of 1k in one second, 900 would have to pend, whereas with an array I can process them all right away and still adhere to the contract. A creative solution nevertheless.

To Alex:
Because if I want to guarantee 1000 qps, with the firing timer I can get into a situation where 1000 queries are processed at t + 0.75 and another 1000 are processed at t + 1.25 (assuming the timer fires at t, t + 1, t + 2, etc.). Then in the interval [t + 0.5, t + 1.5) I've actually gotten 2000 qps, which violates the contract.

David said...

Thanks a lot for the post. Can you please give a better example of how to use Throttler when multiple HttpClients are trying to call an external service that has a limit of 10 req / sec? How do they share the Throttler? Thanks very much, David

Anonymous said...

This algorithm has a shortcoming: memory! The length of the queue depends on the value of the threshold. There are sometimes "crazy" requirements, such as 10,000 requests per day, and then the memory will be a big challenge.