Simple way to rate limit your fetch requests
#typescript #api-rate-limit #async #fetch #rate-limit #request
A few weeks ago, my colleagues at SprintEins shared an article describing a solution for rate limiting fetch requests in Deno to avoid hitting an API rate limit. I used to write similar code in the past, so it really caught my attention.
Original implementation
The original implementation, described in the first part of the article series, proposes an initial “not production ready” solution:
```typescript
import { delay } from "https://deno.land/std@0.202.0/async/delay.ts";

interface IMyRequest {
  url: string;
  resolve: (value: Response | Error) => void;
}

const queue: IMyRequest[] = [];

const myFetch = (url: string) => {
  const promise = new Promise((resolve) => {
    queue.push({
      url,
      resolve,
    });
  });
  return promise;
};

const loop = async (interval: number) => {
  while (true) {
    await delay(interval);
    const req = queue.shift();
    if (!req) continue;
    try {
      const response = await fetch(req.url);
      req.resolve(response);
    } catch (error) {
      req.resolve(error);
    }
  }
};
```
The author’s self-criticism, and the reason they call it “not production ready”, is that the code doesn’t respect the fetch interface.
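One concrete symptom of that mismatch: a failed request is *resolved* with an Error instead of being rejected, so a caller’s try/catch around the await never fires. The self-contained sketch below makes this visible; the always-failing mock fetch, the local delay helper, and the while condition are my own stand-ins for illustration, not from the article.

```typescript
// Self-contained sketch of the initial queue idea, with a mocked fetch
// that always fails. failingFetch, delay, and main are illustrative
// stand-ins; myFetch/loop/queue mirror the article's snippet.
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Stand-in for fetch that simulates a network error.
const failingFetch = (_url: string): Promise<Response> =>
  Promise.reject(new Error("network down"));

interface IMyRequest {
  url: string;
  resolve: (value: Response | Error) => void;
}

const queue: IMyRequest[] = [];

const myFetch = (url: string): Promise<Response | Error> =>
  new Promise((resolve) => {
    queue.push({ url, resolve });
  });

const loop = async (interval: number) => {
  while (queue.length) {
    await delay(interval);
    const req = queue.shift();
    if (!req) continue;
    try {
      const response = await failingFetch(req.url);
      req.resolve(response);
    } catch (error) {
      // The error is *resolved*, not rejected - a caller's try/catch
      // around `await myFetch(...)` will never fire.
      req.resolve(error as Error);
    }
  }
};

async function main() {
  const pending = myFetch("https://example.invalid/todo");
  loop(10); // drain the queue
  const result = await pending;
  console.log(result instanceof Error); // true - callers must type-check
}
main();
```

With the real fetch contract, the failed request would reject and ordinary error handling would kick in; here the caller silently receives an Error value and has to inspect it.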
However, the version from GitHub linked in the second article fixes this by pushing the FetchInput and RequestInit parameters into the queue:
```typescript
fetch(input: FetchInput, init?: RequestInit): Promise<Response> {
  let promise = undefined as unknown as Promise<Response>;
  promise = new Promise((resolve, reject) => {
    this.#queue.push({
      promise,
      input,
      init,
      resolve,
      reject,
    });
  });
  if (this.#queue.length === 1) this.#loop();
  return promise;
}
```
After a quick glance at the entire code, one notices that the promise field of IRequestEntity is never used, so fetch can be simplified to:
```typescript
fetch(input: FetchInput, init?: RequestInit): Promise<Response> {
  const promise = new Promise((resolve, reject) => {
    this.#queue.push({
      input,
      init,
      resolve,
      reject,
    });
  });
  if (this.#queue.length === 1) this.#loop();
  return promise;
}
```
The private #loop method has also been slightly changed from its original proposal:
```typescript
async #loop() {
  while (true) {
    const entity = this.#queue.shift();
    if (!entity) break;
    try {
      const response = await fetch(entity.input, entity.init);
      entity.resolve(response);
    } catch (error) {
      entity.reject(error);
    }
    await delay(this.#options.interval);
  }
}
```
Moving the delay to the end of the #loop method is actually a mistake. The following code executes the fetch requests immediately:
```typescript
const limiter = new HTTPLimiter({
  interval: 1000,
});

for (let i = 10; i--; ) {
  console.log(i);
  await limiter.fetch(`https://jsonplaceholder.typicode.com/todos/${i}`);
}
```
This is because, as soon as we push the item into the queue, this.#queue.length === 1 is true, and we invoke the #loop method. We then take the item off the queue and proceed with the request. Processing the item immediately means that this.#queue.length === 1 is true for every limiter.fetch call, thus creating a new while (true) loop each time. Each of these loops then waits for the delay and is interrupted in the next cycle by the if (!entity) break; statement. Moving the delay back to the beginning of the while loop blocks it for at least 1 second before it continues processing items. This behavior is closer to what we need: if we remove the await in front of the limiter.fetch call, we will notice that all requests get queued and then processed one by one with a 1-second delay between them.
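The loop-per-call behavior is easy to demonstrate with a stripped-down sketch. The counter, the 50 ms delay, and the number payload below are my own additions for illustration; no network is involved.

```typescript
// Demonstrates why placing the delay *after* processing breaks the
// limiter: every call finds an empty queue and spawns its own loop.
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

type Entity = { id: number; resolve: (id: number) => void };

const queue: Entity[] = [];
let loopsStarted = 0; // instrumentation, not part of the original code

async function loopDelayLast() {
  loopsStarted++;
  while (true) {
    const entity = queue.shift();
    if (!entity) break;
    entity.resolve(entity.id); // processed immediately - no throttling
    await delay(50); // delay at the end, as in the changed #loop
  }
}

function limitedCall(id: number): Promise<number> {
  const promise = new Promise<number>((resolve) => {
    queue.push({ id, resolve });
  });
  // The queue was already drained synchronously by the previous loop,
  // so this condition is true for *every* call.
  if (queue.length === 1) loopDelayLast();
  return promise;
}

async function main() {
  const results = await Promise.all([1, 2, 3].map(limitedCall));
  // loopsStarted === 3: every call spawned its own loop, and nothing
  // was rate limited.
  console.log(results, loopsStarted);
}
main();
```

With the delay moved back to the top of the loop, the first call would park the loop for one interval, later pushes would find a non-empty queue, and only a single loop would ever run.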
Extended version
The second part of the series goes in a direction I don’t quite agree with. Instead, I would rename the interval option to intervalMilliseconds, add a new option called requestsPerInterval, and update the logic accordingly. Here is the entire HTTPLimiter.ts:
```typescript
import { delay } from "../deps.ts";

interface IRequestEntity {
  init?: RequestInit;
  input: FetchInput;
  resolve: (value: Response) => void;
  reject: (value: Error) => void;
}

interface IHTTPLimiterOptions {
  intervalMilliseconds: number;
  requestsPerInterval: number;
}

type FetchInput = URL | Request | string;

export class HTTPLimiter {
  #options: IHTTPLimiterOptions = {
    intervalMilliseconds: 0,
    requestsPerInterval: 0,
  };

  #queue: IRequestEntity[] = [];

  constructor(options?: Partial<IHTTPLimiterOptions>) {
    if (options) {
      for (const [key, value] of Object.entries(options)) {
        // @ts-ignore
        if (key in this.#options) this.#options[key] = value;
      }
    }
  }

  async #loop() {
    while (true) {
      await delay(this.#options.intervalMilliseconds);
      const entities = this.#queue.splice(0, this.#options.requestsPerInterval);
      if (!entities.length) break;
      for (const entity of entities) {
        try {
          const response = await fetch(entity.input, entity.init);
          entity.resolve(response);
        } catch (error) {
          entity.reject(error);
        }
      }
    }
  }

  fetch(input: FetchInput, init?: RequestInit): Promise<Response> {
    const promise = new Promise<Response>((resolve, reject) => {
      this.#queue.push({
        input,
        init,
        resolve,
        reject,
      });
    });
    if (this.#queue.length === 1) this.#loop();
    return promise;
  }
}
```
And here is an example of how to test it:
```typescript
import { HTTPLimiter } from "./src/HTTPLimiter.ts";

const limiter = new HTTPLimiter({
  intervalMilliseconds: 1000,
  requestsPerInterval: 2,
});

for (let i = 10; i; i--) {
  limiter
    .fetch(`https://jsonplaceholder.typicode.com/todos/${i}`)
    .then((response) => response.json())
    .then((json) => console.log(json));
}
```
If we execute the code above, we notice that only two requests are fired every second until all 10 requests are done. We still have a problem, though: as pointed out in the series, our fetch requests are executed sequentially. To better visualize this, we can use the following example code:
```typescript
import { HTTPLimiter } from "./src/HTTPLimiter.ts";

const limiter = new HTTPLimiter({
  intervalMilliseconds: 1000,
  requestsPerInterval: 2,
});

let lastTime = new Date().getTime();
let totalTime = 0;

for (let i = 10; i; i--) {
  limiter
    .fetch(`https://jsonplaceholder.typicode.com/todos/${i}`)
    .then((response) => response.json())
    .then((json) => {
      const now = new Date().getTime();
      const delta = now - lastTime;
      totalTime += delta;
      console.log(totalTime, delta, json);
      lastTime = now;
    });
}
```
We can also mock the global fetch to control the time it takes to resolve a request:
```typescript
const fetch = (input, _) =>
  new Promise<Response>((resolve) =>
    setTimeout(() => {
      const response = {
        json: () => new Promise<Response>((res) => res({ input } as any)),
      } as Response;
      resolve(response);
    }, 300)
  );
```
The mock returns a Promise that resolves after 300 ms with an object that looks like a Response (it has a json method which itself returns a Promise). Running it should output something similar to:
```shell
$ deno run --allow-net example.ts
1306 1306 { input: "https://jsonplaceholder.typicode.com/todos/10" }
1607 301 { input: "https://jsonplaceholder.typicode.com/todos/9" }
2910 1303 { input: "https://jsonplaceholder.typicode.com/todos/8" }
3211 301 { input: "https://jsonplaceholder.typicode.com/todos/7" }
4514 1303 { input: "https://jsonplaceholder.typicode.com/todos/6" }
4816 302 { input: "https://jsonplaceholder.typicode.com/todos/5" }
6119 1303 { input: "https://jsonplaceholder.typicode.com/todos/4" }
6420 301 { input: "https://jsonplaceholder.typicode.com/todos/3" }
7724 1304 { input: "https://jsonplaceholder.typicode.com/todos/2" }
8026 302 { input: "https://jsonplaceholder.typicode.com/todos/1" }
```
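The measured total also matches a quick back-of-envelope estimate. The constants below are taken from the example setup (1-second interval, two requests per interval, ~300 ms per mocked request, ten requests); the model itself is my own simplification.

```typescript
// Rough timing model for the sequential loop: each cycle pays the
// interval delay plus its requests executed one after another.
const intervalMs = 1000;
const requestsPerInterval = 2;
const requestMs = 300;
const totalRequests = 10;

const cycles = totalRequests / requestsPerInterval; // 5

// Sequential: interval + N back-to-back requests, per cycle.
const sequentialMs = cycles * (intervalMs + requestsPerInterval * requestMs);
console.log(sequentialMs); // 8000 - matches the ~8026 ms observed above

// If the requests in a cycle ran concurrently instead, each cycle would
// cost only the interval, plus one request duration at the very end.
const parallelMs = cycles * intervalMs + requestMs;
console.log(parallelMs); // 5300 - the "around 5 seconds" we are aiming for
```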
That’s around 8 seconds instead of around 5. Not good. Let’s now fix this by making our loop run the requests in parallel:
```typescript
async #loop() {
  while (true) {
    await delay(this.#options.intervalMilliseconds);
    const entities = this.#queue.splice(0, this.#options.requestsPerInterval);
    if (!entities.length) break;
    for (const entity of entities) {
      fetch(entity.input, entity.init)
        .then(entity.resolve)
        .catch(entity.reject);
    }
  }
}
```
Indeed, this solution yields the desired results, as you can see below.
```shell
$ deno run --allow-net example.ts
1307 1307 { input: "https://jsonplaceholder.typicode.com/todos/10" }
1315 8 { input: "https://jsonplaceholder.typicode.com/todos/9" }
2308 993 { input: "https://jsonplaceholder.typicode.com/todos/8" }
2309 1 { input: "https://jsonplaceholder.typicode.com/todos/7" }
3310 1001 { input: "https://jsonplaceholder.typicode.com/todos/6" }
3311 1 { input: "https://jsonplaceholder.typicode.com/todos/5" }
4312 1001 { input: "https://jsonplaceholder.typicode.com/todos/4" }
4313 1 { input: "https://jsonplaceholder.typicode.com/todos/3" }
5314 1001 { input: "https://jsonplaceholder.typicode.com/todos/2" }
5315 1 { input: "https://jsonplaceholder.typicode.com/todos/1" }
```
Yay, we reached our goal of a fully functional API rate limiter with just a few simple changes to the original code!