Full Trust European Hosting

BLOG about Full Trust Hosting and Its Technology - Dedicated to European Windows Hosting Customers

Node.js Hosting - HostForLIFE :: Node.js API Rate Limiting Explained: Token Bucket & Leaky Bucket Techniques

August 25, 2025 09:25 by author Peter

Rate limiting guards against abuse and smooths out traffic spikes by restricting the number of requests a client may make in a given amount of time. Without it, a bug or a noisy neighbor could overload your server, inflate costs, and degrade the experience for everyone. In Node.js, rate limiting is usually implemented as Express middleware, and you choose an algorithm based on your traffic patterns.

Why Rate Limit? (Simple Words)

  • Fairness: Prevent one user from hogging resources.
  • Stability: Avoid sudden traffic spikes that crash servers.
  • Security: Mitigate brute‑force login attempts and scraping.
  • Cost Control: Keep bandwidth and compute costs predictable.

Core Ideas You’ll Use

  • Identity (the key): How you group requests (e.g., by IP, API key, user ID).
  • Allowance: How many requests are allowed per window or per second.
  • Storage: Where you remember counts/tokens (in‑memory for a single instance; Redis for a cluster).
  • Backoff/Signals: How the client should slow down (HTTP 429 + headers like Retry-After).

Algorithm Overview (When to Use What)

  • Fixed Window Counter: Simple. “100 requests every 60s.” Can burst at window edges.
  • Sliding Window (Log or Rolling): Smoother than fixed. More accurate but heavier.
  • Token Bucket: Allows short bursts but enforces an average rate. Great for user‑facing APIs.
  • Leaky Bucket (Queue/Drip): Smooth, constant outflow; good when you must strictly pace downstream systems.


Baseline: Fixed Window Counter (In‑Memory)
Good as a learning step or for single-process dev environments. Note the edge-case weakness: a client can send 100 requests at the very end of one window and 100 more at the start of the next, briefly doubling the intended rate.
// middleware/fixedWindowLimiter.js
const WINDOW_MS = 60_000; // 60 seconds
const MAX_REQUESTS = 100; // per window per key

const store = new Map(); // key -> { count, windowStart }

function getKey(req) {
  return req.ip; // or req.headers['x-api-key'], req.user.id, etc.
}

module.exports = function fixedWindowLimiter(req, res, next) {
  const key = getKey(req);
  const now = Date.now();
  const entry = store.get(key) || { count: 0, windowStart: now };

  if (now - entry.windowStart >= WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }

  entry.count += 1;
  store.set(key, entry);

  const remaining = Math.max(0, MAX_REQUESTS - entry.count);
  res.setHeader('X-RateLimit-Limit', MAX_REQUESTS);
  res.setHeader('X-RateLimit-Remaining', remaining);
  res.setHeader('X-RateLimit-Reset', Math.ceil((entry.windowStart + WINDOW_MS) / 1000));

  if (entry.count > MAX_REQUESTS) {
    res.setHeader('Retry-After', Math.ceil((entry.windowStart + WINDOW_MS - now) / 1000));
    return res.status(429).json({ error: 'Too Many Requests' });
  }

  next();
};

Token Bucket (Burst‑friendly Average Rate)
How it works: You have a bucket that slowly refills with tokens (e.g., 5 tokens/second) up to a maximum capacity (the burst). Each request consumes a token; if no tokens remain, the request is rejected.
// middleware/tokenBucketLimiter.js
const RATE_PER_SEC = 5;      // refill speed
const BURST_CAPACITY = 20;   // max tokens

const buckets = new Map();   // key -> { tokens, lastRefill }

function getKey(req) { return req.ip; }

module.exports = function tokenBucketLimiter(req, res, next) {
  const key = getKey(req);
  const now = Date.now();
  let bucket = buckets.get(key);
  if (!bucket) {
    bucket = { tokens: BURST_CAPACITY, lastRefill: now };
    buckets.set(key, bucket);
  }

  // Refill based on elapsed time
  const elapsedSec = (now - bucket.lastRefill) / 1000;
  bucket.tokens = Math.min(BURST_CAPACITY, bucket.tokens + elapsedSec * RATE_PER_SEC);
  bucket.lastRefill = now;

  if (bucket.tokens >= 1) {
    bucket.tokens -= 1; // consume for this request
    res.setHeader('X-RateLimit-Policy', `${RATE_PER_SEC}/sec; burst=${BURST_CAPACITY}`);
    res.setHeader('X-RateLimit-Tokens', Math.floor(bucket.tokens));
    return next();
  }

  const needed = 1 - bucket.tokens;
  const waitSeconds = needed / RATE_PER_SEC;
  res.setHeader('Retry-After', Math.ceil(waitSeconds));
  return res.status(429).json({ error: 'Too Many Requests' });
};

When to use: You want to permit quick bursts (nice UX) but keep a sustained average.

Leaky Bucket (Constant Outflow) 

How it works: Requests enter a queue (the bucket). They “leak” at a fixed rate. If the bucket is full, you reject or drop new requests.
// middleware/leakyBucketLimiter.js
const LEAK_RATE_PER_SEC = 5;    // how many requests per second can pass
const BUCKET_CAPACITY = 50;     // max queued requests

const buckets = new Map();      // key -> { queue, lastLeak }

function getKey(req) { return req.ip; }

module.exports = function leakyBucketLimiter(req, res, next) {
  const key = getKey(req);
  const now = Date.now();
  let bucket = buckets.get(key);
  if (!bucket) {
    bucket = { queue: 0, lastLeak: now };
    buckets.set(key, bucket);
  }

  // Leak based on elapsed time
  const elapsedSec = (now - bucket.lastLeak) / 1000;
  const leaked = Math.floor(elapsedSec * LEAK_RATE_PER_SEC);
  if (leaked > 0) {
    bucket.queue = Math.max(0, bucket.queue - leaked);
    bucket.lastLeak = now;
  }

  if (bucket.queue >= BUCKET_CAPACITY) {
    res.setHeader('Retry-After', 1);
    return res.status(429).json({ error: 'Too Many Requests (bucket full)' });
  }

  bucket.queue += 1; // enqueue this request
  // In practice you would defer processing (see the sketch below); for this middleware demo we let it pass immediately
  next();
};

When to use: You must strictly pace downstream dependencies (e.g., payment gateway rate caps).
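
The demo above admits requests immediately; a true leaky bucket would also pace them. One way to defer processing is sketched below, reusing the same in-memory buckets map, LEAK_RATE_PER_SEC, and BUCKET_CAPACITY from the middleware above; the delay math and names are illustrative, not the only design.
// Sketch: pacing admitted requests instead of passing them straight through
module.exports = function pacedLeakyBucketLimiter(req, res, next) {
  const key = req.ip;
  const now = Date.now();
  let bucket = buckets.get(key) || { queue: 0, lastLeak: now };
  buckets.set(key, bucket);

  // Drain the queue at the fixed leak rate, exactly as before
  const leaked = Math.floor(((now - bucket.lastLeak) / 1000) * LEAK_RATE_PER_SEC);
  if (leaked > 0) {
    bucket.queue = Math.max(0, bucket.queue - leaked);
    bucket.lastLeak = now;
  }

  if (bucket.queue >= BUCKET_CAPACITY) {
    res.setHeader('Retry-After', 1);
    return res.status(429).json({ error: 'Too Many Requests (bucket full)' });
  }

  // This request's position in the queue decides how long it waits before "dripping" out
  const position = bucket.queue;
  bucket.queue += 1;
  const delayMs = Math.ceil((position / LEAK_RATE_PER_SEC) * 1000);
  setTimeout(next, delayMs); // hold the request open until its slot arrives
};

Holding requests open ties up memory and sockets, so this pattern is usually reserved for small, well-bounded queues.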

Wiring It Up in Express
// server.js
const express = require('express');
const fixedWindowLimiter = require('./middleware/fixedWindowLimiter');
const tokenBucketLimiter = require('./middleware/tokenBucketLimiter');
// const leakyBucketLimiter = require('./middleware/leakyBucketLimiter');

const app = express();

// Example: apply global limiter
app.use(tokenBucketLimiter);

// Or apply per‑route
app.get('/public', fixedWindowLimiter, (req, res) => res.send('ok'));
app.get('/payments', /* leakyBucketLimiter, */ (req, res) => res.send('paid'));

app.listen(3000, () => console.log('API on :3000'));


Production‑Ready Storage with Redis

In clustered or serverless environments, in‑memory maps don’t work across instances. Use a shared store like Redis to coordinate limits.
// middleware/redisTokenBucket.js
const IORedis = require('ioredis');
const redis = new IORedis(process.env.REDIS_URL);

const RATE_PER_SEC = 10;
const BURST_CAPACITY = 40;

function keyFor(clientKey) { return `rl:tb:${clientKey}`; }

module.exports = async function redisTokenBucket(req, res, next) {
  try {
    const clientKey = req.ip; // replace with API key or user id in real apps
    const now = Date.now();
    const k = keyFor(clientKey);

    // Read bucket state
    const data = await redis.hmget(k, 'tokens', 'lastRefill');
    let tokens = parseFloat(data[0]);
    let lastRefill = parseInt(data[1], 10);

    if (Number.isNaN(tokens)) tokens = BURST_CAPACITY;
    if (Number.isNaN(lastRefill)) lastRefill = now;

    const elapsedSec = (now - lastRefill) / 1000;
    tokens = Math.min(BURST_CAPACITY, tokens + elapsedSec * RATE_PER_SEC);

    if (tokens >= 1) {
      tokens -= 1;
      await redis.hset(k, 'tokens', tokens, 'lastRefill', now); // multi-field hset replaces the deprecated hmset
      await redis.expire(k, Math.ceil(BURST_CAPACITY / RATE_PER_SEC) + 60);
      res.setHeader('X-RateLimit-Policy', `${RATE_PER_SEC}/sec; burst=${BURST_CAPACITY}`);
      res.setHeader('X-RateLimit-Tokens', Math.floor(tokens));
      return next();
    }

    const needed = 1 - tokens;
    const waitSeconds = needed / RATE_PER_SEC;
    res.setHeader('Retry-After', Math.ceil(waitSeconds));
    return res.status(429).json({ error: 'Too Many Requests' });
  } catch (err) {
    // Fail‑open or fail‑closed? Choose policy. Here we fail‑open so API stays usable.
    console.error('Rate limiter error', err);
    next();
  }
};
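
One caveat with the version above: the read (hmget) and write (hset) are separate round trips, so two instances can read the same bucket state and both consume the last token. A common remedy, sketched here with an illustrative script and key names, is to move refill-and-consume into a single Lua script so Redis executes it atomically.
// middleware/redisTokenBucketAtomic.js (a sketch; script and names are illustrative)
const IORedis = require('ioredis');
const redis = new IORedis(process.env.REDIS_URL);

const RATE_PER_SEC = 10;
const BURST_CAPACITY = 40;

// Refill and consume in one atomic step inside Redis
const TOKEN_BUCKET_LUA = `
local tokens = tonumber(redis.call('HGET', KEYS[1], 'tokens') or ARGV[2])
local last   = tonumber(redis.call('HGET', KEYS[1], 'lastRefill') or ARGV[3])
local rate     = tonumber(ARGV[1])
local capacity = tonumber(ARGV[2])
local now      = tonumber(ARGV[3])
tokens = math.min(capacity, tokens + ((now - last) / 1000) * rate)
local allowed = 0
if tokens >= 1 then
  tokens = tokens - 1
  allowed = 1
end
redis.call('HSET', KEYS[1], 'tokens', tostring(tokens), 'lastRefill', now)
redis.call('EXPIRE', KEYS[1], ARGV[4])
return allowed
`;

module.exports = async function redisTokenBucketAtomic(req, res, next) {
  try {
    const ttlSec = Math.ceil(BURST_CAPACITY / RATE_PER_SEC) + 60;
    const allowed = await redis.eval(
      TOKEN_BUCKET_LUA, 1, `rl:tb:${req.ip}`,
      RATE_PER_SEC, BURST_CAPACITY, Date.now(), ttlSec
    );
    if (allowed === 1) return next();
    res.setHeader('Retry-After', 1);
    return res.status(429).json({ error: 'Too Many Requests' });
  } catch (err) {
    console.error('Rate limiter error', err); // fail-open, matching the policy above
    next();
  }
};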


Testing Your Limiter (Quick Ideas)

  • Unit tests: Simulate timestamps and assert counters/tokens (see the sketch after this list).
  • Load tests: Use autocannon or k6 to verify 429 rates, latencies, and headers.
  • Chaos tests: Kill Redis or introduce latency—does your API fail open or closed?
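
Here is a minimal sketch of the "simulate timestamps" idea, using the built-in node:test runner (Node 18+) and a stubbed Date.now. It assumes the fixedWindowLimiter module from above; the fake response object is a stand-in for Express's res.
// test/fixedWindowLimiter.test.js
const test = require('node:test');
const assert = require('node:assert');
const limiter = require('../middleware/fixedWindowLimiter');

// A minimal stand-in for Express's response object
function fakeRes() {
  return {
    headers: {},
    statusCode: 200,
    setHeader(name, value) { this.headers[name] = value; },
    status(code) { this.statusCode = code; return this; },
    json() { return this; },
  };
}

test('the 101st request in a window gets a 429', () => {
  const realNow = Date.now;
  Date.now = () => 1_000_000; // freeze the clock inside one 60s window
  try {
    const req = { ip: '1.2.3.4' };
    for (let i = 0; i < 100; i++) limiter(req, fakeRes(), () => {});

    const res = fakeRes();
    let nextCalled = false;
    limiter(req, res, () => { nextCalled = true; });

    assert.strictEqual(res.statusCode, 429);
    assert.strictEqual(nextCalled, false);
  } finally {
    Date.now = realNow; // always restore the real clock
  }
});

Run it with node --test.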


Helpful HTTP Headers
Return clear metadata so clients can self‑throttle:

  • X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset
  • Retry-After on 429
  • (Optional, standardized) RateLimit-Limit, RateLimit-Remaining, RateLimit-Reset
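
If you want to emit the standardized names alongside the legacy X- prefixed ones, a small helper keeps them consistent. This is an illustrative sketch (the helper name is ours); note that the draft IETF fields express reset as seconds from now, not as an epoch timestamp.
// helper: set both legacy and draft-standard rate-limit headers
function setRateLimitHeaders(res, { limit, remaining, resetEpochSec }) {
  res.setHeader('X-RateLimit-Limit', limit);
  res.setHeader('X-RateLimit-Remaining', remaining);
  res.setHeader('X-RateLimit-Reset', resetEpochSec); // epoch seconds
  res.setHeader('RateLimit-Limit', limit);
  res.setHeader('RateLimit-Remaining', remaining);
  res.setHeader('RateLimit-Reset', // delta seconds per the draft standard
    Math.max(0, resetEpochSec - Math.floor(Date.now() / 1000)));
}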

Best Practices & Tips

  • Choose the key wisely: Prefer API key/user ID over raw IP (NATs/proxies share IPs).
  • Protect sensitive routes more: e.g., logins: 5/min per user + per IP (see the sketch after this list).
  • Combine with caching & auth: Rate limit after auth to identify the true principal.
  • Use Redis for scale: In‑memory only works on a single instance.
  • Expose headers & docs: Tell clients how to back off.
  • Observe: Log 429s, export metrics (Prometheus) and set alerts.
  • Legal & UX: Don’t silently drop; return 429 with guidance.

Choosing an Algorithm (Cheat Sheet)

  • Public API with bursts OK: Token Bucket
  • Strict pacing to external vendor: Leaky Bucket
  • Simple per‑minute cap: Fixed/Sliding Window
  • High accuracy under spiky traffic: Sliding Window (rolling)

Summary
Rate limiting is essential for reliable Node.js APIs. Start by defining who you limit (key), how much (policy), and where you store state (Redis for multi‑instance). Pick an algorithm that matches your needs: fixed/sliding windows for simplicity, a token bucket for burst‑friendly average rates, or a leaky bucket for steady pacing. Implement as Express middleware, return helpful headers, test under load, and monitor 429s. With these patterns, your API stays fast, fair, and resilient—even during traffic spikes.



Node.js Hosting - HostForLIFE :: What Are Node.js's Typical Use Cases?

August 20, 2025 08:11 by author Peter

Why Is Node.js Popular?
Node.js is fast, event-driven, and non-blocking, which means it can handle many tasks at the same time without slowing down. This makes it a popular choice for developers who need scalable and efficient applications.
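
Example (an illustrative snippet of that non-blocking behavior; it reads this script's own file so it is self-contained):
const fs = require('fs');

// The read is handed off to the background; Node keeps executing
fs.readFile(__filename, 'utf8', () => {
  console.log('2. file read finished (callback runs later)');
});

console.log('1. this line runs first; nothing was blocked');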

Building APIs
Node.js is commonly used to build RESTful or GraphQL APIs. APIs allow different applications or services to communicate with each other.

Example
const express = require('express');
const app = express();
app.use(express.json());

app.get('/users', (req, res) => {
  res.json([{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }]);
});

app.listen(3000, () => {
  console.log('API server running on port 3000');
});


Node.js handles multiple API requests at the same time, making it suitable for backend services.

Real-Time Applications
Node.js is perfect for real-time apps such as chat applications, online games, or collaborative tools because it supports fast, two-way communication using WebSockets.

Example
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', ws => {
  ws.send('Welcome!');
  ws.on('message', message => {
    console.log(`Received: ${message}`);
  });
});


WebSockets allow the server and client to communicate instantly, making real-time interactions possible.

Streaming Applications
Node.js is ideal for streaming audio, video, or large files efficiently because it processes data in chunks.

Example
const fs = require('fs');
const http = require('http');

http.createServer((req, res) => {
  const stream = fs.createReadStream('large-video.mp4');
  stream.pipe(res);
}).listen(3000, () => {
  console.log('Streaming server running on port 3000');
});


Streams send data in small pieces, preventing memory overload and improving performance.

Microservices

Node.js works well for microservices, where an application is divided into small, independent services that handle specific tasks.

Example
const express = require('express');
const app = express();
app.use(express.json());

app.post('/orders', (req, res) => {
  const order = req.body;
  res.json({ message: 'Order created', order });
});

app.listen(4000, () => {
  console.log('Order microservice running on port 4000');
});

Each microservice handles a specific domain, communicates via APIs, and can be scaled independently.

Summary
Node.js is widely used for APIs, real-time applications, streaming services, and microservices. Its event-driven, non-blocking architecture allows developers to handle multiple tasks efficiently, making it perfect for scalable and responsive applications. Understanding these use cases helps developers choose Node.js for projects requiring speed, performance, and easy scalability.

HostForLIFE.eu Node.js Hosting
HostForLIFE.eu is a European Windows Hosting Provider focused exclusively on the Windows platform. We deliver on-demand hosting solutions including Shared Hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes. We have customers from around the globe, spread across every continent. We serve the hosting needs of the business and professional, government and nonprofit, entertainment, and personal use market segments.



Node.js Hosting - HostForLIFE :: What is the Event Loop in Node.js, and How Does It Work?

August 14, 2025 07:41 by author Peter

As developers soon learn, the Event Loop is the secret behind Node.js's ability to manage thousands of concurrent operations despite being single-threaded. Even with a single main thread, this model ensures Node.js executes code efficiently without one task blocking the others.

The Reason for the Event Loop
JavaScript was originally created to run in browsers, handling keystrokes and clicks on interactive web pages. There, the event loop guarantees fluid interactions without freezing the user interface. Node.js brought JavaScript to the server side, where it handles I/O tasks such as sending network requests, reading files, and querying databases. Thanks to the Event Loop, these can be managed without halting the execution of other code.

How the Event Loop Works in Node.js
The Event Loop is managed by libuv, a C library that provides asynchronous I/O. Here’s the step-by-step process:

  • Call Stack Execution: Node.js runs your synchronous code first.
  • Delegating Tasks: When asynchronous functions like setTimeout or fs.readFile are called, they are handed over to background APIs or the thread pool.
  • Callback Queue: Once the background task is done, its callback is added to the queue.
  • Event Loop Processing: The event loop checks if the call stack is empty and then pushes the next callback from the queue to be executed.
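
These steps are visible in a few lines (an illustrative snippet):
const fs = require('fs');

console.log('1. synchronous code runs on the call stack');

// fs.readFile is delegated to libuv; its callback is queued when the read finishes
fs.readFile(__filename, () => {
  console.log('3. callback runs once the call stack is empty');
});

console.log('2. the main thread was never blocked');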

Event Loop Phases
The Node.js Event Loop runs in phases:

  • Timers: Executes callbacks from setTimeout and setInterval.
  • Pending Callbacks: Executes callbacks for system operations.
  • Idle, Prepare: Internal use only.
  • Poll: Retrieves new I/O events; executes I/O callbacks.
  • Check: Executes setImmediate callbacks.
  • Close Callbacks: Executes close events (e.g., socket.on('close')).


Microtasks (like process.nextTick() and resolved promises) run between these phases, before moving to the next phase.
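
The phase order explains a classic quirk: inside an I/O callback you are in the poll phase, so a setImmediate callback (check phase) always fires before a setTimeout(..., 0) callback, which must wait for the timers phase of the next iteration. A small illustration:
const fs = require('fs');

fs.readFile(__filename, () => {
  // We are inside the poll phase here
  setTimeout(() => console.log('setTimeout'), 0);  // runs in the next timers phase
  setImmediate(() => console.log('setImmediate')); // runs in the upcoming check phase, so it logs first
});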

Example: Event Loop in Action
Example:
console.log("Start");

setTimeout(() => {
  console.log("Timeout callback");
}, 0);

Promise.resolve().then(() => {
  console.log("Promise callback");
});

console.log("End");


Output:

  • Start
  • End
  • Promise callback
  • Timeout callback

Explanation:
Promise callback runs before Timeout callback because promises are microtasks, which have higher priority than macrotasks like setTimeout.

Understanding Microtasks vs. Macrotasks
  • Microtasks: process.nextTick(), Promise.then(). Run immediately after the current operation.
  • Macrotasks: setTimeout(), setImmediate(), I/O callbacks. Run in the normal event loop phases.
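
Within the microtask tier there is a further ordering: the process.nextTick queue drains before the promise (microtask) queue. For example:
Promise.resolve().then(() => console.log('promise'));
process.nextTick(() => console.log('nextTick'));
console.log('sync');

// Output: sync, nextTick, promise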

Key Points to Remember
  • Node.js is single-threaded for JavaScript execution.
  • The Event Loop allows asynchronous, non-blocking operations.
  • Microtasks always run before the next macrotask.
  • libuv handles background tasks and the thread pool.

Summary
The Event Loop is the heart of Node.js's asynchronous programming model. It ensures that even though JavaScript runs on a single thread, Node.js can handle thousands of concurrent tasks without blocking. By delegating I/O operations to the background and using a queue system for callbacks, it keeps applications fast and responsive. Understanding the Event Loop is essential for writing efficient Node.js applications.


