Full Trust European Hosting

BLOG about Full Trust Hosting and Its Technology - Dedicated to European Windows Hosting Customer

AngularJS Hosting Europe - HostForLIFE :: How to Use Reactive Forms to Manage Form Validation in Angular?

clock October 8, 2025 08:52 by author Peter

Create a Basic Reactive Form
Start by importing ReactiveFormsModule in your Angular module:

// app.module.ts
import { ReactiveFormsModule } from '@angular/forms';

@NgModule({
  imports: [ReactiveFormsModule, /* other imports */],
})
export class AppModule {}


Then, build a form in your component using FormBuilder:
// user-form.component.ts
import { Component } from '@angular/core';
import { FormBuilder, FormGroup, Validators } from '@angular/forms';

@Component({ selector: 'app-user-form', templateUrl: './user-form.component.html' })
export class UserFormComponent {
  userForm: FormGroup;

  constructor(private fb: FormBuilder) {
    this.userForm = this.fb.group({
      name: ['', [Validators.required, Validators.minLength(2)]],
      email: ['', [Validators.required, Validators.email]],
      password: ['', [Validators.required, Validators.minLength(6)]],
    });
  }
}


In the template, bind the form and controls:
<!-- user-form.component.html -->
<form [formGroup]="userForm" (ngSubmit)="onSubmit()">
  <label>
    Name
    <input formControlName="name" />
  </label>
  <div *ngIf="userForm.get('name')?.touched && userForm.get('name')?.invalid">
    <small *ngIf="userForm.get('name')?.errors?.required">Name is required.</small>
    <small *ngIf="userForm.get('name')?.errors?.minlength">Name must be at least 2 characters.</small>
  </div>

  <label>
    Email
    <input formControlName="email" />
  </label>
  <div *ngIf="userForm.get('email')?.touched && userForm.get('email')?.invalid">
    <small *ngIf="userForm.get('email')?.errors?.required">Email is required.</small>
    <small *ngIf="userForm.get('email')?.errors?.email">Enter a valid email.</small>
  </div>

  <button type="submit" [disabled]="userForm.invalid">Submit</button>
</form>


Built-in Validators
Angular provides several built-in validators:

  • Validators.required — field must have a value.
  • Validators.email — value must be a valid email.
  • Validators.min / Validators.max — numeric limits.
  • Validators.minLength / Validators.maxLength — string length limits.
  • Validators.pattern — regex-based validation.

You can combine validators in an array for a control, as shown in the example above.

Custom Synchronous Validators

For rules that don’t exist out of the box (e.g., username format), write a custom validator function that returns either null (valid) or an error object:
import { AbstractControl, ValidationErrors } from '@angular/forms';

export function usernameValidator(control: AbstractControl): ValidationErrors | null {
  const value = control.value as string;
  if (!value) return null;
  const valid = /^[a-z0-9_]+$/.test(value);
  return valid ? null : { invalidUsername: true };
}

// usage in form builder
this.userForm = this.fb.group({
  username: ['', [Validators.required, usernameValidator]],
});

Show helpful messages in the template when invalidUsername exists.
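
For example, assuming the control is registered as username in the form group above:
<div *ngIf="userForm.get('username')?.touched && userForm.get('username')?.invalid">
  <small *ngIf="userForm.get('username')?.errors?.invalidUsername">
    Username may contain only lowercase letters, numbers, and underscores.
  </small>
</div>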

Cross-Field Validation (Password Match)

Some validations depend on multiple controls. Use a validator on the FormGroup:
function passwordMatchValidator(group: AbstractControl): ValidationErrors | null {
  const password = group.get('password')?.value;
  const confirm = group.get('confirmPassword')?.value;
  return password === confirm ? null : { passwordsMismatch: true };
}

this.userForm = this.fb.group({
  password: ['', Validators.required],
  confirmPassword: ['', Validators.required],
}, { validators: passwordMatchValidator });

In the template, show the group-level error:
<div *ngIf="userForm.errors?.passwordsMismatch && userForm.touched">
  <small>Passwords do not match.</small>
</div>


Async Validators (e.g., Check Email Uniqueness)

Async validators are useful for server checks like "is this email taken?". They return an Observable or Promise.
import { AbstractControl } from '@angular/forms';
import { map } from 'rxjs/operators';
import { of } from 'rxjs';

function uniqueEmailValidator(apiService: ApiService) {
  return (control: AbstractControl) => {
    if (!control.value) return of(null);
    return apiService.checkEmail(control.value).pipe(
      map(isTaken => (isTaken ? { emailTaken: true } : null))
    );
  };
}

// in component
this.userForm = this.fb.group({
  email: ['', {
    validators: [Validators.required, Validators.email],
    asyncValidators: [uniqueEmailValidator(this.apiService)],
    updateOn: 'blur' // run async validator on blur to reduce calls
  }]
});

Use updateOn: 'blur' to prevent calling the server on every keystroke.

Displaying Validation State and UX Tips

  • Show errors only after user interaction — use touched or dirty to avoid overwhelming users with errors on load.
  • Disable submit while invalid — [disabled]="userForm.invalid" prevents sending bad data.
  • Focus the first invalid control — on submit, set focus to the first invalid field for better UX.
  • Use updateOn: 'blur' or debounce — reduces validation frequency and server calls.

Example to focus the first invalid control (assumes ElementRef is injected in the constructor, e.g. constructor(private el: ElementRef) {}):
onSubmit() {
  if (this.userForm.invalid) {
    // target a control rather than the <form> element, which also receives the ng-invalid class
    const invalidControl = this.el.nativeElement.querySelector('.ng-invalid[formControlName]');
    invalidControl?.focus();
    return;
  }
  // process valid form
}

Reacting to Value Changes and Live Validation
You can subscribe to valueChanges for any control or the whole form to implement live validation messages, dynamic rules, or enable/disable fields.
this.userForm.get('country')?.valueChanges.subscribe(country => {
  if (country === 'US') {
    this.userForm.get('state')?.setValidators([Validators.required]);
  } else {
    this.userForm.get('state')?.clearValidators();
  }
  this.userForm.get('state')?.updateValueAndValidity();
});

Remember to unsubscribe in ngOnDestroy or use the takeUntil pattern.
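
A minimal sketch of that cleanup, applied to the country subscription above (Subject comes from rxjs and takeUntil from rxjs/operators; the component decorator and other members are omitted):
import { Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';

export class UserFormComponent implements OnInit, OnDestroy {
  private destroy$ = new Subject<void>();

  ngOnInit(): void {
    this.userForm.get('country')?.valueChanges
      .pipe(takeUntil(this.destroy$))
      .subscribe(country => {
        // same conditional validator logic as above
      });
  }

  ngOnDestroy(): void {
    // emit and complete so every takeUntil(this.destroy$) subscription ends here
    this.destroy$.next();
    this.destroy$.complete();
  }
}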

Integrating with Backend Validation
Server-side validation is the final source of truth. When the backend returns validation errors, map them to form controls so users can correct them:
// after API error response
handleServerErrors(errors: Record<string, string[]>) {
  Object.keys(errors).forEach(field => {
    const control = this.userForm.get(field);
    if (control) {
      control.setErrors({ server: errors[field][0] });
    }
  });
}


Show control.errors.server messages in the template.
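
For example, reusing the email control (the server error value is the message string set in handleServerErrors):
<small *ngIf="userForm.get('email')?.errors?.server">
  {{ userForm.get('email')?.errors?.server }}
</small>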

Testing Form Validation
Unit test reactive forms by creating the component, setting values, and asserting validity:
it('should invalidate empty email', () => {
  component.userForm.get('email')?.setValue('');
  expect(component.userForm.get('email')?.valid).toBeFalse();
});


For async validators, use fakeAsync and tick() to simulate time.
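
A sketch of such a test, assuming the async uniqueEmailValidator from earlier is attached to the email control and ApiService.checkEmail is stubbed to return of(true) so the address reads as taken (fakeAsync and tick come from @angular/core/testing):
it('should flag a taken email', fakeAsync(() => {
  const email = component.userForm.get('email');
  email?.setValue('taken@example.com'); // setValue runs validators immediately
  tick(); // flush the async validator's observable

  expect(email?.hasError('emailTaken')).toBeTrue();
}));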

Accessibility (A11y) Considerations

  • Always link error messages to inputs with aria-describedby.
  • Use clear error language and avoid technical terms.
  • Ensure focus management sends keyboard users to errors on submit.

Example
<input id="email" formControlName="email" aria-describedby="emailError" />
<div id="emailError" *ngIf="userForm.get('email')?.invalid">
  <small>Enter a valid email address.</small>
</div>

Performance Tips and Best Practices

  • Use OnPush change detection where appropriate to reduce re-renders.
  • Avoid heavy computation inside valueChanges subscribers.
  • Use debounceTime for expensive validations or server calls:

this.userForm.get('search')?.valueChanges.pipe(debounceTime(300)).subscribe(...);

Clean up subscriptions with takeUntil or async pipe.

Summary
Angular's Reactive Forms provide an effective, testable way to manage form validation. Use the built-in validators for common rules, write your own sync and async validators for special cases, and use group-level validators for cross-field checks such as password confirmation. Improve the user experience by showing errors only after interaction, focusing the first invalid control, and integrating server-side errors with setErrors. Test your validations, keep accessibility in mind, and apply performance techniques such as debouncing and OnPush change detection.



Europe mySQL Hosting - HostForLIFEASP.NET :: What happens if you restart the database service provided by WAMP, MySQL?

clock October 6, 2025 08:59 by author Peter

What happens when you restart MySQL (WAMP’s database service)?

  • Active connections are dropped → any application connected to MySQL will lose its session.
  • Running queries/transactions are aborted → if a query was in the middle of writing, MySQL will roll back that transaction (thanks to transaction logs in InnoDB).
  • Tables/data themselves are safe → MySQL ensures durability, so committed data is not lost.
  • Non-transactional tables (MyISAM) are riskier → if you still have MyISAM tables, they can become corrupted if a write was in progress when the service stopped.

Risks of Restarting Every 3 Hours

  • Apps/websites using the DB may fail while the service is down.
  • Any batch jobs, cron jobs, or API calls during restart will error out.
  • If you restart during heavy writes, performance may be affected briefly.

Tables themselves won’t get corrupted in InnoDB, but MyISAM tables can.

Safer Alternatives
Only restart if the service fails

Instead of restarting every 3 hours, configure Task Scheduler to start the service if it’s stopped (health check).

Example batch
sc query wampmysqld64 | find "RUNNING" >nul
if %errorlevel%==1 net start wampmysqld64
sc query wampapache64 | find "RUNNING" >nul
if %errorlevel%==1 net start wampapache64


This way it only starts services if they’re not running.

Schedule a restart during off-peak hours

e.g. once daily at 3 AM, when traffic is minimal.
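
One way to do this on Windows is a small batch file registered with Task Scheduler. This sketch assumes the default WAMP service name and a script saved at C:\scripts\restart-mysql.bat; adjust both to your setup:
REM C:\scripts\restart-mysql.bat - restart the WAMP MySQL service
net stop wampmysqld64
net start wampmysqld64

REM register it once from an elevated prompt to run daily at 3 AM
schtasks /Create /TN "Restart WAMP MySQL" /TR "C:\scripts\restart-mysql.bat" /SC DAILY /ST 03:00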

Use MySQL config for stability
Instead of forced restarts, tune MySQL settings (buffer pool size, connection limits, and the query cache on older versions) so the service doesn’t need frequent restarting.

Answer to your question
No, restarting won’t corrupt data in InnoDB tables.

Yes, it can cause temporary downtime and aborted queries, so apps may face errors.

If you use MyISAM tables, there is a small risk of corruption.



Node.js Hosting - HostForLIFE :: Understanding package.json and package-lock.json in Node.js

clock October 3, 2025 08:48 by author Peter

1. What is package.json?
package.json is the heart of any Node.js project. It declares your project’s dependencies and provides metadata about your application.


Key Features

  • Lists dependencies and devDependencies.
  • Specifies version ranges using semantic versioning (^, ~).
  • Includes project metadata like name, version, scripts, author, and license.
  • Human-readable and editable.

{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^4.17.21"
  },
  "devDependencies": {
    "jest": "~29.0.0"
  },
  "scripts": {
    "start": "node index.js"
  }
}


Key Point: package.json specifies which versions your project is compatible with, not the exact installed versions. For example, ^4.17.21 allows any 4.x release at or above 4.17.21, while ~29.0.0 allows only 29.0.x patch updates.

2. What is package-lock.json?
package-lock.json is automatically generated by npm to lock the exact versions of every installed package, including nested dependencies.

Key Features

  • Records the exact version installed for each package.
  • Contains resolved URLs and integrity hashes to ensure packages are not tampered with.
  • Records nested dependencies (dependencies of dependencies).
  • Not intended for manual editing.

{
  "name": "my-app",
  "lockfileVersion": 3,
  "dependencies": {
    "lodash": {
      "version": "4.17.21",
      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
      "integrity": "sha512-xyz"
    }
  }
}


Key Point: package-lock.json ensures that every environment installs exactly the same versions, even if package.json allows ranges.

3. Main Differences Between package.json and package-lock.json

Feature                | package.json                            | package-lock.json
Purpose                | Declares dependencies and project info  | Locks exact versions of installed packages
Edited by              | Developer                               | npm (automatically)
Version                | Can specify ranges (^, ~)               | Exact versions installed
Nested dependencies    | Not recorded                            | Fully recorded
Effect on installation | npm uses ranges to resolve versions     | Ensures consistent installs
Human-readable?        | Yes                                     | Not really

4. How npm install Works

The npm install command is used to install packages based on package.json and package-lock.json.

# Install all dependencies listed in package.json
npm install

# Install a specific package and save it to dependencies
npm install lodash

# Install a package as a dev dependency
npm install --save-dev jest

# Install a package globally
npm install -g typescript


Process

  • Reads package.json for dependencies.
  • Resolves the latest versions allowed by version ranges (if package-lock.json doesn’t exist).
  • Downloads packages to node_modules.
  • Updates or creates package-lock.json with exact versions.


5. What Happens If You Delete package-lock.json?

If package-lock.json is deleted and you run:

npm install

  • npm resolves the latest versions that match the ranges in package.json.
  • It downloads new packages and regenerates package-lock.json.
  • This may result in different versions than the previous install, which could break your code.

Safe scenarios for deleting:

  • Intentionally updating packages.
  • Starting a fresh project or refreshing dependencies.

Why are both files important?

  • package.json defines what your project needs.
  • package-lock.json ensures everyone gets the exact same package versions for consistent development and production environments.
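
For reproducible installs (for example in CI pipelines), npm ci is worth knowing: it installs exactly what package-lock.json records, removes any existing node_modules first, and fails if package.json and the lock file disagree.
# install strictly from package-lock.json
npm ci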

Conclusion
package.json = “What I want” (dependency ranges and project info)
package-lock.json = “Exactly what I got” (locked versions)


Deleting package-lock.json can lead to installing newer package versions, which may cause unexpected issues. Always commit package-lock.json to version control for consistency.



Node.js Hosting - HostForLIFE :: How to resolve a "Cannot find module" error using Node.js?

clock September 24, 2025 07:25 by author Peter

The "Cannot find module" error in Node.js occurs when the runtime cannot locate a required dependency. It is usually caused by incorrect paths, missing installations, or configuration problems. This tutorial explains the root causes, fixes, and best practices for resolving the error.

Conceptual Background
Node.js loads modules using the require or import syntax. The runtime searches in the following order:

  • Core modules (e.g., fs, path)
  • node_modules folder in the current directory
  • Parent directories up to the system root

When the requested module cannot be located in this resolution path, Node.js throws:

Error: Cannot find module 'MODULE_NAME'
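
A quick way to see how resolution plays out for a given module is require.resolve, which prints the resolved file path or throws this same error. A minimal diagnostic sketch:
// check-module.js
try {
  // prints the absolute path of the resolved entry file, e.g. .../node_modules/express/index.js
  console.log(require.resolve('express'));
} catch (err) {
  console.error('Cannot resolve "express" from', __dirname);
}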

Step-by-Step Walkthrough
1. Check the Module Name

  • Ensure the module name is spelled correctly.
  • Common mistakes: case sensitivity (express vs Express) or typos.

// Wrong (typo)
const exress = require('exress');

// Correct
const express = require('express');


2. Install Missing Dependencies
npm install MODULE_NAME

or with yarn
yarn add MODULE_NAME

Example
npm install express

3. Verify Local vs Global Installations
Some modules are installed globally, but Node.js expects them locally.

Check if installed
npm list MODULE_NAME

If missing locally
npm install MODULE_NAME

4. Fix File Path Requires
When requiring local files, always use relative or absolute paths.
// Wrong (missing ./)
const config = require('config');

// Correct (relative path)
const config = require('./config');


5. Clear Node.js Cache
Sometimes cached modules cause issues. Clear cache:
npm cache clean --force

Then reinstall
rm -rf node_modules package-lock.json
npm install


6. Check NODE_PATH Environment Variable
If you rely on custom paths, ensure NODE_PATH is set correctly.

On macOS/Linux
export NODE_PATH=./src

On Windows (PowerShell)
$env:NODE_PATH = ".\src"

7. Use Absolute Paths with path.resolve
For complex directory structures, avoid relative path confusion:
const path = require('path');
const config = require(path.resolve(__dirname, 'config.js'));

Code Snippet Example

// index.js
const express = require('express');
const path = require('path');
const config = require(path.resolve(__dirname, 'config.js'));

const app = express();
app.get('/', (req, res) => {
res.send('Hello, World!');
});
app.listen(3000, () => console.log('Server running on port 3000'));


Workflow JSON Example

{
  "name": "fix-node-module-error",
  "steps": [
    { "check": "Verify spelling of module name" },
    { "command": "npm install MODULE_NAME" },
    { "check": "Ensure local installation in node_modules" },
    { "fix": "Add ./ for relative file imports" },
    { "command": "npm cache clean --force" },
    { "command": "rm -rf node_modules package-lock.json && npm install" }
  ]
}


Use Cases / Scenarios

  • Web applications using Express.js where express is missing.
  • CLI tools failing due to global vs local installs.
  • Microservices with deep folder structures requiring absolute paths.

Limitations / Considerations

  • Deleting node_modules (step 5) removes all installed packages, so a full reinstall is required; npm cache clean only clears the download cache, not installed packages.
  • Global modules are not accessible in local projects by default.
  • Path resolution may vary across operating systems.

Fixes for Common Pitfalls

  • Typos → Double-check module names.
  • Wrong relative path → Always use ./ for local files.
  • Corrupted node_modules → Delete and reinstall.
  • Environment misconfiguration → Ensure correct NODE_PATH.
Conclusion
The "Cannot find module" error in Node.js typically arises from missing installations, path issues, or misconfigurations. By verifying module names, reinstalling dependencies, fixing paths, and clearing the cache, most errors can be resolved quickly.



European Visual Studio 2022 Hosting - HostForLIFE.eu :: New Features in Visual Studio 2026

clock September 17, 2025 08:18 by author Peter

Visual Studio 2026 is Microsoft's most audacious move into the AI-first age of software development, not simply another update. With deep GitHub Copilot integration, support for C# 14 and .NET 10, a redesigned user interface, and significant performance enhancements, this release aims to modernize the IDE for the next decade while increasing developer productivity.

Here’s a detailed breakdown of everything new in Visual Studio 2026.

Deep AI Integration with GitHub Copilot

The standout theme of Visual Studio 2026 is its AI-first design philosophy. Copilot is no longer a sidekick plugin — it’s fully baked into the IDE.

Key AI-Powered Features

  • Adaptive Paste (“Paste & Fix”): When you paste code from the web or another project, Copilot automatically rewrites it to match your project’s naming conventions, formatting style, and architecture.
  • Context-Aware Suggestions: Copilot now understands your entire solution context, offering smarter code completions and recommendations that reflect your codebase rather than just generic snippets.
  • Inline Testing and Docs: Copilot can generate test cases, explain methods inline, and draft documentation that matches project standards.
  • Performance & Security Insights: A new Profiler Agent runs in the background, catching performance bottlenecks and security flaws before you push changes or open pull requests.

This isn’t just autocomplete — it’s a developer co-pilot that saves time and reduces cognitive load.

.NET 10 and C# 14 Support
Another major leap is full support for .NET 10 and C# 14, making Visual Studio 2026 future-proof for modern enterprise development.

What’s New in .NET 10?

  • Improved cross-platform compatibility for cloud-native and AI-driven workloads.
  • Better runtime performance for APIs, microservices, and containerized apps.
  • Enhanced support for minimal APIs, making it easier to build lightweight, high-performance web services.
  • Expanded tooling for MAUI (Multi-platform App UI), bringing richer cross-device app development.

What’s New in C# 14?

  • Expanded pattern matching that simplifies handling complex data structures.
  • Improved async/await support, reducing boilerplate in concurrent programming.
  • New syntax sugar (shorter property and lambda declarations) for cleaner, more concise code.
  • Enhanced source generators with fewer limitations, making metaprogramming more powerful.

Together, these updates make C# a stronger competitor to modern languages like Rust, Go, and Kotlin — but still deeply tied to enterprise ecosystems.

Modernized UI and Developer Experience
Visual Studio 2026 brings a much-needed UI refresh:

  • Fluent UI-based design with cleaner icons, consistent spacing, and smoother navigation.
  • New bottom editor margin: Displays line numbers, selection counts, encoding, and other vital info in one place.
  • 11 new themes (tinted variants) with accessibility improvements for color blindness and contrast.
  • Simplified settings migration: Preferences and keyboard shortcuts carry over from previous versions seamlessly.

The new design is meant to reduce fatigue, especially for developers spending 8+ hours inside the IDE.

Performance Gains Across the Board
Performance was a top complaint in older versions — Microsoft clearly listened.

  • Faster startup times (especially for large enterprise solutions).
  • Snappier branch switching in Git workflows.
  • Reduced build times, even for solutions with thousands of projects.
  • Hot Reload improvements (especially for Razor and Blazor projects).
  • Better IntelliSense performance with fewer lags and smarter caching.

For teams working on massive monorepos or cloud-scale projects, these improvements save hours every week.

Language & Platform Improvements
In addition to .NET 10 and C# 14, developers get more modern language tooling:

  • C++26 preview support and updated STL libraries for system-level and game development.
  • Improved Razor editor for web developers, making Hot Reload more stable.
  • Service Fabric tooling modularized into extensions (no longer bundled, keeping the IDE leaner).
  • Expanded diagnostic tooling, including better memory analyzers and async call visualization.

Release Model and Compatibility
Microsoft is also changing how updates roll out:

  • Insiders Channel replaces the old “Preview Channel” — developers can try monthly feature builds earlier, with some instability.
  • Side-by-side installation: You can install VS2026 alongside VS2022 safely.
  • Extension compatibility: Most VS2022 extensions work out-of-the-box in VS2026.

This ensures smoother adoption for enterprise teams that rely heavily on custom extensions.

Challenges and Trade-Offs

Not everything is perfect in VS2026. Developers should be aware of:
  • AI fatigue: Some devs feel Copilot interrupts flow; you’ll likely need to tune or disable features.
  • Incomplete C++26 features — still under development.
  • Legacy dependencies: Some internal parts of VS still rely on older frameworks.
  • Stability risks in the Insiders channel — not recommended for mission-critical production work yet.

Why This Release Matters?

  • Visual Studio 2026 represents Microsoft’s fusion of traditional IDE power with AI-driven coding assistance.
  • For developers: Less boilerplate, faster builds, and cleaner UI.
  • For enterprises: Confidence in modern frameworks (.NET 10, C# 14) with better productivity tooling.
  • For the future: A clear move toward AI-first development environments, where IDEs actively help build, test, and optimize code.

Final Thoughts
Visual Studio 2026 isn’t just an incremental upgrade; it’s a redefinition of the development experience. With AI and Copilot as core features, support for the latest .NET and C#, and a focus on speed, modern design, and compatibility, it positions Microsoft’s IDE as the tool of choice for the next generation of developers. If you’re building modern apps, services, or AI-driven platforms, VS2026 will likely become the default enterprise IDE.



Node.js Hosting - HostForLIFE :: Node.js API Rate Limiting Explained: Token Bucket & Leaky Bucket Techniques

clock August 25, 2025 09:25 by author Peter

Rate limiting restricts the number of requests a client may make in a given amount of time, guarding against abuse and smoothing out traffic spikes. Without it, a misbehaving client or a noisy neighbor could overload your server, raise costs, and degrade the experience for everyone. In Node.js, rate limiting is usually implemented as Express middleware, and you choose an algorithm based on your traffic patterns.

Why Rate Limit? (Simple Words)

  • Fairness: Prevent one user from hogging resources.
  • Stability: Avoid sudden traffic spikes that crash servers.
  • Security: Mitigate brute‑force login attempts and scraping.
  • Cost Control: Keep bandwidth and compute costs predictable.

Core Ideas You’ll Use

  • Identity (the key): How you group requests (e.g., by IP, API key, user ID).
  • Allowance: How many requests are allowed per window or per second.
  • Storage: Where you remember counts/tokens (in‑memory for a single instance; Redis for a cluster).
  • Backoff/Signals: How the client should slow down (HTTP 429 + headers like Retry-After).

Algorithm Overview (When to Use What)

  • Fixed Window Counter: Simple. “100 requests every 60s.” Can burst at window edges.
  • Sliding Window (Log or Rolling): Smoother than fixed. More accurate but heavier.
  • Token Bucket: Allows short bursts but enforces an average rate. Great for user‑facing APIs.
  • Leaky Bucket (Queue/Drip): Smooth, constant outflow; good when you must strictly pace downstream systems.


Baseline: Fixed Window Counter (In‑Memory)
Good as a learning step or for single‑process dev environments.
// middleware/fixedWindowLimiter.js
const WINDOW_MS = 60_000; // 60 seconds
const MAX_REQUESTS = 100; // per window per key

const store = new Map(); // key -> { count, windowStart }

function getKey(req) {
  return req.ip; // or req.headers['x-api-key'], req.user.id, etc.
}

module.exports = function fixedWindowLimiter(req, res, next) {
  const key = getKey(req);
  const now = Date.now();
  const entry = store.get(key) || { count: 0, windowStart: now };

  if (now - entry.windowStart >= WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }

  entry.count += 1;
  store.set(key, entry);

  const remaining = Math.max(0, MAX_REQUESTS - entry.count);
  res.setHeader('X-RateLimit-Limit', MAX_REQUESTS);
  res.setHeader('X-RateLimit-Remaining', Math.max(0, remaining));
  res.setHeader('X-RateLimit-Reset', Math.ceil((entry.windowStart + WINDOW_MS) / 1000));

  if (entry.count > MAX_REQUESTS) {
    res.setHeader('Retry-After', Math.ceil((entry.windowStart + WINDOW_MS - now) / 1000));
    return res.status(429).json({ error: 'Too Many Requests' });
  }

  next();
};

Token Bucket (Burst‑friendly Average Rate)
How it works: You have a bucket that slowly refills with tokens (e.g., 5 tokens/second) up to a max capacity (burst). Each request consumes a token. No tokens? The request is limited.
// middleware/tokenBucketLimiter.js
const RATE_PER_SEC = 5;      // refill speed
const BURST_CAPACITY = 20;   // max tokens

const buckets = new Map();   // key -> { tokens, lastRefill }

function getKey(req) { return req.ip; }

module.exports = function tokenBucketLimiter(req, res, next) {
  const key = getKey(req);
  const now = Date.now();
  let bucket = buckets.get(key);
  if (!bucket) {
    bucket = { tokens: BURST_CAPACITY, lastRefill: now };
    buckets.set(key, bucket);
  }

  // Refill based on elapsed time
  const elapsedSec = (now - bucket.lastRefill) / 1000;
  bucket.tokens = Math.min(BURST_CAPACITY, bucket.tokens + elapsedSec * RATE_PER_SEC);
  bucket.lastRefill = now;

  if (bucket.tokens >= 1) {
    bucket.tokens -= 1; // consume for this request
    res.setHeader('X-RateLimit-Policy', `${RATE_PER_SEC}/sec; burst=${BURST_CAPACITY}`);
    res.setHeader('X-RateLimit-Tokens', Math.floor(bucket.tokens));
    return next();
  }

  const needed = 1 - bucket.tokens;
  const waitSeconds = needed / RATE_PER_SEC;
  res.setHeader('Retry-After', Math.ceil(waitSeconds));
  return res.status(429).json({ error: 'Too Many Requests' });
};

When to use: You want to permit quick bursts (nice UX) but keep a sustained average.

Leaky Bucket (Constant Outflow) 

How it works: Requests enter a queue (the bucket). They “leak” at a fixed rate. If the bucket is full, you reject or drop new requests.
// middleware/leakyBucketLimiter.js
const LEAK_RATE_PER_SEC = 5;    // how many requests per second can pass
const BUCKET_CAPACITY = 50;     // max queued requests

const buckets = new Map();      // key -> { queue, lastLeak }

function getKey(req) { return req.ip; }

module.exports = function leakyBucketLimiter(req, res, next) {
  const key = getKey(req);
  const now = Date.now();
  let bucket = buckets.get(key);
  if (!bucket) {
    bucket = { queue: 0, lastLeak: now };
    buckets.set(key, bucket);
  }

  // Leak based on elapsed time
  const elapsedSec = (now - bucket.lastLeak) / 1000;
  const leaked = Math.floor(elapsedSec * LEAK_RATE_PER_SEC);
  if (leaked > 0) {
    bucket.queue = Math.max(0, bucket.queue - leaked);
    bucket.lastLeak = now;
  }

  if (bucket.queue >= BUCKET_CAPACITY) {
    res.setHeader('Retry-After', 1);
    return res.status(429).json({ error: 'Too Many Requests (bucket full)' });
  }

  bucket.queue += 1; // enqueue this request
  // In practice, you would defer processing; for middleware demo we let it pass immediately
  next();
};

When to use: You must strictly pace downstream dependencies (e.g., payment gateway rate caps).

Wiring It Up in Express
// server.js
const express = require('express');
const fixedWindowLimiter = require('./middleware/fixedWindowLimiter');
const tokenBucketLimiter = require('./middleware/tokenBucketLimiter');
// const leakyBucketLimiter = require('./middleware/leakyBucketLimiter');

const app = express();

// Example: apply global limiter
app.use(tokenBucketLimiter);

// Or apply per‑route
app.get('/public', fixedWindowLimiter, (req, res) => res.send('ok'));
app.get('/payments', /* leakyBucketLimiter, */ (req, res) => res.send('paid'));

app.listen(3000, () => console.log('API on :3000'));


Production‑Ready Storage with Redis

In clustered or serverless environments, in‑memory maps don’t work across instances. Use a shared store like Redis to coordinate limits.
// middleware/redisTokenBucket.js
const IORedis = require('ioredis');
const redis = new IORedis(process.env.REDIS_URL);

const RATE_PER_SEC = 10;
const BURST_CAPACITY = 40;

function keyFor(clientKey) { return `rl:tb:${clientKey}`; }

module.exports = async function redisTokenBucket(req, res, next) {
  try {
    const clientKey = req.ip; // replace with API key or user id in real apps
    const now = Date.now();
    const k = keyFor(clientKey);

    // Read bucket state
    const data = await redis.hmget(k, 'tokens', 'lastRefill');
    let tokens = parseFloat(data[0]);
    let lastRefill = parseInt(data[1], 10);

    if (Number.isNaN(tokens)) tokens = BURST_CAPACITY;
    if (Number.isNaN(lastRefill)) lastRefill = now;

    const elapsedSec = (now - lastRefill) / 1000;
    tokens = Math.min(BURST_CAPACITY, tokens + elapsedSec * RATE_PER_SEC);

    if (tokens >= 1) {
      tokens -= 1;
      await redis.hmset(k, 'tokens', tokens, 'lastRefill', now);
      await redis.expire(k, Math.ceil(BURST_CAPACITY / RATE_PER_SEC) + 60);
      res.setHeader('X-RateLimit-Policy', `${RATE_PER_SEC}/sec; burst=${BURST_CAPACITY}`);
      res.setHeader('X-RateLimit-Tokens', Math.floor(tokens));
      return next();
    }

    const needed = 1 - tokens;
    const waitSeconds = needed / RATE_PER_SEC;
    res.setHeader('Retry-After', Math.ceil(waitSeconds));
    return res.status(429).json({ error: 'Too Many Requests' });
  } catch (err) {
    // Fail‑open or fail‑closed? Choose policy. Here we fail‑open so API stays usable.
    console.error('Rate limiter error', err);
    next();
  }
};


Testing Your Limiter (Quick Ideas)

  • Unit tests: Simulate timestamps and assert counters/tokens.
  • Load tests: Use autocannon or k6 to verify 429 rates, latencies, and headers (see the example after this list).
  • Chaos tests: Kill Redis or introduce latency—does your API fail open or closed?
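
For example, a quick load test with autocannon (100 concurrent connections for 10 seconds against the rate-limited route from the wiring example above):
npx autocannon -c 100 -d 10 http://localhost:3000/public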


Helpful HTTP Headers
Return clear metadata so clients can self‑throttle:

  • X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset
  • Retry-After on 429
  • (Optional, standardized) RateLimit-Limit, RateLimit-Remaining, RateLimit-Reset

Best Practices & Tips

  • Choose the key wisely: Prefer API key/user ID over raw IP (NATs/proxies share IPs).
  • Protect sensitive routes more: e.g., logins: 5/min per user + per IP.
  • Combine with caching & auth: Rate limit after auth to identify the true principal.
  • Use Redis for scale: In‑memory only works on a single instance.
  • Expose headers & docs: Tell clients how to back off.
  • Observe: Log 429s, export metrics (Prometheus) and set alerts.
  • Legal & UX: Don’t silently drop; return 429 with guidance.

Choosing an Algorithm (Cheat Sheet)

  • Public API with bursts OK: Token Bucket
  • Strict pacing to external vendor: Leaky Bucket
  • Simple per‑minute cap: Fixed/Sliding Window
  • High accuracy under spiky traffic: Sliding Window (rolling)

Summary
Rate limiting is essential for reliable Node.js APIs. Start by defining who you limit (key), how much (policy), and where you store state (Redis for multi‑instance). Pick an algorithm that matches your needs: fixed/sliding windows for simplicity, a token bucket for burst‑friendly average rates, or a leaky bucket for steady pacing. Implement as Express middleware, return helpful headers, test under load, and monitor 429s. With these patterns, your API stays fast, fair, and resilient—even during traffic spikes.



Node.js Hosting - HostForLIFE :: What Are Node.js's Typical Use Cases?

clock August 20, 2025 08:11 by author Peter

Why Node.js is Popular?
Node.js is fast, event-driven, and non-blocking, which means it can handle many tasks at the same time without slowing down. This makes it a popular choice for developers who need scalable and efficient applications.



Building APIs
Node.js is commonly used to build RESTful or GraphQL APIs. APIs allow different applications or services to communicate with each other.

Example
const express = require('express');
const app = express();
app.use(express.json());

app.get('/users', (req, res) => {
  res.json([{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }]);
});

app.listen(3000, () => {
  console.log('API server running on port 3000');
});


Node.js handles multiple API requests at the same time, making it suitable for backend services.

Real-Time Applications
Node.js is perfect for real-time apps such as chat applications, online games, or collaborative tools because it supports fast, two-way communication using WebSockets.

Example
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', ws => {
  ws.send('Welcome!');
  ws.on('message', message => {
    console.log(`Received: ${message}`);
  });
});


WebSockets allow the server and client to communicate instantly, making real-time interactions possible.

Streaming Applications
Node.js is ideal for streaming audio, video, or large files efficiently because it processes data in chunks.

Example
const fs = require('fs');
const http = require('http');

http.createServer((req, res) => {
  const stream = fs.createReadStream('large-video.mp4');
  stream.pipe(res);
}).listen(3000, () => {
  console.log('Streaming server running on port 3000');
});


Streams send data in small pieces, preventing memory overload and improving performance.

Microservices

Node.js works well for microservices, where an application is divided into small, independent services that handle specific tasks.

Example
const express = require('express');
const app = express();
app.use(express.json());

app.post('/orders', (req, res) => {
  const order = req.body;
  res.json({ message: 'Order created', order });
});

app.listen(4000, () => {
  console.log('Order microservice running on port 4000');
});

Each microservice handles a specific domain, communicates via APIs, and can be scaled independently.

Summary
Node.js is widely used for APIs, real-time applications, streaming services, and microservices. Its event-driven, non-blocking architecture allows developers to handle multiple tasks efficiently, making it perfect for scalable and responsive applications. Understanding these use cases helps developers choose Node.js for projects requiring speed, performance, and easy scalability.

HostForLIFE.eu Node.js Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes. We have customers from around the globe, spread across every continent. We serve the hosting needs of the business and professional, government and nonprofit, entertainment and personal use market segments.



Node.js Hosting - HostForLIFE :: What is the Event Loop in Node.js, and How Does It Work?

clock August 14, 2025 07:41 by author Peter

Developers often discover that the Event Loop is the secret behind Node.js's ability to manage thousands of concurrent operations despite being single-threaded. Even with a single main thread, this model ensures that Node.js runs code efficiently without blocking other work.

The Reason for the Event Loop
JavaScript was originally created to handle clicks and keystrokes on interactive web pages running in the browser, where the event loop keeps interactions fluid without freezing the user interface. Node.js brought JavaScript to the server side, where it handles I/O tasks such as sending network requests, reading files, and querying databases. Thanks to the Event Loop, these tasks can be managed without halting the execution of other code.

How the Event Loop Works in Node.js
The Event Loop is managed by libuv, a C library that provides asynchronous I/O. Here’s the step-by-step process:

  • Call Stack Execution: Node.js runs your synchronous code first.
  • Delegating Tasks: When asynchronous functions like setTimeout or fs.readFile are called, they are handed over to background APIs or the thread pool.
  • Callback Queue: Once the background task is done, its callback is added to the queue.
  • Event Loop Processing: The event loop checks if the call stack is empty and then pushes the next callback from the queue to be executed.

Event Loop Phases
The Node.js Event Loop runs in phases:

  • Timers: Executes callbacks from setTimeout and setInterval.
  • Pending Callbacks: Executes callbacks for system operations.
  • Idle, Prepare: Internal use only.
  • Poll: Retrieves new I/O events; executes I/O callbacks.
  • Check: Executes setImmediate callbacks.
  • Close Callbacks: Executes close events (e.g., socket.on('close')).


Microtasks (like process.nextTick() and resolved promises) run between these phases, before moving to the next phase.

Example: Event Loop in Action
Example:
console.log("Start");

setTimeout(() => {
  console.log("Timeout callback");
}, 0);

Promise.resolve().then(() => {
  console.log("Promise callback");
});

console.log("End");


Output:

  • Start
  • End
  • Promise callback
  • Timeout callback

Explanation:
Promise callback runs before Timeout callback because promises are microtasks, which have higher priority than macrotasks like setTimeout.

Understanding Microtasks vs. Macrotasks
  • Microtasks: process.nextTick(), Promise.then(). These run immediately after the current operation completes.
  • Macrotasks: setTimeout(), setImmediate(), I/O callbacks. These run in the normal event loop phases.
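
A minimal sketch of that ordering in plain Node.js (no external dependencies):
console.log('sync');

process.nextTick(() => console.log('nextTick'));       // nextTick queue: runs right after synchronous code
Promise.resolve().then(() => console.log('promise'));  // promise microtask: runs after the nextTick queue
setTimeout(() => console.log('timeout'), 0);           // macrotask: runs later, in the timers phase

// Output: sync, nextTick, promise, timeout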

Key Points to Remember
  • Node.js is single-threaded for JavaScript execution.
  • The Event Loop allows asynchronous, non-blocking operations.
  • Microtasks always run before the next macrotask.
  • libuv handles background tasks and the thread pool.

Summary
The Event Loop is the heart of Node.js's asynchronous programming model. It ensures that even though JavaScript runs on a single thread, Node.js can handle thousands of concurrent tasks without blocking. By delegating I/O operations to the background and using a queue system for callbacks, it keeps applications fast and responsive. Understanding the Event Loop is essential for writing efficient Node.js applications.



Node.js Hosting - HostForLIFE :: Uploading Files from React to Cloudinary: A Comprehensive Guide with Secure Backend, Progress, and Preview (2025)

clock July 28, 2025 07:24 by author Peter

File uploads, whether documents, videos, or images, are a common feature of modern web applications. In this tutorial, we'll demonstrate how to use Cloudinary, a powerful media management platform, to build a robust file upload feature in a React (v18+) application, including secure backend-signed uploads with Node.js (Express). Using functional components, React Hooks, and current best practices, we'll create a reusable upload component that supports a variety of file types, shows a preview when possible, tracks upload progress, and uploads media securely with a backend-generated signature.


What is Cloudinary?
Cloudinary is a cloud-based service for storing, optimizing, and delivering images, videos, and other media files. It simplifies media handling by providing:

  • Media upload and storage
  • CDN delivery and transformation
  • Automatic optimization and responsive images
  • Support for multiple media types

What Will We Build?
A full-stack app (React + Node.js) that:

  • Accepts images, videos, and documents as input
  • Shows previews for image/video types
  • Tracks upload progress
  • Generates a secure upload signature on the backend
  • Uploads securely to Cloudinary

Project Structure


cloudinary-react-upload/
├── client/            # React frontend
│   ├── src/
│   │   ├── components/FileUploader.jsx
│   │   ├── App.jsx
│   │   └── main.jsx
│   └── .env
├── server/            # Node.js backend
│   ├── index.js
│   └── .env
├── package.json (root - manages both client/server via scripts)


Step 1. Cloudinary Setup

  • Sign up at cloudinary.com
  • Go to your dashboard and note:
    • Cloud Name
    • API Key
    • API Secret
  • Navigate to Settings > Upload > Upload Presets
    • Create a new signed preset
    • Enable "Auto format" and "Auto resource type"

Backend .env (in server/.env)
CLOUD_NAME=your_cloud_name
CLOUD_API_KEY=your_api_key
CLOUD_API_SECRET=your_api_secret
UPLOAD_PRESET=your_signed_preset

Step 2: Backend Setup with Node.js (Express)
Install dependencies
cd server
npm init -y
npm install express dotenv cors cloudinary

server/index.js
import express from 'express';
import cors from 'cors';
import dotenv from 'dotenv';
import { v2 as cloudinary } from 'cloudinary';

dotenv.config();
const app = express();
app.use(cors());

cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.CLOUD_API_KEY,
  api_secret: process.env.CLOUD_API_SECRET
});

app.get('/get-signature', (req, res) => {
  const timestamp = Math.floor(Date.now() / 1000);
  const signature = cloudinary.utils.api_sign_request(
    {
      timestamp,
      upload_preset: process.env.UPLOAD_PRESET,
    },
    process.env.CLOUD_API_SECRET
  );

  res.json({
    timestamp,
    signature,
    cloudName: process.env.CLOUD_NAME,
    apiKey: process.env.CLOUD_API_KEY,
    uploadPreset: process.env.UPLOAD_PRESET,
  });
});

const PORT = process.env.PORT || 4000;
app.listen(PORT, () => console.log(`Server running on port ${PORT}`));


Run the backend:
node index.js

Step 3. React Frontend Setup (Vite)
Create project and install dependencies:
npm create vite@latest client -- --template react
cd client
npm install axios

Frontend .env (in client/.env)
VITE_API_URL=http://localhost:4000

Step 4. FileUploader Component (Secure Upload)

client/src/components/FileUploader.jsx

import { useState, useRef } from 'react';
import axios from 'axios';

const FileUploader = () => {
  const [file, setFile] = useState(null);
  const [previewUrl, setPreviewUrl] = useState(null);
  const [progress, setProgress] = useState(0);
  const [uploadedUrl, setUploadedUrl] = useState(null);
  const inputRef = useRef();

  const handleFileChange = (e) => {
    const selected = e.target.files[0];
    setFile(selected);
    setUploadedUrl(null);
    setProgress(0);

    if (selected?.type.startsWith('image') || selected?.type.startsWith('video')) {
      const url = URL.createObjectURL(selected);
      setPreviewUrl(url);
    } else {
      setPreviewUrl(null);
    }
  };

  const handleUpload = async () => {
    if (!file) return;

    try {
      const { data: signatureData } = await axios.get(`${import.meta.env.VITE_API_URL}/get-signature`);

      const formData = new FormData();
      formData.append('file', file);
      formData.append('api_key', signatureData.apiKey);
      formData.append('timestamp', signatureData.timestamp);
      formData.append('upload_preset', signatureData.uploadPreset);
      formData.append('signature', signatureData.signature);

      const { data } = await axios.post(
        `https://api.cloudinary.com/v1_1/${signatureData.cloudName}/auto/upload`,
        formData,
        {
          onUploadProgress: (e) => {
            const percent = Math.round((e.loaded * 100) / e.total);
            setProgress(percent);
          },
        }
      );

      setUploadedUrl(data.secure_url);
      inputRef.current.value = null;
    } catch (err) {
      console.error('Upload failed:', err);
      alert('Upload failed. Check console.');
    }
  };

  return (
    <section style={{ padding: '1rem' }}>
      <h2>Secure File Upload to Cloudinary</h2>

      <input
        ref={inputRef}
        type="file"
        accept="image/*,video/*,.pdf,.doc,.docx"
        onChange={handleFileChange}
      />

      {previewUrl && file?.type.startsWith('image') && (
        <img src={previewUrl} alt="Preview" width={200} style={{ marginTop: '1rem' }} />
      )}

      {previewUrl && file?.type.startsWith('video') && (
        <video width={300} controls style={{ marginTop: '1rem' }}>
          <source src={previewUrl} type={file.type} />
        </video>
      )}

      {!previewUrl && file && (
        <p style={{ marginTop: '1rem' }}>Selected File: {file.name}</p>
      )}

      <button onClick={handleUpload} disabled={!file} style={{ marginTop: '1rem' }}>
        Upload
      </button>

      {progress > 0 && <p>Progress: {progress}%</p>}

      {uploadedUrl && (
        <div style={{ marginTop: '1rem' }}>
          <p>Uploaded Successfully!</p>
          <a href={uploadedUrl} target="_blank" rel="noopener noreferrer">View File</a>
        </div>
      )}
    </section>
  );
};

export default FileUploader;


Step 5. Use Component in App
client/src/App.jsx

import FileUploader from './components/FileUploader';

function App() {
  return (
    <div style={{ maxWidth: '600px', margin: '0 auto', fontFamily: 'sans-serif' }}>
      <h1>Cloudinary File Uploader</h1>
      <FileUploader />
    </div>
  );
}

export default App;

Why Use Signed Uploads?

Cloudinary offers two ways to upload files:

  • Unsigned Uploads: Anyone with your upload preset can upload files. Not recommended for production because it's insecure.
  • Signed Uploads (used in this guide): The backend signs each upload request using your Cloudinary secret key, making it secure. This ensures:
    • Files are uploaded only by authenticated users (if you add auth)
    • Upload presets can't be abused
    • You have more control over what's uploaded

Best Practices

  • Use /auto/upload endpoint to auto-detect file type (image/video/raw)
  • Don’t expose Cloudinary secret API keys in frontend
  • Limit file size on the client and/or backend (a client-side sketch follows below)
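
To apply the client-side size limit, the file can be rejected in handleFileChange before any upload starts. A minimal sketch (the 10 MB cap is an arbitrary example, not a Cloudinary requirement):
const MAX_FILE_SIZE = 10 * 1024 * 1024; // 10 MB, example limit only

const handleFileChange = (e) => {
  const selected = e.target.files[0];
  if (selected && selected.size > MAX_FILE_SIZE) {
    alert('File is too large (max 10 MB).');
    e.target.value = null; // reset the input so the same file can be re-selected
    return;
  }
  // ...continue with the preview logic shown earlier
};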

Supported File Types
Cloudinary accepts:

  • Images: jpg, png, webp, etc.
  • Videos: mp4, mov, avi
  • Documents: pdf, doc, docx, txt (uploaded as raw)

Conclusion
In this post, we developed a modern React file uploader that integrates seamlessly with Cloudinary. It offers a safe, production-ready starting point with preview capabilities, progress tracking, support for a variety of file types, and secure backend-signed uploads. Blogs, admin panels, profile setups, and CMSs can all make use of this uploader. For more complex use cases, consider Cloudinary's transformation capabilities or adding authentication in front of the signature endpoint.

HostForLIFE.eu Node.js Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes. We have customers from around the globe, spread across every continent. We serve the hosting needs of the business and professional, government and nonprofit, entertainment and personal use market segments.



AngularJS Hosting Europe - HostForLIFE :: Angular Subscription Management: Using RxJS to Fix Memory Leaks

clock July 21, 2025 08:26 by author Peter

Angular relies heavily on RxJS Observables for asynchronous data: HTTP requests, form value changes, events, route parameters, and more. Most of the time you subscribe to them, but failing to unsubscribe properly can cause memory leaks and unexpected behavior, especially in large or long-running apps. In this article, we share how we faced a real-life issue caused by missed unsubscriptions, how we identified the leak, and how we applied best practices such as the takeUntil operator and a reusable base class.

Real Scenario
In a dashboard application, several components listened to data streams from the API, user interactions, and route parameter changes. These components subscribed to observables with the RxJS subscribe() method inside the ngOnInit() lifecycle hook.

Example
ngOnInit(): void {
  this.route.params.subscribe(params => {
    this.loadData(params['id']);
  });

  this.userService.getUser().subscribe(user => {
    this.user = user;
  });
}


After navigating between routes multiple times, we noticed the following issues.

  • Console logs appeared multiple times for the same action.
  • Network requests were duplicated.
  • The browser’s memory usage slowly increased over time.

Root Cause

Upon inspection with the Chrome DevTools Memory tab and Angular DevTools, we found that components were not being garbage collected: active subscriptions were holding references to the destroyed components.

Solution: Use the takeUntil Pattern with a Subject

To fix this, we implemented the takeUntil pattern with a private Subject.

Step 1. Declare an Unsubscribe Subject.
private destroy$ = new Subject<void>();

Step 2. Use takeUntil(this.destroy$) in Every Subscription.
ngOnInit(): void {
  this.route.params
    .pipe(takeUntil(this.destroy$))
    .subscribe(params => this.loadData(params['id']));

  this.userService.getUser()
    .pipe(takeUntil(this.destroy$))
    .subscribe(user => this.user = user);
}


Step 3. Emit and complete the Subject in ngOnDestroy().
ngOnDestroy(): void {
  this.destroy$.next();
  this.destroy$.complete();
}

This pattern ensures that all subscriptions automatically unsubscribe when the component is destroyed.

Improvement: Create a Base Component Class

To avoid repeating the same code in every component, we created a base class.
export abstract class BaseComponent implements OnDestroy {
  protected destroy$ = new Subject<void>();

  ngOnDestroy(): void {
    this.destroy$.next();
    this.destroy$.complete();
  }
}


Now, in any component.
export class MyComponent extends BaseComponent implements OnInit {

  ngOnInit(): void {
    this.dataService.getData()
      .pipe(takeUntil(this.destroy$))
      .subscribe(data => this.data = data);
  }

}
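
On newer Angular versions (16 and later), the takeUntilDestroyed operator from @angular/core/rxjs-interop can replace the manual Subject entirely. A minimal sketch, reusing the same dataService as above:
import { Component, DestroyRef, OnInit, inject } from '@angular/core';
import { takeUntilDestroyed } from '@angular/core/rxjs-interop';

@Component({ selector: 'app-my', templateUrl: './my.component.html' })
export class MyComponent implements OnInit {
  private destroyRef = inject(DestroyRef); // needed because we call the operator outside the constructor
  data: any;

  // DataService is the same injectable used in the earlier examples (import omitted)
  constructor(private dataService: DataService) {}

  ngOnInit(): void {
    this.dataService.getData()
      .pipe(takeUntilDestroyed(this.destroyRef)) // unsubscribes automatically on component destroy
      .subscribe(data => this.data = data);
  }
}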


Alternative Approach: AsyncPipe for Simpler Cases
In cases where we can bind observables directly in the template, we prefer using Angular’s AsyncPipe, which handles subscription and unsubscription automatically.

Instead of,
this.dataService.getData().subscribe(data => {
  this.data = data;
});


Use in the template.
data$ = this.dataService.getData();

<div *ngIf="data$ | async as data">
  {{ data.name }}
</div>


Conclusion
Failing to unsubscribe from observables in Angular can lead to performance issues, duplicate API calls, and memory leaks. Using takeUntil with a Subject is a reliable and scalable solution, especially when combined with a base component class. For simpler use cases, Angular's AsyncPipe provides a clean and safe way to handle subscriptions in templates. Adhering to these practices keeps your Angular applications running smoothly, easy to maintain, and free of the memory leaks that would otherwise degrade performance. You will maintain both efficiency and code clarity as a result.



About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.


Tag cloud

Sign in