Categories
Express JavaScript Nodejs

Add Timeout Capability to Express Apps with connect-timeout

To prevent requests from taking too long, it’s a good idea to have a maximum processing time for requests.

We can do this by timing out requests that take too long. The connect-timeout middleware can help us do this.

In this article, we'll look at how to use it to implement timeout features in our app.

One Catch

This middleware sends a timeout response back to the client if a request exceeds the maximum time that we specify.

However, in the background, it’ll continue to run whatever was running before the request timed out.

Resources like CPU and memory will continue to be used until the process terminates.

We may need to end the process and free up any resources that it was using.

Usage

The timeout function takes 2 arguments. The first is the maximum time allowed for the request to process, and the second is an options object.

Time

The time is either a number in milliseconds or a string accepted by the ms module.

On timeout, req will emit a timeout event.

Options

The second argument is an optional options object that can have the following property:

  • respond — controls whether this module will respond by forwarding an error. If it's true , the timeout error is passed to next so that we may customize the response behavior. The error has a timeout property and status code 503. The default is true .

req.clearTimeout()

The clearTimeout method clears the timeout on the request so that it won't fire for this request in the future.

req.timedout

A boolean that is true if the timeout has fired and false otherwise.

Example

We can use it as top-level middleware as follows. We have to stop the flow to the subsequent middleware if the request timed out so they won't run.

For example, we can write:

const express = require('express');  
const bodyParser = require('body-parser');  
const timeout = require('connect-timeout');  
const app = express();  
const haltOnTimedout = (req, res, next) => {  
  if (!req.timedout) {  
    next();  
  }  
}  
app.use(timeout('5s'))  
app.use(bodyParser.json())  
app.use(haltOnTimedout)
app.get('/', (req, res) => {  
  res.send('success');  
})
app.listen(3000);

Then our endpoints will time out after 5 seconds, and requests will only proceed beyond the bodyParser middleware if they haven't timed out.

We can test the timeout by adding a setTimeout to our route handler as follows:

const express = require('express');  
const bodyParser = require('body-parser');  
const timeout = require('connect-timeout');  
const app = express();  
const haltOnTimedout = (req, res, next) => {  
  if (!req.timedout) {  
    next();  
  }  
}  
app.use(timeout('5s'))  
app.use(bodyParser.json())  
app.use(haltOnTimedout)
app.get('/', (req, res) => {  
  setTimeout(() => {  
    res.send('success');  
  }, Math.random() * 7000);  
})
app.listen(3000);

Then some requests will time out and we'll get a 503 response.

We can catch the error and handle it ourselves as follows:

const express = require('express');  
const bodyParser = require('body-parser');  
const timeout = require('connect-timeout');  
const app = express();  
const haltOnTimedout = (req, res, next) => {  
  if (!req.timedout) {  
    next();  
  }  
}
app.use(timeout('5s'))  
app.use(bodyParser.json())  
app.use(haltOnTimedout)
app.get('/', (req, res) => {  
  setTimeout(() => {  
    if (!req.timedout) {  
      res.send('success');  
    }  
  }, Math.random() * 7000);  
})
app.use((err, req, res, next) => {  
  res.send('timed out');  
})
app.listen(3000);

In the setTimeout callback, we check the req.timedout property and only send a response if the request hasn't timed out. Since respond defaults to true , connect-timeout passes the timeout error to next for us, which invokes our error handler.

Note that we have the error handler after our route, so the forwarded error will go to it.

We should get timed out as the response if our request timed out, instead of the default error message and stack trace.

Conclusion

We can use the connect-timeout middleware to implement timeout functionality for our routes.

This sends a 503 response if a request times out, but it doesn't free the resources that are being used for the request or end the processing that's still running.

We can check if a request has timed out with the req.timedout property.

Finally, we can catch the error by adding our own error handler after our routes and calling next .


Storing Data Efficiently or Privately with JavaScript WeakMaps

With ES6, a new data structure called WeakMap was introduced alongside regular Maps.

Like Maps, WeakMaps store data as key-value pairs.

The difference is that WeakMaps don't interfere with garbage collection: once all other references to a key object are destroyed, the entry with that object as its key can no longer be accessed, and the object is garbage collected.

Use Cases for WeakMaps

Since an entry is destroyed as soon as the object that is its key is destroyed, WeakMaps are more efficient than Maps. They're also more private, since only code with the object reference can access the entry stored in the WeakMap.

This means that there’re a few uses cases for WeakMaps:

  • Keeping private data about a specific object, so that only code with the object reference can access the value
  • Keeping data without using extra resources
  • Keeping temporary data
  • Storing key-value pairs that can be destroyed on the fly without explicitly doing so
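As a sketch of the private-data use case, we can keep instance fields in a module-level WeakMap instead of on the object itself (the privateData map, Person class, and alice instance are illustrative names):

```javascript
// Instance data stored in a WeakMap, keyed by the instance.
// Only code in this module can reach privateData, so the age
// field can't be read or modified from outside.
const privateData = new WeakMap();

class Person {
  constructor(name, age) {
    this.name = name;
    privateData.set(this, { age });
  }
  isAdult() {
    return privateData.get(this).age >= 18;
  }
}

const alice = new Person('Alice', 30);
console.log(alice.isAdult()); // true
console.log(alice.age); // undefined, since age isn't stored on the instance
```

When an instance becomes unreachable, its entry in privateData is eligible for garbage collection automatically.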

Defining WeakMaps

We can define WeakMaps as follows:

let jane = {  
  name: "Jane"  
};  

let weakMap = new WeakMap([  
  [jane, 1]  
]);

Then we can access the entry with the object reference with the get method:

console.log(weakMap.get(jane));

Keys in a WeakMap can only be objects. If we try to create an entry with a primitive value, we’ll get an error.

For example, if we try to run:

let weakMap = new WeakMap([  
  [1, 1]  
]);

We’ll get the error ‘Uncaught TypeError: Invalid value used as weak map key.’

Once the object reference is destroyed, then we can no longer access the key-value pair that had the destroyed object as the key.

For example, if we write:

let jane = {  
  name: "Jane"  
}; 
 
let weakMap = new WeakMap([  
  [jane, 1]  
]);

jane = null;  
console.log(weakMap.get(jane));

Then we get undefined from the console.log since jane is set to null , so the entry keyed by the original object can no longer be looked up, and with no references left, the object can be garbage collected.

Manipulating WeakMaps

Unlike Maps, WeakMaps have fewer methods for manipulating entries. WeakMap instances only have the get , set , delete , and has methods.

get

The get method is used for looking up a value with the given key. It takes one argument, which is the key that we used to look up the value with. We can use it as follows:

let jane = {  
  name: "Jane"  
};

let weakMap = new WeakMap([  
  [jane, 1]  
]);

console.log(weakMap.get(jane));

Then we get 1 as the value.

Note that we have to use the exact object reference to get the value from a WeakMap. A new object that looks like the original reference won’t work. For example, if we write:

let jane = {  
  name: "Jane"  
};

let weakMap = new WeakMap([  
  [jane, 1]  
]);

console.log(weakMap.get({  
  name: "Jane"  
}));

We will get undefined because we need to pass in jane to get 1. Any other object with the same content doesn't have the same reference in memory. This is why we can't get the value with anything other than the original key.

set

The set method is used to add a new key-value pair to the WeakMap. It takes an object as the first argument for the key, and any value as the second argument.

For instance, we can use the set method as follows:

let jane = {  
  name: "Jane"  
};

let john = {  
  name: "john"  
};

let weakMap = new WeakMap([  
  [jane, 1]  
]);

weakMap.set(john, 2);  
console.log(weakMap.get(john));

delete

The delete method takes one argument, which is the object used as the key. We can remove the entry with the object reference as follows:

let jane = {  
  name: "Jane"  
};

let john = {  
  name: "john"  
};

let weakMap = new WeakMap([  
  [jane, 1]  
]);

weakMap.set(john, 2);  
weakMap.delete(john)  
console.log(weakMap.get(john));

We would get undefined from the last line after we called delete with john passed in since we removed the key-value pair from the WeakMap.

has

The has method lets us check if a value exists for a given key. It takes one argument, which is the key that we want to look up the value with.

For example, we can use it as follows:

let jane = {  
  name: "Jane"  
};

let weakMap = new WeakMap([  
  [jane, 1]  
]);

console.log(weakMap.has(jane));

We will get true from the console.log output since we have an entry with jane as the key.

Once again, like the get method, we have to use the exact object reference as the key to get true from the has method.

Therefore, the following will log false :

let jane = {  
  name: "Jane"  
};

let weakMap = new WeakMap([  
  [jane, 1]  
]);

console.log(weakMap.has({  
  name: "Jane"  
}));

We will get false because what we pass in isn't the exact object reference for the [jane, 1] entry. The key is jane , not another object that looks like jane , since the reference in memory is different.

WeakMaps are handy for storing data that can be destroyed easily since the keys are exact object references. They're also handy for storing data that should be garbage collected once references to it are destroyed. The ease of garbage collection makes freeing up resources an easy process.

Also, it’s great for private data since the keys can be destroyed easily. Once the key object is no longer in memory, then the entry with the given key is gone forever.


JavaScript Type Checking with Flow — More Utility Types

Flow is a type checker made by Facebook for checking JavaScript data types. It has many built-in data types we can use to annotate the types of variables and function parameters.

In this article, we’ll look at the built-in more utility types that come with Flow.

$NonMaybeType<T>

$NonMaybeType<T> converts type T to a non-maybe type. This means that we can't assign null or undefined to anything of the type that's returned by this utility type.

We can use it as follows:

type MaybeAge = ?number;  
type Age = $NonMaybeType<MaybeAge>;  
let age: Age = 1;

We can’t assign null or undefined to anything fo type Age :

let age2: Age = null;

The code above will give an error.

$ObjMap<T, F>

$ObjMap<T, F> returns a type that maps the object type T with the function type F .

For example, if we have a type for a mapping function:

type ExtractReturnType = <V>(() => V) => V;

Then we have the following function that runs a function:

function run<O: Object>(o: O): $ObjMap<O, ExtractReturnType> {  
  return Object.keys(o).reduce((acc, key) => ({ ...acc, [key]: o[key]() }), {});  
}

Then given that we have the following object:

const o = {  
  a: () => 1,  
  b: () => 'foo'  
};

Then we can get the return type of the method of the o object as follows:

(run(o).a: number);  
(run(o).b: string);

$ObjMapi<T, F>

$ObjMapi<T, F> is similar to $ObjMap<T, F> but F will be called with both the key and value types of the elements of the object type T .

For example, if we have:

type ExtractReturnType = <V>(() => V) => V;  
function run<O: Object>(o: O): $ObjMapi<O, ExtractReturnType> {  
  return Object.keys(o).reduce((acc, key) => ({ ...acc, [key]: o[key]() }), {});  
}  
const o = {  
  a: () => 1,  
  b: () => 'foo'  
};

Then we get the following types returned for a and b :

(run(o).a: { k: 'a', v: number });  
(run(o).b: { k: 'b', v: string });

In the code above, k is the key of o and v is the corresponding value of keys from o .

$TupleMap<T, F>

$TupleMap<T, F> takes an iterable type T , like a tuple or array, and a function type F , and returns the iterable type obtained by mapping each value in the iterable with a function of type F .

It’s the same as calling map in arrays in JavaScript.

For example, we can use it as follows:

type ExtractReturnType = <V>(() => V) => V;  
function run<A, I: Array<() => A>>(iter: I): $TupleMap<I, ExtractReturnType> {  
  return iter.map(fn => fn());  
}

const arr = [() => 1, () => 2];  
(run(arr)[0]: number);  
(run(arr)[1]: number);

Note that the return types of the functions in the arr array have to be the same. Otherwise, we'll get an error.

$Call<F, T...>

$Call<F, T...> is the type that results from calling a function of type F with 0 or more arguments of types T... . It's analogous to calling a function at run time, but the type is returned instead.

For example, we can use it as follows:

type Add = (number, number) => string;  
type Sum = $Call<Add, number, number>;  
let x: Sum = '1';

In the code above, we have the Add type, which is a function type that takes in 2 numbers and returns a string. We created a new type from it by writing:

type Sum = $Call<Add, number, number>;

Then we get that the Sum type is a string.

We can also write:

const add = (a: number, b: number) => (a + b).toString();  
type Add = (number, number) => string;  
type Sum = $Call<typeof add, number, number>;  
let x: Sum = '1';

As we can see, it’s useful for getting the return type of a function without actually calling it.

Class<T>

Given a type T representing instances of a class, Class<T> is the type of that class itself, which lets us pass classes around as values. Combined with generic type parameters, we can make a class that can take on multiple types.

For example, given the following class:

class Foo<T> {  
  foo: T;  
  constructor(foo: T) {  
    this.foo = foo;  
  }  
  getFoo(): T {  
    return this.foo;  
  }  
}

We can use it to create multiple classes:

type NumFoo = Foo<number>;  
type StringFoo = Foo<string>;

Then we can instantiate the classes as follows:

let numFoo: NumFoo = new Foo<number>(1);  
let stringFoo: StringFoo = new Foo<string>('abc');

$Shape<T>

$Shape<T> is a type that contains a subset of the properties included in T .

For example, given the Person type:

type Person = {  
  name: string,  
  age: number  
}

Then we can use the $Shape<T> type as follows:

const age: $Shape<Person> = { age: 10 };

Notice that we didn’t include the name property in the object assigned.

$Shape<T> isn’t the same as T with all its fields marked optional. $Shape<T> can be cast into T . For example, the age constant that we defined earlier can be cast as follows:

(age: Person);

$Exports<T>

$Exports<T> lets us import types from another file. For example, the following are the same:

import typeof * as T from './math';  
type T = $Exports<'./math'>;

In Flow, we have utility types for mapping object types with function types, such as mapping an object's methods to their return types. We also have a type for mapping iterable types, using functions of the same return type, to the return type of each function.

In addition, there’s the Class<T> utility type for defining generic classes, the $Shape<T> type for getting a subset of properties of type T as its own type.

There’s also the $Call<F, T,...> for retrieving the return type of F without calling it.

Finally, we have the $Exports<T> type for getting the types from another file.


Node.js FS Module — Truncating and Removing Files

Manipulating files and directories is a basic operation for any program. Since Node.js is a server-side platform that can interact directly with the computer it's running on, being able to manipulate files is a basic feature. Fortunately, Node.js has a fs module built into its library. It has many functions that can help with manipulating files and folders. Supported operations include basic ones like creating, opening, reading, and writing files and directories, and it can do these both synchronously and asynchronously. It also has a promise-based asynchronous API, and it can show statistics for a file. Almost all the file operations that we can think of can be done with the built-in fs module. In this article, we will truncate files with the truncate family of functions and remove files and symbolic links with the unlink family of functions.

Truncate Files with the fs.truncate Family of Functions

We can truncate files with the Node.js truncate family of functions. Truncating a file means shrinking the file to a specified size. To truncate a file asynchronously, we can use the truncate function. The function takes 3 arguments. The first argument is the path, which can be a string, a Buffer object or a URL object. When a file descriptor is passed in instead of a path, ftruncate is automatically called to truncate the file with the given file descriptor; passing in a file descriptor is deprecated and may throw an error in the future. The second argument is the length in bytes that we want to truncate the file to. The default value is 0. Any data beyond the specified size is lost when the size is smaller than the original size. The third argument is a callback function that is run when the truncate operation ends. It takes an err parameter, which is null when the truncate operation succeeds and an object with the error information otherwise.

To truncate a file with the truncate function, we can write the following code:

const fs = require("fs");  
const truncateFile = "./files/truncateFile.txt";

fs.truncate(truncateFile, 1, err => {  
  if (err) throw err;  
  console.log("File truncated");  
});

If we run the code above, there should be a single byte of content left in the file you’re truncating.

The synchronous version of the truncate function is the truncateSync function. The function takes 2 arguments. The first argument is the path, which can be a string, a Buffer object or a URL object. When a file descriptor is passed in instead of a path, ftruncateSync is automatically called to truncate the file with the given file descriptor; passing in a file descriptor is deprecated and may throw an error in the future. The second argument is the length in bytes that we want to truncate the file to. The default value is 0. Any data beyond the specified size is lost when the size is smaller than the original size. It returns undefined .

We can use the truncateSync function like in the following code:

const fs = require("fs");  
const truncateFile = "./files/truncateFile.txt";

try {  
  fs.truncateSync(truncateFile, 1);  
  console.log("File truncated");  
} catch (error) {  
  console.error(error);  
}

If we run the code above, there should be a single byte of content left in the file you’re truncating.

There’s also a promise version of the truncate function. The function takes 2 arguments. The first argument is the path object, which can be a string, a Buffer object or an URL object.The second argument is the length of the file in bytes that you want to truncate it to. The default value is 0. Any extra data bigger than the size specified is lost when the size is smaller than the original size. It returns a promise that is resolved with no arguments when the operation is successful.

To truncate a file with the promise version of the truncate function, we can write the following code:

const fsPromises = require("fs").promises;  
const truncateFile = "./files/truncateFile.txt";

(async () => {  
  try {  
    await fsPromises.truncate(truncateFile, 1);  
    console.log("File truncated");  
  } catch (error) {  
    console.error(error);  
  }  
})();

If we run the code above, again there should be a single byte of content left in the file you’re truncating.

Remove Files and Symbolic Links with the fs.unlink Family of Functions

We can remove a file or a symbolic link with the unlink function. The function takes 2 arguments. The first argument is the path, which can be a string, a Buffer object or a URL object. The second argument is a callback function that takes an err object, which is null when the removal succeeds and has the error data if the operation failed. The unlink function doesn't work on directories, whether empty or not. To remove directories, we should use the rmdir function.

To use the unlink function to remove a file, we can write something like the code below:

const fs = require("fs");  
const fileToDelete = "./files/deleteFile.txt";

fs.unlink(fileToDelete, err => {  
  if (err) {  
    throw err;  
  }  
  console.log("Removal complete!");  
});

If we run the code above, the file that’s to be deleted should be gone.

The synchronous version of the unlink function is the unlinkSync function. The function takes one argument. The only argument is the path object, which can be a string, a Buffer object or an URL object. It returns undefined .

We can use it like in the following code:

const fs = require("fs");  
const fileToDelete = "./files/deleteFile.txt";

try {  
  fs.unlinkSync(fileToDelete);  
  console.log("Removal complete!");  
} catch (error) {  
  console.error(error);  
}

There’s also a promise version of the unlink function. The function takes one argument. The only argument is the path object, which can be a string, a Buffer object or an URL object. It returns a promise that’s resolved with no argument when the operation is successful.

We can use it like in the following code:

const fsPromises = require("fs").promises;  
const fileToDelete = "./files/deleteFile.txt";

(async () => {  
  try {  
    await fsPromises.unlink(fileToDelete);  
    console.log("Removal complete!");  
  } catch (error) {  
    console.error(error);  
  }  
})();

If we run the code above, the file that’s to be deleted should be gone.

The promise version of the unlink function is a much better choice than the unlinkSync function when we want to do multiple things sequentially that include a call to unlink , since it doesn't tie up the whole program waiting for the file or symbolic link deletion to complete before moving on to other parts of the program.

In this article, we truncated files with the truncate family of functions and removed files and symbolic links with the unlink family of functions. The truncate family of functions lets us specify the number of bytes to keep while discarding the rest of the file. The unlink family of functions removes files and symbolic links. If we want to do these operations sequentially with other operations, we should use the promise versions of these functions. Even though the promise API is still experimental, it is much better than the synchronous versions since it allows for sequential yet asynchronous operations with promises. It also helps avoid callback hell, where callbacks are nested too many levels deep.


More Rxjs Transformation Operators — Scan and Window

Rxjs is a library for doing reactive programming. Creation operators are useful for generating data from various data sources to be subscribed to by Observers.

In this article, we’ll look at how to use transformation operators scan , switchMap , switchMapTo and window .

scan

The scan operator applies an accumulator function over the source Observable, combining the emitted values. It emits each intermediate result, and accepts an optional seed value.

It takes up to 2 arguments. The first is the accumulator function, which is the function that combines the value that was accumulated so far with the new emitted value.

The second is an optional argument, which is the seed , or the initial accumulation value.

For example, we can use it as follows:

import { of } from "rxjs";  
import { scan } from "rxjs/operators";
const of$ = of(1, 2, 3);  
const seed = 0;  
const count = of$.pipe(scan((acc, val) => acc + val, seed));  
count.subscribe(x => console.log(x));

The code above starts with an of Observable and a seed initial value of 0. Then we pipe the values from the of$ Observable to the scan operator, which has the accumulation function, which adds the accumulated value to the newly emitted value. We also set seed to 0.

Then we subscribe to the count Observable that results from this.
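To see what "each intermediate result" means, here's a hypothetical plain-array analogy of scan (the scanArray helper is not part of Rxjs): unlike reduce , which returns only the final accumulation, it emits every intermediate one.

```javascript
// A plain-array analogy for scan: collect every intermediate
// accumulator value, not just the final one.
function scanArray(values, accumulator, seed) {
  const results = [];
  let acc = seed;
  for (const value of values) {
    acc = accumulator(acc, value);
    results.push(acc); // emit each intermediate accumulation
  }
  return results;
}

console.log(scanArray([1, 2, 3], (acc, val) => acc + val, 0)); // [ 1, 3, 6 ]
```

This mirrors the subscription above, which logs 1, 3, and 6 as the running totals.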

switchMap

The switchMap operator projects each source value to an Observable, which is then merged into one output Observable. Only the values from the most recently projected Observable are emitted.

It takes up to 2 arguments. The first is a project function which takes the emitted value of the source Observable as a parameter and then returns a new Observable from it.

The second is an optional resultSelector argument. It’s a function that lets us select the result from the emitted values of the new Observable.

We can use it as follows:

import { of } from "rxjs";  
import { switchMap } from "rxjs/operators";
const switched = of(1, 2, 3).pipe(switchMap(x => of(x, x * 2, x * 3)));  
switched.subscribe(x => console.log(x));

The code above takes the values emitted from the of(1, 2, 3) Observable and pipes them into the switchMap operator, whose function maps each value x emitted from of(1, 2, 3) to of(x, x * 2, x * 3) .

This means that for 1 we get 1, 1*2 and 1*3, which are 1, 2 and 3. Then the same is done with 2, so we get 2, 2*2 and 2*3, which are 2, 4 and 6. Finally, we get 3, 3*2 and 3*3, which are 3, 6 and 9.

So we get the following output:

1  
2  
3  
2  
4  
6  
3  
6  
9
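Because the inner Observables above are synchronous, the output happens to match a plain flatMap over the source values; with asynchronous inner Observables, switchMap would instead cancel earlier inner subscriptions, which flatMap can't model:

```javascript
// Synchronous-case analogy for the switchMap example above:
// each source value expands into [x, x * 2, x * 3].
const result = [1, 2, 3].flatMap(x => [x, x * 2, x * 3]);
console.log(result); // [ 1, 2, 3, 2, 4, 6, 3, 6, 9 ]
```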

switchMapTo

The switchMapTo operator projects each source value to the same Observable, which is then flattened with switchMap into the output Observable.

It takes up to 2 arguments. The first is an Observable which we replace each emitted value of the source Observable with.

The second is an optional argument, which is the resultSelector function which lets us select the value from the new Observable.

For example, we can use it as follows:

import { of, interval } from "rxjs";  
import { switchMapTo, take } from "rxjs/operators";
const switched = of(1, 2, 3).pipe(switchMapTo(interval(1000).pipe(take(3))));  
switched.subscribe(x => console.log(x));

The code above maps each emitted value of the of(1, 2, 3) Observable to interval(1000).pipe(take(3)) , which emits the values 0 to 2 spaced 1 second apart.

The result is that we get 0, 1, and 2 as the output.

window

The window operator branches out the source Observable values as a nested Observable whenever the windowBoundaries Observable emits.

It takes one argument, which is the windowBoundaries Observable that completes the previous window and starts a new one.

Like buffer , it collects the emitted values and then emits them all at once when some condition is met, but it emits an Observable instead of an array.

For instance, we can use it as follows:

import { interval, timer } from "rxjs";  
import { window, mergeAll, map, take } from "rxjs/operators";
const timer$ = timer(3000, 1000);  
const sec = interval(6000);  
const result = timer$.pipe(  
  window(sec),  
  map(win => win.pipe(take(2))),  
  mergeAll()  
);  
result.subscribe(x => console.log(x));

The code above has the timer(3000, 1000) Observable which emits values every second starting 3 seconds after it’s been initialized.

We also have a sec Observable that emits numbers every 6 seconds. We use that for windowing with the window operator. This means that the values emitted by the timer$ Observable will be piped to the window operator.

This will then be piped to the map operator, which takes the first 2 values emitted in each window. Then all the results are merged together with mergeAll .

In the end, we get 2 numbers from timer$ emitted every 6 seconds.

The scan operator applies an accumulator function over the source Observable, combining the emitted values. It emits each intermediate result, and accepts an optional seed value.

switchMap projects each source value to an Observable which is then merged into one output Observable. Only the values from the most recently projected Observable are emitted.

Like switchMap , switchMapTo projects each source value to an Observable. But unlike switchMap , it projects every value to the same Observable, which is then flattened into the output Observable.

The window operator branches out the source Observable values as a nested Observable whenever the windowBoundaries Observable, which is used for closing and opening windows, emits.