Categories
Express JavaScript

Guide to the Express Request Object — Body, Cookies and More

The request object lets us get information about requests made by the client in middleware and route handlers.

In this article, we’ll look at the properties of Express’s request object in detail, including getting the request body and cookies.

Request Object

The req parameter in our route handlers is the request object.

It has some properties that we can use to get data about the request that’s made from the client-side. The more important ones are listed below.

req.app

The req.app property holds a reference to the instance of an Express app that’s using the middleware.

For example, we can use it as follows:

const express = require('express');  
const bodyParser = require('body-parser');  
const app = express();
app.use(bodyParser.json());  
app.use(bodyParser.urlencoded({ extended: true }));
app.set('foo', 'bar');
app.get('/', (req, res) => {  
  res.send(req.app.get('foo'));  
})
app.listen(3000);

We ran app.set('foo', 'bar'); to set the foo setting to the value 'bar', and then we can read the value back with req.app's get method.

req.baseUrl

The req.baseUrl property holds the base URL of the router instance that’s mounted.

For example, if we have:

const express = require('express');
const app = express();  
const greet = express.Router();  
greet.get('/', (req, res) => {  
  console.log(req.baseUrl);  
  res.send('Hello World');  
})
app.use('/greet', greet);
app.listen(3000, () => console.log('server started'));

Then we get /greet from the console.log.

req.body

req.body has the request body. We can parse JSON bodies with the express.json() middleware and URL-encoded bodies with express.urlencoded().

For example, if we have:

const express = require('express')  
const app = express()
app.use(express.json())  
app.use(express.urlencoded({ extended: true }))
app.post('/', (req, res) => {  
  res.json(req.body)  
})
app.listen(3000, () => console.log('server started'));

Then when we make a POST request with a JSON body, we get back the same body that we sent in the request.

req.cookies

We can get cookies that are sent by the request with the req.cookies property.

For example, we can use it as follows:

const express = require('express');  
const cookieParser = require('cookie-parser');  
const app = express();  
app.use(cookieParser());
app.get('/', (req, res) => {  
  res.send(req.cookies.name);  
})
app.listen(3000);

Then when we send the Cookie request header with name as a key, like name=foo, we get foo displayed.

req.fresh

The fresh property indicates whether the request is ‘fresh’. A request is ‘fresh’ if the cache-control request header doesn’t have a no-cache directive and any of the following are true:

  • the if-modified-since request header is specified and the last-modified response header is equal to or earlier than it.
  • the if-none-match request header is * .
  • the if-none-match request header, after being parsed into its directives, matches the etag response header.

We can use it as follows:

const express = require('express');
const app = express();
app.get('/', (req, res) => {  
  res.send(req.fresh);  
})
app.listen(3000);

Then we should get false if those conditions aren’t met.

req.hostname

We can get the hostname from the HTTP header with req.hostname .

When the trust proxy setting doesn’t evaluate to false , then Express will get the value from the X-Forwarded-Host header field. The header can be set by the client or by the proxy.

If there’s more than one X-Forwarded-Host header, then the first one will be used.

For example, if we have:

const express = require('express')  
const app = express()
app.use(express.json())  
app.use(express.urlencoded({ extended: true }))
app.get('/', (req, res) => {  
  res.json(req.hostname)  
})
app.listen(3000, () => console.log('server started'));

Then we get the domain name that the app is hosted on, taken from the Host header, or from the X-Forwarded-Host header when the trust proxy setting is enabled.

req.ip

We can get the IP address that the request is made from with this property.

For example, we can use it as follows:

const express = require('express');
const app = express();
app.get('/', (req, res) => {  
  res.send(req.ip);  
})
app.listen(3000);

Then we get something like ::ffff:172.18.0.1 displayed.

req.ips

When the trust proxy setting isn’t false , this property contains the array of IP addresses specified in the X-Forwarded-For request header.

Otherwise, it’s an empty array. The X-Forwarded-For can be set by the client or the proxy.

For example, we can use it as follows:

const express = require('express');  
const app = express();  
app.set('trust proxy', true);
app.get('/', (req, res) => {  
  res.send(req.ips);  
})
app.listen(3000);

Then we get the remote IPs from the client or the X-Forwarded-For header.

req.method

The method property has the request method of the request, like GET, POST, PUT or DELETE.
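Since req.method is just a string, we can branch on it anywhere. Here’s a minimal sketch (the methodLabel helper is made up for illustration, and a plain object stands in for the real request object so it runs without starting a server):

```javascript
// Hypothetical helper: branches on the request's HTTP method.
// A plain object with a `method` property stands in for the real req.
const methodLabel = req => {
  // req.method is one of 'GET', 'POST', 'PUT', 'DELETE', etc.
  switch (req.method) {
    case 'GET':
      return 'reading data';
    case 'POST':
      return 'creating data';
    case 'PUT':
      return 'replacing data';
    case 'DELETE':
      return 'removing data';
    default:
      return 'other';
  }
};

console.log(methodLabel({ method: 'GET' }));  // 'reading data'
console.log(methodLabel({ method: 'POST' })); // 'creating data'
```

In a real app the same switch would sit inside a middleware or route handler, reading req.method from the actual request.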

Conclusion

The request object has many properties for getting various pieces of information about the HTTP request received.

Third-party middleware like cookie-parser adds new properties to the request object to get things like cookies data and more.

Categories
Flow JavaScript

JavaScript Type Checking with Flow — Classes

Flow is a type checker made by Facebook for checking JavaScript data types. It has many built-in data types we can use to annotate the types of variables and function parameters.

In this article, we’ll look at how to add Flow types to classes.

Class Definition

In Flow, the syntax for defining classes is the same as in normal JavaScript, but we add in types.

For example, we can write:

class Foo {    
  name: string;  
  constructor(name: string){  
    this.name= name;  
  }    

  foo(value: string): number {    
    return +value;  
  }  
}

to define the Foo class. The only difference between a regular JavaScript class and the class with the Flow syntax is the addition of type annotations in the fields and parameters and the return value types for methods.

In the code above, the type annotation for fields is:

name: string;

The value parameter also has a type annotation added to it:

value: string

and we have the number return type annotation after the signature of the foo method.

We can also define class type definition without the content of the class as follows:

class Foo {    
  name: string;  
  foo: (string) => number;  
  static staticField: number;  
}

In the code above, we have a string field name , a method foo that takes a string and returns a number and a static staticField that is a number.

Then we can set the values for each outside the class definition as follows:

class Foo {    
  name: string;  
  foo: (string) => number;  
  static staticField: number;  
  static staticFoo: (string) => number;  
}

const reusableFn = function(value: string): number {  
  return +value;  
}

Foo.prototype.foo = reusableFn;
Foo.staticFoo = reusableFn;
Foo.staticField = 1;

An instance method in JavaScript lives on the class’s prototype, which is why we assign to Foo.prototype.foo above. A static method belongs to the class itself and is shared by all instances, like in other languages. (An instance field like name is set on each instance, for example in the constructor, rather than on the class.)
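Stripping the Flow annotations, the prototype/static distinction can be sketched in plain JavaScript (this bare Foo class is just an illustration):

```javascript
// Instance methods live on the prototype; statics live on the class itself.
class Foo {}

Foo.prototype.foo = function (value) {
  // shared by every instance through the prototype chain
  return +value;
};

Foo.staticFoo = function (value) {
  // attached to the class itself, called without an instance
  return +value;
};

const a = new Foo();
console.log(a.foo('42'));        // 42, found via Foo.prototype
console.log(Foo.staticFoo('7')); // 7, called on the class directly
console.log(Object.getPrototypeOf(a) === Foo.prototype); // true
```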

Generics

We can pass in generic type parameters to classes.

For example, we can write:

class Foo<A, B> {  
  name: A;  
  constructor(name: A) {  
    this.name = name;  
  }    

  foo(val: B): B {  
    return val;  
  }  
}

Then to use the Foo class, we can write:

let foo: Foo<string, number> = new Foo('Joe');

As we can see, defining classes in Flow isn’t that much different from JavaScript. The only difference is that we can add type annotations to fields, parameters and the return types of methods.

Also, we can make the types generic by passing in generic type markers to fields, parameters and return types.

With Flow, we can also have class definitions that only have the property and method identifiers and their corresponding types and signatures respectively.

Once the types are set, Flow will type-check the values we assign to these properties outside the class. Instance methods live on the class’s prototype, while static methods belong to the class itself and are shared by all instances of the class.

Categories
JavaScript TypeScript

TypeScript Advanced Types — Conditional Types

TypeScript has many advanced type capabilities, which make writing dynamically typed code easy. It also facilitates adopting existing JavaScript code, since it lets us keep JavaScript’s dynamic capabilities while using TypeScript’s type-checking. There are multiple kinds of advanced types in TypeScript, like intersection types, union types, type guards, nullable types, type aliases, and more.

In this article, we’ll look at conditional types.

Conditional Types

Since TypeScript 2.8, we can define types with conditional tests. This lets us add types to data that can have different types according to the condition we set. The general expression for defining a conditional type in TypeScript is the following:

T extends U ? X : Y

T extends U describes the relationship between the generic types T and U . If T extends U is true then the X type is expected. Otherwise, the Y type is expected. For example, we can use it as in the following code:

interface Animal {    
  kind: string;  
}

interface Cat extends Animal {  
  name: string;  
}

interface Dog {  
  name: string;  
}

type CatAnimal = Cat extends Animal ? Cat : Dog;  
let catAnimal: CatAnimal = <Cat>{  
  name: 'Joe',  
  kind: 'cat'  
}

In the code above, we created the CatAnimal type alias which is set to the Cat type if Cat extends Animal . Otherwise, it’s set to Dog . Since Cat does extend Animal , the CatAnimal type alias is set to the Cat type.

This means that in the example above if we change <Cat> to <Dog> like we do in the following code:

interface Animal {    
  kind: string;  
}

interface Cat extends Animal {  
  name: string;  
}

interface Dog {  
  name: string;  
}

type CatAnimal = Cat extends Animal ? Cat : Dog;  
let catAnimal: CatAnimal = <Dog>{  
  name: 'Joe',  
  kind: 'cat'  
}

We would get the following error message:

Property 'kind' is missing in type 'Dog' but required in type 'Cat'.(2741)

This ensures that we have the right type for catAnimal according to the condition expressed in the type. If we want Dog to be the type for catAnimal , then we can write the following instead:

interface Animal {    
  kind: string;  
}

interface Cat  {  
  name: string;  
}

interface Dog extends Animal {  
  name: string;  
}

type CatAnimal = Cat extends Animal ? Cat : Dog;  
let catAnimal: CatAnimal = <Dog>{  
  name: 'Joe'  
}

We can also have nested conditions to determine the actual type from multiple conditions. For example, we can write:

interface Animal {    
  kind: string;  
}

interface Bird  {  
  name: string;  
}

interface Cat  {  
  name: string;  
}

interface Dog extends Animal {  
  name: string;  
}

type AnimalTypeName<T> =
  T extends Cat ? Cat :
  T extends Dog ? Dog :
  T extends Bird ? Bird :
  Animal;

type t0 = AnimalTypeName<Cat>;
type t1 = AnimalTypeName<Dog>;
type t2 = AnimalTypeName<Animal>;
type t3 = AnimalTypeName<Bird>;

Then we get the following types for the type aliases t0 , t1 , t2 , and t3 :

type t0 = Cat
type t1 = Cat
type t2 = Animal
type t3 = Cat

Since TypeScript’s type system is structural, Dog and Bird both have a name property and so match the first condition, T extends Cat , which is why t1 and t3 resolve to Cat rather than Dog or Bird . Only Animal , which lacks a name property, falls through to the final branch.

The exact type doesn’t have to be chosen immediately; we can also have something like:

interface Foo {}

interface Bar extends Foo {  
    
}

function bar(x) {  
  return x;  
}

function foo<T>(x: T) {  
  let y: T extends Foo ? string : number = bar(x);  
  let z: string | number = y;  
}

foo<Bar>(1);  
foo<Bar>('1');  
foo<Bar>(false);

As we can see, we can pass anything into foo even though we have the conditional type set. This is because the actual branch of the conditional type hasn’t been chosen yet, so TypeScript doesn’t make any assumptions about what we can assign to the variables in the foo function.

Distributive Conditional Types

Conditional types distribute over union types when the checked type is a naked type parameter. If we apply such a conditional type to a union, as in the following code:

interface A {}
interface B {}
interface C {}
interface D {}
interface X {}
interface Y {}

type TypeName<T> = T extends D ? X : Y;
type U = TypeName<A | B | C>;

Then U is equivalent to:

(A extends D ? X : Y) | (B extends D ? X : Y) | (C extends D ? X : Y)

For example, we can use it to filter out types with various conditions. For example, we can write:

type Diff<T, U> = T extends U ? never : T;

to remove types from T that are assignable to U . If T extends U, then Diff<T, U> is never , which disappears from a union; otherwise it takes on the type T. Likewise, we can write:

type Filter<T, U> = T extends U ? T : never;

to remove types from T that aren’t assignable to U . In this case, if T extends U, then the Filter type is the same as the T type, otherwise, it takes on the never type. For example, if we have:

type Diff<T, U> = T extends U ? never : T;  
type TypeName = Diff<string| number | boolean, boolean>;

Then TypeName has the type string | number . This is because Diff<string| number | boolean, boolean> is the same as:

(string extends boolean ? never : string) | (number extends boolean ? never: number) | (boolean extends boolean ? never: boolean)

On the other hand, if we write:

type Filter<T, U> = T extends U ? T : never;  
type TypeName = Filter<string| number | boolean, boolean>;

Then TypeName has the boolean type. This is because Filter<string | number | boolean, boolean> is the same as:

(string extends boolean ? string: never) | (number extends boolean ? number: never) | (boolean extends boolean ? boolean: never)

Predefined Conditional Types

TypeScript 2.8 ships with the following predefined conditional types:

  • Exclude<T, U> – excludes from T those types that are assignable to U.
  • Extract<T, U> – extract from T those types that are assignable to U.
  • NonNullable<T> – exclude null and undefined from T.
  • ReturnType<T> – get the return type of a function type.
  • InstanceType<T> – get the instance type of a constructor function type.

Since TypeScript 2.8, we can define types with conditional tests. The general expression for defining a conditional type in TypeScript is T extends U ? X : Y . Conditional types distribute over unions when the checked type is a naked type parameter, so TypeName<A | B | C> , where type TypeName<T> = T extends D ? X : Y , is the same as (A extends D ? X : Y) | (B extends D ? X : Y) | (C extends D ? X : Y) .

Categories
JavaScript Nodejs

Node.js FS Module — Read Streams

Manipulating files and directories is a basic operation for any program. Since Node.js is a server-side platform and can interact directly with the computer that it’s running on, being able to manipulate files is a basic feature.

Fortunately, Node.js has a fs module built into its library. It has many functions that can help with manipulating files and directories, including basic operations like opening, reading and writing files. It can do all this both synchronously and asynchronously, and it also has an asynchronous API that supports promises.

Also, it can show statistics for a file. Almost all the file operations we can think of can be done with the built-in fs module. In this article, we will create read streams to read a file’s data sequentially and listen to events from a read stream. Since Node.js ReadStreams are descendants of the Readable object, we will also listen to the events it inherits from Readable.

Streams are collections of data that may not be available all at once and don’t have to fit in memory. This makes streams handy for processing large amounts of data.

It’s handy for files because files can be big and streams can let us get a small amount of data at one time. In the fs module, there are 2 kinds of streams. There’s the ReadStream and the WriteStream.

ReadStream

ReadStreams are for reading in data from a file and then outputting them a small part at a time. A ReadStream can read a small part of a file or it can read in the whole file.

To create a ReadStream, we can use the fs.createReadStream function. The function takes in 2 arguments. The first argument is the path of the file.

The path can be in the form of a string, a Buffer object, or an URL object.

The second argument is an object that can have a variety of options as properties. The flags option is the file system flag that sets the mode in which the file is opened. The default is r. The list of flags is below:

  • 'a' – Opens the file for appending, which means adding data to the end of the existing file. The file is created if it does not exist.
  • 'ax' – Like 'a' but an exception is thrown if the path exists.
  • 'a+' – Opens the file for reading and appending. The file is created if it doesn’t exist.
  • 'ax+' – Like 'a+' but an exception is thrown if the path exists.
  • 'as' – Opens the file for appending in synchronous mode. The file is created if it does not exist.
  • 'as+' – Opens the file for reading and appending in synchronous mode. The file is created if it does not exist.
  • 'r' – Opens the file for reading. An exception is thrown if the file doesn’t exist.
  • 'r+' – Opens the file for reading and writing. An exception is thrown if the file doesn’t exist.
  • 'rs+' – Opens the file for reading and writing in synchronous mode.
  • 'w' – Opens the file for writing. The file is created (if it does not exist) or truncated (if it exists).
  • 'wx' – Like 'w' but fails if the path exists.
  • 'w+' – Opens the file for reading and writing. The file is created (if it does not exist) or truncated (if it exists).
  • 'wx+' – Like 'w+' but an exception is thrown if the path exists.

The encoding option is a string that sets the character encoding. The default value is null .

The fd option is the integer file descriptor which can be obtained with the open function and its variants. If the fd option is set, then the path argument will be ignored. The default value is null .

The mode option sets the file permission and sticky bits of the file, an octal number like Unix or Linux file permissions. It’s only applied if the file is created. The default value is 0o666. The autoClose option specifies whether the file descriptor will be closed automatically. The default value is true .

If it’s false , then the file descriptor won’t be closed, even if there’s an error. It’s completely up to us to close it if autoClose is set to false , to make sure there’s no file descriptor leak. Otherwise, the file descriptor is closed automatically when there’s an error or end event emitted.

The emitClose option will emit the close event when the read stream ends. The default value is false .

The start and end options specify the first and last byte positions of the file to read. Everything in between is read, including the start and end bytes themselves. start and end are numbers that are the starting and ending byte offsets in the file.

The highWaterMark option sets the maximum number of bytes the stream buffers internally at a time. If we keep reading and buffering data without consuming it, memory usage will be high, garbage collection performance will be poor, and the program can even crash with the Allocation failed - JavaScript heap out of memory error.

The createReadStream function returns a ReadStream object where you can attach event handlers to it.

To create a ReadStream, we can use the createReadStream like in the following code:

const fs = require("fs");  
const file = "./files/file.txt";  
const stream = fs.createReadStream(file, {  
  flags: "r",  
  encoding: "utf8",  
  mode: 0o666,  
  autoClose: true,  
  emitClose: true,  
  start: 0  
});

stream.on("open", () => {  
  console.log("Stream opened");  
});

stream.on("ready", () => {  
  console.log("Stream ready");  
});

stream.on("data", data => {  
  console.log(data);  
});

stream.on("readable", () => {  
  let chunk;  
  while ((chunk = stream.read())) {  
    console.log(chunk);  
  }  
});

stream.on("close", () => {  
  console.log("Stream closed");  
});

When we run the code above, we should get something like the following output, assuming that you have ‘datadatadatadata’ written in your files/file.txt file:

Stream opened  
Stream ready  
datadatadatadata  
datadatadatadata  
Stream closed  

ReadStream Events

With a ReadStream, we can listen to the following events. The close event is emitted when the stream and its underlying file descriptor have been closed, after the file is read.

The open event is emitted when the stream is opened. The file descriptor number fd will be passed with the event when it’s emitted. The ready event is emitted when the ReadStream is ready to be used. It’s fired immediately after the open event is fired.

The ReadStream extends the stream Readable object, which emits events of its own. The data event is emitted whenever the stream data is sent to the consumer. It’s emitted when the readable.pipe() function or readable.resume() are called, or by attaching a listener callback to the data event.

The data event will also be emitted when the readable.read() function is called and a chunk of data is available to be returned. The end event is emitted when there’s no more data to be consumed from the stream. It won’t be emitted until the data is completely consumed.

This can be done by switching the stream to flowing mode, or by calling stream.read() repeatedly until all the data is consumed.

The error event is emitted whenever an error occurs during the streaming or consumption of the stream. It can be because the stream can’t generate data due to internal failure or a stream attempts to push invalid chunks of data. The pause event is emitted whenever ReadStream.pause() is called and readableFlowing isn’t false .

readableFlowing can have one of 3 states. One is null. When it’s null , this means that no mechanism for consuming the stream’s data is provided and therefore the stream won’t generate data.

When readableFlowing is null, attaching a listener for the 'data' event, calling the readable.pipe() method, or calling the readable.resume() method will switch readable.readableFlowing to true, causing the ReadStream to start emitting events as data is generated.

Calling readable.pause(), calling readable.unpipe(), or receiving backpressure (the situation where data fills the buffer) will cause readable.readableFlowing to be set to false, temporarily halting the flow of events but not halting the generation of data.

Attaching a listener for the 'data' event will not switch readable.readableFlowing to true when readable.readableFlowing is set as false .

The readable event is emitted when there’s data available to be read from the stream or the end of the stream has been reached. Attaching an event listener for the readable event may cause some amount of data to be read into an internal buffer.

It will also be emitted when the end of the stream is reached but before the end event is emitted. The resume event is emitted when ReadStream.resume() is called and the readableFlowing isn’t true .

A ReadStream object also has the following properties. The bytesRead property lets us get the number of bytes read so far.

The path property is a string or a buffer that gets us the reference to the file. It’s the same as the first argument of createReadStream() .

The data type will also be the same as what we pass in as the first argument. The pending property is a boolean that’s true if the underlying file hasn’t been opened yet, that is, before the ready event is emitted.

By using the fs.createReadStream function, we created read streams to read a file’s data sequentially and listened to events from the read stream. Since Node.js ReadStreams are descendants of the Readable object, we can also listen to the events it inherits.

We have lots of control over how the read stream is created. We can set the path or file descriptor of the file. Also, we can set the mode of the file to be read and the permission and sticky bit of the file being read.

Also, we can choose whether streams are closed automatically and whether the close event is emitted. We can also set the highWaterMark option, which sets the maximum buffer size for storing the read data.

Also, we can call pipe to move data to a writable stream, and pause the streaming of data with the pause function, and resume streaming with the resume function.

Categories
JavaScript

More Lodash Features that are Available in Plain JavaScript

In recent years, new features in JavaScript have been rolling out at a rapid pace. Gaps that used to be filled by other libraries have become built-in features of plain JavaScript.

In this article, we’ll look at the methods in Lodash that are now available in plain JavaScript, like function currying, partially applied functions and more.

Some features are better with Lodash but for others, plain JavaScript will suffice.

Curry

The curry method in Lodash returns a function that lets us supply the arguments of the original function one or more at a time. We can use it as follows:

const subtract = (a, b) => a - b;  
const currySubtract = _.curry(subtract);  
const subtract1 = currySubtract(1);  
const diff = subtract1(5);  
console.log(diff);

In the code above, we defined the subtract function which returns the first parameter subtracted by the second.

Then we called the curry method with the subtract function passed in, to create a new function that takes one argument and returns the subtract function with its first argument set to that parameter. That’s the currySubtract function.

Then we call the currySubtract to set the argument of the subtract function and return the function with the first argument set. Finally, we call the subtract1 function with the second argument of subtract to get the final result.

We can do the same thing with plain JavaScript by writing:

const currySubtract = a => b => a - b;  
const subtract1 = currySubtract(1);  
const diff = subtract1(5);  
console.log(diff);

It does exactly the same thing, but without calling the curry method.

Partial

Lodash also has a method for partially applying a function, which is different from curry since some of the arguments of the function are passed into the function directly and the new function is returned.

For example, we can write the following:

const add = (a, b) => a + b;  
const add1 = _.partial(add, 1);  
const sum = add1(2);  
console.log(sum);

The partial method takes the add function and its first argument, and returns a new function with that argument already applied. This gets us the add1 function.

Then we can call the add1 function with the second argument, which is 2 in the code above, and we get 3 for the sum .

In plain JavaScript, we can write:

const add = (a, b) => a + b;  
const add1 = b => add(1, b);  
const sum = add1(2);  
console.log(sum);

Again, we can skip the Lodash partial method call like we did with the curry method call.

Eq

Lodash has the eq method to compare values. For example, we can write:

const equal = _.eq(1, 1);

It works like Object.is for most values, so we can often just use that. The one difference is that _.eq uses SameValueZero equality, which treats +0 and -0 as equal, while Object.is doesn’t.
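A quick check of the edge cases with plain JavaScript (the _.eq results are noted in comments, since they follow from its SameValueZero semantics rather than being run here):

```javascript
// Object.is covers the cases where === falls short, which is why it can
// stand in for _.eq for most values. Signed zero is the one place the
// two differ: _.eq(0, -0) is true, Object.is(0, -0) is false.
console.log(Object.is(1, 1));     // true
console.log(Object.is(NaN, NaN)); // true, unlike NaN === NaN
console.log(Object.is(0, -0));    // false, unlike _.eq(0, -0)
console.log(0 === -0);            // true
```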

Add

It also has the add method, which we can use as we do in the following code:

const sum = _.add(1, 1);

We see the value is 2. It does the same thing as the + operator, so we can use that instead.

Nesting Operators

The good thing is that we can pass these methods straight into other Lodash methods like map and reduce as follows:

const mult = _.map([1, 2, 3], n => _.multiply(n, 2));

We get [2, 4, 6] from the code above, and we get 6 from:

const sum = _.reduce([1, 2, 3], _.add);

At

The at method lets us access the value of the properties of an object or an entry of an array by its index.

For example, given the following object:

const obj = { a: [{ b: { c: 2 } }, 1] };

We can get the value of the c property with at by writing:

const c = _.at(obj, ["a[0].b.c"]);

Then we get 2 for c .

Also, we can access more than one property of an object by passing more paths into the array above:

const vals = _.at(obj, ["a[0].b.c", "a[0].b"]);

Then we get:

2  
{c: 2}

In JavaScript, we can access the paths directly:

const vals = [obj.a[0].b.c, obj.a[0].b];

However, it’s good for accessing paths that may not exist. For example, given the same object, if we write the following:

const vals = _.at(obj, ["a[0].b.c", "d.e"]);

Then we get undefined for the second entry instead of crashing the app.
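Since ES2020, optional chaining offers a plain-JavaScript alternative for paths that may not exist; this is an addition to the article’s comparison, assuming a modern runtime:

```javascript
// Optional chaining short-circuits to undefined instead of throwing when
// a link in the path is missing, much like _.at does with string paths.
const obj = { a: [{ b: { c: 2 } }, 1] };

const c = obj?.a?.[0]?.b?.c; // 2
const missing = obj?.d?.e;   // undefined instead of a crash

console.log(c);       // 2
console.log(missing); // undefined
```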

As we can see, Lodash still has some advantages, like safe object path access. However, operators like add, multiply, curry and partial are easy to replicate in plain JavaScript ourselves, so we don’t need Lodash for those.