Categories
JavaScript Rxjs

More Rxjs Operators

RxJS is a library for reactive programming. Creation operators are useful for generating data from various data sources to be subscribed to by Observers.

In this article, we’ll look at some join creation operators, which combine data from multiple Observables into one Observable. We’ll look at the merge , race and zip join creation operators, and also the buffer and bufferCount transformation operators.

Join Creation Operators

These operators combine the values emitted from multiple Observables into one Observable.

merge

The merge operator takes multiple Observables and concurrently emits all values from every given input Observable.

It takes one array of Observables or a comma-separated list of Observables as arguments.

For example, we can use it as follows:

import { merge, of } from "rxjs";
const observable1 = of(1, 2, 3);  
const observable2 = of(4, 5, 6);  
const combined = merge(observable1, observable2);  
combined.subscribe(x => console.log(x));

Another example would be combining multiple timed Observables as follows:

import { merge, interval } from "rxjs";
const observable1 = interval(1000);  
const observable2 = interval(2000);  
const combined = merge(observable1, observable2);  
combined.subscribe(x => console.log(x));

We’ll see that observable1 emits a value first, then observable2 . After that, observable1 continues to emit values every second, and observable2 emits values every 2 seconds.

race

The race operator takes multiple Observables and returns an Observable that mirrors the first source Observable to emit a value.

It takes a comma-separated list of Observables as arguments.

For example, we can use it as follows:

import { race, of } from "rxjs";
const observable1 = of(1, 2, 3);  
const observable2 = of(4, 5, 6);  
const combined = race(observable1, observable2);  
combined.subscribe(x => console.log(x));

We have observable1 , which emits data before observable2 . We should get the output:

1  
2  
3

since observable1 emits values first.
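If the timing isn’t obvious, race behaves much like Promise.race does for promises. Here’s a rough analogy in plain JavaScript (the timings and values are made up for the demo, and unlike race , Promise.race delivers only a single winning value rather than mirroring the whole winning source):

```javascript
// Whichever source settles first "wins" and supplies the result.
const slow = new Promise(resolve => setTimeout(() => resolve("slow"), 50));
const fast = new Promise(resolve => setTimeout(() => resolve("fast"), 10));

Promise.race([slow, fast]).then(winner => {
  console.log(winner); // "fast"
});
```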

zip

The zip operator combines multiple Observables and returns an Observable whose values are arrays combining, in order, the values from each of its input Observables.

It takes a list of Observables as arguments. We can use it as follows:

import { zip, of } from "rxjs";
const observable1 = of(1, 2, 3);  
const observable2 = of(4, 5, 6);  
const combined = zip(observable1, observable2);  
combined.subscribe(x => console.log(x));

Then we get the following:

[1, 4]  
[2, 5]  
[3, 6]
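The pairing behavior is easy to see over plain arrays. Here’s a minimal sketch ( zipArrays is our own hypothetical helper, not part of RxJS), which also shows that the output stops at the length of the shortest input:

```javascript
// Pair up the nth value from each input, stopping at the shorter input.
const zipArrays = (a, b) =>
  a.slice(0, Math.min(a.length, b.length)).map((x, i) => [x, b[i]]);

console.log(zipArrays([1, 2, 3], [4, 5, 6])); // [[1, 4], [2, 5], [3, 6]]
console.log(zipArrays([1, 2, 3], [4]));       // [[1, 4]]
```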

We can also map them to objects as follows to make values from one Observable easier to distinguish from the other.

To do this, we can write the following:

import { zip, of } from "rxjs";  
import { map } from "rxjs/operators";
const age$ = of(1, 2, 3);  
const name$ = of("John", "Mary", "Jane");  
const combined = zip(age$, name$);  
combined  
  .pipe(map(([age, name]) => ({ age, name })))  
  .subscribe(x => console.log(x));

Transformation Operators

buffer

The buffer operator buffers the source Observable values until the closingNotifier emits.

It takes one argument, which is the closingNotifier . It’s an Observable that signals the buffer to be emitted on the output Observable.

For example, we can use it as follows:

import { fromEvent, timer } from "rxjs";  
import { buffer } from "rxjs/operators";
const observable = timer(1000, 1000);  
const clicks = fromEvent(document, "click");  
const buffered = observable.pipe(buffer(clicks));  
buffered.subscribe(x => console.log(x));

In the code above, we have an Observable created by the timer operator, which emits numbers every second after 1 second of waiting. Then we pipe it through the buffer operator, with the clicks Observable, which emits as the document is clicked, acting as the closing notifier.

This means that each time we click the page, the buffer operator emits an array of the values that were buffered since the previous click. Depending on the timing, we’ll get anything from an empty array to an array of several values emitted between clicks.
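Conceptually, buffer just collects values until the notifier fires. Here’s a minimal plain-JavaScript sketch of that idea ( makeBuffer is a hypothetical helper, not the RxJS implementation):

```javascript
// Collect values until notify() is called, then flush them as one batch.
function makeBuffer(onFlush) {
  const buffered = [];
  return {
    next(value) { buffered.push(value); },     // the source emits a value
    notify() { onFlush(buffered.splice(0)); }, // the closingNotifier emits
  };
}

const batches = [];
const buf = makeBuffer(batch => batches.push(batch));
buf.next(0);
buf.next(1);
buf.notify(); // flushes [0, 1]
buf.next(2);
buf.notify(); // flushes [2]
buf.notify(); // flushes [] — like clicking twice in quick succession
console.log(batches); // [[0, 1], [2], []]
```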

bufferCount

bufferCount is slightly different from buffer in that it buffers the data until the size hits the maximum bufferSize .

It takes 2 arguments: bufferSize , which is the maximum size of the buffer, and startBufferEvery , an optional parameter indicating the interval at which to start a new buffer.

For example, we can use it as follows:

import { fromEvent } from "rxjs";  
import { bufferCount } from "rxjs/operators";  
const clicks = fromEvent(document, "click");  
const buffered = clicks.pipe(bufferCount(10));  
buffered.subscribe(x => console.log(x));

The code above will emit an array of the buffered MouseEvent objects once we’ve clicked 10 times, since that’s when 10 MouseEvent objects have been emitted by the source Observable.
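The grouping behavior is the same as chunking a sequence into fixed-size batches. Here’s a minimal sketch over a plain array ( chunk is our own hypothetical helper, not part of RxJS):

```javascript
// Group values into batches of at most `size` items.
const chunk = (values, size) => {
  const batches = [];
  for (let i = 0; i < values.length; i += size) {
    batches.push(values.slice(i, i + size));
  }
  return batches;
};

console.log(chunk([1, 2, 3, 4, 5], 2)); // [[1, 2], [3, 4], [5]]
```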

As we can see, the join creation operators let us combine Observables’ emitted data in many ways. We can pick the first Observable that emits, combine emitted values pairwise in order, or merge all the emitted data concurrently into one stream.

Also, we can buffer an Observable’s emitted data and emit it either when a given amount has been buffered or when a triggering Observable emits.

Categories
JavaScript Nodejs

Node.js FS Module — Renaming Items and Removing Directories

Manipulating files and directories is a basic operation for any program. Since Node.js is a server-side platform that can interact directly with the computer it’s running on, being able to manipulate files is a basic feature. Fortunately, Node.js has an fs module built into its library. It has many functions that can help with manipulating files and folders.

Supported file and directory operations include basic ones like opening files and manipulating files in directories, and they can be done both synchronously and asynchronously. There’s also an asynchronous API whose functions support promises, and the module can show statistics for a file. Almost all the file operations we can think of can be done with the built-in fs module.

In this article, we’ll rename items stored on disk with the rename family of functions and remove directories with the rmdir family of functions.

Renaming Items with fs.rename and fs.renameSync

To rename items stored on disk in a Node.js program, we can call the rename function asynchronously. It takes 3 arguments. The first argument is the old path of the file, which can be a string, a Buffer object, or a URL object.

The second argument is the new path of the file, which can also be a string, a Buffer object, or a URL object.

The last argument is a callback function that’s called when the rename operation ends. The callback function takes an err parameter, which holds the error data if the rename operation ends with an error; otherwise, err is null .

The original file must exist before renaming it. If the path of the item you want to rename to already exists, then that item will be overwritten. If the destination path is a directory, then an error will be raised.

For example, we can use it like in the following code:

const fs = require("fs");  
const sourceFile = "./files/originalFile.txt";  
const destFile = "./files/renamedFile.txt";

fs.rename(sourceFile, destFile, err => {  
  if (err) throw err;  
  console.log("Rename complete!");  
});

We can do the same for directories:

const fs = require("fs");  
const oldDirectory = "./files/oldFolder";  
const newDirectory = "./files/newFolder";

fs.rename(oldDirectory, newDirectory, err => {  
  if (err) throw err;  
  console.log("Directory rename complete!");  
});

The synchronous version of the rename function is the renameSync function. It takes the same arguments as rename but without the callback. The first argument is the old path of the file, which can be a string, a Buffer object, or a URL object. The second argument is the new path of the file, which can also be a string, a Buffer object, or a URL object. It returns undefined .

For example, we can rename a file with the renameSync function like in the following code:

const fs = require("fs");  
const sourceFile = "./files/originalFile.txt";  
const destFile = "./files/renamedFile.txt";

try {  
  fs.renameSync(sourceFile, destFile);  
  console.log("Rename complete!");  
} catch (error) {  
  console.log(error);  
}

There’s also a promise version of the rename function, which also does the rename operation asynchronously. It takes 2 arguments.

The first argument is the old path of the file, which can be a string, a Buffer object, or a URL object.

The second argument is the new path of the file, which can also be a string, a Buffer object, or a URL object. The promise version of the rename function returns a promise that resolves without an argument when the rename operation is successful. For example, we can use it like in the following code:

const fsPromises = require("fs").promises;  
const sourceFile = "./files/originalFile.txt";  
const destFile = "./files/renamedFile.txt";

(async () => {  
  try {  
    await fsPromises.rename(sourceFile, destFile);  
    console.log("Rename complete!");  
  } catch (error) {  
    console.log(error);  
  }  
})();

This is a better choice than renameSync for running sequential operations because asynchronous operations like promises won’t hold up the program’s execution while they’re running, which means that other parts of the program can run if the operation isn’t finished.

Removing Directories with fs.rmdir and fs.rmdirSync

To remove directories asynchronously we can use the rmdir function. It takes 3 arguments.

The first is the path of the directory, which can be a string, a Buffer object, or a URL object.

The second argument is an object that takes a few option properties. The emfileWait property is an integer that lets our program retry if an EMFILE error is encountered.

It’s the maximum number of milliseconds to keep retrying the directory deletion. The rmdir function will retry, backing off by 1ms more on each try, until the emfileWait value is reached.

The default value is 1000. The maxBusyTries property is an integer that sets the number of retries when an EBUSY , ENOTEMPTY or EPERM error is encountered. It will retry every 100 milliseconds up to the maxBusyTries value. The recursive property is a boolean property.

If it’s set to true , then it will recursively delete data inside the directory along with the directory itself. In recursive mode, errors aren’t reported if path doesn’t exist and operations are retried on failure. The default value is false .

Recursive mode is an experimental feature. The last argument is a callback function that takes an err parameter. It’s called when the removal operation ends. The err parameter is null if the directory removal operation succeeds.

Otherwise, it’s an object with the error information. Using the asynchronous rmdir function on a file results in an ENOENT error on Windows and an ENOTDIR error on POSIX operating systems.

For example, we can use it like in the following code:

const fs = require("fs");  
const dirToDelete = "./files/deleteFolder";

fs.rmdir(  
  dirToDelete,  
  {  
    emfileWait: 2000,  
    maxBusyTries: 5,  
    recursive: false  
  },  
  err => {  
    if (err) {  
      throw err;  
    }  
    console.log("Removal complete!");  
  }  
);

The directory with the given path should be gone when the code above is run, provided it exists and isn’t being used by other programs.

The synchronous version of the rmdir function is the rmdirSync function. It takes similar arguments to the rmdir function. The first argument is the path to the directory, which can be a string, a Buffer object, or a URL object.

The second argument is an object that takes one option property. The recursive property is a boolean property. If it’s set to true , then it will recursively delete data inside the directory along with the directory itself. In recursive mode, errors aren’t reported if path doesn’t exist and operations are retried on failure. The default value is false .

Recursive mode is an experimental feature. It returns undefined .

We can use the rmdirSync function like in the following code:

const fs = require("fs");  
const dirToDelete = "./files/deleteFolder";

fs.rmdirSync(dirToDelete, {  
  recursive: false  
});  
console.log("Removal complete!");

The directory with the given path should be gone when the code above is run, provided it exists and isn’t being used by other programs.

The promise version of the rmdir function does the same thing as the regular rmdir function. It takes 2 arguments. The first is the path of the directory, which can be a string, a Buffer object, or a URL object. The second argument is an object that takes a few option properties.

The emfileWait property is an integer that lets our program retry if an EMFILE error is encountered. It’s the maximum number of milliseconds to keep retrying the directory deletion. The rmdir function will retry, backing off by 1ms more on each try, until the emfileWait value is reached. The default value is 1000.

The maxBusyTries property is an integer that sets the number of retries when an EBUSY , ENOTEMPTY or EPERM error is encountered. It will retry every 100 milliseconds up to the maxBusyTries value.

The recursive property is a boolean property. If it’s set to true , then it will recursively delete data inside the directory along with the directory itself. In recursive mode, errors aren’t reported if path doesn’t exist and operations are retried on failure. The default value is false .

Recursive mode is an experimental feature. It returns a promise which resolves with no argument when the directory removal operation succeeds. Using the promise version of the rmdir function with files results in the promise being rejected with the ENOENT error on Windows and an ENOTDIR error on POSIX operating systems.

We can use it like in the following code:

const fsPromises = require("fs").promises;  
const dirToDelete = "./files/deleteFolder";

(async () => {  
  try {  
    await fsPromises.rmdir(dirToDelete, {  
      emfileWait: 2000,  
      maxBusyTries: 5,  
      recursive: false  
    });  
    console.log("Removal complete!");  
  } catch (error) {  
    console.error(error);  
  }  
})();

The directory with the given path should be gone when the code above is run if it exists and it’s not being used by other programs. We used the try...catch block to catch errors with the async and await syntax with the promise version of the rmdir .

This is a better choice than rmdirSync for running sequential operations because asynchronous operations like promises won’t hold up the program’s execution when it’s running, which means that other parts of the program can run if the operation isn’t finished.

We renamed items stored on disk with the rename family of functions and removed directories with the rmdir family of functions.

With the rename family of functions, we just pass in the original path and the path that we want to rename to and then anything that’s passed in will be renamed if it’s valid.

The rmdir family lets us remove directories by specifying the path. The asynchronous versions of rmdir , which include the callback and promise versions, let us specify how to retry when an error occurs. This is very handy for handling errors gracefully.

Categories
Flow JavaScript

JavaScript Type Checking with Flow — Classes

Flow is a type checker made by Facebook for checking JavaScript data types. It has many built-in data types we can use to annotate the types of variables and function parameters.

In this article, we’ll look at how to add Flow types to classes.

Class Definition

In Flow, the syntax for defining classes is the same as in normal JavaScript, but we add in types.

For example, we can write:

class Foo {    
  name: string;  
  constructor(name: string){  
    this.name = name;  
  }    

  foo(value: string): number {    
    return +value;  
  }  
}

to define the Foo class. The only difference between a regular JavaScript class and the class with the Flow syntax is the addition of type annotations in the fields and parameters and the return value types for methods.

In the code above, the type annotation for fields is:

name: string;

The value parameter also has a type annotation added to it:

value: string

and we have the number return type annotation after the signature of the foo method.

We can also write a class definition with only the types, without the implementations, as follows:

class Foo {    
  name: string;  
  foo: (string) => number;  
  static staticField: number;  
}

In the code above, we have a string field name , a method foo that takes a string and returns a number and a static staticField that is a number.

Then we can set the values for each outside the class definition as follows:

class Foo {    
  name: string;  
  foo: (string) => number;  
  static staticField: number;  
  static staticFoo: (string) => number;  
}

const reusableFn = function(value: string): number {  
  return +value;  
}

const foo = new Foo();  
foo.name = 'Joe';  
Foo.prototype.foo = reusableFn;  
Foo.staticFoo = reusableFn;  
Foo.staticField = 1;

Instance methods in JavaScript live on the class’s prototype. A static method is a method that’s shared between all instances, like in other languages.

Generics

We can pass in generic type parameters to classes.

For example, we can write:

class Foo<A, B> {  
  name: A;  
  constructor(name: A) {  
    this.name = name;  
  }    

  foo(val: B): B {  
    return val;  
  }  
}

Then to use the Foo class, we can write:

let foo: Foo<string, number> = new Foo('Joe');

As we can see, defining classes in Flow isn’t that much different from JavaScript. The only difference is that we can add type annotations to fields, parameters and the return types of methods.

Also, we can make the types generic by passing in generic type markers to fields, parameters and return types.

With Flow, we can also have class definitions that only have the property and method identifiers and their corresponding types and signatures respectively.

Once the types are set, Flow will check the types when we set the values of these properties outside the class. Class methods live on the class’s prototype. Static methods belong to the class itself and are shared by all instances of the class.

Categories
JavaScript TypeScript

TypeScript Advanced Types — Conditional Types

TypeScript has many advanced type capabilities, which make writing dynamically typed code easy. It also facilitates the adoption of existing JavaScript code, since it lets us keep the dynamic capabilities of JavaScript while using the type-checking capability of TypeScript. There are multiple kinds of advanced types in TypeScript, like intersection types, union types, type guards, nullable types, type aliases, and more.

In this article, we’ll look at conditional types.

Conditional Types

Since TypeScript 2.8, we can define types with conditional tests. This lets us add types to data that can have different types according to the condition we set. The general expression for defining a conditional type in TypeScript is the following:

T extends U ? X : Y

T extends U describes the relationship between the generic types T and U . If T extends U is true then the X type is expected. Otherwise, the Y type is expected. For example, we can use it as in the following code:

interface Animal {    
  kind: string;  
}

interface Cat extends Animal {  
  name: string;  
}

interface Dog {  
  name: string;  
}

type CatAnimal = Cat extends Animal ? Cat : Dog;  
let catAnimal: CatAnimal = <Cat>{  
  name: 'Joe',  
  kind: 'cat'  
}

In the code above, we created the CatAnimal type alias which is set to the Cat type if Cat extends Animal . Otherwise, it’s set to Dog . Since Cat does extend Animal , the CatAnimal type alias is set to the Cat type.

This means that in the example above if we change <Cat> to <Dog> like we do in the following code:

interface Animal {    
  kind: string;  
}

interface Cat extends Animal {  
  name: string;  
}

interface Dog {  
  name: string;  
}

type CatAnimal = Cat extends Animal ? Cat : Dog;  
let catAnimal: CatAnimal = <Dog>{  
  name: 'Joe',  
  kind: 'cat'  
}

We would get the following error message:

Property 'kind' is missing in type 'Dog' but required in type 'Cat'.(2741)

This ensures that we have the right type for catAnimal according to the condition expressed in the type. If we want Dog to be the type for catAnimal , then we can write the following instead:

interface Animal {    
  kind: string;  
}

interface Cat  {  
  name: string;  
}

interface Dog extends Animal {  
  name: string;  
}

type CatAnimal = Cat extends Animal ? Cat : Dog;  
let catAnimal: CatAnimal = <Dog>{  
  name: 'Joe'  
}

We can also have nested conditions to determine the actual type from multiple conditions. For example, we can write:

interface Animal {    
  kind: string;  
}

interface Bird  {  
  name: string;  
}

interface Cat  {  
  name: string;  
}

interface Dog extends Animal {  
  name: string;  
}

type AnimalTypeName<T> =  
  T extends Animal ? Cat :      
  T extends Animal ? Dog :      
  T extends Animal ? Bird :  
  Animal;

type t0 = AnimalTypeName<Cat>;  
type t1 = AnimalTypeName<Dog>;  
type t2 = AnimalTypeName<Animal>;  
type t3 = AnimalTypeName<Bird>;

Then we get the following types for the type alias t0 , t1 , t2 , and t3 :

type t0 = Animal  
type t1 = Cat  
type t2 = Cat  
type t3 = Animal

The exact type doesn’t have to be chosen immediately; we can also have something like:

interface Foo {}

interface Bar extends Foo {  
    
}

function bar(x) {  
  return x;  
}

function foo<T>(x: T) {  
  let y: T extends Foo ? string : number = bar(x);  
  let z: string | number = y;  
}

foo<Bar>(1);  
foo<Bar>('1');  
foo<Bar>(false);

As we can see, we can pass anything into foo even though we have the conditional type set. This is because the actual type in the type condition hasn’t been chosen yet, so TypeScript doesn’t make any assumptions about what we can assign to the variables in the foo function.

Distributive Conditional Types

Conditional types are distributive. If we have multiple conditional types that can possibly extend one type as we have in the following code:

interface A {}  
interface B {}  
interface C {}  
interface D {}  
interface X {}  
interface Y {}

type TypeName = (A | B | C) extends D ? X : Y;

Then the last line is equivalent to:

(A extends D ? X : Y) | (B extends D ? X : Y) | (C extends D ? X : Y)

For example, we can use it to filter out types with various conditions. For example, we can write:

type Diff<T, U> = T extends U ? never : T;

to remove types from T that are assignable to U . If T extends U, then the Diff<T, U> type is never , which means it’s dropped from the resulting union; otherwise, it takes on the type T. Likewise, we can write:

type Filter<T, U> = T extends U ? T : never;

to remove types from T that aren’t assignable to U . In this case, if T extends U, then the Filter type is the same as the T type, otherwise, it takes on the never type. For example, if we have:

type Diff<T, U> = T extends U ? never : T;  
type TypeName = Diff<string| number | boolean, boolean>;

Then TypeName has the type string | number . This is because Diff<string| number | boolean, boolean> is the same as:

(string extends boolean ? never : string) | (number extends boolean ? never: number) | (boolean extends boolean ? never: boolean)

On the other hand, if we write:

type Filter<T, U> = T extends U ? T : never;  
type TypeName = Filter<string| number | boolean, boolean>;

Then TypeName has the boolean type. This is because Filter<string | number | boolean, boolean> is the same as:

(string extends boolean ? string: never) | (number extends boolean ? number: never) | (boolean extends boolean ? boolean: never)

Predefined Conditional Types

TypeScript 2.8 ships with the following predefined conditional types:

  • Exclude<T, U> – excludes from T those types that are assignable to U.
  • Extract<T, U> – extract from T those types that are assignable to U.
  • NonNullable<T> – exclude null and undefined from T.
  • ReturnType<T> – get the return type of a function type.
  • InstanceType<T> – get the instance type of a constructor function type.

Since TypeScript 2.8, we can define types with conditional tests. The general expression for defining a conditional type in TypeScript is T extends U ? X : Y . They’re distributive, so (A | B | C) extends D ? X : Y; is the same as (A extends D ? X : Y) | (B extends D ? X : Y) | (C extends D ? X : Y) .

Categories
JavaScript

More Lodash Features that are Available in Plain JavaScript

In recent years, new features in JavaScript have been rolling out at a rapid pace. Deficiencies that previously had to be filled in by other libraries have become built-in features of plain JavaScript.

In this article, we’ll look at the methods in Lodash that are now available in plain JavaScript, like function currying, partially applied functions and more.

Some features are better with Lodash but for others, plain JavaScript will suffice.

Curry

The curry method in Lodash turns a function into one that can receive its arguments one (or several) at a time, returning a new function until all the arguments have been supplied. We can use it as follows:

const subtract = (a, b) => a - b;  
const currySubtract = _.curry(subtract);  
const subtract1 = currySubtract(1);  
const diff = subtract1(5);  
console.log(diff);

In the code above, we defined the subtract function which returns the first parameter subtracted by the second.

Then we called the curry method with the subtract function passed in to create a new function that takes one argument and returns the subtract function with the first argument set by that parameter. That’s the currySubtract function.

Then we call the currySubtract to set the argument of the subtract function and return the function with the first argument set. Finally, we call the subtract1 function with the second argument of subtract to get the final result.

We can do the same thing with plain JavaScript by writing:

const currySubtract = a => b => a - b;  
const subtract1 = currySubtract(1);  
const diff = subtract1(5);  
console.log(diff);

It does exactly the same thing, but without calling the curry method.

Partial

Lodash also has a method for partially applying a function, which is different from curry since some of the arguments of the function are passed into the function directly and the new function is returned.

For example, we can write the following:

const add = (a, b) => a + b;  
const add1 = _.partial(add, 1);  
const sum = add1(2);  
console.log(sum);

The partial method passes in the first argument and returns the function with that argument applied. This gets us the add1 function.

Then we can call the add1 function with the second argument, which is 2 in the code above, and we get 3 for the sum .

In plain JavaScript, we can write:

const add = (a, b) => a + b;  
const add1 = b => add(1, b);  
const sum = add1(2);  
console.log(sum);

Again, we can skip the Lodash partial method call like we did with the curry method call.

Eq

Lodash has the eq method to compare values. For example, we can write:

const equal = _.eq(1, 1);

It does nearly the same thing as Object.is , so we can often just use that. Both treat NaN as equal to itself; the one difference is that _.eq treats +0 and -0 as equal, while Object.is doesn’t.
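For reference, Object.is follows the SameValue algorithm, which differs from === for NaN and for signed zeros:

```javascript
// Object.is differs from === in exactly two edge cases.
console.log(NaN === NaN);         // false
console.log(Object.is(NaN, NaN)); // true

console.log(0 === -0);            // true
console.log(Object.is(0, -0));    // false
```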

Add

It also has the add method, which we can use as we do in the following code:

const sum = _.add(1, 1);

We see the value is 2. It does the same thing as the + operator, so we can use that instead.

Nesting Operators

The good thing is that we can pass these methods straight into other Lodash methods like map and reduce as follows:

const mult = _.map([1, 2, 3], n => _.multiply(n, 2));

We get [2, 4, 6] from the code above, and we get 6 from:

const sum = _.reduce([1, 2, 3], _.add);
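The same results can be had with the built-in array methods, without Lodash:

```javascript
// Built-in equivalents of the Lodash map and reduce calls above.
const mult = [1, 2, 3].map(n => n * 2);
console.log(mult); // [2, 4, 6]

const sum = [1, 2, 3].reduce((total, n) => total + n, 0);
console.log(sum); // 6
```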

At

The at method lets us access the value of the properties of an object or an entry of an array by its index.

For example, given the following object, we can write the following:

const obj = { a: [{ b: { c: 2 } }, 1] };

We can get the value of the c property with at by writing:

const c = _.at(obj, ["a[0].b.c"]);

Then we get 2 for c .

Also, we can access more than one property of an object by passing more paths into the array above:

const vals = _.at(obj, ["a[0].b.c", "a[0].b"]);

Then we get:

2  
{c: 2}

In JavaScript, we can access the paths directly:

const vals = [obj.a[0].b.c, obj.a[0].b];

However, it’s good for accessing paths that may not exist. For example, given the same object, if we write the following:

const vals = _.at(obj, ["a[0].b.c", "d.e"]);

Then we get undefined for the second entry instead of crashing the app.
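Since ES2020, optional chaining gives plain JavaScript similar safety for paths that may not exist:

```javascript
// Optional chaining short-circuits to undefined instead of throwing.
const obj = { a: [{ b: { c: 2 } }, 1] };

const c = obj.a?.[0]?.b?.c;
console.log(c); // 2

const missing = obj.d?.e;
console.log(missing); // undefined
```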

As we can see, Lodash still has some advantages, like safe object path access. However, other operators, like add, multiply, curry, and partial, are easy to define ourselves with plain JavaScript, so for those, plain JavaScript will suffice.