Categories
JavaScript TypeScript

TypeScript Advanced Types — Nullable Types and Type Aliases

TypeScript has many advanced type capabilities that make writing dynamically typed code easy. It also facilitates the adoption of existing JavaScript code, since it lets us keep the dynamic capabilities of JavaScript while using TypeScript's type-checking capability. There are multiple kinds of advanced types in TypeScript, such as intersection types, union types, type guards, nullable types, type aliases, and more. In this article, we look at nullable types and type aliases.

Nullable Types

To let us assign undefined to a property with the --strictNullChecks flag on, TypeScript supports nullable types. With the flag on, we can’t assign undefined to type members that don’t have the nullable operator attached to them. To use it, we just put a question mark after the name of the member we want to make nullable.

If we have the strictNullChecks flag on and we set the value of a property to null or undefined, as we do in the following code:

interface Person {  
  name: string;  
  age: number;  
}

let p: Person = {  
  name: 'Jane',  
  age: null  
}

Then we get the following errors:

Type 'null' is not assignable to type 'number'.(2322)
input.ts(3, 3): The expected type comes from property 'age' which is declared here on type 'Person'

The errors above won’t appear if we have strictNullChecks off and the TypeScript compiler will allow the code to be compiled.

If we have the strictNullChecks flag on and we want to be able to set undefined to a property as the value, then we can make the property nullable. For example, we can set a member of an interface to be nullable with the following code:

interface Person {  
  name: string;  
  age?: number;  
}

let p: Person = {  
  name: 'Jane',  
  age: undefined  
}

In the code above, we added a question mark after the age member in the Person interface to make it nullable. Then when we define the object, we can set age to undefined. We still can’t set age to null. If we try to do that, we get:

Type 'null' is not assignable to type 'number | undefined'.(2322)
input.ts(3, 3): The expected type comes from property 'age' which is declared here on type 'Person'

As we can see, a nullable type is just a union type between the type that we declared and the undefined type. This also means that we can use type guards with it like any other union type. For example, if we want to only get the age if it’s defined, we can write the following code:

const getAge = (age?: number) => {  
  if (age === undefined) {  
    return 0  
  }  
  else {  
    return age.toString();  
  }  
}

In the getAge function, we first check if the age parameter is undefined. If it is, then we return 0. Otherwise, we can call the toString() method on it, which is available on numbers.
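To see the narrowing in action, here’s a quick self-contained check (restating getAge from above):

```typescript
// Restated from the article: undefined is narrowed away before toString().
const getAge = (age?: number) => {
  if (age === undefined) {
    return 0;
  } else {
    return age.toString(); // age is narrowed to number here
  }
};

console.log(getAge());   // 0, since calling with no argument means age is undefined
console.log(getAge(30)); // "30"
```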

Likewise, we can eliminate null values with a similar kind of code, for instance, we can write:

const getAge = (age?: number | null) => {  
  if (age === null) {  
    return 0  
  }    
  else if (age === undefined) {  
    return 0  
  }  
  else {  
    return age.toString();  
  }  
}

This comes in handy because nullable types exclude null from being assigned when strictNullChecks is on, so if we want null to be accepted as a value for the age parameter, we need to add null to the union type. We can also combine the first 2 if blocks into one:

const getAge = (age?: number | null) => {  
  if (age === null || age === undefined) {  
    return 0  
  }  
  else {  
    return age.toString();  
  }  
}
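A quick usage check of the combined guard (restating getAge from above):

```typescript
// Restated from the article: one guard handles both null and undefined.
const getAge = (age?: number | null) => {
  if (age === null || age === undefined) {
    return 0;
  } else {
    return age.toString();
  }
};

console.log(getAge(null));      // 0
console.log(getAge(undefined)); // 0
console.log(getAge(25));        // "25"
```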

Type Aliases

If we want to create a new name for an existing type, we can add a type alias to it. This can be done for many types, including primitives, unions, tuples, and any other type that we can write by hand. To create a type alias, we use the type keyword. For example, if we want to add an alias to an intersection type, we can write the following code:

interface Person {  
  name: string;  
  age: number;  
}

interface Employee {  
  employeeCode: number;  
}

type Laborer = Person & Employee;  
let laborer: Laborer = {  
  name: 'Joe',  
  age: 20,  
  employeeCode: 100  
}

The declaration of laborer is the same as using the intersection type directly to type the laborer object, as we do below:

let laborer: Person & Employee = {  
  name: 'Joe',  
  age: 20,  
  employeeCode: 100  
}

We can declare type aliases for primitive types like we can for any other kind of type. For example, we can make a union type of different primitive types as we do in the following code:

type PossiblyNumber = number | string | null | undefined;  
let x: PossiblyNumber = 2;  
let y: PossiblyNumber = '2';  
let a: PossiblyNumber = null;  
let b: PossiblyNumber = undefined;

In the code above, the PossiblyNumber type can be a number, string, null, or undefined. If we try to assign an invalid value to it, like a boolean, as in the following code:

let c: PossiblyNumber = false;

We get the following error:

Type 'false' is not assignable to type 'PossiblyNumber'.(2322)

just like any other invalid value assignment.

We can also include generic type markers in our type aliases. For example, we can write:

type Foo<T> = { value: T };
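Then we can instantiate the alias with different type arguments, for example:

```typescript
type Foo<T> = { value: T };

// The type argument fixes what kind of value the alias wraps.
const numberFoo: Foo<number> = { value: 42 };
const stringFoo: Foo<string> = { value: 'hello' };

console.log(numberFoo.value); // 42
console.log(stringFoo.value); // 'hello'
```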

Generic type aliases can also be referenced in the properties of a type. For example, we can write:

type Tree<T> = {  
  value: T;  
  left: Tree<T>;  
  center: Tree<T>;  
  right: Tree<T>;  
}

Then we can use the Tree type as we do in the following code:

let tree: Tree<string> = {} as Tree<string>;  
tree.value = 'Jane';  
tree.left = {} as Tree<string>;  
tree.left.value = 'Joe';  
tree.left.left = {} as Tree<string>;  
tree.left.left.value = 'Amy';  
tree.left.right = {} as Tree<string>;  
tree.left.right.value = 'James';  
tree.center = {} as Tree<string>;  
tree.center.value = 'Joe';  
tree.right = {} as Tree<string>;  
tree.right.value = 'Joe';  
console.log(tree);

The console.log for tree on the last line should get us:

{  
  "value": "Jane",  
  "left": {  
    "value": "Joe",  
    "left": {  
      "value": "Amy"  
    },  
    "right": {  
      "value": "James"  
    }  
  },  
  "center": {  
    "value": "Joe"  
  },  
  "right": {  
    "value": "Joe"  
  }  
}

Nullable types are useful if we want to be able to assign undefined to a property when the strictNullChecks flag is on in our TypeScript compiler configuration. A nullable type is simply a union type between whatever type we have and undefined. It’s denoted by a question mark after the property name. This means we can use type guards with it like any other union type. Note that nullable types don’t allow null values to be assigned to them, since they only add undefined to the union. Type aliases let us create a new name for types that we already have. We can also use generics with type aliases, though a generic alias must be given type arguments before it can be used as a type.

Categories
JavaScript Rxjs

Some Useful Rxjs Creation Operators

Rxjs is a library for doing reactive programming. Creation operators are useful for generating data from various data sources to be subscribed to by Observers.

In this article, we’ll look at some creation operators from Rxjs.

Ajax

We can use the ajax() operator to fetch response objects returned from APIs.

For example, we can use it as follows:

import { ajax } from "rxjs/ajax";  
import { map, catchError } from "rxjs/operators";  
import { of } from "rxjs";

const observable = ajax(`https://api.github.com/meta`).pipe(  
  map(response => {  
    console.log(response);  
    return response;  
  }),  
  catchError(error => {  
    console.log(error);  
    return of(error);  
  })  
);

observable.subscribe(res => console.log(res));

We pipe the data from the response with the map operator. Also, we can catch HTTP errors with the catchError operator.

Also, we can use ajax.getJSON() to simplify the operation as follows:

import { ajax } from "rxjs/ajax";  
import { map, catchError } from "rxjs/operators";  
import { of } from "rxjs";

const observable = ajax.getJSON(`https://api.github.com/meta`).pipe(  
  map(response => {  
    console.log(response);  
    return response;  
  }),  
  catchError(error => {  
    console.log("error: ", error);  
    return of(error);  
  })  
);

observable.subscribe(res => console.log(res));

Note that in both examples, we return the response in the callback that we passed into the map operator.

It also works for POST requests:

import { ajax } from "rxjs/ajax";  
import { map, catchError } from "rxjs/operators";  
import { of } from "rxjs";

const observable = ajax({  
  url: "https://jsonplaceholder.typicode.com/posts",  
  method: "POST",  
  headers: {  
    "Content-Type": "application/json"  
  },  
  body: {  
    id: 1,  
    title: "title",  
    body: "body",  
    userId: 1  
  }  
}).pipe(  
  map(response => console.log("response: ", response)),  
  catchError(error => {  
    console.log("error: ", error);  
    return of(error);  
  })  
);

observable.subscribe(res => console.log(res));

As we can see, we can set headers and body of the request, so ajax can deal with most HTTP requests.

Errors can also be caught with the catchError operator that we pipe in:

import { ajax } from "rxjs/ajax";  
import { map, catchError } from "rxjs/operators";  
import { of } from "rxjs";

const observable = ajax(`https://api.github.com/404`).pipe(  
  map(response => {  
    console.log(response);  
    return response;  
  }),  
  catchError(error => {  
    console.log(error);  
    return of(error);  
  })  
);

observable.subscribe(res => console.log(res));

bindCallback

bindCallback converts a callback API to a function that returns an Observable.

It can convert a function with parameters to an Observable by emitting the parameters.

It takes 3 arguments. The first is a function, which takes a callback function as a parameter. Whatever is passed into the callback function will be emitted by the Observable.

The second argument is an optional resultSelector . We can pass in a function to select the emitted results here.

The last argument is an optional scheduler. We can pass in a scheduler if we want to change the way the callback function in the first argument is scheduled to be called.

import { bindCallback } from "rxjs";

const foo = fn => {  
  fn("a", "b", "c");  
};

const observableFn = bindCallback(foo);  
observableFn().subscribe(res => console.log(res));

Then we’ll see ['a', 'b', 'c'] logged, since we passed those values into our fn callback function, which is a parameter of foo. When the callback is called with multiple arguments, they’re emitted together as an array.

Then we return a function that returns an Observable with the bindCallback function. Then we can subscribe to the returned Observable.
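To make the conversion concrete, here’s a minimal, Rxjs-free sketch of the idea (bindCallbackLite is a hypothetical helper, not the real Rxjs implementation):

```typescript
// Hypothetical sketch: turn a callback-style function into a function
// that returns a subscribable object, roughly like bindCallback does.
function bindCallbackLite<T extends unknown[]>(
  fn: (cb: (...args: T) => void) => void
) {
  return () => ({
    subscribe(observer: (value: T) => void) {
      // Whatever fn passes into its callback is emitted to the observer.
      // Like Rxjs, multiple callback arguments arrive as one array.
      fn((...args: T) => observer(args));
    }
  });
}

const foo = (fn: (...args: string[]) => void) => {
  fn("a", "b", "c");
};

const observableFn = bindCallbackLite(foo);
observableFn().subscribe(res => console.log(res)); // ["a", "b", "c"]
```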

defer

defer lets us create an Observable that is only created when a subscription is made.

It takes one argument, which is an Observable factory function. For example, we can write:

import { defer, of } from "rxjs";
const clicksOrInterval = defer(() => {  
  return Math.random() > 0.5 ? of([1, 2, 3]) : of([4, 5, 6]);  
});  

clicksOrInterval.subscribe(x => console.log(x));

Then we have an Observable that subscribes to either of([1, 2, 3]) or of([4, 5, 6]), depending on whether Math.random() returns a value greater than 0.5.

empty

Creates an Observable that emits nothing to Observers except for complete notification.

It takes one optional argument, which is the scheduler that we want to use.

For example, we can use it as follows:

import { empty } from "rxjs";
const result = empty();  
result.subscribe(x => console.log(x));

Then we should see nothing logged.

Another example would be to emit the value 'odd' when odd numbers are emitted from the original Observable:

import { empty, interval, of } from "rxjs";  
import { mergeMap } from "rxjs/operators";

const interval$ = interval(1000);  
const result = interval$.pipe(  
  mergeMap(x => (x % 2 === 1 ? of("odd") : empty()))  
);  
result.subscribe(x => console.log(x));

from

from creates an Observable from an array, array-like object, a promise, iterable object or Observable-like object.

It takes 2 arguments. The first is an array, array-like object, a promise, an iterable object, or an Observable-like object.

The other argument is optional: a scheduler.

For example, we can use it as follows:

import { from } from "rxjs";

const array = [1, 2, 3];  
const result = from(array);  
result.subscribe(x => console.log(x));

We can also use it to convert a promise to an Observable as follows:

import { from } from "rxjs";

const promise = Promise.resolve(1);  
const result = from(promise);
result.subscribe(x => console.log(x));

This is handy for situations where we want to do that, like converting fetch API promises to Observables.

As we can see, the creation operators are pretty useful for turning various data sources to Observables.

We have the ajax operator for getting HTTP request responses. The bindCallback function turns callback arguments into Observable data. defer lets us create Observables on the fly when something subscribes to the Observable returned by the defer operator.

Finally, we have the empty operator to create an Observable that emits nothing, and a from operator to create Observables from an array, array-like object, a promise, iterable object or Observable-like object.

Categories
JavaScript Nodejs

Node.js FS Module — Renaming Items and Removing Directories

Manipulating files and directories is a basic operation for any program. Since Node.js is a server-side platform that can interact directly with the computer it’s running on, being able to manipulate files is a basic feature. Fortunately, Node.js has a fs module built into its library. It has many functions that can help with manipulating files and folders. The supported file and directory operations include basic ones like manipulating and opening files in directories. It can do this both synchronously and asynchronously. It also has an asynchronous API whose functions support promises, and it can show statistics for a file. Almost all the file operations that we can think of can be done with the built-in fs module. In this article, we will rename items stored on disk with the rename family of functions and remove directories with the rmdir family of functions.

Renaming Items with fs.rename and fs.renameSync

To rename items stored on disk in a Node.js program, we can call the rename function asynchronously. It takes 3 arguments. The first argument is the old path of the file, which can be a string, a Buffer object, or a URL object.

The second argument is the new path of the file, which can also be a string, a Buffer object, or a URL object.

The last argument is a callback function that’s called when the rename operation ends. The callback function takes an err parameter, which has the error data if the rename operation ends with an error; otherwise, err is null.

The original file must exist before renaming it. If the path of the item you want to rename to already exists, then that item will be overwritten. If the destination path is a directory, then an error will be raised.

For example, we can use it like in the following code:

const fs = require("fs");  
const sourceFile = "./files/originalFile.txt";  
const destFile = "./files/renamedFile.txt";

fs.rename(sourceFile, destFile, err => {  
  if (err) throw err;  
  console.log("Rename complete!");  
});

We can do the same for directories:

const fs = require("fs");  
const oldDirectory = "./files/oldFolder";  
const newDirectory = "./files/newFolder";

fs.rename(oldDirectory, newDirectory, err => {  
  if (err) throw err;  
  console.log("Directory rename complete!");  
});

The synchronous version of the rename function is the renameSync function. It takes the same arguments as rename but without the callback. The first argument is the old path of the file, which can be a string, a Buffer object, or a URL object. The second argument is the new path of the file, which can also be a string, a Buffer object, or a URL object. It returns undefined.

For example, we can rename a file with the renameSync function like in the following code:

const fs = require("fs");  
const sourceFile = "./files/originalFile.txt";  
const destFile = "./files/renamedFile.txt";

try {  
  fs.renameSync(sourceFile, destFile);  
  console.log("Rename complete!");  
} catch (error) {  
  console.log(error);  
}

There’s also a promise version of the rename function, which also does the rename operation asynchronously. It takes 2 arguments.

The first argument is the old path of the file, which can be a string, a Buffer object, or a URL object.

The second argument is the new path of the file, which can also be a string, a Buffer object, or a URL object. The promise version of the rename function returns a promise that resolves without an argument when the rename operation is successful. For example, we can use it like in the following code:

const fsPromises = require("fs").promises;  
const sourceFile = "./files/originalFile.txt";  
const destFile = "./files/renamedFile.txt";

(async () => {  
  try {  
    await fsPromises.rename(sourceFile, destFile);  
    console.log("Rename complete!");  
  } catch (error) {  
    console.log(error);  
  }  
})();

This is a better choice than renameSync for running sequential operations because asynchronous operations like promises won’t hold up the program’s execution while running, which means that other parts of the program can run if the operation isn’t finished.
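For instance, a chain of sequential renames with the promise API might look like this (the temp-directory setup and file names are made up for the demo):

```typescript
import { promises as fsPromises } from "fs";
import * as os from "os";
import * as path from "path";

// Demo: rename a file twice in sequence; each await yields to the
// event loop instead of blocking it the way renameSync would.
async function renameChain(): Promise<string> {
  const dir = await fsPromises.mkdtemp(path.join(os.tmpdir(), "rename-demo-"));
  const original = path.join(dir, "originalFile.txt");
  await fsPromises.writeFile(original, "hello");

  const renamed = path.join(dir, "renamedFile.txt");
  await fsPromises.rename(original, renamed);

  const finalPath = path.join(dir, "finalFile.txt");
  await fsPromises.rename(renamed, finalPath);

  return fsPromises.readFile(finalPath, "utf8");
}

renameChain().then(contents => console.log(contents)); // "hello"
```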

Removing Directories with fs.rmdir and fs.rmdirSync

To remove directories asynchronously we can use the rmdir function. It takes 3 arguments.

The first is the path of the directory, which can be a string, a Buffer object, or a URL object.

The second argument is an object that takes a few option properties. The emfileWait property is an integer that lets our program retry if an EMFILE error is encountered.

It is the maximum number of milliseconds that we wait to try deleting the directory again. The rmdir function will retry every 1ms until the emfileWait value is reached.

The default value is 1000. The maxBusyTries property is an integer that is the number of retries when an EBUSY, ENOTEMPTY, or EPERM error is encountered. It will retry every 100 milliseconds up to the maxBusyTries value. The recursive property is a boolean property.

If it’s set to true , then it will recursively delete data inside the directory along with the directory itself. In recursive mode, errors aren’t reported if path doesn’t exist and operations are retried on failure. The default value is false .

Recursive mode is an experimental feature. The last argument is a callback function which has an err parameter. It’s called when the removal operation ends. It’s null if the directory removal operation succeeds.

Otherwise, it receives an object with the error information. Using the asynchronous rmdir function on a file results in an ENOENT error on Windows and an ENOTDIR error on POSIX operating systems.

For example, we can use it like in the following code:

const fs = require("fs");  
const dirToDelete = "./files/deleteFolder";

fs.rmdir(  
  dirToDelete,  
  {  
    emfileWait: 2000,  
    maxBusyTries: 5,  
    recursive: false  
  },  
  err => {  
    if (err) {  
      throw err;  
    }  
    console.log("Removal complete!");  
  }  
);

The directory at the given path should be gone after the code above is run, if it exists and isn’t being used by other programs.

The synchronous version of the rmdir function is the rmdirSync function. It takes similar arguments as the rmdir function. The first argument is the path to the directory, which can be a string, a Buffer object or an URL object.

The second argument is an object that takes one option property. The recursive property is a boolean property. If it’s set to true , then it will recursively delete data inside the directory along with the directory itself. In recursive mode, errors aren’t reported if path doesn’t exist and operations are retried on failure. The default value is false .

Recursive mode is an experimental feature. It returns undefined .

We can use the rmdirSync function like in the following code:

const fs = require("fs");  
const dirToDelete = "./files/deleteFolder";

fs.rmdirSync(dirToDelete, {  
  recursive: false  
});  
console.log("Removal complete!");

The directory at the given path should be gone after the code above is run, if it exists and isn’t being used by other programs.

The promise version of the rmdir function does the same thing as the regular rmdir function. It takes 2 arguments. The first is the path of the directory, which can be a string, a Buffer object, or a URL object. The second argument is an object that takes a few option properties.

The emfileWait property is an integer that lets our program retry if an EMFILE error is encountered. It is the maximum number of milliseconds that we wait to try deleting the directory again. The rmdir function will retry every 1ms until the emfileWait value is reached. The default value is 1000.

The maxBusyTries property is an integer that is the number of retries when an EBUSY, ENOTEMPTY, or EPERM error is encountered. It will retry every 100 milliseconds up to the maxBusyTries value.

The recursive property is a boolean property. If it’s set to true , then it will recursively delete data inside the directory along with the directory itself. In recursive mode, errors aren’t reported if path doesn’t exist and operations are retried on failure. The default value is false .

Recursive mode is an experimental feature. It returns a promise which resolves with no argument when the directory removal operation succeeds. Using the promise version of the rmdir function with files results in the promise being rejected with the ENOENT error on Windows and an ENOTDIR error on POSIX operating systems.

We can use it like in the following code:

const fsPromises = require("fs").promises;  
const dirToDelete = "./files/deleteFolder";

(async () => {  
  try {  
    await fsPromises.rmdir(dirToDelete, {  
      emfileWait: 2000,  
      maxBusyTries: 5,  
      recursive: false  
    });  
    console.log("Removal complete!");  
  } catch (error) {  
    console.error(error);  
  }  
})();

The directory with the given path should be gone when the code above is run if it exists and it’s not being used by other programs. We used the try...catch block to catch errors with the async and await syntax with the promise version of the rmdir .

This is a better choice than rmdirSync for running sequential operations because asynchronous operations like promises won’t hold up the program’s execution when it’s running, which means that other parts of the program can run if the operation isn’t finished.

We renamed items stored on disk with the rename family of functions and removed directories with the rmdir family of functions.

With the rename family of functions, we just pass in the original path and the path that we want to rename to and then anything that’s passed in will be renamed if it’s valid.

The rmdir family lets us remove directories by specifying the path. The asynchronous versions of rmdir, which include the regular and promise versions, let us specify how to retry when an error occurs. This is very handy for handling errors gracefully.

Categories
JavaScript Rxjs

More Rxjs Operators

Rxjs is a library for doing reactive programming. Creation operators are useful for generating data from various data sources to be subscribed to by Observers.

In this article, we’ll look at some join creation operators to combine data from multiple Observables into one Observable. We’ll look at the merge , race and zip join creation operators, and also the buffer and bufferCount transformation operators.

Join Creation Operators

These operators combine the values emitted from multiple Observables into one.

merge

The merge operator takes multiple Observables and concurrently emits all values from every given input Observable.

It takes one array of Observables or a comma-separated list of Observables as arguments.

For example, we can use it as follows:

import { merge, of } from "rxjs";
const observable1 = of(1, 2, 3);  
const observable2 = of(4, 5, 6);  
const combined = merge(observable1, observable2);  
combined.subscribe(x => console.log(x));

Another example would be combining multiple timed Observables as follows:

import { merge, interval } from "rxjs";
const observable1 = interval(1000);  
const observable2 = interval(2000);  
const combined = merge(observable1, observable2);  
combined.subscribe(x => console.log(x));

We’ll see that the first observable1 will emit a value first, then observable2 . Then observable1 will continue to emit values every second, and observable2 will emit values every 2 seconds.

race

The race operator takes multiple Observables and mirrors the first of them to emit an item.

It takes a comma-separated list of Observables as arguments.

For example, we can use it as follows:

import { race, of } from "rxjs";
const observable1 = of(1, 2, 3);  
const observable2 = of(4, 5, 6);  
const combined = race(observable1, observable2);  
combined.subscribe(x => console.log(x));

We have observable1 , which emits data before observable2 . We should get the output:

1  
2  
3

since observable1 emits values first.

zip

The zip operator combines multiple Observables and returns an Observable whose values are arrays containing, in order, the values emitted by each of its input Observables.

It takes a list of Observables as arguments. We can use it as follows:

import { zip, of } from "rxjs";
const observable1 = of(1, 2, 3);  
const observable2 = of(4, 5, 6);  
const combined = zip(observable1, observable2);  
combined.subscribe(x => console.log(x));

Then we get the following:

[1, 4]  
[2, 5]  
[3, 6]
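The pairing behavior is the same as zipping plain arrays. A small Rxjs-free sketch of the pairing logic (zipArrays is a hypothetical helper, not how Rxjs implements zip):

```typescript
// Hypothetical sketch: pair values up by index, stopping at the shorter input.
function zipArrays<A, B>(a: A[], b: B[]): [A, B][] {
  const length = Math.min(a.length, b.length);
  const pairs: [A, B][] = [];
  for (let i = 0; i < length; i++) {
    pairs.push([a[i], b[i]]);
  }
  return pairs;
}

console.log(zipArrays([1, 2, 3], [4, 5, 6])); // [[1, 4], [2, 5], [3, 6]]
```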

We can also map them to objects as follows to make values from one Observable easier to distinguish from the other.

To do this, we can write the following:

import { zip, of } from "rxjs";  
import { map } from "rxjs/operators";
const age$ = of(1, 2, 3);  
const name$ = of("John", "Mary", "Jane");  
const combined = zip(age$, name$);  
combined  
  .pipe(map(([age, name]) => ({ age, name })))  
  .subscribe(x => console.log(x));

Transformation Operators

buffer

The buffer operator buffers the source Observable values until the closingNotifier emits.

It takes one argument, which is the closingNotifier . It’s an Observable that signals the buffer to be emitted on the output Observable.

For example, we can use it as follows:

import { fromEvent, timer } from "rxjs";  
import { buffer } from "rxjs/operators";
const observable = timer(1000, 1000);  
const clicks = fromEvent(document, "click");  
const buffered = observable.pipe(buffer(clicks));  
buffered.subscribe(x => console.log(x));

In the code above, we have an Observable created by the timer operator, which emits numbers every second after 1 second of waiting. Then we pipe it through the buffer operator, passing in the clicks Observable, which emits as clicks are made on the document.

This means that each time we click the page, the buffer operator emits the data that was buffered since the previous click. So as we click our document, we’ll get anything from an empty array to an array of the values that were emitted between clicks.

bufferCount

bufferCount is slightly different from buffer in that it buffers the data until the size hits the maximum bufferSize .

It takes 2 arguments: the bufferSize, which is the maximum number of values buffered, and the optional startBufferEvery parameter, which indicates the interval at which to start a new buffer.

For example, we can use it as follows:

import { fromEvent } from "rxjs";  
import { bufferCount } from "rxjs/operators";  
const clicks = fromEvent(document, "click");  
const buffered = clicks.pipe(bufferCount(10));  
buffered.subscribe(x => console.log(x));

The code above will emit the buffered MouseEvent objects as an array once we’ve clicked 10 times, since that’s when 10 MouseEvent objects have been emitted by the originating Observable.
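The counting logic can be sketched without Rxjs (makeBufferCounter is a hypothetical helper, not the real operator):

```typescript
// Hypothetical sketch: collect values until bufferSize is reached,
// then emit the full buffer and start a new one.
function makeBufferCounter<T>(bufferSize: number, emit: (buffer: T[]) => void) {
  let buffer: T[] = [];
  return (value: T) => {
    buffer.push(value);
    if (buffer.length === bufferSize) {
      emit(buffer);
      buffer = [];
    }
  };
}

const emitted: number[][] = [];
const push = makeBufferCounter<number>(3, buf => emitted.push(buf));
[1, 2, 3, 4, 5, 6, 7].forEach(push);

console.log(emitted); // [[1, 2, 3], [4, 5, 6]] — the 7 is still buffered
```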

As we can see, the join creation operators let us combine Observables’ emitted data in many ways. We can pick the first Observable that emits, combine all the emitted data into one Observable, or pair emissions up in order.

Also, we can buffer an Observable’s emitted data and emit the buffer when a given amount has accumulated or when a triggering event fires.

Categories
JavaScript TypeScript

Introduction to TypeScript Classes — Access Modifiers

Classes in TypeScript, like in JavaScript, are a special syntax for its prototypical inheritance model that is comparable to inheritance in class-based object-oriented languages. Classes are just special functions added in ES6 that are meant to mimic the class keyword from these other languages. In JavaScript, we can have class declarations and class expressions, because classes are just functions, and like all other functions, there are function declarations and function expressions. This is the same in TypeScript. Classes serve as templates to create new objects. TypeScript extends the class syntax of JavaScript and adds its own twists to it. In this article, we’ll look at the access modifiers for class members in TypeScript.

Public, private, and protected modifiers

Public

In TypeScript, class members can have access modifiers added to them. This lets us control access to the members of a class from parts of the program outside the class the members are defined in. The default access modifier for class members in TypeScript is public. This means that class members that have no access modifiers will be designated as public members. For example, we can use the public modifier like in the following code:

class Person {  
  public name: string;  
  public constructor(name: string) {  
    this.name = name;  
  } 

  public getName(): string{  
    return this.name;  
  }  
}

const person = new Person('Jane');  
console.log(person.getName());  
console.log(person.name);

In the example above, we designated all the members in our Person class as public so that we can access them outside the Person class. We can call the getName method on the Person instance, and we can also get the name field directly from outside the class. The public access modifier on the constructor is redundant, because constructors are always public so that we can instantiate the class with them.

Private and Protected

When a member of a class is marked private, it can’t be accessed outside of its containing class. Members marked protected are accessible within the class that declares them and within its subclasses. For example, if we have a private member in our class like in the following code:

class Person {  
  private name: string;  
  constructor(name: string) {  
    this.name = name;  
  } 

  public getName(): string{  
    return this.name;  
  }  
}

const person = new Person('Jane');  
console.log(person.getName());  
console.log(person.name);

Then we get an error when we try to access name outside the Person class as we did above. If we try to compile and run the code, the TypeScript compiler will refuse to compile it and gives the error message “Property ‘name’ is private and only accessible within class ‘Person’.(2341)”, as we expect for private members.
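By contrast, a protected member can be read from inside a subclass. For example (Employee here is a hypothetical subclass for illustration):

```typescript
class Person {
  protected name: string;
  constructor(name: string) {
    this.name = name;
  }
}

class Employee extends Person {
  public greet(): string {
    // Allowed: name is protected and Employee is a subclass of Person.
    return `Hello, ${this.name}`;
  }
}

const employee = new Employee('Jane');
console.log(employee.greet()); // 'Hello, Jane'
```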

TypeScript compares types by their structure for public members. If 2 types have the same public members listed, then they’re considered compatible by TypeScript. However, for private and protected members this isn’t the case. For 2 classes with private or protected members to be considered the same type, those members must come from the same declaration. For example, if we have the following code:

class Person {  
  private name: string;  
  constructor(name: string) {  
    this.name = name;  
  }  
}

class Human {  
  private name: string;  
  constructor(name: string) {  
    this.name = name;  
  }  
}

const human: Human = new Person('Jane');

Then we would get the error “Type ‘Person’ is not assignable to type ‘Human’. Types have separate declarations of a private property ‘name’.(2322)”. Because Person and Human each declare their own private member called name, they can't be considered the same type, so we can't assign an instance of Person to a variable of type Human. The same applies to protected members, so if we have the following code:

class Person {  
  protected name: string;  
  constructor(name: string) {  
    this.name = name;  
  }  
}

class Human {  
  protected name: string;  
  constructor(name: string) {  
    this.name = name;  
  }  
}

const human: Human = new Person('Jane');

We would get the same error. But if we change protected to public like we do in the code below, then it would work:

class Person {  
  public name: string;  
  constructor(name: string) {  
    this.name = name;  
  }  
}

class Human {  
  public name: string;  
  constructor(name: string) {  
    this.name = name;  
  }  
}

const human: Human = new Person('Jane');  
console.log(human.name);

If we run the code above, we’ll see ‘Jane’ logged from the console.log statement on the last line.
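The structural comparison also means that a value doesn't need to be an instance of Human at all, as long as it has the same public members. A small self-contained sketch of this, assuming a class with only public members:

```typescript
// Sketch: with only public members, any value of the right shape is
// assignable, even a plain object literal that isn't a class instance.
class Human {
  public name: string;
  constructor(name: string) {
    this.name = name;
  }
}

const h: Human = { name: 'Jane' }; // OK: same public structure
console.log(h.name); // Jane
```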

If our classes have private members, then those members must come from a common super-class for the two classes to be considered compatible. For example, we can write the following code to make the Person and Human classes compatible while still giving them a private name member:

class Animal {  
  private name: string;  
  constructor(name: string) {  
    this.name = name;        
  }  
}

class Person extends Animal{  
  constructor(name: string) {  
    super(name);      
  }  
}

class Human extends Animal{  
  constructor(name: string) {  
    super(name);      
  }  
}

const human: Human = new Person('Jane');  
console.log(human);

In the code above, the Animal class has the private name member, and both the Person and Human classes extend Animal. Human and Person are considered compatible because they don't have separate declarations of the private member name; instead, they both inherit the single name member declared in Animal. When we run console.log on human in the last line, we see the Person object logged.

Likewise, for protected members, we can do something similar, as in the following code:

class Animal {  
  protected name: string;  
  constructor(name: string) {  
    this.name = name;        
  }  
}

class Person extends Animal{  
  constructor(name: string) {  
    super(name);      
  } 

  getName() {  
    return this.name;  
  }  
}

class Human extends Animal{  
  constructor(name: string) {  
    super(name);      
  } 

  getName() {  
    return this.name;  
  }  
}

const human: Human = new Person('Jane');  
console.log(human.getName());

In the code above, the protected member name can be accessed by the sub-classes Human and Person, so each class can return the value of name with a getName method. We need this method because protected members are only accessible from within the declaring class and its sub-classes, not from outside. If we run the code above, ‘Jane’ is logged by the console.log call on the last line.

In TypeScript, class members can have access modifiers applied to them. Public is the default access modifier for members if nothing is specified. Two classes are considered compatible if they have the same public members and any private or protected members are inherited from the same declaration.
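To pull the three modifiers together, here is a minimal sketch. The Employee and Manager classes and their members are hypothetical names for illustration, not part of the article's examples:

```typescript
// Sketch: all three access modifiers in one class hierarchy.
class Employee {
  public name: string;      // accessible anywhere
  protected role: string;   // accessible in Employee and its sub-classes
  private salary: number;   // accessible only inside Employee

  constructor(name: string, role: string, salary: number) {
    this.name = name;
    this.role = role;
    this.salary = salary;
  }

  public describe(): string {
    // private and protected members can both be used inside the class
    return `${this.name} (${this.role}): ${this.salary}`;
  }
}

class Manager extends Employee {
  getRole(): string {
    return this.role; // protected: OK in a sub-class
  }
}

const m = new Manager('Jane', 'manager', 100000);
console.log(m.name);       // public: OK from outside
console.log(m.getRole());  // protected, exposed via a method
console.log(m.describe()); // private, exposed via a method
```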