Categories
Angular, Node.js

How to Build a Job Queue With Node.js

If you want to build an app that handles long-running tasks, you need a job queue running in the background. Otherwise, users will be kept waiting for their requests to finish, and the server hosting your app may hang. That’s not a pleasant experience for anyone. Node.js has libraries for building a background job queue without too much hassle.


Preparation

In this piece, we’ll build a YouTube video downloader that lets users enter a YouTube URL. Our app will download the video to a local folder, from which the UI will download it automatically once it’s done. Download progress will be displayed along the way, and the user can’t queue another video until the first one is finished. It works like this: when a user enters a valid YouTube video URL, an entry for the job is recorded in the database. Then a background job is created, which downloads the video in the background. The job’s progress is reported back via Socket.io so it can be displayed to the user. Once the job is done, the database entry for the job is marked as done; if it fails, the job is removed from the queue. The file’s URL is sent back to the user, and the video is then downloaded automatically.

We’ll build a back end app with Express and a front end app with Angular. To do this, we use Express Generator. With the latest versions of Node.js, we can run npx express-generator after we make a folder for our back end app. This will generate the code files. Next, we need to install some packages. We do this by running npm i in our back end project folder’s root.

We’ll need to install some libraries in order to use the latest JavaScript features, build our queue, store our environment variables, and manipulate our database. We install these libraries by running npm i sequelize @babel/register babel-polyfill body-parser bull cors dotenv pg pg-hstore uuid ytdl-core. We’ll use PostgreSQL as our database, meaning we’ll need the pg and pg-hstore packages. We need the uuid package to generate UUIDs. ytdl-core is the YouTube download library. babel-polyfill and @babel/register allow us to use the latest JavaScript features. We also need Sequelize CLI to create our models and to run database migrations that change our database’s structure. To install it, we run npm i -g sequelize-cli.

Now, we need to create our database. We create an empty database with pgAdmin 3.x by connecting to our server, right-clicking the Databases item, and then clicking New Database. pgAdmin 3.x is used because it’s much faster than 4.x and has more features.

Finally, we need to initialize our Sequelize code. We run npx sequelize-cli init in our back end app’s project folder to do this.


The Code

Now we can write some code.

Building the back end

In bin/www, we put:

#!/usr/bin/env node

/**
 * Module dependencies.
 */

const app = require('../app');
const debug = require('debug')('backend:server');
const http = require('http');
/**
 * Get port from environment and store in Express.
 */

const port = normalizePort(process.env.PORT || '3000');
app.set('port', port);

/**
 * Create HTTP server.
 */

const server = http.createServer(app);
const io = require('socket.io')(server, { origins: '*:*' });
global.io = io;

/**
 * Listen on provided port, on all network interfaces.
 */

server.listen(port);
server.on('error', onError);
server.on('listening', onListening);
io.on('connection', (socket) => {
  socket.emit('connected', { message: 'connected' });
});

/**
 * Normalize a port into a number, string, or false.
 */

function normalizePort(val) {
  const port = parseInt(val, 10);

  if (isNaN(port)) {
    // named pipe
    return val;
  }

  if (port >= 0) {
    // port number
    return port;
  }

  return false;
}

/**
 * Event listener for HTTP server "error" event.
 */

function onError(error) {
  if (error.syscall !== 'listen') {
    throw error;
  }

  const bind = typeof port === 'string'
    ? 'Pipe ' + port
    : 'Port ' + port;

  // handle specific listen errors with friendly messages
  switch (error.code) {
    case 'EACCES':
      console.error(bind + ' requires elevated privileges');
      process.exit(1);
      break;
    case 'EADDRINUSE':
      console.error(bind + ' is already in use');
      process.exit(1);
      break;
    default:
      throw error;
  }
}

/**
 * Event listener for HTTP server "listening" event.
 */

function onListening() {
  const addr = server.address();
  const bind = typeof addr === 'string'
    ? 'pipe ' + addr
    : 'port ' + addr.port;
  debug('Listening on ' + bind);
}

This is the entry point of our app. We initialize Socket.io here so that we can listen for messages from the client side and emit messages to it. We also set the io object globally so that it can be used in other files.

Next, in the config folder, we rename config.json (generated by npx sequelize-cli init) to config.js and add the following:

require('dotenv').config();
const dbHost = process.env.DB_HOST;
const dbName = process.env.DB_NAME;
const dbUsername = process.env.DB_USERNAME;
const dbPassword = process.env.DB_PASSWORD;
const dbPort = process.env.DB_PORT || 5432;

module.exports = {
    development: {
        username: dbUsername,
        password: dbPassword,
        database: dbName,
        host: dbHost,
        port: dbPort,
        dialect: 'postgres'
    },
    test: {
        username: dbUsername,
        password: dbPassword,
        database: 'youtube_app_test',
        host: dbHost,
        port: dbPort,
        dialect: 'postgres'
    },
    production: {
        use_env_variable: 'DATABASE_URL',
        username: dbUsername,
        password: dbPassword,
        database: dbName,
        host: dbHost,
        port: dbPort,
        dialect: 'postgres'
    }
};

This allows us to use environment variables instead of hard-coding the database credentials. Then we make a files folder in the project root and put an empty .gitkeep file in it so the folder can be committed to Git.

Then, we make a database migration with Sequelize to build our database. We run:

npx sequelize-cli model:generate --name Job --attributes status:enum,url:string,fileLocation:string

to create a migration file and its corresponding model file. In the model file, which should be called job.js in the models folder, we put:

'use strict';
module.exports = (sequelize, DataTypes) => {
  const Job = sequelize.define('Job', {
    status: DataTypes.ENUM('started', 'cancelled', 'done'),
    url: DataTypes.STRING,
    fileLocation: DataTypes.STRING
  }, {});
  Job.associate = function(models) {
    // associations can be defined here
  };
  return Job;
};

and in index.js in the models folder, we put:

'use strict';

const fs = require('fs');
const path = require('path');
const Sequelize = require('sequelize');
const basename = path.basename(__filename);
const env = process.env.NODE_ENV || 'development';
const config = require(__dirname + '/../config/config.js')[env];
const db = {};

let sequelize;
if (config.use_env_variable) {
  sequelize = new Sequelize(process.env[config.use_env_variable], config);
} else {
  sequelize = new Sequelize(config.database, config.username, config.password, config);
}

fs
  .readdirSync(__dirname)
  .filter(file => {
    return (file.indexOf('.') !== 0) && (file !== basename) && (file.slice(-3) === '.js');
  })
  .forEach(file => {
    const model = sequelize['import'](path.join(__dirname, file));
    db[model.name] = model;
  });

Object.keys(db).forEach(modelName => {
  if (db[modelName].associate) {
    db[modelName].associate(db);
  }
});

db.sequelize = sequelize;
db.Sequelize = Sequelize;

module.exports = db;

The most important part is the renaming of config.json to config.js in const config = require(__dirname + '/../config/config.js')[env];.

Next, we build our queue with the bull package. We create a folder called queue in the project root folder and add video.js. In that file, we put:

const Queue = require('bull');
const fs = require('fs');
const models = require('../models');
const ytdl = require('ytdl-core');
const uuidv1 = require('uuid/v1');
const util = require('util');

const createVideoQueue = () => {
    const videoQueue = new Queue('video transcoding', {
        redis: {
            port: process.env.REDIS_PORT,
            host: process.env.REDIS_URL
        }
    });

    videoQueue.process(async (job, done) => {
        const data = job.data;
        try {
            job.progress(0);
            global.io.emit('progress', { progress: 0, jobId: data.id });
            const uuid = uuidv1();
            const fileLocation = `./files/${uuid}.mp4`;
            await new Promise((resolve) => {
                ytdl(data.url)
                    .on('progress', (chunkLength, downloaded, totalLength) => {
                        const progress = (downloaded / totalLength) * 100;
                        global.io.emit('progress', { progress, jobId: data.id });
                        if (progress >= 100) {
                            global.io.emit('videoDone', { fileLocation: `${uuid}.mp4`, jobId: data.id });
                            global.io.emit('progress', { progress: 100, jobId: data.id });
                        }
                    })
                    .pipe(fs.createWriteStream(fileLocation))
                    .on('finish', () => {
                        resolve();
                    });
            });
            await models.Job.update({
                status: 'done',
                fileLocation: `${uuid}.mp4`
            }, {
                where: {
                    id: data.id
                }
            });
            done();
        }
        catch (ex) {
            console.log(ex);
            // mark the job as failed so it's removed from the queue
            job.moveToFailed({ message: ex.message });
        }
    });
    return videoQueue;
};

module.exports = { createVideoQueue };

Note that we use the global io object to send progress back to the client, and that we converted the asynchronous code to promises so the steps run sequentially. We use ytdl-core to download YouTube videos. It has a progress event handler that reports the download’s progress, which we send to all connected clients via Socket.io. We will filter out the irrelevant messages on the client side. Any failed jobs will be removed from the queue.

Next, we create our routes. In the routes folder, we add a new file called jobs.js and put:

const express = require('express');
const models = require('../models');
const path = require('path');
const router = express.Router();
const ytdl = require('ytdl-core');
const { createVideoQueue } = require('../queue/video');

router.post('/new', async (req, res) => {
  const url = req.body.url;
  try {
    const isValidUrl = ytdl.validateURL(url);
    if (!isValidUrl) {
      res.status(400);
      return res.send({ error: 'invalid URL' });
    }
    const job = await models.Job.create({
      url,
      status: 'started'
    })
    await createVideoQueue().add({ url, id: job.id });
    return res.send(job);
  }
  catch (ex) {
    console.log(ex);
    res.status(400);
    return res.send({ error: ex });
  }
});

router.get('/file/:fileName', (req, res) => {
  const fileName = req.params.fileName;
  const file = path.resolve(__dirname, `../files/${fileName}`);
  res.download(file);
})

module.exports = router;

We need a route to add new jobs and to download the generated files. We validate the URL submitted before creating the job to minimize errors. In this line:

await createVideoQueue().add({ url, id: job.id });

we create the queue and add a job holding the video’s URL and the job’s database ID. Note that we don’t wait for the job to finish before returning a response. This is why we need Socket.io: to communicate the results back to the client once they’re ready.

In app.js, we add the initialization code. We add the following code to the file:

require("@babel/register");
require("babel-polyfill");
require('dotenv').config();
const createError = require('http-errors');
const express = require('express');
const path = require('path');
const cookieParser = require('cookie-parser');
const logger = require('morgan');
const bodyParser = require('body-parser')
const cors = require('cors')
const indexRouter = require('./routes/index');
const usersRouter = require('./routes/users');
const jobsRouter = require('./routes/jobs');
const app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));
app.use(express.static(path.join(__dirname, 'files')));
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json())
app.use(cors())
app.use('/', indexRouter);
app.use('/users', usersRouter);
app.use('/jobs', jobsRouter);

// catch 404 and forward to error handler
app.use((req, res, next) => {
  next(createError(404));
});

// error handler
app.use((err, req, res, next) => {
  // set locals, only providing error in development
  res.locals.message = err.message;
  res.locals.error = req.app.get('env') === 'development' ? err : {};

// render the error page
  res.status(err.status || 500);
  res.render('error');
});

module.exports = app;

We add app.use(express.static(path.join(__dirname, 'files'))); to expose the files folder that we created to the public, and we add:

const jobsRouter = require('./routes/jobs');

and

app.use('/jobs', jobsRouter);

so that clients can access the route we created.

Finally, we create an .env file and put the following:

REDIS_URL='localhost'
REDIS_PORT='6379'
DB_HOST='localhost'
DB_NAME='youtube_app_development'
DB_USERNAME='postgres'
DB_PASSWORD='postgres'

The bull package requires Redis, so we have to install it. To do so, we run the following in Ubuntu or related Linux distributions:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install redis-server
$ sudo systemctl enable redis-server.service
$ sudo service redis-server restart

The first two commands are run to update the package repository references and to update our Linux packages. We run sudo apt-get install redis-server to install Redis, and we run the fourth line to enable Redis on startup. If Redis is not started or needs restarting, we run sudo service redis-server restart.

Note—there is no recent Windows version of Redis, so Linux is required. Now we have everything needed to run the back end.

Building the UI

The back end is done, so we can move on to building the UI. We build it with Angular and Angular Material. To get started, we install the Angular CLI by running npm i -g @angular/cli. Then we run ng new frontend in our top-level project folder to create the app. Be sure to choose to include routing and use SCSS for styling when prompted. After that, we run npm i @angular/cdk @angular/material file-saver socket.io-client. The first two packages are Angular Material packages. file-saver helps us save downloaded files, and socket.io-client lets us connect to the back end to get the download progress and file location.

In environment.ts, we put:

export const environment = {
  production: false,
  apiUrl: 'http://localhost:3000',
  socketIoUrl: 'http://localhost:3000'
};

Then we create our components and services.

We run ng g component homePage and ng g service video to create our code files.

In video.service.ts, we put:

import { Injectable } from '@angular/core';
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { environment } from 'src/environments/environment';

@Injectable({
  providedIn: 'root'
})
export class VideoService {

  constructor(
    private http: HttpClient
  ) { }

  addVideoToQueue(data) {
    return this.http.post(`${environment.apiUrl}/jobs/new`, data);
  }

  getVideo(videoUrl: string) {
    return this.http.get<Blob>(videoUrl, {
      headers: new HttpHeaders({
        'accept': 'application/octet-stream',
        'content-type': 'application/json'
      }),
      responseType: 'blob' as 'json'
    })
  }
}

to let our app make requests to add YouTube videos to the download queue; we call getVideo to download the finished file. Note that we set the accept header to 'application/octet-stream' so that we can download video files.

Next in home-page.component.ts, we put:

import { Component, OnInit } from '@angular/core';
import { VideoService } from '../video.service';
import { NgForm } from '@angular/forms';
import io from 'socket.io-client';
import { environment } from 'src/environments/environment';
import { saveAs } from 'file-saver';

@Component({
  selector: 'app-home-page',
  templateUrl: './home-page.component.html',
  styleUrls: ['./home-page.component.scss']
})
export class HomePageComponent implements OnInit {
  videoData: any = <any>{};
  progress: number = 0;
  fileLocation: string;
  downloaded: boolean = false;
  jobId: number;
  connected: boolean = false;
  socket;
  getVideoSub;

  constructor(
    private videoService: VideoService
  ) { }

  ngOnInit() {
    this.addConnectionHandlers();
  }

  addConnectionHandlers() {
    const manager = io.Manager(environment.socketIoUrl);
    manager.on('connect_error', () => {
      // retry the connection if the back end goes down
      this.socket = io.connect(environment.socketIoUrl);
    });

    this.socket = io.connect(environment.socketIoUrl);
    this.socket.on('connect', (data) => {
      this.socket.on('connected', (msg) => { });

      this.socket.on('progress', (msg) => {
        if (this.jobId != msg.jobId) {
          return;
        }
        this.progress = msg.progress;
        if (msg.progress == 100) {
          this.progress = 0;
        }
      });

      this.socket.on('videoDone', (msg) => {
        if (this.jobId != msg.jobId || this.downloaded) {
          return;
        }
        this.getVideoSub = this.videoService.getVideo(`${environment.apiUrl}/jobs/file/${msg.fileLocation}`)
          .subscribe(res => {
            if (!this.downloaded) {
              saveAs(res, `${msg.fileLocation}.mp4`);
              this.progress = 0;
              this.downloaded = true;
              this.getVideoSub.unsubscribe();
            }
          });
      });
    });
  }

  addVideoToQueue(videoForm: NgForm) {
    this.downloaded = false;
    if (videoForm.invalid) {
      return;
    }
    this.videoService.addVideoToQueue(this.videoData)
      .subscribe(res => {
        this.jobId = (res as any).id;
      }, err => {
        alert('Invalid URL');
      })
  }
}

This provides the logic for the UI: the user enters a YouTube URL, watches the video’s download progress, and gets the file when it’s done. Since the back end emits events to all connected clients, we have to filter on the front end. The back end returns the jobId for the download job, so we can filter by jobId. We also retry the connection in the connect_error handler in case the back end app goes down. We check whether the file has already been downloaded with the this.downloaded flag so it won’t download again. Otherwise, it might try to download too many times, causing freezes and crashes.

In home-page.component.html, we put:

<div class="center">
    <h1>Download Video From YouTube</h1>
</div>
<div id='content'>
    <form #videoForm='ngForm' (ngSubmit)='addVideoToQueue(videoForm)'>
        <mat-form-field>
            <input matInput placeholder="YouTube URL" required #url='ngModel' name='url' [(ngModel)]='videoData.url'
                [disabled]='progress != 0'>
            <mat-error *ngIf="url.invalid && (url.dirty || url.touched)">
                <div *ngIf="url.errors.required">
                    URL is required.
                </div>
            </mat-error>
        </mat-form-field>
        <br>
        <button mat-raised-button type='submit'>Convert</button>
    </form>
    <br>
    <mat-card *ngIf='progress > 0'>
        Downloading: {{progress}}%
    </mat-card>
</div>

to let the user enter their YouTube URL and display progress. Note that we disable the input while a video is downloading, so users can’t keep submitting new requests.

In home-page.component.scss, we put:

#content {
  width: 95vw;
  margin: 0 auto;
}

to constrain the form’s width and center it.

In app-routing.module.ts, we put:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { HomePageComponent } from './home-page/home-page.component';

const routes: Routes = [
  { path: '', component: HomePageComponent }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

so that users can see our page.

In app.component.html, we put:

<router-outlet></router-outlet>

so that our page will be displayed. In app.module.ts, we put:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import {
  MatButtonModule,
  MatCheckboxModule,
  MatInputModule,
  MatMenuModule,
  MatSidenavModule,
  MatToolbarModule,
  MatTableModule,
  MatDialogModule,
  MAT_DIALOG_DEFAULT_OPTIONS,
  MatDatepickerModule,
  MatSelectModule,
  MatCardModule,
  MatFormFieldModule
} from '@angular/material';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { HomePageComponent } from './home-page/home-page.component';
import { FormsModule } from '@angular/forms';
import { HttpClientModule } from '@angular/common/http';

@NgModule({
  declarations: [
    AppComponent,
    HomePageComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    MatButtonModule,
    BrowserAnimationsModule,
    MatButtonModule,
    MatCheckboxModule,
    MatFormFieldModule,
    MatInputModule,
    MatMenuModule,
    MatSidenavModule,
    MatToolbarModule,
    MatTableModule,
    FormsModule,
    HttpClientModule,
    MatDialogModule,
    MatDatepickerModule,
    MatSelectModule,
    MatCardModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

so that we can use Angular Material widgets in our app.

In styles.scss, we put:

/* You can add global styles to this file, and also import other style files */
@import "~@angular/material/prebuilt-themes/indigo-pink.css";
body {
  font-family: "Roboto", sans-serif;
  margin: 0;
}

form {
  mat-form-field {
    width: 95vw;
    margin: 0 auto;
  }
}

.center {
  text-align: center;
}

to include the Material Design styles, size and center our forms, and add a class for centering text.

In index.html, we put:

<!doctype html>
<html lang="en">

<head>
  <meta charset="utf-8">
  <title>YouTube Download App</title>
  <base href="/">
  <link href="https://fonts.googleapis.com/css?family=Roboto&display=swap" rel="stylesheet">
  <link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
</head>

<body>
  <app-root></app-root>
</body>

</html>

to include Material Icons and Roboto font.

How to Dockerize a Node Web App

Running our Node web app in Docker saves us lots of headaches.

Every time a Docker image is deployed, we get a fresh container.

This way, we won’t have to worry about messing up our code in the container.

Also, the Docker image is built from a Dockerfile, so we can rebuild it repeatedly without doing anything manually.

In this article, we’ll look at how to Dockerize a simple Node web app.

Create the Node.js App

We start by creating our Node app.

To start, we create a project folder and run npm init --yes to create package.json .

Then we can replace everything in it with:

{
  "name": "my-app",
  "version": "1.0.0",
  "description": "a simple app",
  "author": "",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.1"
  }
}

Then we create a server.js file in the same folder and add:

'use strict';

const express = require('express');

const PORT = 8080;
const HOST = '0.0.0.0';

const app = express();
app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);

It just has one route, which responds with ‘Hello World’.

Creating a Dockerfile

Next, we create a Dockerfile in the project folder.

Then we add:

FROM node:12
WORKDIR /usr/src/app
COPY package*.json ./

RUN npm install
COPY . .

EXPOSE 8080
CMD [ "node", "server.js" ]

We get the Node 12 image, then create a working directory to build the image.

Then we copy package.json and package-lock.json into the working directory with COPY package*.json ./

Next, we run npm install to install the packages.

And then we bundle the app’s source code with COPY . . .

Then we use EXPOSE 8080 to open the 8080 port for the Docker image.

Finally, we run node server.js to start the app with CMD [ "node", "server.js" ].

Next, we create a .dockerignore file to stop Docker from copying the local node_modules folder into the Docker image.

We do the same with the NPM logs.

So we have the following in .dockerignore :

node_modules
npm-debug.log

Building the Image

We can then build the image with:

docker build -t <your username>/my-app .

Where <your username> is your Docker Hub username.

Then we should see our image when we run docker images .

Run the Image

Once it’s built, we can run our image with:

docker run -p 8888:8080 -d <your username>/my-app

-d runs the container in detached mode, leaving it running in the background.

-p maps a public (host) port to a private port in the container — here, host port 8888 to container port 8080.

We can then run docker ps to get the container ID.

The app’s output can be obtained with:

docker logs <container id>

And we can go into the container with:

docker exec -it <container id> /bin/bash

Then to test our app, we can run:

curl -i localhost:8888

Then we should get the ‘hello world’ response from the app.

Conclusion

We can create a Docker image for a Node web app with a simple Dockerfile.

Then we don’t have to do much to get it running with Docker.

Node.js Tips — Overwrite Files, POST Request, and Run Async Code in Series

As with any kind of app, there are difficult issues to solve when we write Node apps. In this article, we’ll look at some solutions to common problems when writing Node apps.

Overwrite a File Using fs in Node.js

fs.writeFileSync and fs.writeFile both overwrite the file by default.

Therefore, we don’t have to add any extra checks.

Also, we can set the 'w' flag to make sure we write to the file:

fs.writeFileSync(path, content, {
  encoding: 'utf8',
  flag: 'w'
})

We set the option in the 3rd argument.

Defining an Array as an Environment Variable in Node.js

We can set an environment variable’s value to a comma-separated string.

Then we can read the string and call split to break it apart on the commas.

For example, we can write:

app.js

const names = process.env.NAMES.split(',');

Then when we run:

NAMES=bar,baz,foo node app.js

Then process.env.NAMES will be 'bar,baz,foo' .

And then we can call split as we did above to convert it to an array.
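As a self-contained sketch (we set the variable in-process here; in practice it would come from the command line as above):

```javascript
// In a real run this would come from `NAMES=bar,baz,foo node app.js`;
// we set it directly so the snippet runs on its own.
process.env.NAMES = 'bar,baz,foo';

// Environment variables are always strings, so split turns it into an array.
const names = process.env.NAMES.split(',');
console.log(names); // [ 'bar', 'baz', 'foo' ]
```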

Make POST Request Using Node.js

We can make a POST request using the http module.

For instance, we can write:

const http = require('http')

const body = JSON.stringify({
  foo: "bar"
})

const request = new http.ClientRequest({
  hostname: "SERVER_NAME",
  port: 80,
  path: "/some-path",
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Content-Length": Buffer.byteLength(body)
  }
})

request.end(body);

We use the http.ClientRequest constructor to create our request.

We specify the hostname, which is the server’s host name.

port is the port.

path is the path relative to the hostname.

method is the request method, which should be 'POST' to make a POST request.

headers has the request headers.

Then we call request.end to send the request with body as the request body.

Then to listen to the request-response, we listen to the response event.

For instance, we can write:

request.on('response', (response) => {
  console.log(response.statusCode);
  console.log(response.headers);
  response.setEncoding('utf8');
  response.on('data', (chunk) => {
    console.log(chunk);
  });
});

We listen to the response event on the request object.

The callback receives the response object, which is a readable stream containing the response.

It also has the statusCode property with the response’s status code.

headers has the response headers.

We listen to the data event on the response to get the response body chunk by chunk in the chunk parameter.

Together, we have:

const http = require('http')

const body = JSON.stringify({
  foo: "bar"
})

const request = new http.ClientRequest({
  hostname: "SERVER_NAME",
  port: 80,
  path: "/some-path",
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Content-Length": Buffer.byteLength(body)
  }
})

request.end(body);

request.on('response', (response) => {
  console.log(response.statusCode);
  console.log(response.headers);
  response.setEncoding('utf8');
  response.on('data', (chunk) => {
    console.log(chunk);
  });
});

Running Async Code in Series

We can use the async module’s series method to run multiple pieces of async code in series.

For instance, we can write:

const async = require('async');

const foo = (callback) => {
  setTimeout(() => {
    callback(null, 'foo');
  }, 5000);
}

const bar = (callback) => {
  setTimeout(() => {
    callback(null, 'bar');
  }, 2000);
}

async.series([
  foo,
  bar
], (err, results) => {
  console.log(results);
});

We have 2 functions, foo and bar, which run setTimeout and take a Node-style callback.

The callback parameter in each function takes an error object and the result.

Then we put the functions in an array and pass it to the async.series method.

Since the functions’ signatures match what async.series expects, we get the result of each function in the results parameter, which is an array.

It holds the value that each function passed as the 2nd argument of its callback.

This means results is ['foo', 'bar'] .
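If we’d rather not depend on the async module, the same sequencing can be sketched with plain promises and async/await (the runInSeries helper below is our own, not part of any library):

```javascript
// Each task returns a promise that resolves with its result after a delay.
const foo = () => new Promise(resolve => setTimeout(() => resolve('foo'), 50));
const bar = () => new Promise(resolve => setTimeout(() => resolve('bar'), 20));

// Run the tasks one after another, collecting results in order.
const runInSeries = async (tasks) => {
  const results = [];
  for (const task of tasks) {
    // await ensures the next task starts only after the previous one resolves.
    results.push(await task());
  }
  return results;
};

runInSeries([foo, bar]).then(results => {
  console.log(results); // [ 'foo', 'bar' ]
});
```

Note that even though bar finishes faster on its own, the results stay in the order the tasks were listed, just like async.series.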

Set Navigation Timeout with Node Puppeteer

We can set the navigation timeout with the setDefaultNavigationTimeout method.

For instance, we can write:

await page.setDefaultNavigationTimeout(0);

to set the default timeout in milliseconds.

The timeout will affect goBack , goForward , goto , reload , setContent , and waitForNavigation .

Conclusion

We can set the navigation timeout with Puppeteer. To make a POST request, we can use the http.ClientRequest constructor. Also, we can use the async.series method to run functions that run asynchronously in the Node format sequentially. Environment variables are always strings. writeFile and writeFileSync always overwrite files.

Node.js Tips — Download Files, Async Test, Socket.io Custom

As with any kind of app, there are difficult issues to solve when we write Node apps. In this article, we’ll look at some solutions to common problems that we might encounter when writing Node apps.

Log Inside page.evaluate with Puppeteer

We can log output when page.evaluate is run by listening to the console event.

For instance, we can write:

const page = await browser.newPage();
page.on('console', consoleObj => console.log(consoleObj.text()));

We call browser.newPage to create a new page object.

Then we listen to the console event with it.

The event handler receives a console message object, which we can convert to text with its text method.

Testing Asynchronous Function with Mocha

We can run async functions with Mocha tests to test them.

For instance, we can write:

it('should do something', async function() {
  this.timeout(40000);
  const result = await someFunction();
  assert.isBelow(result, 3);
});

We call this.timeout to set the timeout before the test times out.

Then we use await to run our async function, which returns a promise.

Finally, we get the result and use assert to check it.
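The someFunction used in the test isn’t defined in the article; as a hypothetical stand-in, any promise-returning function works the same way:

```javascript
// Hypothetical stand-in for the someFunction used in the test above:
// any function that returns a promise can be awaited the same way.
const someFunction = () => {
  return new Promise((resolve) => {
    setTimeout(() => resolve(2), 100);
  });
};

// Outside of Mocha, we can exercise it the same way the test body does.
(async () => {
  const result = await someFunction();
  console.log(result); // 2, which is below the 3 the assertion checks for
})();
```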

Acknowledgment for socket.io Custom Event

We can listen to events after a connection is established.

On the server-side, we can listen to the connection event to see if a connection is established.

Once it is, then we emit an event to acknowledge the connection is made.

Likewise, we can do the same on the client-side.

For instance, in our server-side code, we write:

io.sockets.on('connection', (sock) => {
   sock.emit('connected', {
     connected: 'yes'
   });

   sock.on('message', (data, callback) => {
     console.log('received', data);
     const responseData = {
       hello: 'world'
     };

     callback(responseData);
   });
 });

We listen to the connection event to listen to connections.

Then we emit the connected event to acknowledge the connection.

We also listen to the message event which takes data and a callback, which we call to send data back to the client.

Then on the client-side, we write:

const socket = io.connect('http://localhost:3000');
socket.on('error', (err) => {
  console.error(err);
});

socket.on('connected', (data) => {
  console.log('connected', data);
  socket.emit('message', {
    payload: 'hello'
  }, (responseData) => {
    console.log(responseData);
  });
});

We connect to the server with io.connect.

And we listen to the error event, which runs a callback if there’s an error.

We also listen to the connected event for any data that’s sent.

And we emit a message event with some data and a callback; the server invokes that callback to send data back.

The responseData we receive is the object the server passed into callback in the server-side code.

Download Large File with Node.js while Avoiding High Memory Consumption

We can make an HTTP request with the http module.

Then we listen to the response event, which has the downloaded file’s content.

We create a write stream with the file path.

We could listen to the data and end events on the response ourselves, but piping the response into the write stream handles both for us: chunks are written as they arrive, and the stream finishes when the response ends.

For example, we can write:

const http = require('http');
const fs = require('fs');

const download = (url, dest, cb) => {
  const file = fs.createWriteStream(dest);
  http.get(url, (response) => {
    response.pipe(file);
    file.on('finish', () => {
      file.close(cb);
    });
  }).on('error', (err) => {
    // delete the partially written file; fs.unlink requires a callback
    fs.unlink(dest, () => {});
    if (cb) cb(err.message);
  });
};

We create a write stream with fs.createWriteStream.

Then we make our request with the http.get method.

We call response.pipe to pipe the response to our write stream, which writes it to the file.

We listen to the finish event and close the write stream in the callback.

If there’s an error, we delete what’s written so far with unlink .

We listen to the error event to watch for errors.

cb is a callback that we call to deliver the results.

Since the data is obtained in chunks, memory usage shouldn’t be an issue.

Conclusion

We can download a file and write it to disk with a write stream. We can log the output with Puppeteer. Mocha can run async functions in tests. We can send events to acknowledge the emission of custom events.


Node.js Tips — Testing Redirects, Sessions, and Auth Middlewares

As with any kind of app, there are difficult issues to solve when we write Node apps.

In this article, we’ll look at some solutions to common problems when writing Node apps.

Use Middleware to Check Authorization Before Entering Each Route in Express

We can add a middleware to a route to check for the proper credentials before entering a route.

For instance, we can write:

const protectedMiddlewares = [authChecker, fetchUser];
const unprotectedMiddlewares = [trackVisitorCount];

app.get("/", unprotectedMiddlewares, (req, res) => {
  //...
})

app.get("/dashboard", protectedMiddlewares, (req, res) => {
  // ...
})

app.get("/auth", unprotectedMiddlewares, (req, res) => {
  //...
})

app.put("/auth", unprotectedMiddlewares, (req, res) => {
  //...
})

We have an array of middleware for the protected dashboard route.

And we have an array of middleware for the unprotected routes.

A middleware may look like the following:

const authChecker = (req, res, next) => {
  if (req.session.auth || req.path === '/auth') {
    next();
  } else {
    res.redirect("/auth");
  }
}

We have the authChecker middleware that checks the path of the route and the session.

If there’s a session set, then we call the next middleware which should lead to a protected route handler in the end.

Otherwise, we redirect to the auth route.

Testing Requests that Redirect with Mocha and Supertest in Node

To test requests that redirect to another route with Supertest running in the Mocha test runner, we can write:

it('should log in the user and redirect', (done) => {
  request(app)
    .post('/login')
    .type('form')
    .field('user', 'username')
    .field('password', 'password')
    .end((err, res) => {
      if (err) { return done(err); }
      request(app)
        .get('/')
        .end((err, res) => {
          if (err) { return done(err); }
          res.text.should.include('profile');
          done();
        });
    });
});

We test a login route for redirection by checking the response text in the second end callback.

To do that, we made a POST request with the username and password.

Then that callback is called after the redirect is done.

We can get the text from the response to see if it’s the text in the profile route.

Send Additional HTTP Headers with Express

We can send whatever response headers we want with the setHeader method of the response object.

For instance, we can write:

app.use((req, res, next) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  return next();
});

We set the Access-Control-Allow-Origin header to * to allow requests from all origins.

Working with Sessions in Express

To work with sessions in an Express app, we can use the express-session middleware.

For instance, we can use it by writing:

const express = require('express');
const session = require('express-session');

const app = express();

app.use(session({
  resave: false,
  saveUninitialized: false,
  secret: 'secret'
}));

app.get('/', (req, res) => {
  if (req.session.views) {
    ++req.session.views;
  } else {
    req.session.views = 1;
  }
  res.send(`${req.session.views} views`);
});

app.listen(3000);

We used the express-session package with a few options.

session is a function that returns a middleware to let us work with sessions.

resave set to false means we don’t save the session back to the store if it wasn’t modified.

saveUninitialized set to false means we don’t create a session until something is stored in it.

secret is the secret for signing the sessions.

Then we can store whatever data we want in the req.session property.

It’ll persist until the session expires.

We just keep increasing the views count as we hit the / route.

The default is the memory store, which should only be used if we don’t have multiple instances.

Otherwise, we need to use persistent sessions.

Get the List of Connected Clients Username using Socket.io

We can get the list of clients connected to a room through the rooms object on the adapter.

For instance, we can write:

// In Socket.io 2.x, rooms[roomId] is a Room object whose sockets
// property maps connected client IDs to true
const clients = io.sockets.adapter.rooms[roomId].sockets;
for (const clientId of Object.keys(clients)) {
  console.log(clientId);
  const clientSocket = io.sockets.connected[clientId];
}

We get an object with the client IDs as the keys from the io.sockets.adapter.rooms object.

roomId is the ID of the room we’re in.

Then we can get the socket for each client with the io.sockets.connected object.

Conclusion

We can get the clients with Socket.io.

Also, we can write our own middlewares to check for credentials before proceeding to the route.

express-session lets us work with sessions in our Express app.

We can test redirects with Supertest by checking the response content in the end callback.