Categories
Node.js Best Practices

Node.js Best Practices — Security and Config

Like any kind of app, JavaScript apps have to be written well.

Otherwise, we run into all kinds of issues later on.

In this article, we’ll look at some best practices we should follow when writing Node apps.

Block Cross-Site Request Forgeries

We should block cross-site request forgeries.

This is an attack where attackers attempt to submit data into an application via their own site.

The attacker crafts a form or other input on their own site that makes requests against our app.

To mitigate cross-site request forgeries, we can use the csurf package.

For instance, we can write:

const express = require('express');
const cookieParser = require('cookie-parser');
const csrf = require('csurf');

const app = express();

// csurf needs session or cookie middleware to store its secret
app.use(cookieParser());
app.use(csrf({ cookie: true }));

app.use(function(req, res, next){
 res.locals.csrftoken = req.csrfToken();
 next();
});

We use the csrf middleware so we can get the CSRF token with the csrfToken method.

Then we can use it in our template with:

<input type="hidden" name="_csrf" value="{{csrftoken}}" />

Don’t Use Evil Regular Expressions

Evil regexes include grouping with repetition, where the repeated group itself contains repetition or alternation with overlapping alternatives.

These patterns can take exponential time to compute when applied to certain non-matching inputs.

Examples of these patterns include:

  • (a+)+
  • ([a-zA-Z]+)*
  • (a|aa)+

If we check such a pattern against an input like aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, we would hang our app.

It can take seconds or minutes to complete the pattern check.
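As a sketch, an evil pattern can usually be rewritten into an equivalent linear-time one; here we verify (on short inputs only, to avoid actually hanging) that the rewrite accepts and rejects the same strings:

```javascript
// evil: nested quantifiers cause catastrophic backtracking on non-matching input
const evil = /^(a+)+$/;
// safe: an equivalent pattern with no nested quantifiers, so it runs in linear time
const safe = /^a+$/;

// the two patterns accept exactly the same strings; we check short
// inputs only, since long non-matching inputs would hang the evil pattern
function sameVerdict(input) {
  return evil.test(input) === safe.test(input);
}

console.log(sameVerdict('aaaa')); // true
console.log(sameVerdict('aaab')); // true, both reject
```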

We can audit our code for evil regexes with a static analysis tool such as the safe-regex package.

Add Rate Limiting

Rate limiting will protect us from DoS (denial-of-service) attacks.

We don’t want attackers to bombard our app with lots of requests and let them all go through.

To limit the number of requests to our app from one IP address, we can use the express-limiter package.

For example, we can write:

const express = require('express');
const redisClient = require('redis').createClient();

const app = express();

const limiter = require('express-limiter')(app, redisClient);

limiter({
  lookup: ['connection.remoteAddress'],
  total: 100,
  expire: 1000 * 60 * 60
})

We use the limiter middleware to limit requests up to 100 per hour per IP address.

total is the number of requests.

expire is when the limit is reset.

Docker Compose

We can create our Docker compose config to install Nginx with our app together.

For instance, we can write:

web:
  build: ./app
  volumes:
    - "./app:/src/app"
  ports:
    - "3030:3000"
  command: pm2-docker app/server.js

nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/public
  volumes_from:
    - web
  links:
    - web:web

We install Nginx and our app all in one go with our Docker compose file.

Keep a Clear Separation Between the Business Logic and the API Routes

We should keep a clear separation between the business logic and the API routes.

We definitely shouldn't put our business logic in our API routes, since routes that do many things quickly become tangled and hard to test.

We need a service layer that holds all the business logic so that we can work with and test it separately.

This also makes the code easier to reuse.
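As a minimal sketch (the articleService module, its functions, and the data are hypothetical), the business logic sits in a service module and the route stays thin:

```javascript
// services/articleService.js: the business logic lives here
const articles = [{ id: '1', title: 'title', content: 'something' }];

function findAll() {
  // in a real app this would query a database
  return articles;
}

function findById(id) {
  return articles.find((a) => a.id === id);
}

module.exports = { findAll, findById };

// the route then only translates HTTP to service calls, e.g.:
// router.get('/article/:id', (req, res) => {
//   const article = articleService.findById(req.params.id);
//   article ? res.json(article) : res.sendStatus(404);
// });
```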

Use a config Folder for Configuration Files

Configuration files should be in a config folder so that we can add all the configuration into one place.

It’s easy to find and change this way.
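A minimal sketch of such a folder; the setting names and defaults here are hypothetical:

```javascript
// config/index.js: every setting in one place,
// falling back to defaults when environment variables aren't set
const config = {
  env: process.env.NODE_ENV || 'development',
  port: Number(process.env.PORT) || 3000,
  dbUrl: process.env.DB_URL || 'mongodb://localhost/app'
};

module.exports = config;

// elsewhere in the app: const config = require('./config');
```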

Conclusion

CSRF protection, evil regexes, rate limiting, and folder structure should all be taken into consideration when we create our Node app.


Node.js Best Practices — Helmet and Cookie


Add Helmet to Set Sane Defaults

Some default settings for Express apps aren’t very secure.

Therefore, the Helmet middleware is available to set some saner defaults.

To use it, we write:

const express = require('express');
const helmet = require('helmet');

const app = express();

app.use(helmet());

when we create our Express app.

It does several things to improve the security of our app.

It enables the Content-Security-Policy HTTP header.

This defines the trusted origins of the contents like scripts, images, etc that are allowed to load on our web page.

DNS prefetching is good for speeding up load times.

However, disabling prefetching will limit potential data leakage about the types of external services that are used.

It can also reduce traffic and costs associated with DNS query lookups.

The X-Frame-Options HTTP header is also enabled.

This blocks clickjacking attempts by preventing the page from being rendered in a frame on another site.

The X-Powered-By HTTP header is also hidden.

This way, attackers can’t identify what we’re using to create our app.

Public key pinning headers are also enabled (though note that HPKP is now deprecated in browsers and has been dropped from newer versions of Helmet).

This prevents man in the middle attacks that use forged certificates.

Strict-Transport-Security header is also enabled.

This forces subsequent connections to the server to use HTTPS once a client has connected with HTTPS initially.

It also enables the Cache-Control, Pragma, Expires, and Surrogate-Control with defaults that block clients from caching old versions of site resources.

The X-Content-Type-Options HTTP header stops clients from sniffing the MIME type of a response outside the content-type that’s declared.

The Referrer HTTP header sent by browsers can also be controlled (via the Referrer-Policy response header) to include only certain pieces of information.

It also sets the X-XSS-Protection HTTP header, which prevents some reflected XSS attacks in older browsers.

Tighten Session Cookies

We should tighten our session cookies, which aren't highly secure by default.

We can set various settings with the express-session package.

The secret property is a secret string that the session cookie is signed with.

key is the name of the cookie.

httpOnly flags the cookie to be inaccessible to client-side JavaScript, so it's only sent back to the issuing web server.

secure should be set to true, which requires SSL/TLS.

This forces cookies to be only used with HTTPS requests.

domain indicates the domain that the cookie can be accessed from.

path has the path that the cookie is accepted within the app’s domain.

expires sets the expiration date of the cookie.

By default, the cookie only lasts for the session.

To use these options, we can write:

const express = require('express');
const session = require('express-session');

const app = express();

app.use(session({
  secret: 'secret',
  key: 'someKey',
  // avoid deprecation warnings and unnecessary session writes
  resave: false,
  saveUninitialized: false,
  cookie: {
    httpOnly: true,
    secure: true,
    domain: 'example.com',
    path: '/foo/bar',
    expires: new Date(Date.now() + 60 * 60 * 1000)
  }
}));

We can set all these options to create and send our cookie.

Conclusion

The Express Helmet and express-session packages are very useful for securing our Express app.


Node.js Best Practices — REST and Test


Use HTTP Methods & API Routes

We should follow RESTful conventions when we create our endpoints.

We should use nouns as identifiers.

For example, we have routes like:

  • POST /article or PUT /article/:id to create a new article
  • GET /article to retrieve a list of articles
  • GET /article/:id to retrieve an article
  • PATCH /article/:id to modify an existing article
  • DELETE /article/:id to remove an article

Use HTTP Status Codes Correctly

Status codes should correctly tell the status of our response.

We can have the following:

  • 2xx, if everything is fine.
  • 3xx, if the resource has moved
  • 4xx, if the request can’t be fulfilled because of a client error
  • 5xx, if something went wrong on the API side.

Client-side errors are things like invalid input or unauthorized credentials.

Server-side errors are things like exceptions thrown on the server-side for whatever reasons.

We can respond with status codes in Express with res.status.

For example, we can write:

res.status(500).send({ error: 'an error occurred' })

We respond with the 500 status code with a message.

Use HTTP Headers to Send Metadata

HTTP headers let us send metadata with requests and responses.

They can include information like pagination, rate-limiting, or authentication.

We can add custom headers by prefixing the keys with X-.

For instance, we can send a CSRF token with the X-Csrf-Token request header.

HTTP doesn’t define any size limit on headers.

However, Node imposes its own limit on the maximum header size (80KB in older Node versions; 16KB by default in current ones).

The Right Framework for Our Node.js REST API

We should pick the right framework for our REST API.

There's Koa, Express, Hapi, Restify, Nest.js, and more.

We can use the first 4 to build simple REST services.

If we need a more complete solution, we can use Nest.js.

It has things like ORM and testing built-in.

Black-Box Test Our Node.js REST APIs

To test our Node REST APIs, we can make requests to our API and check the results.

We can use a specialized HTTP client like Supertest to test our API.

For example, to test getting an article with it, we can write:

const request = require('supertest')
const app = require('../app') // our Express app (the path is an assumption)

describe('GET /article/:id', () => {
  it('returns an article', () => {
    return request(app)
      .get('/article/1')
      .set('Accept', 'application/json')
      .expect(200, {
        id: '1',
        title: 'title',
        content: 'something'
      })
  })
})

We make the HTTP request to the article endpoint.

And we call set to set some request headers.

Then we call expect to check the status code and response body.

The data would be populated in a database that’s only used when running unit tests.

They would be reset after every test.

This ensures that we have clean data to test with.

In addition to black-box tests, we should also do unit tests for other parts like the services.

Conclusion

We should follow RESTful conventions for our APIs.

Also, testing is important for any app.


Node.js Best Practices — Performance and Uptime


Monitoring

We must monitor our app to make sure that it’s in a healthy state.

The number of restarts, resource usage, and more indicates how our app is doing.

We can use monitoring tools to get a good idea of what's happening in production.

Delegate Anything Possible to a Reverse Proxy

Anything that can be done by a reverse proxy should be done by it.

So we can use one like Nginx to do things like SSL, gzipping, etc.

Otherwise, our app would be busy dealing with network tasks rather than dealing with the app’s core tasks.

Express has middleware for all these tasks, but it’s better to offload them to a reverse proxy because Node’s single thread model would keep our app busy just doing networking tasks.

Make Use of SSL

SSL is a given since we don’t want attackers to snoop the communication channels between our clients and the server.

We should use a reverse proxy for this so we can delegate this to it rather than using our app for SSL.

Smart Logging

We can use a logging platform to make logging and searching the logs easier.

Otherwise, everything our app is doing is a black box.

Then if troubles arise, we’ll have problems fixing issues since we don’t know what’s going on.
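A minimal structured-logging sketch using only the standard library; a real app would typically use a dedicated logging library and ship the JSON lines to a log platform:

```javascript
// emit one JSON object per line so a log platform can index and search fields
function logLine(level, message, fields = {}) {
  const entry = {
    level,
    message,
    time: new Date().toISOString(),
    ...fields,
  };
  console.log(JSON.stringify(entry));
  return entry;
}

// usage
logLine('info', 'request handled', { method: 'GET', path: '/article', status: 200 });
```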

Lock Dependencies

Dependencies should be locked so that new versions won't be installed without changing the version explicitly.

The npm config in each environment should pin the exact version of every package, not the latest version.

We can use npm shrinkwrap for finer control of the dependencies.

With npm 5 and later, dependencies are locked by default via package-lock.json, so often we don't have to do anything extra.

Ensure Error Management Best Practices are Met

Errors should be handled so that we have a stable app.

Error handling is different for synchronous and async code.

We have to understand their differences.

Promises call catch to catch errors.

Synchronous code uses the try...catch block to catch errors.

We don’t want our app to crash when it does basic operations like parsing invalid JSON, using undefined variables, etc.
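A sketch of the difference:

```javascript
// synchronous errors: a try...catch block catches them
function parseSafe(json) {
  try {
    return JSON.parse(json);
  } catch (err) {
    // parsing invalid JSON shouldn't crash the app
    return null;
  }
}

// asynchronous errors: a rejection is NOT caught by a surrounding
// try...catch; we attach a .catch handler (or use try...catch with await)
function failLater() {
  return Promise.reject(new Error('boom'));
}

failLater().catch((err) => {
  console.log('handled:', err.message); // prints "handled: boom"
});

console.log(parseSafe('{"a":1}')); // { a: 1 }
console.log(parseSafe('oops'));    // null
```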

Guard Process Uptime Using the Right Tool

Node processes must be guarded against failures.

This means that they have to be restarted automatically when they crash.

A process manager like PM2 would do this for us and more.

It also provides us with cluster management features which we get without modifying our app’s code.

There are also other tools, like Forever, that do the same thing.

Use All CPU Cores

We should use all CPU cores to run our Node app.

Node.js can only run on one CPU core without clustering.

This means by default, it’ll just leave the other cores idle.

We can do that with Docker or deployment scripts based on the Linux init system to replicate the process.

Conclusion

There’re many things we can do to maximize the performance of our Node app.

We can improve logging and monitoring to make troubleshooting easier.


Node.js Best Practices — Nginx


Adding a Reverse Proxy with Nginx

We should never expose our Express app directly to the Internet.

Instead, we should use a reverse proxy to direct traffic to our app.

This way, we can add caching, control our traffic, and more.

We can install it by running:

apt update
apt install nginx

Then we can start it by running:

systemctl start nginx

Once we have Nginx running, we can go into /etc/nginx/nginx.conf to change the Nginx config to point to our app.

For example, we can write:

server {
  listen 80;
  location / {
     proxy_pass http://localhost:3000;
  }
}

We use the proxy_pass directive to pass any traffic from port 80 to our Express app, which is listening on port 3000.

Then we restart Nginx to make our config take effect by running:

systemctl restart nginx

Load Balancing with Nginx

To add load balancing with Nginx, we can edit the nginx.conf by writing:

http {
  upstream fooapp {
    server localhost:3000;
    server domain2;
    server domain3;
    ...
  }
  ...
}

to add the instances of our app.

The upstream section creates a server group that will load balance traffic across the servers we specify.

Then in the server section, we add:

server {
   listen 80;
   location / {
     proxy_pass http://fooapp;
  }
}

to use the fooapp server group.

Then we restart Nginx with:

systemctl restart nginx

Enabling Caching with Nginx

To enable caching, we can use the proxy_cache_path directive.

In nginx.conf , we write:

http {
  proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m
                   inactive=24h max_size=1g;
  ...
}

Cached items that haven't been accessed for 24 hours are removed, and the max cache size is set to 1GB.

Also, we add:

server {
   listen 80;
   location / {
     proxy_pass            http://fooapp;
     proxy_set_header      Host $host;
     proxy_buffering       on;
     proxy_cache           STATIC;
     proxy_cache_valid     200 1d;
     proxy_cache_use_stale error timeout invalid_header updating
            http_500 http_502 http_503 http_504;
  }
}

proxy_buffering is set to on to enable buffering.

proxy_cache is set to STATIC to enable the cache.

proxy_cache_valid sets cached responses to expire in 1 day if the status code is 200.

proxy_cache_use_stale determines when a stale cached response can be used during communication.

We set it to use a stale response when there's an error, a timeout, or an invalid header, and while the cached entry is being refreshed (the updating keyword).

We also can use the cache if the response status code is 500, 502, 503, or 504.

Enabling Gzip Compression with Nginx

We can enable Gzip compression with Nginx by using the gzip module.

For instance, we can write:

server {
   gzip on;
   gzip_types      text/plain application/xml;
   gzip_proxied    no-cache no-store private expired auth;
   gzip_min_length 1000;
  ...
}

in nginx.conf to enable Gzip with gzip on .

gzip_types lets us set the MIME type for files to zip.

gzip_proxied lets us enable or disable gzipping on proxied requests.

We enable it with the private, expired, no-cache and no-store Cache-Control header values.

auth means we enable compression if the request contains the Authorization header field.

gzip_min_length sets the minimum length of a response that will be gzipped.

The length is taken from the Content-Length response header field and is measured in bytes.

Conclusion

There’re many things we can do with Nginx to improve the performance of our apps.