Node.js Best Practices — Tokens and Secrets

Like any kind of app, JavaScript apps have to be written well.

Otherwise, we run into all kinds of issues later on.

In this article, we’ll look at some best practices we should follow when writing Node apps.

Support Blacklisting JWTs

We should be able to blacklist JSON web tokens so that we can lock out malicious users.

Most systems have no built-in mechanism for doing this.

We can keep a list of revoked tokens and reject any request that presents one of them.
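
As a minimal sketch, assuming an Express-style middleware and an in-memory Set (a real system would use a shared store such as Redis), the check could look like this:

// In-memory denylist of revoked tokens (illustrative; use a shared store in production).
const revokedTokens = new Set();

function revokeToken(token) {
  revokedTokens.add(token);
}

// Middleware that rejects any request presenting a revoked token.
function rejectRevokedTokens(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  if (revokedTokens.has(token)) {
    return res.status(401).json({ error: 'Token revoked' });
  }
  next();
}

module.exports = { revokeToken, rejectRevokedTokens };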

Prevent Brute-Force Attacks Against Authorization

Brute-force attacks against authorization can be prevented with rate limiting.

For instance, we can limit login attempts by blocking repeated failed login requests from the same user or IP address.
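
For example, with the express-rate-limit package (a sketch; the app object, route, handler, and limits are illustrative):

const rateLimit = require('express-rate-limit');

// Allow at most 5 login attempts per 15 minutes from a single IP.
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 5,
  message: 'Too many login attempts, please try again later.'
});

app.post('/login', loginLimiter, loginHandler);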

Run Node.js as a Non-root User

We should run Node apps as a non-root user.

Otherwise, an attacker who compromises the process can do whatever they want in our system.

We can bake a non-root user into the Docker image or set one with the -u flag when running the container.
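
A minimal Dockerfile sketch, assuming the official Node image (which ships with an unprivileged node user) and an entry point named server.js:

FROM node:lts
WORKDIR /app
COPY . .
RUN npm ci --omit=dev
# Drop root privileges before starting the app.
USER node
CMD ["node", "server.js"]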

Limit Payload Size Using a Reverse Proxy or a Middleware

Payload size should be limited to avoid overloading our systems.

This helps prevent DoS attacks.

If request bodies are kept small, less damage can be done.

We can configure the Express body parser to accept only small payloads with the limit option.
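
For instance, with Express's built-in JSON body parser (the 10kb limit is just an example):

const express = require('express');
const app = express();

// Reject request bodies larger than 10 kb.
app.use(express.json({ limit: '10kb' }));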

Avoid JavaScript eval Statements

We should avoid JavaScript eval statements.

They're insecure since they run code from a string.

eval also makes optimization and debugging much harder.

setTimeout, setInterval, and the Function constructor can also run code from strings.

So we should avoid passing strings to them as well.
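
For example, passing a function instead of a string keeps the code out of the string-evaluation path:

// Avoid: the callback is a string, so it's compiled and run like eval.
setTimeout("console.log('hello')", 1000);

// Prefer: pass a function instead of a string.
setTimeout(() => console.log('hello'), 1000);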

Prevent Evil RegEx from Overloading Single Thread Execution

Some regular expressions should be avoided because they can be exploited.

To make data validation easy, we can use a library like validator.js for common checks, or use safe-regex to detect vulnerable regex patterns before we ship them.

A bad regex can make our app susceptible to DoS attacks that block the event loop.

This will make our app hang.
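
A small sketch with the safe-regex package, which flags patterns whose nesting can cause catastrophic backtracking:

const safe = require('safe-regex');

// Nested quantifiers like (a+)+ can backtrack exponentially on crafted input.
console.log(safe(/(a+)+$/)); // false — potentially vulnerable
console.log(safe(/\d{4}-\d{2}-\d{2}/)); // true — bounded and safe to use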

Avoid Module Loading Using a Variable

We shouldn’t call require with a variable.

This way, attackers can't pass anything into the require function.

For instance, instead of writing:

const insecure = require(helperPath);

We write:

const uploadHelpers = require('./helpers/upload');

This also applies to other user-controlled paths, such as file paths we pass to fs.readFile.
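
A hedged sketch of constraining a user-supplied file name before reading it (the uploads directory and function name are illustrative):

const fs = require('fs');
const path = require('path');

const UPLOAD_DIR = path.resolve('./uploads');

function readUpload(fileName) {
  // Resolve the path and make sure it still lives inside the uploads directory.
  const fullPath = path.resolve(UPLOAD_DIR, fileName);
  if (!fullPath.startsWith(UPLOAD_DIR + path.sep)) {
    throw new Error('Invalid file path');
  }
  return fs.readFileSync(fullPath, 'utf8');
}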

Run Unsafe Code in a Sandbox

If we have any unsafe code, we should run it in a sandbox.

This way, it can't reach the outside world and potentially do damage.

We can use an npm sandboxing package, or run the code in a dedicated process.
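
As a rough sketch, Node's built-in vm module can at least isolate a script's scope and apply a timeout, though on its own it is not a hard security boundary; truly untrusted code belongs in a dedicated process or a sandboxing package:

const vm = require('vm');

// Run the script against its own context object with a timeout.
// Note: vm is not a complete sandbox; combine it with process-level isolation.
const sandbox = { result: null };
vm.createContext(sandbox);
vm.runInContext('result = 1 + 1;', sandbox, { timeout: 100 });

console.log(sandbox.result); // 2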

Take Extra Care When Working with Child Processes

If we run child processes in our Node app, we should sanitize and escape the command string and its arguments so that they can run without risk.

If we don’t escape them, then attackers can run anything they want, which can be catastrophic.
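
For example, using execFile with an argument array instead of interpolating user input into a shell string (userInput is an illustrative variable):

const { execFile } = require('child_process');

// Risky: exec(`ls -la ${userInput}`) lets shell metacharacters in userInput run commands.

// Safer: pass the arguments as an array so no shell parsing happens.
execFile('ls', ['-la', userInput], (err, stdout) => {
  if (err) throw err;
  console.log(stdout);
});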

Hide Error Details from Clients

If there are any details about errors that expose the internals of our system, we should hide them from clients.

This way, the chance of attackers finding ways to attack our app is much lower.

Anything like paths, stack traces, and more should be hidden.
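
A sketch of an Express error handler that logs the full error internally but returns only a generic message (app and logger are assumed to exist):

// Keep stack traces and internal details in our own logs only.
app.use((err, req, res, next) => {
  logger.error(err.stack);
  res.status(500).json({ error: 'Internal server error' });
});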

Conclusion

We should hide sensitive data, isolate risky code, and escape any strings that are potentially malicious.

Node.js Best Practices — Test and Arrow Functions

Like any kind of app, JavaScript apps have to be written well.

Otherwise, we run into all kinds of issues later on.

In this article, we’ll look at some best practices we should follow when writing Node apps.

Use Arrow Function Expressions

Arrow functions are a great feature of modern JavaScript.

They let us write callbacks without this being rebound inside the callback.

Also, they're more compact.

We should use them to avoid bugs and make our code easier to read.
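
For example, inside a class method, an arrow callback keeps this pointing at the instance (the Timer class is just an illustration):

class Timer {
  constructor(seconds) {
    this.seconds = seconds;
  }

  start() {
    // An arrow function doesn't rebind this, so this.seconds is the instance field.
    setTimeout(() => {
      console.log(`${this.seconds} seconds elapsed`);
    }, this.seconds * 1000);
  }
}

new Timer(2).start();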

Write API Tests

API tests let us check the results of our APIs.

We’ll know right away from the tests if we don’t get what we want.

They're fast, so we can run them often with little effort.

Also, they’re great for documenting how to call our APIs.

There are other kinds of tests, like performance tests and database tests, that we can run as well.

Include 3 Parts in Each Test Name

Our tests should be self-explanatory.

So we should state in the test name what’s being tested.

Also, we should state what circumstances are being tested and what’s the expected result.

This way, no one will be confused with what we’re testing.

For instance, we can write:

describe('Item Service', () => {
  describe('Add new item', () => {
    it('When no price is specified, then the item status is rejected', () => {
      const item = new ItemService().add(...);
      expect(item.status).to.equal('rejected');
    });
  });
});

The describe blocks label the unit and the operation we're testing.

The it string states the circumstances being tested, and together with the expect call it states the expected result.

Detect Code Issues with a Linter

We can use a linter to detect code issues and antipatterns early.

To make this easy, we can add a pre-commit hook that runs before a commit is made to do the check.

Avoid Global Test Fixtures and Seeds and Add Data Per Test

Test fixtures should be added per test.

And the data should be scrubbed after each test.

This way, we won’t have tests that are dependent on each other.

With this done, every test should run in isolation so we can run them in any order.
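
A sketch in the Mocha style used above; the ItemService methods are illustrative:

describe('Item Service', () => {
  let item;

  beforeEach(async () => {
    // Each test creates its own data...
    item = await new ItemService().add({ name: 'book', price: 9 });
  });

  afterEach(async () => {
    // ...and cleans it up, so tests never depend on each other.
    await new ItemService().remove(item.id);
  });

  it('When an item exists, then it can be fetched by id', async () => {
    const found = await new ItemService().get(item.id);
    expect(found.name).to.equal('book');
  });
});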

Inspect for Vulnerable Dependencies

We should inspect our dependencies for known vulnerabilities so that we can update them.

This way, attackers can't exploit those vulnerabilities to attack our app.

To do this, we can run npm audit or other tools.

Tag Our Tests

We can tag our tests so that only a chosen subset runs before a commit is made.

We run the ones that must pass to prevent committing anything that breaks our code.

Otherwise, we'd run all tests every time, which is probably too slow to do before each commit.
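
For example, with Mocha we can put a tag like #sanity in the test name and run only the tagged tests before a commit (a sketch):

// Tag the critical, fast tests in their names.
it('#sanity When no price is specified, then the item status is rejected', () => {
  // ...
});

// Then run only those tests in the pre-commit hook:
// mocha --grep "#sanity"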

Check Test Coverage

Checking for test coverage lets us identify any decreases and check for things we missed in our tests.

Tools like Istanbul/nyc can check test coverage in our code so that we get a clear idea of what still needs to be tested.
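
For instance, a package.json script that fails the run when line coverage drops below a threshold (the 80% threshold is illustrative):

{
  "scripts": {
    "test": "nyc --check-coverage --lines 80 mocha"
  }
}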

Conclusion

We should use arrow functions.

And we should have good test coverage in our code.

Node.js Best Practices — Syntax Issues

Like any kind of app, JavaScript apps have to be written well.

Otherwise, we run into all kinds of issues later on.

In this article, we’ll look at some best practices we should follow when writing Node apps.

Start a Code Block’s Curly Braces on the Same Line

The curly braces should be on the same line as the opening statement.

For example, instead of writing:

function foo()
{
  // code block
}

We write:

function foo() {
  // code block
}

This helps us avoid unexpected results.

If we have:

function foo()
{
  return
  {
    bar: "fantastic"
  };
}

Then the return and the object are considered separate statements, so the function returns undefined.

If we put the opening curly brace beside the return, then they'll be considered one statement.
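
With the brace placed beside the return, the object is returned as expected:

function foo() {
  return {
    bar: "fantastic"
  };
}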

Separate Statements Properly

We should separate statements properly.

For example, we can write:

function doThing() {
  // ...
}

doThing()

const items = [1, 2, 3]
items.forEach(console.log)

On the other hand, we should avoid typos like:

const m = new Map()
const a = [1,2,3]
[...m.values()].forEach(console.log)

The last two lines are parsed as a single statement and will throw a syntax error.

Another example would be:

const count = 2
(function foo() {
  // do something
}())

Without a semicolon, the parentheses on the new line are treated as a call, so 2 is invoked as if it were a function and a TypeError is thrown.

To avoid all these issues, we should use semicolons to separate our statements.
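
With a semicolon added, the IIFE example above behaves as intended:

const count = 2;

(function foo() {
  // do something
}());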

Name Our Functions

We should name our functions so that we can trace functions by name when debugging.

Debugging memory consumption issues with a core dump is much harder when the functions involved are anonymous.

Use Naming Conventions for Variables, Constants, Functions, and Classes

Variables, constants, functions, and classes should follow common naming conventions.

Lower camel case should be used for naming constants, variables, and functions.

Upper camel case should be used for classes.

This helps us distinguish between plain variables or functions and classes.

Also, we should use descriptive names but keep them short.

Prefer const over let and Ditch the var

var shouldn’t be used for declaring variables anymore.

Its scoping is tricky since var is function-scoped rather than block-scoped.

let and const are block-scoped, so it's clear where they're available.

const is better than let since a const variable can't be reassigned to a new value.
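
A small illustration of the scoping difference:

if (true) {
  var a = 1; // function-scoped: leaks out of the block
  let b = 2; // block-scoped
  const c = 3; // block-scoped and can't be reassigned
}

console.log(a); // 1
console.log(typeof b); // 'undefined' — b isn't visible out here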

Require Modules First, not Inside Functions

Modules should be required at the top of each file so that we can find errors and other issues as soon as the module loads.

If they’re inside functions, then we see the issues with require only when we run the code.

So this just isn’t a good idea.

Also, requires are run synchronously by Node, so if they take a long time, then they may block code that is after the require.

Require Modules by Folders as Opposed to the Files Directly

We should place an index.js file that exposes the module's internals so consumers will pass through it.

This lets us create an interface that makes future changes easier without breaking the contract.

For example, we can write:

module.exports.foo = require("./foo");
module.exports.bar = require("./bar");

rather than:

module.exports.foo = require("./foo/foo.js");
module.exports.bar = require("./bar/bar.js");

to avoid importing JavaScript modules directly inside their folder.

Conclusion

We should consider syntax changes that make our lives easier and avoid errors.

Node.js Best Practices — Data Validation

Like any kind of app, JavaScript apps have to be written well.

Otherwise, we run into all kinds of issues later on.

In this article, we’ll look at some best practices we should follow when writing Node apps.

Components With Known Security Vulnerabilities

We should log and audit each API call to cloud management services with AWS CloudTrail.

The security checker provided by our cloud provider should be run.

Logging and Monitoring

Logging and monitoring should be sufficient.

We should look for any suspicious auditing events like user log in, user creation, permission change, etc.

If there are repeated login failures, we should be alerted.

The time and username that initiated the update in each database record should be recorded.

Cross-Site-Scripting

To avoid cross-site scripting, we should use template engines and frameworks that automatically escape scripts by design.

Most of them should have this feature.

Untrusted HTTP request data should be escaped according to its HTML output context.

Applying context-sensitive encoding when modifying the browser document on the client side prevents DOM-based cross-site scripting.

Also, we should enable a content security policy to defend against cross-site scripting.
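
For example, with the helmet package (a sketch; the directives depend on what the app actually needs to load, and app is assumed to be an Express app):

const helmet = require('helmet');

// Send a Content-Security-Policy header so the browser refuses scripts
// we haven't explicitly allowed.
app.use(
  helmet.contentSecurityPolicy({
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'"]
    }
  })
);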

Protect Personally Identifiable Information

Personally identifiable information should be protected.

Any data that can be used to identify a person should be encrypted.

Privacy laws are enacted in different countries so we should follow them.

Have a security.txt File in Production

A text file called security.txt should be placed in the .well-known directory or the root directory.

It should explain how security researchers can report vulnerabilities and give the contact details of the person responsible.

This way, we can be notified of any security vulnerabilities that are found.

Have a SECURITY.md File

In a code project, we can have a SECURITY.md file with the contact information of the project owner.

This way, people can report vulnerabilities that are found in the project.

Adjust the HTTP Response Headers for Enhanced Security

We should adjust the HTTP response headers for enhanced security.

Attacks like cross-site scripting, clickjacking, and other malicious attacks take advantage of data exposed with response headers to conduct their attacks.
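
A common way to do this in Express is the helmet middleware, which sets a batch of security-related headers with sensible defaults (app is assumed to be an Express app):

const helmet = require('helmet');

// Sets headers such as X-Content-Type-Options, X-Frame-Options,
// and Strict-Transport-Security.
app.use(helmet());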

Constantly and Automatically Inspect for Vulnerable Dependencies

We should use npm audit or snyk to track, monitor, and patch vulnerable dependencies.

These can be integrated into our CI setup to catch vulnerable dependencies before they reach production.

Avoid Using the Node.js crypto Library for Handling Passwords

The Node crypto library isn't meant for password storage; bcrypt lets us salt and hash our passwords with a configurable work factor.

If we don’t salt and hash, then they may be brute-forced or guessed with dictionary attacks.
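
A sketch of hashing and verifying a password with bcrypt:

const bcrypt = require('bcrypt');

async function hashPassword(password) {
  // bcrypt generates a salt and hashes with a configurable work factor (10 rounds here).
  return bcrypt.hash(password, 10);
}

async function verifyPassword(password, storedHash) {
  // compare hashes the candidate and checks it against the stored hash.
  return bcrypt.compare(password, storedHash);
}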

Escape HTML, JS and CSS Output

We should escape HTML, JavaScript, and CSS so that cross-site scripting attacks are prevented.

We can use dedicated libraries that mark the data as pure content that shouldn’t be run.
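
For instance, with the escape-html package (a sketch):

const escapeHtml = require('escape-html');

const userComment = '<script>alert("xss")</script>';

// Render the text as inert content instead of executable markup.
const safeOutput = `<p>${escapeHtml(userComment)}</p>`;
// <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>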

Validate Incoming JSON Schemas

Incoming JSON should be validated with a JSON schema to make sure that data from requests are valid.

If we don’t check them, then malicious and invalid data can get into our systems and cause problems.

Libraries like jsonschema or joi let us do the checks easily.
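
A sketch of validating a request body with joi (the schema fields and the middleware name are illustrative):

const Joi = require('joi');

const itemSchema = Joi.object({
  name: Joi.string().min(1).required(),
  price: Joi.number().positive().required()
});

function validateItem(req, res, next) {
  const { error, value } = itemSchema.validate(req.body);
  if (error) {
    return res.status(400).json({ error: error.message });
  }
  req.body = value;
  next();
}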

Conclusion

We should validate our data so that nothing malicious or invalid gets in.

Anything in our data that could be executed should be escaped.

Node.js Best Practices — Secrets

Like any kind of app, JavaScript apps have to be written well.

Otherwise, we run into all kinds of issues later on.

In this article, we’ll look at some best practices we should follow when writing Node apps.

Generating Random Strings Using Node.js

We can generate random strings with the crypto.randomBytes method.

Other methods might not be as random as we think.

And this makes applications vulnerable to cryptographic attacks.
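
For example:

const crypto = require('crypto');

// 32 cryptographically secure random bytes, hex-encoded (64 characters).
const token = crypto.randomBytes(32).toString('hex');
console.log(token);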

Authentication

We should use multi-factor authentication for important services and accounts.

Passwords and SSH keys should be rotated frequently.

Strong password policies should be applied everywhere.

Don’t ship apps with any default credentials.

Only standard authentication methods like OAuth, OpenID, etc. should be used.

Basic auth is insecure so we shouldn't use it.

Auth endpoints should be rate-limited so attackers can't brute-force our app.

When there are login failures, we shouldn’t let users know that username or password validation failed.

The error message should be generic to avoid guessing.
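
A sketch of a login handler that does this; the user lookup helper (findUserByName) is hypothetical and the password check uses bcrypt:

const bcrypt = require('bcrypt');

async function login(req, res) {
  const { username, password } = req.body;
  const user = await findUserByName(username); // assumed lookup helper

  // Return the same generic message whether the username or the password failed.
  if (!user || !(await bcrypt.compare(password, user.passwordHash))) {
    return res.status(401).json({ error: 'Invalid username or password' });
  }

  // ...issue tokens or create a session here
}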

Using a centralized user management system is also a good idea to avoid multiple accounts.

Access Control

The principle of least privilege should be followed.

This means the least amount of power should be granted to a user for them to do their work.

Never work with root accounts except for account management.

All instances or containers should be run with a role or service account.

We can assign permissions to groups and not to users.

This makes permission management easier and more transparent.

Security Misconfiguration

Access to the production environment internals should be through the internal network only.

This can be done via SSH or other ways.

Internal services should never be exposed.

Internal network access should be restricted to a few users.

If we're using cookies, we should configure them in secure mode so that they're only sent over SSL/TLS.

They should also be configured with the SameSite attribute so that they're only sent with requests from the same site.

We should also set cookies to HttpOnly to prevent client-side JavaScript code from accessing the cookies.
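
For example, with express-session (a sketch; the secret comes from configuration and app is assumed to be an Express app):

const session = require('express-session');

app.use(session({
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: true, // only send the cookie over HTTPS
    httpOnly: true, // not readable from client-side JavaScript
    sameSite: 'strict' // only sent with same-site requests
  }
}));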

Servers should be protected with strict and restrictive access rules.

Threats should be prioritized with security threat modeling.

DDoS attack protection should be added with HTTP and TCP load balancers.

We should perform penetration tests periodically.

Sensitive Data Exposure

We should only accept SSL/TLS connections and enforce strict transport security with headers.
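
For instance, a small middleware that sets the Strict-Transport-Security header (the one-year max-age is illustrative, and app is assumed to be an Express app):

// Tell browsers to use HTTPS only for the next year, including subdomains.
app.use((req, res, next) => {
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  next();
});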

Networks should be separated into subnets to ensure each node has the least access permissions.

All services and instances that don’t need Internet access should be blocked from accessing the Internet.

Secrets should be stored in vault products like AWS KMS, Google Cloud KMS, etc.

Access to sensitive instance metadata should be locked down.

Data in transit should be encrypted.

Secrets shouldn’t be in log statements.

Plain passwords shouldn't be shown on the front end, and the back end shouldn't store sensitive information in plain text.

Components With Known Security Vulnerabilities

Docker images should be scanned for known vulnerabilities.

Automatic patching and upgrades should be enabled to avoid running outdated OS versions without security patches.

Users should be issued ID, access, and refresh tokens so that access tokens can be short-lived and refreshed periodically.

Conclusion

There are many things to think about when securing our apps and data.

To keep strict access control, we have to do all the items described above, and more.