JavaScript

Design Patterns

Standardizing the way we structure our JavaScript allows us to collaborate more effectively with one another. Using intelligent design patterns improves maintainability, code readability, and even helps to prevent bugs.

General Best Practices

Use Meaningful Names for Variables

Make sure your variables are named in a way that makes sense. This reduces the need for additional comments as your code speaks for itself.

Avoid:

const ddmmyyyy = new Date();

Prefer:

const date = new Date();

Also be mindful of choosing variable names that make your code searchable.

Avoid:

// What is 86400000?
setTimeout(randomFunction, 86400000);

Prefer:

const MILLISECONDS_IN_A_DAY = 86_400_000;
setTimeout(blastOff, MILLISECONDS_IN_A_DAY);

Avoid Mental Mapping

Don't force people to memorize the variable context. A variable's purpose should be clear even to a reader who has not followed the whole history of how it came to be.

Avoid:

const names = ['John', 'Jane', 'Joe'];

names.forEach(v => {
  doStuff();
  doSomethingExtra();
  // ...
  // ...
  // ...
  // What is this 'v' for?
  dispatch(v);
});

Prefer:

const names = ['John', 'Jane', 'Joe'];

names.forEach(name => {
  doStuff();
  doSomethingExtra();
  // ...
  // ...
  // ...
  // 'name' makes sense now
  dispatch(name);
});

Do Not Add Unneeded Context

If your class or object name tells you what it is, there is no need to include it in the variable name.

Avoid:

const car = {
  carMake: 'Honda',
  carModel: 'Accord',
  carColor: 'Blue'
};

function paintCar(car) {
  car.carColor = 'Red';
}

Prefer:

const car = {
  make: 'Honda',
  model: 'Accord',
  color: 'Blue'
};

function paintCar(car) {
  car.color = 'Red';
}

Use Strong Type Checks Where Applicable

In some cases, you can't be certain of the type you'll get back for a piece of data, but in most cases, using === instead of == can help you avoid all sorts of unnecessary problems later on with truthy and falsy values. When you use ==, your variables will be converted to match types. The === operator forces a comparison of values and types.

0 == false // true
0 === false // false
2 == '2' // true
2 === '2' // false
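As a concrete illustration (the function name here is just for the example), strict comparison keeps a check from accepting coerced look-alikes:

```javascript
// `isZero` only accepts the number 0 — no coercion surprises
const isZero = value => value === 0;

console.log(isZero(0));     // true
console.log(isZero(false)); // false, because === does not coerce
console.log(isZero('0'));   // false
```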

Use Descriptive Function Names

A function name should be a verb or a verb phrase that fully exposes the intent behind it, as well as the intent of its arguments. In short, a function's name should say what it does.

Avoid:

function sMail(user) {
  // ...
}

Prefer:

function sendEmail(emailAddress) {
  // ...
}

Minimize Function Arguments Where Possible

Ideally, you should avoid a long list of arguments. Limiting the number of function parameters makes code more readable and easier to test.

Ideally, your function takes one or two arguments. If you have more than three arguments, your function is usually trying to do too much. In cases where it's not, a higher-level object passed as a single argument will usually suffice.

Avoid:

const createMenu = (title, body, buttonText, cancellable) => {
  // ...
}

createMenu('Foo', 'Bar', 'Baz', true);

Prefer:

const createMenu = ({ title, body, buttonText, cancellable }) => {
  // ...
}

createMenu({
  title: 'Foo',
  body: 'Bar',
  buttonText: 'Baz',
  cancellable: true
});

Functions Should Only Do One Thing

When your function does more than one thing, it is harder to test, compose, and reason about. When you isolate a function to just one action, it can be refactored easily and your code will read much cleaner.

Avoid:

function notifyListeners(listeners) {
  listeners.forEach(listener => {
    const listenerRecord = database.lookup(listener);
    if (listenerRecord.isActive()) {
      notify(listener);
    }
  });
}

Prefer:

function notifyActiveListeners(listeners) {
  listeners.filter(isListenerActive).forEach(notify);
}

function isListenerActive(listener) {
  const listenerRecord = database.lookup(listener);
  return listenerRecord.isActive();
}

Don't Repeat Yourself (DRY)

You should do your best to avoid code duplication. Writing the same code more than once is not only wasteful when writing it the first time, but even more so when trying to maintain it. Instead of having one change affect all relevant modules, you have to find all duplicate modules and repeat that change.

Often, duplication in code happens because two or more modules have slight differences, although they share a lot in common. Keeping your code DRY means creating an abstraction that can handle this set of different things with just one function/module/class.

Avoid:

function showDeveloperList(developers) {
  developers.forEach(developer => {
    const expectedSalary = developer.calculateExpectedSalary();
    const experience = developer.getExperience();
    const githubLink = developer.getGithubLink();
    const data = {
      expectedSalary,
      experience,
      githubLink
    };

    render(data);
  });
}

function showManagerList(managers) {
  managers.forEach(manager => {
    const expectedSalary = manager.calculateExpectedSalary();
    const experience = manager.getExperience();
    const portfolio = manager.getMBAProjects();
    const data = {
      expectedSalary,
      experience,
      portfolio
    };

    render(data);
  });
}

Prefer:

function showEmployeeList(employees) {
  employees.forEach(employee => {
    const expectedSalary = employee.calculateExpectedSalary();
    const experience = employee.getExperience();

    const data = {
      expectedSalary,
      experience
    };

    switch (employee.type) {
      case 'manager':
        data.portfolio = employee.getMBAProjects();
        break;
      case 'developer':
        data.githubLink = employee.getGithubLink();
        break;
    }

    render(data);
  });
}

Use Shorthand Notation When It Makes Sense

Shorthand notation like ternary operators can make your code more readable and scannable. Just remember, overdoing it can have the opposite effect.

Avoid:

let level;
if (score > 100) {
  level = 2;
} else {
  level = 1;
}

let price;
if (discount) {
  price = discount;
} else {
  price = 20;
}

Prefer:

let level = (score > 100) ? 2 : 1;
let price = discount || 20;
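One caveat worth noting: || falls back for any falsy value, including a legitimate 0 or empty string. When those are valid values, the nullish coalescing operator (??) is the safer shorthand, since it only falls back on null or undefined:

```javascript
const fullPrice = 20;

// || falls back for every falsy value, including a legitimate 0
console.log(0 || fullPrice); // 20

// ?? only falls back when the value is null or undefined
console.log(0 ?? fullPrice); // 0
console.log(null ?? fullPrice); // 20
```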

More on shorthand notation at SitePoint

See more clean code practices at the clean-code-javascript repo on GitHub

Writing Modern JavaScript

It's important we use language features that are intended to be used. This means not using deprecated functions, methods, or properties. Whether we are using plain JavaScript or a library, we should not use deprecated features. Using deprecated features can have negative effects on performance, security, maintainability, and compatibility.

On all new projects, you should be using up-to-date JavaScript methodologies combined with build process tools like webpack and Babel to ensure browser compatibility. This allows us to use modern techniques while being certain our code will not break in older systems.

Some older projects that have not yet been upgraded may not have the capability to use the most modern techniques, but it is still important to have processes in place that allow us to grow the technology stack as a project matures. In these cases, you should still follow best practice recommendations even if the newest patterns are not yet available to you.

Using Classes

Before ES6, classes in JavaScript were created by building a constructor function and adding properties by extending the prototype object. This created a fairly complex way to extend classes and deal with prototypal inheritance. Modern techniques allow us to create and extend classes directly and write cleaner code.

The old way:

var MyClass = function () {
  this.something = 0;
};

MyClass.prototype.add = function () {
  this.something++;
};

The new way:

class MyClass {
  constructor() {
    // If this class extended another, super() would be called first here
    this.something = 0;
  }

  add() {
    this.something++;
  }
}

Classes in modern JavaScript offer a nicer syntax to access the standard prototypal inheritance we've already had for a while, but can also help guide the structure of componentized code. When deciding whether or not to use a Class, think of the code you're writing in the greater context of the application.

Classes will not always be the answer for creating modular code in your application, but you should consider them when you need to create discrete components or when those components need to inherit behaviors from other components, while still functioning as a single unit. For example, a utility function that searches a string for text may not be a good utilization of Classes, but an accordion menu with children components would.

Using Arrow Functions

Arrow functions are a great way to slim down simple code blocks. When using this style of function, be sure not to over-engineer a simple action just to make it smaller. For example, this is a good use of a simple multiline function being compressed into a single line:

Multi-line:

const init = msg => {
  console.log(msg);
};

Single line:

const init = msg => console.log(msg);

This is a very simple function, so compressing it down into a single line won't cause any readability issues. However, the more complicated this function gets, the less likely it should be on a single line.

Something important to remember is that arrow functions are not always the answer. They were introduced to address a very specific problem many engineers faced with preserving the context of this. In a traditional function, this is bound to different values depending on the context in which it is called. With arrow functions, this is lexically bound to the enclosing code. Because arrow functions also have syntax benefits, as a general rule, use arrow functions unless you need a more traditional treatment of this (like in an event listener).
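A small sketch of the difference (the class and callback are illustrative): because the forEach callback is an arrow function, this inside it still refers to the Counter instance, where a traditional function expression would have received its own this:

```javascript
class Counter {
  constructor() {
    this.count = 0;
  }

  incrementFor(items) {
    // The arrow function inherits `this` from incrementFor,
    // so `this.count` refers to the Counter instance
    items.forEach(() => {
      this.count += 1;
    });
  }
}

const counter = new Counter();
counter.incrementFor(['a', 'b', 'c']);
console.log(counter.count); // 3
```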

Concatenating Strings and Templating

When dealing with strings in JavaScript, it is very common to need some form of concatenation along the way. Before ES6, we concatenated strings with the + operator:

var first = 'hello';
var last = 'world';
var msg = 'I said, "' + first + ' ' + last + '" to the crowd.';

Modern techniques give us template literals, which let us concatenate strings in a much more straightforward manner using the backtick and some basic templating:

const first = 'hello';
const last = 'world';
const msg = `I said, "${first} ${last}," to the crowd.`;

Destructuring Arrays and Objects

Destructuring is a JavaScript technique that allows you to easily assign values and properties from arrays and objects into specific variables. This direct mapping affords us an easy way to access data from objects and arrays in a more convenient syntax.

The old way:

var arr = [1, 2, 3, 4];
var a = arr[0];
var b = arr[1];
var c = arr[2];
var d = arr[3];

function assign(props) {
  const content = props.content;
  const title = props.title;
  const id = props.id;
  // ...
};

The new way:

const [a, b, c, d] = [1, 2, 3, 4];

const assign = props => {
  const { content, title, id } = props;
  // ...
};

Use destructuring whenever possible to slim down your code and improve overall readability.
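Destructuring also supports default values, renaming, and rest elements, which can replace several lines of defensive assignment (the names below are illustrative):

```javascript
const settings = { theme: 'dark', locale: 'en' };

// The default kicks in when the property is missing;
// `locale: language` renames the property on assignment
const { theme, locale: language, fontSize = 16 } = settings;

console.log(theme, language, fontSize); // 'dark' 'en' 16

// Rest elements collect whatever is left over
const [first, ...others] = [1, 2, 3, 4];
console.log(first, others); // 1 [2, 3, 4]
```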

The Spread Operator

The ... spread operator can simplify several operations in JavaScript and should be used in place of more verbose syntax. It can take a little while to learn the different ways the spread operator is used, but once you are familiar with it, the code becomes much more obvious. Spread allows an array or object to be expanded or merged, and makes it easy to convert array-like iterables such as NodeLists and Sets into true arrays.

The old way:

var newObject = Object.assign({}, oldObject, {
  overridden: 'property',
});

var nodes = Array.prototype.slice.call(document.getElementsByTagName('p'));

var arr = ['some', 'values'];
var largerArr = ['newValue'].concat(arr);

The new way:

const newObject = {
  ...oldObject,
  overridden: 'property',
};

const nodes = [...document.getElementsByTagName('p')];

const arr = ['some', 'values'];
const largerArr = ['newValue', ...arr];

More on using spread syntax from the MDN docs.
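Spread also pairs well with Set for de-duplicating arrays, and with rest parameters for variadic functions. A small sketch:

```javascript
// Spreading a Set back into an array removes duplicates
const tags = ['js', 'css', 'js', 'html', 'css'];
const uniqueTags = [...new Set(tags)];
console.log(uniqueTags); // ['js', 'css', 'html']

// Rest parameters gather arguments; spread expands an array into them
const sum = (...numbers) => numbers.reduce((total, n) => total + n, 0);
const values = [1, 2, 3];
console.log(sum(...values)); // 6
```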

Other Object Enhancements

In addition to the spread operator, object declarations now support computed property names:

The old way:

var obj = {};
obj[someDynamicKeyName] = 'value';

The new way:

const objWithComputedPropertyName = {
  [someDynamicKeyName]: 'value'
};

A shorthand property syntax is also available to handle the common situation where an object key is assigned a variable with the same identifier:

The old way:

var title = getUserInput('title');
var content = getUserInput('content');

return {
  title: title,
  content: content,
};

The new way:

const title = getUserInput('title');
const content = getUserInput('content');

return { title, content };

Componentizing Your Code

Keeping different bits of functionality in your code reasonably separated is important to building and maintaining a scalable system over time. In the past, we accomplished this with build systems that leaned on file concatenation as the project's abstraction layer. Modern JavaScript techniques allow us to use import statements to break apart and reassemble your code into consumable chunks.

When you're building your project, be sure to lean on imports to wire your application together. As of right now, we do still need a build system to process the code so it can be consumed by a browser, but using this modern technique will make sure our code is well structured as the project ages. You can also use import statements to load parts of an external library you may need for any given component. The code below will give you an example:

// Only loading in the map method from lodash, because that's all we need!
import map from 'lodash/map';

This also allows you to build one component and import it into another.

It's also worth noting that named exports can be imported by wrapping the exported function within curly braces:

import { example } from 'example/lib';

This is only possible if the exported component is a named export like so:

export const example = 66;

Modules

When creating your own modules, be sure to think about how they will be used by others. Luckily, ES6 modules make this a simple task.

There are many ways you can export a module, but exposing specific functions and/or data structures through an ES6 module is typically the preferred way.

// datastructure.js
// Private variable to the module
const data = {};

// Private function to the module
const process = value => {
  // Complex logic
  return value;
};

// the two functions below are public
export const getData = field => {
  return process(data[field]);
};

export const addData = (field, value) => {
  data[field] = value;
};

In the module above, only two functions are being exposed; everything else is private to the module. It can therefore be used as follows.

import { addData, getData } from './datastructure';

addData('key', 'myvalue');

const value = getData('key');

Avoid using classes unless there's a good reason to. Consider the following example:

import Module from './mymodule';
/* Module is a ES6 class */
new Module('.element-selector', {
  onEvent1: () => {},
  onEvent2: () => {},
});

A good indicator that you don't need classes is when you don't need the instance of that class. The code sample below provides a better alternative.

// Option 1: still using classes but with a better design
import Module from './mymodule';

const module1 = new Module('.element-selector', {
  // options...
});
module1.addEventListener('onEvent1', () => {});
module1.addEventListener('onEvent2', () => {});
module1.doSomething();
module1.hide();

The example above changes the design of the module API a bit and assumes multiple, separate instances of the module are desired. However, sometimes that might not even be necessary. If all you need is to abstract some complex logic and accept a couple of parameters, exposing a factory/init function is all you need.

// Option 2: not using classes
import module from './mymodule';

module('.element-selector', {
  // options...
});

Don't Pollute the Window Object

Adding methods or properties to the window object or the global namespace should be done carefully. Polluting the window object can result in collisions with other scripts. If you need to expose data to the rest of your application, first consider some form of state management. Sometimes, however, exposing methods or properties on the window global is necessary; when it is, wrap your code in a closure and expose only what you must, with caution.

When a script is not wrapped in a closure, the current context or this is actually window:

console.log(this === window); // true

for (var i = 0; i < 9; i++) {
  // ...
}

var result = true;

console.log(window.result === result); // true
console.log(window.i === i); // true

When we put our code inside a closure, our variables are private to that closure unless we expose them:

(function () {
  for (let i = 0; i < 9; i++) {
    // ...
  }

  window.result = true;
})();

console.log(typeof window.result !== 'undefined'); // true
console.log(typeof window.i !== 'undefined'); // false

Notice how i was not exposed to the window object.

Secure Your Code

In JavaScript, we often have to insert new elements with dynamic attributes and content into the DOM. A common way to do this is to set the element's innerHTML property like so:

const someElement = document.getElementById('someElement');
const someUrl = 'https://someurl.com/';
const someContent = 'Some content';

someElement.innerHTML = `<div class="container"><a href="${someUrl}">${someContent}</a></div>`;

However, passing HTML strings to innerHTML and APIs like it can expose your code to cross-site scripting (XSS), the most common security vulnerability in JavaScript. Because these APIs evaluate the strings passed to them as HTML, they can execute potentially harmful code. For instance, if someContent in the above example is <img src="fakeImage" onerror="alert('hacked!')" />, the JavaScript in the onerror attribute will be executed.

There are several measures you can take to circumvent this XSS vulnerability:

Use textContent instead of innerHTML

When setting the human-readable content of a single element, using textContent is safer than using innerHTML because it does not parse strings as HTML—meaning any malicious code passed to it will not be executed. Refer to MDN's documentation on textContent for more info.

Use the DOM API to create and add elements

When you need to create multiple DOM elements, use the document.createElement method to create new elements and the Element API to set attributes and append them to the document. Creating your own elements and attributes will ensure that only those you explicitly define will make their way into the DOM.

Note that appending new elements to the DOM is a relatively expensive operation, so in general you'll want to build out the structure of new elements before adding them to the DOM, preferably within a single container element, then append them to the document all at once.

Refer to MDN's documentation on document.createElement and the Element API for more info.

Sanitize HTML strings before adding to the DOM

In general, using the Element API is the preferred best practice to safely create and add DOM elements. However, it tends to result in much more verbose code compared to HTML-parsing methods like innerHTML. This can become painful if you need to dynamically create a large number of new elements. In these cases, the convenience of methods like innerHTML can be extremely tempting.

If you need to generate a large amount of HTML dynamically, consider using a DOMParser to parse and sanitize HTML strings before adding the HTML to the DOM with a method like innerHTML. Parsing HTML strings with a DOMParser will not automatically make the code any safer, but it will allow you to access the elements from the string and strip potentially unsafe tags and attributes before they have a chance to get executed. Refer to MDN's documentation on DOMParser for more info.

Alternatively, you may consider adding a client-side sanitization library to your project so you can strip potentially malicious code from your HTML before you add it to the DOM. Passing your HTML strings through a sanitizer can help prevent XSS attacks when using methods like innerHTML. However, no library is perfect, so be aware that you are relying on the security of the sanitizer you choose. Also, remember to consider the effect on performance when deciding whether to add any large library to your project.
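For the narrow case of interpolating untrusted plain text into an HTML string, a minimal escaping helper is often enough. The sketch below is an illustration of the pattern, not a complete defense; it covers text content and quoted attribute values only:

```javascript
// Escape the five characters HTML treats specially.
// Sufficient for text content and quoted attribute values;
// NOT sufficient for URLs, CSS, or inline scripts.
const escapeHtml = str =>
  String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');

console.log(escapeHtml('<img src="x" onerror="alert(1)">'));
// &lt;img src=&quot;x&quot; onerror=&quot;alert(1)&quot;&gt;
```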

Performance

Writing performant code is absolutely critical. Poorly written JavaScript can significantly slow down and even crash the browser. On mobile devices, it can prematurely drain batteries and contribute to data overages. Performance at the browser level is a major part of user experience, and is very important to our work.

Only Load Libraries You Need

JavaScript libraries should only be loaded on the page when needed. React + React DOM are around 650 KB together. This isn't a huge deal on a fast connection, but can add up quickly in a constrained bandwidth situation when we start adding a bunch of libraries. Loading a large number of libraries also increases the chance of conflicts.

Not only should you only load the libraries you need, but using import statements, you should only load the parts of the libraries you need. For example, if you're using Lodash, it can be very large to load the entire system, especially if you're not using all of it. You should always utilize import statements to target relevant parts of an external library to make sure you're loading only what you need. The code block below will illustrate this point:

import map from 'lodash/map';
import tail from 'lodash/tail';
import times from 'lodash/times';
import uniq from 'lodash/uniq';

This code block imports four methods from Lodash instead of the entire library. Read more about the proper way to load Lodash. These imports can also be reduced to a single line, but for Lodash specifically, it's more performant to separate them.

Cache DOM Selections

It's a common JavaScript mistake to reselect something unnecessarily. For example, we do not need to reselect the menu every time its button is clicked. Rather, we select the menu once and cache the element reference. This applies whether you are using a library or not. For example:

Uncached:

const hideButton = document.querySelector('.hide-button');

hideButton.addEventListener('click', () => {
  const menu = document.getElementById('menu');
  menu.style.display = 'none';
});

Cached:

const menu = document.getElementById('menu');
const hideButton = document.querySelector('.hide-button');

hideButton.addEventListener('click', () => {
  menu.style.display = 'none';
});

Notice how, in the cached version, we pull the menu selection out of the event listener so it only happens once. The cached version is, not surprisingly, the faster way to handle this situation.

Event Delegation

Event delegation is the act of adding one event listener to a parent node to listen for events bubbling up from its children. This is much more performant than adding one event listener for each child element. Here is an example:

document.getElementById('menu').addEventListener('click', event => {
  const { currentTarget } = event;
  let { target } = event;

  if (currentTarget && target) {
    if (target.nodeName === 'LI') {
      // Target stuff...
    } else {
      while (currentTarget.contains(target)) {
        // Parent stuff...
        target = target.parentNode;
      }
    }
  }
});

You may be wondering why we don't just add one listener to the <body> for all our events. Well, we want the event to bubble up the DOM as little as possible for performance reasons. This would also be pretty messy code to write.

More on event delegation from GoMakeThings.

Debounce, Throttle, and requestAnimationFrame

Browser events such as scrolling, resizing, and cursor movements happen as fast as possible and can cause performance issues. By debouncing, throttling, or using requestAnimationFrame on our functions, we can increase performance by controlling the rate at which an event listener calls them.

Debouncing

Debouncing a function postpones its execution until a defined amount of time has passed since it was last invoked, i.e., execute this function only once 200ms have passed without another call. A common use case would be when resizing a browser window; we can apply classes or move elements after the resize has happened.

// Returns a function, that, as long as it continues to be invoked, will not
// be triggered. The function will be called after it stops being called for
// `wait` milliseconds.
const debounce = (func, wait) => {
  let timeout;

  // This is the function that is returned and will be executed many times
  // We spread (...args) to capture any number of parameters we want to pass
  return function executedFunction(...args) {

    // The callback function to be executed after 
    // the debounce time has elapsed
    const later = () => {
      // null timeout to indicate the debounce ended
      timeout = null;
      
      // Execute the callback
      func(...args);
    };
    // Reset the wait timer on every call. This is what keeps
    // `func` from running until calls stop arriving for `wait` milliseconds.
    clearTimeout(timeout);
    
    // Restart the debounce waiting period.
    // setTimeout returns a truthy value (it differs in web vs Node)
    timeout = setTimeout(later, wait);
  };
}

// Execute the function after a user has stopped scrolling for 250ms
window.addEventListener('scroll', debounce(veryIntenseFunction, 250));

// or:
const anotherIntenseFunction = debounce(() => {
  // All the taxing stuff you do
}, 250);

window.addEventListener('scroll', anotherIntenseFunction);

There is a more advanced version of debounce where we can pass an immediate flag. In the examples above, we always wait until the end of the debounce to execute the callback, but with immediate, the function executes at the leading edge instead, and it will not execute again until the wait period has fully elapsed without further calls.
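A sketch of that variant (the immediate flag is the only addition to the debounce shown earlier; onResize is just a stand-in callback):

```javascript
const debounce = (func, wait, immediate = false) => {
  let timeout;

  return function executedFunction(...args) {
    // Leading edge: fire now only when no timer is pending
    const callNow = immediate && !timeout;

    const later = () => {
      timeout = null;
      // Trailing edge: only fire here when immediate is off
      if (!immediate) {
        func(...args);
      }
    };

    clearTimeout(timeout);
    timeout = setTimeout(later, wait);

    if (callNow) {
      func(...args);
    }
  };
};

let calls = 0;
const onResize = () => {
  calls += 1;
};

const debouncedResize = debounce(onResize, 250, true);

// The first call fires immediately; rapid repeats are swallowed
debouncedResize();
debouncedResize();
debouncedResize();
console.log(calls); // 1
```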

Throttling

Throttling a function will cause it to only be called a maximum number of times over a defined period of time, i.e., only execute this function once every 50ms. A common use case would be when scrolling a browser window; we may want an element to show up as we scroll down the page, but killing performance by checking the scroll position constantly isn't necessary. Debouncing wouldn't work in this example because we don't want to wait for the user to stop scrolling.

// Pass in the callback that we want to throttle and the delay between throttled events
const throttle = (callback, delay) => {
  // Create a closure around these variables.
  // They will be shared among all events handled by the throttle.
  let throttleTimeout = null;
  let storedEvent = null;

  // This is the function that will handle events and throttle callbacks when the throttle is active.
  const throttledEventHandler = event => {
    // Update the stored event every iteration
    storedEvent = event;

    // We execute the callback with our event if our throttle is not active
    const shouldHandleEvent = !throttleTimeout;

    // If there isn't a throttle active, we execute the callback and create a new throttle.
    if (shouldHandleEvent) {
      // Handle our event
      callback(storedEvent);

      // Since we have used our stored event, we null it out.
      storedEvent = null;

      // Create a new throttle by setting a timeout to prevent handling events during the delay.
      // Once the timeout finishes, we execute our throttle if we have a stored event.
      throttleTimeout = setTimeout(() => {
        // We immediately null out the throttleTimeout since the throttle time has expired.
        throttleTimeout = null;

        // If we have a stored event, recursively call this function.
        // The recursion is what allows us to run continuously while events are present.
        // If events stop coming in, our throttle will end. It will then execute immediately if a new event ever comes.
        if (storedEvent) {
          // Since our timeout finishes:
          // 1. This recursive call will execute `callback` immediately since throttleTimeout is now null
          // 2. It will restart the throttle timer, allowing us to repeat the throttle process
          throttledEventHandler(storedEvent);
        }
      }, delay);
    }
  };

  // Return our throttled event handler as a closure
  return throttledEventHandler;
}

const throttledHandler = throttle(() => {
  // All the taxing stuff you do
}, 250);

window.addEventListener('scroll', throttledHandler);

requestAnimationFrame

requestAnimationFrame is similar to throttling, but it's a native browser API that fires before the browser's next repaint (typically 60 times per second). Its very name tells us when it's best to use: while animating things. This would be the case when our JavaScript function is updating element positions, sizes, or anything else that's "painting" to the screen. requestAnimationFrame has a very simple syntax and can be inlined into your function for one-time use:

let scrollFrame = null;

const rafThrottle = () => {
  // If a frame is already scheduled, cancel it
  if (scrollFrame) {
    window.cancelAnimationFrame(scrollFrame);
  }

  // Schedule the work for the next animation frame
  scrollFrame = requestAnimationFrame(() => {
    // Run our scroll functions
    console.log('throttled');
  });
};

window.addEventListener('scroll', rafThrottle);

For more information and examples of debouncing, throttling, and requestAnimationFrame, see Debouncing and Throttling Explained Through Examples, The Difference Between Throttling and Debouncing, and JavaScript Debounce Function.

Client-side Data

When dealing with client-side data requests (Ajax calls), there are a lot of different methods to consider. This portion of the document will walk you through various situations and talk about the different technologies and patterns you may encounter along the way.

Using Fetch and Promises for Modern Environments

The fetch API is a modern replacement for XMLHttpRequest. It is generally well supported, with features present in all evergreen browsers (browsers that auto-update). Fetch is recommended in all modern environments when making Ajax calls or dealing with client-side data requests. Visit the MDN fetch documentation for a basic example of how to use this API.

To properly use fetch, support for promises also needs to be present (both have the same browser support). The support requirement for both is an important distinction when your project needs to support non-evergreen browsers (IE 11 and under), because both APIs will need to be polyfilled to make fetch happen.

To polyfill with NPM, we recommend adding the following packages to your dependencies: promise-polyfill and whatwg-fetch. They are both applicable at different points in the build process. Promises are polyfilled at the file-level with an import and fetch is polyfilled at the build level in your task runner. Please see the official whatwg-fetch documentation for detailed installation instructions.

If you are unable to process the polyfills in a modern workflow, the files can also be downloaded and enqueued separately (fetch, promise), but if possible, they should be implemented at the build level.

Using A Normal Ajax Call for Older Environments

For various reasons, you may not be able to use a modern technique for client-side data requests on a project. In that situation, it usually isn't necessary to load an entire library like jQuery for a single feature; try writing a vanilla Ajax call instead. Basic Ajax calls do not require any polyfills or fallbacks. You can reference the XMLHttpRequest Browser Compatibility table on MDN for specific feature support.

Please see the MDN XMLHttpRequest documentation for an example of a basic Ajax call.
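A minimal sketch of such a call (endpoint and callback names are illustrative):

```javascript
// Basic GET request with XMLHttpRequest; no polyfills required.
function getData(url, onSuccess, onError) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== XMLHttpRequest.DONE) {
      return;
    }
    if (xhr.status >= 200 && xhr.status < 300) {
      onSuccess(JSON.parse(xhr.responseText));
    } else {
      onError(xhr.status);
    }
  };
  xhr.send();
}
```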

When to Use a Client-side Data Request Library

Sometimes a project may require a more robust solution for managing your requests, especially if you will be making many requests to various endpoints. While fetch can do most (and someday all) of the things we need, there are a few areas where it may fall short in your project:

  • Cancelable requests
  • Timeout requests
  • Request progress

It should be noted that these gaps are being addressed: cancelable requests, for example, are possible with the AbortController API where it is supported, and timeout requests can also be handled by using a wrapper function.
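A sketch of the wrapper-function approach to timeouts (names and default values are illustrative):

```javascript
// Reject if fetch does not settle within `timeout` milliseconds.
// Note: this stops waiting but does not cancel the underlying request.
function fetchWithTimeout(url, options = {}, timeout = 5000) {
  const timer = new Promise((resolve, reject) => {
    setTimeout(() => reject(new Error('Request timed out')), timeout);
  });
  return Promise.race([fetch(url, options), timer]);
}
```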

Certain libraries have these features built in, are still promise-based, and can come with a few other advantages that fetch doesn’t have, such as transformers, interceptors, and built-in XSRF protection. If you find yourself needing features that are outside the scope of native JavaScript, you may want to evaluate the benefit of using a library.

If you plan on making many requests over the lifetime of the application and you don’t need the features listed above, consider making a helper function or module that handles all of your application’s fetch calls so you can easily include things like expected error handling, a common URL base, any cookies you may need, and any mode changes like CORS. Overall, you should be able to accomplish what you need with fetch in the majority of cases.
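A sketch of such a helper (the base URL, defaults, and names are hypothetical):

```javascript
// Central place for shared request concerns: base URL, credentials,
// request mode, and expected error handling.
const API_BASE = 'https://example.com/wp-json'; // hypothetical

async function apiFetch(path, options = {}) {
  const response = await fetch(`${API_BASE}${path}`, {
    credentials: 'same-origin',
    ...options,
  });
  if (!response.ok) {
    throw new Error(`Request to ${path} failed: ${response.status}`);
  }
  return response.json();
}
```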

Certain codebases may already have such libraries in place. Many legacy projects use jQuery.ajax() to make their requests. Where appropriate, attempt to phase out jQuery in favor of a vanilla solution; in many cases, replacing it with fetch or XMLHttpRequest will be possible.

Concatenating Requests

When constructing a page that contains many client-side data requests, consider concatenating them into a single Ajax call. This helps you avoid piling up requests or chaining them through callbacks and nested promises when parts of the data depend on other parts.
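For example, rather than issuing one request per piece of data, a single combined endpoint (hypothetical here) can return everything the page needs:

```javascript
// Avoid: three round trips that must be coordinated on the client.
// fetch('/api/posts'); fetch('/api/author'); fetch('/api/comments');

// Prefer: one request that returns all page data (endpoint is hypothetical).
function getPageData() {
  return fetch('/api/page-data?include=posts,author,comments').then(
    (response) => response.json()
  );
}
```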

GraphQL

GraphQL is an open source data query and manipulation language. It provides an alternative to REST because it allows for a consistent way to make declarative queries. We first define the data structure(s) we need, then request the data, and return only the data that was requested. This creates an environment of smaller, more targeted calls to an API. It also allows us to concatenate multiple calls into single data requests, reducing the overhead and time to load.

An essential part of a GraphQL API is an API schema. GraphQL requires a human-readable schema that describes the available types and how they relate to one another. It is recommended to use GraphQL on a project if you are able.
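A sketch of a GraphQL request over fetch (the endpoint, fields, and function name are hypothetical):

```javascript
// One declarative query returns only the requested fields,
// replacing what might otherwise be several REST calls.
function graphqlRequest(query) {
  return fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  }).then((response) => response.json());
}

// Usage (hypothetical schema):
// graphqlRequest(`
//   query {
//     post(id: "1") {
//       title
//       author { name }
//     }
//   }
// `).then((result) => console.log(result.data));
```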

Unit and Integration Testing

We generally employ unit and integration tests only when building applications that are meant to be distributed. Writing tests for client themes usually does not offer a huge amount of value (there are of course exceptions to this). When writing tests, it's important to use the framework that best fits the situation and make sure it is well documented for future engineers coming onto the project.

Libraries

With the influx of JavaScript upgrades in recent years, the need for a third-party library to polyfill functionality is becoming more and more rare (outside of a build script). Don't load in a library unless the benefit outweighs its size and added load time. While a quick jQuery method is often faster to write, it is rarely worth bringing in an entire library for one-off instances.

If you are working on a legacy project that already contains a library, make sure you're still evaluating the need for it as you build out features to best set up clients for the future.

Cookies

Safari browser's Intelligent Tracking Prevention (ITP) 2.1 sets the expiration period at 7 days for all first-party cookies set by in-line (or tag management solution injected) vendor JavaScript libraries like Google Analytics’ analytics.js.

Authentication cookies (secure and HTTP-only) which have been properly implemented won’t be affected by the 7-day cap. These cookies should be deployed using the Set-Cookie header in the HTTP response and inaccessible via JavaScript’s document.cookie API.

Solutions for other types of cookies include:

  1. Using localStorage to persist the unique identifier (i.e. the client ID) instead of relying solely on the _ga cookie
  2. Setting the _ga cookie with the HTTP response, rather than with JavaScript

Keep in mind that these solutions come with caveats: using localStorage only works on the same domain and would not work for cross-domain tracking.
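A sketch of the localStorage approach (the key name and ID format are illustrative):

```javascript
// Persist a client ID in localStorage instead of relying on the _ga cookie.
// Subsequent calls return the same identifier.
function getClientId() {
  let clientId = localStorage.getItem('clientId');
  if (!clientId) {
    clientId = 'cid-' + Math.random().toString(36).slice(2);
    localStorage.setItem('clientId', clientId);
  }
  return clientId;
}
```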

As an alternative to local storage, server-side tracking via the proxy layer in Cloudflare is probably the best option for clients with significant traffic from Safari.

Updated at Wed, Oct 12, 2022