
ES2026: The smaller features

Published: 27. March 2026  •  javascript

As always, in June we get a new ECMAScript specification with a set of new features. In this blog post, I will show you the features being added to JavaScript in ECMAScript 2026.

You can follow the standardization process of ECMAScript on the TC39 GitHub repository. All proposals that reach stage 4 are typically included in the next ECMAScript release, which is expected to be published in June each year.

The big new feature of ECMAScript 2026 is the Temporal API, which is a major improvement to JavaScript's date and time handling. You can find an introduction to the Temporal API in my other blog post: ES2026: The Temporal API. In this blog post, I will show you the smaller, but still very useful, additions to ECMAScript 2026.

Note that it is still quite early in the ECMAScript 2026 release cycle, so some of these features may not yet be implemented in all JavaScript engines. You can check the current implementation status of each feature on caniuse.com.

Upsert for Map and WeakMap

The problem

A common pattern when working with maps is to check whether a key already exists, and if not, insert a default value. For example:

if (!map.has(key)) {
  map.set(key, defaultValue);
}

const value = map.get(key);

That is noisy and repetitive, and it performs multiple lookups (a has, a get, and possibly a set) when the engine could handle the whole operation in a single call.


What's new

ECMAScript 2026 adds these methods:

- Map.prototype.getOrInsert(key, defaultValue)
- Map.prototype.getOrInsertComputed(key, callback)
- WeakMap.prototype.getOrInsert(key, defaultValue)
- WeakMap.prototype.getOrInsertComputed(key, callback)

They let you say, "give me the existing value for this key, or insert a default and give me that instead."

getOrInsert takes a default value directly as its second argument.

getOrInsertComputed takes a callback as its second argument. That callback is only called to compute the default value when the key is actually missing. This is useful when creating the default value is expensive and should be avoided when possible.


Example

const tags = new Map();
tags.set(100, "javascript");

const post = tags.getOrInsert(101, "untagged");
// post is "untagged", and tags contains the entries 100 => "javascript" and 101 => "untagged"

const existing = tags.getOrInsert(100, "untagged");
// existing is "javascript", and tags still contains the entry 100 => "javascript"
// (the default value is not inserted because the key already exists)

const expensiveDefault = tags.getOrInsertComputed(102, () => {
  console.log("...expensive calculation...");
  return "untagged";
});
// Logs "...expensive calculation...", then expensiveDefault is "untagged",
// and tags now contains the entries 100 => "javascript", 101 => "untagged", and 102 => "untagged"

const skipExpensive = tags.getOrInsertComputed(100, () => {
  console.log("This will not be logged because the key already exists");
  return "untagged";
});
// skipExpensive is "javascript", and tags still contains the entry 100 => "javascript".
// The callback is not called because the key already exists.

JSON.parse source text access

The problem

JSON.parse has always been lossy in certain cases because it only produces JavaScript values, and some JSON tokens do not have a perfect JavaScript representation. This is especially true for large numbers, which can lose precision when parsed as JavaScript Number values.

{
  "first": 999999999999999999,
  "second": 999999999999999999.0,
  "third": 1000000000000000000
}

Once parsed as JavaScript numbers, the numbers in the example collapse into the same imprecise floating-point result. JSON.parse has a second argument, a reviver function, that receives two arguments: the key and the value. But the problem is that the value is already the parsed JavaScript value.

const input = `{
  "first": 999999999999999999,
  "second": 999999999999999999.0,
  "third": 1000000000000000000
}`;

const parsed = JSON.parse(input, (key, value) => {
  console.log("reviver called with:", { key, value });
  return value;
});

// reviver called with: { key: "first", value: 1000000000000000000 }
// reviver called with: { key: "second", value: 1000000000000000000 }
// reviver called with: { key: "third", value: 1000000000000000000 }
// reviver called with: { key: "", value: { first: 1000000000000000000, second: 1000000000000000000, third: 1000000000000000000 } }
console.log(parsed.first);
// 1000000000000000000

console.log(parsed.second);
// 1000000000000000000

console.log(parsed.third);
// 1000000000000000000

console.log(parsed.first === parsed.second && parsed.second === parsed.third);
// true

What's new

ECMAScript 2026 extends JSON.parse revivers so they receive a third argument: a context object. For unmodified primitive values, that context includes the original source text for the value in the property context.source.

If you want the serialization side of the same story, JSON.rawJSON and JSON.isRawJSON were introduced as part of the same proposal. They let you work with raw JSON values in JSON.stringify(...) without forcing them through JavaScript's lossy Number representation first.


Example

In this example the reviver uses context.source to detect when a numeric token is too large to be safely represented as a JavaScript Number. When that happens, it returns a BigInt instead, which can represent the value without loss of precision.

const input = '{"id":90071992547409931234567890,"count":3}';

const parsed = JSON.parse(input, (key, value, context) => {
  console.log("reviver called with:", { key, value, context });

  if (context?.source && /^-?[0-9]+$/.test(context.source)) {
    const asNumber = Number(context.source);

    if (!Number.isSafeInteger(asNumber) || String(asNumber) !== context.source) {
      return BigInt(context.source);
    }
  }

  return value;
});

// reviver called with: { key: "id", value: 9.007199254740993e+25, context: { source: "90071992547409931234567890" } }
// reviver called with: { key: "count", value: 3, context: { source: "3" } }
// (the reviver is also called once for the root object; non-primitive values have no context.source)

console.log(parsed.id);
// 90071992547409931234567890n

console.log(parsed.count);
// 3

You see that the value argument to the reviver is the already-parsed JavaScript number, which has lost precision. But the context.source property contains the original JSON token text, which is a string that preserves all the digits.


Stringify example with JSON.rawJSON

JSON.rawJSON lets you tell JSON.stringify to emit a piece of pre-formatted JSON text verbatim instead of serializing a JavaScript value. You use it in the replacer function of JSON.stringify to wrap values that you want to be treated as raw JSON. In this example we use it to serialize BigInt values as JSON numbers; with the default JSON.stringify behavior, the call would throw a TypeError because BigInt values cannot be serialized to JSON.

const payload = {
  id: 90071992547409931234567890n,
  count: 3n,
};

const json = JSON.stringify(payload, (key, value) => {
  if (typeof value === "bigint") {
    return JSON.rawJSON(value.toString());
  }

  return value;
});

console.log(json);
// {"id":90071992547409931234567890,"count":3}

Iterator sequencing

The problem

JavaScript already has iterators and iterator helpers, but combining multiple iterables into one lazy sequence was not well supported. One workaround was to write a generator function that yields from each source in turn:

function* concat(...sources) {
  for (const source of sources) {
    yield* source;
  }
}

const critical = Iterator.from(["fix prod", "publish patch"]);
const routine = Iterator.from(["reply to email", "update docs"]);
const workday = concat(
  critical,
  ["lunch break"],
  routine
);
console.log([...workday]);
// [ "fix prod", "publish patch", "lunch break", "reply to email", "update docs" ]

What's new

ES2026 adds Iterator.concat(...iterables), which creates a single iterator that lazily yields the values of each of its arguments in order.


Example

If we rewrite the previous example using Iterator.concat, it no longer needs the concat generator function.

const critical = Iterator.from(["fix prod", "publish patch"]);
const routine = Iterator.from(["reply to email", "update docs"]);

const workday = Iterator.concat(
  critical,
  ["lunch break"],
  routine
);

console.log([...workday]);
// [
//   "fix prod",
//   "publish patch",
//   "lunch break",
//   "reply to email",
//   "update docs"
// ]

Uint8Array Base64 and hex conversion

The problem

JavaScript has long had a mismatch between its binary data APIs and its text-based encoding helpers: btoa and atob operate on "binary strings" rather than on typed arrays.

So when we need to convert a Uint8Array to Base64, we have to do something like this:

const bytes = new Uint8Array([115, 104, 105, 112, 32, 105, 100]);
const binaryString = String.fromCharCode(...bytes);
const token = btoa(binaryString);
// c2hpcCBpZA==

const decodedBinaryString = atob(token);
const decodedBytes = Uint8Array.from(decodedBinaryString, (m) => m.codePointAt(0));
console.log(decodedBytes);
// Uint8Array(7) [ 115, 104, 105, 112, 32, 105, 100 ]

What's new

ECMAScript 2026 adds built-in conversion methods for Uint8Array:

- Uint8Array.prototype.toBase64(options?)
- Uint8Array.fromBase64(string, options?)
- Uint8Array.prototype.setFromBase64(string, options?)

The same proposal also adds the corresponding hex helpers:

- Uint8Array.prototype.toHex()
- Uint8Array.fromHex(string)
- Uint8Array.prototype.setFromHex(string)

Example

const bytes = new Uint8Array([115, 104, 105, 112, 32, 105, 100]);
const token = bytes.toBase64({ alphabet: "base64url", omitPadding: true });
console.log(token);
// c2hpcCBpZA

const decodedBytes = Uint8Array.fromBase64(token, { alphabet: "base64url", lastChunkHandling: "loose" });
console.log(decodedBytes);
// Uint8Array(7) [ 115, 104, 105, 112, 32, 105, 100 ]

The toBase64 method takes an optional options object that can specify the alphabet and whether to omit padding. alphabet can be either "base64" (the standard that uses + and /, this is the default if not specified) or "base64url" (the URL-safe variant that uses - and _). The omitPadding option is a boolean that defaults to false, meaning that padding characters (=) will be included in the output by default.

The fromBase64 method can also take an optional options object. alphabet has the same meaning as in toBase64, and it defaults to "base64". The lastChunkHandling option specifies how to handle the last chunk of the Base64 string, which may be shorter than 4 characters: "loose" (the default) accepts a partial final chunk, "strict" requires a fully padded input, and "stop-before-partial" stops decoding just before a trailing partial chunk.


Example for hex conversion:

const bytes = new Uint8Array([115, 104, 105, 112, 32, 105, 100]);
const hex = bytes.toHex();
console.log(hex);
// 73686970206964
const decodedBytes = Uint8Array.fromHex(hex);
console.log(decodedBytes);
// Uint8Array(7) [ 115, 104, 105, 112, 32, 105, 100 ]

Math.sumPrecise

The problem

Addition in JavaScript can result in surprisingly inaccurate results because of the way floating-point numbers are represented and rounded. For example:

[1e20, 0.1, -1e20].reduce((a, b) => a + b, 0)
// 0

Mathematically, the correct answer is 0.1, but floating-point rounding causes the small value to get lost.


What's new

ECMAScript 2026 adds Math.sumPrecise(iterable), a built-in summation method that uses a more accurate algorithm than naive repeated +.


Example

const values = [1e20, 0.1, -1e20];
const precise = Math.sumPrecise(values);
console.log(precise);
// 0.1

Error.isError

The problem

Checking whether something is really an error object is more subtle than it should be.

instanceof Error can fail across realms, such as iframes or VM contexts. And checking Object.prototype.toString.call(value) is not reliable because it can be spoofed by setting Symbol.toStringTag.


What's new

ECMAScript 2026 adds Error.isError(value), a reliable check for native Error objects and subclasses.


Example

const real = new TypeError("Wrong value");
const fake = {
  name: "TypeError",
  message: "Wrong value",
  [Symbol.toStringTag]: "Error",
};

console.log(Error.isError(real));
// true

console.log(Error.isError(fake));
// false

Array.fromAsync

The problem

JavaScript has had Array.from for a long time, but there has been no equally direct way to collect all values from an async iterable into an array.

The usual pattern has to use a for await...of loop to consume the async iterable and push values into an array one by one:

const result = [];
for await (const value of source) {
  result.push(value);
}

What's new

ES2026 adds Array.fromAsync(items, mapFn?, thisArg?).

It is to for await...of what Array.from is to for...of.

The second and third arguments are optional mapping parameters, just like in Array.from. If you provide a mapping function, it will be applied to each value as it is collected into the final array. The thisArg is the value of this inside the mapping function when it is provided.


Example

This example converts an async iterable of page labels into an array of uppercase page labels using Array.fromAsync with a mapping function.

async function* fetchPages() {
  yield "page 1";
  yield "page 2";
  yield "page 3";
}

const pages = await Array.fromAsync(fetchPages(), page => page.toUpperCase());

console.log(pages);
// ["PAGE 1", "PAGE 2", "PAGE 3"]

Wrapping up

ECMAScript 2026 is a fairly big release because of the addition of the Temporal API, which is a major improvement to JavaScript's date and time handling. In this blog post, I've shown you the smaller additions to ECMAScript 2026. They may be smaller, but they are still very useful, and they can make your code cleaner and reduce boilerplate in common scenarios.