charCodeAt vs fromCharCode

The charCodeAt() and fromCharCode() methods in JavaScript are complementary string methods for working with character encoding, and they serve opposite purposes. charCodeAt() retrieves the UTF-16 code unit (a 16-bit value) at a specified index within a string, returning an integer between 0 and 65535. If the index is out of range, it returns NaN. For example, "ABC".charCodeAt(0) returns 65, the code unit for 'A' (which, for characters in the Basic Multilingual Plane, equals the Unicode code point).
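The behavior described above can be seen directly in a few calls:

```javascript
const s = "ABC";

console.log(s.charCodeAt(0)); // 65, the UTF-16 code unit for 'A'
console.log(s.charCodeAt(2)); // 67, for 'C'
console.log(s.charCodeAt(9)); // NaN: index 9 is out of range

// Non-ASCII BMP characters still fit in one 16-bit code unit:
console.log("€".charCodeAt(0)); // 8364
```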

Conversely, fromCharCode() is a static method of the String object that creates a string from one or more specified UTF-16 code units. It accepts a sequence of numbers (each between 0 and 65535) and returns a string composed of the corresponding characters. For instance, String.fromCharCode(65, 66, 67) returns "ABC". This method is useful for generating strings from numeric values, such as in encoding or decoding operations.
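Because fromCharCode() takes its code units as separate arguments, an array of values is typically spread into the call:

```javascript
console.log(String.fromCharCode(65, 66, 67)); // "ABC"

// Building a string from an array of code units via spread syntax:
const codes = [72, 105, 33]; // 'H', 'i', '!'
console.log(String.fromCharCode(...codes)); // "Hi!"
```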

A key limitation of fromCharCode() is that it only works with 16-bit values, which means it cannot directly represent Unicode code points above 65535 (supplementary characters) without using surrogate pairs. For such characters, String.fromCodePoint() is preferred, as it handles code points up to 0x10FFFF directly. charCodeAt() has the same limitation: it always returns a value less than 65536, because higher code points are stored as surrogate pairs in UTF-16, and each call reads only one half of the pair.

Both methods are well-established: they have been part of the language since the first edition of ECMAScript (1997) and are supported in every browser. Their behavior is stable and unlikely to change, as it is defined by the Unicode standard and UTF-16 encoding rules. Note that charCodeAt() is an instance method called on a string, while fromCharCode() is static and must be called on the String constructor.