UTF-16
Encoding & Standards

A variable-width Unicode encoding that uses 2 or 4 bytes per character, used internally by JavaScript, Java, and Windows.
UTF-16 uses 16-bit code units. Characters in the Basic Multilingual Plane (U+0000 to U+FFFF) fit in one code unit (2 bytes). Characters above U+FFFF — including most emoji — require a surrogate pair (4 bytes). This is why JavaScript's `string.length` can be surprising with emoji: `'😀'.length` returns 2 (two UTF-16 code units), not 1. To count code points instead of code units, use spread syntax (`[...'😀'].length`) or `Array.from()`.
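The surrogate-pair behavior above can be seen directly in a JavaScript console (a minimal sketch; runnable in Node or a browser):

```javascript
const face = '😀'; // U+1F600, outside the Basic Multilingual Plane

// string.length counts UTF-16 code units, so a surrogate pair counts as 2.
console.log(face.length);             // 2

// Spread syntax and Array.from iterate by code point, not code unit.
console.log([...face].length);        // 1
console.log(Array.from(face).length); // 1

// codePointAt reads the full code point; charCodeAt reads one code unit
// (here, the high surrogate of the pair).
console.log(face.codePointAt(0).toString(16)); // "1f600"
console.log(face.charCodeAt(0).toString(16));  // "d83d"
```

The same iterate-by-code-point behavior applies to `for...of` loops over strings.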
UTF-16 exists in two byte orders: UTF-16LE (little-endian, used by Windows) and UTF-16BE (big-endian). A byte order mark (BOM, U+FEFF) at the start of the data can indicate which is used.
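The byte-order difference is easy to inspect with Node's `Buffer`, which supports the `'utf16le'` encoding natively (a small sketch; UTF-16BE would require swapping each byte pair):

```javascript
// 'A' is U+0041. In UTF-16LE the low byte comes first.
const le = Buffer.from('A', 'utf16le');
console.log(le); // <Buffer 41 00>

// The BOM is the code point U+FEFF; serialized little-endian it
// becomes the bytes FF FE, which is how decoders detect the order.
const bom = Buffer.from('\uFEFF', 'utf16le');
console.log(bom); // <Buffer ff fe>
```

A decoder that instead sees the bytes FE FF at the start of a stream knows the data is UTF-16BE.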