Decimal to Hex Converter
Convert decimal to hex — uppercase/lowercase, 0x prefix option, plus binary and octal
Decimal → Hex
Hexadecimal
Binary
Octal
Common hex values reference
| Decimal | Hex (uppercase) | Hex (0x) | Notes |
|---|---|---|---|
| 0 | 00 | 0x00 | Null |
| 9 | 09 | 0x09 | Tab |
| 10 | 0A | 0x0A | Newline |
| 32 | 20 | 0x20 | Space |
| 127 | 7F | 0x7F | DEL |
| 255 | FF | 0xFF | Max byte |
| 256 | 100 | 0x100 | 2^8 |
| 1024 | 400 | 0x400 | 1 KiB |
| 65535 | FFFF | 0xFFFF | Max 16-bit |
| 16777215 | FFFFFF | 0xFFFFFF | Max 24-bit / white #FFFFFF |
Every developer needs to convert decimal to hex regularly — CSS color values, memory addresses, error codes, byte constants in code. This converter gives you the hex output with formatting options (uppercase/lowercase, 0x prefix) plus binary and octal for free.
How Decimal to Hex Conversion Works
Divide by 16 repeatedly, collect remainders, read them in reverse. Remainders 10–15 become A–F.
Example: 255 → hex
255 ÷ 16 = 15 remainder 15 → F
15 ÷ 16 = 0 remainder 15 → F
Read bottom to top: FF
255 decimal = 0xFF
Example: 4096 → hex
4096 ÷ 16 = 256 remainder 0 → 0
256 ÷ 16 = 16 remainder 0 → 0
16 ÷ 16 = 1 remainder 0 → 0
1 ÷ 16 = 0 remainder 1 → 1
Result: 1000 → 0x1000
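The repeated-division procedure above can be sketched in a few lines of JavaScript. This is an illustrative implementation (the names hexDigits and toHex are my own, not from the converter):

```javascript
// Repeated division by 16: each remainder becomes one hex digit,
// collected right-to-left (the last remainder is the leading digit).
const hexDigits = "0123456789ABCDEF";

function toHex(n) {
  if (n === 0) return "0";
  let out = "";
  while (n > 0) {
    out = hexDigits[n % 16] + out; // prepend the remainder's digit
    n = Math.floor(n / 16);
  }
  return out;
}

console.log(toHex(255));  // "FF"
console.log(toHex(4096)); // "1000"
```

In practice you would just call Number.prototype.toString(16); the loop is only there to show the arithmetic.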
Formatting Options
Uppercase vs. Lowercase
Both are valid hexadecimal. Convention varies by context:
- Uppercase (FF, 1A3) — standalone values, assembly language, memory dumps, Windows error codes
- Lowercase (ff, 1a3) — CSS colors (#ff0000), UUID strings (550e8400-e29b-41d4-a716-446655440000), some APIs
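In JavaScript, toString(16) produces lowercase; uppercase and the 0x prefix are formatting choices layered on top. A minimal sketch (formatHex is a hypothetical helper, not a standard API):

```javascript
// Convert a number to hex with optional uppercase and 0x prefix.
function formatHex(n, { uppercase = false, prefix = false } = {}) {
  let hex = n.toString(16); // lowercase by default
  if (uppercase) hex = hex.toUpperCase();
  return prefix ? "0x" + hex : hex;
}

formatHex(419);                                    // "1a3"
formatHex(419, { uppercase: true });               // "1A3"
formatHex(255, { uppercase: true, prefix: true }); // "0xFF"
```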
The 0x Prefix
0x signals “this is hexadecimal” to both humans and compilers. Required in C/C++/JavaScript literals:
```c
int mask = 0xFF; // required
int mask = FF;   // syntax error
```
Optional in HTML/CSS colors (#FF0000 uses # instead of 0x), SQL (x'FF' uses x-string notation), and documentation.
Decimal to Hex Lookup: Key Values
These appear constantly in systems work:
| Decimal | Hex | Notes |
|---|---|---|
| 0 | 0x00 | Null, false, off |
| 10 | 0x0A | Newline (LF) |
| 13 | 0x0D | Carriage return (CR) |
| 32 | 0x20 | Space character |
| 48–57 | 0x30–0x39 | ASCII ‘0’–‘9’ |
| 65–90 | 0x41–0x5A | ASCII ‘A’–‘Z’ |
| 97–122 | 0x61–0x7A | ASCII ‘a’–‘z’ |
| 127 | 0x7F | DEL / max ASCII |
| 128 | 0x80 | Min 8-bit extended / sign bit |
| 255 | 0xFF | Max single byte |
| 256 | 0x100 | Carry out of 8 bits |
| 1024 | 0x400 | 1 KiB |
| 65535 | 0xFFFF | Max 16-bit unsigned |
| 16777215 | 0xFFFFFF | Max 24-bit / pure white in RGB |
CSS Color Mathematics
Web colors are hex: #RRGGBB. Each channel is one byte (0–255 → 00–FF).
rgb(255, 140, 0) = #FF8C00 (dark orange)
R: 255 = FF
G: 140 = 8C (140 ÷ 16 = 8 r 12 → 8C)
B: 0 = 00
An alpha (opacity) channel adds a fourth byte in CSS: rgba(255, 0, 0, 0.5) = #FF000080 (50% opacity = 128 = 0x80).
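The channel math above is easy to automate: each channel is one byte, padded to two hex digits. A sketch under those assumptions (rgbToHex is a hypothetical helper):

```javascript
// Build a #RRGGBB (or #RRGGBBAA) color string from 0–255 channel values.
function rgbToHex(r, g, b, a) {
  const byte = (v) => v.toString(16).padStart(2, "0").toUpperCase();
  let hex = "#" + byte(r) + byte(g) + byte(b);
  if (a !== undefined) hex += byte(Math.round(a * 255)); // alpha 0–1 → 00–FF
  return hex;
}

rgbToHex(255, 140, 0);    // "#FF8C00"
rgbToHex(255, 0, 0, 0.5); // "#FF000080"
```

The padStart call matters: without it, rgb(255, 0, 0) would collapse to the invalid "#FF00".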
Error Codes: Reading Windows HRESULT in Hex
Windows error codes are 32-bit hex values where the structure encodes severity, facility, and code:
- 0x80070057 (E_INVALIDARG): severity=1 (failure), facility=7 (Win32), code=0x0057 = 87
- 0xC0000005 (STATUS_ACCESS_VIOLATION): the dreaded AV crash
When you see a Windows error as decimal (like 2147942487), converting to hex (0x80070057) makes it immediately recognizable.
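The severity/facility/code fields are just bit ranges, so they can be peeled out with shifts and masks. A sketch assuming the standard HRESULT layout (decodeHResult is a hypothetical helper):

```javascript
// Split an HRESULT into its bit fields.
// >>> (unsigned shift) keeps the result positive despite the sign bit.
function decodeHResult(hr) {
  return {
    severity: (hr >>> 31) & 0x1,   // 1 = failure
    facility: (hr >>> 16) & 0x7ff, // e.g. 7 = FACILITY_WIN32
    code:     hr & 0xffff,
  };
}

decodeHResult(0x80070057); // { severity: 1, facility: 7, code: 87 }
```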
Frequently Asked Questions
Why pad hex values with leading zeros?
Padding aligns byte boundaries. 0x0F makes it clear there’s one byte; 0xF looks ambiguous. Security-sensitive code always pads to the expected size.
How many hex digits for N bits?
Each hex digit covers 4 bits: ceil(bits / 4) digits. For 8 bits: 2 hex digits. For 32 bits: 8 hex digits. For 64 bits: 16 hex digits.
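Both FAQ answers above reduce to one formula and one padding call. A sketch (hexDigitsFor and toPaddedHex are names of my choosing):

```javascript
// Each hex digit encodes 4 bits, so an N-bit value needs ceil(N / 4) digits.
function hexDigitsFor(bits) {
  return Math.ceil(bits / 4);
}

// Pad with leading zeros to the width implied by the bit size.
function toPaddedHex(n, bits) {
  return n.toString(16).toUpperCase().padStart(hexDigitsFor(bits), "0");
}

hexDigitsFor(32);     // 8
toPaddedHex(15, 8);   // "0F"
toPaddedHex(255, 32); // "000000FF"
```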
Is 0x prefix required in JavaScript?
For numeric literals, yes: let n = 0xFF;. For string parsing with an explicit radix, the prefix is optional — parseInt("FF", 16) works without it, and parseInt("0xFF", 16) works too.
Privacy
All conversions run entirely in your browser using JavaScript. No data is sent anywhere.