
How Does Hexadecimal Work: Complete Guide to Base-16 Numbers

Learn how hexadecimal (base-16) works, how to convert hex to decimal and back, the role of hex in computing, signed hex with two's complement, hex arithmetic, and programming examples in Python, JavaScript, and C.

Published March 19, 2026
15 minute read
Cryptography Guide

Hexadecimal is everywhere in computing. If you have ever seen a web color code like #FF5733, a memory address like 0x7FFF5FBFF8AC, or a cryptographic hash, you have encountered hexadecimal numbers. Yet many developers treat hex as a mysterious notation they copy and paste without fully understanding.

This guide changes that. By the end, you will understand exactly how the hexadecimal number system works, be able to convert between hex and decimal in both directions, and know why hex is the dominant notation in low-level computing. Try our free hex to decimal converter to follow along with any of the examples below.

Why Base-16? The Power-of-2 Connection

To understand why hexadecimal exists, start with a simple question: why not just use decimal for everything in computing?

The answer lies in binary. Computers operate on binary (base-2), where data is represented as sequences of 0s and 1s. A single byte -- the fundamental unit of computer memory -- is 8 binary digits (bits). The byte value 11010110 is perfectly precise for a computer, but it is long, error-prone, and tedious for humans to read.

Decimal (base-10) is natural for humans, but it does not align cleanly with binary. Converting between binary and decimal requires division and multiplication that obscures the underlying bit patterns.

Hexadecimal (base-16) solves this problem because 16 is a power of 2: 16 = 2^4. This means each hex digit represents exactly 4 binary bits. Two hex digits represent exactly one byte. The alignment is perfect:

Binary (8 bits)   Hex (2 digits)   Decimal
0000 0000         00               0
0111 1111         7F               127
1000 0000         80               128
1111 1111         FF               255

The byte value 11010110 becomes simply D6 in hex -- two characters that map directly to the underlying bits without any arithmetic. This is why hexadecimal became the preferred notation in computing.
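You can verify this 4-bit mapping with a few lines of Python (the sketch below uses only built-in formatting):

```python
# Each hex digit of D6 corresponds to one 4-bit group of 11010110
byte = 0b11010110
print(format(byte, "08b"))  # 11010110
print(format(byte, "02X"))  # D6

# Split the byte into its high and low nibbles (4 bits each)
high, low = byte >> 4, byte & 0x0F
print(format(high, "X"), format(low, "X"))  # D 6
```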

Hexadecimal Digits: 0-9 and A-F

Hexadecimal uses sixteen symbols to represent values zero through fifteen:

Hex Digit   Decimal Value   Binary
0           0               0000
1           1               0001
2           2               0010
3           3               0011
4           4               0100
5           5               0101
6           6               0110
7           7               0111
8           8               1000
9           9               1001
A           10              1010
B           11              1011
C           12              1100
D           13              1101
E           14              1110
F           15              1111

The letters A through F extend beyond the ten decimal digits to cover values 10-15. Both uppercase (A-F) and lowercase (a-f) are accepted in virtually all contexts, though uppercase is the conventional standard.

Common Hex Prefixes and Notations

Different programming languages and contexts use different notations to indicate that a number is hexadecimal:

  • 0x -- C, C++, Java, Python, JavaScript: 0xFF
  • # -- CSS color codes: #FF5733
  • $ -- 6502/68000 assembly: $FF
  • h suffix -- Intel assembly: FFh
  • U+ -- Unicode code points: U+0041
  • \x -- Hex escape in strings: \x41 (the letter A)

The prefix or notation does not change the value -- it only tells the parser or reader to interpret the digits as hexadecimal.

How to Convert Hex to Decimal

The core technique is positional notation -- the same principle behind how all number systems work, applied with base 16 instead of base 10.

The Formula

For a hexadecimal number with digits d(n-1), d(n-2), ..., d(1), d(0), the decimal value is:

decimal = d(n-1) x 16^(n-1) + d(n-2) x 16^(n-2) + ... + d(1) x 16^1 + d(0) x 16^0

Each hex digit is multiplied by 16 raised to the power of its position (counting from the right, starting at 0), and all products are summed.
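The formula translates directly into code. This Python sketch spells out the positional sum that the built-in int(value, 16) performs for you:

```python
def hex_to_decimal(hex_str: str) -> int:
    """Apply positional notation: sum of digit * 16**position."""
    digits = "0123456789ABCDEF"
    total = 0
    for ch in hex_str.upper():
        # Multiplying the running total by 16 shifts every digit
        # one position left -- equivalent to the power sum above.
        total = total * 16 + digits.index(ch)
    return total

print(hex_to_decimal("2F"))    # 47
print(hex_to_decimal("1A3F"))  # 6719
```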

Powers of 16 Reference

Memorizing the first few powers of 16 makes mental conversion much faster:

Power   Value
16^0    1
16^1    16
16^2    256
16^3    4,096
16^4    65,536
16^5    1,048,576
16^6    16,777,216
16^7    268,435,456
16^8    4,294,967,296

Worked Example 1: Convert 2F to Decimal

Hex input: 2F

  1. Digit 2 is at position 1: 2 x 16^1 = 2 x 16 = 32
  2. Digit F (15) is at position 0: 15 x 16^0 = 15 x 1 = 15
  3. Sum: 32 + 15 = 47

Result: 2F(hex) = 47(decimal)

Worked Example 2: Convert 1A3F to Decimal

Hex input: 1A3F

  1. Digit 1 at position 3: 1 x 16^3 = 1 x 4,096 = 4,096
  2. Digit A (10) at position 2: 10 x 16^2 = 10 x 256 = 2,560
  3. Digit 3 at position 1: 3 x 16^1 = 3 x 16 = 48
  4. Digit F (15) at position 0: 15 x 16^0 = 15 x 1 = 15
  5. Sum: 4,096 + 2,560 + 48 + 15 = 6,719

Result: 1A3F(hex) = 6,719(decimal)

Worked Example 3: Convert DEADBEEF to Decimal

Hex input: DEADBEEF (a famous "hexspeak" value used as a magic number in debugging)

  1. D (13) x 16^7 = 13 x 268,435,456 = 3,489,660,928
  2. E (14) x 16^6 = 14 x 16,777,216 = 234,881,024
  3. A (10) x 16^5 = 10 x 1,048,576 = 10,485,760
  4. D (13) x 16^4 = 13 x 65,536 = 851,968
  5. B (11) x 16^3 = 11 x 4,096 = 45,056
  6. E (14) x 16^2 = 14 x 256 = 3,584
  7. E (14) x 16^1 = 14 x 16 = 224
  8. F (15) x 16^0 = 15 x 1 = 15

Sum: 3,489,660,928 + 234,881,024 + 10,485,760 + 851,968 + 45,056 + 3,584 + 224 + 15 = 3,735,928,559

This is a 32-bit unsigned integer commonly used to mark uninitialized memory in debugging tools.

How to Convert Decimal to Hex

The reverse process uses repeated division by 16:

  1. Divide the decimal number by 16
  2. Record the remainder (this becomes a hex digit)
  3. Replace the number with the quotient
  4. Repeat until the quotient is 0
  5. Read the hex digits from bottom to top (last remainder first)
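The procedure above can be sketched in Python (the built-in hex() does the same job; this version shows the remainders explicitly):

```python
def decimal_to_hex(n: int) -> str:
    """Repeated division by 16; remainders read last-to-first."""
    if n == 0:
        return "0"
    digits = "0123456789ABCDEF"
    out = []
    while n > 0:
        n, r = divmod(n, 16)   # quotient replaces n, remainder is a hex digit
        out.append(digits[r])
    return "".join(reversed(out))  # bottom-to-top order

print(decimal_to_hex(6719))  # 1A3F
print(decimal_to_hex(255))   # FF
```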

Worked Example: Convert 6,719 to Hex

Step   Operation   Quotient   Remainder   Hex Digit
1      6719 / 16   419        15          F
2      419 / 16    26         3           3
3      26 / 16     1          10          A
4      1 / 16      0          1           1

Reading bottom to top: 1A3F

Verification: 1A3F(hex) = 1x4096 + 10x256 + 3x16 + 15x1 = 4096 + 2560 + 48 + 15 = 6,719. Correct.

Hexadecimal in Computing

Memory Addresses

When you debug a program, memory addresses are displayed in hexadecimal:

0x7FFF5FBFF8AC
0x00400000 (typical .text section start on Linux)
0xDEADBEEF (debug marker for uninitialized memory)

Hex is used because memory is byte-addressable, and two hex digits map to one byte. A 64-bit address like 0x7FFF5FBFF8AC is 12 hex digits -- far more readable than the 48 binary digits it represents.

Web Color Codes

CSS hex colors encode red, green, and blue intensity values:

Hex Code   R (decimal)   G (decimal)   B (decimal)   Color
#000000    0             0             0             Black
#FFFFFF    255           255           255           White
#FF0000    255           0             0             Red
#00FF00    0             255           0             Green
#0000FF    0             0             255           Blue
#FF5733    255           87            51            Orange-red
#3A7BD5    58            123           213           Steel blue

Each hex pair represents one byte (0-255 decimal), and you can convert these values using our hex to decimal converter. For full RGB breakdown, try our hex to RGB converter.
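Parsing a color code is just three hex-to-decimal conversions, one per byte pair. A minimal Python sketch (the function name is illustrative):

```python
def hex_color_to_rgb(color: str) -> tuple[int, int, int]:
    """Split a #RRGGBB color into its three byte values."""
    color = color.lstrip("#")
    # Each two-character slice is one hex byte (0-255)
    return tuple(int(color[i:i + 2], 16) for i in (0, 2, 4))

print(hex_color_to_rgb("#FF5733"))  # (255, 87, 51)
print(hex_color_to_rgb("#3A7BD5"))  # (58, 123, 213)
```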

MAC Addresses

Network interface addresses use six hex byte pairs separated by colons or hyphens:

00:1A:2B:3C:4D:5E

The first three bytes (00:1A:2B) identify the manufacturer (OUI), and the last three (3C:4D:5E) are the device-specific identifier. Each pair is a hex byte that converts to a decimal value 0-255.

Unicode Code Points

Every Unicode character has a code point written in hex:

Code Point   Hex Value   Decimal   Character
U+0041       41          65        A
U+0048       48          72        H
U+00E9       E9          233       é (e with acute accent)
U+4E16       4E16        19,990    世 (Chinese "world")
U+1F600      1F600       128,512   😀 (grinning face emoji)

The hex notation is used because it aligns with the underlying byte encoding (UTF-8, UTF-16) and is more compact than decimal for the full Unicode range (0 to 10FFFF).
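Python exposes code points directly through ord() and chr(), and the U+ notation is just zero-padded hex:

```python
# ord() gives the code point; chr() goes the other way
print(hex(ord("A")))   # 0x41
print(chr(0x41))       # A
print(chr(0x1F600))    # 😀 (grinning face)

# The U+ notation is hex padded to at least four digits
print(f"U+{ord('A'):04X}")  # U+0041
```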

Common Hex Values Every Developer Should Know

Certain hex values appear so frequently in programming and systems administration that recognizing them instantly saves significant time.

Byte Boundaries

Hex          Decimal         Significance
0x00         0               Null byte, string terminator in C
0x0A         10              Line feed (LF), Unix newline
0x0D         13              Carriage return (CR)
0x20         32              ASCII space character
0x7F         127             Maximum signed 8-bit integer, DEL character
0x80         128             Minimum signed 8-bit (two's complement)
0xFF         255             Maximum unsigned 8-bit (1 byte)
0xFFFF       65,535          Maximum unsigned 16-bit
0xFFFFFFFF   4,294,967,295   Maximum unsigned 32-bit

ASCII Character Ranges in Hex

Hex Range   Decimal Range   Characters
30-39       48-57           Digits '0' to '9'
41-5A       65-90           Uppercase 'A' to 'Z'
61-7A       97-122          Lowercase 'a' to 'z'

Knowing that uppercase A starts at 0x41 (65) and lowercase a starts at 0x61 (97) -- a difference of exactly 0x20 (32) -- is useful when implementing case conversion in low-level code.
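That 0x20 difference means a single bitwise operation converts case, as this Python sketch shows:

```python
# Uppercase and lowercase ASCII letters differ only in bit 0x20
print(chr(ord("A") | 0x20))   # a  (set the bit -> lowercase)
print(chr(ord("a") & ~0x20))  # A  (clear the bit -> uppercase)
print(chr(ord("G") ^ 0x20))   # g  (XOR toggles case either way)
```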

Debug and Magic Numbers

Programmers use recognizable hex patterns to mark special memory states:

Hex          Decimal         Usage
0xDEADBEEF   3,735,928,559   Uninitialized memory marker (IBM)
0xCAFEBABE   3,405,691,582   Java class file magic number
0xBAADF00D   3,131,961,357   Windows LocalAlloc uninitialized heap
0xFEEDFACE   4,277,009,102   Mach-O binary header (macOS)
0xDEADC0DE   3,735,929,054   OpenSolaris uninitialized buffer marker

These "hexspeak" values are chosen because they are easy to spot in memory dumps while using only hex-valid letters (A-F).

Signed Hexadecimal and Two's Complement

In most modern computers, negative integers are stored using two's complement encoding. Understanding this is essential for interpreting signed hex values in memory dumps, register values, and network protocols.

How Two's Complement Works

For an N-bit signed integer:

  • If the most significant bit (MSB) is 0, the number is positive (or zero)
  • If the MSB is 1, the number is negative

To find the negative value from a hex number where the MSB is set:

  1. Convert to binary
  2. Invert all bits (0 becomes 1, 1 becomes 0)
  3. Add 1
  4. The result is the magnitude; prepend a minus sign
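The steps above can be captured in a short Python helper (the function name is illustrative):

```python
def to_signed(value: int, bits: int = 8) -> int:
    """Interpret an unsigned value as a two's complement signed integer."""
    if value & (1 << (bits - 1)):   # MSB set -> the value is negative
        return value - (1 << bits)  # equivalent to invert-and-add-1
    return value

print(to_signed(0x9C))            # -100
print(to_signed(0xFF))            # -1
print(to_signed(0x7F))            # 127
print(to_signed(0x80000000, 32))  # -2147483648
```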

8-bit Signed Hex Examples

Hex   Binary     Unsigned   Signed (8-bit)
00    00000000   0          0
7F    01111111   127        +127 (max positive)
80    10000000   128        -128 (min negative)
FF    11111111   255        -1
FE    11111110   254        -2
81    10000001   129        -127

Worked Example: What is 0x9C as a Signed 8-bit Integer?

  1. Binary: 10011100
  2. MSB is 1, so the number is negative
  3. Invert: 01100011
  4. Add 1: 01100100 = 100 in decimal
  5. Result: -100

So 0x9C as an unsigned byte is 156, but as a signed byte it is -100. The interpretation depends entirely on context.

16-bit and 32-bit Signed Values

The same principle extends to wider integers:

Hex        Unsigned        Signed (32-bit)
00000000   0               0
7FFFFFFF   2,147,483,647   +2,147,483,647 (max)
80000000   2,147,483,648   -2,147,483,648 (min)
FFFFFFFF   4,294,967,295   -1
FFFFFFFE   4,294,967,294   -2

Hexadecimal Arithmetic

While most people use calculators for hex arithmetic, understanding the basics is useful for debugging and mental math.

Hex Addition

Hex addition works like decimal addition but with a carry threshold of 16 instead of 10:

  3A
+ 1F
----
  59

  1. Right column: A(10) + F(15) = 25. Since 25 >= 16, write 25 - 16 = 9 and carry 1
  2. Left column: 3 + 1 + 1 (carry) = 5

Result: 3A + 1F = 59

Verification: 58 + 31 = 89 in decimal. 59(hex) = 5x16 + 9 = 89. Correct.

Hex Subtraction

  FF
- 3A
----
  C5

  1. Right column: F(15) - A(10) = 5
  2. Left column: F(15) - 3 = C(12)

Result: FF - 3A = C5

Verification: 255 - 58 = 197. C5(hex) = 12x16 + 5 = 197. Correct.

Quick Mental Math Tips

  • Adding 1 to F gives 10 (hex), not 16 -- remember you are in base 16
  • FF + 1 = 100 (hex) = 256 (decimal)
  • Subtracting 1 from 10 (hex) gives F
  • Doubling a hex digit: double the decimal value, convert back if >= 16
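You can check these facts interactively in Python, which accepts hex literals directly:

```python
# Carry behaviour in base 16
print(hex(0xF + 1))    # 0x10  (not 16 -- we are in base 16)
print(hex(0xFF + 1))   # 0x100
print(0x100)           # 256
print(hex(0x10 - 1))   # 0xf
```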

Hex to Decimal in Programming Languages

Python

Python's int() function is the standard way to parse hex strings:

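A minimal sketch of the standard conversions:

```python
# Parse a hex string into an integer (base 16)
value = int("1A3F", 16)
print(value)                 # 6719

# The "0x" prefix is tolerated when base is 16
print(int("0xFF", 16))       # 255

# base=0 auto-detects the prefix
print(int("0xDEADBEEF", 0))  # 3735928559

# Convert back to hex with hex() or format()
print(hex(6719))             # 0x1a3f
print(format(6719, "X"))     # 1A3F
print(f"{255:#04x}")         # 0xff
```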

JavaScript

JavaScript uses parseInt() for hex strings and 0x prefix for literals:

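A minimal sketch of the standard conversions:

```javascript
// Parse hex strings with parseInt and radix 16
console.log(parseInt("1A3F", 16));  // 6719
console.log(parseInt("0xFF", 16));  // 255 (the 0x prefix is tolerated)

// Number() also understands the 0x prefix
console.log(Number("0xDEADBEEF"));  // 3735928559

// Hex literals use the 0x prefix
const mask = 0xff;
console.log(mask);                  // 255

// Convert back to hex with toString(16)
console.log((6719).toString(16));               // "1a3f"
console.log((6719).toString(16).toUpperCase()); // "1A3F"

// Use BigInt for values beyond Number's 2^53 safe-integer range
console.log(BigInt("0xFFFFFFFFFFFFFFFF"));      // 18446744073709551615n
```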

C / C++

C uses strtol() for string conversion and supports hex literals directly:


Frequently Asked Questions

How do you convert hex to decimal?

Multiply each hex digit by its positional power of 16, then sum all the products. For example, hex 2F = 2x16 + 15x1 = 32 + 15 = 47. Replace letters A-F with their decimal values (A=10, B=11, C=12, D=13, E=14, F=15) before multiplying.

What is 0xFF in decimal?

0xFF equals 255 in decimal. F(15) x 16 + F(15) x 1 = 240 + 15 = 255. This is the maximum value of an unsigned 8-bit byte (binary 11111111) and is commonly used as a bitmask in programming.

What does the 0x prefix mean?

The 0x prefix is a notation convention used in C, Java, Python, JavaScript, and many other languages to indicate that the following digits are hexadecimal. It does not change the value -- 0xFF and FF represent the same number (255 decimal).

How do you convert decimal to hex?

Repeatedly divide the decimal number by 16, recording each remainder. The remainders, read from bottom to top, form the hex number. For remainders 10-15, use letters A-F. For example, 255 / 16 = 15 remainder 15, then 15 / 16 = 0 remainder 15. Reading bottom to top: FF.

What is the difference between hex and decimal?

Hexadecimal is base-16 (digits 0-9 and A-F) while decimal is base-10 (digits 0-9 only). Hex is preferred in computing because each hex digit maps to exactly 4 binary bits, making it a compact representation of binary data. Decimal is the standard human counting system.

How do you convert signed hex to decimal?

For signed hex values (two's complement), check the most significant bit. If it is set (hex digit 8-F in the leftmost position for 8-bit values), the number is negative. To find the magnitude: invert all bits and add 1. For example, 0xFF as a signed 8-bit integer is -1 (invert 11111111 to 00000000, add 1 to get 1, negate to get -1).

Where is hexadecimal used?

Hex is used for memory addresses, web color codes (#FF5733), MAC addresses, Unicode code points (U+0041), cryptographic hashes (SHA-256), assembly language, IPv6 addresses, and anywhere binary data needs a compact human-readable notation.

Why is hex better than octal?

Hex aligns with the byte boundary: 2 hex digits = 1 byte (8 bits). Octal (base-8) uses 3-bit groups, which do not align cleanly with 8-bit bytes. A byte requires 2.67 octal digits but exactly 2 hex digits. This is why hex largely replaced octal in modern computing, though octal survives in Unix file permissions (chmod 755).

How do you do hex arithmetic?

Hex arithmetic follows the same rules as decimal arithmetic, with a carry/borrow threshold of 16 instead of 10. For addition, if a column sum exceeds 15 (F), subtract 16 and carry 1 to the next column. For subtraction, if a column difference is negative, add 16 and borrow 1 from the next column.

Can hexadecimal have decimal points?

Yes, though it is rare. Hex 1.8 equals 1 + 8/16 = 1.5 in decimal. Some programming contexts (C99's %a format, IEEE 754 hex float notation) use hexadecimal floating-point like 0x1.8p3 (meaning 1.5 x 2^3 = 12.0). In practice, most hex-to-decimal conversion involves integers.
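Python supports this notation directly through float.hex() and float.fromhex():

```python
# Hexadecimal floating point: 0x1.8p3 means 1.5 * 2**3
print(float.fromhex("0x1.8p3"))  # 12.0
print(float.fromhex("0x1.8"))    # 1.5
print((12.0).hex())              # 0x1.8000000000000p+3
```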

Hex vs Other Number Systems

Understanding how hexadecimal compares to other number systems clarifies when to use each one:

Property         Binary (Base-2)      Octal (Base-8)       Decimal (Base-10)   Hex (Base-16)
Digits used      0-1                  0-7                  0-9                 0-9, A-F
Bits per digit   1                    3                    ~3.32               4
Byte alignment   Perfect (8 digits)   Poor (2.67 digits)   None (variable)     Perfect (2 digits)
Primary use      Hardware logic       Unix permissions     Human math          Computing notation

Hex wins in computing contexts because of its perfect byte alignment. Octal survives in Unix file permissions (chmod 755 = rwxr-xr-x) where the 3-bit grouping matches the 3-bit read/write/execute permission model. Decimal remains the standard for human-facing values like prices, counts, and measurements.

For converting between these systems, you can use our binary to decimal converter or binary to octal converter alongside the hex to decimal converter.

Summary

Hexadecimal is the bridge between human-readable notation and the binary reality of computers. Its power comes from a single mathematical fact: 16 = 2^4, which means each hex digit maps to exactly 4 binary bits. This alignment makes hex the natural way to express binary data compactly.

Key takeaways from this guide:

  • Hex to decimal: multiply each digit by its power of 16 and sum the products
  • Decimal to hex: repeatedly divide by 16, collecting remainders bottom-to-top
  • Two hex digits = one byte: FF = 255, the maximum unsigned byte value
  • Signed hex uses two's complement -- the same 8 bits can mean 255 (unsigned) or -1 (signed)
  • Hex arithmetic follows standard rules with a carry threshold of 16
  • Programming: Python int("FF", 16), JavaScript parseInt("FF", 16), C strtol("FF", NULL, 16)

Ready to convert? Use our free hex to decimal converter to convert any hexadecimal value to decimal instantly, with a step-by-step positional notation breakdown. You can also explore our hex to binary converter and hex to text converter for related conversions.

About This Article

This article is part of our comprehensive converters cipher tutorial series. Learn more about classical cryptography and explore our interactive cipher tools.
