Binary to Decimal Converter

Convert between binary and decimal number systems

This binary to decimal converter translates binary (base-2) numbers into their decimal (base-10) values. Enter any binary number to see the corresponding decimal result, along with a step-by-step breakdown showing how each bit's place value contributes to the final answer.

Frequently Asked Questions

How do you convert binary to decimal?

Multiply each binary digit by its corresponding power of 2 (starting from 2⁰ at the right), then add all the results. For example, binary 1011 = (1×2³) + (0×2²) + (1×2¹) + (1×2⁰) = 8 + 0 + 2 + 1 = decimal 11. This positional notation method works for binary numbers of any length.

What is the binary number system?

The binary number system (base 2) uses only two digits: 0 and 1. Each digit position represents a power of 2, just as each position in decimal represents a power of 10. Binary is the fundamental language of computers because digital circuits have two states: on (1) and off (0). All data in a computer, from text to images to video, is ultimately stored and processed as binary numbers.

What is signed binary to decimal conversion?

Signed binary uses one bit (the leftmost, called the sign bit) to indicate whether a number is positive or negative. In the most common representation, two's complement, a sign bit of 0 means the number is positive and is read normally; a sign bit of 1 means the number is negative: invert all the bits, add 1, and attach a negative sign. For example, the 8-bit signed binary 11111001 represents -7, because inverting gives 00000110 (6) and adding 1 gives 7.

How does two's complement work?

Two's complement is the standard way computers represent negative integers. To form the two's complement of a binary number: (1) invert all the bits (change 0s to 1s and 1s to 0s), then (2) add 1 to the result. For example, to represent -5 in 8-bit two's complement: start with 5 = 00000101, invert to get 11111010, then add 1 to get 11111011. To decode: if the sign bit is 1, invert all bits and add 1. An n-bit two's complement number ranges from -2^(n-1) to 2^(n-1) - 1.

How do you convert binary fractions to decimal?

For binary digits after the radix point (binary point), multiply each bit by a negative power of 2. The first bit after the point is worth 2⁻¹ = 0.5, the second 2⁻² = 0.25, the third 2⁻³ = 0.125, and so on. For example, binary 101.11 = (1×4) + (0×2) + (1×1) + (1×0.5) + (1×0.25) = 4 + 0 + 1 + 0.5 + 0.25 = decimal 5.75.

What is the Double Dabble method?

Double Dabble (also known as the doubling method) is a mental-arithmetic shortcut for binary to decimal conversion. Start with the leftmost bit. Double the running total and add the current bit; repeat for every bit. For example, for 1101: start with 1; double to 2, add 1 = 3; double to 6, add 0 = 6; double to 12, add 1 = 13. This method avoids computing powers of 2, which makes it faster for mental math.

What is the difference between binary, octal, decimal, and hexadecimal?

Binary (base 2) uses the digits 0-1, octal (base 8) uses 0-7, decimal (base 10) uses 0-9, and hexadecimal (base 16) uses 0-9 and A-F. Each system represents the same values in a different way. Binary is native to computers, decimal is what humans use, and octal/hexadecimal are compact shorthands for binary because 8 = 2³ and 16 = 2⁴. For example, decimal 255 is 11111111 in binary, 377 in octal, and FF in hexadecimal.

What are the LSB and MSB in binary?

The LSB (Least Significant Bit) is the rightmost bit of a binary number, representing 2⁰ = 1. The MSB (Most Significant Bit) is the leftmost bit, representing the highest power of 2. In the binary number 10110, the LSB is 0 (rightmost) and the MSB is 1 (leftmost, representing 2⁴ = 16). In signed binary, the MSB also serves as the sign bit, indicating positive (0) or negative (1).

How do you convert 8-bit binary to decimal?

For an 8-bit unsigned binary number, multiply each bit by its power of 2 (from 2⁷ = 128 down to 2⁰ = 1) and sum the results. For example, 10110011 = 128 + 0 + 32 + 16 + 0 + 0 + 2 + 1 = 179. For signed 8-bit (two's complement), if the MSB is 1 the number is negative: invert the bits, add 1, and negate. The unsigned range is 0-255; the signed range is -128 to 127.

What is the largest decimal number in 8, 16, and 32 bits?

For unsigned integers: 8 bits max out at 255 (2⁸-1), 16 bits at 65,535 (2¹⁶-1), and 32 bits at 4,294,967,295 (2³²-1). For signed two's complement: 8 bits max out at 127 (2⁷-1), 16 bits at 32,767 (2¹⁵-1), and 32 bits at 2,147,483,647 (2³¹-1). Each additional bit doubles the number of possible values.

How do you convert binary to decimal in Python and JavaScript?

In Python, use int("binary_string", 2) to convert binary to decimal. For example, int("1011", 2) returns 11. In JavaScript, use parseInt("binary_string", 2). For example, parseInt("1011", 2) returns 11. Both functions take a string of 0s and 1s plus a radix (2 for binary) as arguments and return the decimal integer value.

What is floating-point binary?

Floating-point binary is the IEEE 754 standard that computers use to represent real (fractional) numbers. A 32-bit float uses 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa (fraction); a 64-bit double uses 1 + 11 + 52 bits. The value equals (-1)^sign × 1.mantissa × 2^(exponent - bias). This format can represent very large and very small numbers but has limited precision, which is why 0.1 + 0.2 is not exactly 0.3 in most programming languages.
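The sign/exponent/mantissa layout can be inspected directly in Python. A minimal sketch using only the standard library; the helper name float32_bits is our own, not a library function:

```python
import struct

def float32_bits(x: float) -> str:
    """Return the 32-bit IEEE 754 bit pattern of x as a binary string."""
    # Pack as a big-endian float32, then reinterpret the bytes as an integer
    (raw,) = struct.unpack(">I", struct.pack(">f", x))
    return format(raw, "032b")

bits = float32_bits(-5.0)
sign, exponent, mantissa = bits[0], bits[1:9], bits[9:]
print(sign, exponent, mantissa)
# -5.0 = (-1)^1 x 1.25 x 2^2: sign = 1, exponent = 129 (127 + 2 after bias)
```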

How to Convert Binary to Decimal

Converting binary to decimal uses positional notation, the same principle behind every number system. Each digit in a binary number represents a power of 2, just as each digit in a decimal number represents a power of 10. Because binary is base-2 and has only two symbols (0 and 1), each bit position doubles in value from right to left: 1, 2, 4, 8, 16, 32, 64, 128, and so on.

Step-by-Step Positional Notation Method

Step 1: Write down the binary number

Write out each binary digit. For example: 10110

Step 2: Assign powers of 2 to each bit position from right to left

Starting at position 0 (rightmost bit), label each bit with its power of 2:

Bit:      1    0    1    1    0
Power:    2⁴   2³   2²   2¹   2⁰
Value:    16   8    4    2    1

Step 3: Multiply each bit by its corresponding power of 2

  • 1 × 2⁴ = 16
  • 0 × 2³ = 0
  • 1 × 2² = 4
  • 1 × 2¹ = 2
  • 0 × 2⁰ = 0

Step 4: Sum all the products to get the decimal result

16 + 0 + 4 + 2 + 0 = 22

General Formula:

Decimal = b(n) × 2ⁿ + b(n-1) × 2ⁿ⁻¹ + ... + b(1) × 2¹ + b(0) × 2⁰

Where b(i) is the binary digit at position i (either 0 or 1).
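The formula translates directly into a loop. A short sketch of the positional-notation method; the function name binary_to_decimal is illustrative, not a built-in (Python's built-in equivalent is int(s, 2)):

```python
def binary_to_decimal(binary_str: str) -> int:
    """Sum b(i) x 2^i over every bit position i, counted from the right."""
    total = 0
    for position, bit in enumerate(reversed(binary_str)):
        total += int(bit) * 2 ** position  # b(i) x 2^i
    return total

print(binary_to_decimal("10110"))  # 22
```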

The Double Dabble Method

The Double Dabble method (also known as the doubling method) is a mental arithmetic shortcut that avoids calculating large powers of 2. Instead of multiplying each bit by its power and summing, you process the binary number from left to right using a simple rule: double the running total and add the current bit.

Double Dabble Example: 11010 to Decimal

Binary input: 11010

1. Start with the leftmost bit: 1 (running total = 1)

2. Double 1 = 2, add next bit 1 = 3

3. Double 3 = 6, add next bit 0 = 6

4. Double 6 = 12, add next bit 1 = 13

5. Double 13 = 26, add next bit 0 = 26

Result: Binary 11010 = Decimal 26

Double Dabble Example: 10110011 to Decimal

Binary input: 10110011

1. Start: 1

2. Double 1 = 2, add 0 = 2

3. Double 2 = 4, add 1 = 5

4. Double 5 = 10, add 1 = 11

5. Double 11 = 22, add 0 = 22

6. Double 22 = 44, add 0 = 44

7. Double 44 = 88, add 1 = 89

8. Double 89 = 178, add 1 = 179

Result: Binary 10110011 = Decimal 179

The Double Dabble method is particularly useful for converting long binary numbers by hand, since you never need to remember large powers of 2. It works because doubling the running total is equivalent to shifting all previous bits one position left (multiplying by 2) before adding the next bit.
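The doubling rule above can be sketched in a few lines of Python; the function name double_dabble is our own:

```python
def double_dabble(binary_str: str) -> int:
    """Left-to-right conversion: double the running total, add the bit."""
    total = 0
    for bit in binary_str:
        total = total * 2 + int(bit)  # double, then add the current bit
    return total

print(double_dabble("11010"))     # 26
print(double_dabble("10110011"))  # 179
```

This is Horner's rule applied in base 2: each doubling shifts the accumulated bits one place left before the next bit is appended.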

Binary to Decimal Conversion Examples

Convert 1010 to Decimal

Binary input: 1010

  • 1 × 2³ = 8
  • 0 × 2² = 0
  • 1 × 2¹ = 2
  • 0 × 2⁰ = 0

8 + 0 + 2 + 0 = 10

Convert 11111111 to Decimal

Binary input: 11111111 (8 bits, all ones)

  • 1 × 128 = 128
  • 1 × 64 = 64
  • 1 × 32 = 32
  • 1 × 16 = 16
  • 1 × 8 = 8
  • 1 × 4 = 4
  • 1 × 2 = 2
  • 1 × 1 = 1

128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255

This is the maximum unsigned value for one byte (8 bits): 2⁸ - 1 = 255.

Convert 10000000 to Decimal

Binary input: 10000000

Only the MSB (Most Significant Bit) is set:

1 × 2⁷ = 128

In 8-bit signed (two's complement), this same bit pattern represents -128, the most negative 8-bit value.

Convert 1100100 to Decimal

Binary input: 1100100

  • 1 × 64 = 64
  • 1 × 32 = 32
  • 0 × 16 = 0
  • 0 × 8 = 0
  • 1 × 4 = 4
  • 0 × 2 = 0
  • 0 × 1 = 0

64 + 32 + 0 + 0 + 4 + 0 + 0 = 100

Binary to Decimal Conversion Table

This binary to decimal conversion table highlights powers of 2 and other common byte values in the 0-255 range. Each row shows the decimal value, its 8-bit binary representation, hexadecimal equivalent, and octal representation.

Quick Reference (Powers of 2 & Common Values):

Decimal   Binary     Hex   Octal
0         00000000   00    0
1         00000001   01    1
2         00000010   02    2
4         00000100   04    4
8         00001000   08    10
10        00001010   0A    12
16        00010000   10    20
32        00100000   20    40
42        00101010   2A    52
64        01000000   40    100
100       01100100   64    144
127       01111111   7F    177
128       10000000   80    200
255       11111111   FF    377

The same binary, hexadecimal, and octal representations exist for every single-byte value (0-255). For values beyond 255, apply the same positional notation method to longer binary numbers.

Signed Binary & Two's Complement

Unsigned binary can only represent non-negative numbers. To represent negative numbers, computers use two's complement, the standard signed binary system used by virtually all modern processors.

How Two's Complement Works

In two's complement, the most significant bit (MSB) is the sign bit: 0 means positive, 1 means negative. The remaining bits hold the magnitude, but negative values are encoded by inverting all bits and adding 1.

Example: Convert 8-bit signed binary 11111001 to decimal

1. MSB is 1, so the number is negative

2. Invert all bits: 11111001 → 00000110

3. Add 1: 00000110 + 1 = 00000111 = 7

4. Apply the negative sign: -7

Two's Complement Range

Bits   Minimum          Maximum         Total Values
8      -128             127             256
16     -32,768          32,767          65,536
32     -2,147,483,648   2,147,483,647   4,294,967,296
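The ranges in the table follow directly from the bit width. A quick sketch; signed_range is our own helper name:

```python
def signed_range(bits: int) -> tuple[int, int]:
    """Minimum and maximum of an n-bit two's complement integer."""
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

for bits in (8, 16, 32):
    low, high = signed_range(bits)
    print(f"{bits}-bit: {low} to {high} ({2 ** bits} values)")
```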

Binary Fractions to Decimal

Binary numbers can have a fractional part, separated by a binary point (similar to a decimal point). Bits after the binary point represent negative powers of 2: 2⁻¹ = 0.5, 2⁻² = 0.25, 2⁻³ = 0.125, and so on.

Example: Convert 101.11 to Decimal

Binary input: 101.11

Integer part (101):

  • 1 × 2² = 4
  • 0 × 2¹ = 0
  • 1 × 2⁰ = 1

Fractional part (.11):

  • 1 × 2⁻¹ = 0.5
  • 1 × 2⁻² = 0.25

4 + 0 + 1 + 0.5 + 0.25 = 5.75

Example: Convert 1.0101 to Decimal

Binary input: 1.0101

  • 1 × 2⁰ = 1
  • 0 × 2⁻¹ = 0
  • 1 × 2⁻² = 0.25
  • 0 × 2⁻³ = 0
  • 1 × 2⁻⁴ = 0.0625

1 + 0 + 0.25 + 0 + 0.0625 = 1.3125

Note: Some decimal fractions cannot be represented exactly in binary (for example, 0.1 in decimal is a repeating fraction in binary: 0.0001100110011...). This is why floating-point arithmetic can produce small rounding errors in programming.
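Python's int(s, 2) handles only whole numbers, so the fractional part must be summed by hand. A sketch of the method from the examples above; the function name binary_fraction_to_decimal is our own:

```python
def binary_fraction_to_decimal(binary_str: str) -> float:
    """Convert a binary string with an optional fractional part."""
    integer_part, _, fraction_part = binary_str.partition(".")
    value = int(integer_part, 2) if integer_part else 0
    for i, bit in enumerate(fraction_part, start=1):
        value += int(bit) * 2 ** -i  # 2^-1, 2^-2, 2^-3, ...
    return value

print(binary_fraction_to_decimal("101.11"))  # 5.75
print(binary_fraction_to_decimal("1.0101"))  # 1.3125
```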

Binary to Decimal in Programming

Most programming languages provide built-in functions for converting binary strings to decimal integers. Here are examples in the three most commonly used languages.

Python

# Convert binary string to decimal
binary_str = "10110011"
decimal_value = int(binary_str, 2)
print(decimal_value)          # 179

# Binary literal in Python
x = 0b10110011
print(x)                      # 179

# Convert decimal back to binary
print(bin(179))               # '0b10110011'

# Two's complement for signed 8-bit
def signed_binary_to_decimal(binary_str, bits=8):
    value = int(binary_str, 2)
    if value >= 2**(bits - 1):
        value -= 2**bits
    return value

print(signed_binary_to_decimal("11111001"))  # -7
print(signed_binary_to_decimal("01111111"))  # 127

JavaScript

// Convert binary string to decimal
const binaryStr = "10110011";
const decimal = parseInt(binaryStr, 2);
console.log(decimal);           // 179

// Binary literal in JavaScript
const x = 0b10110011;
console.log(x);                 // 179

// Convert decimal back to binary
console.log((179).toString(2)); // '10110011'

// For large binary values, use BigInt
const bigBin = "1111111111111111111111111111111111111111";
const bigDec = BigInt("0b" + bigBin);
console.log(bigDec.toString()); // '1099511627775'

C

#include <stdio.h>

/* Double-and-add (Horner's rule): avoids floating-point pow(), whose
   result can round just below an integer and truncate incorrectly. */
long binaryToDecimal(const char *binary) {
    long decimal = 0;
    for (int i = 0; binary[i] != '\0'; i++) {
        decimal = decimal * 2 + (binary[i] - '0');
    }
    return decimal;
}

int main() {
    printf("%ld\n", binaryToDecimal("10110011")); // 179
    printf("%ld\n", binaryToDecimal("11111111")); // 255
    printf("%ld\n", binaryToDecimal("1100100"));  // 100

    // C also supports binary literals (C23/GCC extension)
    // int x = 0b10110011;  // 179
    return 0;
}

Applications

  • Computer architecture: Understanding how CPUs process data at the binary level and convert to human-readable decimal output
  • Networking: IP address and subnet mask calculations require binary-to-decimal conversion (e.g., subnet mask 11111111.11111111.11111111.00000000 = 255.255.255.0)
  • Digital electronics: Reading sensor outputs, ADC (Analog-to-Digital Converter) values, and logic gate states
  • Programming: Bitwise operations, bit flags, permissions, and debugging binary data
  • Data encoding: Understanding character encodings like ASCII (7-bit) and UTF-8 (variable-length binary sequences)
  • Cryptography: Binary operations underpin XOR encryption, hash functions, and block ciphers used in our Vernam cipher and other tools
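The subnet-mask conversion mentioned above works one octet at a time; a minimal sketch, where binary_mask_to_decimal is our own helper name:

```python
def binary_mask_to_decimal(mask: str) -> str:
    """Convert a dotted binary subnet mask to dotted decimal."""
    return ".".join(str(int(octet, 2)) for octet in mask.split("."))

print(binary_mask_to_decimal("11111111.11111111.11111111.00000000"))
# 255.255.255.0
```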