Integer literals can be expressed in several numeration bases.
Decimal numbers use the usual syntax.
Octal numbers are expressed using a prefix
0o (or
0O) followed by one or more digits in the range
[0-7]. Examples are
0o0 and
0o777.
Hexadecimal numbers are expressed using a prefix
0x (or
0X) followed by one or more hexadecimal digits in the range
[0-f]. Examples are
0x0 and
0xcafe. Note
that both the
x in the prefix and the letters in the
hexadecimal number are case insensitive. Thus,
0XdeadBEEF is a
valid (but ugly as hell) literal.
Binary numbers are expressed using a prefix
0b (or
0B) followed by one or more binary digits in the range
[0-1]. Examples of binary literals are
0b0 and
0b10110.
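For instance, the following definitions (the variable names are invented for illustration) all bind the same value, 511, written in four different bases:

    var in_decimal = 511;
    var in_octal   = 0o777;        /* 7*64 + 7*8 + 7 = 511 */
    var in_hex     = 0x1ff;        /* 1*256 + 15*16 + 15 = 511 */
    var in_binary  = 0b111111111;  /* nine 1-bits = 511 */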
Negative numbers, of any numeration base, are constructed using the
minus operator as explained below. Therefore the minus symbol
- in negative numbers is not part of the literals themselves.
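To make the distinction concrete (again with invented variable names), in the definitions below the literals are 10 and 0x10, and the - is the unary minus operator applied to them:

    var minus_ten     = -10;    /* unary minus applied to the literal 10 */
    var minus_sixteen = -0x10;  /* same, with a hexadecimal literal */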
The character
_ can appear anywhere in a numeric literal
except as the first character. It is ignored, and its purpose is to
make the literals easier for programmers to read.
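For example (illustrative names once more), the underscores below are purely visual and do not change the denoted values:

    var a_billion = 1_000_000_000;  /* same value as 1000000000 */
    var all_ones  = 0xffff_ffff;    /* same value as 0xffffffff */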
The type of a numeric literal is the smallest signed integer capable of holding it, starting with 32 bits, in steps of powers of two and up to 64 bits.
So, for example, the value
2 has type
int<32>, but the
literal 0xffff_ffff has type
int<64>, because it is out of
the range of signed 32-bit numbers.
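A small sketch of the rule (the comments state the types inferred as described above):

    var fits_32  = 2;            /* within signed 32-bit range: type int<32> */
    var needs_64 = 0xffff_ffff;  /* 4294967295 exceeds the signed 32-bit range: type int<64> */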
A set of suffixes can be used to construct integer literals of certain
widths:
l (or L) is for 64-bit integers,
h (or H) is for 16-bit integers (also known as halves),
b (or B) is for 8-bit integers (also known
as bytes) and
n (or N) is for 4-bit integers (also
known as nibbles).
For example,
10L is a 64-bit integer with value 10,
10H is a 16-bit integer with
the same value, and
10b is an 8-bit integer, also with value 10.
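The following definitions illustrate the width suffixes (names invented for this sketch; the comments give the resulting types):

    var as_long   = 10L;  /* 64-bit: type int<64> */
    var as_half   = 10H;  /* 16-bit: type int<16> */
    var as_byte   = 10B;  /* 8-bit: type int<8> */
    var as_nibble = 10N;  /* 4-bit: type int<4> */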
Similarly, the signed or unsigned attribute of an integer can be
explicitly specified using the suffix
u or U (by
default integer literals are of signed types). For example,
0xffff_ffffU has type
uint<32> and
0ub has type
uint<8>. It is possible
to combine width-indicating suffixes with signedness suffixes:
10UL denotes the same literal as
10LU.
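And a final sketch combining both kinds of suffix (again with invented names):

    var u_word = 0xffff_ffffU;  /* unsigned 32-bit: type uint<32> */
    var u_byte = 0ub;           /* unsigned 8-bit: type uint<8> */
    var a      = 10UL;          /* unsigned 64-bit: type uint<64> */
    var b      = 10LU;          /* exactly the same literal as 10UL */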
The above rules guarantee that it is always possible to determine the width and signedness of an integer constant just by looking at it, with no ambiguity.
Rationale: the width of a C “int” is 32 bits in most currently used architectures, and binary data formats are usually modelled after C.