```c
#define _INTSIZEOF(n)   ( (sizeof(n) + sizeof(int) - 1) & ~(sizeof(int) - 1) )
```
The above macro simply aligns the size of `n` to the nearest greater-or-equal `sizeof(int)` boundary.
The basic algorithm for aligning a value `a` to the nearest greater-or-equal arbitrary boundary `b` is to

- divide `a` by `b`, rounding up, and then
- multiply the quotient by `b` again.
In the domain of unsigned (or just positive) values, the first step is achieved by the following popular trick: adding `b - 1` before the truncating division pushes every value that is not an exact multiple of `b` into the next bucket, while exact multiples are left unchanged.

```c
q = (a + b - 1) / b; // where `/` is ordinary C-style integer division (rounding down)
                     // now `q` is `a` divided by `b`, rounded up
```
Combining this with the second step, we get the following:

```c
aligned_a = (a + b - 1) / b * b;
```
In `aligned_a` you get the desired aligned value.
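As a concrete check (a hypothetical little example of my own, not part of the original answer), aligning `a = 5` to the boundary `b = 4`:

```c
#include <stdio.h>

int main(void)
{
    unsigned a = 5, b = 4;
    unsigned q = (a + b - 1) / b; // (5 + 3) / 4 == 2, i.e. 5/4 rounded up
    unsigned aligned_a = q * b;   // 2 * 4 == 8, the next multiple of 4
    printf("%u aligned to %u -> %u\n", a, b, aligned_a); // prints "5 aligned to 4 -> 8"
    return 0;
}
```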
Applying this algorithm to the problem at hand, one would arrive at the following implementation of the `_INTSIZEOF` macro:

```c
#define _INTSIZEOF(n) \
    ( (sizeof(n) + sizeof(int) - 1) / sizeof(int) * sizeof(int) )
```
This is already good enough.
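For illustration (a hypothetical test of my own, assuming a typical platform with a 4-byte `int` and an 8-byte `double`), the divide-and-multiply variant could be exercised like this:

```c
#include <stdio.h>

#define _INTSIZEOF(n) \
    ( (sizeof(n) + sizeof(int) - 1) / sizeof(int) * sizeof(int) )

int main(void)
{
    printf("%zu\n", _INTSIZEOF(char));   // 1 rounds up to 4
    printf("%zu\n", _INTSIZEOF(int));    // 4 is already aligned, stays 4
    printf("%zu\n", _INTSIZEOF(double)); // 8 is already a multiple of 4, stays 8
    return 0;
}
```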
However, if you know in advance that the alignment boundary is a power of 2, you can "optimize" the calculations by replacing the divide-and-multiply sequence with a simple bitwise operation:

```c
aligned_a = (a + b - 1) & ~(b - 1);
```
That is exactly what's done in the above original implementation of the `_INTSIZEOF` macro.
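As a sanity check (my own hypothetical test, not from the original answer), one can verify that the bitwise form agrees with the divide-and-multiply form whenever `b` is a power of 2:

```c
#include <assert.h>

int main(void)
{
    for (unsigned a = 0; a < 100; ++a) {
        for (unsigned b = 1; b <= 64; b <<= 1) { // powers of 2 only
            unsigned via_div = (a + b - 1) / b * b;
            unsigned via_and = (a + b - 1) & ~(b - 1);
            assert(via_div == via_and);
        }
    }
    return 0;
}
```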
This "optimization" might probably make sense with some compilers (although I would expect a modern compiler to be able to figure it out by itself). However, considering that the above _INTSIZEOF(n)
macro is apparently intended to serve as a compile-time expression (it does not depend on any run-time values, barring VLA objects/types passed as n
), there‘s not much point in optimizing it that way.
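To illustrate the compile-time point (a minimal sketch of my own, assuming a C11 compiler for `_Static_assert` and a power-of-2 `sizeof(int)`):

```c
#define _INTSIZEOF(n) ( (sizeof(n) + sizeof(int) - 1) & ~(sizeof(int) - 1) )

// Both operands fold to integer constants, so the macro can be used in
// contexts that require compile-time constant expressions:
_Static_assert(_INTSIZEOF(char) == sizeof(int), "char pads up to one int");
_Static_assert(_INTSIZEOF(double) % sizeof(int) == 0, "result is int-aligned");
```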
origin: c - an implement of sizeof guaranteeing bit alignment - Stack Overflow
material: Memory alignment: #define _INTSIZEOF(n) ((sizeof(n)+sizeof(int)-1)&~(sizeof(int) - 1) )