Conversion from int16_t to uint32_t
int16_t s; uint32_t ui = s;
Is the result of converting an int16_t value to a uint32_t value compiler dependent? If not, what’s the rule?
=================================================================================================================
The results are well-defined: non-negative values are unchanged, and negative values are reduced modulo 2^32. But situations where exact-width types like int16_t and uint32_t are actually needed are quite rare. There’s really no need for anything other than int and unsigned long here: those types have at least as many bits as int16_t and uint32_t and, unlike int16_t and uint32_t, they’re required to exist on any conforming implementation. If you really, really want the sexy new sized types, at least go for portability with int_least16_t and uint_least32_t.
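To make the rule concrete, here is a small sketch (assuming a hosted implementation with <stdint.h>, <inttypes.h>, and <stdio.h> available, and that uint32_t exists) showing what a negative int16_t becomes after conversion to uint32_t:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t s = -5;
    uint32_t ui = s;              /* converted modulo 2^32: 2^32 - 5 = 4294967291 */
    printf("%" PRIu32 "\n", ui);  /* prints 4294967291 on any conforming implementation */
    return 0;
}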
=================================================================================================================
Look for “integer conversions” in the standard. The difference between uint32_t and uint_least32_t here is precisely that the value of a converted int16_t is fully determined by the standard with uint32_t, whereas with uint_least32_t it depends on the width of the type (and only on that: once the implementation decides the width, the rest of the behavior is determined). In the unlikely event that you want to convert negative values to an unsigned type, chances are you do want to know the width of the type. – Steve Jessop
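A minimal sketch of the distinction being pointed at here (again assuming <stdint.h>/<inttypes.h>; the printed value for uint_least32_t depends on the width the implementation chooses for it):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t s = -1;
    uint32_t fixed = s;        /* always 2^32 - 1 = 4294967295: the width is exactly 32 bits */
    uint_least32_t least = s;  /* 2^N - 1, where N is whatever width uint_least32_t has (>= 32) */
    printf("%" PRIu32 "\n", fixed);
    printf("%ju\n", (uintmax_t)least);
    return 0;
}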
=================================================================================================================