Why does C# System.Decimal (decimal) "waste" bits?

Stack Overflow Asked on November 4, 2021

As written in the official docs the 128 bits of System.Decimal are filled like this:

The return value is a four-element array of 32-bit signed integers.

The first, second, and third elements of the returned array contain
the low, middle, and high 32 bits of the 96-bit integer number.

The fourth element of the returned array contains the scale factor and
sign. It consists of the following parts:

Bits 0 to 15, the lower word, are unused and must be zero.

Bits 16 to 23 must contain an exponent between 0 and 28, which
indicates the power of 10 to divide the integer number.

Bits 24 to 30 are unused and must be zero.

Bit 31 contains the sign: 0 means positive, and 1 means negative.

With that in mind, one can see that some bits are "wasted" or unused.
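
For illustration, here is a small C# sketch (the value and variable names are arbitrary examples) that uses decimal.GetBits to pull those pieces apart:

    using System;

    class DecimalBitsDemo
    {
        static void Main()
        {
            decimal value = -123.456m;

            // GetBits returns { lo, mid, hi, flags } as four 32-bit ints.
            int[] bits = decimal.GetBits(value);
            int flags = bits[3];

            // Bits 16 to 23 of the flags element hold the scale (power of ten).
            int scale = (flags >> 16) & 0xFF;
            // Bit 31 holds the sign.
            bool isNegative = flags < 0;
            // Bits 0 to 15 and 24 to 30 are the "unused" bits; they are always zero.
            int unused = flags & 0x7F00FFFF;

            Console.WriteLine($"lo={bits[0]}, mid={bits[1]}, hi={bits[2]}");
            Console.WriteLine($"scale={scale}, negative={isNegative}, unused={unused}");
            // Prints: lo=123456, mid=0, hi=0
            //         scale=3, negative=True, unused=0
        }
    }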

Why not, for example, 120 bits of integer, 7 bits of exponent, and 1 bit of sign?

There is probably a good reason for decimal being laid out the way it is; this question asks for the reasoning behind that decision.

2 Answers

Based on Kevin Gosse's comment

For what it's worth, the decimal type seems to predate .net. The .net framework CLR delegates the computations to the oleaut32 lib, and I could find traces of the DECIMAL type as far back as Windows 95

I searched further and found a likely user of the DECIMAL code in oleaut32 on Windows 95.

The old Visual Basic (non-.NET) and VBA have a sort-of-dynamic type called 'Variant'. In there (and only in there) you could store something nearly identical to our current System.Decimal.

A Variant is always 128 bits, with the first 16 bits reserved for an enum value indicating which data type is inside the Variant.

How the remaining 112 bits are divided up could be based on common CPU architectures of the early '90s, or on ease of use for the Windows programmer. It sounds sensible not to pack the exponent and sign into one byte just to have one more byte available for the integer.
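
For illustration, the unmanaged OLE Automation DECIMAL layout can be sketched as a C# struct roughly like this (field names follow the wtypes.h declaration; the packing shown is an assumption for illustration). Note how the first 16 bits are left free, which lines up with where a Variant keeps its type tag:

    using System.Runtime.InteropServices;

    // Rough sketch of the unmanaged OLE Automation DECIMAL layout.
    [StructLayout(LayoutKind.Sequential, Pack = 1)]
    struct OleDecimal                // 16 bytes = 128 bits in total
    {
        public ushort wReserved;     // bits 0-15: unused by DECIMAL itself; inside a
                                     // Variant these bytes hold the type tag (VT_DECIMAL)
        public byte scale;           // bits 16-23: power of ten, 0 to 28
        public byte sign;            // top bit (bit 31): 0x00 positive, 0x80 negative
        public uint Hi32;            // high 32 bits of the 96-bit integer
        public uint Lo32;            // low 32 bits
        public uint Mid32;           // middle 32 bits
    }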

When .NET was built, the existing (low-level) code for this type and its operations was reused for System.Decimal.

None of this is 100% verified, and I would have liked the answer to contain more historical evidence, but that's what I could piece together.

Answered by Tom on November 4, 2021

Here is the C# source of Decimal. Note the FCallAddSub-style methods, which call out to (unavailable) fast C++ implementations of those operations.

I suspect the implementation is like this because it means operations on the numbers in the first 96 bits can be simple and fast, since CPUs operate on 32-bit words. If 120 bits were used, CPU operations would be slower and trickier, requiring a lot of bit masking to get at the extra 24 bits, which would then be awkward to work with. It would also 'pollute' the flags in the highest 32 bits and make certain optimizations impossible.

If you look at the code, you can see that this simple bit layout is useful everywhere. It is no doubt especially useful in the underlying C++ (and probably assembler).
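
To illustrate the point about 32-bit words, here is a simplified sketch (not the actual runtime code; scale alignment and overflow handling are omitted) of adding two 96-bit magnitudes stored as three 32-bit words: just three machine-word additions with carry propagation.

    static class Decimal96Sketch
    {
        // Add two 96-bit unsigned magnitudes, each stored as three 32-bit
        // words the way Decimal stores its coefficient. Simplified sketch:
        // scale alignment and overflow handling are omitted.
        static (uint lo, uint mid, uint hi) Add96(
            uint aLo, uint aMid, uint aHi,
            uint bLo, uint bMid, uint bHi)
        {
            ulong sum = (ulong)aLo + bLo;
            uint lo = (uint)sum;

            sum = (ulong)aMid + bMid + (sum >> 32);  // carry from the low word
            uint mid = (uint)sum;

            sum = (ulong)aHi + bHi + (sum >> 32);    // carry from the middle word
            uint hi = (uint)sum;                     // any further carry would be an overflow

            return (lo, mid, hi);
        }
    }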

Answered by Jason Crease on November 4, 2021
