Converting Modular to Integer type

Hi,

Quick newbie question. From what I have seen, I can do this to convert from I64 (signed 64 bit) to U64 (unsigned 64 bit):

function To_U64(v: I64) return U64 is
begin
    return U64'Mod(v);
end To_U64;

Is there a similar approach for the reverse, going from U64 to I64?
I guess it might not be that simple, since the mapping of 0 is undetermined for an arbitrary range. What approach would you recommend?

Thanks in advance :smile:

Perhaps the following?

function To_I64 is new Ada.Unchecked_Conversion (Source => U64, Target => I64);
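For illustration, here is a minimal sketch of how that instantiation might be used (the procedure name and the literal are made up; the printed result assumes 2's-complement hardware, which is discussed further down the thread):

```ada
with Ada.Unchecked_Conversion;
with Ada.Text_IO;
with Interfaces; use Interfaces;

procedure Unchecked_Demo is
   --  Reinterprets the bits of Unsigned_64 as Integer_64; no range check.
   function To_I64 is new Ada.Unchecked_Conversion
     (Source => Unsigned_64, Target => Integer_64);
begin
   --  The all-ones bit pattern reads back as -1 on 2's-complement hardware.
   Ada.Text_IO.Put_Line (Integer_64'Image (To_I64 (16#FFFF_FFFF_FFFF_FFFF#)));
end Unchecked_Demo;
```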

This depends on exactly what you’re doing; if it’s simply reinterpreting the bits, then the Unchecked_Conversion is the way to go. If, on the other hand, it’s something where you want things to map ‘naturally’, then something like Long_Integer(X) might be appropriate. If, on the third hand, you need to watch your bounds, then something like:

function Convert (X : Interfaces.Unsigned_64) return Interfaces.Integer_64 is
  (if X >= Interfaces.Unsigned_64 (Interfaces.Integer_64'Last)
   then Interfaces.Integer_64'Last
   else Interfaces.Integer_64 (X));
--  X is modular and can never be negative, so no lower clamp is needed.

Hope that helps.


Use the standard type conversion:

   X := I64 (Y);
   Y := U64 (X);

It will raise Constraint_Error when the value is out of range.

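A sketch of what catching the out-of-range case might look like (the procedure and variable names are illustrative only):

```ada
with Ada.Text_IO; use Ada.Text_IO;
with Interfaces;  use Interfaces;

procedure Checked_Conversion_Demo is
   subtype U64 is Unsigned_64;
   subtype I64 is Integer_64;
   Y : constant U64 := U64'Last;   --  out of I64's range
   X : I64;
begin
   X := I64 (Y);                   --  raises Constraint_Error here
   Put_Line (I64'Image (X));
exception
   when Constraint_Error =>
      Put_Line ("value out of range for I64");
end Checked_Conversion_Demo;
```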

Hi, thanks everyone for the answers. What I am trying to do is the inverse function, that is, to map it as if it were a two’s complement conversion. I am not sure whether Unchecked_Conversion is what I want: does this conversion depend on the internal representation of the machine (e.g. endianness), or is it consistent across platforms?

Do you want to map negative signed numbers -I to 2**N - I? Then you write just that:

   function To_U64 (I : I64) return U64 is
   begin
      if I < 0 then
         return U64'Last - U64 (-1 - I);
      else
         return U64 (I);
      end if;
   end To_U64;

Well, I believe that the code above does that, the problem comes when going the other way around. Coming from an unsigned and then going to the signed version.

I have arrived at this:

    function To_I64(v: U64) return I64 is
    begin
        return (if v <= U64(I64'Last) then  I64(v)
                else                        -I64(U64'Last - v + 1));
    end To_I64;

Just wondering if there is any other way of doing this.

Note that 2’s complement has the asymmetric range -2**(N-1) .. 2**(N-1) - 1. The edge case is that -I64’First is not representable in I64, so your code is incorrect: for v = 2**63 the expression U64'Last - v + 1 yields 2**63, and converting that to I64 raises Constraint_Error. You must subtract 1 after the type conversion.
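Concretely, the corrected version with the subtraction moved after the type conversion might look like this (sketch):

```ada
function To_I64 (V : U64) return I64 is
begin
   if V <= U64 (I64'Last) then
      return I64 (V);
   else
      --  Convert first, then subtract 1, so that V = 2**63 yields
      --  I64'First without ever forming the non-representable +2**63.
      return -I64 (U64'Last - V) - 1;
   end if;
end To_I64;
```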

BTW, you can use bit-wise operations instead of subtraction, e.g.:

   function To_I64 (X : U64) return I64 is
      Last : constant := U64'Modulus / 2;  --  2**63
   begin
      if X >= Last then
         return -I64 (not X) - 1;
      else
         return I64 (X);
      end if;
   end To_I64;

You are right, changed it. One last question, is the representation of two’s complement uniform across machines in Ada?

I believe all modern processors are 2’s complement.


AFAIK, in the past some machines were not two’s complement, but nowadays even modern C++ does not support non-two’s-complement machines: P0907R4: Signed Integers are Two’s Complement

In Ada the compiler presumably has to generate mathematically correct code in a deterministic manner, so it will emit whatever instructions are needed to fulfil the operation regardless of the hardware’s representation.


Thanks! I think I will keep the subtracting version rather than the complement one. My inner C programmer gets an uneasy feeling when depending on bit representation. :slight_smile:

Oh… I was there… You are about to discover the most amazing language when it comes to bits and low-level aspects… You are going to love Ada! Here is an interesting book/chapter that may help you a lot: Ada for the Embedded C Developer - learn.adacore.com, but most importantly Types and Representation - learn.adacore.com


This question is… confusing.
Ada doesn’t require two’s complement, and that’s a property of the processor anyway. (Randy’s Janus has an implementation that runs/ran on a one’s complement machine, IIRC.)

It’s essentially a moot point, because all modern machines are two’s complement, and likely to be that way unless and until someone catapults some alternate architecture (probably radically different, like balanced trinary) into [semi-]mainstream usage.

(Experimental hardware, FPGAs and the like excepted, of course.)

When you really need the control, you can’t beat record-representation clauses and manually laying things out. (I did this for a 4- and 8-bit floating point, that was… an interesting exercise.)
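As a taste of that control, here is a sketch of what a record representation clause for a hypothetical 8-bit "minifloat" might look like (the 1-4-3 split of sign, exponent, and mantissa is made up for illustration):

```ada
--  Hypothetical 1-4-3 layout: sign, exponent, mantissa.
type Mini_Float is record
   Mantissa : Natural range 0 .. 7;
   Exponent : Natural range 0 .. 15;
   Sign     : Boolean;
end record;

--  Pin every field to an exact bit position within byte 0.
for Mini_Float use record
   Mantissa at 0 range 0 .. 2;
   Exponent at 0 range 3 .. 6;
   Sign     at 0 range 7 .. 7;
end record;

for Mini_Float'Size use 8;
```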

Every time you look inside a value or play with any bit-oriented operation, you are implicitly assuming a representation, in this case 2’s complement (nearly all CPUs in the known universe). There is no way to correctly adjust bits if you do not know the internal representation.

Playing with bits in an arithmetic context with and, or, xor, and so on, is formally equivalent to a bunch of Unchecked_Conversions from that point of view (because you bypass the language all the way down to the hardware), and the code would simply not be portable to a non-2’s-complement machine.

Sorry for the trivia.