A Collection of Ada Unchecked_Conversion Testing Programs

Using Unchecked_Conversion has always worried me, but programming is no place for paranoia, so I wrote these test programs to settle the unease within me:

I may add more later.


In many cases there is absolutely nothing to worry about with Unchecked_Conversion. For a start, the compiler checks that the source and target sizes match (GNAT warns at the instantiation when they differ). As long as you understand what the conversion is doing, you should be able to foresee any potential problems.
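For instance, a minimal sketch (it assumes Float is the usual 32-bit IEEE type on your target; with mismatched sizes GNAT would warn at the instantiation):

```ada
with Ada.Text_IO;
with Ada.Unchecked_Conversion;
with Interfaces; use Interfaces;

procedure UC_Demo is
   --  Assumes Float'Size = 32 on this target; if Source and Target
   --  sizes differed, GNAT would warn here.
   function To_Bits is new Ada.Unchecked_Conversion (Float, Unsigned_32);
begin
   --  1.0 in IEEE 754 single precision is 16#3F80_0000#
   Ada.Text_IO.Put_Line (Unsigned_32'Image (To_Bits (1.0)));
end UC_Demo;
```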

I shall have to compare these to my own implementations at some point.

You do need to be aware of byte-order issues, which are particularly likely at the edges of the system, which is where you’re most likely to need to do these conversions!


True, but that applies to bit shifting too. Unchecked_Conversion is just easier, less error-prone, and if anything more efficient.

Bit shifting is byte-order neutral. Doing

Shift_Right (16#FF00_0000#, 24);

will always yield 16#0000_00FF#, i.e. the most significant byte, regardless of whether the system is little endian, big endian, or even middle endian.
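A quick sketch demonstrating that (the procedure name is mine; Shift_Right here is the one from Interfaces):

```ada
with Ada.Text_IO;
with Interfaces; use Interfaces;

procedure Show_Shift is
   --  Shift_Right operates on the numeric value, not on storage
   --  bytes, so the result is the same on any byte order.
   MSB : constant Unsigned_32 := Shift_Right (16#FF00_0000#, 24);
begin
   Ada.Text_IO.Put_Line (Unsigned_32'Image (MSB));  --  255
end Show_Shift;
```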


I guess I’m spoilt, having only ever had to deal with little endian or network byte order, never both at once.

If you have to do byte swapping, it’s best to use normal type conversion and let the compiler do it for you.

As in this example, from SNTP_Support:196 ?

   end record
     with Bit_Order            => System.High_Order_First,
          Scalar_Storage_Order => System.High_Order_First;

(describes a packet to be transmitted/received via the network)

I know this works for packed types and other simple representation differences, but I’ve read conflicting reports about whether these particular representation clauses are suited to the endianness problems of networked programs and the like.

Furthermore, I overwhelmingly prefer to use Ada 95, which predates some of these clauses.

13.5.3 Bit Ordering
[The Bit_Order attribute specifies the interpretation of the storage place attributes.]
Reason: The intention is to provide uniformity in the interpretation of storage places across implementations on a particular machine by allowing the user to specify the Bit_Order. It is not intended to fully support data interoperability across different machines, although it can be used for that purpose in some situations.
We can’t require all implementations on a given machine to use the same bit ordering by default; if the user cares, a pragma Bit_Order can be used to force all implementations to use the same bit ordering.

For every specific record subtype S, the following attribute is defined:

This seems to imply that Bit_Order only applies to record types.
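For reference, here is roughly what Bit_Order looks like on a record representation clause (a sketch; the type and component names are mine). With High_Order_First, bit 0 of each position is counted from the most significant end:

```ada
with System;

package Flags is
   type Status is record
      Ready : Boolean;
      Error : Boolean;
   end record;
   for Status'Bit_Order use System.High_Order_First;
   for Status'Size use 8;
   for Status use record
      Ready at 0 range 0 .. 0;  --  bit 0 = MSB of the byte
      Error at 0 range 1 .. 1;
   end record;
end Flags;
```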

This is not a standard aspect, but what you’re doing is not using a normal type conversion to swap bytes.

Something like this has worked since Ada 83 (13.6):

type U8 is range 0 .. 2 ** 8 - 1;
for U8'Size use 8;
type U16 is range 0 .. 2 ** 16 - 1;
for U16'Size use 16;

type U16_BE is record
   MSB : U8;
   LSB : U8;
end record;
for U16_BE'Size use U16'Size;
for U16_BE use record
   MSB at 0 range 0 .. 7;
   LSB at 1 range 0 .. 7;
end record;

type U16_LE is new U16_BE;
for U16_LE'Size use U16'Size;
for U16_LE use record
   MSB at 1 range 0 .. 7;
   LSB at 0 range 0 .. 7;
end record;

Then, if getting bytes in network order on an LE machine, you can do

BE : U16_BE;
LE : U16_LE;
BE.MSB := Buffer (I);
BE.LSB := Buffer (I + 1); -- Or UC of Buffer (I .. I + 1), or aggregate, or ...
LE := U16_LE (BE);

and the bytes are swapped for you. This generalizes to larger types. Of course, you usually use UC to convert the result to U16.
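Putting those pieces together, a complete sketch might look like this (my procedure name; the printed value assumes a little-endian host, where the bytes of the UC result read as 16#1234#):

```ada
with Ada.Text_IO;
with Ada.Unchecked_Conversion;

procedure Swap_Demo is
   type U8 is range 0 .. 2 ** 8 - 1;
   for U8'Size use 8;
   type U16 is range 0 .. 2 ** 16 - 1;
   for U16'Size use 16;

   type U16_BE is record
      MSB : U8;
      LSB : U8;
   end record;
   for U16_BE'Size use U16'Size;
   for U16_BE use record
      MSB at 0 range 0 .. 7;
      LSB at 1 range 0 .. 7;
   end record;

   type U16_LE is new U16_BE;
   for U16_LE'Size use U16'Size;
   for U16_LE use record
      MSB at 1 range 0 .. 7;
      LSB at 0 range 0 .. 7;
   end record;

   function To_U16 is new Ada.Unchecked_Conversion (U16_LE, U16);

   BE : U16_BE;
   LE : U16_LE;
begin
   --  Pretend these bytes arrived in network (big-endian) order
   BE.MSB := 16#12#;
   BE.LSB := 16#34#;
   LE := U16_LE (BE);  --  Compiler swaps the bytes per the rep clauses
   --  4660 (= 16#1234#) on a little-endian machine
   Ada.Text_IO.Put_Line (U16'Image (To_U16 (LE)));
end Swap_Demo;
```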

Yes; it means that the bit positions in a record representation clause use the specified bit numbering. System.Default_Bit_Order gives the default numbering, and is often used to indicate the endianness of the machine running the program.
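A common idiom along those lines (hedged: Default_Bit_Order correlating with endianness is conventional, not something the standard guarantees):

```ada
with Ada.Text_IO;
with System;

procedure Which_End is
begin
   --  Low_Order_First conventionally corresponds to little endian
   if System.Default_Bit_Order = System.Low_Order_First then
      Ada.Text_IO.Put_Line ("probably little endian");
   else
      Ada.Text_IO.Put_Line ("probably big endian");
   end if;
end Which_End;
```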

Shouldn’t you use Object_Size, since Size on a type is a request and not a requirement? Or Size on the variables (objects), as I don’t know when Object_Size came along. I’m guessing it is fine here, but isn’t it best to get the compiler to have your back?

The prime issue with using these representation clauses is the assumption about the Storage_Unit and the like. I don’t want to write code that assumes octets, and wrote as much in the article. I merely care that the representation of certain modular types is exactly the same as certain array of octet types, however they’re stored.

Object_Size was only standardized in Ada 2022. As I was writing something Ada 83-ish it obviously isn’t an option. But the meaning, requirements, and implementation advice for Size and Object_Size are clearly defined in ARM 13.3, and you should be familiar with them if you need to use them.

What processor are you thinking of targeting where Storage_Unit /= 8? They’re about as common as hen’s teeth these days. And on such a processor, a modular type may not have the same representation as an array of octets.
But if you’re doing these kinds of things, your code will be processor dependent anyway, and will likely require a specific value of Storage_Unit.

Right, but to be pedantic: shouldn’t you set Size on the variable declaration if you do not have Object_Size available, as discussed in the link that I provided?

These programs are still useful in certain situations. One program I intend to finish within the year uses my SHA library and my Serpent cipher library, which use big- and little-endian integers respectively.

I’m not.

Yes, but these programs will prove or disprove such for a particular system, removing any and all doubt.

Yes, the obvious way to do this is to simply require particular values for Storage_Unit and the like. If I were writing serious Ada code, such as for the DoD, the decisions wouldn’t even be mine.

Ada 83 didn’t have unsigned integer types, but the ARG (or equivalent) ruled that compilers had to use an unsigned implementation for a (sub)type with no negative values and a Size clause that excluded a signed representation. No Size clause for objects was ever needed to obtain this.

As the ARM states, one place where Size on a (sub)type is obeyed is Unchecked_Conversion, which is what my example was using it for.


Or you can just assert Storage_Unit = 8. For Ada 95, you’d use something like PragmARC.Assertion_Handler.
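If you want the check at compile time instead (not the PragmARC approach, just a common static-expression trick that works in Ada 95), a static division by zero is illegal, so compilation fails exactly when the condition is False:

```ada
with System;

package Storage_Check is
   --  Boolean'Pos (...) is statically 0 or 1; dividing by zero in a
   --  static expression is rejected at compile time, so this package
   --  only compiles when System.Storage_Unit = 8.
   Octets_Required : constant := 1 / Boolean'Pos (System.Storage_Unit = 8);
end Storage_Check;
```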

If the Ada you’re writing is not serious, you can safely assume Storage_Unit = 8.