Discussions about open source Ada compilers

Rather than derailing the thread on the gnat bootstrapping NLnet project when answering OneWingedShark's message, I prefer to open a separate, appropriately titled thread.

First some answers to the following post.

The absence of explicit data structuring in Forth, while not an insurmountable problem, will be a nuisance in the following sense. Memory access in Forth is @ and ! : simple, but overly primitive when generating code from DIANA. Ada is very stack oriented, so the primitive memory access is disp(FP(level)), that is, indirect displaced addressing off a frame pointer associated with a static nesting level. Frame pointers are best kept in a display rather than in a static chain because static depth is frequently large in Ada (the DIANA translator, with its nested subprograms and declare blocks, has more than 8 levels, and it is safer to set a limit of 16 to 32 levels). This is why I changed my display coding from a “clever” use of the r8 to r15 x86-64 registers as the FP display to a 32-entry memory display at the base of the stack (see the end of codi_x86_64.finc in src/codegen/fasmg of the Framagit ada-83-compiler-tools project).

When producing an intermediate low-level representation, you need to verify where variables or variable descriptors are stacked and how they are aligned. This will be obscure in Forth. For example, an Ada 83 STRING type is a character array. Arrays need to be represented by a descriptor containing the array element size, the first and last indices (a triplet for each dimension) and a pointer to the array content. The descriptor can be allocated on the execution stack because its size is known at compilation; the array content must be allocated elsewhere because, in some circumstances, its size is known only at runtime. The descriptor must be ordered POINTER, SIZE, FIRST, LAST on a 64-bit addressed machine if SIZE and the others are 32-bit integers, to respect alignment. The array content must be placed on another sort of stack-managed heap.
These are technical details, but an effective implementation is full of such tiny details. With Forth they would be difficult to manage.

For those points:

There are several ways you can do it:

  1. Using a High Level Language as a Cross Assembler

Yes, I once thought of using TCC or something like it, but I have a profound aversion to C and all resembling improvised languages. So no to letting in C components; if unavoidable, I even prefer assembly code to C.

  2. Target a VM

I prefer avoiding dragging in a whole gaz plant.

  3. P-Code is simple, but less well-known now;

The Polish Gliwice Ada IIPS theses planned to use A-code, an extension of P-code for Ada 83 (M. Ciernak). It is the same perspective as Ada/Ed’s interpreted bytecode: a stack machine code is used, which cannot be optimized easily, but generation from DIANA is easier than with n-tuples. The Polish implementers, however, planned to translate A-code to native x86 binary (Wierzinska’s thesis).

  4. JVM is ubiquitous, making for a good cross-compile bootstrap platform;

This is a whole gaz plant. I can’t.

  5. DOTNET is interesting, but more limited (there are implementations on non-MS OSes);

Same remark.

  6. Forth/SeedForth, very simple and small;

I also thought about using Forth, but after reflection and some small-scale trials I abandoned the idea for the reasons evoked above. The low-level code is unreadable with regard to data placement verification.

  7. LOLITA - A Low Level Intermediate Language for Ada+
    (This paper is the only ref to it, though, TTBOMK)

I did not know it; thank you for the article, you are an efficient provider of quite interesting information. Some remarks are interesting: they do not use n-tuples but trees, they point out the basic disp(FP) addressing, and there is an allusion to macro low-level IR instructions. But there are too few details to progress much.

  8. Interpret/execute the IR directly.
    (This is technically what Graal/Truffle does, though the IR is essentially the AST.)

This is somewhat akin to the solution I adopted: the low-level IR is macro-code assembled to native code. I finally settled on this solution because it is the most direct way to get a working native executable. It has drawbacks though: 1) the executable is monolithic, and withed units or ancestors are included as macro text rather than as library ELF units; 2) the macros are stack machine operations (like P-code or A-code instructions), and no optimization is done; for array accesses in loops it is impossible to hoist loop invariants out, and the same goes for common subexpressions. But given the power of present machines it is not an immediate catastrophe. I also think it is only half a problem: instead of being native code generators, the stack machine macros can almost certainly become generators of an optimizable ad hoc low-level IR, itself generating the final native code. But for now that overwhelms my capacities, so I am content with the present solution.

1 Like

More directly in line with the topic title:

What do we have at hand in relation to open source Ada compilers, and what could we do that would be a useful community achievement?

First, I set gnat apart. Gnat is a wonderfully useful tool, up to date with the recent Ada revisions, and its source code can be looked at, but it is only manageable by a team of first-class engineers. AdaCore does an admirable job; this is not our playground.

Briefly, what are the other compilers whose source code is available, what are their characteristics, and what can we do with them?

  • Ada/Ed

Ada/Ed, as everybody knows, is an early demonstrator Ada 83 compiler written in C; it targets an interpreter and lacks very useful features such as record representation clauses, which are basic to microcontroller Ada programming (for example, the AVR register map is conveniently represented with such a record).
Working with the Ada/Ed source code is a pain for non-C programmers. The high-level representation of Ada/Ed is not DIANA (which is not that easy to use, but is relatively clear, documented, and used both in the Ada 83 Peregrine source code and in the Pascal IIPS module of M. Ciernak). Ada/Ed can surely be recompiled with modern C compilers after some modest editing; I have done it for the interpreter, being curious about its bytecode and runtime.
But I doubt that Ada/Ed will be really useful outside of its initial vocation, which was research and education, it being the descendant of the SETL specification.

  • SmallAda and its descendant HAC

Gautier de Montmollin has done an admirable job over a decade, and intensely over the last five years, in recovering the SmallAda system written in Pascal. HAC is an operational system able to execute some serious programs; chapeau bas for this work. Nonetheless, some choices have been made which will hinder or preclude some desirable features. HAC is not written in Ada 83 but in at least Ada 95, using child packages. I don’t know the HAC source well enough to tell whether some Ada 2X features are used. On the other side, HAC compiles a subset of Ada 83; SmallAda was a vehicle for tasking research and was not destined to become a full Ada 83 compiler.
That means that HAC cannot compile itself, and it would have to be augmented to the point of being able to compile its own Ada 9X, or later 2X if necessary. This is probably not the goal of the author.

  • The Peregrine Ada 83 / DIANA system and its descendant (TLALOC?)

From the Walnut distribution, I extracted an Ada 83 system written for DEC Ada by Peregrine Systems, directed by Bill Easton. As it was somewhat strangely structured (there was an unclear mix of three usages of IDL: for IDL itself, for LALR, and for DIANA management), I did my best to clarify the affair and modified the system’s structure, the virtual page pointer management and many other things. Thanks to gnat, that must be said.

This system takes full Ada 83 source text and produces a DIANA high level intermediate representation of it.
As far as I have tested it, it works well, except for some rudimentary library management and the care needed to compile modules manually in the correct order.
This system also swallowed the Kalinda OS code, and if somebody tries other Ada 83 source texts (which should be done), I am confident it will produce the corresponding DIANA.

Then I decided to follow the polish IIPS project and began to browse the DIANA IR in the code_gen procedure to produce low level IR and final code. I think I’ll call this whole system TLALOC.

Being itself written in rigorous Ada 83, TLALOC compiles itself without error and so produces the whole DIANA structure of the compiler, which is already something. If a successful DIANA to low-level IR code generator can be produced, this compilation system could become the only open source Ada 83 compiler which compiles itself (Ada/Ed cannot, being in C; neither can HAC, being in 9X).
Ada 83 being a fixed standard, there is minimal risk of sliding from version to version as happens with gnat. Only avoid inventing pragmas… Stick to the 83 LRM.

I rewrote the readme and markdown introductions in English on the Framagit, with some clickable diagrams to access the source code, hoping it will encourage reading.

As a reminder, the thing is here: Vincent Morin / Ada-83-compiler-tools · GitLab

  • Is there anything else?
1 Like

The open-source ParaSail interpreter/compiler (all written in Ada) has a partial Ada 2022 front end. See parasail/ada202x_parser at main · parasail-lang/parasail · GitHub for the parser/lexer.

Internal tree representation, and static and dynamic analysis phases, are all in parasail/semantics at main · parasail-lang/parasail · GitHub

Interpreter/run-time is in parasail/interpreter at main · parasail-lang/parasail · GitHub

Translator from ParaSail VM (PSVM) to LLVM is in parasail/lib at main · parasail-lang/parasail · GitHub in (ParaSail) source files such as compiler.{psi,psl}

5 Likes

Wow, that’s impressive! (I read parasail_intro.pdf.) I wasn’t aware of ParaSail as a new programming language though I’ve seen you mention ParaSail in posts in the context of OpenMP support and parallel for loops.

:astonished_face: Well, extremely interesting but somewhat stunning; I am left speechless when considering the amount of work… The GitHub mentions only 3 contributors; I suppose there is some team behind it. The work of how many persons is it?

I had heard about ParaSail but never had the time to dig into it.

So, if my first browsing impression is correct, you have several languages (ParaSail, 202X, Javallel, Parython) which form a family with semantics close enough that a common high-level IR can be used. This IR can be transformed into an interpreted form (special mention to the 23100-line psc-interpreter.adb) or carried further through LLVM.

The documentation contains very interesting thoughts about parallel programming and the software aspects that make it difficult. When you write about hundreds of picothreads, is there some relation to Triton/GPU kernels?

Now, once the reader comes back to his senses, and taking into account that this same reader started in the 1980s with Ada 83, this question, somewhat provocative, I admit, inevitably leaps to his mind: “is all this still Ada?”

I did not catch that in my reading. Does that mean I could write SPARK and Ada 2022 code in such a way that it compiles both with GNAT and with ParaSail’s compiler, and that when compiled with the ParaSail compiler it gets safe parallelism wherever that would produce a performance improvement according to the heuristics? (Or, put another way, could I write ParaSail programs in SPARK/Ada syntax that would also compile with an Ada compiler?)

When you browse the ParaSail GitHub, you find several parsers: one for ParaSail, another for Javallel, yet another for Parython, one for Sparkel, and finally one for 202X.

All those language front ends’ syntax parser bodies use PSC.Tree.Semantics. That is, you have a common high-level IR system for all of them. Perhaps not all the languages use all the provided functionality, but there is a common HLIR system.

Then there is a unique interpreter, so all those languages can be interpreted by the same interpreting system, which takes the common HLIR.
If desired, a system written in ParaSail can produce LLVM input from the PS VM code, so that all the front-end languages can in fine be compiled through LLVM.

The 202X Reference Manual says at 1.3 that 202X is a ParaSail-inspired Ada 2022 subset, optimized to reduce potential parallel programming problems with globals, runtime exceptions, race conditions and that sort of thing. So 202X is not Ada 2022.

I guess that Javallel is not Java, Parython not Python and Sparkel not SPARK, but each is a ParaSail-inspired version of the corresponding language. In the end, you will not compile Ada 2022 or SPARK with the PSC system as you would with gnat, but rather the corresponding PS-inspired version.

S. Tucker Taft, who had the great kindness to point us to this remarkable work, will perhaps shortly confirm my understanding of their system (or point out some deficient comprehension on my side).

1 Like

If you want to read most of the story of how ParaSail was designed, I would encourage you to take a look at the Blog which was started in 2009 and traced the week-by-week progress (Designing ParaSail, a new programming language: Why design a new programming language?).

Having spent most of my career involved with the design and implementation of Ada, in 2009 I decided to start from scratch and design a stripped-down language focused on supporting multicore hardware, with high integrity. There was no “team” behind ParaSail, though it is based on things that were learned from collaborative work done over 40 years.

4 Likes

This weekend, while clearing out the house where I grew up, I came across my papers from my final year at university. I think it was in 1989. I found articles and general documentation.
For example, the documentation for the Meridian compiler that we used at the university, articles about the Alsys compiler (which I later used extensively in industry), and about the amazing Rational system.

These Ada 83 systems were good and mature.
But they no longer exist, and haven’t for years (even decades), and there’s probably no more economic interest behind them.

I don’t know who owns this software, but maybe we could ask the current owners to release the sources under a free software licence and give those marvelous pieces of code a second life. It would be a much easier starting point.

I regularly use FreeCAD. Its geometric kernel comes from Euclid, a program acquired by its competitor CATIA, whose development ceased in 1998. Then, after some twists and turns, it was “liberated” and is now used in many open-source programs and by many research projects.

It would be great if the Alsys compiler could experience a similar resurrection.

3 Likes

This is obviously a good idea. The difficulty will be finding contacts: time has already passed, some people are retired or even passed away, and the history of firms like Alsys is complicated, between Thomson, Aonix, PCT and Atego. Are there any Ada 83 compiler sources left? Where? Who could have a hand on them?
If some of us have relations or contact indications, it would be welcome.

I contacted RRSoftware asking if they would be interested in participating in an Ada 83 technology preservation action. We’ll see if there is an answer.

I also contacted the Karlsruhe university (KIT) Institut für Programmstrukturen und Datenorganisation (IPD); I remember some articles by Rudolf Landwehr, who is at Atos and affiliated with KIT. They have done very interesting work; could it be that some sources get disclosed?

Let’s try…

Thank you! I read the blog in its entirety. It is full of very interesting thoughts. ParaSail is a leading-edge research vehicle, somewhat far from my Ada 83 preservation and revival occupations.

But the modern and the ancient have some continuity.

For example, I wondered why Ada 83 appeared to me as such a peculiar way of software development when I was a young student and researcher, and why Ada 95 conveyed a more conventional feeling. I realized later that Ada 83 is a structure-forcing language (to the point of being sometimes heavy on the software developer) and in fact discourages the use of pointers; of course there are access types, but much can be done without them. On the contrary, Ada 95 led the programmer to instinctively use aliased and access things and was much more conducive to pointer use. I’ll say a horrible thing, but Ada 95 had a kind of C facility about it for the developer. So the “no pointer” policy of ParaSail echoes the “pointers set aside” encouragement of Ada 83.

Also, the “growing/shrinking” objects idea echoes the Ada 83 memory manager of my Kalinda OS trials. Kalinda’s memory package is an Ada 83 (and at first Ada 95, after Pascal and briefly C; nobody’s perfect) adaptation of the Macintosh memory manager and its double-indirection handles. Ada forced me to implement a concept of dynamic-size arrays with headers to replace the traditional pointer access to the handles’ contents. Pointers disappear; arrays are used. The same idea appears in ParaSail.

So thank you for all those intellectually vibrant ideas, and fair winds to ParaSail! (We are on the shore here in Brest; sailing evokes familiar pictures.)

The Alsys compiler is not dead, it changed its name to ObjectAda and is doing well.
You will notice .prj files in some open-source projects, including mine, which are project files for the ObjectAda IDE.

As far as I remember, when switching to Ada 95, Alsys bought the technology and dropped the in-house one.
I remember the ObjectAda interface being very different from the old AdaWorld interface. I’m not sure there is a single line of code shared between the two families (maybe in some backend?).

This is why I think the Ada83 code could be released.
But I may be wrong!

I should have been more precise in the topic title and state that the interest is on open source pure Ada 83 compilers.

Ada 9X and 2X are still sensitive technology (all the more so as we get close to our epoch), and getting access to those compilers’ source code is improbable. That should not be the case for pure Ada 83, which is normally considered obsolete technology.

If there was a pure Ada 83 Alsys compiler, it would be interesting to have access to its source code. I always used ObjectAda in the past and never used a pure Ada 83 Alsys compiler.

Anyway, I am convinced that it is time, and perhaps urgent, to preserve Ada 83 technology artifacts, both for computer history and to give the young an occasion to get their hands on some rather remarkable software engineering achievements.

I am reading the documents on the Rational R1000s400 restart at DataMuseum.dk. They have produced an emulator starting from the schematics… It is extremely interesting, and from some points of view still modern. Reading the patents is also instructive, even in 2025.

For now I see three possible paths to Ada 83 compiler source code :

  • Janus Ada 83, which was for i386 DOS and Unix
  • the Karlsruhe compiler, which probably targeted a Siemens machine and the M68K (to verify)
  • and Alsys (targets?).

If somebody has other ideas or personal contacts, feel free to participate in the context of Ada 83 Memory wiki (ada83.org).

The link Developers Forge for Open Source Ada Code and Tools - Compilers has a section “ancient”.
It mentions Alsys AdaWorld, Verdix VADS and Meridian.
That is where we should try to convince some people to disclose and save the old Ada 83-only compilers. There should not be any technological or commercial risk for them in doing so.

Thank you.

Gaz plant?
I’m not sure what you’re trying to say, but I think perhaps you’re commenting on the dependency upon the VM, and tangentially on the frameworks/environments. This is a valid concern, but there are ways to mitigate it.

To give an explicit example, consider the JVM and its classes: you could “work backward” and import classes that (a) represent/are the IR, (b) execute/codegen the IR, and then (c) retarget the codegen to your desired architecture. If JVM-GNAT were up-to-date, reliable, and complete in its implementation of Ada 2012, you’d be able to simply define your DIANA-for-Ada-2022, compile it (generating a set of classes), import that into (e.g.) Graal/Truffle, which is a JVM, write the Truffle executors, and bam! you would have an Ada translator that runs on any platform with a JVM and targets everything the JVM targets. The Truffle setup can also apparently generate executables for all the places Graal can run, a result of unifying the framework; it uses annotations on the classes and hooks into the JIT compiler, so you can “get .exe generation for free.”

Er, but why?
If you weren’t using SeedForth (a tokenized/minimal Forth), you could just use your production/emission method to insert comments into the generated code… see Tsoding’s Porth videos for details/examples. (I recommend bumping the speed up to 2x.)

Sure, but a lot of it is a subset. (There are, IIRC, some subtle semantic differences introduced in one of the revisions, 9X [maybe 2005], but it is very [amazingly so] backwards-compatible.) My point is that if you were to define a hypothetical DIANA-2022 (i.e. one able to represent all of the Ada 2022 standard), then by necessity you would have the ability to represent Ada 83. If you were to, say, use Ada.Containers.Multiway_Trees to represent the structure, you could then “unify” the syntax on the defined-equivalent Ada 95/2005/2012/2022 structures. Example:

  1. Return 4;
  2. Return Result : Integer := 4; end return;
  3. Return Result : Constant Integer := 4; end return;
  4. Return Result : Constant Integer := 4 do null; end return;
    (Technically, the do part would be a child-node here.)
  5. (4); – of expression-function
    Function Example return Integer is (4);
  6. Etc for the aliased permutations of the extended-return.

All of these nodes, being defined equivalent, should be the same node, though with the differences realized in “attributes” (in my implementation, fields for “name : String”, “aliased : Boolean”, “constant : Boolean”); thus there is a single processing path for all of the various “forms”, greatly simplifying how to handle/process them. But the real advantage is that you can use the Ada 2012 Pre, Post, Type_Invariant, and Dynamic_Predicate aspects to enforce (and possibly SPARK-prove) correctness. IIUC, these can handle all the static semantics, and a fairly big chunk [most?] of the dynamic semantics, with the rest tucked into a Validate method, which can be part of the base class and be automatically called on/from the base class’s Execute method. Given the JVM-GNAT plus Graal/Truffle above, this doesn’t sound like too bad a prospect (except that the latest JVM-GNAT is an old GNAT, and IIRC was still in progress on implementing Ada 2012 before it stopped “being a thing”).

TL;DR — I believe Ada should have an update to DIANA, [re]defining it for Ada-2022.

(I actually have Snodgrass’s Interface Design Language book, which has Lamb/Nestor IDL definition; since DIANA is an instance of the Lamb/Nestor IDL, I’ve toyed with the idea of first updating that language [to be more in-line with Ada/SPARK, syntactically… I really don’t like if/fi].)

This could be due to the nature of DIANA being an instance of IDL: you would have the implementation/definition of the IDL itself, of the parser/executor for that (IDL) [the LALR], and then (possibly reusing that) the DIANA itself. (IOW, a generalized LALR, which took a set of instructions to produce IDL, then which took those as instructions to produce DIANA, finally giving you the instructions to process that… a small language-bootstrap within the compiler itself? [I haven’t looked, but it might be related to the Ada frontend I found online years ago which produced DIANA.])

??
The Ada standard is fixed for Ada83/95/2005/2012 — it’s not, in any way, a moving target.

This is one design decision that I think was a mistake: access should not be at all equivalent to a [C] pointer. If we had generalized the notion to “thing that provides a reference”, the new Implicit_Dereference wouldn’t be restricted to discriminated records with an access discriminant, but could be set on all types. (Assuming that it would be an interface/attribute of the type.) This is something I wish to address via a metalanguage definition, allowing the reification of the conceptual type-system hierarchy, indexing, [de]referencing, perhaps parameter passing, etc., all at the static/compile-time level.

Probably the translation of the French expression “machine à gaz,” which indicates something overly complex.

2 Likes

Wasn’t it precisely to improve interfacing capabilities with C?

Excuse me, that’s what Lionel said: a very complicated and heavy system or arrangement (search for “usine à gaz” and its pictures); I often mix French and English.

The gnat designers chose something else, perhaps for good reasons. Ada/Ed was not DIANA-centered either.
The DIANA IR is a clever idea but not so easy to use (see https://ada83.org/wiki/images/d/d9/Exp_ada_codegen-BG-Zorn-UC-Berkeley-1984.pdf). The difficulty comes from the IDL hierarchical class and terminal definitions with attribute inheritance. That’s why I produced the complete terminal node list with all inherited attributes in doc/DIANA_NODES.pdf · main · Vincent Morin / Ada-83-compiler-tools · GitLab. I work with it and with text listings of the DIANA graphs produced with options U, A, P of the DIANA front end. Those are unavoidable tools for code_gen development, and sometimes cumbersome to use, even if I am beginning to get acquainted with them.

The Peregrine/TLALOC DIANA implementation is compact and uses a 32-bit virtual-page-plus-offset segmented pointer scheme on DIRECT_IO storage blocks, which is efficient. In the IIPS Ada, M. Ciernak used conventional Pascal pointers, which give a memory-dispersed data structure and force a transformation for storage.

Whether an extended DIANA IR could be defined and exploited with profit for 9X and later, I do not know, and I’m afraid it surpasses my competences.

Yes of course, but there are Appendix F and the implementation-defined pragmas. Abuse of them can do harm.

Because Ada 83 didn’t have OO and now Ada does. Compilers map naturally to OO.

*sigh* — Yes.

It’s OK, I just hadn’t heard the term before.

Interesting, though I get the idea that the big complaint isn’t really with DIANA so much as with their implementation: they [correctly] separated the front and back ends, isolating and abstracting the interfaces, choosing their “C IR” (used by Berkeley’s Pascal, C, and Fortran compilers), but not [yet] having implemented things like tasking and generics. This ties directly with your observation:

While reading the DIANA spec, I was struck by how the combination of generics and (possibly) an OOP hierarchy could/would impact the implementation. Remember, despite the common thought, OOP is not about code sharing: that is the proper domain of generics; and also, DIANA is defined to be an Abstract Data Type itself. OOP [as realized in common implementations] is about providing some entity with a common interface, akin to how Ada is superior to C by virtue of “arrays that know their own length”, and to Pascal in that they [can] be unconstrained: meaning that, inherently, String(1..5) and String(1..11) are not different types. (OOP as originally envisioned/developed is about messages, as shown in Simula, LISP, and [IIUC] Erlang.)

Wait… are you using text-processing on DIANA, in substantial amounts?
If you are, you’re missing the point of it being an Abstract Data Type.
(Sorry, I have to ask; there’s a rather large set of programmers that fall into the Trap of Text, wherein they [e.g.] see something that’s being represented in text and immediately think along the lines of “let me use regex” and “just use string Split”.)

Yes, the DoD’s evaluation of DIANA, and also the Peregrine compiler, showed as much.

This.
The thing that I think kneecaps DIANA is “source-reproducibility” — good for the human-users, and for the “surface-level” parts of programming… but it forces a bit of that Trap of Text mentality. How, for example, should this be handled:

if C in 0 .. 9 then                                  -- Because ASCII is structured
  Return Character'Val(C + Character'Pos('0'));      -- in such a way that the values
else                                                 -- of the digits are sequential,
  Return Character'Val(C - 10 + Character'Pos('A')); -- as are the alphabetical items,
end if;                                              -- we can generate hex via offset.

What we have here is a “split-screen” view of two “texts”: the code and the explanation.
This is inherently different from a line of text with a comment explaining what it is doing. (Block vs. line.) And because it’s unstructured text, you actually can’t decide, machine-wise, what’s going on; it gets more complicated when you consider the prototyping/debugging practice of commenting out a chunk of code.

This is why I try to encourage structure-first, rather than the Trap of Text: even if your editor is showing you “text”, behind the scenes it can be managing the underlying structure. A word processor, for example, does this [for documents] so ubiquitously and unobtrusively that most people don’t even realize what’s happening. (You could, therefore, solve the above decidability issue by not being a text editor [i.e. a glorified Notepad], but rather, like your word processor, keeping the different sorts of comments sorted/managed.) Structurally speaking, this would entail essentially three or four types of comment: (a) the sort for things like copyright, package description, etc.; (b) the “Inline Box”/section divider; (c) the commenting-on-a-section [the split view above]; and (d) the “line comment”, which should actually be about a particular [sub-]portion.

I’m sorry, why should you be concerned with implementation-defined pragmas? Pragma Restrictions is a thing: use No_Implementation_X, where X is the language feature you wish to restrict (e.g. pragma Restrictions (No_Implementation_Pragmas)).