Alire publish with a different git forge? (Codeberg)

Sometimes you have to work in systems as they are in the real world. You don’t always have the luxury of making everything fit your ideal. Part of software engineering is learning to work with the best of what you are given. If you are running your own company, sure, do what you like. But when you work for a much larger organization that has layers upon layers of management, you unfortunately don’t get that luxury.

Agreed, but sometimes it is necessary given your current constraints. When it is necessary, I definitely find the branching model much more tenable. It catches a lot more mistakes.


You didn’t read/understand the “Workspaces” paper: there are no branches in it at all.

You are misunderstanding the usage of the word ‘merge’ — I am using it in the sense of how compilation is a set of sources “merged” together. The Workspaces paper’s nodes are not “branches” but “working sets”: the eponymous workspaces. Consider the dependency graph for the following:

With pkg_Alpha.pkg_Beta, pkg_Alpha.pkg_Gamma.pkg_Delta, pkg_Epsilon;
Function pkg_Xi.fn_Omega return pkg_Xi.Some_Type;

The graph could be represented as the dependency-set/-tree:

Module_B
\       \
 \       pkg_Alpha
  Module_C
    \  \  \
     \  \  pkg_Alpha.pkg_Beta
      \  pkg_Alpha.pkg_Gamma
       pkg_Alpha.pkg_Gamma.pkg_Delta

Module_A
 \      \
  \      pkg_Epsilon
   Module_B

Module_D
 \  \  \
  \  \  Module_A
   \  pkg_Xi
    pkg_Xi.fn_Omega

In the above, Module_B is consistent only when pkg_Alpha+Module_C (the children of pkg_Alpha) are consistent; thus Module_B represents the set of pkg_Alpha and its children being consistent/compilable.

Module_B can then be merged with pkg_Epsilon to yield Module_A; thus “Module_A” represents the explicit (withed) dependencies of pkg_Xi.fn_Omega, and can in turn be merged with the implicit dependency (the parent pkg_Xi) and the unit itself to yield Module_D, the project’s dependencies.

Now consider “merge” and “compile together” as roughly synonymous, and you get the compiler’s method for handling a compilation unit. Consider the nodes as sets of program sources (which may or may not be tied to child packages, but are probably tied to subsystems), and those are the “workspaces”. Consider the “merging upward” as compiling/consistency-checking/proving/etc, or as committing into the repository —that these levels in the tree are hierarchical database entries, and the merge upward is committing into the next higher level— and now you can see that the root node’s history IS the history of the states wherein the project is compilable.

Which is the advantage of the system in the Workspaces paper: there are no branches, only working-sets, and they are only “merged in” when things “at that ‘level’” are consistent. You can never have an instance where you have a “non-compilable” root-node, because by design the root node is only updated when “everything below” is consistent: there’s no more “Dave broke the build”.
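
To make that shape concrete, here is a minimal Ada-flavoured sketch of the idea as I read it (the names Workspaces, Is_Consistent and Commit_To_Parent are mine, not the paper’s):

package Workspaces is
   type Workspace is private;

   --  True when the sources held by this node and by all of its child
   --  nodes compile/check/prove together.
   function Is_Consistent (W : Workspace) return Boolean;

   --  "Merging upward": commit this node's state into the next level up.
   --  By construction it is only legal on a consistent node, so the root
   --  node's history can only ever contain consistent states.
   procedure Commit_To_Parent (W : in out Workspace)
     with Pre => Is_Consistent (W);

private
   type Workspace is null record;  --  representation elided in this sketch
end Workspaces;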

I disagree: as a rule, there should be no “files” or “source-texts” — rather, the sources should be DB-amenable structures: let the system handle the interface with your local computer, whether that’s FS files, or IDE-elements using the same DB-amenable structures.

(Yes, there are other resources, like music, or fonts, or 3D-textures, or images that may be needed; these are, however, arguably beyond the scope of CI/CD, and the “normal case” of source is what we’re discussing.)

But you do not need to change anything. You already have modules of some defined interface and dependencies expressed in with-clauses. You just need to be able to tell the system: here is a module consisting of the following packages, please version it.

Merge means combining two different versions of a file into one, usually by removing correct code and introducing bugs… :grinning:

They are not. What you call merge is getting a version-consistent view of the project.

You can imagine module versions as a pearl chain hanging down. Each pearl is a release of the module. Project releases are horizontal lines between pearls expressing dependencies between modules. Whether you compile or build things in a view is irrelevant. It could be documentation or a photo album.

ClearCase called it the UCM-model (Unified Change Management).

The toolchain works with files: source text, object files, executables etc. Thus the interface must provide what the given toolchain expects. The advantage of a virtual file system is that the consistent view is always enforced. You cannot mix incompatible versions, which is the most common error when working with Git etc. And of course it is massively more efficient than pulling files. Consider the GNAT compilation process. In order to decide whether to recompile a file, GNAT compares the timestamps of the Ada source, the object file and the ali-file. The virtual file system need not pull any of these files at all, only query their timestamps.
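
To illustrate the kind of interface I mean (a sketch only; the names are illustrative, not any existing API):

with Ada.Calendar;

package Virtual_FS is
   type File_Ref is private;

   --  Resolve a path within the current (version-consistent) view.
   function Lookup (Path : String) return File_Ref;

   --  Answered directly from the versioning database; no file is pulled.
   function Time_Stamp (F : File_Ref) return Ada.Calendar.Time;

   --  Contents are materialized only when the compiler actually reads them.
   function Contents (F : File_Ref) return String;

private
   type File_Ref is new Natural;  --  opaque handle; placeholder only
end Virtual_FS;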

BTW, practically any modern filesystem is backed by a journaling database. You just do not see it.

We do version it. It doesn’t help if two people need to modify it at the same time for different reasons due to how management schedules it.

Only if you disregard that we’re talking about a hierarchy of workspaces (sets of source) within a database — that is, what would you call the operation that corresponds to SQL’s UPDATE?

No, what I’m calling “merge” is the operation of synchronizing/updating a collection of sources; yes, this does produce a version-consistent view… but this is about updating the hierarchy of databases.

But, this is going backwards, bolting on the “database” and crippling it.

People jump off bridges for different reasons. Yet bridges have guard rails…

I do not care what is used to keep the projects. It could well be something as unsuitable as an SQL database. That is just a back end.

No, it is not. One needs to be able to produce a view without updating anything. Consider a project delivered to a customer. It is under the system and contains the release you deployed. When, 10 years later, the customer reports an issue, you should be able to produce the view of the release delivery as well as the view of the corresponding release build in order to debug it.

And by the way, “merging” not only has a different meaning, it is an unfortunate word choice anyway, because a view does not just pull files into the same directory, it maps them onto a directory structure which may differ from the original structure of the projects. E.g. you might want the binaries of all projects in the view under the same directory, e.g. ./linux/x86_64/debug/bin, while source files go under individual ./<[sub]project>/linux/src etc. Mapping is a key feature of producing a usable view. The idea is to never compile/install anything. A release view is runnable as it is.

It is how it is.

Just to clarify some points being discussed here.

alr doesn’t need GitHub or even git. It can all work from plain local folders if you want to spend the effort. I’ve heard rumors that some people are using it in air-gapped systems.

We default to the option of using git to avoid dealing with creating and uploading tarballs, and because it’s what 99% (made up) of the people use, but it is not mandatory; one can publish using archives if preferred.
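
For example, publishing from a plain source archive that you have uploaded anywhere reachable looks roughly like this (the URL is made up); alr fetches the archive, computes its hash and prepares the index manifest for submission:

alr publish https://my.server.example/releases/mycrate-1.0.0.tgz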

We use GitHub to centralize the community index as a way to simplify submission and review of new releases, but it could be done by email if we were so inclined, distributing index updates via any other means; we could also move away from GitHub relatively painlessly (although we would lose the automated checks that use GH Actions, which are not integral to alr operation anyway).

Even if the community index is currently hosted on GitHub, your source code can be anywhere (if packaged as an archive) or in a selection of well-known hosting sites (if published as a git/hg/svn repository). This list can also be overridden locally.

In short, there are several layers to keep clearly separated:

  • alr doesn’t need you to use git or GitHub, although there’s code to publish to the latter more easily. Your code can be in a plain folder without VCS and alr should work the same.
  • The community index is currently hosted on GitHub, but changing indexes is a single command (see the example after this list). That’s the only use of git that’s mandatory with alr, if you want to use the default community index.
  • An index and the code referenced by said index need not be hosted in the same place; you can make your code available anywhere.
  • Alire the project relies on GitHub for convenience. Self-hosting everything would be possible, if it became advisable or necessary.
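
As an illustration of the second point above, swapping indexes is along these lines (the name and path are placeholders; a plain local folder or a git URL both work as index sources):

alr index --list
alr index --add /path/to/my_index --name my_index
alr index --del community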

Related: support for mirrors is in the pipeline, to avoid a single point of failure in the future (Fallback origins · Issue #1603 · alire-project/alire · GitHub).

I asked before if it were possible to upload projects without messing with any alien repositories. E.g. on sourceforge you would simply push updates using rsync or sftp. The answer was no.

(Leaving aside the concept of “alien”; I shudder every time I have to go on sourceforge.)

At the time it was impossible to just push a tarball somewhere and have your new version magically appear in the community index. Currently, it is theoretically possible to fully automate submission of new versions using only alr publish (and optionally whatever you want to host the tarballs if not using GitHub) and a bit of scripting. It is also one of my goals for the near future to have a sample workflow that demonstrates this (and I want to do precisely that with the Simple Components).
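
Sketching what that could look like (host, paths and the amount of scripting are placeholders, not a recommendation):

# package the sources and push the tarball to wherever you host releases
tar czf mycrate-1.2.0.tgz mycrate-1.2.0/
rsync mycrate-1.2.0.tgz user@my.server.example:releases/
# have alr fetch the archive and prepare the index manifest for submission
alr publish https://my.server.example/releases/mycrate-1.2.0.tgz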


That would be great!

Especially if you also show how to deal with the target system settings, like bit size and some key compiler capabilities, e.g. availability of atomic access and instructions (System.Atomic_Operations).

I think that many would also like to see the level of Ada standard support. This is a source of endless questions here. Alire could provide that info out of the box with the compiler installed.

And, finally, a tutorial on how to deal with OS-specific dependencies like unixODBC vs. ODBC32, Win32 synchronization primitives vs. pthreads vs. linux-futex.
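
For that last point, what one typically ends up hand-writing today is a scenario variable in the project file; a simplified sketch (names and switches are illustrative) of the unixODBC vs. ODBC32 case:

project ODBC_Demo is
   type OS_Kind is ("windows", "linux");
   OS : OS_Kind := external ("ODBC_DEMO_OS", "linux");

   for Source_Dirs use ("src");

   package Linker is
      case OS is
         when "windows" =>
            for Default_Switches ("Ada") use ("-lodbc32");  --  ODBC32
         when "linux" =>
            for Default_Switches ("Ada") use ("-lodbc");    --  unixODBC
      end case;
   end Linker;
end ODBC_Demo;

Having Alire drive such scenario variables (or pull in the right system package) per platform is exactly the kind of thing such a tutorial could cover.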