fix links from doc to openapi components #72
On Tue, Mar 31, 2026 at 02:37:22PM -0700, Patrick Dowler wrote:
still not sure about delivering the OpenAPI parts this way... TBD.
Well, *I* am rather sure that, given that we're sold to github
anyway, openapi artifact management can and should exploit the
possibilities this opens as much as we can. These things will rather
certainly reference each other liberally, and fiddling them together
from multiple standards will be a major pain. It already is for the
XSDs we have.
If I had to design this from scratch, I would even say that the
openapi artefacts wouldn't be kept in standards repos at all, because
then keeping them in sync with the central repo is going to be
painful. Our XSD repo, anyway, is a major chore to maintain, and
there's trouble again and again.
So, let's try and improve on it. Imagine we had a vo-openapi repo.
For each contributing standard, there would be a branch, and that's
what the standard would refer to; if someone has a git trick with
partial checkouts into the document repos, I wouldn't mind, but I'd
say that's secondary.
At REC (or PR?) time, these branches would be merged into
vo-openapi's main; as a "normal" developer, you would check out that,
as an early adopter, you would check out the pertinent branch for
your standard.
The main hitch with this would be that the branches would need to be
regularly rebased onto main as other standards evolve (or, perhaps
better, have merges from main whenever that changes? That would
enable a git-pull workflow for the early adopters). I don't think
it's realistic to ask the editors of the standards controlling the
branches to do that, and that, it would seem to me, means we'd need
to appoint an openapi steward who would do the merges into main and
then the rebases of the branches.
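The steward workflow sketched above can be played out in a throwaway local repository. This is only a sketch of the proposal; the repo name (`vo-openapi`), branch names (`dap`), and file names are all hypothetical:

```shell
# Minimal sketch of the proposed vo-openapi steward workflow,
# demonstrated in a throwaway local repository under /tmp.
# Repo name, branch names, and file names are hypothetical.
set -e
rm -rf /tmp/vo-openapi-demo
git init -q -b main /tmp/vo-openapi-demo
cd /tmp/vo-openapi-demo
git config user.email "steward@example.org"
git config user.name "OpenAPI Steward"

# main carries the merged components of all REC-level standards
echo "openapi: 3.0.3" > dali.yaml
git add dali.yaml
git commit -qm "DALI components (REC)"

# each in-progress standard lives on its own branch
git switch -qc dap
echo "openapi: 3.0.3" > dap.yaml
git add dap.yaml
git commit -qm "draft DAP components"

# at REC time, the steward merges the branch into main ...
git switch -q main
git merge -q --no-edit dap

# ... and brings the other branches up to date with main
# (a merge here; the rebase variant discussed above would also work)
git switch -q dap
git merge -q --no-edit main
```

The last step is the part that needs a dedicated steward: it has to be repeated on every in-progress branch whenever main changes.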
Hm... certainly not impossible, but still somewhat daring. To me, at
least, it feels less haphazard than anything else I can think of at
this point.
I am certain that the openapi files that are the source of truth must be in the repository with the standard document (under ivoa-std). How we might deploy those to a well-known location is the question, and I take it from your comments concerning XSDs, for example, that we don't have something solid in place. I'm kind of skeptical about branches in a single "openapi" repo because this first attempt to use openapi spans three different standards. I will think about it some more, and also about other alternatives.
On Wed, Apr 08, 2026 at 02:44:31PM -0700, Patrick Dowler wrote:
pdowler left a comment (ivoa-std/DALI#72)
I am certain that the openapi files that are the source of truth
must be in the repository with the standard document (under
ivoa-std).
The trouble with this plan is exactly what you mention:
I'm kind of skeptical about branches in a single "openapi" repo
because this first attempt to use openapi spans 3 different
standards. I will think about it some more and also other
alternatives.
I think branches in a single repo have a chance of working without
everyone losing orientation. I grant you it's marginal, but there is
a chance.
On the other hand, collecting together the files necessary for a full
openapi spec from three, and eventually even more, independent repos
(which, mind you, will have branches, too) is, I think, a recipe for
endless headache.
*If* we go with in-standards-repo openapi files, we'd need some sort
of professional amalgamation: a person really dedicated to building
openapi releases and pre-releases (if you will).
That, I think, has the highest chance of not ending in chaos, but who
would that person be?
A couple of suggestions based on our experience developing the OpenAPI schema for Execution Broker.

The design of the Execution Broker schema is split into separate components that are combined together to build the final result. At the moment all of our components are in the same repository, but at some point we would like to be able to import components from other repositories, and to make our own components available for other projects to use.

Some of the OpenAPI tools handle this split schema well, others don't. To mitigate this, our development process starts with a tool that follows the references and combines the components into a single file. I would suggest that this single combined file is the thing that should be published alongside the standard as the source of truth for a specific version.

The development version of the schema, along with the toolchain that builds it, would be in a separate git repository, maintained and developed as a software project. One output of the toolchain would be a versioned copy of the combined schema, which is then exported as a static file into the ivoa-std repository for the standard.

The GitHub project for the Execution Broker OpenAPI schema uses GitHub workflows to generate and build code for Java and Python clients, and the server-side stubs for a Java-Spring service. The Java binaries are packaged and published in the GitHub Maven repository.
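As a concrete illustration of that split-and-combine approach (the file names, paths, and the MAXREC parameter below are invented for the example, not the actual Execution Broker layout):

```yaml
# dali-components.yaml -- a file holding shared, re-usable components
components:
  parameters:
    MAXREC:
      name: MAXREC
      in: query
      description: Maximum number of records to return
      schema:
        type: integer
        minimum: 0
---
# dap-service.yaml -- a top-level spec pulling in the shared component
openapi: 3.0.3
info:
  title: Example DAP service
  version: 1.0.0
paths:
  /sync:
    get:
      parameters:
        # cross-file $ref into the shared component file
        - $ref: "dali-components.yaml#/components/parameters/MAXREC"
      responses:
        "200":
          description: query result
```

A bundler such as `redocly bundle` or `swagger-cli bundle` can then resolve the cross-file references and emit the single combined file proposed here as the publishable artifact.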
Based on our experience I would suggest we keep things separate. We are still learning and evolving the processes for working with OpenAPI schema and the products that can be generated from it. Trying to coordinate and maintain a global set of schema across all of the projects in a single repository would be very complicated. Even more so if we factor in the workflows for generating code and publishing binaries.
Shorter version: trying to manage all of the schemas in a single place would be difficult. The alternative would be to develop each schema as a separate git project, and use a workflow to bring the components together to create the combined schema file that is published as the source of truth for a specific version of a standard.
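A minimal sketch of what such a workflow could look like, assuming a GitHub Actions setup; the repository layout, the choice of bundling tool, and the file paths are all hypothetical:

```yaml
# .github/workflows/bundle.yaml -- hypothetical sketch of a workflow
# that brings the components together on each release tag; layout,
# tool choice, and paths are assumptions, not an existing setup.
name: bundle-openapi
on:
  push:
    tags: ["v*"]
jobs:
  bundle:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Resolve $refs into one combined file
        run: npx @redocly/cli bundle src/openapi.yaml -o dist/openapi-combined.yaml
      - name: Publish the combined file as a build artifact
        uses: actions/upload-artifact@v4
        with:
          name: openapi-combined
          path: dist/openapi-combined.yaml
```

The exported `openapi-combined.yaml` would then be the versioned static file copied into the ivoa-std repository for the standard.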
On Thu, Apr 09, 2026 at 03:11:26AM -0700, Zarquan wrote:
I would suggest that this single combined file is the thing that
should be published alongside the standard as the source of truth
for a specific version.
As usual for when standards depend upon each other, you have
interesting interactions in this model. Let me draw up an example
here:
* DALI 1.2 defines MAXREC in 2026
* DAP 1.0 re-uses this in 2027 and bakes its complete openapi spec
based on MAXREC-1.2.
* DALI 1.3 makes a (backwards compatible, of course) modification of
MAXREC and updates its openapi spec in 2028 (for the sake of this
argument, let's say it's the description that changes)
* Datalink 1.3 re-uses MAXREC in 2029 and bakes in MAXREC-1.3 into
its openapi spec.
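In concrete terms, the "baking in" in this scenario would leave the two downstream specs pointing at different frozen copies of the same component (the URLs and file names are invented for illustration):

```yaml
# Hypothetical sketch of the scenario above; URLs, file names, and
# paths are invented for illustration only.
# dap-1.0.yaml (baked in 2027) pins the DALI 1.2 component:
parameters:
  - $ref: "https://ivoa.example/openapi/dali-1.2.yaml#/components/parameters/MAXREC"
---
# datalink-1.3.yaml (baked in 2029) pins the DALI 1.3 component:
parameters:
  - $ref: "https://ivoa.example/openapi/dali-1.3.yaml#/components/parameters/MAXREC"
```

A client implementing both standards would thus carry two copies of MAXREC that differ only in, say, their description text.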
In your scheme, DAP 1.0 keeps using MAXREC-1.2. There is something
to be said for that; in particular that non-breaking changes
sometimes aren't all that non-breaking.
On the other hand, *assuming* people do it right, such updates and
fixes should really propagate up to the re-using standards. In the
scenario above, people who implement both DAP 1.0 and Datalink 1.3
would have two slightly different MAXRECs in their code bases.
That simply doesn't feel right to me, and incidentally, it's not what
we do with our Registry schemas. They are referenced by their
namespace URIs, and these stay constant within one major version.
That has served us well, I would argue: that you can use DOI
references in VODataService documents *now* is thanks to that
mechanism. I *think* I would like to keep this property for our
openapi system unless we find a really strong argument why that
wouldn't work in the openapi case.
And that's the main reason why I'm so skeptical about per-repo
openapi files: nobody would want to update these when their
dependencies change, and then we are in the unfortunate situation
that people will have to deal with both MAXREC 1.2 and MAXREC-1.3 in
one software project.
But again I can probably be rather easily convinced that that's not
an issue for what we plan to do with openapi.
All good points. This is beginning to look like a typical dependency management problem for software libraries.