The current federate command takes a target and federates the instance one-way, for every blox, even those that aren't shipped with the instance. Pretty brainless, actually.
For your needs, I'll have to improve this part to allow the federate command to take a target and a blox (or --all).
I should be able to adapt the current behavior to a per-Blox behavior for 60 tokens. On top of that, you'll need an adaptation for the auto-deployment. Note that this feature would work for any client that wants a federated universe on only some bloxes.
```
./manage.py federate [--target TARGET] [--delete] [--blox BLOXNAME]

optional arguments:
  --target TARGET  Targeted server, format protocol://domain
  --delete         Remove targeted sources
  --blox BLOXNAME  Only federate BLOXNAME, BLOXNAME = type from config
```
For example:

```
./manage.py federate --blox jobBoard --target "http://server"
```

will add http://server/job-board/ and other related sources to the local server, and

```
./manage.py federate --blox jobBoard --delete
```

will remove the local job board from the source list.
From there, you'll be able to run things like:
```
# Create every source for http://server, then remove the job board one
./manage.py federate --target "http://server"
./manage.py federate --target "http://server" --blox jobBoard --delete

# Ensure that only the job board is activated for http://server
./manage.py federate --target "http://server" --delete
./manage.py federate --target "http://server" --blox jobBoard
```
From a configuration point of view, I believe the simplest way would be to declare which instances are federated on which bloxes:
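As a minimal sketch of that idea (the section and key names below are hypothetical, not an existing Hubl setting), the configuration could map each blox to the remote instances whose sources should be added locally:

```yaml
# Hypothetical federation section: for each blox,
# the list of remote instances to federate with.
federation:
  jobBoard:
    - "http://server-a"
    - "http://server-b"
  circles:
    - "http://server-a"
```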
@jbpasquier I'll be happy to remove the whateverworld part. It's far from being as convenient as it was supposed to be. But I'm reluctant to make this happen at each deployment.
What you describe could work, but it would add a lot of new commands to execute. Because of the cardinality when cross-federating servers, I tend to run the federation commands on demand rather than at each deployment.
If we do that, it means that at each Hubl deployment all 30 instances will execute the federation command for each target in each blox. I'd have to implement something like:
```python
for instance in group:
    for blox in instance.bloxes:
        for server in blox.federation:
            run(f'./manage.py federate --target "http://{server}" --delete')
            run(f'./manage.py federate --target "http://{server}" --blox {blox.type}')
```
So if I transpose it to what we know today as hublworld:
It gives me 32 instances x 19 bloxes x 31 targets x 2 commands = 37 696 executions at every release...
And each new instance would multiply this.
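To illustrate that growth (assuming, as above, that every instance targets every other one, i.e. targets = instances − 1, with a fixed 19 bloxes; the function name is mine):

```python
def executions(instances: int, bloxes: int = 19) -> int:
    """Commands run at a release if every instance federates every
    blox with every other instance (2 commands per pair)."""
    targets = instances - 1
    return instances * bloxes * targets * 2

print(executions(32))  # today's hublworld: 37696
print(executions(33))  # one more instance: 40128
```

The cost grows roughly quadratically with the number of instances, which is why running everything at each release doesn't scale.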
I think we need a way to know the current federation state of each blox, and then a way to apply only the required changes. I don't have any proposal at hand right now.
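One possible direction, purely as a sketch (the data structures and function are hypothetical, not existing code): diff the desired federation state against the current one and emit commands only for the difference:

```python
from typing import Dict, List, Set

def plan_changes(current: Dict[str, Set[str]],
                 desired: Dict[str, Set[str]]) -> List[str]:
    """Return the minimal federate commands to move `current`
    (blox -> set of federated targets) to `desired`."""
    commands = []
    for blox in sorted(current.keys() | desired.keys()):
        have = current.get(blox, set())
        want = desired.get(blox, set())
        for target in sorted(have - want):  # no longer wanted
            commands.append(
                f'./manage.py federate --target "{target}" --blox {blox} --delete')
        for target in sorted(want - have):  # newly wanted
            commands.append(
                f'./manage.py federate --target "{target}" --blox {blox}')
    return commands

# Example: one removal and one addition instead of a full re-run.
cmds = plan_changes(
    current={"jobBoard": {"http://a"}},
    desired={"jobBoard": {"http://b"}},
)
```

With something like this, a release would only execute commands for bloxes whose federation actually changed, instead of the full cross-product.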
Based on it, I'm now convinced that we need to define another approach.
Let's assume that our front-end is already on Core 0.17 and thus benefits from setLocalData, which allows creating containers or resources from the front-end.
From there, I'll be able to manage the federation part live from within Orbit.
The only thing I would need is a list of APIs describing what to federate with whom.
No more endpoints node for each component, only a server list if needed. The front-end would be responsible for resolving the proper route.
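As a sketch of what that reduced configuration could look like (the shape and field names are hypothetical, not a defined Core format):

```json
{
  "federation": {
    "jobBoard": ["http://server-a", "http://server-b"]
  }
}
```

The front-end would then derive the concrete routes itself, e.g. http://server-a/job-board/ for jobBoard, instead of each component carrying its own endpoints node.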
We're also moving xmpp and skills to parameters for more consistency.
@Cyrilthiriet Specification is done, pretty nice improvement.
Allows any client to federate any Blox they want with whomever they want, instead of the whole application.
Based on this specification #268 (comment 62122)
Will require the core 0.17 (ETA: Soon)
May be of interest to other clients, as some of them may also want partial, Blox-by-Blox federation.
@Cyrilthiriet Just talked to the Happy Dev Paris General Director, and he said he would cover the 23 T for the deployment part by putting some Happy Dev workforce on it. Then, looking for a DevOps in our ranks, I found somebody suitable for the job.
Long story short, I'll do it for free in the name of Happy Dev. Still 96 tokens to go.
@all I may have found a client who could fund this issue. But she would need her platform by the 21st/22nd of June.
How much time do we need to release this functionality?
Hi @Berengere.
Could you please avoid using the @all keyword? It puts everybody (20 people) on the issue and notifies them of all the following exchanges. Tagging people individually would be more efficient.
@all The client @Berengere told you about confirmed they want to fund this issue, 96 tokens. How much time do you need to release this functionality? Would June 22nd be possible?
That's rather short for me, but if I have the go today, I can manage to push it before the end of the week and so leave us a short time to test it properly.
@plup Would that work for you? The autodeploy adaptation part?
@jbpasquier We're on it. @Berengere is waiting to receive the estimate signed by the client. They gave a written "go", but we wanted the 96 tokens to be paid before launching the development.
The client is slowed down by his administrative procedures but since we are in a hurry, I am in favour of launching.
@alexbourlier Louis told us that if the "written go" was clear enough (and it is), it was OK. Although it's always better to cash in before launching the development, let's just say that this is a bit of an exception given the tight timing... @Berengere, do you agree?
Hello guys ! :) Odile is planning to demonstrate the platform on Wednesday, but we need to do the testing first. When do you think you will be ready for this feature @jbpasquier ?
It's already on pre-production. Since you don't want both profile directories from day one, it's less of an emergency; still, @plup should be able to provide a staging environment for you?