Doing something that supports both would be great.
I was thinking about that while driving today, and here's what I am pondering at the moment:
- In order to support pull mode, we need a way to watch for a signal to update, then fetch the new content, run it, and send results back
- In order to support push mode, we need a way to push content and run it, and asynchronously wait for results
What both of these modes have in common is the idea that OpsMop should support multiple callbacks at once: one for CLI output, another for sending results over REST somewhere (anywhere, to anything), or whatever else the user wants for status.
We then need to do pull and push modes.
For push mode:
Invoke ssh-agent and add your keys beforehand - outside of the program.
Assume a script in opsmop's /bin called opsmop-push that would:
- rsync content everywhere
- optionally run a bootstrapping script to install python3 and opsmop
- run one SSH invocation of opsmop, specifying a callback
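The three steps above could be sketched roughly like this. Everything here is an assumption for illustration - the paths, the bootstrap script name, and the `--callback` option are not an existing CLI:

```python
# Hypothetical sketch of what /bin/opsmop-push might do for one host.
# All names (paths, flags, the --callback option) are assumptions.
import subprocess

def build_push_commands(host, content_dir, callback_url, bootstrap=False):
    """Return the commands opsmop-push would run against one host."""
    commands = []
    # 1. rsync content everywhere
    commands.append(["rsync", "-az", content_dir + "/", f"{host}:/var/lib/opsmop/"])
    # 2. optionally run a bootstrapping script to install python3 and opsmop
    if bootstrap:
        commands.append(["ssh", host, "sh", "/var/lib/opsmop/bootstrap.sh"])
    # 3. one SSH invocation of opsmop, specifying a callback for results
    commands.append(["ssh", host, "opsmop", "apply", "--callback", callback_url])
    return commands

def push(host, content_dir, callback_url, bootstrap=False):
    # ssh-agent and key setup are handled outside the program, per above
    for cmd in build_push_commands(host, content_dir, callback_url, bootstrap):
        subprocess.run(cmd, check=True)
```

Since ssh-agent is already loaded, each step is just a non-interactive subprocess, which keeps the push tool trivially parallelizable across hosts.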
For pull mode:
Assume two scripts in /bin, opsmop-bundle and opsmopd:
- opsmop-bundle more or less pushes a zipfile or a set of files (in whatever form) into some datastore
- it also supports plugins to watch for a signal and a separate set of plugins to load content from a transport
- an example signal/transport combo could be SQS + S3, or Redis, or whatever
- opsmopd downloads some content from the server and runs it
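To make the signal/transport split concrete, one iteration of opsmopd might look like the sketch below. The plugin class names and the daemon function are hypothetical - an SQS + S3 combo would just be one signal plugin plus one transport plugin:

```python
# Hypothetical plugin interfaces for opsmopd; names are illustrative only.

class SignalPlugin:
    """Watches for a signal that new content is ready (e.g. SQS, Redis)."""
    def wait_for_update(self):
        raise NotImplementedError

class TransportPlugin:
    """Loads content from a datastore (e.g. S3) into a local directory."""
    def fetch(self, destination):
        raise NotImplementedError

def opsmopd_once(signal, transport, runner, destination="/var/lib/opsmop"):
    """One iteration of the daemon loop: wait, fetch, run, report results."""
    version = signal.wait_for_update()   # blocks until something to do
    transport.fetch(destination)         # pull the content down
    return runner(destination, version)  # run it and hand back results
```

A real opsmopd would just call `opsmopd_once` in a loop, with the runner reporting results through the same callback machinery the push mode uses.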
Both of these imply some concept of inventory to determine what servers to talk to, but I think this can be a lot simpler than what I've done previously: just a command line tool that returns JSON.
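The whole inventory contract could be this small. The group names and JSON shape below are made up for illustration; the point is that anything that prints this shape is a valid inventory:

```python
# Sketch of "inventory is just a command line tool that returns JSON".
# The group names and structure are illustrative, not a fixed schema.
import json

def inventory():
    # a real tool might query AWS, a CMDB, or a flat file instead
    return {
        "webservers": ["web1.example.com", "web2.example.com"],
        "databases": ["db1.example.com"],
    }

if __name__ == "__main__":
    print(json.dumps(inventory(), indent=2))
```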
I kind of like the idea of the servers self-identifying with their content. For instance, a server could consult a local inventory class (if on AWS) to determine what AWS tags it has, and then run the content for those tags. If a server wasn't in a cloud, it could instead look for tag names in /etc/opsmop/tags.d or something.
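That fallback could be sketched like this. The tags.d path and the shape of the cloud inventory object are assumptions, not an existing API:

```python
# Sketch of self-identifying tags: prefer a cloud inventory class, fall
# back to filenames in a tags.d directory. Paths/names are assumptions.
import os

def local_tags(tags_dir="/etc/opsmop/tags.d"):
    """Each file name in tags.d is treated as one tag on this host."""
    if not os.path.isdir(tags_dir):
        return []
    return sorted(os.listdir(tags_dir))

def host_tags(cloud_inventory=None, tags_dir="/etc/opsmop/tags.d"):
    """Prefer cloud-provided tags (e.g. AWS tags), else local tags.d entries."""
    if cloud_inventory is not None:
        return cloud_inventory.get_tags()
    return local_tags(tags_dir)
```

Touching a file in tags.d would then be all it takes to opt a bare-metal box into a role.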
Dealing with a plugin to access secrets from the language would also be pretty easy, and we can talk about that when we get the basics down.
If we were to make a program to receive events from BOTH the push and pull tools, it might be a very basic Flask app that itself takes plugins. One plugin could be a database, but the most basic version would just output events to the screen or a logfile. This tool would be something like /bin/opsmop-listen and would not require a database unless the receiver plugins required one.
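The plugin side of that listener could be as simple as the sketch below (the HTTP front end, Flask or otherwise, would just call `receive` for each POSTed event). All class names here are hypothetical:

```python
# Sketch of opsmop-listen's plugin fan-out; names are illustrative only.
import json
import logging

class ScreenPlugin:
    """The most basic receiver: print each event to the screen."""
    def handle(self, event):
        print(json.dumps(event))

class LogfilePlugin:
    """Append each event to a logfile; still no database needed."""
    def __init__(self, path):
        self.log = logging.getLogger("opsmop-listen")
        self.log.setLevel(logging.INFO)
        self.log.addHandler(logging.FileHandler(path))

    def handle(self, event):
        self.log.info(json.dumps(event))

class Listener:
    """Fans each incoming event out to every registered plugin. A database
    plugin would just be one more entry in the list."""
    def __init__(self, plugins):
        self.plugins = plugins

    def receive(self, event):
        for plugin in self.plugins:
            plugin.handle(event)
```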
I definitely want this to be exceptionally scalable (scale-out matters more than latency, but it should be 100% parallel for sure), so I don't want to just do git checkouts if I can help it - it would be too easy to thrash a git server.
Those who wanted to orchestrate multi-tier deployments in push mode could just execute opsmop-push on one role, then do the next layer once it came back.
So... that's just ONE possible thought for a minimal strategy. I would be open to other ideas.
Transports (how to move data - PushTransports and PullTransports), callbacks (how to report results), and Inventory modules (e.g. "what are my AWS tags?") would all be 100% pluggable and subclassable.
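The whole pluggable surface might be as small as a few abstract base classes. Everything below is an assumed shape, not an existing OpsMop API:

```python
# Hypothetical sketch of the pluggable base classes described above.
from abc import ABC, abstractmethod

class Callback(ABC):
    """Reports results: CLI output, REST POSTs, anything the user wants.
    Multiple callbacks can be active at once."""
    @abstractmethod
    def on_result(self, host, result): ...

class PushTransport(ABC):
    """Moves content out to hosts (e.g. rsync over SSH)."""
    @abstractmethod
    def push(self, host, content_dir): ...

class PullTransport(ABC):
    """Fetches content from a datastore (e.g. S3, Redis)."""
    @abstractmethod
    def fetch(self, destination): ...

class Inventory(ABC):
    """Answers questions like "what hosts exist?" or "what are my tags?"."""
    @abstractmethod
    def hosts(self): ...
```

Since the push and pull tools only talk to these interfaces, swapping SQS for Redis, or a CLI callback for a REST one, is just a different subclass.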
How does that sound?
The only weirdness is maybe having to run two processes during a push, but the CLI could also find a way to spin up that watcher inside a fork too.
I want to try to keep things more modular if I can, so the push and pull tools only halfway know they are part of opsmop and could also be adapted to other tools.