Use ansible


As discussed in the issue, we are interested in adding ansible roles/playbooks to install and configure Vespene. We don't necessarily have to replace everything with ansible; we can support both as a starting point.


So I forget what I said on the ticket, but here's my take...

A lot of users here are going to prefer some other config tool.

While I created ansible, I want this to appeal to users everywhere, and picking one config tool might reduce the userbase somewhat.

Right now a lot of people want the setup process to optionally work with Docker, and I think keeping three different things working might be too much. My direction on the Docker front was for the Docker builds to USE the setup scripts, so both Docker and non-Docker installs stay supported.

I do think the playbooks would be short, but I personally would find keeping up with ansible compatibility changes highly frustrating, and I also have mixed feelings about having to use it again.

If there were community content, it would be hard for it to stay in sync with the devel branch.

So, while I want Vespene to be able to LAUNCH any config tool, I can't see it playing favorites by using one to install itself, and that's why I went with the bash setup.

It's zero learning curve for anybody on any platform, and it helps users new to Django admin understand what is getting set up in more detail than a config tool might otherwise abstract away.


Just trying to understand the domain you were going for here. Are you saying that everything Vespene does would belong mostly to the CI part of CI/CD, and that everything after creating a release would live in the plugin space?

Do you envision a way to manage deployment targets in Vespene, as a first-class citizen or maybe as a different kind of variable, making the ability to launch a config tool unnecessary?


OP was discussing using ansible for the setup scripts, FWIW.

Vespene is build, CI/CD, and triggering any kind of automation or scripts you want. That could be ansible, terraform, puppet, bash, fabric, chef, whichever!

As far as building an inventory GUI goes, my experience with ansible is that everyone is largely using inventory plugins (ec2, etc.), and that's also my personal belief: if there is a cloud, it's authoritative.

If you have a datacenter, that's obviously different, but these days I like keeping that in source control over a GUI, just for the history. That would only really apply if you had metal to manage and didn't have that metal in some other inventory anyway, right? You could possibly do that with a common "inventory" repo and just check it out before the call to your config tool.

Let me know if that doesn't make sense.


It does make sense. I'm getting ahead of myself here and I need to have a proper look at the code and plugin system.

My intuition told me that I could instruct Vespene to pull a common "inventory" repo and use that to decide whether the scripts would run on the worker pool or on the deployment target.

But I guess what I'm describing is easily done by integrating with a simple orchestration tool, like fabric or the like.


Yeah, so if you add SSH keys in Vespene instead of GitHub service logins (or do both), those SSH keys are made available to the build environment.

Provided you are using sudo isolation (not Docker), any git checkouts can then use those same SSH keys to check out your inventory repo.

So I guess the build script would look roughly like:

git clone git:// inventory
ansible-playbook playbook.yml -i inventory/whatever.yml --extra-vars @vespene.json

That might be an easy way to solve it, and then the build script is just two lines.

You could of course define some variables to make that shorter, pass variables to snippets, or whatever, if you decide to make it more advanced.
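For illustration, a minimal playbook.yml that a build script like the one above could invoke might look like this. Everything in it is hypothetical, not from the thread; the variable would arrive via the --extra-vars @vespene.json flag shown in the build script.

```yaml
# playbook.yml -- minimal sketch; host pattern, task, and variable
# name are all illustrative.
- hosts: all
  tasks:
    - name: Show one of the variables passed in from vespene.json
      debug:
        var: app_version   # hypothetical variable defined in Vespene
```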


Have to admit, I was wondering about something similar to the OP regarding using Ansible to provision, but what you've said about keeping it agnostic makes a lot of sense. Whilst it's not enterprise best practice, it makes a lot of sense for an open-source tool.


What may be possible (speaking as a very long time ansible user, as well as a former supporter of salt, puppet, etc.) is to provide multiple methods for installation, i.e. make a vespene/setup/{shell,docker,ansible,salt}/ layout, and then, as a community of contributors, we can contribute to and help maintain them as 'choices' for those who already have various tools present in their environment. In this way, it would be possible to keep the shell scripts while also supporting options for other users.
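A tiny dispatcher along those lines could pick an installer by method. This is only a sketch of the proposal: the script name, the install.sh filenames, and the dispatch logic are hypothetical; only the {shell,docker,ansible,salt} directory names come from the comment above.

```shell
#!/bin/sh
# setup.sh -- hypothetical dispatcher for a vespene/setup/{shell,docker,ansible,salt}/
# layout. Prints which installer it would run; nothing here is real Vespene code.
run_setup() {
  method="${1:-shell}"                     # default to the existing shell scripts
  case "$method" in
    shell|docker|ansible|salt)
      echo "setup/$method/install.sh" ;;   # the installer we would invoke
    *)
      echo "unknown setup method: $method" >&2
      return 1 ;;
  esac
}

run_setup "${1:-shell}"
```

Keeping the shell path as the default preserves the current zero-learning-curve setup while letting each community-maintained method live in its own directory.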


I understand the interest, and it's possible, but I'm not going to do it :)

Basically I'd end up on the hook for maintaining them all, and then I wouldn't be able to work on the things I wanted to work on, and I'd end up with a lot of tickets floating around in my GitHub for config systems I don't like. They'd slowly become inconsistent, out of date, or incompatible with the various config management tools.

That being said, once OpsMop gets a little further along, there's a pretty good chance that's the one complete OpsMop deployment example I maintain, because people are going to want large installation examples, and that's going to be the one I'm most familiar with. It's already easy to convert all that now, but having a few more modules would make it cleaner to do.


If someone wants to tackle this, I reckon Galaxy roles would be the way to go. There isn't one yet, as far as I can tell.


Example repo is probably more than enough.

The main problem is that with those common content collections, you almost always end up forking them and making your own changes.

With Vespene that is going to be especially true because the workers will need their own setup configuration changes.

This is exactly the argument that derailed the various people who wanted to write container repos for this and didn't quite grok that no generic install is going to be possible beyond basic demo-stage stuff.