Where are projects stored and how can I use them when launching a new vespene server?



With Jenkins, I am used to going into /var/lib/jenkins/jobs, finding the config.xml files, and using these to recreate the same jobs if I delete my Jenkins machine in AWS and later make another one.

How can I do this with vespene?

Apologies if this was written somewhere obvious in the documentation and I missed it.



/tmp/vespene/BUILDNUMBER has the vespene.json, which shows all the variables, and vespene_launch.sh, which shows the launch script. I was wondering whether there is somewhere I could find all the project info (repository, schedule, etc.) for easy copy-pasting into my Vespene repo when setting up Vespene. Something like what your setup script 5_tutorial.sh does, which I unfortunately do not understand because I am not very familiar with Python.


The builds are going to be in the 'builds' table in the Vespene database within Postgres.
You can most likely use standard Postgres dump and restore commands to extract your build jobs so they can be moved between machines.
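For what it's worth, here is a rough sketch of the commands involved, wrapped in Python for illustration. The database name "vespene", the file names, and the helper functions are assumptions for the example, not anything Vespene itself provides:

```python
import shlex

# Hypothetical helpers that build the pg_dump / pg_restore command lines.
# Check your Django settings for the actual database name.
def dump_cmd(db="vespene", outfile="vespene.dump"):
    # -Fc writes the custom archive format, which pg_restore understands
    return ["pg_dump", "-Fc", "-f", outfile, db]

def restore_cmd(db="vespene", infile="vespene.dump"):
    return ["pg_restore", "-d", db, infile]

print(shlex.join(dump_cmd()))     # pg_dump -Fc -f vespene.dump vespene
print(shlex.join(restore_cmd()))  # pg_restore -d vespene vespene.dump
```

In practice you would run these with subprocess.run() (or just from the shell), and pg_restore needs the target database to exist first.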


Yep, exactly! Everything is in the database, so you can backup and restore it as you want.

Similarly, if you want to keep the majority of your configuration in git, you can do that using this feature:



Is it possible to export .vespene files?

Or is there a more intuitive way of importing/exporting build jobs than accessing the database?


Splitting up answers...


There's currently no link or anything that will give you the contents of a "vespene.json" file in the form you would put in your repository to exactly recreate that project.

Ultimately, I'm not 100% sure that would be a great experience, because you don't just need the .vespene file: you also want to check in the build script, and then reference the build path in that script. It seemed better to have syntax highlighting in the build script than to embed it in the .vespene file, which might be awkward and less compatible with the way people are currently doing things. So the export would really have to be a zipfile you'd crack open if we added it as a UI feature.

What probably makes the most sense is that once we have a REST API, we can write a little script that does this for a given project, which would make a pretty good demo. It could create both the .vespene file and check in the build script.


I'm not sure I understand the part of the question about importing. Importing .vespene files is a method of importing projects into the database, and scanning a GitHub organization is already there; support for scanning other types of indexes (such as GitLab, etc.) could be added as plugins.

Can you elaborate on that question a bit?

If you want to do something yourself, the easiest way to interact with the system right now, without the XMLRPC API (which is probably coming in January, if not early Feb.), would be to use the Django model classes, which are basically an API. The Django management commands are there for that purpose.

Depending on your import use case, that might point to a feature, but right now I could use more understanding of what that would mean beyond the features already described for imports from git.


So to use the Postgres dump feature I have to drop the entire database and then remake it from the exported backup file. It would be nice if there was a way to individually export projects, which you could then load into a different Vespene server to add that job to it.

Otherwise, doing insertions would require me to spend time writing SQL and understanding the layout of your tables, which does not seem very user-friendly to someone who has not used Postgres before.


The recommended solution right now is NOT to muck with the database, but instead to put .vespene files into your repos, and then have both Vespene instances use the import feature on those repos.

This is pretty straightforward and allows your projects to appear in multiple Vespene installs without having to worry about any of this.

If you have a more specific use case in mind let me know details...


In that case I wouldn't be using the GUI much at all, which seems a shame as I quite like it, and it is easier to make small changes in the GUI than to change my .vespene file and git push it.

Also, is it possible to make multiple projects at once using one .vespene file? I tried with two YAML blocks and it only took the code from the second block. This is the .vespene file I am using:


I think most people would disagree with you and want those artifacts in version control, based on all the feedback I got when this project launched without .vespene files. I was basically eaten alive :) But yeah I sort of agree with you - I don't want to learn some funky language to define builds, and that's why I tried to keep this language simple. You can 100% just use the GUI if you want.

You can make pgdumps of it for backup.

Partial backups may be tricky, but I'm not sure I understand your use case for them. I thought it was about quickly restoring just projects (which imports help with), not build history. But really, if you just want to clone a server, full DB backups are going to be fine for you.

On other topics:


I think what you've suggested implies a couple of features; one is a UI feature for "copy project".


The YAML document you shared is unfortunately not doing what you think: entries with the same key ("name") clobber each other, and the last one wins. This is because YAML is representing a plain hash here, not a list. It's an easy mistake, and I'd consider it a flaw in the YAML spec, but it's there in JSON too if I recall correctly.
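You can see the same clobbering with duplicate keys in JSON, using nothing but Python's stdlib (the project names here are made up for the example):

```python
import json

# Two top-level entries with the same key: the parser keeps only the last one.
text = '{"name": "project-a", "name": "project-b"}'
data = json.loads(text)
print(data)  # {'name': 'project-b'}
```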


YAML does have a thing called "documents", where different structures are separated by "---", and I think that's a really good feature to add to the importer, where each document gets created. This would allow exactly what you want, and I REALLY like this idea.
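A quick sketch of the multi-document idea, assuming the importer receives the raw text of a .vespene file (the field names here are invented for illustration; PyYAML's yaml.safe_load_all does this parsing for real, but a plain string split on the separator shows the shape):

```python
# Hypothetical two-project .vespene file, with YAML documents separated by "---"
text = """\
name: project-a
repo_url: https://example.org/a.git
---
name: project-b
repo_url: https://example.org/b.git
"""

# Each chunk would be parsed and created as its own project
docs = [chunk for chunk in text.split("\n---\n") if chunk.strip()]
print(len(docs))  # 2 documents, one per project
```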

This would be an easy contribution for someone who might find it interesting to work on.

I'll file both of these ideas (copy button + multiple documents in import) on GitHub. I can't promise they are at the top of my stack of things to do, but they are good ideas.


It's also worth noting the various options on import.

There are options that just create, but don't update, or options that don't update all the fields. This can be a good way to establish the project but keep editing variables in the UI.

The flag "overwrite_configurations" on the organization dialog will allow you to import variables the first time but not have successive imports replace your variables.

You could also choose to organize variable settings in "Variable Sets", where those are not part of the import process.


Here's the YAML ticket:


Here's a ticket on the copy button idea - https://github.com/vespene-io/vespene/issues/101

This would be an easy one for someone to have a go at if they have an interest in Django too.


Ok great, those tickets cover most of what I would like. As for the copy button, if you could use it to copy that project over to another Vespene server, that would be ideal and would solve what I was asking for in creating this thread.


The copy button most likely would not do that but would just be about replicating objects on the local server.

I don't anticipate many people will have more than one Vespene cluster (for those unfamiliar, Vespene head nodes as well as worker nodes both scale out).

I'm curious to hear a bit more about your use case for that as opposed to just running a distributed worker closer to what you are managing.

Speaking to the copy feature idea, I want to write a REALLY REALLY good command line when I build the REST API, and that may be an interesting and easy feature to add there, including a generalized ability to back up and copy over certain types of objects.

Having a command line helps me, as it gives something really concrete to test the API with beyond the typical QA kinds of tests, and it's also a great way for people to learn the API... and in many cases, to keep from having to learn the API :)


My use case isn't about running more than one cluster; it's about deleting the server, making a fresh new one, and copying the projects I still want to use into it. This may happen when I am testing the automation of a new Vespene instance in AWS with certain packages installed, in a new development VPC for example, without having to worry about a high-numbered build history and lots of variable definitions.

I could do a pgdump and then delete all the resources I don't want from the fresh Vespene server as a workaround, but seeing as this is currently possible with Jenkins by taking the config.xml files of the jobs I want to copy, I would be losing a feature that exists in Jenkins. As chopraa said, it would effectively be exporting .vespene files.


So... yeah... exporting .vespene files would handle the project config, but would miss the configuration of all the other object types.


Maybe on the export you could have a checkbox for whether you want to export associated variables, and the associated pipeline too? Then the relevant information would just be listed under variables, pipeline, pipeline_definition and stage in the .vespene file.

This wouldn't be ideal, though, if you were using variables through variable sets; perhaps a .vespene file should have an additional option of variable set, providing the name of an existing variable set on the Vespene server? This would be useful in its own right, and would then also allow exporting of associated variable sets.


If you are going to want all the related objects (and you need them for the project to work), this feels like pgdump to me.

You could do --exclude-table-data to eliminate the builds.
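A sketch of that invocation, again wrapped in Python for illustration, with the database name, output file, and table name ("builds", per the earlier reply) as assumptions:

```python
import shlex

# Full dump that skips the row data for the "builds" table, so you get
# all the configuration without the build history. The table's schema
# is still included, only its rows are dropped.
cmd = [
    "pg_dump", "-Fc",
    "--exclude-table-data=builds",
    "-f", "vespene-no-builds.dump",
    "vespene",  # database name is an assumption; check your settings
]
print(shlex.join(cmd))
```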

I'm open to more discussion/thoughts on this, but it seems the backup/experimentation use case is already supported, and I want to keep the system minimal where tools already exist to do these things.

I can see maybe web-surfacing it though to make it easy.