@atmos
Last active November 23, 2020 22:35
Response to a dude who asked about heaven. https://github.com/holman/feedback/issues/422

@holman got a request about our deployment system, heaven

I know it's not a high priority, but has there been any activity on open-sourcing the core Heaven gem?

There has. I've been working on extracting the non-GitHub-specific parts into two gems. The first is a CLI portion called hades. The second is an HTTP API portion called heaven.

When you open source something previously used as an internal tool, like Heaven, Hubot, Boxen, etc., how do you manage and hook in the parts that need to stay internal?

Normally I focus on four questions:

  • Are the defaults that we're offering sane if we wanted this for any other project?
  • Are we approaching this in a way that we can share the code without leaking sensitive credentials/info?
  • Are we building something flexible enough that people can extend this when they find themselves feeling like they need a unique solution?
  • Who is willing to dedicate the time to maintain the software once it's open sourced?

Heaven doesn't meet all of these criteria yet and that's part of the reason we still haven't open sourced it.

How does Heaven create Capistrano recipes on the fly? Do you have a baseline of static recipes for each app and then inject and/or config the dynamic parts? Is it template-based or full-on Ruby?

We have a gem called hades that's a thin wrapper around capistrano. We essentially neuter capistrano's default behavior in favor of an approach that we've found more sane for our requirements. This is all done via 'default.rb', a capistrano recipe that each application sources in its respective capfile. Any defaults can be overridden by creating a custom capistrano recipe that sources default.rb.
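
As a rough sketch of what that layering might look like, a per-app capfile could load the shared defaults and then override them. Everything here is an assumption (the gem entry point, recipe paths, and file names are invented, not the actual hades layout):

```ruby
# Hypothetical capfile for an app that uses hades' shared defaults.
require "hades"                  # assumed gem entry point
load "hades/recipes/default.rb"  # the shared default.rb described above (path assumed)

# App-specific overrides live in a recipe that itself sources default.rb:
load "config/deploy.rb"          # assumed per-app override file
```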

The deployment approach is basically capistrano doing the following:

cd #{deploy_to} &&
  git fetch &&
  git reset --hard <sha from the cli> &&
  god restart #{application}

There's also an apps.yml file that takes attributes like name, repo, heroku_name, and a few others. The hades gem comes with an executable that parses the apps.yml and then executes capistrano in different ways depending on which target an application is destined for.
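
A minimal sketch of how an executable might parse such a file and dispatch on the target. The text only mentions the `name`, `repo`, and `heroku_name` attributes; the `target` key, the YAML shape, and the generated commands are assumptions for illustration:

```ruby
require "yaml"

# Hypothetical apps.yml contents; `target` is an assumed attribute.
APPS_YML = <<~YAML
  - name: github
    repo: github/github
    target: real
  - name: speakerdeck
    repo: github/speakerdeck
    heroku_name: speakerdeck-production
    target: heroku
YAML

# Build a deploy command per target; the commands are illustrative guesses.
def deploy_command(app)
  case app["target"]
  when "heroku" then "git push git@heroku.com:#{app['heroku_name']}.git master"
  when "real"   then "cap -f capfiles/#{app['name']} deploy"
  end
end

YAML.load(APPS_YML).each { |app| puts deploy_command(app) }
```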

We currently support three targets "real computers", heroku, and a custom internal app we use to build our iOS applications. For cases like heroku the capfiles are just erb since there shouldn't be any customization there. The iOS app flow is all driven by an HTTP API so in that case capistrano isn't even invoked, but the command line for deploying the iOS apps remains the same. The "real computer" approach is normally an 8-10 line custom file with who knows what's in there. The custom capfile for github.com is really long and crazy because github is a pretty large app with a lot of services. Most of our apps are fine with the defaults though.

The hades executable feels like an old school unix command line tool. We just use optparse and the examples that you pasted work fine right now if you replace the heaven command with hades. It also doesn't rely on any datastore or external services besides the presence of the apps.yml file. This can be useful when shit hits the fan and different portions of your architecture are failing. You can always hop on to the heaven boxes and manually invoke a deployment to get things back in line.

According to #38, the recipes to deploy each application are maintained as part of the Heaven code base rather than each app. Is Heaven smart about an application's environment stuff (e.g., can it automagically determine how many fe servers there are in the github.com production environment) or does it need to be updated manually every time an application's production environment changes?

It depends. :)

Heaven is a rails 4 application that relies on the hades gem. It runs unicorn and resque on a pair of awesome bare metal machines.

Hades is written in a way where host lists can be provided dynamically. In our case we have ruby code that hits an HTTP endpoint to pull down a full host list, plus ruby helpers for simple hostname filtering. So in heaven we can write environment-aware custom logic that hades can take advantage of from capistrano. In our github.com capfile it looks something like this.

  Heaven.frontends("production").each do |host|
    role :app, host
    role :web, host
  end
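
A hypothetical sketch of what a `Heaven.frontends` helper like that could look like. The inventory endpoint, hostname scheme, and filtering rules below are all invented; only the fetch-then-filter shape comes from the description above:

```ruby
require "json"
require "net/http"

module Heaven
  INVENTORY_URL = "https://inventory.example.com/hosts.json" # assumed endpoint

  # Pure filtering step, separated out so it works without the network.
  # Assumes hostnames like "fe1.production" (invented scheme).
  def self.filter_frontends(hosts, environment)
    hosts.select { |h| h.start_with?("fe") && h.end_with?(".#{environment}") }
  end

  # Fetch the full host list, then filter it down to frontends.
  def self.frontends(environment)
    hosts = JSON.parse(Net::HTTP.get(URI(INVENTORY_URL)))
    filter_frontends(hosts, environment)
  end
end
```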

Most of the complex examples (github.com) are still primarily maintained by hand, much to the dismay of our sysadmins.

Right now heaven deploys 175 apps, and keeping the logic for deployments in one place outweighed the benefit of keeping capfiles easily accessible on a per-application basis. No one at GitHub ever says "I ran cap." We intentionally made changing the logic of deployments difficult, to help with maintainability and understanding when problems do arise.

Most of our environment-related things, like API tokens, live in puppet right now. Our process management system ensures environment variables are present at runtime. In a lot of ways our apps end up being UNIX-environment heavy, like heroku apps. We're currently not entirely satisfied with the time it takes for small changes in the puppet workflow. Heaven can optionally hit a config store, and custom environment variables are written to servers on each deployment.

Assuming that's not just a notional CLI, does the HTTP API do its thing by literally shelling out to or posix-spawning the CLI (or perhaps use Capistrano::CLI), or does it completely avoid the CLI?

Heaven has an HTTP endpoint that takes a big JSON payload that essentially turns into CLI arguments to hades. We double fork out of the unicorn process group so we're reparented to init, and use posix-spawn to invoke the hades executable. This keeps us from getting zombie processes and allows us to deploy heaven without worrying about impacting deployments that are currently running.
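
The double-fork trick can be sketched in plain Ruby. The real code uses the posix-spawn gem; this stdlib version is only illustrative, and the hades arguments in the example are made up:

```ruby
# Spawn a command fully detached: the middle process exits right away
# and is reaped, so the grandchild is reparented to init and never
# becomes a zombie of the calling (unicorn) process.
def spawn_detached(*cmd)
  pid = fork do
    fork { exec(*cmd) }  # grandchild execs the real command
    exit!                # middle process exits immediately...
  end
  Process.waitpid(pid)   # ...and is reaped here, leaving no zombies
end

# e.g. spawn_detached("hades", "deploy", "github", "production")  # arguments assumed
```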

Would the corresponding API call for the second example above conceptually be something like a POST to /github/production?hosts=fe&branch=mybranch?

Technically it's POST to / with a JSON body like the query params you posted.
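
Assuming field names that simply mirror the query-param example above (they are guesses, not the actual heaven schema), building that request in Ruby might look like:

```ruby
require "json"
require "net/http"

uri = URI("https://heaven.example.com/")  # hypothetical host

# Field names assumed from the query-param example; not the real schema.
payload = { "app" => "github", "environment" => "production",
            "hosts" => "fe", "branch" => "mybranch" }

req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
req.body = JSON.generate(payload)
# Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```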

Finally, any idea how Heaven got its name? I find naming stuff fascinating, especially since it's one of the two hard things.

I named it heaven because we use a process management system that @mojombo wrote called god. There's like god... and heaven... and in hindsight it's a pretty stupid name, but it stuck. @technoweenie and I are pretty bad about naming things that we think are hilarious at the time, but people tend to find the names frustrating because they don't always imply the function the tool provides.

The hades gem was an extraction from the heaven codebase and we already had heaven so why not?

Another question after seeing this blog post about deploying: is the knowledge of whether a particular application is "locked" for deployment maintained in Heaven?

One of the cool things about separating the logic of the actual deployment out from the API is that we can have two modes. One is workflow-friendly and the other is "do as I say right fucking now." The workflows basically just save us from doing a bunch of manual shit day in and day out.

Locking and unlocking is just another endpoint, separate from deployment, that throws a bit of info about the app into a data store; whether an app is locked or unlocked dictates other functionality of the system. If you deploy a branch, the application is locked in that environment until you unlock it, deploy master, or merge a pull request. We also have a variety of checks that rely heavily on the GitHub API. We ensure that deployed branches have master merged into them, auto-merge master for people if possible, auto-deploy subsequent commits to a deployed branch if CI passes, auto-detect merged pull requests, etc.
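
The lock bookkeeping described above could be sketched as a tiny in-memory store. The real thing lives in heaven's data store behind an HTTP endpoint; the class and method names here are invented:

```ruby
# Hypothetical per-app, per-environment deploy locks.
class DeployLocks
  def initialize
    @locks = {} # { [app, environment] => branch }
  end

  # Deploying a branch locks the environment; deploying master releases it.
  def deploy(app, environment, branch)
    if branch == "master"
      @locks.delete([app, environment])
    else
      @locks[[app, environment]] = branch
    end
  end

  # Explicit unlock, as via the separate endpoint mentioned above.
  def unlock(app, environment)
    @locks.delete([app, environment])
  end

  def locked?(app, environment)
    @locks.key?([app, environment])
  end
end
```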

We also support auto-deployment; it's another thing that persists in heaven's data store. We can easily toggle it via the API if we ever find ourselves needing to pause deployments. Most actively developed projects auto-deploy on green builds, and anything of merit has CI. We still use janky for CI and have a custom notifier that integrates with heaven to relay build info. I'm hoping to move over to the Status API on github.com when I get some time.

Keep in mind that basically all of this is driven via chat, so no one really curls the heaven endpoints; hubot does all of that for us. Everything starts with /deploy, and if we wanna bypass any of the workflows we can use /deploy!.

I hope that answers some of your questions. As far as open sourcing it, I think it's kind of a trade-off around the best use of my time and others'. We could spend time making it friendly to the world, or we could work on fixing bugs and making my coworkers' lives easier. Long term, though, I'd really like to open source this, because I really don't want to have to build it all again.

I think people would be underwhelmed by the technology and implementation though. It's just a bit of ruby, UNIX, and HTTP. It's not pushing the boundaries of computing, it just chugs along doing its job so we don't have to.

@meatballhat

Who is willing to dedicate the time to maintain the software once it's open sourced?

Me. Sign me up! On a related note, @mrtazz and @jgoulah mentioned recently that they're interested in getting the OSS deployinator back in line with their diverged internal version of Deployinator. I'm in the throes of getting Hubot + Travis + GitHub API + Deployinator + Chef + whatever else all glued together and I'd love to have heaven and hades in the mix if they're the best tools for the job, especially since what I'm trying to emulate is the GitHub continuous delivery process as I understand it from the outside.

@atmos

atmos commented Sep 20, 2013

@meatballhat I'd rather work with people to consolidate what a payload looks like that's flexible enough for everyone than try to implement one system to rule them all. Ideally I'd like this payload to come from GitHub. 😃

@meatballhat

@atmos Sounds great to me. Where/how is this discussion happening? (here?) Were you thinking GitHub would expose the necessary bits, whether via payload push or API, to allow folks to glue together their own versions of Heaven & Hades?

@sts

sts commented Dec 16, 2013

@atmos what are your feelings regarding Capistrano 3? Are you planning to move to it, or stay with the older version?
They removed the Capistrano::CLI class; is it still as extensible as 2 was? What do you think about https://github.com/capistrano/sshkit?

@tpendragon

@atmos Got it all deployed and working with our infrastructure, but using IRC as the chat medium. Can't thank you enough - this simplifies things tremendously.

@andycox

andycox commented Aug 14, 2015

For anyone stumbling across this, @atmos released the open source Heaven last year. 🙇 Check it out.
