Configuration and the Release Process


Alex-SF

Mar 26, 2010, 5:04:08 PM
to devops-toolchain
How do you release configuration?
* Lump it inside the deployable code artifact
* Break it out as a separate artifact that goes out alongside the
code artifact
* Generate a working configuration based on templates or aggregated
fragments
* Leave it to an external system configuration management layer.
* Manual customization at deployment time (yikes)

I find myself generalizing common approaches like so:
* Build-driven configuration: Build process generates packaged
configuration artifacts that can be released and installed without
customization. Note there could be multiple permutations of this. See
http://www.build-doctor.com/2010/03/26/supporting-multiple-environments-part-2/
* Deploy-driven: At deployment time, configuration is customized by
the deployment automation
* System configuration-driven: Rule and role based policy that
continuously runs enforcing compliance based on configuration
specification
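As a concrete illustration of the build-driven approach, a build step might render one installable config file per environment. This is only a sketch; the template keys, hostnames, and environment names below are invented for illustration, not taken from the thread:

```python
from string import Template

# Hypothetical config template; keys and hostnames are invented.
TEMPLATE = Template("db.host=$db_host\ndb.pool=$db_pool\n")

# One set of values per target environment, known at build time.
ENVIRONMENTS = {
    "staging":    {"db_host": "db-stg.example.com", "db_pool": "5"},
    "production": {"db_host": "db-prod.example.com", "db_pool": "50"},
}

def build_config_artifacts():
    """Render one ready-to-install config file per environment at build time."""
    return {env: TEMPLATE.substitute(values)
            for env, values in ENVIRONMENTS.items()}
```

Each rendered file can then be packaged and released without any customization at deploy time, which is what distinguishes this from the deploy-driven approach.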

One might imagine a particular tool and/or framework underlying those
approaches.

All of the above begs a final question: How are you communicating
configuration changes between development and operations teams (and
back again)?
* Release artifact is the communication. Developers drive the
configuration changes through to operations, delivering working
configurations as part of the app release
* Operations plays "catch up". Operations analyzes file deltas between
the developer release and updates their own methods to replicate
similar features using their own toolset
* Hit and miss. No consistent standard so changes get lost in the
cracks


jallspaw

Mar 29, 2010, 8:29:39 AM
to devops-toolchain
Skipping forward to your question:

> All of the above begs a final question: How are you communicating
> configuration changes between development and operations teams (and
> back again)?

Whatever the kind of configuration change, they are all in
version control, so there's an audit trail of what was done when, by
whom, with hopefully a comment or two about it.

Depending on the nature of the change:
- If there's a CM ticket, it's there for all to see, and notifications
usually go out via email well ahead of time, when the change starts,
and when the change is finished.

- If there's not a ticket, the changes are still visible to all, and
notifications go out via email, IRC, or IM. We had an IM bot at Flickr
which would parrot messages sent to it to a list of people. Those
messages were also injected into IRC, where they were fed into a
search engine for easy finding later.
Ex: "4:07:24 PM FlickrIMBot: OpsJoe says: pushed config change to re-
point to "dbthing[1-2]" - renamed servers to dbthing1,2, and the old
ones to dbthingold1,2, which aren't doing anything, now."

If the change is large enough, they're also brought up as a reminder
in both the dev and ops weekly staff meetings. "FYI, this is happening
on Wednesday...."

Again, though, I think that the actual type of change is important. Many
application-specific changes (i.e. what database to talk to for what
data) were kept within the application, and so a change made
there (feature flags, dark launches, etc.) was really no different
from a normal code deploy.

-j

On Mar 26, 2:04 pm, Alex-SF <aho...@users.sourceforge.net> wrote:
> How do you release configuration?
> * Lump it inside the deployable code artifact
> * Break it out as a separate artifact that goes out along side the
> code artifact?
> * Generate a working configuration based on templates or aggregated
> fragments
> * Leave it to an external system configuration management layer.
> * Manual customization at deployment time (yikes)
>
> I find myself generalizing common approaches like so:
> * Build-driven configuration: Build process generates packaged
> configuration artifacts that can be released and installed without
> customization. Note there could be multiple permutations of this. See
> http://www.build-doctor.com/2010/03/26/supporting-multiple-environments-part-2/

ahowchin

Mar 29, 2010, 7:27:26 PM
to devops-toolchain
I find that "configuration" tends to be a combination of both code
changes and configuration properties (like JVM JAVA_OPTS). We use our
source control system to push through changes (e.g. from development
branch -> production branch), in combination with applying new
configuration properties at deploy time. The issue for me is timing -
ensuring that I make the changes to the configuration properties at
the same time as the new code comes through. And if the new code comes
through in an overnight build (e.g. 3AM), it can be tricky - any
suggestions for managing this?
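One possible way to keep code and config in lockstep (my sketch, not something described in this thread, and a deployment tool may offer its own mechanism for this): have the build stamp a small manifest with the config revision it was tested against, so unattended deploy automation, even at 3AM, applies exactly that revision rather than whatever happens to be newest:

```python
import json

def make_manifest(build_id: str, config_rev: str) -> str:
    """Written at build time, alongside the code artifact.

    The build and revision identifiers here are hypothetical examples.
    """
    return json.dumps({"build": build_id, "config_rev": config_rev})

def config_rev_for_deploy(manifest: str) -> str:
    """Read at deploy time so the automation checks out the matching config."""
    return json.loads(manifest)["config_rev"]
```

The overnight deploy then becomes self-describing: the timing problem goes away because the code artifact carries a pointer to the exact configuration it expects.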

I've never really broken it down into generalised categories before -
thanks Alex!

"All of the above begs a final question: How are you communicating
configuration changes between development and operations teams (and
back again)? "

Given my point above, I find this particularly tricky. Putting
configuration into source sounds great, but does that work with tools
like ControlTier (e.g. changing JAVA_OPTS)? (Bit of a CT newbie, so if
anyone can enlighten me please do!)

Cheers,
Adrian Howchin

Noah Campbell

Mar 29, 2010, 8:08:03 PM
to devops-t...@googlegroups.com
I think treating configuration as code is a very important concept.
Treating it otherwise leads towards dysfunction. The tough part to
tackle is that configuration by its very nature requires
cross-departmental coordination, whereas code is typically isolated
to engineering (QA is required to coordinate the change as well, but
in my experience it's difficult to find a strong QA department that
will scrutinize configuration).

-Noah


Scott McCarty

Mar 29, 2010, 8:13:47 PM
to devops-t...@googlegroups.com

Not just configuration, but software and hardware deployment in general, for example replacing a filer or upgrading Apache. I have also found it difficult to find an organization that will even buy into the concept, much less a QA team that can handle it.

Scott M

On Mar 29, 2010 8:08 PM, "Noah Campbell" <noahca...@gmail.com> wrote:

> I think treating configuration as code is a very important concept. [...]

Dan

Mar 30, 2010, 1:15:38 PM
to devops-toolchain
We have environment-agnostic code artifacts and a separate
"repository" of configuration files. Currently it is a big structure
in Subversion, with the config files for all environments, servers,
and applications checked in in their runnable form (no templates
yet), plus scripts to refresh and police the files. We're looking at
how to move to a template-, rule-, and role-based mechanism, and from
what we've seen it will be easiest to write our own tool rather than
use something out there. Any favorite tools for just managing
configuration files? We use ControlTier for deployments, but haven't
seen any tool that excites us about configuration file management.

We spend a lot of time and effort communicating changes to
configurations. Most of it is manual in some way. First Dev
communicates changes in daily builds to CM (usually by email), then
CM communicates with Ops prior to a staging/production release to
aggregate all the changes of a sprint into one big change. We try to
capture all changes in a file, but we usually resort to revision diffs
of the configuration files in Subversion to capture everything. We
tend to spend days prior to a release first aggregating all the
configuration changes, then communicating them to Ops; then Ops makes
the changes and we review them before the deployment. It's labor
intensive, but effective, hence the desire to find a tool to help
automate this process.

Back-communication (Ops to Dev) is not much of an issue, as they tend
only to change values, not keys, and our model allows Ops to manage
their own values.
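The key-vs-value split described above could be partially automated. Here is a hedged sketch (the file contents and section names are invented, not Dan's actual layout) that separates key additions and removals, which Dev drives, from value-only changes, which Ops owns, given two revisions of an INI-style config:

```python
import configparser

def diff_config(old_text: str, new_text: str) -> dict:
    """Split a config diff into key changes vs. value-only changes."""
    old, new = configparser.ConfigParser(), configparser.ConfigParser()
    old.read_string(old_text)
    new.read_string(new_text)

    def flatten(cfg):
        # Map (section, key) -> value for easy set arithmetic.
        return {(s, k): v for s in cfg.sections() for k, v in cfg[s].items()}

    o, n = flatten(old), flatten(new)
    return {
        "added_keys": sorted(set(n) - set(o)),
        "removed_keys": sorted(set(o) - set(n)),
        "changed_values": sorted(k for k in set(o) & set(n) if o[k] != n[k]),
    }
```

Run against two Subversion revisions of a config file, a report like this could replace much of the manual aggregation step: key changes go to Ops for review, while value-only changes may need no Dev sign-off at all.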

Dan

On Mar 26, 5:04 pm, Alex-SF <aho...@users.sourceforge.net> wrote:
> How do you release configuration?
> * Lump it inside the deployable code artifact
> * Break it out as a separate artifact that goes out along side the
> code artifact?
> * Generate a working configuration based on templates or aggregated
> fragments
> * Leave it to an external system configuration management layer.
> * Manual customization at deployment time (yikes)
>
> I find myself generalizing common approaches like so:
> * Build-driven configuration: Build process generates packaged
> configuration artifacts that can be released and installed without
> customization. Note there could be multiple permutations of this. See
> http://www.build-doctor.com/2010/03/26/supporting-multiple-environments-part-2/

ahowchin

Mar 31, 2010, 10:31:38 PM
to devops-toolchain
Assuming that the config is not inside our deployed application, what
would such a config management tool look like? What would it encompass
(what is its scope and purpose)? How would it push changes through -
automated or on operator input (or both)? What would trigger a change
in this config management system? What would the interface be - GUI,
pure XML files, command-line only?

A few thoughts of my own:
What would it encompass (what is its scope and purpose)? --> To
manage configuration changes in line with code changes, ensuring that
both sets move into an environment consistently (and preferably in an
automated/semi-automated manner, able to be triggered).
How would it push changes through - automated or on operator input (or
both)? Both
What would trigger a change in this config management system?
Schedules, either internal or external - e.g. through ControlTier.
What would the interface be - GUI, pure XML files, command-line only?
A mixture of all 3, but a strong XML/command-line interface would be
necessary to ensure interoperability with other tools (i.e. the
tool's API).

These are just my initial thoughts and imaginings - feel free to add/
destroy/suggest your own.

Cheers,
Adrian

James Bailey

Apr 1, 2010, 2:19:07 AM
to devops-t...@googlegroups.com
On 30 March 2010 01:08, Noah Campbell <noahca...@gmail.com> wrote:
> I think treating configuration as code is a very important concept.
> Treating it otherwise leads towards dysfunction. The tough part to
> tackle is that configuration by its very nature requires
> cross-departmental coordination, whereas code is typically isolated
> to engineering (QA is required to coordinate the change as well, but
> in my experience it's difficult to find a strong QA department that
> will scrutinize configuration).

I have to quote James White on Infrastructure here:

== Rules ==
On Infrastructure
—————–
There is one system, not a collection of systems.
The desired state of the system should be a known quantity.
The “known quantity” must be machine parseable.
The actual state of the system must self-correct to the desired state.
The only authoritative source for the actual state of the system is the system.
The entire system must be deployable using source media and text files.

I would dare to add to that:

The source media and text files must be versioned

I don't feel it is about treating configuration as code; it is about
treating everything (configs, code, even firmware) as components in a
single system. Processes and tools have to be able to deal with the
entire stack, from switch and router firmware, to high-end SAN and NAS
configuration, to serried ranks of servers. By the same token, any
module of that system must be machine parseable, must be deployable
from source media, must be versioned.

QA is not really about testing[0]; it is really about Quality
Assurance. They are the gatekeepers of our reputations; as devs and
sysops we write the unit tests and the BDT scripts that define the
functional testing of our systems.

[0] I seem to have defined and written most of the infrastructure
tests at my previous position before handing them over to the QA team.

Jim :)

jameswhite

Apr 1, 2010, 2:10:44 PM
to devops-toolchain
I've no idea how we overlooked that one. We did version control for
everything. I guess we just assumed it was implied. I'll update the
rules. Thanks. And thanks to lak for pointing me to this.

On Apr 1, 1:19 am, James Bailey <paradoxbo...@googlemail.com> wrote:

Lee Thompson

Apr 4, 2010, 12:42:59 AM
to devops-toolchain
On Mar 26, 4:04 pm, Alex-SF <aho...@users.sourceforge.net> wrote:

> I find myself generalizing common approaches like so:
> * Build-driven configuration: Build process generates packaged
> configuration artifacts that can be released and installed without
> customization. Note there could be multiple permutations of this. See
> http://www.build-doctor.com/2010/03/26/supporting-multiple-environments-part-2/

I really like EJ Ciramella's build-doctor write-up on different
approaches and how each one "scales" better. I'd add that the
well-defined configuration-default concepts that I first read about in
the Postfix project, and which some folks call "convention over
configuration", help scalability. This is incredibly useful: when you
set up a new experimental environment, it should for the most part
work and boot up without tremendous config work. So, when you choose a
template tool, make sure it has support for defaults and overloads.
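The "defaults and overloads" idea maps directly onto a layered lookup. A minimal sketch (the key names are invented for illustration) using Python's ChainMap, where the most specific layer wins:

```python
from collections import ChainMap

# Convention-over-configuration defaults: every key has a sane value,
# so a fresh environment boots with no config work at all.
DEFAULTS = {"timeout_s": 30, "retries": 3, "db_host": "localhost"}

def effective_config(env_overrides=None, incident_overrides=None):
    """Layered lookup: incident overload > environment override > default."""
    return dict(ChainMap(incident_overrides or {},
                         env_overrides or {},
                         DEFAULTS))
```

The incident layer matches the operator-driven overloading described below: an admin can drop in a temporary override without regenerating builds or pushing artifacts, and remove it once the incident is over.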

Another thing that adds to the "scalability" of config is incident-driven
config overloading by the admin/operator. Normal runtime
would probably not use this capability, but when you have to move fast
during an incident that doesn't have a defined procedure, you don't
want to go back and generate builds and push artifacts around. This
capability also helps developers and performance optimizers experiment
with configs prior to committing them to source control.

Benjamin VanEvery

May 29, 2013, 9:57:10 PM
to devops-t...@googlegroups.com
I'm curious if there has been any new thought on this topic in our community (link to original discussion -- https://groups.google.com/forum/#!msg/devops-toolchain/MbzgvD_rLM8/86er6qw5tukJ).

From the online research I've done, I haven't come across much. The one promising article that I found is an article on the Netflix blog about their tool Archaius, http://techblog.netflix.com/2012/06/annoucing-archaius-dynamic-properties.html, which is a step in the right direction, but still misses a few requirements for my team. 

Before I proceed, I want to clarify what "application configurations" mean to me. To me, they represent configurations needed during application runtime. They do NOT represent system configurations, and would therefore not include configuration for something like LDAP or NTP servers (i.e. things managed by tools like puppet). An application configuration might be something like a connection string to the current primary master database or the current timeouts for some third party resource.

In our architecture, our application configuration system must meet the following requirements:
* timely convergence of configuration values; can reliably assume new configurations have been deployed and digested by all applications within $threshold seconds
* separation & independence of the system from the various application code bases
* environment specific overrides (where an environment signifies the context within which the application runs, e.g. data center, amazon region, experiment stack, ...) such that, e.g., configurations can differ between data centers
* resilience; if the configuration system is down, dependent applications should continue to function if perhaps only in a degraded mode

Our system is made up of heterogeneous services on various deployment schedules. Our application configurations are stored as INI formatted files and managed in a git repository whose sole purpose is version history of the configurations. Each of our services (mix of PHP, Java, NodeJS, and custom scripts) can parse the INI files to get the necessary information.

What I'm really curious to learn are the various ways that other teams have solved the issue of application configuration management and deployment in their architectures in regard to the requirements above.