Most other combinations of endpoints work as well:
* `http://localhost:9292/h/<hostname>/s/` - All services for `<hostname>`
* `http://localhost:9292/h/<hostname>/s/<servicename>` - `<servicename>` on `<hostname>`
* `http://localhost:9292/h/<hostname>/<servicename>` - `<servicename>` on `<hostname>`
* `http://localhost:9292/a/<appname>/<configname>` - Configuration `<configname>` for `<appname>`
* `http://localhost:9292/c/<appname>/<element>` - Specific configuration element for `<appname>`
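For example, here's one way to pull a single service record from Ruby. This is just a sketch -- the hostname and service name below are made up, and I'm not showing the response format since it depends on what your Noah instance has stored:

```ruby
require 'net/http'
require 'uri'

# Hypothetical host and service names -- substitute your own
uri = URI.parse("http://localhost:9292/h/host1.example.com/s/http")
response = Net::HTTP.get_response(uri)

puts response.code  # HTTP status
puts response.body  # whatever Noah has stored for that service
```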
...
...
# Adding new entries
I've not yet fleshed out all the PUT support for each route. I've been doing additions via `irb` for now:
## Adding a new application and configuration item
...
...
# Hosts and Services/Applications and Configurations
Host/Services and Applications/Configurations are almost the same thing with a few exceptions. Here are some basic facts:
* Hosts have many Services
* Applications have many Configurations
* Hosts and Services have a status: `up`, `down` or `pending`
The intention of the `status` field for Hosts and Services is that a service might, when starting up, set the appropriate status, and do the same when shutting down. This also applies to hosts (i.e. a curl PUT is sent to Noah during the boot process).
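As a rough sketch of that boot-time registration, here's the same idea from Ruby instead of curl. This assumes the PUT routes mirror the GET routes listed above and accept a JSON body with `name` and `status` fields -- something to verify against the actual routes before using it:

```ruby
require 'net/http'
require 'uri'
require 'socket'
require 'json'

# Assumed route and payload -- verify against the real PUT handlers
hostname = Socket.gethostname
uri = URI.parse("http://localhost:9292/h/#{hostname}")

request = Net::HTTP::Put.new(uri.request_uri)
request["Content-Type"] = "application/json"
request.body = { "name" => hostname, "status" => "pending" }.to_json

response = Net::HTTP.new(uri.host, uri.port).request(request)
puts response.code
```

Once all of the host's services are up, the same call would be made again with `"status" => "up"`, and with `"down"` on the way out.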
While an application might have different "configurations" based on environment (production, qa, dev), the Configuration object in Noah is intended to be more holistic, i.e. these are the Configuration atoms (a YAML file, property X, property Y) that form the running configuration of an Application.
Here's a holistic example using a Tomcat application:
* The host running Tomcat comes up and sets its status to "pending"
* Each service on the box starts up and sets its status to "pending" and finally "up" (think steps in the init script for the service)
* Tomcat (now in the role of `Application`), given a single property in a properties file called "bootstrap.url", grabs a list of `Configuration` atoms it needs to run. Let's say, by default, Tomcat starts up with all webapps disabled. Using the `Configuration` item `webapps`, it knows which ones to start up.
* Each webapp (an application under a different context root) now has the role of `Application` and the role of `Service`. As an application, the webapp would grab things that would normally be stored in a .properties file - maybe even the log4j.xml file. In the role of `Service`, a given webapp might be an API endpoint, so it would have a hostname (a virtual host, maybe?) and services associated with it. Each of those has a `status`.
That might be confusing, and it's a fairly contrived example. A more common use case would be the one above where, instead of storing database.yml on the server, the Rails application actually reads the file from Noah. Now that might not be too exciting, but try this example:
* Rails application with memcached as part of the stack.
* Instead of a local configuration file, the list of memcached servers is a `Configuration` object belonging to the rails application's `Application` object.
* As new memcached servers are brought online, your CM tool (Puppet or Chef) updates Noah.
* Your Rails application, either by restarting (and thus re-bootstrapping the list of memcached servers from Noah) or by using the Watcher subsystem, is instantly aware of those servers. You could fairly easily implement a custom Watcher that writes the Passenger restart file whenever the list of memcached servers changes (see the sketch below).
Make sense?
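For what it's worth, here's a very rough version of that memcached Watcher idea. Rather than guessing at the Watcher subsystem's API, this sketch just polls the Configuration endpoint (the application and configuration names are made up) and touches Passenger's restart file when the body changes:

```ruby
require 'net/http'
require 'uri'
require 'digest/sha1'
require 'fileutils'

# Made-up names -- point these at your real Application/Configuration
uri = URI.parse("http://localhost:9292/c/my_rails_app/memcached_servers")
restart_file = "/var/www/my_rails_app/tmp/restart.txt"

last_seen = nil
loop do
  body = Net::HTTP.get_response(uri).body
  checksum = Digest::SHA1.hexdigest(body)

  # The server list changed -- tell Passenger to restart the app
  FileUtils.touch(restart_file) if last_seen && checksum != last_seen

  last_seen = checksum
  sleep 30
end
```

A real Watcher would presumably be pushed the change instead of polling, but the end result is the same: the running application picks up the new server list without anyone editing a config file on the box.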
# Constraints
You can view all the constraints inside `models.rb` but here they are for now:
* A new host must have at least `name` and `status` set.
* A new service must have at least `name` and `status` set.
* Each Host `name` must be unique
* Each Service `name` per Host must be unique
* Each Application `name` must exist and be unique
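To put the first few constraints in concrete terms, here's roughly what they mean from `irb`, assuming the models follow normal Ohm `create` semantics. The attribute names come straight from the list above; anything else is a guess, so check `models.rb` for the real API:

```ruby
# name and status both supplied -- satisfies the constraints
h = Host.create(:name => 'host1.example.com', :status => 'up')

# no status supplied -- per the constraints above, this should fail validation
bad = Host.create(:name => 'host2.example.com')

# duplicate host name -- should also be rejected, since Host names are unique
dup = Host.create(:name => 'host1.example.com', :status => 'up')
```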