Workflow is composed of a number of small, independent services that combine to form a distributed CaaS.

All Workflow components are deployed as services (and associated controllers) in your Kubernetes cluster. If you are interested, we have a more detailed exploration of the Workflow architecture.

All of the componentry for Workflow is built with composability in mind. If you need to customize one of the components for your specific deployment, or need its functionality in your own project, we invite you to give it a shot!


Project Location: drycc/controller

The controller component is an HTTP API server that serves as the endpoint for the drycc CLI. The controller provides all of the platform functionality and interfaces with your Kubernetes cluster. The controller persists all of its data to the database component.
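To make the CLI-to-controller relationship concrete, here is a minimal sketch of how a client builds an authenticated request against the controller's HTTP API. The host, token, and exact endpoint path are illustrative assumptions, not guaranteed to match your deployment:

```python
from urllib.request import Request

# Hypothetical values: in practice the controller URL and token come
# from `drycc login`; the path below is only illustrative.
CONTROLLER = "http://drycc.example.com"
TOKEN = "abc123"

def build_request(path: str) -> Request:
    """Build a token-authenticated request to the controller API,
    in the style the drycc CLI uses."""
    return Request(
        url=f"{CONTROLLER}/v2/{path}",
        headers={
            "Authorization": f"token {TOKEN}",
            "Content-Type": "application/json",
        },
    )

req = build_request("apps")
```

The request is constructed but not sent; everything the CLI does ultimately reduces to calls like this against the controller.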


Project Location: drycc/passport

The passport component exposes a web API and provides OAuth2 authentication.


Project Location: drycc/postgres

The database component is a managed instance of PostgreSQL which holds the majority of the platform's state. Backups and WAL files are pushed to object storage via WAL-E. When the database is restarted, backups are fetched and replayed from object storage so no data is lost.


Project Location: drycc/builder

The builder component is responsible for accepting code pushes via Git and managing the build process of your Application. The builder process is:

  1. Receives incoming git push requests over SSH
  2. Authenticates the user via SSH key fingerprint
  3. Authorizes the user’s access to push code to the Application
  4. Starts the Application Build phase (see below)
  5. Triggers a new Release via the Controller

Builder currently supports both buildpack and Dockerfile based builds.
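Step 2 of the process above, authenticating by SSH key fingerprint, can be sketched as follows. The key material, fingerprint format (MD5, colon-separated), and user table are assumptions for illustration, not the builder's actual implementation:

```python
import base64
import hashlib

# Hypothetical user table: fingerprint -> username, as the controller
# might store it when a user uploads a public key.
AUTHORIZED = {}

def fingerprint(pubkey_line: str) -> str:
    """MD5 fingerprint (colon-separated hex) of an OpenSSH public key
    line such as 'ssh-ed25519 AAAA... user@host'."""
    key_b64 = pubkey_line.strip().split()[1]
    digest = hashlib.md5(base64.b64decode(key_b64)).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def authenticate(pubkey_line: str):
    """Return the username owning this key, or None if unknown."""
    return AUTHORIZED.get(fingerprint(pubkey_line))
```

An unknown fingerprint yields no user, so the push is rejected before the authorization and build phases ever run.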

Project Location: drycc/imagebuilder

For buildpack-based deploys, the builder component launches a one-shot Job in the drycc namespace. This Job runs the imagebuilder component, which handles both default and custom buildpacks (specified by .packbuilder). The completed image is pushed to the managed Container registry on the cluster. For more information about buildpacks, see using buildpacks.

Unlike buildpack-based deploys, for Applications which contain a Dockerfile in the root of the repository, imagebuilder generates a Container image directly using the underlying Container engine. For more information, see using Dockerfiles.

Object Storage

Project Location: drycc/storage

All of the Workflow components that need to persist data will ship it to the object storage that was configured for the cluster. For example, database ships its WAL files, registry stores Container images, and slugbuilder stores slugs.
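The shared-store pattern described above can be illustrated with a toy in-memory object store; each component writes its artifacts under its own key prefix. The prefixes and key names below are made up for illustration and do not reflect the real bucket layout:

```python
# Toy stand-in for the cluster's object storage (not the real API).
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes):
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

store = ObjectStore()
# Hypothetical keys: each component keeps to its own prefix.
store.put("database/wal/000000010000000000000001", b"<wal bytes>")
store.put("registry/layers/sha256-abc", b"<image layer>")
store.put("builder/slugs/myapp-v2.tgz", b"<slug>")
```

Because every component targets the same configured store, swapping on-cluster storage for an off-cluster provider is a configuration change rather than a code change.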

Workflow supports either on or off-cluster storage. For production deployments we highly recommend that you configure off-cluster object storage.

To facilitate experimentation, development, and test environments, the default charts for Workflow include on-cluster storage via the storage component.

If you are comfortable using Kubernetes persistent volumes, you may configure storage to use persistent storage available in your environment.


Project Location: drycc/registry

The registry component is a managed container registry which holds application images generated by the builder component. Registry persists Container images to either local storage (in development mode) or to the object storage configured for the cluster.

Logger: fluentbit, logger

The logging subsystem consists of two components. Fluentbit handles log shipping and logger maintains a ring-buffer of application logs.

Project Location: drycc/fluentbit

Fluentbit is deployed to your Kubernetes cluster via Daemon Sets. Fluentbit subscribes to all container logs, decorates the output with Kubernetes metadata and can be configured to drain logs to multiple destinations. By default, Fluentbit ships logs to the logger component, which powers drycc logs.
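"Decorates the output with Kubernetes metadata" means each raw container log line gets pod-level context attached before shipping. A rough Python illustration of that enrichment step (the field names are modeled on Fluent Bit's Kubernetes filter output, but this is only a sketch):

```python
# Sketch of log-record enrichment: attach pod metadata to each record
# so downstream consumers can filter by namespace, pod, or labels.
def decorate(record: dict, pod_meta: dict) -> dict:
    enriched = dict(record)  # leave the original record untouched
    enriched["kubernetes"] = {
        "namespace_name": pod_meta["namespace"],
        "pod_name": pod_meta["name"],
        "labels": pod_meta.get("labels", {}),
    }
    return enriched

log = decorate(
    {"log": "GET / 200"},
    {"namespace": "drycc", "name": "controller-abc123"},
)
```

It is this attached metadata that lets the logger component collate streams per Application.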

Project Location: drycc/logger

The logger component receives log streams from fluentbit, collating by Application name. Logger does not persist logs to disk, instead maintaining an in-memory ring buffer. For more information on logger see the project documentation.
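The in-memory ring-buffer design can be sketched with a bounded deque per Application: once the buffer is full, the oldest line is evicted rather than written to disk. The buffer size here is arbitrary, not logger's real default:

```python
from collections import deque

# Minimal sketch of a per-Application ring buffer: old lines fall off
# the front instead of accumulating on disk.
class RingLogger:
    def __init__(self, size: int = 3):
        self.size = size
        self.buffers = {}  # app name -> deque of recent log lines

    def write(self, app: str, line: str):
        self.buffers.setdefault(app, deque(maxlen=self.size)).append(line)

    def tail(self, app: str):
        """Return the most recent lines for an app, oldest first."""
        return list(self.buffers.get(app, ()))
```

This is why `drycc logs` shows only recent output: durability is traded for simplicity, and anything older than the buffer is gone after eviction or a restart.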


Project Location: drycc/monitor

The monitoring subsystem consists of two components: Telegraf and Grafana.

Telegraf is the metrics collection agent, deployed using the DaemonSet API. It runs on every worker node in the cluster, fetches information about the pods currently running, and ships it to Prometheus.

Grafana is a standalone graphing application. It natively supports Prometheus as a datasource and provides a robust engine for creating dashboards on top of timeseries data. Workflow provides a few dashboards out of the box for monitoring Drycc Workflow and Kubernetes. The dashboards can be used as a starting point for creating more custom dashboards to suit a user’s needs.


Project Location: drycc/prometheus

Prometheus is a systems monitoring and alerting toolkit. It was open-sourced by SoundCloud in 2012 and is the second project, after Kubernetes, both to join and to graduate within the Cloud Native Computing Foundation. Prometheus stores all metrics data as time series, i.e. metric values are stored along with the timestamp at which they were recorded; optional key-value pairs called labels can also be stored with each metric.
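The data model described above can be shown with a small sketch: each series is identified by a metric name plus a label set, and holds (timestamp, value) samples. The metric name and labels below are invented for illustration:

```python
# Toy model of Prometheus's storage abstraction: a series is keyed by
# (metric name, sorted label set) and accumulates timestamped samples.
class TimeSeriesStore:
    def __init__(self):
        self.series = {}  # (name, labels-tuple) -> [(timestamp, value), ...]

    def append(self, name: str, labels: dict, value: float, ts: int):
        key = (name, tuple(sorted(labels.items())))
        self.series.setdefault(key, []).append((ts, value))

store = TimeSeriesStore()
# Hypothetical metric: same name, different labels => different series.
store.append("http_requests_total", {"app": "web", "code": "200"}, 42, ts=1700000000)
store.append("http_requests_total", {"app": "web", "code": "500"}, 3, ts=1700000000)
```

Sorting the label pairs makes the series key order-independent, which mirrors how a label set, not label order, identifies a series.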


Project Location: drycc/rabbitmq

RabbitMQ is the most widely deployed open source message broker. The controller uses Celery with RabbitMQ to execute asynchronous tasks.
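Celery itself requires a running broker, so here is only a toy sketch of the producer/worker pattern the controller relies on: the API side enqueues a task message and returns immediately, and a worker consumes it later. The task name and arguments are made up:

```python
import queue
import threading

# In-process stand-in for the broker queue (RabbitMQ in Workflow).
tasks = queue.Queue()
results = []

def worker():
    """Consume task messages until told to stop; a real Celery worker
    would dispatch each message to a registered task function."""
    while True:
        name, args = tasks.get()
        if name == "stop":
            break
        results.append((name, args))  # stand-in for executing the task

t = threading.Thread(target=worker)
t.start()
# API side: enqueue a hypothetical task and return without waiting.
tasks.put(("deploy_release", {"app": "web", "version": 2}))
tasks.put(("stop", None))
t.join()
```

Decoupling the enqueue from the execution is what keeps controller API calls fast even when the triggered work (builds, deploys) is slow.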


Project Location: drycc/helmbroker

Helm Broker is a Service Broker that exposes Helm charts as Service Classes in Service Catalog. To do so, Helm Broker uses the concept of addons. An addon is an abstraction layer over a Helm chart which provides all information required to convert the chart into a Service Class.


See Also