Applications
- 1: Deploying an Application
- 2: Using Buildpacks
- 3: Using Dockerfiles
- 4: Using Docker Images
- 5: Managing App Processes
- 6: Configuring an Application
- 7: Managing App Metrics
- 8: Managing App Lifecycle
- 9: Managing App Volumes
- 10: Managing App Gateway
- 11: Managing App Resources
- 12: Inter-app Communication
- 13: Managing Resource Limits
- 14: Domains and Routing
- 15: SSL Certificates
- 16: Using drycc path
1 - Deploying an Application
Deploy applications to Drycc using git push
or the drycc
client. An Application runs inside containers and can scale horizontally if it follows the Twelve-Factor App methodology.
Supported Applications
Drycc Workflow deploys any application or service that runs in a container. To scale horizontally, applications must store state in external backing services rather than the local filesystem.
For example, content management systems like WordPress and Drupal that persist data to the local filesystem cannot scale horizontally using drycc scale
. However, most modern applications feature stateless application tiers that scale well on Drycc.
Login to the Controller
Note
Install the client and register before deploying applications. See [client installation][install client] and user registration.

Authenticate against the Drycc Controller using the URL provided by your administrator:
$ drycc login http://drycc.example.com
Opening browser to http://drycc.example.com/v2/login/drycc/?key=4ccc81ee2dce4349ad5261ceffe72c71
Waiting for login... .o.
Logged in as admin
Configuration file written to /root/.drycc/client.json
Alternatively, log in with username and password:
$ drycc login http://drycc.example.com --username=demo --password=demo
Configuration file written to /root/.drycc/client.json
Select a Build Process
Drycc Workflow supports three build methods:
Buildpacks
Use Cloud Native Buildpacks to build applications following CNB documentation.
Learn more: Deploying with Buildpacks
Dockerfiles
Define portable execution environments using Dockerfiles built on your chosen base OS.
Learn more: Deploying with Dockerfiles
Container Images
Deploy container images from public or private registries, ensuring identical images across development, CI, and production.
Learn more: Deploying with Container Images
Tune Application Settings
Configure application-specific settings using drycc config:set
. These override global defaults:
Setting | Description
---|---
DRYCC_DISABLE_CACHE | Disable the [imagebuilder cache][] (default: not set)
DRYCC_DEPLOY_BATCHES | Number of pods to bring up/down sequentially during scaling (default: number of available nodes)
DRYCC_DEPLOY_TIMEOUT | Deploy timeout in seconds per batch (default: 120)
IMAGE_PULL_POLICY | Kubernetes [image pull policy][pull-policy] (default: “IfNotPresent”; allowed: “Always”, “IfNotPresent”)
KUBERNETES_DEPLOYMENTS_REVISION_HISTORY_LIMIT | Number of deployment revisions Kubernetes retains (default: all)
KUBERNETES_POD_TERMINATION_GRACE_PERIOD_SECONDS | Seconds Kubernetes waits for pod termination after SIGTERM (default: 30)
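These settings are applied with drycc config set like any other environment variable. For example (the values here are illustrative only, not recommendations):
$ drycc config set DRYCC_DEPLOY_BATCHES=4 DRYCC_DEPLOY_TIMEOUT=240 IMAGE_PULL_POLICY=Always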
Deploy Timeout
The deploy timeout setting behaves differently depending on the deployment method.
Deployments (Current Method)
Kubernetes handles rolling updates server-side. The base timeout multiplies with DRYCC_DEPLOY_BATCHES
for the total timeout. For example: 240 seconds × 4 batches = 960 seconds total.
ReplicationController Deploy (Legacy)
This timeout defines how long to wait for each batch to complete within DRYCC_DEPLOY_BATCHES
.
Timeout Extensions
The base timeout extends for:
- Health checks using initialDelaySeconds on liveness/readiness probes (uses the larger value)
- Slow image pulls (adds 10 minutes when pulls exceed 1 minute)
Deployments
Drycc Workflow uses [Kubernetes Deployments][] for deployments. Previous versions used ReplicationControllers (enable with DRYCC_KUBERNETES_DEPLOYMENTS=1
).
Benefits of Deployments include:
- Server-side rolling updates in Kubernetes
- Continued deployments even if CLI connection interrupts
- Better pod management
Each deployment creates:
- One Deployment object per process type
- Multiple ReplicaSets (one per release)
- ReplicaSets manage running pods
The CLI behavior remains the same. The only visible difference is in drycc ps
output showing different pod names.
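If you have kubectl access to the cluster, you can inspect these objects directly. A sketch, assuming each application runs in a Kubernetes namespace named after the app:
$ kubectl get deployments,replicasets,pods -n <app-name>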
2 - Using Buildpacks
Drycc supports deploying applications using Cloud Native Buildpacks. Buildpacks transform deployed code into executable containers following CNB documentation.
Add SSH Key
For buildpack-based deployments via git push
, Drycc Workflow authenticates users using SSH keys. Each user must upload a unique SSH key.
- Generate an SSH key by following these instructions.
- Upload your SSH key using
drycc keys add
:
$ drycc keys add ~/.ssh/id_drycc.pub
Uploading id_drycc.pub to drycc... done
For more information about managing SSH keys, see this guide.
Prepare an Application
Clone this example application to explore the buildpack workflow if you don’t have an existing application:
$ git clone https://github.com/drycc/example-go.git
$ cd example-go
Create an Application
Create an application on the Controller:
$ drycc create
Creating application... done, created skiing-keypunch
Git remote drycc added
Push to Deploy
Deploy your application using git push drycc master
:
$ git push drycc master
Counting objects: 75, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (48/48), done.
Writing objects: 100% (75/75), 18.28 KiB | 0 bytes/s, done.
Total 75 (delta 30), reused 58 (delta 22)
remote: --->
Starting build... but first, coffee!
---> Waiting podman running.
---> Process podman started.
---> Waiting caddy running.
---> Process caddy started.
---> Building pack
---> Using builder registry.drycc.cc/drycc/buildpacks:bookworm
Builder 'registry.drycc.cc/drycc/buildpacks:bookworm' is trusted
Pulling image 'registry.drycc.cc/drycc/buildpacks:bookworm'
Resolving "drycc/buildpacks" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull registry.drycc.cc/drycc/buildpacks:bookworm...
Getting image source signatures
...
---> Skip generate base layer
---> Python Buildpack
---> Downloading and extracting Python 3.10.0
---> Installing requirements with pip
Collecting Django==3.2.8
Downloading Django-3.2.8-py3-none-any.whl (7.9 MB)
Collecting gunicorn==20.1.0
Downloading gunicorn-20.1.0-py3-none-any.whl (79 kB)
Collecting sqlparse>=0.2.2
Downloading sqlparse-0.4.2-py3-none-any.whl (42 kB)
Collecting pytz
Downloading pytz-2021.3-py2.py3-none-any.whl (503 kB)
Collecting asgiref<4,>=3.3.2
Downloading asgiref-3.4.1-py3-none-any.whl (25 kB)
Requirement already satisfied: setuptools>=3.0 in /layers/drycc_python/python/lib/python3.10/site-packages (from gunicorn==20.1.0->-r requirements.txt (line 2)) (57.5.0)
Installing collected packages: sqlparse, pytz, asgiref, gunicorn, Django
Successfully installed Django-3.2.8 asgiref-3.4.1 gunicorn-20.1.0 pytz-2021.3 sqlparse-0.4.2
---> Generate Launcher
...
Build complete.
Launching App...
...
Done, skiing-keypunch:v2 deployed to Workflow
Use 'drycc open' to view this application in your browser
To learn more, use 'drycc help' or visit https://www.drycc.cc
To ssh://git@drycc.staging-2.drycc.cc:2222/skiing-keypunch.git
* [new branch] master -> master
$ curl -s http://skiing-keypunch.example.com
Powered by Drycc
Release v2 on skiing-keypunch-v2-web-02zb9
Drycc automatically detects buildpack applications and scales the web
process type to 1 on first deployment.
Scale your application by adjusting the number of containers. For example, use drycc scale web=3
to run 3 web containers.
Included Buildpacks
Drycc includes these buildpacks for convenience:
- Go Buildpack
- Java Buildpack
- [Node.js Buildpack][]
- PHP Buildpack
- Python Buildpack
- Ruby Buildpack
- Rust Buildpack
Drycc runs the bin/detect
script from each buildpack to match your code.
Note
The [Scala Buildpack][] requires at least 512MB of free memory for the Scala Build Tool.

Using a Custom Buildpack
Use a custom buildpack by creating a .pack-builder
file in your project root:
$ tee .pack-builder << EOF
registry.drycc.cc/drycc/buildpacks:bookworm
EOF
The custom buildpack will be used on your next git push
.
Using Private Repositories
Pull code from private repositories by setting the SSH_KEY
environment variable to a private key with access. Use either a file path or raw key material:
$ drycc config set SSH_KEY=/home/user/.ssh/id_rsa
$ drycc config set SSH_KEY="""-----BEGIN RSA PRIVATE KEY-----
(...)
-----END RSA PRIVATE KEY-----"""
For example, use a custom buildpack from a private GitHub URL by adding an SSH public key to your GitHub settings, then set SSH_KEY
to the corresponding private key and configure .pack-builder
:
$ tee .pack-builder << EOF
registry.drycc.cc/drycc/buildpacks:bookworm
EOF
$ git add .pack-builder
$ git commit -m "chore(buildpack): modify the pack_builder"
$ git push drycc master
Builder Selection
Drycc selects the build method following these rules:
- Uses container if a Dockerfile exists
- Uses buildpack if a Procfile exists
- Defaults to container if both exist
- Override with DRYCC_STACK=container or DRYCC_STACK=buildpack (see the sketch below)
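The override is an application setting; the following sketch assumes it is set with drycc config set like the other settings above:
$ drycc config set DRYCC_STACK=buildpack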
3 - Using Dockerfiles
Drycc supports deploying applications using Dockerfiles. A Dockerfile automates the process of building a [Container Image][] that defines your application’s runtime environment. While Dockerfiles offer powerful customization, they require careful configuration to work with Drycc.
Add SSH Key
For Dockerfile-based deployments via git push
, Drycc Workflow authenticates users using SSH keys. Each user must upload a unique SSH key to the platform.
- Generate an SSH key by following these instructions.
- Upload your SSH key using
drycc keys add
:
$ drycc keys add ~/.ssh/id_drycc.pub
Uploading id_drycc.pub to drycc... done
For more information about managing SSH keys, see this guide.
Prepare an Application
If you don’t have an existing application, clone this example application to explore the Dockerfile workflow:
$ git clone https://github.com/drycc/helloworld.git
$ cd helloworld
Dockerfile Requirements
Your Dockerfile must meet these requirements for successful deployment:
- Use the EXPOSE directive to expose exactly one port for HTTP traffic.
- Ensure your application listens for HTTP connections on that port.
- Define the default process using the CMD directive.
- Include bash in your container image.
Note
If you use a private registry (such as GCR or others), set a $PORT environment variable that matches your EXPOSEd port. For example: drycc config set PORT=5000. See Configuring Registry for details.
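A minimal Dockerfile meeting these requirements might look like the following sketch (the base image and command are placeholders; the base image ships with bash):
$ tee Dockerfile << EOF
FROM python:3.12-slim
COPY . /app
WORKDIR /app
CMD python -m http.server 5000
EXPOSE 5000
EOF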
Create an Application
Create an application on the Controller:
$ drycc create
Creating application... done, created folksy-offshoot
Git remote drycc added
Push to Deploy
Deploy your application using git push drycc master
:
$ git push drycc master
Counting objects: 13, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (13/13), done.
Writing objects: 100% (13/13), 1.99 KiB | 0 bytes/s, done.
Total 13 (delta 2), reused 0 (delta 0)
-----> Building Docker image
Uploading context 4.096 kB
Uploading context
Step 0 : FROM drycc/base:latest
---> 60024338bc63
Step 1 : RUN wget -O /tmp/go1.2.1.linux-amd64.tar.gz -q https://go.googlecode.com/files/go1.2.1.linux-amd64.tar.gz
---> Using cache
---> cf9ef8c5caa7
Step 2 : RUN tar -C /usr/local -xzf /tmp/go1.2.1.linux-amd64.tar.gz
---> Using cache
---> 515b1faf3bd8
Step 3 : RUN mkdir -p /go
---> Using cache
---> ebf4927a00e9
Step 4 : ENV GOPATH /go
---> Using cache
---> c6a276eded37
Step 5 : ENV PATH /usr/local/go/bin:/go/bin:$PATH
---> Using cache
---> 2ba6f6c9f108
Step 6 : ADD . /go/src/github.com/drycc/helloworld
---> 94ab7f4b977b
Removing intermediate container 171b7d9fdb34
Step 7 : RUN cd /go/src/github.com/drycc/helloworld && go install -v .
---> Running in 0c8fbb2d2812
github.com/drycc/helloworld
---> 13b5af931393
Removing intermediate container 0c8fbb2d2812
Step 8 : ENV PORT 80
---> Running in 9b07da36a272
---> 2dce83167874
Removing intermediate container 9b07da36a272
Step 9 : CMD ["/go/bin/helloworld"]
---> Running in f7b215199940
---> b1e55ce5195a
Removing intermediate container f7b215199940
Step 10 : EXPOSE 80
---> Running in 7eb8ec45dcb0
---> ea1a8cc93ca3
Removing intermediate container 7eb8ec45dcb0
Successfully built ea1a8cc93ca3
-----> Pushing image to private registry
Launching... done, v2
-----> folksy-offshoot deployed to Drycc
http://folksy-offshoot.local3.dryccapp.com
To learn more, use `drycc help` or visit https://www.drycc.cc
To ssh://git@local3.dryccapp.com:2222/folksy-offshoot.git
* [new branch] master -> master
$ curl -s http://folksy-offshoot.local3.dryccapp.com
Welcome to Drycc!
See the documentation at http://docs.drycc.cc/ for more information.
Drycc automatically detects Dockerfile applications and scales the web
process type to 1 on first deployment.
Scale your application by adjusting the number of containers. For example, use drycc scale web=3
to run 3 web containers.
Container Build Arguments
Starting with Workflow v2.13.0, you can inject application configuration into your container image using Docker build arguments. Enable this feature by setting an environment variable:
$ drycc config set DRYCC_DOCKER_BUILD_ARGS_ENABLED=1
Once enabled, all environment variables set with drycc config set
become available in your Dockerfile. For example, after running drycc config set POWERED_BY=Workflow
, you can use this build argument in your Dockerfile:
ARG POWERED_BY
RUN echo "Powered by $POWERED_BY" > /etc/motd
4 - Using Docker Images
Drycc supports deploying applications using existing [Docker Images][]. This approach integrates well with Docker-based CI/CD pipelines.
Prepare an Application
Clone this example application to get started:
$ git clone https://github.com/drycc/example-dockerfile-http.git
$ cd example-dockerfile-http
Build the image and push it to DockerHub using your local Docker client:
$ docker build -t <username>/example-dockerfile-http .
$ docker push <username>/example-dockerfile-http
Docker Image Requirements
Container images must meet these requirements for successful deployment:
- Use the EXPOSE directive to expose exactly one port for HTTP traffic.
- Ensure your application listens for HTTP connections on that port.
- Define the default process using the CMD directive.
- Include bash in the container image.
Note
For private registries (such as GCR), set a $PORT environment variable that matches your EXPOSEd port. For example: drycc config set PORT=5000. See Configuring Registry for details.
Create an Application
Create an application on the controller:
$ mkdir -p /tmp/example-dockerfile-http && cd /tmp/example-dockerfile-http
$ drycc create example-dockerfile-http --no-remote
Creating application... done, created example-dockerfile-http
Note
For commands other than drycc create, the client uses the current directory name as the app name if not specified with --app.
Deploy the Application
Deploy from DockerHub or a public registry using drycc pull
:
$ drycc pull <username>/example-dockerfile-http:latest
Creating build... done, v2
$ curl -s http://example-dockerfile-http.local3.dryccapp.com
Powered by Drycc
Drycc automatically detects container images and scales the web
process type to 1 on first deployment.
Scale your application by adjusting the number of containers. For example, use drycc scale web=3
to run 3 web containers.
Private Registry
Deploy images from private registries by attaching credentials using drycc registry
. Use the same credentials as docker login
.
Follow these steps for private Docker images:
- Obtain registry credentials (such as Quay.io Robot Account or GCR.io Long Lived Token)
- Run drycc registry set <username> <password> -a <application-name>
- Use drycc pull normally against private registry images
For GCR.io Long Lived Token, compact the JSON blob using jq and use _json_key
as the username:
drycc registry set _json_key "$(cat google_cloud_cred.json | jq -c .)"
When using private registries, Kubernetes manages image pulls directly. This improves security and speed, but requires setting the application port manually with drycc config set PORT=80
before configuring registry credentials.
Note
[GCR.io][] and [ECR][] with short-lived authentication tokens are not currently supported.

5 - Managing App Processes
Drycc Workflow manages your application as a set of processes that can be named, scaled, and configured according to their role. This gives you the flexibility to easily manage different facets of your application. For example, you may have web-facing processes that handle HTTP traffic, background worker processes that do async work, and a helper process that streams from an API.
By using a Procfile, either checked into your application or provided via the CLI, you can specify the name of the process type and the application command that should run. To spawn other process types, use drycc scale <ptype>=<n>
to scale those types accordingly.
Default Process Types
In the absence of a Procfile, a single, default process type is assumed for each application.
Applications built using Buildpacks via git push
implicitly receive a web
process type, which starts
the application server. Rails 4, for example, has the following process type:
web: bundle exec rails server -p $PORT
All applications utilizing Dockerfiles have an implied web
process type, which runs the
Dockerfile’s CMD
directive unmodified:
$ cat Dockerfile
FROM centos:latest
COPY . /app
WORKDIR /app
CMD python -m SimpleHTTPServer 5000
EXPOSE 5000
For the above Dockerfile-based application, the web
process type would run the Container CMD
of python -m SimpleHTTPServer 5000
.
For applications using remote container images, a web process type is also implied; it runs the CMD specified in the container image.
Note
The web process type is special as it is the default process type that will receive HTTP traffic from Workflow’s routers. Other process types can be named arbitrarily.
Declaring Process Types
If you use buildpack or Dockerfile builds and want to override or specify additional process types, simply include a file named Procfile
in the root of your application’s source tree.
The format of a Procfile
is one process type per line, with each line containing the command to invoke:
<process type>: <command>
The syntax is defined as:
<process type>
– a lowercase alphanumeric string, is a name for your command, such as web, worker, urgentworker, clock, etc.<command>
– a command line to launch the process, such asrake jobs:work
.
This example Procfile specifies two process types, web and sleeper. The web process launches a web server on port 5000; sleeper is a simple process that sleeps for 900 seconds and then exits.
$ cat Procfile
web: bundle exec ruby web.rb -p ${PORT:-5000}
sleeper: sleep 900
If you are using remote container images, you may define process types by either running drycc pull
with a Procfile
in your working directory, or by passing a stringified Procfile to the --procfile
CLI option.
For example, passing process types inline:
$ drycc pull drycc/example-go:latest --procfile="web: /app/bin/boot"
Read a Procfile
in another directory:
$ drycc pull drycc/example-go:latest --procfile="$(cat deploy/Procfile)"
Or via a Procfile located in your current, working directory:
$ cat Procfile
web: /bin/boot
sleeper: echo "sleeping"; sleep 900
$ drycc pull -a steely-mainsail drycc/example-go
Creating build... done
$ drycc scale sleeper=1 -a steely-mainsail
Scaling processes... but first, coffee!
done in 0s
NAME RELEASE STATE PTYPE READY RESTARTS STARTED
steely-mainsail-sleeper-76c45b967c-4qm6w v3 up sleeper 1/1 0 2023-12-08T02:25:00UTC
steely-mainsail-web-c4f44c4b4-7p7dh v3 up web 1/1 0 2023-12-08T02:25:00UTC
Note
Only the web process type will be scaled to 1 automatically. If you have additional process types, remember to scale their process counts after creation.
To remove a process type simply scale it to 0:
$ drycc scale sleeper=0 -a steely-mainsail
Scaling processes... but first, coffee!
done in 3s
NAME RELEASE STATE PTYPE READY RESTARTS STARTED
steely-mainsail-web-c4f44c4b4-7p7dh v3 up web 1/1 0 2023-12-08T02:25:00UTC
Scaling Processes
Applications deployed on Drycc Workflow scale out via the process model. Use drycc scale
to control the number of containers that power your app.
$ drycc scale web=5 -a iciest-waggoner
Scaling processes... but first, coffee!
done in 3s
NAME RELEASE STATE PTYPE READY RESTARTS STARTED
iciest-waggoner-web-c4f44c4b4-7p7dh v3 up web 1/1 0 2023-12-08T02:25:00UTC
iciest-waggoner-web-c4f44c4b4-8p7dh v3 up web 1/1 0 2023-12-08T02:25:00UTC
iciest-waggoner-web-c4f44c4b4-9p7dh v3 up web 1/1 0 2023-12-08T02:25:00UTC
iciest-waggoner-web-c4f44c4b4-1p7dh v3 up web 1/1 0 2023-12-08T02:25:00UTC
iciest-waggoner-web-c4f44c4b4-2p7dh v3 up web 1/1 0 2023-12-08T02:25:00UTC
If you have multiple process types for your application you may scale the process count for each type separately. For example, this allows you to manage web processes independently from background workers. For more information on process types see our documentation for Managing App Processes.
In this example, we are scaling the process type web
to 5 but leaving the process type background
with one worker.
$ drycc scale web=5
Scaling processes... but first, coffee!
done in 4s
NAME RELEASE STATE PTYPE READY RESTARTS STARTED
scenic-icehouse-web-3291896318-7lord v3 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-web-3291896318-jn957 v3 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-web-3291896318-rsekj v3 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-web-3291896318-vwhnh v3 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-web-3291896318-vokg7 v3 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-background-3291896318-yf8kh v3 up background 1/1 0 2023-12-08T02:25:00UTC
Scaling a process down, by reducing the process count, sends a TERM
signal to the processes, followed by a SIGKILL
if they have not exited within 30 seconds. Depending on your application, scaling down may interrupt long-running HTTP client connections.
For example, scaling from 5 processes to 3:
$ drycc scale web=3
Scaling processes... but first, coffee!
done in 1s
NAME RELEASE STATE PTYPE READY RESTARTS STARTED
scenic-icehouse-web-3291896318-vwhnh v2 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-web-3291896318-vokg7 v2 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-web-3291896318-vokg9 v2 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-background-3291896318-yf8kh v2 up background 1/1 0 2023-12-08T02:25:00UTC
Autoscale
Autoscale lets you set a minimum and maximum number of pods per process type. This is accomplished by specifying a target CPU usage across all available pods.
This feature is built on top of Horizontal Pod Autoscaling in Kubernetes or HPA for short.
Note
This is an alpha feature. It is recommended to be on the latest Kubernetes when using this feature.

$ drycc autoscale set web --min=3 --max=8 --cpu-percent=75
Applying autoscale settings for process type web on scenic-icehouse... done
And then review the scaling rule that was created for web
:
$ drycc autoscale list
PTYPE PERCENT MIN MAX
web 75 3 8
Remove scaling rule:
$ drycc autoscale unset web
Removing autoscale for process type web on scenic-icehouse... done
For autoscaling to work CPU requests have to be specified on each application pod (can be done via drycc limits --cpu
). This allows the autoscale policies to do the appropriate calculations and make decisions on when to scale up and down.
Scale up can only happen if there was no rescaling within the last 3 minutes. Scale down will wait for 5 minutes from the last rescaling. That information and more can be found at HPA algorithm page.
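One way to provide CPU requests is through a limits plan, as described in Managing Resource Limits; for example:
$ drycc limits set web=std1.large.c1m1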
Fetch Container Logs
List the containers:
$ drycc ps
NAME RELEASE STATE PTYPE READY RESTARTS STARTED
python-getting-started-web-69b7d4bfdc-kl4xf v2 up web 1/1 0 2023-12-08T02:25:00UTC
=== python-getting-started Processes
--- web:
python-getting-started-web-69b7d4bfdc-kl4xf up (v2)
Fetch the container logs:
$ drycc ps logs -f python-getting-started-web-69b7d4bfdc-kl4xf
[2024-05-24 07:14:39 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2024-05-24 07:14:39 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
[2024-05-24 07:14:39 +0000] [1] [INFO] Using worker: gevent
[2024-05-24 07:14:39 +0000] [8] [INFO] Booting worker with pid: 8
[2024-05-24 07:14:39 +0000] [9] [INFO] Booting worker with pid: 9
[2024-05-24 07:14:39 +0000] [10] [INFO] Booting worker with pid: 10
[2024-05-24 07:14:39 +0000] [11] [INFO] Booting worker with pid: 11
Get Container Information
List the containers:
$ drycc ps describe python-getting-started-web-69b7d4bfdc-kl4xf
Container: python-getting-started-web
Image: drycc/python-getting-started:latest
Command:
Args:
- gunicorn
- -c
- gunicorn_config.py
- helloworld.wsgi:application
State: running
startedAt: "2024-05-24T07:14:39Z"
Ready: true
Restart Count: 0
Delete a Container
Delete a container. Because the number of replicas is fixed, a new container will be launched to maintain the desired count.
$ drycc ps delete python-getting-started-web-69b7d4bfdc-kl4xf
Deleting python-getting-started-web-69b7d4bfdc-kl4xf from python-getting-started... done
Get a Shell to a Running Container
Verify that the container is running:
$ drycc ps
NAME RELEASE STATE PTYPE READY RESTARTS STARTED
python-getting-started-web-69b7d4bfdc-kl4xf v2 up web 1/1 0 2023-12-08T02:25:00UTC
=== python-getting-started Processes
--- web:
python-getting-started-web-69b7d4bfdc-kl4xf up (v2)
Get a shell to the running container:
$ drycc ps exec python-getting-started-web-69b7d4bfdc-kl4xf -it -- bash
In your shell, list the root directory:
# Run this inside the container
ls /
Running individual commands in a container:
$ drycc ps exec python-getting-started-web-69b7d4bfdc-kl4xf -- date
Use "drycc ps --help" for a full list of ps command-line options.
Restarting Application Processes
If you need to restart an application process, you may use drycc ps restart
. Behind the scenes, Drycc Workflow instructs Kubernetes to terminate the old process and launch a new one in its place.
$ drycc ps
NAME RELEASE STATE PTYPE READY RESTARTS STARTED
scenic-icehouse-web-3291896318-vokg7 v2 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-web-3291896318-rsekj v2 up web 1/1 0 2023-12-08T02:50:21UTC
scenic-icehouse-web-3291896318-vokg7 v2 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-background-3291896318-yf8kh v2 up background 1/1 0 2023-12-08T02:25:00UTC
$ drycc ps restart scenic-icehouse-background
NAME RELEASE STATE PTYPE READY RESTARTS STARTED
scenic-icehouse-web-3291896318-vokg7 v2 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-web-3291896318-rsekj v2 up web 1/1 0 2023-12-08T02:50:21UTC
scenic-icehouse-web-3291896318-vokg7 v2 up web 1/1 0 2023-12-08T02:25:00UTC
scenic-icehouse-background-3291896318-yd87g v2 starting background 1/1 0 2023-12-08T02:25:00UTC
Notice that the process name has changed from scenic-icehouse-background-3291896318-yf8kh
to scenic-icehouse-background-3291896318-yd87g
. In a multi-node Kubernetes cluster, this may also have the effect of scheduling the pod to a new node.
Use "drycc ps --help" for a full list of ps command-line options.
List Application Process Types
$ drycc pts
NAME RELEASE READY UP-TO-DATE AVAILABLE GARBAGE STARTED
web v2 3/3 1 1 true 2023-12-08T02:25:00UTC
background v2 1/1 1 1 false 2023-12-08T02:25:00UTC
Clean Process Types
Clean up non-existent process types; it is usually executed automatically when autodeploy is set to true
.
$ drycc pts clean background
Get Deployment Info of Application Process Type
$ drycc pts describe web
Container: python-getting-started-web
Image: drycc/python-getting-started:latest
Command:
Args:
- gunicorn
- -c
- gunicorn_config.py
- helloworld.wsgi:application
Limits:
cpu 1
ephemeral-storage 2Gi
memory 1Gi
Liveness: http-get headers=[] path=/geo/ port=8000 delay=120s timeout=10s period=20s #success=1 #failure=3
Readiness: http-get headers=[] path=/geo/ port=8000 delay=120s timeout=10s period=20s #success=1 #failure=3
6 - Configuring an Application
Drycc applications [store configuration in environment variables][] to separate config from code and simplify environment-specific settings.
Setting Environment Variables
Use drycc config
to manage environment variables for deployed applications.
$ drycc help config
Manage environment variables that define app config
Usage:
drycc config [flags]
drycc config [command]
Available Commands:
info An app config info
set Set environment variables for an app
unset Unset environment variables for an app
pull Pull environment variables to the path
push Push environment variables from the path
attach Selects environment groups to attach an app ptype
detach Selects environment groups to detach an app ptype
Flags:
-a, --app string The uniquely identifiable name for the application
-g, --group string The group for which the config needs to be listed
-p, --ptype string The ptype for which the config needs to be listed
-v, --version int The version for which the config needs to be listed
Global Flags:
-c, --config string Path to configuration file. (default "~/.drycc/client.json")
-h, --help Display help information
Use "drycc config [command] --help" for more information about a command.
Configuration changes trigger automatic deployment of a new release.
Set multiple environment variables in one command or use drycc config push
with a local .env file:
$ drycc config set FOO=1 BAR=baz && drycc config pull
$ cat .env
FOO=1
BAR=baz
$ echo "TIDE=high" >> .env
$ drycc config push
Creating config... done, v4
=== yuppie-earthman
DRYCC_APP: yuppie-earthman
FOO: 1
BAR: baz
TIDE: high
Set environment variables for specific process types:
$ drycc config set FOO=1 BAR=baz --ptype=web
Or manage environment variable groups:
$ drycc config set FOO1=1 BAR1=baz --group=web.env
Attach the group to the web process type:
$ drycc config attach web web.env
Detach the group:
$ drycc config detach web web.env
Attach to Backing Services
Drycc treats backing services like databases, caches, and queues as attached resources. Configure connections using environment variables.
For example, attach an application to an external PostgreSQL database:
$ drycc config set DATABASE_URL=postgres://user:pass@example.com:5432/db
=== peachy-waxworks
DATABASE_URL: postgres://user:pass@example.com:5432/db
Remove attachments using drycc config unset
.
Buildpacks Cache
Applications using [Imagebuilder][] reuse the latest image data by default. This speeds up deployments for applications that fetch third-party libraries. Buildpacks must implement caching by writing to the cache directory.
Disable and Re-enable Cache
Disable caching by setting DRYCC_DISABLE_CACHE=1
. Drycc clears cache files when disabled. Re-enable by unsetting the variable.
Clear Cache
Clear the cache using this procedure:
$ drycc config set DRYCC_DISABLE_CACHE=1
$ git commit --allow-empty -m "Clearing Drycc cache"
$ git push drycc # (use your remote name if different)
$ drycc config unset DRYCC_DISABLE_CACHE
Custom Health Checks
By default, Workflow verifies that applications start in their containers. Add health checks by configuring probes for the application. Health checks use Kubernetes Container Probes with three types: startupProbe
, livenessProbe
, and readinessProbe
. Each probe supports httpGet
, exec
, or tcpSocket
checks.
Probe Types
- startupProbe: Indicates whether the application has started. Disables other probes until successful. Failure triggers the restart policy.
- livenessProbe: Useful for long-running applications that may break and need restarting.
- readinessProbe: Useful when containers temporarily cannot serve requests but will recover. Failed containers stop receiving traffic but don’t restart.
Check Types
- httpGet: Performs an HTTP GET on the container. Response codes 200-399 pass. Specify a port number.
- exec: Runs a command in the container. Exit code 0 passes, non-zero fails. Provide the command arguments.
- tcpSocket: Attempts to open a socket connection. The container is healthy if the connection succeeds. Specify a port number.
Configure health checks per process type using drycc healthchecks set
. Defaults to web
process type if not specified.
Configure an HTTP GET liveness probe:
$ drycc healthchecks set livenessProbe httpGet 80 --ptype web
Applying livenessProbe healthcheck... done
App: peachy-waxworks
UUID: afd84067-29e9-4a5f-9f3a-60d91e938812
Owner: dev
Created: 2023-12-08T10:25:00Z
Updated: 2023-12-08T10:25:00Z
Healthchecks:
livenessProbe web http-get headers=[] path=/ port=80 delay=50s timeout=50s period=10s #success=1 #failure=3
Include specific headers or paths:
$ drycc healthchecks set livenessProbe httpGet 80 \
--path /welcome/index.html \
--headers "X-Client-Version:v1.0,X-Foo:bar"
Applying livenessProbe healthcheck... done
App: peachy-waxworks
UUID: afd84067-29e9-4a5f-9f3a-60d91e938812
Owner: dev
Created: 2023-12-08T10:25:00Z
Updated: 2023-12-08T10:25:00Z
Healthchecks:
livenessProbe web http-get headers=[X-Client-Version=v1.0] path=/welcome/index.html port=80 delay=50s timeout=50s period=10s #success=1 #failure=3
Configure an exec readiness probe:
$ drycc healthchecks set readinessProbe exec -- /bin/echo -n hello --ptype web
Applying readinessProbe healthcheck... done
App: peachy-waxworks
UUID: afd84067-29e9-4a5f-9f3a-60d91e938812
Owner: dev
Created: 2023-12-08T10:25:00Z
Updated: 2023-12-08T10:25:00Z
Healthchecks:
readinessProbe web exec /bin/echo -n hello delay=50s timeout=50s period=10s #success=1 #failure=3
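A tcpSocket check is configured the same way; the following is a sketch assuming the port argument follows the same pattern as httpGet:
$ drycc healthchecks set startupProbe tcpSocket 80 --ptype web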
Overwrite probes by running drycc healthchecks set
again. Health checks modify deployment behavior - Workflow waits for checks to pass before proceeding to the next pod.
Autodeploy
By default, configuration, limits, and health check changes trigger automatic deployment. Disable autodeploy to prevent automatic deployments:
$ drycc autodeploy disable
Re-enable autodeploy:
$ drycc autodeploy enable
Manually deploy all process types:
$ drycc releases deploy
Deploy specific process types with optional force flag:
$ drycc releases deploy web --force
Autorollback
By default, deployment failures automatically rollback to the previous successful version. Disable autorollback:
$ drycc autorollback disable
Re-enable autorollback:
$ drycc autorollback enable
Isolate Applications
Isolate applications to specific nodes using drycc tags
.
Note
Configure your cluster with proper node labels before using tags. Commands will fail without labels. Learn more: [“Assigning Pods to Nodes”][pods-to-nodes].

Once nodes have appropriate labels, restrict application process types to those nodes:
$ drycc tags set web environ=prod
Applying tags... done, v4
environ prod
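If you administer the cluster yourself, node labels can be applied with kubectl (the node name below is a placeholder):
$ kubectl label nodes <node-name> environ=prod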
7 - Managing App Metrics
Metrics provide basic monitoring capabilities for pods, offering various monitoring indicators such as CPU, memory, disk, and network usage to meet basic monitoring requirements for pod resources.
Create an Authentication Token
Create an authentication token using the Drycc client:
$ drycc tokens add prometheus --password admin --username admin
! WARNING: Make sure to copy your token now.
! You won't be able to see it again, please confirm whether to continue.
! To proceed, type "yes" !
> yes
UUID USERNAME TOKEN
58176cf1-37a8-4c52-9b27-4c7a47269dfb admin 1F2c7A802aF640fd9F31dD846AdDf56BcMsay
Add Scrape Configurations for Prometheus
A valid example configuration file can be found in the Drycc documentation.
The global configuration specifies parameters that are valid in all other configuration contexts. They also serve as defaults for other configuration sections:
global:
scrape_interval: 60s
evaluation_interval: 60s
scrape_configs:
- job_name: 'drycc'
scheme: https
metrics_path: /v2/apps/<appname>/metrics
authorization:
type: Token
credentials: 1F2c7A802aF640fd9F31dD846AdDf56BcMsay
static_configs:
- targets: ['drycc.domain.com']
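Before wiring up Prometheus, you can verify the token and endpoint with curl; a sketch using the controller address and token from the configuration above:
$ curl -H "Authorization: Token 1F2c7A802aF640fd9F31dD846AdDf56BcMsay" https://drycc.domain.com/v2/apps/<appname>/metrics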
8 - Managing App Lifecycle
Track Application Changes
Drycc Workflow tracks all changes to your application. Application changes result from either new application code pushed to the platform (via git push drycc master
), or updates to application configuration (via drycc config:set KEY=VAL
).
Each time a build or configuration change is made to your application, a new release is created. These release numbers increase monotonically.
You can see a record of changes to your application using drycc releases
:
$ drycc releases
OWNER STATE VERSION CREATED SUMMARY
dev succeed v3 2023-12-04T10:17:46Z dev deleted PIP_INDEX_URL, DISABLE_COLLECTSTATIC
dev succeed v2 2023-12-01T10:20:22Z dev added IMAGE_PULL_POLICY, PIP_INDEX_URL, PORT, DISABLE_COLLEC[...]
dev succeed v1 2023-11-30T17:54:57Z dev created initial release
Rollback a Release
Drycc Workflow supports rolling back to previous releases. If buggy code or an errant configuration change is pushed to your application, you may rollback to a previously known good release.
Note
All rollbacks create a new, numbered release but reference the build/code and configuration from the desired rollback point.

In this example, the application is currently running release v3. Using drycc rollback v2 tells Workflow to deploy the build and configuration that was used for release v2. This creates a new release, v4, whose contents are the source and configuration from release v2:
$ drycc releases
OWNER STATE VERSION CREATED SUMMARY
dev succeed v3 2023-12-04T10:17:46Z dev deleted PIP_INDEX_URL, DISABLE_COLLECTSTATIC
dev succeed v2 2023-12-01T10:20:22Z dev added IMAGE_PULL_POLICY, PIP_INDEX_URL, PORT, DISABLE_COLLEC[...]
dev succeed v1 2023-11-30T17:54:57Z dev created initial release
$ drycc rollback v2
Rolled back to v2
$ drycc releases
OWNER STATE VERSION CREATED SUMMARY
dev succeed v4 2023-12-04T10:20:46Z dev rolled back to v2
dev succeed v3 2023-12-04T10:17:46Z dev deleted PIP_INDEX_URL, DISABLE_COLLECTSTATIC
dev succeed v2 2023-12-01T10:20:22Z dev added IMAGE_PULL_POLICY, PIP_INDEX_URL, PORT, DISABLE_COLLEC[...]
dev succeed v1 2023-11-30T17:54:57Z dev created initial release
To rollback only the web process type:
$ drycc rollback v3 web
Rolled back to v3
$ drycc releases
OWNER STATE VERSION CREATED SUMMARY
dev succeed v5 2023-12-04T10:23:49Z dev rolled back to v3
dev succeed v4 2023-12-04T10:20:46Z dev rolled back to v2
dev succeed v3 2023-12-04T10:17:46Z dev deleted PIP_INDEX_URL, DISABLE_COLLECTSTATIC
dev succeed v2 2023-12-01T10:20:22Z dev added IMAGE_PULL_POLICY, PIP_INDEX_URL, PORT, DISABLE_COLLEC[...]
dev succeed v1 2023-11-30T17:54:57Z dev created initial release
Run One-off Administration Tasks
Drycc applications use one-off processes for admin tasks like database migrations and other commands that must run against the live application.
Use drycc run
to execute commands on the deployed application:
$ drycc run -- 'ls -l'
Running `ls -l`...
total 28
-rw-r--r-- 1 root root 553 Dec 2 23:59 LICENSE
-rw-r--r-- 1 root root 60 Dec 2 23:59 Procfile
-rw-r--r-- 1 root root 33 Dec 2 23:59 README.md
-rw-r--r-- 1 root root 1622 Dec 2 23:59 pom.xml
-rw-r--r-- 1 root root 25 Dec 2 23:59 system.properties
drwxr-xr-x 3 root root 4096 Dec 2 23:59 src
drwxr-xr-x 6 root root 4096 Dec 3 00:00 target
Share an Application
Use drycc perms add
to allow another Drycc user to collaborate on your application:
$ drycc perms add otheruser view,change,delete
Adding user otheruser as a collaborator for view,change,delete peachy-waxwork... done
Use drycc perms
to see who an application is currently shared with, and drycc perms remove
to remove a collaborator.
Note
Collaborators can do anything with an application that its owner can do, except delete the application.

When working with an application that has been shared with you, clone the original repository and add Drycc’s git remote entry before attempting to git push any changes to Drycc:
$ git clone https://github.com/drycc/example-java-jetty.git
Cloning into 'example-java-jetty'... done
$ cd example-java-jetty
$ git remote add -f drycc ssh://git@local3.dryccapp.com:2222/peachy-waxworks.git
Updating drycc
From drycc-controller.local:peachy-waxworks
* [new branch] master -> drycc/master
Application Troubleshooting
Applications deployed on Drycc Workflow treat logs as event streams. Drycc Workflow aggregates stdout
and stderr
from every Container making it easy to troubleshoot problems with your application.
Use drycc grafana
to view the log output from your deployed application:
Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.5]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null}
Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.8]: INFO:oejs.Server:jetty-7.6.0.v20120127
Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.5]: INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:10005
Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.6]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null}
Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.7]: INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null}
Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.6]: INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:10006
Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.7]: INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:10007
Dec 3 00:30:31 ip-10-250-15-201 peachy-waxworks[web.8]: INFO:oejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:10008
9 - Managing App Volumes
You can use the commands below to create volumes and mount them to applications. Drycc supports ReadWriteMany access mode, so before deploying Drycc, you need to have a StorageClass ready that can support ReadWriteMany. When deploying Drycc, set controller.appStorageClass
to this StorageClass.
Use drycc volumes
to mount a volume for a deployed application’s processes.
$ drycc help volumes
Valid commands for volumes:
add create a volume for the application
expand expand a volume for the application
list list volumes in the application
info print information about a volume
remove delete a volume from the application
client the client used to manage volume files
mount mount a volume to process of the application
unmount unmount a volume from process of the application
Use 'drycc help [command]' to learn more.
Create a Volume for the Application
You can create a CSI volume with the drycc volumes add
command:
$ drycc volumes add myvolume 200M
Creating myvolume to scenic-icehouse... done
Or use an existing NFS server:
$ drycc volumes add mynfsvolume 200M -t nfs --nfs-server=nfs.drycc.com --nfs-path=/
Creating mynfsvolume to scenic-icehouse... done
Or use an existing OSS:
$ drycc volumes add myossvolume 200M -t oss --oss-server=oss.drycc.com --oss-bucket=vbucket --oss-access-key=ak --oss-secret-key=sk
Creating myossvolume to scenic-icehouse... done
List Volumes in the Application
After a volume is created, you can list the volumes in this application:
$ drycc volumes list
NAME OWNER TYPE PTYPE PATH SIZE
myvolume admin csi 200M
mynfsvolume admin nfs 200M
myossvolume admin oss 200M
Mount a Volume
The volume named “myvolume” is created. You can mount the volume to a process of the application using the drycc volumes mount
command. When a volume is mounted, a new release will be created and deployed automatically.
$ drycc volumes mount myvolume web=/data/web
Mounting volume... done
Use drycc volumes list
to show mount details:
$ drycc volumes list
NAME OWNER TYPE PTYPE PATH SIZE
myvolume admin csi web /data/web 200M
If you no longer need the volume, use drycc volumes unmount
to unmount the volume and then use drycc volumes remove
to delete the volume from the application. The volume must be unmounted before it can be deleted.
$ drycc volumes unmount myvolume web
Unmounting volume... done
$ drycc volumes remove myvolume
Deleting myvolume from scenic-icehouse... done
Use Volume Client to Manage Volume Files
Assuming the volume named “myvolume” is created and mounted.
Prepare a file named testfile:
$ echo "testtext" > testfile
Upload the file:
$ drycc volumes client cp testfile vol://myvolume/
[↑] testfile 100% [==================================================] (5/ 5 B, 355 B/s)
List files in myvolume:
$ drycc volumes client ls vol://myvolume/
[2024-07-22T15:32:28+08:00] 5 testfile
Delete testfile in myvolume:
$ drycc volumes client rm vol://myvolume/testfile
10 - Managing App Gateway
A Gateway describes how traffic can be translated to services within the cluster. It defines a request for a way to translate traffic from outside the cluster to Kubernetes services. For example, traffic sent to a Kubernetes service by a cloud load balancer, an in-cluster proxy, or an external hardware load balancer. While many use cases have client traffic originating “outside” the cluster, this is not a requirement.
Create a Gateway for an Application
A gateway exposes services externally; it provides an external IP address that connects routes and services. The gateway is created automatically after deployment.
List the gateways:
$ drycc gateways
NAME LISTENER PORT PROTOCOL ADDRESSES
python-getting-started tcp-80-0 80 HTTP 101.65.132.51
You can also add a port to this gateway or create a new one:
$ drycc gateways add python-getting-started --port=443 --protocol=HTTPS
Adding gateway python-getting-started to python-getting-started... done
Create a Service for an Application
A service exposes an application's processes internally; creating a service generates an internal DNS name that can reach a process type. A service for the web process type is created automatically; add services for other process types as needed.
List the services:
$ drycc services
PTYPE PORT PROTOCOL TARGET-PORT DOMAIN
web 80 TCP 8000 python-getting-started.python-getting-started.svc
Add a new service for a process type:
$ drycc services add sleep 8001:8001
Create a Route for an Application
A gateway may be attached to one or more routes, which direct a subset of traffic to a specific service. The web process type is already bound to the gateway and service.
List the routes:
$ drycc routes
NAME OWNER KIND GATEWAYS SERVICES
python-getting-started demo HTTPRoute ["python-getting-started:80"] ["python-getting-started:80"]
Create a new route and attach a gateway:
$ drycc routes add sleep HTTPRoute --ptype=sleep sleep:8001,100
$ drycc routes attach sleep --gateway=python-getting-started --port=80
11 - Managing App Resources
You can use the commands below to create resources and bind them to applications. These commands depend on the service-catalog.
Use drycc resources
to create and bind a resource for a deployed application.
$ drycc help resources
Manage resources for your applications
Usage:
drycc resources [flags]
drycc resources [command]
Available Commands:
services List all available resource services
plans List all available plans for a resource service
create Create a resource for the application
list List resources in the application
describe Get a resource's detail in the application
update Update a resource from the application
bind Bind a resource for an application
unbind Unbind a resource for an application
destroy Delete a resource from the application
Flags:
-a, --app string The uniquely identifiable name for the application
-l, --limit int The maximum number of results to display
Global Flags:
-c, --config string Path to configuration file. (default "~/.drycc/client.json")
-h, --help Display help information
-v, --version Display client version
Use "drycc resources [command] --help" for more information about a command.
List All Available Resource Services
You can list available resource services with the drycc resources services
command:
$ drycc resources services
ID NAME UPDATEABLE
15032a52-33c2-4b40-97aa-ceb972f51509 airflow true
b7cb26a4-b258-445c-860b-a664239a67f8 cloudbeaver true
9ce3c3ba-33b5-4e4e-a5e9-a338a83d5070 flink true
b80c51a1-957c-4d93-b3d5-efde84cd8031 fluentbit true
fff5b6c7-ed85-429b-8265-493e40cc53c7 grafana true
412e368f-bf78-4798-92cc-43343119a57d kafka true
ea2a9b87-fbc4-4e2a-adee-161c1f91d98d minio true
383f7316-84f3-4955-8491-1d4b02b749c8 mongodb true
fbee746b-f3a7-4bef-8b55-cbecfd4c8ac3 mysql-cluster true
5975094d-45cc-4e85-8573-f93937d026c7 opensearch true
1db95161-7193-4544-8c76-e5ad5f6c03f6 pmm true
5cfb0abf-276c-445b-9060-9aa964ede87d postgresql-cluster true
b8f70264-eafc-4b2f-848e-2ec0d059032b prometheus true
e1fd0d37-9046-4152-a29b-d155c5657c8b redis true
7d2b64c6-0b59-4f08-a2f5-7b17cea6e5ee redis-cluster true
2e6877df-86e7-4bcc-a869-2a9b6847a465 seaweedfs true
4aea5c0f-9495-420d-896a-ffc61a3eced5 spark true
b50db3b5-8d5f-4be9-b8bd-467ecd6cc11d zookeeper true
List All Available Plans for a Resource Service
You can list all available plans for a resource service with the drycc resources plans
command:
$ drycc resources plans redis
ID NAME DESCRIPTION
8d659058-a3b4-4058-b039-cc03a31b9442 standard-128 Redis standard-128 plan which limit resources memory size 128Mi.
36e3dbec-fc51-4f6b-9baa-e31e316858be standard-256 Redis standard-256 plan which limit resources memory size 256Mi.
560817c2-5aa1-41c4-9ee6-a77e3ee552d5 standard-512 Redis standard-512 plan which limit resources memory size 512Mi.
d544d989-9fb8-43e9-a74e-0840ce1b8f0f standard-1024 Redis standard-1024 plan which limit resources memory size 1Gi.
ad51b7bb-9b12-4ffd-8e49-010c0141b263 standard-2048 Redis standard-2048 plan which limit resources memory size 2Gi.
5097d76e-557c-453f-bdb1-54009e0df78d standard-4096 Redis standard-4096 plan which limit resources memory size 4Gi.
be3fa2d0-36d2-47c5-9561-9deffe5ba373 standard-8192 Redis standard-8192 plan which limit resources memory size 8Gi.
4ca812a8-d7c3-439f-96cd-26523e88400e standard-16384 Redis standard-16384 plan which limit resources memory size 16Gi.
b7f2a71f-0d97-48fd-8eed-aab24a7822f3 standard-32768 Redis standard-32768 plan which limit resources memory size 32Gi.
25c6b5d5-7505-47c8-95b1-dc9bdc698063 standard-65536 Redis standard-65536 plan which limit resources memory size 64Gi.
Create a Resource in an Application
You can create a resource with the drycc resources create
command:
$ drycc resources create redis redis standard-128
Creating redis to scenic-icehouse... done
After resources are created, you can list the resources in this application:
$ drycc resources list
UUID NAME OWNER PLAN UPDATED
07220e9e-d54d-4d74-a88c-f464aa374386 redis admin redis:standard-128 2024-05-08T01:01:00Z
Bind Resources
The resource named “redis” is created. You can bind the redis resource to the application using the drycc resources bind redis
command:
$ drycc resources bind redis
Binding resource... done
Describe Resources
Use drycc resources describe
to show the binding details. If the binding is successful, this command will show the connection information for the resource:
$ drycc resources describe redis
=== scenic-icehouse resource redis
plan: redis:standard-128
status: Ready
binding: Ready
REDISPORT: 6379
REDIS_PASSWORD: RzG87SJWG1
SENTINELHOST: 172.16.0.2
SENTINELPORT: 26379
Update Resources
You can use the drycc resources update
command to upgrade to a new plan. An example of how to upgrade the plan’s capacity to 128MB:
$ drycc resources update redis redis standard-128
Updating redis to scenic-icehouse... done
Remove a Resource
If you no longer need a resource, use drycc resources unbind
to unbind the resource and then use drycc resources destroy
to delete the resource from the application. The resource must be unbound before it can be deleted.
$ drycc resources unbind redis
Unbinding resource... done
$ drycc resources destroy redis
Deleting redis from scenic-icehouse... done
12 - Inter-app Communication
Multi-process applications often feature one public-facing process supported by background processes that handle scheduled tasks or queue processing. Implement this architecture on Drycc Workflow by enabling DNS-based communication between applications and hiding supporting processes from public access.
DNS Service Discovery
Drycc Workflow supports single applications composed of multiple processes. Each application communicates on a single port, so inter-app communication requires discovering the target application’s address and port.
All Workflow applications map to port 80 externally. The challenge lies in discovering IP addresses. Workflow creates a Kubernetes Service for each application, assigning a name and cluster-internal IP address.
The cluster’s DNS service automatically manages DNS records, mapping application names to IP addresses as services start and stop. Applications communicate by sending requests to the service domain name: app-name.app-namespace.
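For example, a public-facing web application can call a supporting application from inside the cluster by its service name. This is only a sketch: backend is a hypothetical application deployed in a namespace that is also named backend, and the service is assumed to listen on the default port 80:
$ curl http://backend.backend/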
13 - Managing Resource Limits
Managing Application Resource Limits
Drycc Workflow supports restricting memory and CPU shares for each process. Requests and limits set per process type are passed to Kubernetes as resource requests and limits: you guarantee a minimum amount of resources for a process (requests) while preventing it from using more than a specified maximum (limits).
By default, Kubernetes sets requests equal to limits if you don’t explicitly set a requests value. Keep in mind that 0 <= requests <= limits.
Setting Limits
If you set requests/limits that are out of range for your cluster, Kubernetes will be unable to schedule your application processes into the cluster!
$ drycc limits plans
ID SPEC CPU VCPUS MEMORY FEATURES
std1.large.c1m1 std1 Universal CPU 1 1 GiB Integrated GPU shared
std1.large.c1m2 std1 Universal CPU 1 2 GiB Integrated GPU shared
std1.large.c1m4 std1 Universal CPU 1 4 GiB Integrated GPU shared
std1.large.c1m8 std1 Universal CPU 1 8 GiB Integrated GPU shared
std1.large.c2m2 std1 Universal CPU 2 2 GiB Integrated GPU shared
std1.large.c2m4 std1 Universal CPU 2 4 GiB Integrated GPU shared
std1.large.c2m8 std1 Universal CPU 2 8 GiB Integrated GPU shared
std1.large.c2m16 std1 Universal CPU 2 16 GiB Integrated GPU shared
$ drycc limits set web=std1.large.c1m1
Applying limits... done
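If you want to confirm what Kubernetes actually received, you can inspect one of the application’s pods with kubectl. This is only a sketch: it assumes Workflow runs each application in a Kubernetes namespace named after the app (verify this on your cluster) and that you have kubectl access; <app-name> and <pod-name> are placeholders.
$ kubectl --namespace <app-name> get pods
$ kubectl --namespace <app-name> describe pod <pod-name>
The Requests and Limits sections of the describe output should match the plan you applied.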
14 - Domains and Routing
Add or remove custom domains for your application using drycc domains:
$ drycc domains add hello.bacongobbler.com --ptype=web
Adding hello.bacongobbler.com to finest-woodshed... done
After adding the domain, configure DNS by setting up a CNAME record from your custom domain to the Drycc domain:
$ dig hello.bacongobbler.com
[...]
;; ANSWER SECTION:
hello.bacongobbler.com. 1759 IN CNAME finest-woodshed.dryccapp.com.
finest-woodshed.dryccapp.com. 270 IN A 172.17.8.100
Note
Setting a CNAME for a root domain can cause issues. An @ record as a CNAME redirects all traffic to another domain, including mail and SOA records. We recommend using subdomains, but you can work around this by pointing the @ record to the load balancer’s IP address.
Manage Routing
Control application accessibility through the routing mesh using drycc routing:
Disable routing to make the application unreachable externally (but still accessible internally via Kubernetes Service):
$ drycc routing disable
Disabling routing for finest-woodshed... done
Re-enable routing to restore external access:
$ drycc routing enable
Enabling routing for finest-woodshed... done
15 - SSL Certificates
SSL is a cryptographic protocol that provides end-to-end encryption and integrity for all web requests. Applications that transmit sensitive data should enable SSL to ensure all information is transmitted securely.
To enable SSL on a custom domain, such as www.example.com, use the SSL certificate endpoint.
Note
The drycc certs command is only useful for custom domains. Default application domains are SSL-enabled by default and can be accessed using HTTPS, for example https://foo.dryccapp.com (provided that you have [installed your wildcard certificate][platform-ssl] on the routers or load balancer).
Overview
Due to the unique nature of SSL validation, provisioning SSL for your domain is a multi-step process that involves several third parties. You will need to:
- Purchase an SSL certificate from your SSL provider
- Upload the certificate to Drycc
Acquire an SSL Certificate
Purchasing an SSL certificate varies in cost and process depending on the vendor. RapidSSL offers a simple way to purchase a certificate and is a recommended solution. If you can use this provider, see buy an SSL certificate with RapidSSL for instructions.
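Most vendors will ask for a private key and a certificate signing request (CSR). One common way to generate both is with openssl; a minimal sketch for www.example.com:
$ openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr -subj "/CN=www.example.com"
Keep server.key private; server.csr is what you submit to the SSL provider.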
DNS and Domain Configuration
Once the SSL certificate is provisioned and confirmed, you must route requests for your domain through Drycc. Unless you’ve already done so, add the domain specified when generating the CSR to your application with:
$ drycc domains add www.example.com --ptype=web -a foo
Adding www.example.com to foo... done
Add a Certificate
Add your certificate, any intermediate certificates, and private key to the endpoint using the certs:add command.
$ drycc certs add example-com server.crt server.key -a foo
Adding SSL endpoint... done
www.example.com
Note
The certificate name can only contain lowercase letters (a-z), numbers (0-9), and hyphens.
The Drycc platform will examine the certificate and extract relevant information such as the Common Name, Subject Alternative Names (SAN), fingerprint, and more.
This allows for wildcard certificates and multiple domains in the SAN without uploading duplicates.
Add a Certificate Chain
Sometimes certificates (such as self-signed or inexpensive certificates) require additional certificates to establish the chain of trust. Bundle all certificates into one file with your site’s certificate first:
$ cat server.crt server.ca > server.bundle
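Before adding the bundle, you can optionally confirm its order and contents by printing each certificate it contains; a sketch using openssl:
$ openssl crl2pkcs7 -nocrl -certfile server.bundle | openssl pkcs7 -print_certs -noout
Your site’s certificate should appear first, followed by the intermediates.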
Then add them to Drycc using the certs add command:
$ drycc certs add example-com server.bundle server.key -a foo
Adding SSL endpoint... done
www.example.com
Attach SSL Certificate to a Domain
Certificates are not automatically connected to domains. You must manually attach a certificate to a domain:
$ drycc certs attach example-com example.com -a foo
Each certificate can be connected to multiple domains. There is no need to upload duplicates.
To remove an association:
$ drycc certs detach example-com example.com -a foo
Certificate Overview
You can verify the details of your domain’s SSL configuration with drycc certs:
$ drycc certs
NAME COMMON-NAME EXPIRES SAN DOMAINS
example-com example.com 14 Jan 2017 blog.example.com example.com
Or view detailed information for each certificate:
$ drycc certs info example-com -a foo
=== example-com Certificate
Common Name(s): example.com
Expires At: 2017-01-14 23:57:57 +0000 UTC
Starts At: 2016-01-15 23:57:57 +0000 UTC
Fingerprint: 7A:CA:B8:50:FF:8D:EB:03:3D:AC:AD:13:4F:EE:03:D5:5D:EB:5E:37:51:8C:E0:98:F8:1B:36:2B:20:83:0D:C0
Subject Alt Name: blog.example.com
Issuer: /C=US/ST=CA/L=San Francisco/O=Drycc/OU=Engineering/CN=example.com/emailAddress=engineering@drycc.cc
Subject: /C=US/ST=CA/L=San Francisco/O=Drycc/OU=Engineering/CN=example.com/emailAddress=engineering@drycc.cc
Connected Domains: example.com
Owner: admin-user
Created: 2016-01-28 19:07:41 +0000 UTC
Updated: 2016-01-30 00:10:02 +0000 UTC
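To compare these details against the certificate file on disk, you can run a quick local check with openssl; a sketch:
$ openssl x509 -in server.crt -noout -subject -dates -fingerprint -sha256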
Testing SSL
Use a command-line utility like curl to test that everything is configured correctly for your secure domain.
Note
The -k option tells curl to ignore untrusted certificates.
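A minimal check might look like the following, with www.example.com standing in for your own domain; add -k only when testing a certificate that is not yet trusted:
$ curl -vI https://www.example.com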
Pay attention to the output. It should print SSL certificate verify ok. If it prints something like common name: www.example.com (does not match 'www.somedomain.com'), then something is not configured correctly.
Enforce SSL at the Router
To enforce that all HTTP requests are redirected to HTTPS, enable TLS enforcement at the router level:
$ drycc tls force enable -a foo
Enabling https-only requests for foo... done
Users hitting the HTTP endpoint for the application will now receive a 301 redirect to the HTTPS endpoint.
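You can verify the redirect with a plain HTTP request; a sketch, again using www.example.com as a placeholder for your domain:
$ curl -sI http://www.example.com | head -n 1
The first line of the response should report a 301 status.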
To disable enforced TLS:
$ drycc tls force disable -a foo
Disabling https-only requests for foo... done
Automated Certificate Management
With Automated Certificate Management (ACM), Drycc automatically manages TLS certificates for applications that enable the feature.
Certificates handled by ACM renew automatically one month before they expire, and new certificates are issued automatically whenever you add or remove a custom domain.
Automated Certificate Management uses Let’s Encrypt, the free, automated, and open certificate authority for managing your application’s TLS certificates. Let’s Encrypt is run for the public benefit by the Internet Security Research Group (ISRG).
To enable ACM:
$ drycc tls auto enable -a foo
To disable ACM:
$ drycc tls auto disable -a foo
Remove a Certificate
You can remove a certificate using the certs:remove
command:
$ drycc certs remove my-cert -a foo
Removing www.example.com... Done.
Swapping Certificates
Over the lifetime of an application, you will need to acquire certificates with new expiration dates and apply them to all relevant applications. The recommended way to swap certificates is:
Be intentional with certificate names, such as example-com-2017, where the year signifies the expiry year. This allows for example-com-2018 when a new certificate is purchased.
Assuming all applications are already using example-com-2017, run the following commands (they can be chained together):
$ drycc certs detach example-com-2017 example.com -a foo
$ drycc certs attach example-com-2018 example.com -a foo
This handles a single domain, allowing you to verify everything worked as planned and slowly roll it out to other applications using the same method.
Troubleshooting
Here are some steps you can follow if your SSL endpoint is not working as expected.
Untrusted Certificate
In some cases, accessing the SSL endpoint may show your certificate as untrusted.
If this occurs, it may be because the certificate’s issuing authority is not in Mozilla’s list of trusted root CAs. If this is the case, many browsers may consider your certificate untrusted.
If you have uploaded a certificate that was signed by a root authority but you get the message that it is not trusted, then something is wrong with the certificate. For example, it may be missing intermediate certificates. If so, download the intermediate certificates from your SSL provider, remove the certificate from Drycc, and re-run the certs add command.
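One way to see exactly which certificate chain the router serves, and whether an intermediate is missing, is openssl’s s_client; a sketch, substituting your own domain:
$ openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null
The Certificate chain section of the output lists every certificate the server presents.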
16 - Using drycc path
The Drycc stack supports advanced use cases with custom Docker images. For most applications, we recommend using Drycc’s default buildpack system, which provides automatic security updates and language-specific optimizations and eliminates the need to maintain Dockerfiles.
Drycc Config Path Overview
A Drycc repository supports two configurations:
- A .drycc directory at the root of the working tree
- A root directory as a ‘bare’ repository (without a working tree), typically used for drycc pull
Repository contents include:
- config/[a-z0-9]+(\.[a-z0-9]+)*: Configuration files named by group. Format follows environment variable syntax.
- [a-z0-9]+(\-[a-z0-9]+)*.(yaml|yml): Pipeline configuration files.
Config Format
Environment variables use <NAME>=<VALUE> format. By convention, variable names are capitalized:
DEBUG=true
JVM_OPTIONS=-XX:+UseG1GC
Pipeline Format
A manifest contains these top-level sections:
- build – Specifies Dockerfile for building
- env – Defines container environment variables
- run – Specifies release phase tasks
- config – References config groups (global groups referenced automatically)
- deploy – Defines deployment commands and arguments
Example manifest for building Docker images:
kind: pipeline
ptype: web
build:
  docker: Dockerfile
  arg:
    CODENAME: bookworm
env:
  VERSION: 1.2.1
run:
  command:
    - ./deployment-tasks.sh
  image: task
  timeout: 100
config:
  - jvm-config
deploy:
  command:
    - bash
    - -ec
  args:
    - bundle exec puma -C config/puma.rb
For more deployment examples, see the Drycc samples.