docs: restructure docs (#14421)

Closes #13434 
Supersedes #14182

---------

Co-authored-by: Ethan <39577870+ethanndickson@users.noreply.github.com>
Co-authored-by: Ethan Dickson <ethan@coder.com>
Co-authored-by: Ben Potter <ben@coder.com>
Co-authored-by: Stephen Kirby <58410745+stirby@users.noreply.github.com>
Co-authored-by: Stephen Kirby <me@skirby.dev>
Co-authored-by: EdwardAngert <17991901+EdwardAngert@users.noreply.github.com>
Co-authored-by: Edward Angert <EdwardAngert@users.noreply.github.com>
This commit is contained in:
Muhammad Atif Ali
2024-10-05 08:52:04 -07:00
committed by GitHub
parent 288df75686
commit 419eba5fb6
298 changed files with 5009 additions and 3889 deletions


@ -1,5 +0,0 @@
Get started with Coder administration:
<children>
This page is rendered on https://coder.com/docs/admin. Refer to the other documents in the `admin/` directory.
</children>


@ -1,33 +0,0 @@
# Application Logs
In Coderd, application logs refer to the records of events, messages, and
activities generated by the application during its execution. These logs provide
valuable information about the application's behavior, performance, and any
issues that may have occurred.
Application logs include entries that capture events on different levels of
severity:
- Informational messages
- Warnings
- Errors
- Debugging information
By analyzing application logs, system administrators can gain insights into the
application's behavior, identify and diagnose problems, track performance
metrics, and make informed decisions to improve the application's stability and
efficiency.
## Error logs
To ensure effective monitoring and timely response to critical events in the
Coder application, it is recommended to configure log alerts that specifically
watch for the following log entries:
| Log Level | Module | Log message | Potential issues |
| --------- | ---------------------------- | ----------------------- | ------------------------------------------------------------------------------------------------- |
| `ERROR` | `coderd` | `workspace build error` | Workspace owner is unable to start their workspace. |
| `ERROR` | `coderd.autobuild` | `workspace build error` | Autostart failed to initiate the workspace. |
| `ERROR` | `coderd.provisionerd-<name>` | | The provisioner job encounters issues importing the workspace template or building the workspace. |
| `ERROR` | `coderd.userauth` | | Authentication problems, such as the inability of the workspace user to log in. |
| `ERROR` | `coderd.prometheusmetrics` | | The metrics aggregator's queue is full, causing it to reject new metrics. |
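As a sketch (log formats and alerting stacks vary by deployment, and the sample lines below are illustrative, not real coderd output), the signatures from the table can be matched with a simple pattern before wiring them into your alerting tool:

```shell
# Sketch: match the error signatures from the table above in a raw log stream.
alert_pattern='ERROR.*(coderd\.autobuild|coderd\.provisionerd-|coderd\.userauth|coderd\.prometheusmetrics|workspace build error)'

printf '%s\n' \
  '2024-10-05 08:52:04 [ERROR] coderd.autobuild: workspace build error' \
  '2024-10-05 08:52:05 [INFO]  coderd: listening on 0.0.0.0:3000' |
  grep -E "$alert_pattern"
```

Only the `ERROR` line survives the filter; the same pattern can be reused in whatever log shipper or alert rule your stack provides.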


@ -1,107 +0,0 @@
# Automation
All actions available in the Coder dashboard can also be automated, since the
dashboard uses the same public REST API. There are several ways to
extend/automate Coder:
- [coderd Terraform Provider](https://registry.terraform.io/providers/coder/coderd/latest)
- [CLI](../reference/cli)
- [REST API](../reference/api)
- [Coder SDK](https://pkg.go.dev/github.com/coder/coder/v2/codersdk)
## Quickstart
Generate a token on your Coder deployment by visiting:
```shell
https://coder.example.com/settings/tokens
```
List your workspaces
```shell
# CLI
coder ls \
--url https://coder.example.com \
--token <your-token> \
--output json
# REST API (with curl)
curl https://coder.example.com/api/v2/workspaces?q=owner:me \
-H "Coder-Session-Token: <your-token>"
```
## Documentation
We publish an [API reference](../reference/api) in our documentation. You can
also enable a [Swagger endpoint](../reference/cli/server.md#--swagger-enable) on
your Coder deployment.
## Use cases
We strive to keep the following use cases up to date, but please note that
changes to API queries and routes can occur. For the most recent queries and
payloads, we recommend checking the relevant documentation.
### Users & Groups
- [Manage Users via Terraform](https://registry.terraform.io/providers/coder/coderd/latest/docs/resources/user)
- [Manage Groups via Terraform](https://registry.terraform.io/providers/coder/coderd/latest/docs/resources/group)
### Templates
- [Manage templates via Terraform or CLI](../templates/change-management.md):
Store all templates in git and update them in CI/CD pipelines.
### Workspace agents
Workspace agents have a special token that can be used to send logs, metrics,
and workspace activity.
- [Custom workspace logs](../reference/api/agents.md#patch-workspace-agent-logs):
Expose messages prior to the Coder init script running (e.g. pulling image, VM
starting, restoring snapshot).
[coder-logstream-kube](https://github.com/coder/coder-logstream-kube) uses
this to show Kubernetes events, such as image pulls or ResourceQuota
restrictions.
```shell
curl -X PATCH https://coder.example.com/api/v2/workspaceagents/me/logs \
-H "Coder-Session-Token: $CODER_AGENT_TOKEN" \
-d "{
\"logs\": [
{
\"created_at\": \"$(date -u +'%Y-%m-%dT%H:%M:%SZ')\",
\"level\": \"info\",
\"output\": \"Restoring workspace from snapshot: 05%...\"
}
]
}"
```
- [Manually send workspace activity](../reference/api/agents.md#submit-workspace-agent-stats):
Keep a workspace "active," even if there is not an open connection (e.g. for a
long-running machine learning job).
```shell
#!/bin/bash
# Send workspace activity as long as the job is still running
while true
do
  if pgrep -f "my_training_script.py" > /dev/null
  then
    curl -X POST "https://coder.example.com/api/v2/workspaceagents/me/report-stats" \
      -H "Coder-Session-Token: $CODER_AGENT_TOKEN" \
      -d '{
        "connection_count": 1
      }'

    # Sleep for 30 minutes (1800 seconds) if the job is running
    sleep 1800
  else
    # Sleep for 1 minute (60 seconds) if the job is not running
    sleep 60
  fi
done
```


@ -1,21 +1,5 @@
# External Authentication
Coder integrates with Git and OpenID Connect to automate away the need for
developers to authenticate with external services within their workspace.
## Git Providers
When developers use `git` inside their workspace, they are prompted to
authenticate. After that, Coder will store and refresh tokens for future
operations.
<video autoplay playsinline loop>
<source src="https://github.com/coder/coder/blob/main/site/static/external-auth.mp4?raw=true" type="video/mp4">
Your browser does not support the video tag.
</video>
## Configuration
To add an external authentication provider, you'll need to create an OAuth
application. The following providers are supported:
@ -25,8 +9,8 @@ application. The following providers are supported:
- [Azure DevOps](https://learn.microsoft.com/en-us/azure/devops/integrate/get-started/authentication/oauth?view=azure-devops)
- [Azure DevOps (via Entra ID)](https://learn.microsoft.com/en-us/entra/architecture/auth-oauth2)
The next step is to [configure the Coder server](./configure.md) to use the
OAuth application by setting the following environment variables:
The next step is to configure the Coder server to use the OAuth application by
setting the following environment variables:
```env
CODER_EXTERNAL_AUTH_0_ID="<USER_DEFINED_ID>"
@ -43,7 +27,7 @@ The `CODER_EXTERNAL_AUTH_0_ID` environment variable is used for internal
reference. Therefore, it can be set arbitrarily (e.g., `primary-github` for your
GitHub provider).
### GitHub
## GitHub
> If you don't require fine-grained access control, it's easier to configure a
> GitHub OAuth app!
@ -84,7 +68,7 @@ CODER_EXTERNAL_AUTH_0_CLIENT_ID=xxxxxx
CODER_EXTERNAL_AUTH_0_CLIENT_SECRET=xxxxxxx
```
### GitHub Enterprise
## GitHub Enterprise
GitHub Enterprise requires the following environment variables:
@ -98,7 +82,7 @@ CODER_EXTERNAL_AUTH_0_AUTH_URL="https://github.example.com/login/oauth/authorize
CODER_EXTERNAL_AUTH_0_TOKEN_URL="https://github.example.com/login/oauth/access_token"
```
### Bitbucket Server
## Bitbucket Server
Bitbucket Server requires the following environment variables:
@ -110,7 +94,7 @@ CODER_EXTERNAL_AUTH_0_CLIENT_SECRET=xxx
CODER_EXTERNAL_AUTH_0_AUTH_URL=https://bitbucket.domain.com/rest/oauth2/latest/authorize
```
### Azure DevOps
## Azure DevOps
Azure DevOps requires the following environment variables:
@ -124,7 +108,7 @@ CODER_EXTERNAL_AUTH_0_AUTH_URL="https://app.vssps.visualstudio.com/oauth2/author
CODER_EXTERNAL_AUTH_0_TOKEN_URL="https://app.vssps.visualstudio.com/oauth2/token"
```
### Azure DevOps (via Entra ID)
## Azure DevOps (via Entra ID)
Azure DevOps (via Entra ID) requires the following environment variables:
@ -138,7 +122,7 @@ CODER_EXTERNAL_AUTH_0_AUTH_URL="https://login.microsoftonline.com/<TENANT ID>/oa
> Note: Your app registration in Entra ID requires the `vso.code_write` scope
### GitLab self-managed
## GitLab self-managed
GitLab self-managed requires the following environment variables:
@ -154,7 +138,7 @@ CODER_EXTERNAL_AUTH_0_TOKEN_URL="https://gitlab.company.org/oauth/token"
CODER_EXTERNAL_AUTH_0_REGEX=gitlab\.company\.org
```
### Gitea
## Gitea
```env
CODER_EXTERNAL_AUTH_0_ID="gitea"
@ -168,7 +152,7 @@ CODER_EXTERNAL_AUTH_0_AUTH_URL="https://gitea.com/login/oauth/authorize"
The Redirect URI for Gitea should be
`https://coder.company.org/external-auth/gitea/callback`.
### Self-managed git providers
## Self-managed git providers
Custom authentication and token URLs should be used for self-managed Git
provider deployments.
@ -182,12 +166,12 @@ CODER_EXTERNAL_AUTH_0_REGEX=github\.company\.org
> Note: The `REGEX` variable must be set if using a custom git domain.
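Before restarting the server, it can help to sanity-check that the `REGEX` value matches the clone URLs your users will actually use. A quick sketch (the host and repository below are placeholders):

```shell
# Hypothetical host/repo; substitute your own REGEX value and clone URL.
regex='github\.company\.org'

echo 'https://github.company.org/acme/api.git' | grep -Eq "$regex" \
  && echo 'REGEX matches'
```

If the pattern fails to match, Coder will not associate the provider with `git` operations against that host.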
### JFrog Artifactory
## JFrog Artifactory
See [this](https://coder.com/docs/guides/artifactory-integration#jfrog-oauth)
guide for instructions on setting up JFrog Artifactory.
See [this](../admin/integrations/jfrog-artifactory.md) guide for instructions
on setting up JFrog Artifactory.
### Custom scopes
## Custom scopes
Optionally, you can request custom scopes:
@ -195,10 +179,11 @@ Optionally, you can request custom scopes:
CODER_EXTERNAL_AUTH_0_SCOPES="repo:read repo:write write:gpg_key"
```
### Multiple External Providers (enterprise) (premium)
## Multiple External Providers (enterprise) (premium)
Multiple providers are an [Enterprise feature](https://coder.com/pricing). Below
is an example configuration with multiple providers.
Multiple providers are an Enterprise feature.
[Learn more](https://coder.com/pricing#compare-plans). Below is an example
configuration with multiple providers.
```env
# Provider 1) github.com
@ -206,7 +191,7 @@ CODER_EXTERNAL_AUTH_0_ID=primary-github
CODER_EXTERNAL_AUTH_0_TYPE=github
CODER_EXTERNAL_AUTH_0_CLIENT_ID=xxxxxx
CODER_EXTERNAL_AUTH_0_CLIENT_SECRET=xxxxxxx
CODER_EXTERNAL_AUTH_0_REGEX=github.com/orgname
CODER_EXTERNAL_AUTH_0_REGEX=github.com/org
# Provider 2) github.example.com
CODER_EXTERNAL_AUTH_1_ID=secondary-github
@ -219,128 +204,10 @@ CODER_EXTERNAL_AUTH_1_TOKEN_URL="https://github.example.com/login/oauth/access_t
CODER_EXTERNAL_AUTH_1_VALIDATE_URL="https://github.example.com/api/v3/user"
```
To support regex matching for paths (e.g. github.com/orgname), you'll need to
add this to the
To support regex matching for paths (e.g. github.com/org), you'll need to add
this to the
[Coder agent startup script](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script):
```shell
git config --global credential.useHttpPath true
```
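To confirm the setting took effect, you can read it back. This sketch uses a throwaway `HOME` so it doesn't touch your real global config:

```shell
# Use a temporary HOME so the example leaves your real ~/.gitconfig alone.
export HOME="$(mktemp -d)"

git config --global credential.useHttpPath true
git config --global --get credential.useHttpPath   # prints: true
```

With `useHttpPath` enabled, `git` includes the repository path when looking up credentials, which is what allows the per-path `REGEX` matching to work.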
### Kubernetes environment variables
If you deployed Coder with Kubernetes you can set the environment variables in
your `values.yaml` file:
```yaml
coder:
env:
# […]
- name: CODER_EXTERNAL_AUTH_0_ID
value: USER_DEFINED_ID
- name: CODER_EXTERNAL_AUTH_0_TYPE
value: github
- name: CODER_EXTERNAL_AUTH_0_CLIENT_ID
valueFrom:
secretKeyRef:
name: github-primary-basic-auth
key: client-id
- name: CODER_EXTERNAL_AUTH_0_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: github-primary-basic-auth
key: client-secret
```
You can set the secrets by creating a `github-primary-basic-auth.yaml` file and
applying it.
```yaml
apiVersion: v1
kind: Secret
metadata:
name: github-primary-basic-auth
type: Opaque
stringData:
client-secret: xxxxxxxxx
client-id: xxxxxxxxx
```
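`stringData` accepts the plain-text values; Kubernetes stores them base64-encoded, equivalent to supplying the encoded form under `data`. For example, encoding the placeholder value from the manifest above:

```shell
# The placeholder secret value, encoded the way Kubernetes stores it
# under `data`:
printf '%s' 'xxxxxxxxx' | base64   # prints: eHh4eHh4eHh4
```

Using `stringData` simply saves you this manual encoding step.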
Make sure to restart the affected pods for the change to take effect.
## Require git authentication in templates
If your template requires git authentication (e.g. running `git clone` in the
[startup_script](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script)),
you can require users to authenticate via git prior to creating a workspace:
![Git authentication in template](../images/admin/git-auth-template.png)
### Native git authentication will auto-refresh tokens
<blockquote class="info">
<p>
This is the preferred authentication method.
</p>
</blockquote>
By default, the Coder agent configures native `git` authentication via the
`GIT_ASKPASS` environment variable. This means that, with no additional
configuration, external authentication works with native `git` commands.
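As a rough sketch of the mechanism (the helper and token below are stand-ins, not the agent's real implementation): `git` invokes the program named by `GIT_ASKPASS` with a prompt argument and uses its stdout as the credential.

```shell
# Stand-in askpass helper. The Coder agent ships its own helper that
# returns the external-auth access token instead of this fixed string.
askpass="$(mktemp)"
printf '#!/bin/sh\necho example-token\n' > "$askpass"
chmod +x "$askpass"

export GIT_ASKPASS="$askpass"
"$GIT_ASKPASS" 'Password for https://github.com: '   # prints: example-token
```

Because `git` consults `GIT_ASKPASS` on every credential prompt, the agent can hand out a freshly refreshed token each time without storing it on disk.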
To check the auth token being used **from inside a running workspace**, run:
```shell
# If the exit code is non-zero, then the user is not authenticated with the
# external provider.
coder external-auth access-token <external-auth-id>
```
Note: Some IDEs override the `GIT_ASKPASS` environment variable and need to be
configured separately.
**VSCode**
Use the
[Coder](https://marketplace.visualstudio.com/items?itemName=coder.coder-remote)
extension to automatically configure these settings for you!
Otherwise, you can manually configure the following settings:
- Set `git.terminalAuthentication` to `false`
- Set `git.useIntegratedAskPass` to `false`
### Hard coded tokens do not auto-refresh
If the token must be inserted into the workspace, for example for the
[GitHub CLI](https://cli.github.com/), the auth token can be inserted from the
template. This token will not auto-refresh. The following example will
authenticate via GitHub and auto-clone a repo into the `~/coder` directory.
```hcl
data "coder_external_auth" "github" {
# Matches the ID of the external auth provider in Coder.
id = "github"
}
resource "coder_agent" "dev" {
os = "linux"
arch = "amd64"
dir = "~/coder"
env = {
GITHUB_TOKEN : data.coder_external_auth.github.access_token
}
startup_script = <<EOF
if [ ! -d ~/coder ]; then
git clone https://github.com/coder/coder
fi
EOF
}
```
See the
[Terraform provider documentation](https://registry.terraform.io/providers/coder/coder/latest/docs/data-sources/external_auth)
for all available options.


@ -1,13 +0,0 @@
# Groups
Groups can be used with [template RBAC](./rbac.md) to give groups of users
access to specific templates. They can be defined via the Coder web UI,
[synced from your identity provider](./auth.md) or
[managed via Terraform](https://registry.terraform.io/providers/coder/coderd/latest/docs/resources/template).
![Groups](../images/groups.png)
## Enabling this feature
This feature is only available with a
[Premium or Enterprise license](https://coder.com/pricing).

docs/admin/index.md Normal file

@ -0,0 +1,18 @@
# Administration
These guides contain information on managing the Coder control plane and
[authoring templates](./templates/index.md).
First time viewers looking to set up control plane access can start with the
[configuration guide](./setup/index.md). If you're a team lead looking to design
environments for your developers, check out our
[templates guides](./templates/index.md). If you are a developer using Coder, we
recommend the [user guides](../user-guides/index.md).
For automation and scripting workflows, see our [CLI](../reference/cli/index.md)
and [API](../reference/api/index.md) docs.
For any information not strictly contained in these sections, check out our
[Tutorials](../tutorials/index.md) and [FAQs](../tutorials/faqs.md).
<children></children>


@ -0,0 +1,130 @@
# Architecture
The Coder deployment model is flexible and offers various components that
platform administrators can deploy and scale depending on their use case. This
page describes possible deployments, challenges, and risks associated with them.
<div class="tabs">
## Community Edition
![Architecture Diagram](../../images/architecture-diagram.png)
## Enterprise
![Single Region Architecture Diagram](../../images/architecture-single-region.png)
## Multi-Region Enterprise
![Multi Region Architecture Diagram](../../images/architecture-multi-region.png)
</div>
## Primary components
### coderd
_coderd_ is the service created by running `coder server`. It is a thin API that
connects workspaces, provisioners and users. _coderd_ stores its state in
Postgres and is the only service that communicates with Postgres.
It offers:
- Dashboard (UI)
- HTTP API
- Dev URLs (HTTP reverse proxy to workspaces)
- Workspace Web Applications (e.g. for easy access to `code-server`)
- Agent registration
### provisionerd
_provisionerd_ is the execution context for infrastructure modifying providers.
At the moment, the only provider is Terraform (running `terraform`).
By default, the Coder server runs multiple provisioner daemons.
[External provisioners](../provisioners.md) can be added for security or
scalability purposes.
### Workspaces
At the highest level, a workspace is a set of cloud resources. These resources
can be VMs, Kubernetes clusters, storage buckets, or whatever else Terraform
lets you dream up.
The resources that run the agent are described as _computational resources_,
while those that don't are called _peripheral resources_.
Each resource may also be _persistent_ or _ephemeral_ depending on whether
they're destroyed on workspace stop.
### Agents
An agent is the Coder service that runs within a user's remote workspace. It
provides a consistent interface for coderd and clients to communicate with
workspaces regardless of operating system, architecture, or cloud.
It offers the following services along with much more:
- SSH
- Port forwarding
- Liveness checks
- `startup_script` automation
Templates are responsible for
[creating and running agents](../templates/extending-templates/index.md#workspace-agents)
within workspaces.
## Service Bundling
While _coderd_ and Postgres can be orchestrated independently, our default
installation paths bundle them all together into one system service. It's
perfectly fine to run a production deployment this way, but there are certain
situations that necessitate decomposition:
- Reducing global client latency (distribute coderd and centralize database)
- Achieving greater availability and efficiency (horizontally scale individual
services)
## Data Layer
### PostgreSQL (Recommended)
While `coderd` runs a bundled version of PostgreSQL, we recommend running an
external PostgreSQL 13+ database for production deployments.
A managed PostgreSQL database, with daily backups, is recommended:
- For AWS: Amazon RDS for PostgreSQL
- For Azure: Azure Database for PostgreSQL - Flexible Server
- For GCP: Cloud SQL for PostgreSQL
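The server is pointed at an external database via the `CODER_PG_CONNECTION_URL` environment variable. A sketch with placeholder host and credentials:

```shell
# Placeholder host/credentials; sslmode=require is a sensible default for
# managed databases.
export CODER_PG_CONNECTION_URL="postgres://coder:secret@db.example.com:5432/coder?sslmode=require"
echo "$CODER_PG_CONNECTION_URL"
```

When this variable is unset, `coder server` falls back to its bundled PostgreSQL, which is why it is only suitable for evaluation.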
Learn more about database requirements:
[Database Health](../monitoring/health-check.md#database)
### Git Providers (Recommended)
Users will likely need to pull source code and other artifacts from a git
provider. The Coder control plane and workspaces will need network connectivity
to the git provider.
- [GitHub Enterprise](../external-auth.md#github-enterprise)
- [GitLab](../external-auth.md#gitlab-self-managed)
- [BitBucket](../external-auth.md#bitbucket-server)
- [Other Providers](../external-auth.md#self-managed-git-providers)
### Artifact Manager (Optional)
Workspaces and templates can pull artifacts from an artifact manager, such as
JFrog Artifactory. This can be configured on the infrastructure level, or in
some cases within Coder:
- Tutorial: [JFrog Artifactory and Coder](../integrations/jfrog-artifactory.md)
### Container Registry (Optional)
If you prefer not to pull container images for the control plane (`coderd`,
`provisionerd`) and workspaces from a public container registry (Docker Hub,
GitHub Container Registry), you can run your own container registry with Coder.
To shorten the provisioning time, it is recommended to deploy registry mirrors
in the same region as the workspace nodes.


@ -0,0 +1,32 @@
# Infrastructure
Learn how to spin up & manage Coder infrastructure.
## Architecture
Coder is a self-hosted platform that runs on your own servers. For large
deployments, we recommend running the control plane on Kubernetes. Workspaces
can be run as VMs or Kubernetes pods. The control plane (`coderd`) runs in a
single region. However, workspace proxies, provisioners, and workspaces can run
across regions or even cloud providers for the optimal developer experience.
Learn more about Coder's
[architecture, concepts, and dependencies](./architecture.md).
## Reference Architectures
We publish [reference architectures](./validated-architectures/index.md) that
include best practices around Coder configuration, infrastructure sizing,
autoscaling, and operational readiness for different deployment sizes (e.g.
`Up to 2000 users`).
## Scale Tests
Use our [scale test utility](./scale-utility.md), which can be run on your
Coder deployment to simulate user activity and measure performance.
## Monitoring
See our dedicated [Monitoring](../monitoring/index.md) section for details
around monitoring your Coder deployment via a bundled Grafana dashboard, health
check, and/or within your own observability stack via Prometheus metrics.


@ -90,11 +90,11 @@ Database:
## Available reference architectures
[Up to 1,000 users](../../architecture/1k-users.md)
[Up to 1,000 users](./validated-architectures/1k-users.md)
[Up to 2,000 users](../../architecture/2k-users.md)
[Up to 2,000 users](./validated-architectures/2k-users.md)
[Up to 3,000 users](../../architecture/3k-users.md)
[Up to 3,000 users](./validated-architectures/3k-users.md)
## Hardware recommendation
@ -113,12 +113,12 @@ on the workload size to ensure deployment stability.
#### CPU and memory usage
Enabling
[agent stats collection](../../reference/cli/server.md#--prometheus-collect-agent-stats)
[agent stats collection](../../reference/cli/index.md#--prometheus-collect-agent-stats)
(optional) may increase memory consumption.
Enabling direct connections between users and workspace agents (apps or SSH
traffic) can help prevent an increase in CPU usage. It is recommended to keep
[this option enabled](../../reference/cli/server.md#--disable-direct-connections)
[this option enabled](../../reference/cli/index.md#--disable-direct-connections)
unless there are compelling reasons to disable it.
Inactive users do not consume Coder resources.
@ -149,18 +149,19 @@ Terminal (bidirectional), and Workspace events/logs (unidirectional).
If the Coder deployment expects traffic from developers spread across the globe,
be aware that customer-facing latency might be higher because of the distance
between users and the load balancer. Fortunately, the latency can be improved
with a deployment of Coder [workspace proxies](../workspace-proxies.md).
with a deployment of Coder
[workspace proxies](../networking/workspace-proxies.md).
**Node Autoscaling**
We recommend disabling the autoscaling for `coderd` nodes. Autoscaling can cause
interruptions for user connections, see
[Autoscaling](scale-utility.md#autoscaling) for more details.
[Autoscaling](./scale-utility.md#autoscaling) for more details.
### Control plane: Workspace Proxies
When scaling [workspace proxies](../workspace-proxies.md), follow the same
guidelines as for `coderd` above:
When scaling [workspace proxies](../networking/workspace-proxies.md), follow the
same guidelines as for `coderd` above:
- `1 vCPU x 2 GB memory` for every 250 users.
- Disable autoscaling.


@ -6,15 +6,15 @@ infrastructure. For scale-testing Kubernetes clusters, we recommend installing
and using the dedicated Coder template,
[scaletest-runner](https://github.com/coder/coder/tree/main/scaletest/templates/scaletest-runner).
Learn more about [Coder's architecture](../../architecture/architecture.md) and
our [scale-testing methodology](scale-testing.md).
Learn more about [Coder's architecture](./architecture.md) and our
[scale-testing methodology](./scale-testing.md).
## Recent scale tests
> Note: the information below is for reference purposes only and is not
> intended to be used as guidelines for infrastructure sizing. Review the
> [Reference Architectures](../../architecture/validated-arch.md#node-sizing)
> for hardware sizing recommendations.
> [Reference Architectures](./validated-architectures/index.md#node-sizing) for
> hardware sizing recommendations.
| Environment | Coder CPU | Coder RAM | Coder Replicas | Database | Users | Concurrent builds | Concurrent connections (Terminal/SSH) | Coder Version | Last tested |
| ---------------- | --------- | --------- | -------------- | ----------------- | ----- | ----------------- | ------------------------------------- | ------------- | ------------ |
@ -249,6 +249,7 @@ an annotation on the coderd deployment.
## Troubleshooting
If a load test fails or if you are experiencing performance issues during
day-to-day use, you can leverage Coder's [Prometheus metrics](../prometheus.md)
to identify bottlenecks during scale tests. Additionally, you can use your
existing cloud monitoring stack to measure load, view server logs, etc.
day-to-day use, you can leverage Coder's
[Prometheus metrics](../integrations/prometheus.md) to identify bottlenecks
during scale tests. Additionally, you can use your existing cloud monitoring
stack to measure load, view server logs, etc.


@ -0,0 +1,51 @@
# Reference Architecture: up to 1,000 users
The 1,000 users architecture is designed to cover a wide range of workflows.
Examples of organizations that might use this architecture include medium-sized
tech startups, educational institutions, and small to mid-sized enterprises.
**Target load**: API: up to 180 RPS
**High Availability**: non-essential for small deployments
## Hardware recommendations
### Coderd nodes
| Users | Node capacity | Replicas | GCP | AWS | Azure |
| ----------- | ------------------- | ------------------- | --------------- | ---------- | ----------------- |
| Up to 1,000 | 2 vCPU, 8 GB memory | 1-2 / 1 coderd each | `n1-standard-2` | `t3.large` | `Standard_D2s_v3` |
**Footnotes**:
- For small deployments (ca. 100 users, 10 concurrent workspace builds), it is
acceptable to deploy provisioners on `coderd` nodes.
### Provisioner nodes
| Users | Node capacity | Replicas | GCP | AWS | Azure |
| ----------- | -------------------- | ------------------------------ | ---------------- | ------------ | ----------------- |
| Up to 1,000 | 8 vCPU, 32 GB memory | 2 nodes / 30 provisioners each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
**Footnotes**:
- An external provisioner is deployed as a Kubernetes pod.
### Workspace nodes
| Users | Node capacity | Replicas | GCP | AWS | Azure |
| ----------- | -------------------- | ----------------------- | ---------------- | ------------ | ----------------- |
| Up to 1,000 | 8 vCPU, 32 GB memory | 64 / 16 workspaces each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
**Footnotes**:
- We assume each workspace user needs at least 2 GB of memory. We recommend
  against over-provisioning memory for developer workloads, as this may lead
  to OOMKiller invocations.
- Maximum number of Kubernetes workspace pods per node: 256
### Database nodes
| Users | Node capacity | Replicas | Storage | GCP | AWS | Azure |
| ----------- | ------------------- | -------- | ------- | ------------------ | ------------- | ----------------- |
| Up to 1,000 | 2 vCPU, 8 GB memory | 1 | 512 GB | `db-custom-2-7680` | `db.t3.large` | `Standard_D2s_v3` |


@ -0,0 +1,59 @@
# Reference Architecture: up to 2,000 users
In the 2,000 users architecture, there is a moderate increase in traffic,
suggesting a growing user base or expanding operations. This setup is
well-suited for mid-sized companies experiencing growth or for universities
seeking to accommodate their expanding user populations.
Users can be evenly distributed between 2 regions or be attached to different
clusters.
**Target load**: API: up to 300 RPS
**High Availability**: The mode is _enabled_; multiple replicas provide higher
deployment reliability under load.
## Hardware recommendations
### Coderd nodes
| Users | Node capacity | Replicas | GCP | AWS | Azure |
| ----------- | -------------------- | ----------------------- | --------------- | ----------- | ----------------- |
| Up to 2,000 | 4 vCPU, 16 GB memory | 2 nodes / 1 coderd each | `n1-standard-4` | `t3.xlarge` | `Standard_D4s_v3` |
### Provisioner nodes
| Users | Node capacity | Replicas | GCP | AWS | Azure |
| ----------- | -------------------- | ------------------------------ | ---------------- | ------------ | ----------------- |
| Up to 2,000 | 8 vCPU, 32 GB memory | 4 nodes / 30 provisioners each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
**Footnotes**:
- An external provisioner is deployed as a Kubernetes pod.
- It is not recommended to run provisioner daemons on `coderd` nodes.
- Consider separating provisioners into different namespaces to support
  zero-trust or multi-cloud deployments.
### Workspace nodes
| Users | Node capacity | Replicas | GCP | AWS | Azure |
| ----------- | -------------------- | ------------------------ | ---------------- | ------------ | ----------------- |
| Up to 2,000 | 8 vCPU, 32 GB memory | 128 / 16 workspaces each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
**Footnotes**:
- We assume each workspace user needs 2 GB of memory
- Maximum number of Kubernetes workspace pods per node: 256
- Nodes can be distributed in 2 regions, not necessarily evenly split, depending
on developer team sizes
### Database nodes
| Users | Node capacity | Replicas | Storage | GCP | AWS | Azure |
| ----------- | -------------------- | -------- | ------- | ------------------- | -------------- | ----------------- |
| Up to 2,000 | 4 vCPU, 16 GB memory | 1 | 1 TB | `db-custom-4-15360` | `db.t3.xlarge` | `Standard_D4s_v3` |
**Footnotes**:
- Consider adding more replicas if the workspace activity is higher than 500
workspace builds per day or to achieve higher RPS.


@ -0,0 +1,62 @@
# Reference Architecture: up to 3,000 users
The 3,000 users architecture targets large-scale enterprises, possibly with
on-premises network and cloud deployments.
**Target load**: API: up to 550 RPS
**High Availability**: Typically, such scale requires a fully-managed HA
PostgreSQL service, and all Coder observability features enabled for operational
purposes.
**Observability**: Deploy monitoring solutions to gather Prometheus metrics and
visualize them with Grafana to gain detailed insights into infrastructure and
application behavior. This allows operators to respond quickly to incidents and
continuously improve the reliability and performance of the platform.
## Hardware recommendations
### Coderd nodes
| Users | Node capacity | Replicas | GCP | AWS | Azure |
| ----------- | -------------------- | ----------------- | --------------- | ----------- | ----------------- |
| Up to 3,000 | 8 vCPU, 32 GB memory | 4 / 1 coderd each | `n1-standard-4` | `t3.xlarge` | `Standard_D4s_v3` |
### Provisioner nodes
| Users | Node capacity | Replicas | GCP | AWS | Azure |
| ----------- | -------------------- | ------------------------ | ---------------- | ------------ | ----------------- |
| Up to 3,000 | 8 vCPU, 32 GB memory | 8 / 30 provisioners each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
**Footnotes**:
- An external provisioner is deployed as a Kubernetes pod.
- It is strongly discouraged to run provisioner daemons on `coderd` nodes at
  this level of scale.
- Separate provisioners into different namespaces to support zero-trust or
  multi-cloud deployments.
### Workspace nodes
| Users | Node capacity | Replicas | GCP | AWS | Azure |
| ----------- | -------------------- | ------------------------------ | ---------------- | ------------ | ----------------- |
| Up to 3,000 | 8 vCPU, 32 GB memory | 256 nodes / 12 workspaces each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
**Footnotes**:
- Assumed that a workspace user needs 2 GB memory to perform
- Maximum number of Kubernetes workspace pods per node: 256
- As workspace nodes can be distributed across regions, on-premises networks,
  and cloud areas, consider separate namespaces to support zero-trust or
  multi-cloud deployments.
### Database nodes
| Users | Node capacity | Replicas | Storage | GCP | AWS | Azure |
| ----------- | -------------------- | -------- | ------- | ------------------- | --------------- | ----------------- |
| Up to 3,000 | 8 vCPU, 32 GB memory | 2 | 1.5 TB | `db-custom-8-30720` | `db.t3.2xlarge` | `Standard_D8s_v3` |
**Footnotes**:
- Consider adding more replicas if workspace activity exceeds 1,500 workspace
  builds per day or to achieve higher RPS.

@@ -0,0 +1,366 @@
# Coder Validated Architecture
Many customers operate Coder in complex organizational environments, consisting
of multiple business units, agencies, and/or subsidiaries. This can lead to
numerous Coder deployments, due to discrepancies in regulatory compliance, data
sovereignty, and level of funding across groups. The Coder Validated
Architecture (CVA) prescribes a Kubernetes-based deployment approach, enabling
your organization to deploy a stable Coder instance that is easier to maintain
and troubleshoot.
The following sections will detail the components of the Coder Validated
Architecture, provide guidance on how to configure and deploy these components,
and offer insights into how to maintain and troubleshoot your Coder environment.
- [General concepts](#general-concepts)
- [Kubernetes Infrastructure](#kubernetes-infrastructure)
- [PostgreSQL Database](#postgresql-database)
- [Operational readiness](#operational-readiness)
## Who is this document for?
This guide targets the following personas. It assumes a basic understanding of
cloud/on-premise computing, containerization, and the Coder platform.
| Role | Description |
| ------------------------- | ------------------------------------------------------------------------------ |
| Platform Engineers        | Responsible for deploying and operating the Coder deployment and infrastructure |
| Enterprise Architects | Responsible for architecting Coder deployments to meet enterprise requirements |
| Managed Service Providers | Entities that deploy and run Coder software as a service for customers |
## CVA Guidance
| CVA provides: | CVA does not provide: |
| ---------------------------------------------- | ---------------------------------------------------------------------------------------- |
| Single and multi-region K8s deployment options | Prescribing OS, or cloud vs. on-premise |
| Reference architectures for up to 3,000 users | An approval of your architecture; the CVA solely provides recommendations and guidelines |
| Best practices for building a Coder deployment | Recommendations for every possible deployment scenario |
> For higher level design principles and architectural best practices, see
> Coder's
> [Well-Architected Framework](https://coder.com/blog/coder-well-architected-framework).
## General concepts
This section outlines core concepts and terminology essential for understanding
Coder's architecture and deployment strategies.
### Administrator
An administrator is a user role within the Coder platform with elevated
privileges. Admins have access to administrative functions such as user
management, template definitions, insights, and deployment configuration.
### Coder control plane
Coder's control plane, also known as _coderd_, is the main service recommended
for deployment with multiple replicas to ensure high availability. It provides
an API for managing workspaces and templates, and serves the dashboard UI. In
addition, each _coderd_ replica hosts 3 Terraform [provisioners](#provisioner)
by default.
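The number of built-in provisioner daemons per replica is configurable. As a
sketch, using the standard `coder server` environment variable:

```shell
# Run the control plane with a custom number of built-in provisioner
# daemons per replica (3 is the default).
export CODER_PROVISIONER_DAEMONS=6
coder server
```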
### User
A [user](../../users/index.md) is an individual who utilizes the Coder platform
to develop, test, and deploy applications using workspaces. Users can select
available templates to provision workspaces. They interact with Coder using the
web interface, the CLI tool, or directly calling API methods.
### Workspace
A [workspace](../../../user-guides/workspace-management.md) refers to an
isolated development environment where users can write, build, and run code.
Workspaces are fully configurable and can be tailored to specific project
requirements, providing developers with a consistent and efficient development
environment. Workspaces can be autostarted and autostopped, enabling efficient
resource management.
Users can connect to workspaces using SSH or via workspace applications like
`code-server`, facilitating collaboration and remote access. Additionally,
workspaces can be parameterized, allowing users to customize settings and
configurations based on their unique needs. Workspaces are instantiated using
Coder templates and deployed on resources created by provisioners.
### Template
A [template](../../../admin/templates/index.md) in Coder is a predefined
configuration for creating workspaces. Templates streamline the process of
workspace creation by providing pre-configured settings, tooling, and
dependencies. They are built by template administrators on top of Terraform,
allowing for efficient management of infrastructure resources. Additionally,
templates can utilize Coder modules to leverage existing features shared with
other templates, enhancing flexibility and consistency across deployments.
Templates describe provisioning rules for infrastructure resources offered by
Terraform providers.
### Workspace Proxy
A [workspace proxy](../../../admin/networking/workspace-proxies.md) serves as a
relay connection option for developers connecting to their workspace over SSH, a
workspace app, or through port forwarding. It helps reduce network latency for
geo-distributed teams by minimizing the distance network traffic needs to
travel. Notably, workspace proxies do not handle dashboard connections or API
calls.
### Provisioner
Provisioners in Coder execute Terraform during workspace and template builds.
While the platform includes built-in provisioner daemons by default, there are
advantages to employing external provisioners. These external daemons provide
secure build environments and reduce server load, improving performance and
scalability. Each provisioner can handle a single concurrent workspace build,
allowing for efficient resource allocation and workload management.
### Registry
The [Coder Registry](https://registry.coder.com) is a platform where you can
find starter templates and _Modules_ for various cloud services and platforms.
Templates help create self-service development environments using
Terraform-defined infrastructure, while _Modules_ simplify template creation by
providing common features like workspace applications, third-party integrations,
or helper scripts.
Please note that the Registry is a hosted service and isn't available for
offline use.
## Kubernetes Infrastructure
Kubernetes is the recommended and supported platform for deploying Coder in the
enterprise. It is the hosting platform of choice for the large majority of
Coder's Fortune 500 customers, and it is the platform we build and test against
at Coder.
### General recommendations
In general, we recommend deploying Coder into its own cluster, separate from
production applications. Keep in mind that Coder runs development workloads, so
the cluster should be deployed as such, without production-level
configurations.
### Compute
Deploy your Kubernetes cluster with two node groups, one for Coder's control
plane, and another for user workspaces (if you intend on leveraging K8s for
end-user compute).
#### Control plane nodes
The Coder control plane node group must be static, to prevent scale-down events
from dropping pods, and thus dropping user connections to the dashboard UI and
their workspaces.
Coder's Helm Chart supports
[defining nodeSelectors, affinities, and tolerations](https://github.com/coder/coder/blob/e96652ebbcdd7554977594286b32015115c3f5b6/helm/coder/values.yaml#L221-L249)
to schedule the control plane pods on the appropriate node group.
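As an illustrative sketch of such Helm values (the node label and taint names
below are placeholders you would replace with your own):

```yaml
coder:
  nodeSelector:
    node-group: coder-control-plane # assumed node group label
  tolerations:
    - key: "coder-control-plane" # assumed taint key
      operator: "Exists"
      effect: "NoSchedule"
```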
#### Workspace nodes
Coder workspaces can be deployed either as Pods or Deployments in Kubernetes.
See our
[example Kubernetes workspace template](https://github.com/coder/coder/tree/main/examples/templates/kubernetes).
Configure the workspace node group to be auto-scaling, to dynamically allocate
compute as users start/stop workspaces at the beginning and end of their day.
Set nodeSelectors, affinities, and tolerations in Coder templates to assign
workspaces to the given node group:
```tf
resource "kubernetes_deployment" "coder" {
  spec {
    template {
      metadata {
        labels = {
          app = "coder-workspace"
        }
      }
      spec {
        affinity {
          pod_anti_affinity {
            preferred_during_scheduling_ignored_during_execution {
              weight = 1
              pod_affinity_term {
                label_selector {
                  match_expressions {
                    key      = "app.kubernetes.io/instance"
                    operator = "In"
                    values   = ["coder-workspace"]
                  }
                }
                topology_key = "kubernetes.io/hostname" # replace with your node group label
              }
            }
          }
        }
        toleration {
          # Add your tolerations here
        }
        node_selector = {
          # Add your node selectors here
        }
        container {
          image = "coder-workspace:latest"
          name  = "dev"
        }
      }
    }
  }
}
```
#### Node sizing
For sizing recommendations, see the below reference architectures:
- [Up to 1,000 users](1k-users.md)
- [Up to 2,000 users](2k-users.md)
- [Up to 3,000 users](3k-users.md)
### Networking
It is likely your enterprise deploys Kubernetes clusters with various networking
restrictions. With this in mind, Coder requires the following connectivity:
- Egress from workspace compute to the Coder control plane pods
- Egress from control plane pods to Coder's PostgreSQL database
- Egress from control plane pods to git and package repositories
- Ingress from user devices to the control plane Load Balancer or Ingress
controller
We recommend configuring your network policies in accordance with the above.
Note that Coder workspaces do not require any ports to be open.
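Because workspaces need no inbound ports, a default-deny ingress policy for
workspace pods is a reasonable starting point. A minimal sketch (namespace and
label names are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: coder-workspaces-deny-ingress
  namespace: coder-workspaces # assumed workspace namespace
spec:
  podSelector:
    matchLabels:
      app: coder-workspace # assumed workspace pod label
  policyTypes:
    - Ingress
  # No ingress rules: all inbound traffic to workspace pods is denied.
  # Egress (to coderd, git, and package repositories) remains allowed.
```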
### Storage
If running Coder workspaces as Kubernetes Pods or Deployments, you will need to
assign persistent storage. We recommend leveraging a
[supported Container Storage Interface (CSI) driver](https://kubernetes-csi.github.io/docs/drivers.html)
in your cluster, with Dynamic Provisioning and read/write, to provide on-demand
storage to end-user workspaces.
The following Kubernetes volume types have been validated by Coder internally,
and/or by our customers:
- [PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim)
- [NFS](https://kubernetes.io/docs/concepts/storage/volumes/#nfs)
- [subPath](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath)
- [cephfs](https://kubernetes.io/docs/concepts/storage/volumes/#cephfs)
Our
[example Kubernetes workspace template](https://github.com/coder/coder/blob/5b9a65e5c137232351381fc337d9784bc9aeecfc/examples/templates/kubernetes/main.tf#L191-L219)
provisions a PersistentVolumeClaim block storage device, attached to the
Deployment.
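A minimal sketch of such a claim in a template, assuming the standard
`kubernetes` Terraform provider and the `coder_workspace` data source (names
and sizes are placeholders):

```tf
resource "kubernetes_persistent_volume_claim" "home" {
  metadata {
    name      = "coder-${lower(data.coder_workspace.me.name)}-home"
    namespace = "coder-workspaces" # replace with your workspace namespace
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
  }
  # Let the workspace pod trigger binding with late-binding storage classes.
  wait_until_bound = false
}
```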
It is not recommended to mount volumes from the host node(s) into workspaces,
for security and reliability purposes. The below volume types are _not_
recommended for use with Coder:
- [Local](https://kubernetes.io/docs/concepts/storage/volumes/#local)
- [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)
Note that Coder's control plane filesystem is ephemeral, so no persistent
storage is required.
## PostgreSQL database
Coder requires access to an external PostgreSQL database to store user data,
workspace state, template files, and more. Depending on the scale of the
user-base, workspace activity, and High Availability requirements, the amount of
CPU and memory resources required by Coder's database may differ.
### Disaster recovery
Prepare internal scripts for dumping and restoring your database. We recommend
scheduling regular database backups, especially before upgrading Coder to a new
release. Coder does not support downgrades without initially restoring the
database to the prior version.
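A minimal backup/restore sketch using the standard PostgreSQL client tools (the
connection URL variable is a placeholder for your own secret management):

```shell
# Dump the Coder database before an upgrade.
pg_dump --format=custom --file=coder-backup.dump "$CODER_PG_CONNECTION_URL"

# Restore it when rolling back to the prior release.
pg_restore --clean --if-exists --dbname="$CODER_PG_CONNECTION_URL" coder-backup.dump
```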
### Performance efficiency
We highly recommend deploying the PostgreSQL instance in the same region (and if
possible, same availability zone) as the Coder server to optimize for low
latency connections. We recommend keeping latency under 10ms between the Coder
server and database.
When determining scaling requirements, take into account the following
considerations:
- `2 vCPU x 8 GB RAM x 512 GB storage`: A baseline for database requirements
  for a Coder deployment with fewer than 1,000 users and a low activity level
  (30% active users). This capacity should be sufficient to support 100
  external provisioners.
- Storage size depends on user activity, workspace builds, log verbosity,
overhead on database encryption, etc.
- Allocate two additional CPU cores to the database instance for every 1,000
  active users.
- Enable High Availability mode for the database engine for large-scale
  deployments.
If you enable
[database encryption](../../../admin/security/database-encryption.md) in Coder,
consider allocating an additional CPU core to every `coderd` replica.
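Taken together, the guidelines above amount to a rough heuristic, shown here as
an illustrative sketch only, not an official sizing formula:

```shell
# 2 vCPU baseline, plus 2 vCPU for every additional 1,000 active users.
active_users=2500
echo "recommended vCPUs: $(( 2 + 2 * (active_users / 1000) ))"
```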
#### Resource utilization guidelines
Below are general recommendations for sizing your PostgreSQL instance:
- Increase number of vCPU if CPU utilization or database latency is high.
- Allocate extra memory if database performance is poor, CPU utilization is low,
and memory utilization is high.
- Utilize faster disk options (higher IOPS) such as SSDs or NVMe drives for
  optimal performance and to possibly reduce database load.
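To check memory sizing in practice, standard PostgreSQL statistics views can
help; for example (a common query shown as a sketch, with the connection URL as
a placeholder):

```shell
# Buffer-cache hit ratio; sustained values well below ~0.99 often mean
# the instance would benefit from more memory.
psql "$CODER_PG_CONNECTION_URL" -c \
  "SELECT sum(blks_hit)::float / NULLIF(sum(blks_hit) + sum(blks_read), 0)
     AS cache_hit_ratio
   FROM pg_stat_database;"
```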
## Operational readiness
Operational readiness in Coder is about ensuring that everything is set up
correctly before launching a platform into production. It involves making sure
that the service is reliable, secure, and scales according to user-base needs.
Operational readiness is crucial because it helps prevent issues that could
affect workspace users' experience once the platform is live.
### Helm Chart Configuration
1. Reference our [Helm chart values file](../../../../helm/coder/values.yaml)
and identify the required values for deployment.
1. Create a `values.yaml` and add it to your version control system.
1. Determine the necessary environment variables. Here is the
[full list of supported server environment variables](../../../reference/cli/server.md).
1. Follow our documented
[steps for installing Coder via Helm](../../../install/kubernetes.md).
### Template configuration
1. Establish dedicated accounts for users with the _Template Administrator_
role.
1. Maintain Coder templates using
[version control](../../templates/managing-templates/change-management.md).
1. Consider implementing a GitOps workflow to automatically push new template
versions into Coder from git. For example, on GitHub, you can use the
[Setup Coder](https://github.com/marketplace/actions/setup-coder) action.
1. Evaluate enabling
[automatic template updates](../../templates/managing-templates/index.md#template-update-policies-enterprise-premium)
upon workspace startup.
### Observability
1. Enable the Prometheus endpoint (environment variable:
`CODER_PROMETHEUS_ENABLE`).
1. Deploy the
[Coder Observability bundle](https://github.com/coder/observability) to
leverage pre-configured dashboards, alerts, and runbooks for monitoring
Coder. This includes integrations between Prometheus, Grafana, Loki, and
Alertmanager.
1. Review the [Prometheus response](../../integrations/prometheus.md) and set up
alarms on selected metrics.
### User support
1. Incorporate [support links](../../setup/appearance.md#support-links) into
internal documentation accessible from the user context menu. Ensure that
hyperlinks are valid and lead to up-to-date materials.
1. Encourage the use of `coder support bundle` to allow workspace users to
generate and provide network-related diagnostic data.

@@ -0,0 +1,18 @@
# Integrations
Coder is highly extensible and is not limited to the platforms outlined in these
docs. The control plane can be provisioned on any VM or container compute, and
workspaces can include any Terraform resource. See our
[architecture diagram](../infrastructure/architecture.md) for more details.
You can host your deployment on almost any infrastructure. To learn how, read
our [installation guides](../../install/index.md).
<children></children>
The following resources may help as you're deploying Coder.
- [Coder packages: one-click install on cloud providers](https://github.com/coder/packages)
- [Deploy Coder offline](../../install/offline.md)
- [Supported resources (Terraform registry)](https://registry.terraform.io)
- [Writing custom templates](../templates/index.md)

@@ -0,0 +1,163 @@
# Island Browser Integration
<div>
<a href="https://github.com/ericpaulsen" style="text-decoration: none; color: inherit;">
<span style="vertical-align:middle;">Eric Paulsen</span>
<img src="https://github.com/ericpaulsen.png" width="24px" height="24px" style="vertical-align:middle; margin: 0px;"/>
</a>
</div>
April 24, 2024
---
[Island](https://www.island.io/) is an enterprise-grade browser, offering a
Chromium-based experience similar to popular web browsers like Chrome and Edge.
It includes built-in security features for corporate applications and data,
aiming to bridge the gap between consumer-focused browsers and the security
needs of the enterprise.
Coder natively integrates with Island's feature set, which includes data loss
protection (DLP), application awareness, browser session recording, and single
sign-on (SSO). This guide documents these feature categories and how they apply
to your Coder deployment.
## General Configuration
### Create an Application Group for Coder
We recommend creating an Application Group specific to Coder in the Island
Management console. This Application Group object will be referenced when
creating browser policies.
[See the Island documentation for creating an Application Group](https://documentation.island.io/docs/create-and-configure-an-application-group-object).
## Advanced Data Loss Protection
Integrate Island's advanced data loss prevention (DLP) capabilities with Coder's
cloud development environment (CDE), enabling you to control the “last mile”
between developers' CDEs and their local devices, ensuring that sensitive IP
remains in your centralized environment.
### Block cut, copy, paste, printing, screen share
1. [Create a Data Sandbox Profile](https://documentation.island.io/docs/create-and-configure-a-data-sandbox-profile)
1. Configure the following actions to allow/block (based on your security
requirements):
- Screenshot and Screen Share
- Printing
- Save Page
- Clipboard Limitations
1. [Create a Policy Rule](https://documentation.island.io/docs/create-and-configure-a-policy-rule-general)
to apply the Data Sandbox Profile
1. Define the Coder Application group as the Destination Object
1. Define the Data Sandbox Profile as the Action in the Last Mile Protection
section
### Conditionally allow copy on Coder's CLI authentication page
1. [Create a URL Object](https://documentation.island.io/docs/create-and-configure-a-policy-rule-general)
with the following configuration:
- **Include**
- **URL type**: Wildcard
- **URL address**: `coder.example.com/cli-auth`
- **Casing**: Insensitive
1. [Create a Data Sandbox Profile](https://documentation.island.io/docs/create-and-configure-a-data-sandbox-profile)
1. Configure action to allow copy/paste
1. [Create a Policy Rule](https://documentation.island.io/docs/create-and-configure-a-policy-rule-general)
to apply the Data Sandbox Profile
1. Define the URL Object you created as the Destination Object
1. Define the Data Sandbox Profile as the Action in the Last Mile Protection
section
### Prevent file upload/download from the browser
1. Create Protection Profiles for both upload/download
- [Upload documentation](https://documentation.island.io/docs/create-and-configure-an-upload-protection-profile)
- [Download documentation](https://documentation.island.io/v1/docs/en/create-and-configure-a-download-protection-profile)
1. [Create a Policy Rule](https://documentation.island.io/docs/create-and-configure-a-policy-rule-general)
to apply the Protection Profiles
1. Define the Coder Application group as the Destination Object
1. Define the applicable Protection Profile as the Action in the Data Protection
section
### Scan files for sensitive data
1. [Create a Data Loss Prevention scanner](https://documentation.island.io/docs/create-a-data-loss-prevention-scanner)
1. [Create a Policy Rule](https://documentation.island.io/docs/create-and-configure-a-policy-rule-general)
to apply the DLP Scanner
1. Define the Coder Application group as the Destination Object
1. Define the DLP Scanner as the Action in the Data Protection section
## Application Awareness and Boundaries
Ensure that Coder is only accessed through the Island browser, guaranteeing that
your browser-level DLP policies are always enforced, and developers can't
sidestep such policies simply by using another browser.
### Configure browser enforcement, conditional access policies
1. Create a conditional access policy for your configured identity provider.
> Note: the configured IdP must be the same for both Coder and Island
- [Azure Active Directory/Entra ID](https://documentation.island.io/docs/configure-browser-enforcement-for-island-with-azure-ad#create-and-apply-a-conditional-access-policy)
- [Okta](https://documentation.island.io/docs/configure-browser-enforcement-for-island-with-okta)
- [Google](https://documentation.island.io/docs/configure-browser-enforcement-for-island-with-google-enterprise)
## Browser Activity Logging
Govern and audit in-browser terminal and IDE sessions using Island, such as
screenshots, mouse clicks, and keystrokes.
### Activity Logging Module
1. [Create an Activity Logging Profile](https://documentation.island.io/docs/create-and-configure-an-activity-logging-profile)
Supported browser events include:
- Web Navigation
- File Download
- File Upload
- Clipboard/Drag & Drop
- Print
- Save As
- Screenshots
- Mouse Clicks
- Keystrokes
1. [Create a Policy Rule](https://documentation.island.io/docs/create-and-configure-a-policy-rule-general)
to apply the Activity Logging Profile
1. Define the Coder Application group as the Destination Object
1. Define the Activity Logging Profile as the Action in the Security &
Visibility section
## Identity-aware logins (SSO)
Integrate Island's identity management system with Coder's authentication
mechanisms to enable identity-aware logins.
### Configure single sign-on (SSO) seamless authentication between Coder and Island
Configure the same identity provider (IdP) for both your Island and Coder
deployment. Upon initial login to the Island browser, the user's session token
will automatically be passed to Coder and authenticate their Coder session.

@@ -0,0 +1,175 @@
# JFrog Artifactory Integration
<div>
<a href="https://github.com/matifali" style="text-decoration: none; color: inherit;">
<span style="vertical-align:middle;">M Atif Ali</span>
<img src="https://github.com/matifali.png" width="24px" height="24px" style="vertical-align:middle; margin: 0px;"/>
</a>
</div>
January 24, 2024
---
Use Coder and JFrog Artifactory together to secure your development environments
without disturbing your developers' existing workflows.
This guide will demonstrate how to use JFrog Artifactory as a package registry
within a workspace.
## Requirements
- A JFrog Artifactory instance
- 1:1 mapping of users in Coder to users in Artifactory by email address or
username
- Repositories configured in Artifactory for each package manager you want to
use
## Provisioner Authentication
The most straightforward way to authenticate your template with Artifactory is
by using our official Coder [modules](https://registry.coder.com). We publish
two types of modules that automate the JFrog Artifactory and Coder integration.
1. [JFrog-OAuth](https://registry.coder.com/modules/jfrog-oauth)
2. [JFrog-Token](https://registry.coder.com/modules/jfrog-token)
### JFrog-OAuth
This module can be used with self-hosted (on-premises) JFrog Artifactory
instances, as it requires configuring a custom integration. This integration
benefits from Coder's
[external-auth](https://coder.com/docs/admin/external-auth) feature, allows
each user to authenticate with Artifactory using an OAuth flow, and issues
user-scoped tokens to each user.
To set this up, follow these steps:
1. Modify your Helm chart `values.yaml` for JFrog Artifactory to add:
```yaml
artifactory:
  enabled: true
  frontend:
    extraEnvironmentVariables:
      - name: JF_FRONTEND_FEATURETOGGLER_ACCESSINTEGRATION
        value: "true"
  access:
    accessConfig:
      integrations-enabled: true
      integration-templates:
        - id: "1"
          name: "CODER"
          redirect-uri: "https://CODER_URL/external-auth/jfrog/callback"
          scope: "applied-permissions/user"
```
> Note: Replace `CODER_URL` with your Coder deployment URL, e.g.,
> <coder.example.com>
2. Create a new Application Integration by going to
<https://JFROG_URL/ui/admin/configuration/integrations/new> and select the
Application Type as the integration you created in step 1.
![JFrog Platform new integration](../../images/guides/artifactory-integration/jfrog-oauth-app.png)
3. Add a new
[external authentication](https://coder.com/docs/admin/external-auth) to
Coder by setting these environment variables:
```env
# JFrog Artifactory External Auth
CODER_EXTERNAL_AUTH_1_ID="jfrog"
CODER_EXTERNAL_AUTH_1_TYPE="jfrog"
CODER_EXTERNAL_AUTH_1_CLIENT_ID="YYYYYYYYYYYYYYY"
CODER_EXTERNAL_AUTH_1_CLIENT_SECRET="XXXXXXXXXXXXXXXXXXX"
CODER_EXTERNAL_AUTH_1_DISPLAY_NAME="JFrog Artifactory"
CODER_EXTERNAL_AUTH_1_DISPLAY_ICON="/icon/jfrog.svg"
CODER_EXTERNAL_AUTH_1_AUTH_URL="https://JFROG_URL/ui/authorization"
CODER_EXTERNAL_AUTH_1_SCOPES="applied-permissions/user"
```
> Note: Replace `JFROG_URL` with your JFrog Artifactory base URL, e.g.,
> <example.jfrog.io>
4. Create or edit a Coder template and use the
[JFrog-OAuth](https://registry.coder.com/modules/jfrog-oauth) module to
configure the integration.
```tf
module "jfrog" {
  source   = "registry.coder.com/modules/jfrog-oauth/coder"
  version  = "1.0.0"
  agent_id = coder_agent.example.id

  jfrog_url             = "https://jfrog.example.com"
  configure_code_server = true # this depends on the code-server

  username_field = "username" # If you are using GitHub to log in to both Coder and Artifactory, use username_field = "username"

  package_managers = {
    "npm" : "npm",
    "go" : "go",
    "pypi" : "pypi"
  }
}
```
### JFrog-Token
This module makes use of the
[Artifactory terraform provider](https://registry.terraform.io/providers/jfrog/artifactory/latest/docs)
and an admin-scoped token to create user-scoped tokens for each user by matching
their Coder email or username with Artifactory. This can be used for both SaaS
and self-hosted (on-premises) Artifactory instances.
To set this up, follow these steps:
1. Get a JFrog access token from your Artifactory instance. The token must be an
[admin token](https://registry.terraform.io/providers/jfrog/artifactory/latest/docs#access-token)
with scope `applied-permissions/admin`.
2. Create or edit a Coder template and use the
[JFrog-Token](https://registry.coder.com/modules/jfrog-token) module to
configure the integration and pass the admin token. It is recommended to
store the token in a sensitive Terraform variable to prevent it from being
displayed in plain text in the Terraform state.
```tf
variable "artifactory_access_token" {
  type      = string
  sensitive = true
}

module "jfrog" {
  source   = "registry.coder.com/modules/jfrog-token/coder"
  version  = "1.0.0"
  agent_id = coder_agent.example.id

  jfrog_url                = "https://example.jfrog.io"
  configure_code_server    = true # this depends on the code-server
  artifactory_access_token = var.artifactory_access_token

  package_managers = {
    "npm" : "npm",
    "go" : "go",
    "pypi" : "pypi"
  }
}
```
<blockquote class="info">
The admin-level access token is used to provision user tokens and is never exposed to
developers or stored in workspaces.
</blockquote>
If you do not want to use the official modules, you can check an example
template that uses Docker as the underlying compute
[here](https://github.com/coder/coder/tree/main/examples/jfrog/docker). The
same concepts apply to all compute types.
## Offline Deployments
See the
[offline deployments](../templates/extending-templates/modules.md#offline-installations)
section for instructions on how to use coder-modules in an offline environment
with Artifactory.
## More reading
- See the full example template
[here](https://github.com/coder/coder/tree/main/examples/jfrog/docker).
- To serve extensions from your own VS Code Marketplace, check out
[code-marketplace](https://github.com/coder/code-marketplace#artifactory-storage).

@@ -0,0 +1,70 @@
# Integrating JFrog Xray with Coder Kubernetes Workspaces
<div>
<a href="https://github.com/matifali" style="text-decoration: none; color: inherit;">
<span style="vertical-align:middle;">Muhammad Atif Ali</span>
<img src="https://github.com/matifali.png" width="24px" height="24px" style="vertical-align:middle; margin: 0px;"/>
</a>
</div>
March 17, 2024
---
This guide will walk you through the process of adding
[JFrog Xray](https://jfrog.com/xray/) integration to Coder Kubernetes workspaces
using Coder's [JFrog Xray Integration](https://github.com/coder/coder-xray).
## Prerequisites
- A self-hosted JFrog Platform instance.
- Kubernetes workspaces running on Coder.
## Deploying the Coder - JFrog Xray Integration
1. Create a JFrog Platform
[Access Token](https://jfrog.com/help/r/jfrog-platform-administration-documentation/access-tokens)
with a user that has the read
[permission](https://jfrog.com/help/r/jfrog-platform-administration-documentation/permissions)
for the repositories you want to scan.
1. Create a Coder [token](../../reference/cli/tokens_create.md#tokens-create)
with a user that has the [`owner`](../users#roles) role.
1. Create Kubernetes secrets for the JFrog Xray and Coder tokens.
```bash
kubectl create secret generic coder-token --from-literal=coder-token='<token>'
kubectl create secret generic jfrog-token --from-literal=user='<user>' --from-literal=token='<token>'
```
1. Deploy the Coder - JFrog Xray integration.
```bash
# Replace <CODER_WORKSPACES_NAMESPACE> with your Coder workspaces namespace
helm repo add coder-xray https://helm.coder.com/coder-xray
helm upgrade --install coder-xray coder-xray/coder-xray \
  --namespace coder-xray \
  --create-namespace \
  --set namespace="<CODER_WORKSPACES_NAMESPACE>" \
  --set coder.url="https://<your-coder-url>" \
  --set coder.secretName="coder-token" \
  --set artifactory.url="https://<your-artifactory-url>" \
  --set artifactory.secretName="jfrog-token"
```
### Updating the Coder template
[`coder-xray`](https://github.com/coder/coder-xray) will scan all Kubernetes
workspaces in the specified namespace. It requires the workspace `image` to be
available in Artifactory and indexed by Xray. To ensure that the images are
available in Artifactory, update the Coder template to use the Artifactory
registry.
```tf
image = "<ARTIFACTORY_URL>/<REPO>/<IMAGE>:<TAG>"
```
> **Note**: To authenticate with the Artifactory registry, you may need to
> create a
> [Docker config](https://jfrog.com/help/r/jfrog-artifactory-documentation/docker-advanced-topics)
> and use it in the `imagePullSecrets` field of the kubernetes pod. See this
> [guide](../../tutorials/image-pull-secret.md) for more information.
![JFrog Xray Integration](../../images/guides/xray-integration/example.png)

@@ -0,0 +1,78 @@
# Kubernetes event logs
To stream Kubernetes events into your workspace startup logs, you can use
Coder's [`coder-logstream-kube`](https://github.com/coder/coder-logstream-kube)
tool. `coder-logstream-kube` provides useful information about the workspace pod
or deployment, such as:
- Causes of pod provisioning failures, or why a pod is stuck in a pending state.
- Visibility into when pods are OOMKilled, or when they are evicted.
## Prerequisites
`coder-logstream-kube` works best with the
[`kubernetes_deployment`](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment)
Terraform resource, which requires the `coder` service account to have
permission to create deployments. For example, if you use
[Helm](../../install/kubernetes.md#install-coder-with-helm) to install Coder,
you should set `coder.serviceAccount.enableDeployments=true` in your
`values.yaml`:
```diff
coder:
serviceAccount:
workspacePerms: true
- enableDeployments: false
+ enableDeployments: true
annotations: {}
name: coder
```
> Note: This is only required for Coder versions < 0.28.0; it is the default
> value for Coder versions >= 0.28.0.
## Installation
Install the `coder-logstream-kube` Helm chart on the cluster where the
deployment is running.
```shell
helm repo add coder-logstream-kube https://helm.coder.com/logstream-kube
helm install coder-logstream-kube coder-logstream-kube/coder-logstream-kube \
--namespace coder \
--set url=<your-coder-url-including-http-or-https>
```
## Example logs
Here is an example of the logs you can expect to see in the workspace startup
logs:
### Normal pod deployment
![normal pod deployment](../../images/admin/integrations/coder-logstream-kube-logs-normal.png)
### Wrong image
![Wrong image name](../../images/admin/integrations/coder-logstream-kube-logs-wrong-image.png)
### Kubernetes quota exceeded
![Kubernetes quota exceeded](../../images/admin/integrations/coder-logstream-kube-logs-quota-exceeded.png)
### Pod crash loop
![Pod crash loop](../../images/admin/integrations/coder-logstream-kube-logs-pod-crashed.png)
## How it works
Kubernetes provides an
[informers](https://pkg.go.dev/k8s.io/client-go/informers) API that streams pod
and event data from the API server.
`coder-logstream-kube` listens for pod creation events with containers that have
the `CODER_AGENT_TOKEN` environment variable set. All pod events are streamed as
logs to the Coder API using the agent token for authentication. For more
details, see the
[coder-logstream-kube](https://github.com/coder/coder-logstream-kube)
repository.
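For reference, the detection hinges on the agent token: templates based on the
standard Kubernetes example inject it roughly like this (a sketch; resource and
image names are illustrative):

```tf
resource "kubernetes_deployment" "main" {
  # ...
  spec {
    template {
      spec {
        container {
          name  = "dev"
          image = "codercom/enterprise-base:ubuntu"
          # coder-logstream-kube picks up pods whose containers set this
          # variable and streams their events to the matching workspace.
          env {
            name  = "CODER_AGENT_TOKEN"
            value = coder_agent.main.token
          }
        }
      }
    }
  }
}
```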
@ -0,0 +1,237 @@
# Additional clusters
With Coder, you can deploy workspaces in additional Kubernetes clusters using
different
[authentication methods](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#authentication)
in the Terraform provider.
![Region picker in "Create Workspace" screen](../../images/admin/integrations/kube-region-picker.png)
## Option 1) Kubernetes contexts and kubeconfig
First, create a kubeconfig file with
[multiple contexts](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
```shell
kubectl config get-contexts
CURRENT NAME CLUSTER
workspaces-europe-west2-c workspaces-europe-west2-c
* workspaces-us-central1-a workspaces-us-central1-a
```
### Kubernetes control plane
If you deployed Coder on Kubernetes, you can attach a kubeconfig as a secret.
This assumes Coder is deployed in the `coder` namespace and your kubeconfig file
is at `~/.kube/config`.
```shell
kubectl create secret generic kubeconfig-secret -n coder --from-file=~/.kube/config
```
Modify your Helm values to mount the secret:
```yaml
coder:
# ...
volumes:
- name: "kubeconfig-mount"
secret:
secretName: "kubeconfig-secret"
volumeMounts:
- name: "kubeconfig-mount"
mountPath: "/mnt/secrets/kube"
readOnly: true
```
[Upgrade Coder](../../install/kubernetes.md#upgrading-coder-via-helm) with these
new values.
### VM control plane
If you deployed Coder on a VM, copy the kubeconfig file to
`/home/coder/.kube/config`.
### Create a Coder template
You can start from our
[example template](https://github.com/coder/coder/tree/main/examples/templates/kubernetes).
From there, add
[template parameters](../templates/extending-templates/parameters.md) to allow
developers to pick their desired cluster.
```tf
# main.tf
data "coder_parameter" "kube_context" {
name = "kube_context"
display_name = "Cluster"
default = "workspaces-us-central1-a"
mutable = false
option {
name = "US Central"
icon = "/emojis/1f33d.png"
value = "workspaces-us-central1-a"
}
option {
name = "Europe West"
icon = "/emojis/1f482.png"
value = "workspaces-europe-west2-c"
}
}
provider "kubernetes" {
config_path = "~/.kube/config" # or /mnt/secrets/kube/config for Kubernetes
config_context = data.coder_parameter.kube_context.value
}
```
## Option 2) Kubernetes ServiceAccounts
Alternatively, you can authenticate with remote clusters using ServiceAccount
tokens. Coder can store these secrets on your behalf with
[managed Terraform variables](../templates/extending-templates/variables.md).
These could also be fetched from Kubernetes secrets or even
[Hashicorp Vault](https://registry.terraform.io/providers/hashicorp/vault/latest/docs/data-sources/generic_secret).
This guide assumes you have a `coder-workspaces` namespace on your remote
cluster. Change the namespace accordingly.
### Create a ServiceAccount
Run this command against your remote cluster to create a ServiceAccount, Role,
RoleBinding, and token:
```shell
kubectl apply -n coder-workspaces -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: coder-v2
---
apiVersion: v1
kind: Secret
metadata:
name: coder-v2
annotations:
kubernetes.io/service-account.name: coder-v2
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: coder-v2
rules:
- apiGroups: ["", "apps", "networking.k8s.io"]
resources: ["persistentvolumeclaims", "pods", "deployments", "services", "secrets", "pods/exec","pods/log", "events", "networkpolicies", "serviceaccounts"]
verbs: ["create", "get", "list", "watch", "update", "patch", "delete", "deletecollection"]
- apiGroups: ["metrics.k8s.io", "storage.k8s.io"]
resources: ["pods", "storageclasses"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: coder-v2
subjects:
- kind: ServiceAccount
name: coder-v2
roleRef:
kind: Role
name: coder-v2
apiGroup: rbac.authorization.k8s.io
EOF
```
The output should be similar to:
```text
serviceaccount/coder-v2 created
secret/coder-v2 created
role.rbac.authorization.k8s.io/coder-v2 created
rolebinding.rbac.authorization.k8s.io/coder-v2 created
```
### Modify the Kubernetes template
You can start from our
[example template](https://github.com/coder/coder/tree/main/examples/templates/kubernetes).
```tf
variable "host" {
description = "Cluster host address"
sensitive = true
}
variable "cluster_ca_certificate" {
description = "Cluster CA certificate (base64 encoded)"
sensitive = true
}
variable "token" {
description = "ServiceAccount token (base64 encoded)"
sensitive = true
}
variable "namespace" {
description = "Namespace"
}
provider "kubernetes" {
host = var.host
cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
token = base64decode(var.token)
}
```
### Create Coder template with managed variables
Fetch the values from the secret and pass them to Coder. This should work on
macOS and Linux.
To get the cluster address:
```shell
kubectl cluster-info
Kubernetes control plane is running at https://example.domain:6443
export CLUSTER_ADDRESS=https://example.domain:6443
```
To fetch the CA certificate and token:
```shell
export CLUSTER_CA_CERTIFICATE=$(kubectl get secrets coder-v2 -n coder-workspaces -o jsonpath="{.data.ca\.crt}")
export CLUSTER_SERVICEACCOUNT_TOKEN=$(kubectl get secrets coder-v2 -n coder-workspaces -o jsonpath="{.data.token}")
```
Create the template with these values:
```shell
coder templates push \
--variable host=$CLUSTER_ADDRESS \
--variable cluster_ca_certificate=$CLUSTER_CA_CERTIFICATE \
--variable token=$CLUSTER_SERVICEACCOUNT_TOKEN \
--variable namespace=coder-workspaces
```
If you're on a Windows machine (or if one of the commands fails), try grabbing
the values manually:
```shell
# Get cluster API address
kubectl cluster-info
# Get cluster CA and token (base64 encoded)
kubectl get secrets coder-v2 -n coder-workspaces -o jsonpath="{.data}"
coder templates push \
--variable host=API_ADDRESS \
--variable cluster_ca_certificate=CLUSTER_CA_CERTIFICATE \
--variable token=CLUSTER_SERVICEACCOUNT_TOKEN \
--variable namespace=coder-workspaces
```
@ -0,0 +1,23 @@
# Provisioning with OpenTofu
<!-- Keeping this in as a placeholder for supporting OpenTofu. We should fix support for custom terraform binaries ASAP. -->
> ⚠️ This guide is a work in progress. We do not officially support using custom
> Terraform binaries in your Coder deployment. To track progress on the work,
> see this related [GitHub issue](https://github.com/coder/coder/issues/12009).
Coder deployments support any custom Terraform binary, including
[OpenTofu](https://opentofu.org/docs/), an open-source alternative to
Terraform.
> You can read more about OpenTofu and Hashicorp's licensing in our
> [blog post](https://coder.com/blog/hashicorp-license) on the Terraform
> licensing changes.
## Using a custom Terraform binary
You can use a custom Terraform binary in your deployment as long as it is in
`PATH` and within the
[supported versions](https://github.com/coder/coder/blob/f57ce97b5aadd825ddb9a9a129bb823a3725252b/provisioner/terraform/install.go#L22-L25).
The hardcoded version check ensures compatibility with our
[example templates](https://github.com/coder/coder/tree/main/examples/templates).
@ -101,7 +101,7 @@ spec:
`CODER_PROMETHEUS_COLLECT_AGENT_STATS` before they can be retrieved from the
deployment. They will always be available from the agent.
-<!-- Code generated by 'make docs/admin/prometheus.md'. DO NOT EDIT -->
+<!-- Code generated by 'make docs/admin/integrations/prometheus.md'. DO NOT EDIT -->
| Name | Type | Description | Labels |
| ------------------------------------------------------------- | --------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ |
@ -183,4 +183,4 @@ deployment. They will always be available from the agent.
| `promhttp_metric_handler_requests_in_flight` | gauge | Current number of scrapes being served. | |
| `promhttp_metric_handler_requests_total` | counter | Total number of scrapes by HTTP status code. | `code` |
-<!-- End generated by 'make docs/admin/prometheus.md'. -->
+<!-- End generated by 'make docs/admin/integrations/prometheus.md'. -->
@ -0,0 +1,48 @@
# Integrating HashiCorp Vault with Coder
<div>
<a href="https://github.com/matifali" style="text-decoration: none; color: inherit;">
<span style="vertical-align:middle;">Muhammad Atif Ali</span>
<img src="https://github.com/matifali.png" width="24px" height="24px" style="vertical-align:middle; margin: 0px;"/>
</a>
</div>
August 05, 2024
---
This guide will walk you through the process of adding
[HashiCorp Vault](https://www.vaultproject.io/) integration to Coder workspaces.
Coder provides official Terraform modules that make this integration
straightforward.
## `vault-github`
[`vault-github`](https://registry.coder.com/modules/vault-github) is a Terraform
module that allows you to authenticate with Vault using a GitHub token. This
module uses the existing GitHub [external authentication](../external-auth.md)
to get the token and authenticate with Vault.
To use this module, add the following code to your Terraform configuration:
```tf
module "vault" {
source = "registry.coder.com/modules/vault-github/coder"
version = "1.0.7"
agent_id = coder_agent.example.id
vault_addr = "https://vault.example.com"
coder_github_auth_id = "my-github-auth-id"
}
```
This module will install and authenticate the `vault` CLI in your Coder
workspace.
Users can then use the `vault` CLI to interact with Vault. For example, to get
a key-value secret:
```shell
vault kv get -namespace=YOUR_NAMESPACE -mount=MOUNT_NAME SECRET_NAME
```
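Secrets can also be consumed server-side at template build time and passed to
the workspace. A sketch, assuming a KV v2 engine mounted at `secrets` with a
hypothetical `db-creds` secret:

```tf
data "vault_kv_secret_v2" "db" {
  mount = "secrets"  # hypothetical KV v2 mount
  name  = "db-creds" # hypothetical secret name
}

resource "coder_agent" "example" {
  os   = "linux"
  arch = "amd64"
  # Expose the secret to the workspace as an environment variable
  env = {
    DB_PASSWORD = data.vault_kv_secret_v2.db.data["password"]
  }
}
```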
@ -0,0 +1,47 @@
# Licensing
Some features are only accessible with a Premium or Enterprise license. See our
[pricing page](https://coder.com/pricing) for more details. To try Premium
features, you can [request a trial](https://coder.com/trial) or
[contact sales](https://coder.com/contact).
<!-- markdown-link-check-disable -->
> If you are an existing customer, you can learn more about our new Premium
> plan in the [Coder v2.16 blog post](https://coder.com/blog/release-recap-2-16-0).
<!-- markdown-link-check-enable -->
## Adding your license key
There are two ways to add a license to a Coder deployment:
<div class="tabs">
### Coder UI
First, ensure you have a license key
([request a trial](https://coder.com/trial)).
With an `Owner` account, navigate to `Deployment -> Licenses`, select
`Add a license`, then drag or select the license file with the `jwt` extension.
![Add License UI](../../images/add-license-ui.png)
### Coder CLI
First, ensure you have a license key
([request a trial](https://coder.com/trial)) and the
[Coder CLI](../../install/cli.md) installed.
1. Save your license key to disk and make note of the path
2. Open a terminal
3. Ensure you are logged into your Coder deployment
`coder login <access url>`
4. Run
`coder licenses add -f <path to your license key>`
</div>
@ -3,16 +3,18 @@
Coder includes an operator-friendly deployment health page that provides a
number of details about the health of your Coder deployment.
![Health check in Coder Dashboard](../../images/admin/monitoring/health-check.png)
You can view it at `https://${CODER_URL}/health`, or you can alternatively view
the
-[JSON response directly](../reference/api/debug.md#debug-info-deployment-health).
+[JSON response directly](../../reference/api/debug.md#debug-info-deployment-health).
The deployment health page is broken up into the following sections:
## Access URL
The Access URL section shows checks related to Coder's
-[access URL](./configure.md#access-url).
+[access URL](../setup/index.md#access-url).
Coder will periodically send a GET request to `${CODER_ACCESS_URL}/healthz` and
validate that the response is `200 OK`. The expected response body is also the
@ -26,7 +28,7 @@ _Access URL not set_
**Problem:** no access URL has been configured.
-**Solution:** configure an [access URL](./configure.md#access-url) for Coder.
+**Solution:** configure an [access URL](../setup/index.md#access-url) for Coder.
### EACS02
@ -107,7 +109,7 @@ query fails.
_Database Latency High_
**Problem:** This code is returned if the median latency is higher than the
-[configured threshold](../reference/cli/server.md#--health-check-threshold-database).
+[configured threshold](../../reference/cli/server.md#--health-check-threshold-database).
This may not be an error as such, but is an indication of a potential issue.
**Solution:** Investigate the sizing of the configured database with regard to
@ -118,9 +120,9 @@ configured threshold to a higher value (this will not address the root cause).
> [!TIP]
>
> - You can enable
-> [detailed database metrics](../reference/cli/server.md#--prometheus-collect-db-metrics)
+> [detailed database metrics](../../reference/cli/server.md#--prometheus-collect-db-metrics)
> in Coder's Prometheus endpoint.
-> - If you have [tracing enabled](../reference/cli/server.md#--trace), these
+> - If you have [tracing enabled](../../reference/cli/server.md#--trace), these
> traces may also contain useful information regarding Coder's database
> activity.
@ -129,9 +131,9 @@ configured threshold to a higher value (this will not address the root cause).
Coder workspace agents may use
[DERP (Designated Encrypted Relay for Packets)](https://tailscale.com/blog/how-tailscale-works/#encrypted-tcp-relays-derp)
to communicate with Coder. This requires connectivity to a number of configured
-[DERP servers](../reference/cli/server.md#--derp-config-path) which are used to
-relay traffic between Coder and workspace agents. Coder periodically queries the
-health of its configured DERP servers and may return one or more of the
+[DERP servers](../../reference/cli/server.md#--derp-config-path) which are used
+to relay traffic between Coder and workspace agents. Coder periodically queries
+the health of its configured DERP servers and may return one or more of the
following:
### EDERP01
@ -148,7 +150,7 @@ misconfigured reverse HTTP proxy. Additionally, while workspace users should
still be able to reach their workspaces, connection performance may be degraded.
> **Note:** This may also be shown if you have
-> [forced websocket connections for DERP](../reference/cli/server.md#--derp-force-websockets).
+> [forced websocket connections for DERP](../../reference/cli/server.md#--derp-force-websockets).
**Solution:** ensure that any proxies you use allow connection upgrade with the
`Upgrade: derp` header.
@ -181,7 +183,7 @@ to establish [direct connections](../networking/stun.md). Without at least one
working STUN server, direct connections may not be possible.
**Solution:** Ensure that the
-[configured STUN severs](../reference/cli/server.md#derp-server-stun-addresses)
+[configured STUN servers](../../reference/cli/server.md#--derp-server-stun-addresses)
are reachable from Coder and that UDP traffic can be sent/received on the
configured port.
@ -205,7 +207,8 @@ for long-lived connections:
- Between users interacting with Coder's Web UI (for example, the built-in
terminal, or VSCode Web),
- Between workspace agents and `coderd`,
-- Between Coder [workspace proxies](../admin/workspace-proxies.md) and `coderd`.
+- Between Coder [workspace proxies](../networking/workspace-proxies.md) and
+  `coderd`.
Any issues causing failures to establish WebSocket connections will result in
**severe** impairment of functionality for users. To validate this
@ -250,8 +253,8 @@ to write a message.
## Workspace Proxy
-If you have configured [Workspace Proxies](../admin/workspace-proxies.md), Coder
-will periodically query their availability and show their status here.
+If you have configured [Workspace Proxies](../networking/workspace-proxies.md),
+Coder will periodically query their availability and show their status here.
### EWP01
@ -292,10 +295,10 @@ be built until there is at least one provisioner daemon running.
**Solution:**
If you are using
-[External Provisioner Daemons](./provisioners.md#external-provisioners), ensure
+[External Provisioner Daemons](../provisioners.md#external-provisioners), ensure
that they are able to successfully connect to Coder. Otherwise, ensure
-[`--provisioner-daemons`](../reference/cli/server.md#provisioner-daemons) is set
-to a value greater than 0.
+[`--provisioner-daemons`](../../reference/cli/server.md#--provisioner-daemons)
+is set to a value greater than 0.
> Note: This may be a transient issue if you are currently in the process of
> updating your deployment.
@ -330,17 +333,6 @@ version of Coder.
> Note: This may be a transient issue if you are currently in the process of
> updating your deployment.
-### EIF01
-_Interface with Small MTU_
-**Problem:** One or more local interfaces have MTU smaller than 1378, which is
-the minimum MTU for Coder to establish direct connections without fragmentation.
-**Solution:** Since IP fragmentation can be a source of performance problems, we
-recommend you disable the interface when using Coder or
-[disable direct connections](../../cli#--disable-direct-connections)
## EUNKNOWN
_Unknown Error_
@ -0,0 +1,24 @@
# Monitoring Coder
Learn about the tools, techniques, and best practices to monitor your Coder
deployment.
## Quick Start: Observability Helm Chart
Deploy Prometheus, Grafana, Alert Manager, and pre-built dashboards on your
Kubernetes cluster to monitor the Coder control plane, provisioners, and
workspaces.
![Grafana Dashboard](../../images/admin/monitoring/grafana-dashboard.png)
Learn how to install it and read the docs in the
[Observability Helm Chart GitHub repository](https://github.com/coder/observability).
## Table of Contents
- [Logs](./logs.md): Learn how to access Coder server logs, agent logs, and
  even how to expose Kubernetes pod scheduling logs.
- [Metrics](./metrics.md): Learn about the valuable metrics to measure on a
Coder deployment, regardless of your monitoring stack.
- [Health Check](./health-check.md): Learn about the periodic health check and
error codes that run on Coder deployments.
@ -0,0 +1,59 @@
# Logs
All Coder services log to standard output, which can be critical for identifying
errors and monitoring Coder's deployment health. As with any service, logs can
be captured via Splunk, Datadog, Grafana Loki, or other ingestion tools.
## `coderd` Logs
By default, the Coder server exports human-readable logs to standard output. You
can access these logs via `kubectl logs deployment/coder -n <coder-namespace>`
on Kubernetes or `journalctl -u coder` if you deployed Coder on a host
machine/VM.
- To change the log format/location, you can set the
  [`CODER_LOGGING_HUMAN`](../../reference/cli/server.md#--log-human) and
  [`CODER_LOGGING_JSON`](../../reference/cli/server.md#--log-json) server config
  options.
- To only display certain types of logs, use the
  [`CODER_LOG_FILTER`](../../reference/cli/server.md#-l---log-filter) server
  config option.
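On Kubernetes, these options can be set as environment variables in your Helm
`values.yaml`. A sketch (the filter regex is illustrative):

```yaml
coder:
  env:
    # Write JSON logs to stdout for ingestion tools
    - name: CODER_LOGGING_JSON
      value: /dev/stdout
    # Only emit debug logs from matching modules (hypothetical filter)
    - name: CODER_LOG_FILTER
      value: ".*database.*"
```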
Events such as server errors, audit logs, user activities, and SSO & OpenID
Connect logs are all captured in the `coderd` logs.
## `provisionerd` Logs
Logs for [external provisioners](../provisioners.md) are structured
[and configured](../../reference/cli/provisioner_start.md#--log-human) similarly
to `coderd` logs. Use these logs to troubleshoot and monitor the Terraform
operations behind workspaces and templates.
## Workspace Logs
The [Coder agent](../infrastructure/architecture.md#agents) inside workspaces
provides useful logs around workspace-to-server and client-to-workspace
connections. For Kubernetes workspaces, these are typically the pod logs as the
agent runs via the container entrypoint.
Agent logs are also stored in the workspace filesystem by default:
- macOS/Linux: `/tmp/coder-agent.log`
- Windows: Refer to the template code (e.g.
[azure-windows](https://github.com/coder/coder/blob/2cfadad023cb7f4f85710cff0b21ac46bdb5a845/examples/templates/azure-windows/Initialize.ps1.tftpl#L64))
to see where logs are stored.
> Note: Logs are truncated once they reach 5MB in size.
Startup script logs are also stored in the temporary directory of macOS and
Linux workspaces.
## Kubernetes Event Logs
Sometimes, a workspace may take a while to start or even fail to start due to
underlying events on the Kubernetes cluster such as a node being out of
resources or a missing image. You can install
[coder-logstream-kube](../integrations/kubernetes-logs.md) to stream Kubernetes
events to the Coder UI.
![Kubernetes logs in Coder dashboard](../../images/admin/monitoring/logstream-kube.png)
@ -0,0 +1,22 @@
# Deployment Metrics
Coder exposes many metrics which give insight into the current state of a live
Coder deployment. Our metrics are designed to be consumed by a
[Prometheus server](https://prometheus.io/).
If you don't have a Prometheus server installed, you can follow the Prometheus
[Getting started](https://prometheus.io/docs/prometheus/latest/getting_started/)
guide.
## Setting up metrics
To set up metrics monitoring, please read our
[Prometheus integration guide](../integrations/prometheus.md). The following
links point to relevant sections there.
- [Enable Prometheus metrics](../integrations/prometheus.md#enable-prometheus-metrics)
in the control plane
- [Enable the Prometheus endpoint in Helm](../integrations/prometheus.md#kubernetes-deployment)
(Kubernetes users only)
- [Configure Prometheus to scrape Coder metrics](../integrations/prometheus.md#prometheus-configuration)
- [See the list of available metrics](../integrations/prometheus.md#available-metrics)
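As a minimal sketch, the resulting scrape job typically looks like this
(assuming Coder's default metrics address of `127.0.0.1:2112`; see the
integration guide above for the full configuration):

```yaml
scrape_configs:
  - job_name: "coder"
    scrape_interval: 10s
    static_configs:
      # Coder's Prometheus endpoint (CODER_PROMETHEUS_ADDRESS)
      - targets: ["127.0.0.1:2112"]
```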
@ -3,12 +3,11 @@
Notifications are sent by Coder in response to specific internal events, such as
a workspace being deleted or a user being created.
-**Notifications are currently an experimental feature.**
## Enable experiment
-In order to activate the notifications feature, you'll need to enable the
-`notifications` experiment.
+In order to activate the notifications feature on Coder v2.15.X, you'll need to
+enable the `notifications` experiment. Notifications are enabled by default
+starting in v2.16.0.
```bash
# Using the CLI flag
@ -74,7 +73,7 @@ flags.
Notifications can currently be delivered by either SMTP or webhook. Each message
can only be delivered to one method, and this method is configured globally with
-[`CODER_NOTIFICATIONS_METHOD`](https://coder.com/docs/reference/cli/server#--notifications-method)
+[`CODER_NOTIFICATIONS_METHOD`](../../../reference/cli/server.md#--notifications-method)
(default: `smtp`).
Enterprise customers can configure which method to use for each of the supported
@ -229,14 +228,14 @@ All users have the option to opt-out of any notifications. Go to **Account** ->
**Notifications** to turn notifications on or off. The delivery method for each
notification is indicated on the right hand side of this table.
-![User Notification Preferences](../images/user-notification-preferences.png)
+![User Notification Preferences](../../../images/admin/monitoring/notifications/user-notification-preferences.png)
## Delivery Preferences (enterprise) (premium)
Administrators can configure which delivery methods are used for each different
[event type](#event-types).
-![preferences](../images/admin/notification-admin-prefs.png)
+![preferences](../../../images/admin/monitoring/notifications/notification-admin-prefs.png)
You can find this page under
`https://$CODER_ACCESS_URL/deployment/notifications?tab=events`.
@ -247,10 +246,10 @@ Administrators may wish to stop _all_ notifications across the deployment. We
support a killswitch in the CLI for these cases.
To pause sending notifications, execute
-[`coder notifications pause`](https://coder.com/docs/reference/cli/notifications_pause).
+[`coder notifications pause`](../../../reference/cli/notifications_pause.md).
To resume sending notifications, execute
-[`coder notifications resume`](https://coder.com/docs/reference/cli/notifications_resume).
+[`coder notifications resume`](../../../reference/cli/notifications_resume.md).
## Troubleshooting
@ -277,7 +276,7 @@ Messages older than 7 days are deleted.
### Message States
-![states](../images/admin/notification-states.png)
+![states](../../../images/admin/monitoring/notifications/notification-states.png)
_A notifier here refers to a Coder replica which is responsible for dispatching
the notification. All running replicas act as notifiers to process pending
@ -17,8 +17,8 @@ consistent between Slack and their Coder login.
Before setting up Slack notifications, ensure that you have the following:
- Administrator access to the Slack platform to create apps
-- Coder platform with
-  [notifications enabled](../notifications#enable-experiment)
+- Coder platform v2.15.0 or greater with
+  [notifications enabled](./index.md#enable-experiment) for versions <v2.16.0
## Create Slack Application
@ -90,11 +90,9 @@ receiver.router.post("/v1/webhook", async (req, res) => {
return res.status(400).send("Error: request body is missing");
}
-const { title_markdown, body_markdown } = req.body;
-if (!title_markdown || !body_markdown) {
-return res
-.status(400)
-.send('Error: missing fields: "title_markdown", or "body_markdown"');
+const { title, body } = req.body;
+if (!title || !body) {
+return res.status(400).send('Error: missing fields: "title", or "body"');
}
const payload = req.body.payload;
@ -120,11 +118,11 @@ receiver.router.post("/v1/webhook", async (req, res) => {
blocks: [
{
type: "header",
-text: { type: "mrkdwn", text: title_markdown },
+text: { type: "plain_text", text: title },
},
{
type: "section",
-text: { type: "mrkdwn", text: body_markdown },
+text: { type: "mrkdwn", text: body },
},
],
};
@ -194,12 +192,9 @@ must respond appropriately.
## Enable Webhook Integration in Coder
-To enable webhook integration in Coder, ensure the "notifications" experiment is
-activated by running the following command:
-```bash
-export CODER_EXPERIMENTS=notifications
-```
+To enable webhook integration in Coder, ensure the "notifications"
+[experiment is activated](./index.md#enable-experiment) (only required in
+v2.15.X).
Then, define the POST webhook endpoint matching the deployed Slack bot:
@ -15,7 +15,7 @@ Before setting up Microsoft Teams notifications, ensure that you have the
following:
- Administrator access to the Teams platform
-- Coder platform with notifications enabled
+- Coder platform with [notifications enabled](./index.md#enable-experiment)
## Build Teams Workflow
@ -67,10 +67,10 @@ The process of setting up a Teams workflow consists of three key steps:
}
}
},
-"title_markdown": {
+"title": {
"type": "string"
},
-"body_markdown": {
+"body": {
"type": "string"
}
}
@ -108,11 +108,11 @@ The process of setting up a Teams workflow consists of three key steps:
},
{
"type": "TextBlock",
-"text": "**@{replace(body('Parse_JSON')?['title_markdown'], '"', '\"')}**"
+"text": "**@{replace(body('Parse_JSON')?['title'], '"', '\"')}**"
},
{
"type": "TextBlock",
-"text": "@{replace(body('Parse_JSON')?['body_markdown'], '"', '\"')}",
+"text": "@{replace(body('Parse_JSON')?['body'], '"', '\"')}",
"wrap": true
},
{
@ -133,12 +133,9 @@ The process of setting up a Teams workflow consists of three key steps:
## Enable Webhook Integration
-To enable webhook integration in Coder, ensure the "notifications" experiment is
-activated by running the following command:
-```bash
-export CODER_EXPERIMENTS=notifications
-```
+To enable webhook integration in Coder, ensure the "notifications"
+[experiment is activated](./index.md#enable-experiment) (only required in
+v2.15.X).
Then, define the POST webhook endpoint created by your Teams workflow:
@ -32,10 +32,9 @@ connect to the same Postgres endpoint.
HA brings one configuration variable to set in each Coderd node:
`CODER_DERP_SERVER_RELAY_URL`. The HA nodes use these URLs to communicate with
each other. Inter-node communication is only required while using the embedded
relay (default). If you're using
[custom relays](../networking/index.md#custom-relays), Coder ignores
`CODER_DERP_SERVER_RELAY_URL` since Postgres is the sole rendezvous for the
Coder nodes.
relay (default). If you're using [custom relays](./index.md#custom-relays),
Coder ignores `CODER_DERP_SERVER_RELAY_URL` since Postgres is the sole
rendezvous for the Coder nodes.
`CODER_DERP_SERVER_RELAY_URL` will never be `CODER_ACCESS_URL` because
`CODER_ACCESS_URL` is a load balancer to all Coder nodes.
@ -51,7 +50,7 @@ Here's an example 3-node network configuration setup:
## Kubernetes
If you installed Coder via
-[our Helm Chart](../install/kubernetes.md#install-coder-with-helm), just
+[our Helm Chart](../../install/kubernetes.md#4-install-coder-with-helm), just
increase `coder.replicaCount` in `values.yaml`.
If you installed Coder into Kubernetes by some other means, insert the relay URL
@ -71,5 +70,5 @@ Then, increase the number of pods.
## Up next
-- [Networking](../networking/index.md)
-- [Kubernetes](../install/kubernetes.md)
+- [Read more on Coder's networking stack](./index.md)
+- [Install on Kubernetes](../../install/kubernetes.md)
@ -0,0 +1,199 @@
# Networking
Coder's network topology has three types of nodes: workspaces, Coder servers,
and users.
The Coder server must have an inbound address reachable by users and workspaces,
but otherwise, all topologies _just work_ with Coder.
When possible, we establish direct connections between users and workspaces.
Direct connections are as fast as connecting to the workspace outside of Coder.
When NAT traversal fails, connections are relayed through the coder server. All
user <-> workspace connections are end-to-end encrypted.
[Tailscale's open source](https://tailscale.com) backs our networking logic.
## Requirements
In order for clients and workspaces to be able to connect:
- All clients and agents must be able to establish a connection to the Coder
server (`CODER_ACCESS_URL`) over HTTP/HTTPS.
- Any reverse proxy or ingress between the Coder control plane and
clients/agents must support WebSockets.
> **Note:** We strongly recommend that clients connect to Coder and their
> workspaces over a good quality, broadband network connection. The following
> are minimum requirements:
>
> - better than 400ms round-trip latency to the Coder server and to their
> workspace
> - better than 0.5% random packet loss
In order for clients to be able to establish direct connections:
> **Note:** Direct connections via the web browser are not supported. To improve
> latency for browser-based applications running inside Coder workspaces in
> regions far from the Coder control plane, consider deploying one or more
> [workspace proxies](./workspace-proxies.md).
- The client is connecting using the CLI (e.g. `coder ssh` or
`coder port-forward`). Note that the
[VS Code extension](https://marketplace.visualstudio.com/items?itemName=coder.coder-remote),
the [JetBrains Plugin](https://plugins.jetbrains.com/plugin/19620-coder/), and
[`ssh coder.<workspace>`](../../reference/cli/config-ssh.md) all utilize the
CLI to establish a workspace connection.
- Either the client or workspace agent are able to discover a reachable
`ip:port` of their counterpart. If the agent and client are able to
communicate with each other using their locally assigned IP addresses, then a
direct connection can be established immediately. Otherwise, the client and
agent will contact
[the configured STUN servers](../../reference/cli/server.md#derp-server-stun-addresses)
to try and determine which `ip:port` can be used to communicate with their
counterpart. See [STUN and NAT](./stun.md) for more details on how this
process works.
- All outbound UDP traffic must be allowed for both the client and the agent on
**all ports** to each others' respective networks.
- To establish a direct connection, both agent and client use STUN. This
involves sending UDP packets outbound on `udp/3478` to the configured
[STUN server](../../reference/cli/server.md#--derp-server-stun-addresses).
If either the agent or the client are unable to send and receive UDP packets
to a STUN server, then direct connections will not be possible.
- Both agents and clients will then establish a
[WireGuard](https://www.wireguard.com/) tunnel and send UDP traffic on
ephemeral (high) ports. If a firewall between the client and the agent
blocks this UDP traffic, direct connections will not be possible.
## coder server
Workspaces connect to the coder server via the server's external address, set
via [`ACCESS_URL`](../../admin/setup/index.md#access-url). There must not be a
NAT between workspaces and coder server.
Users connect to the coder server's dashboard and API through its `ACCESS_URL`
as well. There must not be a NAT between users and the coder server.
Template admins can overwrite the site-wide access URL at the template level by
leveraging the `url` argument when
[defining the Coder provider](https://registry.terraform.io/providers/coder/coder/latest/docs#url):
```terraform
provider "coder" {
url = "https://coder.namespace.svc.cluster.local"
}
```
This is useful when debugging connectivity issues between the workspace agent
and the Coder server.
## Web Apps
The coder server relays dashboard-initiated connections between the user and
the workspace. Web terminal <-> workspace connections are an exception and may
be direct.
In general, [port forwarded](./port-forwarding.md) web apps are faster than
dashboard-accessed web apps.
## 🌎 Geo-distribution
### Direct connections
Direct connections are a straight line between the user and workspace, so there
is no special geo-distribution configuration. To speed up direct connections,
move the user and workspace closer together.
Establishing a direct connection can be an involved process because both the
client and workspace agent will likely be behind at least one level of NAT,
meaning that we need to use STUN to learn the IP address and port under which
the client and agent can both contact each other. See [STUN and NAT](./stun.md)
for more information on how this process works.
If a direct connection is not available (e.g. client or server is behind NAT),
Coder will use a relayed connection. By default,
[Coder uses Google's public STUN server](../../reference/cli/server.md#--derp-server-stun-addresses),
but this can be disabled or changed for
[offline deployments](../../install/offline.md).
### Relayed connections
By default, your Coder server also runs a built-in DERP relay which can be used
for both public and [offline deployments](../../install/offline.md).
However, Tailscale has graciously allowed us to use
[their global DERP relays](https://tailscale.com/kb/1118/custom-derp-servers/#what-are-derp-servers).
You can launch `coder server` with Tailscale's DERPs like so:
```bash
$ coder server --derp-config-url https://controlplane.tailscale.com/derpmap/default
```
#### Custom Relays
If you want lower latency than what Tailscale offers or want additional DERP
relays for offline deployments, you may run custom DERP servers. Refer to
[Tailscale's documentation](https://tailscale.com/kb/1118/custom-derp-servers/#why-run-your-own-derp-server)
to learn how to set them up.
After you have custom DERP servers, you can launch Coder with them like so:
```json
# derpmap.json
{
"Regions": {
"1": {
"RegionID": 1,
"RegionCode": "myderp",
"RegionName": "My DERP",
"Nodes": [
{
"Name": "1",
"RegionID": 1,
"HostName": "your-hostname.com"
}
]
}
}
}
```
```bash
$ coder server --derp-config-path derpmap.json
```
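Before restarting `coder server`, it can be worth confirming that the DERP map file parses as JSON at all. The snippet below is a minimal sketch that writes a throwaway copy of the map above and validates it with Python's stdlib `json.tool`; point the validation at your real `derpmap.json` instead.

```shell
# Sketch: sanity-check a DERP map file before using it. Writes a
# throwaway copy so the example is self-contained; substitute your
# real derpmap.json path.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "Regions": {
    "1": {
      "RegionID": 1,
      "RegionCode": "myderp",
      "RegionName": "My DERP",
      "Nodes": [{"Name": "1", "RegionID": 1, "HostName": "your-hostname.com"}]
    }
  }
}
EOF
python3 -m json.tool "$tmp" > /dev/null \
  && echo "DERP map parses as valid JSON"
rm -f "$tmp"
```

This only checks JSON syntax, not that the hostnames are reachable; `coder server` will still fail to use a syntactically valid map whose nodes cannot be contacted.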
### Dashboard connections
The dashboard (and web apps opened through the dashboard) are served from the
coder server, so they can only be geo-distributed with High Availability mode in
our Enterprise Edition. [Reach out to Sales](https://coder.com/contact) to learn
more.
## Browser-only connections (enterprise) (premium)
Some Coder deployments require that all access is through the browser to comply
with security policies. In these cases, pass the `--browser-only` flag to
`coder server` or set `CODER_BROWSER_ONLY=true`.
With browser-only connections, developers can only connect to their workspaces
via the web terminal and
[web IDEs](../../user-guides/workspace-access/web-ides.md).
### Workspace Proxies (enterprise) (premium)
Workspace proxies are a Coder Enterprise feature that allows you to provide
low-latency browser experiences for geo-distributed teams.
To learn more, see [Workspace Proxies](./workspace-proxies.md).
## Up next
- Learn about [Port Forwarding](./port-forwarding.md)
- Troubleshoot [Networking Issues](./troubleshooting.md)
@ -0,0 +1,286 @@
# Port Forwarding
Port forwarding lets developers securely access processes on their Coder
workspace from a local machine. A common use case is testing web applications in
a browser.
There are three ways to forward ports in Coder:
- The `coder port-forward` command
- Dashboard
- SSH
The `coder port-forward` command is generally more performant than:
1. The dashboard, which proxies traffic through the Coder control plane rather
than using the peer-to-peer connections possible with the Coder CLI
1. `sshd`, which double-encrypts traffic with both WireGuard and SSH
## The `coder port-forward` command
This command can be used to forward TCP or UDP ports from the remote workspace
so they can be accessed locally. Both the TCP and UDP command line flags
(`--tcp` and `--udp`) can be given once or multiple times.
The supported syntax variations for the `--tcp` and `--udp` flag are:
- Single port with optional remote port: `local_port[:remote_port]`
- Comma separation `local_port1,local_port2`
- Port ranges `start_port-end_port`
- Any combination of the above
### Examples
Forward the remote TCP port `8080` to local port `8000`:
```console
coder port-forward myworkspace --tcp 8000:8080
```
Forward the remote TCP port `3000` and all ports from `9990` to `9999` to their
respective local ports.
```console
coder port-forward myworkspace --tcp 3000,9990-9999
```
For more examples, see `coder port-forward --help`.
## Dashboard
> To enable port forwarding via the dashboard, Coder must be configured with a
> [wildcard access URL](../../admin/setup/index.md#wildcard-access-url). If an
> access URL is not specified, Coder will create
> [a publicly accessible URL](../../admin/setup/index.md#tunnel) to reverse
> proxy the deployment, and port forwarding will work.
>
> There is a
> [DNS limitation](https://datatracker.ietf.org/doc/html/rfc1035#section-2.3.1)
> where each segment of hostnames must not exceed 63 characters. If your app
> name, agent name, workspace name and username exceed 63 characters in the
> hostname, port forwarding via the dashboard will not work.
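Since the names are joined into a single hostname label, you can check a proposed combination against the 63-character limit before it breaks dashboard forwarding. The sketch below uses hypothetical placeholder names joined with the `--` separator pattern used elsewhere on this page.

```shell
# Sketch: check a dashboard hostname label against the 63-char DNS
# limit. All names here are hypothetical placeholders.
app="node-react-app"; agent="dev"; workspace="myworkspace"; user="alice"
label="${app}--${agent}--${workspace}--${user}"
len=$(printf '%s' "$label" | wc -c | tr -d ' ')
if [ "$len" -le 63 ]; then
  echo "$label is $len chars: within the limit"
else
  echo "$label is $len chars: dashboard forwarding will fail"
fi
# → node-react-app--dev--myworkspace--alice is 39 chars: within the limit
```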
### From a coder_app resource
One way to port forward is to configure a `coder_app` resource in the
workspace's template. This approach shows a visual application icon in the
dashboard. See the following `coder_app` example for a Node React app and note
the `subdomain` and `share` settings:
```tf
# node app
resource "coder_app" "node-react-app" {
agent_id = coder_agent.dev.id
slug = "node-react-app"
icon = "https://upload.wikimedia.org/wikipedia/commons/a/a7/React-icon.svg"
url = "http://localhost:3000"
subdomain = true
share = "authenticated"
healthcheck {
url = "http://localhost:3000/healthz"
interval = 10
threshold = 30
}
}
```
Valid `share` values include `owner` - private to the user, `authenticated` -
accessible by any user authenticated to the Coder deployment, and `public` -
accessible by users outside of the Coder deployment.
![Port forwarding from an app in the UI](../../images/networking/portforwarddashboard.png)
## Accessing workspace ports
Another way to port forward in the dashboard is to use the "Open Ports" button
to specify an arbitrary port. Coder will also detect if apps inside the
workspace are listening on ports, and list them below the port input (this is
only supported on Windows and Linux workspace agents).
![Port forwarding in the UI](../../images/networking/listeningports.png)
### Sharing ports
We allow developers to share ports as URLs, either with other authenticated
Coder users or publicly. Using the open ports interface, developers can assign
a sharing level that matches the `coder_app` `share` option in the
[Coder Terraform provider](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/app#share).
- `owner` (Default): The implicit sharing level for all listening ports, only
visible to the workspace owner
- `authenticated`: Accessible by other authenticated Coder users on the same
deployment.
- `public`: Accessible by any user with the associated URL.
Once a port is shared at either `authenticated` or `public` levels, it will stay
pinned in the open ports UI for better accessibility regardless of whether or
not it is still accessible.
![Annotated port controls in the UI](../../images/networking/annotatedports.png)
The sharing level is limited by the maximum level enforced in the template
settings in enterprise deployments, and not restricted in OSS deployments.
This can also be used to change the sharing level of `coder_app`s by entering
their port number in the sharable ports UI. The `share` attribute on the
`coder_app` resource uses a different method of authentication and **is not impacted by the
template's maximum sharing level**, nor the level of a shared port that points
to the app.
### Configure maximum port sharing level (enterprise) (premium)
Enterprise-licensed template admins can control the maximum port sharing level
for workspaces under a given template in the template settings. By default, the
maximum sharing level is set to `Owner`, meaning port sharing is disabled for
end-users. OSS deployments allow all workspaces to share ports at both the
`authenticated` and `public` levels.
![Max port sharing level in the UI](../../images/networking/portsharingmax.png)
### Configuring port protocol
Both listening and shared ports can be configured to use either `HTTP` or
`HTTPS` to connect to the port. For listening ports the protocol selector
applies to any port you input or select from the menu. Shared ports have
protocol configuration for each shared port individually.
You can access any port on the workspace and can configure the port protocol
manually by appending an `s` to the port in the URL.
```
# Uses HTTP
https://33295--agent--workspace--user--apps.example.com/
# Uses HTTPS
https://33295s--agent--workspace--user--apps.example.com/
```
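The protocol toggle is just a textual change to the first hostname label. As an illustration, this hypothetical shell helper derives the HTTPS variant of a forwarded-port hostname (the hostname below is the placeholder from the example above):

```shell
# Sketch: append "s" to the leading port label to request HTTPS.
to_https() {
  printf '%s\n' "$1" | sed 's/^\([0-9][0-9]*\)--/\1s--/'
}
to_https "33295--agent--workspace--user--apps.example.com"
# → 33295s--agent--workspace--user--apps.example.com
```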
### Cross-origin resource sharing (CORS)
When forwarding via the dashboard, Coder automatically sets headers that allow
requests between separately forwarded applications belonging to the same user.
When forwarding through other methods, the application itself will need to set
its own CORS headers if it is being accessed from a different origin, since
Coder does not intercept these cases. See below for the required headers.
#### Authentication
Since ports forwarded through the dashboard are private, cross-origin requests
must include credentials (set `credentials: "include"` if using `fetch`) or the
requests cannot be authenticated and you will see an error resembling the
following:
> Access to fetch at
> 'https://coder.example.com/api/v2/applications/auth-redirect' from origin
> 'https://8000--dev--user--apps.coder.example.com' has been blocked by CORS
> policy: No 'Access-Control-Allow-Origin' header is present on the requested
> resource. If an opaque response serves your needs, set the request's mode to
> 'no-cors' to fetch the resource with CORS disabled.
#### Headers
Below is a list of the cross-origin headers Coder sets with example values:
```
access-control-allow-credentials: true
access-control-allow-methods: PUT
access-control-allow-headers: X-Custom-Header
access-control-allow-origin: https://8000--dev--user--apps.coder.example.com
vary: Origin
vary: Access-Control-Request-Method
vary: Access-Control-Request-Headers
```
The allowed origin will be set to the origin provided by the browser if both
applications belong to the same user. Credentials are allowed, and the allowed
methods and headers will echo whatever the request sends.
#### Configuration
These cross-origin headers are not configurable by administrative settings.
If applications set any of the above headers they will be stripped from the
response except for `Vary` headers that are set to a value other than the ones
listed above.
In other words, CORS behavior through the dashboard is not currently
configurable by either admins or users.
#### Allowed by default
<table class="tg">
<thead>
<tr>
<th class="tg-0pky" rowspan="2"></th>
<th class="tg-0pky" rowspan="3"></th>
<th class="tg-0pky">From</th>
<th class="tg-0pky" colspan="3">Alice</th>
<th class="tg-0pky">Bob</th>
</tr>
<tr>
<th class="tg-0pky" rowspan="2"></th>
<th class="tg-0pky">Workspace 1</th>
<th class="tg-0pky" colspan="2">Workspace 2</th>
<th class="tg-0pky">Workspace 3</th>
</tr>
<tr>
<th class="tg-0pky">To</th>
<th class="tg-0pky">App A</th>
<th class="tg-0pky">App B</th>
<th class="tg-0pky">App C</th>
<th class="tg-0pky">App D</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky" rowspan="3">Alice</td>
<td class="tg-0pky" rowspan="2">Workspace 1</td>
<td class="tg-0pky">App A</td>
<td class="tg-0pky"></td>
<td class="tg-0pky"><span style="font-weight:400;font-style:normal">*</span></td>
<td class="tg-0pky"><span style="font-weight:400;font-style:normal">*</span></td>
<td class="tg-0pky"></td>
</tr>
<tr>
<td class="tg-0pky">App B</td>
<td class="tg-0pky">✅*</td>
<td class="tg-0pky"></td>
<td class="tg-0pky"><span style="font-weight:400;font-style:normal">*</span></td>
<td class="tg-0pky"></td>
</tr>
<tr>
<td class="tg-0pky">Workspace 2</td>
<td class="tg-0pky">App C</td>
<td class="tg-0pky"><span style="font-weight:400;font-style:normal">*</span></td>
<td class="tg-0pky"><span style="font-weight:400;font-style:normal">*</span></td>
<td class="tg-0pky"></td>
<td class="tg-0pky"></td>
</tr>
<tr>
<td class="tg-0pky">Bob</td>
<td class="tg-0pky">Workspace 3</td>
<td class="tg-0pky">App D</td>
<td class="tg-0pky"></td>
<td class="tg-0pky"></td>
<td class="tg-0pky"></td>
<td class="tg-0pky"></td>
</tr>
</tbody>
</table>
> '\*' means `credentials: "include"` is required
## SSH
First,
[configure SSH](../../user-guides/workspace-access/index.md#configure-ssh) on
your local machine. Then, use `ssh` to forward like so:
```console
ssh -L 8080:localhost:8000 coder.myworkspace
```
You can read more on SSH port forwarding
[here](https://www.ssh.com/academy/ssh/tunneling/example).
@ -0,0 +1,174 @@
# STUN and NAT
> [Session Traversal Utilities for NAT (STUN)](https://www.rfc-editor.org/rfc/rfc8489.html)
> is a protocol used to assist applications in establishing peer-to-peer
> communications across Network Address Translations (NATs) or firewalls.
>
> [Network Address Translation (NAT)](https://en.wikipedia.org/wiki/Network_address_translation)
> is commonly used in private networks to allow multiple devices to share a
> single public IP address. The vast majority of home and corporate internet
> connections use at least one level of NAT.
## Overview
In order for one application to connect to another across a network, the
connecting application needs to know the IP address and port under which the
target application is reachable. If both applications reside on the same
network, then they can most likely connect directly to each other. In the
context of a Coder workspace agent and client, this is generally not the case,
as both agent and client will most likely be running in different _private_
networks (e.g. `192.168.1.0/24`). In this case, at least one of the two will
need to know an IP address and port under which they can reach their
counterpart.
This problem is often referred to as NAT traversal, and Coder uses a standard
protocol named STUN to address this.
Inside such a private network, packets from the agent or client will show up as
having source address `192.168.1.X:12345`. However, outside of this private
network, the source address will show up differently (for example, `12.3.4.56:54321`). In
order for the Coder client and agent to establish a direct connection with each
other, one of them needs to know the `ip:port` pair under which their
counterpart can be reached. Once communication succeeds in one direction, we can
inspect the source address of the received packet to determine the return
address.
At a high level, STUN works like this:
> The below glosses over a lot of the complexity of traversing NATs. For a more
> in-depth technical explanation, see
> [How NAT traversal works (tailscale.com)](https://tailscale.com/blog/how-nat-traversal-works).
- **Discovery:** Both the client and agent will send UDP traffic to one or more
configured STUN servers. These STUN servers are generally located on the
public internet, and respond with the public IP address and port from which
the request came.
- **Coordination:** The client and agent then exchange this information through
the Coder server. They will then construct packets that should be able to
traverse their counterpart's NATs successfully.
- **NAT Traversal:** The client and agent then send these crafted packets to
their counterpart's public addresses. If all goes well, the NATs on the other
end should route these packets to the correct internal address.
- **Connection:** Once the packets reach the other side, they send a response
back to the source `ip:port` from the packet. Again, the NATs should recognize
these responses as belonging to an ongoing communication, and forward them
accordingly.
At this point, both the client and agent should be able to send traffic directly
to each other.
## Examples
### 1. Direct connections without NAT or STUN
In this example, both the client and agent are located on the network
`192.168.21.0/24`. Assuming no firewalls are blocking packets in either
direction, both client and agent are able to communicate directly with each
other's locally assigned IP address.
![Diagram of a workspace agent and client in the same network](../../images/networking/stun1.png)
### 2. Direct connections with one layer of NAT
In this example, client and agent are located on different networks and connect
to each other over the public Internet. Both client and agent connect to a
configured STUN server located on the public Internet to determine the public IP
address and port on which they can be reached.
![Diagram of a workspace agent and client in separate networks](../../images/networking/stun2.1.png)
They then exchange this information through Coder server, and can then
communicate directly with each other through their respective NATs.
![Diagram of a workspace agent and client in separate networks](../../images/networking/stun2.2.png)
### 3. Direct connections with VPN and NAT hairpinning
In this example, the client workstation must use a VPN to connect to the
corporate network. All traffic from the client will enter through the VPN entry
node and exit at the VPN exit node inside the corporate network. Traffic from
the client inside the corporate network will appear to be coming from the IP
address of the VPN exit node `172.16.1.2`. Traffic from the client to the public
internet will appear to have the public IP address of the corporate router
`12.34.56.7`.
The workspace agent is running on a Kubernetes cluster inside the corporate
network, which is behind its own layer of NAT. To anyone inside the corporate
network but outside the cluster network, its traffic will appear to be coming
from `172.16.1.254`. However, traffic from the agent to services on the public
Internet will also see traffic originating from the public IP address assigned
to the corporate router. Additionally, the corporate router will most likely
have a firewall configured to block traffic from the internet to the corporate
network.
If the client and agent both use the public STUN server, the addresses
discovered by STUN will both be the public IP address of the corporate router.
To correctly route the traffic backwards, the corporate router must correctly
route both:
- Traffic sent from the client to the external IP of the corporate router back
to the cluster router, and
- Traffic sent from the agent to the external IP of the corporate router to the
VPN exit node.
This behavior is known as "hairpinning", and may not be supported in all
network configurations.
If hairpinning is not supported, deploying an internal STUN server can help
establish direct connections between client and agent. When the agent and
client query this internal STUN server, they will be able to determine the
addresses on the corporate network from which their traffic appears to
originate. Using these internal addresses is much more likely to result in a
successful direct connection.
![Diagram of a workspace agent and client over VPN](../../images/networking/stun3.png)
## Hard NAT
Some NATs are known to use a different port when forwarding requests to the STUN
server and when forwarding probe packets to peers. In that case, the address a
peer discovers over the STUN protocol will have the correct IP address, but the
wrong port. Tailscale refers to this as "hard" NAT in
[How NAT traversal works (tailscale.com)](https://tailscale.com/blog/how-nat-traversal-works).
If both peers are behind a "hard" NAT, direct connections may take longer to
establish or will not be established at all. If one peer is behind a "hard" NAT
and the other is running a firewall (including Windows Defender Firewall), the
firewall may block direct connections.
In both cases, peers fall back to DERP connections if they cannot establish a
direct connection.
If your workspaces are behind a "hard" NAT, you can:
1. Ensure clients are not also behind a "hard" NAT. You may have limited ability
to control this if end users connect from their homes.
2. Ensure firewalls on client devices (e.g. Windows Defender Firewall) have an
inbound policy allowing all UDP ports either to the `coder` or `coder.exe`
CLI binary, or from the IP addresses of your workspace NATs.
3. Reconfigure your workspace network's NAT connection to the public internet to
be an "easy" NAT. See below for specific examples.
### AWS NAT Gateway
The
[AWS NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html)
is a known "hard" NAT. You can use a
[NAT Instance](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html)
instead of a NAT Gateway, and configure it to use the same port assignment for
all UDP traffic from a particular source IP:port combination (Tailscale calls
this "easy" NAT). Linux `MASQUERADE` rules work well for this.
### AWS Elastic Kubernetes Service (EKS)
The default configuration of AWS Elastic Kubernetes Service (EKS) includes the
[Amazon VPC CNI Driver](https://github.com/aws/amazon-vpc-cni-k8s), which by
default randomizes the public port for different outgoing UDP connections. This
makes it act as a "hard" NAT, even if the EKS nodes are on a public subnet (and
thus do not need to use the AWS NAT Gateway to reach the Internet).
This behavior can be disabled by setting the environment variable
`AWS_VPC_K8S_CNI_RANDOMIZESNAT=none` in the `aws-node` DaemonSet. Note, however,
if your nodes are on a private subnet, they will still need NAT to reach the
public Internet, meaning that issues with the
[AWS NAT Gateway](#aws-nat-gateway) might affect you.
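One way to set this variable, sketched under the assumption that you apply patches to the upstream VPC CNI manifest (the `aws-node` container name comes from that manifest; verify it matches your cluster), is a strategic-merge patch:

```yaml
# Hypothetical patch for the aws-node DaemonSet in kube-system; apply with:
#   kubectl -n kube-system patch daemonset aws-node --patch-file <this-file>
spec:
  template:
    spec:
      containers:
        - name: aws-node
          env:
            - name: AWS_VPC_K8S_CNI_RANDOMIZESNAT
              value: "none"
```

A strategic-merge patch merges list entries in `containers` by `name`, so this adds the environment variable without replacing the rest of the container spec.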
@ -0,0 +1,124 @@
# Troubleshooting
`coder ping <workspace>` will ping the workspace agent and print diagnostics on
the state of the connection. These diagnostics are created by inspecting both
the client and agent network configurations, and provide insights into why a
direct connection may be impeded, or why the quality of one might be degraded.
The `-v/--verbose` flag can be appended to the command to print client debug
logs.
```console
$ coder ping dev
pong from workspace proxied via DERP(Council Bluffs, Iowa) in 42ms
pong from workspace proxied via DERP(Council Bluffs, Iowa) in 41ms
pong from workspace proxied via DERP(Council Bluffs, Iowa) in 39ms
✔ preferred DERP region: 999 (Council Bluffs, Iowa)
✔ sent local data to Coder networking coordinator
✔ received remote agent data from Coder networking coordinator
preferred DERP region: 999 (Council Bluffs, Iowa)
endpoints: x.x.x.x:46433, x.x.x.x:46433, x.x.x.x:46433
✔ Wireguard handshake 11s ago
❗ You are connected via a DERP relay, not directly (p2p)
Possible client-side issues with direct connection:
- Network interface utun0 has MTU 1280, (less than 1378), which may degrade the quality of direct connections
Possible agent-side issues with direct connection:
- Agent is potentially behind a hard NAT, as multiple endpoints were retrieved from different STUN servers
- Agent IP address is within an AWS range (AWS uses hard NAT)
```
## Common Problems with Direct Connections
### Disabled Deployment-wide
Direct connections can be disabled at the deployment level by setting the
`CODER_BLOCK_DIRECT` environment variable or the `--block-direct-connections`
flag on the server. When set, this will be reflected in the output of
`coder ping`.
### UDP Blocked
Some corporate firewalls block UDP traffic. Direct connections require UDP
traffic to be allowed between the client and agent, as well as between the
client/agent and STUN servers in most cases. `coder ping` will indicate if
either the Coder agent or client had issues sending or receiving UDP packets to
STUN servers.
If this is the case, you may need to add exceptions to the firewall to allow UDP
for Coder workspaces, clients, and STUN servers.
### Endpoint-Dependent NAT (Hard NAT)
Hard NATs prevent public endpoints gathered from STUN servers from being used by
the peer to establish a direct connection. Typically, if only one side of the
connection is behind a hard NAT, direct connections can still be established
easily. However, if both sides are behind hard NATs, direct connections may take
longer to establish or may not be possible at all.
`coder ping` will indicate if it's possible the client or agent is behind a hard
NAT.
Learn more about [STUN and NAT](./stun.md).
### No STUN Servers
If there are no STUN servers available within a deployment's DERP map, direct
connections may not be possible. Notable exceptions are if the client and agent
are on the same network, or if either is able to use UPnP instead of STUN to
resolve the public IP of the other. `coder ping` will indicate if no STUN
servers were found.
### Endpoint Firewalls
Direct connections may also be impeded if one side is behind a hard NAT and the
other is running a firewall that blocks ingress traffic from unknown 5-tuples
(Protocol, Source IP, Source Port, Destination IP, Destination Port).
If this is suspected, you may need to add an exception for Coder to the
firewall, or reconfigure the hard NAT.
### VPNs
If a VPN is the default route for all IP traffic, it may interfere with the
ability of clients and agents to form direct connections. This happens if the
NAT does not permit traffic to be
['hairpinned'](./stun.md#3-direct-connections-with-vpn-and-nat-hairpinning) from
the public IP address of the NAT (determined via STUN) to the internal IP
address of the agent.
If this is the case, you may need to add exceptions to the VPN for Coder, modify
the NAT configuration, or deploy an internal STUN server.
### Low MTU
If a network interface on the side of either the client or agent has an MTU
smaller than 1378, any direct connections formed may have degraded quality or
performance, as IP packets are fragmented. `coder ping` will indicate if this is
the case by inspecting network interfaces on both the client and the workspace
agent.
If another interface cannot be used, and the MTU cannot be changed, you may need
to disable direct connections, and relay all traffic via DERP instead, which
will not be affected by the low MTU.
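To inspect interface MTUs yourself on a Linux client or workspace, a sketch like the following reads sysfs directly (macOS and Windows would need `ifconfig` or `netsh` instead):

```shell
# Sketch: flag Linux network interfaces whose MTU is below 1378.
for path in /sys/class/net/*/mtu; do
  iface=$(basename "$(dirname "$path")")
  mtu=$(cat "$path")
  if [ "$mtu" -lt 1378 ]; then
    echo "$iface: MTU $mtu may degrade direct connections"
  else
    echo "$iface: MTU $mtu is fine"
  fi
done
```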
## Throughput
The `coder speedtest <workspace>` command measures the throughput between the
client and the workspace agent.
```console
$ coder speedtest workspace
29ms via coder
Starting a 5s download test...
INTERVAL TRANSFER BANDWIDTH
0.00-1.00 sec 630.7840 MBits 630.7404 Mbits/sec
1.00-2.00 sec 913.9200 MBits 913.8106 Mbits/sec
2.00-3.00 sec 943.1040 MBits 943.0399 Mbits/sec
3.00-4.00 sec 933.3760 MBits 933.2143 Mbits/sec
4.00-5.00 sec 848.8960 MBits 848.7019 Mbits/sec
5.00-5.02 sec 13.5680 MBits 828.8189 Mbits/sec
----------------------------------------------------
0.00-5.02 sec 4283.6480 MBits 853.8217 Mbits/sec
```
@ -4,15 +4,16 @@ Workspace proxies provide low-latency experiences for geo-distributed teams.
Coder's networking makes a best-effort attempt to establish direct connections
to a workspace. In situations where this is not possible, such as connections
via the web
terminal and [web IDEs](../../user-guides/workspace-access/index.md#web-ides),
workspace proxies are able to reduce the amount of distance the network traffic
needs to travel.
A workspace proxy is a relay connection a developer can choose to use when
connecting with their workspace over SSH, a workspace app, port forwarding, etc.
Dashboard connections and API calls (e.g. the workspaces list) are not served
over workspace proxies.
![ProxyDiagram](../../images/admin/networking/workspace-proxies/proxydiagram.png)
# Deploy a workspace proxy
@ -26,12 +27,8 @@ Workspace proxies can be used in the browser by navigating to the user
## Requirements
- The [Coder CLI](../../reference/cli/index.md) must be installed and
authenticated as a user with the Owner role.
- Alternatively, the
[coderd Terraform Provider](https://registry.terraform.io/providers/coder/coderd/latest)
can be used to create and manage workspace proxies, if authenticated as a user
with the Owner role.
## Step 1: Create the proxy
@ -61,7 +58,7 @@ the workspace proxy usable. If the proxy deployment is successful,
```
$ coder wsproxy ls
NAME URL STATUS STATUS
NAME URL STATUS STATUS
brazil-saopaulo https://brazil.example.com ok
europe-frankfurt https://europe.example.com ok
sydney https://sydney.example.com ok
@ -153,8 +150,8 @@ coder wsproxy server
### Running as a system service
If you've installed Coder via a [system package](../install/index.md), you can
configure the workspace proxy by settings in
If you've installed Coder via a [system package](../../install/index.md), you
can configure the workspace proxy with settings in
`/etc/coder.d/coder-workspace-proxy.env`
To run workspace proxy as a system service on the host:
@ -202,49 +199,6 @@ FROM ghcr.io/coder/coder:latest
ENTRYPOINT ["/opt/coder", "wsproxy", "server"]
```
### Managing via Terraform
The
[coderd Terraform Provider](https://registry.terraform.io/providers/coder/coderd/latest)
can also be used to create and manage workspace proxies in the same Terraform
configuration as your deployment.
```hcl
provider "coderd" {
url = "https://coder.example.com"
token = "****"
}
resource "coderd_workspace_proxy" "sydney-wsp" {
name = "sydney-wsp"
display_name = "Australia (Sydney)"
icon = "/emojis/1f1e6-1f1fa.png"
}
resource "kubernetes_deployment" "syd_wsproxy" {
metadata { /* ... */ }
spec {
template {
metadata { /* ... */ }
spec {
container {
name = "syd-wsp"
image = "ghcr.io/coder/coder:latest"
args = ["wsproxy", "server"]
env {
name = "CODER_PROXY_SESSION_TOKEN"
value = coderd_workspace_proxy.sydney-wsp.session_token
}
/* ... */
}
/* ... */
}
}
/* ... */
}
}
```
### Selecting a proxy
Users can select a workspace proxy at the top-right of the browser-based Coder
@ -252,9 +206,9 @@ dashboard. Workspace proxy preferences are cached by the web browser. If a proxy
goes offline, the session will fall back to the primary proxy. This could take
up to 60 seconds.
![Workspace proxy picker](../images/admin/workspace-proxy-picker.png)
![Workspace proxy picker](../../images/admin/networking/workspace-proxies/ws-proxy-picker.png)
## Step 3: Observability
## Observability
Coder workspace proxy exports metrics via the HTTP endpoint, which can be
enabled using either the environment variable `CODER_PROMETHEUS_ENABLE` or the
@ -10,18 +10,20 @@ are often benefits to running external provisioner daemons:
- **Isolate APIs:** Deploy provisioners in isolated environments (on-prem, AWS,
Azure) instead of exposing APIs (Docker, Kubernetes, VMware) to the Coder
server. See [Provider Authentication](../templates/authentication.md) for more
details.
server. See
[Provider Authentication](../admin/templates/extending-templates/provider-authentication.md)
for more details.
- **Isolate secrets**: Keep Coder unaware of cloud secrets, manage/rotate
secrets on provisioner servers.
- **Reduce server load**: External provisioners reduce load and build queue
times from the Coder server. See
[Scaling Coder](scaling/scale-utility.md#recent-scale-tests) for more details.
[Scaling Coder](../admin/infrastructure/index.md#scale-tests) for more
details.
Each provisioner runs a single
[concurrent workspace build](scaling/scale-testing.md#control-plane-provisioner).
[concurrent workspace build](../admin/infrastructure/scale-testing.md#control-plane-provisionerd).
For example, running 30 provisioner containers will allow 30 users to start
workspaces at the same time.
@ -32,9 +34,7 @@ to learn how to start provisioners via Docker, Kubernetes, Systemd, etc.
## Authentication
The provisioner daemon must authenticate with your Coder deployment. If you have
multiple [organizations](./organizations.md), you'll need at least 1 provisioner
running for each organization.
The provisioner daemon must authenticate with your Coder deployment.
<div class="tabs">
@ -79,7 +79,7 @@ Kubernetes/Docker/etc.
A user account with the role `Template Admin` or `Owner` can start provisioners
using their user account. This may be beneficial if you are running provisioners
via [automation](./automation.md).
via [automation](../reference/index.md).
```sh
coder login https://<your-coder-url>
@ -208,7 +208,7 @@ Provisioners can broadly be categorized by scope: `organization` or `user`. The
scope of a provisioner can be specified with
[`-tag=scope=<scope>`](../reference/cli/provisioner_start.md#t---tag) when
starting the provisioner daemon. Only users with at least the
[Template Admin](../admin/users.md#roles) role or higher may create
[Template Admin](./users/index.md#roles) role or higher may create
organization-scoped provisioner daemons.
There are two exceptions:
@ -1,23 +0,0 @@
# Role Based Access Control (RBAC)
Use RBAC to define which users and [groups](./groups.md) can use specific
templates in Coder. These can be defined via the Coder web UI,
[synced from your identity provider](./auth.md) or
[managed via Terraform](https://registry.terraform.io/providers/coder/coderd/latest/docs/resources/template).
![rbac](../images/template-rbac.png)
The "Everyone" group makes a template accessible to all users. This can be
removed to make a template private.
## Permissions
You can set the following permissions:
- **Admin**: Read, use, edit, push, and delete
- **View**: Read, use
## Enabling this feature
This feature is only available with an
[Enterprise or Premium license](https://coder.com/pricing).
@ -0,0 +1,87 @@
# API Tokens of deleted users not invalidated
---
## Summary
Coder identified an issue in
[https://github.com/coder/coder](https://github.com/coder/coder) where API
tokens belonging to a deleted user were not invalidated. A deleted user in
possession of a valid and non-expired API token is still able to use the above
token with their full suite of capabilities.
## Impact: HIGH
If exploited, an attacker could perform any action that the deleted user was
authorized to perform.
## Exploitability: HIGH
The CLI writes the API key to `~/.coderv2/session` by default, so any deleted
user who previously logged in via the Coder CLI has the potential to exploit
this. Note that there is a time window for exploitation; API tokens have a
maximum lifetime after which they are no longer valid.
The issue only affects users who were active (not suspended) at the time they
were deleted. Users who were first suspended and later deleted cannot exploit
this issue.
## Affected Versions
All versions of Coder between v0.8.15 and v0.22.2 (inclusive) are affected.
All customers are advised to upgrade to
[v0.23.0](https://github.com/coder/coder/releases/tag/v0.23.0) as soon as
possible.
## Details
Coder incorrectly failed to invalidate API keys belonging to a user when they
were deleted. When authenticating a user via their API key, Coder incorrectly
failed to check whether the API key corresponds to a deleted user.
## Indications of Compromise
> 💡 Automated remediation steps in the upgrade purge all affected API keys.
> Either perform the following query before upgrade or run it on a backup of
> your database from before the upgrade.
Execute the following SQL query:
```sql
SELECT
users.email,
users.updated_at,
api_keys.id,
api_keys.last_used
FROM
users
LEFT JOIN
api_keys
ON
api_keys.user_id = users.id
WHERE
users.deleted
AND
api_keys.last_used > users.updated_at
;
```
If the output is similar to the below, then you are not affected:
```sql
-----
(0 rows)
```
Otherwise, the following information will be reported:
- User email
- Time the user was last modified (i.e. deleted)
- User API key ID
- Time the affected API key was last used
> 💡 If your license includes the
> [Audit Logs](https://coder.com/docs/admin/audit-logs#filtering-logs) feature,
> you can then query all actions performed by the above users by using the
> filter `email:$USER_EMAIL`.
@ -6,7 +6,7 @@ Audit Logs allows **Auditors** to monitor user operations in their deployment.
We track the following resources:
<!-- Code generated by 'make docs/admin/audit-logs.md'. DO NOT EDIT -->
<!-- Code generated by 'make docs/admin/security/audit-logs.md'. DO NOT EDIT -->
| <b>Resource</b> | |
| -------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@ -30,7 +30,7 @@ We track the following resources:
| WorkspaceBuild<br><i>start, stop</i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody><tr><td>build_number</td><td>false</td></tr><tr><td>created_at</td><td>false</td></tr><tr><td>daily_cost</td><td>false</td></tr><tr><td>deadline</td><td>false</td></tr><tr><td>id</td><td>false</td></tr><tr><td>initiator_by_avatar_url</td><td>false</td></tr><tr><td>initiator_by_username</td><td>false</td></tr><tr><td>initiator_id</td><td>false</td></tr><tr><td>job_id</td><td>false</td></tr><tr><td>max_deadline</td><td>false</td></tr><tr><td>provisioner_state</td><td>false</td></tr><tr><td>reason</td><td>false</td></tr><tr><td>template_version_id</td><td>true</td></tr><tr><td>transition</td><td>false</td></tr><tr><td>updated_at</td><td>false</td></tr><tr><td>workspace_id</td><td>false</td></tr></tbody></table> |
| WorkspaceProxy<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody><tr><td>created_at</td><td>true</td></tr><tr><td>deleted</td><td>false</td></tr><tr><td>derp_enabled</td><td>true</td></tr><tr><td>derp_only</td><td>true</td></tr><tr><td>display_name</td><td>true</td></tr><tr><td>icon</td><td>true</td></tr><tr><td>id</td><td>true</td></tr><tr><td>name</td><td>true</td></tr><tr><td>region_id</td><td>true</td></tr><tr><td>token_hashed_secret</td><td>true</td></tr><tr><td>updated_at</td><td>false</td></tr><tr><td>url</td><td>true</td></tr><tr><td>version</td><td>true</td></tr><tr><td>wildcard_hostname</td><td>true</td></tr></tbody></table> |
<!-- End generated by 'make docs/admin/audit-logs.md'. -->
<!-- End generated by 'make docs/admin/security/audit-logs.md'. -->
## Filtering logs
@ -70,15 +70,15 @@ audit trails.
Audit logs can be accessed through our REST API. You can find detailed
information about this in our
[endpoint documentation](../reference/api/audit.md#get-audit-logs).
[endpoint documentation](../../reference/api/audit.md#get-audit-logs).
## Service Logs
Audit trails are also dispatched as service logs and can be captured and
categorized using any log management tool such as [Splunk](https://splunk.com).
Example of a [JSON formatted](../reference/cli/server.md#--log-json) audit log
entry:
Example of a [JSON formatted](../../reference/cli/server.md#--log-json) audit
log entry:
```json
{
@ -113,8 +113,8 @@ entry:
}
```
Example of a [human readable](../reference/cli/server.md#--log-human) audit log
entry:
Example of a [human readable](../../reference/cli/server.md#--log-human) audit
log entry:
```console
2023-06-13 03:43:29.233 [info] coderd: audit_log ID=95f7c392-da3e-480c-a579-8909f145fbe2 Time="2023-06-13T03:43:29.230422Z" UserID=6c405053-27e3-484a-9ad7-bcb64e7bfde6 OrganizationID=00000000-0000-0000-0000-000000000000 Ip=<nil> UserAgent=<nil> ResourceType=workspace_build ResourceID=988ae133-5b73-41e3-a55e-e1e9d3ef0b66 ResourceTarget="" Action=start Diff="{}" StatusCode=200 AdditionalFields="{\"workspace_name\":\"linux-container\",\"build_number\":\"7\",\"build_reason\":\"initiator\",\"workspace_owner\":\"\"}" RequestID=9682b1b5-7b9f-4bf2-9a39-9463f8e41cd6 ResourceIcon=""
@ -122,5 +122,5 @@ entry:
## Enabling this feature
This feature is only available with a
[Premium or Enterprise license](https://coder.com/pricing).
This feature is only available with an enterprise license.
[Learn more](../licensing/index.md)
@ -7,7 +7,7 @@ preventing attackers with database access from using them to impersonate users.
## How it works
Coder allows administrators to specify
[external token encryption keys](../reference/cli/server.md#external-token-encryption-keys).
[external token encryption keys](../../reference/cli/server.md#external-token-encryption-keys).
If configured, Coder will use these keys to encrypt external user tokens before
storing them in the database. The encryption algorithm used is AES-256-GCM with
a 32-byte key length.
@ -47,7 +47,7 @@ Additional database fields may be encrypted in the future.
- Ensure you have a valid backup of your database. **Do not skip this step.** If
you are using the built-in PostgreSQL database, you can run
[`coder server postgres-builtin-url`](../reference/cli/server_postgres-builtin-url.md)
[`coder server postgres-builtin-url`](../../reference/cli/server_postgres-builtin-url.md)
to get the connection URL.
- Generate a 32-byte random key and base64-encode it. For example:
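  One way to do this, assuming a Linux or macOS shell with `head` and `base64`
  available:

  ```shell
  # Generate 32 random bytes and base64-encode them
  head -c 32 /dev/urandom | base64
  ```

  Store the output somewhere safe; it is the key material Coder will use for
  encryption.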
@ -90,7 +90,7 @@ if you need to rotate keys, you can perform the following procedure:
- Generate a new encryption key following the same procedure as above.
- Add the above key to the list of
[external token encryption keys](../reference/cli/server.md#--external-token-encryption-keys).
[external token encryption keys](../../reference/cli/server.md#--external-token-encryption-keys).
**The new key must appear first in the list**. For example, in the Kubernetes
secret created above:
@ -110,13 +110,13 @@ data:
encrypted with the old key(s).
- To re-encrypt all encrypted database fields with the new key, run
[`coder server dbcrypt rotate`](../reference/cli/server_dbcrypt_rotate.md).
[`coder server dbcrypt rotate`](../../reference/cli/server_dbcrypt_rotate.md).
This command will re-encrypt all tokens with the specified new encryption key.
We recommend performing this action during a maintenance window.
> Note: this command requires direct access to the database. If you are using
> the built-in PostgreSQL database, you can run
> [`coder server postgres-builtin-url`](../reference/cli/server_postgres-builtin-url.md)
> [`coder server postgres-builtin-url`](../../reference/cli/server_postgres-builtin-url.md)
> to get the connection URL.
- Once the above command completes successfully, remove the old encryption key
@ -133,7 +133,7 @@ To disable encryption, perform the following actions:
being written, which may cause the next step to fail.
- Run
[`coder server dbcrypt decrypt`](../reference/cli/server_dbcrypt_decrypt.md).
[`coder server dbcrypt decrypt`](../../reference/cli/server_dbcrypt_decrypt.md).
This command will decrypt all encrypted user tokens and revoke all active
encryption keys.
@ -143,7 +143,7 @@ To disable encryption, perform the following actions:
> to help prevent accidentally decrypting data.
- Remove all
[external token encryption keys](../reference/cli/server.md#--external-token-encryption-keys)
[external token encryption keys](../../reference/cli/server.md#--external-token-encryption-keys)
from Coder's configuration.
- Start coderd. You can now safely delete the encryption keys from your secret
@ -161,12 +161,12 @@ To delete all encrypted data from your database, perform the following actions:
being written.
- Run
[`coder server dbcrypt delete`](../reference/cli/server_dbcrypt_delete.md).
[`coder server dbcrypt delete`](../../reference/cli/server_dbcrypt_delete.md).
This command will delete all encrypted user tokens and revoke all active
encryption keys.
- Remove all
[external token encryption keys](../reference/cli/server.md#--external-token-encryption-keys)
[external token encryption keys](../../reference/cli/server.md#--external-token-encryption-keys)
from Coder's configuration.
- Start coderd. You can now safely delete the encryption keys from your secret
@ -175,11 +175,11 @@ To delete all encrypted data from your database, perform the following actions:
## Troubleshooting
- If Coder detects that the data stored in the database was not encrypted with
any known keys, it will refuse to start. If you are seeing this behaviour,
any known keys, it will refuse to start. If you are seeing this behavior,
ensure that the encryption keys provided are correct.
- If Coder detects that the data stored in the database was encrypted with a key
that is no longer active, it will refuse to start. If you are seeing this
behaviour, ensure that the encryption keys provided are correct and that you
behavior, ensure that the encryption keys provided are correct and that you
have not revoked any keys that are still in use.
- Decryption may fail if newly encrypted data is written while decryption is in
progress. If this happens, ensure that all active coder instances are stopped,
@ -0,0 +1,20 @@
# Security Advisories
> If you discover a vulnerability in Coder, please do not hesitate to report it
> to us by following the instructions
> [here](https://github.com/coder/coder/blob/main/SECURITY.md).
From time to time, Coder employees or other community members may discover
vulnerabilities in the product.
If a vulnerability requires an immediate upgrade to mitigate a potential
security risk, we will add it to the table below.
Click on the description links to view more details about each specific
vulnerability.
---
| Description | Severity | Fix | Vulnerable Versions |
| --------------------------------------------------------------------------------------------------------------------------------------- | -------- | -------------------------------------------------------------- | ------------------- |
| [API tokens of deleted users not invalidated](https://github.com/coder/coder/blob/main/docs/security/0001_user_apikeys_invalidation.md) | HIGH | [v0.23.0](https://github.com/coder/coder/releases/tag/v0.23.0) | v0.8.25 - v0.22.2 |
@ -0,0 +1,113 @@
# Secrets
<blockquote class="info">
This article explains how to use secrets in a workspace. To authenticate the
workspace provisioner, see <a href="/admin/auth">this guide</a>.
</blockquote>
Coder is open-minded about how you get your secrets into your workspaces.
## Wait a minute...
Your first stab at secrets with Coder should be your local method. You can do
everything you can locally and more with your Coder workspace, so whatever
workflow and tools you already use to manage secrets may be brought over.
Often, this workflow is simply:
1. Give your users their secrets in advance
1. Your users write them to a persistent file after they've built their
workspace
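As a sketch, assuming the user received a token out-of-band (the variable name,
path, and value here are all illustrative):

```shell
# Hypothetical example: persist a pre-shared secret inside the workspace.
# MY_API_TOKEN stands in for a secret the user received out-of-band.
MY_API_TOKEN="example-token-value"
mkdir -p "$HOME/.secrets"
printf '%s' "$MY_API_TOKEN" > "$HOME/.secrets/api-token"
chmod 600 "$HOME/.secrets/api-token"
```

Because the file lives on the workspace's persistent volume, it survives
workspace restarts.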
[Template parameters](../templates/extending-templates/parameters.md) are a
dangerous way to accept secrets. We show parameters in cleartext around the
product. Assume anyone with view access to a workspace can also see its
parameters.
## SSH Keys
Coder generates SSH key pairs for each user. These can be used as an
authentication mechanism for git providers or other tools. Within workspaces,
git will attempt to use this key via the `$GIT_SSH_COMMAND` environment
variable.
Users can view their public key in their account settings:
![SSH keys in account settings](../../images/ssh-keys.png)
> Note: SSH keys are never stored in Coder workspaces, and are fetched only when
> SSH is invoked. The keys are held in-memory and never written to disk.
## Dynamic Secrets
Dynamic secrets are attached to the workspace lifecycle and automatically
injected into the workspace. With a little bit of up-front template work, they
make life simpler for both the end user and the security team.
This method is limited to
[services with Terraform providers](https://registry.terraform.io/browse/providers),
which excludes obscure API providers.
Dynamic secrets can be implemented in your template code like so:
```tf
resource "twilio_iam_api_key" "api_key" {
account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
friendly_name = "Test API Key"
}
resource "coder_agent" "main" {
# ...
env = {
# Let users access the secret via $TWILIO_API_SECRET
TWILIO_API_SECRET = "${twilio_iam_api_key.api_key.secret}"
}
}
```
A catch-all variation of this approach is dynamically provisioning a cloud
service account (e.g.
[GCP](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_service_account_key#private_key))
for each workspace and then making the relevant secrets available via the
cloud's secret management system.
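A hedged sketch of that pattern with GCP (resource names, the env var name, and
the account ID scheme are all illustrative; adapt to your project):

```tf
data "coder_workspace" "me" {}

# Illustrative only: one service account (and key) per workspace
resource "google_service_account" "workspace" {
  account_id = "coder-${lower(data.coder_workspace.me.name)}"
}

resource "google_service_account_key" "workspace" {
  service_account_id = google_service_account.workspace.name
}

resource "coder_agent" "main" {
  # ...
  env = {
    # Key JSON, decoded from the provider's base64-encoded output
    GOOGLE_CREDENTIALS = base64decode(google_service_account_key.workspace.private_key)
  }
}
```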
## Displaying Secrets
While you can inject secrets into the workspace via environment variables, you
can also show them in the Workspace UI with
[`coder_metadata`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/metadata).
![Secrets UI](../../images/admin/secret-metadata.PNG)
This can be produced with:
```tf
resource "twilio_iam_api_key" "api_key" {
account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
friendly_name = "Test API Key"
}
resource "coder_metadata" "twilio_key" {
resource_id = twilio_iam_api_key.api_key.id
item {
key = "Username"
value = "Administrator"
}
item {
key = "Password"
value = twilio_iam_api_key.api_key.secret
sensitive = true
}
}
```
## Secrets Management
For more advanced secrets management, you can use a secrets management tool to
store and retrieve secrets in your workspace. For example, you can use
[HashiCorp Vault](https://www.vaultproject.io/) to inject secrets into your
workspace.
Refer to our [HashiCorp Vault Integration](../integrations/vault.md) guide for
more information on how to integrate HashiCorp Vault with Coder.
@ -6,7 +6,7 @@ requirements.
You can access the Appearance settings by navigating to
`Deployment > Appearance`.
![application name and logo url](../images/admin/application-name-logo-url.png)
![application name and logo url](../../images/admin/setup/appearance/application-name-logo-url.png)
## Application Name
@ -20,7 +20,7 @@ page and in the top left corner of the dashboard. The default is the Coder logo.
## Announcement Banners
![service banner](../images/admin/announcement_banner_settings.png)
![announcement banner](../../images/admin/setup/appearance/announcement_banner_settings.png)
Announcement Banners let admins post important messages to all site users. Only
Site Owners may set the announcement banners.
@ -28,17 +28,17 @@ Site Owners may set the announcement banners.
Example: Use multiple announcement banners for concurrent deployment-wide
updates, such as maintenance or new feature rollout.
![Multiple announcements](../images/admin/multiple-banners.PNG)
![Multiple announcements](../../images/admin/setup/appearance/multiple-banners.PNG)
Example: Adhere to government network classification requirements and notify
users of which network their Coder deployment is on.
![service banner secret](../images/admin/service-banner-secret.png)
![service banner secret](../../images/admin/setup/appearance/service-banner-secret.png)
## OIDC Login Button Customization
[Use environment variables to customize](./auth.md#oidc-login-customization) the
text and icon on the OIDC button on the Sign In page.
[Use environment variables to customize](../users/oidc-auth.md#oidc-login-customization)
the text and icon on the OIDC button on the Sign In page.
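For example, a sketch of the relevant server environment variables (the values
shown are illustrative):

```env
CODER_OIDC_SIGN_IN_TEXT="Sign in with Okta"
CODER_OIDC_ICON_URL=/icon/okta.svg
```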
## Support Links
@ -47,13 +47,13 @@ referring to internal company resources. The menu section replaces the original
menu positions: documentation, report a bug to GitHub, or join the Discord
server.
![support links](../images/admin/support-links.png)
![support links](../../images/admin/setup/appearance/support-links.png)
### Icons
The link icons are optional, and can be set to any url or
[builtin icon](../templates/icons.md#bundled-icons), additionally `bug`, `chat`,
and `docs` are available as three special icons.
[builtin icon](../templates/extending-templates/icons.md#bundled-icons);
additionally, `bug`, `chat`, and `docs` are available as three special icons.
### Configuration
@ -1,6 +1,8 @@
# Configure Control Plane Access
Coder server's primary configuration is done via environment variables. For a
full list of the options, run `coder server --help` or see our
[CLI documentation](../reference/cli/server.md).
[CLI documentation](../../reference/cli/server.md).
## Access URL
@ -39,9 +41,8 @@ coder server
`CODER_WILDCARD_ACCESS_URL` is necessary for
[port forwarding](../networking/port-forwarding.md#dashboard) via the dashboard
or running [coder_apps](../templates/index.md#coder-apps) on an absolute path.
Set this to a wildcard subdomain that resolves to Coder (e.g.
`*.coder.example.com`).
or running [coder_apps](../templates/index.md) on an absolute path. Set this to
a wildcard subdomain that resolves to Coder (e.g. `*.coder.example.com`).
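For example (domain values are illustrative):

```env
CODER_ACCESS_URL=https://coder.example.com
CODER_WILDCARD_ACCESS_URL=*.coder.example.com
```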
If you are providing TLS certificates directly to the Coder server, either
@ -49,8 +50,8 @@ If you are providing TLS certificates directly to the Coder server, either
2. Configure multiple certificates and keys via
[`coder.tls.secretNames`](https://github.com/coder/coder/blob/main/helm/coder/values.yaml)
in the Helm Chart, or
[`--tls-cert-file`](../reference/cli/server.md#--tls-cert-file) and
[`--tls-key-file`](../reference/cli/server.md#--tls-key-file) command line
[`--tls-cert-file`](../../reference/cli/server.md#--tls-cert-file) and
[`--tls-key-file`](../../reference/cli/server.md#--tls-key-file) command line
options (these both take a comma separated list of files; list certificates
and their respective keys in the same order).
@ -60,9 +61,9 @@ The Coder server can directly use TLS certificates with `CODER_TLS_ENABLE` and
accompanying configuration flags. However, Coder can also run behind a
reverse-proxy to terminate TLS certificates from LetsEncrypt, for example.
- [Apache](https://github.com/coder/coder/tree/main/examples/web-server/apache)
- [Caddy](https://github.com/coder/coder/tree/main/examples/web-server/caddy)
- [NGINX](https://github.com/coder/coder/tree/main/examples/web-server/nginx)
- [Apache](./web-server/apache/index.md)
- [Caddy](./web-server/caddy/index.md)
- [NGINX](./web-server/nginx/index.md)
### Kubernetes TLS configuration
@ -129,63 +130,24 @@ steps:
6. Start your Coder deployment with
`CODER_PG_CONNECTION_URL=<external-connection-string>`.
## System packages
If you've installed Coder via a [system package](../install/index.md), you can
configure the server by setting the following variables in
`/etc/coder.d/coder.env`:
```env
# String. Specifies the external URL (HTTP/S) to access Coder.
CODER_ACCESS_URL=https://coder.example.com
# String. Address to serve the API and dashboard.
CODER_HTTP_ADDRESS=0.0.0.0:3000
# String. The URL of a PostgreSQL database to connect to. If empty, PostgreSQL binaries
# will be downloaded from Maven (https://repo1.maven.org/maven2) and store all
# data in the config root. Access the built-in database with "coder server postgres-builtin-url".
CODER_PG_CONNECTION_URL=
# Boolean. Specifies if TLS will be enabled.
CODER_TLS_ENABLE=
# If CODER_TLS_ENABLE=true, also set:
CODER_TLS_ADDRESS=0.0.0.0:3443
# String. Specifies the path to the certificate for TLS. It requires a PEM-encoded file.
# To configure the listener to use a CA certificate, concatenate the primary
# certificate and the CA certificate together. The primary certificate should
# appear first in the combined file.
CODER_TLS_CERT_FILE=
# String. Specifies the path to the private key for the certificate. It requires a
# PEM-encoded file.
CODER_TLS_KEY_FILE=
```
To run Coder as a system service on the host:
```shell
# Use systemd to start Coder now and on reboot
sudo systemctl enable --now coder
# View the logs to ensure a successful start
journalctl -u coder.service -b
```
To restart Coder after applying system changes:
```shell
sudo systemctl restart coder
```
## Configuring Coder behind a proxy
To configure Coder behind a corporate proxy, set the environment variables
`HTTP_PROXY` and `HTTPS_PROXY`. Be sure to restart the server. Lowercase values
(e.g. `http_proxy`) are also respected in this case.
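For example, in `/etc/coder.d/coder.env` (the proxy address is illustrative):

```env
HTTP_PROXY=http://proxy.internal.example.com:3128
HTTPS_PROXY=http://proxy.internal.example.com:3128
```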
## External Authentication
Coder supports external authentication via OAuth2.0. This enables integrations
with git providers such as GitHub, GitLab, and Bitbucket.
External authentication can also be used to integrate with external services
like JFrog Artifactory and others.
Please refer to the [external authentication](../external-auth.md) section for
more information.
## Up Next
- [Learn how to upgrade Coder](./upgrade.md).
- [Learn how to setup and manage templates](../templates/index.md)
- [Setup external provisioners](../provisioners.md)
@ -0,0 +1,28 @@
# Redirect HTTP to HTTPS
<VirtualHost *:80>
ServerName coder.example.com
ServerAlias *.coder.example.com
Redirect permanent / https://coder.example.com/
</VirtualHost>
<VirtualHost *:443>
ServerName coder.example.com
ServerAlias *.coder.example.com
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# upgrade=any is required for websockets
ProxyPass / http://127.0.0.1:3000/ upgrade=any
ProxyPassReverse / http://127.0.0.1:3000/
ProxyRequests Off
ProxyPreserveHost On
RewriteEngine On
# Websockets are required for workspace connectivity
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule /(.*) ws://127.0.0.1:3000/$1 [P,L]
SSLCertificateFile /etc/letsencrypt/live/coder.example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/coder.example.com/privkey.pem
</VirtualHost>
@ -0,0 +1,172 @@
# How to use Apache as a reverse-proxy with LetsEncrypt
## Requirements
1. Start a Coder deployment and be sure to set the following
[configuration values](../../index.md):
```env
CODER_HTTP_ADDRESS=127.0.0.1:3000
CODER_ACCESS_URL=https://coder.example.com
CODER_WILDCARD_ACCESS_URL=*coder.example.com
```
Throughout the guide, be sure to replace `coder.example.com` with the domain
you intend to use with Coder.
2. Configure your DNS provider to point `coder.example.com` and
   `*.coder.example.com` to your server's public IP address.
   > For example, to use `coder.example.com` as your subdomain, configure
   > `coder.example.com` and `*.coder.example.com` to point to your server's
   > public IP. This can be done by adding A records in your DNS provider's
   > dashboard.
3. Install Apache (assuming you're on Debian/Ubuntu):
```shell
sudo apt install apache2
```
4. Enable the following Apache modules:
```shell
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod ssl
sudo a2enmod rewrite
```
5. Stop Apache service and disable default site:
```shell
sudo a2dissite 000-default.conf
sudo systemctl stop apache2
```
## Install and configure LetsEncrypt Certbot
1. Install LetsEncrypt Certbot: Refer to the
[CertBot documentation](https://certbot.eff.org/instructions?ws=apache&os=ubuntufocal&tab=wildcard).
Be sure to pick the wildcard tab and select your DNS provider for
instructions to install the necessary DNS plugin.
## Create DNS provider credentials
> This example assumes you're using CloudFlare as your DNS provider. For other
> providers, refer to the
> [CertBot documentation](https://eff-certbot.readthedocs.io/en/stable/using.html#dns-plugins).
1. Create an API token for the DNS provider you're using: e.g.
[CloudFlare](https://developers.cloudflare.com/fundamentals/api/get-started/create-token)
with the following permissions:
- Zone - DNS - Edit
2. Create a file in `.secrets/certbot/cloudflare.ini` with the following
content:
```ini
dns_cloudflare_api_token = YOUR_API_TOKEN
```
```shell
mkdir -p ~/.secrets/certbot
touch ~/.secrets/certbot/cloudflare.ini
nano ~/.secrets/certbot/cloudflare.ini
```
3. Set the correct permissions:
```shell
sudo chmod 600 ~/.secrets/certbot/cloudflare.ini
```
## Create the certificate
1. Create the wildcard certificate:
```shell
sudo certbot certonly --dns-cloudflare --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini -d coder.example.com -d *.coder.example.com
```
## Configure Apache
> This example assumes Coder is running locally on `127.0.0.1:3000` and that
> you're using `coder.example.com` as your subdomain.
1. Create Apache configuration for Coder:
```shell
sudo nano /etc/apache2/sites-available/coder.conf
```
2. Add the following content:
```apache
# Redirect HTTP to HTTPS
<VirtualHost *:80>
ServerName coder.example.com
ServerAlias *.coder.example.com
Redirect permanent / https://coder.example.com/
</VirtualHost>
<VirtualHost *:443>
ServerName coder.example.com
ServerAlias *.coder.example.com
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# upgrade=any is required for websockets (Apache does not allow inline comments)
ProxyPass / http://127.0.0.1:3000/ upgrade=any
ProxyPassReverse / http://127.0.0.1:3000/
ProxyRequests Off
ProxyPreserveHost On
RewriteEngine On
# Websockets are required for workspace connectivity
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule /(.*) ws://127.0.0.1:3000/$1 [P,L]
SSLCertificateFile /etc/letsencrypt/live/coder.example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/coder.example.com/privkey.pem
</VirtualHost>
```
> Don't forget to replace `coder.example.com` with your (sub)domain.
3. Enable the site:
```shell
sudo a2ensite coder.conf
```
4. Restart Apache:
```shell
sudo systemctl restart apache2
```
## Refresh certificates automatically
1. Create a new file in `/etc/cron.weekly`:
```shell
sudo touch /etc/cron.weekly/certbot
```
2. Make it executable:
```shell
sudo chmod +x /etc/cron.weekly/certbot
```
3. And add this code:
```shell
#!/bin/sh
sudo certbot renew -q
```
And that's it: you should now be able to access Coder at your (sub)domain, e.g.
`https://coder.example.com`.


@ -0,0 +1,15 @@
{
on_demand_tls {
ask http://example.com
}
}
coder.example.com, *.coder.example.com {
reverse_proxy localhost:3000
tls {
on_demand
issuer acme {
email email@example.com
}
}
}


@ -0,0 +1,57 @@
version: "3.9"
services:
coder:
image: ghcr.io/coder/coder:${CODER_VERSION:-latest}
environment:
CODER_PG_CONNECTION_URL: "postgresql://${POSTGRES_USER:-username}:${POSTGRES_PASSWORD:-password}@database/${POSTGRES_DB:-coder}?sslmode=disable"
CODER_HTTP_ADDRESS: "0.0.0.0:7080"
# You'll need to set CODER_ACCESS_URL to an IP or domain
# that workspaces can reach. This cannot be localhost
# or 127.0.0.1 for non-Docker templates!
CODER_ACCESS_URL: "${CODER_ACCESS_URL}"
# (Optional) Enable wildcard apps/dashboard port forwarding
CODER_WILDCARD_ACCESS_URL: "${CODER_WILDCARD_ACCESS_URL}"
# If the coder user does not have write permissions on
# the docker socket, you can uncomment the following
# lines and set the group ID to one that has write
# permissions on the docker socket.
#group_add:
# - "998" # docker group on host
volumes:
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
database:
condition: service_healthy
database:
image: "postgres:14.2"
ports:
- "5432:5432"
environment:
POSTGRES_USER: ${POSTGRES_USER:-username} # The PostgreSQL user (useful to connect to the database)
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password} # The PostgreSQL password (useful to connect to the database)
POSTGRES_DB: ${POSTGRES_DB:-coder} # The PostgreSQL default database (automatically created at first launch)
volumes:
- coder_data:/var/lib/postgresql/data # Use "docker volume rm coder_coder_data" to reset Coder
healthcheck:
test:
[
"CMD-SHELL",
"pg_isready -U ${POSTGRES_USER:-username} -d ${POSTGRES_DB:-coder}",
]
interval: 5s
timeout: 5s
retries: 5
caddy:
image: caddy:2.6.2
ports:
- "80:80"
- "443:443"
- "443:443/udp"
volumes:
- $PWD/Caddyfile:/etc/caddy/Caddyfile
- caddy_data:/data
- caddy_config:/config
volumes:
coder_data:
caddy_data:
caddy_config:


@ -0,0 +1,187 @@
# Caddy
This is an example configuration of how to use Coder with
[caddy](https://caddyserver.com/docs). To use Caddy to generate TLS
certificates, you'll need a domain name that resolves to your Caddy server.
## Getting started
### With docker-compose
1. [Install Docker](https://docs.docker.com/engine/install/) and
[Docker Compose](https://docs.docker.com/compose/install/)
1. Start with our example configuration
```shell
# Create a project folder
cd $HOME
mkdir coder-with-caddy
cd coder-with-caddy
# Clone coder/coder and copy the Caddy example
git clone https://github.com/coder/coder /tmp/coder
mv /tmp/coder/docs/admin/setup/web-server/caddy $(pwd)
```
1. Modify the [Caddyfile](./Caddyfile) and change the following values:
- `localhost:3000`: Change to `coder:7080` (Coder container on Docker
network)
- `email@example.com`: Email to request certificates from LetsEncrypt/ZeroSSL
(does not have to be Coder admin email)
- `coder.example.com`: Domain name you're using for Coder.
- `*.coder.example.com`: Domain name for wildcard apps, commonly used for
[dashboard port forwarding](../../../networking/port-forwarding.md). This
is optional and can be removed.
1. Start Coder. Set `CODER_ACCESS_URL` and `CODER_WILDCARD_ACCESS_URL` to the
domain you're using in your Caddyfile.
```shell
export CODER_ACCESS_URL=https://coder.example.com
export CODER_WILDCARD_ACCESS_URL=*.coder.example.com
docker compose up -d # Run on startup
```
### Standalone
1. If you haven't already, [install Coder](../../../../install/index.md)
2. Install [Caddy Server](https://caddyserver.com/docs/install)
3. Copy our sample [Caddyfile](./Caddyfile) and change the following values:
> If you installed Caddy as a system package, update the default Caddyfile
> with `vim /etc/caddy/Caddyfile`
- `email@example.com`: Email to request certificates from LetsEncrypt/ZeroSSL
(does not have to be Coder admin email)
- `coder.example.com`: Domain name you're using for Coder.
- `*.coder.example.com`: Domain name for wildcard apps, commonly used for
[dashboard port forwarding](../../../networking/port-forwarding.md). This
is optional and can be removed.
- `localhost:3000`: Address Coder is running on. Modify this if you changed
`CODER_HTTP_ADDRESS` in the Coder configuration.
- _DO NOT CHANGE the `ask http://example.com` line! Doing so will result in
your certs potentially not being generated._
4. [Configure Coder](../../index.md) and change the following values:
- `CODER_ACCESS_URL`: root domain (e.g. `https://coder.example.com`)
- `CODER_WILDCARD_ACCESS_URL`: wildcard domain (e.g. `*.coder.example.com`).
5. Start the Caddy server:
If you're [keeping Caddy running](https://caddyserver.com/docs/running) via a
system service:
```shell
sudo systemctl restart caddy
```
Or run a standalone server:
```shell
caddy run
```
6. Optionally, use [ufw](https://wiki.ubuntu.com/UncomplicatedFirewall) or
another firewall to disable external traffic outside of Caddy.
```shell
# Check status of UncomplicatedFirewall
sudo ufw status
# Allow SSH
sudo ufw allow 22
# Allow HTTP, HTTPS (Caddy)
sudo ufw allow 80
sudo ufw allow 443
# Deny direct access to Coder server
sudo ufw deny 3000
# Enable UncomplicatedFirewall
sudo ufw enable
```
7. Navigate to your Coder URL! A TLS certificate should be auto-generated on
your first visit.
## Generating wildcard certificates
By default, this configuration uses Caddy's
[on-demand TLS](https://caddyserver.com/docs/caddyfile/options#on-demand-tls) to
generate a certificate for each subdomain (e.g. `app1.coder.example.com`,
`app2.coder.example.com`). When users visit new subdomains, such as accessing
[ports on a workspace](../../../networking/port-forwarding.md), the request will
take an additional 5-30 seconds since a new certificate is being generated.
For production deployments, we recommend configuring Caddy to generate a
wildcard certificate, which requires an explicit DNS challenge and additional
Caddy modules.
1. Install a custom Caddy build that includes the
[caddy-dns](https://github.com/caddy-dns) module for your DNS provider (e.g.
CloudFlare, Route53).
- Docker:
[Build a custom Caddy image](https://github.com/docker-library/docs/tree/master/caddy#adding-custom-caddy-modules)
with the module for your DNS provider. Be sure to reference the new image
in the `docker-compose.yaml`.
- Standalone:
[Download a custom Caddy build](https://caddyserver.com/download) with the
module for your DNS provider. If you're using Debian/Ubuntu, you
[can configure the Caddy package](https://caddyserver.com/docs/build#package-support-files-for-custom-builds-for-debianubunturaspbian)
to use the new build.
2. Edit your `Caddyfile` and add the necessary credentials/API tokens to solve
the DNS challenge for wildcard certificates.
For example, for AWS Route53:
```diff
tls {
- on_demand
- issuer acme {
- email email@example.com
- }
+ dns route53 {
+ max_retries 10
+ aws_profile "real-profile"
+ access_key_id "AKI..."
+ secret_access_key "wJa..."
+ token "TOKEN..."
+ region "us-east-1"
+ }
}
```
> Configuration reference from
> [caddy-dns/route53](https://github.com/caddy-dns/route53).
And for CloudFlare:
Generate a
[token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token)
with the following permissions:
- Zone:Zone:Edit
```diff
tls {
- on_demand
- issuer acme {
- email email@example.com
- }
+ dns cloudflare CLOUDFLARE_API_TOKEN
}
```
> Configuration reference from
> [caddy-dns/cloudflare](https://github.com/caddy-dns/cloudflare).


@ -0,0 +1,179 @@
# How to use NGINX as a reverse-proxy with LetsEncrypt
## Requirements
1. Start a Coder deployment and be sure to set the following
[configuration values](../../index.md):
```env
CODER_HTTP_ADDRESS=127.0.0.1:3000
CODER_ACCESS_URL=https://coder.example.com
CODER_WILDCARD_ACCESS_URL=*.coder.example.com
```
Throughout the guide, be sure to replace `coder.example.com` with the domain
you intend to use with Coder.
2. Configure your DNS provider to point your coder.example.com and
\*.coder.example.com to your server's public IP address.
> For example, to use `coder.example.com` as your subdomain, configure
> `coder.example.com` and `*.coder.example.com` to point to your server's
> public IP. This can be done by adding A records in your DNS provider's
> dashboard.
3. Install NGINX (assuming you're on Debian/Ubuntu):
```shell
sudo apt install nginx
```
4. Stop NGINX service:
```shell
sudo systemctl stop nginx
```
## Adding Coder deployment subdomain
> This example assumes Coder is running locally on `127.0.0.1:3000` and that
> you're using `coder.example.com` as your subdomain.
1. Create NGINX configuration for this app:
```shell
sudo touch /etc/nginx/sites-available/coder.example.com
```
2. Activate this file:
```shell
sudo ln -s /etc/nginx/sites-available/coder.example.com /etc/nginx/sites-enabled/coder.example.com
```
## Install and configure LetsEncrypt Certbot
1. Install LetsEncrypt Certbot: Refer to the
[CertBot documentation](https://certbot.eff.org/instructions?ws=apache&os=ubuntufocal&tab=wildcard).
Be sure to pick the wildcard tab and select your DNS provider for
instructions to install the necessary DNS plugin.
## Create DNS provider credentials
> This example assumes you're using CloudFlare as your DNS provider. For other
> providers, refer to the
> [CertBot documentation](https://eff-certbot.readthedocs.io/en/stable/using.html#dns-plugins).
1. Create an API token for the DNS provider you're using: e.g.
[CloudFlare](https://developers.cloudflare.com/fundamentals/api/get-started/create-token)
with the following permissions:
- Zone - DNS - Edit
2. Create a file in `.secrets/certbot/cloudflare.ini` with the following
content:
```ini
dns_cloudflare_api_token = YOUR_API_TOKEN
```
```shell
mkdir -p ~/.secrets/certbot
touch ~/.secrets/certbot/cloudflare.ini
nano ~/.secrets/certbot/cloudflare.ini
```
3. Set the correct permissions:
```shell
sudo chmod 600 ~/.secrets/certbot/cloudflare.ini
```
## Create the certificate
1. Create the wildcard certificate:
```shell
sudo certbot certonly --dns-cloudflare --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini -d coder.example.com -d *.coder.example.com
```
## Configure nginx
1. Edit the file with:
```shell
sudo nano /etc/nginx/sites-available/coder.example.com
```
2. Add the following content:
```nginx
server {
server_name coder.example.com *.coder.example.com;
# HTTP configuration
listen 80;
listen [::]:80;
# HTTP to HTTPS
if ($scheme != "https") {
return 301 https://$host$request_uri;
}
# HTTPS configuration
listen [::]:443 ssl ipv6only=on;
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/coder.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/coder.example.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:3000; # Change this if Coder is not listening on the default port 3000
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection upgrade;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
}
}
```
> Don't forget to replace `coder.example.com` with your (sub)domain.
3. Test the configuration:
```shell
sudo nginx -t
```
## Refresh certificates automatically
1. Create a new file in `/etc/cron.weekly`:
```shell
sudo touch /etc/cron.weekly/certbot
```
2. Make it executable:
```shell
sudo chmod +x /etc/cron.weekly/certbot
```
3. And add this code:
```shell
#!/bin/sh
sudo certbot renew -q
```
## Restart NGINX
```shell
sudo systemctl restart nginx
```
And that's it: you should now be able to access Coder at your (sub)domain, e.g.
`https://coder.example.com`.


@ -0,0 +1,164 @@
# Creating Templates
Users with the `Template Administrator` role or above can create templates
within Coder.
## From a starter template
In most cases, it is best to start with a starter template.
<div class="tabs">
### Web UI
After navigating to the Templates page in the Coder dashboard, choose
`Create Template > Choose a starter template`.
![Create a template](../../images/admin/templates/create-template.png)
From there, select a starter template for desired underlying infrastructure for
workspaces.
![Starter templates](../../images/admin/templates/starter-templates.png)
Give your template a name, description, and icon and press `Create template`.
![Name and icon](../../images/admin/templates/import-template.png)
> **⚠️ Note**: If template creation fails, Coder is likely not authorized to
> deploy infrastructure in the given location. Learn how to configure
> [provisioner authentication](#TODO).
### CLI
You can use the [Coder CLI](../../install/cli.md) to manage templates for Coder.
After [logging in](#TODO) to your deployment, create a folder to store your
templates:
```sh
# This snippet applies to macOS and Linux only
mkdir $HOME/coder-templates
cd $HOME/coder-templates
```
Use the [`templates init`](../../reference/cli/templates_init.md) command to
pull a starter template:
```sh
coder templates init
```
After pulling the template to your local machine (e.g. `aws-linux`), you can
rename it:
```sh
# This snippet applies to macOS and Linux only
mv aws-linux universal-template
cd universal-template
```
Next, push it to Coder with the
[`templates push`](../../reference/cli/templates_push.md) command:
```sh
coder templates push
```
> ⚠️ Note: If `templates push` fails, Coder is likely not authorized to deploy
> infrastructure in the given location. Learn how to configure
> [provisioner authentication](../provisioners.md).
You can edit the template's metadata, such as the display name, with the
[`templates edit`](../../reference/cli/templates_edit.md) command:
```sh
coder templates edit universal-template \
--display-name "Universal Template" \
--description "Virtual machine configured with Java, Python, Typescript, IntelliJ IDEA, and Ruby. Use this for starter projects. " \
--icon "/emojis/2b50.png"
```
### CI/CD
Follow the [change management](./managing-templates/change-management.md) guide
to manage templates via GitOps.
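In practice, such a pipeline boils down to running the same CLI commands from CI. Here is a minimal sketch, assuming the template lives in the repository and a `CODER_SESSION_TOKEN` secret has been configured; the workflow name, paths, and template name are illustrative, not part of this guide:

```yaml
# Illustrative GitHub Actions workflow; the secret name, template name,
# and directory layout are assumptions.
name: push-template
on:
  push:
    branches: [main]
jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install the Coder CLI
        run: curl -fsSL https://coder.com/install.sh | sh
      - name: Push the template
        env:
          CODER_URL: https://coder.example.com
          CODER_SESSION_TOKEN: ${{ secrets.CODER_SESSION_TOKEN }}
        run: coder templates push universal-template -d ./universal-template --yes
```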
</div>
## From an existing template
You can duplicate an existing template in your Coder deployment. This will copy
the template code and metadata, allowing you to make changes without affecting
the original template.
<div class="tabs">
### Web UI
After navigating to the page for a template, use the dropdown menu on the right
to `Duplicate`.
![Duplicate menu](../../images/admin/templates/duplicate-menu.png)
Give the new template a name, icon, and description.
![Duplicate page](../../images/admin/templates/duplicate-page.png)
Press `Create template`. After the build, you will be taken to the new template
page.
![New template](../../images/admin/templates/new-duplicate-template.png)
### CLI
First, ensure you are logged in to the control plane as a user with permission
to read and write templates.
```console
coder login
```
You can list the available templates with the following CLI invocation.
```console
coder templates list
```
After identifying the template you'd like to work from, pull it into a directory
with the name you'd like to assign to the new, modified template.
```console
coder templates pull <template-name> ./<new-template-name>
```
Then, you can make modifications to the existing template in this directory and
push them to the control plane using the `-d` flag to specify the directory.
```console
coder templates push <new-template-name> -d ./<new-template-name>
```
You will then see your new template in the dashboard.
</div>
## From scratch (advanced)
There may be cases where you want to create a template from scratch. You can use
[any Terraform provider](https://registry.terraform.io) with Coder to create
templates for additional clouds (e.g. Hetzner, Alibaba) or orchestrators
(VMware, Proxmox) that we do not provide example templates for.
Refer to the following resources:
- [Tutorial: Create a template from scratch](../../tutorials/template-from-scratch.md)
- [Extending templates](./extending-templates/index.md): Features and concepts
around templates (agents, parameters, variables, etc)
- [Coder Registry](https://registry.coder.com/templates): Official and community
templates for Coder
- [Coder Terraform Provider Reference](https://registry.terraform.io/providers/coder/coder)
### Next steps
- [Extending templates](./extending-templates/index.md)
- [Managing templates](./managing-templates/index.md)


@ -0,0 +1,148 @@
# Agent metadata
![agent-metadata](../../../images/admin/templates/agent-metadata-ui.png)
You can show live operational metrics to workspace users with agent metadata. It
is the dynamic complement of [resource metadata](./resource-metadata.md).
You specify agent metadata in the
[`coder_agent`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent).
## Examples
All of these examples use
[heredoc strings](https://developer.hashicorp.com/terraform/language/expressions/strings#heredoc-strings)
for the script declaration. With heredoc strings, you can script without messy
escape codes, just as if you were working in your terminal.
Some of the examples use the [`coder stat`](../../../reference/cli/stat.md)
command. This is useful for determining CPU and memory usage of the VM or
container that the workspace is running in, which is more accurate than resource
usage about the workspace's host.
Here's a standard set of metadata snippets for Linux agents:
```tf
resource "coder_agent" "main" {
os = "linux"
...
metadata {
display_name = "CPU Usage"
key = "cpu"
# Uses the coder stat command to get container CPU usage.
script = "coder stat cpu"
interval = 1
timeout = 1
}
metadata {
display_name = "Memory Usage"
key = "mem"
# Uses the coder stat command to get container memory usage in GiB.
script = "coder stat mem --prefix Gi"
interval = 1
timeout = 1
}
metadata {
display_name = "CPU Usage (Host)"
key = "cpu_host"
# Calculates CPU usage by summing the "us" and "sy" columns of top.
script = <<EOT
top -bn1 | awk 'FNR==3 {printf "%2.0f%%", $2+$3+$4}'
EOT
interval = 1
timeout = 1
}
metadata {
display_name = "Memory Usage (Host)"
key = "mem_host"
script = <<EOT
free | awk '/^Mem/ { printf("%.0f%%", $3/$2 * 100.0) }'
EOT
interval = 1
timeout = 1
}
metadata {
display_name = "Disk Usage"
key = "disk"
script = "df -h | awk '$6 ~ /^\\/$/ { print $5 }'"
interval = 1
timeout = 1
}
metadata {
display_name = "Load Average"
key = "load"
script = <<EOT
awk '{print $1,$2,$3}' /proc/loadavg
EOT
interval = 1
timeout = 1
}
}
```
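If you'd like to see what the `awk` snippets above extract without a running workspace, you can feed them sample output. The sample lines below are illustrative values, not real measurements:

```shell
# top prints its CPU summary on the third line of output; fields $2 and $4
# hold the user and system percentages (the "us,"/"sy," labels coerce to 0
# in awk, so $2+$3+$4 effectively sums user + system).
cpu_sample='%Cpu(s): 65.8 us,  4.4 sy,  0.0 ni, 29.3 id,  0.3 wa'
cpu=$(printf '%s\n' 'line1' 'line2' "$cpu_sample" | awk 'FNR==3 {printf "%2.0f%%", $2+$3+$4}')
echo "$cpu" # 70%

# The disk snippet prints column 5 (Use%) of the row mounted at "/".
df_sample='/dev/sda1  100G  42G  58G  42% /'
disk=$(printf '%s\n' "$df_sample" | awk '$6 ~ /^\/$/ { print $5 }')
echo "$disk" # 42%
```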
## Useful utilities
You can also show agent metadata for information about the workspace's host.
[top](https://manpages.ubuntu.com/manpages/jammy/en/man1/top.1.html) is
available in most Linux distributions and provides virtual memory, CPU and IO
statistics. Running `top` produces output that looks like:
```text
%Cpu(s): 65.8 us, 4.4 sy, 0.0 ni, 29.3 id, 0.3 wa, 0.0 hi, 0.2 si, 0.0 st
MiB Mem : 16009.0 total, 493.7 free, 4624.8 used, 10890.5 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 11021.3 avail Mem
```
[vmstat](https://manpages.ubuntu.com/manpages/jammy/en/man8/vmstat.8.html) is
available in most Linux distributions and provides virtual memory, CPU and IO
statistics. Running `vmstat` produces output that looks like:
```text
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 19580 4781680 12133692 217646944 0 2 4 32 1 0 1 1 98 0 0
```
[dstat](https://manpages.ubuntu.com/manpages/jammy/man1/dstat.1.html) is
considerably more parseable than `vmstat` but often not included in base images.
It is easily installed by most package managers under the name `dstat`. The
output of running `dstat 1 1` looks like:
```text
--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read writ| recv send| in out | int csw
1 1 98 0 0|3422k 25M| 0 0 | 153k 904k| 123k 174k
```
## Managing the database load
Agent metadata can generate a significant write load and overwhelm your Coder
database if you're not careful. The approximate writes per second can be
calculated using the formula:
```text
(metadata_count * num_running_agents * 2) / metadata_avg_interval
```
For example, let's say you have
- 10 running agents
- each with 6 metadata snippets
- with an average interval of 4 seconds
You can expect `(10 * 6 * 2) / 4`, or 30 writes per second.
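The same arithmetic can be sanity-checked in a shell:

```shell
# 6 metadata snippets, 10 running agents, 4-second average interval
metadata_count=6
num_running_agents=10
metadata_avg_interval=4

# (metadata_count * num_running_agents * 2) / metadata_avg_interval
writes_per_second=$(( (metadata_count * num_running_agents * 2) / metadata_avg_interval ))
echo "$writes_per_second" # 30
```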
One write goes to the `UNLOGGED` `workspace_agent_metadata` table, and the other
is the `NOTIFY` query that enables live stats streaming in the UI.
## Next Steps
- [Resource metadata](./resource-metadata.md)
- [Parameters](./parameters.md)


@ -0,0 +1,461 @@
# Docker in Workspaces
There are a few ways to run Docker within container-based Coder workspaces.
| Method | Description | Limitations |
| ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Sysbox container runtime](#sysbox-container-runtime) | Install the Sysbox runtime on your Kubernetes nodes or Docker host(s) for secure docker-in-docker and systemd-in-docker. Works with GKE, EKS, AKS, Docker. | Requires [compatible nodes](https://github.com/nestybox/sysbox#host-requirements). [Limitations](https://github.com/nestybox/sysbox/blob/master/docs/user-guide/limitations.md) |
| [Envbox](#envbox) | A container image with all the packages necessary to run an inner Sysbox container. Removes the need to setup sysbox-runc on your nodes. Works with GKE, EKS, AKS. | Requires running the outer container as privileged (the inner container that acts as the workspace is locked down). Requires compatible [nodes](https://github.com/nestybox/sysbox/blob/master/docs/distro-compat.md#sysbox-distro-compatibility). |
| [Rootless Podman](#rootless-podman) | Run Podman inside Coder workspaces. Does not require a custom runtime or privileged containers. Works with GKE, EKS, AKS, RKE, OpenShift | Requires smarter-device-manager for FUSE mounts. [See all](https://github.com/containers/podman/blob/main/rootless.md#shortcomings-of-rootless-podman) |
| [Privileged docker sidecar](#privileged-sidecar-container) | Run Docker as a privileged sidecar container. | Requires a privileged container. Workspaces can break out to root on the host machine. |
## Sysbox container runtime
The [Sysbox](https://github.com/nestybox/sysbox) container runtime allows
unprivileged users to run system-level applications, such as Docker, securely
from the workspace containers. Sysbox requires a
[compatible Linux distribution](https://github.com/nestybox/sysbox/blob/master/docs/distro-compat.md)
to implement these security features. Sysbox can also be used to run systemd
inside Coder workspaces. See [Systemd in Docker](#systemd-in-docker).
### Use Sysbox in Docker-based templates
After [installing Sysbox](https://github.com/nestybox/sysbox#installation) on
the Coder host, modify your template to use the sysbox-runc runtime:
```tf
resource "docker_container" "workspace" {
# ...
name = "coder-${data.coder_workspace.me.owner}-${lower(data.coder_workspace.me.name)}"
image = "codercom/enterprise-base:ubuntu"
env = ["CODER_AGENT_TOKEN=${coder_agent.main.token}"]
command = ["sh", "-c", coder_agent.main.init_script]
# Use the Sysbox container runtime (required)
runtime = "sysbox-runc"
}
resource "coder_agent" "main" {
arch = data.coder_provisioner.me.arch
os = "linux"
startup_script = <<EOF
#!/bin/sh
# Start Docker
sudo dockerd &
# ...
EOF
}
```
### Use Sysbox in Kubernetes-based templates
After
[installing Sysbox on Kubernetes](https://github.com/nestybox/sysbox/blob/master/docs/user-guide/install-k8s.md),
modify your template to use the sysbox-runc RuntimeClass. This requires the
Kubernetes Terraform provider version 2.16.0 or greater.
```tf
terraform {
required_providers {
coder = {
source = "coder/coder"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.16.0"
}
}
}
variable "workspaces_namespace" {
default = "coder-namespace"
}
data "coder_workspace" "me" {}
resource "coder_agent" "main" {
os = "linux"
arch = "amd64"
dir = "/home/coder"
startup_script = <<EOF
#!/bin/sh
# Start Docker
sudo dockerd &
# ...
EOF
}
resource "kubernetes_pod" "dev" {
count = data.coder_workspace.me.start_count
metadata {
name = "coder-${data.coder_workspace.me.owner}-${data.coder_workspace.me.name}"
namespace = var.workspaces_namespace
annotations = {
"io.kubernetes.cri-o.userns-mode" = "auto:size=65536"
}
}
spec {
runtime_class_name = "sysbox-runc"
# Use the Sysbox container runtime (required)
security_context {
run_as_user = 1000
fs_group = 1000
}
container {
name = "dev"
env {
name = "CODER_AGENT_TOKEN"
value = coder_agent.main.token
}
image = "codercom/enterprise-base:ubuntu"
command = ["sh", "-c", coder_agent.main.init_script]
}
}
}
```
## Envbox
[Envbox](https://github.com/coder/envbox) is an image developed and maintained
by Coder that bundles the sysbox runtime. It works by starting an outer
container that manages the various sysbox daemons and spawns an unprivileged
inner container that acts as the user's workspace. The inner container is able
to run system-level software similar to a regular virtual machine (e.g.
`systemd`, `dockerd`, etc). Envbox offers the following benefits over running
sysbox directly on the nodes:
- No custom runtime installation or management on your Kubernetes nodes.
- No limit to the number of pods that run envbox.
Some drawbacks include:
- The outer container must be run as privileged
- Note: the inner container is _not_ privileged. For more information on the
security of sysbox containers see sysbox's
[official documentation](https://github.com/nestybox/sysbox/blob/master/docs/user-guide/security.md).
- Initial workspace startup is slower than running `sysbox-runc` directly on the
nodes. This is due to `envbox` having to pull the image to its own Docker
cache on its initial startup. Once the image is cached in `envbox`, startup
performance is similar.
Envbox requires the same kernel requirements as running sysbox directly on the
nodes. Refer to sysbox's
[compatibility matrix](https://github.com/nestybox/sysbox/blob/master/docs/distro-compat.md#sysbox-distro-compatibility)
to ensure your nodes are compliant.
To get started with `envbox` check out the
[starter template](https://github.com/coder/coder/tree/main/examples/templates/envbox)
or visit the [repo](https://github.com/coder/envbox).
### Authenticating with a Private Registry
Authenticating with a private container registry can be done by referencing the
credentials via the `CODER_IMAGE_PULL_SECRET` environment variable. It is
encouraged to populate this
[environment variable](https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data)
by using a Kubernetes
[secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials).
Refer to your container registry documentation to understand how to best create
this secret.
The following shows a minimal example using the JSON API key from a GCP
service account to pull a private image:
```bash
# Create the secret
$ kubectl create secret docker-registry <name> \
--docker-server=us.gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ./json-key-file.yaml)" \
--docker-email=<service-account-email>
```
```tf
env {
name = "CODER_IMAGE_PULL_SECRET"
value_from {
secret_key_ref {
name = "<name>"
key = ".dockerconfigjson"
}
}
}
```
## Rootless podman
[Podman](https://docs.podman.io/en/latest/) is a Docker alternative that is
compatible with the OCI container specification and can run rootless inside
Kubernetes pods. No custom RuntimeClass is required.
Before using Podman, please review the following documentation:
- [Basic setup and use of Podman in a rootless environment](https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md)
- [Shortcomings of Rootless Podman](https://github.com/containers/podman/blob/main/rootless.md#shortcomings-of-rootless-podman)
1. Enable
[smarter-device-manager](https://gitlab.com/arm-research/smarter/smarter-device-manager#enabling-access)
to securely expose FUSE devices to pods.
```shell
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fuse-device-plugin-daemonset
namespace: kube-system
spec:
selector:
matchLabels:
name: fuse-device-plugin-ds
template:
metadata:
labels:
name: fuse-device-plugin-ds
spec:
hostNetwork: true
containers:
- name: fuse-device-plugin-ctr
image: soolaugust/fuse-device-plugin:v1.0
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
volumeMounts:
- name: device-plugin
mountPath: /var/lib/kubelet/device-plugins
volumes:
- name: device-plugin
hostPath:
path: /var/lib/kubelet/device-plugins
imagePullSecrets:
- name: registry-secret
EOF
```
2. Be sure to label your nodes to enable smarter-device-manager:
```shell
kubectl get nodes
kubectl label nodes --all smarter-device-manager=enabled
```
> ⚠️ **Warning**: If you are using a managed Kubernetes distribution (e.g.
> AKS, EKS, GKE), be sure to set node labels via your cloud provider.
> Otherwise, your nodes may drop the labels and break podman functionality.
3. For systems running SELinux (typically Fedora-, CentOS-, and Red Hat-based
systems), you might need to disable SELinux or set it to permissive mode.
4. Use this
[kubernetes-with-podman](https://github.com/coder/community-templates/tree/main/kubernetes-podman)
example template, or make your own.
```shell
echo "kubernetes-with-podman" | coder templates init
cd ./kubernetes-with-podman
coder templates create
```
> For more information on the requirements of rootless podman pods, see:
> [How to run Podman inside of Kubernetes](https://www.redhat.com/sysadmin/podman-inside-kubernetes)
## Privileged sidecar container
A
[privileged container](https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities)
can be added to your templates to add Docker support. This may come in handy if
your nodes cannot run Sysbox.
> ⚠️ **Warning**: This is insecure. Workspaces will be able to gain root access
> to the host machine.
### Use a privileged sidecar container in Docker-based templates
```tf
resource "coder_agent" "main" {
os = "linux"
arch = "amd64"
}
resource "docker_network" "private_network" {
name = "network-${data.coder_workspace.me.id}"
}
resource "docker_container" "dind" {
image = "docker:dind"
privileged = true
name = "dind-${data.coder_workspace.me.id}"
entrypoint = ["dockerd", "-H", "tcp://0.0.0.0:2375"]
networks_advanced {
name = docker_network.private_network.name
}
}
resource "docker_container" "workspace" {
count = data.coder_workspace.me.start_count
image = "codercom/enterprise-base:ubuntu"
name = "dev-${data.coder_workspace.me.id}"
command = ["sh", "-c", coder_agent.main.init_script]
env = [
"CODER_AGENT_TOKEN=${coder_agent.main.token}",
"DOCKER_HOST=${docker_container.dind.name}:2375"
]
networks_advanced {
name = docker_network.private_network.name
}
}
```
### Use a privileged sidecar container in Kubernetes-based templates
```tf
terraform {
required_providers {
coder = {
source = "coder/coder"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.16.0"
}
}
}
variable "workspaces_namespace" {
default = "coder-namespace"
}
data "coder_workspace" "me" {}
resource "coder_agent" "main" {
os = "linux"
arch = "amd64"
}
resource "kubernetes_pod" "main" {
count = data.coder_workspace.me.start_count
metadata {
name = "coder-${data.coder_workspace.me.owner}-${data.coder_workspace.me.name}"
    namespace = var.workspaces_namespace
}
spec {
# Run a privileged dind (Docker in Docker) container
container {
name = "docker-sidecar"
image = "docker:dind"
security_context {
privileged = true
run_as_user = 0
}
command = ["dockerd", "-H", "tcp://127.0.0.1:2375"]
}
container {
name = "dev"
image = "codercom/enterprise-base:ubuntu"
command = ["sh", "-c", coder_agent.main.init_script]
security_context {
run_as_user = "1000"
}
env {
name = "CODER_AGENT_TOKEN"
value = coder_agent.main.token
}
# Use the Docker daemon in the "docker-sidecar" container
env {
name = "DOCKER_HOST"
value = "localhost:2375"
}
}
}
}
```
## Systemd in Docker
Additionally, [Sysbox](https://github.com/nestybox/sysbox) can be used to give
workspaces full `systemd` capabilities.
After
[installing Sysbox on Kubernetes](https://github.com/nestybox/sysbox/blob/master/docs/user-guide/install-k8s.md),
modify your template to use the sysbox-runc RuntimeClass. This requires the
Kubernetes Terraform provider version 2.16.0 or greater.
```tf
terraform {
required_providers {
coder = {
source = "coder/coder"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.16.0"
}
}
}
variable "workspaces_namespace" {
default = "coder-namespace"
}
data "coder_workspace" "me" {}
resource "coder_agent" "main" {
os = "linux"
arch = "amd64"
dir = "/home/coder"
}
resource "kubernetes_pod" "dev" {
count = data.coder_workspace.me.start_count
metadata {
name = "coder-${data.coder_workspace.me.owner}-${data.coder_workspace.me.name}"
namespace = var.workspaces_namespace
annotations = {
"io.kubernetes.cri-o.userns-mode" = "auto:size=65536"
}
}
spec {
# Use Sysbox container runtime (required)
runtime_class_name = "sysbox-runc"
# Run as root in order to start systemd (required)
security_context {
run_as_user = 0
fs_group = 0
}
container {
name = "dev"
env {
name = "CODER_AGENT_TOKEN"
value = coder_agent.main.token
}
image = "codercom/enterprise-base:ubuntu"
command = ["sh", "-c", <<EOF
# Start the Coder agent as the "coder" user
# once systemd has started up
    sudo -u coder --preserve-env=CODER_AGENT_TOKEN /bin/bash -- <<-'EOT' &
while [[ ! $(systemctl is-system-running) =~ ^(running|degraded) ]]
do
echo "Waiting for system to start... $(systemctl is-system-running)"
sleep 2
done
${coder_agent.main.init_script}
EOT
exec /sbin/init
EOF
]
}
}
}
```
# External Authentication
Coder integrates with any OpenID Connect provider to automate away the need for
developers to authenticate with external services within their workspace. This
can be used to authenticate with git providers, private registries, or any other
service that requires authentication.
## External Auth Providers
External auth providers are configured using environment variables on the Coder
control plane.
## Git Providers
When developers use `git` inside their workspace, they are prompted to
authenticate. After that, Coder will store and refresh tokens for future
operations.
<video autoplay playsinline loop>
<source src="https://github.com/coder/coder/blob/main/site/static/external-auth.mp4?raw=true" type="video/mp4">
Your browser does not support the video tag.
</video>
### Require git authentication in templates
If your template requires git authentication (e.g. running `git clone` in the
[startup_script](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script)),
you can require users authenticate via git prior to creating a workspace:
![Git authentication in template](../../../images/admin/git-auth-template.png)
### Native git authentication will auto-refresh tokens
<blockquote class="info">
<p>
This is the preferred authentication method.
</p>
</blockquote>
By default, the coder agent will configure native `git` authentication via the
`GIT_ASKPASS` environment variable. This means that, with no additional
configuration, external authentication will work with native `git` commands.
To check the auth token being used **from inside a running workspace**, run:
```shell
# If the exit code is non-zero, then the user is not authenticated with the
# external provider.
coder external-auth access-token <external-auth-id>
```
Note: Some IDEs override the `GIT_ASKPASS` environment variable and need to be
configured separately.
**VSCode**
Use the
[Coder](https://marketplace.visualstudio.com/items?itemName=coder.coder-remote)
extension to automatically configure these settings for you!
Otherwise, you can manually configure the following settings:
- Set `git.terminalAuthentication` to `false`
- Set `git.useIntegratedAskPass` to `false`
### Hard coded tokens do not auto-refresh
If the token must be inserted into the workspace, for example for the
[GitHub CLI](https://cli.github.com/), the auth token can be inserted from the
template. This token will not auto-refresh. The following example authenticates
via GitHub and auto-clones a repo into the `~/coder` directory.
```tf
data "coder_external_auth" "github" {
# Matches the ID of the external auth provider in Coder.
id = "github"
}
resource "coder_agent" "dev" {
os = "linux"
arch = "amd64"
dir = "~/coder"
env = {
GITHUB_TOKEN : data.coder_external_auth.github.access_token
}
startup_script = <<EOF
if [ ! -d ~/coder ]; then
git clone https://github.com/coder/coder
fi
EOF
}
```
See the
[Terraform provider documentation](https://registry.terraform.io/providers/coder/coder/latest/docs/data-sources/external_auth)
for all available options.
# Icons
Coder uses icons in several places, including ones that can be configured
throughout the app, or specified in your Terraform. They're specified by a URL,
which can be to an image hosted on a CDN of your own, or one of the icons that
come bundled with your Coder deployment.
- **Template Icons**:
- Make templates and workspaces visually recognizable with a relevant or
memorable icon
- [**Terraform**](https://registry.terraform.io/providers/coder/coder/latest/docs):
- [`coder_app`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/app#icon)
- [`coder_parameter`](https://registry.terraform.io/providers/coder/coder/latest/docs/data-sources/parameter#icon)
and
[`option`](https://registry.terraform.io/providers/coder/coder/latest/docs/data-sources/parameter#nested-schema-for-option)
blocks
- [`coder_script`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/script#icon)
- [`coder_metadata`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/metadata#icon)
These can all be configured to use an icon by setting the `icon` field.
```tf
data "coder_parameter" "my_parameter" {
icon = "/icon/coder.svg"
option {
icon = "/emojis/1f3f3-fe0f-200d-26a7-fe0f.png"
}
}
```
- [**Authentication Providers**](https://coder.com/docs/admin/external-auth):
- Use icons for external authentication providers to make them recognizable.
You can set an icon for each provider by setting the
`CODER_EXTERNAL_AUTH_X_ICON` environment variable, where `X` is the number
of the provider.
```env
CODER_EXTERNAL_AUTH_0_ICON=/icon/github.svg
CODER_EXTERNAL_AUTH_1_ICON=/icon/google.svg
```
- [**Support Links**](../../setup/appearance.md#support-links):
- Use icons for support links to make them recognizable. You can set the
`icon` field for each link in `CODER_SUPPORT_LINKS` array.
## Bundled icons
Coder is distributed with a bundle of icons for popular cloud providers and
programming languages. You can see all of the icons (or suggest new ones) in our
repository on
[GitHub](https://github.com/coder/coder/tree/main/site/static/icon).
You can also view the entire list, with search and previews, by navigating to
`/icons` on your Coder deployment (e.g. `https://coder.example.com/icons`). This
can be particularly useful in air-gapped deployments.
![The icon gallery](../../../images/icons-gallery.png)
## External icons
You can use any image served over HTTPS as an icon, by specifying the full URL
of the image. We recommend that you use a CDN that you control, but it can be
served from any source that you trust.
You can also embed an image by using `data:` URLs.
- Only the `https:` and `data:` protocols are supported in icon URLs (not
  `http:`)
- Be careful when using images hosted by someone else; they might disappear or
  change!
- Be careful when using `data:` URLs. They can get rather large, and can
  negatively impact loading times for pages and queries they appear in. Only use
  them for very small icons that compress well.
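As a sketch, a `data:` URL can be generated from a small SVG like this (the SVG
is inlined here purely for illustration; in practice you would encode your own
icon file):

```shell
# Sketch: turn a small SVG into a data: URL usable as an icon value.
# The SVG is inline for illustration; in practice, read your icon file instead.
svg='<svg/>'
printf 'data:image/svg+xml;base64,%s\n' "$(printf '%s' "$svg" | base64 -w0)"
# → data:image/svg+xml;base64,PHN2Zy8+
```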
# Extending templates
There are a variety of Coder-native features to extend the configuration of your
development environments. Many of the following features are defined in your
templates using the
[Coder Terraform provider](https://registry.terraform.io/providers/coder/coder/latest/docs).
The provider docs will provide code examples for usage; alternatively, you can
view our
[example templates](https://github.com/coder/coder/tree/main/examples/templates)
to get started.
## Workspace agents
For users to connect to a workspace, the template must include a
[`coder_agent`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent).
The associated agent will facilitate
[workspace connections](../../../user-guides/workspace-access/index.md) via SSH,
port forwarding, and IDEs. The agent may also display real-time
[workspace metadata](./agent-metadata.md) like resource usage.
```tf
resource "coder_agent" "dev" {
os = "linux"
arch = "amd64"
dir = "/workspace"
display_apps {
vscode = true
}
}
```
You can also leverage [resource metadata](./resource-metadata.md) to display
static resource information from your template.
Templates must include some computational resource to start the agent. All
processes on the workspace are then spawned from the agent. It also provides all
information displayed in the dashboard's workspace view.
![A healthy workspace agent](../../../images/templates/healthy-workspace-agent.png)
Multiple agents may be used in a single template or even a single resource. Each
agent may have its own apps, startup script, and metadata. This can be used to
associate multiple containers or VMs with a workspace.
## Resource persistence
The resources you define in a template may be _ephemeral_ or _persistent_.
Persistent resources stay provisioned when workspaces are stopped, whereas
ephemeral resources are destroyed and recreated on restart. All resources are
destroyed when a workspace is deleted.
> You can read more about resource behavior and workspace state in the
> [workspace lifecycle documentation](../../../user-guides/workspace-lifecycle.md).
Template resources follow the
[behavior of Terraform resources](https://developer.hashicorp.com/terraform/language/resources/behavior#how-terraform-applies-a-configuration)
and can be further configured using the
[lifecycle argument](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle).
A common configuration is a template whose only persistent resource is the home
directory. This allows the developer to retain their work while ensuring the
rest of their environment is consistently up-to-date on each workspace restart.
When a workspace is deleted, the Coder server essentially runs a
[terraform destroy](https://www.terraform.io/cli/commands/destroy) to remove all
resources associated with the workspace.
> Terraform's
> [prevent-destroy](https://www.terraform.io/language/meta-arguments/lifecycle#prevent_destroy)
> and
> [ignore-changes](https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes)
> meta-arguments can be used to prevent accidental data loss.
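As a sketch of the home-directory pattern described above (using the Docker
provider; the resource and volume names here are illustrative, not part of any
particular template):

```tf
# Hypothetical persistent home volume: it is not gated on start_count, so it
# survives workspace stops. ignore_changes guards against Terraform replacing
# the volume on template updates; it is still destroyed with the workspace.
resource "docker_volume" "home" {
  name = "coder-${data.coder_workspace.me.id}-home"
  lifecycle {
    ignore_changes = all
  }
}
```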
## Coder apps
Additional IDEs, documentation, or services can be associated to your workspace
using the
[`coder_app`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/app)
resource.
![Coder Apps in the dashboard](../../../images/admin/templates/coder-apps-ui.png)
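A minimal sketch of a `coder_app` (the port, slug, and folder path are
illustrative, assuming a code-server process listening in the workspace):

```tf
# Hypothetical web IDE app attached to the "dev" agent
resource "coder_app" "code_server" {
  agent_id     = coder_agent.dev.id
  slug         = "code-server"
  display_name = "code-server"
  url          = "http://localhost:13337/?folder=/home/coder"
  icon         = "/icon/code.svg"
  subdomain    = false
}
```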
Note that some apps are associated to the agent by default as
[`display_apps`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#nested-schema-for-display_apps)
and can be hidden directly in the
[`coder_agent`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent)
resource. You can arrange the display orientation of Coder apps in your template
using [resource ordering](./resource-ordering.md).
Check out our [module registry](https://registry.coder.com/modules) for
additional Coder apps from the team and our OSS community.
<children></children>
# Reusing template code
To reuse code across different Coder templates, such as common scripts or
resource definitions, we suggest using
[Terraform Modules](https://developer.hashicorp.com/terraform/language/modules).
You can store these modules externally from your Coder deployment, like in a git
repository or a Terraform registry. This example shows how to reference a module
from your template:
```tf
data "coder_workspace" "me" {}
module "coder-base" {
source = "github.com/my-organization/coder-base"
# Modules take in variables and can provision infrastructure
vpc_name = "devex-3"
subnet_tags = { "name": data.coder_workspace.me.name }
  code_server_version = "4.14.1"
}
resource "coder_agent" "dev" {
# Modules can provide outputs, such as helper scripts
startup_script=<<EOF
#!/bin/sh
${module.coder-base.code_server_install_command}
EOF
}
```
Learn more about
[creating modules](https://developer.hashicorp.com/terraform/language/modules)
and
[module sources](https://developer.hashicorp.com/terraform/language/modules/sources)
in the Terraform documentation.
## Coder modules
Coder publishes plenty of modules that can be used to simplify common tasks
across templates. Some of the modules we publish include:
1. [`code-server`](https://registry.coder.com/modules/code-server) and
[`vscode-web`](https://registry.coder.com/modules/vscode-web)
2. [`git-clone`](https://registry.coder.com/modules/git-clone)
3. [`dotfiles`](https://registry.coder.com/modules/dotfiles)
4. [`jetbrains-gateway`](https://registry.coder.com/modules/jetbrains-gateway)
5. [`jfrog-oauth`](https://registry.coder.com/modules/jfrog-oauth) and
[`jfrog-token`](https://registry.coder.com/modules/jfrog-token)
6. [`vault-github`](https://registry.coder.com/modules/vault-github)
For a full list of available modules, please check the
[Coder module registry](https://registry.coder.com/modules).
## Offline installations
In offline and restricted deployments, there are two ways to fetch modules:
1. Artifactory
2. Private git repository
### Artifactory
Air-gapped users can clone the [coder/modules](https://github.com/coder/modules)
repo and publish a
[local terraform module repository](https://jfrog.com/help/r/jfrog-artifactory-documentation/set-up-a-terraform-module/provider-registry)
to resolve modules via [Artifactory](https://jfrog.com/artifactory/).
1. Create a local Terraform repository named `coder-modules-local`
2. Create a virtual repository named `tf`
3. Follow these instructions to publish Coder modules to Artifactory:
```shell
git clone https://github.com/coder/modules
cd modules
jf tfc
jf tf p --namespace="coder" --provider="coder" --tag="1.0.0"
```
4. Generate a token with access to the `tf` repo and set the environment
variable `TF_TOKEN_example_jfrog_io="XXXXXXXXXXXXXXX"` on the Coder provisioner
(Terraform replaces dots in the hostname with underscores in credential
environment variable names).
5. Create a file `.terraformrc` with the following content and mount it at
`/home/coder/.terraformrc` within the Coder provisioner.
```tf
provider_installation {
direct {
exclude = ["registry.terraform.io/*/*"]
}
network_mirror {
url = "https://example.jfrog.io/artifactory/api/terraform/tf/providers/"
}
}
```
6. Update the module source:
```tf
module "module-name" {
source = "https://example.jfrog.io/tf__coder/module-name/coder"
version = "1.0.0"
agent_id = coder_agent.example.id
...
}
```
> Do not forget to replace `example.jfrog.io` with your Artifactory URL.
These steps are based on
[JFrog's guide to Terraform registries in Artifactory](https://jfrog.com/blog/tour-terraform-registries-in-artifactory/).
#### Example template
We have an example template
[here](https://github.com/coder/coder/blob/main/examples/jfrog/remote/main.tf)
that uses our
[JFrog Docker](https://github.com/coder/coder/blob/main/examples/jfrog/docker/main.tf)
template as the underlying module.
### Private git repository
If you are importing a module from a private git repository, the Coder server or
[provisioner](../../provisioners.md) needs git credentials. Since this token
will only be used for cloning your repositories with modules, it is best to
create a token with access limited to the repository and no extra permissions.
In GitHub, you can generate a
[fine-grained token](https://docs.github.com/en/rest/overview/permissions-required-for-fine-grained-personal-access-tokens?apiVersion=2022-11-28)
with read only access to the necessary repos.
If you are running Coder on a VM, make sure that you have `git` installed and
the `coder` user has access to the following files:
```shell
# /home/coder/.gitconfig
[credential]
helper = store
```
```shell
# /home/coder/.git-credentials
# GitHub example:
https://your-github-username:your-github-pat@github.com
```
If you are running Coder on Docker or Kubernetes, `git` is pre-installed in the
Coder image. However, you still need to mount credentials. This can be done via
a Docker volume mount or Kubernetes secrets.
#### Passing git credentials in Kubernetes
First, create a `.gitconfig` and `.git-credentials` file on your local machine.
You might want to do this in a temporary directory to avoid conflicting with
your own git credentials.
Next, create the secret in Kubernetes. Be sure to do this in the same namespace
that Coder is installed in.
```shell
export NAMESPACE=coder
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: git-secrets
namespace: $NAMESPACE
type: Opaque
data:
.gitconfig: $(cat .gitconfig | base64 | tr -d '\n')
.git-credentials: $(cat .git-credentials | base64 | tr -d '\n')
EOF
```
Then, modify Coder's Helm values to mount the secret.
```yaml
coder:
volumes:
- name: git-secrets
secret:
secretName: git-secrets
volumeMounts:
- name: git-secrets
mountPath: "/home/coder/.gitconfig"
subPath: .gitconfig
readOnly: true
- name: git-secrets
mountPath: "/home/coder/.git-credentials"
subPath: .git-credentials
readOnly: true
```
### Next steps
- JFrog's
[Terraform Registry support](https://jfrog.com/help/r/jfrog-artifactory-documentation/terraform-registry)
- [Configuring the JFrog toolchain inside a workspace](../../integrations/jfrog-artifactory.md)
- [Coder Module Registry](https://registry.coder.com/modules)
# Parameters
A template can prompt the user for additional information when creating
workspaces with
[_parameters_](https://registry.terraform.io/providers/coder/coder/latest/docs/data-sources/parameter).
![Parameters in Create Workspace screen](../../../images/parameters.png)
The user can set parameters in the dashboard UI and CLI.
You'll likely want to hardcode certain template properties for workspaces, such
as the security group. But you can let developers specify other properties with
parameters like instance size, geographical location, repository URL, etc.
This example lets a developer choose a Docker host for the workspace:
```tf
data "coder_parameter" "docker_host" {
name = "Region"
description = "Which region would you like to deploy to?"
icon = "/emojis/1f30f.png"
type = "string"
default = "tcp://100.94.74.63:2375"
option {
name = "Pittsburgh, USA"
value = "tcp://100.94.74.63:2375"
icon = "/emojis/1f1fa-1f1f8.png"
}
option {
name = "Helsinki, Finland"
value = "tcp://100.117.102.81:2375"
icon = "/emojis/1f1eb-1f1ee.png"
}
option {
name = "Sydney, Australia"
value = "tcp://100.127.2.1:2375"
icon = "/emojis/1f1e6-1f1f9.png"
}
}
```
From there, a template can refer to a parameter's value:
```tf
provider "docker" {
host = data.coder_parameter.docker_host.value
}
```
## Types
A Coder parameter can have one of these types:
- `string`
- `bool`
- `number`
- `list(string)`
To specify a default value for a parameter with the `list(string)` type, use a
JSON array and the Terraform
[jsonencode](https://developer.hashicorp.com/terraform/language/functions/jsonencode)
function. For example:
```tf
data "coder_parameter" "security_groups" {
name = "Security groups"
icon = "/icon/aws.png"
type = "list(string)"
description = "Select appropriate security groups."
mutable = true
default = jsonencode([
"Web Server Security Group",
"Database Security Group",
"Backend Security Group"
])
}
```
## Options
A `string` parameter can provide a set of options to limit the user's choices:
```tf
data "coder_parameter" "docker_host" {
name = "Region"
description = "Which region would you like to deploy to?"
type = "string"
default = "tcp://100.94.74.63:2375"
option {
name = "Pittsburgh, USA"
value = "tcp://100.94.74.63:2375"
icon = "/emojis/1f1fa-1f1f8.png"
}
option {
name = "Helsinki, Finland"
value = "tcp://100.117.102.81:2375"
icon = "/emojis/1f1eb-1f1ee.png"
}
option {
name = "Sydney, Australia"
value = "tcp://100.127.2.1:2375"
icon = "/emojis/1f1e6-1f1f9.png"
}
}
```
### Incompatibility in Parameter Options for Workspace Builds
When creating Coder templates, authors have the flexibility to modify parameter
options associated with rich parameters. Such modifications can involve adding,
substituting, or removing a parameter option. It's important to note that making
these changes can lead to discrepancies in parameter values utilized by ongoing
workspace builds.
Consequently, workspace users will be prompted to select the new value from a
pop-up window or by using the command-line interface. While this additional
interactive step might seem like an interruption, it serves a crucial purpose.
It prevents workspace users from becoming trapped with outdated template
versions, ensuring they can smoothly update their workspace without any
hindrances.
Example:
- Bob creates a workspace using the `python-dev` template. This template has a
parameter `image_tag`, and Bob selects `1.12`.
- Later, the template author Alice is notified of a critical vulnerability in a
package installed in the `python-dev` template, which affects the image tag
`1.12`.
- Alice remediates this vulnerability, and pushes an updated template version
that replaces option `1.12` with `1.13` for the `image_tag` parameter. She
then notifies all users of that template to update their workspace
immediately.
- Bob saves their work, and selects the `Update` option in the UI. As their
workspace uses the now-invalid option `1.12`, for the `image_tag` parameter,
they are prompted to select a new value for `image_tag`.
## Required and optional parameters
A parameter is _required_ if it doesn't have the `default` property. The user
**must** provide a value to this parameter before creating a workspace:
```tf
data "coder_parameter" "account_name" {
name = "Account name"
description = "Cloud account name"
mutable = true
}
```
If a parameter contains the `default` property, Coder will use this value if the
user does not specify any:
```tf
data "coder_parameter" "base_image" {
name = "Base image"
description = "Base machine image to download"
default = "ubuntu:latest"
}
```
Admins can also set the `default` property to an empty value so that the
parameter field can remain empty:
```tf
data "coder_parameter" "dotfiles_url" {
name = "dotfiles URL"
description = "Git repository with dotfiles"
mutable = true
default = ""
}
```
## Mutability
Immutable parameters can only be set in these situations:
- Creating a workspace for the first time.
- Updating a workspace to a new template version. This sets the initial value
for required parameters.
The idea is to prevent users from modifying fragile or persistent workspace
resources like volumes, regions, and so on.
Example:
```tf
data "coder_parameter" "region" {
name = "Region"
description = "Region where the workspace is hosted"
mutable = false
default = "us-east-1"
}
```
You can modify a parameter's `mutable` attribute at any time. In case of
emergency, you can temporarily allow changing immutable parameters to fix an
operational issue, but it is not advised to overuse this option.
## Ephemeral parameters
Ephemeral parameters are introduced to users in the form of "build options." Use
ephemeral parameters to model specific behaviors in a Coder workspace, such as
reverting to a previous image, restoring from a volume snapshot, or building a
project without using cache.
Since these parameters are ephemeral in nature, subsequent builds proceed in the
standard manner:
```tf
data "coder_parameter" "force_rebuild" {
name = "force_rebuild"
type = "bool"
description = "Rebuild the Docker image rather than use the cached one."
mutable = true
default = false
ephemeral = true
}
```
## Validating parameters
Coder supports rich parameters with multiple validation modes: min, max,
monotonic numbers, and regular expressions.
### Number
You can limit a `number` parameter to `min` and `max` boundaries.
You can also specify its monotonicity as `increasing` or `decreasing` to verify
the current and new values. Use the `monotonic` attribute for resources that
can't be shrunk or grown without implications, like disk volume size.
```tf
data "coder_parameter" "instances" {
name = "Instances"
type = "number"
description = "Number of compute instances"
validation {
min = 1
max = 8
monotonic = "increasing"
}
}
```
It is possible to override the default `error` message for a `number` parameter,
along with its associated `min` and/or `max` properties. The following message
placeholders are available: `{min}`, `{max}`, and `{value}`.
```tf
data "coder_parameter" "instances" {
name = "Instances"
type = "number"
description = "Number of compute instances"
validation {
min = 1
max = 4
error = "Sorry, we can't provision too many instances - maximum limit: {max}, wanted: {value}."
}
}
```
**NOTE:** As of
[`terraform-provider-coder` v0.19.0](https://registry.terraform.io/providers/coder/coder/0.19.0/docs),
`options` can be specified in `number` parameters; this also works with
validations such as `monotonic`.
### String
You can validate a `string` parameter to match a regular expression. The `regex`
property requires a corresponding `error` property.
```tf
data "coder_parameter" "project_id" {
name = "Project ID"
description = "Alpha-numeric project ID"
validation {
regex = "^[a-z0-9]+$"
error = "Unfortunately, this isn't a valid project ID"
}
}
```
## Create Autofill
When the template doesn't specify default values, Coder may still autofill
parameters in two ways:
1. Coder will look for URL query parameters with form `param.<name>=<value>`.
This feature enables platform teams to create pre-filled template creation
links.
2. Coder will populate recently used parameter key-value pairs for the user.
This feature helps reduce repetition when filling common parameters such as
`dotfiles_url` or `region`.
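As a sketch, a pre-filled creation link built with `param.<name>=<value>` query
parameters might look like the following (the hostname, template path, and
parameter name are all placeholders, not a documented URL scheme):

```shell
# Build a pre-filled workspace creation link; every name here is a placeholder.
base_url="https://coder.example.com/templates/python-dev/workspace"
echo "${base_url}?param.region=us-east-1"
# → https://coder.example.com/templates/python-dev/workspace?param.region=us-east-1
```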
# Workspace Process Logging
The workspace process logging feature allows you to log all system-level
processes executing in the workspace.
> **Note:** This feature is only available for Linux workspaces running in
> Kubernetes. There are additional requirements outlined further in this
> document.
Workspace process logging adds a sidecar container to workspace pods that will
log all processes started in the workspace container (e.g., commands executed in
the terminal or processes created in the background by other processes).
Processes launched inside containers or nested containers within the workspace
are also logged. You can view the output from the sidecar or send it to a
monitoring stack, such as CloudWatch, for further analysis or long-term storage.
Note that these logs are never recorded or captured by the Coder organization.
> This is a [Premium or Enterprise](https://coder.com/pricing) feature. To
> learn more about Coder Enterprise, please
> [contact sales](https://coder.com/contact).
## How this works
Coder uses [eBPF](https://ebpf.io/) (which we chose for its minimal performance
impact) to perform in-kernel logging and filtering of all exec system calls
originating from the workspace container.
The core of this feature is also open source and can be found in the
[exectrace](https://github.com/coder/exectrace) GitHub repo. The enterprise
component (in the `enterprise/` directory of the repo) is responsible for
starting the eBPF program with the correct filtering options for the specific
workspace.
## Requirements
The host machine must be running a Linux kernel >= 5.8 with the kernel config
`CONFIG_DEBUG_INFO_BTF=y` enabled.
To check your kernel version, run:
```shell
uname -r
```
To validate the required kernel config is enabled, run either of the following
commands on your nodes directly (_not_ from a workspace terminal):
```shell
cat /proc/config.gz | gunzip | grep CONFIG_DEBUG_INFO_BTF
```
```shell
cat "/boot/config-$(uname -r)" | grep CONFIG_DEBUG_INFO_BTF
```
If these requirements are not met, workspaces will fail to start for security
reasons.
Your template must be a Kubernetes template. Workspace process logging is not
compatible with the `sysbox-runc` runtime due to technical limitations, but it
is compatible with our `envbox` template family.
## Example templates
We provide working example templates for Kubernetes, and Kubernetes with
`envbox` (for [Docker support in workspaces](./docker-in-workspaces.md)). You
can view these templates in the
[exectrace repo](https://github.com/coder/exectrace/tree/main/enterprise/templates).
## Configuring custom templates to use workspace process logging
If you have an existing Kubernetes or Kubernetes with `envbox` template that you
would like to add workspace process logging to, follow these steps:
1. Ensure the image used in your template has `curl` installed.
1. Add the following section to your template's `main.tf` file:
<!--
If you are updating this section, please also update the example templates
in the exectrace repo.
-->
```hcl
locals {
# This is the init script for the main workspace container that runs before the
# agent starts to configure workspace process logging.
exectrace_init_script = <<EOT
set -eu
pidns_inum=$(readlink /proc/self/ns/pid | sed 's/[^0-9]//g')
if [ -z "$pidns_inum" ]; then
echo "Could not determine process ID namespace inum"
exit 1
fi
# Before we start the script, does curl exist?
if ! command -v curl >/dev/null 2>&1; then
echo "curl is required to download the Coder binary"
echo "Please install curl to your image and try again"
# 127 is command not found.
exit 127
fi
echo "Sending process ID namespace inum to exectrace sidecar"
rc=0
max_retry=5
counter=0
until [ $counter -ge $max_retry ]; do
set +e
curl \
--fail \
--silent \
--connect-timeout 5 \
-X POST \
-H "Content-Type: text/plain" \
--data "$pidns_inum" \
http://127.0.0.1:56123
rc=$?
set -e
if [ $rc -eq 0 ]; then
break
fi
counter=$((counter+1))
echo "Curl failed with exit code $${rc}, attempt $${counter}/$${max_retry}; Retrying in 3 seconds..."
sleep 3
done
if [ $rc -ne 0 ]; then
echo "Failed to send process ID namespace inum to exectrace sidecar"
exit $rc
fi
EOT
}
```
1. Update the `command` of your workspace container like the following:
<!--
If you are updating this section, please also update the example templates
in the exectrace repo.
-->
```hcl
resource "kubernetes_pod" "main" {
...
spec {
...
container {
...
// NOTE: this command is changed compared to the upstream kubernetes
// template
command = [
"sh",
"-c",
"${local.exectrace_init_script}\n\n${coder_agent.main.init_script}",
]
...
}
...
}
...
}
```
> **Note:** If you are using the `envbox` template, you will need to update
> the third argument to be
> `"${local.exectrace_init_script}\n\nexec /envbox docker"` instead.
1. Add the following container to your workspace pod spec.
<!--
If you are updating this section, please also update the example templates
in the exectrace repo.
-->
```hcl
resource "kubernetes_pod" "main" {
...
spec {
...
// NOTE: this container is added compared to the upstream kubernetes
// template
container {
name = "exectrace"
image = "ghcr.io/coder/exectrace:latest"
image_pull_policy = "Always"
command = [
"/opt/exectrace",
"--init-address", "127.0.0.1:56123",
"--label", "workspace_id=${data.coder_workspace.me.id}",
"--label", "workspace_name=${data.coder_workspace.me.name}",
"--label", "user_id=${data.coder_workspace_owner.me.id}",
"--label", "username=${data.coder_workspace_owner.me.name}",
"--label", "user_email=${data.coder_workspace_owner.me.email}",
]
security_context {
// exectrace must be started as root so it can attach probes into the
// kernel to record process events with high throughput.
run_as_user = "0"
run_as_group = "0"
// exectrace requires a privileged container so it can control mounts
// and perform privileged syscalls against the host kernel to attach
// probes.
privileged = true
}
}
...
}
...
}
```
> **Note:** `exectrace` requires root privileges and a privileged container
> to attach probes to the kernel. This is a requirement of eBPF.
1. Add the following environment variable to your workspace pod:
<!--
If you are updating this section, please also update the example templates
in the exectrace repo.
-->
```hcl
resource "kubernetes_pod" "main" {
...
spec {
...
env {
name = "CODER_AGENT_SUBSYSTEM"
value = "exectrace"
}
...
}
...
}
```
Once you have made these changes, you can push a new version of your template
and workspace process logging will be enabled for all workspaces once they are
restarted.
## Viewing workspace process logs
To view the process logs for a specific workspace, use `kubectl` to print the
logs:
```bash
kubectl logs pod-name --container exectrace
```
The raw logs will look something like this:
```json
{
"ts": "2022-02-28T20:29:38.038452202Z",
"level": "INFO",
"msg": "exec",
"fields": {
"labels": {
"user_email": "jessie@coder.com",
"user_id": "5e876e9a-121663f01ebd1522060d5270",
"username": "jessie",
"workspace_id": "621d2e52-a6987ef6c56210058ee2593c",
"workspace_name": "main"
},
"cmdline": "uname -a",
"event": {
"filename": "/usr/bin/uname",
"argv": ["uname", "-a"],
"truncated": false,
"pid": 920684,
"uid": 101000,
"gid": 101000,
"comm": "bash"
}
}
}
```
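Each log entry is a single JSON line, so you can post-process the output with standard tools. As a minimal sketch (assuming the field layout in the sample entry above), this extracts the executed command line from one entry using only POSIX `sed`; in practice you would pipe `kubectl logs pod-name --container exectrace` into a JSON-aware tool such as `jq` instead:

```shell
#!/bin/sh
# Hypothetical sketch: pull the "cmdline" field out of a single exectrace
# log line. Field names match the sample entry above.
log='{"ts":"2022-02-28T20:29:38Z","level":"INFO","msg":"exec","fields":{"cmdline":"uname -a"}}'

# Capture everything between "cmdline":" and the next double quote.
cmdline=$(printf '%s' "$log" | sed -n 's/.*"cmdline":"\([^"]*\)".*/\1/p')
echo "$cmdline" # prints: uname -a
```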
### View logs in AWS EKS
If you're using AWS' Elastic Kubernetes Service, you can
[configure your cluster](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-EKS-logs.html)
to send logs to CloudWatch. This allows you to view the logs for a specific user
or workspace.
To view your logs, go to the CloudWatch dashboard (which is available on the
**Log Insights** tab) and run a query similar to the following:
```text
fields @timestamp, log_processed.fields.cmdline
| sort @timestamp asc
| filter kubernetes.container_name="exectrace"
| filter log_processed.fields.labels.username="zac"
| filter log_processed.fields.labels.workspace_name="code"
```
## Usage considerations
- The sidecar attached to each workspace is a privileged container, so you may
need to review your organization's security policies before enabling this
feature. Enabling workspace process logging does _not_ grant extra privileges
to the workspace container itself, however.
- `exectrace` will log processes from nested Docker containers (including deeply
nested containers) correctly, but Coder does not distinguish between processes
started in the workspace and processes started in a child container in the
logs.
- With `envbox` workspaces, this feature will detect and log startup processes
begun in the outer container (including container initialization processes).
- Because this feature logs **all** processes in the workspace, high levels of
usage (e.g., during a `make` run) will result in an abundance of output in the
sidecar container. Depending on how your Kubernetes cluster is configured, you
may incur extra charges from your cloud provider to store the additional logs.
# Provider Authentication
<blockquote class="danger">
<p>
Do not store secrets in templates. Assume every user has cleartext access
to every template.
</p>
</blockquote>
The Coder server's
[provisioner](https://registry.terraform.io/providers/coder/coder/latest/docs/data-sources/provisioner)
process needs to authenticate with other provider APIs to provision workspaces.
There are two approaches to do this:
- Pass credentials to the provisioner as parameters.
- Preferred: Execute the Coder server in an environment that is authenticated
with the provider.
We encourage the latter approach where supported:
- Simplifies the template.
- Keeps provider credentials out of Coder's database, making it a less valuable
target for attackers.
- Compatible with agent-based authentication schemes, which handle credential
rotation or ensure the credentials are not written to disk.
Generally, you can set up an environment to provide credentials to Coder in
these ways:
- A well-known location on disk. For example, `~/.aws/credentials` for AWS on
POSIX systems.
- Environment variables.
It is usually sufficient to authenticate using the CLI or SDK for the provider
before running Coder, but check the Terraform provider's documentation for
details.
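For example, with AWS you might export the standard credential environment variables before starting the server, and the Terraform AWS provider will pick them up automatically. This is a sketch; the values shown are placeholders, not real credentials:

```shell
# A minimal sketch: authenticate the provisioner's environment with AWS via
# the standard environment variables the Terraform AWS provider reads.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

# Start the Coder server in this authenticated environment.
coder server
```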
These platforms have Terraform providers that support authenticated
environments:
- [Google Cloud](https://registry.terraform.io/providers/hashicorp/google/latest/docs)
- [Amazon Web Services](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)
- [Microsoft Azure](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
- [Kubernetes](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs)
Other providers might also support authenticated environments. Check the
[documentation of the Terraform provider](https://registry.terraform.io/browse/providers)
for details.
# Resource Metadata
Expose key workspace information to your users with
[`coder_metadata`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/metadata)
resources in your template code.
You can use `coder_metadata` to show Terraform resource attributes like these:
- Compute resources
- IP addresses
- [Secrets](../../security/secrets.md#displaying-secrets)
- Important file paths
![ui](../../../images/admin/templates/coder-metadata-ui.png)
<blockquote class="info">
Coder automatically generates the <code>type</code> metadata.
</blockquote>
You can also present automatically updating, dynamic values with
[agent metadata](./agent-metadata.md).
## Example
Expose the disk size, deployment name, and persistent directory in a Kubernetes
template with:
```tf
resource "kubernetes_persistent_volume_claim" "root" {
...
}
resource "kubernetes_deployment" "coder" {
# My deployment is ephemeral
count = data.coder_workspace.me.start_count
...
}
resource "coder_metadata" "pvc" {
resource_id = kubernetes_persistent_volume_claim.root.id
item {
key = "size"
value = kubernetes_persistent_volume_claim.root.spec[0].resources[0].requests.storage
}
item {
key = "dir"
value = "/home/coder"
}
}
resource "coder_metadata" "deployment" {
count = data.coder_workspace.me.start_count
resource_id = kubernetes_deployment.coder[0].id
item {
key = "name"
value = kubernetes_deployment.coder[0].metadata[0].name
}
}
```
## Hiding resources in the dashboard
Some resources don't need to be exposed in the dashboard's UI. This helps keep
the workspace view clean for developers. To hide a resource, use the `hide`
attribute:
```tf
resource "coder_metadata" "hide_serviceaccount" {
count = data.coder_workspace.me.start_count
resource_id = kubernetes_service_account.user_data.id
hide = true
item {
key = "name"
value = kubernetes_deployment.coder[0].metadata[0].name
}
}
```
## Using a custom resource icon
To use custom icons for your resource metadata, use the `icon` attribute. It
must be a valid path or URL.
```tf
resource "coder_metadata" "resource_with_icon" {
count = data.coder_workspace.me.start_count
resource_id = kubernetes_service_account.user_data.id
icon = "/icon/database.svg"
item {
key = "name"
value = kubernetes_deployment.coder[0].metadata[0].name
}
}
```
To make it easier to customize your resources, Coder provides some built-in
icons:
- Folder `/icon/folder.svg`
- Memory `/icon/memory.svg`
- Image `/icon/image.svg`
- Widgets `/icon/widgets.svg`
- Database `/icon/database.svg`
Coder also provides icons for common IDEs. See [icons](./icons.md) for more
information on using the built-in icons.
## Up next
- [Secrets](../../security/secrets.md)
- [Agent metadata](./agent-metadata.md)
# UI Resource Ordering
In Coder templates, managing the order of UI elements is crucial for a seamless
user experience. This page outlines how resources can be aligned using the
`order` Terraform property or inherit the natural order from the file.
A resource with a lower `order` value is presented before one with a greater
value. A missing `order` property defaults to 0. If two resources have the same
`order` value, they are sorted by the `name` (or `key`) property.
## Using "order" property
### Coder parameters
The `order` property of `coder_parameter` resource allows specifying the order
of parameters in UI forms. In the below example, `project_id` will appear
_before_ `account_id`:
```tf
data "coder_parameter" "project_id" {
name = "project_id"
display_name = "Project ID"
description = "Specify cloud provider project ID."
order = 2
}
data "coder_parameter" "account_id" {
name = "account_id"
display_name = "Account ID"
description = "Specify cloud provider account ID."
order = 1
}
```
### Agents
Agent resources within the UI left pane are sorted based on the `order`
property, followed by `name`, ensuring a consistent and intuitive arrangement.
```tf
resource "coder_agent" "primary" {
...
order = 1
}
resource "coder_agent" "secondary" {
...
order = 2
}
```
The agent with the lowest order is presented at the top in the workspace view.
### Agent metadata
The `coder_agent` exposes metadata to present operational metrics in the UI.
Metrics defined with Terraform `metadata` blocks can be ordered using the
`order` property; otherwise, they are sorted by `key`.
```tf
resource "coder_agent" "main" {
...
metadata {
display_name = "CPU Usage"
key = "cpu_usage"
script = "coder stat cpu"
interval = 10
timeout = 1
order = 1
}
metadata {
display_name = "CPU Usage (Host)"
key = "cpu_usage_host"
script = "coder stat cpu --host"
interval = 10
timeout = 1
order = 2
}
metadata {
display_name = "RAM Usage"
key = "ram_usage"
script = "coder stat mem"
interval = 10
timeout = 1
order = 1
}
metadata {
display_name = "RAM Usage (Host)"
key = "ram_usage_host"
script = "coder stat mem --host"
interval = 10
timeout = 1
order = 2
}
}
```
### Applications
Similar to Coder agents, `coder_app` resources incorporate the `order`
property to organize button apps in the app bar within a `coder_agent` in the
workspace view.
Only template-defined applications can be ordered; the _VS Code_ and _Terminal_
buttons are static.
```tf
resource "coder_app" "code-server" {
agent_id = coder_agent.main.id
slug = "code-server"
display_name = "code-server"
...
order = 2
}
resource "coder_app" "filebrowser" {
agent_id = coder_agent.main.id
display_name = "File Browser"
slug = "filebrowser"
...
order = 1
}
```
## Inherit order from file
### Coder parameter options
The options for Coder parameters maintain the same order as in the file
structure. This simplifies management and ensures consistency between
configuration files and UI presentation.
```tf
data "coder_parameter" "database_region" {
name = "database_region"
display_name = "Database Region"
icon = "/icon/database.svg"
description = "These are options."
mutable = true
default = "us-east1-a"
// The order of options is stable and inherited from .tf file.
option {
name = "US Central"
description = "Select for central!"
value = "us-central1-a"
}
option {
name = "US East"
description = "Select for east!"
value = "us-east1-a"
}
...
}
```
### Coder metadata items
In cases where multiple item properties exist, the order is inherited from the
file, facilitating seamless integration between a Coder template and UI
presentation.
```tf
resource "coder_metadata" "attached_volumes" {
resource_id = docker_image.main.id
// Items will be presented in the UI in the following order.
item {
key = "disk-a"
value = "60 GiB"
}
item {
key = "disk-b"
value = "128 GiB"
}
}
```
# Resource persistence
By default, all Coder resources are persistent, but production templates
**must** use the practices laid out in this document to prevent accidental
deletion.
Coder templates have full control over workspace ephemerality. In a completely
ephemeral workspace, there are zero resources in the Off state. In a completely
persistent workspace, there is no difference between the Off and On states.
The needs of most workspaces fall somewhere in the middle, persisting user data
like filesystem volumes, but deleting expensive, reproducible resources such as
compute instances.
## Disabling persistence
The Terraform
[`coder_workspace` data source](https://registry.terraform.io/providers/coder/coder/latest/docs/data-sources/workspace)
exposes the `start_count = [0 | 1]` attribute. To make a resource ephemeral, you
can assign the `start_count` attribute to the resource's
[`count`](https://developer.hashicorp.com/terraform/language/meta-arguments/count)
meta-argument.
In this example, Coder will provision or tear down the `docker_container`
resource:
```tf
data "coder_workspace" "me" {
}
resource "docker_container" "workspace" {
# When `start_count` is 0, `count` is 0, so no `docker_container` is created.
count = data.coder_workspace.me.start_count # 0 (stopped), 1 (started)
# ... other config
}
```
## ⚠️ Persistence pitfalls
Take this example resource:
```tf
data "coder_workspace" "me" {
}
resource "docker_volume" "home_volume" {
name = "coder-${data.coder_workspace.me.owner}-home"
}
```
Because we depend on `coder_workspace.me.owner`, if the owner changes their
username, Terraform will recreate the volume (wiping its data!) the next time
that Coder starts the workspace.
To prevent this, use immutable IDs:
- `coder_workspace.me.owner_id`
- `coder_workspace.me.id`
```tf
data "coder_workspace" "me" {
}
resource "docker_volume" "home_volume" {
# This volume will survive until the Workspace is deleted or the template
# admin changes this resource block.
name = "coder-${data.coder_workspace.me.id}-home"
}
```
## 🛡 Bulletproofing
Even if your persistent resource depends exclusively on immutable IDs, a change
to the `name` format or other attributes would cause Terraform to rebuild the
resource.
You can prevent Terraform from recreating a resource under any circumstance by
setting the
[`ignore_changes = all` directive in the `lifecycle` block](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#ignore_changes).
```tf
data "coder_workspace" "me" {
}
resource "docker_volume" "home_volume" {
# This resource will survive until either the entire block is deleted
# or the workspace is.
name = "coder-${data.coder_workspace.me.id}-home"
lifecycle {
ignore_changes = all
}
}
```
# Terraform template-wide variables
In Coder, Terraform templates offer extensive flexibility through template-wide
variables. These variables, managed by template authors, facilitate the
construction of customizable templates. Unlike parameters, which are primarily
for workspace customization, template variables remain under the control of the
template author, ensuring workspace users cannot modify them.
```tf
variable "CLOUD_API_KEY" {
type = string
description = "API key for the service"
default = "1234567890"
sensitive = true
}
```
Given that variables are a
[fundamental concept in Terraform](https://developer.hashicorp.com/terraform/language/values/variables),
Coder endeavors to fully support them. Native support includes `string`,
`number`, and `bool` formats. However, other types such as `list(string)` or
`map(any)` will default to being treated as strings.
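One way to work with complex values under this constraint (a sketch, assuming you control both the variable and the template code) is to pass them as JSON-encoded strings and decode them inside the template:

```tf
# Hypothetical sketch: a complex value is passed as a JSON-encoded string
# and decoded with Terraform's built-in jsondecode().
variable "zone_types" {
  type        = string
  description = "JSON-encoded map of zone IDs to display names"
  default     = "{\"us-east-1\":\"US East\"}"
}

locals {
  # local.zones is a usable map inside the template.
  zones = jsondecode(var.zone_types)
}
```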
## Default value
Upon adding a template variable, it's mandatory to provide a value during the
first push. At this stage, the template administrator faces two choices:
1. _No `default` property_: opt not to define a default property. Instead,
utilize the `--var name=value` command-line argument during the push to
supply the variable's value.
2. _Define `default` property_: set a default property for the template
variable. If the administrator doesn't input a value via CLI, Coder
automatically uses this default during the push.
After the initial push, variables are stored in the database, associated
with the specific template version. They can be conveniently managed via
_Template Settings_ without requiring an extra push.
### Resolved values vs. default values
It's crucial to note that Coder templates operate based on resolved values
during a push, rather than default values. This ensures that default values do
not inadvertently override the configured variable settings during the push
process.
This approach caters to users who prefer to avoid accidental overrides of their
variable settings with default values during pushes, thereby enhancing control
and predictability.
If you encounter a situation where you need to override template settings for
variables, you can employ a straightforward solution:
1. Create a `terraform.tfvars` file in the template directory:
```tf
coder_image = "newimage:tag"
```
2. Push the new template revision using Coder CLI:
```shell
coder templates push my-template -y # no need to use --var
```
This file serves as a mechanism to override the template settings for variables.
It can be stored in the repository for easy access and reference. Coder CLI
automatically detects it and loads variable values.
## Input options
When working with Terraform configurations in Coder, you have several options
for providing values to variables using the Coder CLI:
1. _Manual input in CLI_: You can manually input values for Terraform variables
directly in the CLI during the deployment process.
2. _Command-line argument_: Utilize the `--var name=value` command-line argument
to specify variable values inline as key-value pairs.
3. _Variables file selection_: Alternatively, you can use a variables file
selected via the `--variables-file values.yml` command-line argument. This
approach is particularly useful when dealing with multiple variables or to
avoid manual input of numerous values. Variables files can be versioned for
better traceability and management, and it enhances reproducibility.
Here's an example of a YAML-formatted variables file, `values.yml`:
```yaml
region: us-east-1
bucket_name: magic
zone_types: '{"us-east-1":"US East", "eu-west-1": "EU West"}'
cpu: 1
```
In this sample file:
- `region`, `bucket_name`, `zone_types`, and `cpu` are Terraform variable names.
- Corresponding values are provided for each variable.
- The `zone_types` variable demonstrates how to provide a JSON-formatted string
as a value in YAML.
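To push a template version with this file, pass it to the CLI (a usage sketch; `my-template` is a placeholder name):

```shell
coder templates push my-template --variables-file values.yml
```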
## Terraform .tfvars files
In Terraform, `.tfvars` files provide a convenient means to define variable
values for a project in a reusable manner. These files, ending with either
`.tfvars` or `.tfvars.json`, streamline the process of setting numerous
variables.
By utilizing `.tfvars` files, you can efficiently manage and organize variable
values for your Terraform projects. This approach offers several advantages:
- Clarity and consistency: Centralize variable definitions in dedicated files
  instead of entering values on each template push.
- Ease of maintenance: Modify variable values in a single location under version
control, simplifying maintenance and updates.
Coder automatically loads variable definition files following a specific order,
providing flexibility and control over variable configuration. The loading
sequence is as follows:
1. `terraform.tfvars`: This file contains variable values and is loaded first.
2. `terraform.tfvars.json`: If present, this JSON-formatted file is loaded after
`terraform.tfvars`.
3. `*.auto.tfvars`: Files matching this pattern are loaded next, ordered
alphabetically.
4. `*.auto.tfvars.json`: JSON-formatted files matching this pattern are loaded
last.
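Put together, a template directory using these files might look like this (an illustrative layout; file names other than `terraform.tfvars` and `terraform.tfvars.json` are arbitrary):

```text
my-template/
├── main.tf
├── terraform.tfvars        # 1. loaded first
├── terraform.tfvars.json   # 2. loaded second
├── region.auto.tfvars      # 3. *.auto.tfvars, alphabetical order
└── zone.auto.tfvars.json   # 4. *.auto.tfvars.json, loaded last
```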
# Web IDEs
In Coder, web IDEs are defined as
[coder_app](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/app)
resources in the template. With our generic model, any web application can be
used as a Coder application. For example:
```tf
# Add button to open Portainer in the workspace dashboard
# Note: Portainer must be already running in the workspace
resource "coder_app" "portainer" {
agent_id = coder_agent.main.id
slug = "portainer"
display_name = "Portainer"
icon = "https://simpleicons.org/icons/portainer.svg"
url = "https://localhost:9443/api/status"
healthcheck {
url = "https://localhost:9443/api/status"
interval = 6
threshold = 10
}
}
```
## code-server
[code-server](https://github.com/coder/code-server) is our supported method of running
VS Code in the web browser. A simple way to install code-server in Linux/macOS
workspaces is via the Coder agent in your template:
```console
# edit your template
cd your-template/
vim main.tf
```
```tf
resource "coder_agent" "main" {
arch = "amd64"
os = "linux"
startup_script = <<EOF
#!/bin/sh
# install code-server
# add '-s -- --version x.x.x' to install a specific code-server version
curl -fsSL https://code-server.dev/install.sh | sh -s -- --method=standalone --prefix=/tmp/code-server
# start code-server on a specific port
# authn is off since the user already authn-ed into the coder deployment
# & is used to run the process in the background
/tmp/code-server/bin/code-server --auth none --port 13337 &
EOF
}
```
For advanced use, we recommend installing code-server in your VM snapshot or
container image. Here's a Dockerfile which leverages some special
[code-server features](https://coder.com/docs/code-server/):
```Dockerfile
FROM codercom/enterprise-base:ubuntu
# install the latest version
USER root
RUN curl -fsSL https://code-server.dev/install.sh | sh
USER coder
# pre-install VS Code extensions
RUN code-server --install-extension eamodio.gitlens
# directly start code-server with the agent's startup_script (see above),
# or use a process manager like supervisord
```
You'll also need to specify a `coder_app` resource related to the agent. This is
how code-server is displayed on the workspace page.
```tf
resource "coder_app" "code-server" {
agent_id = coder_agent.main.id
slug = "code-server"
display_name = "code-server"
url = "http://localhost:13337/?folder=/home/coder"
icon = "/icon/code.svg"
subdomain = false
healthcheck {
url = "http://localhost:13337/healthz"
interval = 2
threshold = 10
}
}
```
![code-server in a workspace](../../../images/code-server-ide.png)
## VS Code Web
VS Code supports launching a local web client using the `code serve-web`
command. To add VS Code Web as a web IDE, you have two options.
1. Install using the
[vscode-web module](https://registry.coder.com/modules/vscode-web) from the
Coder registry.
```tf
module "vscode-web" {
source = "registry.coder.com/modules/vscode-web/coder"
version = "1.0.14"
agent_id = coder_agent.main.id
accept_license = true
}
```
2. Install and start in your `startup_script` and create a corresponding
`coder_app`:
```tf
resource "coder_agent" "main" {
arch = "amd64"
os = "linux"
startup_script = <<EOF
#!/bin/sh
# install VS Code
curl -Lk 'https://code.visualstudio.com/sha/download?build=stable&os=cli-alpine-x64' --output vscode_cli.tar.gz
mkdir -p /tmp/vscode-cli
tar -xf vscode_cli.tar.gz -C /tmp/vscode-cli
rm vscode_cli.tar.gz
# start the web server on a specific port
/tmp/vscode-cli/code serve-web --port 13338 --without-connection-token --accept-server-license-terms >/tmp/vscode-web.log 2>&1 &
EOF
}
```
> `code serve-web` was introduced in version 1.82.0 (August 2023).
You also need to add a `coder_app` resource for this.
```tf
# VS Code Web
resource "coder_app" "vscode-web" {
agent_id = coder_agent.coder.id
slug = "vscode-web"
display_name = "VS Code Web"
icon = "/icon/code.svg"
url = "http://localhost:13338?folder=/home/coder"
subdomain = true # VS Code Web currently does not work with a subpath; see https://github.com/microsoft/vscode/issues/192947
share = "owner"
}
```
## Jupyter Notebook
To use Jupyter Notebook in your workspace, you can install it by using the
[Jupyter Notebook module](https://registry.coder.com/modules/jupyter-notebook)
from the Coder registry:
```tf
module "jupyter-notebook" {
source = "registry.coder.com/modules/jupyter-notebook/coder"
version = "1.0.19"
agent_id = coder_agent.example.id
}
```
![Jupyter Notebook in Coder](../../../images/jupyter-notebook.png)
## JupyterLab
Configure your agent and `coder_app` like so to use Jupyter. Notice the
`subdomain=true` configuration:
```tf
data "coder_workspace" "me" {}
resource "coder_agent" "coder" {
os = "linux"
arch = "amd64"
dir = "/home/coder"
startup_script = <<-EOF
pip3 install jupyterlab
$HOME/.local/bin/jupyter lab --ServerApp.token='' --ip='*'
EOF
}
resource "coder_app" "jupyter" {
agent_id = coder_agent.coder.id
slug = "jupyter"
display_name = "JupyterLab"
url = "http://localhost:8888"
icon = "/icon/jupyter.svg"
share = "owner"
subdomain = true
healthcheck {
url = "http://localhost:8888/healthz"
interval = 5
threshold = 10
}
}
```
Alternatively, you can use the JupyterLab module from the Coder registry:
```tf
module "jupyter" {
source = "registry.coder.com/modules/jupyter-lab/coder"
version = "1.0.0"
agent_id = coder_agent.main.id
}
```
If you cannot enable a
[wildcard subdomain](../../../admin/setup/index.md#wildcard-access-url), you can
configure the template to run Jupyter on a path. There is, however, a
[security risk](../../../reference/cli/server.md#--dangerous-allow-path-app-sharing)
in running an app on a path, and the template code becomes more complicated
because it must use Coder value substitution to recreate the path structure.
![JupyterLab in Coder](../../../images/jupyter.png)
## RStudio
Configure your agent and `coder_app` like so to use RStudio. Notice the
`subdomain=true` configuration:
```tf
resource "coder_agent" "coder" {
os = "linux"
arch = "amd64"
dir = "/home/coder"
startup_script = <<EOT
#!/bin/bash
# start rstudio
/usr/lib/rstudio-server/bin/rserver --server-daemonize=1 --auth-none=1 &
EOT
}
resource "coder_app" "rstudio" {
agent_id = coder_agent.coder.id
slug = "rstudio"
display_name = "R Studio"
icon = "https://upload.wikimedia.org/wikipedia/commons/d/d0/RStudio_logo_flat.svg"
url = "http://localhost:8787"
subdomain = true
share = "owner"
healthcheck {
url = "http://localhost:8787/healthz"
interval = 3
threshold = 10
}
}
```
If you cannot enable a
[wildcard subdomain](https://coder.com/docs/admin/configure#wildcard-access-url),
you can configure the template to run RStudio on a path using an NGINX reverse
proxy in the template. There is, however, a
[security risk](https://coder.com/docs/reference/cli/server#--dangerous-allow-path-app-sharing)
in running an app on a path, and the template code becomes more complicated
because it must use Coder value substitution to recreate the path structure.
[This](https://github.com/sempie/coder-templates/tree/main/rstudio) is a
community template example.
![RStudio in Coder](../../../images/rstudio-port-forward.png)
## Airflow
Configure your agent and `coder_app` like so to use Airflow. Notice the
`subdomain=true` configuration:
```tf
resource "coder_agent" "coder" {
os = "linux"
arch = "amd64"
dir = "/home/coder"
startup_script = <<EOT
#!/bin/bash
# install and start airflow
pip3 install apache-airflow
/home/coder/.local/bin/airflow standalone &
EOT
}
resource "coder_app" "airflow" {
agent_id = coder_agent.coder.id
slug = "airflow"
display_name = "Airflow"
icon = "/icon/airflow.svg"
url = "http://localhost:8080"
subdomain = true
share = "owner"
healthcheck {
url = "http://localhost:8080/healthz"
interval = 10
threshold = 60
}
}
```
Or use the [Airflow module](https://registry.coder.com/modules/apache-airflow)
from the Coder registry:
```tf
module "airflow" {
source = "registry.coder.com/modules/airflow/coder"
version = "1.0.13"
agent_id = coder_agent.main.id
}
```
![Airflow in Coder](../../../images/airflow-port-forward.png)
## File Browser
To access the contents of a workspace directory in a browser, you can use File
Browser, a lightweight file manager that lets you view and manipulate files in
a web browser. The following example shows and manipulates the contents of the
`/home/coder` directory:
```tf
resource "coder_agent" "coder" {
os = "linux"
arch = "amd64"
dir = "/home/coder"
startup_script = <<EOT
#!/bin/bash
curl -fsSL https://raw.githubusercontent.com/filebrowser/get/master/get.sh | bash
filebrowser --noauth --root /home/coder --port 13339 >/tmp/filebrowser.log 2>&1 &
EOT
}
resource "coder_app" "filebrowser" {
agent_id = coder_agent.coder.id
display_name = "File Browser"
slug = "filebrowser"
url = "http://localhost:13339"
icon = "https://raw.githubusercontent.com/matifali/logos/main/database.svg"
subdomain = true
share = "owner"
healthcheck {
url = "http://localhost:13339/healthz"
interval = 3
threshold = 10
}
}
```
Alternatively, you can use the
[`filebrowser`](https://registry.coder.com/modules/filebrowser) module from the
Coder registry:
```tf
module "filebrowser" {
source = "registry.coder.com/modules/filebrowser/coder"
version = "1.0.8"
agent_id = coder_agent.main.id
}
```
![File Browser](../../../images/file-browser.png)
## SSH Fallback
If you prefer to run web IDEs on localhost, you can port forward using
[SSH](../../../user-guides/workspace-access/index.md#ssh) or the Coder CLI
`port-forward` sub-command. Some web IDEs may not support URL base path
adjustment, so port forwarding may be the only approach.
# Workspace Tags
Template administrators can leverage static template tags to limit workspace
provisioning to designated provisioner groups that have locally deployed
credentials for creating workspace resources. While this method ensures
controlled access, it offers limited flexibility and does not permit users to
select the nodes for their workspace creation.
By using `coder_workspace_tags` and `coder_parameter`s, template administrators
can enable dynamic tag selection and modify static template tags.
## Dynamic tag selection
Here is a sample `coder_workspace_tags` data resource with a few workspace tags
specified:
```tf
data "coder_workspace_tags" "custom_workspace_tags" {
tags = {
"zone" = "developers"
"runtime" = data.coder_parameter.runtime_selector.value
"project_id" = "PROJECT_${data.coder_parameter.project_name.value}"
"cache" = data.coder_parameter.feature_cache_enabled.value == "true" ? "with-cache" : "no-cache"
}
}
```
**Legend**
- `zone` - static tag value set to `developers`
- `runtime` - supported by the string-type `coder_parameter` to select
provisioner runtime, `runtime_selector`
- `project_id` - a formatted string supported by the string-type
`coder_parameter`, `project_name`
- `cache` - an HCL condition involving boolean-type `coder_parameter`,
`feature_cache_enabled`
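
For reference, the `coder_parameter`s behind these tags might be defined as
follows (a sketch; the option values and defaults are assumptions, not part of
the original example):

```tf
data "coder_parameter" "runtime_selector" {
  name    = "runtime_selector"
  type    = "string"
  default = "kubernetes"

  option {
    name  = "Kubernetes"
    value = "kubernetes"
  }
  option {
    name  = "Docker"
    value = "docker"
  }
}

data "coder_parameter" "project_name" {
  name    = "project_name"
  type    = "string"
  default = "demo"
}

data "coder_parameter" "feature_cache_enabled" {
  name    = "feature_cache_enabled"
  type    = "bool"
  default = "false"
}
```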
Review the
[full template example](https://github.com/coder/coder/tree/main/examples/workspace-tags)
using `coder_workspace_tags` and `coder_parameter`s.
## Constraints
### Tagged provisioners
It is possible to choose tag combinations that no provisioner can handle. This
will cause the provisioner job to get stuck in the queue until a provisioner is
added that can handle its combination of tags.
Before releasing the template version with configurable workspace tags, ensure
that every tag set is associated with at least one healthy provisioner.
### Parameter types
Provisioners require job tags to be defined in plain string format. When a
workspace tag refers to a `coder_parameter` without involving the string
formatter, for example
`"runtime" = data.coder_parameter.runtime_selector.value`, the Coder
provisioner server can transform only the following parameter types to strings:
_string_, _number_, and _bool_.
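
As a sketch (the parameter name is an assumption), number and bool parameters
can back tags directly because they convert cleanly to strings:

```tf
data "coder_parameter" "cpu_count" {
  name    = "cpu_count"
  type    = "number"
  default = 4
}

data "coder_workspace_tags" "by_parameter_type" {
  tags = {
    # number and bool parameter values are converted to strings automatically
    "cpu" = data.coder_parameter.cpu_count.value
    # complex types (e.g. list(string)) are not supported as tag values
  }
}
```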
### Mutability
A mutable `coder_parameter` can be dangerous for a workspace tag as it allows
the workspace owner to change a provisioner group (due to different tags). In
most cases, `coder_parameter`s backing `coder_workspace_tags` should be marked
as immutable and set only once, during workspace creation.
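
A sketch of an immutable parameter backing a workspace tag (the name mirrors
the earlier example):

```tf
data "coder_parameter" "runtime_selector" {
  name    = "runtime_selector"
  type    = "string"
  default = "kubernetes"
  # Immutable: set once at workspace creation, so the workspace cannot
  # later hop to a different provisioner group by changing its tags.
  mutable = false
}
```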
### HCL syntax
When importing the template version with `coder_workspace_tags`, the Coder
provisioner server extracts raw partial queries for each workspace tag and
stores them in the database. During workspace build time, the Coder server uses
the [Hashicorp HCL library](https://github.com/hashicorp/hcl) to evaluate these
raw queries on-the-fly without processing the entire Terraform template. This
evaluation is simpler but also limited in terms of available functions,
variables, and references to other resources.
**Supported syntax**
- Static string: `foobar_tag = "foobaz"`
- Formatted string: `foobar_tag = "foobaz ${data.coder_parameter.foobaz.value}"`
- Reference to `coder_parameter`:
`foobar_tag = data.coder_parameter.foobar.value`
- Boolean logic: `production_tag = !data.coder_parameter.staging_env.value`
- Condition:
`cache = data.coder_parameter.feature_cache_enabled.value == "true" ? "with-cache" : "no-cache"`
# Template
Templates are written in
[Terraform](https://developer.hashicorp.com/terraform/intro) and define the
underlying infrastructure that all Coder workspaces run on.
![Starter templates](../../images/admin/templates/starter-templates.png)
<small>The "Starter Templates" page within the Coder dashboard.</small>
## Learn the concepts
While templates are written in standard Terraform, it's important to learn the
Coder-specific concepts behind templates. The best way to learn the concepts is
by
[creating a basic template from scratch](../../tutorials/template-from-scratch.md).
If you are unfamiliar with Terraform, see
[Hashicorp's Tutorials](https://developer.hashicorp.com/terraform/tutorials) for
common cloud providers.
## Starter templates
After learning the basics, use starter templates to import a template with
sensible defaults for popular platforms (AWS, Kubernetes, Docker, and so on).
Docs:
[Create a template from a starter template](./creating-templates.md#from-a-starter-template).
## Extending templates
It's often necessary to extend the template to make it generally useful to end
users. Common modifications are:
- Your image(s) (e.g. a Docker image with languages and tools installed). Docs:
[Image management](./managing-templates/image-management.md).
- Additional parameters (e.g. disk size, instance type, or region). Docs:
[Template parameters](./extending-templates/parameters.md).
- Additional IDEs (e.g. JetBrains) or features (e.g. dotfiles, RDP). Docs:
[Adding IDEs and features](./extending-templates/index.md).
Learn more about the various ways you can
[extend your templates](./extending-templates/index.md).
## Best Practices
We recommend starting with a universal template that can be used for basic
tasks. As your Coder deployment grows, you can create more templates to meet the
needs of different teams.
- [Image management](./managing-templates/image-management.md): Learn how to
create and publish images for use within Coder workspaces & templates.
- [Dev Container support](./managing-templates/devcontainers.md): Enable dev
containers to allow teams to bring their own tools into Coder workspaces.
- [Template hardening](./extending-templates/resource-persistence.md#-bulletproofing):
Configure your template to prevent certain resources from being destroyed
(e.g. user disks).
- [Manage templates with CI/CD pipelines](./managing-templates/change-management.md):
Learn how to source control your templates and use GitOps to ensure template
changes are reviewed and tested.
- [Permissions and Policies](./template-permissions.md): Control who may access
and modify your template.
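
As a concrete example of template hardening, a template can protect a user's
persistent volume from accidental recreation with a `lifecycle` block (a
sketch using the Docker provider; resource and attribute names are
illustrative):

```tf
resource "docker_volume" "home_volume" {
  name = "coder-${data.coder_workspace.me.id}-home"

  # Ignore all attribute drift so workspace rebuilds never destroy and
  # recreate (and thereby wipe) the user's home volume.
  lifecycle {
    ignore_changes = all
  }
}
```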
<children></children>
# Template Change Management
We recommend source-controlling your templates as you would any other code, and
automating the creation of new versions in CI/CD pipelines.
These pipelines will require tokens for your deployment. To cap token lifetime
on creation,
[configure Coder server to set a shorter max token lifetime](../../../reference/cli/server.md#--max-token-lifetime).
## coderd Terraform Provider
The
[coderd Terraform provider](https://registry.terraform.io/providers/coder/coderd/latest)
can be used to push new template versions, either manually, or in CI/CD
pipelines. To run the provider in a CI/CD pipeline, and to prevent drift, you'll
need to store the Terraform state
[remotely](https://developer.hashicorp.com/terraform/language/backend).
```tf
terraform {
required_providers {
coderd = {
source = "coder/coderd"
}
}
backend "gcs" {
bucket = "example-bucket"
prefix = "terraform/state"
}
}
provider "coderd" {
// Can be populated from environment variables
url = "https://coder.example.com"
token = "****"
}
// Get the commit SHA of the configuration's git repository
variable "TFC_CONFIGURATION_VERSION_GIT_COMMIT_SHA" {
type = string
}
resource "coderd_template" "kubernetes" {
name = "kubernetes"
description = "Develop in Kubernetes!"
versions = [{
directory = ".coder/templates/kubernetes"
active = true
# Version name is optional
name = var.TFC_CONFIGURATION_VERSION_GIT_COMMIT_SHA
tf_vars = [{
name = "namespace"
value = "default4"
}]
}]
/* ... Additional template configuration */
}
```
For an example, see how we push our development image and template
[with GitHub actions](https://github.com/coder/coder/blob/main/.github/workflows/dogfood.yaml).
## Coder CLI
You can also [install Coder](../../../install/cli.md) to automate pushing new
template versions in CI/CD pipelines.
```console
# Install the Coder CLI
curl -L https://coder.com/install.sh | sh
# curl -L https://coder.com/install.sh | sh -s -- --version=0.x
# To create API tokens, use `coder tokens create`.
# If no `--lifetime` flag is passed during creation, the default token lifetime
# will be 30 days.
# These variables are consumed by Coder
export CODER_URL=https://coder.example.com
export CODER_SESSION_TOKEN=*****
# Template details
export CODER_TEMPLATE_NAME=kubernetes
export CODER_TEMPLATE_DIR=.coder/templates/kubernetes
export CODER_TEMPLATE_VERSION=$(git rev-parse --short HEAD)
# Push the new template version to Coder
coder templates push --yes $CODER_TEMPLATE_NAME \
--directory $CODER_TEMPLATE_DIR \
--name=$CODER_TEMPLATE_VERSION # Version name is optional
```
### Next steps
- [Coder CLI Reference](../../../reference/cli/templates.md)
- [Coderd Terraform Provider Reference](https://registry.terraform.io/providers/coder/coderd/latest/docs)
- [Coderd API Reference](../../../reference/index.md)
# Template Dependencies
When creating Coder templates, it is unlikely that you will use only the
built-in providers. Part of Terraform's flexibility stems from its rich plugin
ecosystem, and it makes sense to take advantage of this.
That having been said, here are some recommendations to follow, based on the
[Terraform documentation](https://developer.hashicorp.com/terraform/tutorials/configuration-language/provider-versioning).
Following these recommendations will:
- **Prevent unexpected changes:** Your templates will use the same versions of
Terraform providers each build. This will prevent issues related to changes in
providers.
- **Improve build performance:** Coder caches provider versions on each build.
If the same provider version can be re-used on subsequent builds, Coder will
simply re-use the cached version if it is available.
- **Improve build reliability:** As some providers are hundreds of megabytes in
size, interruptions in connectivity to the Terraform registry during a
workspace build can result in a failed build. If Coder is able to re-use a
cached provider version, the likelihood of this is greatly reduced.
## Lock your provider and module versions
If you add a Terraform provider to `required_providers` without specifying a
version requirement, Terraform will always fetch the latest version on each
invocation:
```terraform
terraform {
required_providers {
coder = {
source = "coder/coder"
}
frobnicate = {
source = "acme/frobnicate"
}
}
}
```
Any new releases of the `coder` or `frobnicate` providers will be picked up the
next time a workspace is built using this template. This may include breaking
changes.
To prevent this, add a
[version constraint](https://developer.hashicorp.com/terraform/language/expressions/version-constraints)
to each provider in the `required_providers` block:
```terraform
terraform {
required_providers {
coder = {
source = "coder/coder"
version = ">= 0.2, < 0.3"
}
frobnicate = {
source = "acme/frobnicate"
version = "~> 1.0.0"
}
}
}
```
In the above example, the `coder/coder` provider will be limited to all versions
above or equal to `0.2.0` and below `0.3.0`, while the `acme/frobnicate`
provider will be limited to all versions matching `1.0.x`.
The above also applies to Terraform modules. In the below example, the module
`razzledazzle` is locked to version `1.2.3`.
```terraform
module "razzledazzle" {
source = "registry.example.com/modules/razzle/dazzle"
version = "1.2.3"
foo = "bar"
}
```
## Use a Dependency Lock File
Terraform allows creating a
[dependency lock file](https://developer.hashicorp.com/terraform/language/files/dependency-lock)
to track which provider versions were selected previously. This allows you to
ensure that the next workspace build uses the same provider versions as with the
last build.
To create a new Terraform lock file, run the
[`terraform init` command](https://developer.hashicorp.com/terraform/cli/commands/init)
inside a folder containing the Terraform source code for a given template.
This will create a new file named `.terraform.lock.hcl` in the current
directory. When you next run
[`coder templates push`](../../../reference/cli/templates_push.md), the lock
file will be stored alongside the other template source code.
> Note: Terraform best practices also recommend checking your
> `.terraform.lock.hcl` into Git or another VCS.
The next time a workspace is built from that template, Coder will make sure to
use the same versions of those providers as specified in the lock file.
If, at some point in the future, you need to update the providers within the
version constraints of the template, run:
```console
terraform init -upgrade
```
This will check each provider, determine the newest version that satisfies the
version constraints you specified, and update `.terraform.lock.hcl` with those
new versions. The next time you run `coder templates push`, the updated lock
file will be stored and used to determine the provider versions for subsequent
workspace builds.
# Dev Containers
[Development containers](https://containers.dev) are an open source
specification for defining development environments.
[Envbuilder](https://github.com/coder/envbuilder) is an open source project by
Coder that runs dev containers via Coder templates and your underlying
infrastructure. It can run on Docker or Kubernetes.
There are several benefits to adding a devcontainer-compatible template to
Coder:
- Drop-in migration from Codespaces (or any existing repositories that use dev
containers)
- Easier to start projects from Coder. Just create a new workspace then pick a
starter devcontainer.
- Developer teams can "bring their own image." No need for platform teams to
manage complex images, registries, and CI pipelines.
## How it works
A Coder admin adds a devcontainer-compatible template to Coder (envbuilder).
Then developers enter their repository URL as a
[parameter](../extending-templates/parameters.md) when they create their
workspace. [Envbuilder](https://github.com/coder/envbuilder) clones the repo and
builds a container from the `devcontainer.json` specified in the repo.
When using the [Envbuilder Terraform provider](#envbuilder-terraform-provider),
a previously built and cached image can be re-used directly, allowing
instantaneous dev container starts.
Developers can edit the `devcontainer.json` in their workspace and rebuild to
iterate on their development environments.
## Example templates
- [Devcontainers (Docker)](https://github.com/coder/coder/tree/main/examples/templates/devcontainer-docker)
provisions a development container using Docker.
- [Devcontainers (Kubernetes)](https://github.com/coder/coder/tree/main/examples/templates/devcontainer-kubernetes)
provisions a development container on Kubernetes.
- [Google Compute Engine (Devcontainer)](https://github.com/coder/coder/tree/main/examples/templates/gcp-devcontainer)
runs a development container inside a single GCP instance. It also mounts the
Docker socket from the VM inside the container to enable Docker inside the
workspace.
- [AWS EC2 (Devcontainer)](https://github.com/coder/coder/tree/main/examples/templates/aws-devcontainer)
runs a development container inside a single EC2 instance. It also mounts the
Docker socket from the VM inside the container to enable Docker inside the
workspace.
![Devcontainer parameter screen](../../../images/templates/devcontainers.png)
Your template can prompt the user for a repo URL with
[Parameters](../extending-templates/parameters.md).
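
A minimal sketch of such a parameter (the default repository here is only an
example):

```tf
data "coder_parameter" "repo_url" {
  name         = "repo_url"
  display_name = "Repository URL"
  description  = "Git repository containing a devcontainer.json"
  default      = "https://github.com/coder/envbuilder-starter-devcontainer"
  mutable      = true
}
```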
## Authentication
You may need to authenticate to your container registry, such as Artifactory, or
git provider such as GitLab, to use Envbuilder. See the
[Envbuilder documentation](https://github.com/coder/envbuilder/blob/main/docs/container-registry-auth.md)
for more information.
## Caching
To improve build times, dev containers can be cached. There are two main forms
of caching:
1. **Layer Caching** caches individual layers and pushes them to a remote
registry. When building the image, Envbuilder will check the remote registry
for pre-existing layers. These will be fetched and extracted to disk instead
of building the layers from scratch.
2. **Image Caching** caches the _entire image_, skipping the build process
completely (except for post-build lifecycle scripts).
Refer to the
[Envbuilder documentation](https://github.com/coder/envbuilder/blob/main/docs/caching.md)
for more information.
## Envbuilder Terraform Provider
To support resuming from a cached image, use the
[Envbuilder Terraform Provider](https://github.com/coder/terraform-provider-envbuilder)
in your template. The provider will:
1. Clone the remote Git repository,
2. Perform a 'dry-run' build of the dev container in the same manner as
Envbuilder would,
3. Check for the presence of a previously built image in the provided cache
repository,
4. Output the image remote reference in SHA256 form, if found.
The above example templates will use the provider if a remote cache repository
is provided.
If you are building your own Devcontainer template, you can consult the
[provider documentation](https://registry.terraform.io/providers/coder/envbuilder/latest/docs/resources/cached_image).
You may also wish to consult a
[documented example usage of the `envbuilder_cached_image` resource](https://github.com/coder/terraform-provider-envbuilder/blob/main/examples/resources/envbuilder_cached_image/envbuilder_cached_image_resource.tf).
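
As a hedged sketch of how the pieces fit together (attribute and output names
follow the provider documentation, but the cache registry, parameter, and
container wiring are assumptions for illustration):

```tf
variable "cache_repo" {
  description = "Registry used to cache built dev container images"
  type        = string
  default     = "registry.example.com/envbuilder-cache"
}

resource "envbuilder_cached_image" "cached" {
  builder_image = "ghcr.io/coder/envbuilder:latest"
  git_url       = data.coder_parameter.repo_url.value
  cache_repo    = var.cache_repo
}

# Start the workspace container from the cached image when one exists;
# otherwise Envbuilder builds the dev container on first boot.
resource "docker_container" "workspace" {
  count = data.coder_workspace.me.start_count
  image = envbuilder_cached_image.cached.image
  env   = envbuilder_cached_image.cached.env
}
```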
## Other features & known issues
Envbuilder provides two release channels:
- **Stable:** available at
[`ghcr.io/coder/envbuilder`](https://github.com/coder/envbuilder/pkgs/container/envbuilder).
Tags `>=1.0.0` are considered stable.
- **Preview:** available at
[`ghcr.io/coder/envbuilder-preview`](https://github.com/coder/envbuilder/pkgs/container/envbuilder-preview).
This is built from the tip of `main`, and should be considered
**experimental** and prone to **breaking changes**.
Refer to the [Envbuilder GitHub repo](https://github.com/coder/envbuilder/) for
more information and to submit feature requests or bug reports.
# Image Management
While Coder provides example
[base container images](https://github.com/coder/enterprise-images) for
workspaces, it's often best to create custom images that match the needs of your
users. This document serves as a guide to operational maturity with some best
practices around managing workspace images for Coder.
1. Create a minimal base image
2. Create golden image(s) with standard tooling
3. Allow developers to bring their own images and customizations with Dev
Containers
> Note: An image is just one of the many properties defined within the template.
> Thanks to Terraform, templates can pull images from a public image registry
> (e.g. Docker Hub) or an internal one.
## Create a minimal base image
While you may not use this directly in Coder templates, it's useful to have a
minimal base image: a small image that contains only the dependencies necessary
to work in your network and with Coder. Here are some things to consider:
- `curl`, `wget`, or `busybox` is required to download and run
[the agent](https://github.com/coder/coder/blob/main/provisionersdk/scripts/bootstrap_linux.sh)
- `git` is recommended so developers can clone repositories
- If the Coder server is using a certificate from an internal certificate
authority (CA), you'll need to add or mount these into your image
- Other generic utilities that will be required by all users, such as `ssh`,
`docker`, `bash`, `jq`, and/or internal tooling
- Consider creating (and starting the container with) a non-root user
> See Coder's
> [example base image](https://github.com/coder/enterprise-images/tree/main/images/minimal)
> for reference.
## Create general-purpose golden image(s) with standard tooling
It's often practical to have a few golden images that contain standard tooling
for developers. These images should contain a number of languages (e.g. Python,
Java, TypeScript), IDEs (VS Code, JetBrains, PyCharm), and other tools (e.g.
`docker`). Unlike project-specific images (which are also important), general
purpose images are great for:
- **Scripting:** Developers may just want to hop in a Coder workspace to run
basic scripts or queries.
- **Day 1 Onboarding:** New developers can quickly get started with a familiar
environment without having to browse through (or create) an image
- **Basic Projects:** Developers can use these images for simple projects that
don't require any specific tooling outside of the standard libraries. As the
project gets more complex, it's best to move to a project-specific image.
- **"Golden Path" Projects:** If your developer platform offers specific tech
stacks and types of projects, the golden image can be a good starting point
for those projects.
> This is often referred to as a "sandbox" or "kitchen sink" image. Since large
> multi-purpose container images can quickly become difficult to maintain, it's
> important to keep the number of general-purpose images to a minimum (2-3 in
> most cases) with a well-defined scope.
Examples:
- [Universal Dev Containers Image](https://github.com/devcontainers/images/tree/main/src/universal)
## Allow developers to bring their own images and customizations with Dev Containers
While golden images are great for general use cases, developers will often need
specific tooling for their projects. The [Dev Container](https://containers.dev)
specification allows developers to define their project's dependencies within a
`devcontainer.json` in their Git repository.
- [Learn how to integrate Dev Containers with Coder](./devcontainers.md)
# Working with templates
You create and edit Coder templates as [Terraform](../../../start/coder-tour.md)
configuration files (`.tf`) and any supporting files, like a README or
configuration files for other services.
## Who creates templates?
The [Template Admin](../../../admin/users/groups-roles.md#roles) role (and
above) can create templates. End users, like developers, create workspaces from
them. Templates can also be [managed with git](./change-management.md), allowing
any developer to propose changes to a template.
You can give different users and groups access to templates with
[role-based access control](../template-permissions.md).
## Starter templates
We provide starter templates for common cloud providers, like AWS, and
orchestrators, like Kubernetes. From there, you can modify them to use your own
images, VPC, cloud credentials, and so on. Coder supports all Terraform
resources and properties, so fear not if your favorite cloud provider isn't
here!
![Starter templates](../../../images/start/starter-templates.png)
If you prefer to use Coder on the
[command line](../../../reference/cli/index.md), run `coder templates init`.
> Coder starter templates are also available on our
> [GitHub repo](https://github.com/coder/coder/tree/main/examples/templates).
## Community Templates
In addition to Coder's starter templates, you can browse a list of
[community templates](https://github.com/coder/coder/blob/main/examples/templates/community-templates.md)
contributed by our users.
## Editing templates
Our starter templates are meant to be modified for your use cases. You can edit
any template's files directly in the Coder dashboard.
![Editing a template](../../../images/templates/choosing-edit-template.gif)
If you'd prefer to use the CLI, use `coder templates pull`, edit the template
files, then `coder templates push`.
> Even if you are a Terraform expert, we suggest reading our
> [guided tour of a template](../../../tutorials/template-from-scratch.md).
## Updating templates
Coder tracks a template's versions, keeping all developer workspaces up-to-date.
When you publish a new version, developers are notified to get the latest
infrastructure, software, or security patches. Learn more about
[change management](./change-management.md).
![Updating a template](../../../images/templates/update.png)
### Template update policies (enterprise) (premium)
Enterprise template admins may want workspaces to always remain on the latest
version of their parent template. To do so, enable **Template Update Policies**
in the template's general settings. All non-admin users of the template will be
forced to update their workspaces before starting them once the setting is
applied. Workspaces which leverage autostart or start-on-connect will be
automatically updated on the next startup.
![Template update policies](../../../images/templates/update-policies.png)
## Delete templates
You can delete a template using either the Coder CLI or the UI. Only
[template admins and owners](../../users/groups-roles.md#roles) can delete a
template, and the template must not have any running workspaces associated with
it.
In the UI, navigate to the template you want to delete, and select the dropdown
in the right-hand corner of the page to delete the template.
![delete-template](../../../images/delete-template.png)
Using the CLI, log in to Coder and run the following command to delete a
template:
```shell
coder templates delete <template-name>
```
## Next steps
- [Image management](./image-management.md)
- [Devcontainer templates](./devcontainers.md)
- [Change management](./change-management.md)
# Workspace Scheduling
You can configure a template to control how workspaces are started and stopped.
You can also manage the lifecycle of failed or inactive workspaces.
![Schedule screen](../../../images/admin/templates/schedule/template-schedule-settings.png)
## Schedule
Template [admins](../../users/index.md) may define these default values:
- [**Default autostop**](../../../user-guides/workspace-scheduling.md#autostop):
How long a workspace runs without user activity before Coder automatically
stops it.
- [**Autostop requirement**](#autostop-requirement-enterprise-premium): Enforce
mandatory workspace restarts to apply template updates regardless of user
activity.
- **Activity bump**: How much time is added to a workspace's autostop deadline
when user activity is detected.
- **Dormancy**: This allows automatic deletion of unused workspaces to reduce
spend on idle resources.
## Allow users scheduling
For templates where a uniform autostop duration is not appropriate, admins may
allow users to define their own autostart and autostop schedules. Admins can
restrict the days of the week a workspace should automatically start to help
manage infrastructure costs.
## Failure cleanup (enterprise) (premium)
Failure cleanup defines how long a workspace is permitted to remain in the
failed state prior to being automatically stopped. Failure cleanup is an
enterprise-only feature.
## Dormancy threshold (enterprise) (premium)
Dormancy Threshold defines how long Coder allows a workspace to remain inactive
before being moved into a dormant state. A workspace's inactivity is determined
by the time elapsed since a user last accessed the workspace. A workspace in the
dormant state is not eligible for autostart and must be manually activated by
the user before being accessible. Coder stops workspaces during their transition
to the dormant state if they are detected to be running. Dormancy Threshold is
an enterprise-only feature.
## Dormancy auto-deletion (enterprise) (premium)
Dormancy Auto-Deletion allows a template admin to dictate how long a workspace
is permitted to remain dormant before it is automatically deleted. Dormancy
Auto-Deletion is an enterprise-only feature.
## Autostop requirement (enterprise) (premium)
Autostop requirement is a template setting that determines how often workspaces
using the template must automatically stop. Autostop requirement ignores any
active connections, and ensures that workspaces do not run in perpetuity when
connections are left open inadvertently.
Workspaces will apply the template autostop requirement on the given day in the
user's timezone and specified quiet hours (see below). This ensures that
workspaces will not be stopped during work hours.
The available options are "Days", which can be set to "Daily", "Saturday" or
"Sunday", and "Weeks", which can be set to any number from 1 to 16.
"Days" governs which days of the week workspaces must stop. If you select
"daily", workspaces must be automatically stopped every day at the start of the
user's defined quiet hours. When using "Saturday" or "Sunday", workspaces will
be automatically stopped on Saturday or Sunday in the user's timezone and quiet
hours.
"Weeks" determines how many weeks between required stops. It cannot be changed
from the default of 1 if you have selected "Daily" for "Days". When using a
value greater than 1, workspaces will be automatically stopped every N weeks on
the day specified by "Days" and the user's quiet hours. The autostop week is
synchronized for all workspaces on the same template.
Autostop requirement is disabled when the template is using the deprecated max
lifetime feature. Templates can choose to use a max lifetime or an autostop
requirement during the deprecation period, but only one can be used at a time.
## User quiet hours (enterprise) (premium)
User quiet hours can be configured in the user's schedule settings page.
Workspaces on templates with an autostop requirement will only be forcibly
stopped due to the policy at the start of the user's quiet hours.
![User schedule settings](../../../images/admin/templates/schedule/user-quiet-hours.png)
Admins can define the default quiet hours for all users with the
`--default-quiet-hours-schedule` flag or `CODER_DEFAULT_QUIET_HOURS_SCHEDULE`
environment variable. The value should be a cron expression such as
`CRON_TZ=America/Chicago 30 2 * * *` which would set the default quiet hours to
2:30 AM in the America/Chicago timezone. The cron schedule can only have a
minute and hour component. The default schedule is UTC 00:00. It is recommended
to set the default quiet hours to a time when most users are not expected to be
using Coder.
Admins can force users to use the default quiet hours with the
[CODER_ALLOW_CUSTOM_QUIET_HOURS](../../../reference/cli/server.md#allow-custom-quiet-hours)
environment variable. Users will still be able to see the page, but will be
unable to set a custom time or timezone. If users have already set a custom
quiet hours schedule, it will be ignored and the default will be used instead.
# Open in Coder
You can embed an "Open in Coder" button into your git repos or internal wikis to
let developers quickly launch a new workspace.
<video autoplay playsinline loop>
<source src="https://github.com/coder/coder/blob/main/docs/images/templates/open-in-coder.mp4?raw=true" type="video/mp4">
Your browser does not support the video tag.
</video>
## How it works
To support any infrastructure and software stack, Coder provides a generic
approach for "Open in Coder" flows.
### 1. Set up git authentication
See [External Authentication](../external-auth.md) to set up git authentication
in your Coder deployment.
### 2. Modify your template to auto-clone repos
The id in the template's `coder_external_auth` data source must match the
`CODER_EXTERNAL_AUTH_X_ID` in the Coder deployment configuration.
If you want the template to clone a specific git repo:
```hcl
# Require external authentication to use this template
data "coder_external_auth" "github" {
id = "primary-github"
}
resource "coder_agent" "dev" {
# ...
dir = "~/coder"
startup_script = <<EOF
# Clone repo from GitHub
if [ ! -d "coder" ]
then
git clone https://github.com/coder/coder
fi
EOF
}
```
> Note: The `dir` attribute can be set in multiple ways, for example:
>
> - `~/coder`
> - `/home/coder/coder`
> - `coder` (relative to the home directory)
If you want the template to support any repository via
[parameters](./extending-templates/parameters.md):
```hcl
# Require external authentication to use this template
data "coder_external_auth" "github" {
id = "primary-github"
}
# Prompt the user for the git repo URL
data "coder_parameter" "git_repo" {
name = "git_repo"
display_name = "Git repository"
default = "https://github.com/coder/coder"
}
locals {
folder_name = try(element(split("/", data.coder_parameter.git_repo.value), length(split("/", data.coder_parameter.git_repo.value)) - 1), "")
}
resource "coder_agent" "dev" {
# ...
dir = "~/${local.folder_name}"
startup_script = <<EOF
# Clone repo from GitHub
if [ ! -d "${local.folder_name}" ]
then
git clone ${data.coder_parameter.git_repo.value}
fi
EOF
}
```
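As a side note, the `folder_name` local above simply takes the last path segment
of the repo URL; the equivalent shell derivation (illustrative only) is:

```shell
# Derive the checkout folder name from a repo URL (mirrors the Terraform locals above)
git_repo="https://github.com/coder/coder"
folder_name="${git_repo##*/}"
echo "$folder_name"   # coder
```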
### 3. Embed the "Open in Coder" button with Markdown
```md
[![Open in Coder](https://YOUR_ACCESS_URL/open-in-coder.svg)](https://YOUR_ACCESS_URL/templates/YOUR_TEMPLATE/workspace)
```
Be sure to replace `YOUR_ACCESS_URL` with your Coder access URL (e.g.
<https://coder.example.com>) and `YOUR_TEMPLATE` with the name of your template.
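For example, with a (hypothetical) access URL of `https://coder.example.com` and
a template named `aws-linux`, the snippet becomes:

```md
[![Open in Coder](https://coder.example.com/open-in-coder.svg)](https://coder.example.com/templates/aws-linux/workspace)
```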
### 4. Optional: pre-fill parameter values in the "Create Workspace" page
This can be used to pre-fill the git repo URL, disk size, image, etc.
```md
[![Open in Coder](https://YOUR_ACCESS_URL/open-in-coder.svg)](https://YOUR_ACCESS_URL/templates/YOUR_TEMPLATE/workspace?param.git_repo=https://github.com/coder/slog&param.home_disk_size%20%28GB%29=20)
```
![Pre-filled parameters](../../images/templates/pre-filled-parameters.png)
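Parameter names that contain spaces or other special characters must be
percent-encoded in the query string, as with `home_disk_size%20%28GB%29` above.
One quick way to compute the encoding (a sketch; any URL-encoding tool works):

```shell
# Percent-encode a parameter name using Python's standard library
python3 -c 'from urllib.parse import quote; print(quote("home_disk_size (GB)"))'
```

This prints `home_disk_size%20%28GB%29`, ready to paste into the button URL.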
### 5. Optional: disable specific parameter fields

Include the parameter names as specified in your template in the
`disable_params` search params list:
```md
[![Open in Coder](https://YOUR_ACCESS_URL/open-in-coder.svg)](https://YOUR_ACCESS_URL/templates/YOUR_TEMPLATE/workspace?disable_params=first_parameter,second_parameter)
```
### Example: Kubernetes
For a full example of the Open in Coder flow in Kubernetes, check out
[this example template](https://github.com/bpmct/coder-templates/tree/main/kubernetes-open-in-coder).


@ -0,0 +1,21 @@
# Permissions (enterprise) (premium)
Licensed Coder administrators can control who can use and modify the template.
![Template Permissions](../../images/templates/permissions.png)
Permissions allow you to control who can use and modify the template. Both
individual users and groups can be added to the access list for a template.
Members can be assigned either a `Use` role, granting use of the template to
create workspaces, or `Admin`, allowing a user or members of a group to control
all aspects of the template. This offers a way to elevate the privileges of
ordinary users for specific templates without granting them the site-wide role
of `Template Admin`.
By default, the `Everyone` group is assigned to each template, meaning any Coder
user can use the template to create a workspace. To prevent this, disable the
`Allow everyone to use the template` setting when creating a template.
![Create Template Permissions](../../images/templates/create-template-permissions.png)
Permissions are an enterprise-only feature.


@ -0,0 +1,155 @@
# Troubleshooting templates
Occasionally, you may run into scenarios where a workspace is created, but the
agent is either not connected or the
[startup script](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script)
has failed or timed out.
## Agent connection issues
If the agent is not connected, it means the agent or
[init script](https://github.com/coder/coder/tree/main/provisionersdk/scripts)
has failed on the resource.
```console
$ coder ssh myworkspace
⢄⡱ Waiting for connection from [agent]...
```
While troubleshooting steps vary by resource, here are some general best
practices:
- Ensure the resource has `curl` installed (alternatively, `wget` or `busybox`)
- Ensure the resource can `curl` your Coder
[access URL](../../admin/setup/index.md#access-url)
- Manually connect to the resource and check the agent logs (e.g.,
`kubectl exec`, `docker exec` or AWS console)
- The Coder agent logs are typically stored in `/tmp/coder-agent.log`
- The Coder agent startup script logs are typically stored in
`/tmp/coder-startup-script.log`
- The Coder agent shutdown script logs are typically stored in
`/tmp/coder-shutdown-script.log`
- This can also happen if the websockets are not being forwarded correctly when
running Coder behind a reverse proxy.
[Read our reverse-proxy docs](../../admin/setup/index.md#tls--reverse-proxy)
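For example, once you have a shell on the resource, you can check which of the
typical log files exist and inspect their tails (paths are the usual defaults
and may differ in your template):

```shell
# Print the tail of each Coder log file that exists on this resource
for log in /tmp/coder-agent.log /tmp/coder-startup-script.log /tmp/coder-shutdown-script.log; do
  if [ -f "$log" ]; then
    echo "==> $log <=="
    tail -n 50 "$log"
  fi
done
```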
## Startup script issues
Depending on the contents of the
[startup script](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script),
and whether or not the
[startup script behavior](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script_behavior)
is set to blocking or non-blocking, you may notice issues related to the startup
script. In this section we will cover common scenarios and how to resolve them.
### Unable to access workspace, startup script is still running
If you're trying to access your workspace and are unable to because the
[startup script](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script)
is still running, it means the
[startup script behavior](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script_behavior)
option is set to blocking or you have enabled the `--wait=yes` option (e.g. for
`coder ssh` or `coder config-ssh`). In such an event, you can always access the
workspace by using the web terminal, or via SSH using the `--wait=no` option. If
the startup script is running longer than it should, or never completing, you
can try to [debug the startup script](#debugging-the-startup-script) to resolve
the issue. Alternatively, you can try to force the startup script to exit by
terminating processes started by it or terminating the startup script itself (on
Linux, `ps` and `kill` are useful tools).
For tips on how to write a startup script that doesn't run forever, see the
[`startup_script`](#startup_script) section. For more ways to override the
startup script behavior, see the
[`startup_script_behavior`](#startup_script_behavior) section.
Template authors can also set the
[startup script behavior](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script_behavior)
option to non-blocking, which will allow users to access the workspace while the
startup script is still running. Note that the workspace must be updated after
changing this option.
### Your workspace may be incomplete
If you see a warning that your workspace may be incomplete, it means you should
be aware that programs, files, or settings may be missing from your workspace.
This can happen if the
[startup script](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script)
is still running or has exited with a non-zero status (see
[startup script error](#startup-script-error)). No action is necessary, but you
may want to
[start a new shell session](#session-was-started-before-the-startup-script-finished-web-terminal)
after it has completed or check the
[startup script logs](#debugging-the-startup-script) to see if there are any
issues.
### Session was started before the startup script finished
The web terminal may show this message if the session was started before the
[startup script](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script)
finished, but the script has since completed. This message can safely be
dismissed; however, be aware that your preferred shell or dotfiles may not yet
be activated for this shell session. You can either start a new session or
source your dotfiles manually. Note that starting a new session means that
commands running in the terminal will be terminated and you may lose unsaved
work.
Examples for activating your preferred shell or sourcing your dotfiles:
- `exec zsh -l`
- `source ~/.bashrc`
### Startup script exited with an error
When the
[startup script](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script)
exits with an error, it means the last command run by the script failed. When
`set -e` is used, this means that any failing command will immediately exit the
script and the remaining commands will not be executed. This also means that
[your workspace may be incomplete](#your-workspace-may-be-incomplete). If you
see this error, you can check the
[startup script logs](#debugging-the-startup-script) to figure out what the
issue is.
Common causes for startup script errors:
- A missing command or file
- A command that fails due to missing permissions
- Network issues (e.g., unable to reach a server)
### Debugging the startup script
The simplest way to debug the
[startup script](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/agent#startup_script)
is to open the workspace in the Coder dashboard and click "Show startup log" (if
not already visible). This will show all the output from the script. Another
option is to view the log file inside the workspace (usually
`/tmp/coder-startup-script.log`). If the logs don't indicate what's going on or
going wrong, you can increase verbosity by adding `set -x` to the top of the
startup script (note that this will show all commands run and may output
sensitive information). Alternatively, you can add `echo` statements to show
what's going on.
Here's a short example of an informative startup script:
```shell
echo "Running startup script..."
echo "Run: long-running-command"
/path/to/long-running-command
status=$?
echo "Done: long-running-command, exit status: ${status}"
if [ $status -ne 0 ]; then
echo "Startup script failed, exiting..."
exit $status
fi
```
> **Note:** We don't use `set -x` here because we're manually echoing the
> commands. This protects against sensitive information being shown in the log.
This script tells us what command is being run and what the exit status is. If
the exit status is non-zero, it means the command failed and we exit the script.
Since we are manually checking the exit status here, we don't need `set -e` at
the top of the script to exit on error.
> **Note:** If you aren't seeing any logs, check that the `dir` directive points
> to a valid directory in the file system.


@ -1,55 +0,0 @@
# Upgrade
This article walks you through how to upgrade your Coder server.
<blockquote class="danger">
<p>
Prior to upgrading a production Coder deployment, take a database snapshot since
Coder does not support rollbacks.
</p>
</blockquote>
To upgrade your Coder server, simply reinstall Coder using your original
[install](../install) method.
## Via install.sh
If you installed Coder using the `install.sh` script, re-run the below command
on the host:
```shell
curl -L https://coder.com/install.sh | sh
```
The script will unpack the new `coder` binary version over the one currently
installed. Next, you can restart Coder with the following commands (if running
it as a system service):
```shell
systemctl daemon-reload
systemctl restart coder
```
## Via docker-compose
If you installed using `docker-compose`, run the below command to upgrade the
Coder container:
```shell
docker-compose pull coder && docker-compose up -d coder
```
## Via Kubernetes
See
[Upgrading Coder via Helm](../install/kubernetes.md#upgrading-coder-via-helm).
## Via Windows
Download the latest Windows installer or binary from
[GitHub releases](https://github.com/coder/coder/releases/latest), or upgrade
from Winget.
```pwsh
winget install Coder.Coder
```


@ -0,0 +1,84 @@
## GitHub
### Step 1: Configure the OAuth application in GitHub
First,
[register a GitHub OAuth app](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/).
GitHub will ask you for the following Coder parameters:
- **Homepage URL**: Set to your Coder deployment's
[`CODER_ACCESS_URL`](../../reference/cli/server.md#--access-url) (e.g.
`https://coder.domain.com`)
- **User Authorization Callback URL**: Set to `https://coder.domain.com`
> Note: If you want multiple Coder deployments hosted on subdomains, e.g.
> coder1.domain.com and coder2.domain.com, to authenticate with the same GitHub
> OAuth app, set the **User Authorization Callback URL** to
> `https://domain.com`.
Note the Client ID and Client Secret generated by GitHub. You will use these
values in the next step.
Coder will need permission to access user email addresses. Find the "Account
Permissions" settings for your app and select "read-only" for "Email addresses".
### Step 2: Configure Coder with the OAuth credentials
Navigate to your Coder host and run the following command to start up the Coder
server:
```shell
coder server --oauth2-github-allow-signups=true --oauth2-github-allowed-orgs="your-org" --oauth2-github-client-id="8d1...e05" --oauth2-github-client-secret="57ebc9...02c24c"
```
> For GitHub Enterprise support, specify the
> `--oauth2-github-enterprise-base-url` flag.
Alternatively, if you are running Coder as a system service, you can achieve the
same result as the command above by adding the following environment variables
to the `/etc/coder.d/coder.env` file:
```env
CODER_OAUTH2_GITHUB_ALLOW_SIGNUPS=true
CODER_OAUTH2_GITHUB_ALLOWED_ORGS="your-org"
CODER_OAUTH2_GITHUB_CLIENT_ID="8d1...e05"
CODER_OAUTH2_GITHUB_CLIENT_SECRET="57ebc9...02c24c"
```
**Note:** To allow everyone to sign up using GitHub, set:
```env
CODER_OAUTH2_GITHUB_ALLOW_EVERYONE=true
```
Once complete, run `sudo service coder restart` to restart Coder.
If deploying Coder via Helm, you can set the above environment variables in the
`values.yaml` file as such:
```yaml
coder:
env:
- name: CODER_OAUTH2_GITHUB_ALLOW_SIGNUPS
value: "true"
- name: CODER_OAUTH2_GITHUB_CLIENT_ID
value: "533...des"
- name: CODER_OAUTH2_GITHUB_CLIENT_SECRET
value: "G0CSP...7qSM"
# If setting allowed orgs, comment out CODER_OAUTH2_GITHUB_ALLOW_EVERYONE and its value
- name: CODER_OAUTH2_GITHUB_ALLOWED_ORGS
value: "your-org"
# If allowing everyone, comment out CODER_OAUTH2_GITHUB_ALLOWED_ORGS and its value
#- name: CODER_OAUTH2_GITHUB_ALLOW_EVERYONE
# value: "true"
```
To upgrade Coder, run:
```shell
helm upgrade <release-name> coder-v2/coder -n <namespace> -f values.yaml
```
> We recommend requiring and auditing MFA usage for all users in your GitHub
> organizations. This can be enforced from the organization settings page in the
> "Authentication security" sidebar tab.


@ -0,0 +1,44 @@
# Groups and Roles
Groups and roles can be manually assigned in Coder. For production deployments,
these can also be [managed and synced by the identity provider](./idp-sync.md).
## Groups
Groups are logical segmentations of users in Coder and can be used to control
which templates developers can use. For example:
- Users within the `devops` group can access the `AWS-VM` template
- Users within the `data-science` group can access the `Jupyter-Kubernetes`
template
## Roles
Roles determine which actions users can take within the platform.
| | Auditor | User Admin | Template Admin | Owner |
| --------------------------------------------------------------- | ------- | ---------- | -------------- | ----- |
| Add and remove Users | | ✅ | | ✅ |
| Manage groups (enterprise) (premium) | | ✅ | | ✅ |
| Change User roles | | | | ✅ |
| Manage **ALL** Templates | | | ✅ | ✅ |
| View **ALL** Workspaces | | | ✅ | ✅ |
| Update and delete **ALL** Workspaces | | | | ✅ |
| Run [external provisioners](../provisioners.md) | | | ✅ | ✅ |
| Execute and use **ALL** Workspaces | | | | ✅ |
| View all user operation [Audit Logs](../security/audit-logs.md) | ✅ | | | ✅ |
A user may have one or more roles. All users have an implicit Member role that
may use personal workspaces.
### Security notes
A malicious Template Admin could write a template that executes commands on the
host (or `coder server` container), which potentially escalates their privileges
or shuts down the Coder server. To avoid this, run
[external provisioners](../provisioners.md).
In low-trust environments, we do not recommend giving users direct access to
edit templates. Instead, use
[CI/CD pipelines to update templates](../templates/managing-templates/change-management.md)
with proper security scans and code reviews in place.


@ -0,0 +1,31 @@
# Headless Authentication
Headless user accounts cannot use the web UI to log in to Coder. They are
useful for automated systems, such as CI/CD pipelines, or for users who only
consume Coder via another client or the API.
> You must have the User Admin role or above to create headless users.
## Create a headless user
<div class="tabs">
## CLI
```sh
coder users create \
--email="coder-bot@coder.com" \
--username="coder-bot" \
--login-type="none"
```
## UI
Navigate to `Users` > `Create user` in the top bar.
![Create a user via the UI](../../images/admin/users/headless-user.png)
</div>
To make API or CLI requests on behalf of the headless user, learn how to
[generate API tokens on behalf of a user](./sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-another-user).


@ -1,260 +1,11 @@
# Authentication
By default, Coder is accessible via password authentication. Coder does not
recommend using password authentication in production, and recommends using an
authentication provider with properly configured multi-factor authentication
(MFA). It is your responsibility to ensure the auth provider enforces MFA
correctly.
The following steps explain how to set up GitHub OAuth or OpenID Connect.
## GitHub
### Step 1: Configure the OAuth application in GitHub
First,
[register a GitHub OAuth app](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/).
GitHub will ask you for the following Coder parameters:
- **Homepage URL**: Set to your Coder deployment's
[`CODER_ACCESS_URL`](../reference/cli/server.md#--access-url) (e.g.
`https://coder.domain.com`)
- **User Authorization Callback URL**: Set to `https://coder.domain.com`
> Note: If you want multiple Coder deployments hosted on subdomains, e.g.
> coder1.domain.com and coder2.domain.com, to authenticate with the same GitHub
> OAuth app, set the **User Authorization Callback URL** to
> `https://domain.com`.
Note the Client ID and Client Secret generated by GitHub. You will use these
values in the next step.
Coder will need permission to access user email addresses. Find the "Account
Permissions" settings for your app and select "read-only" for "Email addresses".
### Step 2: Configure Coder with the OAuth credentials
Navigate to your Coder host and run the following command to start up the Coder
server:
```shell
coder server --oauth2-github-allow-signups=true --oauth2-github-allowed-orgs="your-org" --oauth2-github-client-id="8d1...e05" --oauth2-github-client-secret="57ebc9...02c24c"
```
> For GitHub Enterprise support, specify the
> `--oauth2-github-enterprise-base-url` flag.
Alternatively, if you are running Coder as a system service, you can achieve the
same result as the command above by adding the following environment variables
to the `/etc/coder.d/coder.env` file:
```env
CODER_OAUTH2_GITHUB_ALLOW_SIGNUPS=true
CODER_OAUTH2_GITHUB_ALLOWED_ORGS="your-org"
CODER_OAUTH2_GITHUB_CLIENT_ID="8d1...e05"
CODER_OAUTH2_GITHUB_CLIENT_SECRET="57ebc9...02c24c"
```
**Note:** To allow everyone to sign up using GitHub, set:
```env
CODER_OAUTH2_GITHUB_ALLOW_EVERYONE=true
```
Once complete, run `sudo service coder restart` to restart Coder.
If deploying Coder via Helm, you can set the above environment variables in the
`values.yaml` file as such:
```yaml
coder:
env:
- name: CODER_OAUTH2_GITHUB_ALLOW_SIGNUPS
value: "true"
- name: CODER_OAUTH2_GITHUB_CLIENT_ID
value: "533...des"
- name: CODER_OAUTH2_GITHUB_CLIENT_SECRET
value: "G0CSP...7qSM"
# If setting allowed orgs, comment out CODER_OAUTH2_GITHUB_ALLOW_EVERYONE and its value
- name: CODER_OAUTH2_GITHUB_ALLOWED_ORGS
value: "your-org"
# If allowing everyone, comment out CODER_OAUTH2_GITHUB_ALLOWED_ORGS and its value
#- name: CODER_OAUTH2_GITHUB_ALLOW_EVERYONE
# value: "true"
```
To upgrade Coder, run:
```shell
helm upgrade <release-name> coder-v2/coder -n <namespace> -f values.yaml
```
> We recommend requiring and auditing MFA usage for all users in your GitHub
> organizations. This can be enforced from the organization settings page in the
> "Authentication security" sidebar tab.
## OpenID Connect
The following steps explain how to integrate any OpenID Connect provider (Okta,
Active Directory, etc.) with Coder.
### Step 1: Set Redirect URI with your OIDC provider
Your OIDC provider will ask you for the following parameter:
- **Redirect URI**: Set to `https://coder.domain.com/api/v2/users/oidc/callback`
### Step 2: Configure Coder with the OpenID Connect credentials
Navigate to your Coder host and run the following command to start up the Coder
server:
```shell
coder server --oidc-issuer-url="https://issuer.corp.com" --oidc-email-domain="your-domain-1,your-domain-2" --oidc-client-id="533...des" --oidc-client-secret="G0CSP...7qSM"
```
If you are running Coder as a system service, you can achieve the same result as
the command above by adding the following environment variables to the
`/etc/coder.d/coder.env` file:
```env
CODER_OIDC_ISSUER_URL="https://issuer.corp.com"
CODER_OIDC_EMAIL_DOMAIN="your-domain-1,your-domain-2"
CODER_OIDC_CLIENT_ID="533...des"
CODER_OIDC_CLIENT_SECRET="G0CSP...7qSM"
```
Once complete, run `sudo service coder restart` to restart Coder.
If deploying Coder via Helm, you can set the above environment variables in the
`values.yaml` file as such:
```yaml
coder:
env:
- name: CODER_OIDC_ISSUER_URL
value: "https://issuer.corp.com"
- name: CODER_OIDC_EMAIL_DOMAIN
value: "your-domain-1,your-domain-2"
- name: CODER_OIDC_CLIENT_ID
value: "533...des"
- name: CODER_OIDC_CLIENT_SECRET
value: "G0CSP...7qSM"
```
To upgrade Coder, run:
```shell
helm upgrade <release-name> coder-v2/coder -n <namespace> -f values.yaml
```
## OIDC Claims
When a user logs in for the first time via OIDC, Coder will merge both the
claims from the ID token and the claims obtained from hitting the upstream
provider's `userinfo` endpoint, and use the resulting data as a basis for
creating a new user or looking up an existing user.
To troubleshoot claims, set `CODER_VERBOSE=true` and follow the logs while
signing in via OIDC as a new user. Coder will log the claim fields returned by
the upstream identity provider in a message containing the string
`got oidc claims`, as well as the user info returned.
> **Note:** If you need to ensure that Coder only uses information from the ID
> token and does not hit the UserInfo endpoint, you can set the configuration
> option `CODER_OIDC_IGNORE_USERINFO=true`.
### Email Addresses
By default, Coder will look for the OIDC claim named `email` and use that value
for the newly created user's email address.
If your upstream identity provider uses a different claim, you can set
`CODER_OIDC_EMAIL_FIELD` to the desired claim.
> **Note:** If this field is not present, Coder will attempt to use the claim
> field configured for `username` as an email address. If this field is not a
> valid email address, OIDC logins will fail.
### Email Address Verification
Coder requires all OIDC email addresses to be verified by default. If the
`email_verified` claim is present in the token response from the identity
provider, Coder will validate that its value is `true`. If needed, you can
disable this behavior with the following setting:
```env
CODER_OIDC_IGNORE_EMAIL_VERIFIED=true
```
> **Note:** This will cause Coder to implicitly treat all OIDC emails as
> "verified", regardless of what the upstream identity provider says.
### Usernames
When a new user logs in via OIDC, Coder will by default use the value of the
claim field named `preferred_username` as the username.
If your upstream identity provider uses a different claim, you can set
`CODER_OIDC_USERNAME_FIELD` to the desired claim.
> **Note:** If this claim is empty, the email address will be stripped of the
> domain, and become the username (e.g. `example@coder.com` becomes `example`).
> To avoid conflicts, Coder may also append a random word to the resulting
> username.
## OIDC Login Customization
If you'd like to change the OpenID Connect button text and/or icon, you can
configure them like so:
```env
CODER_OIDC_SIGN_IN_TEXT="Sign in with Gitea"
CODER_OIDC_ICON_URL=https://gitea.io/images/gitea.png
```
To change the icon and text above the OpenID Connect button, see application
name and logo url in [appearance](./appearance.md) settings.
## Disable Built-in Authentication
To remove email and password login, set the following environment variable on
your Coder deployment:
```env
CODER_DISABLE_PASSWORD_AUTH=true
```
## SCIM (enterprise) (premium)
Coder supports user provisioning and deprovisioning via SCIM 2.0 with header
authentication. Upon deactivation, users are
[suspended](./users.md#suspend-a-user) and are not deleted.
[Configure](./configure.md) your SCIM application with an auth key and supply it to
the Coder server.
```env
CODER_SCIM_AUTH_HEADER="your-api-key"
```
## TLS
If your OpenID Connect provider requires client TLS certificates for
authentication, you can configure them like so:
```env
CODER_TLS_CLIENT_CERT_FILE=/path/to/cert.pem
CODER_TLS_CLIENT_KEY_FILE=/path/to/key.pem
```
## Group Sync (enterprise) (premium)
# IDP Sync (enterprise) (premium)
If your OpenID Connect provider supports group claims, you can configure Coder
to synchronize groups in your auth provider to groups within Coder. To enable
group sync, ensure that the `groups` claim is being sent by your OpenID
provider. You might need to request an additional
[scope](../reference/cli/server.md#--oidc-scopes) or additional configuration on
the OpenID provider side.
[scope](../../reference/cli/server.md#--oidc-scopes) or additional configuration
on the OpenID provider side.
If group sync is enabled, the user's groups will be controlled by the OIDC
provider. This means manual group additions/removals will be overwritten on the
@ -283,7 +34,8 @@ the OIDC provider. See
> ones include `groups`, `memberOf`, and `roles`.
Next configure the Coder server to read groups from the claim name with the
[OIDC group field](../reference/cli/server.md#--oidc-group-field) server flag:
[OIDC group field](../../reference/cli/server.md#--oidc-group-field) server
flag:
```sh
# as an environment variable
@ -301,7 +53,7 @@ names in Coder and removed from groups that the user no longer belongs to.
For cases when an OIDC provider only returns group IDs ([Azure AD][azure-gids])
or you want to have different group names in Coder than in your OIDC provider,
you can configure mapping between the two with the
[OIDC group mapping](../reference/cli/server.md#--oidc-group-mapping) server
[OIDC group mapping](../../reference/cli/server.md#--oidc-group-mapping) server
flag.
```sh
@ -339,7 +91,7 @@ For deployments with multiple [organizations](./organizations.md), you must
configure group sync at the organization level. In future Coder versions, you
will be able to configure this in the UI. For now, you must use CLI commands.
First confirm you have the [Coder CLI](../install/index.md) installed and are
First confirm you have the [Coder CLI](../../install/index.md) installed and are
logged in with a user who is an Owner or Organization Admin role. Next, confirm
that your OIDC provider is sending a groups claim by logging in with OIDC and
visiting the following URL:
@ -420,7 +172,7 @@ coder organizations settings set group-sync \
Visit the Coder UI to confirm these changes:
![IDP Sync](../images/admin/organizations/group-sync.png)
![IDP Sync](../../images/admin/users/organizations/group-sync.png)
</div>
@ -430,7 +182,7 @@ You can limit which groups from your identity provider can log in to Coder with
[CODER_OIDC_ALLOWED_GROUPS](https://coder.com/docs/cli/server#--oidc-allowed-groups).
Users who are not in a matching group will see the following error:
![Unauthorized group error](../images/admin/group-allowlist.png)
![Unauthorized group error](../../images/admin/group-allowlist.png)
## Role sync (enterprise) (premium)
@ -460,10 +212,10 @@ the OIDC provider. See
> Depending on the OIDC provider, this claim may be named differently.
Next configure the Coder server to read groups from the claim name with the
[OIDC role field](../reference/cli/server.md#--oidc-user-role-field) server
[OIDC role field](../../reference/cli/server.md#--oidc-user-role-field) server
flag:
Set the following in your Coder server [configuration](./configure.md).
Set the following in your Coder server [configuration](../setup/index.md).
```env
# Depending on your identity provider configuration, you may need to explicitly request a "roles" scope
@ -546,7 +298,7 @@ coder organizations settings set role-sync \
Visit the Coder UI to confirm these changes:
![IDP Sync](../images/admin/organizations/role-sync.png)
![IDP Sync](../../images/admin/users/organizations/role-sync.png)
</div>
@ -575,7 +327,7 @@ the OIDC provider. See
> ones include `groups`, `memberOf`, and `roles`.
Next configure the Coder server to read groups from the claim name with the
[OIDC organization field](../reference/cli/server.md#--oidc-organization-field)
[OIDC organization field](../../reference/cli/server.md#--oidc-organization-field)
server flag:
```sh
@ -589,7 +341,7 @@ Next, fetch the corresponding organization IDs using the following endpoint:
https://[coder.example.com]/api/v2/organizations
```
Set the following in your Coder server [configuration](./configure.md).
Set the following in your Coder server [configuration](../setup/index.md).
```env
CODER_OIDC_ORGANIZATION_MAPPING='{"data-scientists":["d8d9daef-e273-49ff-a832-11fe2b2d4ab1", "70be0908-61b5-4fb5-aba4-4dfb3a6c5787"]}'
@ -614,8 +366,8 @@ Some common issues when enabling group/role sync.
If you are running into issues with group/role sync, it is best to view your Coder
server logs and enable
[verbose mode](https://coder.com/docs/v2/v2.5.1/cli#-v---verbose). To reduce
noise, you can filter for only logs related to group/role sync:
[verbose mode](../../reference/cli/index.md#-v---verbose). To reduce noise, you
can filter for only logs related to group/role sync:
```sh
CODER_VERBOSE=true


@ -1,48 +1,33 @@
# Users
This article walks you through the user roles available in Coder and creating
and managing users.
By default, Coder is accessible via password authentication. For production
deployments, we recommend using an SSO authentication provider with multi-factor
authentication (MFA). It is your responsibility to ensure the auth provider
enforces MFA correctly.
## Configuring SSO
- [OpenID Connect](./oidc-auth.md) (e.g. Okta, Keycloak, PingFederate, Azure AD)
- [GitHub](./github-auth.md) (or GitHub Enterprise)
## Groups
Multiple users can be organized into logical groups to control which templates
they can use. While groups can be manually created in Coder, we recommend
syncing them from your identity provider.
- [Learn more about Groups](./groups-roles.md)
- [Group & Role Sync](./idp-sync.md)
## Roles
Coder offers these user roles in the community edition:
Roles determine which actions users can take within the platform. Typically,
most developers in your organization have the `Member` role, allowing them to
create workspaces. Other roles have administrative capabilities such as
auditing, managing users, and managing templates.
| | Auditor | User Admin | Template Admin | Owner |
| ----------------------------------------------------- | ------- | ---------- | -------------- | ----- |
| Add and remove Users | | ✅ | | ✅ |
| Manage groups (premium) | | ✅ | | ✅ |
| Change User roles | | | | ✅ |
| Manage **ALL** Templates | | | ✅ | ✅ |
| View **ALL** Workspaces | | | ✅ | ✅ |
| Update and delete **ALL** Workspaces | | | | ✅ |
| Run [external provisioners](./provisioners.md) | | | ✅ | ✅ |
| Execute and use **ALL** Workspaces | | | | ✅ |
| View all user operation [Audit Logs](./audit-logs.md) | ✅ | | | ✅ |
A user may have one or more roles. All users have an implicit Member role that
allows them to use their personal workspaces.
## Custom Roles (Premium) (Beta)
Coder v2.16+ deployments can configure custom roles on the
[Organization](./organizations.md) level.
![Custom roles](../images/admin/organizations/custom-roles.png)
> Note: This requires a Premium license.
> [Contact your account team](https://coder.com/contact) for more details.
## Security notes
A malicious Template Admin could write a template that executes commands on the
host (or `coder server` container), which potentially escalates their privileges
or shuts down the Coder server. To avoid this, run
[external provisioners](./provisioners.md).
In low-trust environments, we do not recommend giving users direct access to
edit templates. Instead, use
[CI/CD pipelines to update templates](../templates/change-management.md) with
proper security scans and code reviews in place.
- [Learn more about Roles](./groups-roles.md)
- [Group & Role Sync](./idp-sync.md)
## User status

View File

@ -0,0 +1,158 @@
# OpenID Connect
The following steps walk through how to integrate any OpenID Connect provider
(Okta, Active Directory, etc.) with Coder.
## Step 1: Set Redirect URI with your OIDC provider
Your OIDC provider will ask you for the following parameter:
- **Redirect URI**: Set to `https://coder.domain.com/api/v2/users/oidc/callback`
## Step 2: Configure Coder with the OpenID Connect credentials
Navigate to your Coder host and run the following command to start up the Coder
server:
```shell
coder server --oidc-issuer-url="https://issuer.corp.com" --oidc-email-domain="your-domain-1,your-domain-2" --oidc-client-id="533...des" --oidc-client-secret="G0CSP...7qSM"
```
If you are running Coder as a system service, you can achieve the same result as
the command above by adding the following environment variables to the
`/etc/coder.d/coder.env` file:
```env
CODER_OIDC_ISSUER_URL="https://issuer.corp.com"
CODER_OIDC_EMAIL_DOMAIN="your-domain-1,your-domain-2"
CODER_OIDC_CLIENT_ID="533...des"
CODER_OIDC_CLIENT_SECRET="G0CSP...7qSM"
```
Once complete, run `sudo service coder restart` to restart Coder.
If deploying Coder via Helm, you can set the above environment variables in the
`values.yaml` file as such:
```yaml
coder:
env:
- name: CODER_OIDC_ISSUER_URL
value: "https://issuer.corp.com"
- name: CODER_OIDC_EMAIL_DOMAIN
value: "your-domain-1,your-domain-2"
- name: CODER_OIDC_CLIENT_ID
value: "533...des"
- name: CODER_OIDC_CLIENT_SECRET
value: "G0CSP...7qSM"
```
To upgrade Coder, run:
```shell
helm upgrade <release-name> coder-v2/coder -n <namespace> -f values.yaml
```
## OIDC Claims
When a user logs in for the first time via OIDC, Coder will merge both the
claims from the ID token and the claims obtained from hitting the upstream
provider's `userinfo` endpoint, and use the resulting data as a basis for
creating a new user or looking up an existing user.
To troubleshoot claims, set `CODER_VERBOSE=true` and follow the logs while
signing in via OIDC as a new user. Coder will log the claim fields returned by
the upstream identity provider in a message containing the string
`got oidc claims`, as well as the user info returned.
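As a minimal sketch of the filtering step, you can pipe the server logs through
`grep` for that string. The sample log lines below are invented placeholders,
not real Coder output; in practice you would pipe your actual log stream:

```shell
# Demo: keep only lines containing the OIDC claims message.
# Replace the here-doc with your Coder server's log stream, e.g.
#   journalctl -u coder | grep "got oidc claims"
# (the service name "coder" is an assumption; adjust for your deployment)
grep "got oidc claims" <<'EOF'
[debug] got oidc claims: {"email":"example@coder.com"}
[info] some unrelated log line
EOF
```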
> **Note:** If you need to ensure that Coder only uses information from the ID
> token and does not hit the UserInfo endpoint, you can set the configuration
> option `CODER_OIDC_IGNORE_USERINFO=true`.
### Email Addresses
By default, Coder will look for the OIDC claim named `email` and use that value
for the newly created user's email address.
If your upstream identity provider uses a different claim, you can set
`CODER_OIDC_EMAIL_FIELD` to the desired claim.
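For instance, if your provider returned the address in a claim named `mail`
(the claim name here is purely illustrative), the setting would be:

```env
CODER_OIDC_EMAIL_FIELD=mail
```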
> **Note:** If this field is not present, Coder will attempt to use the claim
> field configured for `username` as an email address. If this field is not a
> valid email address, OIDC logins will fail.
### Email Address Verification
Coder requires all OIDC email addresses to be verified by default. If the
`email_verified` claim is present in the token response from the identity
provider, Coder will validate that its value is `true`. If needed, you can
disable this behavior with the following setting:
```env
CODER_OIDC_IGNORE_EMAIL_VERIFIED=true
```
> **Note:** This will cause Coder to implicitly treat all OIDC emails as
> "verified", regardless of what the upstream identity provider says.
### Usernames
When a new user logs in via OIDC, Coder will by default use the value of the
claim field named `preferred_username` as the username.
If your upstream identity provider uses a different claim, you can set
`CODER_OIDC_USERNAME_FIELD` to the desired claim.
> **Note:** If this claim is empty, the email address will be stripped of the
> domain, and become the username (e.g. `example@coder.com` becomes `example`).
> To avoid conflicts, Coder may also append a random word to the resulting
> username.
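For instance, if your provider exposed the handle in a claim named `login` (an
illustrative claim name), the setting would be:

```env
CODER_OIDC_USERNAME_FIELD=login
```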
## OIDC Login Customization
If you'd like to change the OpenID Connect button text and/or icon, you can
configure them like so:
```env
CODER_OIDC_SIGN_IN_TEXT="Sign in with Gitea"
CODER_OIDC_ICON_URL=https://gitea.io/images/gitea.png
```
To change the icon and text above the OpenID Connect button, see the
application name and logo URL in [appearance](../setup/appearance.md) settings.
## Disable Built-in Authentication
To remove email and password login, set the following environment variable on
your Coder deployment:
```env
CODER_DISABLE_PASSWORD_AUTH=true
```
## SCIM (enterprise) (premium)
Coder supports user provisioning and deprovisioning via SCIM 2.0 with header
authentication. Upon deactivation, users are
[suspended](./index.md#suspend-a-user) and are not deleted.
[Configure](../setup/index.md) your SCIM application with an auth key and supply
it to the Coder server.
```env
CODER_SCIM_AUTH_HEADER="your-api-key"
```
## TLS
If your OpenID Connect provider requires client TLS certificates for
authentication, you can configure them like so:
```env
CODER_TLS_CLIENT_CERT_FILE=/path/to/cert.pem
CODER_TLS_CLIENT_KEY_FILE=/path/to/key.pem
```
### Next steps
- [Group Sync](./idp-sync.md)
- [Groups & Roles](./groups-roles.md)

View File

@ -1,7 +1,8 @@
# Organizations (Premium)
> Note: Organizations requires a [Premium license](../licensing.md). For more
> details, [contact your account team](https://coder.com/contact).
> Note: Organizations requires a
> [Premium license](https://coder.com/pricing#compare-plans). For more details,
> [contact your account team](https://coder.com/contact).
Organizations can be used to segment and isolate resources inside a Coder
deployment for different user groups or projects.
@ -11,7 +12,7 @@ deployment for different user groups or projects.
Here is an example of how one could use organizations to run a Coder deployment
with multiple platform teams, all with unique resources:
![Organizations Example](../images/admin/organizations/diagram.png)
![Organizations Example](../../images/admin/users/organizations/diagram.png)
## The default organization
@ -20,21 +21,21 @@ All Coder deployments start with one organization called `Coder`.
To edit the organization details, navigate to `Deployment -> Organizations` in
the top bar:
![Organizations Menu](../images/admin/organizations/deployment-organizations.png)
![Organizations Menu](../../images/admin/users/organizations/deployment-organizations.png)
From there, you can manage the name, icon, description, users, and groups:
![Organization Settings](../images/admin/organizations/default-organization.png)
![Organization Settings](../../images/admin/users/organizations/default-organization.png)
## Additional organizations
Any additional organizations have unique admins, users, templates, provisioners,
groups, and workspaces. Each organization must have at least one
[provisioner](./provisioners.md) as the built-in provisioner only applies to the
default organization.
[provisioner](../provisioners.md) as the built-in provisioner only applies to
the default organization.
You can configure [organization/role/group sync](./auth.md) from your identity
provider to avoid manually assigning users to organizations.
You can configure [organization/role/group sync](./idp-sync.md) from your
identity provider to avoid manually assigning users to organizations.
## Creating an organization
@ -49,17 +50,16 @@ provider to avoid manually assigning users to organizations.
Within the sidebar, click `New organization` to create an organization. In this
example, we'll create the `data-platform` org.
![New Organization](../images/admin/organizations/new-organization.png)
![New Organization](../../images/admin/users/organizations/new-organization.png)
From there, let's deploy a provisioner and template for this organization.
### 2. Deploy a provisioner
[Provisioners](../admin/provisioners.md) are organization-scoped and are
responsible for executing Terraform/OpenTofu to provision the infrastructure for
workspaces and testing templates. Before creating templates, we must deploy at
least one provisioner as the built-in provisioners are scoped to the default
organization.
[Provisioners](../provisioners.md) are organization-scoped and are responsible
for executing Terraform/OpenTofu to provision the infrastructure for workspaces
and testing templates. Before creating templates, we must deploy at least one
provisioner as the built-in provisioners are scoped to the default organization.
Using Coder CLI, run the following command to create a key that will be used to
authenticate the provisioner:
@ -74,7 +74,7 @@ Successfully created provisioner key data-cluster! Save this authentication toke
Next, start the provisioner with the key on your desired platform. In this
example, we'll start it using the Coder CLI on a host with Docker. For
instructions on using other platforms like Kubernetes, see our
[provisioner documentation](../admin/provisioners.md).
[provisioner documentation](../provisioners.md).
```sh
export CODER_URL=https://<your-coder-url>
@ -87,24 +87,24 @@ coder provisionerd start --org <org-name>
Once you've started a provisioner, you can create a template. You'll notice the
"Create Template" screen now has an organization dropdown:
![Template Org Picker](../images/admin/organizations/template-org-picker.png)
![Template Org Picker](../../images/admin/users/organizations/template-org-picker.png)
### 5. Add members
Navigate to `Deployment->Organizations` to add members to your organization.
Once added, they will be able to see the organization-specific templates.
![Add members](../images/admin/organizations/organization-members.png)
![Add members](../../images/admin/users/organizations/organization-members.png)
### 6. Create a workspace
Now, users in the data platform organization will see the templates related to
their organization. Users can be in multiple organizations.
![Workspace List](../images/admin/organizations/workspace-list.png)
![Workspace List](../../images/admin/users/organizations/workspace-list.png)
## Beta
Organizations is in beta. If you encounter any issues, please
As of v2.16.0, Organizations is in beta. If you encounter any issues, please
[file an issue](https://github.com/coder/coder/issues/new) or contact your
account team.

View File

@ -0,0 +1,27 @@
# Password Authentication
Coder has password authentication enabled by default. The account created during
setup is a username/password account.
## Disable password authentication
To disable password authentication, use the
[`CODER_DISABLE_PASSWORD_AUTH`](../../reference/cli/server.md#--disable-password-auth)
flag on the Coder server.
## Restore the `Owner` user
If you remove the admin user account (or forget the password), you can run the
[`coder server create-admin-user`](../../reference/cli/server_create-admin-user.md)
command on your server.
> Note: You must run this command on the same machine running the Coder server.
> If you are running Coder on Kubernetes, this means using
> [kubectl exec](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_exec/)
> to exec into the pod.
## Reset a user's password
An admin must reset passwords on behalf of users. This can be done in the web UI
in the Users page or CLI:
[`coder reset-password`](../../reference/cli/reset-password.md)
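As a sketch of the CLI path (the username is a placeholder; run this with a
session authenticated against your deployment):

```shell
coder reset-password <username>
```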

View File

@ -9,7 +9,8 @@ For example: A template is configured with a cost of 5 credits per day, and the
user is granted 15 credits, which can be consumed by both started and stopped
workspaces. This budget limits the user to 3 concurrent workspaces.
Quotas are licensed with [Groups](./groups.md).
Quotas are scoped to [Groups](./groups-roles.md) in Enterprise and
[organizations](./organizations.md) in Premium.
## Definitions
@ -70,7 +71,7 @@ unused workspaces and freeing up compute in the cluster.
Each group has a configurable Quota Allowance. A user's budget is calculated as
the sum of their allowances.
![group-settings](../images/admin/quota-groups.png)
![group-settings](../../images/admin/users/quotas/quota-groups.png)
For example:
@ -98,8 +99,9 @@ process dynamically calculates costs, so quota violation fails builds as opposed
to failing the build-triggering operation. For example, the Workspace Create
Form will never get held up by quota enforcement.
![build-log](../images/admin/quota-buildlog.png)
![build-log](../../images/admin/quota-buildlog.png)
## Up next
- [Configuring](./configure.md)
- [Group Sync](./idp-sync.md)
- [Control plane configuration](../setup/index.md)

View File

@ -0,0 +1,64 @@
# API & Session Tokens
Users can generate tokens to make API requests on behalf of themselves.
## Short-Lived Tokens (Sessions)
The [Coder CLI](../../install/cli.md) and [Backstage Plugin](#TODO) use
short-lived tokens to authenticate. To generate a short-lived session token on
behalf of your account, visit the following URL:
`https://coder.example.com/cli-auth`
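For example, the CLI performs this flow for you during login (the URL below is
a placeholder for your deployment):

```shell
coder login https://coder.example.com
```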
### Session Durations
By default, sessions last 24 hours and are automatically refreshed. You can
configure
[`CODER_SESSION_DURATION`](../../reference/cli/server.md#--session-duration) to
change the duration and
[`CODER_DISABLE_SESSION_EXPIRY_REFRESH`](../../reference/cli/server.md#--disable-session-expiry-refresh)
to disable the automatic refresh.
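A sketch of both settings together (the eight-hour duration is an arbitrary
example, not a recommendation):

```env
CODER_SESSION_DURATION=8h
CODER_DISABLE_SESSION_EXPIRY_REFRESH=true
```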
## Long-Lived Tokens (API Tokens)
Users can create long-lived tokens. We refer to these as "API tokens" in the
product.
### Generate a long-lived API token on behalf of yourself
<div class="tabs">
#### UI
Visit your account settings in the top right of the dashboard or by navigating
to `https://coder.example.com/settings/account`
Navigate to the tokens page in the sidebar and create a new token:
![Create an API token](../../images/admin/users/create-token.png)
#### CLI
Use the following command:
```sh
coder tokens create --name=my-token --lifetime=720h
```
See the help docs for
[`coder tokens create`](../../reference/cli/tokens_create.md) for more info.
</div>
### Generate a long-lived API token on behalf of another user
Today, you must use the REST API to generate a token on behalf of another user.
You must have the `Owner` role to do this. Use our API reference for more
information:
[Create token API key](https://coder.com/docs/reference/api/users#create-token-api-key)
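A non-authoritative sketch of that request (the endpoint path follows the API
reference linked above; the host, user ID, and session token are placeholders):

```shell
# Requires a session token for a user with the Owner role.
curl -X POST "https://coder.example.com/api/v2/users/<user-id>/keys/tokens" \
  -H "Coder-Session-Token: <owner-session-token>"
```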
### Set max token length
You can use the
[`CODER_MAX_TOKEN_LIFETIME`](https://coder.com/docs/reference/cli/server#--max-token-lifetime)
server flag to set the maximum duration for long-lived tokens in your
deployment.
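For example, to cap long-lived tokens at 30 days (an illustrative value):

```env
CODER_MAX_TOKEN_LIFETIME=720h
```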