Allows adding custom static CSP directives to Coder. This is a niche use
case, but it is easier than running a reverse proxy that has to replace the
header. We want to preserve our own directives, so an append option is
preferred over a "replace" option via a reverse proxy.
Closes https://github.com/coder/coder/issues/15118
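As a loose illustration of the append behaviour (the middleware shape and directive values below are assumptions, not the actual Coder implementation):
```
// Illustrative sketch only: shows how extra directives could be appended to
// the directives Coder already sets, rather than replacing the header.
package csp

import (
	"net/http"
	"strings"
)

func Middleware(next http.Handler, extraDirectives map[string][]string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Directives Coder needs to preserve (values here are examples).
		directives := map[string][]string{
			"default-src": {"'self'"},
			"connect-src": {"'self'", "wss:"},
		}
		// Append the operator-supplied directives instead of replacing ours.
		for name, values := range extraDirectives {
			directives[name] = append(directives[name], values...)
		}
		parts := make([]string, 0, len(directives))
		for name, values := range directives {
			parts = append(parts, name+" "+strings.Join(values, " "))
		}
		w.Header().Set("Content-Security-Policy", strings.Join(parts, "; "))
		next.ServeHTTP(w, r)
	})
}
```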
Once a token refresh fails, we remove the `oauth_refresh_token` from the
database. This will prevent the token from hitting the IDP for
subsequent refresh attempts.
Without this change, a bad script can cause a failing token to hit a
remote IDP repeatedly with each `git` operation. With this change, after
the first hit, subsequent hits will fail locally, and never contact the
IDP.
In both cases the resolution is the same as before: re-authenticate the
external auth link.
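A rough sketch of the behaviour described above, assuming illustrative type and function names rather than the actual coderd code:
```
// Sketch only: names and fields are illustrative, not the real schema.
package externalauth

import (
	"context"
	"errors"
	"fmt"
)

type link struct {
	OAuthAccessToken  string
	OAuthRefreshToken string
}

type store interface {
	UpdateExternalAuthLink(ctx context.Context, l link) error
}

// refresh clears the stored refresh token on failure so that later attempts
// fail locally instead of hitting the IdP on every `git` operation.
func refresh(ctx context.Context, db store, l link,
	exchange func(ctx context.Context, refreshToken string) (string, error),
) (link, error) {
	if l.OAuthRefreshToken == "" {
		// A previous refresh already failed; never contact the IdP again.
		return l, errors.New("reauthentication required")
	}
	accessToken, err := exchange(ctx, l.OAuthRefreshToken)
	if err != nil {
		l.OAuthRefreshToken = "" // prevent further IdP hits
		if uerr := db.UpdateExternalAuthLink(ctx, l); uerr != nil {
			return l, uerr
		}
		return l, fmt.Errorf("refresh token: %w", err)
	}
	l.OAuthAccessToken = accessToken
	return l, db.UpdateExternalAuthLink(ctx, l)
}
```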
This PR is the first step aiming to resolve #15126 -
Creating a new endpoint to return the details associated with a
provisioner key.
This is an authenticated endpoint intended to be used by the provisioner
daemons, using the provisioner key as the authentication method.
This endpoint is not meant to be used with PSK or user sessions.
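A hypothetical client sketch of how a daemon might call such an endpoint; the route, header name, and response handling below are placeholders, not the actual API surface:
```
// Hypothetical sketch: the path and header name are placeholders.
package provisionerkeys

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

func fetchKeyDetails(ctx context.Context, serverURL, key string) (string, error) {
	// Placeholder route; consult the API reference for the real one.
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		serverURL+"/api/v2/provisionerkeys/details", nil)
	if err != nil {
		return "", err
	}
	// The provisioner key itself is the credential; header name is a placeholder.
	req.Header.Set("X-Provisioner-Key", key)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status: %s", resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}
```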
Resolves https://github.com/coder/coder/issues/15513
Disables notifications when both `$CODER_NOTIFICATIONS_WEBHOOK_ENDPOINT` and `$CODER_EMAIL_SMARTHOST` are unset.
Breaking change: `$CODER_EMAIL_SMARTHOST` no longer defaults to `localhost:587`, so any deployments relying on that default will need to set it explicitly.
---------
Co-authored-by: Danny Kopping <danny@coder.com>
Co-authored-by: Mathias Fredriksson <mafredri@gmail.com>
Adds an API endpoint to fetch all available sync field options for IdP
sync. This is used for autocomplete on IdP sync forms, and is required for
organization admins to have some insight into the claim fields available
when configuring group/role sync.
- We were instantiating a cryptokey cache with a vanilla reference to
the database instead of one wrapped by dbcrypt.
- Fixes an issue where failing to instantiate unrelated keycaches does
not fatally error out.
Refactors our use of `slogtest` to instantiate a "standard logger" across most of our tests. This standard logger incorporates https://github.com/coder/slog/pull/217 to also ignore database query canceled errors by default, which are a source of low-severity flakes.
Any test that has set non-default `slogtest.Options` is left alone. In particular, `coderdtest` defaults to ignoring all errors. We might consider revisiting that decision now that we have better tools to target the really common flaky Error logs on shutdown.
Addresses https://github.com/coder/nexus/issues/35.
This PR:
- Adds a `workspace_modules` table to track modules used by the
Terraform provisioner in provisioner jobs.
- Adds a `module_path` column to the `workspace_resources` table,
allowing us to identify which module a resource originates from.
- Starts pushing this new information into telemetry.
For the person reviewing this PR, do not fret about the 1,500 new lines
- ~1,000 of them are auto-generated.
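A loose sketch of the kind of records this PR adds; field names here are illustrative and the real schema may differ:
```
// Illustrative only: not the actual database or telemetry schema.
package telemetry

import "github.com/google/uuid"

// WorkspaceModule records a module used by the Terraform provisioner
// during a provisioner job.
type WorkspaceModule struct {
	JobID   uuid.UUID
	Source  string // e.g. a registry module source
	Version string
	Key     string
}

// WorkspaceResource gains a module path so we can tell which module a
// resource originates from ("" for the root module).
type WorkspaceResource struct {
	ID         uuid.UUID
	ModulePath string
}
```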
Bumps the Tailnet and Agent API to version 2.3, and creates some extra controls and machinery around these versions.
What happened is that we accidentally shipped two new API features without bumping the version. `ScriptCompleted` on the Agent API in Coder v2.16 and `RefreshResumeToken` on the Tailnet API in Coder v2.15.
Since we can't easily retroactively bump the versions, we'll roll these changes into API version 2.3 along with the new WorkspaceUpdates RPC, which hasn't been released yet. That means there is some ambiguity in Coder v2.15-v2.17 about exactly what methods are supported on the Tailnet and Agent APIs. This isn't great, but hasn't caused us major issues because
1. RefreshResumeToken is considered optional, and clients just log and move on if the RPC isn't supported.
2. Agents basically never get started talking to a Coderd that is older than they are, since the agent binary is normally downloaded from Coderd at workspace start.
Still, it's good to get things squared away in terms of versions for SDK users and possible edge cases around client and server versions.
To mitigate the chance of this happening again, this PR also:
1. adds a CODEOWNERS for the API proto packages, so I'll review changes
2. defines interface types for different API versions, and has the agent explicitly use a specific version. That way, if you add a new method, and try to use it in the agent without thinking explicitly about versions, it won't compile.
With the protocol controllers stuff, we've sort of already abstracted the Tailnet API such that the interface type strategy won't work, but I'll work on getting the Controller to be version aware, such that it can check the API version it's getting against the controllers it has -- in a later PR.
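A sketch of the interface-per-version idea; aside from `ScriptCompleted`, the type and method names here are illustrative, not the generated proto interfaces:
```
// Sketch only: illustrates how pinning the agent to a versioned interface
// turns "used a too-new RPC" into a compile error.
package agentapi

import "context"

type (
	StatsRequest            struct{}
	StatsResponse           struct{}
	ScriptCompletedRequest  struct{}
	ScriptCompletedResponse struct{}
)

// API22 contains only the methods available in API version 2.2.
type API22 interface {
	UpdateStats(ctx context.Context, req *StatsRequest) (*StatsResponse, error)
}

// API23 adds ScriptCompleted. The agent must explicitly declare that it
// speaks 2.3 to use it; calling it through an API22 value will not compile.
type API23 interface {
	API22
	ScriptCompleted(ctx context.Context, req *ScriptCompletedRequest) (*ScriptCompletedResponse, error)
}
```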
Move claims from a `debug` column to an actual typed column so they can be used.
This does not functionally change anything; it just adds some Go typing to build
on.
Relates to https://github.com/coder/coder/issues/15082
The old implementation of `GetWorkspacesEligibleForTransition` returns
many workspaces that are not actually eligible for transition. This new
implementation reduces this number significantly (at least on our
dogfood instance).
- Assert rbac in fake notifications enqueuer
- Move fake notifications enqueuer to separate notificationstest package
- Update dbauthz rbac policy to allow provisionerd and autostart to create and read notification messages
- Update tests as required
Fixes test flakes _a la_
```
t.go:108: 2024-11-05 09:52:37.996 [erro] workspacestats: failed to load template schedule bumping activity, defaulting to bumping by 60min request_id=f14215d2-73dc-47ba-aa81-422c62f257e4 workspace_id=545d73c7-3a62-4466-8c08-b6abb12867b7 template_id=49747428-3abb-40e4-a6b2-03653e9f2506 ...
error= fetch object:
github.com/coder/coder/v2/coderd/database/dbauthz.(*querier).GetTemplateByID.fetch[...].func1
/home/runner/work/coder/coder/coderd/database/dbauthz/dbauthz.go:497
- pq: canceling statement due to user request
*** slogtest: log detected at level ERROR; TEST FAILURE ***
```
seen here on main: https://github.com/coder/coder/actions/runs/11681605747/job/32527006174
Closes https://github.com/coder/coder/issues/12612
The problem in the linked issue was caused by a mismatch between when the
Web UI tooltip shows up (2 hours before an autostop requirement) and the
leeway in the `autostop_requirement` algorithm (workspace builds must be at
least 1 hour before an autostop requirement to skip it). For example, a
restart 30 minutes before the requirement showed the tooltip but did not
skip the autostop.
Now, restarting your workspace whilst the tooltip is showing will skip
the upcoming autostop requirement.
This also could have been fixed by only showing the tooltip one hour
before the autostop requirement, but it looks like 1 hour was chosen
arbitrarily, and it doesn't hurt to give users more time to skip the
autostop.
chore of #14729
Refactors the `ServerTailnet` to use `tailnet.Controller` so that we reuse logic around reconnection and handling control messages, instead of reimplementing. This unifies our "client" use of the tailscale API across CLI, coderd, and wsproxy.
This PR:
- Updates the table in `docs/admin/provisioners.md` to highlight
multi-org changes
- Updates the instructions for the provisionerd helm chart when using
provisioner keys
---------
Co-authored-by: Ben Potter <ben@coder.com>
Refers to #14984
Currently, password validation is done on the backend and is not explicit
enough, which can make creating the first user painful.
We'd like to make this validation clearer - and also duplicate it on the
frontend to make the flow smoother.
Flows involved:
- First user set password
- New user set password
- Change password
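A minimal sketch of explicit validation shared across these flows; the specific rules below are illustrative, not necessarily what coderd enforces:
```
// Sketch only: the rules here are examples, not the actual requirements.
package userpassword

import (
	"fmt"
	"unicode"
)

// Validate returns a human-readable error for the first rule that fails, so
// the frontend can surface the same messages it duplicates client-side.
func Validate(password string) error {
	if len(password) < 8 {
		return fmt.Errorf("password must be at least 8 characters")
	}
	var hasUpper, hasLower, hasDigit bool
	for _, r := range password {
		switch {
		case unicode.IsUpper(r):
			hasUpper = true
		case unicode.IsLower(r):
			hasLower = true
		case unicode.IsDigit(r):
			hasDigit = true
		}
	}
	if !hasUpper || !hasLower || !hasDigit {
		return fmt.Errorf("password must mix upper case, lower case, and digits")
	}
	return nil
}
```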
---------
Co-authored-by: BrunoQuaresma <bruno_nonato_quaresma@hotmail.com>
The filter is applied in the original workspace query, so we do not need to
join `workspaces` twice. Use `build_number` instead of `created_at`
to determine the last build.
Anyone with authz access to a workspace should be able to read the timings
information of its builds.
Doing this without `AsSystemContext` would require an extra 4 DB calls.
This PR is the first in a series aimed at closing
[#15109](https://github.com/coder/coder/issues/15109).
### Changes
- **Template Database Creation:**
`dbtestutil.Open` now has the ability to create a template database if
none is provided via `DB_FROM`. The template database’s name is derived
from a hash of the migration files, ensuring that it can be reused
across tests and is automatically updated whenever migrations change.
- **Optimized Database Handling:**
Previously, `dbtestutil.Open` would spin up a new container for each
test when `DB_FROM` was unset. Now, it first checks for an active
PostgreSQL instance on `localhost:5432`. If none is found, it creates a
single container that remains available for subsequent tests,
eliminating repeated container startups.
These changes address the long individual test times (10+ seconds)
reported by some users, likely due to the time Docker took to start and
complete migrations.
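A sketch of how a template database name can be derived from the migration files so it is reused until migrations change (the hashing details are illustrative, not dbtestutil's exact code):
```
// Sketch only: derive a deterministic template DB name from migrations.
package dbtestutil

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io/fs"
	"sort"
)

func templateDBName(migrations fs.FS) (string, error) {
	h := sha256.New()
	var names []string
	if err := fs.WalkDir(migrations, ".", func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		names = append(names, path)
		return nil
	}); err != nil {
		return "", err
	}
	sort.Strings(names) // deterministic order
	for _, name := range names {
		b, err := fs.ReadFile(migrations, name)
		if err != nil {
			return "", err
		}
		h.Write([]byte(name))
		h.Write(b)
	}
	// Reused across tests; a migration change produces a new name, so stale
	// templates are simply never selected again.
	return fmt.Sprintf("tpl_%s", hex.EncodeToString(h.Sum(nil))[:12]), nil
}
```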
The failure condition being fixed is that `w1` and `w2` could belong
to different users, organizations, and templates and still cause a
serializable failure if run concurrently. This is because the old query
did a `seq scan` on the `workspace_builds` table. Since that is the
table being updated, we really want to prevent that.
So before this change, it would fail for any 2 workspaces. Now it only fails
if `w1` and `w2` are owned by the same user and organization.
Fixes #15312
When we need to `Unlisten()` for an event, instead of immediately removing the event from the `p.queues`, we store a channel to signal any goroutines trying to Subscribe to the same event when we are done. On `Subscribe`, if the channel is present, wait for it before calling `Listen` to ensure the ordering is correct.
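A rough sketch of the signalling pattern (the structure and names are illustrative, not the actual pubsub code):
```
// Sketch only: Subscribe waits for any in-flight Unlisten on the same event
// before issuing Listen again, preserving ordering.
package pubsub

import "sync"

type Pubsub struct {
	mu sync.Mutex
	// unlistening maps an event name to a channel closed once the
	// outstanding Unlisten for that event has finished.
	unlistening map[string]chan struct{}
}

func (p *Pubsub) unlisten(event string) {
	p.mu.Lock()
	done := make(chan struct{})
	p.unlistening[event] = done
	p.mu.Unlock()

	// ... issue UNLISTEN to the database here ...

	p.mu.Lock()
	delete(p.unlistening, event)
	p.mu.Unlock()
	close(done) // wake any Subscribe waiting on this event
}

func (p *Pubsub) subscribe(event string) {
	p.mu.Lock()
	done, pending := p.unlistening[event]
	p.mu.Unlock()
	if pending {
		<-done // wait for the Unlisten to complete before listening again
	}
	// ... issue LISTEN to the database and register the local queue ...
}
```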
Closes #14716
Closes #14717
Adds a new user-scoped tailnet API endpoint (`api/v2/tailnet`) with a new RPC stream for receiving updates on workspaces owned by a specific user, as defined in #14716.
When a stream is started, the `WorkspaceUpdatesProvider` will begin listening on the user-scoped pubsub events implemented in #14964. When a relevant event type is seen (such as a workspace state transition), the provider will query the DB for all the workspaces (and agents) owned by the user. This gets compared against the result of the previous query to produce a set of workspace updates.
Workspace updates can be requested for any user ID; however, only workspaces the authorised user is permitted to `ActionRead` will have their updates streamed.
Opening a tunnel to an agent requires that the user can perform `ActionSSH` against the workspace containing it.
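A simplified sketch of turning two consecutive query results into a set of updates (type and function names are illustrative):
```
// Sketch only: diff the previous and current owner-scoped query results.
package workspaceupdates

import "github.com/google/uuid"

type workspace struct {
	ID   uuid.UUID
	Name string
}

type update struct {
	Upserted []workspace
	Deleted  []workspace
}

func diff(prev, curr map[uuid.UUID]workspace) update {
	var u update
	for id, ws := range curr {
		if old, ok := prev[id]; !ok || old != ws {
			u.Upserted = append(u.Upserted, ws)
		}
	}
	for id, ws := range prev {
		if _, ok := curr[id]; !ok {
			u.Deleted = append(u.Deleted, ws)
		}
	}
	return u
}
```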
Second PR for #14716.
Adds a query that, given a user ID, returns all the workspaces they own that can also be `ActionRead` by the requesting user.
```
type GetWorkspacesAndAgentsByOwnerIDRow struct {
WorkspaceID uuid.UUID `db:"workspace_id" json:"workspace_id"`
WorkspaceName string `db:"workspace_name" json:"workspace_name"`
JobStatus ProvisionerJobStatus `db:"job_status" json:"job_status"`
Transition WorkspaceTransition `db:"transition" json:"transition"`
Agents []AgentIDNamePair `db:"agents" json:"agents"`
}
```
`JobStatus` and `Transition` are set using the latest build/job of the workspace. Deleted workspaces are not included.
We currently send empty payloads to pubsub channels of the form `workspace:<workspace_id>` to notify listeners of updates to workspaces (such as for refreshing the workspace dashboard).
To support https://github.com/coder/coder/issues/14716, we'll instead send `WorkspaceEvent` payloads to pubsub channels of the form `workspace_owner:<owner_id>`. This enables a listener to receive events for all workspaces owned by a user.
This PR replaces the usage of the old channels without modifying any existing behaviors.
```
type WorkspaceEvent struct {
Kind WorkspaceEventKind `json:"kind"`
WorkspaceID uuid.UUID `json:"workspace_id" format:"uuid"`
// AgentID is only set for WorkspaceEventKindAgent* events
// (excluding AgentTimeout)
AgentID *uuid.UUID `json:"agent_id,omitempty" format:"uuid"`
}
```
We've defined `WorkspaceEventKind`s based on how the old channel was used, but it's not yet necessary to inspect the types of any of the events, as the existing listeners are designed to fire off any of them.
```
WorkspaceEventKindStateChange WorkspaceEventKind = "state_change"
WorkspaceEventKindStatsUpdate WorkspaceEventKind = "stats_update"
WorkspaceEventKindMetadataUpdate WorkspaceEventKind = "mtd_update"
WorkspaceEventKindAppHealthUpdate WorkspaceEventKind = "app_health"
WorkspaceEventKindAgentLifecycleUpdate WorkspaceEventKind = "agt_lifecycle_update"
WorkspaceEventKindAgentLogsUpdate WorkspaceEventKind = "agt_logs_update"
WorkspaceEventKindAgentConnectionUpdate WorkspaceEventKind = "agt_connection_update"
WorkspaceEventKindAgentLogsOverflow WorkspaceEventKind = "agt_logs_overflow"
WorkspaceEventKindAgentTimeout WorkspaceEventKind = "agt_timeout"
```
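For illustration, publishing on the new owner-scoped channel might look roughly like this, assuming the `WorkspaceEvent` type and kinds shown above live in the same package; helper names such as `ps.Publish` are assumptions:
```
// Illustrative usage only; the real helper names may differ.
package wspubsub

import (
	"encoding/json"
	"fmt"

	"github.com/google/uuid"
)

type Pubsub interface {
	Publish(channel string, payload []byte) error
}

func PublishStateChange(ps Pubsub, ownerID, workspaceID uuid.UUID) error {
	payload, err := json.Marshal(WorkspaceEvent{
		Kind:        WorkspaceEventKindStateChange,
		WorkspaceID: workspaceID,
	})
	if err != nil {
		return err
	}
	// Listeners subscribe per owner rather than per workspace.
	return ps.Publish(fmt.Sprintf("workspace_owner:%s", ownerID), payload)
}
```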
Closes https://github.com/coder/coder/issues/15213
This PR enables sending notifications without requiring the auth system
context, instead using a new auth notifier context.
The subquery on the users table was incorrectly using the username from
the `workspaces` table, not the `users` table.
This passed `sqlc-vet` because the column did exist in the query; it
just was not the correct one.