This pull request adds the necessary migrations and queries to support
presets within the coderd database. Future PRs will build out the
functionality in the provisioners and the frontend.
As requested in [this
issue](https://github.com/coder/internal/issues/245), we need a new
`resources_monitoring` resource in the agent.
It needs to be parsed by the provisioner and inserted into a new db
table.
Addresses https://github.com/coder/nexus/issues/175.
## Changes
- Adds the `telemetry_items` database table. It's a key-value store for
telemetry events that don't fit in any other database table.
- Adds a telemetry report when HTML is served for the first time in
`site.go` (sketched below).
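For illustration, the first-serve report boils down to a one-shot write into that key-value table. A minimal sketch, where the type and key names are illustrative rather than the actual coderd telemetry types:

```
// Sketch only: illustrative names, not the actual coderd telemetry types.
package sketch

import (
    "sync"
    "time"
)

// TelemetryItem mirrors the kind of key/value row the new table holds.
type TelemetryItem struct {
    Key   string
    Value string
}

var htmlFirstServed sync.Once

// reportHTMLFirstServed records a telemetry item the first time HTML is
// served and is a no-op on every request after that.
func reportHTMLFirstServed(report func(TelemetryItem)) {
    htmlFirstServed.Do(func() {
        report(TelemetryItem{
            Key:   "html_first_served_at",
            Value: time.Now().UTC().Format(time.RFC3339),
        })
    })
}
```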
Replace Depot build action with Nix for Nix dogfood image builds
The dogfood Nix image is now built using Nix's native container tooling instead of Depot. This change:
- Adds Nix setup steps to the GitHub Actions workflow
- Removes the Dockerfile.nix in favor of a Nix-native container build
- Updates the flake.nix to support building Docker images
- Introduces a hash file to track Nix-related changes
- Updates the vendorHash for Go dependencies
Change-Id: I4e011fe3a19d9a1375fbfd5223c910e59d66a5d9
Signed-off-by: Thomas Kosiewski <tk@coder.com>
Fixes https://github.com/coder/coder/issues/16124
If a workspace agent crashes, it is possible for any startup scripts to
be run again. This PR makes it so that the
`GetWorkspaceAgentScriptTimingsByBuildID` query only returns the first
timing recorded per-script.
Change as part of https://github.com/coder/coder/pull/16071
It has been decided that we want some notification templates to be
disabled _by default_:
https://github.com/coder/coder/pull/16071#issuecomment-2580757061.
This adds a new column (`enabled_by_default`) to
`notification_templates` that defaults to `TRUE`. It also modifies the
`inhibit_enqueue_if_disabled` function to reject notifications for
templates that have `enabled_by_default = FALSE` unless the user has
explicitly enabled them.
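A minimal sketch of the resulting check, assuming the enqueue path can see both the template default and any explicit user preference (this is not the literal trigger body):

```
// Sketch only: not the literal inhibit_enqueue_if_disabled body.
package sketch

// shouldInhibit reports whether a notification should be rejected at enqueue
// time. A template that is disabled by default only goes through when the
// user has explicitly enabled it; an explicit user preference always wins.
func shouldInhibit(enabledByDefault bool, userPreference *bool) bool {
    if userPreference != nil {
        return !*userPreference
    }
    return !enabledByDefault
}
```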
RE: https://github.com/coder/coder/issues/15740,
https://github.com/coder/coder/issues/15297
In order to add a graph to the coder frontend to show user status over
time as an indicator of license usage, this PR adds the following:
* a new `api.insightsUserStatusCountsOverTime` endpoint to the API
* which calls a new `GetUserStatusCountsOverTime` query from postgres
* which relies on two new tables `user_status_changes` and
`user_deleted`
* which are populated by a new trigger and function that track updates
to the users table (sketched below)
The chart itself will be added in a subsequent PR.
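For illustration, the trigger that keeps `user_status_changes` populated works roughly like this (simplified schema and names; the real change also covers `user_deleted`):

```
// Sketch only: simplified migration, not the exact schema shipped in this PR.
package sketch

const trackUserStatusChanges = `
CREATE TABLE user_status_changes (
    id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id    uuid NOT NULL REFERENCES users (id),
    new_status text NOT NULL, -- users.status enum in the real schema
    changed_at timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION record_user_status_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO user_status_changes (user_id, new_status)
        VALUES (NEW.id, NEW.status::text);
    ELSIF NEW.status IS DISTINCT FROM OLD.status THEN
        INSERT INTO user_status_changes (user_id, new_status)
        VALUES (NEW.id, NEW.status::text);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER user_status_change_trigger
    AFTER INSERT OR UPDATE ON users
    FOR EACH ROW EXECUTE FUNCTION record_user_status_change();
`
```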
---------
Co-authored-by: Mathias Fredriksson <mafredri@gmail.com>
Another PR to address https://github.com/coder/coder/issues/15109.
- adds the `DisableForeignKeysAndTriggers` utility, which simplifies
converting tests from in-mem to Postgres
- converts the dbauthz test suite to pass on both the in-mem db and
Postgres
When calculating the queue position in
`GetProvisionerJobsByIDsWithQueuePosition`, we only counted jobs with a
null `started_at`. This is misleading, as it allows canceling or
canceled jobs to take up rows in the computed queue position, giving the
impression that the queue is larger than it really is.
This modifies the query so that only jobs whose `canceled_at`,
`completed_at`, and `error` fields are also null count toward the queue
position, and adds a test to validate this behaviour.
(Note: due to the behaviour of `dbgen.ProvisionerJob` with `dbmem` I had
to use other proxy methods to validate the corresponding dbmem
implementation.)
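For reference, the pending-only condition looks like this in Go (types and function names are illustrative; it mirrors what both the SQL and the dbmem implementation now check):

```
// Sketch only: illustrative types; mirrors the pending-only condition.
package sketch

import "time"

type provisionerJob struct {
    ID          string
    CreatedAt   time.Time
    StartedAt   *time.Time
    CanceledAt  *time.Time
    CompletedAt *time.Time
    Error       string
}

// queuePositions ranks only jobs that are still pending, so canceled,
// completed, and errored jobs no longer inflate the computed queue.
// It assumes jobs are ordered by CreatedAt ascending.
func queuePositions(jobs []provisionerJob) map[string]int {
    positions := make(map[string]int)
    pos := 0
    for _, j := range jobs {
        if j.StartedAt == nil && j.CanceledAt == nil && j.CompletedAt == nil && j.Error == "" {
            pos++
            positions[j.ID] = pos
        }
    }
    return positions
}
```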
---------
Co-authored-by: Mathias Fredriksson <mafredri@gmail.com>
Relates to https://github.com/coder/coder/issues/15390
Currently when a user creates a workspace, their workspace's TTL is
determined by the template's default TTL. If the Coder instance is AGPL,
or if the template has disallowed the user from configuring autostop,
then it is not possible to change the workspace's TTL after creation.
Any changes to the template's default TTL only take effect on _new_
workspaces.
This PR modifies the behaviour slightly so that on AGPL Coder, or on
enterprise when a template does not allow users to configure their
workspace's TTL, updating the template's default TTL will also update
any workspace's TTL to match this value.
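Conceptually the new behaviour is a fan-out on template update; a sketch under assumed names (`applyTemplateDefaultTTL` and the structs are illustrative, not the real code path):

```
// Sketch only: conceptual flow, not the actual coderd code path.
package sketch

import "time"

type template struct {
    DefaultTTL        time.Duration
    AllowUserAutostop bool // user-configurable autostop; not available on AGPL
}

type workspace struct {
    TTL time.Duration
}

// applyTemplateDefaultTTL propagates a new default TTL to existing workspaces
// whenever users are not allowed to configure autostop themselves.
func applyTemplateDefaultTTL(tpl template, workspaces []workspace) []workspace {
    if tpl.AllowUserAutostop {
        // Users manage their own TTL; leave existing workspaces untouched.
        return workspaces
    }
    for i := range workspaces {
        workspaces[i].TTL = tpl.DefaultTTL
    }
    return workspaces
}
```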
https://github.com/coder/coder/pull/15608 introduced a buggy behaviour
with dbcrypt enabled.
When clearing an OAuth refresh token, we had been setting the stored
value to the empty string.
The database encryption package considers decrypting an empty string to
be an error, as an empty encrypted string value will still have a nonce
associated with it and thus not actually be empty when stored at rest.
Instead of 'deleting' the refresh token, we now 'update' it to the empty
string so the write goes through the usual encryption path.
This plays nicely with dbcrypt.
It also adds a 'utility test' in the dbcrypt package to help encrypt a
value. This was useful when manually fixing users affected by this bug
on our dogfood instance.
Relates to https://github.com/coder/coder/issues/15082
Further to https://github.com/coder/coder/pull/15429, this reduces the
number of false positives returned by the 'is eligible for autostart'
part of the query. We achieve this by calculating the 'next start at'
time of the workspace, storing it in the database, and using it in our
`GetWorkspacesEligibleForTransition` query.
The prior implementation of the 'is eligible for autostart' query would
return _all_ workspaces that at some point in the future _might_ be
eligible for autostart. This now ensures we only return workspaces that
_should_ be eligible for autostart.
We also now pass `currentTick` instead of `t` to the
`GetWorkspacesEligibleForTransition` query as otherwise we'll have one
round of workspaces that are skipped by `isEligibleForTransition` due to
`currentTick` being a truncated version of `t`.
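Roughly, the stored value is just "next occurrence of the autostart schedule after the current tick"; a sketch using robfig/cron purely as a stand-in for coder's own schedule package:

```
// Sketch only: robfig/cron stands in for coder's own schedule handling.
package sketch

import (
    "time"

    "github.com/robfig/cron/v3"
)

// nextStartAt computes a workspace's next autostart time from its autostart
// cron schedule, relative to the given tick. This is the value that gets
// persisted and later compared in GetWorkspacesEligibleForTransition.
func nextStartAt(autostartSchedule string, tick time.Time) (time.Time, error) {
    sched, err := cron.ParseStandard(autostartSchedule)
    if err != nil {
        return time.Time{}, err
    }
    return sched.Next(tick), nil
}
```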
Addresses https://github.com/coder/nexus/issues/99.
Changes:
- Save the id of the built-in example template used to create a template
version in the database
- Include the example id in telemetry
Once a token refresh fails, we remove the `oauth_refresh_token` from the
database. This prevents subsequent refresh attempts from hitting the
IDP.
Without this change, a bad script can cause a failing token to hit a
remote IDP repeatedly with each `git` operation. With this change, after
the first hit, subsequent hits fail locally and never contact the IDP.
In both cases, the solution is to re-authenticate the external auth
link, so the resolution is the same as before.
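The flow is roughly the following (illustrative Go; the function and parameter names are assumptions, not the actual externalauth package API):

```
// Sketch only: illustrative control flow, not the actual externalauth code.
package sketch

import (
    "context"
    "errors"
)

var errNoRefreshToken = errors.New("refresh token was removed after a failed refresh; re-authenticate the external auth link")

// refreshAccessToken fails fast when the stored refresh token has already
// been cleared, so a previously failed refresh never reaches the IDP again.
func refreshAccessToken(
    ctx context.Context,
    storedRefreshToken string,
    refresh func(context.Context, string) (string, error),
    clearRefreshToken func(context.Context) error,
) (string, error) {
    if storedRefreshToken == "" {
        // Fail locally without any network call to the IDP.
        return "", errNoRefreshToken
    }
    accessToken, err := refresh(ctx, storedRefreshToken)
    if err != nil {
        // Drop the bad refresh token so subsequent attempts also fail locally.
        _ = clearRefreshToken(ctx)
        return "", err
    }
    return accessToken, nil
}
```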
Adds an API endpoint to grab all available sync field options for IDP
sync. This is for autocomplete on IDP sync forms. This is required for
organization admins to have some insight into the claim fields available
when configuring group/role sync.
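One plausible way to surface those options is to pull the distinct claim keys that are already stored; a sketch, assuming the merged claims live in a `claims` jsonb column on `user_links` (the table and column here are assumptions, not necessarily the shipped query):

```
// Sketch only: assumes claims are stored as jsonb on user_links.
package sketch

const idpSyncFieldOptions = `
SELECT DISTINCT jsonb_object_keys(claims) AS claim_field
FROM user_links
WHERE login_type = 'oidc'
ORDER BY claim_field;
`
```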
Addresses https://github.com/coder/nexus/issues/35.
This PR:
- Adds a `workspace_modules` table to track modules used by the
Terraform provisioner in provisioner jobs.
- Adds a `module_path` column to the `workspace_resources` table, making
it possible to identify which module a resource originates from (both
changes are sketched below).
- Starts pushing this new information into telemetry.
For the person reviewing this PR, do not fret about the 1,500 new lines
- ~1,000 of them are auto-generated.
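For reference, the shape of the schema change is roughly the following (simplified; the column names are illustrative, not the literal migration):

```
// Sketch only: simplified schema, not the literal migration in this PR.
package sketch

const workspaceModulesMigration = `
CREATE TABLE workspace_modules (
    id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    job_id     uuid NOT NULL REFERENCES provisioner_jobs (id),
    source     text NOT NULL,
    version    text NOT NULL,
    key        text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

ALTER TABLE workspace_resources
    ADD COLUMN module_path text;
`
```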
Move claims from a `debug` column to an actual typed column so they can
be used. This does not functionally change anything; it just adds some
Go typing to build on.
Relates to https://github.com/coder/coder/issues/15082
The old implementation of `GetWorkspacesEligibleForTransition` returned
many workspaces that are not actually eligible for transition. This new
implementation reduces this number significantly (at least on our
dogfood instance).
The filter is now applied in the original workspace query, so we do not
need to join `workspaces` twice. We also use `build_number` instead of
`created_at` to determine the last build.
The failure condition being fixed is that `w1` and `w2` could belong
to different users, organizations, and templates and still cause a
serializable failure if run concurrently. This is because the old query
did a `seq scan` on the `workspace_builds` table. Since that is the
table being updated, we really want to prevent that.
So before, this would fail for any two workspaces. Now it only fails if
`w1` and `w2` are owned by the same user and organization.
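A sketch of the reworked latest-build lookup (simplified, not the literal query; `@workspace_ids` is an sqlc-style parameter): filter to the workspaces of interest first and pick the last build by `build_number`, avoiding a scan over all of `workspace_builds`:

```
// Sketch only: simplified latest-build lookup, not the literal query.
package sketch

const latestBuilds = `
SELECT DISTINCT ON (workspace_id) *
FROM workspace_builds
WHERE workspace_id = ANY(@workspace_ids :: uuid[])
ORDER BY workspace_id, build_number DESC;
`
```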
Second PR for #14716.
Adds a query that, given a user ID, returns all the workspaces they own that can also be `ActionRead` by the requesting user.
```
type GetWorkspacesAndAgentsByOwnerIDRow struct {
    WorkspaceID   uuid.UUID            `db:"workspace_id" json:"workspace_id"`
    WorkspaceName string               `db:"workspace_name" json:"workspace_name"`
    JobStatus     ProvisionerJobStatus `db:"job_status" json:"job_status"`
    Transition    WorkspaceTransition  `db:"transition" json:"transition"`
    Agents        []AgentIDNamePair    `db:"agents" json:"agents"`
}
```
`JobStatus` and `Transition` are set using the latest build/job of the workspace. Deleted workspaces are not included.
The subquery on the users table was incorrectly using the username from
the `workspaces` table, not the `users` table.
This passed `sqlc-vet` because the column did exist in the query; it
just was not the correct one.
Joins in fields like `username`, `avatar_url`, `organization_name`,
`template_name` to `workspaces` via a **view**.
The view must be maintained moving forward, but this prevents needing to
add RBAC permissions to fetch related workspace fields.
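The view looks roughly like this (abbreviated; the view and alias names are illustrative):

```
// Sketch only: abbreviated view definition with illustrative names.
package sketch

const workspacesExpandedView = `
CREATE VIEW workspaces_expanded AS
SELECT
    workspaces.*,
    users.username AS owner_username,
    users.avatar_url AS owner_avatar_url,
    organizations.name AS organization_name,
    templates.name AS template_name
FROM workspaces
JOIN users ON users.id = workspaces.owner_id
JOIN organizations ON organizations.id = workspaces.organization_id
JOIN templates ON templates.id = workspaces.template_id;
`
```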
Relates to https://github.com/coder/coder/issues/14232
This implements two endpoints (names subject to change):
- `/api/v2/users/otp/request`
- `/api/v2/users/otp/change-password`
* feat: begin impl of agent script timings
* feat: add job_id and display_name to script timings
* fix: increment migration number
* fix: rename migrations from 251 to 254
* test: get tests compiling
* fix: appease the linter
* fix: get tests passing again
* fix: drop column from correct table
* test: add fixture for agent script timings
* fix: typo
* fix: use job id used in provisioner job timings
* fix: increment migration number
* test: behaviour of script runner
* test: rewrite test
* test: does exit 1 script break things?
* test: rewrite test again
* fix: revert change
Not sure how this came to be, I do not recall manually changing
these files.
* fix: let code breathe
* fix: wrap errors
* fix: justify nolint
* fix: swap require.Equal argument order
* fix: add mutex operations
* feat: add 'ran_on_start' and 'blocked_login' fields
* fix: update testdata fixture
* fix: refer to agent_id instead of job_id in timings
* fix: JobID -> AgentID in dbauthz_test
* fix: add 'id' to scripts, make timing refer to script id
* fix: fix broken tests and convert bug
* fix: update testdata fixtures
* fix: update testdata fixtures again
* feat: capture stage and if script timed out
* fix: update migration number
* test: add test for script api
* fix: fake db query
* fix: use UTC time
* fix: ensure r.scriptComplete is not nil
* fix: move err check to right after call
* fix: uppercase sql
* fix: use dbtime.Now()
* fix: debug log on r.scriptCompleted being nil
* fix: ensure correct rbac permissions
* chore: remove DisplayName
* fix: get tests passing
* fix: remove space in sql up
* docs: document ExecuteOption
* fix: drop 'RETURNING' from sql
* chore: remove 'display_name' from timing table
* fix: testdata fixture
* fix: put r.scriptCompleted call in goroutine
* fix: track goroutine for test + use separate context for reporting
* fix: appease linter, handle trackCommandGoroutine error
* fix: resolve race condition
* feat: replace timed_out column with status column
* test: update testdata fixture
* fix: apply suggestions from review
* revert: linter changes