This PR replaces the use of the **container** ID with the
**devcontainer** ID. This is a breaking change, but it allows a
devcontainer to be rebuilt even when there is no valid container ID.
Add an endpoint to fetch AI task prompts for multiple workspace builds
at the same time. A prompt is the value of the "AI Prompt" workspace
build parameter. On main, the only way our API allows fetching workspace
build parameters is by using the `/workspacebuilds/$build_id/parameters`
endpoint, requiring a separate API call for every build.
The Tasks dashboard fetches Task workspaces in order to show them in a
list, and then needs to fetch the value of the `AI Prompt` parameter for
every task workspace (using its latest build id), requiring an
additional API call for each list item. This endpoint will allow the
dashboard to make just 2 calls to render the list: one to fetch task
workspaces, the other to fetch prompts.
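For illustration, the dashboard side could batch its requests along these lines; the route and response shape here are assumptions for the sketch, not the actual endpoint added by this PR:

```go
package tasks

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"

	"github.com/coder/coder/v2/codersdk"
	"github.com/google/uuid"
)

// fetchTaskPrompts sketches a batch fetch: one request for the "AI Prompt"
// parameter of many workspace builds, instead of one
// /workspacebuilds/$build_id/parameters call per build. The route and the
// response shape below are assumptions for illustration.
func fetchTaskPrompts(ctx context.Context, client *codersdk.Client, buildIDs []uuid.UUID) (map[string]string, error) {
	ids := make([]string, len(buildIDs))
	for i, id := range buildIDs {
		ids[i] = id.String()
	}
	path := "/api/experimental/aitasks/prompts?build_ids=" + url.QueryEscape(strings.Join(ids, ","))
	res, err := client.Request(ctx, http.MethodGet, path, nil)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %d", res.StatusCode)
	}
	// Assumed response shape: build ID -> prompt value.
	prompts := map[string]string{}
	return prompts, json.NewDecoder(res.Body).Decode(&prompts)
}
```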
<img width="1512" alt="Screenshot 2025-06-20 at 11 33 11"
src="https://github.com/user-attachments/assets/92899999-e922-44c5-8325-b4b23a0d2bff"
/>
Related to https://github.com/coder/internal/issues/660.
Fixes https://github.com/coder/coder/issues/18024
* drive-by: renames `handleExperimentsSafe` to
`handleExperimentsAvailable` to better match semantics
* defines a `codersdk.ExperimentsKnown` list and updates
`ReadExperiments` to log invalid experiments (see the sketch after this list)
* typescript-ignores `codersdk.Experiments` so apitypings generates a
valid enum list of possible experiment values
* updates OverviewPageView to distinguish between known 'hidden'
experiments and unknown 'invalid' experiments
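A minimal sketch of the known-experiments validation referenced in the second bullet; the types and names below are simplified stand-ins for the `codersdk` ones:

```go
package experiments

import (
	"log"
	"slices"
)

type Experiment string

// Simplified stand-in for codersdk.ExperimentsKnown: the set of experiments
// the deployment understands. The values here are placeholders.
var ExperimentsKnown = []Experiment{"example", "some-experiment"}

// readExperiments mirrors the idea behind ReadExperiments: keep known values
// and log anything invalid instead of passing it through silently.
func readExperiments(raw []string) []Experiment {
	out := make([]Experiment, 0, len(raw))
	for _, r := range raw {
		exp := Experiment(r)
		if slices.Contains(ExperimentsKnown, exp) {
			out = append(out, exp)
			continue
		}
		log.Printf("ignoring invalid experiment %q", r)
	}
	return out
}
```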
While in experimental, this was used as an escape hatch. It was removed
to be consistent with the template author's intentions.
This is backwards compatible: it removes an experimental API field that is no longer used.
"Idle" is more accurate than "complete" since:
1. AgentAPI only knows if the screen is active; it has no way of knowing
if the task is complete.
2. The LLM might be done with its current prompt, but that does not mean
the task is complete either (it likely needs refinement).
The "complete" state will be reserved for future definition.
Additionally, when the screen goes idle but the LLM never reported a
status update, we get an idle icon without a message, which looks kinda
janky in the UI. So, if there is no message, I display the state text
instead.
Closes https://github.com/coder/internal/issues/699
## Description
This PR adds support for deleting prebuilt workspaces via the
authorization layer. It introduces special-case handling to ensure that
`prebuilt_workspace` permissions are evaluated when attempting to delete
a prebuilt workspace, falling back to the standard `workspace` resource
as needed.
Prebuilt workspaces are a subset of workspaces, identified by having
`owner_id` set to `PREBUILD_SYSTEM_USER`.
This means:
* A user with `prebuilt_workspace.delete` permission is allowed to
**delete only prebuilt workspaces**.
* A user with `workspace.delete` permission can **delete both normal and
prebuilt workspaces**.
⚠️ This implementation is scoped to **deletion operations only**. No
other operations are currently supported for the `prebuilt_workspace`
resource.
To delete a workspace, users must have the following permissions:
* `workspace.read`: to read the current workspace state
* `update`: to modify workspace metadata and related resources during
deletion (e.g., updating the `deleted` field in the database)
* `delete`: to perform the actual deletion of the workspace
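A minimal sketch of that fallback, using simplified stand-ins for the rbac and database types (the real helper is `authorizeWorkspace()`, described under Changes below):

```go
package prebuildauthz

import (
	"context"

	"github.com/google/uuid"
)

// Simplified stand-ins for coder's rbac types.
type (
	Action  string
	Object  struct{ Type string }
	Subject struct{ ID string }
)

const ActionDelete Action = "delete"

type Authorizer interface {
	Authorize(ctx context.Context, subject Subject, action Action, object Object) error
}

// Placeholder for database.PrebuildsSystemUserID (moved there in this PR).
var PrebuildsSystemUserID uuid.UUID

type Workspace struct {
	OwnerID uuid.UUID
}

// authorizeWorkspaceDelete sketches the fallback: a prebuilt workspace is
// checked against the prebuilt_workspace resource first, then against the
// regular workspace resource, so workspace.delete covers both kinds.
func authorizeWorkspaceDelete(ctx context.Context, auth Authorizer, subject Subject, ws Workspace) error {
	if ws.OwnerID == PrebuildsSystemUserID {
		if err := auth.Authorize(ctx, subject, ActionDelete, Object{Type: "prebuilt_workspace"}); err == nil {
			return nil
		}
	}
	return auth.Authorize(ctx, subject, ActionDelete, Object{Type: "workspace"})
}
```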
## Changes
* Introduced `authorizeWorkspace()` helper to handle prebuilt workspace
authorization logic.
* Ensured both `prebuilt_workspace` and `workspace` permissions are
checked.
* Added comments to clarify the current behavior and limitations.
* Moved the `SystemUserID` constant from the `prebuilds` package to the
`database` package as `PrebuildsSystemUserID` to resolve an import cycle
(commit
f24e4ab4b6).
* Updated the `ExtractOrganizationMember` middleware to include system
user members.
This commit consolidates two container endpoints on the backend and improves the
frontend devcontainer support by showing names and displaying apps as
appropriate.
With this change, the frontend now has knowledge of the subagent and we can also
display things like port forwards.
The frontend was updated to show dev container labels on the border as well as
subagent connection status. The recreation flow was also adjusted a bit to show
placeholder app icons when relevant.
Support for apps was also added, although these are still a WIP on the backend.
The port forwarding utility was also added, since the subagents now provide
the necessary info.
Fixes coder/internal#666
This is meant to complement the existing task reporter since the LLM
does not call it reliably.
It also includes refactoring to use the common agent flags/env vars.
# Add separate token lifetime limits for administrators
This PR introduces a new configuration option `--max-admin-token-lifetime` that allows administrators to create API tokens with longer lifetimes than regular users. By default, administrators can create tokens with a lifetime of up to 7 days (168 hours), while the existing `--max-token-lifetime` setting continues to apply to regular users.
The implementation:
- Adds a new `MaximumAdminTokenDuration` field to the session configuration
- Modifies the token validation logic to check the user's role and apply the appropriate lifetime limit
- Updates the token configuration endpoint to return the correct maximum lifetime based on the user's role
- Adds tests to verify that administrators can create tokens with longer and shorter lifetimes
- Updates documentation and help text to reflect the new option
This change allows organizations to grant administrators extended token lifetimes while maintaining tighter security controls for regular users.
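A rough sketch of the role-based cap selection; the field and helper names below are illustrative rather than the exact configuration struct:

```go
package tokens

import "time"

// Illustrative config shape: separate caps for admins and regular users.
type SessionLifetime struct {
	MaximumTokenDuration      time.Duration // --max-token-lifetime
	MaximumAdminTokenDuration time.Duration // --max-admin-token-lifetime (default 168h)
}

// maxTokenLifetime picks the cap that applies to the requesting user.
// isAdmin would be derived from the user's roles.
func maxTokenLifetime(cfg SessionLifetime, isAdmin bool) time.Duration {
	if isAdmin && cfg.MaximumAdminTokenDuration > 0 {
		return cfg.MaximumAdminTokenDuration
	}
	return cfg.MaximumTokenDuration
}

// validateTokenLifetime rejects requested lifetimes above the applicable cap.
func validateTokenLifetime(cfg SessionLifetime, isAdmin bool, requested time.Duration) bool {
	return requested > 0 && requested <= maxTokenLifetime(cfg, isAdmin)
}
```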
Fixes #17395
Closes https://github.com/coder/internal/issues/619
Implement the `coderd` side of the AgentAPI for the upcoming
dev-container agents work.
`agent/agenttest/client.go` is left unimplemented for a future PR
working to implement the agent side of this feature.
This change refactors the devcontainer recreation logic, which is now
handled asynchronously rather than being request scoped. The response
was consequently changed from "No Content" to "Accepted" to reflect
this.
A new `Status` field was introduced to the devcontainer struct,
replacing the `Running` (bool) field. This reflects that the
devcontainer can now be in various states (starting, running, stopped,
or errored).
The status field also protects against multiple concurrent recreations,
as long as they are initiated via the API.
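For illustration, the new status could be modeled roughly like this; the constant names are assumptions based on the states listed above:

```go
package devcontainers

// Status replaces the old Running bool on the devcontainer struct.
// The names mirror the states described above; exact values are illustrative.
type Status string

const (
	StatusStarting Status = "starting"
	StatusRunning  Status = "running"
	StatusStopped  Status = "stopped"
	StatusError    Status = "error"
)

// canRecreate sketches the guard against concurrent recreations: only allow
// a new recreation via the API when one is not already in progress.
func canRecreate(s Status) bool {
	return s != StatusStarting
}
```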
Updates #16424
User name, avatar URL, and last seen at are not required fields, so they
can be empty. Instead of returning Go's zero values, we want to make the
API more language-agnostic and omit them when they are empty. This makes
the docs and usage much clearer for consumers.
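In SDK terms this boils down to marking the fields as optional, along the lines of this sketch (struct and field names are illustrative):

```go
package sdkexample

import "time"

// Reduced sketch: optional fields use omitempty (and a pointer for the
// timestamp) so empty values are omitted from the JSON instead of being
// serialized as Go zero values.
type MinimalUser struct {
	Username   string     `json:"username"`
	Name       string     `json:"name,omitempty"`
	AvatarURL  string     `json:"avatar_url,omitempty"`
	LastSeenAt *time.Time `json:"last_seen_at,omitempty"`
}
```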
Relates to https://github.com/coder/coder/issues/17432
### Part 1:
Notes:
- A `GetPresetsAtFailureLimit` SQL query is added. It is similar to
`GetPresetsBackoff` and uses the same CTEs (`filtered_builds`,
`time_sorted_builds`), but the two queries are still different.
- The query is executed on every loop iteration. As an optimization, we
could mark a specific preset as permanently failed to avoid executing
the query on every iteration, but I decided not to do that for now.
- By default, `FailureHardLimit` is set to 3.
- `FailureHardLimit` is configurable; setting it to zero disables the
hard limit (see the sketch after this list).
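A sketch of how the hard limit could be applied inside the reconciliation loop; the loop shape, store interface, and helper names are assumptions, while `GetPresetsAtFailureLimit` and `FailureHardLimit` come from this PR:

```go
package reconcile

import (
	"context"
	"log"
)

// Preset is a stand-in for the row returned by GetPresetsAtFailureLimit.
type Preset struct {
	ID   string
	Name string
}

type Store interface {
	// GetPresetsAtFailureLimit returns presets whose latest consecutive
	// prebuild failures reached the given hard limit.
	GetPresetsAtFailureLimit(ctx context.Context, hardLimit int64) ([]Preset, error)
}

// markHardLimitedPresets runs on every reconciliation iteration. A hard
// limit of zero disables the check entirely.
func markHardLimitedPresets(ctx context.Context, store Store, hardLimit int64) error {
	if hardLimit <= 0 {
		return nil // FailureHardLimit = 0 means the hard limit is disabled.
	}
	presets, err := store.GetPresetsAtFailureLimit(ctx, hardLimit)
	if err != nil {
		return err
	}
	for _, p := range presets {
		// Skip provisioning for these presets; notification handling (Part 2)
		// sends PrebuildFailureLimitReached once and warns on later iterations.
		log.Printf("preset %s (%s) reached the failure hard limit", p.Name, p.ID)
	}
	return nil
}
```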
### Part 2
Notes:
- `PrebuildFailureLimitReached` notification is added.
- Notification is sent to template admins.
- The notification is sent only the first time the hard limit is
reached, but `log.Warn` fires on every loop iteration.
- I introduced this enum:
```sql
CREATE TYPE prebuild_status AS ENUM (
'normal', -- Prebuilds are working as expected; this is the default, healthy state.
'hard_limited', -- Prebuilds have failed repeatedly and hit the configured hard failure limit; won't be retried anymore.
'validation_failed' -- Prebuilds failed due to a non-retryable validation error (e.g. template misconfiguration); won't be retried.
);
```
`validation_failed` is not used in this PR, but I think it will be used
in the next one, so I wanted to save us an extra migration.
- Notification looks like this:
<img width="472" alt="image"
src="https://github.com/user-attachments/assets/e10efea0-1790-4e7f-a65c-f94c40fced27"
/>
### Latest notification views:
<img width="463" alt="image"
src="https://github.com/user-attachments/assets/11310c58-68d1-4075-a497-f76d854633fe"
/>
<img width="725" alt="image"
src="https://github.com/user-attachments/assets/6bbfe21a-91ac-47c3-a9d1-21807bb0c53a"
/>
Dynamic params skip parameter validation in coder/coder because
conditional parameters cannot be validated against the static
parameters stored in the database.
Closes https://github.com/coder/internal/issues/648
This change introduces a new `ParentId` field to the agent's manifest.
It allows an agent to know whether it is a child, as well as who the
owner is.
This is part of the Dev Container Agents work.
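As a sketch, the manifest change amounts to something like the following reduced struct (the field set and JSON tags are illustrative):

```go
package manifest

import "github.com/google/uuid"

// Manifest (reduced sketch). ParentID is the new field: it stays the zero
// UUID for a top-level agent and carries the parent agent's ID for a child
// (dev container) agent, so the agent knows whether it is a child and who
// owns it.
type Manifest struct {
	AgentID  uuid.UUID `json:"agent_id"`
	ParentID uuid.UUID `json:"parent_id"`
	// ... other existing fields omitted ...
}
```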
# Description
This PR adds the `worker_name` field to the provisioner jobs endpoint.
To achieve this, the following SQL query was updated:
-
`GetProvisionerJobsByOrganizationAndStatusWithQueuePositionAndProvisioner`
As a result, the `codersdk.ProvisionerJob` type, which represents the
provisioner job API response, was modified to include the new field.
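The SDK-facing change is roughly this addition (a reduced sketch; the `omitempty` tags mirror how `worker_id` already behaves for pending jobs):

```go
package provisionerjobs

import "github.com/google/uuid"

// ProvisionerJob (reduced sketch). WorkerName is the new field; like
// WorkerID, it is only populated once the job has been picked up by a
// provisioner daemon, so both are omitted for pending jobs.
type ProvisionerJob struct {
	ID         uuid.UUID  `json:"id"`
	WorkerID   *uuid.UUID `json:"worker_id,omitempty"`
	WorkerName string     `json:"worker_name,omitempty"`
	// ... other existing fields omitted ...
}
```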
**Notes:**
* As mentioned in
[comment](https://github.com/coder/coder/pull/17877#discussion_r2093218206),
the `GetProvisionerJobsByIDsWithQueuePosition` query was not changed due
to load concerns. This means that for template and template version
endpoints, `worker_id` will still be returned, but `worker_name` will
not.
* Similar to `worker_id`, the `worker_name` is only present once a job
is assigned to a provisioner daemon. For jobs in a pending state (not
yet assigned), neither `worker_id` nor `worker_name` will be returned.
---
# Affected Endpoints
- `/organizations/{organization}/provisionerjobs`
- `/organizations/{organization}/provisionerjobs/{job}`
---
# Testing
- Added new tests verifying that both `worker_id` and `worker_name` are
returned once a provisioner job reaches the **succeeded** state.
- Existing tests covering state transitions and other logic remain
unchanged, as they test different scenarios.
---
# Front-end Changes
Admin provisioner jobs dashboard:
<img width="1088" alt="Screenshot 2025-05-16 at 11 51 33"
src="https://github.com/user-attachments/assets/0e20e360-c615-4497-84b7-693777c5443e"
/>
Fixes: https://github.com/coder/coder/issues/16982
Existing template versions do not have the metadata (modules + plan) in
the db, so we revert to using the static parameter information from the
original template import.
This data will still be served over the websocket.