### Description
This PR introduces GPG signing for all Coder *slim binaries*.
Detached signatures will allow users to verify the integrity and
authenticity of the binaries they download.
### Changes
* `scripts/sign_with_gpg.sh`: New script to sign a given binary
using GPG. It imports the release key, signs the binary, and
verifies the resulting signature (see the verification sketch after this list).
* `scripts/build_go.sh`: Updated to call `sign_with_gpg.sh` when the
`CODER_SIGN_GPG` environment variable is set to 1.
* `.github/workflows/release.yaml`: The `CODER_SIGN_GPG` environment
variable is now set to 1 during the release build, enabling GPG
signing for all release binaries.
* `.github/workflows/ci.yaml`: The `CODER_SIGN_GPG` environment
variable is now set to 1 during the CI build, enabling GPG
signing for all CI binaries.
* `Makefile`: Detached signatures are moved to the `/site/out/bin/` directory.
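For illustration, verification by an end user might look like the following. This is a hedged sketch: the key file name, artifact name, and `.asc` extension are assumptions, not taken from this PR.

```bash
# Import the Coder release public key (file name is illustrative).
gpg --import coder-release-key.asc

# Verify a downloaded slim binary against its detached signature.
gpg --verify coder-linux-amd64.asc coder-linux-amd64
```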
Add a confirmation dialog to the release script that prompts the user to
manually update the release calendar documentation before proceeding
with the release.
## Changes
- Added a confirmation prompt that asks users to update the release
calendar documentation
- Provides the URL to the documentation
(https://coder.com/docs/install/releases#release-schedule)
- Suggests running the `./scripts/update-release-calendar.sh` script
- Requires explicit confirmation before proceeding with the release (see the sketch after this list)
- Exits the script if the user hasn't updated the documentation
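For illustration, a minimal sketch of such a prompt (the wording and variable names are assumptions, not the actual implementation in `scripts/release.sh`):

```bash
echo "Before continuing, make sure the release calendar documentation is up to date:"
echo "  https://coder.com/docs/install/releases#release-schedule"
echo "You can run ./scripts/update-release-calendar.sh to help update it."
read -r -p "Have you updated the release calendar documentation? [y/N] " reply
if [[ ! "$reply" =~ ^[Yy]$ ]]; then
    echo "Aborting release: update the release calendar documentation first." >&2
    exit 1
fi
```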
## Testing
- [x] Script syntax validation passes (`bash -n scripts/release.sh`)
- [x] Changes are placed at the appropriate point in the release flow
(after release notes editing, before actual release creation)
This addresses the issue where the release calendar documentation was
getting out of date. While automation can be added later, this ensures
users manually confirm the documentation is updated before each release.
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: bpmct <22407953+bpmct@users.noreply.github.com>
# Implement OAuth2 Dynamic Client Registration (RFC 7591/7592)
This PR implements OAuth2 Dynamic Client Registration according to RFC 7591 and Client Configuration Management according to RFC 7592. These standards allow OAuth2 clients to register themselves programmatically with Coder as an authorization server.
Key changes include:
1. Added database schema extensions to support RFC 7591/7592 fields in the `oauth2_provider_apps` table
2. Implemented `/oauth2/register` endpoint for dynamic client registration (RFC 7591)
3. Added client configuration management endpoints (RFC 7592):
- GET/PUT/DELETE `/oauth2/clients/{client_id}`
- Registration access token validation middleware
4. Added comprehensive validation for OAuth2 client metadata:
- URI validation with support for custom schemes for native apps
- Grant type and response type validation
- Token endpoint authentication method validation
5. Enhanced developer documentation with:
- RFC compliance guidelines
- Testing best practices to avoid race conditions
- Systematic debugging approaches for OAuth2 implementations
The implementation follows security best practices from the RFCs, including proper token handling, secure defaults, and appropriate error responses. This enables third-party applications to integrate with Coder's OAuth2 provider capabilities programmatically.
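For example, a client could register itself with a request along these lines. This is a hedged sketch: the metadata fields and response shape follow RFC 7591/7592 and have not been verified against this implementation; the host and values are placeholders.

```bash
curl -s -X POST http://localhost:3000/oauth2/register \
  -H "Content-Type: application/json" \
  -d '{
        "client_name": "example-client",
        "redirect_uris": ["http://localhost:8080/callback"],
        "grant_types": ["authorization_code", "refresh_token"],
        "response_types": ["code"],
        "token_endpoint_auth_method": "client_secret_basic"
      }' | jq .

# Per RFC 7591/7592, the response includes client_id, client_secret, and a
# registration_access_token that authorizes later GET/PUT/DELETE requests to
# /oauth2/clients/{client_id}.
```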
# Add RFC 6750 Bearer Token Authentication Support
This PR implements RFC 6750 Bearer Token authentication as an additional authentication method for Coder's API. This allows clients to authenticate using standard OAuth 2.0 Bearer tokens in two ways:
1. Using the `Authorization: Bearer <token>` header
2. Using the `access_token` query parameter
Key changes:
- Added support for extracting tokens from both Bearer headers and access_token query parameters
- Implemented proper WWW-Authenticate headers for 401/403 responses with appropriate error descriptions
- Added comprehensive test coverage for the new authentication methods
- Updated the OAuth2 protected resource metadata endpoint to advertise Bearer token support
- Enhanced the OAuth2 testing script to verify Bearer token functionality
These authentication methods are added as fallback options, maintaining backward compatibility with Coder's existing authentication mechanisms. The existing authentication methods (cookies, session token header, etc.) still take precedence.
This implementation follows the OAuth 2.0 Bearer Token specification (RFC 6750) and improves interoperability with standard OAuth 2.0 clients.
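As a rough illustration of both forms (the endpoint path, port, and token variable are assumptions for a local development deployment):

```bash
# Standard Bearer header.
curl -s -H "Authorization: Bearer $CODER_SESSION_TOKEN" \
  http://localhost:3000/api/v2/users/me

# access_token query parameter (fallback form).
curl -s "http://localhost:3000/api/v2/users/me?access_token=$CODER_SESSION_TOKEN"

# An invalid token should return 401 with a WWW-Authenticate header
# describing the error.
curl -si -H "Authorization: Bearer invalid" \
  http://localhost:3000/api/v2/users/me | grep -i 'www-authenticate'
```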
## Summary
This PR implements critical MCP OAuth2 compliance features for Coder's authorization server, adding PKCE support, resource parameter handling, and OAuth2 server metadata discovery. This brings Coder's OAuth2 implementation significantly closer to production readiness for MCP (Model Context Protocol)
integrations.
## What's Added
### OAuth2 Authorization Server Metadata (RFC 8414)
- Add `/.well-known/oauth-authorization-server` endpoint for automatic client discovery
- Returns standardized metadata including supported grant types, response types, and PKCE methods
- Essential for MCP client compatibility and OAuth2 standards compliance
### PKCE Support (RFC 7636)
- Implement Proof Key for Code Exchange with S256 challenge method
- Add `code_challenge` and `code_challenge_method` parameters to authorization flow
- Add `code_verifier` validation in token exchange
- Provides enhanced security for public clients (mobile apps, CLIs)
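To make the S256 method concrete, here is a minimal sketch of deriving a verifier/challenge pair with standard tooling; it mirrors RFC 7636 and is not taken from the repo's `generate-pkce.sh` script.

```bash
# code_verifier: high-entropy random string of unreserved characters (43-128 chars).
code_verifier="$(openssl rand -base64 60 | tr -d '=+/\n' | cut -c1-64)"

# code_challenge: BASE64URL(SHA256(code_verifier)), without padding.
code_challenge="$(printf '%s' "$code_verifier" \
  | openssl dgst -sha256 -binary \
  | openssl base64 -A | tr '+/' '-_' | tr -d '=')"

echo "code_verifier:  $code_verifier"
echo "code_challenge: $code_challenge"
```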
### Resource Parameter Support (RFC 8707)
- Add `resource` parameter to authorization and token endpoints
- Store resource URI and bind tokens to specific audiences
- Critical for MCP's resource-bound token model
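Combined with PKCE, an authorization request carrying a `resource` parameter might look like this (all values are placeholders; `$code_challenge` comes from the PKCE sketch above):

```bash
curl -si --get "http://localhost:3000/oauth2/authorize" \
  --data-urlencode "client_id=$CLIENT_ID" \
  --data-urlencode "response_type=code" \
  --data-urlencode "redirect_uri=http://localhost:8080/callback" \
  --data-urlencode "state=$(openssl rand -hex 8)" \
  --data-urlencode "code_challenge=$code_challenge" \
  --data-urlencode "code_challenge_method=S256" \
  --data-urlencode "resource=https://mcp.example.com"
```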
### Enhanced OAuth2 Error Handling
- Add OAuth2-compliant error responses with proper error codes
- Use standard error format: `{"error": "code", "error_description": "details"}`
- Improve error consistency across OAuth2 endpoints
### Authorization UI Improvements
- Fix authorization flow to use POST-based consent instead of GET redirects
- Remove dependency on referer headers for security decisions
- Improve CSRF protection with proper state parameter validation
## Why This Matters
**For MCP Integration:** MCP requires OAuth2 authorization servers to support PKCE, resource parameters, and metadata discovery. Without these features, MCP clients cannot securely authenticate with Coder.
**For Security:** PKCE prevents authorization code interception attacks, especially critical for public clients. Resource binding ensures tokens are only valid for intended services.
**For Standards Compliance:** These are widely adopted OAuth2 extensions that improve interoperability with modern OAuth2 clients.
## Database Changes
- **Migration 000343:** Adds `code_challenge`, `code_challenge_method`, `resource_uri` to `oauth2_provider_app_codes`
- **Migration 000343:** Adds `audience` field to `oauth2_provider_app_tokens` for resource binding
- **Audit Updates:** New OAuth2 fields properly tracked in audit system
- **Backward Compatibility:** All changes maintain compatibility with existing OAuth2 flows
## Test Coverage
- Comprehensive PKCE test suite in `coderd/identityprovider/pkce_test.go`
- OAuth2 metadata endpoint tests in `coderd/oauth2_metadata_test.go`
- Integration tests covering PKCE + resource parameter combinations
- Negative tests for invalid PKCE verifiers and malformed requests
## Testing Instructions
```bash
# Run the comprehensive OAuth2 test suite
./scripts/oauth2/test-mcp-oauth2.sh
```

### Manual Testing with Interactive Server

```bash
# Start Coder in development mode
./scripts/develop.sh

# In another terminal, set up test app and run interactive flow
eval $(./scripts/oauth2/setup-test-app.sh)
./scripts/oauth2/test-manual-flow.sh
# Opens browser with OAuth2 flow, handles callback automatically

# Clean up when done
./scripts/oauth2/cleanup-test-app.sh
```

### Individual Component Testing

```bash
# Test metadata endpoint
curl -s http://localhost:3000/.well-known/oauth-authorization-server | jq .

# Test PKCE generation
./scripts/oauth2/generate-pkce.sh

# Run specific test suites
go test -v ./coderd/identityprovider -run TestVerifyPKCE
go test -v ./coderd -run TestOAuth2AuthorizationServerMetadata
```
## Breaking Changes
None. All changes maintain backward compatibility with existing OAuth2 flows.
---
Change-Id: Ifbd0d9a543d545f9f56ecaa77ff2238542ff954a
Signed-off-by: Thomas Kosiewski <tk@coder.com>
## Description
This PR improves the RBAC package by refactoring the policy, enhancing
documentation, and adding utility scripts.
## Changes
* Refactored `policy.rego` for clarity and readability
* Updated README with OPA section
* Added `benchmark_authz.sh` script for authz performance testing and
comparison
* Added `gen_input.go` to generate input for `opa eval` testing
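As a hedged example of the workflow this enables (the file paths, output format, and query below are assumptions; adjust to the actual locations of `policy.rego` and `gen_input.go`):

```bash
# Generate an example authorization input document (path is illustrative).
go run ./scripts/rbac/gen_input.go > input.json

# Evaluate the policy directly with OPA.
opa eval --format=pretty \
  --data coderd/rbac/policy.rego \
  --input input.json \
  'data.authz.allow'
```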
Updates Terraform from 1.11.4 to 1.12.2 across all relevant files.
Changes include:
- GitHub Actions setup-tf configuration
- Dockerfile configurations (dogfood and base)
- Install script
- Provisioner install.go with version constants
- Test data files (tfstate.json, tfplan.json, version.txt)
Follows the same pattern as PR #17323 which updated to 1.11.4.
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: sreya <4856196+sreya@users.noreply.github.com>
This PR starts running test-go-pg on macOS and Windows in regular CI.
Previously this suite was only run in the nightly gauntlet for 2
reasons:
- it was flaky
- it was slow (took 17 minutes)
We've since stabilized the flakiness by switching to depot runners,
using ram disks, optimizing the number of tests run in parallel, and
automatically re-running failing tests. We've also [brought
down](https://github.com/coder/coder/pull/17756) the time to run the
suite to 9 minutes. Additionally, this PR allows test-go-pg to use cache
from previous runs, which speeds it up further. The cache is only used
on PRs; `main` will still run tests without it.
This PR also:
- removes the nightly gauntlet since all tests now run in regular CI
- removes the `test-cli` job for the same reason
- removes the `setup-imdisk` action which is now fully replaced by
[coder/setup-ramdisk-action](https://github.com/coder/setup-ramdisk-action)
- makes 2 minor changes which could be separate PRs, but I rolled them
into this because they were helpful when iterating on it:
- replace the `if: always()` condition on the `gen` job with an `if: ${{
!cancelled() }}` condition to allow the job to be cancelled. Previously the job
would run to completion even if the entire workflow was cancelled. See
[the GitHub
docs](https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/evaluate-expressions-in-workflows-and-actions#always)
for more details.
- disable the recently added `TestReinitializeAgent` since it does not
pass on Windows with Postgres. There's an open issue to fix it:
https://github.com/coder/internal/issues/642
This PR will:
- unblock https://github.com/coder/coder/issues/15109
- alleviate https://github.com/coder/internal/issues/647
I tested caching by temporarily enabling cache upload on this PR: here's
[a
run](https://github.com/coder/coder/actions/runs/15119046903/job/42496939341?pr=17853#step:13:1296)
showing cache being used.
Publishing inside a db transaction can lead to database connection
starvation/contention since it requires its own connection.
This ruleguard rule (one-shotted by Claude Sonnet 3.7 and finalized by
@Emyrk) will detect two of the following three instances:
```go
package example_test // package name is illustrative

import (
	"fmt"
	"testing"

	// Import paths assume the coder/coder v2 module layout.
	"github.com/coder/coder/v2/coderd/database"
	"github.com/coder/coder/v2/coderd/database/dbtestutil"
	"github.com/coder/coder/v2/coderd/database/pubsub"
)

type Nested struct {
	ps pubsub.Pubsub
}

func TestFail(t *testing.T) {
	t.Parallel()
	db, ps := dbtestutil.NewDB(t)
	nested := &Nested{
		ps: ps,
	}

	// will catch this
	_ = db.InTx(func(_ database.Store) error {
		_, _ = fmt.Printf("")
		_ = ps.Publish("", []byte{})
		return nil
	}, nil)

	// will catch this
	_ = db.InTx(func(_ database.Store) error {
		_ = nested.ps.Publish("", []byte{})
		return nil
	}, nil)

	// will NOT catch this
	_ = db.InTx(func(_ database.Store) error {
		blah(ps)
		return nil
	}, nil)
}

func blah(ps pubsub.Pubsub) {
	ps.Publish("", []byte{})
}
```
The ruleguard doesn't recursively introspect function calls so only the
first two cases will be guarded against, but it's better than nothing.
<img width="1444" alt="image"
src="https://github.com/user-attachments/assets/8ffa0d88-16a0-41a9-9521-21211910dec9"
/>
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Co-authored-by: Steven Masley <stevenmasley@gmail.com>
Move SBOM generation and attestation to GitHub workflow
This PR moves the SBOM generation and attestation process from the `build_docker.sh` script to the GitHub workflow. The change:
1. Removes SBOM generation and attestation from the `build_docker.sh` script
2. Adds a new "SBOM Generation and Attestation" step in the GitHub workflow
3. Generates and attests SBOMs for both multi-arch images and latest tags when applicable
This approach ensures SBOM generation happens once for the final multi-architecture image rather than for each architecture separately.
Change-Id: I2e15d7322ddec933bbc9bd7880abba9b0842719f
Signed-off-by: Thomas Kosiewski <tk@coder.com>
Deprecates `ResourceSystem`. It's a large collection of unrelated things, and violates the principle of least privilege because to get access to low-security stuff like various statistics, you also get access to serious-security stuff like crypto keys.
We should eventually break it up and remove it, but the least we can do for now is not make the problem worse.
Fixes https://github.com/coder/coder/issues/17063
I'm ignoring flake.nix for now.
```
$ IGNORE_NIX=true ./scripts/check_go_versions.sh
INFO : go.mod : 1.24.1
INFO : dogfood/coder/Dockerfile : 1.24.1
INFO : setup-go/action.yaml : 1.24.1
INFO : flake.nix : 1.22
INFO : Ignoring flake.nix, as IGNORE_NIX=true
Go version check passed, all versions are 1.24.1
$ ./scripts/check_go_versions.sh
INFO : go.mod : 1.24.1
INFO : dogfood/coder/Dockerfile : 1.24.1
INFO : setup-go/action.yaml : 1.24.1
INFO : flake.nix : 1.22
ERROR: Go version mismatch between go.mod and flake.nix
```
- Update go.mod to use Go 1.24.1
- Update GitHub Actions setup-go action to use Go 1.24.1
- Fix linting issues with golangci-lint by:
- Updating to golangci-lint v1.57.1 (more compatible with Go 1.24.1)
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <claude@anthropic.com>
This change adds support for workspace app auditing.
To avoid audit log spam, we introduce the concept of app audit sessions.
An audit session is unique per workspace app, user, ip, user agent and
http status code. The sessions are stored in a separate table from audit
logs to allow use-case specific optimizations. Sessions are ephemeral
and the table does not function as a log.
The logic for auditing is placed in the DBTokenProvider for workspace
apps so that wsproxies are included.
This is the final change affecting the API for #15139.
Updates #15139
This PR fixes the SBOM filename generation in the Docker build script to
properly handle image tags that contain slashes. The current
implementation only replaces colons with underscores, but fails when
image tags include slashes (common in registry paths).
The fix updates the string replacement to handle both colons and slashes
in the image tag when generating the SBOM filename.
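A minimal sketch of the sanitization, assuming the tag lives in a shell variable (the variable names and SBOM file extension are illustrative, not copied from `build_docker.sh`):

```bash
image="ghcr.io/coder/coder:v2.15.0"   # example tag containing both '/' and ':'

# Replace colons, then slashes, so the tag is safe to use as a filename.
sbom_file="${image//:/_}"
sbom_file="${sbom_file//\//_}.spdx.json"

echo "$sbom_file"   # ghcr.io_coder_coder_v2.15.0.spdx.json
```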
Change-Id: Ifd7bad6d165393e11202e5bf070a4cb26eaa6a6a
Signed-off-by: Thomas Kosiewski <tk@coder.com>
This PR fixes an issue in the Docker build script where the SBOM file path used the image tag directly, which could contain colons. Since colons are not valid characters in filenames on many filesystems, this replaces colons with underscores in the output filename.
Change-Id: I887f4fc255d9bfa19b6c5d23ad0a5db7352aa2af
Signed-off-by: Thomas Kosiewski <tk@coder.com>
Niche edge case: this assumes the `access_token` is a JWT.
Some `access_token`s are JWTs with potentially useful claims.
These claims would be nearly equivalent to `user_info` claims.
This is not part of the OAuth spec, so this feature should not be
loudly advertised. Where possible, alternate solutions are preferred over relying on it.
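For illustration only, the claims of a JWT-shaped `access_token` can be inspected like this; nothing here is specific to Coder's implementation.

```bash
# The JWT payload is the second dot-separated segment, base64url-encoded.
payload="$(printf '%s' "$ACCESS_TOKEN" | cut -d. -f2 | tr '_-' '/+')"

# Re-pad to a multiple of 4 before decoding.
while (( ${#payload} % 4 != 0 )); do payload+="="; done

printf '%s' "$payload" | base64 -d | jq .
```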