* fix: Allow terraform provisions to be gracefully cancelled
This change allows terraform commands to be gracefully cancelled on
Unix-like platforms by sending an interrupt signal when a provision is cancelled.
One implementation detail to note: we do not necessarily kill a
running terraform command immediately, even if the stream is closed. This
allows for graceful cancellation even in that event. The timeout
currently defaults to 5 minutes.
Related: #2683
The above issue may be partially or fully fixed by this change.
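A minimal sketch of the cancellation pattern described above, assuming a hypothetical `runTerraform` helper; the names and wiring are illustrative, not Coder's actual provisioner code:
```go
package provision

import (
	"context"
	"os"
	"os/exec"
	"time"
)

// gracePeriod mirrors the 5-minute default mentioned above (assumption:
// the real default lives in the provisioner's configuration).
const gracePeriod = 5 * time.Minute

func runTerraform(ctx context.Context, args ...string) error {
	cmd := exec.Command("terraform", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		return err
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		// Ask terraform to stop gracefully (Unix-like platforms only).
		_ = cmd.Process.Signal(os.Interrupt)
		select {
		case err := <-done:
			return err
		case <-time.After(gracePeriod):
			// Terraform didn't exit within the grace period; force it.
			_ = cmd.Process.Kill()
			return <-done
		}
	}
}
```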
* fix: Remove incorrect minimumTerraformVersion variable
* Allow init to return provision complete response
* Pass workspace owner email address to provisioner
* Remove owner_email and owner_username fields from agent metadata
* Add Git environment variables to example templates
* Remove "owner_name" field from provisioner metadata, use username instead
* Remove Git configuration from most templates, add documentation
* Proofreading/typo fixes from @mafredri
* Update example templates to latest version of terraform-provider-coder
* feat: Add app support
This adds apps as a property to a workspace agent.
The resource is added to the Terraform provider here:
https://github.com/coder/terraform-provider-coder/pull/17
Apps will be opened in the dashboard or via the CLI
with `coder open <name>`. If `command` is specified, a
terminal will appear locally and in the web. If `target`
is specified, the browser will open to an exposed instance
of that target.
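A hedged sketch of the shape an app might take on an agent; the field names are illustrative, not the actual API types:
```go
package codersdk

// WorkspaceApp is one app attached to a workspace agent.
type WorkspaceApp struct {
	Name string `json:"name"`
	// Command, if set, opens a terminal locally and in the web.
	Command string `json:"command,omitempty"`
	// Target, if set, is the exposed URL the browser opens to.
	Target string `json:"target,omitempty"`
}
```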
* Compare fields in apps test
* Update Terraform provider to use relative path
* Add some basic structure for routing
* chore: Remove interface from coderd and lift API surface
Abstracting coderd into an interface added misdirection because
the interface was never intended to be fulfilled outside of a single
implementation.
This lifts the abstraction, and attaches all handlers to a root struct
named `*coderd.API`.
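A minimal sketch of the lifted structure, assuming a chi router; the `New` constructor and route here are illustrative:
```go
package coderd

import (
	"net/http"

	"github.com/go-chi/chi/v5"
)

// API is the root struct all handlers attach to, replacing the old
// interface indirection.
type API struct {
	Handler http.Handler
}

func New() *API {
	api := &API{}
	r := chi.NewRouter()
	// Handlers are plain methods on *API, so names can't collide the way
	// nested per-resource structs allowed.
	r.Get("/healthz", api.healthz)
	api.Handler = r
	return api
}

func (api *API) healthz(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
}
```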
* Add basic proxy logic
* Add proxying based on path
* Add app proxying for wildcards
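A rough sketch of the path-based proxying idea using `net/http/httputil`; the `/apps/<name>/` URL layout is an assumption for illustration, not the actual route:
```go
package proxy

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// appProxy forwards "/apps/<name>/..." to the agent-exposed target for
// that app, stripping the prefix so the app sees clean paths.
func appProxy(target *url.URL, name string) http.Handler {
	rp := httputil.NewSingleHostReverseProxy(target)
	prefix := "/apps/" + name
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.URL.Path = strings.TrimPrefix(r.URL.Path, prefix)
		if r.URL.Path == "" {
			r.URL.Path = "/"
		}
		rp.ServeHTTP(w, r)
	})
}
```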
* Add wsconncache
* fix: Race when writing to a closed pipe
This race is so intermittent that it's difficult to track down,
but regardless, this is an improvement to the code.
* Add workspace route proxying endpoint
- Makes the workspace conn cache concurrency-safe
- Reduces unnecessary open checks in `peer.Channel`
- Fixes the use of a temporary context when dialing a workspace agent
* Add embed errors
* chore: Refactor site to improve testing
It was difficult to develop this package due to the
embed build tag being mandatory on the tests. The logic
to test doesn't require any embedded files.
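One way to express that split, as a sketch: keep the embedded filesystem behind the `embed` build tag so logic-only tests compile without it. The file name and embed pattern are illustrative:
```go
//go:build embed

// Only compiled when the embed tag is set, so tests of pure logic
// don't need the built frontend assets present.
package site

import "embed"

//go:embed out/*
var assets embed.FS

// FS returns the embedded frontend filesystem.
func FS() embed.FS { return assets }
```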
* Add test for error handler
* Remove unused access url
* Add RBAC tests
* Fix dial agent syntax
* Fix linting errors
* Fix gen
* Fix icon required
* Adjust migration number
* Fix proxy error status code
* Fix empty db lookup
Closes #1705.
There was an issue in the implementation introduced by #1577: the array
value was not trimmed when resources use counts. This should fix it, and
adds a test to be sure!
This fixes resources created from Terraform modules not
properly being associated with an agent.
Because the full address wasn't used, resource identifiers prefixed
with `module.<name>` were missed!
Although the terraform-exec docs don't indicate this, the result of
"terraform show" isn't actually the state... it's a trimmed version
of the state that excludes resource identifiers, essentially removing
all state that did exist.
Tests will be written to ensure Terraform state reconciliation can occur.
This will happen in another PR, as dogfood is currently broken because of this.
The Terraform Provisioner depended on the statefile content
being at a specific path, which disallowed the use of external
state providers. This fixes it!
* fix: Update GIT_COMMITTER_NAME to use username
This was a mistake when adding the committer fields 🤦.
* fix: Use environment variables for agent authentication
Using files led to situations where running "coder server --dev" would
break `gitssh`. This is applicable in a production environment too. Users
should be able to log into another Coder deployment from their workspace.
Users can still set "CODER_URL" if they'd like with agent env vars!
This fixes the dependency tree by adding recursion. It
now finds indirect connections and associates them with
an agent.
An example is attached which surfaced this issue.
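A hedged sketch of the recursive idea: walk resource dependencies transitively so an agent reached only through intermediate resources is still associated. The graph representation is illustrative:
```go
package graph

// findAgent reports whether the resource, or anything it depends on
// (transitively), is a Coder agent.
func findAgent(deps map[string][]string, isAgent map[string]bool, node string, seen map[string]bool) bool {
	if seen[node] {
		return false
	}
	seen[node] = true
	if isAgent[node] {
		return true
	}
	for _, dep := range deps[node] {
		if findAgent(deps, isAgent, dep, seen) {
			return true
		}
	}
	return false
}
```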
This enables a "kubernetes_pod" to attach multiple agents that
could be for multiple services. Each agent is required to have
a unique name, so SSH syntax is:
`coder ssh <workspace>.<agent>`
A resource can have zero agents too; they aren't required.
It was occasionally failing without any clear indication of
what to fix on our side. The plugins weren't being found
by Terraform.
We already disable this on Windows, so figured it's fine on
Darwin too considering most production deployments will be Linux.
* feat: Add stage to build logs
This adds a stage property to logs, and refactors the job logs
cliui.
It also adds tests to the cliui for build logs!
* feat: Add config-ssh and tests for resiliency
* Rename "Echo" test to "ImmediateExit"
* Fix Terraform resource agent association
* Fix logs post-cancel
* Fix select on Windows
* Remove terraform init logs
* Move timer into its own loop
* Fix race condition in provisioner jobs
* Fix requested changes
* feat: Add AWS instance identity authentication
This allows zero-trust authentication for all AWS instances.
Prior to this, AWS instances could be used by passing `CODER_TOKEN`
as an environment variable to the startup script. AWS explicitly
states that secrets should not be passed in startup scripts because
they're user-readable.
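A sketch of the zero-trust flow, using the standard EC2 instance metadata endpoints; the helper names are illustrative, and how coderd verifies the signature is out of scope here:
```go
package awsauth

import (
	"io"
	"net/http"
)

const imds = "http://169.254.169.254/latest/dynamic/instance-identity/"

func get(path string) ([]byte, error) {
	resp, err := http.Get(imds + path)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

// identityDocument fetches the signed instance identity document and its
// signature, which the agent can present instead of a pre-shared token.
func identityDocument() (document, signature []byte, err error) {
	if document, err = get("document"); err != nil {
		return nil, nil, err
	}
	if signature, err = get("signature"); err != nil {
		return nil, nil, err
	}
	return document, signature, nil
}
```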
* feat: Support caching provisioner assets
This caches the Terraform binary, and Terraform plugins.
Eventually, it could cache other temporary files.
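One caching mechanism, sketched with Terraform's real `TF_PLUGIN_CACHE_DIR` variable; the cache location and command wiring are assumptions for illustration:
```go
package provision

import (
	"os"
	"os/exec"
	"path/filepath"
)

func terraformInit(dir string) *exec.Cmd {
	cmd := exec.Command("terraform", "init")
	cmd.Dir = dir
	cacheDir := filepath.Join(os.TempDir(), "coder-tf-plugin-cache")
	_ = os.MkdirAll(cacheDir, 0o755)
	// Reuse downloaded providers across provision jobs.
	cmd.Env = append(os.Environ(), "TF_PLUGIN_CACHE_DIR="+cacheDir)
	return cmd
}
```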
* chore: fix linter
Co-authored-by: Garrett <garrett@coder.com>
* Fix comments
The logic required a constant value before, which disallowed dynamic
value injection into the agent. That limitation wasn't accurate, so
inverting the logic resolves it.
This update exposes the workspace name and owner, and changes
authentication methods to be explicit. Implicit authentication
added unnecessary complexity and introduced inconsistency.
* ci: Update DataDog GitHub branch to fallback to GITHUB_REF
This was detecting branches, but not our "main" branch before.
Hopefully this fixes it!
* Add basic Terraform Provider
* Rename post files to upload
* Add tests for resources
* Skip instance identity test
* Add tests for ensuring agent gets passed through properly
* Fix linting errors
* Add echo path
* Fix agent authentication
* fix: Convert all jobs to use a common resource and agent type
This enables a consistent API for project import and provisioned resources.
* Add "coder_workspace" data source
* feat: Remove magical parameters from being injected
This is a much cleaner abstraction. Explicitly declaring the user
parameters for each provisioner makes for significantly simpler
testing.
* feat: Add graceful exits to provisionerd
Terraform (or other provisioners) may need to cleanup state, or
cancel actions before exit. This adds the ability to gracefully
exit provisionerd.
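The graceful-exit shape, as a minimal sketch with a hypothetical `Server` type; the real provisionerd differs in detail:
```go
package provisionerd

import (
	"context"
	"sync"
)

type Server struct {
	ctx      context.Context
	shutdown context.CancelFunc
	jobs     sync.WaitGroup
}

func New() *Server {
	ctx, cancel := context.WithCancel(context.Background())
	return &Server{ctx: ctx, shutdown: cancel}
}

// Close cancels any in-flight job and waits for it to finish cleaning up
// (e.g. Terraform releasing state locks) before returning.
func (s *Server) Close() error {
	s.shutdown()
	s.jobs.Wait()
	return nil
}
```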
* Fix cancel error check
* feat: Add destroy to workspace provision job
This enables the full flow of create/update/delete.
* fix: Use plan to detect resource agent association
Before this used the configuration object which detected all resources
regardless of count.
* feat: Add agent authentication based on instance ID
Each cloud has its own unique instance identity signatures, which
can be used for zero-token authentication. This change adds support
for tracking by "instance_id", and automatically authenticating
with Google Cloud.
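A sketch of the Google Cloud side, using the real GCE metadata identity endpoint; the audience value and function name are illustrative:
```go
package gcpauth

import (
	"io"
	"net/http"
)

// identityToken fetches a signed JWT whose claims include the instance ID,
// letting coderd match the agent to a provisioned resource by instance_id.
func identityToken(audience string) (string, error) {
	req, err := http.NewRequest(http.MethodGet,
		"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?format=full&audience="+audience, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Metadata-Flavor", "Google")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	token, err := io.ReadAll(resp.Body)
	return string(token), err
}
```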
* Add test for CLI
* Fix workspace agent request name
* Fix race with adding to wait group
* Fix name of instance identity token
* refactor: Rename ProjectParameter to ProjectVersionParameter
This was confusing with ParameterValue before. It still is a bit,
but this should help distinguish scope.
* Add project version resources table
* Allow project parameters to optionally have user and workspace
* Add dry run for provisioners
* Add resource detection on project import
* feat: Add history middleware parameters
These will be used for streaming logs, checking status,
and other operations related to workspace and project
history.
* refactor: Move all HTTP routes to top-level struct
Nesting all handlers behind their respective structs
is leaky, and promotes naming conflicts between handlers.
Our HTTP routes cannot have conflicts, so neither should
function naming.
* Add provisioner daemon routes
* Add periodic updates
* Skip pubsub if short
* Return jobs with WorkspaceHistory
* Add endpoints for extracting singular history
* The full end-to-end operation works
* fix: Disable compression for websocket dRPC transport (#145)
There is a race condition in the interop between the websocket and `dRPC`: https://github.com/coder/coder/runs/5038545709?check_suite_focus=true#step:7:117 - it seems both the websocket and dRPC feel like they own the `byte[]` being sent between them. This can lead to data races, in which both `dRPC` and the websocket are writing.
This is just tracking some experimentation to fix that race condition
## Run results: ##
- Run 1: peer test failure
- Run 2: peer test failure
- Run 3: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040858460?check_suite_focus=true#step:8:45
```
status code 412: The provided project history is running. Wait for it to complete importing!
```
- Run 4: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040957999?check_suite_focus=true#step:7:176
```
workspacehistory_test.go:122:
Error Trace: workspacehistory_test.go:122
Error: Condition never satisfied
Test: TestWorkspaceHistory/CreateHistory
```
- Run 5: peer failure
- Run 6: Pass ✅
- Run 7: Peer failure
## Open Questions: ##
### Is `dRPC` or `websocket` at fault for the data race?
It looks like this condition is specifically happening when `dRPC` decides to `SendError`. This constructs a new byte payload from [`MarshalError`](f6e369438f/drpcwire/error.go (L15)) - so `dRPC` has created this buffer and owns it.
From `dRPC`'s perspective, the callstack looks like this:
- [`sendPacket`](f6e369438f/drpcstream/stream.go (L253))
- [`writeFrame`](f6e369438f/drpcwire/writer.go (L65))
- [`AppendFrame`](f6e369438f/drpcwire/packet.go (L128))
- with finally the data race happening here:
```go
// AppendFrame appends a marshaled form of the frame to the provided buffer.
func AppendFrame(buf []byte, fr Frame) []byte {
...
out := buf
out = append(out, control) // <---------
```
This should be fine, since `dRPC` created this buffer, and is taking the byte buffer constructed from `MarshalError` and tacking a bunch of headers onto it to create a proper frame.
Once `dRPC` is done writing, it _hangs onto the buffer and resets it here_: f6e369438f/drpcwire/writer.go (L73)
However, the websocket implementation, once it gets the buffer, runs a `statelessDeflate` [here](8dee580a7f/write.go (L180)), which compresses the buffer on the fly. This functionality actually [mutates the buffer in place](a1a9cfc821/flate/stateless.go (L94)), which is where we get our race.
In the case where the `byte[]` aren't being manipulated anywhere else, this compress-in-place operation would be safe, and that's probably the case for most over-the-wire usages. In this case, though, where we're plumbing `dRPC` -> websocket, they both are manipulating it (`dRPC` is reusing the buffer for the next `write`, and `websocket` is compressing on the fly).
### Why does cloning on `Read` fail?
We get a bunch of errors like:
```
2022/02/02 19:26:10 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
```
# UPDATE:
We decided we could disable websocket compression, which would avoid the race because the in-place `deflate` operation would no longer be run. Trying that out now:
- Run 1: ✅
- Run 2: https://github.com/coder/coder/runs/5042645522?check_suite_focus=true#step:8:338
- Run 3: ✅
- Run 4: https://github.com/coder/coder/runs/5042988758?check_suite_focus=true#step:7:168
- Run 5: ✅
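The fix in sketch form, assuming the `nhooyr.io/websocket` package; the handler wiring is illustrative:
```go
package ws

import (
	"net/http"

	"nhooyr.io/websocket"
)

func handler(w http.ResponseWriter, r *http.Request) {
	conn, err := websocket.Accept(w, r, &websocket.AcceptOptions{
		// Disabling compression avoids the in-place deflate that raced
		// with dRPC's buffer reuse.
		CompressionMode: websocket.CompressionDisabled,
	})
	if err != nil {
		return
	}
	defer conn.Close(websocket.StatusNormalClosure, "")
	// ... hand the connection to the dRPC transport ...
}
```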
* fix: Remove race condition with acquiredJobDone channel (#148)
Found another data race while running the tests: https://github.com/coder/coder/runs/5044320845?check_suite_focus=true#step:7:83
__Issue:__ There is a race on the `p.acquiredJobDone` chan - in particular, there can be a case where we're waiting on the channel to finish (in close) with `<-p.acquiredJobDone`, but in parallel, an `acquireJob` could've been started, which would create a new channel for `p.acquiredJobDone`. There is a similar race in `close(..)`ing the channel, which also came up in test runs.
__Fix:__ Instead of recreating the channel every time, we can use a `sync.WaitGroup` to accomplish the same functionality - a semaphore that makes close wait for the current job to wrap up.
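The pattern in sketch form: a `sync.WaitGroup` replaces the recreated channel, so `Close` waits for the in-flight job without racing on a shared chan (names are illustrative):
```go
package provisionerd

import "sync"

type provisioner struct {
	activeJob sync.WaitGroup
}

func (p *provisioner) acquireJob() {
	p.activeJob.Add(1)
	go func() {
		defer p.activeJob.Done()
		// ... run the job ...
	}()
}

func (p *provisioner) Close() error {
	// Wait for whatever job is running to wrap up; there is no channel
	// to recreate, so there is nothing to race on.
	p.activeJob.Wait()
	return nil
}
```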
* fix: Bump up workspace history timeout (#149)
This is an attempted fix for failures like: https://github.com/coder/coder/runs/5043435263?check_suite_focus=true#step:7:32
Looking at the timing of the test:
```
t.go:56: 2022-02-02 21:33:21.964 [DEBUG] (terraform-provisioner) <provision.go:139> ran apply
t.go:56: 2022-02-02 21:33:21.991 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.050 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.090 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.140 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.195 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.240 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
workspacehistory_test.go:122:
Error Trace: workspacehistory_test.go:122
Error: Condition never satisfied
Test: TestWorkspaceHistory/CreateHistory
```
It appears that the `terraform apply` job had just finished - with less than a second to spare until our `require.Eventually` completes - but there's still work to be done (i.e., collecting the state files). So my suspicion is that terraform might, in some cases, exceed our 5s timeout.
Note that in the setup for this test - there is a similar project history wait that waits for 15s, so I borrowed that here.
In the future - we can look at potentially using a simple echo provider to exercise this in the unit test, in a way that is more reliable in terms of timing. I'll log an issue to track that.
Co-authored-by: Bryan <bryan@coder.com>
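The shape of that change, sketched with testify's `require.Eventually`; the `client.WorkspaceHistory` call and its fields are hypothetical:
```go
// Wait up to 15s (instead of the old 5s) for the history to finish,
// polling every 50ms.
require.Eventually(t, func() bool {
	history, err := client.WorkspaceHistory(ctx, workspace.Name)
	return err == nil && history.Completed
}, 15*time.Second, 50*time.Millisecond)
```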
* feat: Add parameter and jobs database schema
This modifies a prior migration, which is typically forbidden, but
since we're pre-production, I felt grouping would be helpful to
future contributors.
This adds database functions that are required for the provisioner
daemon and job queue logic.
* feat: Compute project build parameters
Adds a projectparameter package to compute build-time project
values for a provided scope.
This package will be used to return which variables are being
used for a build, and can visually indicate the hierarchy to
a user.
* Fix terraform provisioner
* Improve naming, abstract inject to consume scope
* Run CI on all branches