Commit Graph

7 Commits

SHA1 Message Date
bf0ae8f573 feat: Refactor API routes to use UUIDs instead of friendly names (#401)
* Add client for agent

* Cleanup code

* Fix linting error

* Rename routes to be simpler

* Rename workspace history to workspace build

* Refactor HTTP middlewares to use UUIDs

* Cleanup routes

* Compiles!

* Fix files and organizations

* Fix querying

* Fix agent lock

* Cleanup database abstraction

* Add parameters

* Fix linting errors

* Fix log race

* Lock on close wait

* Fix log cleanup

* Fix e2e tests

* Fix upstream version of opencensus-go

* Update coderdtest.go

* Fix coverpkg

* Fix codecov ignore
2022-03-07 11:40:54 -06:00
e5c95552cd feat: Remove magical parameters from being injected (#371)
* ci: Update DataDog GitHub branch to fall back to GITHUB_REF

This was detecting branches, but not our "main" branch before.
Hopefully this fixes it!

* Add basic Terraform Provider

* Rename post files to upload

* Add tests for resources

* Skip instance identity test

* Add tests for ensuring agent gets passed through properly

* Fix linting errors

* Add echo path

* Fix agent authentication

* fix: Convert all jobs to use a common resource and agent type

This enables a consistent API for project import and provisioned resources.

* Add "coder_workspace" data source

* feat: Remove magical parameters from being injected

This is a much cleaner abstraction. Explicitly declaring the user
parameters for each provisioner makes for significantly simpler
testing.
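
For illustration, a minimal sketch of what "explicit" looks like here (types and names are assumptions, not the actual provisioner protocol): parameters travel with the provision request instead of being injected by coderd.

```go
package provisionersdk

// ParameterValue and ProvisionRequest are illustrative stand-ins: the caller
// declares parameters and passes them explicitly with the request, rather
// than having them injected magically.
type ParameterValue struct {
	Name  string
	Value string
}

type ProvisionRequest struct {
	Directory       string
	ParameterValues []ParameterValue
}

// provision sees exactly the parameters it was handed, which keeps tests for
// each provisioner small and deterministic.
func provision(req ProvisionRequest) error {
	for _, param := range req.ParameterValues {
		_ = param // e.g. exported to the provisioner as a variable
	}
	return nil
}
```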
2022-02-28 18:26:01 +00:00
d04570ad29 fix: Use sync.WaitGroup to await hijacked HTTP connections (#337)
WebSockets hijack the HTTP connection from the server, causing
server.Close() to not wait for these connections to fully clean up.

This adds a global wait-group to the coderd API, which ensures all
WebSocket HTTP handlers have properly exited before returning.
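
A minimal sketch of the idea, with assumed names rather than the actual coderd code: each handler that hijacks its connection registers on the shared wait group, and Close blocks on it after the server shuts down.

```go
package coderd

import (
	"net/http"
	"sync"
)

type api struct {
	websocketWaitGroup sync.WaitGroup
}

func (a *api) workspaceAgentListen(w http.ResponseWriter, r *http.Request) {
	a.websocketWaitGroup.Add(1)
	defer a.websocketWaitGroup.Done()
	// ... upgrade to a WebSocket here; because the connection is hijacked,
	// server.Close() on its own would not wait for this handler to return.
}

// Close is called after the HTTP server shuts down and blocks until all
// hijacked handlers have exited.
func (a *api) Close() {
	a.websocketWaitGroup.Wait()
}
```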
2022-02-20 16:29:16 -06:00
07fe5ced68 feat: Add "coder" CLI (#221)
* feat: Add "coder" CLI

* Add CLI test for login

* Add "bin/coder" target to Makefile

* Update promptui to fix race

* Fix error scope

* Don't run CLI tests on Windows

* Fix requested changes
2022-02-10 08:33:27 -06:00
e75bde4e31 feat: Add provisionerdaemon to coderd (#141)
* feat: Add history middleware parameters

These will be used for streaming logs, checking status,
and other operations related to workspace and project
history.
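
For illustration, a rough sketch of the middleware pattern (names and types are assumptions): resolve the history referenced in the URL once, stash it on the request context, and let the log-streaming and status handlers read it from there.

```go
package httpmw

import (
	"context"
	"net/http"

	"github.com/go-chi/chi/v5"
)

// WorkspaceHistory and the lookup are simplified stand-ins for whatever the
// real middleware resolves from the database.
type WorkspaceHistory struct {
	ID string
}

type historyKey struct{}

func ExtractWorkspaceHistory(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := chi.URLParam(r, "workspacehistory")
		if id == "" {
			http.Error(w, "workspace history not provided", http.StatusBadRequest)
			return
		}
		history := WorkspaceHistory{ID: id} // a real implementation queries the database
		ctx := context.WithValue(r.Context(), historyKey{}, history)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}
```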

* refactor: Move all HTTP routes to top-level struct

Nesting handlers behind their respective structs is leaky,
and promotes naming conflicts between handlers.

Our HTTP routes cannot have conflicts, so neither should
function naming.
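
A small sketch of the resulting layout, assuming a chi router and hypothetical handler names: every route hangs off one top-level struct, so handler names have to be as conflict-free as the routes themselves.

```go
package coderd

import (
	"net/http"

	"github.com/go-chi/chi/v5"
)

type api struct{}

// handler wires every route to a method on the single top-level struct.
func (a *api) handler() http.Handler {
	r := chi.NewRouter()
	r.Get("/projects", a.projects)
	r.Get("/workspaces", a.workspaces)
	return r
}

func (a *api) projects(w http.ResponseWriter, r *http.Request)   {}
func (a *api) workspaces(w http.ResponseWriter, r *http.Request) {}
```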

* Add provisioner daemon routes

* Add periodic updates

* Skip pubsub if short

* Return jobs with WorkspaceHistory

* Add endpoints for extracting singular history

* The full end-to-end operation works

* fix: Disable compression for websocket dRPC transport (#145)

There is a race condition in the interop between the websocket and `dRPC`: https://github.com/coder/coder/runs/5038545709?check_suite_focus=true#step:7:117 - it seems both the websocket and `dRPC` believe they own the `byte[]` being sent between them, which can lead to data races in which both are writing to the same buffer.

This is just tracking some experimentation to fix that race condition

## Run results: ##
- Run 1: peer test failure
- Run 2: peer test failure
- Run 3: `TestWorkspaceHistory/CreateHistory`  - https://github.com/coder/coder/runs/5040858460?check_suite_focus=true#step:8:45
```
status code 412: The provided project history is running. Wait for it to complete importing!
```
- Run 4: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040957999?check_suite_focus=true#step:7:176
```
    workspacehistory_test.go:122: 
        	Error Trace:	workspacehistory_test.go:122
        	Error:      	Condition never satisfied
        	Test:       	TestWorkspaceHistory/CreateHistory
```
- Run 5: peer failure
- Run 6: Pass  
- Run 7: Peer failure

## Open Questions: ##

### Is `dRPC` or `websocket` at fault for the data race?

It looks like this condition is specifically happening when `dRPC` decides to `SendError`. This constructs a new byte payload from [`MarshalError`](f6e369438f/drpcwire/error.go (L15)) - so `dRPC` has created this buffer and owns it.

From `dRPC`'s perspective, the callstack looks like this:
- [`sendPacket`](f6e369438f/drpcstream/stream.go (L253))
  - [`writeFrame`](f6e369438f/drpcwire/writer.go (L65))
    - [`AppendFrame`](f6e369438f/drpcwire/packet.go (L128))
      - with the data race finally happening here:
```go
// AppendFrame appends a marshaled form of the frame to the provided buffer.
func AppendFrame(buf []byte, fr Frame) []byte {
...
	out := buf
	out = append(out, control) // <---------
```

This should be fine, since `dRPC` created this buffer, and is taking the byte buffer constructed from `MarshalError` and tacking a bunch of headers on it to create a proper frame.

Once `dRPC` is done writing, it _hangs onto the buffer and resets it_ here: f6e369438f/drpcwire/writer.go (L73)

However... the websocket implementation, once it gets the buffer, runs a `statelessDeflate` [here](8dee580a7f/write.go (L180)), which compresses the buffer on the fly. This functionality actually [mutates the buffer in place](a1a9cfc821/flate/stateless.go (L94)), which is where we get our race.

In the case where the `byte[]` aren't being manipulated anywhere else, this compress-in-place operation would be safe, and that's probably the case for most over-the-wire usages. In this case, though, where we're plumbing `dRPC` -> websocket, they both are manipulating it (`dRPC` is reusing the buffer for the next `write`, and `websocket` is compressing on the fly).
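
A toy reproduction of that ownership conflict (not the actual coder or dRPC code) - the "sender" reuses its buffer for the next frame while the "transport" is still mutating the previous one in place; `go run -race` flags the conflicting writes:

```go
package main

import "time"

// compressInPlace stands in for the websocket's statelessDeflate, which
// mutates the slice it is handed.
func compressInPlace(buf []byte) {
	for i := range buf {
		buf[i] ^= 0xFF
	}
}

func main() {
	buf := make([]byte, 64)
	for frame := 0; frame < 2; frame++ {
		// The "dRPC side" reuses the same backing array for the next frame...
		buf = append(buf[:0], byte(frame))
		// ...while the "websocket side" may still be mutating the previous one.
		go compressInPlace(buf)
	}
	time.Sleep(100 * time.Millisecond)
}
```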

### Why does cloning on `Read` fail?

We get a bunch of errors like:
```
2022/02/02 19:26:10 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
```

# UPDATE:

We decided we could disable websocket compression, which would avoid the race because the in-place `deflate` operation would no longer be run. Trying that out now (a sketch of the configuration change follows the run results):

- Run 1:  
- Run 2: https://github.com/coder/coder/runs/5042645522?check_suite_focus=true#step:8:338
- Run 3:  
- Run 4: https://github.com/coder/coder/runs/5042988758?check_suite_focus=true#step:7:168
- Run 5: 
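
For reference, a sketch of what the change looks like, assuming the `nhooyr.io/websocket` package and a hypothetical handler name - the relevant part is `CompressionDisabled` in the accept options:

```go
package coderd

import (
	"net/http"

	"nhooyr.io/websocket"
)

// serveProvisionerDaemon is a hypothetical handler name; with compression
// disabled, the transport never runs the in-place deflate that races with
// dRPC's buffer reuse.
func serveProvisionerDaemon(w http.ResponseWriter, r *http.Request) {
	conn, err := websocket.Accept(w, r, &websocket.AcceptOptions{
		CompressionMode: websocket.CompressionDisabled,
	})
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	defer conn.Close(websocket.StatusNormalClosure, "")
	// ... hand the connection to the dRPC transport here ...
}
```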

* fix: Remove race condition with acquiredJobDone channel (#148)

Found another data race while running the tests: https://github.com/coder/coder/runs/5044320845?check_suite_focus=true#step:7:83

__Issue:__ There is a race on the `p.acquiredJobDone` channel - in particular, there can be a case where we're waiting on the channel to finish (in close) with `<-p.acquiredJobDone`, but in parallel, an `acquireJob` could have been started, which would create a new channel for `p.acquiredJobDone`. There is a similar race in `close(..)`ing the channel, which also came up in test runs.

__Fix:__ Instead of recreating the channel every time, we can use a `sync.WaitGroup` to accomplish the same functionality - a semaphore that makes close wait for the current job to wrap up.
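
A minimal sketch of that pattern (field and method names are assumptions, not the exact provisionerd code):

```go
package provisionerd

import "sync"

type provisionerDaemon struct {
	// Replaces the recreated acquiredJobDone channel: Close waits on the
	// group instead of on a channel that acquireJob may swap out underneath it.
	activeJob sync.WaitGroup
}

func (p *provisionerDaemon) acquireJob() {
	p.activeJob.Add(1)
	go func() {
		defer p.activeJob.Done()
		// ... execute the acquired job ...
	}()
}

func (p *provisionerDaemon) Close() error {
	// Blocks until the in-flight job (if any) wraps up. Callers must stop
	// acquiring new jobs before calling Close.
	p.activeJob.Wait()
	return nil
}
```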

* fix: Bump up workspace history timeout (#149)

This is an attempted fix for failures like: https://github.com/coder/coder/runs/5043435263?check_suite_focus=true#step:7:32

Looking at the timing of the test:
```
    t.go:56: 2022-02-02 21:33:21.964 [DEBUG]	(terraform-provisioner)	<provision.go:139>	ran apply
    t.go:56: 2022-02-02 21:33:21.991 [DEBUG]	(provisionerd)	<provisionerd.go:162>	skipping acquire; job is already running
    t.go:56: 2022-02-02 21:33:22.050 [DEBUG]	(provisionerd)	<provisionerd.go:162>	skipping acquire; job is already running
    t.go:56: 2022-02-02 21:33:22.090 [DEBUG]	(provisionerd)	<provisionerd.go:162>	skipping acquire; job is already running
    t.go:56: 2022-02-02 21:33:22.140 [DEBUG]	(provisionerd)	<provisionerd.go:162>	skipping acquire; job is already running
    t.go:56: 2022-02-02 21:33:22.195 [DEBUG]	(provisionerd)	<provisionerd.go:162>	skipping acquire; job is already running
    t.go:56: 2022-02-02 21:33:22.240 [DEBUG]	(provisionerd)	<provisionerd.go:162>	skipping acquire; job is already running
    workspacehistory_test.go:122: 
        	Error Trace:	workspacehistory_test.go:122
        	Error:      	Condition never satisfied
        	Test:       	TestWorkspaceHistory/CreateHistory
```

It appears that the `terraform apply` job had just finished - with less than a second to spare before our `require.Eventually` gave up - but there's still work to be done (i.e., collecting the state files). So my suspicion is that terraform might, in some cases, exceed our 5s timeout.

Note that in the setup for this test there is a similar project history wait of 15s, so I borrowed that value here.

In the future, we can look at using a simple echo provider to exercise this in the unit test, in a way that is more reliable in terms of timing. I'll log an issue to track that.
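
For illustration, the shape of the assertion being tuned - the condition, test name, and 50ms tick are stand-ins; the point is the `require.Eventually` timeout moving from 5s to 15s:

```go
package coderd_test

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// TestWorkspaceHistoryEventuallyCompletes is illustrative only: the condition
// stands in for polling coderd until the workspace history reaches a
// completed state.
func TestWorkspaceHistoryEventuallyCompletes(t *testing.T) {
	start := time.Now()
	historyCompleted := func() bool {
		// e.g. terraform apply plus collecting the state files.
		return time.Since(start) > 6*time.Second
	}
	require.Eventually(t, historyCompleted, 15*time.Second, 50*time.Millisecond)
}
```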

Co-authored-by: Bryan <bryan@coder.com>
2022-02-03 20:34:50 +00:00
6a919aea79 feat: Add authentication and personal user endpoint (#29)
* feat: Add authentication and personal user endpoint

This contribution adds a lot of scaffolding for the database fake
and testability of coderd.

A new endpoint "/user" is added to return the currently authenticated
user to the requester.
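
A sketch of the endpoint's shape, with simplified stand-in types rather than the actual coderd implementation: the handler just returns whatever user the authentication middleware attached to the request.

```go
package coderd

import (
	"encoding/json"
	"net/http"
)

// User and the context key are simplified stand-ins, not the real coderd types.
type User struct {
	ID       string `json:"id"`
	Username string `json:"username"`
}

type userContextKey struct{}

func handleUser(w http.ResponseWriter, r *http.Request) {
	user, ok := r.Context().Value(userContextKey{}).(User)
	if !ok {
		http.Error(w, "no authenticated user", http.StatusUnauthorized)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(user)
}
```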

* Use TestMain to catch leak instead

* Add userpassword package

* Add WIP

* Add user auth

* Fix test

* Add comments

* Fix login response

* Fix order

* Fix generated code

* Update httpapi/httpapi.go

Co-authored-by: Bryan <bryan@coder.com>

Co-authored-by: Bryan <bryan@coder.com>
2022-01-20 13:46:51 +00:00
afc2fa3b62 feat: Add Coder Daemon to serve the API (#18)
* feat: Add v1 schema types

This adds compatibility for sharing data with Coder v1. Since the tables are the same, all CRUD operations should function as expected.

* Add license table

* feat: Add Coder Daemon to serve the API

coderd is a public package which will be consumed by v1 to support running both at the same time. The frontend will need to be compiled and statically served as part of this eventually.
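
A sketch of that consumption pattern with assumed names: coderd exposes an `http.Handler` that an embedding binary mounts (here under the `/api/v2` prefix adopted later in this change), eventually alongside the statically served frontend.

```go
package main

import (
	"log"
	"net/http"
)

// newCoderdHandler stands in for the handler the public coderd package exposes.
func newCoderdHandler() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/v2/", func(w http.ResponseWriter, r *http.Request) {
		_, _ = w.Write([]byte("coderd API\n"))
	})
	return mux
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/api/v2/", newCoderdHandler())
	// Placeholder for the compiled, statically served frontend mentioned above.
	mux.Handle("/", http.FileServer(http.Dir("./site")))
	log.Fatal(http.ListenAndServe("127.0.0.1:3000", mux))
}
```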

* Fix initial migration

* Move to /api/v2

* Increase peer disconnectedTimeout to reduce flakes on slow machines

* Reduce timeout again

* Fix version for pion/ice
2022-01-13 16:55:28 -06:00