mirror of
https://github.com/coder/coder.git
synced 2025-07-06 15:41:45 +00:00
feat: Add provisionerdaemon to coderd (#141)
* feat: Add history middleware parameters

These will be used for streaming logs, checking status, and other operations related to workspace and project history.

* refactor: Move all HTTP routes to top-level struct

Nesting all structs behind their respective structures is leaky, and promotes naming conflicts between handlers. Our HTTP routes cannot have conflicts, so neither should function naming.

* Add provisioner daemon routes
* Add periodic updates
* Skip pubsub if short
* Return jobs with WorkspaceHistory
* Add endpoints for extracting singular history
* The full end-to-end operation works

* fix: Disable compression for websocket dRPC transport (#145)

There is a race condition in the interop between the websocket and `dRPC`: https://github.com/coder/coder/runs/5038545709?check_suite_focus=true#step:7:117 - it seems both the websocket and `dRPC` feel like they own the `byte[]` being sent between them. This can lead to data races in which both `dRPC` and the websocket are writing. This commit tracks some experimentation to fix that race condition.

## Run results

- Run 1: peer test failure
- Run 2: peer test failure
- Run 3: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040858460?check_suite_focus=true#step:8:45

  ```
  status code 412: The provided project history is running. Wait for it to complete importing!
  ```

- Run 4: `TestWorkspaceHistory/CreateHistory` - https://github.com/coder/coder/runs/5040957999?check_suite_focus=true#step:7:176

  ```
  workspacehistory_test.go:122: Error Trace: workspacehistory_test.go:122
  Error: Condition never satisfied
  Test: TestWorkspaceHistory/CreateHistory
  ```

- Run 5: peer failure
- Run 6: Pass ✅
- Run 7: peer failure

## Open questions

### Is `dRPC` or `websocket` at fault for the data race?

It looks like this condition specifically happens when `dRPC` decides to [`SendError`]. This constructs a new byte payload from [`MarshalError`](f6e369438f/drpcwire/error.go (L15)) - so `dRPC` has created this buffer and owns it. From `dRPC`'s perspective, the callstack looks like this:

- [`sendPacket`](f6e369438f/drpcstream/stream.go (L253))
- [`writeFrame`](f6e369438f/drpcwire/writer.go (L65))
- [`AppendFrame`](f6e369438f/drpcwire/packet.go (L128))

with the data race finally happening here:

```go
// AppendFrame appends a marshaled form of the frame to the provided buffer.
func AppendFrame(buf []byte, fr Frame) []byte {
	...
	out := buf
	out = append(out, control) // <---------
```

This should be fine, since `dRPC` created this buffer, and is taking the byte buffer constructed from `MarshalError` and tacking a bunch of headers on it to create a proper frame. Once `dRPC` is done writing, it _hangs onto the buffer and resets it here_: f6e369438f/drpcwire/writer.go (L73)

However, once the websocket implementation gets the buffer, it runs a `statelessDeflate` [here](8dee580a7f/write.go (L180)), which compresses the buffer on the fly. This functionality actually [mutates the buffer in place](a1a9cfc821/flate/stateless.go (L94)), which is where we get our race. In cases where the `byte[]` isn't being manipulated anywhere else, this compress-in-place operation would be safe, and that's probably the case for most over-the-wire usages. In this case, though, where we're plumbing `dRPC` -> websocket, both are manipulating it (`dRPC` is reusing the buffer for the next `write`, and `websocket` is compressing on the fly).

### Why does cloning on `Read` fail?

We get a bunch of errors like:

```
2022/02/02 19:26:10 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [ERR] yamux: Failed to read header: unexpected EOF
2022/02/02 19:26:25 [WARN] yamux: frame for missing stream: Vsn:0 Type:0 Flags:0 StreamID:0 Length:0
```

## UPDATE

We decided we could disable websocket compression, which avoids the race because the in-place `deflate` operation is no longer run. Trying that out now:

- Run 1: ✅
- Run 2: https://github.com/coder/coder/runs/5042645522?check_suite_focus=true#step:8:338
- Run 3: ✅
- Run 4: https://github.com/coder/coder/runs/5042988758?check_suite_focus=true#step:7:168
- Run 5: ✅

* fix: Remove race condition with acquiredJobDone channel (#148)

Found another data race while running the tests: https://github.com/coder/coder/runs/5044320845?check_suite_focus=true#step:7:83

__Issue:__ There is a race on the `p.acquiredJobDone` channel - in particular, there can be a case where we're waiting on the channel to finish (in `close`) with `<-p.acquiredJobDone`, but in parallel, an `acquireJob` could've been started, which would create a new channel for `p.acquiredJobDone`. There is a similar race in `close(..)`ing the channel, which also came up in test runs.

__Fix:__ Instead of recreating the channel every time, we can use a `sync.WaitGroup` to accomplish the same functionality - a semaphore to make `close` wait for the current job to wrap up.
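The WaitGroup-as-semaphore pattern described in the fix can be sketched as follows. Note this is a minimal illustration, not provisionerd's actual code: `jobRunner`, `acquireJob`, and `Close` are hypothetical names standing in for the real internals.

```go
package main

import (
	"fmt"
	"sync"
)

// jobRunner is a hypothetical stand-in for provisionerd: Close must
// wait for any in-flight job, and new jobs may start concurrently.
// The sync.WaitGroup replaces a channel that was recreated per job.
type jobRunner struct {
	mu     sync.Mutex
	closed bool
	active sync.WaitGroup
}

// acquireJob starts work unless the runner is closed. Add is called
// under the mutex so Close cannot slip between the check and the Add.
func (r *jobRunner) acquireJob(work func()) bool {
	r.mu.Lock()
	if r.closed {
		r.mu.Unlock()
		return false
	}
	r.active.Add(1)
	r.mu.Unlock()
	go func() {
		defer r.active.Done()
		work()
	}()
	return true
}

// Close marks the runner closed, then waits for the current job to
// wrap up - the semaphore-like behavior the commit describes.
func (r *jobRunner) Close() {
	r.mu.Lock()
	r.closed = true
	r.mu.Unlock()
	r.active.Wait()
}

func main() {
	r := &jobRunner{}
	r.acquireJob(func() { fmt.Println("job ran") }) // prints "job ran"
	r.Close()                                       // returns only after the job finished
}
```

Because `Close` never replaces the synchronization object (unlike a recreated channel), there is no window where a concurrent `acquireJob` can swap it out from under the waiter.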
* fix: Bump up workspace history timeout (#149)

This is an attempted fix for failures like: https://github.com/coder/coder/runs/5043435263?check_suite_focus=true#step:7:32

Looking at the timing of the test:

```
t.go:56: 2022-02-02 21:33:21.964 [DEBUG] (terraform-provisioner) <provision.go:139> ran apply
t.go:56: 2022-02-02 21:33:21.991 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.050 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.090 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.140 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.195 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
t.go:56: 2022-02-02 21:33:22.240 [DEBUG] (provisionerd) <provisionerd.go:162> skipping acquire; job is already running
workspacehistory_test.go:122: Error Trace: workspacehistory_test.go:122
Error: Condition never satisfied
Test: TestWorkspaceHistory/CreateHistory
```

It appears that the `terraform apply` job had just finished - with less than a second to spare until our `require.Eventually` completes - but there's still work to be done (i.e., collecting the state files). So my suspicion is that terraform might, in some cases, exceed our 5s timeout. Note that in the setup for this test, there is a similar project history wait that waits for 15s, so I borrowed that here.

In the future, we can look at potentially using a simple echo provider to exercise this in the unit test, in a way that is more reliable in terms of timing. I'll log an issue to track that.

Co-authored-by: Bryan <bryan@coder.com>
@@ -11,6 +11,7 @@ import (
	"cdr.dev/slog"
	"cdr.dev/slog/sloggers/sloghuman"
	"github.com/coder/coder/coderd"
	"github.com/coder/coder/database"
	"github.com/coder/coder/database/databasefake"
)

@@ -24,6 +25,7 @@ func Root() *cobra.Command {
	handler := coderd.New(&coderd.Options{
		Logger:   slog.Make(sloghuman.Sink(os.Stderr)),
		Database: databasefake.New(),
		Pubsub:   database.NewPubsubInMemory(),
	})

	listener, err := net.Listen("tcp", address)
@@ -64,6 +64,10 @@ func New(options *Options) http.Handler {
				r.Route("/history", func(r chi.Router) {
					r.Get("/", api.projectHistoryByOrganization)
					r.Post("/", api.postProjectHistoryByOrganization)
					r.Route("/{projecthistory}", func(r chi.Router) {
						r.Use(httpmw.ExtractProjectHistoryParam(api.Database))
						r.Get("/", api.projectHistoryByOrganizationAndName)
					})
				})
			})
		})
	})
@@ -84,11 +88,19 @@ func New(options *Options) http.Handler {
					r.Route("/history", func(r chi.Router) {
						r.Post("/", api.postWorkspaceHistoryByUser)
						r.Get("/", api.workspaceHistoryByUser)
						r.Get("/latest", api.latestWorkspaceHistoryByUser)
						r.Route("/{workspacehistory}", func(r chi.Router) {
							r.Use(httpmw.ExtractWorkspaceHistoryParam(options.Database))
							r.Get("/", api.workspaceHistoryByName)
						})
					})
				})
			})
		})

		r.Route("/provisioners/daemons", func(r chi.Router) {
			r.Get("/", api.provisionerDaemons)
			r.Get("/serve", api.provisionerDaemonsServe)
		})
	})
	r.NotFound(site.Handler().ServeHTTP)
	return r
@@ -3,13 +3,16 @@ package coderdtest
import (
	"context"
	"database/sql"
	"io"
	"net/http/httptest"
	"net/url"
	"os"
	"testing"
	"time"

	"github.com/stretchr/testify/require"

	"cdr.dev/slog"
	"cdr.dev/slog/sloggers/slogtest"
	"github.com/coder/coder/coderd"
	"github.com/coder/coder/codersdk"
@@ -17,6 +20,10 @@ import (
	"github.com/coder/coder/database"
	"github.com/coder/coder/database/databasefake"
	"github.com/coder/coder/database/postgres"
	"github.com/coder/coder/provisioner/terraform"
	"github.com/coder/coder/provisionerd"
	"github.com/coder/coder/provisionersdk"
	"github.com/coder/coder/provisionersdk/proto"
)

// Server represents a test instance of coderd.
@@ -57,11 +64,46 @@ func (s *Server) RandomInitialUser(t *testing.T) coderd.CreateInitialUserRequest
	return req
}

// AddProvisionerd launches a new provisionerd instance!
func (s *Server) AddProvisionerd(t *testing.T) io.Closer {
	tfClient, tfServer := provisionersdk.TransportPipe()
	ctx, cancelFunc := context.WithCancel(context.Background())
	t.Cleanup(func() {
		_ = tfClient.Close()
		_ = tfServer.Close()
		cancelFunc()
	})
	go func() {
		err := terraform.Serve(ctx, &terraform.ServeOptions{
			ServeOptions: &provisionersdk.ServeOptions{
				Listener: tfServer,
			},
			Logger: slogtest.Make(t, nil).Named("terraform-provisioner").Leveled(slog.LevelDebug),
		})
		require.NoError(t, err)
	}()

	closer := provisionerd.New(s.Client.ProvisionerDaemonClient, &provisionerd.Options{
		Logger:         slogtest.Make(t, nil).Named("provisionerd").Leveled(slog.LevelDebug),
		PollInterval:   50 * time.Millisecond,
		UpdateInterval: 50 * time.Millisecond,
		Provisioners: provisionerd.Provisioners{
			string(database.ProvisionerTypeTerraform): proto.NewDRPCProvisionerClient(provisionersdk.Conn(tfClient)),
		},
		WorkDirectory: t.TempDir(),
	})
	t.Cleanup(func() {
		_ = closer.Close()
	})
	return closer
}

// New constructs a new coderd test instance. This returned Server
// should contain no side-effects.
func New(t *testing.T) Server {
	// This can be hotswapped for a live database instance.
	db := databasefake.New()
	pubsub := database.NewPubsubInMemory()
	if os.Getenv("DB") != "" {
		connectionURL, close, err := postgres.Open()
		require.NoError(t, err)
@@ -74,11 +116,18 @@ func New(t *testing.T) Server {
		err = database.Migrate(sqlDB)
		require.NoError(t, err)
		db = database.New(sqlDB)

		pubsub, err = database.NewPubsub(context.Background(), sqlDB, connectionURL)
		require.NoError(t, err)
		t.Cleanup(func() {
			_ = pubsub.Close()
		})
	}

	handler := coderd.New(&coderd.Options{
		Logger:   slogtest.Make(t, nil),
		Database: db,
		Pubsub:   pubsub,
	})
	srv := httptest.NewServer(handler)
	serverURL, err := url.Parse(srv.URL)
@@ -16,4 +16,5 @@ func TestNew(t *testing.T) {
	t.Parallel()
	server := coderdtest.New(t)
	_ = server.RandomInitialUser(t)
	_ = server.AddProvisionerd(t)
}
@@ -4,6 +4,7 @@ import (
	"archive/tar"
	"bytes"
	"database/sql"
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
@@ -12,6 +13,7 @@ import (
	"github.com/go-chi/render"
	"github.com/google/uuid"
	"github.com/moby/moby/pkg/namesgenerator"
	"golang.org/x/xerrors"

	"github.com/coder/coder/database"
	"github.com/coder/coder/httpapi"
@@ -26,6 +28,7 @@ type ProjectHistory struct {
	UpdatedAt     time.Time                     `json:"updated_at"`
	Name          string                        `json:"name"`
	StorageMethod database.ProjectStorageMethod `json:"storage_method"`
	Import        ProvisionerJob                `json:"import"`
}

// CreateProjectHistoryRequest enables callers to create a new Project Version.
@@ -50,12 +53,33 @@ func (api *api) projectHistoryByOrganization(rw http.ResponseWriter, r *http.Req
	}
	apiHistory := make([]ProjectHistory, 0)
	for _, version := range history {
		apiHistory = append(apiHistory, convertProjectHistory(version))
		job, err := api.Database.GetProvisionerJobByID(r.Context(), version.ImportJobID)
		if err != nil {
			httpapi.Write(rw, http.StatusInternalServerError, httpapi.Response{
				Message: fmt.Sprintf("get provisioner job: %s", err),
			})
			return
		}
		apiHistory = append(apiHistory, convertProjectHistory(version, job))
	}
	render.Status(r, http.StatusOK)
	render.JSON(rw, r, apiHistory)
}

// Return a single project history by organization and name.
func (api *api) projectHistoryByOrganizationAndName(rw http.ResponseWriter, r *http.Request) {
	projectHistory := httpmw.ProjectHistoryParam(r)
	job, err := api.Database.GetProvisionerJobByID(r.Context(), projectHistory.ImportJobID)
	if err != nil {
		httpapi.Write(rw, http.StatusInternalServerError, httpapi.Response{
			Message: fmt.Sprintf("get provisioner job: %s", err),
		})
		return
	}
	render.Status(r, http.StatusOK)
	render.JSON(rw, r, convertProjectHistory(projectHistory, job))
}

// Creates a new version of the project. An import job is queued to parse
// the storage method provided. Once completed, the import job will specify
// the version as latest.
@@ -82,37 +106,71 @@ func (api *api) postProjectHistoryByOrganization(rw http.ResponseWriter, r *http
		return
	}

	apiKey := httpmw.APIKey(r)
	project := httpmw.ProjectParam(r)
	history, err := api.Database.InsertProjectHistory(r.Context(), database.InsertProjectHistoryParams{
		ID:            uuid.New(),
		ProjectID:     project.ID,
		CreatedAt:     database.Now(),
		UpdatedAt:     database.Now(),
		Name:          namesgenerator.GetRandomName(1),
		StorageMethod: createProjectVersion.StorageMethod,
		StorageSource: createProjectVersion.StorageSource,
		// TODO: Make this do something!
		ImportJobID: uuid.New(),

	var provisionerJob database.ProvisionerJob
	var projectHistory database.ProjectHistory
	err := api.Database.InTx(func(db database.Store) error {
		projectHistoryID := uuid.New()
		input, err := json.Marshal(projectImportJob{
			ProjectHistoryID: projectHistoryID,
		})
		if err != nil {
			return xerrors.Errorf("marshal import job: %w", err)
		}

		provisionerJob, err = db.InsertProvisionerJob(r.Context(), database.InsertProvisionerJobParams{
			ID:          uuid.New(),
			CreatedAt:   database.Now(),
			UpdatedAt:   database.Now(),
			InitiatorID: apiKey.UserID,
			Provisioner: project.Provisioner,
			Type:        database.ProvisionerJobTypeProjectImport,
			ProjectID:   project.ID,
			Input:       input,
		})
		if err != nil {
			return xerrors.Errorf("insert provisioner job: %w", err)
		}

		projectHistory, err = api.Database.InsertProjectHistory(r.Context(), database.InsertProjectHistoryParams{
			ID:            projectHistoryID,
			ProjectID:     project.ID,
			CreatedAt:     database.Now(),
			UpdatedAt:     database.Now(),
			Name:          namesgenerator.GetRandomName(1),
			StorageMethod: createProjectVersion.StorageMethod,
			StorageSource: createProjectVersion.StorageSource,
			ImportJobID:   provisionerJob.ID,
		})
		if err != nil {
			return xerrors.Errorf("insert project history: %s", err)
		}
		return nil
	})
	if err != nil {
		httpapi.Write(rw, http.StatusInternalServerError, httpapi.Response{
			Message: fmt.Sprintf("insert project history: %s", err),
			Message: err.Error(),
		})
		return
	}

	// TODO: A job to process the new version should occur here.

	render.Status(r, http.StatusCreated)
	render.JSON(rw, r, convertProjectHistory(history))
	render.JSON(rw, r, convertProjectHistory(projectHistory, provisionerJob))
}

func convertProjectHistory(history database.ProjectHistory) ProjectHistory {
func convertProjectHistory(history database.ProjectHistory, job database.ProvisionerJob) ProjectHistory {
	return ProjectHistory{
		ID:        history.ID,
		ProjectID: history.ProjectID,
		CreatedAt: history.CreatedAt,
		UpdatedAt: history.UpdatedAt,
		Name:      history.Name,
		Import:    convertProvisionerJob(job),
	}
}

func projectHistoryLogsChannel(projectHistoryID uuid.UUID) string {
	return fmt.Sprintf("project-history-logs:%s", projectHistoryID)
}
@@ -25,7 +25,7 @@ func TestProjectHistory(t *testing.T) {
			Provisioner: database.ProvisionerTypeTerraform,
		})
		require.NoError(t, err)
		versions, err := server.Client.ProjectHistory(context.Background(), user.Organization, project.Name)
		versions, err := server.Client.ListProjectHistory(context.Background(), user.Organization, project.Name)
		require.NoError(t, err)
		require.Len(t, versions, 0)
	})
@@ -48,14 +48,17 @@ func TestProjectHistory(t *testing.T) {
		require.NoError(t, err)
		_, err = writer.Write(make([]byte, 1<<10))
		require.NoError(t, err)
		_, err = server.Client.CreateProjectHistory(context.Background(), user.Organization, project.Name, coderd.CreateProjectHistoryRequest{
		history, err := server.Client.CreateProjectHistory(context.Background(), user.Organization, project.Name, coderd.CreateProjectHistoryRequest{
			StorageMethod: database.ProjectStorageMethodInlineArchive,
			StorageSource: buffer.Bytes(),
		})
		require.NoError(t, err)
		versions, err := server.Client.ProjectHistory(context.Background(), user.Organization, project.Name)
		versions, err := server.Client.ListProjectHistory(context.Background(), user.Organization, project.Name)
		require.NoError(t, err)
		require.Len(t, versions, 1)

		_, err = server.Client.ProjectHistory(context.Background(), user.Organization, project.Name, history.Name)
		require.NoError(t, err)
	})

	t.Run("CreateHistoryArchiveTooBig", func(t *testing.T) {
coderd/provisionerdaemons.go (new file, 619 lines)
@@ -0,0 +1,619 @@
package coderd

import (
	"context"
	"database/sql"
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
	"reflect"
	"time"

	"github.com/go-chi/render"
	"github.com/google/uuid"
	"github.com/hashicorp/yamux"
	"github.com/moby/moby/pkg/namesgenerator"
	"golang.org/x/xerrors"
	"nhooyr.io/websocket"
	"storj.io/drpc/drpcmux"
	"storj.io/drpc/drpcserver"

	"github.com/coder/coder/coderd/projectparameter"
	"github.com/coder/coder/database"
	"github.com/coder/coder/httpapi"
	"github.com/coder/coder/provisionerd/proto"
	sdkproto "github.com/coder/coder/provisionersdk/proto"
)

type ProvisionerDaemon database.ProvisionerDaemon

// Lists all registered provisioner daemons.
func (api *api) provisionerDaemons(rw http.ResponseWriter, r *http.Request) {
	daemons, err := api.Database.GetProvisionerDaemons(r.Context())
	if errors.Is(err, sql.ErrNoRows) {
		err = nil
		daemons = []database.ProvisionerDaemon{}
	}
	if err != nil {
		httpapi.Write(rw, http.StatusInternalServerError, httpapi.Response{
			Message: fmt.Sprintf("get provisioner daemons: %s", err),
		})
		return
	}

	render.Status(r, http.StatusOK)
	render.JSON(rw, r, daemons)
}

// Serves the provisioner daemon protobuf API over a WebSocket.
func (api *api) provisionerDaemonsServe(rw http.ResponseWriter, r *http.Request) {
	conn, err := websocket.Accept(rw, r, &websocket.AcceptOptions{
		// Need to disable compression to avoid a data-race.
		CompressionMode: websocket.CompressionDisabled,
	})
	if err != nil {
		httpapi.Write(rw, http.StatusBadRequest, httpapi.Response{
			Message: fmt.Sprintf("accept websocket: %s", err),
		})
		return
	}

	daemon, err := api.Database.InsertProvisionerDaemon(r.Context(), database.InsertProvisionerDaemonParams{
		ID:           uuid.New(),
		CreatedAt:    database.Now(),
		Name:         namesgenerator.GetRandomName(1),
		Provisioners: []database.ProvisionerType{database.ProvisionerTypeCdrBasic, database.ProvisionerTypeTerraform},
	})
	if err != nil {
		_ = conn.Close(websocket.StatusInternalError, fmt.Sprintf("insert provisioner daemon: %s", err))
		return
	}

	// Multiplexes the incoming connection using yamux.
	// This allows multiple function calls to occur over
	// the same connection.
	session, err := yamux.Server(websocket.NetConn(r.Context(), conn, websocket.MessageBinary), nil)
	if err != nil {
		_ = conn.Close(websocket.StatusInternalError, fmt.Sprintf("multiplex server: %s", err))
		return
	}
	mux := drpcmux.New()
	err = proto.DRPCRegisterProvisionerDaemon(mux, &provisionerdServer{
		ID:           daemon.ID,
		Database:     api.Database,
		Pubsub:       api.Pubsub,
		Provisioners: daemon.Provisioners,
	})
	if err != nil {
		_ = conn.Close(websocket.StatusInternalError, fmt.Sprintf("drpc register provisioner daemon: %s", err))
		return
	}
	server := drpcserver.New(mux)
	err = server.Serve(r.Context(), session)
	if err != nil {
		_ = conn.Close(websocket.StatusInternalError, fmt.Sprintf("serve: %s", err))
	}
}

// The input for a "workspace_provision" job.
type workspaceProvisionJob struct {
	WorkspaceHistoryID uuid.UUID `json:"workspace_history_id"`
}

// The input for a "project_import" job.
type projectImportJob struct {
	ProjectHistoryID uuid.UUID `json:"project_history_id"`
}

// Implementation of the provisioner daemon protobuf server.
type provisionerdServer struct {
	ID           uuid.UUID
	Provisioners []database.ProvisionerType
	Database     database.Store
	Pubsub       database.Pubsub
}

// AcquireJob queries the database to lock a job.
func (server *provisionerdServer) AcquireJob(ctx context.Context, _ *proto.Empty) (*proto.AcquiredJob, error) {
	// This marks the job as locked in the database.
	job, err := server.Database.AcquireProvisionerJob(ctx, database.AcquireProvisionerJobParams{
		StartedAt: sql.NullTime{
			Time:  database.Now(),
			Valid: true,
		},
		WorkerID: uuid.NullUUID{
			UUID:  server.ID,
			Valid: true,
		},
		Types: server.Provisioners,
	})
	if errors.Is(err, sql.ErrNoRows) {
		// The provisioner daemon assumes no jobs are available if
		// an empty struct is returned.
		return &proto.AcquiredJob{}, nil
	}
	if err != nil {
		return nil, xerrors.Errorf("acquire job: %w", err)
	}
	// Marks the acquired job as failed with the error message provided.
	failJob := func(errorMessage string) error {
		err = server.Database.UpdateProvisionerJobByID(ctx, database.UpdateProvisionerJobByIDParams{
			ID: job.ID,
			CompletedAt: sql.NullTime{
				Time:  database.Now(),
				Valid: true,
			},
			Error: sql.NullString{
				String: errorMessage,
				Valid:  true,
			},
		})
		if err != nil {
			return xerrors.Errorf("update provisioner job: %w", err)
		}
		return xerrors.Errorf("request job was invalidated: %s", errorMessage)
	}

	project, err := server.Database.GetProjectByID(ctx, job.ProjectID)
	if err != nil {
		return nil, failJob(fmt.Sprintf("get project: %s", err))
	}
	organization, err := server.Database.GetOrganizationByID(ctx, project.OrganizationID)
	if err != nil {
		return nil, failJob(fmt.Sprintf("get organization: %s", err))
	}
	user, err := server.Database.GetUserByID(ctx, job.InitiatorID)
	if err != nil {
		return nil, failJob(fmt.Sprintf("get user: %s", err))
	}

	protoJob := &proto.AcquiredJob{
		JobId:            job.ID.String(),
		CreatedAt:        job.CreatedAt.UnixMilli(),
		Provisioner:      string(job.Provisioner),
		OrganizationName: organization.Name,
		ProjectName:      project.Name,
		UserName:         user.Username,
	}
	var projectHistory database.ProjectHistory
	switch job.Type {
	case database.ProvisionerJobTypeWorkspaceProvision:
		var input workspaceProvisionJob
		err = json.Unmarshal(job.Input, &input)
		if err != nil {
			return nil, failJob(fmt.Sprintf("unmarshal job input %q: %s", job.Input, err))
		}
		workspaceHistory, err := server.Database.GetWorkspaceHistoryByID(ctx, input.WorkspaceHistoryID)
		if err != nil {
			return nil, failJob(fmt.Sprintf("get workspace history: %s", err))
		}
		workspace, err := server.Database.GetWorkspaceByID(ctx, workspaceHistory.WorkspaceID)
		if err != nil {
			return nil, failJob(fmt.Sprintf("get workspace: %s", err))
		}
		projectHistory, err = server.Database.GetProjectHistoryByID(ctx, workspaceHistory.ProjectHistoryID)
		if err != nil {
			return nil, failJob(fmt.Sprintf("get project history: %s", err))
		}

		// Compute parameters for the workspace to consume.
		parameters, err := projectparameter.Compute(ctx, server.Database, projectparameter.Scope{
			OrganizationID:     organization.ID,
			ProjectID:          project.ID,
			ProjectHistoryID:   projectHistory.ID,
			UserID:             user.ID,
			WorkspaceID:        workspace.ID,
			WorkspaceHistoryID: workspaceHistory.ID,
		})
		if err != nil {
			return nil, failJob(fmt.Sprintf("compute parameters: %s", err))
		}
		// Convert parameters to the protobuf type.
		protoParameters := make([]*sdkproto.ParameterValue, 0, len(parameters))
		for _, parameter := range parameters {
			protoParameters = append(protoParameters, parameter.Proto)
		}

		provisionerState := []byte{}
		// If workspace history exists before this entry, use that state.
		// We can't use the before state every time, because if a job fails
		// for some random reason, the workspace shouldn't be reset.
		//
		// Maybe we should make state global on a workspace?
		if workspaceHistory.BeforeID.Valid {
			beforeHistory, err := server.Database.GetWorkspaceHistoryByID(ctx, workspaceHistory.BeforeID.UUID)
			if err != nil {
				return nil, failJob(fmt.Sprintf("get workspace history: %s", err))
			}
			provisionerState = beforeHistory.ProvisionerState
		}

		protoJob.Type = &proto.AcquiredJob_WorkspaceProvision_{
			WorkspaceProvision: &proto.AcquiredJob_WorkspaceProvision{
				WorkspaceHistoryId: workspaceHistory.ID.String(),
				WorkspaceName:      workspace.Name,
				State:              provisionerState,
				ParameterValues:    protoParameters,
			},
		}
	case database.ProvisionerJobTypeProjectImport:
		var input projectImportJob
		err = json.Unmarshal(job.Input, &input)
		if err != nil {
			return nil, failJob(fmt.Sprintf("unmarshal job input %q: %s", job.Input, err))
		}
		projectHistory, err = server.Database.GetProjectHistoryByID(ctx, input.ProjectHistoryID)
		if err != nil {
			return nil, failJob(fmt.Sprintf("get project history: %s", err))
		}

		protoJob.Type = &proto.AcquiredJob_ProjectImport_{
			ProjectImport: &proto.AcquiredJob_ProjectImport{
				ProjectHistoryId:   projectHistory.ID.String(),
				ProjectHistoryName: projectHistory.Name,
			},
		}
	}
	switch projectHistory.StorageMethod {
	case database.ProjectStorageMethodInlineArchive:
		protoJob.ProjectSourceArchive = projectHistory.StorageSource
	default:
		return nil, failJob(fmt.Sprintf("unsupported storage source: %q", projectHistory.StorageMethod))
	}

	return protoJob, err
}

func (server *provisionerdServer) UpdateJob(stream proto.DRPCProvisionerDaemon_UpdateJobStream) error {
	for {
		update, err := stream.Recv()
		if err != nil {
			return err
		}
		parsedID, err := uuid.Parse(update.JobId)
		if err != nil {
			return xerrors.Errorf("parse job id: %w", err)
		}
		job, err := server.Database.GetProvisionerJobByID(stream.Context(), parsedID)
		if err != nil {
			return xerrors.Errorf("get job: %w", err)
		}
		if !job.WorkerID.Valid {
			return errors.New("job isn't running yet")
		}
		if job.WorkerID.UUID.String() != server.ID.String() {
			return errors.New("you don't own this job")
		}

		err = server.Database.UpdateProvisionerJobByID(stream.Context(), database.UpdateProvisionerJobByIDParams{
			ID:        parsedID,
			UpdatedAt: database.Now(),
		})
		if err != nil {
			return xerrors.Errorf("update job: %w", err)
		}
		switch job.Type {
		case database.ProvisionerJobTypeProjectImport:
			if len(update.ProjectImportLogs) == 0 {
				continue
			}
			var input projectImportJob
			err = json.Unmarshal(job.Input, &input)
			if err != nil {
				return xerrors.Errorf("unmarshal job input %q: %s", job.Input, err)
			}
			insertParams := database.InsertProjectHistoryLogsParams{
				ProjectHistoryID: input.ProjectHistoryID,
			}
			for _, log := range update.ProjectImportLogs {
				logLevel, err := convertLogLevel(log.Level)
				if err != nil {
					return xerrors.Errorf("convert log level: %w", err)
				}
				logSource, err := convertLogSource(log.Source)
				if err != nil {
					return xerrors.Errorf("convert log source: %w", err)
				}
				insertParams.ID = append(insertParams.ID, uuid.New())
				insertParams.CreatedAt = append(insertParams.CreatedAt, time.UnixMilli(log.CreatedAt))
				insertParams.Level = append(insertParams.Level, logLevel)
				insertParams.Source = append(insertParams.Source, logSource)
				insertParams.Output = append(insertParams.Output, log.Output)
			}
			logs, err := server.Database.InsertProjectHistoryLogs(stream.Context(), insertParams)
			if err != nil {
				return xerrors.Errorf("insert project logs: %w", err)
			}
			data, err := json.Marshal(logs)
			if err != nil {
				return xerrors.Errorf("marshal project log: %w", err)
			}
			err = server.Pubsub.Publish(projectHistoryLogsChannel(input.ProjectHistoryID), data)
			if err != nil {
				return xerrors.Errorf("publish history log: %w", err)
			}
		case database.ProvisionerJobTypeWorkspaceProvision:
			if len(update.WorkspaceProvisionLogs) == 0 {
				continue
			}
			var input workspaceProvisionJob
			err = json.Unmarshal(job.Input, &input)
			if err != nil {
				return xerrors.Errorf("unmarshal job input %q: %s", job.Input, err)
			}
			insertParams := database.InsertWorkspaceHistoryLogsParams{
				WorkspaceHistoryID: input.WorkspaceHistoryID,
			}
			for _, log := range update.WorkspaceProvisionLogs {
				logLevel, err := convertLogLevel(log.Level)
				if err != nil {
					return xerrors.Errorf("convert log level: %w", err)
				}
				logSource, err := convertLogSource(log.Source)
				if err != nil {
					return xerrors.Errorf("convert log source: %w", err)
				}
				insertParams.ID = append(insertParams.ID, uuid.New())
				insertParams.CreatedAt = append(insertParams.CreatedAt, time.UnixMilli(log.CreatedAt))
				insertParams.Level = append(insertParams.Level, logLevel)
				insertParams.Source = append(insertParams.Source, logSource)
				insertParams.Output = append(insertParams.Output, log.Output)
			}
			logs, err := server.Database.InsertWorkspaceHistoryLogs(stream.Context(), insertParams)
			if err != nil {
				return xerrors.Errorf("insert workspace logs: %w", err)
			}
			data, err := json.Marshal(logs)
			if err != nil {
				return xerrors.Errorf("marshal project log: %w", err)
			}
			err = server.Pubsub.Publish(workspaceHistoryLogsChannel(input.WorkspaceHistoryID), data)
			if err != nil {
				return xerrors.Errorf("publish history log: %w", err)
			}
		}
	}
}

func (server *provisionerdServer) CancelJob(ctx context.Context, cancelJob *proto.CancelledJob) (*proto.Empty, error) {
|
||||
jobID, err := uuid.Parse(cancelJob.JobId)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("parse job id: %w", err)
|
||||
}
|
||||
err = server.Database.UpdateProvisionerJobByID(ctx, database.UpdateProvisionerJobByIDParams{
|
||||
ID: jobID,
|
||||
CancelledAt: sql.NullTime{
|
||||
Time: database.Now(),
|
||||
Valid: true,
|
||||
},
|
||||
UpdatedAt: database.Now(),
|
||||
Error: sql.NullString{
|
||||
String: cancelJob.Error,
|
||||
Valid: cancelJob.Error != "",
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("update provisioner job: %w", err)
|
||||
}
|
||||
return &proto.Empty{}, nil
|
||||
}
|
||||
|
||||
// CompleteJob is triggered by a provision daemon to mark a provisioner job as completed.
|
||||
func (server *provisionerdServer) CompleteJob(ctx context.Context, completed *proto.CompletedJob) (*proto.Empty, error) {
|
||||
jobID, err := uuid.Parse(completed.JobId)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("parse job id: %w", err)
|
||||
}
|
||||
job, err := server.Database.GetProvisionerJobByID(ctx, jobID)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("get job by id: %w", err)
|
||||
}
|
||||
// TODO: Check if the worker ID matches!
|
||||
// If it doesn't, a provisioner daemon could be impersonating another job!
|
||||
|
||||
switch jobType := completed.Type.(type) {
|
||||
case *proto.CompletedJob_ProjectImport_:
|
||||
var input projectImportJob
|
||||
err = json.Unmarshal(job.Input, &input)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("unmarshal job data: %w", err)
|
||||
}
|
||||
|
||||
// Validate that all parameters send from the provisioner daemon
|
||||
// follow the protocol.
|
||||
projectParameters := make([]database.InsertProjectParameterParams, 0, len(jobType.ProjectImport.ParameterSchemas))
|
||||
for _, protoParameter := range jobType.ProjectImport.ParameterSchemas {
|
||||
validationTypeSystem, err := convertValidationTypeSystem(protoParameter.ValidationTypeSystem)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("convert validation type system for %q: %w", protoParameter.Name, err)
|
||||
}
|
||||
|
||||
projectParameter := database.InsertProjectParameterParams{
|
||||
ID: uuid.New(),
|
||||
CreatedAt: database.Now(),
|
||||
ProjectHistoryID: input.ProjectHistoryID,
|
||||
Name: protoParameter.Name,
|
||||
Description: protoParameter.Description,
|
||||
RedisplayValue: protoParameter.RedisplayValue,
|
||||
ValidationError: protoParameter.ValidationError,
|
||||
ValidationCondition: protoParameter.ValidationCondition,
|
||||
ValidationValueType: protoParameter.ValidationValueType,
|
||||
ValidationTypeSystem: validationTypeSystem,
|
||||
|
||||
AllowOverrideDestination: protoParameter.AllowOverrideDestination,
|
||||
AllowOverrideSource: protoParameter.AllowOverrideSource,
|
||||
}
|
||||
|
||||
// It's possible a parameter doesn't define a default source!
|
||||
if protoParameter.DefaultSource != nil {
|
||||
parameterSourceScheme, err := convertParameterSourceScheme(protoParameter.DefaultSource.Scheme)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("convert parameter source scheme: %w", err)
|
||||
}
|
||||
projectParameter.DefaultSourceScheme = parameterSourceScheme
|
||||
projectParameter.DefaultSourceValue = sql.NullString{
|
||||
String: protoParameter.DefaultSource.Value,
|
||||
Valid: protoParameter.DefaultSource.Value != "",
|
||||
}
|
||||
}
|
||||
|
||||
// It's possible a parameter doesn't define a default destination!
|
||||
if protoParameter.DefaultDestination != nil {
|
||||
parameterDestinationScheme, err := convertParameterDestinationScheme(protoParameter.DefaultDestination.Scheme)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("convert parameter destination scheme: %w", err)
|
||||
}
|
||||
projectParameter.DefaultDestinationScheme = parameterDestinationScheme
|
||||
projectParameter.DefaultDestinationValue = sql.NullString{
|
||||
String: protoParameter.DefaultDestination.Value,
|
||||
Valid: protoParameter.DefaultDestination.Value != "",
|
||||
}
|
||||
}
|
||||
|
||||
projectParameters = append(projectParameters, projectParameter)
|
||||
}
|
||||
|
||||
// This must occur in a transaction in case of failure.
|
||||
err = server.Database.InTx(func(db database.Store) error {
|
||||
err = db.UpdateProvisionerJobByID(ctx, database.UpdateProvisionerJobByIDParams{
|
||||
ID: jobID,
|
||||
UpdatedAt: database.Now(),
|
||||
CompletedAt: sql.NullTime{
|
||||
Time: database.Now(),
|
||||
Valid: true,
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
return xerrors.Errorf("update provisioner job: %w", err)
|
||||
}
|
||||
// This could be a bulk-insert operation to improve performance.
|
||||
// See the "InsertWorkspaceHistoryLogs" query.
|
||||
for _, projectParameter := range projectParameters {
|
||||
_, err = db.InsertProjectParameter(ctx, projectParameter)
|
||||
if err != nil {
|
||||
return xerrors.Errorf("insert project parameter %q: %w", projectParameter.Name, err)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
})
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("complete job: %w", err)
|
||||
}
|
||||
case *proto.CompletedJob_WorkspaceProvision_:
|
||||
var input workspaceProvisionJob
|
||||
err = json.Unmarshal(job.Input, &input)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("unmarshal job data: %w", err)
|
||||
}
|
||||
|
||||
workspaceHistory, err := server.Database.GetWorkspaceHistoryByID(ctx, input.WorkspaceHistoryID)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("get workspace history: %w", err)
|
||||
}
|
||||
|
||||
err = server.Database.InTx(func(db database.Store) error {
|
||||
err = db.UpdateProvisionerJobByID(ctx, database.UpdateProvisionerJobByIDParams{
|
||||
ID: jobID,
|
||||
UpdatedAt: database.Now(),
|
||||
CompletedAt: sql.NullTime{
|
||||
Time: database.Now(),
|
||||
Valid: true,
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
return xerrors.Errorf("update provisioner job: %w", err)
|
||||
}
|
||||
err = db.UpdateWorkspaceHistoryByID(ctx, database.UpdateWorkspaceHistoryByIDParams{
|
||||
ID: workspaceHistory.ID,
|
||||
UpdatedAt: database.Now(),
|
||||
ProvisionerState: jobType.WorkspaceProvision.State,
|
||||
})
|
||||
if err != nil {
|
||||
return xerrors.Errorf("update workspace history: %w", err)
|
||||
}
|
||||
// This could be a bulk insert to improve performance.
|
||||
for _, protoResource := range jobType.WorkspaceProvision.Resources {
|
||||
_, err = db.InsertWorkspaceResource(ctx, database.InsertWorkspaceResourceParams{
|
||||
ID: uuid.New(),
|
||||
CreatedAt: database.Now(),
|
||||
WorkspaceHistoryID: input.WorkspaceHistoryID,
|
||||
Type: protoResource.Type,
|
||||
Name: protoResource.Name,
|
||||
// TODO: Generate this at the variable validation phase.
|
||||
// Set the value in `default_source`, and disallow overwrite.
|
||||
WorkspaceAgentToken: uuid.NewString(),
|
||||
})
|
||||
if err != nil {
|
||||
return xerrors.Errorf("insert workspace resource %q: %w", protoResource.Name, err)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
})
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("complete job: %w", err)
|
||||
}
|
||||
default:
|
||||
return nil, xerrors.Errorf("unknown job type %q; ensure coderd and provisionerd versions match",
|
||||
reflect.TypeOf(completed.Type).String())
|
||||
}
|
||||
|
||||
return &proto.Empty{}, nil
|
||||
}
|
||||
|
||||
func convertValidationTypeSystem(typeSystem sdkproto.ParameterSchema_TypeSystem) (database.ParameterTypeSystem, error) {
|
||||
switch typeSystem {
|
||||
case sdkproto.ParameterSchema_HCL:
|
||||
return database.ParameterTypeSystemHCL, nil
|
||||
default:
|
||||
return database.ParameterTypeSystem(""), xerrors.Errorf("unknown type system: %d", typeSystem)
|
||||
}
|
||||
}
|
||||
|
||||
func convertParameterSourceScheme(sourceScheme sdkproto.ParameterSource_Scheme) (database.ParameterSourceScheme, error) {
|
||||
switch sourceScheme {
|
||||
case sdkproto.ParameterSource_DATA:
|
||||
return database.ParameterSourceSchemeData, nil
|
||||
default:
|
||||
return database.ParameterSourceScheme(""), xerrors.Errorf("unknown parameter source scheme: %d", sourceScheme)
|
||||
}
|
||||
}
|
||||
|
||||
func convertParameterDestinationScheme(destinationScheme sdkproto.ParameterDestination_Scheme) (database.ParameterDestinationScheme, error) {
|
||||
switch destinationScheme {
|
||||
case sdkproto.ParameterDestination_ENVIRONMENT_VARIABLE:
|
||||
return database.ParameterDestinationSchemeEnvironmentVariable, nil
|
||||
case sdkproto.ParameterDestination_PROVISIONER_VARIABLE:
|
||||
return database.ParameterDestinationSchemeProvisionerVariable, nil
|
||||
default:
|
||||
return database.ParameterDestinationScheme(""), xerrors.Errorf("unknown parameter destination scheme: %d", destinationScheme)
|
||||
}
|
||||
}
|
||||
|
||||
func convertLogLevel(logLevel sdkproto.LogLevel) (database.LogLevel, error) {
|
||||
switch logLevel {
|
||||
case sdkproto.LogLevel_TRACE:
|
||||
return database.LogLevelTrace, nil
|
||||
case sdkproto.LogLevel_DEBUG:
|
||||
return database.LogLevelDebug, nil
|
||||
case sdkproto.LogLevel_INFO:
|
||||
return database.LogLevelInfo, nil
|
||||
case sdkproto.LogLevel_WARN:
|
||||
return database.LogLevelWarn, nil
|
||||
case sdkproto.LogLevel_ERROR:
|
||||
return database.LogLevelError, nil
|
||||
default:
|
||||
return database.LogLevel(""), xerrors.Errorf("unknown log level: %d", logLevel)
|
||||
}
|
||||
}
|
||||
|
||||
func convertLogSource(logSource proto.LogSource) (database.LogSource, error) {
|
||||
switch logSource {
|
||||
case proto.LogSource_PROVISIONER_DAEMON:
|
||||
return database.LogSourceProvisionerDaemon, nil
|
||||
case proto.LogSource_PROVISIONER:
|
||||
return database.LogSourceProvisioner, nil
|
||||
default:
|
||||
return database.LogSource(""), xerrors.Errorf("unknown log source: %d", logSource)
|
||||
}
|
||||
}
|
coderd/provisionerdaemons_test.go (new file, 26 lines)
@@ -0,0 +1,26 @@
package coderd_test

import (
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/require"

	"github.com/coder/coder/coderd/coderdtest"
)

func TestProvisionerDaemons(t *testing.T) {
	t.Parallel()

	t.Run("Register", func(t *testing.T) {
		t.Parallel()
		server := coderdtest.New(t)
		_ = server.AddProvisionerd(t)
		require.Eventually(t, func() bool {
			daemons, err := server.Client.ProvisionerDaemons(context.Background())
			require.NoError(t, err)
			return len(daemons) > 0
		}, time.Second, 10*time.Millisecond)
	})
}
coderd/provisioners.go (new file, 78 lines)
@@ -0,0 +1,78 @@
package coderd

import (
	"time"

	"github.com/google/uuid"

	"github.com/coder/coder/database"
)

type ProvisionerJobStatus string

// Completed returns whether the job has finished processing.
func (p ProvisionerJobStatus) Completed() bool {
	return p == ProvisionerJobStatusSucceeded || p == ProvisionerJobStatusFailed
}

const (
	ProvisionerJobStatusPending   ProvisionerJobStatus = "pending"
	ProvisionerJobStatusRunning   ProvisionerJobStatus = "running"
	ProvisionerJobStatusSucceeded ProvisionerJobStatus = "succeeded"
	ProvisionerJobStatusCancelled ProvisionerJobStatus = "canceled"
	ProvisionerJobStatusFailed    ProvisionerJobStatus = "failed"
)

type ProvisionerJob struct {
	CreatedAt   time.Time                `json:"created_at"`
	UpdatedAt   time.Time                `json:"updated_at"`
	StartedAt   *time.Time               `json:"started_at,omitempty"`
	CancelledAt *time.Time               `json:"canceled_at,omitempty"`
	CompletedAt *time.Time               `json:"completed_at,omitempty"`
	Status      ProvisionerJobStatus     `json:"status"`
	Error       string                   `json:"error,omitempty"`
	Provisioner database.ProvisionerType `json:"provisioner"`
	WorkerID    *uuid.UUID               `json:"worker_id,omitempty"`
}

func convertProvisionerJob(provisionerJob database.ProvisionerJob) ProvisionerJob {
	job := ProvisionerJob{
		CreatedAt:   provisionerJob.CreatedAt,
		UpdatedAt:   provisionerJob.UpdatedAt,
		Error:       provisionerJob.Error.String,
		Provisioner: provisionerJob.Provisioner,
	}
	// Apply the optional values to the struct.
	if provisionerJob.StartedAt.Valid {
		job.StartedAt = &provisionerJob.StartedAt.Time
	}
	if provisionerJob.CancelledAt.Valid {
		job.CancelledAt = &provisionerJob.CancelledAt.Time
	}
	if provisionerJob.CompletedAt.Valid {
		job.CompletedAt = &provisionerJob.CompletedAt.Time
	}
	if provisionerJob.WorkerID.Valid {
		job.WorkerID = &provisionerJob.WorkerID.UUID
	}

	switch {
	case provisionerJob.CancelledAt.Valid:
		job.Status = ProvisionerJobStatusCancelled
	case !provisionerJob.StartedAt.Valid:
		job.Status = ProvisionerJobStatusPending
	case provisionerJob.CompletedAt.Valid:
		job.Status = ProvisionerJobStatusSucceeded
	case database.Now().Sub(provisionerJob.UpdatedAt) > 30*time.Second:
		job.Status = ProvisionerJobStatusFailed
		job.Error = "Worker failed to update job in time."
	default:
		job.Status = ProvisionerJobStatusRunning
	}

	if job.Error != "" {
		job.Status = ProvisionerJobStatusFailed
	}

	return job
}
@@ -2,6 +2,7 @@ package coderd

import (
	"database/sql"
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
@@ -22,13 +23,14 @@ type WorkspaceHistory struct {
	ID               uuid.UUID                    `json:"id"`
	CreatedAt        time.Time                    `json:"created_at"`
	UpdatedAt        time.Time                    `json:"updated_at"`
-	CompletedAt      time.Time                    `json:"completed_at"`
	WorkspaceID      uuid.UUID                    `json:"workspace_id"`
	ProjectHistoryID uuid.UUID                    `json:"project_history_id"`
	BeforeID         uuid.UUID                    `json:"before_id"`
	AfterID          uuid.UUID                    `json:"after_id"`
	Name             string                       `json:"name"`
	Transition       database.WorkspaceTransition `json:"transition"`
	Initiator        string                       `json:"initiator"`
+	Provision        ProvisionerJob               `json:"provision"`
}

// CreateWorkspaceHistoryRequest provides options to update the latest workspace history.
@@ -37,8 +39,6 @@ type CreateWorkspaceHistoryRequest struct {
	Transition database.WorkspaceTransition `json:"transition" validate:"oneof=create start stop delete,required"`
}

// Begins transitioning a workspace to a new state. This queues a provision job to asynchronously
// update the underlying infrastructure. Only one historical transition can occur at a time.
func (api *api) postWorkspaceHistoryByUser(rw http.ResponseWriter, r *http.Request) {
	var createBuild CreateWorkspaceHistoryRequest
	if !httpapi.Read(rw, r, &createBuild) {
@@ -63,12 +63,41 @@ func (api *api) postWorkspaceHistoryByUser(rw http.ResponseWriter, r *http.Reque
		})
		return
	}
+	projectHistoryJob, err := api.Database.GetProvisionerJobByID(r.Context(), projectHistory.ImportJobID)
+	if err != nil {
+		httpapi.Write(rw, http.StatusInternalServerError, httpapi.Response{
+			Message: fmt.Sprintf("get provisioner job: %s", err),
+		})
+		return
+	}
+	projectHistoryJobStatus := convertProvisionerJob(projectHistoryJob).Status
+	switch projectHistoryJobStatus {
+	case ProvisionerJobStatusPending, ProvisionerJobStatusRunning:
+		httpapi.Write(rw, http.StatusPreconditionFailed, httpapi.Response{
+			Message: fmt.Sprintf("The provided project history is %s. Wait for it to complete importing!", projectHistoryJobStatus),
+		})
+		return
+	case ProvisionerJobStatusFailed:
+		httpapi.Write(rw, http.StatusBadRequest, httpapi.Response{
+			Message: fmt.Sprintf("The provided project history %q has failed to import. You cannot create workspaces using it!", projectHistory.Name),
+		})
+		return
+	}

	project, err := api.Database.GetProjectByID(r.Context(), projectHistory.ProjectID)
	if err != nil {
		httpapi.Write(rw, http.StatusInternalServerError, httpapi.Response{
			Message: fmt.Sprintf("get project: %s", err),
		})
		return
	}

	// Store the prior history ID, if it exists, so we can update it after we create the new one.
	priorHistoryID := uuid.NullUUID{}
	priorHistory, err := api.Database.GetWorkspaceHistoryByWorkspaceIDWithoutAfter(r.Context(), workspace.ID)
	if err == nil {
		if !priorHistory.CompletedAt.Valid {
			priorJob, err := api.Database.GetProvisionerJobByID(r.Context(), priorHistory.ProvisionJobID)
			if err == nil && !convertProvisionerJob(priorJob).Status.Completed() {
				httpapi.Write(rw, http.StatusConflict, httpapi.Response{
					Message: "a workspace build is already active",
				})
@@ -87,12 +116,36 @@ func (api *api) postWorkspaceHistoryByUser(rw http.ResponseWriter, r *http.Reque
		return
	}

+	var provisionerJob database.ProvisionerJob
+	var workspaceHistory database.WorkspaceHistory
+	// This must happen in a transaction to ensure the history can be inserted, and
+	// the prior history can update its "after" column to point at the new one.
	err = api.Database.InTx(func(db database.Store) error {
+		// Generate the ID beforehand so the provisioner job is aware of it!
+		workspaceHistoryID := uuid.New()
+		input, err := json.Marshal(workspaceProvisionJob{
+			WorkspaceHistoryID: workspaceHistoryID,
+		})
+		if err != nil {
+			return xerrors.Errorf("marshal provision job: %w", err)
+		}
+
+		provisionerJob, err = db.InsertProvisionerJob(r.Context(), database.InsertProvisionerJobParams{
+			ID:          uuid.New(),
+			CreatedAt:   database.Now(),
+			UpdatedAt:   database.Now(),
+			InitiatorID: user.ID,
+			Provisioner: project.Provisioner,
+			Type:        database.ProvisionerJobTypeWorkspaceProvision,
+			ProjectID:   project.ID,
+			Input:       input,
+		})
+		if err != nil {
+			return xerrors.Errorf("insert provisioner job: %w", err)
+		}
+
		workspaceHistory, err = db.InsertWorkspaceHistory(r.Context(), database.InsertWorkspaceHistoryParams{
-			ID:          uuid.New(),
+			ID:          workspaceHistoryID,
			CreatedAt:   database.Now(),
			UpdatedAt:   database.Now(),
			WorkspaceID: workspace.ID,
@@ -100,8 +153,7 @@ func (api *api) postWorkspaceHistoryByUser(rw http.ResponseWriter, r *http.Reque
			BeforeID:   priorHistoryID,
			Initiator:  user.ID,
			Transition: createBuild.Transition,
-			// This should create a provision job once that gets implemented!
-			ProvisionJobID: uuid.New(),
+			ProvisionJobID: provisionerJob.ID,
		})
		if err != nil {
			return xerrors.Errorf("insert workspace history: %w", err)
@@ -132,7 +184,7 @@ func (api *api) postWorkspaceHistoryByUser(rw http.ResponseWriter, r *http.Reque
	}

	render.Status(r, http.StatusCreated)
-	render.JSON(rw, r, convertWorkspaceHistory(workspaceHistory))
+	render.JSON(rw, r, convertWorkspaceHistory(workspaceHistory, provisionerJob))
}

// Returns all workspace history. This is not sorted. Use before/after to chronologically sort.
@@ -152,31 +204,52 @@ func (api *api) workspaceHistoryByUser(rw http.ResponseWriter, r *http.Request)

	apiHistory := make([]WorkspaceHistory, 0, len(histories))
	for _, history := range histories {
-		apiHistory = append(apiHistory, convertWorkspaceHistory(history))
+		job, err := api.Database.GetProvisionerJobByID(r.Context(), history.ProvisionJobID)
+		if err != nil {
+			httpapi.Write(rw, http.StatusInternalServerError, httpapi.Response{
+				Message: fmt.Sprintf("get provisioner job: %s", err),
+			})
+			return
+		}
+		apiHistory = append(apiHistory, convertWorkspaceHistory(history, job))
	}

	render.Status(r, http.StatusOK)
	render.JSON(rw, r, apiHistory)
}

-// Returns the latest workspace history. This works by querying for history without "after" set.
-func (api *api) latestWorkspaceHistoryByUser(rw http.ResponseWriter, r *http.Request) {
-	workspace := httpmw.WorkspaceParam(r)
-
-	history, err := api.Database.GetWorkspaceHistoryByWorkspaceIDWithoutAfter(r.Context(), workspace.ID)
-	if errors.Is(err, sql.ErrNoRows) {
-		httpapi.Write(rw, http.StatusNotFound, httpapi.Response{
-			Message: "workspace has no history",
-		})
-		return
-	}
+func (api *api) workspaceHistoryByName(rw http.ResponseWriter, r *http.Request) {
+	workspaceHistory := httpmw.WorkspaceHistoryParam(r)
+	job, err := api.Database.GetProvisionerJobByID(r.Context(), workspaceHistory.ProvisionJobID)
	if err != nil {
		httpapi.Write(rw, http.StatusInternalServerError, httpapi.Response{
-			Message: fmt.Sprintf("get workspace history: %s", err),
+			Message: fmt.Sprintf("get provisioner job: %s", err),
		})
		return
	}

	render.Status(r, http.StatusOK)
-	render.JSON(rw, r, convertWorkspaceHistory(history))
+	render.JSON(rw, r, convertWorkspaceHistory(workspaceHistory, job))
}

+// Converts the internal history representation to a public external-facing model.
+func convertWorkspaceHistory(workspaceHistory database.WorkspaceHistory, provisionerJob database.ProvisionerJob) WorkspaceHistory {
+	//nolint:unconvert
+	return WorkspaceHistory(WorkspaceHistory{
+		ID:               workspaceHistory.ID,
+		CreatedAt:        workspaceHistory.CreatedAt,
+		UpdatedAt:        workspaceHistory.UpdatedAt,
+		WorkspaceID:      workspaceHistory.WorkspaceID,
+		ProjectHistoryID: workspaceHistory.ProjectHistoryID,
+		BeforeID:         workspaceHistory.BeforeID.UUID,
+		AfterID:          workspaceHistory.AfterID.UUID,
+		Name:             workspaceHistory.Name,
+		Transition:       workspaceHistory.Transition,
+		Initiator:        workspaceHistory.Initiator,
+		Provision:        convertProvisionerJob(provisionerJob),
+	})
+}
+
+func workspaceHistoryLogsChannel(workspaceHistoryID uuid.UUID) string {
+	return fmt.Sprintf("workspace-history-logs:%s", workspaceHistoryID)
+}
@@ -5,6 +5,7 @@ import (
	"bytes"
	"context"
	"testing"
	"time"

	"github.com/google/uuid"
	"github.com/stretchr/testify/require"
@@ -32,21 +33,31 @@ func TestWorkspaceHistory(t *testing.T) {
		return project, workspace
	}

-	setupProjectHistory := func(t *testing.T, client *codersdk.Client, user coderd.CreateInitialUserRequest, project coderd.Project) coderd.ProjectHistory {
+	setupProjectHistory := func(t *testing.T, client *codersdk.Client, user coderd.CreateInitialUserRequest, project coderd.Project, files map[string]string) coderd.ProjectHistory {
		var buffer bytes.Buffer
		writer := tar.NewWriter(&buffer)
-		err := writer.WriteHeader(&tar.Header{
-			Name: "file",
-			Size: 1 << 10,
-		})
-		require.NoError(t, err)
-		_, err = writer.Write(make([]byte, 1<<10))
+		for path, content := range files {
+			err := writer.WriteHeader(&tar.Header{
+				Name: path,
+				Size: int64(len(content)),
+			})
+			require.NoError(t, err)
+			_, err = writer.Write([]byte(content))
+			require.NoError(t, err)
+		}
+		err := writer.Flush()
		require.NoError(t, err)

		projectHistory, err := client.CreateProjectHistory(context.Background(), user.Organization, project.Name, coderd.CreateProjectHistoryRequest{
			StorageMethod: database.ProjectStorageMethodInlineArchive,
			StorageSource: buffer.Bytes(),
		})
		require.NoError(t, err)
+		require.Eventually(t, func() bool {
+			hist, err := client.ProjectHistory(context.Background(), user.Organization, project.Name, projectHistory.Name)
+			require.NoError(t, err)
+			return hist.Import.Status.Completed()
+		}, 15*time.Second, 50*time.Millisecond)
		return projectHistory
	}

@@ -54,17 +65,20 @@ func TestWorkspaceHistory(t *testing.T) {
		t.Parallel()
		server := coderdtest.New(t)
		user := server.RandomInitialUser(t)
+		_ = server.AddProvisionerd(t)
		project, workspace := setupProjectAndWorkspace(t, server.Client, user)
-		history, err := server.Client.WorkspaceHistory(context.Background(), "", workspace.Name)
+		history, err := server.Client.ListWorkspaceHistory(context.Background(), "", workspace.Name)
		require.NoError(t, err)
		require.Len(t, history, 0)
-		projectVersion := setupProjectHistory(t, server.Client, user, project)
+		projectVersion := setupProjectHistory(t, server.Client, user, project, map[string]string{
+			"example": "file",
+		})
		_, err = server.Client.CreateWorkspaceHistory(context.Background(), "", workspace.Name, coderd.CreateWorkspaceHistoryRequest{
			ProjectHistoryID: projectVersion.ID,
			Transition:       database.WorkspaceTransitionCreate,
		})
		require.NoError(t, err)
-		history, err = server.Client.WorkspaceHistory(context.Background(), "", workspace.Name)
+		history, err = server.Client.ListWorkspaceHistory(context.Background(), "", workspace.Name)
		require.NoError(t, err)
		require.Len(t, history, 1)
	})
@@ -73,16 +87,19 @@ func TestWorkspaceHistory(t *testing.T) {
		t.Parallel()
		server := coderdtest.New(t)
		user := server.RandomInitialUser(t)
+		_ = server.AddProvisionerd(t)
		project, workspace := setupProjectAndWorkspace(t, server.Client, user)
-		_, err := server.Client.LatestWorkspaceHistory(context.Background(), "", workspace.Name)
+		_, err := server.Client.WorkspaceHistory(context.Background(), "", workspace.Name, "")
		require.Error(t, err)
-		projectVersion := setupProjectHistory(t, server.Client, user, project)
+		projectHistory := setupProjectHistory(t, server.Client, user, project, map[string]string{
+			"some": "file",
+		})
		_, err = server.Client.CreateWorkspaceHistory(context.Background(), "", workspace.Name, coderd.CreateWorkspaceHistoryRequest{
-			ProjectHistoryID: projectVersion.ID,
+			ProjectHistoryID: projectHistory.ID,
			Transition:       database.WorkspaceTransitionCreate,
		})
		require.NoError(t, err)
-		_, err = server.Client.LatestWorkspaceHistory(context.Background(), "", workspace.Name)
+		_, err = server.Client.WorkspaceHistory(context.Background(), "", workspace.Name, "")
		require.NoError(t, err)
	})

@@ -90,22 +107,36 @@ func TestWorkspaceHistory(t *testing.T) {
		t.Parallel()
		server := coderdtest.New(t)
		user := server.RandomInitialUser(t)
+		_ = server.AddProvisionerd(t)
		project, workspace := setupProjectAndWorkspace(t, server.Client, user)
-		projectHistory := setupProjectHistory(t, server.Client, user, project)
-
+		projectHistory := setupProjectHistory(t, server.Client, user, project, map[string]string{
+			"main.tf": `resource "null_resource" "example" {}`,
+		})
		_, err := server.Client.CreateWorkspaceHistory(context.Background(), "", workspace.Name, coderd.CreateWorkspaceHistoryRequest{
			ProjectHistoryID: projectHistory.ID,
			Transition:       database.WorkspaceTransitionCreate,
		})
		require.NoError(t, err)
+
+		var workspaceHistory coderd.WorkspaceHistory
+		require.Eventually(t, func() bool {
+			workspaceHistory, err = server.Client.WorkspaceHistory(context.Background(), "", workspace.Name, "")
+			require.NoError(t, err)
+			return workspaceHistory.Provision.Status.Completed()
+		}, 15*time.Second, 50*time.Millisecond)
+		require.Equal(t, "", workspaceHistory.Provision.Error)
+		require.Equal(t, coderd.ProvisionerJobStatusSucceeded, workspaceHistory.Provision.Status)
	})

	t.Run("CreateHistoryAlreadyInProgress", func(t *testing.T) {
		t.Parallel()
		server := coderdtest.New(t)
		user := server.RandomInitialUser(t)
+		_ = server.AddProvisionerd(t)
		project, workspace := setupProjectAndWorkspace(t, server.Client, user)
-		projectHistory := setupProjectHistory(t, server.Client, user, project)
+		projectHistory := setupProjectHistory(t, server.Client, user, project, map[string]string{
+			"some": "content",
+		})

		_, err := server.Client.CreateWorkspaceHistory(context.Background(), "", workspace.Name, coderd.CreateWorkspaceHistoryRequest{
			ProjectHistoryID: projectHistory.ID,
@@ -124,6 +155,7 @@ func TestWorkspaceHistory(t *testing.T) {
		t.Parallel()
		server := coderdtest.New(t)
		user := server.RandomInitialUser(t)
+		_ = server.AddProvisionerd(t)
		_, workspace := setupProjectAndWorkspace(t, server.Client, user)

		_, err := server.Client.CreateWorkspaceHistory(context.Background(), "", workspace.Name, coderd.CreateWorkspaceHistoryRequest{

@@ -149,20 +149,3 @@ func (*api) workspaceByUser(rw http.ResponseWriter, r *http.Request) {
func convertWorkspace(workspace database.Workspace) Workspace {
	return Workspace(workspace)
}
-
-// Converts the internal history representation to a public external-facing model.
-func convertWorkspaceHistory(workspaceHistory database.WorkspaceHistory) WorkspaceHistory {
-	//nolint:unconvert
-	return WorkspaceHistory(WorkspaceHistory{
-		ID:               workspaceHistory.ID,
-		CreatedAt:        workspaceHistory.CreatedAt,
-		UpdatedAt:        workspaceHistory.UpdatedAt,
-		CompletedAt:      workspaceHistory.CompletedAt.Time,
-		WorkspaceID:      workspaceHistory.WorkspaceID,
-		ProjectHistoryID: workspaceHistory.ProjectHistoryID,
-		BeforeID:         workspaceHistory.BeforeID.UUID,
-		AfterID:          workspaceHistory.AfterID.UUID,
-		Transition:       workspaceHistory.Transition,
-		Initiator:        workspaceHistory.Initiator,
-	})
-}