Compare commits


885 Commits

Author SHA1 Message Date
02ce5e26b7 Release 1.1 fix (#1316)
* Update repo location

* Update repo location

* Update chart repo location in Makefile
2020-12-17 16:01:44 -05:00
32611444d6 Release 1.1 (#1306)
* updated versions in various files. ran make release and make api/api.md targets as per release steps

* updated versions to 1.1.0-rc.1

* Updated Makefile BASE_VERSION

* Updated GKE_VERSION in create-gke-cluster target

* Updated appVersion and version tags in Chart.yaml

* Updated tag in values.yaml

* updated _OM_VERSION in cloudbuild.yaml

* make release and make api/api.md execution
2020-12-16 15:58:45 -05:00
c0b355da51 release-1.1.0-rc.1 (#1286) 2020-11-18 13:42:34 -08:00
6f05e526fb Improved tests for statestore - redis (#1264) 2020-10-12 19:21:51 -07:00
496d156faa Added unary interceptor and removed extra logs (#1255)
* added unary interceptor and removed logs from frontend service

* removed extra logs from backend service

* updated evaluator logging

* updated query logging


linter fix

* fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-21 15:02:29 -07:00
3a3d618c43 Replaced GS bucket links with substitution variables (#1262) 2020-09-21 12:22:03 -07:00
e1cbd855f5 Added time to assignment metrics to backend (#1241)
* Added time to assignment metrics to backend

- The time to match for tickets is now recorded as a metric

* Fixed formatting errors

* Fixed minor review changes

- Renamed function to calculate time to assignment
- Moved from callback to returning tickets from UpdateAssignments

* Return only successfully assigned tickets

* Fixed linting errors
2020-09-15 11:18:17 -07:00
10b36705f0 Tests update: use require assertion (#1257)
* use require in filter package


fix

* use require in rpc package

* use require in tools/certgen package

* use require in mmf package

* use require in telemetry and logging


fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-09 14:24:18 -07:00
a6fc4724bc Fix spelling in Proto files (#1256)
Regenerated dependent Swagger and Golang files.
2020-09-09 12:20:29 -07:00
511337088a Reduce logging in statestore - redis (#1248)
* reduce logging in statestore - redis  #1228


fix

* added grpc interceptors to log errors

lint fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-02 12:50:39 -07:00
5f67bb36a6 Use require in app tests and improve error messages (#1253) 2020-08-31 13:17:29 -07:00
94d2105809 Use require in tests to avoid nil pointer exceptions (#1249)
* use require in tests to avoid nil pointer exceptions

* statestore tests: replaced assert with require
2020-08-28 12:19:53 -07:00
d85f1f4bc7 Added a PR template (#1250) 2020-08-25 14:16:36 -07:00
79e9afeca7 Use Helm release to name resources (#1246)
* Fix indent of TLS certificate annotations

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Small whitespace fixes

Picked up the VSCode Yaml auto-formatter.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Don't pass 'query' config to open-match-customize

It's not used.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Don't pass frontend/backend to open-match-scale

They're not used.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Allow redis to derive resource names from the release

This ensures that multiple OpenMatch installs in a single namespace do
not attempt to install Redis stacks with the same resource names.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Include release names in PodSecurityPolicies

This avoids conflicts between multiple Open Match installations in the
same namespace.

Uses the `openmatch.fullname` named template, as in Helm's default chart.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the Service Account name release-dependent

This makes the existing global.kubernetes.serviceAccount value an
override if specified, but if left unspecified, an appropriate name will
be chosen.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the RBAC resource names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the TLS Secret names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the CI-test resource names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make all Pod/Service names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make Grafana dashboard names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make open-match-scale slightly more standalone

This makes the hostname templates more standard in their case, because
there is no need to coordinate the hostname with the superchart.

This chart still uses a lot of templates from the open-match chart
though, so it's not yet standalone-installable.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make ConfigMap default names release-dependent

A specific ConfigMap can be applied in the same way it was previously,
by overriding configs.default.configName and
configs.override.configName, in which case it is up to the person doing
the deployment to manage name conflicts.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Use correct Jaeger service names for subcharts

This fixes an existing issue where the Jaeger connection URLs in
the configuration would be incorrect if your Helm chart was not
installed as a release named "open-match".

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Populate Grafana Datasource using a ConfigMap

This allows us to access the Prometheus subchart's named template to get
the correct Service name for the datasource.

This fixes an existing issue where the Prometheus data source URL in
Grafana would be incorrect if your Helm chart was not installed as
a release named "open-match".

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>
2020-08-17 12:04:26 -07:00
3334f7f74a Make: fix create-gke-cluster, create clusterRole (#1234)
If `gcloud auth list` returns multiple accounts the command would fail;
adding `grep active` fixes this.
2020-07-10 10:57:16 -07:00
85ce954eb9 Update backend_service.go (#1233)
Fixed typo
2020-07-09 11:45:33 -07:00
679cfb5839 Rename Ignore list to Pending Release (#1230)
Fix naming across all code. Swagger changes left.

Co-authored-by: Scott Redig <sredig@google.com>
2020-07-08 13:56:30 -07:00
c53a5b7c88 Update Swagger JSONs as well as go proto files (#1231)
Output of running `make presubmit` on master.

Co-authored-by: Scott Redig <sredig@google.com>
2020-07-08 12:52:51 -07:00
cfb316169a Use supported GKE cluster version (#1232)
Update Makefile.
2020-07-08 12:25:53 -07:00
a9365b5333 fix release.sh not knowing the right images (#1219) 2020-06-01 11:05:27 -07:00
93df53201c Only install ci components when running ci (#1213) 2020-05-08 16:06:22 -07:00
eb86841423 Add release all tickets API (#1215) 2020-05-08 15:07:45 -07:00
771f706317 Fix up gRPC service documentation (#1212) 2020-05-08 14:36:41 -07:00
a9f9a2f2e6 Remove alpha software warning (#1214) 2020-05-08 13:43:54 -07:00
068632285e Give assigned tickets a time to live, default 10 minutes (#1211) 2020-05-08 12:24:27 -07:00
113461114e Improve error message for overrunning mmfs (#1207) 2020-05-08 11:50:48 -07:00
0ac7ae13ac Rework config value naming (#1206) 2020-05-08 11:09:03 -07:00
29a2dbcf99 Unified images used in helm chart and release artifacts (#1184) 2020-05-08 10:42:16 -07:00
48d3b5c0ee Added Grafana dashboard of Open Match concepts (#1193)
Depends on #1192; resolves #1124.

Added a dashboard in Matchmaking concepts, also removed the ticket dashboard.

https://snapshot.raintank.io/dashboard/snapshot/GzXuMdqx554TB6XsNm3al4d6IEyJrEY3
2020-05-08 10:15:34 -07:00
a5fa651106 Add grpc call options to matchfunction query functions (#1205) 2020-05-07 18:24:38 -07:00
cd84d74ff9 Fix race in e2e test (#1209) 2020-05-07 15:15:19 -07:00
8c2aa1ea81 Fix evaluator not running in mmf matchid collision test (#1210) 2020-05-07 14:53:12 -07:00
493ff8e520 Refactor internal telemetry package (#1192)
This commit refactored the internal telemetry package. The pattern used in internal/app/xxx/xxx.go follows the one used in opencensus-go. Besides adding the metrics covered in #1124, this commit also introduced changes to make the telemetry settings more efficient and easier to turn on/off.

In this refactoring, a recorded metric can be cast into different views through different aggregation methods. Since recording the metric is what consumes most of the resources, this makes the telemetry setup more efficient than before.
Also removed some metrics that were meaningful for debugging in v0.8 but have become useless at the current stage.
2020-05-06 18:42:20 -07:00
8363bc5fc9 Refactor e2e testing and improve coverage (#1204) 2020-05-05 20:06:32 -07:00
144f646b7f Test tutorials (#1176) 2020-05-05 12:15:11 -07:00
b518b5cc1b Have the test instance host the mmf and evaluator (#1196) 2020-04-23 15:02:11 -07:00
af0b9fd5f7 Remove errant closing of already closed listeners (#1195) 2020-04-23 10:24:52 -07:00
5f4b522ecd Large refactor of rpc and appmain (#1194) 2020-04-21 14:07:09 -07:00
12625d7f53 Moved customized configmap values to default (#1191) 2020-04-20 15:11:13 -07:00
3248c8c4ad Refactor application binding (#1189) 2020-04-15 11:15:49 -07:00
10c0c59997 Use consistent main code for mmf and evaluator (#1185) 2020-04-09 18:37:32 -07:00
c17e3e62c0 Removed make all commands and pinned dependency versions (#1181)
* Removed make all commands

* oops
2020-04-03 12:01:32 -07:00
8e91be6201 Update development.md doc (#1182) 2020-04-02 15:50:00 -07:00
f6c837d6cd Removed make all commands and pinned dependency versions (#1181)
* Removed make all commands

* oops
2020-04-02 13:22:58 -07:00
3c8908aae0 Fix create-gke-cluster version (#1179) 2020-03-30 21:59:10 -07:00
0689d92d9c Fix the tutorials to using the new API, and be tested (#1175)
* Better follow API guidelines

* Fix tutorials

* don't include makefile fix which is broken
2020-03-27 11:58:28 -07:00
3c9a8f5568 Better follow API guidelines (#1173) 2020-03-26 15:56:34 -07:00
30204a2d15 run presubmit to update files (#1172) 2020-03-26 15:21:53 -07:00
a5b6c0c566 Have evaluator client and synchronizer return error when observing invalid match IDs (#1167)
* Have evaluator client and synchronizer return error when observing invalid match IDs

* update

* update

* update

* update

* presubmit
2020-03-26 13:59:21 -07:00
4a00baf847 Implement assignment groups and graceful failure (#1170) 2020-03-26 12:38:40 -07:00
d74262f3ba Fix broken scale dashboard (#1166) 2020-03-21 15:46:15 -07:00
2262652ea9 Add AUTH tests to Redis implementation (#1050)
* Enable and establish Redis connections via Sentinel

* Reimplement direct redis master connect

* Add AUTH tests for Redis implementation

* fix

* update
2020-03-20 17:12:55 -07:00
e15fd47535 Add a built in created time field for Tickets and the ability to filter Tickets by created time. (#1162) 2020-03-20 15:31:17 -07:00
670f38d36e forbid assignment on ticket create (#1160) 2020-03-19 13:47:45 -07:00
f0a85633a5 update third party files (#1163) 2020-03-19 13:18:55 -07:00
6cb47ce191 Enable and establish Redis connections via Sentinel (#1038)
* Enable and establish Redis connections via Sentinel

* Reimplement direct redis master connect

* Enable and establish Redis connections via Sentinel

* feedbacks
2020-03-14 23:55:41 -07:00
529c01330e Use testing.Cleanup instead of manual cleanup. (#1158) 2020-03-12 14:20:16 -07:00
b36a348db7 Remove omerror, replacing with errgroup (#1157)
Turns out there's already a common use package for this pattern.
2020-03-12 14:00:32 -07:00
5e277265ad Removed unused set package (#1156) 2020-03-12 13:25:22 -07:00
4420d7add2 Added QueryTicketIds method to QueryService (#1151)
* Added QueryTicketIds method to QueryService

* comment
2020-03-09 15:11:16 -07:00
3de052279b Optimized MULTI EXEC queries to reduce Redis CPU consumption (#1131)
* Mysterious code to optimize Redis cpu usage

* resolve comments

* update

* fix cloudbuild
2020-03-06 23:23:59 -08:00
7a4aa3589f Removed ticket auto-expiring logic from statestore (#1146) 2020-03-06 17:20:49 -08:00
bca6f487cc Remove legacy volume mounts from om-demo yaml file (#1147) 2020-03-06 08:32:47 -08:00
d0c373a850 Drafted a short README for the benchmarking framework (#1092)
* Drafted a short README for the benchmarking framework

* update

* update

* update
2020-03-05 13:33:05 -08:00
deb2947ae2 Disable swaggerui via helm (#1144) 2020-03-05 12:08:37 -08:00
d889278151 Replace redis indexing with in memory cache (#1135) 2020-03-02 16:23:55 -08:00
1b63fa53dc Update to go 1.14 (#1133) 2020-02-26 14:13:37 -08:00
af02e4818f Do some randomization on return order of tickets (#1127) 2020-02-20 15:34:40 -08:00
cda2d3185f Add filter package, and rework query testing (#1126) 2020-02-20 14:05:42 -08:00
2317977602 Move default evaluator to internal from testing (#1122) 2020-02-14 14:49:57 -08:00
9ef83ed344 Removed scale chart configmap (#1120) 2020-02-12 13:16:18 -08:00
33bd633b1d Disabled redis when generating static yaml resources except core (#1119) 2020-02-11 14:40:38 -08:00
1af8cf1e79 have scale-frontend use individual go routines for each ticket (#1116) 2020-02-10 13:52:28 -08:00
0ef46fc4d4 implement a scenario which behaves like a team based shooter game (#1115) 2020-02-10 11:21:12 -08:00
79daf50531 Enabled more golangci tests to improve code health (#1089)
* Enabled more golangci tests

* update

* update

* update
2020-02-06 14:07:41 -08:00
a9c327b430 Move scale scenarios into unique packages (#1110) 2020-02-06 12:56:21 -08:00
2c637c97b8 Reduced Redis PING check frequency on Redis pool (#1109)
* Reduced Redis PING check frequency on Redis pool

* fix lint

* update

* update comment

* update comment
2020-02-05 18:00:31 -08:00
668b10030b Update Grafana dashboard for more detailed metrics (#1108)
* Update Grafana dashboard for more detailed metrics

* update cpu usage chart

* update
2020-02-05 17:13:17 -08:00
1c7fd24a34 Remove stats processor from scale tests (#1107) 2020-02-05 13:41:56 -08:00
be0cebd457 Disabled cloudbuild cacher to avoid build flakiness (#1103) 2020-02-04 11:34:23 -08:00
fe7bb4da8f Revert "Release 0.9.0 (#1096)" (#1097)
This reverts commit e80de171a0a6e742d42264f4ab4ecd9231cd3edc.
2020-02-03 16:19:16 -08:00
e80de171a0 Release 0.9.0 (#1096) 2020-02-03 15:42:21 -08:00
fdd707347e Update generated files (#1095) 2020-02-03 15:21:26 -08:00
6ef1382414 Fix leaking of client connections by config.Cacher (#1093)
* Fix leaking of client connections by config.Cacher

* fix link
2020-02-03 14:44:10 -08:00
d67a65e648 Reuse query client in scale tests (#1091)
It was previously not reusing it, so the clients would leak over time.
2020-02-03 13:12:02 -08:00
d3e008cd1e Update proto descriptions to reflect API changes (#1090)
* Update proto descriptions to reflect API changes
2020-02-03 11:15:01 -08:00
d93db94ad9 chartredisfix (#1088) 2020-02-03 09:18:13 -08:00
1bd63a01c7 feature: release tickets api (#1059) 2020-01-31 14:03:17 -08:00
cf8d49052c Deprecated mmf harness (#1086) 2020-01-31 11:13:29 -08:00
fca5359eee Used master HEAD in tutorials' go.mod file and fixed go build errors (#1085) 2020-01-30 15:52:23 -08:00
07637135a9 Deprecate Rosters, remove from Match, MatchProfiles (#1084) 2020-01-30 14:52:41 -08:00
8c86a4e643 Add omerrors and use it in backend_service and evaluator_client (#1081)
Two methods are added:

- ProtoFromErr: returns a gRPC status given an error, with some reasoned handling for special cases. This will be used to set errors onto the FetchMatchesSummary in a follow-up PR.
- WaitOnErrors: allows some number of functions that all return errors to run. The first to return an error determines the overall result, and it ensures all goroutines finish.

WaitOnErrors is used to simplify code in backend_service and the gRPC portion of evaluator_client.

Also, I realized that synchronizeSend should better specify which context is being used where.
2020-01-30 12:42:02 -08:00
31858e0ce5 Changed evaluator API from returning matches to matchids (#1082)
* Changed evaluator API from returning matches to matchids

* update proto desc
2020-01-30 10:10:35 -08:00
fc0b6dc510 Changed Synchronizer proto to return matchIDs instead (#1080)
This commit changed the Synchronizer proto to return matchIDs instead. Also bumped the numbering of the unnamed channels in the synchronizer starting from m3c, and changed the channel type starting from m4c to chan string, as the next step of the API change is to have the evaluator return the match IDs instead.
2020-01-29 19:26:06 -08:00
edade67a6d Added sync.Map to backend and synchronizer (#1078)
This is an intermediate step to resolve #939. Leaving a bunch of TODOs in this PR and will fix them after the proto change.
2020-01-29 18:34:01 -08:00
c92c4ef07a Starts streaming when sending requests from synchronizer to evaluator (#1075)
This commit starts streaming when calling the evaluator.Evaluate method, so that the synchronizer is able to process the data more efficiently.
2020-01-29 17:23:38 -08:00
0b8425184b Stream proposals from mmf to synchronizer (#1077)
This improves efficiency for overall system latency, and sets up for better mmf error handling.

The overall structure of the fetch matches call has been reworked. The different goroutines now set an explicit err variable, so once we have FetchSummary, we can just set the mmf err variable on it. Synchronizer call errors will always result in an error here (as they're relatively fatal), while mmf and evaluator errors are passed gently to the client.

One thing this code isn't doing anymore is checking if an mmf returns a match with no tickets. This seems fine to me, but willing to discuss if anyone disagrees.

Deleted the tests for the following reasons:

TestDoFetchMatchesInChannel didn't actually test fetching matches; it only tested creating a client. Since callMmf now both creates the client and makes the call, this test would now block while actually trying to make a connection. I'm not worried about having full branch test coverage on err statements...
TestDoFetchMatchesFilterChannel tested merging of mmf runs. Since there's only one mmf run now, it's no longer necessary.
2020-01-29 14:44:15 -08:00
338a03cce5 Removed synchronizer dashboard and synced grpc dashboard with API changes (#1074)
The previous dashboards don't work with our changes on the API surface.
https://snapshot.raintank.io/dashboard/snapshot/5A6ToilbqqWbeYpuf36jFCrVv3zFFK1V

This commit:

Removed the unused synchronizer dashboard.
Updated the field matches to use QueryService, BackendService and FrontendService instead of the outdated naming.
Resolved #1018
2020-01-28 15:08:16 -08:00
b7850ab81d Remove assignment.error (#1073) 2020-01-28 13:10:41 -08:00
faa730bda8 Remove c# protos and respective makefile commands (#1072) 2020-01-28 12:12:55 -08:00
76ef9546af Add battle royal scale test scenario (#1063)
Tickets choose one of the 20 regions, with a skewed probability. (probability eg: https://play.golang.org/p/V3wfvph34hM) One profile per region, which forms matches of 100 players.
2020-01-28 11:39:57 -08:00
bff8934cd3 Added the ability to specify your own Redis instance via helm (#1069)
Resolved #836
2020-01-28 10:51:41 -08:00
3a5608b547 Remove inaccurate default documentation on range filter (#1071)
Instead, this is actually just relying on the proto's default values of 0 for each. As such it shouldn't be documented.
2020-01-27 16:18:03 -08:00
b7eec77a36 Rename Backend and Frontend API to BackendService and FrontendService (#1065)
Depends on and aligns with #1055. After this commit, we'll still have the om-backend, om-frontend, and om-query images, but with the API surface renamed.

Backend -> BackendService
Frontend -> FrontendService
2020-01-27 15:52:35 -08:00
82a011ea52 Rename Mmlogic to Queryservice (#1055)
Resolved #996.

Manually rename the file name under internal/app/mmlogic and cmd/mmlogic from mmlogic.go to query.go to keep the image name consistent with our backend and frontend naming.

TODO: Rename backend and frontend API to BackendService and FrontendService instead.
2020-01-27 15:27:17 -08:00
92210b1a13 Redis grafana dashboard (#1062)
* Redis grafana dashboard

* Alert notifiers

* update

* update

* update

* update
2020-01-23 21:31:58 -08:00
f46c0b8f3d Revamp go processes dashboard (#1064)
* Revamp go processes dashboard

* added cpu usage chart
2020-01-23 20:08:41 -08:00
a19baf3457 Revamp gRPC grafana dashboard (#1060)
* dashboard prototype

* Remove storage dashboard

* fix

* update
2020-01-23 19:36:56 -08:00
8e1fbaf938 Change backend.FetchMatches proto from taking multiple profiles to one instead (#1056) 2020-01-17 20:08:29 -08:00
957471cf83 Run scale test assignment and deletes in parallel (#1058)
Start 50 go routines for each at the beginning of the test, and pass them from fetch matches with a buffer.

This gets the first-match scenario on the Redis state store to handle >500 tickets per second:
https://snapshot.raintank.io/dashboard/snapshot/yO88xrIUe1bFR29iNZt4YuM0xuBb8PX9
2020-01-17 17:09:55 -08:00
e24c4b9884 Fix off by 1 error in first match scale test (#1057) 2020-01-17 16:33:43 -08:00
34cc4987e8 Add a first match scenario to the scale tests (#1054)
This first match scenario runs one pool with all tickets, pairing tickets into 1v1 matches with no logic.

Metrics example: https://snapshot.raintank.io/dashboard/snapshot/JZQvjGLgZlezuZfNxPAh8n098JQuCyPW
2020-01-17 11:39:37 -08:00
8e8f2d688b Add gRPC CSharp bindings (#1051)
* Add gRPC CSharp bindings

* update
2020-01-16 16:54:56 -08:00
f347639df4 🤦 (#1048) 2020-01-15 09:47:06 -08:00
75c74681cb Make scale grafana dashboard optional to install (#1044)
* Optionally enable grafana dashboard for scale chart

* Make scale grafana dashboard optional to install
2020-01-15 09:21:45 -08:00
5b18dcf6f3 Add metric support to the scale tests (#1042) 2020-01-14 17:31:46 -08:00
3bcf327a41 Remove locust (#1041) 2020-01-14 13:53:09 -08:00
9f59844e0d Remove zipkin references from Open Match (#1040) 2020-01-14 12:23:31 -08:00
5a32cef2e9 Update Makefile and .ignore files (#1031)
This commit updated the Makefile and .ignore files for the evaluator and mmf binaries.

Also moved the evaluator to test/evaluator folder - I had it accidentally placed under the test/customize/evaluator dir because of a bad merge when working on deprecating the harness.
2020-01-13 18:59:57 -08:00
b9e2e88ef4 Implement basic tunable parameters logic for benchmarking scenarios (#1030)
This commit implements the knobs to control ShouldCreateTicketForever, ShouldAssignTicket, ShouldDeleteTicket, TicketCreatedQPS, and CreateTicketNumber. Also removed the roster-based-mmf from the repo since it is only used for the scale test and there is no need to build its image in every CI run.

After this commit got checked in, users are able to configure the knobs via the new benchmarking framework and run make install-scale-chart to install it.

TODO:

Implement the filter number and profile number logic. This requires a rewrite for examples/scale/tickets and examples/scale/profiles package.
2020-01-08 17:20:37 -08:00
41632e6b8d Increase Redis ping time tolerance and provision more resources for CI (#1034) 2020-01-08 15:32:32 -08:00
188457c21f Added mmf and evaluator for the basic benchmarking scenario (#1029)
* Added mmf and evaluator for the basic benchmarking scenario

* update

* update

* fix
2020-01-07 11:08:12 -08:00
4daea744d5 Added a fixed development password for Redis (#989)
* Added a fixed development password for Redis

* update
2020-01-02 23:30:35 -08:00
1f3dd4bcbf Implement a prototype for Open Match benchmarking framework (#1027)
* Implement a prototype for Open Match benchmarking framework

* update

* update

* update
2019-12-27 18:00:47 -08:00
d82fc4fec6 Add pod tolerations, nodeSelector and affinity in helm (#1015) 2019-12-27 13:02:36 -08:00
8cb43950a1 Move ignorelists.ttl from Redis section to Open Match core (#1028) 2019-12-27 12:27:09 -08:00
9934a7e9da Rewrite synchronizer and corresponding backend (#1024) 2019-12-20 16:40:53 -08:00
8db449b307 Templatize stress test configurations (#1019)
* Templatize stress test configurations

* Update

* presubmit
2019-12-17 11:02:22 -08:00
b78d4672a6 Update client-go to kubernetes-1.13.12 (#1020) 2019-12-11 18:08:10 -08:00
e048b97c71 Moved MMF for end-to-end in-cluster testing to internal (#1014)
* Moved MMF for end-to-end in-cluster testing to internal

* Fix
2019-12-11 16:55:43 -08:00
f56263b074 Deprecate evaluator harness (#1012)
* Have applications read in config from custom input

* Moved original evaluator example to internal package

* Deprecate evaluator harness
2019-12-11 16:04:16 -08:00
aaca99c211 Update README.md (#1016) 2019-12-09 18:12:18 -08:00
9c1b0bcc0e Have applications read in config from custom input (#1007) 2019-12-09 13:26:58 -08:00
80675c32f6 Split up stress test into backend/frontend structure (#1009) 2019-12-09 12:09:00 -08:00
4e408b1abc Show how to generate install/yaml files in dev guide (#1010) 2019-12-08 11:37:52 -08:00
fd4f154a0e Remove unnecessary variables and indirection from synchronizer (#1008) 2019-12-06 15:12:18 -08:00
3e2d20edc0 Have synchronizerClient use cacher, to update on config changes (#1006)
This also aligns better with patterns for other clients, and removes some synchronization complexity for this type.
2019-12-05 13:57:57 -08:00
40ba558eb2 Improve Evaluator tutorials experience (#1005)
* Improve Evaluator tutorials experience

* Improve Evaluator tutorials experience
2019-12-04 17:59:04 -08:00
72bcd72d5c Fix Redis Err: Max Clients Reached error (#999)
This commit fixed an issue where Open Match may throw `Err: max clients reached` errors from the Redis side under load-testing scenarios. At this point, Open Match should be able to scale to 1600 profiles and 5000 tickets in the statestore.

The reason we got those errors from Redis is that, by default, Redis caps its client connections at 10k. However, Open Match had maxIdle set to 5000 per pod, which exceeded Redis's limit and failed API calls. This commit manually overrides the maxClient number to 100k, reduces the maxIdle number to 200, and raises the connection backlog by setting sysctl -w net.core.somaxconn=100000 via the initContainer if enabled.
2019-12-04 17:18:09 -08:00
b276ed1a08 Fixed terraform google provider version to 2.9 (#1004)
* Fixed terraform google provider version to 2.8

* Update versions.tf

* Update versions.tf
2019-12-04 13:20:36 -08:00
d977486dc5 Add more metrics to monitor synchronizer time windows performance (#1001) 2019-12-03 18:43:28 -08:00
1f74497bdd Reduced in-cluster test flakiness and stabilized gRPC client connections (#1003) 2019-12-02 16:43:37 -08:00
57e9540faa Use helm to test Open Match in a k8s cluster (#988) 2019-11-25 16:29:05 -08:00
a0be7dcec5 Cherry-picked MMF server changes to upstream (#1000) 2019-11-25 16:09:06 -08:00
391cc4dc72 More cleanups (#984) 2019-11-22 11:16:52 -08:00
2c8779c5d7 Improve README instructions and code templates for the tutorials (#997) 2019-11-21 17:52:08 -08:00
e5aafc5ed7 Added a Grafana dashboard to track Redis client connection gauges (#994) 2019-11-21 09:21:44 -08:00
8554601a70 Update Scale package to sync with the latest config and API changes (#992) 2019-11-20 15:06:19 -08:00
a75833b85a Update release note and release process template (#987) 2019-11-19 14:53:33 -08:00
f01105995d Update gRPC middlewares used in the internal/rpc library (#993) 2019-11-19 13:55:11 -08:00
f949de7dce Update master branch tutorials to use v0.8.0 tags (#985) 2019-11-15 10:16:08 -08:00
335bf73904 Remove redundant matchmaker scaffold and update tutorials (#979) 2019-11-14 13:59:26 -08:00
7a1dcbdf93 More cleanup (#976) 2019-11-13 13:43:01 -08:00
0a65bdefe5 Fix typo in folder name (#975) 2019-11-13 10:15:16 -08:00
bcf0e6b9fb Harden the open-match parent chart (#972) 2019-11-13 09:51:37 -08:00
1f5df7abef Ignore reaper error (#974) 2019-11-13 08:25:02 -08:00
7005d40939 Add solution folder to Matchmaker 102 tutorial (#973) 2019-11-13 02:00:12 -08:00
3536913559 Add logging to the default evaluator (#964) 2019-11-13 01:34:05 -08:00
103213f940 Add the solution for Matchmaker 101 tutorial to a separate solution folder. (#971) 2019-11-13 00:25:09 -08:00
3b8efce53d Add a tutorial for using the default evaluator (#961) 2019-11-13 00:05:38 -08:00
580ed235d7 Generate static yaml to install open match demo (#969)
* Generate static yaml to install open match demo

* Update Makefile to sync with the latest demo update
2019-11-12 22:02:44 -08:00
23cc35ae68 Publish helm index.yaml file to helm install open-match (#962) 2019-11-12 21:42:59 -08:00
c002e75fde A Tutorial to customize the evaluator (#970) 2019-11-12 19:01:04 -08:00
6e6f063958 Update tutorial modules to use v0.8 rc (#963) 2019-11-12 16:12:57 -08:00
8d31b5af07 Fix namespace dependency on CI (#967) 2019-11-12 15:54:21 -08:00
f1a5cd9b81 Have MMF and Evaluator in customize chart use different configs (#959) 2019-11-08 15:44:58 -08:00
d3d906c8be Define Makefile and RBAC rules for open-match-demo namespace migration (#958) 2019-11-08 14:51:42 -08:00
6068507370 Move Match Function installation to the matchmaker.yaml - since customization.yaml is now optional when using default evaluator installation steps (#957) 2019-11-08 14:30:29 -08:00
04b06fcf90 Split out MMF and Evaluator install from open-match-demo (#956) 2019-11-08 11:20:42 -08:00
0c25ac9139 Turn off subcharts by default (#954) 2019-11-08 09:15:49 -08:00
0565a014ad Disable WI in create-gke-cluster step (#947) 2019-11-06 19:10:13 -08:00
57e59c3821 Bumped helm version and dependencies versions for k8s 1.16 support (#938) 2019-11-06 18:27:59 -08:00
608d5bce71 Disable Redis initContainer by default (#941) 2019-11-06 17:12:23 -08:00
52b8754eb8 Update go.mod dependencies (#949) 2019-11-06 13:31:08 -08:00
a10817f550 Fix scale test based on the config changes (#948) 2019-11-06 12:39:38 -08:00
817a0968e7 Update release template (#944) 2019-11-06 11:11:25 -08:00
043a984bab Remove k8s probes in example mmfs and evaluator (#942) 2019-11-06 10:54:17 -08:00
02d8d1f1fe Optimize developer workflow (#943) 2019-11-06 10:28:45 -08:00
242d799c18 Enabled telemetry when generating assets (#945) 2019-11-04 18:05:27 -08:00
33189f9154 Added jaeger tracing to Open Match core services (#934) 2019-11-01 13:02:46 -07:00
9ef7cb6277 Matchmaker tutorial modifications to improve tutorial experience (#935) 2019-11-01 12:49:12 -07:00
b05c9f5574 Proposal: Make extension a map of string to any (#901)
Replaces the extension field (and match.evaluator_input) with a map of string to any.  The previous concept was that different components would read from specific extension fields.   However nothing was enforcing this behavior.  (fun fact: the very first use of this was incorrect, I used extension instead of evaluator_input when updating the default evaluator, but caught myself in the review.)

Instead have a map of string to proto.  This allows any producer to add whatever values
it wants, and the consumers to look for the specific values they want.

# Pros:
Better composability, and less "forced" duplication.  Now various components can simply add information which is required by the other components processing each message.  If there is a unified system, they can use one extension.  If it's a system composed of various parts, then they simply add and use the protos required by the connected pieces.

This allows data to flow through OM better: if there are systems which are composed together outside of OM core, they don't need custom fields or manipulation. E.g., if I pass my match to a well-known system which returns assignments from Agones and requires a specific extension, and I add a layer of processing beforehand which requires its own extension, the match doesn't need to be modified to have the required extension when being passed to the Agones allocator. Or if there's data on tickets which needs to flow through OM to the director, it can just be added.

# Cons:
- Very simple use cases for OM are a bit more complex.
- JSON specifically now needs to add a map around the any around the actual data.
- Users who have a single extension type have a bit more work to do.  I think the recommendation should be to use the empty string, "", for such cases.
- Read, Modify, Set operations on extension data are more complicated.
2019-10-31 16:58:22 -07:00
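The shape proposed in the commit above, sketched as a proto fragment (the field name and number here are illustrative, not the actual Open Match schema):

```protobuf
import "google/protobuf/any.proto";

message Match {
  // Any producer can attach a payload under a key of its choosing;
  // consumers look up only the keys they understand. A user with a
  // single extension can use the empty string "" as the key.
  map<string, google.protobuf.Any> extensions = 7;
}
```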
8a29f15fe0 Unify helm image tags (#919) 2019-10-31 16:37:09 -07:00
5fa0cc700c Refactor evaluator client in the synchronizer (#933)
This fixes a number of bugs:
- If it's a grpc connection, the evaluator client also creates an http connection.
- Even if it's an http connection, it never actually uses that http connection, always trying to recreate the client and failing.
- The http evaluator code does not use the http client which it created.
- If the config is updated, the old evaluator connection details are still used.
- If the client errors, it may still use a broken client.

Refactors the file into several completely separate components:
- A grpc based client.
- An http based client.
- A deferred client which selects between creating a grpc or http client, and will detect changes to the config and recreate the client.

As a result of these changes, the type of connection is explicit based on config.  If a grpc port is present, it will always use that connection, never trying to create an http client.

The actual code to create clients and make requests is mostly unchanged.
2019-10-31 16:05:28 -07:00
d579de63aa Update proto comments to reflect latest API changes (#932) 2019-10-31 15:28:34 -07:00
797352a3fc Add Cacher, which invalidates a cache when config changes (#931)
I will use this in the evaluator_client, where we want to re-use a client if possible, but get a new client if the used values change. This solves the generic problem on a meta level, instead of having to manually remember and compare the config values used. It also prevents programming errors where a new config value is read, but the code doesn't properly detect if it has changed.

The ForceReset method will be used when the evaluator client has an error, so that the system will recover from a client which is stuck in an error state on the next call.

I anticipate there will be other places to use this inside open match.
2019-10-31 14:39:17 -07:00
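A minimal sketch of such a config-keyed cache; the names `Cacher` and `ForceReset` come from the commit above, but the signatures here are illustrative, not the actual Open Match API:

```go
package main

import (
	"fmt"
	"sync"
)

// Cacher caches a single value built from a config key, rebuilding it
// only when the key changes or after ForceReset. (Illustrative sketch.)
type Cacher struct {
	mu      sync.Mutex
	key     string
	value   interface{}
	valid   bool
	builder func(key string) interface{}
}

func NewCacher(builder func(string) interface{}) *Cacher {
	return &Cacher{builder: builder}
}

// Get returns the cached value, rebuilding it if the config key changed.
func (c *Cacher) Get(key string) interface{} {
	c.mu.Lock()
	defer c.mu.Unlock()
	if !c.valid || key != c.key {
		c.value = c.builder(key)
		c.key = key
		c.valid = true
	}
	return c.value
}

// ForceReset discards the cached value, e.g. after a client error, so
// the next Get recovers by building a fresh client.
func (c *Cacher) ForceReset() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.valid = false
}

func main() {
	builds := 0
	c := NewCacher(func(addr string) interface{} {
		builds++
		return fmt.Sprintf("client->%s", addr)
	})
	c.Get("evaluator:50508")
	c.Get("evaluator:50508") // same config value: cached, no rebuild
	c.Get("evaluator:50509") // config changed: rebuild
	c.ForceReset()
	c.Get("evaluator:50509") // rebuilt after reset
	fmt.Println(builds)      // 3
}
```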
68882a79bb Update release.md (#928) 2019-10-31 14:06:14 -07:00
11bf81e146 Tutorial to build a matchmaker that uses multiple pools per match profile (#929) 2019-10-31 13:46:38 -07:00
02aa992ac7 Fix cloudbuild (#922) 2019-10-31 13:04:59 -07:00
f5b651669c Added protoc-gen-doc plugin to generate API references (#925) 2019-10-31 11:34:24 -07:00
b9522a8bb5 Add a template for a basic OM based Matchmaker. The tutorials will use this as starting point. (#927) 2019-10-31 10:26:55 -07:00
8c6fbcbe49 Update the MMF101 to be a mode based matchmaker (#926) 2019-10-31 09:42:46 -07:00
99141686c9 Move Evaluator to tutorial namespace and enable tutorial configuration for Open Match (#923) 2019-10-30 19:21:23 -07:00
3cf9c2ad6a Added jaeger sample configuration to the telemetry binder (#902) 2019-10-30 18:10:04 -07:00
755c0e82f1 Fetch ignore list before querying for Tickets to fix a race where an ignored ticket could be returned as it moves out of the ignore list upon assignment (#918) 2019-10-30 17:41:27 -07:00
18bc9f31fd Implement Tutorial for authoring a basic match function (#913) 2019-10-30 17:23:05 -07:00
05325d3b77 Rename bindStackdriver to bindStackdriverMetrics (#921) 2019-10-30 16:32:15 -07:00
3c7f73ed03 Install override yaml file by default when using helm upgrade (#920) 2019-10-30 15:20:53 -07:00
525d35b341 Upgrade to helm3 (#896) 2019-10-29 18:34:23 -07:00
d3e8638a3b Added Storage Dashboard in Grafana (#909) 2019-10-29 17:26:15 -07:00
af19404eef Split up override configmap from open-match-core static yaml (#916) 2019-10-29 16:55:22 -07:00
2f9e1c2209 Remove struct import from protos (#915) 2019-10-29 14:00:02 -07:00
669f7d63b7 Remove evaluator config for default config yaml (#914) 2019-10-29 13:14:10 -07:00
8740494f3e Added Makefile proxy and helm default configs for jaeger (#900) 2019-10-28 17:36:40 -07:00
3899bd2fcd Have internal/config/config.go read in file through absolute path (#910) 2019-10-28 13:20:34 -07:00
dac6ac141e Update RosterBased match function to not use the harness (#904)
Update RosterBased match function to not use the harness
2019-10-25 16:28:23 -07:00
c859e04bf9 Remove global configmap (#907) 2019-10-25 15:44:15 -07:00
7a48467cb5 Config change first step (#906) 2019-10-25 15:09:04 -07:00
74992cdf79 Added time metrics to statestore wrapper (#899) 2019-10-25 11:36:35 -07:00
dd21919c00 Fix csharp dependency (#903) 2019-10-25 10:34:09 -07:00
031b39e9c2 Added templates for bug/feature reports (#898) 2019-10-24 11:19:01 -07:00
3f8f858d85 Update helm templates to follow k8s spec rules (#897) 2019-10-24 10:53:32 -07:00
1dbd3a5a45 Create a privileged service account for Redis and disable THP to prevent Redis memory leaks (#884) 2019-10-22 19:11:36 -07:00
fc94a7c451 Add csharp generated code and update .csproj dependency (#813) 2019-10-22 15:28:13 -07:00
60d20ebae5 Update resource yamls to use k8s v1.16 API (#891) 2019-10-21 11:16:07 -07:00
e369ac3c0b Update tutorial READMEs with more concrete steps (#888)
* Update tutorial READMEs with more concrete steps
2019-10-17 11:41:58 -07:00
2c35ecb304 Rename telemetry util function to match what it actually does (#874)
* Rename telemetry util function to match what it actually does

* Improve comments
2019-10-16 17:47:59 -07:00
7350524e78 Added open-match namespace in yaml installation files (#886)
* Create open-match namespace in yaml installation files when open-match-core is enabled

* Rename and comment
2019-10-16 16:13:38 -07:00
2aee5d128d Configure Open Match to improve scaling (#881) 2019-10-15 14:28:35 -07:00
4773b7b7cf Added a Grafana dashboard to monitor Redis connection latency (#876) 2019-10-15 13:37:26 -07:00
1abdace01e Pin Dockerfile base-build to go version 1.13.1. (#875)
I had a problem just now where my local go test would pass, but docker files would fail to build. My system had an old version of golang:latest cached, which failed to build properly due to missing a new method. So instead, pin the dockerfile to a specific version. This way builds will be more deterministic: if a new version of Go comes out, the new features won't work for anyone until this is updated, at which point everyone's local cache will be invalidated.
2019-10-14 18:32:25 -07:00
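The fix amounts to pinning the base image tag instead of floating on latest (illustrative Dockerfile fragment):

```dockerfile
# Before: floats with whatever golang:latest is cached locally.
# FROM golang:latest
# After: deterministic builds; bumping Go is an explicit change that
# invalidates everyone's cache at the same time.
FROM golang:1.13.1
```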
bbcf8d47b4 Update release template to install open match without telemetry services (#868) 2019-10-14 15:05:26 -07:00
d65dee6be0 Change k8s service type into headless service and use DNS resolver to get endpoint (#864)
* Implement client side load balancing with DNS resolver for Open Match headless service deployment

* Update comments
2019-10-14 14:44:34 -07:00
02e6b3bbde Rename attribute to *_arg, FloatRangeFilter to DoubleRangeFilter (#873)
Part of #765

Helps with #512 as the fields names not matching removes most incompatible query cases.
2019-10-14 13:16:31 -07:00
77090d1a5b Tutorial (#871)
* Tutorial skeleton

* Update

* Yet another update

* Add header
2019-10-14 12:20:43 -07:00
ce3a7bf389 Remove properties fields from messages.proto (#870)
Part of moving to the new API proposal: #765
2019-10-11 14:09:38 -07:00
bcf710b755 Remove evaluator client test (#866)
Part of moving to the new API proposal: #765

This didn't break earlier when it should have, because the test wasn't working.

As this is covered by the end to end tests anyways, delete.
2019-10-11 13:13:55 -07:00
2b7eec8c07 Remove index configuration (#862)
Part of moving to the new API proposal: #765
2019-10-11 12:54:41 -07:00
99164df2db Have mmf harness pass extension instead of properties (#865)
Part of moving to the new API proposal: #765
2019-10-11 12:00:35 -07:00
efa1ce5a0b Update telemetry/metrics to support adding histogram view (#845)
* Update telemetry/metrics to support adding histogram view

* Update comment
2019-10-11 11:42:48 -07:00
cb610a92b1 Define minimalistic pod resources to deploy sample mmf and evaluator (#867)
* Define minimalistic pod resources to deploy sample mmf and evaluator

* newline
2019-10-11 11:10:14 -07:00
89691b5512 Remove internal dependencies from the demo (#859)
* Remove internal dependencies from the demo
2019-10-10 19:32:08 -07:00
252d473d72 Generate helm charts at runtime and delete install/helm/open-match/charts repo (#824)
* Generate helm charts at runtime
2019-10-10 19:15:49 -07:00
56cfb8e66e Remove last references to ticket.properties (#863)
Part of moving to the new API proposal: #765
2019-10-10 18:50:17 -07:00
5f32d4b765 Switch Default Evaluator to use proto any (#854)
Added a new proto to messages, DefaultEvaluationCriteria.

Reworked the default evaluator to use it.
Changed the pool mmf to output it.
2019-10-10 16:47:56 -07:00
658aee8874 Use SearchField's doubles for double indexing (#860)
Part of moving to the new API proposal: #765

This replaces the double indexing, and fixes the spots which break because of it.
Future PRs will remove indexing and the passing of configuration that were because of it, and then also remove the properties altogether. (there's still a handful of places which use properties but don't actually use it for indexing.)
2019-10-10 15:54:33 -07:00
6ac7910fb1 Remove property usage from e2e/ticket_test (#861)
Part of moving to the new API proposal: #765
2019-10-10 12:54:22 -07:00
fcf7c81c84 Commit missed proto build changes (#858)
Missed this with my previous change.
2019-10-09 18:30:14 -07:00
e3d630729c Replace boolean filter with tag filter (#855)
Part of moving to the new API proposal: #765
2019-10-09 17:41:59 -07:00
da9d48ddb1 Remove unnecessary properties and filters in demo (#856)
This was possible since the move to read all tickets with no filters. It reduces the demo's complexity a bit.

Tangential, but useful to moving to the new API proposal: #765
2019-10-09 12:05:47 -07:00
3727b0d5d8 Update third party code (#851) 2019-10-08 19:06:57 -07:00
cc1b70dd2e Use SearchField's strings for string indexing (#853)
Part of moving to the new API proposal: #765
2019-10-08 16:33:28 -07:00
91090af431 Remove outdated and unused filter package (#852) 2019-10-08 15:58:56 -07:00
6661df62ae Adding Any fields to messages.proto (#846)
See #765 for detailed discussion.

Followup changes will involve wiring up the search_fields, then transitioning the existing tests and examples, then removing the old struct fields.
2019-10-08 15:36:33 -07:00
f63a93b139 Fix cloudbuild (#850) 2019-10-07 16:39:08 -07:00
31648c35f3 Port over an end-to-end test with complete game logic to in-cluster test (#808)
* Port over an end-to-end test with complete logic to in-cluster test

* Fix discrepancy in MMF host name
2019-10-04 12:33:57 -07:00
e96a6e8af7 Reflect latest artifact changes in the release shell script (#831) 2019-10-01 10:48:48 -07:00
d9912c3e28 DeleteFromIgnoreList when tickets got assigned or deleted (#830)
* DeleteFromIgnoreList when tickets got assigned or deleted
2019-09-30 18:28:10 -07:00
8933255ec2 Fix OpenCensus backend_matches_fetched metric (#829)
* Fix OpenCensus backend_matches_fetched metric

* Update
2019-09-30 18:10:23 -07:00
3617d3cdbd Use StringEquals properly in e2e tests (#825)
* Use StringEquals properly in e2e tests
2019-09-30 17:57:26 -07:00
736a979b47 Fix create-gke-cluster command (#844)
* Fix create-gke-cluster command
2019-09-30 15:59:53 -07:00
aa99d7860e Add release note approval to release process. Improve ordering in release notes template (#826) 2019-09-27 12:48:40 -07:00
22ad6fed6b HealthCheck workaround (#827)
* HealthCheck workaround

* Update
2019-09-25 15:23:42 -07:00
39e495512b Make CI compatible with go1.13 (#823)
* Make CI compatible with go1.13

* bump version
2019-09-25 10:48:59 -07:00
291e60a735 Fix Scale tests (#828) 2019-09-23 16:20:53 -07:00
b29dfae9cf Reorder make rule dependencies to fix presubmit (#822) 2019-09-20 16:02:15 -07:00
377b40041d Rename indices' const names to reflect their filter types and unify ticketIndice value (#807)
* Rename indices' names for testing to reflect their filter types

* Fix golangci error
2019-09-19 18:20:53 -07:00
6f7b7640c2 Pass synchronizer id on the context of the call to synchronizer (#817)
Pass synchronizer id on the context of the call to synchronizer
2019-09-19 16:01:47 -07:00
039eefb690 Open Match scale testing improvements (#819)
* Open Match scale testing improvements
2019-09-19 15:44:40 -07:00
5de0ae1fc4 Bump default ignore list TTL to 60 seconds to give the backend sufficient time to allocate DGS (#816) 2019-09-19 15:21:58 -07:00
1e5560603a Enable Synchronizer by default (#809) 2019-09-18 11:53:20 -07:00
61449fe2cf Implement Roster based MMF that populates roster pools with tickets from the pools supplied. (#806)
* update

* Implement the Roster based match function for scale tests
2019-09-17 17:48:47 -07:00
21cf0697fe Implement test backend that will fetch matches, assign tickets and delete tickets at scale (#804) 2019-09-17 17:14:50 -07:00
12e5a37816 Add end-to-end test for new filter types (#805)
* Add end-to-end test for new filter types
2019-09-17 16:25:09 -07:00
e658cc0d84 Add csproj baseline (#794)
* Add csproj baseline

* Automate CSharp packing via Docker

* Build csharp locally

* Rm dockerfile

* Modify gitignore
2019-09-17 16:09:41 -07:00
9e89735d79 Implement test frontend that creates tickets in Open Match continuously (#803) 2019-09-17 15:47:40 -07:00
5cbbfef1cc Implement the profiles package that will be used by the scale backend to generate profiles for different scenarios for scale testing (#802) 2019-09-17 15:14:10 -07:00
1c0e4ff94e Add implementation for Tickets library to generate fake tickets for scale test (#801) 2019-09-17 11:00:41 -07:00
79862c9950 Add placeholder components for Open Match scale benchmarking (#797) 2019-09-16 18:01:37 -07:00
8ac27d7975 Change protobuf namespace to openmatch (#799)
This resolves #723

It would be very weird for other protobuf packages to be importing "api" for Open Match. This changes to a more reasonable unique name.

Eg, tensorflow uses the package "tensorflow" https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/config.proto
2019-09-16 16:41:00 -07:00
86b8cb5aa8 Add string equals filtering and indexing (#798)
Part of implementing #681
2019-09-16 15:54:41 -07:00
fdea3c8f1e Move gke-metadata-server workaround out from install/helm directory (#793)
* Move gke-metadata-server workaround out from install/helm directory
2019-09-16 11:50:17 -07:00
61a28df3e5 Stop using environment variables for Redis connection (#792)
* Stop using environment variables for Redis connection
2019-09-13 11:02:22 -07:00
13fe3fe5a9 Add CSharp namespace (#779) 2019-09-12 09:53:36 -07:00
a674fb1c02 Add bool filtering and indexing (#791)
Add bool filtering and indexing
2019-09-10 14:51:05 -07:00
75ffc83b98 Rename Filter to FloatRangeFilter (#790)
This sets things up for other filter types.
2019-09-06 17:04:21 -07:00
7dc4de6a14 Store indexes used for a ticket, and use them to deindex (#789)
This has two primary advantages:
The redis key of the index can be decided at ticket index / query time. This is important for string equal indexing, where the plan is to concatenate the OM index name and the string value to form the redis key.
Improved correctness when indexes are changed: The ticket will now clean up the indexes it was created with, preventing old indices from existing after all the tickets that used them are gone.

This does add an extra read when deindexing a ticket, but I think the correctness improvement alone is worth that.

Other notes:
Turns out the indexes need to be in a list of interface{} to concatenate with the redis key for the cache, so I changed my mind about computing the list in extractIndexedFields. So extractIndexedFields instead just returns the map of index to values.

Improved a test's assertions by using ElementsMatch.
2019-09-06 16:06:21 -07:00
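The bookkeeping described above — record the index keys a ticket was written to, and use that record to deindex — can be sketched with an in-memory stand-in for Redis (illustrative only; the real code uses Redis sorted sets):

```go
package main

import "fmt"

// store is a stand-in for Redis.
type store struct {
	indexes       map[string]map[string]bool // index key -> set of ticket ids
	ticketIndexes map[string][]string        // ticket id -> index keys it was written to
}

func newStore() *store {
	return &store{
		indexes:       map[string]map[string]bool{},
		ticketIndexes: map[string][]string{},
	}
}

// indexTicket writes the ticket into each index and records which keys
// were used, so deindexing doesn't depend on the current config.
func (s *store) indexTicket(id string, keys []string) {
	for _, k := range keys {
		if s.indexes[k] == nil {
			s.indexes[k] = map[string]bool{}
		}
		s.indexes[k][id] = true
	}
	s.ticketIndexes[id] = keys
}

// deindexTicket reads back the recorded keys (the "extra read" the
// commit mentions) and removes the ticket from exactly the indexes it
// was created with, even if the configured indexes changed since.
func (s *store) deindexTicket(id string) {
	for _, k := range s.ticketIndexes[id] {
		delete(s.indexes[k], id)
	}
	delete(s.ticketIndexes, id)
}

func main() {
	s := newStore()
	s.indexTicket("t1", []string{"allTickets", "mode:ctf"})
	// Even if "mode:ctf" is later dropped from the config, t1 still
	// cleans up after itself because its keys were recorded.
	s.deindexTicket("t1")
	fmt.Println(len(s.indexes["mode:ctf"])) // 0
}
```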
f02283e2a6 Add all tickets indexed, used on pools with no filters (#785)
Resolves #767
2019-09-06 14:00:00 -07:00
d1fe7f1ac4 Improve synchronizer logging (#784)
* Improve synchronizer logging

* Improve synchronizer logging
2019-09-06 13:43:51 -07:00
84eb9b27ef Use config value for Redis hostname and port (#783) 2019-09-06 13:18:44 -07:00
707de22912 Separate redis and OM index concepts (#781)
> Currently OM filters directly match filtering fields in redis, and OM ticket properties directly map to values in redis. This change breaks that direct connection. In followup changes, I will be adding other index and filter types. They will be translated into redis sorted set values, so that conversion will take place within these methods. Eg, bool values will be turned into 0 and 1, and bool equal filters will do a range to capture those values.
> 
> This does a couple other minor things:
> 
> * Removes a test case that indexed fields have to be numbers, which is going to be wrong after other filter types are added anyways.
> * Adds a prefix to the redis key for the index. This will be important as other index types are added to avoid collisions.
2019-09-06 12:55:30 -07:00
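The conversion described in the quote above — bool values becoming sorted-set scores, bool-equal filters becoming range scans, and a prefix namespacing OM index keys — can be sketched as (illustrative; the real key format and scores may differ):

```go
package main

import "fmt"

// indexKey namespaces OM indexes inside Redis to avoid collisions as
// more index types are added. (Hypothetical prefix.)
func indexKey(indexName string) string {
	return "omidx:" + indexName
}

// boolToScore converts a ticket's bool value into a sorted-set score:
// false -> 0, true -> 1.
func boolToScore(b bool) float64 {
	if b {
		return 1
	}
	return 0
}

// boolEqualRange is the score range a bool-equal filter scans to
// capture exactly the matching tickets.
func boolEqualRange(want bool) (min, max float64) {
	s := boolToScore(want)
	return s, s
}

func main() {
	min, max := boolEqualRange(true)
	fmt.Println(indexKey("ranked"), min, max) // omidx:ranked 1 1
}
```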
780e3abf10 Fix demo bug (#780) 2019-09-05 16:35:57 -07:00
524b7d333f Use secure websockets when demo page is on https (#775)
This fixes the scenario where the demo is behind an https proxy. In that scenario, it would
previously try to connect via unsecured websockets, which doesn't work. Specifically, this
is the case for Google Cloud Console's Web Preview.

Tested=Manually, bridging locally and also with the Cloud Console.
2019-09-05 09:22:40 -07:00
c544b9a239 Fix the namespace bug in install/yaml (#769) 2019-09-03 23:57:21 -07:00
04b6f1a5ad Have filter tickets take pool instead of list of filters (#773)
Part of the work for #681

This changes the API of FilterTickets to take a pool instead of filter list. Following the API
in the proposal, different filter types will be different fields on the Pool message.
2019-09-03 16:18:57 -07:00
13952ea54e Fix md-test by adding whitelist value for swagger.io (#774) 2019-09-03 16:00:48 -07:00
a61f4a643e Align Kubernetes API versions and Update the rest of the module versions (#772) 2019-08-30 14:53:00 -07:00
949fa28505 Evaluator http test and implementation (#754)
* Change evaluator API from unary to bidirectional streaming
2019-08-23 16:24:04 -07:00
85cc481f5d Change synchronizer API from unary to bidirectional streaming (#750)
* Change synchronizer API from unary to bidirectional streaming

* bug fix

* reformat

* Update

* Update

* update
2019-08-22 16:20:29 -07:00
c3cbcd7625 Change evaluator API from unary to bidirectional streaming and disable HTTP support for the evaluator (#745)
* Change evaluator API from unary to bidirectional streaming

* Bug fix

* Yet another bug fix

* Update

* Update
2019-08-22 15:37:51 -07:00
e01fc12549 Reformat install/terraform (#751) 2019-08-22 13:25:15 -07:00
e1682100fa Fix swaggerdoc error (#752) 2019-08-22 13:07:17 -07:00
603aef207f Remove dependency on gogo/protobuf (#755) 2019-08-22 12:48:13 -07:00
baf403ac44 Replace json with jsonpb (#761) 2019-08-22 12:32:55 -07:00
b1da77eaba Change backend.FetchMatches from unary to streaming (#743) 2019-08-16 14:51:10 -07:00
bb82a397d2 Ignore globals when linting everywhere. #749
The exclusions list is ever growing because there are many valid
use cases for global variables. The standard library uses them
all over the place. Removing the check, and instead relying on
code review to spot bad uses of globals.
2019-08-16 11:55:12 -07:00
abd2c1434c Explicitly ignore gateway files in golangci (#748) 2019-08-16 11:20:16 -07:00
bc9dc27210 Suppress golangci error (#746)
* Turn off checking shadowing

* Suppress golangci output
2019-08-16 10:11:10 -07:00
084461d387 Change mmf API from unary call to streaming (#740)
* Test to replicate the grpcLimit error

* Change mmf from unary to streaming

* Resolve comment

* Resolve comments
2019-08-15 16:43:22 -07:00
bc7d014db6 Express open-match-build infrastructure as Terraform template (#729)
* Replicate infrastructure configs in terraform

* Express open-match-build infrastructure as Terraform template

* Import changes
2019-08-15 12:56:45 -07:00
230ae76bb4 Set default logging.rpc value to false (#734)
* Fix rpc enabled config alias

* Set default logging.rpc value to false
2019-08-15 12:35:59 -07:00
ebbe5aa6ce Add a breaking API change issue template (#737) 2019-08-15 11:54:19 -07:00
9b350c690c Disable stress testing in CI (#738) 2019-08-15 11:26:07 -07:00
80b817f488 Fix cloudbuild dependency (#733) 2019-08-15 10:41:11 -07:00
df7021de1b Add comments for values.yaml file in the parent chart (#727)
* Checkpoint

* Comments for values.yaml file
2019-08-13 11:27:53 -07:00
5c8f218000 Remove CI subnet workaround (#728)
* Remove CI subnet workaround

* Apply changes
2019-08-06 13:54:19 -07:00
3f538df971 Fix Makefile targets (#726)
* Makefile dependency fix

* Resolve comments
2019-08-05 14:44:05 -07:00
1e856658c9 Fix proto file's go package. (#725)
Fixes #724

Also removed redundant instruction to build messages.pb.go, and unused instruction to build message.pb.gw.go.
2019-08-05 11:10:40 -07:00
eb6697052d Reflects IAM role changes in terraform config (#709)
* Reflects IAM role changes in terraform config

* Resolve comments

* Resolve comments

* Resolve comments

* Update
2019-08-02 18:26:33 -07:00
31d3464a31 Helm README (#713)
* Helm README

* Indentation

* Review comments
2019-08-02 17:54:19 -07:00
c96b65d52b Make load testing upload test results to GCS (#674)
* Delete old helm config and use new config in CI

* Fix tiller dependency

* Fix cloudbuild

* fix yaml postfix

* Make stress test upload results to GCS

* hi

* Update tgz

* Fix bad merge

* Enable test

* Update charts

* Done

* Test

* Fix

* Fix gcpProjectId

* Update charts

* Update

* Fix

* Update

* Add time
2019-08-02 17:08:51 -07:00
9d601351cc Make synchronizer proto properly internal (#722)
* Move generated files from internal/pb to internal/ipb to avoid name conflict.
* No longer serve / generate the http/json endpoint nor the swagger files.
* The proto package is now an internal package.

resolves #534
2019-08-02 16:26:37 -07:00
7272ca8b93 Let end-to-end tests run in-cluster (#706)
* Replace LoadBalancer in CI with NodePort

* Fix
2019-08-02 15:18:12 -07:00
b463d2e0fd Remove unnecessary dependencies to speed up CI (#715) 2019-08-02 14:57:58 -07:00
07da543f8e Autogenerate image commands based on a single list (#714)
With this change, anything added to the cmd/ folder is automatically
made into an image and included in the image commands. Additional
images are also included. This does remove some commands that are
for pushing specific sets of images, but it seems rare to never (per yfei1
and sawagh) that anyone actually uses these commands.

Includes some commenting to hopefully alleviate the magic being added.
2019-08-02 14:38:52 -07:00
0d54c39828 Update instructions for release-0.6 (#651) 2019-08-02 14:09:16 -07:00
5469c8bc69 Unify SHORT_SHA and VERSION_SUFFIX (#712) 2019-08-02 11:03:52 -07:00
c837211cd1 Skip using PodSecurityPolicy in CI runs (#717)
* Skip psp in CI run

* Update

* Works

* metadataserver psp

* Update chart
2019-08-02 10:34:40 -07:00
5729e72214 Improve context propagation for synchronizer (#697) 2019-08-01 15:13:23 -07:00
66910632da Distinguish RELEASE_NAME and CHART_NAME in Makefile (#711) 2019-08-01 12:54:52 -07:00
c832074112 Makefile bug fixes (#708)
* Bug fixes
2019-07-31 17:27:11 -07:00
a6d526b36b Remove unused app engine and html makefile stuff (#666)
I assume this was left over from the website being in this repo.
2019-07-31 16:10:12 -07:00
13e017ba65 Remove image artifacts from cloudbuild (#707)
* Remove image artifacts from cloudbuild

* Update cloudbuild.yaml
2019-07-31 14:34:33 -07:00
3784300d22 Payload logging (#696) 2019-07-31 12:53:59 -07:00
31fd18e39b CI Reap Namespace (#705) 2019-07-31 12:32:31 -07:00
a54d1fcf21 Let CI runs in one cluster under unique namespaces (#701) 2019-07-31 12:11:37 -07:00
72a435758e Replace all-proto with variables containing all proto files (#703)
This moves the all-proto target from being phony, to containing a concrete lists of targets. This means other targets can depend on $(ALL_PROTO) without becoming phony itself.
2019-07-30 14:33:09 -07:00
6848fa71c2 Remove duplicate step from cloudbuild (#704) 2019-07-30 13:40:29 -07:00
987d90cc44 Have the demo use the template Dockerfile and sit in cmd (#700)
This also gives it a more distinguished name as there are likely to be other demos (with other components) in the near future. It is also sitting in a larger namespace (the cmd folder) which helps too.
2019-07-29 22:12:55 -07:00
baf943fdd3 Fix logger name in the appmain.go (#699) 2019-07-29 15:53:12 -07:00
c7ce1b047b Use templated Dockerfile and make for cmd images (#676)
This leads up to being able to swap out the standard dockerfile for one which uses a locally built binary. Also is just cleaner as there is less redundancy across dockerfiles.

Disables cgo for builds, as it doesn't work with distroless.

Stops relying on the phony protos target which causes extra rebuilds of all the protos. (Using variable expansion should fix this issue in a separate PR.)

All commands are now named run, because ARG arbitrarily doesn't work for ENTRYPOINT.
2019-07-26 18:10:59 -07:00
36a194e761 Unify server setup and call log configuration (#695) 2019-07-26 13:26:02 -07:00
605511d177 Enable workload identity in terraform and Makefile (#691)
* Enable workload identity in terraform and Makefile

* Applied changes and added tftstate file
2019-07-26 11:28:04 -07:00
2a08732508 Merge netlistener into util package. (#690) 2019-07-26 08:53:21 -07:00
e1c2b96cb5 Add HorizontalPodAutoscaler policies. (#645) 2019-07-26 07:53:57 -07:00
1bd84355b7 Replace context.Background() in tests to prepare for multi-tenancy. (#687) 2019-07-26 07:13:09 -07:00
a9014fbf78 Fix make test targets (#670) 2019-07-26 01:12:05 -07:00
8050c61618 Add helm wait and disable verify unreliable URLs. (#693) 2019-07-25 13:43:51 -07:00
8b765871c4 Add HTTP client logging support and flags for test. (#688) 2019-07-24 14:11:42 -07:00
88786ecbd1 Breakout synchronizer state into it's own struct. (#686) 2019-07-24 11:25:23 -07:00
f41c175f29 Expand hostname:port and hostname in certgen. (#677) 2019-07-23 15:16:09 -07:00
3607809371 Delete old helm config and use new config for CI (#667)
* Delete old helm config and use new config in CI

* Fix dependency

* Bug fixes

* Fix

* Disable tls
2019-07-23 14:36:24 -07:00
d21ae712a7 Add build/cmd to Makefile (#673)
This is part of the larger goal of simplifying and speeding up builds.

General strategy here:

1. Add make build/cmd target, to build what is currently in /cmd. <- this PR
2. In the basebuild Dockerfile, run make build/cmd, and replace the separate Dockerfiles for the services in cmd/ with one Dockerfile (with an arg for which service) which copies from build/cmd// to the Dockerfile and runs it.
3. Reconcile the Docker images not included in the /cmd to follow this pattern as well (in individual PRs.)
4. Clean up redundant ways to build things.
5. (If it improves speed) add local build variation for faster builds.
2019-07-22 14:21:06 -07:00
f3f1908318 Add /help, /configz, /debug/* pages. (#662) 2019-07-17 07:33:09 -07:00
9cb4a9ce6e Remove cloudbuild artifacts (#668) 2019-07-17 07:00:29 -07:00
8dad7fd7d0 Configure helm install using subcharts (#652)
* Split up charts

* Fix build/chart/

* Revert Makefile

* Bug fixes

* Checkpoint

* Update

* Revert Makefile
2019-07-16 15:35:52 -07:00
f70cfee14a Cache dependency downloads by adding only go module to Dockerfile (#664)
First copy only the go.sum and go.mod, then download dependencies. Docker
caching is (in)validated by changes to the input files. So when the dependencies
for the project don't change, the previous image layer can be re-used. go.sum
is included as its hashing verifies the expected files are downloaded.

I'm ignoring cases where the go.mod is missing a dep for now. Later go commands will
fetch the missing deps, and they should make their way into go.mod sooner rather than later.
If it's a mess, it can be cleaned up later.

The time comparison of building all images from before and after is:
clean build: 7m22s -> 7m22s
after a small change to the demo: 5m59s -> 5m7s

So no speed increase for purely fresh builds (as expected), but saves a minute when deps haven't changed from the last build.
2019-07-16 14:38:01 -07:00
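The layering described above looks roughly like this (illustrative Dockerfile fragment):

```dockerfile
# Copy only the module files first: this layer, and the downloaded
# dependencies below it, stay cached as long as go.mod/go.sum are
# unchanged. go.sum's hashes verify the expected files are fetched.
COPY go.mod go.sum ./
RUN go mod download

# Source changes invalidate the cache only from this point on.
COPY . .
RUN go build ./...
```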
36f92b4336 Instrument HTTP clients and servers. (#663) 2019-07-16 12:54:19 -07:00
164dfdde67 Open locally serving TCP ports in unit tests to avoid triggering firewall screens. (#660) 2019-07-16 11:32:12 -07:00
c0d6531f3f Simplify HTTP client construction. (#661) 2019-07-16 10:01:56 -07:00
52610974de Update Open Match Developer Documentation (#655) 2019-07-15 13:43:51 -07:00
041572eef6 Move monitoring/ to telemetry/. (#656) 2019-07-15 12:44:27 -07:00
e28fe42f3b Fix metrics flushing, add OC agent, and refactor multi-closing. (#648) 2019-07-15 11:39:04 -07:00
880e340859 Merge Logrus Loggers (#649) 2019-07-15 06:38:16 -07:00
88a659544e Enable stress test for v.6 (#650)
* Checkpoint

* Fix

* README

* Fix
2019-07-12 16:56:14 -07:00
9381918163 An attempt to fix the flake test in frontend_service_test (#622) 2019-07-12 11:51:46 -07:00
1c41052bd6 Obsolete e2e test setup files (#633)
* Obsolete e2e test setup files

* Review
2019-07-12 11:03:45 -07:00
819ae18478 Add metrics for Open Match (#643) 2019-07-11 17:17:31 -07:00
ad96f42b94 Add TLS support to the Helm chart. (#644) 2019-07-11 15:41:41 -07:00
1d778c079c Add Terraform Linting (#638) 2019-07-11 14:47:45 -07:00
4bbfafd761 Stress test automation (#631)
* Stress test automation

* Use distroless

* Review
2019-07-11 12:10:15 -07:00
28e5d0a1d1 Update Terraform setup and README (#621)
* Update Terraform README

* Enable GCP API using Terraform

* Review comment

* Update secure-gke.tf

* Update link
2019-07-11 10:56:42 -07:00
a394c8b22e Refactor Helm deployment templates to share common config. (#637) 2019-07-10 13:59:43 -07:00
93276f4d02 Add config based metrics and logging for gRPC. (#634) 2019-07-10 11:44:19 -07:00
310d98a078 Fix Open Match logo in Helm chart (#635) 2019-07-10 10:51:53 -07:00
a84eda4dab Monitoring Dashboard (#630) 2019-07-09 18:08:04 -07:00
ce038bc6dd Remove OPEN_MATCH_DEMO_KUBERNETES_NAMESPACE (#624)
It doesn't work anyways, and would require a lot more than what is here to get it to work.

This resolves #214
2019-07-09 15:06:54 -07:00
74fb195f41 FetchMatches e2e tests (#610)
* Add mmf and evaluator setup for e2e in cluster tests

* Add comments
2019-07-09 13:26:00 -07:00
1dc3fc8b6b Remove unused config values (#579) 2019-07-09 12:08:27 -07:00
de469cb349 Config cleanup and improved health checking (#626) 2019-07-09 11:12:31 -07:00
7462f32125 Update (#625) 2019-07-09 10:09:58 -07:00
3268461a21 Use struct in assignment properties (#612)
Fixing because I noticed that we were still using string here.
Also, properties was stating that it was optional and that Open Match didn't interpret the contents, which is true for all of Assignment's fields, so clarified that.
2019-07-09 00:07:11 -07:00
3897cd295e Open Match demo creating tickets, matches, and assignments (#611)
This is the working end to end demo!

There are 3 components to the demo:

Uptime just counts up once every second.
Clients simulates 5 game clients, which create a ticket, then wait for assignment on that ticket.
Director simulates a single director, requesting matches and giving fake assignments to all of the tickets.
To run: make create-gke-cluster push-helm push-images install-chart proxy-demo
2019-07-08 18:10:46 -07:00
5a9212a46e Use google.rpc.Status for the assignment error. (#619) 2019-07-08 15:56:40 -07:00
04cfddefd0 Add error to MMF harness signature. (#618) 2019-07-08 14:55:55 -07:00
b7872489ae Use plurals for repeated proto fields (#613)
The style guide for protos states that repeated fields should have a plural name: https://developers.google.com/protocol-buffers/docs/style#repeated-fields
2019-07-08 13:45:18 -07:00
6d65841b77 Make default build-*-image rule point to cmd/*/Dockerfile (#614) 2019-07-08 13:23:31 -07:00
b4fb725008 Break out rpc client cache for reuse. (#615) 2019-07-08 11:44:27 -07:00
50d9a0c234 Add mmf and evaluator setup for e2e in cluster tests (#607)
* Add mmf and evaluator setup for e2e in cluster tests

* Fix

* Fix

* Fix
2019-07-08 10:21:44 -07:00
a68fd5ed1e Update helm dep and remove unused helm template (#609) 2019-07-08 09:40:28 -07:00
be58fae864 Add structs package which simplifies proto struct literals (#605) 2019-07-03 11:31:52 -07:00
f22ad9afc5 Demo scaffolding with uptime counter (#606)
This hooks up the demo webpage to connect over a websocket.  It includes several 
related minor changes to get things working properly:

- "make proxy-demo" was broken because it was referencing helm's open-match-demo, which was merged with open-match.
- Set up bookkeeping, such as health checks, configuration, and logging.
- Turn on the demo in values.yaml, and only include one replica. (more than one demo instance would collide.)
2019-07-02 16:40:48 -07:00
6b1b84c54e Updater logic for the demo (#604)
Updater allows concurrent processes to update different fields on a json object
which is serialized and passed to a func([]byte). In the demo, different SetFunc
for different fields on the base json object will be passed to the relevant components
(clients, director, game servers, etc). These components will run and pass their state
to the updater.

This updater will be combined with bytesub by passing bytesub's AnnounceLatest
method into the base updater New. This way the demo state of each component
will be passed to all current dashboard viewers.
2019-07-02 16:12:31 -07:00
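The updater pattern described above — concurrent components each setting their own field on a shared JSON object, with every new snapshot handed to a `func([]byte)` — can be sketched roughly as follows. This is an illustrative stand-in, not the real `internal/updater` API; the type and function names here are invented for the example.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

// updater serializes concurrent per-field updates into one JSON object
// and hands each new snapshot to an announce callback, loosely mirroring
// the demo updater described in the commit above (illustrative only).
type updater struct {
	mu       sync.Mutex
	fields   map[string]interface{}
	announce func([]byte)
}

func newUpdater(announce func([]byte)) *updater {
	return &updater{fields: map[string]interface{}{}, announce: announce}
}

// Set updates one field and announces the full serialized state.
// Each component (clients, director, uptime) would hold a closure
// bound to its own field name.
func (u *updater) Set(field string, value interface{}) {
	u.mu.Lock()
	defer u.mu.Unlock()
	u.fields[field] = value
	b, _ := json.Marshal(u.fields)
	u.announce(b)
}

func main() {
	var last []byte
	// In the demo, announce would be bytesub's AnnounceLatest, fanning
	// the snapshot out to all connected dashboard viewers.
	u := newUpdater(func(b []byte) { last = b })
	u.Set("uptime", 5)
	u.Set("clients", "5 tickets waiting")
	fmt.Println(string(last)) // → {"clients":"5 tickets waiting","uptime":5}
}
```

Combining this with bytesub, as the commit notes, just means passing the broadcaster's announce method in as the callback.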
043ffd69e3 Add post validation to keep backend from sending matches with empty tickets (#603)
* Add post validation to keep backend from sending matches with empty tickets
2019-07-01 17:42:14 -07:00
e5f7d3bafe Implements a score-based evaluator for end to end test (#600)
* Implements a score-based evaluator for end to end test

* Add more tests

* Fix

* Add more tests
2019-06-28 12:40:56 -07:00
8f88ba151e Add preview deployment script (#602) 2019-06-28 12:14:44 -07:00
9c83062a41 Add score to mmf (#599) 2019-06-28 11:39:38 -07:00
7b31bdcedf Config based Swagger UI (#601) 2019-06-28 10:46:36 -07:00
269e6cd0ad Helm charts and Makefile commands for in-cluster end to end tests. (#598)
* Add e2eevaluator and e2ematchfunction setup

* Checkpoint

* Update

* Fix
2019-06-28 10:23:20 -07:00
864f13f2e8 SwaggerUI now logs to Stackdriver and reads from matchmaker_config.yaml (#597) 2019-06-27 16:45:32 -07:00
e3a9f59ad9 Add e2eevaluator and e2ematchfunction setup (#595)
* Add e2e setup

* Fix

* Fix
2019-06-27 15:56:02 -07:00
8a3f6e43b8 Add skip ticket checks to FetchMatches and Statestore service (#578)
* Add skip ticket checks to FetchMatches and Statestore service

* Fix

* Fix golangci

* Add ignore list tests
2019-06-27 13:25:25 -07:00
2b8597c72e Enable E2E Cluster Tests (#592) 2019-06-27 10:39:15 -07:00
c403f28c04 Reduce CI Latency by Improving Waits and Reducing Docker Pulls (#593) 2019-06-27 09:33:49 -07:00
317f914daa Enable error handling for evaluation (#594)
add error handling to evaluation
2019-06-26 08:13:41 -07:00
16fbc015b2 Reduce CI Times (#591) 2019-06-25 16:37:42 -07:00
d1d9114ddb E2E Test Framework for k8s and in-memory clusters. (#589) 2019-06-25 15:42:47 -07:00
e6622ff585 Add evaluator to E2E Minimatch tests (#590) 2019-06-25 14:52:02 -07:00
99fb4a8fcf Codify Open Match Continuous Integration as a Terraform template. (#577) 2019-06-25 14:29:19 -07:00
e0ebb139bf Implement core synchronizer functionality. (#575)
Implement core synchronizer functionality.
2019-06-25 14:06:50 -07:00
31dcbe39f7 Terraform documentation and change default project. (#584) 2019-06-25 12:46:29 -07:00
76a1cd8427 Introduce Synchronizer Client that delays connecting to the synchronizer at runtime when processing FetchMatches (#586) 2019-06-25 10:50:03 -07:00
ac6c00c89d Add Helm Chart component for Open Match Demo Evaluator (#585) 2019-06-25 10:30:55 -07:00
a7d97fdf0d Fix compile issue with _setup referencing _test vars. (#588) 2019-06-25 09:43:33 -07:00
f08121cf25 Expose services as backed by a LoadBalancer. (#583) 2019-06-25 09:19:54 -07:00
cd6dd410ee Light reduction of log spam from errors. (#573) 2019-06-24 06:59:54 -07:00
5f0a2409e8 Add calls to synchronizer to backend service. Currently set them to default disabled. (#570) 2019-06-24 06:26:42 -07:00
d445a0b2d5 Make FetchMatches return a direct result instead of streaming (#574)
* Make FetchMatches return a direct result instead of streaming

* Fix
2019-06-22 18:25:05 -07:00
1526827e3c Refactor fetch matches tests to support more incoming test scenarios (#572)
* Checkpoint

* Refactor fetch matches tests to support more incoming test scenarios
2019-06-21 23:29:00 -07:00
82e60e861f Deindex ticket after assignment (#571) 2019-06-21 16:23:50 -07:00
5900a1c542 Improve logging on server shutdown. (#567) 2019-06-21 16:01:09 -07:00
a02aa99c7a Use distroless nonroot images. (#559) 2019-06-21 15:29:25 -07:00
2f3f8b7f56 Rename Synchronizer methods in proto to better align with their functionality (#568) 2019-06-21 12:44:32 -07:00
a7eb1719cc Merge demo chart into open-match chart. Also create a open-match repository. (#565) 2019-06-21 11:57:06 -07:00
ea24b702c8 Reduce the size of the default chart for Open Match. (#548) 2019-06-21 10:39:35 -07:00
e7ab30dc63 Fix CI: Change min GKE cluster version to not point to a specific version since they can go away at any time. (#566) 2019-06-21 10:22:14 -07:00
8b88f26e4e Test e2e QueryTickets behaviors (#561)
* Test e2e QueryTickets behaviors

* Fix

* fix

* Fix angry bot

* Update
2019-06-20 18:10:19 -07:00
d5f60ae202 Simplify mmf config proto definitions (#564)
* Simplify mmf config proto definitions

* Update
2019-06-20 15:45:47 -07:00
113ee00a6c Add e2e tests to Assignment logic and refine redis.UpdateAssignment workflow (#557)
* Add e2e tests to Assignment logic
2019-06-20 14:16:11 -07:00
c083f1735a Make create ticket returns precondition failure code when receiving n… (#558)
* Make create ticket return a precondition failure code when receiving non-number properties

* Update based on feedback
2019-06-20 13:09:23 -07:00
52ad8de602 Consolidate e2e service start up code (#553) 2019-06-19 17:18:46 -07:00
3daebfc39d Fix canary tagging. (#560) 2019-06-19 14:56:16 -07:00
3e5da9f7d5 Fix Swagger UI errors. (#556) 2019-06-19 13:14:02 -07:00
951e82b6a2 Add canary tagging. (#555) 2019-06-19 12:58:16 -07:00
d201242610 Use pb getter to avoid program panics when required fields are missing (#554) 2019-06-19 10:51:47 -07:00
1328a109e5 Move generated pb files from internal/pb to pkg/pb (#502)
* Move generated pb files from internal/pb to pkg/pb

* Update base on feedback
2019-06-18 18:04:27 -07:00
2415194e68 Reorganize e2e structure (#551)
* Reorganize e2e structure

* Fix golangci error

* Split up setup code based on feedback
2019-06-18 17:37:46 -07:00
b2214f7b9b Update module dependencies (#547)
* Update dependency version

* Update

* Fix makefile
2019-06-18 16:55:58 -07:00
98220fdc0b Add replicas to Open Match deployments to ensure statelessness. (#549) 2019-06-18 14:52:24 -07:00
b2bf00631a Add more unit tests to frontend service (#544) 2019-06-18 11:15:26 -07:00
49ac68c32a Update miniredis version (#550) 2019-06-18 10:36:25 -07:00
7b3d6d38d3 Terraform Configs (#541) 2019-06-18 06:56:28 -07:00
a1271ff820 Add more unit tests to backend (#543)
* Add more unit tests to backend

* Fix typo

* Fix typo
2019-06-17 18:22:05 -07:00
2932144d80 Add Evaluator Proto, Evaluator Harness and an example Evaluator using the harness. (#545)
* Add Evaluator Proto, Evaluator Harness and an example Evaluator using the harness.

This change just adds a skeleton of the sample evaluator to set up
building the harness. The actual evaluation logic for the example, tests and wiring up the example in the helm charts, demos and e2e tests etc., will follow in future PRs.

* fix golint issues
2019-06-17 17:18:14 -07:00
3a14bf3641 Add bytesub for broadcasting demo state (#510)
This will be used by the demo. The demo will have a central state, which will be updated, serialized to json, and then announced on an instance of ByteSub. Demo webpage clients will subscribe using a websocket to receive the latest state.
2019-06-17 16:32:31 -07:00
7d0ec363e5 Move e2e test cases from /app folder to /e2e folder (#542) 2019-06-17 15:09:52 -07:00
dcff6326b1 Rename Evaluator component to Synchronizer (#530)
The Synchronizer exposes APIs to give a synchronization window context
and to add matches to evaluate for that synchronization window. The
actual evaluator will be re-introduced as the component authored by the
customer that the Synchronizer triggers whenever the window expires.
2019-06-17 09:50:05 -07:00
ffd77212b0 Add test for frontend.GetAssignment method (#531)
* Refactor frontend service for unit tests
2019-06-15 00:35:53 -07:00
ea3e529b0d Add deployment phases to CI (#537) 2019-06-14 16:46:14 -07:00
db2c298a48 Prepare CI for Cluster E2E Testing (#536) 2019-06-14 14:50:13 -07:00
85d5f9fdbb Fix open match logo (#535) 2019-06-14 11:50:18 -07:00
401329030a Delete the website, moved to open-match-docs. (#529) 2019-06-14 07:20:11 -07:00
d9e20f9c29 Refactor frontend service for unit tests (#521)
* Refactor frontend service for unit tests

* Add more tests

* Fix

* Fix
2019-06-13 20:01:37 -07:00
f95164148f Temorarily disable md-test CI check (#532) 2019-06-13 19:15:33 -07:00
ab39bcc93d Disable website autopush. Moved to open-match-docs. (#527) 2019-06-13 15:28:26 -07:00
d1ae3e9620 Refactor mmlogic service for unit tests (#524)
* Refactor mmlogic service for unit tests

* Checkpoint

* Add test samples

* Update

* Fix golangci

* Update
2019-06-13 14:51:26 -07:00
de83c9f06a Refactor backend service for unit tests (#520)
* Refactor backend service for unit tests

* Refactor frontend service for unit tests

* Golangci fix

* Fix npe

* Add tests

* Fix bad merge

* Rewrite

* Fix hexakosioihexekontahexaphobia

* Fix type

* Add more tests

* go mod
2019-06-13 14:40:13 -07:00
9fb445fda6 Create cluster reaper for Open Match e2e tests. (#503) 2019-06-13 13:55:54 -07:00
050367eb88 Use absolute paths in the makefile. Fix macos sed bug. (#526) 2019-06-13 13:23:05 -07:00
40d288964b Fix a bug in redis connect (#525)
* Fix a bug in redis connect

* weird

* Fix go mod
2019-06-13 11:27:20 -07:00
e4c87c2c3a PodSecurityPolicy for Open Match (#505) 2019-06-13 06:54:46 -07:00
bd2927bcc5 Add image for demo (#511)
This is barebones work to get an image for the demo working. This image will eventually contain the demo go-routines that emulate the clients, and present a dashboard for the state of the demo. Will follow up with actual logic in the demo itself.

TESTED=Manually started om and the demo, ran proxy-demo and got the expected 404.
2019-06-12 15:15:42 -07:00
271e745a61 Read paging size and don't blow up on misconfiguration (#516)
This follows what the documentation on the min/max constants says it does.
Also defines a new default value which is a more reasonable default than the minimum.
2019-06-12 14:16:39 -07:00
98c15e78ad Fix program panics when calling proto subfields (#517) 2019-06-12 12:02:15 -07:00
fbbe3cd2b4 Add tests for example match functions (#499)
* Add more tests

* Update

* Fix
2019-06-12 11:48:28 -07:00
878ef89c40 Remove unnecessary test files (#522) 2019-06-12 11:11:33 -07:00
a9a5a29e58 Add filter package (#513)
This package will be useful for a simple definition of how filters work, as well as a way to process tickets without relying on indexes. This will allow other filter types (eg, strings) to be added without a redis implementation. See package documentation for some more details.

I plan on replacing the current redis indexing with an "all tickets" index to remove the edge cases it gets wrong that we don't want to spend time fixing for v0.6. Instead this package will be responsible for filtering which tickets to return. This also removes the index configuration problem from v0.6. Then for v0.7, once the indexing and database solutions are chosen, we can go back to implementing the correct way to be indexing tickets.

Also included are test cases, separated in their own definition. These test cases should be used in the future for end to end tests, and for tests on indexes. This will help ensure that the system as a whole maintains the behavior specified here.
2019-06-11 16:32:44 -07:00
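The filter package described above processes tickets directly instead of relying on Redis indexes. A minimal sketch of that idea — evaluating range filters in memory against ticket properties — might look like this; the `ticket` and `filter` types here are simplified stand-ins for the real pb types, not the actual package API.

```go
package main

import "fmt"

// ticket and filter are illustrative stand-ins for the real pb types;
// the actual filter package operates on pb.Ticket properties.
type ticket struct {
	ID         string
	Properties map[string]float64
}

type filter struct {
	Attribute string
	Min, Max  float64
}

// inRange reports whether a ticket passes every filter. Tickets missing
// a filtered attribute are excluded, so new filter types can be added
// without any storage-level indexing support.
func inRange(t ticket, filters []filter) bool {
	for _, f := range filters {
		v, ok := t.Properties[f.Attribute]
		if !ok || v < f.Min || v > f.Max {
			return false
		}
	}
	return true
}

func main() {
	tickets := []ticket{
		{ID: "a", Properties: map[string]float64{"mmr": 1000}},
		{ID: "b", Properties: map[string]float64{"mmr": 2500}},
		{ID: "c", Properties: map[string]float64{"level": 10}},
	}
	pool := []filter{{Attribute: "mmr", Min: 500, Max: 2000}}
	for _, t := range tickets {
		if inRange(t, pool) {
			fmt.Println(t.ID) // only "a" passes
		}
	}
}
```

With a single "all tickets" index, the storage layer only has to enumerate tickets; all matching logic lives in one testable function like this.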
8cd7cd0035 Update the behavior of backend.FetchMatches (#498)
* Update the behavior of backend.FetchMatches

* Update comments

* You fail I fail

* Fix

* Update

* Implement with buffered channel
2019-06-11 16:16:01 -07:00
92495071d2 Update golangci version and enable it in presubmit check (#514)
* bringitback

* Disable body close in golangci presubmit check
2019-06-11 13:39:02 -07:00
a766b38d62 Fix binauthz policy to allow for elasticsearch image. (#470) 2019-06-10 20:58:26 -07:00
6c941909e8 Fix helm deletion error (#504) 2019-06-10 15:53:22 -07:00
77dc8f8c47 Adds backend service tests (#495)
* Add more tests

* Adds backend service tests

* Fix golangci bot err

* update based on feedback

* Update

* Rename tests
2019-06-07 13:19:26 -07:00
a804e1009b Fix Shadowcheck (#501)
* Fix shadow check error

* Update
2019-06-07 11:08:52 -07:00
3b2efc39c7 e2e test with random port (#473)
* Checkpoint

* e2e tests with random ports

* Update based on feedback

* Fix

* Cleanup for review

* Update
2019-06-06 18:08:30 -07:00
b8054633bf Fix KinD make commands (#500)
The "v" was missing from the kind urls, so it wasn't properly downloaded.

Additionally, KinD does not update the kube config, so the makefile can't automatically configure future kubecfg commands to work properly. As an easy fix, just tell the user to run the commands themselves.

See #500 for KinD context.

This fixes #497
2019-06-06 14:59:30 -07:00
336fad9079 Move harness to pkg/ directory and reorganize examples/ (#487)
* Move sample mmf and harness to pkg/ directory

* Fix makefile error

* Fix cloudbuild
2019-06-05 17:16:48 -07:00
ce59eedd29 Cleanup redundant createStore methods using statestoreTesting helpers (#465)
* Cleanup redundant createStore method using statestoreTesting helpers

* Fix unparam error

* Fix unparam error

* Update

* Fix bad merge
2019-06-05 16:40:53 -07:00
83c0913c34 Remove functionName from harness.FunctionSetting (#494) 2019-06-05 16:19:26 -07:00
6b50cdd804 Remove test-hook that bypasses statestorage package to directly initialize Redis storage (#493) 2019-06-05 16:07:53 -07:00
f427303505 Reuse existing helper functions to grab GRPC clients (#452)
* Reuse existing helper functions to grab GRPC clients

* Update based on feedback
2019-06-05 15:27:00 -07:00
269dd9bc2f Add Swagger UI to enable interactive calls. (#489) 2019-06-05 08:31:29 -07:00
d501dbcde6 Add automation to apply Swagger UI directory. (#478) 2019-06-05 07:18:27 -07:00
04c4e376b5 Use cos_containerd for node pool image type. (#471) 2019-06-05 06:34:03 -07:00
3e61359f05 Create third party folder with grpc-gateway *.proto dependencies (#460)
* Create third party folder with grpc-gateway *.proto dependencies

* Enable automated third_party download

* third_party
2019-06-04 17:34:04 -07:00
8275ed76c5 Consolidate statestore/public.New signature (#482)
* Consolidate statestore.New signature

* Fix bad merge

* Fix bad merge
2019-06-04 17:22:55 -07:00
e8b2525262 Distinguish between example and demo. (#488)
There will be many example MMFs, but we specifically want a runnable demo (the demo is an example, but not all examples are the demo).

I change the install/helm from example to demo, as it now specifically only installs the demo. Changes in the Makefile to reflect that.
I also add new make commands to build and push the demo images. Currently it only contains the example mmf, but it will in the near future contain the demo driver image.

Tested = Created a GKE cluster and installed via the make commands.
2019-06-04 16:52:55 -07:00
3517b7725c Add Kubernetes health checks. (#469) 2019-06-04 16:01:30 -07:00
6cd521abf7 Remove TestServerBinding from component tests (#485) 2019-06-04 15:18:29 -07:00
924fccfeb3 Update to new repository location (#483)
CI is broken.
2019-06-04 12:51:43 -07:00
c17ca7a10c Update GetAssignment and UpdateAssignment methods to meet design need (#477)
* Update assignment methods

* nolint on exponential backoff strat
2019-06-04 11:28:17 -07:00
b11863071f Revert presubmit (#480) 2019-06-04 11:15:14 -07:00
20dbcea99f Fix short sha tags in docker builds. (#479) 2019-06-04 11:02:20 -07:00
13505956a0 Enable golangci in presubmit (#475) 2019-06-03 19:57:02 -07:00
3d04025860 Have sample MMF create simple 1v1 matches (#438)
The previous MMF required a weird behavior where you needed to set tickets in rosters
to be overridden. It would also break if there weren't enough tickets to fill the rosters. This
is a simpler example which takes tickets from the pool and assigns them into 1v1 matches.
2019-06-03 17:51:20 -07:00
d0f7f2c4d3 Implemented backend rest config logic (#459)
* Implemented backend rest config logic

* Remove unnecessary logs and let logic return rpc status errors

* Fix chatty bot

* Fix bad merge
2019-05-31 17:12:03 -07:00
272e7642b1 Move back to helm2 and adjust cluster size again. (#472) 2019-05-31 15:38:27 -07:00
f3f80a70bd Reorganize backend service and e2e test for incoming rest config logic (#458)
* Reorganize backend service and e2e test for incoming rest config logic
2019-05-31 12:57:50 -07:00
80bcd9487f Make backoff strategy configurable (#467) 2019-05-31 11:32:26 -07:00
2e25daf474 Increase the size of the default cluster. We are hitting vCPU limits (#464) 2019-05-30 17:51:03 -07:00
9fe32eef96 Implements frontend and backend Assignments method with tests (#463)
* Implements backend AssignTickets method

* Implement backend GetAssignment method with tests

* Fix test comments

* Remove redundant log

* Go mod tidy
2019-05-30 20:18:17 -04:00
0446159872 Modify FunctionConfig to indicate mmf server type (#448)
* Make backend service support secure mmf server
2019-05-30 19:59:29 -04:00
2ef8614687 Switch to Helm 3-alpha1 (#453) 2019-05-30 16:03:28 -07:00
de8279dfe0 Add server/client test helpers and a basic test. (#462) 2019-05-30 15:42:39 -07:00
8fedc2900f Implements evaluator harness and a default example (#449)
* Implements evaluator harness and a default example
2019-05-30 18:17:35 -04:00
0f95adce20 Update copyright headers (#461) 2019-05-30 14:53:02 -07:00
4f851094a0 Fix some subtle TLS bugs and remove clientCredentialsFromFileData (#457) 2019-05-30 06:37:01 -07:00
9cae854771 Enable most of the golangci checks and fix internal/set tests (#416)
* Enable most of the golangci checks and fix internal/set tests
2019-05-29 18:33:54 -04:00
603089f404 Add Binary Authorization commands. (#451) 2019-05-29 14:59:03 -07:00
d024b46487 Enable GKE Autoscaling (#455) 2019-05-29 14:22:33 -07:00
a2616870c7 Splitup host and prefix variables (#447) 2019-05-29 14:27:37 -04:00
6d8b516026 Expose minimatch config to test context (#445) 2019-05-29 14:13:21 -04:00
b4d3e84e3d Update tools, help, and scope some global vars. (#426) 2019-05-29 07:48:21 -07:00
6b370f56c8 E2E tests for Open Match using Minimatch. (#440)
* E2E tests for Open Match using Minimatch.

The test binary starts all core Open Match services and a sample MMF in
proc. It then uses some test data to create tickets, query for Pools and
fetch matches and validate that the pools and matches have expected
tickets.
2019-05-25 00:23:01 -07:00
d5da3d16b7 Fix minor issues that were encountered in an E2E test case for generating matches. (#437)
* Fix minor issues that were encountered in an E2E test case for generating matches. Here are the issues this change fixes:

1. Add error check to avoid accessing failed connection in tcp net listener
2. Divide by zero error in the redis state storage library if page size is 0.
3. Ability to support null hostname when connecting to mmlogic client in match function harness.
4. Set min / max page size limits to initialize correctly if page size is set to unsupported values.

* review changes
2019-05-22 22:44:03 -07:00
31469bb0f9 Modify the Function Configuration proto to improve code readability. (#435)
The current proto names both the variable bearing the function config and the config proto itself 'grpc', due to which proto generation produces structures representing the variable and the type that differ only by an '_'. Code using this style turns out confusing and hard to read. Renaming the type so it differs from the variable improves code readability.
2019-05-21 22:52:07 -07:00
e8adc57f76 Add input validation to FetchMatches. The call now fails with (#434)
InvalidArgument if match configuration is missing or unsupported.
2019-05-21 14:36:24 -07:00
0da8d0d221 Fix 'make delete-chart' - use ignore-not-found properly (#433)
Without '=true', it complains that crd is not a supported kubectl command (because it thinks delete is an arg to ignore-not-found.)
2019-05-21 10:15:27 -07:00
08d9210588 Fix: Frontend config specifies grpcport properly. (#432)
It appears the code reads this properly already. It's just that the missing config probably defaulted to 0, which happens to work when assigning a port.
2019-05-20 16:09:11 -07:00
8882d8c9a1 Fix mmf harness reading port from kub config (#431) 2019-05-20 15:40:23 -07:00
6c65e924ec Use proto3's struct for properties, not string. (#430)
This makes the API more usable for both json clients (who no longer
have to encode json into a string and put that into their json) and also
for proto clients (who no longer have to use json...). When converted
to json, struct will be encoded directly into an object, which is much
more convenient.

The majority of the rest of this change is fixing tickets in tests.

This removes the dependency on the json parser that was used for reading
from properties.

I also added a test on indexing and filtering tickets from the state
store, because I can't help myself and I have a problem.
2019-05-20 13:33:55 -07:00
beba937ac5 Add a new Evaluator service to Open Match core components to perform Match evaluation. (#424)
Evaluator component synchronizes all the generated proposals, evaluates
them for quality, deduplicates proposals and generates result
matches. Backend service can scale with number of backend requests but
the evaluator acts as the single point of aggregation for results. Hence it
needs to be a separate service.

This change only introduces the scaffolding for the evaluator,
adding it as a core component to the Open Match build, deployment and other
tooling. This change does not actually wire the evaluator up into the
match generation flow. The change that adds the core evaluator logic
will follow.
2019-05-17 16:24:18 -07:00
caa755272b Implement FetchMatches on Backend Service (#425) 2019-05-17 16:04:16 -07:00
ea60386fa0 Implement clientwrapper for harness and testing (#415)
* Implement clientwrapper for harness and testing

* Disable clientAuthAndVerify on tlsServer

* Remove stale temp file codes from clients_test.go
2019-05-17 15:31:55 -07:00
9000ae8de4 Have redis filter query return full tickets (#429)
* Have redis filter query and return full tickets

* break out paging logic from redis filter

* Per code review, return grpc statuses
2019-05-17 15:21:23 -07:00
4e2b64722f Splits up stress test utility codes and implements mmlogic stress test (#419) 2019-05-17 12:27:50 -07:00
4c0f24217f Add validation for markdown links (#428) 2019-05-17 09:20:03 -07:00
872b7be6a5 Fixing URL for getting started guide. (#427) 2019-05-17 08:52:00 -07:00
53f2ee208f Proto changes for implementing Backend Service (#421) 2019-05-16 13:49:32 -07:00
b5eaf153e8 Implement Frontend Service Create / Get / Delete Ticket (#417) 2019-05-15 22:39:26 -07:00
d3c7eb2000 Rename serving functions and params with 'Server' prefix (#418) 2019-05-15 14:29:29 -07:00
23243e2815 Add SwaggerUI to website (#413) 2019-05-15 09:02:08 -07:00
3be97908b2 Add make targets for creating TLS certificates. (#410) 2019-05-15 08:28:43 -07:00
5892f81214 Implement the Mmlogic Service (#414) 2019-05-14 18:15:50 -07:00
40892c9b2e Add option to serve with a trusted CA cert. (#409) 2019-05-14 17:18:26 -07:00
9691d3f001 Refactor serving package to move to serving/rpc path. (#400)
* move serving codes to rpc directory to hide tls util methods

* nolint on unused tls util codes
2019-05-14 15:36:32 -07:00
9808066375 Move open match core components from future folders to actual destination. (#412)
* Remove future
2019-05-14 14:34:19 -07:00
534716eef4 Fix golangci errors (#405)
* Fix golangci errors
2019-05-14 13:35:54 -07:00
ece4a602d0 Move binaries to cmd/ (#406) 2019-05-14 13:04:08 -07:00
8eb72d98b2 Delete old Open Match (#408) 2019-05-14 11:04:52 -07:00
6da8a34b67 Ignore errors from make delete-kind-cluster (#407) 2019-05-14 10:37:39 -07:00
b03189e34c Delete evaluator (#404) 2019-05-14 06:40:13 -07:00
9766871a87 new mmf service impl (#403)
* mmf service impl
2019-05-13 19:16:08 -07:00
4eac4cb29a New MMF harness Makefile and skeletons (#402)
* New MMF harness Makefile and skeletons

* Not simple
2019-05-13 16:28:10 -07:00
439286523d Remove old mmf codes (#387)
* Remove old mmf codes

* cleanup makefile

* Cleanup cloudbuild

* makes cloudbuild great again

* Delete unmarshal.go

* Delete unmarshal_test.go
2019-05-13 15:46:37 -07:00
e0058c7c08 Get started guide (#395)
* Get started guide and fix development guide typo
2019-05-13 15:08:58 -07:00
ec40f26e62 Fix make all dependency (#388) 2019-05-13 11:37:30 -07:00
17134f0a40 Break up tls util codes (#399)
* Break up tls util codes
2019-05-13 11:16:18 -07:00
add2464b33 Basic experimental knative instructions (#398) 2019-05-13 10:58:17 -07:00
b72b4f9b54 Redis implementation for State Storage methods for Tickets (#384) 2019-05-10 16:03:56 -07:00
abdc3aca28 Clean up unused bindata (#383) 2019-05-10 10:43:26 -07:00
3ab724e848 Initialize state storage in Frontend, Backend and MMLogic services (#377) 2019-05-09 23:38:03 -07:00
3c8d0ce1b0 Fix the URL for install yamls. (#382) 2019-05-09 15:58:11 -07:00
c0166e3176 Refactor protos to match CRUD operations (#365) 2019-05-09 15:05:03 -07:00
3623adb90e Fix URLs, Post Submit, add gofmt to presubmit (#381) 2019-05-09 11:12:31 -07:00
fba1bcf445 Fix post commit (#379) 2019-05-09 06:57:17 -07:00
fdd865200a Set CORS policy on open-match.dev (#373) 2019-05-08 14:40:27 -07:00
b0fc8f261f Remove helm chart autopush (#378) 2019-05-08 14:11:08 -07:00
bdd3503d80 Add post commit for website auto push. (#370) 2019-05-08 13:25:20 -07:00
81fd2bab83 Add abstraction to storage layer to decouple Redis from Open Match Services (#367) 2019-05-08 12:54:34 -07:00
212a416789 Add Monitoring Configuration to Open Match (#362) 2019-05-08 12:13:25 -07:00
2425e28129 Improve development guide and remove old user guide. (#371) 2019-05-08 11:33:29 -07:00
3993a2f584 Delete old open match. gRPC harness will remain for now because it does not have an equivalent yet. (#351) 2019-05-08 10:21:32 -07:00
05cb4e503f Fix broken link. (#375) 2019-05-08 09:36:18 -07:00
1cf11e7d81 Add image-spec annotations (#359) 2019-05-07 19:40:28 -07:00
1985ecefed Fix issues with website before launch (#360) (#364) 2019-05-07 19:17:57 -07:00
b7ebb60325 Update helm chart dependencies, and add Jaeger for tracing. (#358) 2019-05-07 16:04:38 -07:00
e4651d9382 Add minimatch binary to gitignore (#366) 2019-05-07 15:09:37 -07:00
04a574688a Monitoring package for Open Match (#363) 2019-05-07 13:49:50 -07:00
d4a901fc71 Update frontend stress to use new API. (#361) 2019-05-07 13:27:50 -07:00
5de79f90cf Fix mmlogic readiness probe. (#357) 2019-05-07 11:20:34 -07:00
e42c8a0232 Add KinD support for OM deployments. (#355) 2019-05-07 11:03:26 -07:00
1503ffae3a Add make proxy*, update tools, and cleanup make output. (#356) 2019-05-07 10:42:50 -07:00
a842da5563 Download includes before using protoc tools. (#349) 2019-05-07 07:07:08 -07:00
c3d6efef72 Use golang vanity url: open-match.dev/open-match (#114) (#321) 2019-05-06 07:57:43 -07:00
0516ab0800 Minimatch for 0.6.0 (#346) 2019-05-06 07:33:47 -07:00
668bfd6104 updating release documentation and small edits to README (#340)
* updating release documentation and small edits to README

* Update release.md

* Update README.md
2019-05-03 19:03:35 -07:00
ef933ed6ef Add stubbed abstracted Redis client. (#345)
* Add stubbed abstracted Redis client.

* Make this work
2019-05-03 15:01:32 -07:00
3ee24e3f28 Add documentation on how to update the docs. (#344) 2019-05-03 11:32:59 -07:00
d0bd794a61 Move the future/fake_frontend to use the future/pb. (#339) 2019-05-03 10:58:41 -07:00
37bbf470de Replace helm chart for the new binaries. (#342) 2019-05-03 10:34:14 -07:00
412cb8557a Move all deprecated Makefile targets to the bottom. (#341) 2019-05-03 09:31:29 -07:00
38e81a9fd1 Revamp the website to include the basics and prepare for launch. (#334) 2019-05-02 15:59:48 -07:00
cb24baf107 Add gRPC/TLS and HTTPS serving support. (#330) 2019-05-02 15:12:13 -07:00
c6f6526823 Add top level Swagger annotations for API. (#335) 2019-05-02 13:28:31 -07:00
41e441050f Add main() for new binaries and wire up to CI (#336) 2019-05-02 12:44:29 -07:00
235e25c748 Remove 404ing pages. (#331) 2019-05-01 20:19:25 -07:00
93ca5e885c make test now does coverage (#328) 2019-05-01 19:59:18 -07:00
5d67fb1548 Add Root CA support to certgen. (#273) (#308) 2019-05-01 17:35:55 -07:00
faa6e17607 Rename CreateTicketsResponse to CreateTicketResponse for consistency. (#332) 2019-05-01 14:53:08 -07:00
6a0c648a8f Add missing copyright headers and godoc package comments. (#329) 2019-05-01 14:23:35 -07:00
8516e3e036 Frontendapi load testing impl (#290)
* Frontendapi Stress Tests
2019-05-01 11:55:09 -07:00
e476078b9f Auto-generate new protobufs and APIs. (#325) 2019-04-30 14:24:21 -07:00
0d405df761 Fix up some lint errors. (#326) 2019-04-30 12:53:58 -07:00
06c1208ab2 Remove log alias from logrus imports (#327) 2019-04-30 12:38:15 -07:00
af335032a8 Add protoc-gen-swagger proto options to includes. (#324) 2019-04-30 11:34:36 -07:00
a8be8afce2 Add insecure gRPC serving to future/. (#316) 2019-04-30 08:46:42 -07:00
e524121b4b Clarify in release issue to use release notes drafted in rc1 of a re… (#323)
* Clarify in release issue to use release notes drafted in rc1 of a release.  Also improve wording around instructions to create the release.
2019-04-29 17:45:17 -07:00
871abeee69 Initial protos for Open Match 0.6 (#322) 2019-04-29 16:39:49 -07:00
b9af86b829 Allow creating global loggers. (#317) 2019-04-29 16:15:14 -07:00
6a9572c416 Fix the trailing slash issue in Makefile (#319) 2019-04-29 15:44:55 -07:00
636eb07869 Use n1-highcpu-32 machine and cache base image for 2 days in CI (#231) (#301) 2019-04-29 15:04:38 -07:00
d56c983c17 Add readiness probe and remove redis sanity check 2019-04-29 14:02:32 -07:00
a8e857c0ba Introduce internal/future/ directory with its first file, fake_frontend.go. (#306) (#307) 2019-04-29 12:55:42 -07:00
75da6e1f4a Update master to use 0.0.0-dev for version (#296) 2019-04-29 10:41:54 -07:00
e1fba5c1e8 Update the vanity url to open-match.dev/open-match (#311) 2019-04-29 10:04:40 -07:00
d9911bdfdd golangci support (#242) 2019-04-29 09:36:29 -07:00
175293fdf9 Consolidate build and push docker images for better CPU utilization (#302) 2019-04-29 07:00:41 -07:00
01407fbcad Rephrase makefile set-redis-password command (#313) 2019-04-29 06:29:11 -07:00
edad339a76 Make matchmaker_config.yaml a part of Helm chart config (#204)
* move config pkg under internal/; matchmaker_config.yaml is a part of Helm chart config now

* ignore data race in some tests
2019-04-25 21:51:20 -07:00
c57c841dfc Add ListenerHolder.AddrString() to avoid bad IPv6 assumptions. (#299) 2019-04-25 21:31:15 -07:00
54dc0e0518 Update gcloud.md steps to work with free tier (#294)
While following the documentation to create a cluster for Open Match on GCP, the command-line example says to create the cluster with the n1-standard-4 machine type. 
This does not seem possible on the free tier, at least with the default settings: 

`C:\Program Files (x86)\Google\Cloud SDK>gcloud container clusters create --machine-type n1-standard-4 open-match-dev-cluster --zone us-west1-a --tags open-match`

`ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Insufficient regional quota to satisfy request: resource "CPUS": request requires '12.0' and is short '4.0'. project has a quota of '8.0' with '8.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=XXXXX`

To remove as many potential hurdles as possible for people new to GCP, I would suggest replacing it with n1-standard-2, which works straight away, as I expect Open Match can work with it.
2019-04-25 09:07:24 -07:00
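The free-tier-friendly variant suggested above can be sketched as follows. This is only a sketch, not the repository's canonical Makefile target: it assumes the default 3-node pool, so n1-standard-2 (2 vCPUs per node) needs 6 CPUs, which fits under the 8-CPU regional quota from the error message; verify against current GKE defaults before relying on it.

```shell
# Sketch: same cluster-create command with the machine type swapped for a
# smaller one that fits the default 8-CPU free-tier regional quota
# (3 nodes x 2 vCPUs = 6 CPUs < 8 CPUs).
MACHINE_TYPE="n1-standard-2"
CMD="gcloud container clusters create --machine-type ${MACHINE_TYPE} open-match-dev-cluster --zone us-west1-a --tags open-match"
echo "${CMD}"
```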
fa4e8887d0 Update README.md to add Windows batch line version (#295)
* Fixing a few typos
* On the Windows shell, you can't use backticks to capture the result of a program call as a string. I believe the standard way to do that is to use the **for** command to capture the result in a variable and then use it. I included that version in the "Deploying Open Match" section, as the current version does not work on Windows.
2019-04-25 08:12:16 -07:00
8384cb00b2 Self-Certificate generation for Open Match. (#274) 2019-04-25 07:40:35 -07:00
b9502a59a0 Fix REST proxy and added proxy health check tests (#275)
* Add proxy tests with swagger and healthcheck handlers

* Block grpcserver Start() until fully initialized

* Serve each .swagger.json file on its corresponding REST endpoint

* Resolve comments

* Add waitgroup to fully initialize the grpcserver
2019-04-24 15:30:31 -07:00
139d345915 Fix win64 and binary dependencies. (#287) 2019-04-24 13:36:03 -07:00
5ea5b29af4 Make release issue a github issue template (#280) 2019-04-24 12:34:50 -07:00
812afb2d06 Backend Client should keep generating matches, displaying failures if any (#285) 2019-04-24 10:51:29 -07:00
cc82527eb5 Update documentation to reflect 0.5.0 changes. (#283) 2019-04-24 09:38:28 -07:00
80b20623fb Include cloudbuild in places version needs to be set. Specify rc for all versions (#278) 2019-04-23 17:05:28 -07:00
a270eab4b4 Update release notes based on feedback. (#269) (#276) 2019-04-23 16:07:14 -07:00
36f7dcc242 Makefile for windows (#272) 2019-04-23 15:22:13 -07:00
a4706cbb73 Automatically publish development.open-match.dev on Post Commit (#201) (#263) 2019-04-23 13:37:51 -07:00
c09cc8e27f make clean now deletes build/ (#261) 2019-04-23 11:15:35 -07:00
be55bfd1e8 Increase version to 0.5.0-rc1 (#268)
* Increase version to 0.5.0-rc1

* Increment version to 0.5.0-rc1 in cloudbuild.yaml
2019-04-22 18:09:36 -07:00
8389a62cf1 Release Process for Open Match (#235) 2019-04-22 14:12:02 -07:00
af8895e629 Remove knative link because it fails in tests. (#265) 2019-04-22 11:47:51 -07:00
2a3241307f Properly set tag and repository when making install/yaml/ (#258) 2019-04-22 11:13:03 -07:00
f777a4f407 Publish all install/yaml/*.yaml files. (#264)
* Publish all install/yaml/*.yaml files.

* Update instructions and add publish post commit.

* Add yaml/
2019-04-22 10:02:49 -07:00
88ca8d7b7c Documentation (#259) 2019-04-21 17:23:54 -07:00
3a09ce142a Fix namespace issues in example yaml (#257) 2019-04-19 17:03:56 -07:00
8d8fdf0494 Add vanity url redirection support. (#239) 2019-04-19 16:33:00 -07:00
45b0a7c38e Remove deprecated examples, evaluator and mmforc (#249) 2019-04-19 15:34:21 -07:00
4cbee9d8a7 Remove deprecated artifacts from build pipeline (#255) 2019-04-19 14:46:49 -07:00
55afac2c93 Embed profile config in the container to be used for standalone executions. (#254)
Embed profile config in the container to be used for standalone executions. Will create a separate issue to figure out a better way to do this.
2019-04-19 14:07:41 -07:00
8077dbcdba Changes to make the demo steps easier (#253) 2019-04-19 11:30:30 -07:00
f1fc02755b Update theme and logo for Open Match website (#240) 2019-04-19 11:01:31 -07:00
0cce1745bc Changes to Backend API and Backend Client to support GRPC Function Ha… (#246)
* Changes to Backend API and Backend Client to support GRPC Function Harness
2019-04-19 10:41:15 -07:00
d57b5f1872 Helm chart changes to not install mmforc and deploy function Service (#227) (#248)
* Helm chart changes to not install mmforc and deploy function Service
2019-04-19 10:17:06 -07:00
1355e5c79e Fix lint issues in helm chart and improve lint coverage. (#252) 2019-04-19 09:49:42 -07:00
4809f2801f Add Open Match Logo (#251) 2019-04-19 08:28:13 -07:00
68d323f3ea 2nd pass of lint errors. (#247) 2019-04-19 05:42:57 -07:00
b99160e356 Fix grpc harness startup panic due to http proxy not being set up (#244) 2019-04-18 20:02:04 -07:00
98d4c31c61 Fix most of the lint errors from golangci. (#243) 2019-04-18 18:15:46 -07:00
b4beb68920 Reduce log spam in backendapi (#234) 2019-04-18 15:39:41 -07:00
b41a704886 Bump versions of dependencies (#241) 2019-04-18 14:12:05 -07:00
88a692cdf3 Evaluator Harness and Sample golang Evaluator (#238)
* Evaluator Harness and sample serving Evaluator
2019-04-18 12:35:37 -07:00
623519bbb4 Core Logic for the GRPC Harness (#125) (#237)
Core Logic for the MatchFunction GRPC Harness
2019-04-18 12:16:38 -07:00
655abfbb26 Example MMF demonstrating the use of the GRPC harness (#236) 2019-04-18 10:10:06 -07:00
ac81b74fad Add Kaniko build cache (#230)
* Add Kaniko build cache - partly resolves #231
2019-04-18 00:30:02 -07:00
ba62520d9c Prevent sudo on Makefile for commands that require auth. (#225) 2019-04-17 20:58:16 -07:00
0205186e6f Remove install/yaml/ it will be moved to release artifacts. (#232)
* Remove install/yaml/ it will be moved to release artifacts.

* Add the ignore files.

* Create install/yaml/ directory for output targets.
2019-04-17 17:50:39 -07:00
ef2b1ea0a8 Implement REST proxy initializations and modified tests accordingly (#210)
This commit resolves #196 and generates swagger.json files for API visualization
2019-04-17 17:28:36 -07:00
1fe2bd4900 Add 'make presubmit' to keep generated files up to date. (#223) 2019-04-17 17:04:05 -07:00
5333ef2092 Enable cloudbuild dev site to fix local cloud build error (#219) 2019-04-17 16:17:01 -07:00
09b727b555 Remove the deprecated deployment mechanism for openmatch components (#224) 2019-04-17 15:45:38 -07:00
c542d6d1c3 Serving GRPC Harness and example MMF scaffolding (#112) (#216)
* Serving GRPC Harness and example MMF scaffolding

* Serving GRPC Harness and example MMF scaffolding

* Update logger field to add function name

* Update harness to use the TCP listener
2019-04-17 14:57:01 -07:00
8f3f7625ec Increases parallelism of the build (#203) 2019-04-17 13:07:39 -07:00
6a4f309bd5 Remove temp files. (#220) 2019-04-17 12:41:38 -07:00
26f5426b61 Disable logrus.SetReportCaller() (#222) 2019-04-17 12:26:43 -07:00
f464b0bd7b Fix port allocation race condition during tests. (#215) 2019-04-17 11:54:56 -07:00
092b7e634c Move GOPROXY=off to CI only. (#209) 2019-04-17 11:01:37 -07:00
454a3d6cca Bump required Go version because of a dependency. (#207) 2019-04-15 20:24:09 -07:00
50e3ede4b9 Remove use of GOPATH from Makefile (#208) 2019-04-15 16:19:31 -07:00
6c36145e9b Mini Match (#152) 2019-04-12 16:16:42 -07:00
47644004db Add link tests for website and removed broken links. (#202) 2019-04-12 15:26:32 -07:00
1dec4d7555 Unify gRPC server initialization (#198) 2019-04-12 12:47:27 -07:00
1c6f43a95f Add a link to the build queue. (#199) 2019-04-12 11:38:50 -07:00
0cea4ed713 Add temporary redirect site for Open Match (#200) 2019-04-12 11:24:23 -07:00
db912b2d68 Add reduced permissions for mmforc service account. (#197) 2019-04-12 10:25:19 -07:00
726b1d4063 CI with Makefile (#188) 2019-04-12 07:51:10 -07:00
468aef3835 Ignore files for Mini Match. (#194) 2019-04-11 15:16:26 -07:00
c6e257ae76 Unified gRPC server initialization (#195)
* Unified gRPC server initialization

* Fix closure and review feedback
2019-04-11 15:06:07 -07:00
8e071020fa Kubernetes YAML configs for Open Match. (#190) 2019-04-11 14:28:27 -07:00
c032e8f382 Detect sudo invocations to Makefile #164 (#187) 2019-04-11 14:09:52 -07:00
2af432c3d7 Fix build artifacts
Fix build artifacts issue #180
2019-04-11 13:23:44 -07:00
4ddceb62ee fixed bugs in py3 mmf (#193)
fix py3 mmf image
2019-04-11 06:32:59 -07:00
ddb4521444 Add license preamble to proto and dockerfiles. (#186) 2019-04-10 20:24:31 -07:00
86918e69eb Replace CURDIR with REPOSITORY ROOT #156 2019-04-10 16:32:01 -07:00
2d6db7e546 Remove manual stats that ocgrpc interceptor already records. 2019-04-10 16:21:42 -07:00
fc52ef6428 REST Implementation 2019-04-10 15:34:49 -07:00
1bfb30be6f Fix redis connection bugs and segfault in backendclient. (#178) 2019-04-10 13:27:41 -07:00
9ee341baf2 Move configs from backendclient image to ConfigMap. (#175) 2019-04-10 12:59:12 -07:00
7869e6eb81 Add opencensus metrics for Redis 2019-04-10 12:36:35 -07:00
7edca56f56 Disable php-proto building since it's missing gRPC client 2019-04-10 10:06:42 -07:00
eaedaa0265 Split up README.md and add project logo. 2019-04-10 08:26:21 -07:00
9cc8312ff5 Rename Function to MatchFunction and modify related protos (#159) 2019-04-10 08:15:40 -07:00
2f0a1ad05b updating app.yaml 2019-04-09 20:47:33 -07:00
2ff77ac90b Fix 'make create-gke-cluster' (#154)
It is missing a dash on one of the arguments, which breaks things.
2019-04-09 15:59:16 -07:00
2a3cfea505 Add base package file for godoc index and go get. 2019-04-09 14:16:54 -07:00
b8326c2a91 Fix build dependencies to build/site/ 2019-04-09 14:05:03 -07:00
ccc9d87692 Disable the PHP example during the CI build. 2019-04-09 12:01:34 -07:00
bba49f3ec4 Simplify the go package path for proto definitions 2019-04-09 11:41:29 -07:00
632157806f Remove symlinks to config files because they are mounted via ConfigMaps. 2019-04-09 11:11:36 -07:00
6e039cb797 Delete images and scripts obsoleted by Makefile. 2019-04-09 10:40:53 -07:00
8db062f7b9 Use Request/Response protos in gRPC servers. 2019-04-03 21:11:42 -07:00
f379a5eb46 Disable 'Lint: Kubernetes Configs'
It is currently failing.
2019-04-03 18:28:24 -07:00
f3160cfc5c generate install.yaml with Helm
fixed helm templates

changes in helm templates

adding redis auth to the helm chart

helm templates changes

makefile: gen-install

make set-redis-password

make gen-install

fixing indentation in Makefile

remove old redis installation

use public images in install/yaml/

remove helm chart meta from static install yaml files

fixing cloudbuild

remove helm chart meta from static install yaml files

workaround for broken om-configmap data formatting

make gen-prometheus-install

drop namespace in OM resources definitions

override default matchmaker_config at Helm chart installation

fixed Makefile after rebase

matchmaker config: use latest public images

1) Install Redis in the same namespace as Open Match; 2) make namespace and Helm release names consistent in all places
2019-04-03 13:40:13 -07:00
442a1ff013 Update dependencies and resolve issue #149 2019-04-02 20:21:14 -07:00
0fb75ab35e Delete old cloudbuild.yaml files, obsoleted by PR #98 2019-04-02 11:23:14 -07:00
6308b218cc Minimize dependency on Viper and make config read-only. 2019-04-02 07:46:18 -07:00
624ba5c018 [charts/open-match] fix mmlogicapi service selector 2019-04-01 18:10:15 -07:00
82d034f8e4 Fix dependency issues in the build. 2019-04-01 11:05:57 -07:00
97eed146da update protoc version to 3.7.1
This fixes the bug outlined here https://github.com/protocolbuffers/protobuf/issues/5875
2019-04-01 09:49:19 -07:00
6dd23ff6ad Merge pull request #135 from jeremyje/master
Merge 040wip into master.
2019-03-29 14:29:22 -07:00
03c7db7680 Merge 040wip 2019-03-28 11:12:07 -07:00
e5538401f6 Update protobuf definitions 2019-03-26 17:45:52 -07:00
eaa811f9ac Add example helm chart, replace example dashboard. 2019-03-26 17:45:28 -07:00
3b1c6b9141 Merge 2019-03-26 15:26:17 -07:00
34f9eb9bd3 Building again 2019-03-26 12:31:19 -07:00
3ad7f75fb4 Attempt to fix the build 2019-03-26 12:31:19 -07:00
78bd48118d Tweaks 2019-03-26 12:31:19 -07:00
3e71894111 Merge 2019-03-26 12:31:19 -07:00
36decb4068 Merge 2019-03-26 12:31:19 -07:00
f79b782a3a Go Modules 2019-03-26 11:14:48 -07:00
db186e55ff Move Dockfiles to build C#, Golang, PHP, and Python3 MMFs. 2019-03-26 09:54:10 -07:00
957465ce51 Remove dead code that was moved to internal/app/mmlogicapi/apisrv/ 2019-03-25 16:14:25 -07:00
478eb61589 Delete unnecessary copy of protos in frontendclient. 2019-03-25 16:13:56 -07:00
6d2a5b743b Remove executable bit from files that are not executable. 2019-03-13 09:31:24 -07:00
9c943d5a10 Fix comment 2019-03-12 22:04:42 -07:00
8293d44ee0 Fix typos in comments, set and playerindices 2019-03-12 22:04:42 -07:00
a3bd862e76 store optional Redis password inside the Secret 2019-03-12 21:52:59 -07:00
c424d5eac9 Update .gcloudignore to include .gitignore's filters so that Cloud Build packages don't upload binaries. 2019-03-11 16:29:50 +09:00
2e6f5173e0 Add Prometheus service discovery annotations to the Open Match servers. 2019-03-11 16:25:21 +09:00
ee4bba44ec Makefile for simpler development 2019-03-11 16:14:00 +09:00
8e923a4328 Use grpc error codes for responses. 2019-03-11 16:13:06 +09:00
52efa04ee6 Add RPC dashboard and instructions to add more dashboards. 2019-03-07 10:58:53 -08:00
67d4965648 Helm charts for open-match, prometheus, and grafana 2019-03-06 17:09:09 -08:00
7a7b1cb305 Open Match CI support via Cloud Build 2019-03-04 09:41:19 -08:00
377a9621ff Improve error handling of Redis open connection failures. 2019-02-27 19:35:23 -08:00
432dd5a504 Consolidate Ctrl+Break handling into its own Go package. 2019-02-27 17:52:58 +01:00
7446f5b1eb Move out Ctrl+Break wait signal to its own package. 2019-02-27 17:52:58 +01:00
15ea999628 Remove init() methods from OM servers since they aren't needed. 2019-02-27 08:58:39 +01:00
b5367ea3aa Add config/ in the search path for configuration so that PWD/config can be used as a ConfigMap mount path. 2019-02-25 16:49:35 -08:00
e022c02cb6 golang mmf serving harness 2019-02-25 04:54:02 -05:00
a13455d5b0 Move application logic from cmd/ to internal/app/ 2019-02-24 13:56:48 +01:00
16741409e7 Cleaner builds using svn for github 2019-02-19 09:24:50 -05:00
d7e8f8b3fa Testing 2019-02-19 07:30:26 -05:00
8c97c8f141 Testing2 2019-02-19 07:26:11 -05:00
6a8755a13d Testing 2019-02-19 07:24:10 -05:00
4ed6d275a3 remove player from ignorelists on frontend.DeletePlayer call 2019-02-19 20:01:29 +09:00
cb49eb8646 Merge remote-tracking branch 'origin/calebatwd/knative-rest-mmf' into 040wip 2019-02-16 04:01:01 -05:00
a7458dabf2 Fix test/example paths 2019-02-14 10:56:33 +09:00
5856b7d873 Merge branch '040wip' of https://github.com/GoogleCloudPlatform/open-match into 040wip 2019-02-11 01:23:06 -05:00
7733824c21 Remove matchmaking config file from base image 2019-02-11 01:22:23 -05:00
f1d261044b Add function port to config 2019-02-11 01:21:28 -05:00
95820431ab Update dev instructions 2019-02-11 01:20:55 -05:00
0002ecbdb2 Review feedback. 2019-02-09 15:28:48 +09:00
2eb51b5270 Fix build and test breakages 2019-02-09 15:28:48 +09:00
1847f79571 Convert JSON k8s deployment configs to YAML. 2019-02-09 15:17:22 +09:00
58ff12f3f8 Add stackdriver format support via TV4/logrus-stackdriver-formatter. Simply set format in config to stackdriver 2019-02-09 15:14:00 +09:00
b0b7b4bd15 Update git ignore to ignore goland ide files 2019-02-09 15:09:00 +09:00
f3f1f36099 Comment type 2019-02-08 14:21:36 -08:00
f8cfb1b90f Add rest call support to job scheduling. This is a prototype implementation to support knative experimentation. 2019-02-08 14:20:29 -08:00
393e1d6de2 added configurable backoff to MatchObject and Player watchers 2019-02-08 16:19:52 +09:00
a11556433b Merge branch 'master' into 040wip 2019-02-08 01:48:54 -05:00
3ee9c05db7 Merge upstream changes 2019-02-08 01:47:43 -05:00
de7ba2db6b added demo attr to player indices 2019-02-03 20:17:13 -08:00
8393454158 fixes for configmap 2019-02-03 20:17:13 -08:00
6b93ac7663 configmap for matchmaker config 2019-02-03 20:17:13 -08:00
fe2410e9d8 PHP MMF: move cfg values to env vars 2019-02-03 20:17:13 -08:00
d8ecf1c439 doc update 2019-02-03 20:17:13 -08:00
8577f6bd4d Move cfg values to env vars for MMFs 2019-02-03 20:17:13 -08:00
470be06d16 fixed set.Difference() 2019-01-29 22:38:18 -08:00
c6e4dae79b fix google cloud knative url 2019-01-25 11:38:46 -08:00
23f83eddd1 mmlogic GetPlayerPool bugfix 2019-01-23 19:57:36 -05:00
dd794fd004 py3 mmf empty pools bugfix 2019-01-23 19:57:16 -05:00
f234433e33 write to error if all pools are empty in py3 mmf 2019-01-23 19:57:16 -05:00
d52773543d check for empty pools in py3 mmf 2019-01-23 19:57:16 -05:00
bd4ab0b530 mmlogic GetPlayerPool bugfix 2019-01-23 14:18:00 +03:00
6b9cd11be3 fix py3 mmf 2019-01-16 18:01:10 +03:00
1443bd1e80 PHP MMF: move cfg values to env vars 2019-01-16 13:41:44 +03:00
3fd8081dc5 doc update 2019-01-15 11:58:42 -05:00
dda949a6a4 Move cfg values to env vars for MMFs 2019-01-15 11:25:02 -05:00
128f0a2941 Merge branch 'master' of https://github.com/GoogleCloudPlatform/open-match 2019-01-15 09:42:01 -05:00
5f8a57398a Fix cloud build issue caused by 5f827b5c7c81c79ef9341cbebb51880f74b78a35 2019-01-15 09:41:38 -05:00
327d64611b This time with working hyperlink 2019-01-14 23:44:10 +09:00
5b4cdce610 Bump version number 2019-01-14 23:43:11 +09:00
56e08e82d4 Revert accidental file type change 2019-01-14 09:32:13 -05:00
2df027c9f6 Bold release numbers 2019-01-10 00:28:31 -05:00
913af84931 Use public repo URL 2019-01-09 02:18:53 -05:00
de6064f9fd Use public repo URL 2019-01-09 02:18:22 -05:00
867c55a409 Fix registry URL and add symlink issue 2019-01-09 02:15:11 -05:00
36420be2ce Revert accidental removal of symlink 2019-01-09 02:14:32 -05:00
16e9dda64a Bugfix for no commandline args 2019-01-09 02:14:07 -05:00
1ef9a896bf Revert accidental commit of empty file 2019-01-09 02:13:30 -05:00
75f2b84ded Up default timeout 2019-01-09 02:03:47 -05:00
2268baf1ba revert accidential commit of local change 2019-01-09 02:00:36 -05:00
9e43d989ea Remove debug sleep command 2019-01-09 00:10:47 -05:00
869725baee Bump k8s version 2019-01-08 23:56:07 -05:00
ae26ac3cd3 Merge remote-tracking branch 'origin/master' into 030wip 2019-01-08 23:41:55 -05:00
826af77396 Point to public registry and update tag 2019-01-08 23:37:38 -05:00
294d03e18b Roadmap 2019-01-08 22:39:08 -05:00
b27116aedd 030 RC2 2019-01-08 02:19:53 -05:00
074c0584f5 030 RC1 issue thread updates https://github.com/GoogleCloudPlatform/open-match/pull/55 2019-01-07 23:35:42 -05:00
210e00703a production guide now has placeholder notes, low hanging fruit 2019-01-07 23:35:14 -05:00
3ffbddbdd8 Updates to add optional TTL to redis objects 2019-01-05 23:37:38 -05:00
5f827b5c7c doesn't work 2019-01-05 23:01:33 -05:00
a161e6dba9 030 WIP first pass 2018-12-30 05:31:49 -05:00
7e70683d9b fix broken sed command 2018-12-30 04:34:27 -05:00
38bd94c078 Merge NoFr1ends commit 6a5dc1c 2018-12-30 04:16:48 -05:00
83366498d3 Update Docs 2018-12-30 03:45:39 -05:00
929e089e4d rename api call 2018-12-30 03:35:25 -05:00
a6b56b19d2 Merge branch to address issue #42 2018-12-28 04:01:59 -05:00
c2b6fdc198 Updates to FEClient and protos 2018-12-28 02:48:03 -05:00
43a4f046f0 Update config 2018-12-27 03:14:40 -05:00
b79bc2591c Remove references to connstring 2018-12-27 03:07:26 -05:00
61198fd168 No unused code 2018-12-27 03:04:18 -05:00
c1dd3835fe Updated logging 2018-12-27 02:55:16 -05:00
f3c9e87653 updates to documentation and builds 2018-12-27 02:28:43 -05:00
0064116c34 Further deletion and fix indexing for empty fields 2018-12-27 02:09:20 -05:00
298fe18f29 Updates to player deletion logic, metadata indices 2018-12-27 01:27:39 -05:00
6c539ab2a4 Remove manual filenames in logs 2018-12-26 07:43:54 -05:00
b6c59a7a0a Player watcher for FEAPI brought over from Doodle 2018-12-26 07:29:28 -05:00
f0536cedde Merge Ilya's updates 2018-12-26 00:18:00 -05:00
48fa4ba962 Update Redis HA details 2018-12-25 23:58:54 -05:00
39ff99b65e rename 'redis-sentinel' to just 'redis' 2018-12-26 13:51:24 +09:00
78c7b3b949 redis failover deployment 2018-12-26 13:51:24 +09:00
6a5dc1c508 Fix typo in development guide 2018-12-26 13:49:54 +09:00
9f84ec9bc9 First pass. Works but hacky. 2018-12-25 23:47:30 -05:00
e48b7db56f #51 Fix parsing of empty matchobject fields 2018-12-26 13:45:40 +09:00
bffd54727c Merge branch 'udptest' into test_agones 2018-12-19 02:59:04 -05:00
ab90f5f6e0 got udp test working 2018-12-19 02:56:20 -05:00
632415c746 simple udp client & server to integrate with agones 2018-12-18 23:58:02 +03:00
0882c63eb1 Update messages; more redis code sequestered to redis module 2018-12-16 08:12:42 -05:00
ee6716c60e Merge PL 47 2018-12-15 23:56:35 -05:00
bb5ad8a596 Merge 951bc8509d5eb8fceb138135c001c6a7b7f9bb25 into 275fa2d125e91fd25981124387f6388431f73874 2018-12-15 19:32:28 +00:00
951bc8509d Remove strings import as it's no longer used 2018-12-15 14:11:31 -05:00
ab8cd21633 Update to use Xid instead of UUID. 2018-12-15 14:11:05 -05:00
721cd2f7ae Still needs make file or the like and updated instructions 2018-12-10 14:05:00 +09:00
13cd1da631 Merge remote-tracking branch 'origin/json-logging' into feupdate 2018-12-06 23:28:35 -05:00
275fa2d125 Awkward wording 2018-12-07 13:17:39 +09:00
4a8e018599 Fix merge conflict 2018-12-06 22:04:52 -05:00
c1b5d44947 Update current version number 2018-12-06 22:01:14 -05:00
ae9db9fae8 Merge remote-tracking branch 'origin/master' 2018-12-06 21:56:43 -05:00
104fbd19cd Header level tweaks 2018-12-06 02:54:40 -05:00
3b2571fced Doc updates for 0.2.0 2018-12-06 02:53:16 -05:00
486c64798b Merge tag '020rc2' into feupdate 2018-12-06 02:14:58 -05:00
3fb17c5f22 Merge remote-tracking branch 'origin/master' into 020rc2 2018-12-06 02:12:55 -05:00
3f42e3d986 Finalizing 0.2.0 updates to dev doc 2018-12-06 01:16:26 -05:00
0c74debbb3 Updated docs for 0.2.0 2018-12-05 03:59:57 -05:00
1854ee0ba1 Fix formatting 2018-12-04 01:07:31 -05:00
99d9d7e2b5 Update for 0.2.0 Release 2018-12-02 21:48:48 -05:00
e286435e19 0.2.0 RC2 release notes 2018-11-28 22:40:07 -05:00
52f9e2810f WIP indexing 2018-11-28 04:10:08 -05:00
db60d7ac5f Merge from 0.2.0 2018-11-28 02:23:26 -05:00
b17dccac3b Merge manual golang MMF & README.md updates 2018-11-27 01:57:14 -05:00
b9bb0b1aeb Tested 2018-11-27 01:55:48 -05:00
a6f2edbbae Fully working 2018-11-27 00:12:36 -05:00
3fcedbf13b Remove enum status states. No justification yet. 2018-11-26 17:42:08 -08:00
274edaae2e Grpc code for calling functions in mmforc 2018-11-26 17:40:25 -08:00
8ed865d300 Initial function messages plus protoc regen 2018-11-26 17:05:42 -08:00
55db5c5ba3 Writing proposal 2018-11-25 22:51:08 -05:00
b4f696484f Move set operations to module for use in example MMF 2018-11-24 03:46:07 -05:00
12935d2cab Rename 2018-11-24 02:33:58 -05:00
a0cff79878 Parsing filters now 2018-11-23 10:29:17 -05:00
7a3c5937f2 Updates to simple mmf for 020 2018-11-23 10:03:48 -05:00
f430720d2f example of MMF done in PHP 2018-11-22 13:24:55 +09:00
34010986f7 Caleb fixes from https://github.com/GoogleCloudPlatform/open-match/pull/39 2018-11-21 07:46:08 -05:00
d8a8d16bfc Iterate over attributes, not properties. Thanks Ilya 2018-11-21 07:45:14 -05:00
243f53336c Update backendclient build directions, thanks Ilya 2018-11-20 08:29:24 -05:00
d188be60c8 Fix for new pb module, thanks Ilya 2018-11-20 08:21:09 -05:00
f1541a8cee Remove development config, thanks Ilya 2018-11-20 08:11:27 -05:00
cd1c4c768e ReDucTor caught a typo 2018-11-20 00:51:37 -05:00
967b6cc695 Update dev doc to include MMLogic API 2018-11-20 00:19:10 -05:00
906c0861c7 Make mmlogic deploy json match other APIs 2018-11-20 00:17:47 -05:00
4e0bb5c07d Add DGS (Dedicated Game Server) to glossary 2018-11-20 14:10:41 +09:00
b57dd3e668 https://github.com/GoogleCloudPlatform/open-match/pull/36 2018-11-20 00:09:59 -05:00
b2897ca159 Remove unused file 2018-11-19 06:44:18 -05:00
041f9d7409 v0.2.0 RC1 2018-11-19 06:08:21 -05:00
79282dac10 updated mmf example comments 2018-11-18 01:45:30 -05:00
8142c6efc4 Code cleanup 2018-11-18 01:18:26 -05:00
5f8b0edcdd Code cleanup and comments 2018-11-17 23:04:44 -05:00
326dd6c6dd Add logging config to support json and level selection for logrus 2018-11-17 16:11:33 -08:00
4e4be7049e Updated documenation/comments 2018-11-16 05:23:30 -05:00
008ac9c516 Set to skip the evaluator; WIP 2018-11-16 02:00:42 -05:00
43f9548483 partially working 2018-11-15 07:56:07 -05:00
27b7591770 Remove unused files 2018-11-13 02:59:04 -05:00
acbd4035db First pass at python3 harness 2018-11-13 02:56:48 -05:00
cd98a68628 Merge from master 2018-11-12 21:57:45 -05:00
038957b937 WIP branch update. Need to finish pb move & test 2018-11-12 08:25:20 -05:00
d36945b9af Progress on the MMLogic API 2018-11-09 08:11:45 -05:00
c7acbf4481 Before reducing playerpool rosters to 1 2018-11-07 21:43:42 -05:00
1ec8a03636 Working proto 2018-11-07 02:35:02 -05:00
a42ecc0cd9 Updated proto files 2018-11-07 02:32:51 -05:00
0a2ac0e7e9 Before switching JsonFilterSet to PlayerPool 2018-11-06 23:49:05 -05:00
3e1c696d80 mmlapi updates 2018-11-06 01:17:39 -05:00
008b435921 Updated example python3 MMF 2018-10-23 00:18:08 -04:00
5b3a53f48e Progress on mmlogicapi 2018-10-22 04:48:54 -04:00
eac217a85a Ignorelist WIP 2018-10-18 00:34:57 -04:00
e0bf6ce8af Fix path to .proto in comment 2018-10-14 23:21:44 -04:00
19c61f4726 Merge branch 'master' into mmfapi 2018-10-14 09:07:14 -04:00
3bc443ad99 MMF updates for k8s job metadata 2018-10-14 09:06:45 -04:00
308235936d updates for ignorelists 2018-10-14 04:13:12 -04:00
1226a8dfc2 Merge branch 'master' into blacklist 2018-10-09 09:34:18 -04:00
86c1d4c9ee Adding matched player blacklist 2018-09-17 21:52:14 -04:00
571 changed files with 62253 additions and 9311 deletions

137
.dockerignore Normal file

@@ -0,0 +1,137 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.git
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.nupkg
*.so
*.dylib
# Test binary, build with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# vim swap files
*swp
*swo
*~
# Ping data files
*.ping
*.pings
*.population*
*.percent
*.cities
populations
# Discarded code snippets
build.sh
*-fast.yaml
detritus/
# Dotnet Core ignores
*.swp
*.pdb
*.deps.json
*.*~
project.lock.json
.DS_Store
*.pyc
nupkg/
# Visual Studio Code
.vscode
# User-specific files
*.suo
*.user
*.userosscache
*.sln.docstates
# Build results
[Dd]ebug/
[Dd]ebugPublic/
[Rr]elease/
[Rr]eleases/
x64/
x86/
build/
bld/
[Bb]in/
[Oo]bj/
[Oo]ut/
msbuild.log
msbuild.err
msbuild.wrn
csharp/OpenMatch/obj
Chart.lock
# Visual Studio 2015
.vs/
# Goland
.idea/
# Nodejs files placed when building Hugo, ok to allow if we actually start using Nodejs.
package.json
package-lock.json
site/resources/_gen/
# Node Modules
node_modules/
# Install YAML files, Helm is the source of truth for configuration.
install/yaml/
# Temp Directories
tmp/
# Terraform context
.terraform
*.tfstate
*.tfstate.backup
# Credential Files
creds.json
# Open Match Binaries
cmd/backend/backend
cmd/frontend/frontend
cmd/query/query
cmd/synchronizer/synchronizer
cmd/minimatch/minimatch
cmd/swaggerui/swaggerui
tools/certgen/certgen
examples/demo/demo
examples/functions/golang/soloduel/soloduel
test/evaluator/evaluator
test/matchfunction/matchfunction
tools/reaper/reaper
# Open Match Build Directory
build/
# Secrets Directories
install/helm/open-match/secrets/
# Helm tar charts
install/helm/open-match/charts/

29
.gcloudignore Normal file

@@ -0,0 +1,29 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file specifies files that are *not* uploaded to Google Cloud Platform
# using gcloud. It follows the same syntax as .gitignore, with the addition of
# "#!include" directives (which insert the entries of the given .gitignore-style
# file at that point).
#
# For more information, run:
# $ gcloud topic gcloudignore
#
.gcloudignore
# If you would like to upload your .git directory, .gitignore file or files
# from your .gitignore file, remove the corresponding line
# below:
.git
.gitignore
#!include:.gitignore

25
.github/ISSUE_TEMPLATE/apichange.md vendored Normal file

@@ -0,0 +1,25 @@
---
name: Breaking API change
about: Details of a breaking API change proposal.
title: 'API change: <>'
labels: breaking api change
assignees: ''
---
## Overview
<High level description of this change>
## Motivation
<What is the primary motivation for this API change>
## Impact
<What usage does this impact? Add details here such that a consumer of Open
Match API can clearly tell if this will impact them>
## Change Proto
<Add snippet of the proposed change proto>

30
.github/ISSUE_TEMPLATE/bugreport.md vendored Normal file

@@ -0,0 +1,30 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: kind/bug
assignees: ''
---
<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately via
-->
**What happened**:
**What you expected to happen**:
**How to reproduce it (as minimally and precisely as possible)**:
**Anything else we need to know?**:
**Output of `kubectl version`**:
**Cloud Provider/Platform (AKS, GKE, Minikube etc.)**:
**Open Match Release Version**:
**Install Method(yaml/helm):**:


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: kind/feature
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

181
.github/ISSUE_TEMPLATE/release.md vendored Normal file

@@ -0,0 +1,181 @@
---
name: Publish a Release
about: Instructions and checklist for creating a release.
title: 'Release X.Y.Z-rc.N'
labels: kind/release
assignees: ''
---
# Open Match Release Process
Follow these instructions to create an Open Match release. The output of the
release process is new images and new configuration.
## Getting setup
**NOTE: The instructions below are NOT strictly copy-pastable and assume 0.5**
**release. Please update the version number for your commands.**
The Git flow for pushing a new release is similar to the development process
but there are some small differences.
### 1. Clone Repository
```shell
# Clone your fork of the Open Match repository.
git clone git@github.com:afeddersen/open-match.git
# Change directory to the git repository.
cd open-match
# Add a remote; you'll be pushing to this.
git remote add upstream https://github.com/googleforgames/open-match.git
```
### 2. Release Branch
If you're creating the first release candidate of a version, e.g. `0.5.0-rc.1`,
then you'll need to create the release branch.
```shell
# Create a local release branch.
git checkout -b release-0.5 upstream/master
# Push the branch upstream.
git push upstream release-0.5
```
Otherwise, a `release-0.5` branch should already exist, so run:
```shell
# Checkout the release branch.
git checkout -b release-0.5 upstream/release-0.5
```
**NOTE: The branch name must be in the format, `release-X.Y` otherwise**
**some artifacts will not be pushed.**
## Releases & Versions
Open Match uses Semantic Versioning 2.0.0. If you're not familiar please
see the documentation - [https://semver.org/](https://semver.org/).
Full Release / Stable Release:
* The final software product. Stable, reliable, etc...
* Example: 1.0.0, 1.1.0
Release Candidate (RC):
* A release candidate (RC) is a version with the potential to be the final
product, but one that hasn't yet been validated by automated and/or manual tests.
* Example: 1.0.0-rc.1
Hot Fixes:
* Code developed to correct a major software bug or fault
that's been discovered after the full release.
* Example: 1.0.1
Preview:
* Rare: a one-off release cut from the master branch to provide early access
to APIs or some other major change.
* **NOTE: There's no branch for this release.**
* Example: 0.5-preview.1
**NOTE: Semantic versioning is enforced by `go mod`. A non-compliant version**
**tag will cause `go get` to break for users.**
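Since `go mod` enforces SemVer, it can help to sanity-check a candidate tag locally before pushing. A minimal sketch; the regex is a simplified approximation of the SemVer grammar, not the exact rule `go get` applies:

```shell
#!/bin/sh
# Rough pre-push check that a tag looks like a Go-module-compatible
# semantic version: vMAJOR.MINOR.PATCH plus an optional -rc.N style suffix.
check_tag() {
  if echo "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.-]+)?$'; then
    echo "ok: $1"
  else
    echo "BAD: $1"
  fi
}

check_tag v0.5.0       # ok: v0.5.0
check_tag v0.5.0-rc.1  # ok: v0.5.0-rc.1
check_tag 0.5.0        # BAD: 0.5.0
```

The last tag fails because it is missing the leading `v` that Go module tags require.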
# Detailed Instructions
## Find and replace
Below this point you will see {version} used as a placeholder for future
releases. Find {version} and replace it with the current release (e.g. 0.5.0).
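The substitution itself can be scripted with `sed`. A hedged sketch against a throwaway file; the path and version below are purely illustrative assumptions, not part of the actual release tooling:

```shell
#!/bin/sh
# Replace the {version} placeholder in a scratch file; the file path and
# version value are illustrative, not the real set of release files.
RELEASE_VERSION="0.5.0"
printf 'Release {version} is out.\n' > /tmp/release-notes.md
# -i.bak keeps a backup and works with both GNU and BSD sed.
sed -i.bak "s/{version}/${RELEASE_VERSION}/g" /tmp/release-notes.md
cat /tmp/release-notes.md  # Release 0.5.0 is out.
```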
## Create a release branch in the upstream open-match repository
**Note: This step is performed by the person who starts the release. It is
only required once.**
- [ ] Create the branch in the **upstream** repository. It should be named
release-X.Y. Example: release-0.5. At this point there's effectively a code
freeze for this version and all work on master will be included in a future
version. If you're on the branch that you created in the *getting setup*
section above you should be able to push upstream.
```shell
git push origin release-0.5
```
- [ ] Announce a PR freeze on release-X.Y branch on [open-match-discuss@](mailing-list-post).
- [ ] Open the [`Makefile`](makefile-version) and change the `BASE_VERSION` entry.
- [ ] Open the [`install/helm/open-match/Chart.yaml`](om-chart-yaml-version) and change the `appVersion` and `version` entries.
- [ ] Open the [`install/helm/open-match/values.yaml`](om-values-yaml-version) and change the `tag` entries.
- [ ] Open the [`cloudbuild.yaml`] and change the `_OM_VERSION` entry.
- [ ] There might be additional references to the old version; be careful not to change places that keep the old version for historical purposes.
- [ ] Run `make release`
- [ ] Run `make api/api.md` in open-match repo to update the auto-generated API references in open-match-docs repo.
- [ ] Use the files under the `build/release/` directory for the Open Match installation guide. Make sure the artifacts work as expected - these are the artifacts that will be published to the GCS bucket and used in our release assets.
- [ ] Create a PR with the changes, include the release candidate name, and point it to the release branch.
- [ ] Go to [open-match-build](https://pantheon.corp.google.com/cloud-build/triggers?project=open-match-build) and update all *post submit* triggers' `_GCB_LATEST_VERSION` value to the `X.Y` of the release. This value should only increase as it's used to determine the latest stable version.
- [ ] Merge your changes once the PR is approved.
## Create a release branch in the upstream open-match-docs repository
- [ ] Open [`Makefile`](makefile-version) and change the `BASE_VERSION` entry.
- [ ] Open [`cloudbuild.yaml`] and change the `_OM_VERSION` entry.
- [ ] Open [`site/config.toml`] and change the `release_version` entry.
- [ ] Open [`site/static/swaggerui/config.json`] and change the `api/VERSION/...` entries.
- [ ] Create a PR with the changes, include the release candidate name, and point it to the release branch.
## Complete Milestone
**Note: This step is performed by the person who starts the release. It is
only required once.**
- [ ] Create the next [version milestone](https://github.com/googleforgames/open-match/milestones) and use [semantic versioning](https://semver.org/) when naming it to be consistent with the [Go community](https://blog.golang.org/versioning-proposal).
- [ ] Create a *draft* [release](https://github.com/googleforgames/open-match/releases). Note that github has both "Pre-release" and "draft" as different concepts for a release. Until the release is finalized, only use "Save draft", and do not use "Publish release".
- [ ] Use the [release template](https://github.com/googleforgames/open-match/blob/master/docs/governance/templates/release.md)
- [ ] `Tag` = v{version}. Example: v0.5.0. Append -rc.# for release candidates. Example: v0.5.0-rc.1.
- [ ] `Target` = release-X.Y. Example: release-0.5.
- [ ] `Release Title` = `Tag`
- [ ] `Write` section will contain the contents from the [release template](https://github.com/googleforgames/open-match/blob/master/docs/governance/templates/release.md).
- [ ] Add the milestone to all PRs and issues that were merged since the last milestone. Look at the [releases page](https://github.com/googleforgames/open-match/releases) and look for the "X commits to master since this release" for the diff.
- [ ] Review all [milestone-less closed issues](https://github.com/googleforgames/open-match/issues?q=is%3Aissue+is%3Aclosed+no%3Amilestone) and assign the appropriate milestone.
- [ ] Review all [issues in milestone](https://github.com/googleforgames/open-match/milestones) for proper [labels](https://github.com/googleforgames/open-match/labels) (ex: area/build).
- [ ] Review all [milestone-less closed PRs](https://github.com/googleforgames/open-match/pulls?q=is%3Apr+is%3Aclosed+no%3Amilestone) and assign the appropriate milestone.
- [ ] Review all [PRs in milestone](https://github.com/googleforgames/open-match/milestones) for proper [labels](https://github.com/googleforgames/open-match/labels) (ex: area/build).
- [ ] View all open entries in milestone and move them to a future milestone if they aren't getting closed in time. https://github.com/googleforgames/open-match/milestones/v{version}
- [ ] Review all closed PRs against the milestone. Put the user visible changes into the release notes using the suggested format. https://github.com/googleforgames/open-match/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aclosed+is%3Amerged+milestone%3Av{version}
- [ ] Review all closed issues against the milestone. Put the user visible changes into the release notes using the suggested format. https://github.com/googleforgames/open-match/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aclosed+milestone%3Av{version}
- [ ] Verify the [milestone](https://github.com/googleforgames/open-match/milestones) is effectively 100% at this point with the exception of the release issue itself.
## Build Artifacts
- [ ] Go to the History section and find the "Post Submit" build of the merged commit that's running. Wait for it to go green. If it's red, fix the error and repeat this section. Take note of the Docker image version tag for the next step. Example: 0.5.0-a4706cb.
- [ ] Run `./docs/governance/templates/release.sh {source version tag} {version}` to copy the images to open-match-public-images.
- [ ] If this is a new minor version in the newest major version then run `./docs/governance/templates/release.sh {source version tag} latest`.
- [ ] Copy the files from `build/release/` generated from `make release` to the release draft you created. You can drag and drop the files using the Github UI.
- [ ] Update [Slack invitation link](https://slack.com/help/articles/201330256-invite-new-members-to-your-workspace#share-an-invite-link) in [open-match.dev](https://open-match.dev/site/docs/contribute/#get-involved).
- [ ] Test the Open Match installation under GKE and Minikube environments using YAML files and Helm. Follow the [First Match](https://development.open-match.dev/site/docs/getting-started/first_match/) guide, run `make proxy-demo`, and open `localhost:51507` to make sure everything works.
- [ ] Minikube: Run `make create-mini-cluster` to create a local cluster with latest Kubernetes API version.
- [ ] GKE: Run `make create-gke-cluster` to create a GKE cluster.
- [ ] Helm: Run `helm install open-match -n open-match open-match/open-match`
- [ ] Update usage requirements in the Installation doc - e.g. supported minikube version, kubectl version, golang version, etc.
## Finalize
- [ ] Save the release as a draft.
- [ ] Circulate the draft release to active contributors. Where reasonable, get everyone's ok on the release notes before continuing.
- [ ] Publish the [Release](om-release) in Github. This will notify repository watchers.
## Announce
- [ ] Send an email to the [mailing list](mailing-list-post) with the release details (copy-paste the release blog post)
- [ ] Send a chat on the [Slack channel](om-slack). "Open Match {version} has been released! Check it out at {release url}."
[om-slack]: https://open-match.slack.com/
[mailing-list-post]: https://groups.google.com/forum/#!newtopic/open-match-discuss
[release-template]: https://github.com/googleforgames/open-match/blob/master/docs/governance/templates/release.md
[makefile-version]: https://github.com/googleforgames/open-match/blob/master/Makefile#L53
[om-chart-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match/Chart.yaml#L16
[om-values-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match/values.yaml#L16
[om-release]: https://github.com/googleforgames/open-match/releases/new
[readme-deploy]: https://github.com/googleforgames/open-match/blob/master/README.md#deploy-to-kubernetes

.github/pull_request_template.md vendored Normal file

@ -0,0 +1,16 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
If this is your first time, please read our contributor guidelines: https://github.com/googleforgames/open-match/blob/master/CONTRIBUTING.md and developer guide https://github.com/googleforgames/open-match/blob/master/docs/development.md
-->
**What this PR does / Why we need it**:
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Closes #<issue number>`, or `Closes (paste link of issue)`.
-->
Closes #
**Special notes for your reviewer**:

.gitignore vendored

@ -1,7 +1,22 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.nupkg
*.so
*.dylib
@ -26,9 +41,13 @@ populations
# Discarded code snippets
build.sh
*-fast.yaml
detritus/
# Dotnet Core ignores
*.swp
*.pdb
*.deps.json
*.*~
project.lock.json
.DS_Store
@ -59,6 +78,53 @@ bld/
msbuild.log
msbuild.err
msbuild.wrn
csharp/OpenMatch/obj
Chart.lock
# Visual Studio 2015
.vs/
# Goland
.idea/
# Nodejs files placed when building Hugo, ok to allow if we actually start using Nodejs.
package.json
package-lock.json
site/resources/_gen/
# Node Modules
node_modules/
# Install YAML files
install/yaml/
# Temp Directories
tmp/
# Terraform context
.terraform
*.tfstate.backup
# Credential Files
creds.json
# Open Match Binaries
cmd/backend/backend
cmd/frontend/frontend
cmd/query/query
cmd/synchronizer/synchronizer
cmd/minimatch/minimatch
cmd/swaggerui/swaggerui
tools/certgen/certgen
examples/demo/demo
examples/functions/golang/soloduel/soloduel
test/evaluator/evaluator
test/matchfunction/matchfunction
tools/reaper/reaper
# Secrets Directories
install/helm/open-match/secrets/
# Helm tar charts
install/helm/open-match/charts/

.golangci.yaml Normal file

@ -0,0 +1,225 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file contains all available configuration options
# with their default values.
# https://github.com/golangci/golangci-lint#config-file
service:
golangci-lint-version: 1.18.0
# options for analysis running
run:
# default concurrency is the available CPU count
concurrency: 4
# timeout for analysis, e.g. 30s, 5m, default is 1m
deadline: 5m
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
# include test files or not, default is true
tests: true
# list of build tags, all linters use it. Default is empty list.
build-tags:
# which dirs to skip: they won't be analyzed;
# can use regexp here: generated.*, regexp is applied on full path;
# default value is empty list, but next dirs are always skipped independently
# from this option's value:
# vendor$, third_party$, testdata$, examples$, Godeps$, builtin$
skip-dirs:
# which files to skip: they will be analyzed, but issues from them
# won't be reported. Default value is empty list, but there is
# no need to include all autogenerated files, we confidently recognize
# autogenerated files. If it's not please let us know.
skip-files: '.*\.gw\.go'
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle, default is "colored-line-number"
format: colored-line-number
# print lines of code with issue, default is true
print-issued-lines: true
# print linter name in the end of issue text, default is true
print-linter-name: true
# all available settings of specific linters
linters-settings:
errcheck:
# report about not checking of errors in type assertions: `a := b.(MyStruct)`;
# default is false: such cases aren't reported by default.
check-type-assertions: true
# report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
# default is false: such cases aren't reported by default.
check-blank: true
govet:
# report about shadowed variables
check-shadowing: true
# settings per analyzer
settings:
printf: # analyzer name, run `go tool vet help` to see all analyzers
funcs: # run `go tool vet help printf` to see available settings for `printf` analyzer
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Infof
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Warnf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Errorf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Fatalf
golint:
# minimal confidence for issues, default is 0.8
min-confidence: 0.8
gofmt:
# simplify code: gofmt with `-s` option, true by default
simplify: true
gocyclo:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
goconst:
# minimal length of string constant, 3 by default
min-len: 3
# minimal occurrences count to trigger, 3 by default
min-occurrences: 3
depguard:
list-type: blacklist
include-go-root: false
packages:
- github.com/davecgh/go-spew/spew
misspell:
# Correct spellings using locale preferences for US or UK.
# Default is to use a neutral variety of English.
# Setting locale to US will correct the British spelling of 'colour' to 'color'.
locale: US
ignore-words:
- someword
lll:
# max line length, lines longer will be reported. Default is 120.
# '\t' is counted as 1 character by default, and can be changed with the tab-width option
line-length: 120
# tab width in spaces. Default to 1.
tab-width: 1
unused:
# treat code as a program (not a library) and report unused exported identifiers; default is false.
# XXX: if you enable this setting, unused will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find funcs usages. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find external interfaces. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
nakedret:
# make an issue if func has more lines of code than this setting and it has naked returns; default is 30
max-func-lines: 30
prealloc:
# XXX: we don't recommend using this linter before doing performance profiling.
# For most programs usage of prealloc will be a premature optimization.
# Report preallocation suggestions only on simple loops that have no returns/breaks/continues/gotos in them.
# True by default.
simple: true
range-loops: true # Report preallocation suggestions on range loops, true by default
for-loops: false # Report preallocation suggestions on for loops, false by default
gocritic:
# Which checks should be enabled; can't be combined with 'disabled-checks';
# See https://go-critic.github.io/overview#checks-overview
# To check which checks are enabled run `GL_DEBUG=gocritic golangci-lint run`
# By default list of stable checks is used.
# enabled-checks:
# - rangeValCopy
# Enable multiple checks by tags, run `GL_DEBUG=gocritic golangci-lint` run to see all tags and checks.
# Empty list by default. See https://github.com/go-critic/go-critic#usage -> section "Tags".
enabled-tags:
- performance
settings: # settings passed to gocritic
captLocal: # must be valid enabled check name
paramsOnly: true
rangeValCopy:
sizeThreshold: 32
linters:
enable-all: true
disable:
- dupl
- funlen
- gochecknoglobals
- goconst
- gocyclo
- gosec
- interfacer # deprecated - "A tool that suggests interfaces is prone to bad suggestions"
- lll
#linters:
# enable-all: true
issues:
# List of regexps of issue texts to exclude, empty list by default.
# But independently from this option we use default exclude patterns,
# it can be disabled by `exclude-use-default: false`. To list all
# excluded by default patterns execute `golangci-lint run --help`
exclude:
- abcdef
# Excluding configuration per-path, per-linter, per-text and per-source
exclude-rules:
# Exclude some linters from running on test files
- path: _test\.go
linters:
- errcheck
- bodyclose
# Exclude known linters from partially hard-vendored code,
# which is impossible to exclude via "nolint" comments.
- path: internal/hmac/
text: "weak cryptographic primitive"
linters:
- gosec
# Exclude some staticcheck messages
- linters:
- staticcheck
text: "SA9003:"
# Exclude lll issues for long lines with go:generate
- linters:
- lll
source: "^//go:generate "
# Independently from option `exclude` we use default exclude patterns,
# it can be disabled by this option. To list all
# excluded by default patterns execute `golangci-lint run --help`.
# Default value for this option is true.
exclude-use-default: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0


@ -1,14 +0,0 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY cmd/backendapi cmd/backendapi
COPY api/protobuf-spec/backend.pb.go cmd/backendapi/proto/
COPY config config
COPY internal internal
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi/backendapi .
ENTRYPOINT ["./backendapi"]

Dockerfile.base-build Normal file

@ -0,0 +1,28 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# When updating Go version, update Dockerfile.ci, Dockerfile.base-build, and go.mod
FROM golang:1.14.0
ENV GO111MODULE=on
WORKDIR /go/src/open-match.dev/open-match
# First copy only the go.sum and go.mod, then download dependencies. Docker
# caching is [in]validated by changes to the input files. So when the dependencies
# for the project don't change, the previous image layer can be re-used. go.sum
# is included as its hashes verify that the expected files are downloaded.
COPY go.sum go.mod ./
RUN go mod download
COPY . .

Dockerfile.ci Normal file

@ -0,0 +1,57 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM debian
RUN apt-get update
RUN apt-get install -y -qq git make python3 virtualenv curl sudo unzip apt-transport-https ca-certificates curl software-properties-common gnupg2
# Docker
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
RUN sudo apt-key fingerprint 0EBFCD88
RUN sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
stretch \
stable"
RUN sudo apt-get update
RUN sudo apt-get install -y -qq docker-ce docker-ce-cli containerd.io
# Cloud SDK
RUN export CLOUD_SDK_REPO="cloud-sdk-stretch" && \
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update -y && apt-get install google-cloud-sdk google-cloud-sdk-app-engine-go -y -qq
# Install Golang
# https://github.com/docker-library/golang/blob/master/1.14/stretch/Dockerfile
RUN mkdir -p /toolchain/golang
WORKDIR /toolchain/golang
RUN sudo rm -rf /usr/local/go/
# When updating Go version, update Dockerfile.ci, Dockerfile.base-build, and go.mod
RUN curl -L https://golang.org/dl/go1.14.linux-amd64.tar.gz | sudo tar -C /usr/local -xz
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN sudo mkdir -p "$GOPATH/src" "$GOPATH/bin" \
&& sudo chmod -R 777 "$GOPATH"
# Prepare toolchain and workspace
RUN mkdir -p /toolchain
WORKDIR /workspace
ENV OPEN_MATCH_CI_MODE=1
ENV KUBECONFIG=$HOME/.kube/config
RUN mkdir -p $HOME/.kube/

Dockerfile.cmd Normal file

@ -0,0 +1,60 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match
ARG IMAGE_TITLE
RUN make "build/cmd/${IMAGE_TITLE}"
FROM gcr.io/distroless/static:nonroot
ARG IMAGE_TITLE
WORKDIR /app/
COPY --from=builder --chown=nonroot "/go/src/open-match.dev/open-match/build/cmd/${IMAGE_TITLE}/" "/app/"
ENTRYPOINT ["/app/run"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_DATE}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/googleforgames/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="1" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/googleforgames/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"


@ -1,12 +0,0 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY examples/evaluators/golang/simple examples/evaluators/golang/simple
COPY config config
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/evaluators/golang/simple
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/mmfstub/mmfstub mmfstub
ENTRYPOINT ["./simple"]


@ -1,14 +0,0 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY cmd/frontendapi cmd/frontendapi
COPY api/protobuf-spec/frontend.pb.go cmd/frontendapi/proto/
COPY config config
COPY internal internal
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/frontendapi .
ENTRYPOINT ["./frontendapi"]


@ -1,13 +0,0 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY examples/functions/golang/simple examples/functions/golang/simple
COPY config config
COPY internal/statestorage internal/statestorage
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/functions/golang/simple
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o mmf .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/mmfstub/mmfstub mmfstub
CMD ["./mmf"]


@ -1,24 +0,0 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
# Necessary to get a specific version of the golang k8s client
RUN go get github.com/tools/godep
RUN go get k8s.io/client-go/...
WORKDIR /go/src/k8s.io/client-go
RUN git checkout v7.0.0
RUN godep restore ./...
RUN rm -rf vendor/
RUN rm -rf /go/src/github.com/golang/protobuf/
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/
COPY cmd/mmforc cmd/mmforc
COPY config config
COPY internal internal
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmforc/
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
# Uncomment to build production images (removes all troubleshooting tools)
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmforc/mmforc .
CMD ["./mmforc"]

Makefile Normal file

@ -0,0 +1,980 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## Open Match Make Help
## ====================
##
## Create a GKE Cluster (requires gcloud installed and initialized, https://cloud.google.com/sdk/docs/quickstarts)
## make activate-gcp-apis
## make create-gke-cluster push-helm
##
## Create a Minikube Cluster (requires VirtualBox)
## make create-mini-cluster push-helm
##
## Create a KinD Cluster (Follow instructions to run command before pushing helm.)
## make create-kind-cluster get-kind-kubeconfig
## Finish KinD setup by installing helm:
## make push-helm
##
## Deploy Open Match
## make push-images -j$(nproc)
## make install-chart
##
## Build and Test
## make all -j$(nproc)
## make test
##
## Access telemetry
## make proxy-prometheus
## make proxy-grafana
## make proxy-ui
##
## Teardown
## make delete-mini-cluster
## make delete-gke-cluster
## make delete-kind-cluster && export KUBECONFIG=""
##
## Prepare a Pull Request
## make presubmit
##
# If you want information on how to edit this file checkout,
# http://makefiletutorial.com/
BASE_VERSION = 1.1.0
SHORT_SHA = $(shell git rev-parse --short=7 HEAD | tr -d [:punct:])
BRANCH_NAME = $(shell git rev-parse --abbrev-ref HEAD | tr -d [:punct:])
VERSION = $(BASE_VERSION)-$(SHORT_SHA)
BUILD_DATE = $(shell date -u +'%Y-%m-%dT%H:%M:%SZ')
YEAR_MONTH = $(shell date -u +'%Y%m')
YEAR_MONTH_DAY = $(shell date -u +'%Y%m%d')
MAJOR_MINOR_VERSION = $(shell echo $(BASE_VERSION) | cut -d '.' -f1).$(shell echo $(BASE_VERSION) | cut -d '.' -f2)
PROTOC_VERSION = 3.10.1
HELM_VERSION = 3.0.0
KUBECTL_VERSION = 1.16.2
MINIKUBE_VERSION = latest
GOLANGCI_VERSION = 1.18.0
KIND_VERSION = 0.5.1
SWAGGERUI_VERSION = 3.24.2
GOOGLE_APIS_VERSION = aba342359b6743353195ca53f944fe71e6fb6cd4
GRPC_GATEWAY_VERSION = 1.14.3
TERRAFORM_VERSION = 0.12.13
CHART_TESTING_VERSION = 2.4.0
# A workaround to simplify Open Match development workflow
REDIS_DEV_PASSWORD = helloworld
ENABLE_SECURITY_HARDENING = 0
GO = GO111MODULE=on go
# Defines the absolute local directory of the open-match project
REPOSITORY_ROOT := $(patsubst %/,%,$(dir $(abspath $(MAKEFILE_LIST))))
BUILD_DIR = $(REPOSITORY_ROOT)/build
TOOLCHAIN_DIR = $(BUILD_DIR)/toolchain
TOOLCHAIN_BIN = $(TOOLCHAIN_DIR)/bin
PROTOC_INCLUDES := $(REPOSITORY_ROOT)/third_party
GCP_PROJECT_ID ?=
GCP_PROJECT_FLAG = --project=$(GCP_PROJECT_ID)
OPEN_MATCH_BUILD_PROJECT_ID = open-match-build
OPEN_MATCH_PUBLIC_IMAGES_PROJECT_ID = open-match-public-images
REGISTRY ?= gcr.io/$(GCP_PROJECT_ID)
TAG = $(VERSION)
ALTERNATE_TAG = dev
VERSIONED_CANARY_TAG = $(BASE_VERSION)-canary
DATED_CANARY_TAG = $(YEAR_MONTH_DAY)-canary
CANARY_TAG = canary
GKE_CLUSTER_NAME = om-cluster
GCP_REGION = us-west1
GCP_ZONE = us-west1-a
GCP_LOCATION = $(GCP_ZONE)
EXE_EXTENSION =
GCP_LOCATION_FLAG = --zone $(GCP_ZONE)
GO111MODULE = on
GOLANG_TEST_COUNT = 1
SWAGGERUI_PORT = 51500
PROMETHEUS_PORT = 9090
JAEGER_QUERY_PORT = 16686
GRAFANA_PORT = 3000
FRONTEND_PORT = 51504
BACKEND_PORT = 51505
QUERY_PORT = 51503
SYNCHRONIZER_PORT = 51506
DEMO_PORT = 51507
PROTOC := $(TOOLCHAIN_BIN)/protoc$(EXE_EXTENSION)
HELM = $(TOOLCHAIN_BIN)/helm$(EXE_EXTENSION)
MINIKUBE = $(TOOLCHAIN_BIN)/minikube$(EXE_EXTENSION)
KUBECTL = $(TOOLCHAIN_BIN)/kubectl$(EXE_EXTENSION)
KIND = $(TOOLCHAIN_BIN)/kind$(EXE_EXTENSION)
TERRAFORM = $(TOOLCHAIN_BIN)/terraform$(EXE_EXTENSION)
CERTGEN = $(TOOLCHAIN_BIN)/certgen$(EXE_EXTENSION)
GOLANGCI = $(TOOLCHAIN_BIN)/golangci-lint$(EXE_EXTENSION)
CHART_TESTING = $(TOOLCHAIN_BIN)/ct$(EXE_EXTENSION)
GCLOUD = gcloud --quiet
OPEN_MATCH_HELM_NAME = open-match
OPEN_MATCH_KUBERNETES_NAMESPACE = open-match
OPEN_MATCH_SECRETS_DIR = $(REPOSITORY_ROOT)/install/helm/open-match/secrets
GCLOUD_ACCOUNT_EMAIL = $(shell gcloud auth list --format yaml | grep ACTIVE -a2 | grep account: | cut -c 10-)
_GCB_POST_SUBMIT ?= 0
# Latest version triggers builds of :latest images.
_GCB_LATEST_VERSION ?= undefined
IMAGE_BUILD_ARGS = --build-arg BUILD_DATE=$(BUILD_DATE) --build-arg=VCS_REF=$(SHORT_SHA) --build-arg BUILD_VERSION=$(BASE_VERSION)
GCLOUD_EXTRA_FLAGS =
# Make port forwards accessible outside of the proxy machine.
PORT_FORWARD_ADDRESS_FLAG = --address 0.0.0.0
DASHBOARD_PORT = 9092
# Open Match Cluster E2E Test Variables
OPEN_MATCH_CI_LABEL = open-match-ci
# This flag is set when running in Continuous Integration.
ifdef OPEN_MATCH_CI_MODE
export KUBECONFIG = $(HOME)/.kube/config
GCLOUD = gcloud --quiet --no-user-output-enabled
GKE_CLUSTER_NAME = open-match-ci
endif
export PATH := $(TOOLCHAIN_BIN):$(PATH)
# Get the project from gcloud if it's not set.
ifeq ($(GCP_PROJECT_ID),)
export GCP_PROJECT_ID = $(shell gcloud config list --format 'value(core.project)')
endif
ifeq ($(OS),Windows_NT)
HELM_PACKAGE = https://get.helm.sh/helm-v$(HELM_VERSION)-windows-amd64.zip
MINIKUBE_PACKAGE = https://storage.googleapis.com/minikube/releases/$(MINIKUBE_VERSION)/minikube-windows-amd64.exe
EXE_EXTENSION = .exe
PROTOC_PACKAGE = https://github.com/protocolbuffers/protobuf/releases/download/v$(PROTOC_VERSION)/protoc-$(PROTOC_VERSION)-win64.zip
KUBECTL_PACKAGE = https://storage.googleapis.com/kubernetes-release/release/v$(KUBECTL_VERSION)/bin/windows/amd64/kubectl.exe
GOLANGCI_PACKAGE = https://github.com/golangci/golangci-lint/releases/download/v$(GOLANGCI_VERSION)/golangci-lint-$(GOLANGCI_VERSION)-windows-amd64.zip
KIND_PACKAGE = https://github.com/kubernetes-sigs/kind/releases/download/v$(KIND_VERSION)/kind-windows-amd64
TERRAFORM_PACKAGE = https://releases.hashicorp.com/terraform/$(TERRAFORM_VERSION)/terraform_$(TERRAFORM_VERSION)_windows_amd64.zip
CHART_TESTING_PACKAGE = https://github.com/helm/chart-testing/releases/download/v$(CHART_TESTING_VERSION)/chart-testing_$(CHART_TESTING_VERSION)_windows_amd64.zip
SED_REPLACE = sed -i
else
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Linux)
HELM_PACKAGE = https://get.helm.sh/helm-v$(HELM_VERSION)-linux-amd64.tar.gz
MINIKUBE_PACKAGE = https://storage.googleapis.com/minikube/releases/$(MINIKUBE_VERSION)/minikube-linux-amd64
PROTOC_PACKAGE = https://github.com/protocolbuffers/protobuf/releases/download/v$(PROTOC_VERSION)/protoc-$(PROTOC_VERSION)-linux-x86_64.zip
KUBECTL_PACKAGE = https://storage.googleapis.com/kubernetes-release/release/v$(KUBECTL_VERSION)/bin/linux/amd64/kubectl
GOLANGCI_PACKAGE = https://github.com/golangci/golangci-lint/releases/download/v$(GOLANGCI_VERSION)/golangci-lint-$(GOLANGCI_VERSION)-linux-amd64.tar.gz
KIND_PACKAGE = https://github.com/kubernetes-sigs/kind/releases/download/v$(KIND_VERSION)/kind-linux-amd64
TERRAFORM_PACKAGE = https://releases.hashicorp.com/terraform/$(TERRAFORM_VERSION)/terraform_$(TERRAFORM_VERSION)_linux_amd64.zip
CHART_TESTING_PACKAGE = https://github.com/helm/chart-testing/releases/download/v$(CHART_TESTING_VERSION)/chart-testing_$(CHART_TESTING_VERSION)_linux_amd64.tar.gz
SED_REPLACE = sed -i
endif
ifeq ($(UNAME_S),Darwin)
HELM_PACKAGE = https://get.helm.sh/helm-v$(HELM_VERSION)-darwin-amd64.tar.gz
MINIKUBE_PACKAGE = https://storage.googleapis.com/minikube/releases/$(MINIKUBE_VERSION)/minikube-darwin-amd64
PROTOC_PACKAGE = https://github.com/protocolbuffers/protobuf/releases/download/v$(PROTOC_VERSION)/protoc-$(PROTOC_VERSION)-osx-x86_64.zip
KUBECTL_PACKAGE = https://storage.googleapis.com/kubernetes-release/release/v$(KUBECTL_VERSION)/bin/darwin/amd64/kubectl
GOLANGCI_PACKAGE = https://github.com/golangci/golangci-lint/releases/download/v$(GOLANGCI_VERSION)/golangci-lint-$(GOLANGCI_VERSION)-darwin-amd64.tar.gz
KIND_PACKAGE = https://github.com/kubernetes-sigs/kind/releases/download/v$(KIND_VERSION)/kind-darwin-amd64
TERRAFORM_PACKAGE = https://releases.hashicorp.com/terraform/$(TERRAFORM_VERSION)/terraform_$(TERRAFORM_VERSION)_darwin_amd64.zip
CHART_TESTING_PACKAGE = https://github.com/helm/chart-testing/releases/download/v$(CHART_TESTING_VERSION)/chart-testing_$(CHART_TESTING_VERSION)_darwin_amd64.tar.gz
SED_REPLACE = sed -i ''
endif
endif
GOLANG_PROTOS = pkg/pb/backend.pb.go pkg/pb/frontend.pb.go pkg/pb/matchfunction.pb.go pkg/pb/query.pb.go pkg/pb/messages.pb.go pkg/pb/extensions.pb.go pkg/pb/evaluator.pb.go internal/ipb/synchronizer.pb.go pkg/pb/backend.pb.gw.go pkg/pb/frontend.pb.gw.go pkg/pb/matchfunction.pb.gw.go pkg/pb/query.pb.gw.go pkg/pb/evaluator.pb.gw.go
SWAGGER_JSON_DOCS = api/frontend.swagger.json api/backend.swagger.json api/query.swagger.json api/matchfunction.swagger.json api/evaluator.swagger.json
ALL_PROTOS = $(GOLANG_PROTOS) $(SWAGGER_JSON_DOCS)
# CMDS is a list of all folders in cmd/
CMDS = $(notdir $(wildcard cmd/*))
# Names of the individual images, omitting the openmatch prefix.
IMAGES = $(CMDS) mmf-go-soloduel base-build
help:
@cat Makefile | grep ^\#\# | grep -v ^\#\#\# | cut -c 4-
local-cloud-build: LOCAL_CLOUD_BUILD_PUSH = # --push
local-cloud-build: gcloud
cloud-build-local --config=cloudbuild.yaml --dryrun=false $(LOCAL_CLOUD_BUILD_PUSH) --substitutions SHORT_SHA=$(SHORT_SHA),_GCB_POST_SUBMIT=$(_GCB_POST_SUBMIT),_GCB_LATEST_VERSION=$(_GCB_LATEST_VERSION),BRANCH_NAME=$(BRANCH_NAME) .
################################################################################
## #############################################################################
## Image commands:
## These commands are auto-generated based on a complete list of images. All
## folders in cmd/ are turned into images using Dockerfile.cmd. Additional
## images are specified by the IMAGES variable. Image commands omit the
## "openmatch-" prefix on the image name and tags.
##
list-images:
@echo $(IMAGES)
#######################################
## build-images / build-<image name>-image: builds images locally
##
build-images: $(foreach IMAGE,$(IMAGES),build-$(IMAGE)-image)
# Include all-protos here so that all dependencies are guaranteed to be downloaded after the base image is created.
# This is important so that the repository does not have any mutations while building individual images.
build-base-build-image: docker $(ALL_PROTOS)
docker build -f Dockerfile.base-build -t open-match-base-build -t $(REGISTRY)/openmatch-base-build:$(TAG) -t $(REGISTRY)/openmatch-base-build:$(ALTERNATE_TAG) .
$(foreach CMD,$(CMDS),build-$(CMD)-image): build-%-image: docker build-base-build-image
docker build \
-f Dockerfile.cmd \
$(IMAGE_BUILD_ARGS) \
--build-arg=IMAGE_TITLE=$* \
-t $(REGISTRY)/openmatch-$*:$(TAG) \
-t $(REGISTRY)/openmatch-$*:$(ALTERNATE_TAG) \
.
build-mmf-go-soloduel-image: docker build-base-build-image
docker build -f examples/functions/golang/soloduel/Dockerfile -t $(REGISTRY)/openmatch-mmf-go-soloduel:$(TAG) -t $(REGISTRY)/openmatch-mmf-go-soloduel:$(ALTERNATE_TAG) .
#######################################
## push-images / push-<image name>-image: builds and pushes images to your
## container registry.
##
push-images: $(foreach IMAGE,$(IMAGES),push-$(IMAGE)-image)
$(foreach IMAGE,$(IMAGES),push-$(IMAGE)-image): push-%-image: build-%-image docker
docker push $(REGISTRY)/openmatch-$*:$(TAG)
docker push $(REGISTRY)/openmatch-$*:$(ALTERNATE_TAG)
ifeq ($(_GCB_POST_SUBMIT),1)
docker tag $(REGISTRY)/openmatch-$*:$(TAG) $(REGISTRY)/openmatch-$*:$(VERSIONED_CANARY_TAG)
docker push $(REGISTRY)/openmatch-$*:$(VERSIONED_CANARY_TAG)
ifeq ($(BASE_VERSION),0.0.0-dev)
docker tag $(REGISTRY)/openmatch-$*:$(TAG) $(REGISTRY)/openmatch-$*:$(DATED_CANARY_TAG)
docker push $(REGISTRY)/openmatch-$*:$(DATED_CANARY_TAG)
docker tag $(REGISTRY)/openmatch-$*:$(TAG) $(REGISTRY)/openmatch-$*:$(CANARY_TAG)
docker push $(REGISTRY)/openmatch-$*:$(CANARY_TAG)
endif
endif
#######################################
## retag-images / retag-<image name>-image: publishes images on the public
## container registry. Used for publishing releases.
##
retag-images: $(foreach IMAGE,$(IMAGES),retag-$(IMAGE)-image)
retag-%-image: SOURCE_REGISTRY = gcr.io/$(OPEN_MATCH_BUILD_PROJECT_ID)
retag-%-image: TARGET_REGISTRY = gcr.io/$(OPEN_MATCH_PUBLIC_IMAGES_PROJECT_ID)
retag-%-image: SOURCE_TAG = canary
$(foreach IMAGE,$(IMAGES),retag-$(IMAGE)-image): retag-%-image: docker
docker pull $(SOURCE_REGISTRY)/openmatch-$*:$(SOURCE_TAG)
docker tag $(SOURCE_REGISTRY)/openmatch-$*:$(SOURCE_TAG) $(TARGET_REGISTRY)/openmatch-$*:$(TAG)
docker push $(TARGET_REGISTRY)/openmatch-$*:$(TAG)
#######################################
## clean-images / clean-<image name>-image: removes images from local docker
##
clean-images: docker $(foreach IMAGE,$(IMAGES),clean-$(IMAGE)-image)
-docker rmi -f open-match-base-build
$(foreach IMAGE,$(IMAGES),clean-$(IMAGE)-image): clean-%-image:
-docker rmi -f $(REGISTRY)/openmatch-$*:$(TAG) $(REGISTRY)/openmatch-$*:$(ALTERNATE_TAG)
#####################################################################################################################
update-chart-deps: build/toolchain/bin/helm$(EXE_EXTENSION)
(cd $(REPOSITORY_ROOT)/install/helm/open-match; $(HELM) repo add incubator https://charts.helm.sh/stable; $(HELM) dependency update)
lint-chart: build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/ct$(EXE_EXTENSION)
(cd $(REPOSITORY_ROOT)/install/helm; $(HELM) lint $(OPEN_MATCH_HELM_NAME))
$(CHART_TESTING) lint --all --chart-yaml-schema $(TOOLCHAIN_BIN)/etc/chart_schema.yaml --lint-conf $(TOOLCHAIN_BIN)/etc/lintconf.yaml --chart-dirs $(REPOSITORY_ROOT)/install/helm/
$(CHART_TESTING) lint-and-install --all --chart-yaml-schema $(TOOLCHAIN_BIN)/etc/chart_schema.yaml --lint-conf $(TOOLCHAIN_BIN)/etc/lintconf.yaml --chart-dirs $(REPOSITORY_ROOT)/install/helm/
build/chart/open-match-$(BASE_VERSION).tgz: build/toolchain/bin/helm$(EXE_EXTENSION) lint-chart
mkdir -p $(BUILD_DIR)/chart/
$(HELM) package -d $(BUILD_DIR)/chart/ --version $(BASE_VERSION) $(REPOSITORY_ROOT)/install/helm/open-match
build/chart/index.yaml: build/toolchain/bin/helm$(EXE_EXTENSION) gcloud build/chart/open-match-$(BASE_VERSION).tgz
mkdir -p $(BUILD_DIR)/chart-index/
-gsutil cp gs://open-match-chart/chart/index.yaml $(BUILD_DIR)/chart-index/
-gsutil -m cp gs://open-match-chart/chart/open-match-* $(BUILD_DIR)/chart-index/
$(HELM) repo index $(BUILD_DIR)/chart-index/
$(HELM) repo index --merge $(BUILD_DIR)/chart-index/index.yaml $(BUILD_DIR)/chart/
build/chart/index.yaml.$(YEAR_MONTH_DAY): build/chart/index.yaml
cp $(BUILD_DIR)/chart/index.yaml $(BUILD_DIR)/chart/index.yaml.$(YEAR_MONTH_DAY)
build/chart/: build/chart/index.yaml build/chart/index.yaml.$(YEAR_MONTH_DAY)
install-chart-prerequisite: build/toolchain/bin/kubectl$(EXE_EXTENSION) update-chart-deps
-$(KUBECTL) create namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE)
$(KUBECTL) apply -f install/gke-metadata-server-workaround.yaml
# Used for Open Match development. Install om-configmap-override.yaml by default.
HELM_UPGRADE_FLAGS = --cleanup-on-fail -i --no-hooks --debug --timeout=600s --namespace=$(OPEN_MATCH_KUBERNETES_NAMESPACE) --set global.gcpProjectId=$(GCP_PROJECT_ID) --set open-match-override.enabled=true --set redis.password=$(REDIS_DEV_PASSWORD)
# Used to generate static YAMLs. Install om-configmap-override.yaml as needed.
HELM_TEMPLATE_FLAGS = --no-hooks --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --set usingHelmTemplate=true
HELM_IMAGE_FLAGS = --set global.image.registry=$(REGISTRY) --set global.image.tag=$(TAG)
install-demo: build/toolchain/bin/helm$(EXE_EXTENSION)
cp $(REPOSITORY_ROOT)/install/02-open-match-demo.yaml $(REPOSITORY_ROOT)/install/tmp-demo.yaml
$(SED_REPLACE) 's|gcr.io/open-match-public-images|$(REGISTRY)|g' $(REPOSITORY_ROOT)/install/tmp-demo.yaml
$(SED_REPLACE) 's|0.0.0-dev|$(TAG)|g' $(REPOSITORY_ROOT)/install/tmp-demo.yaml
$(KUBECTL) apply -f $(REPOSITORY_ROOT)/install/tmp-demo.yaml
rm $(REPOSITORY_ROOT)/install/tmp-demo.yaml
# install-large-chart will install open-match-core and open-match-demo with the demo evaluator and mmf, plus telemetry support.
install-large-chart: install-chart-prerequisite install-demo build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_HELM_NAME) $(HELM_UPGRADE_FLAGS) --atomic install/helm/open-match $(HELM_IMAGE_FLAGS) \
--set open-match-telemetry.enabled=true \
--set open-match-customize.enabled=true \
--set open-match-customize.evaluator.enabled=true \
--set global.telemetry.grafana.enabled=true \
--set global.telemetry.jaeger.enabled=true \
--set global.telemetry.prometheus.enabled=true
# install-chart will install open-match-core, open-match-demo, with the demo evaluator and mmf.
install-chart: install-chart-prerequisite install-demo build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_HELM_NAME) $(HELM_UPGRADE_FLAGS) --atomic install/helm/open-match $(HELM_IMAGE_FLAGS) \
--set open-match-customize.enabled=true \
--set open-match-customize.evaluator.enabled=true
# install-scale-chart will install open-match-core with telemetry support, then install the open-match-scale chart.
install-scale-chart: install-chart-prerequisite build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_HELM_NAME) $(HELM_UPGRADE_FLAGS) --atomic install/helm/open-match $(HELM_IMAGE_FLAGS) -f install/helm/open-match/values-production.yaml \
--set open-match-telemetry.enabled=true \
--set open-match-customize.enabled=true \
--set open-match-customize.function.enabled=true \
--set open-match-customize.evaluator.enabled=true \
--set open-match-customize.function.image=openmatch-scale-mmf \
--set global.telemetry.grafana.enabled=true \
--set global.telemetry.jaeger.enabled=false \
--set global.telemetry.prometheus.enabled=true
$(HELM) template $(OPEN_MATCH_HELM_NAME)-scale install/helm/open-match $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) -f install/helm/open-match/values-production.yaml \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set global.telemetry.prometheus.enabled=true \
--set global.telemetry.grafana.enabled=true \
--set open-match-scale.enabled=true | $(KUBECTL) apply -f -
# install-ci-chart will install open-match-core with a pool-based mmf for end-to-end in-cluster tests.
install-ci-chart: install-chart-prerequisite build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_HELM_NAME) $(HELM_UPGRADE_FLAGS) --atomic install/helm/open-match $(HELM_IMAGE_FLAGS) \
--set query.replicas=1,frontend.replicas=1,backend.replicas=1 \
--set evaluator.hostName=open-match-test \
--set evaluator.grpcPort=50509 \
--set evaluator.httpPort=51509 \
--set open-match-core.registrationInterval=200ms \
--set open-match-core.proposalCollectionInterval=200ms \
--set open-match-core.assignedDeleteTimeout=200ms \
--set open-match-core.pendingReleaseTimeout=200ms \
--set open-match-core.queryPageSize=10 \
--set global.gcpProjectId=intentionally-invalid-value \
--set redis.master.resources.requests.cpu=0.6,redis.master.resources.requests.memory=300Mi \
--set ci=true
delete-chart: build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/kubectl$(EXE_EXTENSION)
-$(HELM) uninstall $(OPEN_MATCH_HELM_NAME)
-$(HELM) uninstall $(OPEN_MATCH_HELM_NAME)-demo
-$(KUBECTL) delete psp,clusterrole,clusterrolebinding --selector=release=open-match
-$(KUBECTL) delete psp,clusterrole,clusterrolebinding --selector=release=open-match-demo
-$(KUBECTL) delete namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE)
-$(KUBECTL) delete namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE)-demo
ifneq ($(BASE_VERSION), 0.0.0-dev)
install/yaml/: REGISTRY = gcr.io/$(OPEN_MATCH_PUBLIC_IMAGES_PROJECT_ID)
install/yaml/: TAG = $(BASE_VERSION)
endif
install/yaml/: update-chart-deps install/yaml/install.yaml install/yaml/01-open-match-core.yaml install/yaml/02-open-match-demo.yaml install/yaml/03-prometheus-chart.yaml install/yaml/04-grafana-chart.yaml install/yaml/05-jaeger-chart.yaml install/yaml/06-open-match-override-configmap.yaml install/yaml/07-open-match-default-evaluator.yaml
# We have to hard-code the Jaeger endpoints as we are excluding Jaeger, so Helm cannot determine the endpoints from the Jaeger subchart
install/yaml/01-open-match-core.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set-string global.telemetry.jaeger.agentEndpoint="$(OPEN_MATCH_HELM_NAME)-jaeger-agent:6831" \
--set-string global.telemetry.jaeger.collectorEndpoint="http://$(OPEN_MATCH_HELM_NAME)-jaeger-collector:14268/api/traces" \
install/helm/open-match > install/yaml/01-open-match-core.yaml
install/yaml/02-open-match-demo.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
cp $(REPOSITORY_ROOT)/install/02-open-match-demo.yaml $(REPOSITORY_ROOT)/install/yaml/02-open-match-demo.yaml
$(SED_REPLACE) 's|0.0.0-dev|$(TAG)|g' $(REPOSITORY_ROOT)/install/yaml/02-open-match-demo.yaml
$(SED_REPLACE) 's|gcr.io/open-match-public-images|$(REGISTRY)|g' $(REPOSITORY_ROOT)/install/yaml/02-open-match-demo.yaml
install/yaml/03-prometheus-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set open-match-telemetry.enabled=true \
--set global.telemetry.prometheus.enabled=true \
install/helm/open-match > install/yaml/03-prometheus-chart.yaml
# We have to hard-code the Prometheus Server URL as we are excluding Prometheus, so Helm cannot determine the URL from the Prometheus subchart
install/yaml/04-grafana-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set open-match-telemetry.enabled=true \
--set global.telemetry.grafana.enabled=true \
--set-string global.telemetry.grafana.prometheusServer="http://$(OPEN_MATCH_HELM_NAME)-prometheus-server.$(OPEN_MATCH_KUBERNETES_NAMESPACE).svc.cluster.local:80/" \
install/helm/open-match > install/yaml/04-grafana-chart.yaml
install/yaml/05-jaeger-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set open-match-telemetry.enabled=true \
--set global.telemetry.jaeger.enabled=true \
install/helm/open-match > install/yaml/05-jaeger-chart.yaml
install/yaml/06-open-match-override-configmap.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set open-match-override.enabled=true \
-s templates/om-configmap-override.yaml \
install/helm/open-match > install/yaml/06-open-match-override-configmap.yaml
install/yaml/07-open-match-default-evaluator.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set open-match-customize.enabled=true \
--set open-match-customize.evaluator.enabled=true \
install/helm/open-match > install/yaml/07-open-match-default-evaluator.yaml
install/yaml/install.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-customize.enabled=true \
--set open-match-customize.evaluator.enabled=true \
--set open-match-telemetry.enabled=true \
--set global.telemetry.jaeger.enabled=true \
--set global.telemetry.grafana.enabled=true \
--set global.telemetry.prometheus.enabled=true \
install/helm/open-match > install/yaml/install.yaml
set-redis-password:
@stty -echo; \
printf "Redis password: "; \
read REDIS_PASSWORD; \
stty echo; \
printf "\n"; \
$(KUBECTL) create secret generic open-match-redis -n $(OPEN_MATCH_KUBERNETES_NAMESPACE) --from-literal=redis-password=$$REDIS_PASSWORD --dry-run -o yaml | $(KUBECTL) replace -f - --force
install-toolchain: install-kubernetes-tools install-protoc-tools install-openmatch-tools
install-kubernetes-tools: build/toolchain/bin/kubectl$(EXE_EXTENSION) build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/minikube$(EXE_EXTENSION) build/toolchain/bin/terraform$(EXE_EXTENSION)
install-protoc-tools: build/toolchain/bin/protoc$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-go$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-swagger$(EXE_EXTENSION)
install-openmatch-tools: build/toolchain/bin/certgen$(EXE_EXTENSION) build/toolchain/bin/reaper$(EXE_EXTENSION)
build/toolchain/bin/helm$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
mkdir -p $(TOOLCHAIN_DIR)/temp-helm
ifeq ($(suffix $(HELM_PACKAGE)),.zip)
cd $(TOOLCHAIN_DIR)/temp-helm && curl -Lo helm.zip $(HELM_PACKAGE) && unzip -d $(TOOLCHAIN_BIN) -j -q -o helm.zip
else
cd $(TOOLCHAIN_DIR)/temp-helm && curl -Lo helm.tar.gz $(HELM_PACKAGE) && tar xzf helm.tar.gz -C $(TOOLCHAIN_BIN) --strip-components 1
endif
rm -rf $(TOOLCHAIN_DIR)/temp-helm/
build/toolchain/bin/ct$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
mkdir -p $(TOOLCHAIN_DIR)/temp-charttesting
ifeq ($(suffix $(CHART_TESTING_PACKAGE)),.zip)
cd $(TOOLCHAIN_DIR)/temp-charttesting && curl -Lo charttesting.zip $(CHART_TESTING_PACKAGE) && unzip -j -q -o charttesting.zip
else
cd $(TOOLCHAIN_DIR)/temp-charttesting && curl -Lo charttesting.tar.gz $(CHART_TESTING_PACKAGE) && tar xzf charttesting.tar.gz
endif
mv $(TOOLCHAIN_DIR)/temp-charttesting/* $(TOOLCHAIN_BIN)
rm -rf $(TOOLCHAIN_DIR)/temp-charttesting/
build/toolchain/bin/minikube$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
curl -Lo $(MINIKUBE) $(MINIKUBE_PACKAGE)
chmod +x $(MINIKUBE)
build/toolchain/bin/kubectl$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
curl -Lo $(KUBECTL) $(KUBECTL_PACKAGE)
chmod +x $(KUBECTL)
build/toolchain/bin/golangci-lint$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
mkdir -p $(TOOLCHAIN_DIR)/temp-golangci
ifeq ($(suffix $(GOLANGCI_PACKAGE)),.zip)
cd $(TOOLCHAIN_DIR)/temp-golangci && curl -Lo golangci.zip $(GOLANGCI_PACKAGE) && unzip -j -q -o golangci.zip
else
cd $(TOOLCHAIN_DIR)/temp-golangci && curl -Lo golangci.tar.gz $(GOLANGCI_PACKAGE) && tar xzf golangci.tar.gz --strip-components 1
endif
mv $(TOOLCHAIN_DIR)/temp-golangci/golangci-lint$(EXE_EXTENSION) $(GOLANGCI)
rm -rf $(TOOLCHAIN_DIR)/temp-golangci/
build/toolchain/bin/kind$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
curl -Lo $(KIND) $(KIND_PACKAGE)
chmod +x $(KIND)
build/toolchain/bin/terraform$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
mkdir -p $(TOOLCHAIN_DIR)/temp-terraform
cd $(TOOLCHAIN_DIR)/temp-terraform && curl -Lo terraform.zip $(TERRAFORM_PACKAGE) && unzip -j -q -o terraform.zip
mv $(TOOLCHAIN_DIR)/temp-terraform/terraform$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/terraform$(EXE_EXTENSION)
rm -rf $(TOOLCHAIN_DIR)/temp-terraform/
build/toolchain/bin/protoc$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
curl -o $(TOOLCHAIN_DIR)/protoc-temp.zip -L $(PROTOC_PACKAGE)
(cd $(TOOLCHAIN_DIR); unzip -q -o protoc-temp.zip)
rm $(TOOLCHAIN_DIR)/protoc-temp.zip $(TOOLCHAIN_DIR)/readme.txt
build/toolchain/bin/protoc-gen-doc$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
cd $(TOOLCHAIN_BIN) && $(GO) build -i -pkgdir . github.com/pseudomuto/protoc-gen-doc/cmd/protoc-gen-doc
build/toolchain/bin/protoc-gen-go$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
cd $(TOOLCHAIN_BIN) && $(GO) build -i -pkgdir . github.com/golang/protobuf/protoc-gen-go
build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION):
cd $(TOOLCHAIN_BIN) && $(GO) build -i -pkgdir . github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
build/toolchain/bin/protoc-gen-swagger$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
cd $(TOOLCHAIN_BIN) && $(GO) build -i -pkgdir . github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
build/toolchain/bin/certgen$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
cd $(TOOLCHAIN_BIN) && $(GO) build $(REPOSITORY_ROOT)/tools/certgen/
build/toolchain/bin/reaper$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
cd $(TOOLCHAIN_BIN) && $(GO) build $(REPOSITORY_ROOT)/tools/reaper/
# Fake target for docker
docker: no-sudo
# Fake target for gcloud
gcloud: no-sudo
tls-certs: install/helm/open-match/secrets/
install/helm/open-match/secrets/: install/helm/open-match/secrets/tls/root-ca/ install/helm/open-match/secrets/tls/server/
install/helm/open-match/secrets/tls/root-ca/: build/toolchain/bin/certgen$(EXE_EXTENSION)
mkdir -p $(OPEN_MATCH_SECRETS_DIR)/tls/root-ca
$(CERTGEN) -ca=true -publiccertificate=$(OPEN_MATCH_SECRETS_DIR)/tls/root-ca/public.cert -privatekey=$(OPEN_MATCH_SECRETS_DIR)/tls/root-ca/private.key
install/helm/open-match/secrets/tls/server/: build/toolchain/bin/certgen$(EXE_EXTENSION) install/helm/open-match/secrets/tls/root-ca/
mkdir -p $(OPEN_MATCH_SECRETS_DIR)/tls/server/
$(CERTGEN) -publiccertificate=$(OPEN_MATCH_SECRETS_DIR)/tls/server/public.cert -privatekey=$(OPEN_MATCH_SECRETS_DIR)/tls/server/private.key -rootpubliccertificate=$(OPEN_MATCH_SECRETS_DIR)/tls/root-ca/public.cert -rootprivatekey=$(OPEN_MATCH_SECRETS_DIR)/tls/root-ca/private.key
auth-docker: gcloud docker
$(GCLOUD) $(GCP_PROJECT_FLAG) auth configure-docker
auth-gke-cluster: gcloud
$(GCLOUD) $(GCP_PROJECT_FLAG) container clusters get-credentials $(GKE_CLUSTER_NAME) $(GCP_LOCATION_FLAG)
activate-gcp-apis: gcloud
$(GCLOUD) services enable containerregistry.googleapis.com
$(GCLOUD) services enable container.googleapis.com
$(GCLOUD) services enable containeranalysis.googleapis.com
$(GCLOUD) services enable binaryauthorization.googleapis.com
create-gcp-service-account: gcloud
gcloud $(GCP_PROJECT_FLAG) iam service-accounts create open-match --display-name="Open Match Service Account"
gcloud $(GCP_PROJECT_FLAG) iam service-accounts add-iam-policy-binding --member=open-match@$(GCP_PROJECT_ID).iam.gserviceaccount.com --role=roles/container.clusterAdmin
gcloud $(GCP_PROJECT_FLAG) iam service-accounts keys create ~/key.json --iam-account open-match@$(GCP_PROJECT_ID).iam.gserviceaccount.com
create-kind-cluster: build/toolchain/bin/kind$(EXE_EXTENSION) build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KIND) create cluster
get-kind-kubeconfig: build/toolchain/bin/kind$(EXE_EXTENSION)
@echo "============================================="
@echo "= Run this command"
@echo "============================================="
@echo "export KUBECONFIG=\"$(shell $(KIND) get kubeconfig-path)\""
@echo "============================================="
delete-kind-cluster: build/toolchain/bin/kind$(EXE_EXTENSION) build/toolchain/bin/kubectl$(EXE_EXTENSION)
-$(KIND) delete cluster
create-cluster-role-binding:
$(KUBECTL) create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$(GCLOUD_ACCOUNT_EMAIL)
create-gke-cluster: GKE_VERSION = 1.15.12-gke.20 # gcloud beta container get-server-config --zone us-west1-a
create-gke-cluster: GKE_CLUSTER_SHAPE_FLAGS = --machine-type n1-standard-4 --enable-autoscaling --min-nodes 1 --num-nodes 2 --max-nodes 10 --disk-size 50
create-gke-cluster: GKE_FUTURE_COMPAT_FLAGS = --no-enable-basic-auth --no-issue-client-certificate --enable-ip-alias --metadata disable-legacy-endpoints=true --enable-autoupgrade
create-gke-cluster: build/toolchain/bin/kubectl$(EXE_EXTENSION) gcloud
$(GCLOUD) beta $(GCP_PROJECT_FLAG) container clusters create $(GKE_CLUSTER_NAME) $(GCP_LOCATION_FLAG) $(GKE_CLUSTER_SHAPE_FLAGS) $(GKE_FUTURE_COMPAT_FLAGS) $(GKE_CLUSTER_FLAGS) \
--enable-pod-security-policy \
--cluster-version $(GKE_VERSION) \
--image-type cos_containerd \
--tags open-match
$(MAKE) create-cluster-role-binding
delete-gke-cluster: gcloud
-$(GCLOUD) $(GCP_PROJECT_FLAG) container clusters delete $(GKE_CLUSTER_NAME) $(GCP_LOCATION_FLAG) $(GCLOUD_EXTRA_FLAGS)
create-mini-cluster: build/toolchain/bin/minikube$(EXE_EXTENSION)
$(MINIKUBE) start --memory 6144 --cpus 4 --disk-size 50g
delete-mini-cluster: build/toolchain/bin/minikube$(EXE_EXTENSION)
-$(MINIKUBE) delete
gcp-apply-binauthz-policy: build/policies/binauthz.yaml
$(GCLOUD) beta $(GCP_PROJECT_FLAG) container binauthz policy import build/policies/binauthz.yaml
all-protos: $(ALL_PROTOS)
# The proto generator really wants to be run from the $GOPATH root, and doesn't
# support directing its output to a location other than the proto file's
# location. So instead generate into a temporary directory, then move the
# output into place.
pkg/pb/%.pb.go: api/%.proto third_party/ build/toolchain/bin/protoc$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-go$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION)
mkdir -p $(REPOSITORY_ROOT)/build/prototmp $(REPOSITORY_ROOT)/pkg/pb
$(PROTOC) $< \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--go_out=plugins=grpc:$(REPOSITORY_ROOT)/build/prototmp
mv $(REPOSITORY_ROOT)/build/prototmp/open-match.dev/open-match/$@ $@
internal/ipb/%.pb.go: internal/api/%.proto third_party/ build/toolchain/bin/protoc$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-go$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION)
mkdir -p $(REPOSITORY_ROOT)/build/prototmp $(REPOSITORY_ROOT)/internal/ipb
$(PROTOC) $< \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--go_out=plugins=grpc:$(REPOSITORY_ROOT)/build/prototmp
mv $(REPOSITORY_ROOT)/build/prototmp/open-match.dev/open-match/$@ $@
pkg/pb/%.pb.gw.go: api/%.proto third_party/ build/toolchain/bin/protoc$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-go$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION)
mkdir -p $(REPOSITORY_ROOT)/build/prototmp $(REPOSITORY_ROOT)/pkg/pb
$(PROTOC) $< \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--grpc-gateway_out=logtostderr=true,allow_delete_body=true:$(REPOSITORY_ROOT)/build/prototmp
mv $(REPOSITORY_ROOT)/build/prototmp/open-match.dev/open-match/$@ $@
api/%.swagger.json: api/%.proto third_party/ build/toolchain/bin/protoc$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-swagger$(EXE_EXTENSION)
$(PROTOC) $< \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--swagger_out=logtostderr=true,allow_delete_body=true:$(REPOSITORY_ROOT)
api/api.md: third_party/ build/toolchain/bin/protoc-gen-doc$(EXE_EXTENSION)
$(PROTOC) api/*.proto \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--doc_out=. \
--doc_opt=markdown,api_temp.md
# Crazy hack that inserts the Hugo link reference header into this API doc :-)
cat ./docs/hugo_apiheader.txt ./api_temp.md >> api.md
mv ./api.md $(REPOSITORY_ROOT)/../open-match-docs/site/content/en/docs/Reference/
rm ./api_temp.md
# The include structure of the protos needs to be called out so the dependency chain is run through properly.
pkg/pb/backend.pb.go: pkg/pb/messages.pb.go
pkg/pb/frontend.pb.go: pkg/pb/messages.pb.go
pkg/pb/matchfunction.pb.go: pkg/pb/messages.pb.go
pkg/pb/query.pb.go: pkg/pb/messages.pb.go
pkg/pb/evaluator.pb.go: pkg/pb/messages.pb.go
internal/ipb/synchronizer.pb.go: pkg/pb/messages.pb.go
build: assets
$(GO) build ./...
$(GO) build -tags e2ecluster ./...
define test_folder
$(if $(wildcard $(1)/go.mod), \
cd $(1) && \
$(GO) test -cover -test.count $(GOLANG_TEST_COUNT) -race ./... && \
$(GO) test -cover -test.count $(GOLANG_TEST_COUNT) -run IgnoreRace$$ ./... \
)
$(foreach dir, $(wildcard $(1)/*/.), $(call test_folder, $(dir)))
endef
define fast_test_folder
$(if $(wildcard $(1)/go.mod), \
cd $(1) && \
$(GO) test ./... \
)
$(foreach dir, $(wildcard $(1)/*/.), $(call fast_test_folder, $(dir)))
endef
test: $(ALL_PROTOS) tls-certs third_party/
$(call test_folder,.)
fasttest: $(ALL_PROTOS) tls-certs third_party/
$(call fast_test_folder,.)
test-e2e-cluster: all-protos tls-certs third_party/
$(HELM) test --timeout 7m30s -v 0 --logs -n $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(OPEN_MATCH_HELM_NAME)
fmt:
$(GO) fmt ./...
gofmt -s -w .
vet:
$(GO) vet ./...
golangci: build/toolchain/bin/golangci-lint$(EXE_EXTENSION)
GO111MODULE=on $(GOLANGCI) run --config=$(REPOSITORY_ROOT)/.golangci.yaml
lint: fmt vet golangci lint-chart terraform-lint
assets: $(ALL_PROTOS) tls-certs third_party/ build/chart/
build/cmd: $(foreach CMD,$(CMDS),build/cmd/$(CMD))
# Building a given build/cmd folder is split into two pieces: BUILD_PHONY and
# COPY_PHONY. The BUILD_PHONY is the common go build command, which is
# reusable. The COPY_PHONY is used by some targets which require additional
# files to be included in the image.
$(foreach CMD,$(CMDS),build/cmd/$(CMD)): build/cmd/%: build/cmd/%/BUILD_PHONY build/cmd/%/COPY_PHONY
build/cmd/%/BUILD_PHONY:
mkdir -p $(BUILD_DIR)/cmd/$*
CGO_ENABLED=0 $(GO) build -a -installsuffix cgo -o $(BUILD_DIR)/cmd/$*/run open-match.dev/open-match/cmd/$*
# Default is that nothing needs to be copied into the directory
build/cmd/%/COPY_PHONY:
#
build/cmd/swaggerui/COPY_PHONY:
mkdir -p $(BUILD_DIR)/cmd/swaggerui/static/api
cp third_party/swaggerui/* $(BUILD_DIR)/cmd/swaggerui/static/
$(SED_REPLACE) 's|https://open-match.dev/api/v.*/|/api/|g' $(BUILD_DIR)/cmd/swaggerui/static/config.json
cp api/*.json $(BUILD_DIR)/cmd/swaggerui/static/api/
build/cmd/demo-%/COPY_PHONY:
mkdir -p $(BUILD_DIR)/cmd/demo-$*/
cp -r examples/demo/static $(BUILD_DIR)/cmd/demo-$*/static
build/policies/binauthz.yaml: install/policies/binauthz.yaml
mkdir -p $(BUILD_DIR)/policies
cp -f $(REPOSITORY_ROOT)/install/policies/binauthz.yaml $(BUILD_DIR)/policies/binauthz.yaml
$(SED_REPLACE) 's/$$PROJECT_ID/$(GCP_PROJECT_ID)/g' $(BUILD_DIR)/policies/binauthz.yaml
$(SED_REPLACE) 's/$$GKE_CLUSTER_NAME/$(GKE_CLUSTER_NAME)/g' $(BUILD_DIR)/policies/binauthz.yaml
$(SED_REPLACE) 's/$$GCP_LOCATION/$(GCP_LOCATION)/g' $(BUILD_DIR)/policies/binauthz.yaml
ifeq ($(ENABLE_SECURITY_HARDENING),1)
$(SED_REPLACE) 's/$$EVALUATION_MODE/ALWAYS_DENY/g' $(BUILD_DIR)/policies/binauthz.yaml
else
$(SED_REPLACE) 's/$$EVALUATION_MODE/ALWAYS_ALLOW/g' $(BUILD_DIR)/policies/binauthz.yaml
endif
terraform-test: install/terraform/open-match/.terraform/ install/terraform/open-match-build/.terraform/
(cd $(REPOSITORY_ROOT)/install/terraform/open-match/ && $(TERRAFORM) validate)
(cd $(REPOSITORY_ROOT)/install/terraform/open-match-build/ && $(TERRAFORM) validate)
terraform-plan: install/terraform/open-match/.terraform/
(cd $(REPOSITORY_ROOT)/install/terraform/open-match/ && $(TERRAFORM) plan -var gcp_project_id=$(GCP_PROJECT_ID) -var gcp_location=$(GCP_LOCATION))
terraform-lint: build/toolchain/bin/terraform$(EXE_EXTENSION)
$(TERRAFORM) fmt -recursive
terraform-apply: install/terraform/open-match/.terraform/
(cd $(REPOSITORY_ROOT)/install/terraform/open-match/ && $(TERRAFORM) apply -var gcp_project_id=$(GCP_PROJECT_ID) -var gcp_location=$(GCP_LOCATION))
install/terraform/open-match/.terraform/: build/toolchain/bin/terraform$(EXE_EXTENSION)
(cd $(REPOSITORY_ROOT)/install/terraform/open-match/ && $(TERRAFORM) init)
install/terraform/open-match-build/.terraform/: build/toolchain/bin/terraform$(EXE_EXTENSION)
(cd $(REPOSITORY_ROOT)/install/terraform/open-match-build/ && $(TERRAFORM) init)
build/certificates/: build/toolchain/bin/certgen$(EXE_EXTENSION)
mkdir -p $(BUILD_DIR)/certificates/
cd $(BUILD_DIR)/certificates/ && $(CERTGEN)
md-test: docker
docker run -t --rm -v $(REPOSITORY_ROOT):/mnt:ro dkhamsing/awesome_bot --white-list "localhost,https://goreportcard.com,github.com/googleforgames/open-match/tree/release-,github.com/googleforgames/open-match/blob/release-,github.com/googleforgames/open-match/releases/download/v,https://swagger.io/tools/swagger-codegen/" --allow-dupe --allow-redirect --skip-save-results `find . -type f -name '*.md' -not -path './build/*' -not -path './.git*'`
ci-deploy-artifacts: install/yaml/ $(SWAGGER_JSON_DOCS) build/chart/ gcloud
ifeq ($(_GCB_POST_SUBMIT),1)
gsutil cp -a public-read $(REPOSITORY_ROOT)/install/yaml/* gs://open-match-chart/install/v$(BASE_VERSION)/yaml/
gsutil cp -a public-read $(REPOSITORY_ROOT)/api/*.json gs://open-match-chart/api/v$(BASE_VERSION)/
# Deploy Helm Chart
# Since each build will refresh just its version we can allow this for every post submit.
# Copy the files into multiple locations to keep a backup.
gsutil cp -a public-read $(BUILD_DIR)/chart/*.* gs://open-match-chart/chart/by-hash/$(VERSION)/
gsutil cp -a public-read $(BUILD_DIR)/chart/*.* gs://open-match-chart/chart/
else
@echo "Not deploying build artifacts to open-match.dev because this is not a post commit change."
endif
ci-reap-namespaces: build/toolchain/bin/reaper$(EXE_EXTENSION)
-$(TOOLCHAIN_BIN)/reaper -age=30m
# For presubmit we want to update the protobuf generated files and verify that tests are good.
presubmit: GOLANG_TEST_COUNT = 5
presubmit: clean third_party/ update-chart-deps assets update-deps lint build test md-test terraform-test
build/release/: presubmit clean-install-yaml install/yaml/
mkdir -p $(BUILD_DIR)/release/
cp $(REPOSITORY_ROOT)/install/yaml/* $(BUILD_DIR)/release/
validate-preview-release:
ifneq ($(_GCB_POST_SUBMIT),1)
@echo "You must run make with _GCB_POST_SUBMIT=1"
exit 1
endif
ifneq (,$(findstring -preview,$(BASE_VERSION)))
@echo "Creating preview for $(BASE_VERSION)"
else
@echo "BASE_VERSION must contain -preview, it is $(BASE_VERSION)"
exit 1
endif
preview-release: REGISTRY = gcr.io/$(OPEN_MATCH_PUBLIC_IMAGES_PROJECT_ID)
preview-release: TAG = $(BASE_VERSION)
preview-release: validate-preview-release build/release/ retag-images ci-deploy-artifacts
release: REGISTRY = gcr.io/$(OPEN_MATCH_PUBLIC_IMAGES_PROJECT_ID)
release: TAG = $(BASE_VERSION)
release: presubmit build/release/
clean-secrets:
rm -rf $(OPEN_MATCH_SECRETS_DIR)
clean-protos:
rm -rf $(REPOSITORY_ROOT)/build/prototmp/
rm -rf $(REPOSITORY_ROOT)/pkg/pb/
rm -rf $(REPOSITORY_ROOT)/internal/ipb/
clean-terraform:
rm -rf $(REPOSITORY_ROOT)/install/terraform/.terraform/
clean-build: clean-toolchain clean-release clean-chart
rm -rf $(BUILD_DIR)/
clean-release:
rm -rf $(BUILD_DIR)/release/
clean-toolchain:
rm -rf $(TOOLCHAIN_DIR)/
clean-chart:
rm -rf $(BUILD_DIR)/chart/
clean-install-yaml:
rm -f $(REPOSITORY_ROOT)/install/yaml/*
clean-swagger-docs:
rm -rf $(REPOSITORY_ROOT)/api/*.json
clean-third-party:
rm -rf $(REPOSITORY_ROOT)/third_party/
clean: clean-images clean-build clean-install-yaml clean-secrets clean-terraform clean-third-party clean-protos clean-swagger-docs
proxy-frontend: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "Frontend Health: http://localhost:$(FRONTEND_PORT)/healthz"
@echo "Frontend RPC: http://localhost:$(FRONTEND_PORT)/debug/rpcz"
@echo "Frontend Trace: http://localhost:$(FRONTEND_PORT)/debug/tracez"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=frontend,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(FRONTEND_PORT):51504 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-backend: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "Backend Health: http://localhost:$(BACKEND_PORT)/healthz"
@echo "Backend RPC: http://localhost:$(BACKEND_PORT)/debug/rpcz"
@echo "Backend Trace: http://localhost:$(BACKEND_PORT)/debug/tracez"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=backend,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(BACKEND_PORT):51505 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-query: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "QueryService Health: http://localhost:$(QUERY_PORT)/healthz"
@echo "QueryService RPC: http://localhost:$(QUERY_PORT)/debug/rpcz"
@echo "QueryService Trace: http://localhost:$(QUERY_PORT)/debug/tracez"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=query,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(QUERY_PORT):51503 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-synchronizer: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "Synchronizer Health: http://localhost:$(SYNCHRONIZER_PORT)/healthz"
@echo "Synchronizer RPC: http://localhost:$(SYNCHRONIZER_PORT)/debug/rpcz"
@echo "Synchronizer Trace: http://localhost:$(SYNCHRONIZER_PORT)/debug/tracez"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=synchronizer,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(SYNCHRONIZER_PORT):51506 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-jaeger: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "Jaeger Query Frontend: http://localhost:16686"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app.kubernetes.io/name=jaeger,app.kubernetes.io/component=query" --output jsonpath='{.items[0].metadata.name}') $(JAEGER_QUERY_PORT):16686 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-grafana: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "User: admin"
@echo "Password: openmatch"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=grafana,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(GRAFANA_PORT):3000 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-prometheus: build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=prometheus,component=server,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(PROMETHEUS_PORT):9090 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-dashboard: build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KUBECTL) port-forward --namespace kube-system $(shell $(KUBECTL) get pod --namespace kube-system --selector="app=kubernetes-dashboard" --output jsonpath='{.items[0].metadata.name}') $(DASHBOARD_PORT):9092 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-ui: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "SwaggerUI Health: http://localhost:$(SWAGGERUI_PORT)/"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=swaggerui,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(SWAGGERUI_PORT):51500 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-demo: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "View Demo: http://localhost:$(DEMO_PORT)"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE)-demo $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE)-demo --selector="app=open-match-demo,component=demo" --output jsonpath='{.items[0].metadata.name}') $(DEMO_PORT):51507 $(PORT_FORWARD_ADDRESS_FLAG)
# Run `make proxy` instead to run everything at the same time.
# If you run this directly it will just run each proxy sequentially.
proxy-all: proxy-frontend proxy-backend proxy-query proxy-grafana proxy-prometheus proxy-jaeger proxy-synchronizer proxy-ui proxy-dashboard proxy-demo
proxy:
# This is an exception case where we'll call recursive make.
# To simplify accessing all the proxy ports we'll call `make proxy-all` with enough subprocesses to run them concurrently.
$(MAKE) proxy-all -j20
update-deps:
$(GO) mod tidy
third_party/: third_party/google/api third_party/protoc-gen-swagger/options third_party/swaggerui/
third_party/google/api:
mkdir -p $(TOOLCHAIN_DIR)/googleapis-temp/
mkdir -p $(REPOSITORY_ROOT)/third_party/google/api
mkdir -p $(REPOSITORY_ROOT)/third_party/google/rpc
curl -o $(TOOLCHAIN_DIR)/googleapis-temp/googleapis.zip -L https://github.com/googleapis/googleapis/archive/$(GOOGLE_APIS_VERSION).zip
(cd $(TOOLCHAIN_DIR)/googleapis-temp/; unzip -q -o googleapis.zip)
cp -f $(TOOLCHAIN_DIR)/googleapis-temp/googleapis-$(GOOGLE_APIS_VERSION)/google/api/*.proto $(REPOSITORY_ROOT)/third_party/google/api/
cp -f $(TOOLCHAIN_DIR)/googleapis-temp/googleapis-$(GOOGLE_APIS_VERSION)/google/rpc/*.proto $(REPOSITORY_ROOT)/third_party/google/rpc/
rm -rf $(TOOLCHAIN_DIR)/googleapis-temp
third_party/protoc-gen-swagger/options:
mkdir -p $(TOOLCHAIN_DIR)/grpc-gateway-temp/
mkdir -p $(REPOSITORY_ROOT)/third_party/protoc-gen-swagger/options
curl -o $(TOOLCHAIN_DIR)/grpc-gateway-temp/grpc-gateway.zip -L https://github.com/grpc-ecosystem/grpc-gateway/archive/v$(GRPC_GATEWAY_VERSION).zip
(cd $(TOOLCHAIN_DIR)/grpc-gateway-temp/; unzip -q -o grpc-gateway.zip)
cp -f $(TOOLCHAIN_DIR)/grpc-gateway-temp/grpc-gateway-$(GRPC_GATEWAY_VERSION)/protoc-gen-swagger/options/*.proto $(REPOSITORY_ROOT)/third_party/protoc-gen-swagger/options/
rm -rf $(TOOLCHAIN_DIR)/grpc-gateway-temp
third_party/swaggerui/:
mkdir -p $(TOOLCHAIN_DIR)/swaggerui-temp/
mkdir -p $(TOOLCHAIN_BIN)
curl -o $(TOOLCHAIN_DIR)/swaggerui-temp/swaggerui.zip -L \
https://github.com/swagger-api/swagger-ui/archive/v$(SWAGGERUI_VERSION).zip
(cd $(TOOLCHAIN_DIR)/swaggerui-temp/; unzip -q -o swaggerui.zip)
cp -rf $(TOOLCHAIN_DIR)/swaggerui-temp/swagger-ui-$(SWAGGERUI_VERSION)/dist/ \
$(REPOSITORY_ROOT)/third_party/swaggerui
# Update the URL in the main page to point to a known good endpoint.
cp $(REPOSITORY_ROOT)/cmd/swaggerui/config.json $(REPOSITORY_ROOT)/third_party/swaggerui/
$(SED_REPLACE) 's|url:.*|configUrl: "/config.json",|g' $(REPOSITORY_ROOT)/third_party/swaggerui/index.html
$(SED_REPLACE) 's|0.0.0-dev|$(BASE_VERSION)|g' $(REPOSITORY_ROOT)/third_party/swaggerui/config.json
rm -rf $(TOOLCHAIN_DIR)/swaggerui-temp
sync-deps:
$(GO) clean -modcache
$(GO) mod download
# Prevents users from running with sudo.
# There's an exception for Google Cloud Build because it runs as root.
no-sudo:
ifndef OPEN_MATCH_CI_MODE
ifeq ($(shell whoami),root)
@echo "ERROR: Running Makefile as root (or sudo)"
@echo "Please follow the instructions at https://docs.docker.com/install/linux/linux-postinstall/ if you are trying to sudo run the Makefile because of the 'Cannot connect to the Docker daemon' error."
@echo "NOTE: sudo/root do not have the authentication token to talk to any GCP service via gcloud."
exit 1
endif
endif
.PHONY: docker gcloud update-deps sync-deps all build proxy-dashboard proxy-prometheus proxy-grafana clean clean-build clean-toolchain clean-binaries clean-protos presubmit test ci-reap-namespaces md-test vet

README.md
# Open Match
![Open Match](https://github.com/googleforgames/open-match-docs/blob/master/site/static/images/logo-with-name.png)
Open Match is an open source game matchmaker designed to allow game creators to re-use a common matchmaker framework. It's designed to be flexible (run it anywhere Kubernetes runs), extensible (match logic can be customized to work for any game), and scalable.
[![GoDoc](https://godoc.org/open-match.dev/open-match?status.svg)](https://godoc.org/open-match.dev/open-match)
[![Go Report Card](https://goreportcard.com/badge/open-match.dev/open-match)](https://goreportcard.com/report/open-match.dev/open-match)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/googleforgames/open-match/blob/master/LICENSE)
[![GitHub release](https://img.shields.io/github/release-pre/googleforgames/open-match.svg)](https://github.com/googleforgames/open-match/releases)
[![Follow on Twitter](https://img.shields.io/twitter/follow/Open_Match.svg?style=social&logo=twitter)](https://twitter.com/intent/follow?screen_name=Open_Match)
Matchmaking is a complicated process, and when large player populations are involved, many popular matchmaking approaches touch on significant areas of computer science including graph theory and massively concurrent processing. Open Match is an effort to provide a foundation upon which these difficult problems can be addressed by the wider game development community. As Josh Menke &mdash; famous for working on matchmaking for many popular triple-A franchises &mdash; put it:
Open Match is an open source game matchmaking framework that simplifies building
a scalable and extensible Matchmaker. It is designed to give the game developer
full control over how to make matches while removing the burden of dealing with
the challenges of running a production service at scale.
["Matchmaking, a lot of it actually really is just really good engineering. There's a lot of really hard networking and plumbing problems that need to be solved, depending on the size of your audience."](https://youtu.be/-pglxege-gU?t=830)
Please visit the [Open Match website](https://open-match.dev/site/docs/) for user
documentation, demo instructions, etc.
## Contributing to Open Match
This project attempts to solve the networking and plumbing problems, so game developers can focus on the logic to match players into great games.
Open Match is in active development and we would love your contribution! Please
read the [contributing guide](CONTRIBUTING.md) for guidelines on contributing to
Open Match.
## Disclaimer
This software is currently alpha, and subject to change. **It is not yet ready to be used in production.**
The [Open Match Development guide](docs/development.md) has detailed instructions
on getting the source code, making changes, testing and submitting a pull request
to Open Match.
# Core Concepts
## Support
[Watch the introduction of Open Match at Unite Berlin 2018 on YouTube](https://youtu.be/qasAmy_ko2o)
Open Match is designed to support massively concurrent matchmaking, and to be scalable to player populations of hundreds of millions or more. It attempts to apply stateless web tech microservices patterns to game matchmaking. If you're not sure what that means, that's okay &mdash; it is fully open source and designed to be customizable to fit into your online game architecture &mdash; so have a look at the code and modify it as you see fit.
## Glossary
* **MMF** &mdash; Matchmaking function. This is the customizable matchmaking logic.
* **Component** &mdash; One of the discrete processes in an Open Match deployment. Open Match is composed of multiple scalable microservices called 'components'.
* **Roster** &mdash; A list of all the players in a match.
* **Profile** &mdash; The json blob containing all the parameters used to select which players go into a roster.
* **Match Object** &mdash; A json blob to contain the results of the matchmaking function. Sent with an empty roster section to the backend API from your game backend and then returned with the matchmaking results filled in.
* **MMFOrc** &mdash; Matchmaker function orchestrator. This Open Match core component is in charge of kicking off custom matchmaking functions (MMFs) and evaluator processes.
* **State Storage** &mdash; The storage software used by Open Match to hold all the matchmaking state. Open Match ships with [Redis](https://redis.io/) as the default state storage.
* **Assignment** &mdash; Refers to assigning a player or group of players to a dedicated game server instance. Open Match offers a path to send dedicated game server connection details from your backend to your game clients after a match has been made.
## Requirements
* [Kubernetes](https://kubernetes.io/) cluster &mdash; tested with version 1.9.
* [Redis 4+](https://redis.io/) &mdash; tested with 4.0.11.
* Open Match is compiled against the latest release of [Golang](https://golang.org/) &mdash; tested with 1.10.3.
## Components
Open Match is a set of processes designed to run on Kubernetes. It contains these **core** components:
1. Frontend API
1. Backend API
1. Matchmaker Function Orchestrator (MMFOrc)
It also explicitly depends on these two **customizable** components.
1. Matchmaking "Function" (MMF)
1. Evaluator
While **core** components are fully open source and *can* be modified, they are designed to support the majority of matchmaking scenarios *without needing to change the source code*. The Open Match repository ships with simple **customizable** example MMF and Evaluator processes, but it is expected that most users will want full control over the logic in these, so they have been designed to be as easy to modify or replace as possible.
### Frontend API
The Frontend API accepts the player data and puts it in state storage so your Matchmaking Function (MMF) can access it.
The Frontend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/frontend.proto`. At the most basic level, it expects clients to connect and send:
* A **unique ID** for the group of players (the group can contain any number of players, including only one).
* A **json blob** containing all player-related data you want to use in your matchmaking function.
The client is expected to maintain a connection, waiting for an update from the API that contains the details required to connect to a dedicated game server instance (an 'assignment'). There are also basic functions for removing an ID from the matchmaking pool or an existing match.
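As a purely illustrative example (the field names below are hypothetical; the actual message schema is defined in `api/protobuf-spec/frontend.proto`), the json blob a client sends might look like:

```json
{
  "id": "group-1234",
  "properties": {
    "mode": "ctf",
    "mmr": 1520,
    "latency_ms": { "us-east1": 42, "europe-west1": 118 }
  }
}
```

Your MMF decides which of these properties matter and how; the Frontend API only stores them.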
### Backend API
The Backend API puts match profiles in state storage which the Matchmaking Function (MMF) can access and use to decide which players should be put into a match together, then return those matches to dedicated game server instances.
The Backend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/backend.proto`. At the most basic level, it expects to be connected to your online infrastructure (probably to your server scaling manager or scheduler, or even directly to a dedicated game server), and to receive:
* A **unique ID** for a matchmaking profile.
* A **json blob** containing all the match-related data you want to use in your matchmaking function, in an 'empty' match object.
Your game backend is expected to maintain a connection, waiting for 'filled' match objects containing a roster of players. The Backend API also provides a return path for your game backend to return dedicated game server connection details (an 'assignment') to the game client, and to delete these 'assignments'.
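A hypothetical 'empty' match object for such a profile might look like the following (field names are illustrative only; the real schema lives in `api/protobuf-spec/backend.proto`). The `roster` is sent empty and comes back filled in:

```json
{
  "id": "profile-4v4-ctf",
  "properties": {
    "mode": "ctf",
    "roster_size": 8,
    "mmr": { "min": 1400, "max": 1700 }
  },
  "roster": []
}
```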
### Matchmaking Function Orchestrator (MMFOrc)
The MMFOrc kicks off your custom matchmaking function (MMF) for every profile submitted to the Backend API. It also runs the Evaluator to resolve conflicts in case more than one of your profiles matched the same players.
The MMFOrc exists to orchestrate/schedule your **custom components**, running them as often as required to meet the demands of your game. MMFOrc runs in an endless loop, submitting MMFs and Evaluator jobs to Kubernetes.
### Evaluator
The Evaluator resolves conflicts when multiple matches want to include the same player(s).
The Evaluator is a component run by the Matchmaker Function Orchestrator (MMFOrc) after the matchmaker functions have been run, and some proposed results are available. The Evaluator looks at all the proposed matches, and if multiple proposals contain the same player(s), it breaks the tie. In many simple matchmaking setups with only a few game modes and matchmaking functions that always look at different parts of the matchmaking pool, the Evaluator may functionally be a no-op or first-in-first-out algorithm. In complex matchmaking setups where, for example, a player can queue for multiple types of matches, the Evaluator provides the critical customizability to evaluate all available proposals and approve those that will be passed to your game servers.
Running matchmaking functions concurrently at large scale is a complex topic, and users who wish to do this are encouraged to engage with the [Open Match community](https://github.com/GoogleCloudPlatform/open-match#get-involved) about patterns and best practices.
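A first-in-first-out evaluator as described above can be sketched in a few lines. This is not the Open Match evaluator interface, just an illustration of the tie-breaking logic with hypothetical data shapes:

```python
def evaluate_fifo(proposals):
    """Approve proposals in arrival order, rejecting any that reuse a
    player already claimed by an earlier approved proposal."""
    approved = []
    claimed = set()
    for match in proposals:
        roster = set(match["roster"])
        if roster & claimed:
            continue  # conflict: an earlier proposal already took these players
        claimed |= roster
        approved.append(match)
    return approved
```

A real evaluator might instead score conflicting proposals and keep the best one, but the structure (scan proposals, track claimed players, emit the survivors) stays the same.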
### Matchmaking Functions (MMFs)
Matchmaking Functions (MMFs) are run by the Matchmaker Function Orchestrator (MMFOrc) &mdash; once per profile it sees in state storage. The MMF is run as a Job in Kubernetes, and has full access to read and write from state storage. At a high level, the encouraged pattern is to write an MMF in whatever language you are comfortable in that can do the following things:
1. Read/write from the Open Match state storage &mdash; Open Match ships with Redis as the default state storage.
1. Be packaged in a (Linux) Docker container.
1. Read a profile you wrote to state storage using the Backend API.
1. Select from the player data you wrote to state storage using the Frontend API.
1. Run your custom logic to try to find a match.
1. Write the match object it creates to state storage at a specified key.
1. Remove the players it selected from consideration by other MMFs.
1. (Optional, but recommended) Export stats for metrics collection.
Example MMFs are provided in Golang and C#.
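Steps 3 through 7 above can be sketched as follows, using a plain dict as a stand-in for Redis state storage. All names here are hypothetical illustrations, not the actual Open Match API:

```python
def run_mmf(profile, player_pool):
    """Select players matching the profile and build a match object.

    profile: dict read from state storage (written via the Backend API).
    player_pool: dict of player id -> player data (written via the Frontend API);
                 matched players are removed so other MMFs won't consider them.
    """
    # Select from the player data (step: custom logic to find a match).
    selected = [pid for pid, data in player_pool.items()
                if data.get("mode") == profile["mode"]][:profile["roster_size"]]
    if len(selected) < profile["roster_size"]:
        return None  # not enough players; let a later run try again
    # Remove the selected players from consideration by other MMFs.
    for pid in selected:
        del player_pool[pid]
    # The returned match object would be written back to state storage.
    return {"profile": profile["id"], "roster": selected}
```

In a real MMF the pool read, the match write, and the player removal would all be Redis operations, and they would need to be made safe against concurrent MMF runs.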
## Open Source Software integrations
### Structured logging
Logging for Open Match uses the [Golang logrus module](https://github.com/sirupsen/logrus) to provide structured logs. Logs are output to `stdout` in each component, as expected by Docker and Kubernetes. If you have a specific log aggregator as your final destination, we recommend you have a look at the logrus documentation as there is probably a log formatter that plays nicely with your stack.
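The same pattern (one JSON object per log line, written to `stdout`) can be illustrated with Python's stdlib `logging`; Open Match itself uses Go's logrus, so this is only a sketch of the idea:

```python
import json
import logging
import sys

class JSONFormatter(logging.Formatter):
    """Render each record as a single JSON object on one line."""
    def format(self, record):
        return json.dumps({
            "severity": record.levelname,
            "component": record.name,
            "message": record.getMessage(),
        })

def make_logger(component):
    """Build a structured logger that writes JSON lines to stdout."""
    logger = logging.getLogger(component)
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JSONFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

Because every line is self-describing JSON, any aggregator that tails container stdout can index the fields without extra parsing rules.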
### Instrumentation for metrics
Open Match uses [OpenCensus](https://opencensus.io/) for metrics instrumentation. The [gRPC](https://grpc.io/) integrations are built-in, and Golang redigo module integrations are incoming, but [haven't been merged into the official repo](https://github.com/opencensus-integrations/redigo/pull/1). All of the core components expose HTTP `/metrics` endpoints on the port defined in `config/matchmaker_config.json` (default: 9555) for Prometheus to scrape. If you would like to export to a different metrics aggregation platform, we suggest you have a look at the OpenCensus documentation &mdash; there may be one written for you already, and switching to it may be as simple as changing a few lines of code.
**Note:** A standard for instrumentation of MMFs is planned.
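If you run Prometheus by hand rather than through the provided charts, a scrape block along these lines would collect from those `/metrics` endpoints (the target service names here are placeholders, not names Open Match defines):

```yaml
scrape_configs:
  - job_name: open-match
    static_configs:
      - targets:
          - 'om-frontend:9555'
          - 'om-backend:9555'
```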
### Redis setup
By default, Open Match expects you to run Redis *somewhere*. Connection information can be put in the config file (`matchmaker_config.json`) for any Redis instance reachable from the [Kubernetes namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). By default, Open Match sensibly runs in the Kubernetes `default` namespace. In most instances, we expect users will run a copy of Redis in a pod in Kubernetes, with a service pointing to it.
* HA configurations for Redis aren't implemented by the provided Kubernetes resource definition files, but Open Match expects the Redis service to be named `redis-sentinel`, which provides an easier path to multi-instance deployments.
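A minimal Service exposing an in-cluster Redis pod under the name Open Match expects might look like this sketch (the selector labels are illustrative and must match however your Redis pod is actually labeled):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-sentinel
  namespace: default
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
```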
## Additional examples
**Note:** These examples will be expanded on in future releases.
The following examples of how to call the APIs are provided in the repository. Both have a `Dockerfile` and `cloudbuild.yaml` files in their respective directories:
* `examples/frontendclient/main.go` acts as a client to the Frontend API, putting a player into the queue with simulated latencies from major metropolitan cities and a couple of other matchmaking attributes. It then waits for you to manually put a value in Redis to simulate a server connection string being written using the backend API 'CreateAssignments' call, and displays that value on stdout for you to verify.
* `examples/backendclient/main.go` calls the Backend API and passes in the profile found in `backendstub/profiles/testprofile.json` to the `ListMatches` API endpoint, then continually prints the results until you exit, or there are insufficient players to make a match based on the profile.
## Usage
Documentation and usage guides on how to set up and customize Open Match.
## Precompiled container images
Once we reach a 1.0 release, we plan to produce publicly available (Linux) Docker container images of major releases in a public image registry. Until then, refer to the 'Compiling from source' section below.
## Compiling from source
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild_COMPONENT.yaml` files for each component in the repository root.
All the core components for Open Match are written in Golang and use the [Dockerfile multistage builder pattern](https://docs.docker.com/develop/develop-images/multistage-build/). This pattern uses intermediate Docker containers as a Golang build environment while producing lightweight, minimized container images as final build artifacts. When the project is ready for production, we will modify the `Dockerfile`s to uncomment the last build stage. Although this pattern is great for production container images, it removes most of the utilities required to troubleshoot issues during development.
## Configuration
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration. To this end, there is a single centralized config file located in the `<REPO_ROOT>/config/` which is symlinked to each component's subdirectory for convenience when building locally. When `docker build`ing the component container images, the Dockerfile copies the centralized config file into the component directory.
We plan to replace this with a Kubernetes-managed config with dynamic reloading when development time allows. Pull requests are welcome!
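As a sketch of what such a centralized config might contain, the key names below are hypothetical; only the default metrics port 9555 is taken from the description above:

```json
{
  "redis": { "hostname": "redis-sentinel", "port": 6379 },
  "metrics": { "port": 9555 }
}
```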
### Guides
* [Production guide](./docs/production.md) Lots of best practices to be written here before 1.0 release. **WIP**
* [Development guide](./docs/development.md)
### Reference
* [FAQ](./docs/faq.md)
## Get involved
* [Slack Channel](https://open-match.slack.com/) ([Signup](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLTM5ZWQxNjc1YWI3MzJmN2RiMWJmYWI0ZjFiNzNkZmNkMWQ3YWU5OGVkNzA5Yzc4OGVkOGU5MTc0OTA5ZTA5NDU))
* [File an Issue](https://github.com/googleforgames/open-match/issues/new)
* [Mailing list](https://groups.google.com/forum/#!forum/open-match-discuss)
* [Managed Service Survey](https://goo.gl/forms/cbrFTNCmy9rItSv72)
## Code of Conduct
Participation in this project comes under the [Contributor Covenant Code of Conduct](code-of-conduct.md)
## Development and Contribution
Please read the [contributing](CONTRIBUTING.md) guide for directions on submitting Pull Requests to Open Match.
See the [Development guide](docs/development.md) for documentation for development and building Open Match from source.
The [Release Process](docs/governance/release_process.md) documentation describes the project's upcoming release calendar and release process. (NYI)
Open Match is in active development - we would love your help in shaping its future!
## This all sounds great, but can you explain Docker and/or Kubernetes to me?
### Docker
- [Docker's official "Getting Started" guide](https://docs.docker.com/get-started/)
- [Katacoda's free, interactive Docker course](https://www.katacoda.com/courses/docker)
### Kubernetes
- [You should totally read this comic, and interactive tutorial](https://cloud.google.com/kubernetes-engine/kubernetes-comic/)
- [Katacoda's free, interactive Kubernetes course](https://www.katacoda.com/courses/kubernetes)
## License
Apache 2.0
# Missing functionality
* Player/Group records generated when a client enters the matchmaking pool need to be removed after a certain amount of time with no activity. When using Redis, this will be implemented as an expiration on the player record.
* Instrumentation of MMFs is in the planning stages. Since MMFs are by design meant to be completely customizable (to the point of allowing any process that can be packaged in a Docker container), metrics/stats will need an expected format and a formalized outgoing pathway. The current thinking is that metrics should be written to a particular key in state storage in a format compatible with OpenCensus, then collected, aggregated, and exported to Prometheus by a separate process.
* The Kubernetes service account used by the MMFOrc should be updated to have the minimum required permissions.
* Autoscaling isn't turned on for the Frontend or Backend API Kubernetes deployments by default.
* Match profiles should be able to define multiple MMF container images to run, but this is not currently supported. Supporting it would enable A/B testing and several other scenarios.
* Out-of-the-box, the Redis deployment should be an HA configuration using [Redis Sentinel](https://redis.io/topics/sentinel).
* Redis watches should be unified to watch a hash and stream updates. The code for this is written and validated, but not yet committed. We don't want to support two Redis watcher code paths, so the backend watch of the match object should be switched to unify the way the frontend and backend watch keys. Unfortunately, this change touches the whole chain of components that handle backend match objects (MMF, evaluator, backend API), so it needs additional work and testing before it can be integrated.
# Planned improvements
* “Writing your first matchmaker” getting started guide will be included in an upcoming version.
* Documentation for using the example customizable components and the `backendstub` and `frontendstub` applications to do an end-to-end (e2e) test will be written. This all works now, but needs to be written up.
* A [Helm](https://helm.sh/) chart to stand up Open Match will be provided in an upcoming version.
* We plan to host 'official' Docker images for all release versions of the core components in publicly available Docker registries soon.
* CI/CD for this repo and the associated status tags are planned.
* Documentation on release process and release calendar.
* [OpenCensus tracing](https://opencensus.io/core-concepts/tracing/) will be implemented in an upcoming version.
* Read logrus logging configuration from matchmaker_config.json.
* Golang unit tests will be shipped in an upcoming version.
* A full load-testing and e2e testing suite will be included in an upcoming version.
* All state storage operations should be isolated from core components into the `statestorage/` modules. This is necessary precursor work to enabling Open Match state storage to use software other than Redis.
* The MMFOrc component name will be updated in a future version to something easier to understand. Suggestions welcome!
* The MMFOrc component currently requires a default service account with permission to kick off k8s jobs, but the revision today gives the service account full permissions. This needs to be reworked to use the minimum required RBAC permissions before it is used in production, but is fine for closed testing and development.
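The state-storage isolation mentioned above suggests an interface boundary between core components and the backing store. A hedged sketch of that idea follows; the interface and method names are hypothetical, not the actual `statestorage/` API:

```go
package main

import "fmt"

// StateStorage sketches the isolation boundary the statestorage/
// modules aim for; a Redis-backed type would satisfy the same
// interface as the in-memory stand-in below.
type StateStorage interface {
	Get(key string) (string, error)
	Set(key, value string) error
}

// memoryStore is an in-memory implementation used for illustration.
type memoryStore struct{ m map[string]string }

func newMemoryStore() *memoryStore {
	return &memoryStore{m: map[string]string{}}
}

func (s *memoryStore) Get(key string) (string, error) {
	v, ok := s.m[key]
	if !ok {
		return "", fmt.Errorf("key %q not found", key)
	}
	return v, nil
}

func (s *memoryStore) Set(key, value string) error {
	s.m[key] = value
	return nil
}

func main() {
	// Core components would depend only on the interface, so the
	// backing store can be swapped without touching them.
	var store StateStorage = newMemoryStore()
	store.Set("player:1234", "awaiting-match")
	v, _ := store.Get("player:1234")
	fmt.Println(v) // prints "awaiting-match"
}
```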

api/LICENSE
@@ -0,0 +1,13 @@
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -1,15 +1,15 @@
# Open Match APIs
# Open Match API
This directory contains the API specification files for Open Match. API documentation will be produced in a future version, although the protobuf files offer a concise description of the available API calls, along with their arguments and return messages.
Open Match API is exposed via [gRPC](https://grpc.io/) and HTTP REST with [Swagger](https://swagger.io/tools/swagger-codegen/).
* [Protobuf .proto files for all APIs](./protobuf-spec/)
gRPC has first-class support for [many languages](https://grpc.io/docs/) and offers the best performance. It is an RPC protocol built on top of HTTP/2 and provides TLS for secure transport.
These proto files are copied to the container image during `docker build` for the Open Match core components. The `Dockerfiles` handle the compilation for you transparently, and copy the resulting `SPEC.pb.go` files to the appropriate place in your final container image.
For HTTP/HTTPS, Open Match uses a gRPC proxy to serve the API. Since HTTP does not provide a structure for requests/responses, we use Swagger to provide a schema. You can view the Swagger docs for each service in this directory's `*.swagger.json` files. In addition, each server hosts its Swagger doc via `GET /swagger.json` if you want to load it dynamically at runtime.
References:
Lastly, Open Match supports insecure and TLS modes for serving the API. TLS mode is strongly preferred in production, but insecure mode can be used for testing and local development. To help with certificate management, see `tools/certgen` for creating self-signed certificates.
* [gRPC](https://grpc.io/)
* [Language Guide (proto3)](https://developers.google.com/protocol-buffers/docs/proto3)
# Open Match API Development Guide
Manual gRPC compilation command, from the directory containing the proto:
```protoc -I . ./<filename>.proto --go_out=plugins=grpc:.```
Open Match proto comments follow the same format as [this file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto)
If you plan to change the proto definitions, please update the comments and run `make api/api.md` to reflect the latest changes in open-match-docs.

api/backend.proto
@@ -0,0 +1,171 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "Backend"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
// FunctionConfig specifies a MMF address and client type for Backend to establish connections with the MMF
message FunctionConfig {
string host = 1;
int32 port = 2;
Type type = 3;
enum Type {
GRPC = 0;
REST = 1;
}
}
message FetchMatchesRequest {
// A configuration for the MatchFunction server of this FetchMatches call.
FunctionConfig config = 1;
// A MatchProfile that will be sent to the MatchFunction server of this FetchMatches call.
MatchProfile profile = 2;
}
message FetchMatchesResponse {
// A Match generated by the user-defined MMF with the specified MatchProfiles.
// A valid Match response will contain at least one ticket.
Match match = 1;
}
message ReleaseTicketsRequest{
// TicketIds is a list of strings representing Open Match generated Ids to be re-enabled for MMF querying
// because they are no longer awaiting assignment from a previous match result
repeated string ticket_ids = 1;
}
message ReleaseTicketsResponse {}
message ReleaseAllTicketsRequest{}
message ReleaseAllTicketsResponse {}
// AssignmentGroup contains an Assignment and the Tickets to which it should be applied.
message AssignmentGroup{
// TicketIds is a list of strings representing Open Match generated Ids which apply to an Assignment.
repeated string ticket_ids = 1;
// An Assignment specifies game connection related information to be associated with the TicketIds.
Assignment assignment = 2;
}
// AssignmentFailure contains the id of the Ticket that failed the Assignment and the failure status.
message AssignmentFailure {
enum Cause {
UNKNOWN = 0;
TICKET_NOT_FOUND = 1;
}
string ticket_id = 1;
Cause cause = 2;
}
message AssignTicketsRequest {
// Assignments is a list of assignment groups that contain assignment and the Tickets to which they should be applied.
repeated AssignmentGroup assignments = 1;
}
message AssignTicketsResponse {
// Failures is a list of all the Tickets that failed assignment along with the cause of failure.
repeated AssignmentFailure failures = 1;
}
// The BackendService implements APIs to generate matches and handle ticket assignments.
service BackendService {
// FetchMatches triggers a MatchFunction with the specified MatchProfile and
// returns a set of matches generated by the Match Making Function, and
// accepted by the evaluator.
// Tickets in matches returned by FetchMatches are moved from active to
// pending, and will not be returned by query.
rpc FetchMatches(FetchMatchesRequest) returns (stream FetchMatchesResponse) {
option (google.api.http) = {
post: "/v1/backendservice/matches:fetch"
body: "*"
};
}
// AssignTickets overwrites the Assignment field of the input TicketIds.
rpc AssignTickets(AssignTicketsRequest) returns (AssignTicketsResponse) {
option (google.api.http) = {
post: "/v1/backendservice/tickets:assign"
body: "*"
};
}
// ReleaseTickets moves tickets from the pending state, to the active state.
// This enables them to be returned by query, and find different matches.
//
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc ReleaseTickets(ReleaseTicketsRequest) returns (ReleaseTicketsResponse) {
option (google.api.http) = {
post: "/v1/backendservice/tickets:release"
body: "*"
};
}
// ReleaseAllTickets moves all tickets from the pending state, to the active
// state. This enables them to be returned by query, and find different
// matches.
//
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc ReleaseAllTickets(ReleaseAllTicketsRequest) returns (ReleaseAllTicketsResponse) {
option (google.api.http) = {
post: "/v1/backendservice/tickets:releaseall"
body: "*"
};
}
}

api/backend.swagger.json
@@ -0,0 +1,566 @@
{
"swagger": "2.0",
"info": {
"title": "Backend",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/backendservice/matches:fetch": {
"post": {
"summary": "FetchMatches triggers a MatchFunction with the specified MatchProfile and\nreturns a set of matches generated by the Match Making Function, and\naccepted by the evaluator.\nTickets in matches returned by FetchMatches are moved from active to\npending, and will not be returned by query.",
"operationId": "FetchMatches",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/openmatchFetchMatchesResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchFetchMatchesRequest"
}
}
],
"tags": [
"BackendService"
]
}
},
"/v1/backendservice/tickets:assign": {
"post": {
"summary": "AssignTickets overwrites the Assignment field of the input TicketIds.",
"operationId": "AssignTickets",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchAssignTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchAssignTicketsRequest"
}
}
],
"tags": [
"BackendService"
]
}
},
"/v1/backendservice/tickets:release": {
"post": {
"summary": "ReleaseTickets moves tickets from the pending state, to the active state.\nThis enables them to be returned by query, and find different matches.",
"description": "BETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "ReleaseTickets",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchReleaseTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchReleaseTicketsRequest"
}
}
],
"tags": [
"BackendService"
]
}
},
"/v1/backendservice/tickets:releaseall": {
"post": {
"summary": "ReleaseAllTickets moves all tickets from the pending state, to the active\nstate. This enables them to be returned by query, and find different\nmatches.",
"description": "BETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "ReleaseAllTickets",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchReleaseAllTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchReleaseAllTicketsRequest"
}
}
],
"tags": [
"BackendService"
]
}
}
},
"definitions": {
"AssignmentFailureCause": {
"type": "string",
"enum": [
"UNKNOWN",
"TICKET_NOT_FOUND"
],
"default": "UNKNOWN"
},
"openmatchAssignTicketsRequest": {
"type": "object",
"properties": {
"assignments": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchAssignmentGroup"
},
"description": "Assignments is a list of assignment groups that contain assignment and the Tickets to which they should be applied."
}
}
},
"openmatchAssignTicketsResponse": {
"type": "object",
"properties": {
"failures": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchAssignmentFailure"
},
"description": "Failures is a list of all the Tickets that failed assignment along with the cause of failure."
}
}
},
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchAssignmentFailure": {
"type": "object",
"properties": {
"ticket_id": {
"type": "string"
},
"cause": {
"$ref": "#/definitions/AssignmentFailureCause"
}
},
"description": "AssignmentFailure contains the id of the Ticket that failed the Assignment and the failure status."
},
"openmatchAssignmentGroup": {
"type": "object",
"properties": {
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "TicketIds is a list of strings representing Open Match generated Ids which apply to an Assignment."
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment specifies game connection related information to be associated with the TicketIds."
}
},
"description": "AssignmentGroup contains an Assignment and the Tickets to which it should be applied."
},
"openmatchDoubleRangeFilter": {
"type": "object",
"properties": {
"double_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.double_args this Filter operates on."
},
"max": {
"type": "number",
"format": "double",
"description": "Maximum value."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value."
}
},
"title": "Filters numerical values to only those within a range.\n double_arg: \"foo\"\n max: 10\n min: 5\nmatches:\n {\"foo\": 5}\n {\"foo\": 7.5}\n {\"foo\": 10}\ndoes not match:\n {\"foo\": 4}\n {\"foo\": 10.01}\n {\"foo\": \"7.5\"}\n {}"
},
"openmatchFetchMatchesRequest": {
"type": "object",
"properties": {
"config": {
"$ref": "#/definitions/openmatchFunctionConfig",
"description": "A configuration for the MatchFunction server of this FetchMatches call."
},
"profile": {
"$ref": "#/definitions/openmatchMatchProfile",
"description": "A MatchProfile that will be sent to the MatchFunction server of this FetchMatches call."
}
}
},
"openmatchFetchMatchesResponse": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/openmatchMatch",
"description": "A Match generated by the user-defined MMF with the specified MatchProfiles.\nA valid Match response will contain at least one ticket."
}
}
},
"openmatchFunctionConfig": {
"type": "object",
"properties": {
"host": {
"type": "string"
},
"port": {
"type": "integer",
"format": "int32"
},
"type": {
"$ref": "#/definitions/openmatchFunctionConfigType"
}
},
"title": "FunctionConfig specifies a MMF address and client type for Backend to establish connections with the MMF"
},
"openmatchFunctionConfigType": {
"type": "string",
"enum": [
"GRPC",
"REST"
],
"default": "GRPC"
},
"openmatchMatch": {
"type": "object",
"properties": {
"match_id": {
"type": "string",
"description": "A Match ID that should be passed through the stack for tracing."
},
"match_profile": {
"type": "string",
"description": "Name of the match profile that generated this Match."
},
"match_function": {
"type": "string",
"description": "Name of the match function that generated this Match."
},
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTicket"
},
"description": "Tickets belonging to this match."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least\none ticket to be considered as valid."
},
"openmatchMatchProfile": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of this match profile."
},
"pools": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchPool"
},
"description": "Set of pools to be queried when generating a match for this MatchProfile."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "A MatchProfile is Open Match's representation of a Match specification. It is\nused to indicate the criteria for selecting players for a match. A\nMatchProfile is the input to the API to get matches and is passed to the\nMatchFunction. It contains all the information required by the MatchFunction\nto generate match proposals."
},
"openmatchPool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Pool."
},
"double_range_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchDoubleRangeFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected tickets must\nmatch every Filter."
},
"string_equals_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchStringEqualsFilter"
}
},
"tag_present_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTagPresentFilter"
}
},
"created_before": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created before the specified time are selected."
},
"created_after": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created after the specified time are selected."
}
},
"description": "Pool specfies a set of criteria that are used to select a subset of Tickets\nthat meet all the criteria."
},
"openmatchReleaseAllTicketsRequest": {
"type": "object"
},
"openmatchReleaseAllTicketsResponse": {
"type": "object"
},
"openmatchReleaseTicketsRequest": {
"type": "object",
"properties": {
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"title": "TicketIds is a list of string representing Open Match generated Ids to be re-enabled for MMF querying\nbecause they are no longer awaiting assignment from a previous match result"
}
}
},
"openmatchReleaseTicketsResponse": {
"type": "object"
},
"openmatchSearchFields": {
"type": "object",
"properties": {
"double_args": {
"type": "object",
"additionalProperties": {
"type": "number",
"format": "double"
},
"description": "Float arguments. Filterable on ranges."
},
"string_args": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "String arguments. Filterable on equality."
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Filterable on presence or absence of given value."
}
},
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"openmatchStringEqualsFilter": {
"type": "object",
"properties": {
"string_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.string_args this Filter operates on."
},
"value": {
"type": "string"
}
},
"title": "Filters strings exactly equaling a value.\n string_arg: \"foo\"\n value: \"bar\"\nmatches:\n {\"foo\": \"bar\"}\ndoes not match:\n {\"foo\": \"baz\"}\n {\"bar\": \"foo\"}\n {}"
},
"openmatchTagPresentFilter": {
"type": "object",
"properties": {
"tag": {
"type": "string"
}
},
"title": "Filters to the tag being present on the search_fields.\n tag: \"foo\"\nmatches:\n [\"foo\"]\n [\"bar\",\"foo\"]\ndoes not match:\n [\"bar\"]\n []"
},
"openmatchTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Id represents an auto-generated Id issued by Open Match."
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. 
Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"openmatchFetchMatchesResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchFetchMatchesResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of openmatchFetchMatchesResponse"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}

api/evaluator.proto
@@ -0,0 +1,80 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "Evaluator"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message EvaluateRequest {
// A Match proposed by the Match Function, representing a candidate for the final results.
Match match = 1;
}
message EvaluateResponse {
// A Match ID representing a shortlisted match returned by the evaluator as the final result.
string match_id = 2;
// Deprecated fields
reserved 1;
}
// The Evaluator service implements APIs used to evaluate and shortlist matches proposed by MMFs.
service Evaluator {
// Evaluate evaluates a list of proposed matches based on quality, collision status, etc., then shortlists the matches and returns the final results.
rpc Evaluate(stream EvaluateRequest) returns (stream EvaluateResponse) {
option (google.api.http) = {
post: "/v1/evaluator/matches:evaluate"
body: "*"
};
}
}
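Over the gateway route above, each streamed request body wraps one proposed Match. A minimal sketch of framing one `EvaluateRequest` message as JSON; the helper name and sample field values are hypothetical, while the field names mirror `openmatch.Match`:

```python
import json

# Sketch: build the JSON body for one EvaluateRequest message on the
# grpc-gateway stream at POST /v1/evaluator/matches:evaluate.
def evaluate_request_body(match: dict) -> str:
    # EvaluateRequest wraps a single proposed Match under the "match" key.
    return json.dumps({"match": match})

body = evaluate_request_body({
    "match_id": "proposal-1",
    "match_profile": "1v1",
    "match_function": "basic-mmf",
    "tickets": [{"id": "ticket-a"}, {"id": "ticket-b"}],
})
```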

api/evaluator.swagger.json
{
"swagger": "2.0",
"info": {
"title": "Evaluator",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/evaluator/matches:evaluate": {
"post": {
"summary": "Evaluate evaluates a list of proposed matches based on quality, collision status, etc., then shortlists the matches and returns the final results.",
"operationId": "Evaluate",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/openmatchEvaluateResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"description": " (streaming inputs)",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchEvaluateRequest"
}
}
],
"tags": [
"Evaluator"
]
}
}
},
"definitions": {
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchEvaluateRequest": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/openmatchMatch",
"description": "A Match proposed by the Match Function, representing a candidate for the final results."
}
}
},
"openmatchEvaluateResponse": {
"type": "object",
"properties": {
"match_id": {
"type": "string",
"description": "A Match ID representing a shortlisted match returned by the evaluator as the final result."
}
}
},
"openmatchMatch": {
"type": "object",
"properties": {
"match_id": {
"type": "string",
"description": "A Match ID that should be passed through the stack for tracing."
},
"match_profile": {
"type": "string",
"description": "Name of the match profile that generated this Match."
},
"match_function": {
"type": "string",
"description": "Name of the match function that generated this Match."
},
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTicket"
},
"description": "Tickets belonging to this match."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least\none ticket to be considered as valid."
},
"openmatchSearchFields": {
"type": "object",
"properties": {
"double_args": {
"type": "object",
"additionalProperties": {
"type": "number",
"format": "double"
},
"description": "Float arguments. Filterable on ranges."
},
"string_args": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "String arguments. Filterable on equality."
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Filterable on presence or absence of given value."
}
},
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"openmatchTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Id represents an auto-generated Id issued by Open Match."
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. 
Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"openmatchEvaluateResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchEvaluateResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of openmatchEvaluateResponse"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}

api/extensions.proto
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
// A DefaultEvaluationCriteria is used for a match's evaluation_input when using
// the default evaluator.
message DefaultEvaluationCriteria {
double score = 1;
}
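A hypothetical sketch of how a `DefaultEvaluationCriteria` score might appear in a match's extensions map, using the JSON form of `google.protobuf.Any` (a custom message's fields sit alongside the `@type` URL); the `evaluation_input` key comes from the comment above, while the score value is an assumption for illustration:

```python
import json

# Sketch: JSON-form Any carrying a DefaultEvaluationCriteria score.
def default_evaluation_criteria(score: float) -> dict:
    return {
        "@type": "type.googleapis.com/openmatch.DefaultEvaluationCriteria",
        "score": score,
    }

extensions = {"evaluation_input": default_evaluation_criteria(0.8)}
encoded = json.dumps(extensions)
```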

api/frontend.proto
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
import "google/protobuf/empty.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "Frontend"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message CreateTicketRequest {
// A Ticket object with SearchFields defined.
Ticket ticket = 1;
}
message DeleteTicketRequest {
// A TicketId of a generated Ticket to be deleted.
string ticket_id = 1;
}
message GetTicketRequest {
// A TicketId of a generated Ticket.
string ticket_id = 1;
}
message WatchAssignmentsRequest {
// A TicketId of a generated Ticket to get updates on.
string ticket_id = 1;
}
message WatchAssignmentsResponse {
// An updated Assignment of the requested Ticket.
Assignment assignment = 1;
}
// The FrontendService implements APIs to manage and query the status of Tickets.
service FrontendService {
// CreateTicket assigns a unique TicketId to the input Ticket and records it in state storage.
// A Ticket is considered ready for matchmaking once it is created.
// - If a TicketId exists in a Ticket request, an auto-generated TicketId will override this field.
// - If SearchFields exist in a Ticket, CreateTicket will also index these fields such that one can query the ticket with the query.QueryTickets function.
rpc CreateTicket(CreateTicketRequest) returns (Ticket) {
option (google.api.http) = {
post: "/v1/frontendservice/tickets"
body: "*"
};
}
// DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.
// The client should delete the Ticket when finished matchmaking with it.
rpc DeleteTicket(DeleteTicketRequest) returns (google.protobuf.Empty) {
option (google.api.http) = {
delete: "/v1/frontendservice/tickets/{ticket_id}"
};
}
// GetTicket gets the Ticket associated with the specified TicketId.
rpc GetTicket(GetTicketRequest) returns (Ticket) {
option (google.api.http) = {
get: "/v1/frontendservice/tickets/{ticket_id}"
};
}
// WatchAssignments streams back the Assignment of the specified TicketId whenever it is updated.
// - If the Assignment is not updated, WatchAssignments will retry using the configured backoff strategy.
rpc WatchAssignments(WatchAssignmentsRequest)
returns (stream WatchAssignmentsResponse) {
option (google.api.http) = {
get: "/v1/frontendservice/tickets/{ticket_id}/assignments"
};
}
}
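The `google.api.http` annotations above map the FrontendService RPCs onto a REST surface. A minimal sketch of building those requests, assuming a local gateway address (`BASE` and the helper names are hypothetical):

```python
# Sketch of the FrontendService REST surface exposed by grpc-gateway.
BASE = "http://localhost:51504/v1/frontendservice"

def create_ticket_request(search_fields: dict):
    # POST body wraps the Ticket in a CreateTicketRequest.
    return ("POST", f"{BASE}/tickets", {"ticket": {"search_fields": search_fields}})

def get_ticket_request(ticket_id: str):
    return ("GET", f"{BASE}/tickets/{ticket_id}", None)

def delete_ticket_request(ticket_id: str):
    return ("DELETE", f"{BASE}/tickets/{ticket_id}", None)

def watch_assignments_request(ticket_id: str):
    # Streaming GET: each chunk wraps one WatchAssignmentsResponse.
    return ("GET", f"{BASE}/tickets/{ticket_id}/assignments", None)

method, url, body = create_ticket_request({"tags": ["mode.demo"]})
```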

api/frontend.swagger.json
{
"swagger": "2.0",
"info": {
"title": "Frontend",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/frontendservice/tickets": {
"post": {
"summary": "CreateTicket assigns a unique TicketId to the input Ticket and records it in state storage.\nA Ticket is considered ready for matchmaking once it is created.\n - If a TicketId exists in a Ticket request, an auto-generated TicketId will override this field.\n - If SearchFields exist in a Ticket, CreateTicket will also index these fields such that one can query the ticket with the query.QueryTickets function.",
"operationId": "CreateTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchTicket"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchCreateTicketRequest"
}
}
],
"tags": [
"FrontendService"
]
}
},
"/v1/frontendservice/tickets/{ticket_id}": {
"get": {
"summary": "GetTicket gets the Ticket associated with the specified TicketId.",
"operationId": "GetTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchTicket"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "ticket_id",
"description": "A TicketId of a generated Ticket.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"FrontendService"
]
},
"delete": {
"summary": "DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.\nThe client should delete the Ticket when finished matchmaking with it.",
"operationId": "DeleteTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"properties": {}
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "ticket_id",
"description": "A TicketId of a generated Ticket to be deleted.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"FrontendService"
]
}
},
"/v1/frontendservice/tickets/{ticket_id}/assignments": {
"get": {
"summary": "WatchAssignments streams back the Assignment of the specified TicketId whenever it is updated.\n - If the Assignment is not updated, WatchAssignments will retry using the configured backoff strategy.",
"operationId": "WatchAssignments",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/openmatchWatchAssignmentsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "ticket_id",
"description": "A TicketId of a generated Ticket to get updates on.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"FrontendService"
]
}
}
},
"definitions": {
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchCreateTicketRequest": {
"type": "object",
"properties": {
"ticket": {
"$ref": "#/definitions/openmatchTicket",
"description": "A Ticket object with SearchFields defined."
}
}
},
"openmatchSearchFields": {
"type": "object",
"properties": {
"double_args": {
"type": "object",
"additionalProperties": {
"type": "number",
"format": "double"
},
"description": "Float arguments. Filterable on ranges."
},
"string_args": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "String arguments. Filterable on equality."
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Filterable on presence or absence of given value."
}
},
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"openmatchTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Id represents an auto-generated Id issued by Open Match."
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"openmatchWatchAssignmentsResponse": {
"type": "object",
"properties": {
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An updated Assignment of the requested Ticket."
}
}
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. 
Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"openmatchWatchAssignmentsResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchWatchAssignmentsResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of openmatchWatchAssignmentsResponse"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}

api/matchfunction.proto
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "Match Function"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message RunRequest {
// A MatchProfile defines constraints of Tickets in a Match and shapes the Match proposed by the MatchFunction.
MatchProfile profile = 1;
}
message RunResponse {
// A Proposal represents a Match candidate that satisfies the constraints defined in the input Profile.
// A valid Proposal response will contain at least one ticket.
Match proposal = 1;
}
// The MatchFunction service implements APIs to run user-defined matchmaking logic.
service MatchFunction {
// DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.
// Run pulls Tickets that satisfy Profile constraints from QueryService, runs matchmaking logic against them, then
// constructs and streams back match candidates to the Backend service.
rpc Run(RunRequest) returns (stream RunResponse) {
option (google.api.http) = {
post: "/v1/matchfunction:run"
body: "*"
};
}
}
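The Run stream exchanges one `RunRequest` for a stream of proposals. A minimal sketch of the JSON bodies on the gateway route `POST /v1/matchfunction:run`; the helper names are hypothetical, and the profile dict stands in for an `openmatch.MatchProfile` (its `name` field is an assumption):

```python
import json

# Sketch: frame the RunRequest body sent to a match function.
def run_request_body(profile: dict) -> str:
    return json.dumps({"profile": profile})

# Each streamed response carries one proposal (an openmatch.Match).
def parse_run_response(chunk: str) -> dict:
    return json.loads(chunk)["proposal"]

req = run_request_body({"name": "1v1"})
proposal = parse_run_response('{"proposal": {"match_id": "m1", "tickets": [{"id": "t1"}]}}')
```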

{
"swagger": "2.0",
"info": {
"title": "Match Function",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/matchfunction:run": {
"post": {
"summary": "DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.\nRun pulls Tickets that satisfy Profile constraints from QueryService, runs matchmaking logic against them, then\nconstructs and streams back match candidates to the Backend service.",
"operationId": "Run",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/openmatchRunResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchRunRequest"
}
}
],
"tags": [
"MatchFunction"
]
}
}
},
"definitions": {
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchDoubleRangeFilter": {
"type": "object",
"properties": {
"double_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.double_args this Filter operates on."
},
"max": {
"type": "number",
"format": "double",
"description": "Maximum value."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value."
}
},
"title": "Filters numerical values to only those within a range.\n double_arg: \"foo\"\n max: 10\n min: 5\nmatches:\n {\"foo\": 5}\n {\"foo\": 7.5}\n {\"foo\": 10}\ndoes not match:\n {\"foo\": 4}\n {\"foo\": 10.01}\n {\"foo\": \"7.5\"}\n {}"
},
"openmatchMatch": {
"type": "object",
"properties": {
"match_id": {
"type": "string",
"description": "A Match ID that should be passed through the stack for tracing."
},
"match_profile": {
"type": "string",
"description": "Name of the match profile that generated this Match."
},
"match_function": {
"type": "string",
"description": "Name of the match function that generated this Match."
},
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTicket"
},
"description": "Tickets belonging to this match."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least\none ticket to be considered as valid."
},
"openmatchMatchProfile": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of this match profile."
},
"pools": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchPool"
},
"description": "Set of pools to be queried when generating a match for this MatchProfile."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "A MatchProfile is Open Match's representation of a Match specification. It is\nused to indicate the criteria for selecting players for a match. A\nMatchProfile is the input to the API to get matches and is passed to the\nMatchFunction. It contains all the information required by the MatchFunction\nto generate match proposals."
},
"openmatchPool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Pool."
},
"double_range_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchDoubleRangeFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected tickets must\nmatch every Filter."
},
"string_equals_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchStringEqualsFilter"
}
},
"tag_present_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTagPresentFilter"
}
},
"created_before": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created before the specified time are selected."
},
"created_after": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created after the specified time are selected."
}
},
"description": "Pool specifies a set of criteria that are used to select a subset of Tickets\nthat meet all the criteria."
},
"openmatchRunRequest": {
"type": "object",
"properties": {
"profile": {
"$ref": "#/definitions/openmatchMatchProfile",
"description": "A MatchProfile defines constraints of Tickets in a Match and shapes the Match proposed by the MatchFunction."
}
}
},
"openmatchRunResponse": {
"type": "object",
"properties": {
"proposal": {
"$ref": "#/definitions/openmatchMatch",
"description": "A Proposal represents a Match candidate that satisfies the constraints defined in the input Profile.\nA valid Proposal response will contain at least one ticket."
}
}
},
"openmatchSearchFields": {
"type": "object",
"properties": {
"double_args": {
"type": "object",
"additionalProperties": {
"type": "number",
"format": "double"
},
"description": "Float arguments. Filterable on ranges."
},
"string_args": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "String arguments. Filterable on equality."
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Filterable on presence or absence of given value."
}
},
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"openmatchStringEqualsFilter": {
"type": "object",
"properties": {
"string_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.string_args this Filter operates on."
},
"value": {
"type": "string"
}
},
"title": "Filters strings exactly equaling a value.\n string_arg: \"foo\"\n value: \"bar\"\nmatches:\n {\"foo\": \"bar\"}\ndoes not match:\n {\"foo\": \"baz\"}\n {\"bar\": \"foo\"}\n {}"
},
"openmatchTagPresentFilter": {
"type": "object",
"properties": {
"tag": {
"type": "string"
}
},
"title": "Filters to the tag being present on the search_fields.\n tag: \"foo\"\nmatches:\n [\"foo\"]\n [\"bar\",\"foo\"]\ndoes not match:\n [\"bar\"]\n []"
},
"openmatchTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Id represents an auto-generated Id issued by Open Match."
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"openmatchRunResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchRunResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of openmatchRunResponse"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}
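Per the swagger summary, `/v1/matchfunction:run` is invoked by `backend.FetchMatches`, never called manually. Purely to illustrate the request schema above, a minimal `openmatchRunRequest` body might look like this (the profile name, pool name, and tag are made-up values):

```json
{
  "profile": {
    "name": "mode-demo",
    "pools": [
      {
        "name": "everyone",
        "tag_present_filters": [{ "tag": "mode.demo" }]
      }
    ]
  }
}
```

The streamed response wraps each proposal in a `result` field, as described in `x-stream-definitions/openmatchRunResponse`.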

api/messages.proto Normal file

@ -0,0 +1,206 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "google/rpc/status.proto";
import "google/protobuf/any.proto";
import "google/protobuf/timestamp.proto";
// A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent
// an individual 'Player', a 'Group' of players, or any other concepts unique to
// your use case. Open Match will not interpret what the Ticket represents but
// just treat it as a matchmaking unit with a set of SearchFields. Open Match
// stores the Ticket in state storage and enables an Assignment to be set on the
// Ticket.
message Ticket {
// Id represents an auto-generated Id issued by Open Match.
string id = 1;
// An Assignment represents a game server assignment associated with a Ticket,
// or whatever finalized matched state means for your use case.
// Open Match does not require or inspect any fields on Assignment.
Assignment assignment = 3;
// Search fields are the fields which Open Match is aware of, and can be used
// when specifying filters.
SearchFields search_fields = 4;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 5;
// Create time is the time the Ticket was created. It is populated by Open
// Match at the time of Ticket creation.
google.protobuf.Timestamp create_time = 6;
// Deprecated fields.
reserved 2;
}
// Search fields are the fields which Open Match is aware of, and can be used
// when specifying filters.
message SearchFields {
// Float arguments. Filterable on ranges.
map<string, double> double_args = 1;
// String arguments. Filterable on equality.
map<string, string> string_args = 2;
// Filterable on presence or absence of given value.
repeated string tags = 3;
}
// An Assignment represents a game server assignment associated with a Ticket.
// Open Match does not require or inspect any fields on assignment.
message Assignment {
// Connection information for this Assignment.
string connection = 1;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 4;
// Deprecated fields.
reserved 2, 3;
}
// Filters numerical values to only those within a range.
// double_arg: "foo"
// max: 10
// min: 5
// matches:
// {"foo": 5}
// {"foo": 7.5}
// {"foo": 10}
// does not match:
// {"foo": 4}
// {"foo": 10.01}
// {"foo": "7.5"}
// {}
message DoubleRangeFilter {
// Name of the ticket's search_fields.double_args this Filter operates on.
string double_arg = 1;
// Maximum value.
double max = 2;
// Minimum value.
double min = 3;
}
// Filters strings exactly equaling a value.
// string_arg: "foo"
// value: "bar"
// matches:
// {"foo": "bar"}
// does not match:
// {"foo": "baz"}
// {"bar": "foo"}
// {}
message StringEqualsFilter {
// Name of the ticket's search_fields.string_args this Filter operates on.
string string_arg = 1;
string value = 2;
}
// Filters to the tag being present on the search_fields.
// tag: "foo"
// matches:
// ["foo"]
// ["bar","foo"]
// does not match:
// ["bar"]
// []
message TagPresentFilter {
string tag = 1;
}
// Pool specifies a set of criteria that are used to select a subset of Tickets
// that meet all the criteria.
message Pool {
// A developer-chosen human-readable name for this Pool.
string name = 1;
// Set of Filters indicating the filtering criteria. Selected tickets must
// match every Filter.
repeated DoubleRangeFilter double_range_filters = 2;
repeated StringEqualsFilter string_equals_filters = 4;
repeated TagPresentFilter tag_present_filters = 5;
// If specified, only Tickets created before the specified time are selected.
google.protobuf.Timestamp created_before = 6;
// If specified, only Tickets created after the specified time are selected.
google.protobuf.Timestamp created_after = 7;
// Deprecated fields.
reserved 3;
}
// A MatchProfile is Open Match's representation of a Match specification. It is
// used to indicate the criteria for selecting players for a match. A
// MatchProfile is the input to the API to get matches and is passed to the
// MatchFunction. It contains all the information required by the MatchFunction
// to generate match proposals.
message MatchProfile {
// Name of this match profile.
string name = 1;
// Set of pools to be queried when generating a match for this MatchProfile.
repeated Pool pools = 3;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 5;
// Deprecated fields.
reserved 2, 4;
}
// A Match is used to represent a completed match object. It can be generated by
// a MatchFunction as a proposal or can be returned by OpenMatch as a result in
// response to the FetchMatches call.
// When a match is returned by the FetchMatches call, it should contain at least
// one ticket to be considered as valid.
message Match {
// A Match ID that should be passed through the stack for tracing.
string match_id = 1;
// Name of the match profile that generated this Match.
string match_profile = 2;
// Name of the match function that generated this Match.
string match_function = 3;
// Tickets belonging to this match.
repeated Ticket tickets = 4;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 7;
// Deprecated fields.
reserved 5, 6;
}
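The matches/does-not-match examples in the filter comments above can be checked with a small self-contained sketch of the matching semantics (plain Go types stand in for the generated `SearchFields` message, and the helper names are illustrative; the real evaluation lives in Open Match's filter package):

```go
package main

import "fmt"

// searchFields mirrors the SearchFields message: double_args, string_args, tags.
type searchFields struct {
	doubleArgs map[string]float64
	stringArgs map[string]string
	tags       []string
}

// matchDoubleRange reports whether sf satisfies a DoubleRangeFilter:
// the key must be present and min <= value <= max (bounds inclusive,
// per the {"foo": 5} and {"foo": 10} examples).
func matchDoubleRange(sf searchFields, arg string, min, max float64) bool {
	v, ok := sf.doubleArgs[arg]
	return ok && v >= min && v <= max
}

// matchStringEquals reports whether sf satisfies a StringEqualsFilter.
func matchStringEquals(sf searchFields, arg, value string) bool {
	v, ok := sf.stringArgs[arg]
	return ok && v == value
}

// matchTagPresent reports whether the tag is present on sf.
func matchTagPresent(sf searchFields, tag string) bool {
	for _, t := range sf.tags {
		if t == tag {
			return true
		}
	}
	return false
}

func main() {
	sf := searchFields{
		doubleArgs: map[string]float64{"foo": 7.5},
		stringArgs: map[string]string{"foo": "bar"},
		tags:       []string{"bar", "foo"},
	}
	fmt.Println(matchDoubleRange(sf, "foo", 5, 10))  // true
	fmt.Println(matchStringEquals(sf, "foo", "bar")) // true
	fmt.Println(matchTagPresent(sf, "foo"))          // true
}
```

A Ticket is selected by a Pool only when it passes every configured filter, matching the "Selected tickets must match every Filter" comment on `double_range_filters`.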


@ -1,482 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: backend.proto
/*
Package backend is a generated protocol buffer package.
It is generated from these files:
backend.proto
It has these top-level messages:
Profile
MatchObject
Result
Roster
ConnectionInfo
Assignments
*/
package backend
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Data structure for a profile to pass to the matchmaking function.
type Profile struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *Profile) Reset() { *m = Profile{} }
func (m *Profile) String() string { return proto.CompactTextString(m) }
func (*Profile) ProtoMessage() {}
func (*Profile) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Profile) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Profile) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
// Data structure for all the properties of a match.
type MatchObject struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *MatchObject) Reset() { *m = MatchObject{} }
func (m *MatchObject) String() string { return proto.CompactTextString(m) }
func (*MatchObject) ProtoMessage() {}
func (*MatchObject) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *MatchObject) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *MatchObject) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
// Simple message to return success/failure and error status.
type Result struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error" json:"error,omitempty"`
}
func (m *Result) Reset() { *m = Result{} }
func (m *Result) String() string { return proto.CompactTextString(m) }
func (*Result) ProtoMessage() {}
func (*Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *Result) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *Result) GetError() string {
if m != nil {
return m.Error
}
return ""
}
// Data structure to hold a list of players in a match.
type Roster struct {
PlayerIds string `protobuf:"bytes,1,opt,name=player_ids,json=playerIds" json:"player_ids,omitempty"`
}
func (m *Roster) Reset() { *m = Roster{} }
func (m *Roster) String() string { return proto.CompactTextString(m) }
func (*Roster) ProtoMessage() {}
func (*Roster) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Roster) GetPlayerIds() string {
if m != nil {
return m.PlayerIds
}
return ""
}
// Simple message used to pass the connection string for the DGS to the player.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
type Assignments struct {
Roster *Roster `protobuf:"bytes,1,opt,name=roster" json:"roster,omitempty"`
ConnectionInfo *ConnectionInfo `protobuf:"bytes,2,opt,name=connection_info,json=connectionInfo" json:"connection_info,omitempty"`
}
func (m *Assignments) Reset() { *m = Assignments{} }
func (m *Assignments) String() string { return proto.CompactTextString(m) }
func (*Assignments) ProtoMessage() {}
func (*Assignments) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
func (m *Assignments) GetRoster() *Roster {
if m != nil {
return m.Roster
}
return nil
}
func (m *Assignments) GetConnectionInfo() *ConnectionInfo {
if m != nil {
return m.ConnectionInfo
}
return nil
}
func init() {
proto.RegisterType((*Profile)(nil), "Profile")
proto.RegisterType((*MatchObject)(nil), "MatchObject")
proto.RegisterType((*Result)(nil), "Result")
proto.RegisterType((*Roster)(nil), "Roster")
proto.RegisterType((*ConnectionInfo)(nil), "ConnectionInfo")
proto.RegisterType((*Assignments)(nil), "Assignments")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for API service
type APIClient interface {
// Calls to ask the matchmaker to run a matchmaking function.
//
// Run MMF once. Return a matchobject that fits this profile.
CreateMatch(ctx context.Context, in *Profile, opts ...grpc.CallOption) (*MatchObject, error)
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection.
ListMatches(ctx context.Context, in *Profile, opts ...grpc.CallOption) (API_ListMatchesClient, error)
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
DeleteMatch(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (*Result, error)
// Call that manage communication of DGS connection info to players.
//
// Write the DGS connection info for the list of players in the
// Assignments.roster to state storage, so that info can be read by the game
// client(s).
CreateAssignments(ctx context.Context, in *Assignments, opts ...grpc.CallOption) (*Result, error)
// Remove DGS connection info for the list of players in the Roster from
// state storage.
DeleteAssignments(ctx context.Context, in *Roster, opts ...grpc.CallOption) (*Result, error)
}
type aPIClient struct {
cc *grpc.ClientConn
}
func NewAPIClient(cc *grpc.ClientConn) APIClient {
return &aPIClient{cc}
}
func (c *aPIClient) CreateMatch(ctx context.Context, in *Profile, opts ...grpc.CallOption) (*MatchObject, error) {
out := new(MatchObject)
err := grpc.Invoke(ctx, "/API/CreateMatch", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) ListMatches(ctx context.Context, in *Profile, opts ...grpc.CallOption) (API_ListMatchesClient, error) {
stream, err := grpc.NewClientStream(ctx, &_API_serviceDesc.Streams[0], c.cc, "/API/ListMatches", opts...)
if err != nil {
return nil, err
}
x := &aPIListMatchesClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type API_ListMatchesClient interface {
Recv() (*MatchObject, error)
grpc.ClientStream
}
type aPIListMatchesClient struct {
grpc.ClientStream
}
func (x *aPIListMatchesClient) Recv() (*MatchObject, error) {
m := new(MatchObject)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *aPIClient) DeleteMatch(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteMatch", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) CreateAssignments(ctx context.Context, in *Assignments, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/CreateAssignments", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteAssignments(ctx context.Context, in *Roster, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteAssignments", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for API service
type APIServer interface {
// Calls to ask the matchmaker to run a matchmaking function.
//
// Run MMF once. Return a matchobject that fits this profile.
CreateMatch(context.Context, *Profile) (*MatchObject, error)
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection.
ListMatches(*Profile, API_ListMatchesServer) error
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
DeleteMatch(context.Context, *MatchObject) (*Result, error)
// Call that manage communication of DGS connection info to players.
//
// Write the DGS connection info for the list of players in the
// Assignments.roster to state storage, so that info can be read by the game
// client(s).
CreateAssignments(context.Context, *Assignments) (*Result, error)
// Remove DGS connection info for the list of players in the Roster from
// state storage.
DeleteAssignments(context.Context, *Roster) (*Result, error)
}
func RegisterAPIServer(s *grpc.Server, srv APIServer) {
s.RegisterService(&_API_serviceDesc, srv)
}
func _API_CreateMatch_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Profile)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateMatch(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateMatch",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateMatch(ctx, req.(*Profile))
}
return interceptor(ctx, in, info, handler)
}
func _API_ListMatches_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(Profile)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(APIServer).ListMatches(m, &aPIListMatchesServer{stream})
}
type API_ListMatchesServer interface {
Send(*MatchObject) error
grpc.ServerStream
}
type aPIListMatchesServer struct {
grpc.ServerStream
}
func (x *aPIListMatchesServer) Send(m *MatchObject) error {
return x.ServerStream.SendMsg(m)
}
func _API_DeleteMatch_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(MatchObject)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteMatch(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteMatch",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteMatch(ctx, req.(*MatchObject))
}
return interceptor(ctx, in, info, handler)
}
func _API_CreateAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Assignments)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateAssignments(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateAssignments",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateAssignments(ctx, req.(*Assignments))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Roster)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteAssignments(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteAssignments",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteAssignments(ctx, req.(*Roster))
}
return interceptor(ctx, in, info, handler)
}
var _API_serviceDesc = grpc.ServiceDesc{
ServiceName: "API",
HandlerType: (*APIServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateMatch",
Handler: _API_CreateMatch_Handler,
},
{
MethodName: "DeleteMatch",
Handler: _API_DeleteMatch_Handler,
},
{
MethodName: "CreateAssignments",
Handler: _API_CreateAssignments_Handler,
},
{
MethodName: "DeleteAssignments",
Handler: _API_DeleteAssignments_Handler,
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "ListMatches",
Handler: _API_ListMatches_Handler,
ServerStreams: true,
},
},
Metadata: "backend.proto",
}
func init() { proto.RegisterFile("backend.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 344 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x92, 0xcf, 0x4e, 0xc2, 0x40,
0x10, 0xc6, 0x29, 0xc6, 0x16, 0x66, 0x11, 0x64, 0xe3, 0x81, 0x90, 0xf8, 0x27, 0x3d, 0x88, 0x46,
0xb3, 0x31, 0x78, 0xc1, 0x83, 0x07, 0x82, 0x17, 0x12, 0x8d, 0xa4, 0x3e, 0x00, 0x29, 0xdb, 0x01,
0x56, 0xeb, 0x6e, 0xb3, 0xbb, 0x1c, 0x7c, 0x53, 0x1f, 0xc7, 0xb8, 0x2d, 0xba, 0x1c, 0x3c, 0x78,
0x9c, 0x5f, 0xbf, 0x6f, 0xe6, 0xeb, 0xcc, 0xc2, 0xc1, 0x22, 0xe5, 0x6f, 0x28, 0x33, 0x56, 0x68,
0x65, 0x55, 0x7c, 0x07, 0xd1, 0x4c, 0xab, 0xa5, 0xc8, 0x91, 0xb6, 0xa1, 0x2e, 0xb2, 0x5e, 0x70,
0x16, 0x5c, 0x34, 0x93, 0xba, 0xc8, 0xe8, 0x09, 0x40, 0xa1, 0x55, 0x81, 0xda, 0x0a, 0x34, 0xbd,
0xba, 0xe3, 0x1e, 0x89, 0xef, 0x81, 0x3c, 0xa5, 0x96, 0xaf, 0x9f, 0x17, 0xaf, 0xc8, 0xed, 0xbf,
0xed, 0x23, 0x08, 0x13, 0x34, 0x9b, 0xdc, 0xd2, 0x1e, 0x44, 0x66, 0xc3, 0x39, 0x1a, 0xe3, 0xec,
0x8d, 0x64, 0x5b, 0xd2, 0x23, 0xd8, 0x47, 0xad, 0x95, 0xae, 0xec, 0x65, 0x11, 0x0f, 0x20, 0x4c,
0x94, 0xb1, 0xa8, 0xe9, 0x31, 0x40, 0x91, 0xa7, 0x1f, 0xa8, 0xe7, 0x22, 0x33, 0xd5, 0xec, 0x66,
0x49, 0xa6, 0xd9, 0x77, 0xc2, 0xf6, 0x44, 0x49, 0x89, 0xdc, 0x0a, 0x25, 0xa7, 0x72, 0xa9, 0xe8,
0x15, 0x74, 0xf9, 0x0f, 0x99, 0x1b, 0xab, 0x85, 0x5c, 0x55, 0xbe, 0xc3, 0xdf, 0x0f, 0x2f, 0x8e,
0xc7, 0x6b, 0x20, 0x63, 0x63, 0xc4, 0x4a, 0xbe, 0xa3, 0xb4, 0x86, 0x9e, 0x42, 0xa8, 0xdd, 0x58,
0x67, 0x20, 0xc3, 0x88, 0x95, 0x29, 0x92, 0x0a, 0xd3, 0x11, 0x74, 0xbc, 0xe6, 0x42, 0x2e, 0x95,
0xcb, 0x4d, 0x86, 0x1d, 0xb6, 0x1b, 0x23, 0x69, 0xf3, 0x9d, 0x7a, 0xf8, 0x19, 0xc0, 0xde, 0x78,
0x36, 0xa5, 0x03, 0x20, 0x13, 0x8d, 0xa9, 0x45, 0xb7, 0x58, 0xda, 0x60, 0xd5, 0x6d, 0xfa, 0x2d,
0xe6, 0xad, 0x3a, 0xae, 0xd1, 0x4b, 0x20, 0x8f, 0xc2, 0x58, 0x07, 0xd1, 0xfc, 0x2d, 0xbc, 0x09,
0xe8, 0x39, 0x90, 0x07, 0xcc, 0x71, 0xdb, 0x73, 0x47, 0xd0, 0x8f, 0x58, 0x79, 0x83, 0xb8, 0x46,
0xaf, 0xa1, 0x5b, 0xce, 0xf6, 0xff, 0xb9, 0xc5, 0xbc, 0xca, 0x57, 0x0f, 0xa0, 0x5b, 0x76, 0xf5,
0xd5, 0xdb, 0x8d, 0x78, 0xc2, 0x45, 0xe8, 0xde, 0xd9, 0xed, 0x57, 0x00, 0x00, 0x00, 0xff, 0xff,
0xf2, 0x23, 0x14, 0x36, 0x78, 0x02, 0x00, 0x00,
}


@@ -1,76 +0,0 @@
// Follow the guidelines at https://cloud.google.com/endpoints/docs/grpc/transcoding
// to keep the gRPC service definitions friendly to REST transcoding. An excerpt:
//
// "Transcoding involves mapping HTTP/JSON requests and their parameters to gRPC
// methods and their parameters and return types (we'll look at exactly how you
// do this in the following sections). Because of this, while it's possible to
// map an HTTP/JSON request to any arbitrary API method, it's simplest and most
// intuitive to do so if the gRPC API itself is structured in a
// resource-oriented way, just like a traditional HTTP REST API. In other
// words, the API service should be designed so that it uses a small number of
// standard methods (corresponding to HTTP verbs like GET, PUT, and so on) that
// operate on the service's resources (and collections of resources, which are
// themselves a type of resource).
// These standard methods are List, Get, Create, Update, and Delete."
//
syntax = 'proto3';
service API {
// Calls to ask the matchmaker to run a matchmaking function.
//
// Run MMF once. Return a matchobject that fits this profile.
rpc CreateMatch(Profile) returns (MatchObject) {}
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection.
rpc ListMatches(Profile) returns (stream MatchObject) {}
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
rpc DeleteMatch(MatchObject) returns (Result) {}
// Calls that manage communication of DGS connection info to players.
//
// Write the DGS connection info for the list of players in the
// Assignments.roster to state storage, so that info can be read by the game
// client(s).
// TODO: change this to be agnostic; return a 'result' instead of a connection
// string so it can be integrated with session service etc
rpc CreateAssignments(Assignments) returns (Result) {}
// Remove DGS connection info for the list of players in the Roster from
// state storage.
rpc DeleteAssignments(Roster) returns (Result) {}
}
// Data structure for a profile to pass to the matchmaking function.
message Profile{
string id = 1; // By convention, the CRC32 of the properties string.
string properties = 2; // By convention, a JSON-encoded string
}
// Data structure for all the properties of a match.
message MatchObject{
string id = 1; // By convention, a UUID
string properties = 2; // By convention, a JSON-encoded string
Roster roster = 3; // NYI
}
// Simple message to return success/failure and error status.
message Result{
bool success = 1;
string error = 2;
}
// Data structure to hold a list of players in a match.
message Roster{
string player_ids = 1; // By convention, a space-delimited list of player IDs
}
// Simple message used to pass the connection string for the DGS to the player.
message ConnectionInfo{
string connection_string = 1; // Passed by the matchmaker to game clients without modification.
}
message Assignments{
Roster roster = 1;
ConnectionInfo connection_info = 2;
}
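The Profile message above notes that its `id` is, by convention, the CRC32 of the `properties` string. A minimal sketch of that convention (the property names and values are illustrative, not from the source):

```python
import json
import zlib

# Sketch: derive a Profile id as the CRC32 of the JSON-encoded
# properties string, per the convention noted in the proto comment.
# The properties themselves are made-up examples.
properties = json.dumps({"maxPlayers": 8, "mode": "ctf"}, sort_keys=True)
profile_id = format(zlib.crc32(properties.encode("utf-8")) & 0xFFFFFFFF, "08x")
```

Sorting the keys keeps the JSON encoding (and therefore the checksum) stable across runs.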


@@ -1,321 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: frontend.proto
/*
Package frontend is a generated protocol buffer package.
It is generated from these files:
frontend.proto
It has these top-level messages:
Group
PlayerId
ConnectionInfo
Result
*/
package frontend
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Data structure for a group of players to pass to the matchmaking function.
// Obviously, the group can be a group of one!
type Group struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *Group) Reset() { *m = Group{} }
func (m *Group) String() string { return proto.CompactTextString(m) }
func (*Group) ProtoMessage() {}
func (*Group) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Group) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Group) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
type PlayerId struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
}
func (m *PlayerId) Reset() { *m = PlayerId{} }
func (m *PlayerId) String() string { return proto.CompactTextString(m) }
func (*PlayerId) ProtoMessage() {}
func (*PlayerId) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *PlayerId) GetId() string {
if m != nil {
return m.Id
}
return ""
}
// Simple message used to pass the connection string for the DGS to the player.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
// Simple message to return success/failure and error status.
type Result struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error" json:"error,omitempty"`
}
func (m *Result) Reset() { *m = Result{} }
func (m *Result) String() string { return proto.CompactTextString(m) }
func (*Result) ProtoMessage() {}
func (*Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Result) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *Result) GetError() string {
if m != nil {
return m.Error
}
return ""
}
func init() {
proto.RegisterType((*Group)(nil), "Group")
proto.RegisterType((*PlayerId)(nil), "PlayerId")
proto.RegisterType((*ConnectionInfo)(nil), "ConnectionInfo")
proto.RegisterType((*Result)(nil), "Result")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for API service
type APIClient interface {
CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error)
DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error)
}
type aPIClient struct {
cc *grpc.ClientConn
}
func NewAPIClient(cc *grpc.ClientConn) APIClient {
return &aPIClient{cc}
}
func (c *aPIClient) CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/CreateRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error) {
out := new(ConnectionInfo)
err := grpc.Invoke(ctx, "/API/GetAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for API service
type APIServer interface {
CreateRequest(context.Context, *Group) (*Result, error)
DeleteRequest(context.Context, *Group) (*Result, error)
GetAssignment(context.Context, *PlayerId) (*ConnectionInfo, error)
DeleteAssignment(context.Context, *PlayerId) (*Result, error)
}
func RegisterAPIServer(s *grpc.Server, srv APIServer) {
s.RegisterService(&_API_serviceDesc, srv)
}
func _API_CreateRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_GetAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).GetAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/GetAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).GetAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
var _API_serviceDesc = grpc.ServiceDesc{
ServiceName: "API",
HandlerType: (*APIServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateRequest",
Handler: _API_CreateRequest_Handler,
},
{
MethodName: "DeleteRequest",
Handler: _API_DeleteRequest_Handler,
},
{
MethodName: "GetAssignment",
Handler: _API_GetAssignment_Handler,
},
{
MethodName: "DeleteAssignment",
Handler: _API_DeleteAssignment_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "frontend.proto",
}
func init() { proto.RegisterFile("frontend.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 260 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x90, 0x41, 0x4b, 0xfb, 0x40,
0x10, 0xc5, 0x9b, 0xfc, 0x69, 0xda, 0x0e, 0x34, 0xff, 0xba, 0x78, 0x08, 0x39, 0x88, 0xec, 0xa9,
0x20, 0xee, 0x41, 0x0f, 0x7a, 0xf1, 0x50, 0x2a, 0x94, 0xdc, 0x4a, 0xfc, 0x00, 0x52, 0x93, 0x69,
0x59, 0x88, 0xbb, 0x71, 0x66, 0x72, 0xf0, 0x0b, 0xf9, 0x39, 0xc5, 0x4d, 0x6b, 0x55, 0xc4, 0xe3,
0xfb, 0xed, 0x7b, 0x8f, 0x7d, 0x03, 0xe9, 0x96, 0xbc, 0x13, 0x74, 0xb5, 0x69, 0xc9, 0x8b, 0xd7,
0x37, 0x30, 0x5c, 0x91, 0xef, 0x5a, 0x95, 0x42, 0x6c, 0xeb, 0x2c, 0x3a, 0x8f, 0xe6, 0x93, 0x32,
0xb6, 0xb5, 0x3a, 0x03, 0x68, 0xc9, 0xb7, 0x48, 0x62, 0x91, 0xb3, 0x38, 0xf0, 0x2f, 0x44, 0xe7,
0x30, 0x5e, 0x37, 0x9b, 0x57, 0xa4, 0xa2, 0xfe, 0x99, 0xd5, 0x77, 0x90, 0x2e, 0xbd, 0x73, 0x58,
0x89, 0xf5, 0xae, 0x70, 0x5b, 0xaf, 0x2e, 0xe0, 0xa4, 0xfa, 0x24, 0x8f, 0x2c, 0x64, 0xdd, 0x6e,
0x1f, 0x98, 0x1d, 0x1f, 0x1e, 0x02, 0xd7, 0xb7, 0x90, 0x94, 0xc8, 0x5d, 0x23, 0x2a, 0x83, 0x11,
0x77, 0x55, 0x85, 0xcc, 0xc1, 0x3c, 0x2e, 0x0f, 0x52, 0x9d, 0xc2, 0x10, 0x89, 0x3c, 0xed, 0x7f,
0xd6, 0x8b, 0xab, 0xb7, 0x08, 0xfe, 0x2d, 0xd6, 0x85, 0xd2, 0x30, 0x5d, 0x12, 0x6e, 0x04, 0x4b,
0x7c, 0xe9, 0x90, 0x45, 0x25, 0x26, 0xac, 0xcc, 0x47, 0xa6, 0x6f, 0xd6, 0x83, 0x0f, 0xcf, 0x3d,
0x36, 0xf8, 0xa7, 0xe7, 0x12, 0xa6, 0x2b, 0x94, 0x05, 0xb3, 0xdd, 0xb9, 0x67, 0x74, 0xa2, 0x26,
0xe6, 0x30, 0x3a, 0xff, 0x6f, 0xbe, 0x6f, 0xd4, 0x03, 0x35, 0x87, 0x59, 0x5f, 0xf9, 0x7b, 0xe2,
0x58, 0xfc, 0x94, 0x84, 0xeb, 0x5f, 0xbf, 0x07, 0x00, 0x00, 0xff, 0xff, 0x2b, 0xde, 0x2c, 0x5b,
0x8f, 0x01, 0x00, 0x00,
}


@@ -1,59 +0,0 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// -------------
// Follow the guidelines at https://cloud.google.com/endpoints/docs/grpc/transcoding
// to keep the gRPC service definitions friendly to REST transcoding. An excerpt:
//
// "Transcoding involves mapping HTTP/JSON requests and their parameters to gRPC
// methods and their parameters and return types (we'll look at exactly how you
// do this in the following sections). Because of this, while it's possible to
// map an HTTP/JSON request to any arbitrary API method, it's simplest and most
// intuitive to do so if the gRPC API itself is structured in a
// resource-oriented way, just like a traditional HTTP REST API. In other
// words, the API service should be designed so that it uses a small number of
// standard methods (corresponding to HTTP verbs like GET, PUT, and so on) that
// operate on the service's resources (and collections of resources, which are
// themselves a type of resource).
// These standard methods are List, Get, Create, Update, and Delete."
//
syntax = 'proto3';
service API {
rpc CreateRequest(Group) returns (Result) {}
rpc DeleteRequest(Group) returns (Result) {}
rpc GetAssignment(PlayerId) returns (ConnectionInfo) {}
rpc DeleteAssignment(PlayerId) returns (Result) {}
}
// Data structure for a group of players to pass to the matchmaking function.
// Obviously, the group can be a group of one!
message Group{
string id = 1; // By convention, string of space-delimited playerIDs
string properties = 2; // By convention, a JSON-encoded string
}
message PlayerId {
string id = 1; // By convention, a UUID
}
// Simple message used to pass the connection string for the DGS to the player.
message ConnectionInfo{
string connection_string = 1; // Passed by the matchmaker to game clients without modification.
}
// Simple message to return success/failure and error status.
message Result{
bool success = 1;
string error = 2;
}
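The frontend service above implies a simple client flow: submit a Group via CreateRequest, then poll GetAssignment with a PlayerId until a ConnectionInfo with a connection string arrives. A hedged sketch of that polling loop (`get_assignment` is a hypothetical stand-in for the GetAssignment RPC, not a real client method):

```python
# Sketch of the client-side flow implied by the frontend API above.
# `get_assignment` is a hypothetical stand-in for the GetAssignment RPC;
# it should return a dict like {"connection_string": "..."} or None.
def wait_for_assignment(get_assignment, player_id, max_attempts=10):
    for _ in range(max_attempts):
        info = get_assignment(player_id)
        if info and info.get("connection_string"):
            return info["connection_string"]
    # Caller decides how to handle a player that never got assigned.
    return None
```

A real client would also sleep between attempts or use server streaming instead of polling; this only illustrates the shape of the flow.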

api/query.proto

@@ -0,0 +1,101 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "MM Logic (Data Layer)"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message QueryTicketsRequest {
// The Pool representing the set of Filters to be queried.
Pool pool = 1;
}
message QueryTicketsResponse {
// Tickets that meet all the filtering criteria requested by the pool.
repeated Ticket tickets = 1;
}
message QueryTicketIdsRequest {
// The Pool representing the set of Filters to be queried.
Pool pool = 1;
}
message QueryTicketIdsResponse {
// TicketIDs that meet all the filtering criteria requested by the pool.
repeated string ids = 1;
}
// The QueryService service implements helper APIs for Match Function to query Tickets from state storage.
service QueryService {
// QueryTickets gets a list of Tickets that match all Filters of the input Pool.
// - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.
// QueryTickets pages the Tickets by `queryPageSize` and streams back responses.
// - queryPageSize defaults to 1000 if not set, with a minimum of 10 and a maximum of 10000.
rpc QueryTickets(QueryTicketsRequest) returns (stream QueryTicketsResponse) {
option (google.api.http) = {
post: "/v1/queryservice/tickets:query"
body: "*"
};
}
// QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.
// - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.
// QueryTicketIds pages the TicketIDs by `queryPageSize` and streams back responses.
// - queryPageSize defaults to 1000 if not set, with a minimum of 10 and a maximum of 10000.
rpc QueryTicketIds(QueryTicketIdsRequest) returns (stream QueryTicketIdsResponse) {
option (google.api.http) = {
post: "/v1/queryservice/ticketids:query"
body: "*"
};
}
}
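Because of the google.api.http annotations above, QueryTickets is also reachable via REST transcoding as an HTTP POST to `/v1/queryservice/tickets:query`. A sketch of the JSON body such a request would carry (the pool name and filter values are illustrative assumptions, not from the source):

```python
import json

# Illustrative request body for POST /v1/queryservice/tickets:query.
# Field names mirror the proto messages above; the values are made up.
request = {
    "pool": {
        "name": "beginner-pool",
        "double_range_filters": [
            {"double_arg": "mmr", "min": 0, "max": 1000}
        ],
    }
}
body = json.dumps(request)
```

Since QueryTickets is a server-streaming RPC, the transcoded HTTP response is a stream of `QueryTicketsResponse` pages rather than a single object.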

api/query.swagger.json

@@ -0,0 +1,366 @@
{
"swagger": "2.0",
"info": {
"title": "MM Logic (Data Layer)",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/queryservice/ticketids:query": {
"post": {
"summary": "QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.\n - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.\nQueryTicketIds pages the TicketIDs by `queryPageSize` and streams back responses.\n - queryPageSize defaults to 1000 if not set, with a minimum of 10 and a maximum of 10000.",
"operationId": "QueryTicketIds",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/openmatchQueryTicketIdsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchQueryTicketIdsRequest"
}
}
],
"tags": [
"QueryService"
]
}
},
"/v1/queryservice/tickets:query": {
"post": {
"summary": "QueryTickets gets a list of Tickets that match all Filters of the input Pool.\n - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.\nQueryTickets pages the Tickets by `queryPageSize` and streams back responses.\n - queryPageSize defaults to 1000 if not set, with a minimum of 10 and a maximum of 10000.",
"operationId": "QueryTickets",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/openmatchQueryTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchQueryTicketsRequest"
}
}
],
"tags": [
"QueryService"
]
}
}
},
"definitions": {
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchDoubleRangeFilter": {
"type": "object",
"properties": {
"double_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.double_args this Filter operates on."
},
"max": {
"type": "number",
"format": "double",
"description": "Maximum value."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value."
}
},
"title": "Filters numerical values to only those within a range.\n double_arg: \"foo\"\n max: 10\n min: 5\nmatches:\n {\"foo\": 5}\n {\"foo\": 7.5}\n {\"foo\": 10}\ndoes not match:\n {\"foo\": 4}\n {\"foo\": 10.01}\n {\"foo\": \"7.5\"}\n {}"
},
"openmatchPool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Pool."
},
"double_range_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchDoubleRangeFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected tickets must\nmatch every Filter."
},
"string_equals_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchStringEqualsFilter"
}
},
"tag_present_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTagPresentFilter"
}
},
"created_before": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created before the specified time are selected."
},
"created_after": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created after the specified time are selected."
}
},
"description": "Pool specifies a set of criteria that are used to select a subset of Tickets\nthat meet all the criteria."
},
"openmatchQueryTicketIdsRequest": {
"type": "object",
"properties": {
"pool": {
"$ref": "#/definitions/openmatchPool",
"description": "The Pool representing the set of Filters to be queried."
}
}
},
"openmatchQueryTicketIdsResponse": {
"type": "object",
"properties": {
"ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "TicketIDs that meet all the filtering criteria requested by the pool."
}
}
},
"openmatchQueryTicketsRequest": {
"type": "object",
"properties": {
"pool": {
"$ref": "#/definitions/openmatchPool",
"description": "The Pool representing the set of Filters to be queried."
}
}
},
"openmatchQueryTicketsResponse": {
"type": "object",
"properties": {
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTicket"
},
"description": "Tickets that meet all the filtering criteria requested by the pool."
}
}
},
"openmatchSearchFields": {
"type": "object",
"properties": {
"double_args": {
"type": "object",
"additionalProperties": {
"type": "number",
"format": "double"
},
"description": "Float arguments. Filterable on ranges."
},
"string_args": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "String arguments. Filterable on equality."
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Filterable on presence or absence of given value."
}
},
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"openmatchStringEqualsFilter": {
"type": "object",
"properties": {
"string_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.string_args this Filter operates on."
},
"value": {
"type": "string"
}
},
"title": "Filters strings exactly equaling a value.\n string_arg: \"foo\"\n value: \"bar\"\nmatches:\n {\"foo\": \"bar\"}\ndoes not match:\n {\"foo\": \"baz\"}\n {\"bar\": \"foo\"}\n {}"
},
"openmatchTagPresentFilter": {
"type": "object",
"properties": {
"tag": {
"type": "string"
}
},
"title": "Filters to the tag being present on the search_fields.\n tag: \"foo\"\nmatches:\n [\"foo\"]\n [\"bar\",\"foo\"]\ndoes not match:\n [\"bar\"]\n []"
},
"openmatchTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Id represents an auto-generated Id issued by Open Match."
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"openmatchQueryTicketIdsResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchQueryTicketIdsResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of openmatchQueryTicketIdsResponse"
},
"openmatchQueryTicketsResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchQueryTicketsResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of openmatchQueryTicketsResponse"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}
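The `x-stream-definitions` above show that server-streamed responses are wrapped in an envelope whose `result` and `error` fields are mutually exclusive per chunk when exposed over the HTTP/JSON bridge. A minimal Go sketch of decoding one such newline-delimited chunk — the struct names here are hypothetical stand-ins, not the generated Open Match client types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// streamChunk mirrors the "Stream result of openmatchQueryTicketIdsResponse"
// envelope: exactly one of Result or Error is expected per chunk.
type streamChunk struct {
	Result *queryTicketIdsResponse `json:"result,omitempty"`
	Error  *streamError            `json:"error,omitempty"`
}

type queryTicketIdsResponse struct {
	Ids []string `json:"ids"`
}

// streamError carries the fields of runtimeStreamError shown above.
type streamError struct {
	GrpcCode int    `json:"grpc_code"`
	Message  string `json:"message"`
}

// decodeChunk parses a single JSON line from the streamed response body.
func decodeChunk(line []byte) (*streamChunk, error) {
	var c streamChunk
	if err := json.Unmarshal(line, &c); err != nil {
		return nil, err
	}
	return &c, nil
}

func main() {
	c, err := decodeChunk([]byte(`{"result":{"ids":["ticket-1","ticket-2"]}}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Result.Ids)
}
```

A caller would check `Error` first on each chunk before touching `Result`, since a stream can fail midway after successful chunks.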

cloudbuild.yaml Normal file

@ -0,0 +1,176 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# Open Match Script for Google Cloud Build #
################################################################################
# To run this locally:
# cloud-build-local --config=cloudbuild.yaml --dryrun=false --substitutions=_OM_VERSION=DEV .
# To run this remotely:
# gcloud builds submit --config=cloudbuild.yaml --substitutions=_OM_VERSION=DEV .
# Requires gcloud to be installed to work. (https://cloud.google.com/sdk/)
# gcloud auth login
# gcloud components install cloud-build-local
# This YAML contains all the build steps for building Open Match.
# All PRs are verified against this script to prevent build breakages and regressions.
# Conventions
# Each build step is ID'ed with "Prefix: Description".
# The prefix portion determines what kind of step it is and its impact.
# Docker Image: Read-Only, outputs a docker image.
# Lint: Read-Only, verifies correctness and formatting of a file.
# Build: Read-Write, outputs a build artifact. Ok to run in parallel if the artifact will not collide with another one.
# Generate: Read-Write, outputs files within /workspace that are used in other build steps. Do not run these in parallel.
# Setup: Read-Write, similar to generate but steps that run before any other step.
# Some useful things to know about Cloud Build.
# The root of this repository is always stored in /workspace.
# Any modifications that occur within /workspace are persisted between build steps; anything else is forgotten.
# If a build step has intermediate files that need to be persisted for a future step then use volumes.
# An example of this is the go-vol which is where the pkg/ data for go mod is stored.
# More information here: https://cloud.google.com/cloud-build/docs/build-config#build_steps
# A build step is basically a docker image that is tuned for Cloud Build,
# https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/go
steps:
- id: 'Docker Image: open-match-build'
name: gcr.io/cloud-builders/docker
args: ['build', '-t', 'gcr.io/$PROJECT_ID/open-match-build', '-f', 'Dockerfile.ci', '.']
waitFor: ['-']
- id: 'Build: Clean'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'clean-third-party', 'clean-protos', 'clean-swagger-docs']
waitFor: ['Docker Image: open-match-build']
# - id: 'Test: Markdown'
# name: 'gcr.io/$PROJECT_ID/open-match-build'
# args: ['make', 'md-test']
# waitFor: ['Build: Clean']
- id: 'Setup: Download Dependencies'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'sync-deps']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Clean']
- id: 'Build: Initialize Toolchain'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'install-toolchain']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Setup: Download Dependencies']
- id: 'Test: Terraform Configuration'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'terraform-test']
waitFor: ['Build: Initialize Toolchain']
- id: 'Build: Deployment Configs'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'SHORT_SHA=${SHORT_SHA}', 'update-chart-deps', 'install/yaml/']
waitFor: ['Build: Initialize Toolchain']
- id: 'Build: Assets'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'assets', '-j12']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Deployment Configs']
- id: 'Build: Binaries'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GOPROXY=off', 'build', 'all', '-j12']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Assets']
- id: 'Test: Services'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GOPROXY=off', 'GOLANG_TEST_COUNT=10', 'test']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Assets']
- id: 'Build: Docker Images'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', '_GCB_POST_SUBMIT=${_GCB_POST_SUBMIT}', '_GCB_LATEST_VERSION=${_GCB_LATEST_VERSION}', 'SHORT_SHA=${SHORT_SHA}', 'BRANCH_NAME=${BRANCH_NAME}', 'push-images', '-j8']
waitFor: ['Build: Assets']
- id: 'Lint: Format, Vet, Charts'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'lint']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Assets']
- id: 'Test: Deploy Open Match'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'SHORT_SHA=${SHORT_SHA}', 'OPEN_MATCH_KUBERNETES_NAMESPACE=open-match-${BUILD_ID}', 'OPEN_MATCH_RELEASE_NAME=open-match-${BUILD_ID}', 'auth-gke-cluster', 'delete-chart', 'ci-reap-namespaces', 'install-ci-chart']
waitFor: ['Build: Docker Images']
- id: 'Deploy: Deployment Configs'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', '_GCB_POST_SUBMIT=${_GCB_POST_SUBMIT}', '_GCB_LATEST_VERSION=${_GCB_LATEST_VERSION}', 'SHORT_SHA=${SHORT_SHA}', 'BRANCH_NAME=${BRANCH_NAME}', 'ci-deploy-artifacts']
waitFor: ['Lint: Format, Vet, Charts', 'Test: Deploy Open Match']
volumes:
- name: 'go-vol'
path: '/go'
- id: 'Test: End-to-End Cluster'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GOPROXY=off', 'SHORT_SHA=${SHORT_SHA}', 'OPEN_MATCH_KUBERNETES_NAMESPACE=open-match-${BUILD_ID}', 'test-e2e-cluster']
waitFor: ['Test: Deploy Open Match', 'Build: Assets']
volumes:
- name: 'go-vol'
path: '/go'
- id: 'Test: Delete Open Match'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GCLOUD_EXTRA_FLAGS=--async', 'SHORT_SHA=${SHORT_SHA}', 'OPEN_MATCH_KUBERNETES_NAMESPACE=open-match-${BUILD_ID}', 'OPEN_MATCH_RELEASE_NAME=open-match-${BUILD_ID}', 'GCP_PROJECT_ID=${PROJECT_ID}', 'delete-chart']
waitFor: ['Test: End-to-End Cluster']
artifacts:
objects:
location: '${_ARTIFACTS_BUCKET}'
paths:
- install/yaml/install.yaml
- install/yaml/01-open-match-core.yaml
- install/yaml/02-open-match-demo.yaml
- install/yaml/03-prometheus-chart.yaml
- install/yaml/04-grafana-chart.yaml
- install/yaml/05-jaeger-chart.yaml
- install/yaml/06-open-match-override-configmap.yaml
substitutions:
_OM_VERSION: "1.1.0"
_GCB_POST_SUBMIT: "0"
_GCB_LATEST_VERSION: "undefined"
_ARTIFACTS_BUCKET: "gs://open-match-build-artifacts/output/"
_LOGS_BUCKET: "gs://open-match-build-logs/"
logsBucket: '${_LOGS_BUCKET}'
options:
sourceProvenanceHash: ['SHA256']
machineType: 'N1_HIGHCPU_32'
timeout: 2500s


@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-backendapi:dev',
'-f', 'Dockerfile.backendapi',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-backendapi:dev']


@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-evaluator:dev',
'-f', 'Dockerfile.evaluator',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-evaluator:dev']


@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-frontendapi:dev',
'-f', 'Dockerfile.frontendapi',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-frontendapi:dev']


@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf:dev',
'-f', 'Dockerfile.mmf',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf:dev']


@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmforc:dev',
'-f', 'Dockerfile.mmforc',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmforc:dev']

cmd/backend/backend.go Normal file

@ -0,0 +1,25 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the backend service for Open Match.
package main
import (
"open-match.dev/open-match/internal/app/backend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("backend", backend.BindService)
}
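The new entry point above delegates everything to appmain.RunApplication, which pairs a service name with a bind function. A toy sketch of that inversion-of-control shape, under the assumption that binding registers handlers against a context object — the types here are illustrative, not the real appmain package:

```go
package main

import "fmt"

// bindings collects what a service registers at startup; the real appmain
// bindings also carry config, telemetry, and gRPC server registration.
type bindings struct {
	handlers []string
}

func (b *bindings) AddHandler(name string) {
	b.handlers = append(b.handlers, name)
}

// runApplication mimics the shape of appmain.RunApplication: the caller
// supplies only a name and a bind callback, so process lifecycle, config
// loading, and shutdown can live in one shared place.
func runApplication(name string, bind func(*bindings) error) (*bindings, error) {
	b := &bindings{}
	if err := bind(b); err != nil {
		return nil, err
	}
	fmt.Printf("%s: serving %d handlers\n", name, len(b.handlers))
	return b, nil
}

func main() {
	runApplication("backend", func(b *bindings) error {
		b.AddHandler("FetchMatches")
		return nil
	})
}
```

Compared with the deleted per-service main below, this keeps each cmd/ binary to a few lines.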


@ -1,401 +0,0 @@
/*
package apisrv provides an implementation of the gRPC server defined in ../proto/backend.proto
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"context"
"errors"
"fmt"
"net"
"strings"
"time"
backend "github.com/GoogleCloudPlatform/open-match/cmd/backendapi/proto"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
log "github.com/sirupsen/logrus"
"go.opencensus.io/plugin/ocgrpc"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
"github.com/tidwall/gjson"
"github.com/gomodule/redigo/redis"
"github.com/google/uuid"
"github.com/spf13/viper"
"google.golang.org/grpc"
)
// Logrus structured logging setup
var (
beLogFields = log.Fields{
"app": "openmatch",
"component": "backend",
"caller": "backend/apisrv/apisrv.go",
}
beLog = log.WithFields(beLogFields)
)
// BackendAPI implements backend.ApiServer, the server generated by compiling
// the protobuf, by fulfilling the backend.APIClient interface.
type BackendAPI struct {
grpc *grpc.Server
cfg *viper.Viper
pool *redis.Pool
}
type backendAPI BackendAPI
// New returns an instantiated service
func New(cfg *viper.Viper, pool *redis.Pool) *BackendAPI {
s := BackendAPI{
pool: pool,
grpc: grpc.NewServer(grpc.StatsHandler(&ocgrpc.ServerHandler{})),
cfg: cfg,
}
// Add a hook to the logger to auto-count log lines for metrics output thru OpenCensus
log.AddHook(metrics.NewHook(BeLogLines, KeySeverity))
backend.RegisterAPIServer(s.grpc, (*backendAPI)(&s))
beLog.Info("Successfully registered gRPC server")
return &s
}
// Open opens the api grpc service, starting it listening on the configured port.
func (s *BackendAPI) Open() error {
ln, err := net.Listen("tcp", ":"+s.cfg.GetString("api.backend.port"))
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"port": s.cfg.GetInt("api.backend.port"),
}).Error("net.Listen() error")
return err
}
beLog.WithFields(log.Fields{"port": s.cfg.GetInt("api.backend.port")}).Info("TCP net listener initialized")
go func() {
err := s.grpc.Serve(ln)
if err != nil {
beLog.WithFields(log.Fields{"error": err.Error()}).Error("gRPC serve() error")
}
beLog.Info("serving gRPC endpoints")
}()
return nil
}
// CreateMatch is this service's implementation of the CreateMatch gRPC method
// defined in ../proto/backend.proto
func (s *backendAPI) CreateMatch(c context.Context, p *backend.Profile) (*backend.MatchObject, error) {
// Get a cancel-able context
ctx, cancel := context.WithCancel(c)
defer cancel()
// Create context for tagging OpenCensus metrics.
funcName := "CreateMatch"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"profileID": p.Id,
}).Info("gRPC call executing")
// Write profile
_, err := redisHelpers.Create(ctx, s.pool, p.Id, p.Properties)
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("Statestorage failure to create match profile")
// Failure! Return empty match object and the error
stats.Record(fnCtx, BeGrpcErrors.M(1))
return &backend.MatchObject{}, err
}
beLog.WithFields(log.Fields{
"profileID": p.Id,
}).Info("Profile written to statestorage")
// Generate a request to fill the profile
moID := strings.Replace(uuid.New().String(), "-", "", -1)
profileRequestKey := moID + "." + p.Id
_, err = redisHelpers.Update(ctx, s.pool, s.cfg.GetString("queues.profiles.name"), profileRequestKey)
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("Statestorage failure to queue profile")
// Failure! Return empty match object and the error
stats.Record(fnCtx, BeGrpcErrors.M(1))
return &backend.MatchObject{}, err
}
beLog.WithFields(log.Fields{
"profileID": p.Id,
"matchObjectID": moID,
"profileRequestKey": profileRequestKey,
}).Info("Profile added to processing queue")
// get and return matchobject
watchChan := redisHelpers.Watcher(ctx, s.pool, profileRequestKey) // Watcher() runs the appropriate Redis commands.
mo := &backend.MatchObject{Id: p.Id, Properties: ""}
errString := ("Error retrieving matchmaking results from statestorage")
timeout := time.Duration(s.cfg.GetInt("interval.resultsTimeout")) * time.Second
select {
case <-time.After(timeout):
// TODO:Timeout: deal with the fallout. There are some edge cases here.
// When there is a timeout, need to send a stop to the watch channel.
stats.Record(fnCtx, BeGrpcRequests.M(1))
return mo, errors.New(errString + ": timeout exceeded")
case properties, ok := <-watchChan:
if !ok {
// ok is false if watchChan has been closed by redisHelpers.Watcher()
stats.Record(fnCtx, BeGrpcRequests.M(1))
return mo, errors.New(errString + ": channel closed - was the context cancelled?")
}
beLog.WithFields(log.Fields{
"profileRequestKey": profileRequestKey,
"matchObjectID": moID,
// DEBUG ONLY: This prints the entire result from redis to the logs
"matchProperties": properties, // very verbose!
}).Debug("Received match object from statestorage")
// 'ok' was true, so properties should contain the results from redis.
// Do some error checking on the returned JSON
if !gjson.Valid(properties) {
// Just splitting this across lines for readability/wrappability
thisError := ": Retrieved json was malformed"
thisError = thisError + " - did the evaluator write a valid JSON match object?"
stats.Record(fnCtx, BeGrpcErrors.M(1))
return mo, errors.New(errString + thisError)
}
mmfError := gjson.Get(properties, "error")
if mmfError.Exists() {
stats.Record(fnCtx, BeGrpcErrors.M(1))
return mo, errors.New(errString + ": " + mmfError.String())
}
// Passed error checking; safe to send this property blob to the calling client.
mo.Properties = properties
}
beLog.WithFields(log.Fields{
"profileID": p.Id,
"matchObjectID": moID,
"profileRequestKey": profileRequestKey,
}).Info("Matchmaking results received, returning to backend client")
stats.Record(fnCtx, BeGrpcRequests.M(1))
return mo, err
}
// ListMatches is this service's implementation of the ListMatches gRPC method
// defined in ../proto/backend.proto
// This is the streaming version of CreateMatch - continually submitting the profile to be filled
// until the requesting service ends the connection.
func (s *backendAPI) ListMatches(p *backend.Profile, matchStream backend.API_ListMatchesServer) error {
// call creatematch in infinite loop as long as the stream is open
ctx := matchStream.Context() // https://talks.golang.org/2015/gotham-grpc.slide#30
// Create context for tagging OpenCensus metrics.
funcName := "ListMatches"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"profileID": p.Id,
}).Info("gRPC call executing. Calling CreateMatch. Looping until cancelled.")
for {
select {
case <-ctx.Done():
// Context cancelled, probably because the client cancelled their request, time to exit.
beLog.WithFields(log.Fields{
"profileID": p.Id,
}).Info("gRPC Context cancelled; client is probably finished receiving matches")
// TODO: need to make sure that in-flight matches don't get leaked here.
stats.Record(fnCtx, BeGrpcRequests.M(1))
return nil
default:
// Retrieve results from Redis
mo, err := s.CreateMatch(ctx, p)
beLog = beLog.WithFields(log.Fields{"func": funcName})
if err != nil {
beLog.WithFields(log.Fields{"error": err.Error()}).Error("Failure calling CreateMatch")
stats.Record(fnCtx, BeGrpcErrors.M(1))
return err
}
beLog.WithFields(log.Fields{"matchProperties": fmt.Sprintf("%v", &mo)}).Debug("Streaming back match object")
matchStream.Send(mo)
// TODO: This should be tunable, but there should be SOME sleep here, to give a requestor a window
// to cleanly close the connection after receiving a match object when they know they don't want to
// request any more matches.
time.Sleep(2 * time.Second)
}
}
}
// DeleteMatch is this service's implementation of the DeleteMatch gRPC method
// defined in ../proto/backend.proto
func (s *backendAPI) DeleteMatch(ctx context.Context, mo *backend.MatchObject) (*backend.Result, error) {
// Create context for tagging OpenCensus metrics.
funcName := "DeleteMatch"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"matchObjectID": mo.Id,
}).Info("gRPC call executing")
_, err := redisHelpers.Delete(ctx, s.pool, mo.Id)
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("Statestorage error")
stats.Record(fnCtx, BeGrpcErrors.M(1))
return &backend.Result{Success: false, Error: err.Error()}, err
}
beLog.WithFields(log.Fields{
"matchObjectID": mo.Id,
}).Info("Match Object deleted.")
stats.Record(fnCtx, BeGrpcRequests.M(1))
return &backend.Result{Success: true, Error: ""}, err
}
// CreateAssignments is this service's implementation of the CreateAssignments gRPC method
// defined in ../proto/backend.proto
func (s *backendAPI) CreateAssignments(ctx context.Context, a *backend.Assignments) (*backend.Result, error) {
// TODO: make playerIDs a repeated protobuf message field and iterate over it
assignments := strings.Split(a.Roster.PlayerIds, " ")
// Create context for tagging OpenCensus metrics.
funcName := "CreateAssignments"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"numAssignments": len(assignments),
}).Info("gRPC call executing")
// TODO: relocate this redis functionality to a module
redisConn := s.pool.Get()
defer redisConn.Close()
// Create player assignments in a transaction
redisConn.Send("MULTI")
for _, playerID := range assignments {
beLog.WithFields(log.Fields{
"query": "HSET",
"playerID": playerID,
s.cfg.GetString("jsonkeys.connstring"): a.ConnectionInfo.ConnectionString,
}).Debug("Statestorage operation")
redisConn.Send("HSET", playerID, s.cfg.GetString("jsonkeys.connstring"), a.ConnectionInfo.ConnectionString)
}
_, err := redisConn.Do("EXEC")
// Issue encountered
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("Statestorage error")
stats.Record(fnCtx, BeGrpcErrors.M(1))
stats.Record(fnCtx, BeAssignmentFailures.M(int64(len(assignments))))
return &backend.Result{Success: false, Error: err.Error()}, err
}
// Success!
beLog.WithFields(log.Fields{
"numAssignments": len(assignments),
}).Info("Assignments complete")
stats.Record(fnCtx, BeGrpcRequests.M(1))
stats.Record(fnCtx, BeAssignments.M(int64(len(assignments))))
return &backend.Result{Success: true, Error: ""}, err
}
// DeleteAssignments is this service's implementation of the DeleteAssignments gRPC method
// defined in ../proto/backend.proto
func (s *backendAPI) DeleteAssignments(ctx context.Context, a *backend.Roster) (*backend.Result, error) {
// TODO: make playerIDs a repeated protobuf message field and iterate over it
assignments := strings.Split(a.PlayerIds, " ")
// Create context for tagging OpenCensus metrics.
funcName := "DeleteAssignments"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"numAssignments": len(assignments),
}).Info("gRPC call executing")
// TODO: relocate this redis functionality to a module
redisConn := s.pool.Get()
defer redisConn.Close()
// Remove player assignments in a transaction
redisConn.Send("MULTI")
// TODO: make playerIDs a repeated protobuf message field and iterate over it
for _, playerID := range assignments {
beLog.WithFields(log.Fields{"query": "DEL", "key": playerID}).Debug("Statestorage operation")
redisConn.Send("DEL", playerID)
}
_, err := redisConn.Do("EXEC")
// Issue encountered
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("Statestorage error")
stats.Record(fnCtx, BeGrpcErrors.M(1))
stats.Record(fnCtx, BeAssignmentDeletionFailures.M(int64(len(assignments))))
return &backend.Result{Success: false, Error: err.Error()}, err
}
// Success!
stats.Record(fnCtx, BeGrpcRequests.M(1))
stats.Record(fnCtx, BeAssignmentDeletions.M(int64(len(assignments))))
return &backend.Result{Success: true, Error: ""}, err
}


@ -1,178 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"go.opencensus.io/stats"
"go.opencensus.io/stats/view"
"go.opencensus.io/tag"
)
// OpenCensus Measures. These are exported as metrics to your monitoring system
// https://godoc.org/go.opencensus.io/stats
//
// When making opencensus stats, the 'name' param, with forward slashes changed
// to underscores, is appended to the 'namespace' value passed to the
// prometheus exporter to become the Prometheus metric name. You can also look
// into having Prometheus rewrite your metric names on scrape.
//
// For example:
// - defining the prometheus export namespace "open_match" when instantiating the exporter:
// pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "open_match"})
// - and naming the request counter "backend/requests_total":
// MGrpcRequests := stats.Int64("backendapi/requests_total", ...
// - results in the prometheus metric name:
// open_match_backendapi_requests_total
// - [note] when using opencensus views to aggregate the metrics into
// distribution buckets and such, multiple metrics
// will be generated with appended types ("<metric>_bucket",
// "<metric>_count", "<metric>_sum", for example)
//
// In addition, OpenCensus stats propagated to Prometheus have the following
// auto-populated labels pulled from kubernetes, which we should avoid reusing to
// prevent overloading and having to use the HonorLabels param in Prometheus.
//
// - Information about the k8s pod being monitored:
// "pod" (name of the monitored k8s pod)
// "namespace" (k8s namespace of the monitored pod)
// - Information about how prometheus is gathering the metrics:
// "instance" (IP and port number being scraped by prometheus)
// "job" (name of the k8s service being scraped by prometheus)
// "endpoint" (name of the k8s port in the k8s service being scraped by prometheus)
//
var (
// API instrumentation
BeGrpcRequests = stats.Int64("backendapi/requests_total", "Number of requests to the gRPC Backend API endpoints", "1")
BeGrpcErrors = stats.Int64("backendapi/errors_total", "Number of errors generated by the gRPC Backend API endpoints", "1")
BeGrpcLatencySecs = stats.Float64("backendapi/latency_seconds", "Latency in seconds of the gRPC Backend API endpoints", "1")
// Logging instrumentation
// There's no need to record this measurement directly if you use
// the logrus hook provided in metrics/helper.go after instantiating the
// logrus instance in your application code.
// https://godoc.org/github.com/sirupsen/logrus#LevelHooks
BeLogLines = stats.Int64("backendapi/logs_total", "Number of Backend API lines logged", "1")
// Failure instrumentation
BeFailures = stats.Int64("backendapi/failures_total", "Number of Backend API failures", "1")
// Counting operations
BeAssignments = stats.Int64("backendapi/assignments_total", "Number of players assigned to matches", "1")
BeAssignmentFailures = stats.Int64("backendapi/assignment/failures_total", "Number of player match assignment failures", "1")
BeAssignmentDeletions = stats.Int64("backendapi/assignment/deletions_total", "Number of player match assignment deletions", "1")
BeAssignmentDeletionFailures = stats.Int64("backendapi/assignment/deletions/failures_total", "Number of player match assignment deletion failures", "1")
)
var (
// KeyMethod is used to tag a measure with the currently running API method.
KeyMethod, _ = tag.NewKey("method")
// KeySeverity is used to tag the severity of a log message.
KeySeverity, _ = tag.NewKey("severity")
)
var (
// Latency in buckets:
// [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
latencyDistribution = view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000)
)
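view.Distribution turns those bounds into histogram buckets. As an illustration of how a recorded latency lands in a bucket — a hand-rolled sketch of the bucketing rule, not the OpenCensus implementation:

```go
package main

import "fmt"

// bucketIndex returns the index of the histogram bucket a value falls into,
// given ascending upper bounds: bucket 0 is (-inf, bounds[0]), bucket i is
// [bounds[i-1], bounds[i]), and the last bucket is [bounds[len-1], +inf).
func bucketIndex(bounds []float64, v float64) int {
	for i, b := range bounds {
		if v < b {
			return i
		}
	}
	return len(bounds)
}

func main() {
	latencyBounds := []float64{0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000}
	fmt.Println(bucketIndex(latencyBounds, 30)) // → 2, the [25, 50) bucket
}
```

When exported, each bucket surfaces in Prometheus as a `<metric>_bucket` series, alongside `<metric>_count` and `<metric>_sum`, as the comment above notes.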
// Package metrics provides some convenience views.
// You need to register the views for the data to actually be collected.
// Note: The OpenCensus View 'Description' is exported to Prometheus as the HELP string.
// Note: If you get a "Failed to export to Prometheus: inconsistent label
// cardinality" error, chances are you forgot to set the tags specified in the
// view for a given measure when you tried to do a stats.Record()
var (
BeLatencyView = &view.View{
Name: "backend/latency",
Measure: BeGrpcLatencySecs,
Description: "The distribution of backend latencies",
Aggregation: latencyDistribution,
TagKeys: []tag.Key{KeyMethod},
}
BeRequestCountView = &view.View{
Name: "backend/grpc/requests",
Measure: BeGrpcRequests,
Description: "The number of successful backend gRPC requests",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
BeErrorCountView = &view.View{
Name: "backend/grpc/errors",
Measure: BeGrpcErrors,
Description: "The number of gRPC errors",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
BeLogCountView = &view.View{
Name: "log_lines/total",
Measure: BeLogLines,
Description: "The number of lines logged",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeySeverity},
}
BeFailureCountView = &view.View{
Name: "failures",
Measure: BeFailures,
Description: "The number of failures",
Aggregation: view.Count(),
}
BeAssignmentCountView = &view.View{
Name: "backend/assignments",
Measure: BeAssignments,
Description: "The number of successful player match assignments",
Aggregation: view.Count(),
}
BeAssignmentFailureCountView = &view.View{
Name: "backend/assignments/failures",
Measure: BeAssignmentFailures,
Description: "The number of player match assignment failures",
Aggregation: view.Count(),
}
BeAssignmentDeletionCountView = &view.View{
Name: "backend/assignments/deletions",
Measure: BeAssignmentDeletions,
Description: "The number of successful player match assignments",
Aggregation: view.Count(),
}
BeAssignmentDeletionFailureCountView = &view.View{
Name: "backend/assignments/deletions/failures",
Measure: BeAssignmentDeletionFailures,
Description: "The number of player match assignment failures",
Aggregation: view.Count(),
}
)
// DefaultBackendAPIViews are the default backend API OpenCensus measure views.
var DefaultBackendAPIViews = []*view.View{
BeLatencyView,
BeRequestCountView,
BeErrorCountView,
BeLogCountView,
BeFailureCountView,
BeAssignmentCountView,
BeAssignmentFailureCountView,
BeAssignmentDeletionCountView,
BeAssignmentDeletionFailureCountView,
}


@ -1,14 +0,0 @@
/*
BackendAPI contains the unique files required to run the API endpoints for
Open Match's backend. It is assumed you'll either integrate calls to these
endpoints directly into your dedicated game server (simple use case), or call
these endpoints from other, established services in your infrastructure (more
complicated use cases).
Note that the main package for backendapi does very little except read the
config and set up logging and metrics, then start the server. Almost all the
work is being done by backendapi/apisrv, which implements the gRPC server
defined in the backendapi/proto/backend.pb.go file.
*/
package main


@ -1,105 +0,0 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in proto/backend.proto
All the actual important bits are in the API Server source code: apisrv/apisrv.go
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"errors"
"os"
"os/signal"
"github.com/GoogleCloudPlatform/open-match/cmd/backendapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
)
var (
// Logrus structured logging setup
beLogFields = log.Fields{
"app": "openmatch",
"component": "backend",
"caller": "backendapi/main.go",
}
beLog = log.WithFields(beLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output thru OpenCensus
log.AddHook(metrics.NewHook(apisrv.BeLogLines, apisrv.KeySeverity))
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
beLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocServerViews := apisrv.DefaultBackendAPIViews // BackendAPI OpenCensus views.
ocServerViews = append(ocServerViews, ocgrpc.DefaultServerViews...) // gRPC OpenCensus views.
ocServerViews = append(ocServerViews, config.CfgVarCountView) // config loader view.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocServerViews = append(ocServerViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
beLog.WithFields(log.Fields{"viewscount": len(ocServerViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocServerViews)
}
func main() {
// Connect to redis
pool := redishelpers.ConnectionPool(cfg)
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
beLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server
err := srv.Open()
if err != nil {
beLog.WithFields(log.Fields{"error": err.Error()}).Fatal("Failed to start gRPC server")
}
// Exit when we see a signal
terminate := make(chan os.Signal, 1)
signal.Notify(terminate, os.Interrupt)
<-terminate
beLog.Info("Shutting down gRPC server")
}


@ -1 +0,0 @@
../../config/matchmaker_config.json


@ -1,482 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: backend.proto
/*
Package backend is a generated protocol buffer package.
It is generated from these files:
backend.proto
It has these top-level messages:
Profile
MatchObject
Result
Roster
ConnectionInfo
Assignments
*/
package backend
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Data structure for a profile to pass to the matchmaking function.
type Profile struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *Profile) Reset() { *m = Profile{} }
func (m *Profile) String() string { return proto.CompactTextString(m) }
func (*Profile) ProtoMessage() {}
func (*Profile) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Profile) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Profile) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
// Data structure for all the properties of a match.
type MatchObject struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *MatchObject) Reset() { *m = MatchObject{} }
func (m *MatchObject) String() string { return proto.CompactTextString(m) }
func (*MatchObject) ProtoMessage() {}
func (*MatchObject) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *MatchObject) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *MatchObject) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
// Simple message to return success/failure and error status.
type Result struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error" json:"error,omitempty"`
}
func (m *Result) Reset() { *m = Result{} }
func (m *Result) String() string { return proto.CompactTextString(m) }
func (*Result) ProtoMessage() {}
func (*Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *Result) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *Result) GetError() string {
if m != nil {
return m.Error
}
return ""
}
// Data structure to hold a list of players in a match.
type Roster struct {
PlayerIds string `protobuf:"bytes,1,opt,name=player_ids,json=playerIds" json:"player_ids,omitempty"`
}
func (m *Roster) Reset() { *m = Roster{} }
func (m *Roster) String() string { return proto.CompactTextString(m) }
func (*Roster) ProtoMessage() {}
func (*Roster) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Roster) GetPlayerIds() string {
if m != nil {
return m.PlayerIds
}
return ""
}
// Simple message used to pass the connection string for the DGS to the player.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
type Assignments struct {
Roster *Roster `protobuf:"bytes,1,opt,name=roster" json:"roster,omitempty"`
ConnectionInfo *ConnectionInfo `protobuf:"bytes,2,opt,name=connection_info,json=connectionInfo" json:"connection_info,omitempty"`
}
func (m *Assignments) Reset() { *m = Assignments{} }
func (m *Assignments) String() string { return proto.CompactTextString(m) }
func (*Assignments) ProtoMessage() {}
func (*Assignments) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
func (m *Assignments) GetRoster() *Roster {
if m != nil {
return m.Roster
}
return nil
}
func (m *Assignments) GetConnectionInfo() *ConnectionInfo {
if m != nil {
return m.ConnectionInfo
}
return nil
}
func init() {
proto.RegisterType((*Profile)(nil), "Profile")
proto.RegisterType((*MatchObject)(nil), "MatchObject")
proto.RegisterType((*Result)(nil), "Result")
proto.RegisterType((*Roster)(nil), "Roster")
proto.RegisterType((*ConnectionInfo)(nil), "ConnectionInfo")
proto.RegisterType((*Assignments)(nil), "Assignments")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for API service
type APIClient interface {
// Calls to ask the matchmaker to run a matchmaking function.
//
// Run MMF once. Return a matchobject that fits this profile.
CreateMatch(ctx context.Context, in *Profile, opts ...grpc.CallOption) (*MatchObject, error)
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection.
ListMatches(ctx context.Context, in *Profile, opts ...grpc.CallOption) (API_ListMatchesClient, error)
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
DeleteMatch(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (*Result, error)
// Calls that manage communication of DGS connection info to players.
//
// Write the DGS connection info for the list of players in the
// Assignments.roster to state storage, so that info can be read by the game
// client(s).
CreateAssignments(ctx context.Context, in *Assignments, opts ...grpc.CallOption) (*Result, error)
// Remove DGS connection info for the list of players in the Roster from
// state storage.
DeleteAssignments(ctx context.Context, in *Roster, opts ...grpc.CallOption) (*Result, error)
}
type aPIClient struct {
cc *grpc.ClientConn
}
func NewAPIClient(cc *grpc.ClientConn) APIClient {
return &aPIClient{cc}
}
func (c *aPIClient) CreateMatch(ctx context.Context, in *Profile, opts ...grpc.CallOption) (*MatchObject, error) {
out := new(MatchObject)
err := grpc.Invoke(ctx, "/API/CreateMatch", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) ListMatches(ctx context.Context, in *Profile, opts ...grpc.CallOption) (API_ListMatchesClient, error) {
stream, err := grpc.NewClientStream(ctx, &_API_serviceDesc.Streams[0], c.cc, "/API/ListMatches", opts...)
if err != nil {
return nil, err
}
x := &aPIListMatchesClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type API_ListMatchesClient interface {
Recv() (*MatchObject, error)
grpc.ClientStream
}
type aPIListMatchesClient struct {
grpc.ClientStream
}
func (x *aPIListMatchesClient) Recv() (*MatchObject, error) {
m := new(MatchObject)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *aPIClient) DeleteMatch(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteMatch", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) CreateAssignments(ctx context.Context, in *Assignments, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/CreateAssignments", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteAssignments(ctx context.Context, in *Roster, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteAssignments", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for API service
type APIServer interface {
// Calls to ask the matchmaker to run a matchmaking function.
//
// Run MMF once. Return a matchobject that fits this profile.
CreateMatch(context.Context, *Profile) (*MatchObject, error)
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection.
ListMatches(*Profile, API_ListMatchesServer) error
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
DeleteMatch(context.Context, *MatchObject) (*Result, error)
// Calls that manage communication of DGS connection info to players.
//
// Write the DGS connection info for the list of players in the
// Assignments.roster to state storage, so that info can be read by the game
// client(s).
CreateAssignments(context.Context, *Assignments) (*Result, error)
// Remove DGS connection info for the list of players in the Roster from
// state storage.
DeleteAssignments(context.Context, *Roster) (*Result, error)
}
func RegisterAPIServer(s *grpc.Server, srv APIServer) {
s.RegisterService(&_API_serviceDesc, srv)
}
func _API_CreateMatch_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Profile)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateMatch(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateMatch",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateMatch(ctx, req.(*Profile))
}
return interceptor(ctx, in, info, handler)
}
func _API_ListMatches_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(Profile)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(APIServer).ListMatches(m, &aPIListMatchesServer{stream})
}
type API_ListMatchesServer interface {
Send(*MatchObject) error
grpc.ServerStream
}
type aPIListMatchesServer struct {
grpc.ServerStream
}
func (x *aPIListMatchesServer) Send(m *MatchObject) error {
return x.ServerStream.SendMsg(m)
}
func _API_DeleteMatch_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(MatchObject)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteMatch(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteMatch",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteMatch(ctx, req.(*MatchObject))
}
return interceptor(ctx, in, info, handler)
}
func _API_CreateAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Assignments)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateAssignments(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateAssignments",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateAssignments(ctx, req.(*Assignments))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Roster)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteAssignments(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteAssignments",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteAssignments(ctx, req.(*Roster))
}
return interceptor(ctx, in, info, handler)
}
var _API_serviceDesc = grpc.ServiceDesc{
ServiceName: "API",
HandlerType: (*APIServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateMatch",
Handler: _API_CreateMatch_Handler,
},
{
MethodName: "DeleteMatch",
Handler: _API_DeleteMatch_Handler,
},
{
MethodName: "CreateAssignments",
Handler: _API_CreateAssignments_Handler,
},
{
MethodName: "DeleteAssignments",
Handler: _API_DeleteAssignments_Handler,
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "ListMatches",
Handler: _API_ListMatches_Handler,
ServerStreams: true,
},
},
Metadata: "backend.proto",
}
func init() { proto.RegisterFile("backend.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 344 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x92, 0xcf, 0x4e, 0xc2, 0x40,
0x10, 0xc6, 0x29, 0xc6, 0x16, 0x66, 0x11, 0x64, 0xe3, 0x81, 0x90, 0xf8, 0x27, 0x3d, 0x88, 0x46,
0xb3, 0x31, 0x78, 0xc1, 0x83, 0x07, 0x82, 0x17, 0x12, 0x8d, 0xa4, 0x3e, 0x00, 0x29, 0xdb, 0x01,
0x56, 0xeb, 0x6e, 0xb3, 0xbb, 0x1c, 0x7c, 0x53, 0x1f, 0xc7, 0xb8, 0x2d, 0xba, 0x1c, 0x3c, 0x78,
0x9c, 0x5f, 0xbf, 0x6f, 0xe6, 0xeb, 0xcc, 0xc2, 0xc1, 0x22, 0xe5, 0x6f, 0x28, 0x33, 0x56, 0x68,
0x65, 0x55, 0x7c, 0x07, 0xd1, 0x4c, 0xab, 0xa5, 0xc8, 0x91, 0xb6, 0xa1, 0x2e, 0xb2, 0x5e, 0x70,
0x16, 0x5c, 0x34, 0x93, 0xba, 0xc8, 0xe8, 0x09, 0x40, 0xa1, 0x55, 0x81, 0xda, 0x0a, 0x34, 0xbd,
0xba, 0xe3, 0x1e, 0x89, 0xef, 0x81, 0x3c, 0xa5, 0x96, 0xaf, 0x9f, 0x17, 0xaf, 0xc8, 0xed, 0xbf,
0xed, 0x23, 0x08, 0x13, 0x34, 0x9b, 0xdc, 0xd2, 0x1e, 0x44, 0x66, 0xc3, 0x39, 0x1a, 0xe3, 0xec,
0x8d, 0x64, 0x5b, 0xd2, 0x23, 0xd8, 0x47, 0xad, 0x95, 0xae, 0xec, 0x65, 0x11, 0x0f, 0x20, 0x4c,
0x94, 0xb1, 0xa8, 0xe9, 0x31, 0x40, 0x91, 0xa7, 0x1f, 0xa8, 0xe7, 0x22, 0x33, 0xd5, 0xec, 0x66,
0x49, 0xa6, 0xd9, 0x77, 0xc2, 0xf6, 0x44, 0x49, 0x89, 0xdc, 0x0a, 0x25, 0xa7, 0x72, 0xa9, 0xe8,
0x15, 0x74, 0xf9, 0x0f, 0x99, 0x1b, 0xab, 0x85, 0x5c, 0x55, 0xbe, 0xc3, 0xdf, 0x0f, 0x2f, 0x8e,
0xc7, 0x6b, 0x20, 0x63, 0x63, 0xc4, 0x4a, 0xbe, 0xa3, 0xb4, 0x86, 0x9e, 0x42, 0xa8, 0xdd, 0x58,
0x67, 0x20, 0xc3, 0x88, 0x95, 0x29, 0x92, 0x0a, 0xd3, 0x11, 0x74, 0xbc, 0xe6, 0x42, 0x2e, 0x95,
0xcb, 0x4d, 0x86, 0x1d, 0xb6, 0x1b, 0x23, 0x69, 0xf3, 0x9d, 0x7a, 0xf8, 0x19, 0xc0, 0xde, 0x78,
0x36, 0xa5, 0x03, 0x20, 0x13, 0x8d, 0xa9, 0x45, 0xb7, 0x58, 0xda, 0x60, 0xd5, 0x6d, 0xfa, 0x2d,
0xe6, 0xad, 0x3a, 0xae, 0xd1, 0x4b, 0x20, 0x8f, 0xc2, 0x58, 0x07, 0xd1, 0xfc, 0x2d, 0xbc, 0x09,
0xe8, 0x39, 0x90, 0x07, 0xcc, 0x71, 0xdb, 0x73, 0x47, 0xd0, 0x8f, 0x58, 0x79, 0x83, 0xb8, 0x46,
0xaf, 0xa1, 0x5b, 0xce, 0xf6, 0xff, 0xb9, 0xc5, 0xbc, 0xca, 0x57, 0x0f, 0xa0, 0x5b, 0x76, 0xf5,
0xd5, 0xdb, 0x8d, 0x78, 0xc2, 0x45, 0xe8, 0xde, 0xd9, 0xed, 0x57, 0x00, 0x00, 0x00, 0xff, 0xff,
0xf2, 0x23, 0x14, 0x36, 0x78, 0x02, 0x00, 0x00,
}


@ -1,4 +0,0 @@
/*
backend is a package compiled from the protobuffer in <REPO_ROOT>/api/protobuf-spec/backend.proto. It is auto-generated and shouldn't be edited.
*/
package backend


@ -0,0 +1,24 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"open-match.dev/open-match/internal/app/evaluator/defaulteval"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("evaluator", defaulteval.BindService)
}


@ -0,0 +1,31 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"open-match.dev/open-match/examples/demo"
"open-match.dev/open-match/examples/demo/components"
"open-match.dev/open-match/examples/demo/components/clients"
"open-match.dev/open-match/examples/demo/components/director"
"open-match.dev/open-match/examples/demo/components/uptime"
)
func main() {
demo.Run(map[string]func(*components.DemoShared){
"uptime": uptime.Run,
"clients": clients.Run,
"director": director.Run,
})
}

cmd/frontend/frontend.go

@ -0,0 +1,25 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the frontend service for Open Match.
package main
import (
"open-match.dev/open-match/internal/app/frontend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("frontend", frontend.BindService)
}


@ -1,300 +0,0 @@
/*
package apisrv provides an implementation of the gRPC server defined in ../proto/frontend.proto.
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"context"
"errors"
"net"
"time"
frontend "github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/proto"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
playerq "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/playerq"
log "github.com/sirupsen/logrus"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
"github.com/gomodule/redigo/redis"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
"google.golang.org/grpc"
)
// Logrus structured logging setup
var (
feLogFields = log.Fields{
"app": "openmatch",
"component": "frontend",
"caller": "frontendapi/apisrv/apisrv.go",
}
feLog = log.WithFields(feLogFields)
)
// FrontendAPI implements frontend.ApiServer, the server generated by compiling
// the protobuf, by fulfilling the frontend.APIClient interface.
type FrontendAPI struct {
grpc *grpc.Server
cfg *viper.Viper
pool *redis.Pool
}
type frontendAPI FrontendAPI
// New returns an instantiated service
func New(cfg *viper.Viper, pool *redis.Pool) *FrontendAPI {
s := FrontendAPI{
pool: pool,
grpc: grpc.NewServer(grpc.StatsHandler(&ocgrpc.ServerHandler{})),
cfg: cfg,
}
// Add a hook to the logger to auto-count log lines for metrics output thru OpenCensus
log.AddHook(metrics.NewHook(FeLogLines, KeySeverity))
// Register gRPC server
frontend.RegisterAPIServer(s.grpc, (*frontendAPI)(&s))
feLog.Info("Successfully registered gRPC server")
return &s
}
// Open opens the api grpc service, starting it listening on the configured port.
func (s *FrontendAPI) Open() error {
ln, err := net.Listen("tcp", ":"+s.cfg.GetString("api.frontend.port"))
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"port": s.cfg.GetInt("api.frontend.port"),
}).Error("net.Listen() error")
return err
}
feLog.WithFields(log.Fields{"port": s.cfg.GetInt("api.frontend.port")}).Info("TCP net listener initialized")
go func() {
feLog.Info("serving gRPC endpoints")
// Serve blocks until the server stops, so log before calling it.
err := s.grpc.Serve(ln)
if err != nil {
feLog.WithFields(log.Fields{"error": err.Error()}).Error("gRPC serve() error")
}
}()
return nil
}
// CreateRequest is this service's implementation of the CreateRequest gRPC method
// defined in ../proto/frontend.proto
func (s *frontendAPI) CreateRequest(c context.Context, g *frontend.Group) (*frontend.Result, error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Create context for tagging OpenCensus metrics.
funcName := "CreateRequest"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
// Write group
// TODO: Remove playerq module and just use redishelper module once
// indexing has its own implementation
err := playerq.Create(redisConn, g.Id, g.Properties)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// DeleteRequest is this service's implementation of the DeleteRequest gRPC method defined in
// frontendapi/proto/frontend.proto
func (s *frontendAPI) DeleteRequest(c context.Context, g *frontend.Group) (*frontend.Result, error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Create context for tagging OpenCensus metrics.
funcName := "DeleteRequest"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
// Write group
err := playerq.Delete(redisConn, g.Id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// GetAssignment is this service's implementation of the GetAssignment gRPC method defined in
// frontendapi/proto/frontend.proto
func (s *frontendAPI) GetAssignment(c context.Context, p *frontend.PlayerId) (*frontend.ConnectionInfo, error) {
// Get cancellable context
ctx, cancel := context.WithCancel(c)
defer cancel()
// Create context for tagging OpenCensus metrics.
funcName := "GetAssignment"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// get and return connection string
var connString string
watchChan := s.watcher(ctx, s.pool, p.Id) // watcher() runs the appropriate Redis commands.
select {
case <-time.After(30 * time.Second): // TODO: Make this configurable.
err := errors.New("did not see matchmaking results in redis before timeout")
// TODO:Timeout: deal with the fallout
// When there is a timeout, need to send a stop to the watch channel.
// cancelling ctx isn't doing it.
//cancel()
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"playerid": p.Id,
}).Error("State storage error")
errTag, _ := tag.NewKey("errtype")
fnCtx, _ := tag.New(ctx, tag.Insert(errTag, "watch_timeout"))
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.ConnectionInfo{ConnectionString: ""}, err
case connString = <-watchChan:
feLog.Debug(p.Id, "connString:", connString)
}
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.ConnectionInfo{ConnectionString: connString}, nil
}
// DeleteAssignment is this service's implementation of the DeleteAssignment gRPC method defined in
// frontendapi/proto/frontend.proto
func (s *frontendAPI) DeleteAssignment(c context.Context, p *frontend.PlayerId) (*frontend.Result, error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Create context for tagging OpenCensus metrics.
funcName := "DeleteAssignment"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
// Write group
err := playerq.Delete(redisConn, p.Id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
//TODO: Everything below this line will be moved to the redis statestorage library
// in an upcoming version.
// ================================================
// watcher makes a channel and returns it immediately. It also launches an
// asynchronous goroutine that watches a redis key and returns the value of
// the 'connstring' field of that key once it exists on the channel.
//
// The pattern for this function is from 'Go Concurrency Patterns', it is a function
// that wraps a closure goroutine, and returns a channel.
// reference: https://talks.golang.org/2012/concurrency.slide#25
func (s *frontendAPI) watcher(ctx context.Context, pool *redis.Pool, key string) <-chan string {
// Add the key as a field to all logs for the execution of this function.
// (Use a local logger rather than reassigning the package-level feLog,
// which would accumulate fields across calls.)
keyLog := feLog.WithFields(log.Fields{"key": key})
keyLog.Debug("Watching key in statestorage for changes")
watchChan := make(chan string)
go func() {
// var declaration
var results string
var err = errors.New("haven't queried Redis yet")
// Loop, querying redis until this key has a value
for err != nil {
select {
case <-ctx.Done():
// Cleanup
close(watchChan)
return
default:
results, err = s.retrieveConnstring(ctx, pool, key, s.cfg.GetString("jsonkeys.connstring"))
if err != nil {
time.Sleep(5 * time.Second) // TODO: exp bo + jitter
}
}
}
// Return the value retrieved from Redis asynchronously and tell the calling function we're done
feLog.Debug("Statestorage watched record update detected")
watchChan <- results
close(watchChan)
}()
return watchChan
}
// retrieveConnstring is a concurrent-safe, context-aware redis HGET of the 'connstring' field in the input key
// TODO: This will be moved to the redis statestorage module.
func (s *frontendAPI) retrieveConnstring(ctx context.Context, pool *redis.Pool, key string, field string) (string, error) {
// Add the key as a field to all logs for the execution of this function.
rLog := feLog.WithFields(log.Fields{"key": key})
cmd := "HGET"
rLog.WithFields(log.Fields{"query": cmd}).Debug("Statestorage operation")
// Get a connection to redis
redisConn, err := pool.GetContext(ctx)
if err != nil {
// Encountered an issue getting a connection from the pool; the connection
// is nil in this case, so check the error before deferring Close().
rLog.WithFields(log.Fields{
"error": err.Error(),
"query": cmd}).Error("Statestorage connection error")
return "", err
}
defer redisConn.Close()
// Run redis query and return
return redis.String(redisConn.Do("HGET", key, field))
}


@ -1,139 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"go.opencensus.io/stats"
"go.opencensus.io/stats/view"
"go.opencensus.io/tag"
)
// OpenCensus Measures. These are exported as metrics to your monitoring system
// https://godoc.org/go.opencensus.io/stats
//
// When making opencensus stats, the 'name' param, with forward slashes changed
// to underscores, is appended to the 'namespace' value passed to the
// prometheus exporter to become the Prometheus metric name. You can also look
// into having Prometheus rewrite your metric names on scrape.
//
// For example:
// - defining the prometheus export namespace "open_match" when instantiating the exporter:
// pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "open_match"})
// - and naming the request counter "frontend/requests_total":
// MGrpcRequests := stats.Int64("frontendapi/requests_total", ...
// - results in the prometheus metric name:
// open_match_frontendapi_requests_total
// - [note] when using opencensus views to aggregate the metrics into
// distribution buckets and such, multiple metrics
// will be generated with appended types ("<metric>_bucket",
// "<metric>_count", "<metric>_sum", for example)
//
// In addition, OpenCensus stats propagated to Prometheus have the following
// auto-populated labels pulled from kubernetes, which we should avoid setting
// ourselves to prevent collisions and having to use the HonorLabels param in Prometheus.
//
// - Information about the k8s pod being monitored:
// "pod" (name of the monitored k8s pod)
// "namespace" (k8s namespace of the monitored pod)
// - Information about how prometheus is gathering the metrics:
// "instance" (IP and port number being scraped by prometheus)
// "job" (name of the k8s service being scraped by prometheus)
// "endpoint" (name of the k8s port in the k8s service being scraped by prometheus)
//
var (
// API instrumentation
FeGrpcRequests = stats.Int64("frontendapi/requests_total", "Number of requests to the gRPC Frontend API endpoints", "1")
FeGrpcErrors = stats.Int64("frontendapi/errors_total", "Number of errors generated by the gRPC Frontend API endpoints", "1")
FeGrpcLatencySecs = stats.Float64("frontendapi/latency_seconds", "Latency in seconds of the gRPC Frontend API endpoints", "1")
// Logging instrumentation
// There's no need to record this measurement directly if you use
// the logrus hook provided in metrics/helper.go after instantiating the
// logrus instance in your application code.
// https://godoc.org/github.com/sirupsen/logrus#LevelHooks
FeLogLines = stats.Int64("frontendapi/logs_total", "Number of Frontend API lines logged", "1")
// Failure instrumentation
FeFailures = stats.Int64("frontendapi/failures_total", "Number of Frontend API failures", "1")
)
var (
// KeyMethod is used to tag a measure with the currently running API method.
KeyMethod, _ = tag.NewKey("method")
KeySeverity, _ = tag.NewKey("severity")
)
var (
// Latency in buckets:
// [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
latencyDistribution = view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000)
)
// This file provides some convenience views.
// You need to register the views for the data to actually be collected.
// Note: The OpenCensus View 'Description' is exported to Prometheus as the HELP string.
// Note: If you get a "Failed to export to Prometheus: inconsistent label
// cardinality" error, chances are you forgot to set the tags specified in the
// view for a given measure when you tried to do a stats.Record()
var (
FeLatencyView = &view.View{
Name: "frontend/latency",
Measure: FeGrpcLatencySecs,
Description: "The distribution of frontend latencies",
Aggregation: latencyDistribution,
TagKeys: []tag.Key{KeyMethod},
}
FeRequestCountView = &view.View{
Name: "frontend/grpc/requests",
Measure: FeGrpcRequests,
Description: "The number of successful frontend gRPC requests",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
FeErrorCountView = &view.View{
Name: "frontend/grpc/errors",
Measure: FeGrpcErrors,
Description: "The number of gRPC errors",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
FeLogCountView = &view.View{
Name: "log_lines/total",
Measure: FeLogLines,
Description: "The number of lines logged",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeySeverity},
}
FeFailureCountView = &view.View{
Name: "failures",
Measure: FeFailures,
Description: "The number of failures",
Aggregation: view.Count(),
}
)
// DefaultFrontendAPIViews are the default frontend API OpenCensus measure views.
var DefaultFrontendAPIViews = []*view.View{
FeLatencyView,
FeRequestCountView,
FeErrorCountView,
FeLogCountView,
FeFailureCountView,
}
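The naming convention documented in the comment at the top of this file (exporter namespace, plus the measure name with forward slashes changed to underscores) can be sketched as a small stdlib-only helper. `promMetricName` is a hypothetical function for illustration; the real mapping happens inside the Prometheus exporter:

```go
package main

import (
	"fmt"
	"strings"
)

// promMetricName shows how an OpenCensus measure name becomes a Prometheus
// metric name: the exporter namespace, an underscore, then the measure name
// with forward slashes changed to underscores. (Hypothetical helper for
// illustration only.)
func promMetricName(namespace, measureName string) string {
	return namespace + "_" + strings.ReplaceAll(measureName, "/", "_")
}

func main() {
	fmt.Println(promMetricName("open_match", "frontendapi/requests_total"))
	// -> open_match_frontendapi_requests_total
}
```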


@ -1,14 +0,0 @@
/*
FrontendAPI contains the unique files required to run the API endpoints for
Open Match's frontend. It is assumed you'll either integrate calls to these
endpoints directly into your game client (simple use case), or call these
endpoints from other, established platform services in your infrastructure
(more complicated use cases).
Note that the main package for frontendapi does very little except read the
config and set up logging and metrics, then start the server. Almost all the
work is being done by frontendapi/apisrv, which implements the gRPC server
defined in the frontendapi/proto/frontend.pb.go file.
*/
package main


@ -1,105 +0,0 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in
frontendapi/proto/frontend.pb.go
All the actual important bits are in the API Server source code: apisrv/apisrv.go
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"errors"
"os"
"os/signal"
"github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
)
var (
// Logrus structured logging setup
feLogFields = log.Fields{
"app": "openmatch",
"component": "frontend",
"caller": "frontendapi/main.go",
}
feLog = log.WithFields(feLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(apisrv.FeLogLines, apisrv.KeySeverity))
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
feLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocServerViews := apisrv.DefaultFrontendAPIViews // FrontendAPI OpenCensus views.
ocServerViews = append(ocServerViews, ocgrpc.DefaultServerViews...) // gRPC OpenCensus views.
ocServerViews = append(ocServerViews, config.CfgVarCountView) // config loader view.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocServerViews = append(ocServerViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
feLog.WithFields(log.Fields{"viewscount": len(ocServerViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocServerViews)
}
func main() {
// Connect to redis
pool := redishelpers.ConnectionPool(cfg)
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
feLog.WithFields(log.Fields{"testfield": "test"}).Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server
err := srv.Open()
if err != nil {
feLog.WithFields(log.Fields{"error": err.Error()}).Fatal("Failed to start gRPC server")
}
// Exit when we see a signal
terminate := make(chan os.Signal, 1)
signal.Notify(terminate, os.Interrupt)
<-terminate
feLog.Info("Shutting down gRPC server")
}


@ -1 +0,0 @@
../../config/matchmaker_config.json


@ -1,4 +0,0 @@
/*
frontend is a package compiled from the protobuffer in <REPO_ROOT>/api/protobuf-spec/frontend.proto. It is auto-generated and shouldn't be edited.
*/
package frontend


@ -1,335 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: frontend.proto
/*
Package frontend is a generated protocol buffer package.
It is generated from these files:
frontend.proto
It has these top-level messages:
Group
PlayerId
ConnectionInfo
Result
*/
package frontend
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Data structure for a group of players to pass to the matchmaking function.
// Obviously, the group can be a group of one!
type Group struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *Group) Reset() { *m = Group{} }
func (m *Group) String() string { return proto.CompactTextString(m) }
func (*Group) ProtoMessage() {}
func (*Group) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Group) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Group) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
type PlayerId struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
}
func (m *PlayerId) Reset() { *m = PlayerId{} }
func (m *PlayerId) String() string { return proto.CompactTextString(m) }
func (*PlayerId) ProtoMessage() {}
func (*PlayerId) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *PlayerId) GetId() string {
if m != nil {
return m.Id
}
return ""
}
// Simple message used to pass the connection string for the DGS to the player.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
// Simple message to return success/failure and error status.
type Result struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error" json:"error,omitempty"`
}
func (m *Result) Reset() { *m = Result{} }
func (m *Result) String() string { return proto.CompactTextString(m) }
func (*Result) ProtoMessage() {}
func (*Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Result) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *Result) GetError() string {
if m != nil {
return m.Error
}
return ""
}
func init() {
proto.RegisterType((*Group)(nil), "Group")
proto.RegisterType((*PlayerId)(nil), "PlayerId")
proto.RegisterType((*ConnectionInfo)(nil), "ConnectionInfo")
proto.RegisterType((*Result)(nil), "Result")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for API service
type APIClient interface {
CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error)
DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error)
}
type aPIClient struct {
cc *grpc.ClientConn
}
func NewAPIClient(cc *grpc.ClientConn) APIClient {
return &aPIClient{cc}
}
func (c *aPIClient) CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/CreateRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error) {
out := new(ConnectionInfo)
err := grpc.Invoke(ctx, "/API/GetAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for API service
type APIServer interface {
CreateRequest(context.Context, *Group) (*Result, error)
DeleteRequest(context.Context, *Group) (*Result, error)
GetAssignment(context.Context, *PlayerId) (*ConnectionInfo, error)
DeleteAssignment(context.Context, *PlayerId) (*Result, error)
}
func RegisterAPIServer(s *grpc.Server, srv APIServer) {
s.RegisterService(&_API_serviceDesc, srv)
}
func _API_CreateRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_GetAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).GetAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/GetAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).GetAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
var _API_serviceDesc = grpc.ServiceDesc{
ServiceName: "API",
HandlerType: (*APIServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateRequest",
Handler: _API_CreateRequest_Handler,
},
{
MethodName: "DeleteRequest",
Handler: _API_DeleteRequest_Handler,
},
{
MethodName: "GetAssignment",
Handler: _API_GetAssignment_Handler,
},
{
MethodName: "DeleteAssignment",
Handler: _API_DeleteAssignment_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "frontend.proto",
}
func init() { proto.RegisterFile("frontend.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 260 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x90, 0x41, 0x4b, 0xfb, 0x40,
0x10, 0xc5, 0x9b, 0xfc, 0x69, 0xda, 0x0e, 0x34, 0xff, 0xba, 0x78, 0x08, 0x39, 0x88, 0xec, 0xa9,
0x20, 0xee, 0x41, 0x0f, 0x7a, 0xf1, 0x50, 0x2a, 0x94, 0xdc, 0x4a, 0xfc, 0x00, 0x52, 0x93, 0x69,
0x59, 0x88, 0xbb, 0x71, 0x66, 0x72, 0xf0, 0x0b, 0xf9, 0x39, 0xc5, 0x4d, 0x6b, 0x55, 0xc4, 0xe3,
0xfb, 0xed, 0x7b, 0x8f, 0x7d, 0x03, 0xe9, 0x96, 0xbc, 0x13, 0x74, 0xb5, 0x69, 0xc9, 0x8b, 0xd7,
0x37, 0x30, 0x5c, 0x91, 0xef, 0x5a, 0x95, 0x42, 0x6c, 0xeb, 0x2c, 0x3a, 0x8f, 0xe6, 0x93, 0x32,
0xb6, 0xb5, 0x3a, 0x03, 0x68, 0xc9, 0xb7, 0x48, 0x62, 0x91, 0xb3, 0x38, 0xf0, 0x2f, 0x44, 0xe7,
0x30, 0x5e, 0x37, 0x9b, 0x57, 0xa4, 0xa2, 0xfe, 0x99, 0xd5, 0x77, 0x90, 0x2e, 0xbd, 0x73, 0x58,
0x89, 0xf5, 0xae, 0x70, 0x5b, 0xaf, 0x2e, 0xe0, 0xa4, 0xfa, 0x24, 0x8f, 0x2c, 0x64, 0xdd, 0x6e,
0x1f, 0x98, 0x1d, 0x1f, 0x1e, 0x02, 0xd7, 0xb7, 0x90, 0x94, 0xc8, 0x5d, 0x23, 0x2a, 0x83, 0x11,
0x77, 0x55, 0x85, 0xcc, 0xc1, 0x3c, 0x2e, 0x0f, 0x52, 0x9d, 0xc2, 0x10, 0x89, 0x3c, 0xed, 0x7f,
0xd6, 0x8b, 0xab, 0xb7, 0x08, 0xfe, 0x2d, 0xd6, 0x85, 0xd2, 0x30, 0x5d, 0x12, 0x6e, 0x04, 0x4b,
0x7c, 0xe9, 0x90, 0x45, 0x25, 0x26, 0xac, 0xcc, 0x47, 0xa6, 0x6f, 0xd6, 0x83, 0x0f, 0xcf, 0x3d,
0x36, 0xf8, 0xa7, 0xe7, 0x12, 0xa6, 0x2b, 0x94, 0x05, 0xb3, 0xdd, 0xb9, 0x67, 0x74, 0xa2, 0x26,
0xe6, 0x30, 0x3a, 0xff, 0x6f, 0xbe, 0x6f, 0xd4, 0x03, 0x35, 0x87, 0x59, 0x5f, 0xf9, 0x7b, 0xe2,
0x58, 0xfc, 0x94, 0x84, 0xeb, 0x5f, 0xbf, 0x07, 0x00, 0x00, 0xff, 0xff, 0x2b, 0xde, 0x2c, 0x5b,
0x8f, 0x01, 0x00, 0x00,
}
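The generated getters above (e.g. `Group.GetId`) follow protoc-gen-go's convention of being safe to call on a nil receiver, which lets callers chain getters without nil checks. A minimal stdlib-only sketch of that convention, using a simplified `Group` without protobuf tags:

```go
package main

import "fmt"

// Group is a simplified stand-in for the generated message (no protobuf tags).
type Group struct {
	Id string
}

// GetId mirrors the generated-code convention: a nil receiver returns the
// zero value instead of panicking.
func (m *Group) GetId() string {
	if m != nil {
		return m.Id
	}
	return ""
}

func main() {
	var g *Group // deliberately nil
	fmt.Println(g.GetId() == "") // safe on nil: prints true
	fmt.Println((&Group{Id: "group1"}).GetId())
}
```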

cmd/minimatch/main.go Normal file

@ -0,0 +1,25 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the minimatch in-process testing binary for Open Match.
package main
import (
"open-match.dev/open-match/internal/app/minimatch"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("minimatch", minimatch.BindService)
}


@ -1,442 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Note: the example only works with the code within the same release/branch.
// This is based on the example from the official k8s golang client repository:
// k8s.io/client-go/examples/create-update-delete-deployment/
package main
import (
"context"
"errors"
"os"
"strconv"
"strings"
"time"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/tidwall/gjson"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
"github.com/gomodule/redigo/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
batchv1 "k8s.io/api/batch/v1"
apiv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
//"k8s.io/kubernetes/pkg/api"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
// Uncomment the following line to load the gcp plugin (only required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
)
var (
// Logrus structured logging setup
mmforcLogFields = log.Fields{
"app": "openmatch",
"component": "mmforc",
"caller": "mmforc/main.go",
}
mmforcLog = log.WithFields(mmforcLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.SetFormatter(&log.JSONFormatter{})
log.AddHook(metrics.NewHook(MmforcLogLines, KeySeverity))
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
mmforcLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocMmforcViews := DefaultMmforcViews // mmforc OpenCensus views.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocMmforcViews = append(ocMmforcViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
mmforcLog.WithFields(log.Fields{"viewscount": len(ocMmforcViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocMmforcViews)
}
func main() {
pool := redisHelpers.ConnectionPool(cfg)
redisConn := pool.Get()
defer redisConn.Close()
// Get k8s credentials so we can start k8s Jobs
mmforcLog.Info("Attempting to acquire k8s credentials")
config, err := rest.InClusterConfig()
if err != nil {
panic(err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err)
}
mmforcLog.Info("K8s credentials acquired")
start := time.Now()
checkProposals := true
defaultMmfImages := []string{cfg.GetString("defaultImages.mmf.name") + ":" + cfg.GetString("defaultImages.mmf.tag")}
defaultEvalImage := cfg.GetString("defaultImages.evaluator.name") + ":" + cfg.GetString("defaultImages.evaluator.tag")
// main loop; kick off matchmaker functions for profiles in the profile
// queue and an evaluator when proposals are in the proposals queue
for {
ctx, cancel := context.WithCancel(context.Background())
_ = cancel
// Get profiles and kick off a job for each
mmforcLog.WithFields(log.Fields{
"profileQueueName": cfg.GetString("queues.profiles.name"),
"pullCount": cfg.GetInt("queues.profiles.pullCount"),
"query": "SPOP",
"component": "statestorage",
}).Info("Retrieving match profiles")
results, err := redis.Strings(redisConn.Do("SPOP",
cfg.GetString("queues.profiles.name"), cfg.GetInt("queues.profiles.pullCount")))
if err != nil {
panic(err)
}
if len(results) > 0 {
mmforcLog.WithFields(log.Fields{
"numProfiles": len(results),
}).Info("Starting MMF jobs...")
for _, profile := range results {
// Kick off the job asynchronously
go mmfunc(ctx, profile, cfg, defaultMmfImages, clientset, pool)
// Count the number of jobs running
redisHelpers.Increment(context.Background(), pool, "concurrentMMFs")
}
} else {
mmforcLog.WithFields(log.Fields{
"profileQueueName": cfg.GetString("queues.profiles.name"),
}).Warn("Unable to retrieve match profiles from statestorage - have you entered any?")
}
// Check to see if we should run the evaluator.
// Get number of running MMFs
r, err := redisHelpers.Retrieve(context.Background(), pool, "concurrentMMFs")
if err != nil {
mmforcLog.Println(err)
if err.Error() == "redigo: nil returned" {
// No MMFs have run since we last evaluated; reset timer and loop
mmforcLog.Debug("Number of concurrentMMFs is nil")
start = time.Now()
time.Sleep(1000 * time.Millisecond)
}
continue
}
numRunning, err := strconv.Atoi(r)
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Issue retrieving number of currently running MMFs")
}
// We are ready to evaluate either when all MMFs are complete, or the
// timeout is reached.
//
// Tuning how frequently the evaluator runs is a complex topic and
// probably only of interest to users running large-scale production
// workloads with many concurrently running matchmaking functions,
// which have some overlap in the matchmaking player pools. Suffice to
// say that under load, this switch should almost always trigger the
// timeout interval code path. The concurrentMMFs check of how many are
// still running is meant as a short-circuit, to avoid waiting out the
// evaluator interval when all your MMFs are already
// finished.
switch {
case time.Since(start).Seconds() >= float64(cfg.GetInt("interval.evaluator")):
mmforcLog.WithFields(log.Fields{
"interval": cfg.GetInt("interval.evaluator"),
}).Info("Maximum evaluator interval exceeded")
checkProposals = true
// Opencensus tagging
ctx, _ = tag.New(ctx, tag.Insert(KeyEvalReason, "interval_exceeded"))
case numRunning <= 0:
mmforcLog.Info("All MMFs complete")
checkProposals = true
numRunning = 0
ctx, _ = tag.New(ctx, tag.Insert(KeyEvalReason, "mmfs_completed"))
}
if checkProposals {
// Make sure there are proposals in the queue.
checkProposals = false
mmforcLog.Info("Checking statestorage for match object proposals")
results, err := redisHelpers.Count(context.Background(), pool, cfg.GetString("queues.proposals.name"))
switch {
case err != nil:
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Couldn't retrieve the length of the proposal queue from statestorage!")
case results == 0:
mmforcLog.WithFields(log.Fields{}).Warn("No proposals in the queue!")
default:
mmforcLog.WithFields(log.Fields{
"numProposals": results,
}).Info("Proposals available, evaluating!")
go evaluator(ctx, cfg, defaultEvalImage, clientset)
}
_, err = redisHelpers.Delete(context.Background(), pool, "concurrentMMFs")
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Error deleting concurrent MMF counter!")
}
start = time.Now()
}
// TODO: Make this tunable via config.
// A sleep here is not critical but just a useful safety valve in case
// things are broken, to keep the main loop from going all-out and spamming the log.
mainSleep := 1000
mmforcLog.WithFields(log.Fields{
"ms": mainSleep,
}).Info("Sleeping...")
time.Sleep(time.Duration(mainSleep) * time.Millisecond)
} // End main for loop
}
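The two trigger conditions in the switch above (evaluator interval elapsed, or zero MMFs still running) can be captured in a small predicate. `shouldEvaluate` is a hypothetical standalone helper for illustration; the original checks the conditions inline:

```go
package main

import "fmt"

// shouldEvaluate reports whether the evaluator should run: either the
// configured interval has elapsed since the last evaluation, or every
// matchmaking function has finished. (Hypothetical helper; the original
// checks these conditions inline in a switch.)
func shouldEvaluate(elapsedSecs float64, intervalSecs int, numRunning int) bool {
	return elapsedSecs >= float64(intervalSecs) || numRunning <= 0
}

func main() {
	fmt.Println(shouldEvaluate(12.0, 10, 5)) // interval exceeded -> true
	fmt.Println(shouldEvaluate(3.0, 10, 0))  // all MMFs complete -> true
	fmt.Println(shouldEvaluate(3.0, 10, 5))  // keep waiting -> false
}
```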
// mmfunc generates a k8s job that runs the specified mmf container image.
func mmfunc(ctx context.Context, profile string, cfg *viper.Viper, imageNames []string, clientset *kubernetes.Clientset, pool *redis.Pool) {
// Generate the various keys/names, some of which must be populated to the k8s job.
ids := strings.Split(profile, ".")
moID := ids[0]
proID := ids[1]
timestamp := strconv.Itoa(int(time.Now().Unix()))
jobName := timestamp + "." + moID + "." + proID
// Read the full profile from redis and access any keys that are important to deciding how MMFs are run.
profile, err := redisHelpers.Retrieve(ctx, pool, proID)
if err != nil {
// Note that we couldn't read the profile, and try to run the mmf with default settings.
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
"jobName": moID,
"profile": proID,
"containerImages": imageNames,
}).Warn("Failure retrieving full profile from statestorage - attempting to run default mmf container")
} else {
profileImageNames := gjson.Get(profile, cfg.GetString("jsonkeys.mmfImages"))
// Got profile from state storage, make sure it is valid
if gjson.Valid(profile) && profileImageNames.Exists() {
switch profileImageNames.Type.String() {
case "String":
// case: only one image name at this key.
imageNames = []string{profileImageNames.String()}
case "JSON":
// case: Array of image names at this key.
// TODO: support multiple MMFs per profile. Doing this will require that
// we generate a proposal ID and populate it to the env vars for each
// mmf, so they can each write a proposal for the same profile
// without stomping each other. (The evaluator would then be
// responsible for selecting the proposal to send to the backendapi)
imageNames = []string{}
// Pattern for iterating through a gjson.Result
// https://github.com/tidwall/gjson#iterate-through-an-object-or-array
profileImageNames.ForEach(func(_, name gjson.Result) bool {
// TODO: Swap these two lines when multiple image support is ready
// imageNames = append(imageNames, name.String())
imageNames = []string{name.String()}
return true
})
mmforcLog.WithFields(log.Fields{
"jobName": moID,
"profile": proID,
"containerImages": imageNames,
}).Warn("Profile specifies multiple MMF container images (NYI), running only the last image provided")
}
} else {
mmforcLog.WithFields(log.Fields{
"jobName": moID,
"profile": proID,
"containerImages": imageNames,
}).Warn("Profile JSON was invalid or did not contain a MMF container image name - attempting to run default mmf container")
}
}
mmforcLog.WithFields(log.Fields{
"jobName": moID,
"profile": proID,
"containerImage": imageNames,
}).Info("Attempting to create mmf k8s job")
// Create Jobs
// TODO: Handle returned errors
// TODO: Support multiple MMFs per profile.
// NOTE: For now, always send this an array of length 1 specifying the
// single MMF container image name you want to run, until multi-mmf
// profiles are supported. If you send it more than one, you will get
// undefined (but definitely degenerate) behavior!
for _, imageName := range imageNames {
// Kick off Job with this image name
err := submitJob(imageName, jobName, clientset)
if err != nil {
// Record failure & log
stats.Record(ctx, mmforcMmfFailures.M(1))
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
"jobName": moID,
"profile": proID,
"containerImage": imageName,
}).Error("MMF job submission failure!")
} else {
// Record Success
stats.Record(ctx, mmforcMmfs.M(1))
}
}
}
// evaluator generates a k8s job that runs the specified evaluator container image.
func evaluator(ctx context.Context, cfg *viper.Viper, imageName string, clientset *kubernetes.Clientset) {
// Generate the job name
timestamp := strconv.Itoa(int(time.Now().Unix()))
jobName := timestamp + ".evaluator"
mmforcLog.WithFields(log.Fields{
"jobName": jobName,
"containerImage": imageName,
}).Info("Attempting to create evaluator k8s job")
// Create Job
// TODO: Handle returned errors
err := submitJob(imageName, jobName, clientset)
if err != nil {
// Record failure & log
stats.Record(ctx, mmforcEvalFailures.M(1))
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
"jobName": jobName,
"containerImage": imageName,
}).Error("Evaluator job submission failure!")
} else {
// Record success
stats.Record(ctx, mmforcEvals.M(1))
}
}
// submitJob submits a job to kubernetes
func submitJob(imageName string, jobName string, clientset *kubernetes.Clientset) error {
job := generateJobSpec(jobName, imageName)
// Get the namespace for the job from the current namespace, otherwise, use default
namespace := os.Getenv("METADATA_NAMESPACE")
if len(namespace) == 0 {
namespace = apiv1.NamespaceDefault
}
// Submit kubernetes job
jobsClient := clientset.BatchV1().Jobs(namespace)
result, err := jobsClient.Create(job)
if err != nil {
// TODO: replace queued profiles if things go south
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Couldn't create k8s job!")
// Return early: result is nil on failure, so logging its name would panic.
return err
}
mmforcLog.WithFields(log.Fields{
"jobName": result.GetObjectMeta().GetName(),
}).Info("Created job.")
return nil
}
// generateJobSpec is a PoC to test that all the k8s job generation code works.
// In the future we should be decoding into the client object using one of the
// codecs on an input JSON, or piggyback on job templates.
// https://github.com/kubernetes/client-go/issues/193
// TODO: many fields in this job spec assume the container image is an mmf,
// but we also use this to launch evaluator containers; it should be updated
// to reflect that
func generateJobSpec(jobName string, imageName string) *batchv1.Job {
job := &batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
},
Spec: batchv1.JobSpec{
Completions: int32Ptr(1),
Template: apiv1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"app": "mmf", // TODO: have this reflect mmf vs evaluator
},
Annotations: map[string]string{
// Unused; here as an example.
// Later we can put params here and read them using the
// k8s downward API volumes
"profile": "exampleprofile",
},
},
Spec: apiv1.PodSpec{
RestartPolicy: "Never",
Containers: []apiv1.Container{
{
// TODO: have these reflect mmf vs evaluator
Name: "mmf",
Image: imageName,
ImagePullPolicy: "Always",
Env: []apiv1.EnvVar{
{
Name: "PROFILE",
Value: jobName,
},
},
},
},
},
},
},
}
return job
}
// readability functions used by generateJobSpec
func int32Ptr(i int32) *int32 { return &i }
func strPtr(i string) *string { return &i }
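The "profile" annotation set in `generateJobSpec` above is a placeholder for passing parameters to the job via Kubernetes downward API volumes. A minimal sketch of parsing the `key="value"` lines that a downward API annotations file contains; the helper name is hypothetical and not part of this codebase:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// parseDownwardAPIAnnotations parses the key="value" lines that a downward
// API volume writes for pod annotations. Hypothetical helper for illustration.
func parseDownwardAPIAnnotations(data string) map[string]string {
	out := map[string]string{}
	scanner := bufio.NewScanner(strings.NewReader(data))
	for scanner.Scan() {
		k, v, ok := strings.Cut(scanner.Text(), "=")
		if !ok {
			continue
		}
		// Downward API values are double-quoted; unquote when possible.
		if unquoted, err := strconv.Unquote(v); err == nil {
			v = unquoted
		}
		out[k] = v
	}
	return out
}

func main() {
	// Mirrors the example annotation set in generateJobSpec.
	annotations := parseDownwardAPIAnnotations(`profile="exampleprofile"`)
	fmt.Println(annotations["profile"])
}
```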


@ -1 +0,0 @@
../../config/matchmaker_config.json


@ -1,128 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"go.opencensus.io/stats"
"go.opencensus.io/stats/view"
"go.opencensus.io/tag"
)
// OpenCensus Measures. These are exported as metrics to your monitoring system
// https://godoc.org/go.opencensus.io/stats
//
// When making opencensus stats, the 'name' param, with forward slashes changed
// to underscores, is appended to the 'namespace' value passed to the
// prometheus exporter to become the Prometheus metric name. You can also look
// into having Prometheus rewrite your metric names on scrape.
//
// For example:
// - defining the prometheus export namespace "open_match" when instantiating the exporter:
// pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "open_match"})
// - and naming the request counter "backend/requests_total":
// MGrpcRequests := stats.Int64("backendapi/requests_total", ...
// - results in the prometheus metric name:
// open_match_backendapi_requests_total
// - [note] when using opencensus views to aggregate the metrics into
// distribution buckets and such, multiple metrics
// will be generated with appended types ("<metric>_bucket",
// "<metric>_count", "<metric>_sum", for example)
//
// In addition, OpenCensus stats propagated to Prometheus have the following
// auto-populated labels pulled from kubernetes, which we should avoid
// duplicating to prevent overloading them and having to use the HonorLabels
// param in Prometheus.
//
// - Information about the k8s pod being monitored:
// "pod" (name of the monitored k8s pod)
// "namespace" (k8s namespace of the monitored pod)
// - Information about how prometheus is gathering the metrics:
// "instance" (IP and port number being scraped by prometheus)
// "job" (name of the k8s service being scraped by prometheus)
// "endpoint" (name of the k8s port in the k8s service being scraped by prometheus)
//
var (
// Logging instrumentation
// There's no need to record this measurement directly if you use
// the logrus hook provided in metrics/helper.go after instantiating the
// logrus instance in your application code.
// https://godoc.org/github.com/sirupsen/logrus#LevelHooks
MmforcLogLines = stats.Int64("mmforc/logs_total", "Number of Backend API lines logged", "1")
// Counting operations
mmforcMmfs = stats.Int64("mmforc/mmfs_total", "Number of mmf jobs submitted to kubernetes", "1")
mmforcMmfFailures = stats.Int64("mmforc/mmf/failures_total", "Number of failures attempting to submit mmf jobs to kubernetes", "1")
mmforcEvals = stats.Int64("mmforc/evaluators_total", "Number of evaluator jobs submitted to kubernetes", "1")
mmforcEvalFailures = stats.Int64("mmforc/evaluator/failures_total", "Number of failures attempting to submit evaluator jobs to kubernetes", "1")
)
var (
// KeyEvalReason is used to tag which code path caused the evaluator to run.
KeyEvalReason, _ = tag.NewKey("evalReason")
// KeySeverity is used to tag the severity of a log message.
KeySeverity, _ = tag.NewKey("severity")
)
var (
// Latency in buckets:
// [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
latencyDistribution = view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000)
)
// Package metrics provides some convenience views.
// You need to register the views for the data to actually be collected.
// Note: The OpenCensus View 'Description' is exported to Prometheus as the HELP string.
// Note: If you get a "Failed to export to Prometheus: inconsistent label
// cardinality" error, chances are you forgot to set the tags specified in the
// view for a given measure when you tried to do a stats.Record()
var (
mmforcMmfsCountView = &view.View{
Name: "mmforc/mmfs",
Measure: mmforcMmfs,
Description: "The number of mmf jobs submitted to kubernetes",
Aggregation: view.Count(),
}
mmforcMmfFailuresCountView = &view.View{
Name: "mmforc/mmf/failures",
Measure: mmforcMmfFailures,
Description: "The number of mmf jobs that failed submission to kubernetes",
Aggregation: view.Count(),
}
mmforcEvalsCountView = &view.View{
Name: "mmforc/evaluators",
Measure: mmforcEvals,
Description: "The number of evaluator jobs submitted to kubernetes",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyEvalReason},
}
mmforcEvalFailuresCountView = &view.View{
Name: "mmforc/evaluator/failures",
Measure: mmforcEvalFailures,
Description: "The number of evaluator jobs that failed submission to kubernetes",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyEvalReason},
}
)
// DefaultMmforcViews are the default matchmaker orchestrator OpenCensus measure views.
var DefaultMmforcViews = []*view.View{
mmforcEvalsCountView,
mmforcMmfFailuresCountView,
mmforcMmfsCountView,
mmforcEvalFailuresCountView,
}
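The naming convention described in the comment block above (the stat name, with forward slashes changed to underscores, appended to the exporter namespace) can be sketched as a small helper. This is only an illustration of the mapping, not the exporter's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// prometheusMetricName mimics how the OpenCensus Prometheus exporter derives
// a metric name: exporter namespace plus the stat name, with forward slashes
// replaced by underscores. Illustrative sketch only.
func prometheusMetricName(namespace, statName string) string {
	return namespace + "_" + strings.ReplaceAll(statName, "/", "_")
}

func main() {
	// The example from the comment: "backendapi/requests_total" under the
	// "open_match" namespace becomes "open_match_backendapi_requests_total".
	fmt.Println(prometheusMetricName("open_match", "backendapi/requests_total"))
}
```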

cmd/query/query.go Normal file

@ -0,0 +1,25 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the query service for Open Match.
package main
import (
"open-match.dev/open-match/internal/app/query"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("query", query.BindService)
}

cmd/scale-backend/main.go Normal file

@ -0,0 +1,24 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"open-match.dev/open-match/examples/scale/backend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("scale", backend.BindService)
}


@ -0,0 +1,22 @@
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
scaleEvaluator "open-match.dev/open-match/examples/scale/evaluator"
)
func main() {
scaleEvaluator.Run()
}


@ -0,0 +1,24 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"open-match.dev/open-match/examples/scale/frontend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("scale", frontend.BindService)
}

cmd/scale-mmf/main.go Normal file

@ -0,0 +1,23 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
scaleMmf "open-match.dev/open-match/examples/scale/mmf"
)
func main() {
scaleMmf.Run()
}

cmd/swaggerui/config.json Normal file

@ -0,0 +1,10 @@
{
"urls": [
{"name": "Frontend", "url": "https://open-match.dev/api/v0.0.0-dev/frontend.swagger.json"},
{"name": "Backend", "url": "https://open-match.dev/api/v0.0.0-dev/backend.swagger.json"},
{"name": "Query", "url": "https://open-match.dev/api/v0.0.0-dev/query.swagger.json"},
{"name": "MatchFunction", "url": "https://open-match.dev/api/v0.0.0-dev/matchfunction.swagger.json"},
{"name": "Synchronizer", "url": "https://open-match.dev/api/v0.0.0-dev/synchronizer.swagger.json"},
{"name": "Evaluator", "url": "https://open-match.dev/api/v0.0.0-dev/evaluator.swagger.json"}
]
}


@ -0,0 +1,24 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is a simple webserver for hosting Open Match Swagger UI.
package main
import (
"open-match.dev/open-match/internal/app/swaggerui"
)
func main() {
swaggerui.RunApplication()
}


@ -0,0 +1,25 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the synchronizer service for Open Match.
package main
import (
"open-match.dev/open-match/internal/app/synchronizer"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("synchronizer", synchronizer.BindService)
}


@ -1,113 +0,0 @@
/*
Package config contains convenience functions for reading and managing viper configs.
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"go.opencensus.io/stats"
"go.opencensus.io/stats/view"
)
var (
// Logrus structured logging setup
logFields = log.Fields{
"app": "openmatch",
"component": "config",
"caller": "config/main.go",
}
cfgLog = log.WithFields(logFields)
// Map of the config file keys to environment variable names populated by
// k8s into pods. Examples of redis-related env vars as written by k8s
// REDIS_SENTINEL_PORT_6379_TCP=tcp://10.55.253.195:6379
// REDIS_SENTINEL_PORT=tcp://10.55.253.195:6379
// REDIS_SENTINEL_PORT_6379_TCP_ADDR=10.55.253.195
// REDIS_SENTINEL_SERVICE_PORT=6379
// REDIS_SENTINEL_PORT_6379_TCP_PORT=6379
// REDIS_SENTINEL_PORT_6379_TCP_PROTO=tcp
// REDIS_SENTINEL_SERVICE_HOST=10.55.253.195
envMappings = map[string]string{
"redis.hostname": "REDIS_SENTINEL_SERVICE_HOST",
"redis.port": "REDIS_SENTINEL_SERVICE_PORT",
"redis.pool.maxIdle": "REDIS_POOL_MAXIDLE",
"redis.pool.maxActive": "REDIS_POOL_MAXACTIVE",
"redis.pool.idleTimeout": "REDIS_POOL_IDLETIMEOUT",
"debug": "DEBUG",
}
// Viper config management setup
cfg = viper.New()
// OpenCensus
cfgVarCount = stats.Int64("config/vars_total", "Number of config vars read during initialization", "1")
// CfgVarCountView is the Open Census view for the cfgVarCount measure.
CfgVarCountView = &view.View{
Name: "config/vars_total",
Measure: cfgVarCount,
Description: "The number of config vars read during initialization",
Aggregation: view.Count(),
}
)
// Read reads a config file into a viper.Viper instance and associates environment vars defined in
// config.envMappings
func Read() (*viper.Viper, error) {
// Viper config management initialization
cfg.SetConfigType("json")
cfg.SetConfigName("matchmaker_config")
cfg.AddConfigPath(".")
// Read in config file using Viper
err := cfg.ReadInConfig()
if err != nil {
cfgLog.WithFields(log.Fields{
"error": err.Error(),
}).Fatal("Fatal error reading config file")
}
// Bind these env vars to viper config vars.
// https://github.com/spf13/viper#working-with-environment-variables
// One important thing to recognize when working with env variables is
// that the value will be re-read each time it is accessed; viper does not
// fix the value when BindEnv is called.
for cfgKey, envVar := range envMappings {
err = cfg.BindEnv(cfgKey, envVar)
if err != nil {
cfgLog.WithFields(log.Fields{
"configkey": cfgKey,
"envvar": envVar,
"error": err.Error(),
"module": "config",
}).Warn("Unable to bind environment var as a config variable")
} else {
cfgLog.WithFields(log.Fields{
"configkey": cfgKey,
"envvar": envVar,
"module": "config",
}).Info("Binding environment var as a config variable")
}
}
return cfg, err
}


@ -1,53 +0,0 @@
{
"debug": true,
"api": {
"backend": {
"port": 50505
},
"frontend": {
"port": 50504
}
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
"reportingPeriod": 5
},
"queues": {
"profiles": {
"name": "profileq",
"pullCount": 100
},
"proposals": {
"name": "proposalq"
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/matchmaker-dev-201405/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/matchmaker-dev-201405/openmatch-mmf",
"tag": "dev"
}
},
"redis": {
"user": "",
"password": "",
"pool" : {
"maxIdle" : 3,
"maxActive" : 0,
"idleTimeout" : 60
}
},
"jsonkeys": {
"mmfImages": "imagename",
"roster": "profile.roster",
"connstring": "connstring"
},
"interval": {
"evaluator": 10,
"resultsTimeout": 30
}
}


@ -1,53 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-backendapi",
"labels":{
"app":"openmatch",
"component": "backend"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "backend"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "backend"
}
},
"spec":{
"containers":[
{
"name":"om-backend",
"image":"gcr.io/matchmaker-dev-201405/openmatch-backendapi:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "grpc",
"containerPort": 50505
},
{
"name": "metrics",
"containerPort": 9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
}
}
]
}
}
}
}


@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-backendapi"
},
"spec": {
"selector": {
"app": "openmatch",
"component": "backend"
},
"ports": [
{
"protocol": "TCP",
"port": 50505,
"targetPort": "grpc"
}
]
}
}


@ -1,53 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-frontendapi",
"labels":{
"app":"openmatch",
"component": "frontend"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "frontend"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "frontend"
}
},
"spec":{
"containers":[
{
"name":"om-frontendapi",
"image":"gcr.io/matchmaker-dev-201405/openmatch-frontendapi:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "grpc",
"containerPort": 50504
},
{
"name": "metrics",
"containerPort": 9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
}
}
]
}
}
}
}


@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-frontendapi"
},
"spec": {
"selector": {
"app": "openmatch",
"component": "frontend"
},
"ports": [
{
"protocol": "TCP",
"port": 50504,
"targetPort": "grpc"
}
]
}
}


@ -1,27 +0,0 @@
{
"apiVersion": "monitoring.coreos.com/v1",
"kind": "ServiceMonitor",
"metadata": {
"name": "openmatch-metrics",
"labels": {
"app": "openmatch",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "openmatch",
"agent": "opencensus",
"destination": "prometheus"
}
},
"endpoints": [
{
"port": "metrics",
"interval": "10s"
}
]
}
}


@ -1,78 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-frontend-metrics",
"labels": {
"app": "openmatch",
"component": "frontend",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"app": "openmatch",
"component": "frontend"
},
"ports": [
{
"name": "metrics",
"targetPort": 9555,
"port": 19555
}
]
}
}
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-backend-metrics",
"labels": {
"app": "openmatch",
"component": "backend",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"app": "openmatch",
"component": "backend"
},
"ports": [
{
"name": "metrics",
"targetPort": 9555,
"port": 29555
}
]
}
}
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-mmforc-metrics",
"labels": {
"app": "openmatch",
"component": "mmforc",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"app": "openmatch",
"component": "mmforc"
},
"ports": [
{
"name": "metrics",
"targetPort": 9555,
"port": 39555
}
]
}
}


@ -1,59 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-mmforc",
"labels":{
"app":"openmatch",
"component": "mmforc"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "mmforc"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "mmforc"
}
},
"spec":{
"containers":[
{
"name":"om-mmforc",
"image":"gcr.io/matchmaker-dev-201405/openmatch-mmforc:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "metrics",
"containerPort":9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
},
"env":[
{
"name":"METADATA_NAMESPACE",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.namespace"
}
}
}
]
}
]
}
}
}
}


@ -1,19 +0,0 @@
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRoleBinding",
"metadata": {
"name": "mmf-sa"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "default",
"namespace": "default"
}
],
"roleRef": {
"kind": "ClusterRole",
"name": "cluster-admin",
"apiGroup": "rbac.authorization.k8s.io"
}
}


@ -1,20 +0,0 @@
{
"apiVersion": "monitoring.coreos.com/v1",
"kind": "Prometheus",
"metadata": {
"name": "prometheus"
},
"spec": {
"serviceMonitorSelector": {
"matchLabels": {
"app": "openmatch"
}
},
"serviceAccountName": "prometheus",
"resources": {
"requests": {
"memory": "400Mi"
}
}
}
}


@ -1,266 +0,0 @@
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRoleBinding",
"metadata": {
"name": "prometheus-operator"
},
"roleRef": {
"apiGroup": "rbac.authorization.k8s.io",
"kind": "ClusterRole",
"name": "prometheus-operator"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "prometheus-operator",
"namespace": "default"
}
]
}
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "prometheus"
}
}
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRole",
"metadata": {
"name": "prometheus"
},
"rules": [
{
"apiGroups": [
""
],
"resources": [
"nodes",
"services",
"endpoints",
"pods"
],
"verbs": [
"get",
"list",
"watch"
]
},
{
"apiGroups": [
""
],
"resources": [
"configmaps"
],
"verbs": [
"get"
]
},
{
"nonResourceURLs": [
"/metrics"
],
"verbs": [
"get"
]
}
]
}
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRoleBinding",
"metadata": {
"name": "prometheus"
},
"roleRef": {
"apiGroup": "rbac.authorization.k8s.io",
"kind": "ClusterRole",
"name": "prometheus"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "prometheus",
"namespace": "default"
}
]
}
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRole",
"metadata": {
"name": "prometheus-operator"
},
"rules": [
{
"apiGroups": [
"extensions"
],
"resources": [
"thirdpartyresources"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
"apiextensions.k8s.io"
],
"resources": [
"customresourcedefinitions"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
"monitoring.coreos.com"
],
"resources": [
"alertmanagers",
"prometheuses",
"prometheuses/finalizers",
"servicemonitors"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
"apps"
],
"resources": [
"statefulsets"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
""
],
"resources": [
"configmaps",
"secrets"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
""
],
"resources": [
"pods"
],
"verbs": [
"list",
"delete"
]
},
{
"apiGroups": [
""
],
"resources": [
"services",
"endpoints"
],
"verbs": [
"get",
"create",
"update"
]
},
{
"apiGroups": [
""
],
"resources": [
"nodes"
],
"verbs": [
"list",
"watch"
]
},
{
"apiGroups": [
""
],
"resources": [
"namespaces"
],
"verbs": [
"list"
]
}
]
}
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "prometheus-operator"
}
}
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"labels": {
"k8s-app": "prometheus-operator"
},
"name": "prometheus-operator"
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"k8s-app": "prometheus-operator"
}
},
"spec": {
"containers": [
{
"args": [
"--kubelet-service=kube-system/kubelet",
"--config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1"
],
"image": "quay.io/coreos/prometheus-operator:v0.17.0",
"name": "prometheus-operator",
"ports": [
{
"containerPort": 8080,
"name": "http"
}
],
"resources": {
"limits": {
"cpu": "200m",
"memory": "100Mi"
},
"requests": {
"cpu": "100m",
"memory": "50Mi"
}
}
}
],
"securityContext": {
"runAsNonRoot": true,
"runAsUser": 65534
},
"serviceAccountName": "prometheus-operator"
}
}
}
}


@ -1,22 +0,0 @@
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "prometheus"
},
"spec": {
"type": "NodePort",
"ports": [
{
"name": "web",
"nodePort": 30900,
"port": 9090,
"protocol": "TCP",
"targetPort": "web"
}
],
"selector": {
"prometheus": "prometheus"
}
}
}


@ -1,38 +0,0 @@
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "redis-master"
},
"spec": {
"selector": {
"matchLabels": {
"app": "mm",
"tier": "storage"
}
},
"replicas": 1,
"template": {
"metadata": {
"labels": {
"app": "mm",
"tier": "storage"
}
},
"spec": {
"containers": [
{
"name": "redis-master",
"image": "redis:4.0.11",
"ports": [
{
"name": "redis",
"containerPort": 6379
}
]
}
]
}
}
}
}


@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "redis-sentinel"
},
"spec": {
"selector": {
"app": "mm",
"tier": "storage"
},
"ports": [
{
"protocol": "TCP",
"port": 6379,
"targetPort": "redis"
}
]
}
}

doc.go Normal file

@ -0,0 +1,16 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package openmatch provides flexible, extensible, and scalable video game matchmaking.
package openmatch // import "open-match.dev/open-match"


@ -1,109 +1,162 @@
# Compiling from source
# Development Guide
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild_<name>.yaml` files for each component in the repository root.
Open Match is a collection of [Go](https://golang.org/) gRPC services that run
within [Kubernetes](https://kubernetes.io).
Note: Although Google Cloud Platform includes some free usage, you may incur charges following this guide if you use GCP products.
## Install Prerequisites
**This project has not completed a first-line security audit, and there are definitely going to be some service accounts that are too permissive. This should be fine for testing/development in a local environment, but absolutely should not be used as-is in a production environment.**
To build Open Match you'll need the following applications installed.
## Example of building using Google Cloud Builder
* [Git](https://git-scm.com/downloads)
* [Go](https://golang.org/doc/install)
* Make (Mac: install [XCode](https://itunes.apple.com/us/app/xcode/id497799835))
* [Docker](https://docs.docker.com/install/) including the
[post-install steps](https://docs.docker.com/install/linux/linux-postinstall/).
The [Quickstart for Docker](https://cloud.google.com/cloud-build/docs/quickstart-docker) guide explains how to set up a project, enable billing, enable Cloud Build, and install the Cloud SDK if you haven't done these things before. Once you get to 'Preparing source files' you are ready to continue with the steps below.
Optional Software
* Clone this repo to a local machine or Google Cloud Shell session, and cd into it.
* Run the following one-line bash script to compile all the images for the first time, and push them to your gcr.io registry. You must enable the [Container Registry API](https://console.cloud.google.com/flows/enableapi?apiid=containerregistry.googleapis.com) first.
```
for dfile in $(ls Dockerfile.*); do gcloud builds submit --config cloudbuild_${dfile##*.}.yaml; done
* [Visual Studio Code](https://code.visualstudio.com/Download) as an IDE.
Vim and Emacs work too.
* [VirtualBox](https://www.virtualbox.org/wiki/Downloads) recommended for
[Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/).
On Debian-based Linux you can install all the required packages (except Go) by
running:
```bash
sudo apt-get update
sudo apt-get install -y -q make google-cloud-sdk git unzip tar
```
## Example of starting a GKE cluster
*It's recommended that you install Go using their instructions because package
managers tend to lag behind the latest Go releases.*
A cluster with mostly default settings will work for this development guide. In the Cloud SDK command below we start it with machines that have 4 vCPUs. Alternatively, you can use the 'Create Cluster' button in [Google Cloud Console]("https://console.cloud.google.com/kubernetes").
## Get the Code
```
gcloud container clusters create --machine-type n1-standard-4 open-match-dev-cluster --zone <ZONE>
```bash
# Create a directory for the project.
mkdir -p $HOME/workspace
cd $HOME/workspace
# Download the source code.
git clone https://github.com/googleforgames/open-match.git
cd open-match
# Print the help for the Makefile commands.
make
```
If you don't know which zone to launch the cluster in (`<ZONE>`), you can list all available zones by running the following command.
*Typically for contributing you'll want to
[create a fork](https://help.github.com/en/articles/fork-a-repo) and use that
but for the purposes of this guide we'll be using upstream/master.*
```
gcloud compute zones list
## Building code and images
```bash
# Reset workspace
make clean
# Run tests
make test
# Build all the images.
make build-images -j$(nproc)
# Push images to gcr.io (requires Google Cloud SDK installed)
make push-images -j$(nproc)
# Push images to Docker Hub
make REGISTRY=mydockerusername push-images -j$(nproc)
# Generate Kubernetes installation YAML files (Note that the trailing '/' is needed here)
make install/yaml/
```
## Configuration
_**-j$(nproc)** is a flag to tell make to parallelize the commands based on
the number of CPUs on your machine._
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration. To this end, there is a single centralized config file located in `<REPO_ROOT>/config/` which is symlinked into each component's subdirectory for convenience when building locally.
## Deploying to Kubernetes
**NOTE** The 'defaultImages' container image names in the config file will need to be updated with **your container registry URI**. Here's an example Linux command to do this (replace YOUR_REGISTRY_URI with the appropriate location in your environment; it should be run from the config directory):
```
sed -i 's|gcr.io/matchmaker-dev-201405|YOUR_REGISTRY_URI|g' matchmaker_config.json
```
On macOS the `-i` flag requires a suffix argument and creates backup files when changing the original file in place. You can use the following command, and then delete the `*.backup` files afterwards if you don't need them anymore:
```
sed -i'.backup' -e 's|gcr.io/matchmaker-dev-201405|YOUR_REGISTRY_URI|g' matchmaker_config.json
```
If you are using the gcr.io registry on GCP, the default URI is `gcr.io/<PROJECT_NAME>`.
Kubernetes comes in many flavors and Open Match can be used in any of them.
_We support GKE ([setup guide](gcloud.md)), Minikube, and Kubernetes in Docker (KinD) in the Makefile.
As long as kubectl is configured to talk to your Kubernetes cluster as the
default context, the Makefile will honor that._

We plan to replace the local config file with a Kubernetes-managed config with dynamic reloading when development time allows. Pull requests are welcome!
## Running Open Match in a development environment

```bash
# Step 1: Create a Kubernetes (k8s) cluster
# KinD cluster: make create-kind-cluster/delete-kind-cluster
# GKE cluster: make create-gke-cluster/delete-gke-cluster
# or create a local Minikube cluster
make create-gke-cluster
# Step 2: Build and push Open Match images to gcr.io
make push-images -j$(nproc)
# Step 3: Install Open Match in the cluster.
make install-chart
# Create a proxy to Open Match pods so that you can access them locally.
# This command consumes a terminal window that you can kill via Ctrl+C.
# You can run `curl -X POST http://localhost:51504/v1/frontend/tickets` to send
# a CreateTicket request to the frontend service in the cluster.
# Then try visiting http://localhost:3000/ and view the graphs.
make proxy
# Teardown the install
make delete-chart
```

The rest of this guide assumes you have a cluster (the examples use GKE, but they work on any cluster with a little tweaking), that kubectl is configured to administer that cluster, and that you've built all the Docker container images described by the `Dockerfiles` in the repository root directory and given them the docker tag 'dev'. It also assumes you are in the `<REPO_ROOT>/deployments/k8s/` directory.

**NOTE** Kubernetes resources that use container images will need to be updated with **your container registry URI**. Here's an example command on Linux to do this (replace YOUR_REGISTRY_URI with the appropriate location for your environment):

```
sed -i 's|gcr.io/matchmaker-dev-201405|YOUR_REGISTRY_URI|g' *deployment.json
```

On macOS, `sed -i` requires a backup suffix and creates backup files when changing the original file in place. You can use the following command, then delete the `*.backup` files afterwards if you don't need them:

```
sed -i'.backup' -e 's|gcr.io/matchmaker-dev-201405|YOUR_REGISTRY_URI|g' *deployment.json
```

If you are using the gcr.io registry on GCP, the default URI is `gcr.io/<PROJECT_NAME>`.
* Start a copy of redis and a service in front of it:
```
kubectl apply -f redis_deployment.json
kubectl apply -f redis_service.json
```
* Run the **core components**: the frontend API, the backend API, and the matchmaker function orchestrator (MMFOrc).
**NOTE** In order to kick off jobs, the matchmaker function orchestrator needs a service account with permission to administer the cluster. This is quite permissive and should be narrowed to the minimum required permissions before launch, but it is acceptable for closed testing:
```
kubectl apply -f backendapi_deployment.json
kubectl apply -f backendapi_service.json
kubectl apply -f frontendapi_deployment.json
kubectl apply -f frontendapi_service.json
kubectl apply -f mmforc_deployment.json
kubectl apply -f mmforc_serviceaccount.json
```
* [optional, but recommended] Configure the OpenCensus metrics services:
```
kubectl apply -f metrics_services.json
```
* [optional] On GKE, applying the Kubernetes Prometheus Operator resource definition files without a cluster-admin rolebinding fails; run the following command first. See https://github.com/coreos/prometheus-operator/issues/357
```
kubectl create clusterrolebinding projectowner-cluster-admin-binding --clusterrole=cluster-admin --user=<GCP_ACCOUNT>
```
* [optional, uses beta software] If using Prometheus as your metrics gathering backend, configure the [Prometheus Kubernetes Operator](https://github.com/coreos/prometheus-operator):
```
kubectl apply -f prometheus_operator.json
kubectl apply -f prometheus.json
kubectl apply -f prometheus_service.json
kubectl apply -f metrics_servicemonitor.json
```

## Iterating
While iterating on the project, you may need to:
1. Install/Run everything
2. Make some code changes
3. Make sure the changes compile by running `make test`
4. Build and push Docker images to your personal registry by running `make push-images -j$(nproc)`
5. Deploy the code change by running `make install-chart`
6. Verify it's working by [looking at the logs](#accessing-logs) or looking at the monitoring dashboard by running `make proxy-grafana`
7. Tear down Open Match by running `make delete-chart`

## Accessing logs
To look at Open Match core services' logs, run:
```bash
# Replace open-match-frontend with the service name that you would like to access
kubectl logs -n open-match svc/open-match-frontend
```
## API References
While integrating with Open Match you may want to understand its API surface concepts or interact with it and get a feel for how it works.
The APIs are defined in `proto` format under the `api/` folder, with references available at [open-match.dev](https://open-match.dev/site/docs/reference/api/).
You can also run `make proxy-ui` to expose the Swagger UI for Open Match locally on your computer after [deploying it to Kubernetes](#deploying-to-kubernetes), then go to http://localhost:51500 to view the REST APIs and interactively call Open Match.

By default you will be talking to the frontend server, but you can change the target API URL to any of the following:
* api/frontend.swagger.json
* api/backend.swagger.json
* api/synchronizer.swagger.json
* api/query.swagger.json
For a more current list, refer to the `api/` directory of this repository. Note that `matchfunction.swagger.json` is not supported.
## IDE Support
Open Match is a standard Go project, so any IDE that understands Go should
work. We use [Go Modules](https://github.com/golang/go/wiki/Modules), which is a
relatively new feature in Go, so make sure the IDE you are using was updated
around Summer 2019 or later. The latest version of
[Visual Studio Code](https://code.visualstudio.com/download) supports it.
If your IDE is too old you can create a
[Go workspace](https://golang.org/doc/code.html#Workspaces).
```bash
# Create the Go workspace in $HOME/workspace/ directory.
mkdir -p $HOME/workspace/src/open-match.dev/
cd $HOME/workspace/src/open-match.dev/
# Download the source code.
git clone https://github.com/googleforgames/open-match.git
cd open-match
export GOPATH=$HOME/workspace/
```
You should now be able to see the core component pods running using `kubectl get pods`, and the core component metrics in the Prometheus Web UI by running `kubectl port-forward <PROMETHEUS_POD_NAME> 9090:9090` in your local shell, then opening http://localhost:9090/targets in your browser to see which services Prometheus is collecting from.
### End-to-End testing

**Note** The programs provided below are just bare-bones manual testing programs with no automation and no claim of code coverage. The sparseness of this part of the documentation is because we expect to discard all of these tools and write a fully automated end-to-end test suite and a collection of load testing tools, with extensive stats output and tracing capabilities, before the 1.0 release. Tracing has to be integrated first, which will be in an upcoming release.

In the end: *caveat emptor*. These tools all work and are quite small, and as such are fairly easy for developers to understand by looking at the code and logging output. They are provided as-is, just as a reference point for how to begin experimenting with Open Match integrations.

* `examples/frontendclient` is a fake client for the Frontend API. It pretends to be a real game client connecting to Open Match and requests a game, then dumps out the connection string it receives. Note that it doesn't actually test the return path by looking for arbitrary results from your matchmaking function; it pauses and tells you the name of a key to set a connection string in directly using a redis-cli client.
* `examples/backendclient` is a fake client for the Backend API. It pretends to be a dedicated game server backend connecting to Open Match and sending in a match profile to fill. Once it receives a match object with a roster, it will also issue a call to assign the player IDs, and gives an example connection string. If it never seems to get a match, make sure you're adding players to the pool using the other two tools.
* `test/cmd/client` is a (VERY) basic client load simulation tool. It does **not** test the Frontend API; in fact, it ignores it and writes players directly to state storage on its own. It doesn't do anything but loop endlessly, writing players into state storage so you can test your backend integration, and run your custom MMFs and Evaluators (which are only triggered when there are players in the pool).

### Resources

* [Prometheus Operator spec](https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md)

## Pull Requests
If you want to submit a Pull Request, `make presubmit` can catch most of the issues your change might run into.
Our [continuous integration](https://console.cloud.google.com/cloud-build/builds?project=open-match-build)
runs against all PRs. In order to see your build results you'll need to
become a member of
[open-match-discuss@googlegroups.com](https://groups.google.com/forum/#!forum/open-match-discuss).

@ -1 +0,0 @@
**"I notice that all the APIs use gRPC. What if I want to make my calls using REST, or via a Websocket?"** (gateway/proxy OSS projects are available)

@ -0,0 +1,61 @@
# v{version}
This is the {version} release of Open Match.
Check the [official website](https://open-match.dev) for details on features, installation and usage.
Release Notes
-------------
**Feature Highlights**
{ highlight here the most notable changes and themes at a high level}
**Breaking Changes**
{ detail any behaviors or API surfaces which worked in a previous version which will no longer work correctly }
> Future releases towards 1.0.0 may still have breaking changes.
**Security Fixes**
{ list any changes which fix vulnerabilities in open match }
**Enhancements**
{ go into details on improvements and changes }
Usage Requirements
-------------
* Tested against Kubernetes Version { a list of k8s versions}
* Golang Version = v{ required golang version }
Images
------
```bash
# Servers
docker pull gcr.io/open-match-public-images/openmatch-backend:{version}
docker pull gcr.io/open-match-public-images/openmatch-frontend:{version}
docker pull gcr.io/open-match-public-images/openmatch-query:{version}
docker pull gcr.io/open-match-public-images/openmatch-synchronizer:{version}
# Evaluators
docker pull gcr.io/open-match-public-images/openmatch-evaluator-go-simple:{version}
# Sample Match Making Functions
docker pull gcr.io/open-match-public-images/openmatch-mmf-go-soloduel:{version}
docker pull gcr.io/open-match-public-images/openmatch-mmf-go-pool:{version}
# Test Clients
docker pull gcr.io/open-match-public-images/openmatch-demo-first-match:{version}
```
_This software is currently alpha, and subject to change. Not to be used in production systems._
Installation
------------
* Follow [Open Match Installation Guide](https://open-match.dev/site/docs/installation/) to setup Open Match in your cluster.
API Definitions
------------
- gRPC API Definitions are available in [API references](https://open-match.dev/site/docs/reference/api/) - _Preferred_
- HTTP API Definitions are available in [SwaggerUI](https://open-match.dev/site/swaggerui/index.html)

@ -0,0 +1,24 @@
#!/bin/bash
# Usage:
# ./release.sh 0.5.0-82d034f unstable
# ./release.sh [SOURCE VERSION] [DEST VERSION]
# This is a basic shell script to publish the latest Open Match images.
# There are no guardrails yet, so use with care.
# Purge Images
# docker rmi $(docker images -a -q)
# 0.4.0-82d034f
SOURCE_VERSION=$1
DEST_VERSION=$2
SOURCE_PROJECT_ID=open-match-build
DEST_PROJECT_ID=open-match-public-images
IMAGE_NAMES=$(make list-images)

for name in $IMAGE_NAMES
do
  source_image=gcr.io/$SOURCE_PROJECT_ID/openmatch-$name:$SOURCE_VERSION
  dest_image=gcr.io/$DEST_PROJECT_ID/openmatch-$name:$DEST_VERSION
  docker pull "$source_image"
  docker tag "$source_image" "$dest_image"
  docker push "$dest_image"
done

docs/hugo_apiheader.txt Normal file
@ -0,0 +1,7 @@
---
title: "Open Match API References"
linkTitle: "Open Match API References"
weight: 2
description:
  This document provides API references for Open Match services.
---

@ -1 +0,0 @@
During alpha, please do not use Open Match as-is in production. To develop against it, please see the [development guide](development.md).

@ -1,7 +0,0 @@
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/backendclient
COPY ./ ./
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o backendclient .
CMD ["./backendclient"]

@ -1,8 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '--tag=gcr.io/$PROJECT_ID/openmatch-backendclient:dev',
    '.'
  ]
images: ['gcr.io/$PROJECT_ID/openmatch-backendclient:dev']

@ -1,168 +0,0 @@
/*
Stubbed backend api client. This should be run within a k8s cluster, and
assumes that the backend api is up and can be accessed through a k8s service
named om-backendapi.

Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"io"
	"io/ioutil"
	"log"
	"net"
	"os"
	"strings"

	backend "github.com/GoogleCloudPlatform/open-match/examples/backendclient/proto"
	"github.com/tidwall/gjson"
	"google.golang.org/grpc"
)

func bytesToString(data []byte) string {
	return string(data[:])
}

func ppJSON(s string) {
	buf := new(bytes.Buffer)
	json.Indent(buf, []byte(s), "", "  ")
	log.Println(buf)
}

func main() {
	// Read the profile
	filename := "profiles/testprofile.json"
	if len(os.Args) > 1 {
		filename = os.Args[1]
	}
	log.Println("Reading profile from", filename)
	jsonFile, err := os.Open(filename)
	if err != nil {
		panic("Failed to open file specified at command line. Did you forget to specify one?")
	}
	defer jsonFile.Close()

	// Parse the JSON data and remove extra whitespace before sending it to the backend.
	jsonData, _ := ioutil.ReadAll(jsonFile) // this reads as a byte array
	buffer := new(bytes.Buffer)             // convert byte array to buffer to send to json.Compact()
	if err := json.Compact(buffer, jsonData); err != nil {
		log.Println(err)
	}

	jsonProfile := buffer.String()
	log.Println("Requesting matches that fit profile:")
	ppJSON(jsonProfile)
	//jsonProfile := bytesToString(jsonData)

	// Connect gRPC client
	ip, err := net.LookupHost("om-backendapi")
	if err != nil {
		panic(err)
	}
	conn, err := grpc.Dial(ip[0]+":50505", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("failed to connect: %s", err.Error())
	}
	client := backend.NewAPIClient(conn)
	log.Println("API client connected to", ip[0]+":50505")

	// Test CreateMatch
	p := &backend.Profile{
		Id: "test-dm-usc1f",
		// Make a stub debug hostname from the current time
		Properties: jsonProfile,
	}

	//log.Printf("Looking for matches for profile for the next 5 seconds:")
	log.Printf("Establishing HTTPv2 stream...")
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	for {
		log.Println("Attempting to send ListMatches call")
		stream, err := client.ListMatches(ctx, p)
		if err != nil {
			log.Fatalf("Attempting to open stream for ListMatches(_) = _, %v", err)
		}
		log.Printf("Waiting for matches...")

		//for i := 0; i < 2; i++ {
		for {
			match, err := stream.Recv()
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatalf("Error reading stream for ListMatches(_) = _, %v", err)
			}

			log.Println("Received match:")
			ppJSON(match.Properties)

			if match.Properties == "{error: insufficient_players}" {
				log.Println("Waiting for a larger player pool...")
				break
			}

			// Validate JSON before trying to parse it
			if !gjson.Valid(string(match.Properties)) {
				log.Fatal(errors.New("invalid json"))
			}

			// Get players from the json properties.roster field
			log.Println("Gathering roster from received match...")
			players := make([]string, 0)
			result := gjson.Get(match.Properties, "properties.roster")
			result.ForEach(func(teamName, teamRoster gjson.Result) bool {
				teamRoster.ForEach(func(_, player gjson.Result) bool {
					players = append(players, player.String())
					return true // keep iterating
				})
				return true // keep iterating
			})
			//log.Printf("players = %+v\n", players)

			// Assign players in this match to our server
			log.Println("Assigning players to DGS at example.com:12345")
			playerstr := strings.Join(players, " ")
			roster := &backend.Roster{PlayerIds: playerstr}
			ci := &backend.ConnectionInfo{ConnectionString: "example.com:12345"}
			assign := &backend.Assignments{Roster: roster, ConnectionInfo: ci}
			_, err = client.CreateAssignments(context.Background(), assign)
			if err != nil {
				panic(err)
			}
		}
		/*
			log.Println("deleting assignments")
			playerstr = strings.Join(players[0:len(players)/2], " ")
			roster.PlayerIds = playerstr
			_, err = client.DeleteAssignments(context.Background(), roster)
		*/
	}
}
