Compare commits

...

179 Commits

Author SHA1 Message Date
02ce5e26b7 Release 1.1 fix (#1316)
* Update repo location

* Update repo location

* Update chart repo location in Makefile
2020-12-17 16:01:44 -05:00
32611444d6 Release 1.1 (#1306)
* updated versions in various files. ran make release and make api/api.md targets as per release steps

* updated versions to 1.1.0-rc.1

* Updated Makefile BASE_VERSION

* Updated GKE_VERSION in create-gke-cluster target

* Updated appVersion and version tags in Chart.yaml

* Updated tag in values.yaml

* updated _OM_VERSION in cloudbuild.yaml

* make release and make api/api.md execution
2020-12-16 15:58:45 -05:00
c0b355da51 release-1.1.0-rc.1 (#1286) 2020-11-18 13:42:34 -08:00
6f05e526fb Improved tests for statestore - redis (#1264) 2020-10-12 19:21:51 -07:00
496d156faa Added unary interceptor and removed extra logs (#1255)
* added unary interceptor and removed logs from frontend service

* removed extra logs from backend service

* updated evaluator logging

* updated query logging


linter fix

* fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-21 15:02:29 -07:00
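A minimal sketch of the pattern this commit describes: a gRPC unary server interceptor that logs failed RPCs in one place so per-handler log statements can be dropped. The function and logger wiring here are illustrative, not the exact interceptor added in #1255.

```go
package example

import (
	"context"

	"github.com/sirupsen/logrus"
	"google.golang.org/grpc"
)

// errorLoggingInterceptor logs any error returned by a unary handler, tagged
// with the full method name, so individual services don't need their own logs.
func errorLoggingInterceptor(logger *logrus.Entry) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		resp, err := handler(ctx, req)
		if err != nil {
			logger.WithError(err).WithField("method", info.FullMethod).Error("rpc failed")
		}
		return resp, err
	}
}

// newServer installs the interceptor on a gRPC server.
func newServer(logger *logrus.Entry) *grpc.Server {
	return grpc.NewServer(grpc.UnaryInterceptor(errorLoggingInterceptor(logger)))
}
```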
3a3d618c43 Replaced GS bucket links with substitution variables (#1262) 2020-09-21 12:22:03 -07:00
e1cbd855f5 Added time to assignment metrics to backend (#1241)
* Added time to assignment metrics to backend

- The time to match for tickets is now recorded as a metric

* Fixed formatting errors

* Fixed minor review changes

- Renamed function to calculate time to assignment
- Moved from callback to returning tickets from UpdateAssignments

* Return only successfully assigned tickets

* Fixed linting errors
2020-09-15 11:18:17 -07:00
10b36705f0 Tests update: use require assertion (#1257)
* use require in filter package


fix

* use require in rpc package

* use require in tools/certgen package

* use require in mmf package

* use require in telemetry and logging


fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-09 14:24:18 -07:00
a6fc4724bc Fix spelling in Proto files (#1256)
Regenerated dependent Swagger and Golang files.
2020-09-09 12:20:29 -07:00
511337088a Reduce logging in statestore - redis (#1248)
* reduce logging in statestore - redis  #1228


fix

* added grpc interceptors to log errors

lint fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-02 12:50:39 -07:00
5f67bb36a6 Use require in app tests and improve error messages (#1253) 2020-08-31 13:17:29 -07:00
94d2105809 Use require in tests to avoid nil pointer exceptions (#1249)
* use require in tests to avoid nil pointer exceptions

* statestore tests: replaced assert with require
2020-08-28 12:19:53 -07:00
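To illustrate why these commits switch from `assert` to `require`: `assert` records the failure but keeps executing, so a later dereference of a nil result panics, while `require` stops the test at the failed check. This is an illustrative testify example, not a test taken from the Open Match suite.

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

type response struct{ Status string }

// doRequest stands in for any call that can fail and return a nil response.
func doRequest() (*response, error) { return &response{Status: "ok"}, nil }

func TestWithAssert(t *testing.T) {
	resp, err := doRequest()
	assert.NoError(t, err)            // marks the failure but keeps running...
	assert.Equal(t, "ok", resp.Status) // ...so this panics if resp is nil
}

func TestWithRequire(t *testing.T) {
	resp, err := doRequest()
	require.NoError(t, err) // stops the test immediately, no nil dereference
	require.Equal(t, "ok", resp.Status)
}
```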
d85f1f4bc7 Added a PR template (#1250) 2020-08-25 14:16:36 -07:00
79e9afeca7 Use Helm release to name resources (#1246)
* Fix indent of TLS certificate annotations

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Small whitespace fixes

Picked up the VSCode Yaml auto-formatter.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Don't pass 'query' config to open-match-customize

It's not used.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Don't pass frontend/backend to open-match-scale

They're not used.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Allow redis to derive resource names from the release

This ensures that multiple OpenMatch installs in a single namespace do
not attempt to install Redis stacks with the same resource names.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Include release names in PodSecurityPolicies

This avoids conflicts between multiple Open Match installations in the
same namespace.

`openmatch.fullname` named template per Helm default chart.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the Service Account name release-dependent

This makes the existing global.kubernetes.serviceAccount value an
override if specified, but if left unspecified, an appropriate name will
be chosen.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the RBAC resource names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the TLS Secret names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the CI-test resource names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make all Pod/Service names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make Grafana dashboard names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make open-match-scale slightly more standalone

This makes the hostname templates more standard in their case, because
there is no need to coordinate the hostname with the superchart.

This chart still uses a lot of templates from the open-match chart
though, so it's not yet standalone-installable.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make ConfigMap default names release-dependent

A specific ConfigMap can be applied in the same way it was previously,
by overriding configs.default.configName and
configs.override.configName, in which case it is up to the person doing
the deployment to manage name conflicts.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Use correct Jaeger service names for subcharts

This fixes an existing issue where the Jaeger connection URLs in
the configuration would be incorrect if your Helm chart was not
installed as a release named "open-match".

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Populate Grafana Datasource using a ConfigMap

This allows us to access the Prometheus subchart's named template to get
the correct Service name for the datasource.

This fixes an existing issue where the Prometheus data source URL in
Grafana would be incorrect if your Helm chart was not installed as
a release named "open-match".

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>
2020-08-17 12:04:26 -07:00
3334f7f74a Make: fix create-gke-cluster, create clusterRole (#1234)
If `gcloud auth list` returns multiple accounts, the command would fail;
grep for the active account to fix this.
2020-07-10 10:57:16 -07:00
85ce954eb9 Update backend_service.go (#1233)
Fixed typo
2020-07-09 11:45:33 -07:00
679cfb5839 Rename Ignore list to Pending Release (#1230)
Fix naming across all code. Swagger changes left.

Co-authored-by: Scott Redig <sredig@google.com>
2020-07-08 13:56:30 -07:00
c53a5b7c88 Update Swagger JSONs as well as go proto files (#1231)
Output of running `make presubmit` on master.

Co-authored-by: Scott Redig <sredig@google.com>
2020-07-08 12:52:51 -07:00
cfb316169a Use supported GKE cluster version (#1232)
Update Makefile.
2020-07-08 12:25:53 -07:00
a9365b5333 fix release.sh not knowing the right images (#1219) 2020-06-01 11:05:27 -07:00
93df53201c Only install ci components when running ci (#1213) 2020-05-08 16:06:22 -07:00
eb86841423 Add release all tickets API (#1215) 2020-05-08 15:07:45 -07:00
771f706317 Fix up gRPC service documentation (#1212) 2020-05-08 14:36:41 -07:00
a9f9a2f2e6 Remove alpha software warning (#1214) 2020-05-08 13:43:54 -07:00
068632285e Give assigned tickets a time to live, default 10 minutes (#1211) 2020-05-08 12:24:27 -07:00
113461114e Improve error message for overrunning mmfs (#1207) 2020-05-08 11:50:48 -07:00
0ac7ae13ac Rework config value naming (#1206) 2020-05-08 11:09:03 -07:00
29a2dbcf99 Unified images used in helm chart and release artifacts (#1184) 2020-05-08 10:42:16 -07:00
48d3b5c0ee Added Grafana dashboard of Open Match concepts (#1193)
Dependency on #1192, resolved #1124.

Added a dashboard in Matchmaking concepts, also removed the ticket dashboard.

https://snapshot.raintank.io/dashboard/snapshot/GzXuMdqx554TB6XsNm3al4d6IEyJrEY3
2020-05-08 10:15:34 -07:00
a5fa651106 Add grpc call options to matchfunction query functions (#1205) 2020-05-07 18:24:38 -07:00
cd84d74ff9 Fix race in e2e test (#1209) 2020-05-07 15:15:19 -07:00
8c2aa1ea81 Fix evaluator not running in mmf matchid collision test (#1210) 2020-05-07 14:53:12 -07:00
493ff8e520 Refactor internal telemetry package (#1192)
This commit refactored the internal telemetry package. The pattern used in internal/app/xxx/xxx.go follows the one used in opencensus-go. Besides adding the metrics covered in #1124, this commit also introduced changes to make the telemetry settings more efficient and easier to turn on/off.

In this refactoring, a recorded metric can be cast into different views through different aggregation methods. Since recording the metric is what consumes most of the resources, this makes the telemetry setup more efficient than before.
Also removed some metrics that were meaningful for debugging in v0.8 but are no longer useful at the current stage.
2020-05-06 18:42:20 -07:00
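A small sketch of the pattern the commit describes, in OpenCensus terms: one recorded measure is exposed through several views, each with its own aggregation, so recording happens once and the aggregation choice lives at the view level. The metric and view names here are hypothetical, not the ones added in #1192.

```go
package example

import (
	"context"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
)

// A single measure recorded at the call site.
var ticketsAssigned = stats.Int64("tickets_assigned", "Tickets assigned to matches", stats.UnitDimensionless)

// The same measure cast into two views via different aggregations.
var ticketViews = []*view.View{
	{Name: "tickets_assigned_count", Measure: ticketsAssigned, Aggregation: view.Count()},
	{Name: "tickets_assigned_sum", Measure: ticketsAssigned, Aggregation: view.Sum()},
}

// registerViews is called once at startup when telemetry is enabled.
func registerViews() error { return view.Register(ticketViews...) }

// recordAssignments records once; every registered view is updated.
func recordAssignments(ctx context.Context, n int64) {
	stats.Record(ctx, ticketsAssigned.M(n))
}
```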
8363bc5fc9 Refactor e2e testing and improve coverage (#1204) 2020-05-05 20:06:32 -07:00
144f646b7f Test tutorials (#1176) 2020-05-05 12:15:11 -07:00
b518b5cc1b Have the test instance host the mmf and evaluator (#1196) 2020-04-23 15:02:11 -07:00
af0b9fd5f7 Remove errant closing of already closed listeners (#1195) 2020-04-23 10:24:52 -07:00
5f4b522ecd Large refactor of rpc and appmain (#1194) 2020-04-21 14:07:09 -07:00
12625d7f53 Moved customized configmap values to default (#1191) 2020-04-20 15:11:13 -07:00
3248c8c4ad Refactor application binding (#1189) 2020-04-15 11:15:49 -07:00
10c0c59997 Use consistent main code for mmf and evaluator (#1185) 2020-04-09 18:37:32 -07:00
c17e3e62c0 Removed make all commands and pinned dependency versions (#1181)
* Removed make all commands

* oops
2020-04-03 12:01:32 -07:00
8e91be6201 Update development.md doc (#1182) 2020-04-02 15:50:00 -07:00
f6c837d6cd Removed make all commands and pinned dependency versions (#1181)
* Removed make all commands

* oops
2020-04-02 13:22:58 -07:00
3c8908aae0 Fix create-gke-cluster version (#1179) 2020-03-30 21:59:10 -07:00
0689d92d9c Fix the tutorials to using the new API, and be tested (#1175)
* Better follow API guidelines

* Fix tutorials

* don't include makefile fix which is broken
2020-03-27 11:58:28 -07:00
3c9a8f5568 Better follow API guidelines (#1173) 2020-03-26 15:56:34 -07:00
30204a2d15 run presubmit to update files (#1172) 2020-03-26 15:21:53 -07:00
a5b6c0c566 Have evaluator client and synchronizer return error when observing invalid match IDs (#1167)
* Have evaluator client and synchronizer return error when observing invalid match IDs

* update

* update

* update

* update

* presubmit
2020-03-26 13:59:21 -07:00
4a00baf847 Implement assignment groups and graceful failure (#1170) 2020-03-26 12:38:40 -07:00
d74262f3ba Fix broken scale dashboard (#1166) 2020-03-21 15:46:15 -07:00
2262652ea9 Add AUTH tests to Redis implementation (#1050)
* Enable and establish Redis connections via Sentinel

* Reimplement direct redis master connect

* Add AUTH tests for Redis implementation

* fix

* update
2020-03-20 17:12:55 -07:00
e15fd47535 Add a built in created time field for Tickets and the ability to filter Tickets by created time. (#1162) 2020-03-20 15:31:17 -07:00
670f38d36e forbid assignment on ticket create (#1160) 2020-03-19 13:47:45 -07:00
f0a85633a5 update third party files (#1163) 2020-03-19 13:18:55 -07:00
6cb47ce191 Enable and establish Redis connections via Sentinel (#1038)
* Enable and establish Redis connections via Sentinel

* Reimplement direct redis master connect

* Enable and establish Redis connections via Sentinel

* feedbacks
2020-03-14 23:55:41 -07:00
529c01330e Use testing.Cleanup instead of manual cleanup. (#1158) 2020-03-12 14:20:16 -07:00
b36a348db7 Remove omerror, replacing with errgroup (#1157)
Turns out there's already a commonly used package for this pattern.
2020-03-12 14:00:32 -07:00
5e277265ad Removed unused set package (#1156) 2020-03-12 13:25:22 -07:00
4420d7add2 Added QueryTicketIds method to QueryService (#1151)
* Added QueryTicketIds method to QueryService

* comment
2020-03-09 15:11:16 -07:00
3de052279b Optimized MULTI EXEC queries to reduce Redis CPU consumption (#1131)
* Mysterious code to optimize Redis cpu usage

* resolve comments

* update

* fix cloudbuild
2020-03-06 23:23:59 -08:00
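For context, this is the general shape of a MULTI/EXEC batch with redigo: queue commands with Send and flush them in one round trip with EXEC. This is a sketch of the technique the optimization relies on, with made-up key names, not the actual "mysterious code" from #1131.

```go
package example

import (
	"github.com/gomodule/redigo/redis"
)

// updateTickets queues one SADD per ticket and executes them in a single
// MULTI/EXEC transaction, cutting per-command round trips.
func updateTickets(conn redis.Conn, ids []string) error {
	if err := conn.Send("MULTI"); err != nil {
		return err
	}
	for _, id := range ids {
		if err := conn.Send("SADD", "allTickets", id); err != nil {
			return err
		}
	}
	// EXEC flushes the queued commands and returns all replies at once.
	_, err := conn.Do("EXEC")
	return err
}
```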
7a4aa3589f Removed ticket auto-expiring logic from statestore (#1146) 2020-03-06 17:20:49 -08:00
bca6f487cc Remove legacy volume mounts from om-demo yaml file (#1147) 2020-03-06 08:32:47 -08:00
d0c373a850 Drafted a short README for the benchmarking framework (#1092)
* Drafted a short README for the benchmarking framework

* update

* update

* update
2020-03-05 13:33:05 -08:00
deb2947ae2 Disable swaggerui via helm (#1144) 2020-03-05 12:08:37 -08:00
d889278151 Replace redis indexing with in memory cache (#1135) 2020-03-02 16:23:55 -08:00
1b63fa53dc Update to go 1.14 (#1133) 2020-02-26 14:13:37 -08:00
af02e4818f Do some randomization on return order of tickets (#1127) 2020-02-20 15:34:40 -08:00
cda2d3185f Add filter package, and rework query testing (#1126) 2020-02-20 14:05:42 -08:00
2317977602 Move default evaluator to internal from testing (#1122) 2020-02-14 14:49:57 -08:00
9ef83ed344 Removed scale chart configmap (#1120) 2020-02-12 13:16:18 -08:00
33bd633b1d Disabled redis when generating static yaml resources except core (#1119) 2020-02-11 14:40:38 -08:00
1af8cf1e79 have scale-frontend use individual go routines for each ticket (#1116) 2020-02-10 13:52:28 -08:00
0ef46fc4d4 implement a scenario which behaves like a team based shooter game (#1115) 2020-02-10 11:21:12 -08:00
79daf50531 Enabled more golangci tests to improve code health (#1089)
* Enabled more golangci tests

* update

* update

* update
2020-02-06 14:07:41 -08:00
a9c327b430 Move scale scenarios into unique packages (#1110) 2020-02-06 12:56:21 -08:00
2c637c97b8 Reduced Redis PING check frequency on Redis pool (#1109)
* Reduced Redis PING check frequency on Redis pool

* fix lint

* update

* update comment

* update comment
2020-02-05 18:00:31 -08:00
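The standard redigo knob for this is the pool's TestOnBorrow hook: only PING a borrowed connection if it has been idle past some threshold. This sketch shows that pattern with illustrative values, not Open Match's actual configuration.

```go
package example

import (
	"time"

	"github.com/gomodule/redigo/redis"
)

func newPool(addr string) *redis.Pool {
	return &redis.Pool{
		MaxIdle: 200,
		Dial:    func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
		// Only health-check connections that have sat idle for a while,
		// instead of PINGing on every borrow.
		TestOnBorrow: func(c redis.Conn, lastUsed time.Time) error {
			if time.Since(lastUsed) < time.Minute {
				return nil // recently used: skip the PING entirely
			}
			_, err := c.Do("PING")
			return err
		},
	}
}
```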
668b10030b Update Grafana dashboard for more detailed metrics (#1108)
* Update Grafana dashboard for more detailed metrics

* update cpu usage chart

* update
2020-02-05 17:13:17 -08:00
1c7fd24a34 Remove stats processor from scale tests (#1107) 2020-02-05 13:41:56 -08:00
be0cebd457 Disabled cloudbuild cacher to avoid build flakiness (#1103) 2020-02-04 11:34:23 -08:00
fe7bb4da8f Revert "Release 0.9.0 (#1096)" (#1097)
This reverts commit e80de171a0a6e742d42264f4ab4ecd9231cd3edc.
2020-02-03 16:19:16 -08:00
e80de171a0 Release 0.9.0 (#1096) 2020-02-03 15:42:21 -08:00
fdd707347e Update generated files (#1095) 2020-02-03 15:21:26 -08:00
6ef1382414 Fix leaking of client connections by config.Cacher (#1093)
* Fix leaking of client connections by config.Cacher

* fix link
2020-02-03 14:44:10 -08:00
d67a65e648 Reuse query client in scale tests (#1091)
It was previously not reusing it, so the clients would leak over time.
2020-02-03 13:12:02 -08:00
d3e008cd1e Update proto descriptions to reflect API changes (#1090)
* Update proto descriptions to reflect API changes
2020-02-03 11:15:01 -08:00
d93db94ad9 chartredisfix (#1088) 2020-02-03 09:18:13 -08:00
1bd63a01c7 feature: release tickets api (#1059) 2020-01-31 14:03:17 -08:00
cf8d49052c Deprecated mmf harness (#1086) 2020-01-31 11:13:29 -08:00
fca5359eee Used master HEAD in tutorials' go.mod file and fixed go build errors (#1085) 2020-01-30 15:52:23 -08:00
07637135a9 Deprecate Rosters, remove from Match, MatchProfiles (#1084) 2020-01-30 14:52:41 -08:00
8c86a4e643 Add omerrors and use it in backend_service and evaluator_client (#1081)
Two methods are added:

- ProtoFromErr: returns a gRPC status given an error, with some reasoned handling for special cases. This will be used to set errors onto the FetchMatchesSummary in a follow-up PR.
- WaitOnErrors: allows some number of functions to run that will all return errors. The first error returned becomes the overall error, and it ensures all goroutines finish.
WaitOnErrors is used to simplify code in backend_service and the gRPC portion of evaluator_client.

Also, synchronizeSend should better specify which context is being used where.
2020-01-30 12:42:02 -08:00
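As a rough sketch of the role described for ProtoFromErr: convert an arbitrary error into a gRPC status, with a few special cases handled explicitly. The cases chosen here are assumptions for illustration, not the actual omerrors implementation.

```go
package example

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// protoFromErr maps an error to a *status.Status with some special-case handling.
func protoFromErr(err error) *status.Status {
	switch err {
	case nil:
		return status.New(codes.OK, "")
	case context.DeadlineExceeded:
		return status.New(codes.DeadlineExceeded, err.Error())
	case context.Canceled:
		return status.New(codes.Canceled, err.Error())
	default:
		// FromError already understands status-bearing errors; anything else
		// is wrapped as codes.Unknown.
		s, _ := status.FromError(err)
		return s
	}
}
```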
31858e0ce5 Changed evaluator API from returning matches to matchids (#1082)
* Changed evaluator API from returning matches to matchids

* update proto desc
2020-01-30 10:10:35 -08:00
fc0b6dc510 Changed Synchronizer proto to return matchIDs instead (#1080)
This commit changed the Synchronizer proto to return matchIDs instead. Also bumped up the numbers of the unnamed channels in the synchronizer starting from m3c, and changed the channel type starting from m4c to chan string, as the next step of the API change is to have the evaluator return the match IDs instead.
2020-01-29 19:26:06 -08:00
edade67a6d Added sync.Map to backend and synchronizer (#1078)
This is an intermediate step to resolve #939. Leaving a bunch of TODOs in this PR and will fix them after the proto change.
2020-01-29 18:34:01 -08:00
c92c4ef07a Starts streaming when sending requests from synchronizer to evaluator (#1075)
This commit starts streaming when calling the evaluator.Evaluate method, so that the synchronizer can process the data more efficiently.
2020-01-29 17:23:38 -08:00
0b8425184b Stream proposals from mmf to synchronizer (#1077)
This improves efficiency for overall system latency, and sets up for better mmf error handling.

The overall structure of the fetch matches call has been reworked. The different go routines now set an explicit err variable. So once we have FetchSummary, we can just set the mmf err variable on it. Synchronizer calls which err will always result in an error here (as it's relatively fatal), while mmf and evaluator errors will be passed gently to the client.

One thing this code isn't doing anymore is checking if an mmf returns a match with no tickets. This seems fine to me, but willing to discuss if anyone disagrees.

Deleted the tests for the following reasons:

TestDoFetchMatchesInChannel didn't actually test fetching matches, it only tested creating a client. Since callMmf now both creates the client and makes the call, this code now blocks actually trying to make a connection. I'm not worried about having full branch test coverage on err statements...
TestDoFetchMatchesFilterChannel tested merging of mmf runs. Since there's only one mmf run now, it's no longer necessary.
2020-01-29 14:44:15 -08:00
338a03cce5 Removed synchronizer dashboard and synced grpc dashboard with API changes (#1074)
The previous dashboards don't work with our changes on the API surface.
https://snapshot.raintank.io/dashboard/snapshot/5A6ToilbqqWbeYpuf36jFCrVv3zFFK1V

This commit:

Removed the unused synchronizer dashboard.
Updated the field matches to use QueryService, BackendService and FrontendService instead of the outdated naming.
Resolved #1018
2020-01-28 15:08:16 -08:00
b7850ab81d Remove assignment.error (#1073) 2020-01-28 13:10:41 -08:00
faa730bda8 Remove c# protos and respective makefile commands (#1072) 2020-01-28 12:12:55 -08:00
76ef9546af Add battle royal scale test scenario (#1063)
Tickets choose one of the 20 regions, with a skewed probability. (probability eg: https://play.golang.org/p/V3wfvph34hM) One profile per region, which forms matches of 100 players.
2020-01-28 11:39:57 -08:00
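A small sketch of a skewed region pick in the spirit of the linked playground snippet: assign each of the 20 regions a decaying weight and sample proportionally. The exponential weighting is an assumption for illustration, not the exact distribution used by the scenario.

```go
package example

import (
	"fmt"
	"math"
	"math/rand"
)

// pickRegion chooses one of 20 regions with a skewed probability:
// region_0 is the most popular, later regions exponentially less so.
func pickRegion(r *rand.Rand) string {
	const regions = 20
	weights := make([]float64, regions)
	total := 0.0
	for i := range weights {
		weights[i] = math.Exp(-0.3 * float64(i))
		total += weights[i]
	}
	x := r.Float64() * total
	for i, w := range weights {
		if x -= w; x <= 0 {
			return fmt.Sprintf("region_%d", i)
		}
	}
	return fmt.Sprintf("region_%d", regions-1)
}
```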
bff8934cd3 Added the ability to specify your own Redis instance via helm (#1069)
Resolved #836
2020-01-28 10:51:41 -08:00
3a5608b547 Remove inaccurate default documentation on range filter (#1071)
Instead, this is actually just relying on the proto's default values of 0 for each. As such it shouldn't be documented.
2020-01-27 16:18:03 -08:00
b7eec77a36 Rename Backend and Frontend API to BackendService and FrontendService (#1065)
Depends on and aligns with #1055. After this commit, we'll still have the om-backend, om-frontend, and om-query images, but with the API surface renamed.

Backend -> BackendService
Frontend -> FrontendService
2020-01-27 15:52:35 -08:00
82a011ea52 Rename Mmlogic to Queryservice (#1055)
Resolved #996.

Manually renamed the files under internal/app/mmlogic and cmd/mmlogic from mmlogic.go to query.go to keep the image name consistent with our backend and frontend naming.

TODO: Rename backend and frontend API to BackendService and FrontendService instead.
2020-01-27 15:27:17 -08:00
92210b1a13 Redis grafana dashboard (#1062)
* Redis grafana dashboard

* Alert notifiers

* update

* update

* update

* update
2020-01-23 21:31:58 -08:00
f46c0b8f3d Revamp go processes dashboard (#1064)
* Revamp go processes dashboard

* added cpu usage chart
2020-01-23 20:08:41 -08:00
a19baf3457 Revamp gRPC grafana dashboard (#1060)
* dashboard prototype

* Remove storage dashboard

* fix

* update
2020-01-23 19:36:56 -08:00
8e1fbaf938 Change backend.FetchMatches proto from taking multiple profiles to one instead (#1056) 2020-01-17 20:08:29 -08:00
957471cf83 Run scale test assignment and deletes in parallel (#1058)
Start 50 goroutines for each at the beginning of the test, and feed them from fetch matches through a buffer.

Gets the Redis statestore first-match scenario to handle >500 tickets per second:
https://snapshot.raintank.io/dashboard/snapshot/yO88xrIUe1bFR29iNZt4YuM0xuBb8PX9
2020-01-17 17:09:55 -08:00
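The fan-out described above, sketched in plain Go: fixed pools of worker goroutines draining buffered channels that the fetch-matches loop feeds. The worker counts come from the commit message; the worker bodies and channel sizes are placeholders, not the actual scale-test code.

```go
package example

import "sync"

// runWorkers starts 50 assignment workers and 50 delete workers; the
// fetch-matches loop pushes ticket IDs into the returned channels.
func runWorkers(assign, del func(ticketID string)) (assignments, deletions chan<- string, wg *sync.WaitGroup) {
	a := make(chan string, 1000) // buffers decouple the producer from the workers
	d := make(chan string, 1000)
	wg = &sync.WaitGroup{}
	for i := 0; i < 50; i++ {
		wg.Add(2)
		go func() {
			defer wg.Done()
			for id := range a {
				assign(id)
			}
		}()
		go func() {
			defer wg.Done()
			for id := range d {
				del(id)
			}
		}()
	}
	return a, d, wg
}
```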
e24c4b9884 Fix off by 1 error in first match scale test (#1057) 2020-01-17 16:33:43 -08:00
34cc4987e8 Add a first match scenario to the scale tests (#1054)
This first match scenario runs one pool with all tickets, pairing tickets into 1v1 matches with no logic.

Metrics example: https://snapshot.raintank.io/dashboard/snapshot/JZQvjGLgZlezuZfNxPAh8n098JQuCyPW
2020-01-17 11:39:37 -08:00
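The pairing logic the scenario describes is essentially this: walk the pool two tickets at a time and emit a 1v1 match, with no other criteria. The types below are simplified local stand-ins, not the open-match pb messages.

```go
package example

import "fmt"

type ticket struct{ ID string }

type match struct {
	ID      string
	Tickets []ticket
}

// makeMatches pairs every two tickets in the pool into a 1v1 match.
func makeMatches(pool []ticket) []match {
	var matches []match
	for i := 0; i+1 < len(pool); i += 2 {
		matches = append(matches, match{
			ID:      fmt.Sprintf("1v1-%d", len(matches)),
			Tickets: []ticket{pool[i], pool[i+1]},
		})
	}
	return matches
}
```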
8e8f2d688b Add gRPC CSharp bindings (#1051)
* Add gRPC CSharp bindings

* update
2020-01-16 16:54:56 -08:00
f347639df4 🤦 (#1048) 2020-01-15 09:47:06 -08:00
75c74681cb Make scale grafana dashboard optional to install (#1044)
* Optionally enable grafana dashboard for scale chart

* Make scale grafana dashboard optional to install
2020-01-15 09:21:45 -08:00
5b18dcf6f3 Add metric support to the scale tests (#1042) 2020-01-14 17:31:46 -08:00
3bcf327a41 Remove locust (#1041) 2020-01-14 13:53:09 -08:00
9f59844e0d Remove zipkin references from Open Match (#1040) 2020-01-14 12:23:31 -08:00
5a32cef2e9 Update Makefile and .ignore files (#1031)
This commit updated the Makefile and .ignore files for the evaluator and mmf binaries.

Also moved the evaluator to the test/evaluator folder - I had accidentally placed it under the test/customize/evaluator dir because of a bad merge while working on deprecating the harness.
2020-01-13 18:59:57 -08:00
b9e2e88ef4 Implement basic tunable parameters logic for benchmarking scenarios (#1030)
This commit implements the knobs to control ShouldCreateTicketForever, ShouldAssignTicket, ShouldDeleteTicket, TicketCreatedQPS, and CreateTicketNumber. Also removed the roster-based-mmf from the repo since it is only used for the scale test and there is no need to build its image in every CI run.

After this commit is checked in, users can configure the knobs via the new benchmarking framework and run make install-scale-chart to install it.

TODO:

Implement the filter number and profile number logic. This requires a rewrite of the examples/scale/tickets and examples/scale/profiles packages.
2020-01-08 17:20:37 -08:00
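A sketch of a scenario configuration carrying the knobs named in the commit above. The field names come from the commit message; the types, defaults, and the helper are assumptions, not the actual benchmarking-framework definition.

```go
package example

import "time"

type scenarioConfig struct {
	ShouldCreateTicketForever bool
	ShouldAssignTicket        bool
	ShouldDeleteTicket        bool
	TicketCreatedQPS          int
	CreateTicketNumber        int
}

// createInterval derives the pause between ticket creations implied by the QPS knob.
func (c scenarioConfig) createInterval() time.Duration {
	if c.TicketCreatedQPS <= 0 {
		return 0
	}
	return time.Second / time.Duration(c.TicketCreatedQPS)
}
```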
41632e6b8d Increase Redis ping time tolerance and provision more resources for CI (#1034) 2020-01-08 15:32:32 -08:00
188457c21f Added mmf and evaluator for the basic benchmarking scenario (#1029)
* Added mmf and evaluator for the basic benchmarking scenario

* update

* update

* fix
2020-01-07 11:08:12 -08:00
4daea744d5 Added a fixed development password for Redis (#989)
* Added a fixed development password for Redis

* update
2020-01-02 23:30:35 -08:00
1f3dd4bcbf Implement a prototype for Open Match benchmarking framework (#1027)
* Implement a prototype for Open Match benchmarking framework

* update

* update

* update
2019-12-27 18:00:47 -08:00
d82fc4fec6 Add pod tolerations, nodeSelector and affinity in helm (#1015) 2019-12-27 13:02:36 -08:00
8cb43950a1 Move ignorelists.ttl from Redis section to Open Match core (#1028) 2019-12-27 12:27:09 -08:00
9934a7e9da Rewrite synchronizer and corresponding backend (#1024) 2019-12-20 16:40:53 -08:00
8db449b307 Templatize stress test configurations (#1019)
* Templatize stress test configurations

* Update

* presubmit
2019-12-17 11:02:22 -08:00
b78d4672a6 Update client-go to kubernetes-1.13.12 (#1020) 2019-12-11 18:08:10 -08:00
e048b97c71 Moved MMF for end-to-end in-cluster testing to internal (#1014)
* Moved MMF for end-to-end in-cluster testing to internal

* Fix
2019-12-11 16:55:43 -08:00
f56263b074 Deprecate evaluator harness (#1012)
* Have applications read in config from custom input

* Moved original evaluator example to internal package

* Deprecate evaluator harness
2019-12-11 16:04:16 -08:00
aaca99c211 Update README.md (#1016) 2019-12-09 18:12:18 -08:00
9c1b0bcc0e Have applications read in config from custom input (#1007) 2019-12-09 13:26:58 -08:00
80675c32f6 Split up stress test into backend/frontend structure (#1009) 2019-12-09 12:09:00 -08:00
4e408b1abc Show how to generate install/yaml files in dev guide (#1010) 2019-12-08 11:37:52 -08:00
fd4f154a0e Remove unnecessary variables and indirection from synchronizer (#1008) 2019-12-06 15:12:18 -08:00
3e2d20edc0 Have synchronizerClient use cacher, to update on config changes (#1006)
This also aligns better with patterns for other clients, and removes some synchronization complexity for this type.
2019-12-05 13:57:57 -08:00
40ba558eb2 Improve Evaluator tutorials experience (#1005)
* Improve Evaluator tutorials experience

* Improve Evaluator tutorials experience
2019-12-04 17:59:04 -08:00
72bcd72d5c Fix Redis Err: Max Clients Reached error (#999)
This commit fixed an issue where Open Match could throw "Err: max clients reached" errors from the Redis side under load-testing scenarios. At this point, Open Match should be able to scale with 1600 profiles and 5000 tickets in the statestore.

The reason we got those errors from Redis is that, by default, Redis sets its maxClient connection limit to 10k, while Open Match had maxIdle set to 5000 per pod, which exceeded Redis's limit and failed API calls. This commit manually overrides maxClient to 100k, reduces maxIdle to 200, and raises the connection backlog by setting sysctl -w net.core.somaxconn=100000 via the initContainer, if enabled.
2019-12-04 17:18:09 -08:00
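The sizing relationship described above, sketched as a redigo pool: per-pod idle connections times the number of pods has to stay well under the Redis maxclients limit. The numbers come from the commit message; the pool wiring is illustrative only.

```go
package example

import "github.com/gomodule/redigo/redis"

func newStatestorePool(addr string) *redis.Pool {
	return &redis.Pool{
		MaxIdle:   200, // was 5000 per pod, which overran Redis's default 10k maxclients
		MaxActive: 0,   // 0 = no per-pod cap; rely on the raised server-side limit (100k)
		Dial:      func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
	}
}
```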
b276ed1a08 Fixed terraform google provider version to 2.9 (#1004)
* Fixed terraform google provider version to 2.8

* Update versions.tf

* Update versions.tf
2019-12-04 13:20:36 -08:00
d977486dc5 Add more metrics to monitor synchronizer time windows performance (#1001) 2019-12-03 18:43:28 -08:00
1f74497bdd Reduced in-cluster test flakiness and stabilized gRPC client connections (#1003) 2019-12-02 16:43:37 -08:00
57e9540faa Use helm to test Open Match in a k8s cluster (#988) 2019-11-25 16:29:05 -08:00
a0be7dcec5 Cherry-picked MMF server changes to upstream (#1000) 2019-11-25 16:09:06 -08:00
391cc4dc72 More cleanups (#984) 2019-11-22 11:16:52 -08:00
2c8779c5d7 Improve README instructions and code templates for the tutorials (#997) 2019-11-21 17:52:08 -08:00
e5aafc5ed7 Added a Grafana dashboard to track Redis client connection gauges (#994) 2019-11-21 09:21:44 -08:00
8554601a70 Update Scale package to sync with the latest config and API changes (#992) 2019-11-20 15:06:19 -08:00
a75833b85a Update release note and release process template (#987) 2019-11-19 14:53:33 -08:00
f01105995d Update gRPC middlewares used in the internal/rpc library (#993) 2019-11-19 13:55:11 -08:00
f949de7dce Update master branch tutorials to use v0.8.0 tags (#985) 2019-11-15 10:16:08 -08:00
335bf73904 Remove redundant matchmaker scaffold and update tutorials (#979) 2019-11-14 13:59:26 -08:00
7a1dcbdf93 More cleanup (#976) 2019-11-13 13:43:01 -08:00
0a65bdefe5 Fix typo in folder name (#975) 2019-11-13 10:15:16 -08:00
bcf0e6b9fb Harden the open-match parent chart (#972) 2019-11-13 09:51:37 -08:00
1f5df7abef Ignore reaper error (#974) 2019-11-13 08:25:02 -08:00
7005d40939 Add solution folder to Matchmaker 102 tutorial (#973) 2019-11-13 02:00:12 -08:00
3536913559 Add logging to the default evaluator (#964) 2019-11-13 01:34:05 -08:00
103213f940 Add the solution for Matchmaker 101 tutorial to a separate solution folder. (#971) 2019-11-13 00:25:09 -08:00
3b8efce53d Add a tutorial for using the default evaluator (#961) 2019-11-13 00:05:38 -08:00
580ed235d7 Generate static yaml to install open match demo (#969)
* Generate static yaml to install open match demo

* Update Makefile to sync with the latest demo update
2019-11-12 22:02:44 -08:00
23cc35ae68 Publish helm index.yaml file to helm install open-match (#962) 2019-11-12 21:42:59 -08:00
c002e75fde A Tutorial to customize the evaluator (#970) 2019-11-12 19:01:04 -08:00
6e6f063958 Update tutorial modules to use v0.8 rc (#963) 2019-11-12 16:12:57 -08:00
8d31b5af07 Fix namespace dependency on CI (#967) 2019-11-12 15:54:21 -08:00
f1a5cd9b81 Have MMF and Evaluator in customize chart use different configs (#959) 2019-11-08 15:44:58 -08:00
d3d906c8be Define Makefile and RBAC rules for open-match-demo namespace migration (#958) 2019-11-08 14:51:42 -08:00
6068507370 Move Match Function installation to the matchmaker.yaml - since customization.yaml is now optional when using default evaluator installation steps (#957) 2019-11-08 14:30:29 -08:00
04b06fcf90 Split out MMF and Evaluator install from open-match-demo (#956) 2019-11-08 11:20:42 -08:00
0c25ac9139 Turn off subcharts by default (#954) 2019-11-08 09:15:49 -08:00
0565a014ad Disable WI in create-gke-cluster step (#947) 2019-11-06 19:10:13 -08:00
57e59c3821 Bumped helm version and dependencies versions for k8s 1.16 support (#938) 2019-11-06 18:27:59 -08:00
608d5bce71 Disable Redis initContainer by default (#941) 2019-11-06 17:12:23 -08:00
52b8754eb8 Update go.mod dependencies (#949) 2019-11-06 13:31:08 -08:00
a10817f550 Fix scale test based on the config changes (#948) 2019-11-06 12:39:38 -08:00
817a0968e7 Update release template (#944) 2019-11-06 11:11:25 -08:00
043a984bab Remove k8s probes in example mmfs and evaluator (#942) 2019-11-06 10:54:17 -08:00
02d8d1f1fe Optimize developer workflow (#943) 2019-11-06 10:28:45 -08:00
242d799c18 Enabled telemetry when generating assets (#945) 2019-11-04 18:05:27 -08:00
492 changed files with 34700 additions and 25093 deletions


@ -33,10 +33,6 @@
*swo
*~
# Load testing residuals
test/stress/*.csv
test/stress/__pycache__
# Ping data files
*.ping
*.pings
@ -120,16 +116,15 @@ creds.json
# Open Match Binaries
cmd/backend/backend
cmd/frontend/frontend
cmd/mmlogic/mmlogic
cmd/query/query
cmd/synchronizer/synchronizer
cmd/minimatch/minimatch
cmd/swaggerui/swaggerui
tools/certgen/certgen
examples/demo/demo
examples/functions/golang/soloduel/soloduel
examples/functions/golang/rosterbased/rosterbased
examples/functions/golang/pool/pool
examples/evaluator/golang/simple/simple
test/evaluator/evaluator
test/matchfunction/matchfunction
tools/reaper/reaper
# Open Match Build Directory


@ -91,7 +91,7 @@ Preview:
Below this point you will see {version} used as a placeholder for future
releases. Find {version} and replace with the current release (e.g. 0.5.0)
## Create a release branch in the upstream repository
## Create a release branch in the upstream open-match repository
**Note: This step is performed by the person who starts the release. It is
only required once.**
@ -113,10 +113,19 @@ git push origin release-0.5
- [ ] Open the [`cloudbuild.yaml`] and change the `_OM_VERSION` entry.
- [ ] There might be additional references to the old version, but be careful not to change them in places that keep the old version for historical purposes.
- [ ] Run `make release`
- [ ] Create a PR with the changes and include the release candidate name.
- [ ] Go to [open-match-build](https://pantheon.corp.google.com/cloud-build/triggers?project=open-match-build) and update all the triggers' `_GCB_LATEST_VERSION` value to the `X.Y` of the release. This value should only increase as it's used to determine the latest stable version.
- [ ] Run `make api/api.md` in open-match repo to update the auto-generated API references in open-match-docs repo.
- [ ] Use the files under the `build/release/` directory for the Open Match installation guide. Make sure the artifacts work as expected - these are the artifacts that will be published to the GCS bucket and used in our release assets.
- [ ] Create a PR with the changes, include the release candidate name, and point it to the release branch.
- [ ] Go to [open-match-build](https://pantheon.corp.google.com/cloud-build/triggers?project=open-match-build) and update all *post submit* triggers' `_GCB_LATEST_VERSION` value to the `X.Y` of the release. This value should only increase as it's used to determine the latest stable version.
- [ ] Merge your changes once the PR is approved.
## Create a release branch in the upstream open-match-docs repository
- [ ] Open [`Makefile`](makefile-version) and change BASE_VERSION entry.
- [ ] Open [`cloudbuild.yaml`] and change the `_OM_VERSION` entry.
- [ ] Open [`site/config.toml`] and change the `release_version` entry.
- [ ] Open [`site/static/swaggerui/config.json`] and change the `api/VERSION/...` entries
- [ ] Create a PR with the changes, include the release candidate name, and point it to the release branch.
## Complete Milestone
**Note: This step is performed by the person who starts the release. It is
@ -138,19 +147,17 @@ only required once.**
- [ ] Review all closed issues against the milestone. Put the user visible changes into the release notes using the suggested format. https://github.com/googleforgames/open-match/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aclosed+milestone%3Av{version}
- [ ] Verify the [milestone](https://github.com/googleforgames/open-match/milestones) is effectively 100% at this point with the exception of the release issue itself.
TODO: Add guidelines for labeling issues.
## Build Artifacts
- [ ] Go to [Cloud Build](https://pantheon.corp.google.com/cloud-build/triggers?project=open-match-build), under Post Submit click "Run Trigger".
- [ ] Go to the History section and find the "Post Submit" build that's running. Wait for it to go green. If it's red, fix the error and repeat this section. Take note of the docker image version tag for the next step. Example: 0.5.0-a4706cb.
- [ ] Go to the History section and find the "Post Submit" build of the merged commit that's running. Wait for it to go green. If it's red, fix the error and repeat this section. Take note of the docker image version tag for the next step. Example: 0.5.0-a4706cb.
- [ ] Run `./docs/governance/templates/release.sh {source version tag} {version}` to copy the images to open-match-public-images.
- [ ] If this is a new minor version in the newest major version then run `./docs/governance/templates/release.sh {source version tag} latest`.
- [ ] Copy the files from `build/release/` generated from `make release` to the release draft you created. You can drag and drop the files using the Github UI.
- [ ] Open the [`README.md`](readme-deploy), update the version references, and submit. (Release candidates can ignore this step.)
- [ ] Run proto-gen-doc to update API references in open-match-docs repo.
- [ ] Update [Slack invitation link](https://slack.com/help/articles/201330256-invite-new-members-to-your-workspace#share-an-invite-link) in [open-match.dev](https://open-match.dev/site/docs/contribute/#get-involved).
- [ ] Test Open Match installation under GKE and Minikube environments and make sure the first match example works.
- [ ] Test Open Match installation under GKE and Minikube environments using YAML files and Helm. Follow the [First Match](https://development.open-match.dev/site/docs/getting-started/first_match/) guide, run `make proxy-demo`, and open `localhost:51507` to make sure everything works.
- [ ] Minikube: Run `make create-mini-cluster` to create a local cluster with latest Kubernetes API version.
- [ ] GKE: Run `make create-gke-cluster` to create a GKE cluster.
- [ ] Helm: Run `helm install open-match -n open-match open-match/open-match`
- [ ] Update usage requirements in the Installation doc - e.g. supported minikube version, kubectl version, golang version, etc.
## Finalize

16
.github/pull_request_template.md vendored Normal file

@ -0,0 +1,16 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
If this is your first time, please read our contributor guidelines: https://github.com/googleforgames/open-match/blob/master/CONTRIBUTING.md and developer guide https://github.com/googleforgames/open-match/blob/master/docs/development.md
-->
**What this PR does / Why we need it**:
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Closes #<issue number>`, or `Closes (paste link of issue)`.
-->
Closes #
**Special notes for your reviewer**:

11
.gitignore vendored

@ -31,10 +31,6 @@
*swo
*~
# Load testing residuals
test/stress/*.csv
test/stress/__pycache__
# Ping data files
*.ping
*.pings
@ -116,16 +112,15 @@ creds.json
# Open Match Binaries
cmd/backend/backend
cmd/frontend/frontend
cmd/mmlogic/mmlogic
cmd/query/query
cmd/synchronizer/synchronizer
cmd/minimatch/minimatch
cmd/swaggerui/swaggerui
tools/certgen/certgen
examples/demo/demo
examples/functions/golang/soloduel/soloduel
examples/functions/golang/rosterbased/rosterbased
examples/functions/golang/pool/pool
examples/evaluator/golang/simple/simple
test/evaluator/evaluator
test/matchfunction/matchfunction
tools/reaper/reaper
# Secrets Directories


@ -171,17 +171,10 @@ linters:
- funlen
- gochecknoglobals
- goconst
- gocritic
- gocyclo
- gofmt
- goimports
- gosec
- interfacer # deprecated - "A tool that suggests interfaces is prone to bad suggestions"
- lll
- prealloc
- scopelint
- staticcheck
- stylecheck
#linters:
# enable-all: true


@ -1,60 +0,0 @@
# Release history
## v0.4.0 (alpha)
### Release notes
- Thanks to the completion of Issues [#42](issues/42) and [#45](issues/45), there is no longer a need to use the `openmatch-base` image when building components of Open Match. Each standalone application is now self-contained in its `Dockerfile` and `cloudbuild.yaml` files, and builds have been substantially simplified. **Note**: The default `Dockerfile` and `cloudbuild.yaml` now tag their images with the version number, not `dev`, and the YAML files in the `install` directory now reflect this.
- This paves the way for CI/CD in an upcoming version.
- This paves the way for public images in an upcoming version!
## v0.3.0 (alpha)
This update is focused on the Frontend API and Player Records, including more robust code for indexing, deindexing, reading, writing, and expiring player requests from Open Match state storage. All Frontend API function arguments have changed, although many only slightly. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!
### Release notes
- The Frontend API calls have all been changed to reflect the fact that they operate on Players in state storage. To queue a game client, 'CreatePlayer' in Open Match; to get updates, 'GetUpdates'; and to stop matching, 'DeletePlayer'. The calls are now much more obviously related to how Open Match sees players: they are database records that it creates on demand, updates using MMFs and the Backend API, and deletes when the player is no longer looking for a match.
- The Player record in state storage has changed to a more complete hash format, and it no longer makes sense to remove a player's assignment from the Frontend as a separate action to removing their record entirely. `DeleteAssignment()` has therefore been removed. Just use `DeletePlayer` instead; you'll always want the client to re-request matching with its latest attributes anyway.
- There is now a module for [indexing and deindexing players in state storage](internal/statestorage/redis/playerindices/playerindices.go). This is *much* more efficient, as well as cleaner and more maintainable, than the previous implementation, which was **hard-coded to index everything** you passed in to the Frontend API at a specific JSON object depth.
- This paves the way for dynamically choosing your indices without restarting the matchmaker. This will be implemented if there is demand. Pull Requests are welcome!
- Two internal timestamp-based indices have replaced the previous `timestamp` index. `created` is used to calculate how long a player has been waiting for a match, `accessed` is used to determine when a player needs to be expired out of state storage. Both are prefixed by the string `OM_METADATA` so it should be easy to spot them.
- A call to the Frontend API `GetUpdates()` gRPC endpoint returns a stream of player messages. This is used to send updates to state storage for the `Assignment`, `Status`, and `Error` Player fields in near-realtime. **It is the responsibility of the game client to disconnect** from the stream when it has gotten the results it was waiting for!
- Moved the rest of the gRPC messages into a shared [`messages.proto` file](api/protobuf-spec/messages.proto).
- Added documentation to Frontend API gRPC calls to the [`frontend.proto` file](api/protobuf-spec/frontend.proto).
- [Issue #41](https://github.com/googleforgames/open-match/issues/41)|[PR #48](https://github.com/googleforgames/open-match/pull/48) There is now a HA Redis install available in `install/yaml/01-redis-failover.yaml`. This would be used as a drop-in replacement for a single-instance Redis configuration in `install/yaml/01-redis.yaml`. The HA configuration requires that you install the [Redis Operator](https://github.com/spotahome/redis-operator) (note: **currently alpha**, use at your own risk) in your Kubernetes cluster.
- As part of this change, the kubernetes service name is now `redis` not `redis-sentinel` to denote that it is accessed using a standard Redis client.
- Open Match uses a new feature of the go module [logrus](github.com/sirupsen/logrus) to include filenames and line numbers. If you have an older version in your local build environment, you may need to delete the module and `go get github.com/sirupsen/logrus` again. When building using the provided `cloudbuild.yaml` and `Dockerfile`s this is handled for you.
- The program that was formerly in `examples/frontendclient` has been expanded and moved to the `test` directory under [`test/cmd/frontendclient/`](test/cmd/frontendclient/).
- The client load generator program has been moved from `test/cmd/client` to [`test/cmd/clientloadgen/`](test/cmd/clientloadgen/) to better reflect what it does.
- [Issue #45](https://github.com/googleforgames/open-match/issues/45) The process for moving the build files (`Dockerfile` and `cloudbuild.yaml`) for each component, example, and test program to their respective directories and out of the repository root has started but won't be completed until a future version.
- Put some basic notes in the [production guide](docs/production.md)
- Added a basic [roadmap](docs/roadmap.md)
## v0.2.0 (alpha)
This is a pretty large update. Custom MMFs or evaluators from 0.1.0 may need some tweaking to work with this version. Some Backend API function arguments have changed. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!
v0.2.0 focused on adding additional functionality to Backend API calls and on **reducing the amount of boilerplate code required to make a custom Matchmaking Function**. For this, a new internal API for use by MMFs called the [Matchmaking Logic API (MMLogic API)](README.md#matchmaking-logic-mmlogic-api) has been added. Many of the core components and examples had to be updated to use the new Backend API arguments and the modules to support them, so we recommend you rebuild and redeploy all the components to use v0.2.0.
### Release notes
- MMLogic API is now available. Deploy it to kubernetes using the [appropriate json file]() and check out the [gRPC API specification](api/protobuf-spec/mmlogic.proto) to see how to use it. To write a client against this API, you'll need to compile the protobuf files to your language of choice. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory.
- When using the MMLogic API to filter players into pools, it will attempt to report back the number of players that matched the filters and how long the filters took to query state storage.
- An [example MMF](examples/functions/python3/mmlogic-simple/harness.py) using it has been written in Python3. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory. By default the [example backend client](examples/backendclient/main.go) is now configured to use this MMF, so make sure you have it available before you try to run the latest backend client.
- An [example MMF](examples/functions/php/mmlogic-simple/harness.py) using it has been contributed by Ilya Hrankouski in PHP (thanks!). - The API specs have been split into separate files per API and the protobuf messages are in a separate file. Things were renamed slightly as a result, and you will need to update your API clients. The Frontend API hasn't had its messages moved to the shared messages file yet, but this will happen in an upcoming version.
- The [example golang MMF](examples/functions/golang/manual-simple/) has been updated to use the latest data schemas for MatchObjects, and renamed to `manual-simple` to denote that it is manually manipulating Redis, not using the MMLogic API.
- The API specs have been split into separate files per API and the protobuf messages are in a separate file. Things were renamed slightly as a result, and you will need to update your API clients. The Frontend API hasn't had its messages moved to the shared messages file yet, but this will happen in an upcoming version.
- The message model for using the Backend API has changed slightly - for calls that make MatchObjects, the expectation is that you will provide a MatchObject with a few fields populated, and it will then be shuttled along through state storage to your MMF and back out again, with various processes 'filling in the blanks' of your MatchObject, which is then returned to your code calling the Backend API. Read the [gRPC API specification](api/protobuf-spec/backend.proto) for more information.
- As part of this, compiled protobuf golang modules now live in the [`internal/pb`](internal/pb) directory. There's a handy [bash script](api/protoc-go.sh) for compiling them from the `api/protobuf-spec` directory into this new `internal/pb` directory for development in your local golang environment if you need it.
- As part of this Backend API message shift and the advent of the MMLogic API, 'player pools' and 'rosters' are now first-class data structures in MatchObjects for those who wish to use them. You can ignore them if you like, but if you want to use some of the MMLogic API calls to automate tasks for you - things like filtering a pool of players according to attributes or adding all the players in your rosters to the ignorelist so other MMFs don't try to grab them - you'll need to put your data into the [protobuf messages](api/protobuf-spec/messages.proto) so Open Match knows how to read them. The sample backend client [test profile JSON](examples/backendclient/profiles/testprofile.json) has been updated to use this format if you want to see an example.
- Rosters were formerly space-delimited lists of player IDs. They are now first-class repeated protobuf message fields in the [Roster message format](api/protobuf-spec/messages.proto). That means that in most languages, you can access the roster as a list of players using your native language data structures (more info can be found in the [guide for using protocol buffers in your language of choice](https://developers.google.com/protocol-buffers/docs/reference/overview)). If you don't care about the new fields or the new functionality, you can just leave all the other fields but the player ID unset.
- Open Match is transitioning to using [protocol buffer messages](https://developers.google.com/protocol-buffers/) as its internal data format. There is now a Redis state storage [golang module](internal/statestorage/redis/redispb/) for marshaling and unmarshaling MatchObject messages to and from Redis. It isn't very clean code right now but will get worked on for the next couple releases.
- Ignorelists now exist, and have a Redis state storage [golang module](internal/statestorage/redis/ignorelist/) for CRUD access. Currently three ignorelists are defined in the [config file](config/matchmaker_config.json) with their respective parameters. These are implemented as [Sorted Sets in Redis](https://redis.io/commands#sorted_set).
- For those who only want to stand up Open Match and aren't interested in individually tweaking the required kubernetes resources, there are now [three YAML files](install/yaml) that can be used to install Redis, install Open Match, and (optionally) install Prometheus. You'll still need the `sed` [instructions from the Developer Guide](docs/development.md#running-open-match-in-a-development-environment) to substitute in the name of your Docker container registry.
- A super-simple module has been created for doing intersections, unions, and differences of lists of player IDs. It lives in `internal/set/set.go`.
### Roadmap
- It has become clear from talking to multiple users that the software they write to talk to the Backend API needs a name. 'Backend API Client' is technically correct, but given how many APIs are in Open Match and the overwhelming use of 'Client' to refer to a Game Client in the industry, we're currently calling this a 'Director', as its primary purpose is to 'direct' which profiles are sent to the backend, and 'direct' the resulting MatchObjects to game servers. Further discussion / suggestions are welcome.
- We'll be entering the design stage on longer-running MMFs before the end of the year. We'll get a proposal together and on the github repo as a request for comments, so please keep your eye out for that.
- Having match profiles provide multiple MMFs to run is no longer planned. Just send multiple copies of the profile with different MMFs specified via the backendapi.
- Redis Sentinel will likely not be supported. Instead, replicated instances and HAProxy may be the HA solution of choice. There's an [outstanding issue to investigate and implement](https://github.com/googleforgames/open-match/issues/41) if it fills our needs, feel free to contribute!
## v0.1.0 (alpha)
Initial release.


@ -13,7 +13,7 @@
# limitations under the License.
# When updating Go version, update Dockerfile.ci, Dockerfile.base-build, and go.mod
FROM golang:1.13.1
FROM golang:1.14.0
ENV GO111MODULE=on
WORKDIR /go/src/open-match.dev/open-match


@ -34,13 +34,13 @@ RUN export CLOUD_SDK_REPO="cloud-sdk-stretch" && \
apt-get update -y && apt-get install google-cloud-sdk google-cloud-sdk-app-engine-go -y -qq
# Install Golang
# https://github.com/docker-library/golang/blob/master/1.13/stretch/Dockerfile
# https://github.com/docker-library/golang/blob/master/1.14/stretch/Dockerfile
RUN mkdir -p /toolchain/golang
WORKDIR /toolchain/golang
RUN sudo rm -rf /usr/local/go/
# When updating Go version, update Dockerfile.ci, Dockerfile.base-build, and go.mod
RUN curl -L https://golang.org/dl/go1.13.1.linux-amd64.tar.gz | sudo tar -C /usr/local -xz
RUN curl -L https://golang.org/dl/go1.14.linux-amd64.tar.gz | sudo tar -C /usr/local -xz
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH

466
Makefile

@ -52,7 +52,7 @@
# If you want information on how to edit this file checkout,
# http://makefiletutorial.com/
BASE_VERSION = 0.0.0-dev
BASE_VERSION = 1.1.0
SHORT_SHA = $(shell git rev-parse --short=7 HEAD | tr -d [:punct:])
BRANCH_NAME = $(shell git rev-parse --abbrev-ref HEAD | tr -d [:punct:])
VERSION = $(BASE_VERSION)-$(SHORT_SHA)
@ -60,21 +60,25 @@ BUILD_DATE = $(shell date -u +'%Y-%m-%dT%H:%M:%SZ')
YEAR_MONTH = $(shell date -u +'%Y%m')
YEAR_MONTH_DAY = $(shell date -u +'%Y%m%d')
MAJOR_MINOR_VERSION = $(shell echo $(BASE_VERSION) | cut -d '.' -f1).$(shell echo $(BASE_VERSION) | cut -d '.' -f2)
PROTOC_VERSION = 3.8.0
HELM_VERSION = 3.0.0-beta.5
KUBECTL_VERSION = 1.14.3
PROTOC_VERSION = 3.10.1
HELM_VERSION = 3.0.0
KUBECTL_VERSION = 1.16.2
MINIKUBE_VERSION = latest
GOLANGCI_VERSION = 1.18.0
KIND_VERSION = 0.4.0
SWAGGERUI_VERSION = 3.23.0
TERRAFORM_VERSION = 0.12.3
CHART_TESTING_VERSION = 2.3.3
KIND_VERSION = 0.5.1
SWAGGERUI_VERSION = 3.24.2
GOOGLE_APIS_VERSION = aba342359b6743353195ca53f944fe71e6fb6cd4
GRPC_GATEWAY_VERSION = 1.14.3
TERRAFORM_VERSION = 0.12.13
CHART_TESTING_VERSION = 2.4.0
# A workaround to simplify Open Match development workflow
REDIS_DEV_PASSWORD = helloworld
ENABLE_SECURITY_HARDENING = 0
GO = GO111MODULE=on go
# Defines the absolute local directory of the open-match project
REPOSITORY_ROOT := $(patsubst %/,%,$(dir $(abspath $(MAKEFILE_LIST))))
GO_BUILD_COMMAND = CGO_ENABLED=0 $(GO) build -a -installsuffix cgo .
BUILD_DIR = $(REPOSITORY_ROOT)/build
TOOLCHAIN_DIR = $(BUILD_DIR)/toolchain
TOOLCHAIN_BIN = $(TOOLCHAIN_DIR)/bin
@ -101,10 +105,9 @@ SWAGGERUI_PORT = 51500
PROMETHEUS_PORT = 9090
JAEGER_QUERY_PORT = 16686
GRAFANA_PORT = 3000
LOCUST_PORT = 8089
FRONTEND_PORT = 51504
BACKEND_PORT = 51505
MMLOGIC_PORT = 51503
QUERY_PORT = 51503
SYNCHRONIZER_PORT = 51506
DEMO_PORT = 51507
PROTOC := $(TOOLCHAIN_BIN)/protoc$(EXE_EXTENSION)
@ -115,14 +118,12 @@ KIND = $(TOOLCHAIN_BIN)/kind$(EXE_EXTENSION)
TERRAFORM = $(TOOLCHAIN_BIN)/terraform$(EXE_EXTENSION)
CERTGEN = $(TOOLCHAIN_BIN)/certgen$(EXE_EXTENSION)
GOLANGCI = $(TOOLCHAIN_BIN)/golangci-lint$(EXE_EXTENSION)
DOTNET = $(TOOLCHAIN_DIR)/dotnet/dotnet$(EXE_EXTENSION)
CHART_TESTING = $(TOOLCHAIN_BIN)/ct$(EXE_EXTENSION)
GCLOUD = gcloud --quiet
OPEN_MATCH_CHART_NAME = open-match
OPEN_MATCH_RELEASE_NAME = open-match
OPEN_MATCH_HELM_NAME = open-match
OPEN_MATCH_KUBERNETES_NAMESPACE = open-match
OPEN_MATCH_SECRETS_DIR = $(REPOSITORY_ROOT)/install/helm/open-match/secrets
GCLOUD_ACCOUNT_EMAIL = $(shell gcloud auth list --format yaml | grep account: | cut -c 10-)
GCLOUD_ACCOUNT_EMAIL = $(shell gcloud auth list --format yaml | grep ACTIVE -a2 | grep account: | cut -c 10-)
_GCB_POST_SUBMIT ?= 0
# Latest version triggers builds of :latest images.
_GCB_LATEST_VERSION ?= undefined
@ -140,7 +141,6 @@ ifdef OPEN_MATCH_CI_MODE
export KUBECONFIG = $(HOME)/.kube/config
GCLOUD = gcloud --quiet --no-user-output-enabled
GKE_CLUSTER_NAME = open-match-ci
GKE_CLUSTER_FLAGS = --labels open-match-ci=1 --node-labels=open-match-ci=1
endif
export PATH := $(TOOLCHAIN_BIN):$(PATH)
@ -159,7 +159,6 @@ ifeq ($(OS),Windows_NT)
GOLANGCI_PACKAGE = https://github.com/golangci/golangci-lint/releases/download/v$(GOLANGCI_VERSION)/golangci-lint-$(GOLANGCI_VERSION)-windows-amd64.zip
KIND_PACKAGE = https://github.com/kubernetes-sigs/kind/releases/download/v$(KIND_VERSION)/kind-windows-amd64
TERRAFORM_PACKAGE = https://releases.hashicorp.com/terraform/$(TERRAFORM_VERSION)/terraform_$(TERRAFORM_VERSION)_windows_amd64.zip
DOTNET_PACKAGE = https://download.visualstudio.microsoft.com/download/pr/8ac3e8b7-9918-4e0c-b1be-5aa3e6afd00f/0be99c6ab9362b3c47050cdd50cba846/dotnet-sdk-2.2.402-win-x64.zip
CHART_TESTING_PACKAGE = https://github.com/helm/chart-testing/releases/download/v$(CHART_TESTING_VERSION)/chart-testing_$(CHART_TESTING_VERSION)_windows_amd64.zip
SED_REPLACE = sed -i
else
@ -172,7 +171,6 @@ else
GOLANGCI_PACKAGE = https://github.com/golangci/golangci-lint/releases/download/v$(GOLANGCI_VERSION)/golangci-lint-$(GOLANGCI_VERSION)-linux-amd64.tar.gz
KIND_PACKAGE = https://github.com/kubernetes-sigs/kind/releases/download/v$(KIND_VERSION)/kind-linux-amd64
TERRAFORM_PACKAGE = https://releases.hashicorp.com/terraform/$(TERRAFORM_VERSION)/terraform_$(TERRAFORM_VERSION)_linux_amd64.zip
DOTNET_PACKAGE = https://download.visualstudio.microsoft.com/download/pr/46411df1-f625-45c8-b5e7-08ab736d3daa/0fbc446088b471b0a483f42eb3cbf7a2/dotnet-sdk-2.2.402-linux-x64.tar.gz
CHART_TESTING_PACKAGE = https://github.com/helm/chart-testing/releases/download/v$(CHART_TESTING_VERSION)/chart-testing_$(CHART_TESTING_VERSION)_linux_amd64.tar.gz
SED_REPLACE = sed -i
endif
@ -184,25 +182,22 @@ else
GOLANGCI_PACKAGE = https://github.com/golangci/golangci-lint/releases/download/v$(GOLANGCI_VERSION)/golangci-lint-$(GOLANGCI_VERSION)-darwin-amd64.tar.gz
KIND_PACKAGE = https://github.com/kubernetes-sigs/kind/releases/download/v$(KIND_VERSION)/kind-darwin-amd64
TERRAFORM_PACKAGE = https://releases.hashicorp.com/terraform/$(TERRAFORM_VERSION)/terraform_$(TERRAFORM_VERSION)_darwin_amd64.zip
DOTNET_PACKAGE = https://download.visualstudio.microsoft.com/download/pr/2079de3a-714b-4fa5-840f-70e898b393ef/d631b5018560873ac350d692290881db/dotnet-sdk-2.2.402-osx-x64.tar.gz
CHART_TESTING_PACKAGE = https://github.com/helm/chart-testing/releases/download/v$(CHART_TESTING_VERSION)/chart-testing_$(CHART_TESTING_VERSION)_darwin_amd64.tar.gz
SED_REPLACE = sed -i ''
endif
endif
GOLANG_PROTOS = pkg/pb/backend.pb.go pkg/pb/frontend.pb.go pkg/pb/matchfunction.pb.go pkg/pb/mmlogic.pb.go pkg/pb/messages.pb.go pkg/pb/extensions.pb.go pkg/pb/evaluator.pb.go internal/ipb/synchronizer.pb.go pkg/pb/backend.pb.gw.go pkg/pb/frontend.pb.gw.go pkg/pb/matchfunction.pb.gw.go pkg/pb/mmlogic.pb.gw.go pkg/pb/evaluator.pb.gw.go
GOLANG_PROTOS = pkg/pb/backend.pb.go pkg/pb/frontend.pb.go pkg/pb/matchfunction.pb.go pkg/pb/query.pb.go pkg/pb/messages.pb.go pkg/pb/extensions.pb.go pkg/pb/evaluator.pb.go internal/ipb/synchronizer.pb.go pkg/pb/backend.pb.gw.go pkg/pb/frontend.pb.gw.go pkg/pb/matchfunction.pb.gw.go pkg/pb/query.pb.gw.go pkg/pb/evaluator.pb.gw.go
CSHARP_PROTOS = csharp/OpenMatch/Backend.cs csharp/OpenMatch/Frontend.cs csharp/OpenMatch/Evaluator.cs csharp/OpenMatch/Matchfunction.cs csharp/OpenMatch/Messages.cs csharp/OpenMatch/Mmlogic.cs
SWAGGER_JSON_DOCS = api/frontend.swagger.json api/backend.swagger.json api/query.swagger.json api/matchfunction.swagger.json api/evaluator.swagger.json
SWAGGER_JSON_DOCS = api/frontend.swagger.json api/backend.swagger.json api/mmlogic.swagger.json api/matchfunction.swagger.json api/evaluator.swagger.json
ALL_PROTOS = $(GOLANG_PROTOS) $(SWAGGER_JSON_DOCS) $(CSHARP_PROTOS)
ALL_PROTOS = $(GOLANG_PROTOS) $(SWAGGER_JSON_DOCS)
# CMDS is a list of all folders in cmd/
CMDS = $(notdir $(wildcard cmd/*))
# Names of the individual images, omitting the openmatch prefix.
IMAGES = $(CMDS) mmf-go-soloduel mmf-go-pool mmf-go-rosterbased evaluator-go-simple reaper stress-frontend base-build
IMAGES = $(CMDS) mmf-go-soloduel base-build
help:
@cat Makefile | grep ^\#\# | grep -v ^\#\#\# |cut -c 4-
@ -220,6 +215,9 @@ local-cloud-build: gcloud
## "openmatch-" prefix on the image name and tags.
##
list-images:
@echo $(IMAGES)
#######################################
## build-images / build-<image name>-image: builds images locally
##
@ -242,21 +240,6 @@ $(foreach CMD,$(CMDS),build-$(CMD)-image): build-%-image: docker build-base-buil
build-mmf-go-soloduel-image: docker build-base-build-image
docker build -f examples/functions/golang/soloduel/Dockerfile -t $(REGISTRY)/openmatch-mmf-go-soloduel:$(TAG) -t $(REGISTRY)/openmatch-mmf-go-soloduel:$(ALTERNATE_TAG) .
build-mmf-go-rosterbased-image: docker build-base-build-image
docker build -f examples/functions/golang/rosterbased/Dockerfile -t $(REGISTRY)/openmatch-mmf-go-rosterbased:$(TAG) -t $(REGISTRY)/openmatch-mmf-go-rosterbased:$(ALTERNATE_TAG) .
build-mmf-go-pool-image: docker build-base-build-image
docker build -f examples/functions/golang/pool/Dockerfile -t $(REGISTRY)/openmatch-mmf-go-pool:$(TAG) -t $(REGISTRY)/openmatch-mmf-go-pool:$(ALTERNATE_TAG) .
build-evaluator-go-simple-image: docker build-base-build-image
docker build -f examples/evaluator/golang/simple/Dockerfile -t $(REGISTRY)/openmatch-evaluator-go-simple:$(TAG) -t $(REGISTRY)/openmatch-evaluator-go-simple:$(ALTERNATE_TAG) .
build-reaper-image: docker build-base-build-image
docker build -f tools/reaper/Dockerfile -t $(REGISTRY)/openmatch-reaper:$(TAG) -t $(REGISTRY)/openmatch-reaper:$(ALTERNATE_TAG) .
build-stress-frontend-image: docker
docker build -f test/stress/Dockerfile -t $(REGISTRY)/openmatch-stress-frontend:$(TAG) -t $(REGISTRY)/openmatch-stress-frontend:$(ALTERNATE_TAG) .
#######################################
## push-images / push-<image name>-image: builds and pushes images to your
## container registry.
@ -302,23 +285,23 @@ $(foreach IMAGE,$(IMAGES),clean-$(IMAGE)-image): clean-%-image:
#####################################################################################################################
update-chart-deps: build/toolchain/bin/helm$(EXE_EXTENSION)
(cd $(REPOSITORY_ROOT)/install/helm/open-match; $(HELM) repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com; $(HELM) dependency update)
(cd $(REPOSITORY_ROOT)/install/helm/open-match; $(HELM) repo add incubator https://charts.helm.sh/stable; $(HELM) dependency update)
lint-chart: build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/ct$(EXE_EXTENSION)
(cd $(REPOSITORY_ROOT)/install/helm; $(HELM) lint $(OPEN_MATCH_CHART_NAME))
(cd $(REPOSITORY_ROOT)/install/helm; $(HELM) lint $(OPEN_MATCH_HELM_NAME))
$(CHART_TESTING) lint --all --chart-yaml-schema $(TOOLCHAIN_BIN)/etc/chart_schema.yaml --lint-conf $(TOOLCHAIN_BIN)/etc/lintconf.yaml --chart-dirs $(REPOSITORY_ROOT)/install/helm/
$(CHART_TESTING) lint-and-install --all --chart-yaml-schema $(TOOLCHAIN_BIN)/etc/chart_schema.yaml --lint-conf $(TOOLCHAIN_BIN)/etc/lintconf.yaml --chart-dirs $(REPOSITORY_ROOT)/install/helm/
print-chart: build/toolchain/bin/helm$(EXE_EXTENSION)
(cd $(REPOSITORY_ROOT)/install/helm; $(HELM) install --name $(OPEN_MATCH_RELEASE_NAME) --dry-run --debug $(OPEN_MATCH_CHART_NAME))
build/chart/open-match-$(BASE_VERSION).tgz: build/toolchain/bin/helm$(EXE_EXTENSION) lint-chart
mkdir -p $(BUILD_DIR)/chart/
$(HELM) package -d $(BUILD_DIR)/chart/ --version $(BASE_VERSION) $(REPOSITORY_ROOT)/install/helm/open-match
build/chart/index.yaml: build/toolchain/bin/helm$(EXE_EXTENSION) gcloud build/chart/open-match-$(BASE_VERSION).tgz
mkdir -p $(BUILD_DIR)/chart-index/
-gsutil cp gs://open-match-chart/chart/index.yaml $(BUILD_DIR)/chart-index/
$(HELM) repo index --merge $(BUILD_DIR)/chart/index.yaml $(BUILD_DIR)/chart/
-gsutil -m cp gs://open-match-chart/chart/open-match-* $(BUILD_DIR)/chart-index/
$(HELM) repo index $(BUILD_DIR)/chart-index/
$(HELM) repo index --merge $(BUILD_DIR)/chart-index/index.yaml $(BUILD_DIR)/chart/
build/chart/index.yaml.$(YEAR_MONTH_DAY): build/chart/index.yaml
cp $(BUILD_DIR)/chart/index.yaml $(BUILD_DIR)/chart/index.yaml.$(YEAR_MONTH_DAY)
@ -330,111 +313,149 @@ install-chart-prerequisite: build/toolchain/bin/kubectl$(EXE_EXTENSION) update-c
$(KUBECTL) apply -f install/gke-metadata-server-workaround.yaml
# Used for Open Match development. Install om-configmap-override.yaml by default.
HELM_UPGRADE_FLAGS = --cleanup-on-fail -i --atomic --no-hooks --debug --timeout=600s --namespace=$(OPEN_MATCH_KUBERNETES_NAMESPACE) --set global.gcpProjectId=$(GCP_PROJECT_ID) --set open-match-override.enabled=true
HELM_UPGRADE_FLAGS = --cleanup-on-fail -i --no-hooks --debug --timeout=600s --namespace=$(OPEN_MATCH_KUBERNETES_NAMESPACE) --set global.gcpProjectId=$(GCP_PROJECT_ID) --set open-match-override.enabled=true --set redis.password=$(REDIS_DEV_PASSWORD)
# Used for generating static yamls. Install om-configmap-override.yaml as needed.
HELM_TEMPLATE_FLAGS = --no-hooks --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --set usingHelmTemplate=true
HELM_IMAGE_FLAGS = --set global.image.registry=$(REGISTRY) --set global.image.tag=$(TAG)
install-large-chart: install-chart-prerequisite build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_RELEASE_NAME) $(HELM_UPGRADE_FLAGS) install/helm/open-match $(HELM_IMAGE_FLAGS) \
--set open-match-telemetry.enabled=true \
--set global.telemetry.grafana.enabled=true \
--set global.telemetry.jaeger.enabled=true \
--set global.telemetry.prometheus.enabled=true \
--set global.logging.rpc.enabled=true
install-demo: build/toolchain/bin/helm$(EXE_EXTENSION)
cp $(REPOSITORY_ROOT)/install/02-open-match-demo.yaml $(REPOSITORY_ROOT)/install/tmp-demo.yaml
$(SED_REPLACE) 's|gcr.io/open-match-public-images|$(REGISTRY)|g' $(REPOSITORY_ROOT)/install/tmp-demo.yaml
$(SED_REPLACE) 's|0.0.0-dev|$(TAG)|g' $(REPOSITORY_ROOT)/install/tmp-demo.yaml
$(KUBECTL) apply -f $(REPOSITORY_ROOT)/install/tmp-demo.yaml
rm $(REPOSITORY_ROOT)/install/tmp-demo.yaml
install-chart: install-chart-prerequisite build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_RELEASE_NAME) $(HELM_UPGRADE_FLAGS) install/helm/open-match $(HELM_IMAGE_FLAGS)
install-scale-chart: build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_RELEASE_NAME) $(HELM_UPGRADE_FLAGS) install/helm/open-match $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=true \
# install-large-chart will install open-match-core, open-match-demo with the demo evaluator and mmf, and telemetry support.
install-large-chart: install-chart-prerequisite install-demo build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_HELM_NAME) $(HELM_UPGRADE_FLAGS) --atomic install/helm/open-match $(HELM_IMAGE_FLAGS) \
--set open-match-telemetry.enabled=true \
--set open-match-demo.enabled=false \
--set open-match-customize.enabled=true \
--set open-match-customize.function.image=openmatch-mmf-go-rosterbased\
--set open-match-customize.evaluator.enabled=true \
--set global.telemetry.grafana.enabled=true \
--set global.telemetry.jaeger.enabled=true \
--set global.telemetry.prometheus.enabled=true \
--set open-match-scale.enabled=true \
--set global.logging.rpc.enabled=false
--set global.telemetry.prometheus.enabled=true
# install-chart will install open-match-core and open-match-demo, with the demo evaluator and mmf.
install-chart: install-chart-prerequisite install-demo build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_HELM_NAME) $(HELM_UPGRADE_FLAGS) --atomic install/helm/open-match $(HELM_IMAGE_FLAGS) \
--set open-match-customize.enabled=true \
--set open-match-customize.evaluator.enabled=true
# install-scale-chart will wait for open-match-core with telemetry support to be installed, then install the open-match-scale chart.
install-scale-chart: install-chart-prerequisite build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_HELM_NAME) $(HELM_UPGRADE_FLAGS) --atomic install/helm/open-match $(HELM_IMAGE_FLAGS) -f install/helm/open-match/values-production.yaml \
--set open-match-telemetry.enabled=true \
--set open-match-customize.enabled=true \
--set open-match-customize.function.enabled=true \
--set open-match-customize.evaluator.enabled=true \
--set open-match-customize.function.image=openmatch-scale-mmf \
--set global.telemetry.grafana.enabled=true \
--set global.telemetry.jaeger.enabled=false \
--set global.telemetry.prometheus.enabled=true
$(HELM) template $(OPEN_MATCH_HELM_NAME)-scale install/helm/open-match $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) -f install/helm/open-match/values-production.yaml \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set global.telemetry.prometheus.enabled=true \
--set global.telemetry.grafana.enabled=true \
--set open-match-scale.enabled=true | $(KUBECTL) apply -f -
# install-ci-chart will install open-match-core with a pool-based mmf for end-to-end in-cluster tests.
install-ci-chart: install-chart-prerequisite build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
# Ignore errors resulting from rerunning a failed build
-$(KUBECTL) create clusterrolebinding default-view-$(OPEN_MATCH_KUBERNETES_NAMESPACE) --clusterrole=view --serviceaccount=$(OPEN_MATCH_KUBERNETES_NAMESPACE):default
$(HELM) upgrade $(OPEN_MATCH_RELEASE_NAME) $(HELM_UPGRADE_FLAGS) install/helm/open-match $(HELM_IMAGE_FLAGS) \
--set redis.ignoreLists.ttl=1000ms \
--set open-match-test.enabled=true \
--set open-match-demo.enabled=false \
--set open-match-customize.function.image=openmatch-mmf-go-pool \
$(HELM) upgrade $(OPEN_MATCH_HELM_NAME) $(HELM_UPGRADE_FLAGS) --atomic install/helm/open-match $(HELM_IMAGE_FLAGS) \
--set query.replicas=1,frontend.replicas=1,backend.replicas=1 \
--set evaluator.hostName=open-match-test \
--set evaluator.grpcPort=50509 \
--set evaluator.httpPort=51509 \
--set open-match-core.registrationInterval=200ms \
--set open-match-core.proposalCollectionInterval=200ms \
--set open-match-core.assignedDeleteTimeout=200ms \
--set open-match-core.pendingReleaseTimeout=200ms \
--set open-match-core.queryPageSize=10 \
--set global.gcpProjectId=intentionally-invalid-value \
--set redis.master.resources.requests.cpu=0.6,redis.master.resources.requests.memory=300Mi \
--set ci=true
dry-chart: build/toolchain/bin/helm$(EXE_EXTENSION)
$(HELM) upgrade $(HELM_UPGRADE_FLAGS) --dry-run $(OPEN_MATCH_RELEASE_NAME) install/helm/open-match $(HELM_IMAGE_FLAGS)
delete-chart: build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/kubectl$(EXE_EXTENSION)
-$(HELM) uninstall $(OPEN_MATCH_RELEASE_NAME)
-$(KUBECTL) --ignore-not-found=true delete crd prometheuses.monitoring.coreos.com
-$(KUBECTL) --ignore-not-found=true delete crd servicemonitors.monitoring.coreos.com
-$(KUBECTL) --ignore-not-found=true delete crd prometheusrules.monitoring.coreos.com
-$(HELM) uninstall $(OPEN_MATCH_HELM_NAME)
-$(HELM) uninstall $(OPEN_MATCH_HELM_NAME)-demo
-$(KUBECTL) delete psp,clusterrole,clusterrolebinding --selector=release=open-match
-$(KUBECTL) delete psp,clusterrole,clusterrolebinding --selector=release=open-match-demo
-$(KUBECTL) delete namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE)
-$(KUBECTL) delete namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE)-demo
install/yaml/: update-chart-deps install/yaml/install.yaml install/yaml/01-open-match-core.yaml install/yaml/02-open-match-demo.yaml install/yaml/03-prometheus-chart.yaml install/yaml/04-grafana-chart.yaml install/yaml/05-jaeger-chart.yaml install/yaml/06-open-match-override-configmap.yaml
ifneq ($(BASE_VERSION), 0.0.0-dev)
install/yaml/: REGISTRY = gcr.io/$(OPEN_MATCH_PUBLIC_IMAGES_PROJECT_ID)
install/yaml/: TAG = $(BASE_VERSION)
endif
install/yaml/: update-chart-deps install/yaml/install.yaml install/yaml/01-open-match-core.yaml install/yaml/02-open-match-demo.yaml install/yaml/03-prometheus-chart.yaml install/yaml/04-grafana-chart.yaml install/yaml/05-jaeger-chart.yaml install/yaml/06-open-match-override-configmap.yaml install/yaml/07-open-match-default-evaluator.yaml
# We have to hard-code the Jaeger endpoints as we are excluding Jaeger, so Helm cannot determine the endpoints from the Jaeger subchart
install/yaml/01-open-match-core.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_RELEASE_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-customize.enabled=false \
--set open-match-telemetry.enabled=false \
--set open-match-demo.enabled=false \
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set-string global.telemetry.jaeger.agentEndpoint="$(OPEN_MATCH_HELM_NAME)-jaeger-agent:6831" \
--set-string global.telemetry.jaeger.collectorEndpoint="http://$(OPEN_MATCH_HELM_NAME)-jaeger-collector:14268/api/traces" \
install/helm/open-match > install/yaml/01-open-match-core.yaml
install/yaml/02-open-match-demo.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_RELEASE_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-telemetry.enabled=false \
install/helm/open-match > install/yaml/02-open-match-demo.yaml
cp $(REPOSITORY_ROOT)/install/02-open-match-demo.yaml $(REPOSITORY_ROOT)/install/yaml/02-open-match-demo.yaml
$(SED_REPLACE) 's|0.0.0-dev|$(TAG)|g' $(REPOSITORY_ROOT)/install/yaml/02-open-match-demo.yaml
$(SED_REPLACE) 's|gcr.io/open-match-public-images|$(REGISTRY)|g' $(REPOSITORY_ROOT)/install/yaml/02-open-match-demo.yaml
install/yaml/03-prometheus-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_RELEASE_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-customize.enabled=false \
--set open-match-demo.enabled=false \
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set open-match-telemetry.enabled=true \
--set global.telemetry.prometheus.enabled=true \
install/helm/open-match > install/yaml/03-prometheus-chart.yaml
# We have to hard-code the Prometheus Server URL as we are excluding Prometheus, so Helm cannot determine the URL from the Prometheus subchart
install/yaml/04-grafana-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_RELEASE_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-customize.enabled=false \
--set open-match-demo.enabled=false \
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set open-match-telemetry.enabled=true \
--set global.telemetry.grafana.enabled=true \
--set-string global.telemetry.grafana.prometheusServer="http://$(OPEN_MATCH_HELM_NAME)-prometheus-server.$(OPEN_MATCH_KUBERNETES_NAMESPACE).svc.cluster.local:80/" \
install/helm/open-match > install/yaml/04-grafana-chart.yaml
install/yaml/05-jaeger-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_RELEASE_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-customize.enabled=false \
--set open-match-demo.enabled=false \
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set open-match-telemetry.enabled=true \
--set global.telemetry.jaeger.enabled=true \
install/helm/open-match > install/yaml/05-jaeger-chart.yaml
install/yaml/06-open-match-override-configmap.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_RELEASE_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set open-match-override.enabled=true \
-s templates/om-configmap-override.yaml \
install/helm/open-match > install/yaml/06-open-match-override-configmap.yaml
install/yaml/07-open-match-default-evaluator.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-core.enabled=false \
--set open-match-core.redis.enabled=false \
--set open-match-customize.enabled=true \
--set open-match-customize.evaluator.enabled=true \
install/helm/open-match > install/yaml/07-open-match-default-evaluator.yaml
install/yaml/install.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_RELEASE_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-customize.enabled=false \
--set open-match-demo.enabled=false \
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set open-match-customize.enabled=true \
--set open-match-customize.evaluator.enabled=true \
--set open-match-telemetry.enabled=true \
--set global.telemetry.jaeger.enabled=true \
--set global.telemetry.grafana.enabled=true \
--set global.telemetry.prometheus.enabled=true \
@ -446,7 +467,7 @@ set-redis-password:
read REDIS_PASSWORD; \
stty echo; \
printf "\n"; \
$(KUBECTL) create secret generic om-redis -n $(OPEN_MATCH_KUBERNETES_NAMESPACE) --from-literal=redis-password=$$REDIS_PASSWORD --dry-run -o yaml | $(KUBECTL) replace -f - --force
$(KUBECTL) create secret generic open-match-redis -n $(OPEN_MATCH_KUBERNETES_NAMESPACE) --from-literal=redis-password=$$REDIS_PASSWORD --dry-run -o yaml | $(KUBECTL) replace -f - --force
install-toolchain: install-kubernetes-tools install-protoc-tools install-openmatch-tools
install-kubernetes-tools: build/toolchain/bin/kubectl$(EXE_EXTENSION) build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/minikube$(EXE_EXTENSION) build/toolchain/bin/terraform$(EXE_EXTENSION)
@ -507,22 +528,6 @@ build/toolchain/bin/terraform$(EXE_EXTENSION):
mv $(TOOLCHAIN_DIR)/temp-terraform/terraform$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/terraform$(EXE_EXTENSION)
rm -rf $(TOOLCHAIN_DIR)/temp-terraform/
build/toolchain/dotnet/:
mkdir -p $(TOOLCHAIN_DIR)/dotnet
ifeq ($(suffix $(DOTNET_PACKAGE)),.zip)
cd $(TOOLCHAIN_DIR)/dotnet && curl -Lo dotnet.zip $(DOTNET_PACKAGE) && unzip -j -q -o dotnet.zip
rm -rf $(TOOLCHAIN_DIR)/dotnet.zip
else
cd $(TOOLCHAIN_DIR)/dotnet && curl -Lo dotnet.tar.gz $(DOTNET_PACKAGE) && tar xzf dotnet.tar.gz --strip-components 1
rm -rf $(TOOLCHAIN_DIR)/dotnet.tar.gz
endif
build/toolchain/python/:
virtualenv --python=python3 $(TOOLCHAIN_DIR)/python/
# Hack to workaround some crazy bug in pip that's chopping off python executable's name.
cd $(TOOLCHAIN_DIR)/python/bin && ln -s python3 pytho
cd $(TOOLCHAIN_DIR)/python/ && . bin/activate && pip install locustio google-cloud-storage && deactivate
build/toolchain/bin/protoc$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
curl -o $(TOOLCHAIN_DIR)/protoc-temp.zip -L $(PROTOC_PACKAGE)
@ -544,13 +549,13 @@ build/toolchain/bin/protoc-gen-swagger$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
cd $(TOOLCHAIN_BIN) && $(GO) build -i -pkgdir . github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
build/toolchain/bin/certgen$(EXE_EXTENSION): tools/certgen/certgen$(EXE_EXTENSION)
build/toolchain/bin/certgen$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
cp -f $(REPOSITORY_ROOT)/tools/certgen/certgen$(EXE_EXTENSION) $(CERTGEN)
cd $(TOOLCHAIN_BIN) && $(GO) build $(REPOSITORY_ROOT)/tools/certgen/
build/toolchain/bin/reaper$(EXE_EXTENSION): tools/reaper/reaper$(EXE_EXTENSION)
build/toolchain/bin/reaper$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
cp -f $(REPOSITORY_ROOT)/tools/reaper/reaper$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/reaper$(EXE_EXTENSION)
cd $(TOOLCHAIN_BIN) && $(GO) build $(REPOSITORY_ROOT)/tools/reaper/
# Fake target for docker
docker: no-sudo
@ -600,7 +605,10 @@ get-kind-kubeconfig: build/toolchain/bin/kind$(EXE_EXTENSION)
delete-kind-cluster: build/toolchain/bin/kind$(EXE_EXTENSION) build/toolchain/bin/kubectl$(EXE_EXTENSION)
-$(KIND) delete cluster
create-gke-cluster: GKE_VERSION = 1.13.9-gke.3 # gcloud beta container get-server-config --zone us-west1-a
create-cluster-role-binding:
$(KUBECTL) create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$(GCLOUD_ACCOUNT_EMAIL)
create-gke-cluster: GKE_VERSION = 1.15.12-gke.20 # gcloud beta container get-server-config --zone us-west1-a
create-gke-cluster: GKE_CLUSTER_SHAPE_FLAGS = --machine-type n1-standard-4 --enable-autoscaling --min-nodes 1 --num-nodes 2 --max-nodes 10 --disk-size 50
create-gke-cluster: GKE_FUTURE_COMPAT_FLAGS = --no-enable-basic-auth --no-issue-client-certificate --enable-ip-alias --metadata disable-legacy-endpoints=true --enable-autoupgrade
create-gke-cluster: build/toolchain/bin/kubectl$(EXE_EXTENSION) gcloud
@ -608,9 +616,9 @@ create-gke-cluster: build/toolchain/bin/kubectl$(EXE_EXTENSION) gcloud
--enable-pod-security-policy \
--cluster-version $(GKE_VERSION) \
--image-type cos_containerd \
--tags open-match \
--identity-namespace=$(GCP_PROJECT_ID).svc.id.goog
$(KUBECTL) create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$(GCLOUD_ACCOUNT_EMAIL)
--tags open-match
$(MAKE) create-cluster-role-binding
delete-gke-cluster: gcloud
-$(GCLOUD) $(GCP_PROJECT_FLAG) container clusters delete $(GKE_CLUSTER_NAME) $(GCP_LOCATION_FLAG) $(GCLOUD_EXTRA_FLAGS)
@ -637,24 +645,6 @@ pkg/pb/%.pb.go: api/%.proto third_party/ build/toolchain/bin/protoc$(EXE_EXTENSI
--go_out=plugins=grpc:$(REPOSITORY_ROOT)/build/prototmp
mv $(REPOSITORY_ROOT)/build/prototmp/open-match.dev/open-match/$@ $@
csharp/OpenMatch/Annotations.cs: third_party/
$(PROTOC) third_party/protoc-gen-swagger/options/annotations.proto \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--plugin=protoc-gen-grpc=grpc_csharp_plugin \
--csharp_out=$(REPOSITORY_ROOT)/csharp/OpenMatch
csharp/OpenMatch/Openapiv2.cs: third_party/
$(PROTOC) third_party/protoc-gen-swagger/options/openapiv2.proto \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--plugin=protoc-gen-grpc=grpc_csharp_plugin \
--csharp_out=$(REPOSITORY_ROOT)/csharp/OpenMatch/
csharp/OpenMatch/%.cs: third_party/ build/toolchain/bin/protoc$(EXE_EXTENSION) csharp/OpenMatch/Openapiv2.cs csharp/OpenMatch/Annotations.cs
$(PROTOC) api/$(shell echo $(*F)| tr A-Z a-z).proto \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--plugin=protoc-gen-grpc=grpc_csharp_plugin \
--csharp_out=$(REPOSITORY_ROOT)/csharp/OpenMatch
internal/ipb/%.pb.go: internal/api/%.proto third_party/ build/toolchain/bin/protoc$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-go$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION)
mkdir -p $(REPOSITORY_ROOT)/build/prototmp $(REPOSITORY_ROOT)/internal/ipb
$(PROTOC) $< \
@ -678,22 +668,17 @@ api/api.md: third_party/ build/toolchain/bin/protoc-gen-doc$(EXE_EXTENSION)
$(PROTOC) api/*.proto \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--doc_out=. \
--doc_opt=markdown,api.md
# Crazy hack that insert hugo link reference to this API doc -)
$(SED_REPLACE) '1 i\---\
title: "Open Match API References" \
linkTitle: "Open Match API References" \
weight: 2 \
description: \
This document provides API references for Open Match services. \
--- \
' ./api.md && mv ./api.md $(REPOSITORY_ROOT)/../open-match-docs/site/content/en/docs/Reference/
--doc_opt=markdown,api_temp.md
# Crazy hack that inserts the hugo link reference into this API doc -)
cat ./docs/hugo_apiheader.txt ./api_temp.md >> api.md
mv ./api.md $(REPOSITORY_ROOT)/../open-match-docs/site/content/en/docs/Reference/
rm ./api_temp.md
# The include structure of the protos needs to be called out so the dependency chain is run through properly.
pkg/pb/backend.pb.go: pkg/pb/messages.pb.go
pkg/pb/frontend.pb.go: pkg/pb/messages.pb.go
pkg/pb/matchfunction.pb.go: pkg/pb/messages.pb.go
pkg/pb/mmlogic.pb.go: pkg/pb/messages.pb.go
pkg/pb/query.pb.go: pkg/pb/messages.pb.go
pkg/pb/evaluator.pb.go: pkg/pb/messages.pb.go
internal/ipb/synchronizer.pb.go: pkg/pb/messages.pb.go
@ -701,18 +686,31 @@ build: assets
$(GO) build ./...
$(GO) build -tags e2ecluster ./...
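# test_folder recursively runs 'go test' (with -race, plus a second pass restricted to tests named *IgnoreRace)
# in every directory that carries its own go.mod; fast_test_folder does the same walk with a plain 'go test'.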
define test_folder
$(if $(wildcard $(1)/go.mod), \
cd $(1) && \
$(GO) test -cover -test.count $(GOLANG_TEST_COUNT) -race ./... && \
$(GO) test -cover -test.count $(GOLANG_TEST_COUNT) -run IgnoreRace$$ ./... \
)
$(foreach dir, $(wildcard $(1)/*/.), $(call test_folder, $(dir)))
endef
define fast_test_folder
$(if $(wildcard $(1)/go.mod), \
cd $(1) && \
$(GO) test ./... \
)
$(foreach dir, $(wildcard $(1)/*/.), $(call fast_test_folder, $(dir)))
endef
test: $(ALL_PROTOS) tls-certs third_party/
$(GO) test -cover -test.count $(GOLANG_TEST_COUNT) -race ./...
$(GO) test -cover -test.count $(GOLANG_TEST_COUNT) -run IgnoreRace$$ ./...
$(call test_folder,.)
fasttest: $(ALL_PROTOS) tls-certs third_party/
$(call fast_test_folder,.)
test-e2e-cluster: all-protos tls-certs third_party/
-$(KUBECTL) wait job --for condition=complete -n $(OPEN_MATCH_KUBERNETES_NAMESPACE) -l component=e2e-job --timeout 200s
$(KUBECTL) logs job/e2e-job -n $(OPEN_MATCH_KUBERNETES_NAMESPACE)
$(KUBECTL) wait job --for condition=complete -n $(OPEN_MATCH_KUBERNETES_NAMESPACE) -l component=e2e-job --timeout 0
stress-frontend-%: build/toolchain/python/
$(TOOLCHAIN_DIR)/python/bin/locust -f $(REPOSITORY_ROOT)/test/stress/frontend.py --host=http://localhost:$(FRONTEND_PORT) \
--no-web -c $* -r 100 -t10m --csv=test/stress/stress_user$*
$(HELM) test --timeout 7m30s -v 0 --logs -n $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(OPEN_MATCH_HELM_NAME)
fmt:
$(GO) fmt ./...
@ -754,61 +752,6 @@ build/cmd/demo-%/COPY_PHONY:
mkdir -p $(BUILD_DIR)/cmd/demo-$*/
cp -r examples/demo/static $(BUILD_DIR)/cmd/demo-$*/static
all: service-binaries example-binaries tools-binaries
service-binaries: cmd/minimatch/minimatch$(EXE_EXTENSION) cmd/swaggerui/swaggerui$(EXE_EXTENSION)
service-binaries: cmd/backend/backend$(EXE_EXTENSION) cmd/frontend/frontend$(EXE_EXTENSION)
service-binaries: cmd/mmlogic/mmlogic$(EXE_EXTENSION) cmd/synchronizer/synchronizer$(EXE_EXTENSION)
example-binaries: example-mmf-binaries example-evaluator-binaries
example-mmf-binaries: examples/functions/golang/soloduel/soloduel$(EXE_EXTENSION) examples/functions/golang/pool/pool$(EXE_EXTENSION) examples/functions/golang/rosterbased/rosterbased$(EXE_EXTENSION)
example-evaluator-binaries: examples/evaluator/golang/simple/simple$(EXE_EXTENSION)
examples/functions/golang/soloduel/soloduel$(EXE_EXTENSION): pkg/pb/mmlogic.pb.go pkg/pb/mmlogic.pb.gw.go api/mmlogic.swagger.json pkg/pb/matchfunction.pb.go pkg/pb/matchfunction.pb.gw.go api/matchfunction.swagger.json
cd $(REPOSITORY_ROOT)/examples/functions/golang/soloduel; $(GO_BUILD_COMMAND)
examples/functions/golang/rosterbased/rosterbased$(EXE_EXTENSION): pkg/pb/mmlogic.pb.go pkg/pb/mmlogic.pb.gw.go api/mmlogic.swagger.json pkg/pb/matchfunction.pb.go pkg/pb/matchfunction.pb.gw.go api/matchfunction.swagger.json
cd $(REPOSITORY_ROOT)/examples/functions/golang/rosterbased; $(GO_BUILD_COMMAND)
examples/functions/golang/pool/pool$(EXE_EXTENSION): pkg/pb/mmlogic.pb.go pkg/pb/mmlogic.pb.gw.go api/mmlogic.swagger.json pkg/pb/matchfunction.pb.go pkg/pb/matchfunction.pb.gw.go api/matchfunction.swagger.json
cd $(REPOSITORY_ROOT)/examples/functions/golang/pool; $(GO_BUILD_COMMAND)
examples/evaluator/golang/simple/simple$(EXE_EXTENSION): pkg/pb/evaluator.pb.go pkg/pb/evaluator.pb.gw.go api/evaluator.swagger.json
cd $(REPOSITORY_ROOT)/examples/evaluator/golang/simple; $(GO_BUILD_COMMAND)
tools-binaries: tools/certgen/certgen$(EXE_EXTENSION) tools/reaper/reaper$(EXE_EXTENSION)
cmd/backend/backend$(EXE_EXTENSION): pkg/pb/backend.pb.go pkg/pb/backend.pb.gw.go api/backend.swagger.json
cd $(REPOSITORY_ROOT)/cmd/backend; $(GO_BUILD_COMMAND)
cmd/frontend/frontend$(EXE_EXTENSION): pkg/pb/frontend.pb.go pkg/pb/frontend.pb.gw.go api/frontend.swagger.json
cd $(REPOSITORY_ROOT)/cmd/frontend; $(GO_BUILD_COMMAND)
cmd/mmlogic/mmlogic$(EXE_EXTENSION): pkg/pb/mmlogic.pb.go pkg/pb/mmlogic.pb.gw.go api/mmlogic.swagger.json
cd $(REPOSITORY_ROOT)/cmd/mmlogic; $(GO_BUILD_COMMAND)
cmd/synchronizer/synchronizer$(EXE_EXTENSION): internal/ipb/synchronizer.pb.go
cd $(REPOSITORY_ROOT)/cmd/synchronizer; $(GO_BUILD_COMMAND)
# Note: This list of dependencies is long but only add file references here. If you add a .PHONY dependency make will always rebuild it.
cmd/minimatch/minimatch$(EXE_EXTENSION): pkg/pb/backend.pb.go pkg/pb/backend.pb.gw.go api/backend.swagger.json
cmd/minimatch/minimatch$(EXE_EXTENSION): pkg/pb/frontend.pb.go pkg/pb/frontend.pb.gw.go api/frontend.swagger.json
cmd/minimatch/minimatch$(EXE_EXTENSION): pkg/pb/mmlogic.pb.go pkg/pb/mmlogic.pb.gw.go api/mmlogic.swagger.json
cmd/minimatch/minimatch$(EXE_EXTENSION): pkg/pb/evaluator.pb.go pkg/pb/evaluator.pb.gw.go api/evaluator.swagger.json
cmd/minimatch/minimatch$(EXE_EXTENSION): pkg/pb/matchfunction.pb.go pkg/pb/matchfunction.pb.gw.go api/matchfunction.swagger.json
cmd/minimatch/minimatch$(EXE_EXTENSION): pkg/pb/messages.pb.go
cmd/minimatch/minimatch$(EXE_EXTENSION): internal/ipb/synchronizer.pb.go
cd $(REPOSITORY_ROOT)/cmd/minimatch; $(GO_BUILD_COMMAND)
cmd/swaggerui/swaggerui$(EXE_EXTENSION): third_party/swaggerui/
cd $(REPOSITORY_ROOT)/cmd/swaggerui; $(GO_BUILD_COMMAND)
tools/certgen/certgen$(EXE_EXTENSION):
cd $(REPOSITORY_ROOT)/tools/certgen/ && $(GO_BUILD_COMMAND)
tools/reaper/reaper$(EXE_EXTENSION):
cd $(REPOSITORY_ROOT)/tools/reaper/ && $(GO_BUILD_COMMAND)
build/policies/binauthz.yaml: install/policies/binauthz.yaml
mkdir -p $(BUILD_DIR)/policies
cp -f $(REPOSITORY_ROOT)/install/policies/binauthz.yaml $(BUILD_DIR)/policies/binauthz.yaml
@ -845,7 +788,7 @@ build/certificates/: build/toolchain/bin/certgen$(EXE_EXTENSION)
cd $(BUILD_DIR)/certificates/ && $(CERTGEN)
md-test: docker
docker run -t --rm -v $(CURDIR):/mnt:ro dkhamsing/awesome_bot --white-list "localhost,https://goreportcard.com,github.com/googleforgames/open-match/tree/release-,github.com/googleforgames/open-match/blob/release-,github.com/googleforgames/open-match/releases/download/v,https://swagger.io/tools/swagger-codegen/" --allow-dupe --allow-redirect --skip-save-results `find . -type f -name '*.md' -not -path './build/*' -not -path './.git*'`
docker run -t --rm -v $(REPOSITORY_ROOT):/mnt:ro dkhamsing/awesome_bot --white-list "localhost,https://goreportcard.com,github.com/googleforgames/open-match/tree/release-,github.com/googleforgames/open-match/blob/release-,github.com/googleforgames/open-match/releases/download/v,https://swagger.io/tools/swagger-codegen/" --allow-dupe --allow-redirect --skip-save-results `find . -type f -name '*.md' -not -path './build/*' -not -path './.git*'`
ci-deploy-artifacts: install/yaml/ $(SWAGGER_JSON_DOCS) build/chart/ gcloud
ifeq ($(_GCB_POST_SUBMIT),1)
@ -861,11 +804,11 @@ else
endif
ci-reap-namespaces: build/toolchain/bin/reaper$(EXE_EXTENSION)
$(TOOLCHAIN_BIN)/reaper -age=30m
-$(TOOLCHAIN_BIN)/reaper -age=30m
# For presubmit we want to update the protobuf generated files and verify that tests are good.
presubmit: GOLANG_TEST_COUNT = 5
presubmit: clean third_party/ update-chart-deps assets update-deps lint build install-toolchain test md-test terraform-test
presubmit: clean third_party/ update-chart-deps assets update-deps lint build test md-test terraform-test
build/release/: presubmit clean-install-yaml install/yaml/
mkdir -p $(BUILD_DIR)/release/
@ -896,28 +839,13 @@ clean-secrets:
clean-protos:
rm -rf $(REPOSITORY_ROOT)/build/prototmp/
rm -rf $(REPOSITORY_ROOT)/csharp/OpenMatch/*.cs
rm -rf $(REPOSITORY_ROOT)/pkg/pb/
rm -rf $(REPOSITORY_ROOT)/internal/ipb/
clean-binaries:
rm -rf $(REPOSITORY_ROOT)/cmd/backend/backend$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/cmd/synchronizer/synchronizer$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/cmd/frontend/frontend$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/cmd/mmlogic/mmlogic$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/cmd/minimatch/minimatch$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/examples/functions/golang/soloduel/soloduel$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/examples/functions/golang/rosterbased/rosterbased$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/examples/functions/golang/pool/pool$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/examples/functions/golang/simple/evaluator$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/cmd/swaggerui/swaggerui$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/tools/certgen/certgen$(EXE_EXTENSION)
rm -rf $(REPOSITORY_ROOT)/tools/reaper/reaper$(EXE_EXTENSION)
clean-terraform:
rm -rf $(REPOSITORY_ROOT)/install/terraform/.terraform/
clean-build: clean-toolchain clean-archives clean-release clean-chart
clean-build: clean-toolchain clean-release clean-chart
rm -rf $(BUILD_DIR)/
clean-release:
@ -929,47 +857,40 @@ clean-toolchain:
clean-chart:
rm -rf $(BUILD_DIR)/chart/
clean-archives:
rm -rf $(BUILD_DIR)/archives/
clean-install-yaml:
rm -f $(REPOSITORY_ROOT)/install/yaml/*
clean-stress-test-tools:
rm -rf $(TOOLCHAIN_DIR)/python
rm -f $(REPOSITORY_ROOT)/test/stress/*.csv
clean-swagger-docs:
rm -rf $(REPOSITORY_ROOT)/api/*.json
clean-third-party:
rm -rf $(REPOSITORY_ROOT)/third_party/
clean: clean-images clean-binaries clean-build clean-install-yaml clean-stress-test-tools clean-secrets clean-terraform clean-third-party clean-protos clean-swagger-docs
clean: clean-images clean-build clean-install-yaml clean-secrets clean-terraform clean-third-party clean-protos clean-swagger-docs
proxy-frontend: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "Frontend Health: http://localhost:$(FRONTEND_PORT)/healthz"
@echo "Frontend RPC: http://localhost:$(FRONTEND_PORT)/debug/rpcz"
@echo "Frontend Trace: http://localhost:$(FRONTEND_PORT)/debug/tracez"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=frontend,release=$(OPEN_MATCH_RELEASE_NAME)" --output jsonpath='{.items[0].metadata.name}') $(FRONTEND_PORT):51504 $(PORT_FORWARD_ADDRESS_FLAG)
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=frontend,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(FRONTEND_PORT):51504 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-backend: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "Backend Health: http://localhost:$(BACKEND_PORT)/healthz"
@echo "Backend RPC: http://localhost:$(BACKEND_PORT)/debug/rpcz"
@echo "Backend Trace: http://localhost:$(BACKEND_PORT)/debug/tracez"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=backend,release=$(OPEN_MATCH_RELEASE_NAME)" --output jsonpath='{.items[0].metadata.name}') $(BACKEND_PORT):51505 $(PORT_FORWARD_ADDRESS_FLAG)
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=backend,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(BACKEND_PORT):51505 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-mmlogic: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "MmLogic Health: http://localhost:$(MMLOGIC_PORT)/healthz"
@echo "MmLogic RPC: http://localhost:$(MMLOGIC_PORT)/debug/rpcz"
@echo "MmLogic Trace: http://localhost:$(MMLOGIC_PORT)/debug/tracez"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=mmlogic,release=$(OPEN_MATCH_RELEASE_NAME)" --output jsonpath='{.items[0].metadata.name}') $(MMLOGIC_PORT):51503 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-query: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "QueryService Health: http://localhost:$(QUERY_PORT)/healthz"
@echo "QueryService RPC: http://localhost:$(QUERY_PORT)/debug/rpcz"
@echo "QueryService Trace: http://localhost:$(QUERY_PORT)/debug/tracez"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=query,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(QUERY_PORT):51503 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-synchronizer: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "Synchronizer Health: http://localhost:$(SYNCHRONIZER_PORT)/healthz"
@echo "Synchronizer RPC: http://localhost:$(SYNCHRONIZER_PORT)/debug/rpcz"
@echo "Synchronizer Trace: http://localhost:$(SYNCHRONIZER_PORT)/debug/tracez"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=synchronizer,release=$(OPEN_MATCH_RELEASE_NAME)" --output jsonpath='{.items[0].metadata.name}') $(SYNCHRONIZER_PORT):51506 $(PORT_FORWARD_ADDRESS_FLAG)
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=synchronizer,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(SYNCHRONIZER_PORT):51506 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-jaeger: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "Jaeger Query Frontend: http://localhost:16686"
@ -978,29 +899,25 @@ proxy-jaeger: build/toolchain/bin/kubectl$(EXE_EXTENSION)
proxy-grafana: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "User: admin"
@echo "Password: openmatch"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=grafana,release=$(OPEN_MATCH_RELEASE_NAME)" --output jsonpath='{.items[0].metadata.name}') $(GRAFANA_PORT):3000 $(PORT_FORWARD_ADDRESS_FLAG)
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=grafana,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(GRAFANA_PORT):3000 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-prometheus: build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=prometheus,component=server,release=$(OPEN_MATCH_RELEASE_NAME)" --output jsonpath='{.items[0].metadata.name}') $(PROMETHEUS_PORT):9090 $(PORT_FORWARD_ADDRESS_FLAG)
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=prometheus,component=server,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(PROMETHEUS_PORT):9090 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-dashboard: build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KUBECTL) port-forward --namespace kube-system $(shell $(KUBECTL) get pod --namespace kube-system --selector="app=kubernetes-dashboard" --output jsonpath='{.items[0].metadata.name}') $(DASHBOARD_PORT):9092 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-ui: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "SwaggerUI Health: http://localhost:$(SWAGGERUI_PORT)/"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=swaggerui,release=$(OPEN_MATCH_RELEASE_NAME)" --output jsonpath='{.items[0].metadata.name}') $(SWAGGERUI_PORT):51500 $(PORT_FORWARD_ADDRESS_FLAG)
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match,component=swaggerui,release=$(OPEN_MATCH_HELM_NAME)" --output jsonpath='{.items[0].metadata.name}') $(SWAGGERUI_PORT):51500 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-demo: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "View Demo: http://localhost:$(DEMO_PORT)"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match-demo,component=demo,release=$(OPEN_MATCH_RELEASE_NAME)" --output jsonpath='{.items[0].metadata.name}') $(DEMO_PORT):51507 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-locust: build/toolchain/bin/kubectl$(EXE_EXTENSION)
@echo "Locust UI: http://localhost:$(LOCUST_PORT)"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=open-match-test,component=locust-master,release=$(OPEN_MATCH_RELEASE_NAME)" --output jsonpath='{.items[0].metadata.name}') $(LOCUST_PORT):8089 $(PORT_FORWARD_ADDRESS_FLAG)
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE)-demo $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE)-demo --selector="app=open-match-demo,component=demo" --output jsonpath='{.items[0].metadata.name}') $(DEMO_PORT):51507 $(PORT_FORWARD_ADDRESS_FLAG)
# Run `make proxy` instead to run everything at the same time.
# If you run this directly it will just run each proxy sequentially.
proxy-all: proxy-frontend proxy-backend proxy-mmlogic proxy-grafana proxy-prometheus proxy-synchronizer proxy-ui proxy-dashboard proxy-demo proxy-jaeger
proxy-all: proxy-frontend proxy-backend proxy-query proxy-grafana proxy-prometheus proxy-jaeger proxy-synchronizer proxy-ui proxy-dashboard proxy-demo
proxy:
# This is an exception case where we'll call recursive make.
@ -1010,27 +927,24 @@ proxy:
update-deps:
$(GO) mod tidy
build-csharp: build/toolchain/dotnet/ csharp/OpenMatch/Annotations.cs csharp/OpenMatch/Openapiv2.cs
(cd $(REPOSITORY_ROOT)/csharp/OpenMatch && $(DOTNET) build -o .)
third_party/: third_party/google/api third_party/protoc-gen-swagger/options third_party/swaggerui/
third_party/google/api:
mkdir -p $(TOOLCHAIN_DIR)/googleapis-temp/
mkdir -p $(REPOSITORY_ROOT)/third_party/google/api
mkdir -p $(REPOSITORY_ROOT)/third_party/google/rpc
curl -o $(TOOLCHAIN_DIR)/googleapis-temp/googleapis.zip -L https://github.com/googleapis/googleapis/archive/master.zip
curl -o $(TOOLCHAIN_DIR)/googleapis-temp/googleapis.zip -L https://github.com/googleapis/googleapis/archive/$(GOOGLE_APIS_VERSION).zip
(cd $(TOOLCHAIN_DIR)/googleapis-temp/; unzip -q -o googleapis.zip)
cp -f $(TOOLCHAIN_DIR)/googleapis-temp/googleapis-master/google/api/*.proto $(REPOSITORY_ROOT)/third_party/google/api/
cp -f $(TOOLCHAIN_DIR)/googleapis-temp/googleapis-master/google/rpc/*.proto $(REPOSITORY_ROOT)/third_party/google/rpc/
cp -f $(TOOLCHAIN_DIR)/googleapis-temp/googleapis-$(GOOGLE_APIS_VERSION)/google/api/*.proto $(REPOSITORY_ROOT)/third_party/google/api/
cp -f $(TOOLCHAIN_DIR)/googleapis-temp/googleapis-$(GOOGLE_APIS_VERSION)/google/rpc/*.proto $(REPOSITORY_ROOT)/third_party/google/rpc/
rm -rf $(TOOLCHAIN_DIR)/googleapis-temp
third_party/protoc-gen-swagger/options:
mkdir -p $(TOOLCHAIN_DIR)/grpc-gateway-temp/
mkdir -p $(REPOSITORY_ROOT)/third_party/protoc-gen-swagger/options
curl -o $(TOOLCHAIN_DIR)/grpc-gateway-temp/grpc-gateway.zip -L https://github.com/grpc-ecosystem/grpc-gateway/archive/master.zip
curl -o $(TOOLCHAIN_DIR)/grpc-gateway-temp/grpc-gateway.zip -L https://github.com/grpc-ecosystem/grpc-gateway/archive/v$(GRPC_GATEWAY_VERSION).zip
(cd $(TOOLCHAIN_DIR)/grpc-gateway-temp/; unzip -q -o grpc-gateway.zip)
cp -f $(TOOLCHAIN_DIR)/grpc-gateway-temp/grpc-gateway-master/protoc-gen-swagger/options/*.proto $(REPOSITORY_ROOT)/third_party/protoc-gen-swagger/options/
cp -f $(TOOLCHAIN_DIR)/grpc-gateway-temp/grpc-gateway-$(GRPC_GATEWAY_VERSION)/protoc-gen-swagger/options/*.proto $(REPOSITORY_ROOT)/third_party/protoc-gen-swagger/options/
rm -rf $(TOOLCHAIN_DIR)/grpc-gateway-temp
third_party/swaggerui/:
@ -1051,12 +965,6 @@ sync-deps:
$(GO) clean -modcache
$(GO) mod download
sleep-10:
sleep 10
sleep-30:
sleep 30
# Prevents users from running with sudo.
# There's an exception for Google Cloud Build because it runs as root.
no-sudo:
@ -1069,4 +977,4 @@ ifeq ($(shell whoami),root)
endif
endif
.PHONY: docker gcloud update-deps sync-deps sleep-10 sleep-30 all build proxy-dashboard proxy-prometheus proxy-grafana clean clean-build clean-toolchain clean-archives clean-binaries clean-protos presubmit test ci-reap-namespaces md-test vet
.PHONY: docker gcloud update-deps sync-deps all build proxy-dashboard proxy-prometheus proxy-grafana clean clean-build clean-toolchain clean-binaries clean-protos presubmit test ci-reap-namespaces md-test vet

View File

@ -24,13 +24,9 @@ The [Open Match Development guide](docs/development.md) has detailed instruction
on getting the source code, making changes, testing and submitting a pull request
to Open Match.
## Disclaimer
This software is currently alpha, and subject to change.
## Support
* [Slack Channel](https://open-match.slack.com/) ([Signup](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))
* [Slack Channel](https://open-match.slack.com/) ([Signup](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLTM5ZWQxNjc1YWI3MzJmN2RiMWJmYWI0ZjFiNzNkZmNkMWQ3YWU5OGVkNzA5Yzc4OGVkOGU5MTc0OTA5ZTA5NDU))
* [File an Issue](https://github.com/googleforgames/open-match/issues/new)
* [Mailing list](https://groups.google.com/forum/#!forum/open-match-discuss)

View File

@ -67,11 +67,11 @@ message FunctionConfig {
}
message FetchMatchesRequest {
// FunctionConfig specifies a MMF address and client type for Backend to establish connections with the MMF
// A configuration for the MatchFunction server of this FetchMatches call.
FunctionConfig config = 1;
// MatchProfiles that will be sent to thhe MMF specified in the FunctionConfig.
repeated MatchProfile profiles = 2;
// A MatchProfile that will be sent to the MatchFunction server of this FetchMatches call.
MatchProfile profile = 2;
}
message FetchMatchesResponse {
@ -80,7 +80,20 @@ message FetchMatchesResponse {
Match match = 1;
}
message AssignTicketsRequest {
message ReleaseTicketsRequest{
// TicketIds is a list of strings representing Open Match generated Ids to be re-enabled for MMF querying
// because they are no longer awaiting assignment from a previous match result
repeated string ticket_ids = 1;
}
message ReleaseTicketsResponse {}
message ReleaseAllTicketsRequest{}
message ReleaseAllTicketsResponse {}
// AssignmentGroup contains an Assignment and the Tickets to which it should be applied.
message AssignmentGroup{
// TicketIds is a list of strings representing Open Match generated Ids which apply to an Assignment.
repeated string ticket_ids = 1;
@ -88,17 +101,37 @@ message AssignTicketsRequest {
Assignment assignment = 2;
}
message AssignTicketsResponse {}
// AssignmentFailure contains the id of the Ticket that failed the Assignment and the failure status.
message AssignmentFailure {
enum Cause {
UNKNOWN = 0;
TICKET_NOT_FOUND = 1;
}
// The Backent service implements APIs to generate matches and handle ticket assignments.
service Backend {
// FetchMatches triggers a MatchFunction with the specified MatchProfiles, while each MatchProfile
// returns a set of match proposals. FetchMatches method streams the results back to the caller.
// FetchMatches immediately returns an error if it encounters any execution failures.
// - If the synchronizer is enabled, FetchMatch will then call the synchronizer to deduplicate proposals with overlapped tickets.
string ticket_id = 1;
Cause cause = 2;
}
message AssignTicketsRequest {
// Assignments is a list of assignment groups that contain assignment and the Tickets to which they should be applied.
repeated AssignmentGroup assignments = 1;
}
message AssignTicketsResponse {
// Failures is a list of all the Tickets that failed assignment along with the cause of failure.
repeated AssignmentFailure failures = 1;
}
// The BackendService implements APIs to generate matches and handle ticket assignments.
service BackendService {
// FetchMatches triggers a MatchFunction with the specified MatchProfile and
// returns a set of matches generated by the Match Making Function, and
// accepted by the evaluator.
// Tickets in matches returned by FetchMatches are moved from active to
// pending, and will not be returned by query.
rpc FetchMatches(FetchMatchesRequest) returns (stream FetchMatchesResponse) {
option (google.api.http) = {
post: "/v1/backend/matches:fetch"
post: "/v1/backendservice/matches:fetch"
body: "*"
};
}
@ -106,7 +139,32 @@ service Backend {
// AssignTickets overwrites the Assignment field of the input TicketIds.
rpc AssignTickets(AssignTicketsRequest) returns (AssignTicketsResponse) {
option (google.api.http) = {
post: "/v1/backend/tickets:assign"
post: "/v1/backendservice/tickets:assign"
body: "*"
};
}
// ReleaseTickets moves tickets from the pending state, to the active state.
// This enables them to be returned by query, and find different matches.
//
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc ReleaseTickets(ReleaseTicketsRequest) returns (ReleaseTicketsResponse) {
option (google.api.http) = {
post: "/v1/backendservice/tickets:release"
body: "*"
};
}
// ReleaseAllTickets moves all tickets from the pending state, to the active
// state. This enables them to be returned by query, and find different
// matches.
//
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc ReleaseAllTickets(ReleaseAllTicketsRequest) returns (ReleaseAllTicketsResponse) {
option (google.api.http) = {
post: "/v1/backendservice/tickets:releaseall"
body: "*"
};
}
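For orientation, here is a minimal Go sketch of how a director might drive the renamed BackendService after this change: FetchMatches now carries a single MatchProfile instead of a repeated profiles field, AssignTickets takes AssignmentGroups and reports per-ticket failures, and the beta ReleaseTickets call returns tickets to the active (queryable) state. This is not code from this diff; the endpoint addresses, profile, pool, and connection values are illustrative assumptions.

package main

import (
	"context"
	"io"
	"log"

	"google.golang.org/grpc"

	"open-match.dev/open-match/pkg/pb"
)

func main() {
	// Assumed address of the Open Match backend gRPC endpoint.
	conn, err := grpc.Dial("open-match-backend.open-match.svc.cluster.local:50505", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("failed to dial backend: %v", err)
	}
	defer conn.Close()
	be := pb.NewBackendServiceClient(conn)

	// FetchMatches now takes exactly one MatchProfile per call.
	stream, err := be.FetchMatches(context.Background(), &pb.FetchMatchesRequest{
		Config: &pb.FunctionConfig{
			Host: "om-function.open-match.svc.cluster.local", // assumed MMF address
			Port: 50502,
			Type: pb.FunctionConfig_GRPC,
		},
		Profile: &pb.MatchProfile{
			Name:  "example-profile",
			Pools: []*pb.Pool{{Name: "everyone"}},
		},
	})
	if err != nil {
		log.Fatalf("FetchMatches failed: %v", err)
	}

	// Collect the ticket ids from the streamed matches.
	var ticketIDs []string
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("error receiving match: %v", err)
		}
		for _, t := range resp.GetMatch().GetTickets() {
			ticketIDs = append(ticketIDs, t.GetId())
		}
	}

	// AssignTickets now returns per-ticket failures instead of an empty response.
	assignResp, err := be.AssignTickets(context.Background(), &pb.AssignTicketsRequest{
		Assignments: []*pb.AssignmentGroup{{
			TicketIds:  ticketIDs,
			Assignment: &pb.Assignment{Connection: "10.0.0.1:7777"},
		}},
	})
	if err != nil {
		log.Fatalf("AssignTickets failed: %v", err)
	}
	var unassigned []string
	for _, f := range assignResp.GetFailures() {
		if f.GetCause() == pb.AssignmentFailure_TICKET_NOT_FOUND {
			log.Printf("ticket %s no longer exists", f.GetTicketId())
			continue
		}
		unassigned = append(unassigned, f.GetTicketId())
	}

	// ReleaseTickets (beta) moves tickets that were not assigned back to the
	// active state so the query service can consider them for new matches.
	if len(unassigned) > 0 {
		if _, err := be.ReleaseTickets(context.Background(), &pb.ReleaseTicketsRequest{
			TicketIds: unassigned,
		}); err != nil {
			log.Fatalf("ReleaseTickets failed: %v", err)
		}
	}
}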

View File

@ -24,9 +24,9 @@
"application/json"
],
"paths": {
"/v1/backend/matches:fetch": {
"/v1/backendservice/matches:fetch": {
"post": {
"summary": "FetchMatches triggers a MatchFunction with the specified MatchProfiles, while each MatchProfile \nreturns a set of match proposals. FetchMatches method streams the results back to the caller.\nFetchMatches immediately returns an error if it encounters any execution failures.\n - If the synchronizer is enabled, FetchMatch will then call the synchronizer to deduplicate proposals with overlapped tickets.",
"summary": "FetchMatches triggers a MatchFunction with the specified MatchProfile and\nreturns a set of matches generated by the Match Making Function, and\naccepted by the evaluator.\nTickets in matches returned by FetchMatches are moved from active to\npending, and will not be returned by query.",
"operationId": "FetchMatches",
"responses": {
"200": {
@ -38,6 +38,7 @@
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
@ -53,11 +54,11 @@
}
],
"tags": [
"Backend"
"BackendService"
]
}
},
"/v1/backend/tickets:assign": {
"/v1/backendservice/tickets:assign": {
"post": {
"summary": "AssignTickets overwrites the Assignment field of the input TicketIds.",
"operationId": "AssignTickets",
@ -71,6 +72,7 @@
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
@ -86,13 +88,144 @@
}
],
"tags": [
"Backend"
"BackendService"
]
}
},
"/v1/backendservice/tickets:release": {
"post": {
"summary": "ReleaseTickets moves tickets from the pending state, to the active state.\nThis enables them to be returned by query, and find different matches.",
"description": "BETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "ReleaseTickets",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchReleaseTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchReleaseTicketsRequest"
}
}
],
"tags": [
"BackendService"
]
}
},
"/v1/backendservice/tickets:releaseall": {
"post": {
"summary": "ReleaseAllTickets moves all tickets from the pending state, to the active\nstate. This enables them to be returned by query, and find different\nmatches.",
"description": "BETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "ReleaseAllTickets",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchReleaseAllTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchReleaseAllTicketsRequest"
}
}
],
"tags": [
"BackendService"
]
}
}
},
"definitions": {
"AssignmentFailureCause": {
"type": "string",
"enum": [
"UNKNOWN",
"TICKET_NOT_FOUND"
],
"default": "UNKNOWN"
},
"openmatchAssignTicketsRequest": {
"type": "object",
"properties": {
"assignments": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchAssignmentGroup"
},
"description": "Assignments is a list of assignment groups that contain assignment and the Tickets to which they should be applied."
}
}
},
"openmatchAssignTicketsResponse": {
"type": "object",
"properties": {
"failures": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchAssignmentFailure"
},
"description": "Failures is a list of all the Tickets that failed assignment along with the cause of failure."
}
}
},
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchAssignmentFailure": {
"type": "object",
"properties": {
"ticket_id": {
"type": "string"
},
"cause": {
"$ref": "#/definitions/AssignmentFailureCause"
}
},
"description": "AssignmentFailure contains the id of the Ticket that failed the Assignment and the failure status."
},
"openmatchAssignmentGroup": {
"type": "object",
"properties": {
"ticket_ids": {
@ -106,31 +239,8 @@
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment specifies game connection related information to be associated with the TicketIds."
}
}
},
"openmatchAssignTicketsResponse": {
"type": "object"
},
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "AssignmentGroup contains an Assignment and the Tickets to which it should be applied."
},
"openmatchDoubleRangeFilter": {
"type": "object",
@ -142,12 +252,12 @@
"max": {
"type": "number",
"format": "double",
"description": "Maximum value. Defaults to positive infinity (any value above minv)."
"description": "Maximum value."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value. Defaults to 0."
"description": "Minimum value."
}
},
"title": "Filters numerical values to only those within a range.\n double_arg: \"foo\"\n max: 10\n min: 5\nmatches:\n {\"foo\": 5}\n {\"foo\": 7.5}\n {\"foo\": 10}\ndoes not match:\n {\"foo\": 4}\n {\"foo\": 10.01}\n {\"foo\": \"7.5\"}\n {}"
@ -157,14 +267,11 @@
"properties": {
"config": {
"$ref": "#/definitions/openmatchFunctionConfig",
"title": "FunctionConfig specifies a MMF address and client type for Backend to establish connections with the MMF"
"description": "A configuration for the MatchFunction server of this FetchMatches call."
},
"profiles": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchMatchProfile"
},
"description": "MatchProfiles that will be sent to thhe MMF specified in the FunctionConfig."
"profile": {
"$ref": "#/definitions/openmatchMatchProfile",
"description": "A MatchProfile that will be sent to the MatchFunction server of this FetchMatches call."
}
}
},
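Per the hunk above, FetchMatchesRequest now takes a single profile instead of the repeated profiles field, so each FetchMatches call evaluates exactly one MatchProfile. A hedged Go sketch of the new request shape (generated pb bindings assumed; the MMF host, port and pool are placeholders):

package director

import (
	"context"
	"io"
	"log"

	"open-match.dev/open-match/pkg/pb"
)

func fetchMatches(ctx context.Context, be pb.BackendServiceClient) error {
	stream, err := be.FetchMatches(ctx, &pb.FetchMatchesRequest{
		// Where the backend should dial the MatchFunction server for this call.
		Config: &pb.FunctionConfig{
			Host: "om-function.mmf.svc.cluster.local", // placeholder MMF address
			Port: 50502,
			Type: pb.FunctionConfig_GRPC,
		},
		// Exactly one profile per call now.
		Profile: &pb.MatchProfile{
			Name:  "1v1",
			Pools: []*pb.Pool{{Name: "everyone"}},
		},
	})
	if err != nil {
		return err
	}
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		log.Printf("match %s contains %d tickets", resp.Match.MatchId, len(resp.Match.Tickets))
	}
}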
@ -223,13 +330,6 @@
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchRoster"
},
"title": "Set of Rosters that comprise this Match"
},
"extensions": {
"type": "object",
"additionalProperties": {
@ -238,7 +338,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least \none ticket to be considered as valid."
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least\none ticket to be considered as valid."
},
"openmatchMatchProfile": {
"type": "object",
@ -252,14 +352,7 @@
"items": {
"$ref": "#/definitions/openmatchPool"
},
"description": "Set of pools to be queried when generating a match for this MatchProfile.\nThe pool names can be used in empty Rosters to specify composition of a\nmatch."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchRoster"
},
"description": "Set of Rosters for this match request. Could be empty Rosters used to\nindicate the composition of the generated Match or they could be partially\npre-populated Ticket list to be used in scenarios such as backfill / join\nin progress."
"description": "Set of pools to be queried when generating a match for this MatchProfile."
},
"extensions": {
"type": "object",
@ -283,7 +376,7 @@
"items": {
"$ref": "#/definitions/openmatchDoubleRangeFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected players must\nmatch every Filter."
"description": "Set of Filters indicating the filtering criteria. Selected tickets must\nmatch every Filter."
},
"string_equals_filters": {
"type": "array",
@ -296,25 +389,40 @@
"items": {
"$ref": "#/definitions/openmatchTagPresentFilter"
}
},
"created_before": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created before the specified time are selected."
},
"created_after": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created after the specified time are selected."
}
}
},
"description": "Pool specfies a set of criteria that are used to select a subset of Tickets\nthat meet all the criteria."
},
"openmatchRoster": {
"openmatchReleaseAllTicketsRequest": {
"type": "object"
},
"openmatchReleaseAllTicketsResponse": {
"type": "object"
},
"openmatchReleaseTicketsRequest": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
},
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
"title": "TicketIds is a list of string representing Open Match generated Ids to be re-enabled for MMF querying\nbecause they are no longer awaiting assignment from a previous match result"
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that a Tickets\nassociated with a Match can be labelled to belong to a team, sub-team etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress etc."
}
},
"openmatchReleaseTicketsResponse": {
"type": "object"
},
"openmatchSearchFields": {
"type": "object",
@ -375,7 +483,7 @@
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket. \nOpen Match does not require or inspect any fields on Assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
@ -387,9 +495,14 @@
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof SearchFields. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
@ -406,29 +519,6 @@
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
},
"runtimeStreamError": {
"type": "object",
"properties": {



@ -61,8 +61,11 @@ message EvaluateRequest {
}
message EvaluateResponse {
// A Match shortlisted by the evaluator representing one of the final results.
Match match = 1;
// A Match ID representing a shortlisted match returned by the evaluator as the final result.
string match_id = 2;
// Deprecated fields
reserved 1;
}
// The Evaluator service implements APIs used to evaluate and shortlist matches proposed by MMFs.
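With field 1 reserved, an evaluator now answers with the match_id of each proposal it shortlists rather than echoing the whole Match. A rough Go sketch of a pass-through evaluator under that contract (generated pb bindings assumed; a real evaluator would first collate proposals and reject ones that share tickets):

package evaluator

import (
	"io"

	"open-match.dev/open-match/pkg/pb"
)

// approveAll shortlists every proposal it receives.
type approveAll struct{}

func (approveAll) Evaluate(stream pb.Evaluator_EvaluateServer) error {
	for {
		req, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		// Only the shortlisted MatchId goes back on the stream.
		if err := stream.Send(&pb.EvaluateResponse{MatchId: req.Match.MatchId}); err != nil {
			return err
		}
	}
}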


@ -38,6 +38,7 @@
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
@ -67,10 +68,6 @@
"type": "string",
"description": "Connection information for this Assignment."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
},
"extensions": {
"type": "object",
"additionalProperties": {
@ -79,7 +76,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchEvaluateRequest": {
"type": "object",
@ -93,9 +90,9 @@
"openmatchEvaluateResponse": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/openmatchMatch",
"description": "A Match shortlisted by the evaluator representing one of the final results."
"match_id": {
"type": "string",
"description": "A Match ID representing a shortlisted match returned by the evaluator as the final result."
}
}
},
@ -121,13 +118,6 @@
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchRoster"
},
"title": "Set of Rosters that comprise this Match"
},
"extensions": {
"type": "object",
"additionalProperties": {
@ -136,24 +126,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least \none ticket to be considered as valid."
},
"openmatchRoster": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
},
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that a Tickets\nassociated with a Match can be labelled to belong to a team, sub-team etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress etc."
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least\none ticket to be considered as valid."
},
"openmatchSearchFields": {
"type": "object",
@ -192,7 +165,7 @@
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket. \nOpen Match does not require or inspect any fields on Assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
@ -204,9 +177,14 @@
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof SearchFields. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
@ -223,29 +201,6 @@
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
},
"runtimeStreamError": {
"type": "object",
"properties": {


@ -20,6 +20,7 @@ option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
import "google/protobuf/empty.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
@ -60,69 +61,60 @@ message CreateTicketRequest {
Ticket ticket = 1;
}
message CreateTicketResponse {
// A Ticket object with TicketId generated.
Ticket ticket = 1;
}
message DeleteTicketRequest {
// A TicketId of a generated Ticket to be deleted.
string ticket_id = 1;
}
message DeleteTicketResponse {}
message GetTicketRequest {
// A TicketId of a generated Ticket.
string ticket_id = 1;
}
message GetAssignmentsRequest {
message WatchAssignmentsRequest {
// A TicketId of a generated Ticket to get updates on.
string ticket_id = 1;
}
message GetAssignmentsResponse {
message WatchAssignmentsResponse {
// An updated Assignment of the requested Ticket.
Assignment assignment = 1;
}
// The Frontend service implements APIs to manage and query status of a Tickets.
service Frontend {
// The FrontendService implements APIs to manage and query status of a Tickets.
service FrontendService {
// CreateTicket assigns an unique TicketId to the input Ticket and record it in state storage.
// A ticket is considered as ready for matchmaking once it is created.
// - If a TicketId exists in a Ticket request, an auto-generated TicketId will override this field.
// - If SearchFields exist in a Ticket, CreateTicket will also index these fields such that one can query the ticket with mmlogic.QueryTickets function.
rpc CreateTicket(CreateTicketRequest) returns (CreateTicketResponse) {
// - If SearchFields exist in a Ticket, CreateTicket will also index these fields such that one can query the ticket with query.QueryTickets function.
rpc CreateTicket(CreateTicketRequest) returns (Ticket) {
option (google.api.http) = {
post: "/v1/frontend/tickets"
post: "/v1/frontendservice/tickets"
body: "*"
};
}
// DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.
// The client must delete the Ticket when finished matchmaking with it.
// - If SearchFields exist in a Ticket, DeleteTicket will deindex the fields lazily.
// Users may still be able to assign/get a ticket after calling DeleteTicket on it.
rpc DeleteTicket(DeleteTicketRequest) returns (DeleteTicketResponse) {
// The client should delete the Ticket when finished matchmaking with it.
rpc DeleteTicket(DeleteTicketRequest) returns (google.protobuf.Empty) {
option (google.api.http) = {
delete: "/v1/frontend/tickets/{ticket_id}"
delete: "/v1/frontendservice/tickets/{ticket_id}"
};
}
// GetTicket get the Ticket associated with the specified TicketId.
rpc GetTicket(GetTicketRequest) returns (Ticket) {
option (google.api.http) = {
get: "/v1/frontend/tickets/{ticket_id}"
get: "/v1/frontendservice/tickets/{ticket_id}"
};
}
// GetAssignments stream back Assignment of the specified TicketId if it is updated.
// WatchAssignments stream back Assignment of the specified TicketId if it is updated.
// - If the Assignment is not updated, GetAssignment will retry using the configured backoff strategy.
rpc GetAssignments(GetAssignmentsRequest)
returns (stream GetAssignmentsResponse) {
rpc WatchAssignments(WatchAssignmentsRequest)
returns (stream WatchAssignmentsResponse) {
option (google.api.http) = {
get: "/v1/frontend/tickets/{ticket_id}/assignments"
get: "/v1/frontendservice/tickets/{ticket_id}/assignments"
};
}
}
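Put together, the renamed FrontendService gives the usual game-client loop: CreateTicket now returns the stored Ticket directly, WatchAssignments (formerly GetAssignments) streams Assignment updates, and DeleteTicket returns google.protobuf.Empty. A hedged Go sketch of that flow, assuming the generated bindings in open-match.dev/open-match/pkg/pb and an already-dialed connection (all values are placeholders):

package client

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"open-match.dev/open-match/pkg/pb"
)

func matchmake(ctx context.Context, conn *grpc.ClientConn) error {
	fe := pb.NewFrontendServiceClient(conn)

	// CreateTicket returns the Ticket itself, with Id and CreateTime populated by Open Match.
	t, err := fe.CreateTicket(ctx, &pb.CreateTicketRequest{Ticket: &pb.Ticket{}})
	if err != nil {
		return err
	}
	log.Printf("ticket %s created at %v", t.Id, t.CreateTime)

	// WatchAssignments streams updates until an Assignment is set on the Ticket.
	stream, err := fe.WatchAssignments(ctx, &pb.WatchAssignmentsRequest{TicketId: t.Id})
	if err != nil {
		return err
	}
	resp, err := stream.Recv()
	if err != nil {
		return err
	}
	log.Printf("connect to %s", resp.Assignment.Connection)

	// DeleteTicket now returns google.protobuf.Empty.
	_, err = fe.DeleteTicket(ctx, &pb.DeleteTicketRequest{TicketId: t.Id})
	return err
}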


@ -24,20 +24,21 @@
"application/json"
],
"paths": {
"/v1/frontend/tickets": {
"/v1/frontendservice/tickets": {
"post": {
"summary": "CreateTicket assigns an unique TicketId to the input Ticket and record it in state storage.\nA ticket is considered as ready for matchmaking once it is created.\n - If a TicketId exists in a Ticket request, an auto-generated TicketId will override this field.\n - If SearchFields exist in a Ticket, CreateTicket will also index these fields such that one can query the ticket with mmlogic.QueryTickets function.",
"summary": "CreateTicket assigns an unique TicketId to the input Ticket and record it in state storage.\nA ticket is considered as ready for matchmaking once it is created.\n - If a TicketId exists in a Ticket request, an auto-generated TicketId will override this field.\n - If SearchFields exist in a Ticket, CreateTicket will also index these fields such that one can query the ticket with query.QueryTickets function.",
"operationId": "CreateTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchCreateTicketResponse"
"$ref": "#/definitions/openmatchTicket"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
@ -53,11 +54,11 @@
}
],
"tags": [
"Frontend"
"FrontendService"
]
}
},
"/v1/frontend/tickets/{ticket_id}": {
"/v1/frontendservice/tickets/{ticket_id}": {
"get": {
"summary": "GetTicket get the Ticket associated with the specified TicketId.",
"operationId": "GetTicket",
@ -71,6 +72,7 @@
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
@ -85,22 +87,23 @@
}
],
"tags": [
"Frontend"
"FrontendService"
]
},
"delete": {
"summary": "DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.\nThe client must delete the Ticket when finished matchmaking with it. \n - If SearchFields exist in a Ticket, DeleteTicket will deindex the fields lazily.\nUsers may still be able to assign/get a ticket after calling DeleteTicket on it.",
"summary": "DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.\nThe client should delete the Ticket when finished matchmaking with it.",
"operationId": "DeleteTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchDeleteTicketResponse"
"properties": {}
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
@ -115,24 +118,25 @@
}
],
"tags": [
"Frontend"
"FrontendService"
]
}
},
"/v1/frontend/tickets/{ticket_id}/assignments": {
"/v1/frontendservice/tickets/{ticket_id}/assignments": {
"get": {
"summary": "GetAssignments stream back Assignment of the specified TicketId if it is updated.\n - If the Assignment is not updated, GetAssignment will retry using the configured backoff strategy.",
"operationId": "GetAssignments",
"summary": "WatchAssignments stream back Assignment of the specified TicketId if it is updated.\n - If the Assignment is not updated, GetAssignment will retry using the configured backoff strategy.",
"operationId": "WatchAssignments",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/openmatchGetAssignmentsResponse"
"$ref": "#/x-stream-definitions/openmatchWatchAssignmentsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
@ -147,7 +151,7 @@
}
],
"tags": [
"Frontend"
"FrontendService"
]
}
}
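The same operations are served over HTTP by grpc-gateway at the /v1/frontendservice/... paths above. A minimal sketch of creating a ticket through the REST surface in Go; the base URL is a placeholder for wherever your install exposes the frontend's HTTP port, and the JSON body follows the openmatchCreateTicketRequest schema:

package restclient

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func createTicketREST(baseURL string) error {
	// POST /v1/frontendservice/tickets with a CreateTicketRequest body.
	body := []byte(`{"ticket": {"search_fields": {"tags": ["beta"]}}}`)
	resp, err := http.Post(baseURL+"/v1/frontendservice/tickets", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// On success the response is the created openmatchTicket (id, create_time, ...).
	out, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fmt.Println(string(out))
	return nil
}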
@ -160,10 +164,6 @@
"type": "string",
"description": "Connection information for this Assignment."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
},
"extensions": {
"type": "object",
"additionalProperties": {
@ -172,7 +172,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchCreateTicketRequest": {
"type": "object",
@ -183,27 +183,6 @@
}
}
},
"openmatchCreateTicketResponse": {
"type": "object",
"properties": {
"ticket": {
"$ref": "#/definitions/openmatchTicket",
"description": "A Ticket object with TicketId generated."
}
}
},
"openmatchDeleteTicketResponse": {
"type": "object"
},
"openmatchGetAssignmentsResponse": {
"type": "object",
"properties": {
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An updated Assignment of the requested Ticket."
}
}
},
"openmatchSearchFields": {
"type": "object",
"properties": {
@ -241,7 +220,7 @@
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket. \nOpen Match does not require or inspect any fields on Assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
@ -253,9 +232,23 @@
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof SearchFields. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"openmatchWatchAssignmentsResponse": {
"type": "object",
"properties": {
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An updated Assignment of the requested Ticket."
}
}
},
"protobufAny": {
"type": "object",
@ -272,29 +265,6 @@
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
},
"runtimeStreamError": {
"type": "object",
"properties": {
@ -322,17 +292,17 @@
}
},
"x-stream-definitions": {
"openmatchGetAssignmentsResponse": {
"openmatchWatchAssignmentsResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchGetAssignmentsResponse"
"$ref": "#/definitions/openmatchWatchAssignmentsResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of openmatchGetAssignmentsResponse"
"title": "Stream result of openmatchWatchAssignmentsResponse"
}
},
"externalDocs": {


@ -69,7 +69,7 @@ message RunResponse {
// The MatchFunction service implements APIs to run user-defined matchmaking logics.
service MatchFunction {
// DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.
// Run pulls Tickets that satisify Profile constraints from Mmlogic, runs matchmaking logics against them, then
// Run pulls Tickets that satisfy Profile constraints from QueryService, runs matchmaking logics against them, then
// constructs and streams back match candidates to the Backend service.
rpc Run(RunRequest) returns (stream RunResponse) {
option (google.api.http) = {


@ -26,7 +26,7 @@
"paths": {
"/v1/matchfunction:run": {
"post": {
"summary": "DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.\nRun pulls Tickets that satisify Profile constraints from Mmlogic, runs matchmaking logics against them, then\nconstructs and streams back match candidates to the Backend service.",
"summary": "DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.\nRun pulls Tickets that satisfy Profile constraints from QueryService, runs matchmaking logics against them, then\nconstructs and streams back match candidates to the Backend service.",
"operationId": "Run",
"responses": {
"200": {
@ -38,6 +38,7 @@
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
@ -66,10 +67,6 @@
"type": "string",
"description": "Connection information for this Assignment."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
},
"extensions": {
"type": "object",
"additionalProperties": {
@ -78,7 +75,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchDoubleRangeFilter": {
"type": "object",
@ -90,12 +87,12 @@
"max": {
"type": "number",
"format": "double",
"description": "Maximum value. Defaults to positive infinity (any value above minv)."
"description": "Maximum value."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value. Defaults to 0."
"description": "Minimum value."
}
},
"title": "Filters numerical values to only those within a range.\n double_arg: \"foo\"\n max: 10\n min: 5\nmatches:\n {\"foo\": 5}\n {\"foo\": 7.5}\n {\"foo\": 10}\ndoes not match:\n {\"foo\": 4}\n {\"foo\": 10.01}\n {\"foo\": \"7.5\"}\n {}"
@ -122,13 +119,6 @@
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchRoster"
},
"title": "Set of Rosters that comprise this Match"
},
"extensions": {
"type": "object",
"additionalProperties": {
@ -137,7 +127,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least \none ticket to be considered as valid."
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least\none ticket to be considered as valid."
},
"openmatchMatchProfile": {
"type": "object",
@ -151,14 +141,7 @@
"items": {
"$ref": "#/definitions/openmatchPool"
},
"description": "Set of pools to be queried when generating a match for this MatchProfile.\nThe pool names can be used in empty Rosters to specify composition of a\nmatch."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchRoster"
},
"description": "Set of Rosters for this match request. Could be empty Rosters used to\nindicate the composition of the generated Match or they could be partially\npre-populated Ticket list to be used in scenarios such as backfill / join\nin progress."
"description": "Set of pools to be queried when generating a match for this MatchProfile."
},
"extensions": {
"type": "object",
@ -182,7 +165,7 @@
"items": {
"$ref": "#/definitions/openmatchDoubleRangeFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected players must\nmatch every Filter."
"description": "Set of Filters indicating the filtering criteria. Selected tickets must\nmatch every Filter."
},
"string_equals_filters": {
"type": "array",
@ -195,25 +178,19 @@
"items": {
"$ref": "#/definitions/openmatchTagPresentFilter"
}
}
}
},
"openmatchRoster": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
},
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
"created_before": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created before the specified time are selected."
},
"created_after": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created after the specified time are selected."
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that a Tickets\nassociated with a Match can be labelled to belong to a team, sub-team etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress etc."
"description": "Pool specfies a set of criteria that are used to select a subset of Tickets\nthat meet all the criteria."
},
"openmatchRunRequest": {
"type": "object",
@ -292,7 +269,7 @@
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket. \nOpen Match does not require or inspect any fields on Assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
@ -304,9 +281,14 @@
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof SearchFields. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
@ -323,29 +305,6 @@
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
},
"runtimeStreamError": {
"type": "object",
"properties": {


@ -19,17 +19,20 @@ option csharp_namespace = "OpenMatch";
import "google/rpc/status.proto";
import "google/protobuf/any.proto";
import "google/protobuf/timestamp.proto";
// A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an
// individual 'Player' or a 'Group' of players. Open Match will not interpret
// what the Ticket represents but just treat it as a matchmaking unit with a set
// of SearchFields. Open Match stores the Ticket in state storage and enables an
// Assignment to be associated with this Ticket.
// A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent
// an individual 'Player', a 'Group' of players, or any other concepts unique to
// your use case. Open Match will not interpret what the Ticket represents but
// just treat it as a matchmaking unit with a set of SearchFields. Open Match
// stores the Ticket in state storage and enables an Assignment to be set on the
// Ticket.
message Ticket {
// Id represents an auto-generated Id issued by Open Match.
// Id represents an auto-generated Id issued by Open Match.
string id = 1;
// An Assignment represents a game server assignment associated with a Ticket.
// An Assignment represents a game server assignment associated with a Ticket,
// or whatever finalized matched state means for your use case.
// Open Match does not require or inspect any fields on Assignment.
Assignment assignment = 3;
@ -42,6 +45,10 @@ message Ticket {
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 5;
// Create time is the time the Ticket was created. It is populated by Open
// Match at the time of Ticket creation.
google.protobuf.Timestamp create_time = 6;
// Deprecated fields.
reserved 2;
}
@ -51,30 +58,27 @@ message Ticket {
message SearchFields {
// Float arguments. Filterable on ranges.
map<string, double> double_args = 1;
// String arguments. Filterable on equality.
map<string, string> string_args = 2;
// Filterable on presence or absence of given value.
repeated string tags = 3;
}
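SearchFields is the only part of a Ticket that Open Match indexes and filters on; anything else rides along in extensions, and id, create_time and assignment are populated by Open Match itself. A small Go sketch of a Ticket filled in for the three filter kinds (generated pb types assumed; the argument names are arbitrary examples):

package client

import "open-match.dev/open-match/pkg/pb"

func exampleTicket() *pb.Ticket {
	return &pb.Ticket{
		SearchFields: &pb.SearchFields{
			// double_args feed DoubleRangeFilters.
			DoubleArgs: map[string]float64{"mmr": 1250},
			// string_args feed StringEqualsFilters.
			StringArgs: map[string]string{"mode": "ranked"},
			// tags feed TagPresentFilters.
			Tags: []string{"beta"},
		},
	}
}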
// An Assignment represents a game server assignment associated with a Ticket. Open
// match does not require or inspect any fields on assignment.
// An Assignment represents a game server assignment associated with a Ticket.
// Open Match does not require or inspect any fields on assignment.
message Assignment {
// Connection information for this Assignment.
string connection = 1;
// Error when finding an Assignment for this Ticket.
google.rpc.Status error = 3;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 4;
// Deprecated fields.
reserved 2;
reserved 2, 3;
}
// Filters numerical values to only those within a range.
@ -94,10 +98,10 @@ message DoubleRangeFilter {
// Name of the ticket's search_fields.double_args this Filter operates on.
string double_arg = 1;
// Maximum value. Defaults to positive infinity (any value above minv).
// Maximum value.
double max = 2;
// Minimum value. Defaults to 0.
// Minimum value.
double min = 3;
}
@ -129,11 +133,13 @@ message TagPresentFilter {
string tag = 1;
}
// Pool specifies a set of criteria that are used to select a subset of Tickets
// that meet all the criteria.
message Pool {
// A developer-chosen human-readable name for this Pool.
string name = 1;
// Set of Filters indicating the filtering criteria. Selected players must
// Set of Filters indicating the filtering criteria. Selected tickets must
// match every Filter.
repeated DoubleRangeFilter double_range_filters = 2;
@ -141,22 +147,16 @@ message Pool {
repeated TagPresentFilter tag_present_filters = 5;
// If specified, only Tickets created before the specified time are selected.
google.protobuf.Timestamp created_before = 6;
// If specified, only Tickets created after the specified time are selected.
google.protobuf.Timestamp created_after = 7;
// Deprecated fields.
reserved 3;
}
// A Roster is a named collection of Ticket IDs. It exists so that Tickets
// associated with a Match can be labelled to belong to a team, sub-team etc. It
// can also be used to represent the current state of a Match in scenarios such
// as backfill, join-in-progress etc.
message Roster {
// A developer-chosen human-readable name for this Roster.
string name = 1;
// Tickets belonging to this Roster.
repeated string ticket_ids = 2;
}
// A MatchProfile is Open Match's representation of a Match specification. It is
// used to indicate the criteria for selecting players for a match. A
// MatchProfile is the input to the API to get matches and is passed to the
@ -167,29 +167,21 @@ message MatchProfile {
string name = 1;
// Set of pools to be queried when generating a match for this MatchProfile.
// The pool names can be used in empty Rosters to specify composition of a
// match.
repeated Pool pools = 3;
// Set of Rosters for this match request. Could be empty Rosters used to
// indicate the composition of the generated Match or they could be partially
// pre-populated Ticket list to be used in scenarios such as backfill / join
// in progress.
repeated Roster rosters = 4;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 5;
// Deprecated fields.
reserved 2;
reserved 2, 4;
}
// A Match is used to represent a completed match object. It can be generated by
// a MatchFunction as a proposal or can be returned by OpenMatch as a result in
// response to the FetchMatches call.
// When a match is returned by the FetchMatches call, it should contain at least
// When a match is returned by the FetchMatches call, it should contain at least
// one ticket to be considered as valid.
message Match {
// A Match ID that should be passed through the stack for tracing.
@ -204,14 +196,11 @@ message Match {
// Tickets belonging to this match.
repeated Ticket tickets = 4;
// Set of Rosters that comprise this Match
repeated Roster rosters = 5;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 7;
// Deprecated fields.
reserved 6;
reserved 5, 6;
}
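
For reference, the reshaped Ticket and Pool messages above can be built from Go roughly as follows. This is a minimal sketch, not code from this change set: it assumes the generated Go package at open-match.dev/open-match/pkg/pb and the google.golang.org/protobuf timestamppb helpers, with field names following the standard protoc-gen-go mapping.

package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/types/known/timestamppb"
	"open-match.dev/open-match/pkg/pb"
)

func main() {
	// A Ticket carrying SearchFields, per the updated messages.proto comments.
	ticket := &pb.Ticket{
		SearchFields: &pb.SearchFields{
			DoubleArgs: map[string]float64{"mmr": 1230.5},
			StringArgs: map[string]string{"region": "europe-west1"},
			Tags:       []string{"beta-tester"},
		},
	}

	// A Pool combining the existing filters with the new creation-time filters.
	pool := &pb.Pool{
		Name: "competitive-europe",
		DoubleRangeFilters: []*pb.DoubleRangeFilter{
			{DoubleArg: "mmr", Min: 1000, Max: 1500},
		},
		StringEqualsFilters: []*pb.StringEqualsFilter{
			{StringArg: "region", Value: "europe-west1"},
		},
		TagPresentFilters: []*pb.TagPresentFilter{
			{Tag: "beta-tester"},
		},
		// Only select Tickets created within the last ten minutes.
		CreatedAfter:  timestamppb.New(time.Now().Add(-10 * time.Minute)),
		CreatedBefore: timestamppb.Now(),
	}

	fmt.Println(ticket.GetId(), pool.GetName())
}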

View File

@ -56,24 +56,45 @@ option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
};
message QueryTicketsRequest {
// A Pool is consists of a set of Filters.
// The Pool representing the set of Filters to be queried.
Pool pool = 1;
}
message QueryTicketsResponse {
// Tickets is a list of Ticket representing one or more Tickets which meet all Filter criterias.
// Tickets that meet all the filtering criteria requested by the pool.
repeated Ticket tickets = 1;
}
// The MmLogic service implements helper APIs for Match Function to query Tickets from state storage.
service MmLogic {
message QueryTicketIdsRequest {
// The Pool representing the set of Filters to be queried.
Pool pool = 1;
}
message QueryTicketIdsResponse {
// TicketIDs that meet all the filtering criteria requested by the pool.
repeated string ids = 1;
}
// The QueryService service implements helper APIs for Match Function to query Tickets from state storage.
service QueryService {
// QueryTickets gets a list of Tickets that match all Filters of the input Pool.
// - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.
// QueryTickets pages the Tickets by `storage.pool.size` and stream back response.
// - storage.pool.size is default to 1000 if not set, and has a mininum of 10 and maximum of 10000
// QueryTickets pages the Tickets by `queryPageSize` and stream back responses.
// - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.
rpc QueryTickets(QueryTicketsRequest) returns (stream QueryTicketsResponse) {
option (google.api.http) = {
post: "/v1/mmlogic/tickets:query"
post: "/v1/queryservice/tickets:query"
body: "*"
};
}
// QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.
// - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.
// QueryTicketIds pages the TicketIDs by `queryPageSize` and stream back responses.
// - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.
rpc QueryTicketIds(QueryTicketIdsRequest) returns (stream QueryTicketIdsResponse) {
option (google.api.http) = {
post: "/v1/queryservice/ticketids:query"
body: "*"
};
}
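
A Match Function consuming the renamed QueryService drains these paged streams until EOF. Below is a hedged Go sketch, assuming the generated pb package and the default in-cluster address of the query service; adjust the host and port for your deployment.

package main

import (
	"context"
	"io"
	"log"

	"google.golang.org/grpc"
	"open-match.dev/open-match/pkg/pb"
)

// queryPool reads every page of the QueryTickets stream for one Pool.
// Each response carries up to queryPageSize Tickets.
func queryPool(ctx context.Context, qc pb.QueryServiceClient, pool *pb.Pool) ([]*pb.Ticket, error) {
	stream, err := qc.QueryTickets(ctx, &pb.QueryTicketsRequest{Pool: pool})
	if err != nil {
		return nil, err
	}
	var tickets []*pb.Ticket
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			return tickets, nil
		}
		if err != nil {
			return nil, err
		}
		tickets = append(tickets, resp.Tickets...)
	}
}

func main() {
	// Assumed in-cluster address of the query service gRPC endpoint.
	conn, err := grpc.Dial("open-match-query.open-match.svc.cluster.local:50503", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	tickets, err := queryPool(context.Background(), pb.NewQueryServiceClient(conn), &pb.Pool{Name: "everyone"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("query returned %d tickets", len(tickets))
}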

View File

@ -24,9 +24,43 @@
"application/json"
],
"paths": {
"/v1/mmlogic/tickets:query": {
"/v1/queryservice/ticketids:query": {
"post": {
"summary": "QueryTickets gets a list of Tickets that match all Filters of the input Pool.\n - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.\nQueryTickets pages the Tickets by `storage.pool.size` and stream back response.\n - storage.pool.size is default to 1000 if not set, and has a mininum of 10 and maximum of 10000",
"summary": "QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.\n - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.\nQueryTicketIds pages the TicketIDs by `queryPageSize` and stream back responses.\n - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.",
"operationId": "QueryTicketIds",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/openmatchQueryTicketIdsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchQueryTicketIdsRequest"
}
}
],
"tags": [
"QueryService"
]
}
},
"/v1/queryservice/tickets:query": {
"post": {
"summary": "QueryTickets gets a list of Tickets that match all Filters of the input Pool.\n - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.\nQueryTickets pages the Tickets by `queryPageSize` and stream back responses.\n - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.",
"operationId": "QueryTickets",
"responses": {
"200": {
@ -38,6 +72,7 @@
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
}
@ -53,7 +88,7 @@
}
],
"tags": [
"MmLogic"
"QueryService"
]
}
}
@ -66,10 +101,6 @@
"type": "string",
"description": "Connection information for this Assignment."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
},
"extensions": {
"type": "object",
"additionalProperties": {
@ -78,7 +109,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchDoubleRangeFilter": {
"type": "object",
@ -90,12 +121,12 @@
"max": {
"type": "number",
"format": "double",
"description": "Maximum value. Defaults to positive infinity (any value above minv)."
"description": "Maximum value."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value. Defaults to 0."
"description": "Minimum value."
}
},
"title": "Filters numerical values to only those within a range.\n double_arg: \"foo\"\n max: 10\n min: 5\nmatches:\n {\"foo\": 5}\n {\"foo\": 7.5}\n {\"foo\": 10}\ndoes not match:\n {\"foo\": 4}\n {\"foo\": 10.01}\n {\"foo\": \"7.5\"}\n {}"
@ -112,7 +143,7 @@
"items": {
"$ref": "#/definitions/openmatchDoubleRangeFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected players must\nmatch every Filter."
"description": "Set of Filters indicating the filtering criteria. Selected tickets must\nmatch every Filter."
},
"string_equals_filters": {
"type": "array",
@ -125,6 +156,38 @@
"items": {
"$ref": "#/definitions/openmatchTagPresentFilter"
}
},
"created_before": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created before the specified time are selected."
},
"created_after": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created after the specified time are selected."
}
},
"description": "Pool specfies a set of criteria that are used to select a subset of Tickets\nthat meet all the criteria."
},
"openmatchQueryTicketIdsRequest": {
"type": "object",
"properties": {
"pool": {
"$ref": "#/definitions/openmatchPool",
"description": "The Pool representing the set of Filters to be queried."
}
}
},
"openmatchQueryTicketIdsResponse": {
"type": "object",
"properties": {
"ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "TicketIDs that meet all the filtering criteria requested by the pool."
}
}
},
@ -133,7 +196,7 @@
"properties": {
"pool": {
"$ref": "#/definitions/openmatchPool",
"description": "A Pool is consists of a set of Filters."
"description": "The Pool representing the set of Filters to be queried."
}
}
},
@ -145,7 +208,7 @@
"items": {
"$ref": "#/definitions/openmatchTicket"
},
"description": "Tickets is a list of Ticket representing one or more Tickets which meet all Filter criterias."
"description": "Tickets that meet all the filtering criteria requested by the pool."
}
}
},
@ -208,7 +271,7 @@
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket. \nOpen Match does not require or inspect any fields on Assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
@ -220,9 +283,14 @@
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof SearchFields. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
@ -239,29 +307,6 @@
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
},
"runtimeStreamError": {
"type": "object",
"properties": {
@ -289,6 +334,18 @@
}
},
"x-stream-definitions": {
"openmatchQueryTicketIdsResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchQueryTicketIdsResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of openmatchQueryTicketIdsResponse"
},
"openmatchQueryTicketsResponse": {
"type": "object",
"properties": {

View File

@ -48,8 +48,8 @@
steps:
- id: 'Docker Image: open-match-build'
name: gcr.io/kaniko-project/executor
args: ['--destination=gcr.io/$PROJECT_ID/open-match-build', '--cache=true', '--cache-ttl=48h', '--dockerfile=Dockerfile.ci']
name: gcr.io/cloud-builders/docker
args: ['build', '-t', 'gcr.io/$PROJECT_ID/open-match-build', '-f', 'Dockerfile.ci', '.']
waitFor: ['-']
- id: 'Build: Clean'
@ -57,10 +57,10 @@ steps:
args: ['make', 'clean-third-party', 'clean-protos', 'clean-swagger-docs']
waitFor: ['Docker Image: open-match-build']
- id: 'Test: Markdown'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'md-test']
waitFor: ['Build: Clean']
# - id: 'Test: Markdown'
# name: 'gcr.io/$PROJECT_ID/open-match-build'
# args: ['make', 'md-test']
# waitFor: ['Build: Clean']
- id: 'Setup: Download Dependencies'
name: 'gcr.io/$PROJECT_ID/open-match-build'
@ -153,7 +153,7 @@ steps:
artifacts:
objects:
location: gs://open-match-build-artifacts/output/
location: '${_ARTIFACTS_BUCKET}'
paths:
- install/yaml/install.yaml
- install/yaml/01-open-match-core.yaml
@ -164,10 +164,12 @@ artifacts:
- install/yaml/06-open-match-override-configmap.yaml
substitutions:
_OM_VERSION: "0.0.0-dev"
_OM_VERSION: "1.1.0"
_GCB_POST_SUBMIT: "0"
_GCB_LATEST_VERSION: "undefined"
logsBucket: 'gs://open-match-build-logs/'
_ARTIFACTS_BUCKET: "gs://open-match-build-artifacts/output/"
_LOGS_BUCKET: "gs://open-match-build-logs/"
logsBucket: '${_LOGS_BUCKET}'
options:
sourceProvenanceHash: ['SHA256']
machineType: 'N1_HIGHCPU_32'

View File

@ -16,10 +16,10 @@
package main
import (
"open-match.dev/open-match/internal/app"
"open-match.dev/open-match/internal/app/backend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
app.RunApplication("backend", backend.BindService)
appmain.RunApplication("backend", backend.BindService)
}

View File

@ -11,17 +11,14 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"open-match.dev/open-match/tutorials/matchmaker101/evaluator/evaluate"
)
const (
// Replace this with the port on which your Evaluator service is exposed.
evaluatorPort = 50508
"open-match.dev/open-match/internal/app/evaluator/defaulteval"
"open-match.dev/open-match/internal/appmain"
)
func main() {
evaluate.Start(evaluatorPort)
appmain.RunApplication("evaluator", defaulteval.BindService)
}

View File

@ -16,10 +16,10 @@
package main
import (
"open-match.dev/open-match/internal/app"
"open-match.dev/open-match/internal/app/frontend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
app.RunApplication("frontend", frontend.BindService)
appmain.RunApplication("frontend", frontend.BindService)
}

View File

@ -16,10 +16,10 @@
package main
import (
"open-match.dev/open-match/internal/app"
"open-match.dev/open-match/internal/app/minimatch"
"open-match.dev/open-match/internal/appmain"
)
func main() {
app.RunApplication("minimatch", minimatch.BindService)
appmain.RunApplication("minimatch", minimatch.BindService)
}

View File

@ -12,14 +12,14 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the mmlogic service for Open Match.
// Package main is the query service for Open Match.
package main
import (
"open-match.dev/open-match/internal/app"
"open-match.dev/open-match/internal/app/mmlogic"
"open-match.dev/open-match/internal/app/query"
"open-match.dev/open-match/internal/appmain"
)
func main() {
app.RunApplication("mmlogic", mmlogic.BindService)
appmain.RunApplication("query", query.BindService)
}

View File

@ -16,8 +16,9 @@ package main
import (
"open-match.dev/open-match/examples/scale/backend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
backend.Run()
appmain.RunApplication("scale", backend.BindService)
}

View File

@ -1,4 +1,3 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -12,13 +11,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package e2e
package main
import (
"open-match.dev/open-match/internal/testing/e2e"
"testing"
scaleEvaluator "open-match.dev/open-match/examples/scale/evaluator"
)
func TestMain(m *testing.M) {
e2e.RunMain(m)
func main() {
scaleEvaluator.Run()
}

View File

@ -16,8 +16,9 @@ package main
import (
"open-match.dev/open-match/examples/scale/frontend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
frontend.Run()
appmain.RunApplication("scale", frontend.BindService)
}

View File

@ -12,12 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package e2e
package main
import (
"testing"
scaleMmf "open-match.dev/open-match/examples/scale/mmf"
)
func TestMain(m *testing.M) {
RunMain(m)
func main() {
scaleMmf.Run()
}

View File

@ -2,7 +2,7 @@
"urls": [
{"name": "Frontend", "url": "https://open-match.dev/api/v0.0.0-dev/frontend.swagger.json"},
{"name": "Backend", "url": "https://open-match.dev/api/v0.0.0-dev/backend.swagger.json"},
{"name": "Mmlogic", "url": "https://open-match.dev/api/v0.0.0-dev/mmlogic.swagger.json"},
{"name": "Query", "url": "https://open-match.dev/api/v0.0.0-dev/query.swagger.json"},
{"name": "MatchFunction", "url": "https://open-match.dev/api/v0.0.0-dev/matchfunction.swagger.json"},
{"name": "Synchronizer", "url": "https://open-match.dev/api/v0.0.0-dev/synchronizer.swagger.json"},
{"name": "Evaluator", "url": "https://open-match.dev/api/v0.0.0-dev/evaluator.swagger.json"}

View File

@ -16,10 +16,10 @@
package main
import (
"open-match.dev/open-match/internal/app"
"open-match.dev/open-match/internal/app/synchronizer"
"open-match.dev/open-match/internal/appmain"
)
func main() {
app.RunApplication("synchronizer", synchronizer.BindService)
appmain.RunApplication("synchronizer", synchronizer.BindService)
}

View File

@ -1,54 +0,0 @@
// <auto-generated>
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: third_party/protoc-gen-swagger/options/annotations.proto
// </auto-generated>
#pragma warning disable 1591, 0612, 3021
#region Designer generated code
using pb = global::Google.Protobuf;
using pbc = global::Google.Protobuf.Collections;
using pbr = global::Google.Protobuf.Reflection;
using scg = global::System.Collections.Generic;
namespace Grpc.Gateway.ProtocGenSwagger.Options {
/// <summary>Holder for reflection information generated from third_party/protoc-gen-swagger/options/annotations.proto</summary>
public static partial class AnnotationsReflection {
#region Descriptor
/// <summary>File descriptor for third_party/protoc-gen-swagger/options/annotations.proto</summary>
public static pbr::FileDescriptor Descriptor {
get { return descriptor; }
}
private static pbr::FileDescriptor descriptor;
static AnnotationsReflection() {
byte[] descriptorData = global::System.Convert.FromBase64String(
string.Concat(
"Cjh0aGlyZF9wYXJ0eS9wcm90b2MtZ2VuLXN3YWdnZXIvb3B0aW9ucy9hbm5v",
"dGF0aW9ucy5wcm90bxInZ3JwYy5nYXRld2F5LnByb3RvY19nZW5fc3dhZ2dl",
"ci5vcHRpb25zGipwcm90b2MtZ2VuLXN3YWdnZXIvb3B0aW9ucy9vcGVuYXBp",
"djIucHJvdG8aIGdvb2dsZS9wcm90b2J1Zi9kZXNjcmlwdG9yLnByb3RvOmoK",
"EW9wZW5hcGl2Ml9zd2FnZ2VyEhwuZ29vZ2xlLnByb3RvYnVmLkZpbGVPcHRp",
"b25zGJIIIAEoCzIwLmdycGMuZ2F0ZXdheS5wcm90b2NfZ2VuX3N3YWdnZXIu",
"b3B0aW9ucy5Td2FnZ2VyOnAKE29wZW5hcGl2Ml9vcGVyYXRpb24SHi5nb29n",
"bGUucHJvdG9idWYuTWV0aG9kT3B0aW9ucxiSCCABKAsyMi5ncnBjLmdhdGV3",
"YXkucHJvdG9jX2dlbl9zd2FnZ2VyLm9wdGlvbnMuT3BlcmF0aW9uOmsKEG9w",
"ZW5hcGl2Ml9zY2hlbWESHy5nb29nbGUucHJvdG9idWYuTWVzc2FnZU9wdGlv",
"bnMYkgggASgLMi8uZ3JwYy5nYXRld2F5LnByb3RvY19nZW5fc3dhZ2dlci5v",
"cHRpb25zLlNjaGVtYTplCg1vcGVuYXBpdjJfdGFnEh8uZ29vZ2xlLnByb3Rv",
"YnVmLlNlcnZpY2VPcHRpb25zGJIIIAEoCzIsLmdycGMuZ2F0ZXdheS5wcm90",
"b2NfZ2VuX3N3YWdnZXIub3B0aW9ucy5UYWc6bAoPb3BlbmFwaXYyX2ZpZWxk",
"Eh0uZ29vZ2xlLnByb3RvYnVmLkZpZWxkT3B0aW9ucxiSCCABKAsyMy5ncnBj",
"LmdhdGV3YXkucHJvdG9jX2dlbl9zd2FnZ2VyLm9wdGlvbnMuSlNPTlNjaGVt",
"YUJDWkFnaXRodWIuY29tL2dycGMtZWNvc3lzdGVtL2dycGMtZ2F0ZXdheS9w",
"cm90b2MtZ2VuLXN3YWdnZXIvb3B0aW9uc2IGcHJvdG8z"));
descriptor = pbr::FileDescriptor.FromGeneratedCode(descriptorData,
new pbr::FileDescriptor[] { global::Grpc.Gateway.ProtocGenSwagger.Options.Openapiv2Reflection.Descriptor, pbr::FileDescriptor.DescriptorProtoFileDescriptor, },
new pbr::GeneratedClrTypeInfo(null, null));
}
#endregion
}
}
#endregion Designer generated code

View File

@ -1,834 +0,0 @@
// <auto-generated>
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: api/backend.proto
// </auto-generated>
#pragma warning disable 1591, 0612, 3021
#region Designer generated code
using pb = global::Google.Protobuf;
using pbc = global::Google.Protobuf.Collections;
using pbr = global::Google.Protobuf.Reflection;
using scg = global::System.Collections.Generic;
namespace OpenMatch {
/// <summary>Holder for reflection information generated from api/backend.proto</summary>
public static partial class BackendReflection {
#region Descriptor
/// <summary>File descriptor for api/backend.proto</summary>
public static pbr::FileDescriptor Descriptor {
get { return descriptor; }
}
private static pbr::FileDescriptor descriptor;
static BackendReflection() {
byte[] descriptorData = global::System.Convert.FromBase64String(
string.Concat(
"ChFhcGkvYmFja2VuZC5wcm90bxIJb3Blbm1hdGNoGhJhcGkvbWVzc2FnZXMu",
"cHJvdG8aHGdvb2dsZS9hcGkvYW5ub3RhdGlvbnMucHJvdG8aLHByb3RvYy1n",
"ZW4tc3dhZ2dlci9vcHRpb25zL2Fubm90YXRpb25zLnByb3RvInYKDkZ1bmN0",
"aW9uQ29uZmlnEgwKBGhvc3QYASABKAkSDAoEcG9ydBgCIAEoBRIsCgR0eXBl",
"GAMgASgOMh4ub3Blbm1hdGNoLkZ1bmN0aW9uQ29uZmlnLlR5cGUiGgoEVHlw",
"ZRIICgRHUlBDEAASCAoEUkVTVBABImsKE0ZldGNoTWF0Y2hlc1JlcXVlc3QS",
"KQoGY29uZmlnGAEgASgLMhkub3Blbm1hdGNoLkZ1bmN0aW9uQ29uZmlnEikK",
"CHByb2ZpbGVzGAIgAygLMhcub3Blbm1hdGNoLk1hdGNoUHJvZmlsZSI3ChRG",
"ZXRjaE1hdGNoZXNSZXNwb25zZRIfCgVtYXRjaBgBIAEoCzIQLm9wZW5tYXRj",
"aC5NYXRjaCJVChRBc3NpZ25UaWNrZXRzUmVxdWVzdBISCgp0aWNrZXRfaWRz",
"GAEgAygJEikKCmFzc2lnbm1lbnQYAiABKAsyFS5vcGVubWF0Y2guQXNzaWdu",
"bWVudCIXChVBc3NpZ25UaWNrZXRzUmVzcG9uc2Uy/QEKB0JhY2tlbmQSdwoM",
"RmV0Y2hNYXRjaGVzEh4ub3Blbm1hdGNoLkZldGNoTWF0Y2hlc1JlcXVlc3Qa",
"Hy5vcGVubWF0Y2guRmV0Y2hNYXRjaGVzUmVzcG9uc2UiJILT5JMCHiIZL3Yx",
"L2JhY2tlbmQvbWF0Y2hlczpmZXRjaDoBKjABEnkKDUFzc2lnblRpY2tldHMS",
"Hy5vcGVubWF0Y2guQXNzaWduVGlja2V0c1JlcXVlc3QaIC5vcGVubWF0Y2gu",
"QXNzaWduVGlja2V0c1Jlc3BvbnNlIiWC0+STAh8iGi92MS9iYWNrZW5kL3Rp",
"Y2tldHM6YXNzaWduOgEqQooDWiBvcGVuLW1hdGNoLmRldi9vcGVuLW1hdGNo",
"L3BrZy9wYqoCCU9wZW5NYXRjaJJB2AISsQEKB0JhY2tlbmQiSQoKT3BlbiBN",
"YXRjaBIWaHR0cHM6Ly9vcGVuLW1hdGNoLmRldhojb3Blbi1tYXRjaC1kaXNj",
"dXNzQGdvb2dsZWdyb3Vwcy5jb20qVgoSQXBhY2hlIDIuMCBMaWNlbnNlEkBo",
"dHRwczovL2dpdGh1Yi5jb20vZ29vZ2xlZm9yZ2FtZXMvb3Blbi1tYXRjaC9i",
"bG9iL21hc3Rlci9MSUNFTlNFMgMxLjAqAgECMhBhcHBsaWNhdGlvbi9qc29u",
"OhBhcHBsaWNhdGlvbi9qc29uUjsKAzQwNBI0CipSZXR1cm5lZCB3aGVuIHRo",
"ZSByZXNvdXJjZSBkb2VzIG5vdCBleGlzdC4SBgoEmgIBB3I9ChhPcGVuIE1h",
"dGNoIERvY3VtZW50YXRpb24SIWh0dHBzOi8vb3Blbi1tYXRjaC5kZXYvc2l0",
"ZS9kb2NzL2IGcHJvdG8z"));
descriptor = pbr::FileDescriptor.FromGeneratedCode(descriptorData,
new pbr::FileDescriptor[] { global::OpenMatch.MessagesReflection.Descriptor, global::Google.Api.AnnotationsReflection.Descriptor, global::Grpc.Gateway.ProtocGenSwagger.Options.AnnotationsReflection.Descriptor, },
new pbr::GeneratedClrTypeInfo(null, new pbr::GeneratedClrTypeInfo[] {
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.FunctionConfig), global::OpenMatch.FunctionConfig.Parser, new[]{ "Host", "Port", "Type" }, null, new[]{ typeof(global::OpenMatch.FunctionConfig.Types.Type) }, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.FetchMatchesRequest), global::OpenMatch.FetchMatchesRequest.Parser, new[]{ "Config", "Profiles" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.FetchMatchesResponse), global::OpenMatch.FetchMatchesResponse.Parser, new[]{ "Match" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.AssignTicketsRequest), global::OpenMatch.AssignTicketsRequest.Parser, new[]{ "TicketIds", "Assignment" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.AssignTicketsResponse), global::OpenMatch.AssignTicketsResponse.Parser, null, null, null, null)
}));
}
#endregion
}
#region Messages
/// <summary>
/// FunctionConfig specifies a MMF address and client type for Backend to establish connections with the MMF
/// </summary>
public sealed partial class FunctionConfig : pb::IMessage<FunctionConfig> {
private static readonly pb::MessageParser<FunctionConfig> _parser = new pb::MessageParser<FunctionConfig>(() => new FunctionConfig());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<FunctionConfig> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.BackendReflection.Descriptor.MessageTypes[0]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public FunctionConfig() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public FunctionConfig(FunctionConfig other) : this() {
host_ = other.host_;
port_ = other.port_;
type_ = other.type_;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public FunctionConfig Clone() {
return new FunctionConfig(this);
}
/// <summary>Field number for the "host" field.</summary>
public const int HostFieldNumber = 1;
private string host_ = "";
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public string Host {
get { return host_; }
set {
host_ = pb::ProtoPreconditions.CheckNotNull(value, "value");
}
}
/// <summary>Field number for the "port" field.</summary>
public const int PortFieldNumber = 2;
private int port_;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int Port {
get { return port_; }
set {
port_ = value;
}
}
/// <summary>Field number for the "type" field.</summary>
public const int TypeFieldNumber = 3;
private global::OpenMatch.FunctionConfig.Types.Type type_ = 0;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.FunctionConfig.Types.Type Type {
get { return type_; }
set {
type_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as FunctionConfig);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(FunctionConfig other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (Host != other.Host) return false;
if (Port != other.Port) return false;
if (Type != other.Type) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (Host.Length != 0) hash ^= Host.GetHashCode();
if (Port != 0) hash ^= Port.GetHashCode();
if (Type != 0) hash ^= Type.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (Host.Length != 0) {
output.WriteRawTag(10);
output.WriteString(Host);
}
if (Port != 0) {
output.WriteRawTag(16);
output.WriteInt32(Port);
}
if (Type != 0) {
output.WriteRawTag(24);
output.WriteEnum((int) Type);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (Host.Length != 0) {
size += 1 + pb::CodedOutputStream.ComputeStringSize(Host);
}
if (Port != 0) {
size += 1 + pb::CodedOutputStream.ComputeInt32Size(Port);
}
if (Type != 0) {
size += 1 + pb::CodedOutputStream.ComputeEnumSize((int) Type);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(FunctionConfig other) {
if (other == null) {
return;
}
if (other.Host.Length != 0) {
Host = other.Host;
}
if (other.Port != 0) {
Port = other.Port;
}
if (other.Type != 0) {
Type = other.Type;
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
Host = input.ReadString();
break;
}
case 16: {
Port = input.ReadInt32();
break;
}
case 24: {
Type = (global::OpenMatch.FunctionConfig.Types.Type) input.ReadEnum();
break;
}
}
}
}
#region Nested types
/// <summary>Container for nested types declared in the FunctionConfig message type.</summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static partial class Types {
public enum Type {
[pbr::OriginalName("GRPC")] Grpc = 0,
[pbr::OriginalName("REST")] Rest = 1,
}
}
#endregion
}
public sealed partial class FetchMatchesRequest : pb::IMessage<FetchMatchesRequest> {
private static readonly pb::MessageParser<FetchMatchesRequest> _parser = new pb::MessageParser<FetchMatchesRequest>(() => new FetchMatchesRequest());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<FetchMatchesRequest> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.BackendReflection.Descriptor.MessageTypes[1]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public FetchMatchesRequest() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public FetchMatchesRequest(FetchMatchesRequest other) : this() {
config_ = other.config_ != null ? other.config_.Clone() : null;
profiles_ = other.profiles_.Clone();
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public FetchMatchesRequest Clone() {
return new FetchMatchesRequest(this);
}
/// <summary>Field number for the "config" field.</summary>
public const int ConfigFieldNumber = 1;
private global::OpenMatch.FunctionConfig config_;
/// <summary>
/// FunctionConfig specifies a MMF address and client type for Backend to establish connections with the MMF
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.FunctionConfig Config {
get { return config_; }
set {
config_ = value;
}
}
/// <summary>Field number for the "profiles" field.</summary>
public const int ProfilesFieldNumber = 2;
private static readonly pb::FieldCodec<global::OpenMatch.MatchProfile> _repeated_profiles_codec
= pb::FieldCodec.ForMessage(18, global::OpenMatch.MatchProfile.Parser);
private readonly pbc::RepeatedField<global::OpenMatch.MatchProfile> profiles_ = new pbc::RepeatedField<global::OpenMatch.MatchProfile>();
/// <summary>
/// MatchProfiles that will be sent to the MMF specified in the FunctionConfig.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public pbc::RepeatedField<global::OpenMatch.MatchProfile> Profiles {
get { return profiles_; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as FetchMatchesRequest);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(FetchMatchesRequest other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (!object.Equals(Config, other.Config)) return false;
if(!profiles_.Equals(other.profiles_)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (config_ != null) hash ^= Config.GetHashCode();
hash ^= profiles_.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (config_ != null) {
output.WriteRawTag(10);
output.WriteMessage(Config);
}
profiles_.WriteTo(output, _repeated_profiles_codec);
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (config_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Config);
}
size += profiles_.CalculateSize(_repeated_profiles_codec);
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(FetchMatchesRequest other) {
if (other == null) {
return;
}
if (other.config_ != null) {
if (config_ == null) {
Config = new global::OpenMatch.FunctionConfig();
}
Config.MergeFrom(other.Config);
}
profiles_.Add(other.profiles_);
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
if (config_ == null) {
Config = new global::OpenMatch.FunctionConfig();
}
input.ReadMessage(Config);
break;
}
case 18: {
profiles_.AddEntriesFrom(input, _repeated_profiles_codec);
break;
}
}
}
}
}
public sealed partial class FetchMatchesResponse : pb::IMessage<FetchMatchesResponse> {
private static readonly pb::MessageParser<FetchMatchesResponse> _parser = new pb::MessageParser<FetchMatchesResponse>(() => new FetchMatchesResponse());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<FetchMatchesResponse> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.BackendReflection.Descriptor.MessageTypes[2]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public FetchMatchesResponse() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public FetchMatchesResponse(FetchMatchesResponse other) : this() {
match_ = other.match_ != null ? other.match_.Clone() : null;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public FetchMatchesResponse Clone() {
return new FetchMatchesResponse(this);
}
/// <summary>Field number for the "match" field.</summary>
public const int MatchFieldNumber = 1;
private global::OpenMatch.Match match_;
/// <summary>
/// A Match generated by the user-defined MMF with the specified MatchProfiles.
/// A valid Match response will contain at least one ticket.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.Match Match {
get { return match_; }
set {
match_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as FetchMatchesResponse);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(FetchMatchesResponse other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (!object.Equals(Match, other.Match)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (match_ != null) hash ^= Match.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (match_ != null) {
output.WriteRawTag(10);
output.WriteMessage(Match);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (match_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Match);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(FetchMatchesResponse other) {
if (other == null) {
return;
}
if (other.match_ != null) {
if (match_ == null) {
Match = new global::OpenMatch.Match();
}
Match.MergeFrom(other.Match);
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
if (match_ == null) {
Match = new global::OpenMatch.Match();
}
input.ReadMessage(Match);
break;
}
}
}
}
}
public sealed partial class AssignTicketsRequest : pb::IMessage<AssignTicketsRequest> {
private static readonly pb::MessageParser<AssignTicketsRequest> _parser = new pb::MessageParser<AssignTicketsRequest>(() => new AssignTicketsRequest());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<AssignTicketsRequest> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.BackendReflection.Descriptor.MessageTypes[3]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public AssignTicketsRequest() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public AssignTicketsRequest(AssignTicketsRequest other) : this() {
ticketIds_ = other.ticketIds_.Clone();
assignment_ = other.assignment_ != null ? other.assignment_.Clone() : null;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public AssignTicketsRequest Clone() {
return new AssignTicketsRequest(this);
}
/// <summary>Field number for the "ticket_ids" field.</summary>
public const int TicketIdsFieldNumber = 1;
private static readonly pb::FieldCodec<string> _repeated_ticketIds_codec
= pb::FieldCodec.ForString(10);
private readonly pbc::RepeatedField<string> ticketIds_ = new pbc::RepeatedField<string>();
/// <summary>
/// TicketIds is a list of strings representing Open Match generated Ids which apply to an Assignment.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public pbc::RepeatedField<string> TicketIds {
get { return ticketIds_; }
}
/// <summary>Field number for the "assignment" field.</summary>
public const int AssignmentFieldNumber = 2;
private global::OpenMatch.Assignment assignment_;
/// <summary>
/// An Assignment specifies game connection related information to be associated with the TicketIds.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.Assignment Assignment {
get { return assignment_; }
set {
assignment_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as AssignTicketsRequest);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(AssignTicketsRequest other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if(!ticketIds_.Equals(other.ticketIds_)) return false;
if (!object.Equals(Assignment, other.Assignment)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
hash ^= ticketIds_.GetHashCode();
if (assignment_ != null) hash ^= Assignment.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
ticketIds_.WriteTo(output, _repeated_ticketIds_codec);
if (assignment_ != null) {
output.WriteRawTag(18);
output.WriteMessage(Assignment);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
size += ticketIds_.CalculateSize(_repeated_ticketIds_codec);
if (assignment_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Assignment);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(AssignTicketsRequest other) {
if (other == null) {
return;
}
ticketIds_.Add(other.ticketIds_);
if (other.assignment_ != null) {
if (assignment_ == null) {
Assignment = new global::OpenMatch.Assignment();
}
Assignment.MergeFrom(other.Assignment);
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
ticketIds_.AddEntriesFrom(input, _repeated_ticketIds_codec);
break;
}
case 18: {
if (assignment_ == null) {
Assignment = new global::OpenMatch.Assignment();
}
input.ReadMessage(Assignment);
break;
}
}
}
}
}
public sealed partial class AssignTicketsResponse : pb::IMessage<AssignTicketsResponse> {
private static readonly pb::MessageParser<AssignTicketsResponse> _parser = new pb::MessageParser<AssignTicketsResponse>(() => new AssignTicketsResponse());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<AssignTicketsResponse> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.BackendReflection.Descriptor.MessageTypes[4]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public AssignTicketsResponse() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public AssignTicketsResponse(AssignTicketsResponse other) : this() {
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public AssignTicketsResponse Clone() {
return new AssignTicketsResponse(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as AssignTicketsResponse);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(AssignTicketsResponse other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(AssignTicketsResponse other) {
if (other == null) {
return;
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
}
}
}
}
#endregion
}
#endregion Designer generated code

View File

@ -1,336 +0,0 @@
// <auto-generated>
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: api/evaluator.proto
// </auto-generated>
#pragma warning disable 1591, 0612, 3021
#region Designer generated code
using pb = global::Google.Protobuf;
using pbc = global::Google.Protobuf.Collections;
using pbr = global::Google.Protobuf.Reflection;
using scg = global::System.Collections.Generic;
namespace OpenMatch {
/// <summary>Holder for reflection information generated from api/evaluator.proto</summary>
public static partial class EvaluatorReflection {
#region Descriptor
/// <summary>File descriptor for api/evaluator.proto</summary>
public static pbr::FileDescriptor Descriptor {
get { return descriptor; }
}
private static pbr::FileDescriptor descriptor;
static EvaluatorReflection() {
byte[] descriptorData = global::System.Convert.FromBase64String(
string.Concat(
"ChNhcGkvZXZhbHVhdG9yLnByb3RvEglvcGVubWF0Y2gaEmFwaS9tZXNzYWdl",
"cy5wcm90bxocZ29vZ2xlL2FwaS9hbm5vdGF0aW9ucy5wcm90bxoscHJvdG9j",
"LWdlbi1zd2FnZ2VyL29wdGlvbnMvYW5ub3RhdGlvbnMucHJvdG8iMgoPRXZh",
"bHVhdGVSZXF1ZXN0Eh8KBW1hdGNoGAEgASgLMhAub3Blbm1hdGNoLk1hdGNo",
"IjMKEEV2YWx1YXRlUmVzcG9uc2USHwoFbWF0Y2gYASABKAsyEC5vcGVubWF0",
"Y2guTWF0Y2gyfwoJRXZhbHVhdG9yEnIKCEV2YWx1YXRlEhoub3Blbm1hdGNo",
"LkV2YWx1YXRlUmVxdWVzdBobLm9wZW5tYXRjaC5FdmFsdWF0ZVJlc3BvbnNl",
"IimC0+STAiMiHi92MS9ldmFsdWF0b3IvbWF0Y2hlczpldmFsdWF0ZToBKigB",
"MAFCjANaIG9wZW4tbWF0Y2guZGV2L29wZW4tbWF0Y2gvcGtnL3BiqgIJT3Bl",
"bk1hdGNokkHaAhKzAQoJRXZhbHVhdG9yIkkKCk9wZW4gTWF0Y2gSFmh0dHBz",
"Oi8vb3Blbi1tYXRjaC5kZXYaI29wZW4tbWF0Y2gtZGlzY3Vzc0Bnb29nbGVn",
"cm91cHMuY29tKlYKEkFwYWNoZSAyLjAgTGljZW5zZRJAaHR0cHM6Ly9naXRo",
"dWIuY29tL2dvb2dsZWZvcmdhbWVzL29wZW4tbWF0Y2gvYmxvYi9tYXN0ZXIv",
"TElDRU5TRTIDMS4wKgIBAjIQYXBwbGljYXRpb24vanNvbjoQYXBwbGljYXRp",
"b24vanNvblI7CgM0MDQSNAoqUmV0dXJuZWQgd2hlbiB0aGUgcmVzb3VyY2Ug",
"ZG9lcyBub3QgZXhpc3QuEgYKBJoCAQdyPQoYT3BlbiBNYXRjaCBEb2N1bWVu",
"dGF0aW9uEiFodHRwczovL29wZW4tbWF0Y2guZGV2L3NpdGUvZG9jcy9iBnBy",
"b3RvMw=="));
descriptor = pbr::FileDescriptor.FromGeneratedCode(descriptorData,
new pbr::FileDescriptor[] { global::OpenMatch.MessagesReflection.Descriptor, global::Google.Api.AnnotationsReflection.Descriptor, global::Grpc.Gateway.ProtocGenSwagger.Options.AnnotationsReflection.Descriptor, },
new pbr::GeneratedClrTypeInfo(null, new pbr::GeneratedClrTypeInfo[] {
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.EvaluateRequest), global::OpenMatch.EvaluateRequest.Parser, new[]{ "Match" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.EvaluateResponse), global::OpenMatch.EvaluateResponse.Parser, new[]{ "Match" }, null, null, null)
}));
}
#endregion
}
#region Messages
public sealed partial class EvaluateRequest : pb::IMessage<EvaluateRequest> {
private static readonly pb::MessageParser<EvaluateRequest> _parser = new pb::MessageParser<EvaluateRequest>(() => new EvaluateRequest());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<EvaluateRequest> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.EvaluatorReflection.Descriptor.MessageTypes[0]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public EvaluateRequest() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public EvaluateRequest(EvaluateRequest other) : this() {
match_ = other.match_ != null ? other.match_.Clone() : null;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public EvaluateRequest Clone() {
return new EvaluateRequest(this);
}
/// <summary>Field number for the "match" field.</summary>
public const int MatchFieldNumber = 1;
private global::OpenMatch.Match match_;
/// <summary>
/// A Match proposed by the Match Function, representing a candidate for the final results.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.Match Match {
get { return match_; }
set {
match_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as EvaluateRequest);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(EvaluateRequest other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (!object.Equals(Match, other.Match)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (match_ != null) hash ^= Match.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (match_ != null) {
output.WriteRawTag(10);
output.WriteMessage(Match);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (match_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Match);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(EvaluateRequest other) {
if (other == null) {
return;
}
if (other.match_ != null) {
if (match_ == null) {
Match = new global::OpenMatch.Match();
}
Match.MergeFrom(other.Match);
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
if (match_ == null) {
Match = new global::OpenMatch.Match();
}
input.ReadMessage(Match);
break;
}
}
}
}
}
public sealed partial class EvaluateResponse : pb::IMessage<EvaluateResponse> {
private static readonly pb::MessageParser<EvaluateResponse> _parser = new pb::MessageParser<EvaluateResponse>(() => new EvaluateResponse());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<EvaluateResponse> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.EvaluatorReflection.Descriptor.MessageTypes[1]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public EvaluateResponse() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public EvaluateResponse(EvaluateResponse other) : this() {
match_ = other.match_ != null ? other.match_.Clone() : null;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public EvaluateResponse Clone() {
return new EvaluateResponse(this);
}
/// <summary>Field number for the "match" field.</summary>
public const int MatchFieldNumber = 1;
private global::OpenMatch.Match match_;
/// <summary>
/// A Match shortlisted by the evaluator representing one of the final results.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.Match Match {
get { return match_; }
set {
match_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as EvaluateResponse);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(EvaluateResponse other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (!object.Equals(Match, other.Match)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (match_ != null) hash ^= Match.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (match_ != null) {
output.WriteRawTag(10);
output.WriteMessage(Match);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (match_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Match);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(EvaluateResponse other) {
if (other == null) {
return;
}
if (other.match_ != null) {
if (match_ == null) {
Match = new global::OpenMatch.Match();
}
Match.MergeFrom(other.Match);
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
if (match_ == null) {
Match = new global::OpenMatch.Match();
}
input.ReadMessage(Match);
break;
}
}
}
}
}
#endregion
}
#endregion Designer generated code
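
The generated EvaluateRequest and EvaluateResponse classes above follow the standard Google.Protobuf message pattern: a static Parser, deep Clone, value-based Equals, and CalculateSize/WriteTo for serialization. The following is a minimal, illustrative sketch of that API only; the class and method names are invented for the example, and the nested Match is left empty because api/messages.proto is not shown in this excerpt.

```csharp
using OpenMatch;

class EvaluatorMessageSketch
{
    static void Demo()
    {
        // Wrap a candidate Match in an EvaluateRequest; the Match is left empty
        // because its fields are defined in api/messages.proto, not shown here.
        var request = new EvaluateRequest { Match = new Match() };

        // Clone() is a deep copy, so the two messages compare equal by value.
        EvaluateRequest copy = request.Clone();
        bool equal = request.Equals(copy);      // true

        // CalculateSize() reports the number of bytes WriteTo() would emit.
        int wireSize = request.CalculateSize();
    }
}
```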


@@ -1,989 +0,0 @@
// <auto-generated>
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: api/frontend.proto
// </auto-generated>
#pragma warning disable 1591, 0612, 3021
#region Designer generated code
using pb = global::Google.Protobuf;
using pbc = global::Google.Protobuf.Collections;
using pbr = global::Google.Protobuf.Reflection;
using scg = global::System.Collections.Generic;
namespace OpenMatch {
/// <summary>Holder for reflection information generated from api/frontend.proto</summary>
public static partial class FrontendReflection {
#region Descriptor
/// <summary>File descriptor for api/frontend.proto</summary>
public static pbr::FileDescriptor Descriptor {
get { return descriptor; }
}
private static pbr::FileDescriptor descriptor;
static FrontendReflection() {
byte[] descriptorData = global::System.Convert.FromBase64String(
string.Concat(
"ChJhcGkvZnJvbnRlbmQucHJvdG8SCW9wZW5tYXRjaBoSYXBpL21lc3NhZ2Vz",
"LnByb3RvGhxnb29nbGUvYXBpL2Fubm90YXRpb25zLnByb3RvGixwcm90b2Mt",
"Z2VuLXN3YWdnZXIvb3B0aW9ucy9hbm5vdGF0aW9ucy5wcm90byI4ChNDcmVh",
"dGVUaWNrZXRSZXF1ZXN0EiEKBnRpY2tldBgBIAEoCzIRLm9wZW5tYXRjaC5U",
"aWNrZXQiOQoUQ3JlYXRlVGlja2V0UmVzcG9uc2USIQoGdGlja2V0GAEgASgL",
"MhEub3Blbm1hdGNoLlRpY2tldCIoChNEZWxldGVUaWNrZXRSZXF1ZXN0EhEK",
"CXRpY2tldF9pZBgBIAEoCSIWChREZWxldGVUaWNrZXRSZXNwb25zZSIlChBH",
"ZXRUaWNrZXRSZXF1ZXN0EhEKCXRpY2tldF9pZBgBIAEoCSIqChVHZXRBc3Np",
"Z25tZW50c1JlcXVlc3QSEQoJdGlja2V0X2lkGAEgASgJIkMKFkdldEFzc2ln",
"bm1lbnRzUmVzcG9uc2USKQoKYXNzaWdubWVudBgBIAEoCzIVLm9wZW5tYXRj",
"aC5Bc3NpZ25tZW50Mu4DCghGcm9udGVuZBJwCgxDcmVhdGVUaWNrZXQSHi5v",
"cGVubWF0Y2guQ3JlYXRlVGlja2V0UmVxdWVzdBofLm9wZW5tYXRjaC5DcmVh",
"dGVUaWNrZXRSZXNwb25zZSIfgtPkkwIZIhQvdjEvZnJvbnRlbmQvdGlja2V0",
"czoBKhJ5CgxEZWxldGVUaWNrZXQSHi5vcGVubWF0Y2guRGVsZXRlVGlja2V0",
"UmVxdWVzdBofLm9wZW5tYXRjaC5EZWxldGVUaWNrZXRSZXNwb25zZSIogtPk",
"kwIiKiAvdjEvZnJvbnRlbmQvdGlja2V0cy97dGlja2V0X2lkfRJlCglHZXRU",
"aWNrZXQSGy5vcGVubWF0Y2guR2V0VGlja2V0UmVxdWVzdBoRLm9wZW5tYXRj",
"aC5UaWNrZXQiKILT5JMCIhIgL3YxL2Zyb250ZW5kL3RpY2tldHMve3RpY2tl",
"dF9pZH0SjQEKDkdldEFzc2lnbm1lbnRzEiAub3Blbm1hdGNoLkdldEFzc2ln",
"bm1lbnRzUmVxdWVzdBohLm9wZW5tYXRjaC5HZXRBc3NpZ25tZW50c1Jlc3Bv",
"bnNlIjSC0+STAi4SLC92MS9mcm9udGVuZC90aWNrZXRzL3t0aWNrZXRfaWR9",
"L2Fzc2lnbm1lbnRzMAFCiwNaIG9wZW4tbWF0Y2guZGV2L29wZW4tbWF0Y2gv",
"cGtnL3BiqgIJT3Blbk1hdGNokkHZAhKyAQoIRnJvbnRlbmQiSQoKT3BlbiBN",
"YXRjaBIWaHR0cHM6Ly9vcGVuLW1hdGNoLmRldhojb3Blbi1tYXRjaC1kaXNj",
"dXNzQGdvb2dsZWdyb3Vwcy5jb20qVgoSQXBhY2hlIDIuMCBMaWNlbnNlEkBo",
"dHRwczovL2dpdGh1Yi5jb20vZ29vZ2xlZm9yZ2FtZXMvb3Blbi1tYXRjaC9i",
"bG9iL21hc3Rlci9MSUNFTlNFMgMxLjAqAgECMhBhcHBsaWNhdGlvbi9qc29u",
"OhBhcHBsaWNhdGlvbi9qc29uUjsKAzQwNBI0CipSZXR1cm5lZCB3aGVuIHRo",
"ZSByZXNvdXJjZSBkb2VzIG5vdCBleGlzdC4SBgoEmgIBB3I9ChhPcGVuIE1h",
"dGNoIERvY3VtZW50YXRpb24SIWh0dHBzOi8vb3Blbi1tYXRjaC5kZXYvc2l0",
"ZS9kb2NzL2IGcHJvdG8z"));
descriptor = pbr::FileDescriptor.FromGeneratedCode(descriptorData,
new pbr::FileDescriptor[] { global::OpenMatch.MessagesReflection.Descriptor, global::Google.Api.AnnotationsReflection.Descriptor, global::Grpc.Gateway.ProtocGenSwagger.Options.AnnotationsReflection.Descriptor, },
new pbr::GeneratedClrTypeInfo(null, new pbr::GeneratedClrTypeInfo[] {
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.CreateTicketRequest), global::OpenMatch.CreateTicketRequest.Parser, new[]{ "Ticket" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.CreateTicketResponse), global::OpenMatch.CreateTicketResponse.Parser, new[]{ "Ticket" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.DeleteTicketRequest), global::OpenMatch.DeleteTicketRequest.Parser, new[]{ "TicketId" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.DeleteTicketResponse), global::OpenMatch.DeleteTicketResponse.Parser, null, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.GetTicketRequest), global::OpenMatch.GetTicketRequest.Parser, new[]{ "TicketId" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.GetAssignmentsRequest), global::OpenMatch.GetAssignmentsRequest.Parser, new[]{ "TicketId" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.GetAssignmentsResponse), global::OpenMatch.GetAssignmentsResponse.Parser, new[]{ "Assignment" }, null, null, null)
}));
}
#endregion
}
#region Messages
public sealed partial class CreateTicketRequest : pb::IMessage<CreateTicketRequest> {
private static readonly pb::MessageParser<CreateTicketRequest> _parser = new pb::MessageParser<CreateTicketRequest>(() => new CreateTicketRequest());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<CreateTicketRequest> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.FrontendReflection.Descriptor.MessageTypes[0]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public CreateTicketRequest() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public CreateTicketRequest(CreateTicketRequest other) : this() {
ticket_ = other.ticket_ != null ? other.ticket_.Clone() : null;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public CreateTicketRequest Clone() {
return new CreateTicketRequest(this);
}
/// <summary>Field number for the "ticket" field.</summary>
public const int TicketFieldNumber = 1;
private global::OpenMatch.Ticket ticket_;
/// <summary>
/// A Ticket object with SearchFields defined.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.Ticket Ticket {
get { return ticket_; }
set {
ticket_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as CreateTicketRequest);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(CreateTicketRequest other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (!object.Equals(Ticket, other.Ticket)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (ticket_ != null) hash ^= Ticket.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (ticket_ != null) {
output.WriteRawTag(10);
output.WriteMessage(Ticket);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (ticket_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Ticket);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(CreateTicketRequest other) {
if (other == null) {
return;
}
if (other.ticket_ != null) {
if (ticket_ == null) {
Ticket = new global::OpenMatch.Ticket();
}
Ticket.MergeFrom(other.Ticket);
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
if (ticket_ == null) {
Ticket = new global::OpenMatch.Ticket();
}
input.ReadMessage(Ticket);
break;
}
}
}
}
}
public sealed partial class CreateTicketResponse : pb::IMessage<CreateTicketResponse> {
private static readonly pb::MessageParser<CreateTicketResponse> _parser = new pb::MessageParser<CreateTicketResponse>(() => new CreateTicketResponse());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<CreateTicketResponse> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.FrontendReflection.Descriptor.MessageTypes[1]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public CreateTicketResponse() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public CreateTicketResponse(CreateTicketResponse other) : this() {
ticket_ = other.ticket_ != null ? other.ticket_.Clone() : null;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public CreateTicketResponse Clone() {
return new CreateTicketResponse(this);
}
/// <summary>Field number for the "ticket" field.</summary>
public const int TicketFieldNumber = 1;
private global::OpenMatch.Ticket ticket_;
/// <summary>
/// A Ticket object with TicketId generated.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.Ticket Ticket {
get { return ticket_; }
set {
ticket_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as CreateTicketResponse);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(CreateTicketResponse other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (!object.Equals(Ticket, other.Ticket)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (ticket_ != null) hash ^= Ticket.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (ticket_ != null) {
output.WriteRawTag(10);
output.WriteMessage(Ticket);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (ticket_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Ticket);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(CreateTicketResponse other) {
if (other == null) {
return;
}
if (other.ticket_ != null) {
if (ticket_ == null) {
Ticket = new global::OpenMatch.Ticket();
}
Ticket.MergeFrom(other.Ticket);
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
if (ticket_ == null) {
Ticket = new global::OpenMatch.Ticket();
}
input.ReadMessage(Ticket);
break;
}
}
}
}
}
public sealed partial class DeleteTicketRequest : pb::IMessage<DeleteTicketRequest> {
private static readonly pb::MessageParser<DeleteTicketRequest> _parser = new pb::MessageParser<DeleteTicketRequest>(() => new DeleteTicketRequest());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<DeleteTicketRequest> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.FrontendReflection.Descriptor.MessageTypes[2]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public DeleteTicketRequest() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public DeleteTicketRequest(DeleteTicketRequest other) : this() {
ticketId_ = other.ticketId_;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public DeleteTicketRequest Clone() {
return new DeleteTicketRequest(this);
}
/// <summary>Field number for the "ticket_id" field.</summary>
public const int TicketIdFieldNumber = 1;
private string ticketId_ = "";
/// <summary>
/// A TicketId of a generated Ticket to be deleted.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public string TicketId {
get { return ticketId_; }
set {
ticketId_ = pb::ProtoPreconditions.CheckNotNull(value, "value");
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as DeleteTicketRequest);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(DeleteTicketRequest other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (TicketId != other.TicketId) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (TicketId.Length != 0) hash ^= TicketId.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (TicketId.Length != 0) {
output.WriteRawTag(10);
output.WriteString(TicketId);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (TicketId.Length != 0) {
size += 1 + pb::CodedOutputStream.ComputeStringSize(TicketId);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(DeleteTicketRequest other) {
if (other == null) {
return;
}
if (other.TicketId.Length != 0) {
TicketId = other.TicketId;
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
TicketId = input.ReadString();
break;
}
}
}
}
}
public sealed partial class DeleteTicketResponse : pb::IMessage<DeleteTicketResponse> {
private static readonly pb::MessageParser<DeleteTicketResponse> _parser = new pb::MessageParser<DeleteTicketResponse>(() => new DeleteTicketResponse());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<DeleteTicketResponse> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.FrontendReflection.Descriptor.MessageTypes[3]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public DeleteTicketResponse() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public DeleteTicketResponse(DeleteTicketResponse other) : this() {
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public DeleteTicketResponse Clone() {
return new DeleteTicketResponse(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as DeleteTicketResponse);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(DeleteTicketResponse other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(DeleteTicketResponse other) {
if (other == null) {
return;
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
}
}
}
}
public sealed partial class GetTicketRequest : pb::IMessage<GetTicketRequest> {
private static readonly pb::MessageParser<GetTicketRequest> _parser = new pb::MessageParser<GetTicketRequest>(() => new GetTicketRequest());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<GetTicketRequest> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.FrontendReflection.Descriptor.MessageTypes[4]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public GetTicketRequest() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public GetTicketRequest(GetTicketRequest other) : this() {
ticketId_ = other.ticketId_;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public GetTicketRequest Clone() {
return new GetTicketRequest(this);
}
/// <summary>Field number for the "ticket_id" field.</summary>
public const int TicketIdFieldNumber = 1;
private string ticketId_ = "";
/// <summary>
/// A TicketId of a generated Ticket.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public string TicketId {
get { return ticketId_; }
set {
ticketId_ = pb::ProtoPreconditions.CheckNotNull(value, "value");
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as GetTicketRequest);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(GetTicketRequest other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (TicketId != other.TicketId) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (TicketId.Length != 0) hash ^= TicketId.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (TicketId.Length != 0) {
output.WriteRawTag(10);
output.WriteString(TicketId);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (TicketId.Length != 0) {
size += 1 + pb::CodedOutputStream.ComputeStringSize(TicketId);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(GetTicketRequest other) {
if (other == null) {
return;
}
if (other.TicketId.Length != 0) {
TicketId = other.TicketId;
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
TicketId = input.ReadString();
break;
}
}
}
}
}
public sealed partial class GetAssignmentsRequest : pb::IMessage<GetAssignmentsRequest> {
private static readonly pb::MessageParser<GetAssignmentsRequest> _parser = new pb::MessageParser<GetAssignmentsRequest>(() => new GetAssignmentsRequest());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<GetAssignmentsRequest> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.FrontendReflection.Descriptor.MessageTypes[5]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public GetAssignmentsRequest() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public GetAssignmentsRequest(GetAssignmentsRequest other) : this() {
ticketId_ = other.ticketId_;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public GetAssignmentsRequest Clone() {
return new GetAssignmentsRequest(this);
}
/// <summary>Field number for the "ticket_id" field.</summary>
public const int TicketIdFieldNumber = 1;
private string ticketId_ = "";
/// <summary>
/// A TicketId of a generated Ticket to get updates on.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public string TicketId {
get { return ticketId_; }
set {
ticketId_ = pb::ProtoPreconditions.CheckNotNull(value, "value");
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as GetAssignmentsRequest);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(GetAssignmentsRequest other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (TicketId != other.TicketId) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (TicketId.Length != 0) hash ^= TicketId.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (TicketId.Length != 0) {
output.WriteRawTag(10);
output.WriteString(TicketId);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (TicketId.Length != 0) {
size += 1 + pb::CodedOutputStream.ComputeStringSize(TicketId);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(GetAssignmentsRequest other) {
if (other == null) {
return;
}
if (other.TicketId.Length != 0) {
TicketId = other.TicketId;
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
TicketId = input.ReadString();
break;
}
}
}
}
}
public sealed partial class GetAssignmentsResponse : pb::IMessage<GetAssignmentsResponse> {
private static readonly pb::MessageParser<GetAssignmentsResponse> _parser = new pb::MessageParser<GetAssignmentsResponse>(() => new GetAssignmentsResponse());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<GetAssignmentsResponse> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.FrontendReflection.Descriptor.MessageTypes[6]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public GetAssignmentsResponse() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public GetAssignmentsResponse(GetAssignmentsResponse other) : this() {
assignment_ = other.assignment_ != null ? other.assignment_.Clone() : null;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public GetAssignmentsResponse Clone() {
return new GetAssignmentsResponse(this);
}
/// <summary>Field number for the "assignment" field.</summary>
public const int AssignmentFieldNumber = 1;
private global::OpenMatch.Assignment assignment_;
/// <summary>
/// An updated Assignment of the requested Ticket.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.Assignment Assignment {
get { return assignment_; }
set {
assignment_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as GetAssignmentsResponse);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(GetAssignmentsResponse other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (!object.Equals(Assignment, other.Assignment)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (assignment_ != null) hash ^= Assignment.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (assignment_ != null) {
output.WriteRawTag(10);
output.WriteMessage(Assignment);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (assignment_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Assignment);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(GetAssignmentsResponse other) {
if (other == null) {
return;
}
if (other.assignment_ != null) {
if (assignment_ == null) {
Assignment = new global::OpenMatch.Assignment();
}
Assignment.MergeFrom(other.Assignment);
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
if (assignment_ == null) {
Assignment = new global::OpenMatch.Assignment();
}
input.ReadMessage(Assignment);
break;
}
}
}
}
}
#endregion
}
#endregion Designer generated code
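
All of the Frontend messages above share the same WriteTo/MergeFrom machinery, so a byte-level round trip only needs the message's ToByteArray extension and its static Parser. The sketch below is illustrative only: FrontendMessageSketch is not a real type in the repository, and the Ticket and TicketId values are placeholders because api/messages.proto is suppressed in this diff.

```csharp
using Google.Protobuf;   // ToByteArray() extension and Parser.ParseFrom()
using OpenMatch;

class FrontendMessageSketch
{
    static void Demo()
    {
        // An empty Ticket stands in for a real one with SearchFields populated.
        var create = new CreateTicketRequest { Ticket = new Ticket() };

        // Round-trip through the protobuf wire format.
        byte[] wire = create.ToByteArray();
        CreateTicketRequest parsed = CreateTicketRequest.Parser.ParseFrom(wire);

        // String fields such as TicketId default to "" and reject null
        // (see ProtoPreconditions.CheckNotNull in the generated setter).
        var delete = new DeleteTicketRequest { TicketId = "example-ticket-id" };
    }
}
```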


@@ -1,336 +0,0 @@
// <auto-generated>
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: api/matchfunction.proto
// </auto-generated>
#pragma warning disable 1591, 0612, 3021
#region Designer generated code
using pb = global::Google.Protobuf;
using pbc = global::Google.Protobuf.Collections;
using pbr = global::Google.Protobuf.Reflection;
using scg = global::System.Collections.Generic;
namespace OpenMatch {
/// <summary>Holder for reflection information generated from api/matchfunction.proto</summary>
public static partial class MatchfunctionReflection {
#region Descriptor
/// <summary>File descriptor for api/matchfunction.proto</summary>
public static pbr::FileDescriptor Descriptor {
get { return descriptor; }
}
private static pbr::FileDescriptor descriptor;
static MatchfunctionReflection() {
byte[] descriptorData = global::System.Convert.FromBase64String(
string.Concat(
"ChdhcGkvbWF0Y2hmdW5jdGlvbi5wcm90bxIJb3Blbm1hdGNoGhJhcGkvbWVz",
"c2FnZXMucHJvdG8aHGdvb2dsZS9hcGkvYW5ub3RhdGlvbnMucHJvdG8aLHBy",
"b3RvYy1nZW4tc3dhZ2dlci9vcHRpb25zL2Fubm90YXRpb25zLnByb3RvIjYK",
"ClJ1blJlcXVlc3QSKAoHcHJvZmlsZRgBIAEoCzIXLm9wZW5tYXRjaC5NYXRj",
"aFByb2ZpbGUiMQoLUnVuUmVzcG9uc2USIgoIcHJvcG9zYWwYASABKAsyEC5v",
"cGVubWF0Y2guTWF0Y2gyaQoNTWF0Y2hGdW5jdGlvbhJYCgNSdW4SFS5vcGVu",
"bWF0Y2guUnVuUmVxdWVzdBoWLm9wZW5tYXRjaC5SdW5SZXNwb25zZSIggtPk",
"kwIaIhUvdjEvbWF0Y2hmdW5jdGlvbjpydW46ASowAUKRA1ogb3Blbi1tYXRj",
"aC5kZXYvb3Blbi1tYXRjaC9wa2cvcGKqAglPcGVuTWF0Y2iSQd8CErgBCg5N",
"YXRjaCBGdW5jdGlvbiJJCgpPcGVuIE1hdGNoEhZodHRwczovL29wZW4tbWF0",
"Y2guZGV2GiNvcGVuLW1hdGNoLWRpc2N1c3NAZ29vZ2xlZ3JvdXBzLmNvbSpW",
"ChJBcGFjaGUgMi4wIExpY2Vuc2USQGh0dHBzOi8vZ2l0aHViLmNvbS9nb29n",
"bGVmb3JnYW1lcy9vcGVuLW1hdGNoL2Jsb2IvbWFzdGVyL0xJQ0VOU0UyAzEu",
"MCoCAQIyEGFwcGxpY2F0aW9uL2pzb246EGFwcGxpY2F0aW9uL2pzb25SOwoD",
"NDA0EjQKKlJldHVybmVkIHdoZW4gdGhlIHJlc291cmNlIGRvZXMgbm90IGV4",
"aXN0LhIGCgSaAgEHcj0KGE9wZW4gTWF0Y2ggRG9jdW1lbnRhdGlvbhIhaHR0",
"cHM6Ly9vcGVuLW1hdGNoLmRldi9zaXRlL2RvY3MvYgZwcm90bzM="));
descriptor = pbr::FileDescriptor.FromGeneratedCode(descriptorData,
new pbr::FileDescriptor[] { global::OpenMatch.MessagesReflection.Descriptor, global::Google.Api.AnnotationsReflection.Descriptor, global::Grpc.Gateway.ProtocGenSwagger.Options.AnnotationsReflection.Descriptor, },
new pbr::GeneratedClrTypeInfo(null, new pbr::GeneratedClrTypeInfo[] {
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.RunRequest), global::OpenMatch.RunRequest.Parser, new[]{ "Profile" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.RunResponse), global::OpenMatch.RunResponse.Parser, new[]{ "Proposal" }, null, null, null)
}));
}
#endregion
}
#region Messages
public sealed partial class RunRequest : pb::IMessage<RunRequest> {
private static readonly pb::MessageParser<RunRequest> _parser = new pb::MessageParser<RunRequest>(() => new RunRequest());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<RunRequest> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.MatchfunctionReflection.Descriptor.MessageTypes[0]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public RunRequest() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public RunRequest(RunRequest other) : this() {
profile_ = other.profile_ != null ? other.profile_.Clone() : null;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public RunRequest Clone() {
return new RunRequest(this);
}
/// <summary>Field number for the "profile" field.</summary>
public const int ProfileFieldNumber = 1;
private global::OpenMatch.MatchProfile profile_;
/// <summary>
/// A MatchProfile defines constraints of Tickets in a Match and shapes the Match proposed by the MatchFunction.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.MatchProfile Profile {
get { return profile_; }
set {
profile_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as RunRequest);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(RunRequest other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (!object.Equals(Profile, other.Profile)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (profile_ != null) hash ^= Profile.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (profile_ != null) {
output.WriteRawTag(10);
output.WriteMessage(Profile);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (profile_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Profile);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(RunRequest other) {
if (other == null) {
return;
}
if (other.profile_ != null) {
if (profile_ == null) {
Profile = new global::OpenMatch.MatchProfile();
}
Profile.MergeFrom(other.Profile);
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
if (profile_ == null) {
Profile = new global::OpenMatch.MatchProfile();
}
input.ReadMessage(Profile);
break;
}
}
}
}
}
public sealed partial class RunResponse : pb::IMessage<RunResponse> {
private static readonly pb::MessageParser<RunResponse> _parser = new pb::MessageParser<RunResponse>(() => new RunResponse());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<RunResponse> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.MatchfunctionReflection.Descriptor.MessageTypes[1]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public RunResponse() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public RunResponse(RunResponse other) : this() {
proposal_ = other.proposal_ != null ? other.proposal_.Clone() : null;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public RunResponse Clone() {
return new RunResponse(this);
}
/// <summary>Field number for the "proposal" field.</summary>
public const int ProposalFieldNumber = 1;
private global::OpenMatch.Match proposal_;
/// <summary>
/// A Proposal represents a Match candidate that satisfies the constraints defined in the input Profile.
/// A valid Proposal response will contain at least one ticket.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.Match Proposal {
get { return proposal_; }
set {
proposal_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as RunResponse);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(RunResponse other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (!object.Equals(Proposal, other.Proposal)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (proposal_ != null) hash ^= Proposal.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (proposal_ != null) {
output.WriteRawTag(10);
output.WriteMessage(Proposal);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (proposal_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Proposal);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(RunResponse other) {
if (other == null) {
return;
}
if (other.proposal_ != null) {
if (proposal_ == null) {
Proposal = new global::OpenMatch.Match();
}
Proposal.MergeFrom(other.Proposal);
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
if (proposal_ == null) {
Proposal = new global::OpenMatch.Match();
}
input.ReadMessage(Proposal);
break;
}
}
}
}
}
#endregion
}
#endregion Designer generated code
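
Note how MergeFrom(RunRequest) above allocates the nested Profile on demand before merging, the usual Google.Protobuf behavior for singular message fields. A short illustrative sketch of that behavior follows; MatchFunctionMessageSketch is an invented name, and the MatchProfile and Match instances are left empty because their fields come from api/messages.proto.

```csharp
using OpenMatch;

class MatchFunctionMessageSketch
{
    static void Demo()
    {
        var target = new RunRequest();                                // Profile starts out null
        var source = new RunRequest { Profile = new MatchProfile() };

        // MergeFrom allocates target.Profile if needed, then merges source.Profile into it.
        target.MergeFrom(source);
        bool populated = target.Profile != null;                      // true after the merge

        // A MatchFunction replies with its proposal wrapped in a RunResponse.
        var response = new RunResponse { Proposal = new Match() };
    }
}
```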

File diff suppressed because it is too large


@@ -1,322 +0,0 @@
// <auto-generated>
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: api/mmlogic.proto
// </auto-generated>
#pragma warning disable 1591, 0612, 3021
#region Designer generated code
using pb = global::Google.Protobuf;
using pbc = global::Google.Protobuf.Collections;
using pbr = global::Google.Protobuf.Reflection;
using scg = global::System.Collections.Generic;
namespace OpenMatch {
/// <summary>Holder for reflection information generated from api/mmlogic.proto</summary>
public static partial class MmlogicReflection {
#region Descriptor
/// <summary>File descriptor for api/mmlogic.proto</summary>
public static pbr::FileDescriptor Descriptor {
get { return descriptor; }
}
private static pbr::FileDescriptor descriptor;
static MmlogicReflection() {
byte[] descriptorData = global::System.Convert.FromBase64String(
string.Concat(
"ChFhcGkvbW1sb2dpYy5wcm90bxIJb3Blbm1hdGNoGhJhcGkvbWVzc2FnZXMu",
"cHJvdG8aHGdvb2dsZS9hcGkvYW5ub3RhdGlvbnMucHJvdG8aLHByb3RvYy1n",
"ZW4tc3dhZ2dlci9vcHRpb25zL2Fubm90YXRpb25zLnByb3RvIjQKE1F1ZXJ5",
"VGlja2V0c1JlcXVlc3QSHQoEcG9vbBgBIAEoCzIPLm9wZW5tYXRjaC5Qb29s",
"IjoKFFF1ZXJ5VGlja2V0c1Jlc3BvbnNlEiIKB3RpY2tldHMYASADKAsyES5v",
"cGVubWF0Y2guVGlja2V0MoIBCgdNbUxvZ2ljEncKDFF1ZXJ5VGlja2V0cxIe",
"Lm9wZW5tYXRjaC5RdWVyeVRpY2tldHNSZXF1ZXN0Gh8ub3Blbm1hdGNoLlF1",
"ZXJ5VGlja2V0c1Jlc3BvbnNlIiSC0+STAh4iGS92MS9tbWxvZ2ljL3RpY2tl",
"dHM6cXVlcnk6ASowAUKYA1ogb3Blbi1tYXRjaC5kZXYvb3Blbi1tYXRjaC9w",
"a2cvcGKqAglPcGVuTWF0Y2iSQeYCEr8BChVNTSBMb2dpYyAoRGF0YSBMYXll",
"cikiSQoKT3BlbiBNYXRjaBIWaHR0cHM6Ly9vcGVuLW1hdGNoLmRldhojb3Bl",
"bi1tYXRjaC1kaXNjdXNzQGdvb2dsZWdyb3Vwcy5jb20qVgoSQXBhY2hlIDIu",
"MCBMaWNlbnNlEkBodHRwczovL2dpdGh1Yi5jb20vZ29vZ2xlZm9yZ2FtZXMv",
"b3Blbi1tYXRjaC9ibG9iL21hc3Rlci9MSUNFTlNFMgMxLjAqAgECMhBhcHBs",
"aWNhdGlvbi9qc29uOhBhcHBsaWNhdGlvbi9qc29uUjsKAzQwNBI0CipSZXR1",
"cm5lZCB3aGVuIHRoZSByZXNvdXJjZSBkb2VzIG5vdCBleGlzdC4SBgoEmgIB",
"B3I9ChhPcGVuIE1hdGNoIERvY3VtZW50YXRpb24SIWh0dHBzOi8vb3Blbi1t",
"YXRjaC5kZXYvc2l0ZS9kb2NzL2IGcHJvdG8z"));
descriptor = pbr::FileDescriptor.FromGeneratedCode(descriptorData,
new pbr::FileDescriptor[] { global::OpenMatch.MessagesReflection.Descriptor, global::Google.Api.AnnotationsReflection.Descriptor, global::Grpc.Gateway.ProtocGenSwagger.Options.AnnotationsReflection.Descriptor, },
new pbr::GeneratedClrTypeInfo(null, new pbr::GeneratedClrTypeInfo[] {
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.QueryTicketsRequest), global::OpenMatch.QueryTicketsRequest.Parser, new[]{ "Pool" }, null, null, null),
new pbr::GeneratedClrTypeInfo(typeof(global::OpenMatch.QueryTicketsResponse), global::OpenMatch.QueryTicketsResponse.Parser, new[]{ "Tickets" }, null, null, null)
}));
}
#endregion
}
#region Messages
public sealed partial class QueryTicketsRequest : pb::IMessage<QueryTicketsRequest> {
private static readonly pb::MessageParser<QueryTicketsRequest> _parser = new pb::MessageParser<QueryTicketsRequest>(() => new QueryTicketsRequest());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<QueryTicketsRequest> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.MmlogicReflection.Descriptor.MessageTypes[0]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public QueryTicketsRequest() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public QueryTicketsRequest(QueryTicketsRequest other) : this() {
pool_ = other.pool_ != null ? other.pool_.Clone() : null;
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public QueryTicketsRequest Clone() {
return new QueryTicketsRequest(this);
}
/// <summary>Field number for the "pool" field.</summary>
public const int PoolFieldNumber = 1;
private global::OpenMatch.Pool pool_;
/// <summary>
/// A Pool consists of a set of Filters.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public global::OpenMatch.Pool Pool {
get { return pool_; }
set {
pool_ = value;
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as QueryTicketsRequest);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(QueryTicketsRequest other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if (!object.Equals(Pool, other.Pool)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
if (pool_ != null) hash ^= Pool.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
if (pool_ != null) {
output.WriteRawTag(10);
output.WriteMessage(Pool);
}
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
if (pool_ != null) {
size += 1 + pb::CodedOutputStream.ComputeMessageSize(Pool);
}
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(QueryTicketsRequest other) {
if (other == null) {
return;
}
if (other.pool_ != null) {
if (pool_ == null) {
Pool = new global::OpenMatch.Pool();
}
Pool.MergeFrom(other.Pool);
}
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
if (pool_ == null) {
Pool = new global::OpenMatch.Pool();
}
input.ReadMessage(Pool);
break;
}
}
}
}
}
public sealed partial class QueryTicketsResponse : pb::IMessage<QueryTicketsResponse> {
private static readonly pb::MessageParser<QueryTicketsResponse> _parser = new pb::MessageParser<QueryTicketsResponse>(() => new QueryTicketsResponse());
private pb::UnknownFieldSet _unknownFields;
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pb::MessageParser<QueryTicketsResponse> Parser { get { return _parser; } }
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public static pbr::MessageDescriptor Descriptor {
get { return global::OpenMatch.MmlogicReflection.Descriptor.MessageTypes[1]; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
pbr::MessageDescriptor pb::IMessage.Descriptor {
get { return Descriptor; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public QueryTicketsResponse() {
OnConstruction();
}
partial void OnConstruction();
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public QueryTicketsResponse(QueryTicketsResponse other) : this() {
tickets_ = other.tickets_.Clone();
_unknownFields = pb::UnknownFieldSet.Clone(other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public QueryTicketsResponse Clone() {
return new QueryTicketsResponse(this);
}
/// <summary>Field number for the "tickets" field.</summary>
public const int TicketsFieldNumber = 1;
private static readonly pb::FieldCodec<global::OpenMatch.Ticket> _repeated_tickets_codec
= pb::FieldCodec.ForMessage(10, global::OpenMatch.Ticket.Parser);
private readonly pbc::RepeatedField<global::OpenMatch.Ticket> tickets_ = new pbc::RepeatedField<global::OpenMatch.Ticket>();
/// <summary>
/// Tickets is a list of Ticket representing one or more Tickets which meet all Filter criteria.
/// </summary>
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public pbc::RepeatedField<global::OpenMatch.Ticket> Tickets {
get { return tickets_; }
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override bool Equals(object other) {
return Equals(other as QueryTicketsResponse);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public bool Equals(QueryTicketsResponse other) {
if (ReferenceEquals(other, null)) {
return false;
}
if (ReferenceEquals(other, this)) {
return true;
}
if(!tickets_.Equals(other.tickets_)) return false;
return Equals(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override int GetHashCode() {
int hash = 1;
hash ^= tickets_.GetHashCode();
if (_unknownFields != null) {
hash ^= _unknownFields.GetHashCode();
}
return hash;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public override string ToString() {
return pb::JsonFormatter.ToDiagnosticString(this);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void WriteTo(pb::CodedOutputStream output) {
tickets_.WriteTo(output, _repeated_tickets_codec);
if (_unknownFields != null) {
_unknownFields.WriteTo(output);
}
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public int CalculateSize() {
int size = 0;
size += tickets_.CalculateSize(_repeated_tickets_codec);
if (_unknownFields != null) {
size += _unknownFields.CalculateSize();
}
return size;
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(QueryTicketsResponse other) {
if (other == null) {
return;
}
tickets_.Add(other.tickets_);
_unknownFields = pb::UnknownFieldSet.MergeFrom(_unknownFields, other._unknownFields);
}
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
uint tag;
while ((tag = input.ReadTag()) != 0) {
switch(tag) {
default:
_unknownFields = pb::UnknownFieldSet.MergeFieldFrom(_unknownFields, input);
break;
case 10: {
tickets_.AddEntriesFrom(input, _repeated_tickets_codec);
break;
}
}
}
}
}
#endregion
}
#endregion Designer generated code

View File

@ -1,16 +0,0 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard2.0</TargetFramework>
<PackageId>OpenMatch</PackageId>
<Version>0.0.0-dev</Version>
<Authors>Google LLC</Authors>
<Company>Google LLC</Company>
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Google.Api.CommonProtos" Version="1.7.0" />
</ItemGroup>
</Project>

File diff suppressed because it is too large.

View File

@ -9,14 +9,12 @@ To build Open Match you'll need the following applications installed.
* [Git](https://git-scm.com/downloads)
* [Go](https://golang.org/doc/install)
* [Python3 with virtualenv](https://wiki.python.org/moin/BeginnersGuide/Download)
* Make (Mac: install [XCode](https://itunes.apple.com/us/app/xcode/id497799835))
* [Docker](https://docs.docker.com/install/) including the
[post-install steps](https://docs.docker.com/install/linux/linux-postinstall/).
Optional Software
* [Google Cloud Platform](gcloud.md)
* [Visual Studio Code](https://code.visualstudio.com/Download) as an IDE.
Vim and Emacs work too.
* [VirtualBox](https://www.virtualbox.org/wiki/Downloads) recommended for
@ -27,8 +25,7 @@ running:
```bash
sudo apt-get update
sudo apt-get install -y -q python3 python3-virtualenv virtualenv make \
google-cloud-sdk git unzip tar
sudo apt-get install -y -q make google-cloud-sdk git unzip tar
```
*It's recommended that you install Go using their instructions because package
@ -51,13 +48,11 @@ make
[create a fork](https://help.github.com/en/articles/fork-a-repo) and use that
but for the purposes of this guide we'll be using upstream/master.*
## Building
## Building code and images
```bash
# Reset workspace
make clean
# Compile all the binaries
make all -j$(nproc)
# Run tests
make test
# Build all the images.
@ -66,6 +61,8 @@ make build-images -j$(nproc)
make push-images -j$(nproc)
# Push images to Docker Hub
make REGISTRY=mydockerusername push-images -j$(nproc)
# Generate Kubernetes installation YAML files (Note that the trailing '/' is needed here)
make install/yaml/
```
_**-j$(nproc)** is a flag to tell make to parallelize the commands based on
@ -85,11 +82,9 @@ default context the Makefile will honor that._
# GKE cluster: make create-gke-cluster/delete-gke-cluster
# or create a local Minikube cluster
make create-gke-cluster
# Step 2: Download helm and install Tiller in the cluster
make push-helm
# Step 3: Build and Push Open Match Images to gcr.io
# Step 2: Build and Push Open Match Images to gcr.io
make push-images -j$(nproc)
# Install Open Match in the cluster.
# Step 3: Install Open Match in the cluster.
make install-chart
# Create a proxy to Open Match pods so that you can access them locally.
@ -103,19 +98,36 @@ make proxy
make delete-chart
```
## Interaction
## Iterating
While iterating on the project, you may need to run through the following steps (a consolidated command sequence is shown after the list):
1. Install/Run everything
2. Make some code changes
3. Make sure the changes compile by running `make test`
4. Build and push Docker images to your personal registry by running `make push-images -j$(nproc)`
5. Deploy the code change by running `make install-chart`
6. Verify it's working by [looking at the logs](#accessing-logs) or looking at the monitoring dashboard by running `make proxy-grafana`
7. Tear down Open Match by running `make delete-chart`
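The same loop as a single command sequence; every target below is taken from the steps above:
```bash
# One iteration, end to end.
make test
make push-images -j$(nproc)
make install-chart
make proxy-grafana   # inspect the dashboards and logs
make delete-chart    # tear down when finished
```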
Before integrating with Open Match you can manually interact with it to get a feel for how it works.
## Accessing logs
To look at Open Match core services' logs, run:
```bash
# Replace open-match-frontend with the service name that you would like to access
kubectl logs -n open-match svc/open-match-frontend
```
`make proxy-ui` exposes the Swagger UI for Open Match locally on your computer.
You can then go to http://localhost:51500 and view the API as well as interactively call Open Match.
## API References
While integrating with Open Match you may want to understand its API surface or interact with it to get a feel for how it works.
The APIs are defined in `proto` format under the `api/` folder, with references available at [open-match.dev](https://open-match.dev/site/docs/reference/api/).
You can also run `make proxy-ui` to expose the Swagger UI for Open Match locally on your computer after [deploying it to Kubernetes](#deploying-to-kubernetes), then go to http://localhost:51500 to view the REST APIs and interactively call Open Match.
By default you will be talking to the frontend server, but you can change the target API URL to any of the following:
* api/frontend.swagger.json
* api/backend.swagger.json
* api/synchronizer.swagger.json
* api/mmlogic.swagger.json
* api/query.swagger.json
For the most current list, refer to the api/ directory of this repository. Note that matchfunction.swagger.json is not supported. A hedged example request against the frontend gateway is sketched below.
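As a minimal sketch, assuming the frontend HTTP endpoint is reachable on localhost:51504 after `make proxy` (the port and the exact route are assumptions based on the default configuration and api/frontend.swagger.json; verify both in your deployment), creating a ticket over the REST gateway might look like:
```bash
# Hypothetical request: create a ticket via the frontend REST gateway.
# Check api/frontend.swagger.json for the authoritative path and payload shape.
curl -X POST http://localhost:51504/v1/frontendservice/tickets \
  -H "Content-Type: application/json" \
  -d '{"ticket": {"searchFields": {"tags": ["beginner"]}}}'
```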
@ -142,55 +154,9 @@ export GOPATH=$HOME/workspace/
## Pull Requests
If you want to submit a Pull Request, there are some tools to help prepare your change.
```bash
# Runs code generators, tests, and linters.
make presubmit
```
`make presubmit` catches most of the issues your change can run into. If the
submit checks fail you can run it locally via:
```bash
make local-cloud-build
```
If you want to submit a Pull Request, `make presubmit` can catch most of the issues your change can run into.
Our [continuous integration](https://console.cloud.google.com/cloud-build/builds?project=open-match-build)
runs against all PRs. In order to see your build results you'll need to
become a member of
[open-match-discuss@googlegroups.com](https://groups.google.com/forum/#!forum/open-match-discuss).
## Makefile
The Makefile is the core of Open Match's build process. There are a lot of
commands, but here's a list of the important ones and patterns to help you remember them.
```bash
# Help
make
# Reset workspace (delete all build artifacts)
make clean
# Delete auto-generated protobuf code and swagger API docs.
make clean-protos clean-swagger-docs
# make clean-* deletes some part of the build outputs.
# Build all Docker images
make build-images
# Build frontend docker image.
make build-frontend-image
# Formats, Vets, and tests the codebase.
make fmt vet test
# Same as above, but also regenerates autogen files.
make presubmit
# Run website on http://localhost:8080
make run-site
# Proxy all Open Match processes to view them.
make proxy
```

View File

@ -1,26 +0,0 @@
# Create a GKE Cluster
Below are the steps to create a GKE cluster in Google Cloud Platform.
* Create a GCP project via [Google Cloud Console](https://console.cloud.google.com/).
* Billing must be enabled. If you're a new customer you can get some [free credits](https://cloud.google.com/free/).
* When you create a project you'll need to set a Project ID; if you forget it you can find it at https://console.cloud.google.com/iam-admin/settings/project.
* Install the [Google Cloud SDK](https://cloud.google.com/sdk/), the command-line tool for working with your project.
Here are the next steps using the gcloud tool.
```bash
# Login to your Google Account for GCP
gcloud auth login
gcloud config set project $YOUR_GCP_PROJECT_ID
# Enable necessary GCP services
gcloud services enable containerregistry.googleapis.com
gcloud services enable container.googleapis.com
# Test that everything is set up correctly; this command should work.
gcloud compute zones list
# Create a GKE Cluster in this project
gcloud container clusters create --machine-type n1-standard-2 open-match-dev-cluster --zone us-west1-a --tags open-match
```

View File

@ -2,7 +2,7 @@
This is the {version} release of Open Match.
Check the [README](https://github.com/googleforgames/open-match/tree/release-{version}) for details on features, installation and usage.
Check the [official website](https://open-match.dev) for details on features, installation and usage.
Release Notes
-------------
@ -13,13 +13,18 @@ Release Notes
**Breaking Changes**
{ detail any behaviors or API surfaces which worked in a previous version which will no longer work correctly }
> Future releases towards 1.0.0 may still have breaking changes.
**Security Fixes**
{ list any changes which fix vulnerabilities in open match }
**Enhancements**
{ go into details on improvements and changes }
See [CHANGELOG](https://github.com/googleforgames/open-match/blob/release-{version}/CHANGELOG.md) for more details on changes.
Usage Requirements
-------------
* Tested against Kubernetes Version { a list of k8s versions}
* Golang Version = v{ required golang version }
Images
------
@ -28,7 +33,7 @@ Images
# Servers
docker pull gcr.io/open-match-public-images/openmatch-backend:{version}
docker pull gcr.io/open-match-public-images/openmatch-frontend:{version}
docker pull gcr.io/open-match-public-images/openmatch-mmlogic:{version}
docker pull gcr.io/open-match-public-images/openmatch-query:{version}
docker pull gcr.io/open-match-public-images/openmatch-synchronizer:{version}
# Evaluators
@ -47,15 +52,10 @@ _This software is currently alpha, and subject to change. Not to be used in prod
Installation
------------
To deploy Open Match in your Kubernetes cluster run the following commands:
* Follow the [Open Match Installation Guide](https://open-match.dev/site/docs/installation/) to set up Open Match in your cluster.
```bash
# Grant yourself cluster-admin permissions so that you can deploy service accounts.
kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$(YOUR_KUBERNETES_USER_NAME)
# Place all Open Match components in their own namespace.
kubectl create namespace open-match
# Install Open Match services.
kubectl apply -f https://github.com/googleforgames/open-match/releases/download/v{version}/01-open-match-core.yaml --namespace open-match
# Install the demo.
kubectl apply -f https://github.com/googleforgames/open-match/releases/download/v{version}/02-open-match-demo.yaml --namespace open-match
```
API Definitions
------------
- gRPC API Definitions are available in [API references](https://open-match.dev/site/docs/reference/api/) - _Preferred_
- HTTP API Definitions are available in [SwaggerUI](https://open-match.dev/site/swaggerui/index.html)

View File

@ -12,24 +12,13 @@ SOURCE_VERSION=$1
DEST_VERSION=$2
SOURCE_PROJECT_ID=open-match-build
DEST_PROJECT_ID=open-match-public-images
IMAGE_NAMES="openmatch-backend openmatch-frontend openmatch-mmlogic openmatch-synchronizer openmatch-minimatch openmatch-demo-first-match openmatch-mmf-go-soloduel openmatch-mmf-go-pool openmatch-evaluator-go-simple openmatch-swaggerui openmatch-reaper"
IMAGE_NAMES=$(make list-images)
for name in $IMAGE_NAMES
do
source_image=gcr.io/$SOURCE_PROJECT_ID/$name:$SOURCE_VERSION
dest_image=gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION
source_image=gcr.io/$SOURCE_PROJECT_ID/openmatch-$name:$SOURCE_VERSION
dest_image=gcr.io/$DEST_PROJECT_ID/openmatch-$name:$DEST_VERSION
docker pull $source_image
docker tag $source_image $dest_image
docker push $dest_image
done
echo "=============================================================="
echo "=============================================================="
echo "=============================================================="
echo "=============================================================="
echo "Add these lines to your release notes:"
for name in $IMAGE_NAMES
do
echo "docker pull gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION"
done

docs/hugo_apiheader.txt (new file, 7 lines)
View File

@ -0,0 +1,7 @@
---
title: "Open Match API References"
linkTitle: "Open Match API References"
weight: 2
description:
This document provides API references for Open Match services.
---

View File

@ -81,12 +81,12 @@ func runScenario(ctx context.Context, name string, update updater.SetFunc) {
update(s)
// See https://open-match.dev/site/docs/guides/api/
conn, err := grpc.Dial("om-frontend.open-match.svc.cluster.local:50504", grpc.WithInsecure())
conn, err := grpc.Dial("open-match-frontend.open-match.svc.cluster.local:50504", grpc.WithInsecure())
if err != nil {
panic(err)
}
defer conn.Close()
fe := pb.NewFrontendClient(conn)
fe := pb.NewFrontendServiceClient(conn)
//////////////////////////////////////////////////////////////////////////////
s.Status = "Creating Open Match Ticket"
@ -102,7 +102,7 @@ func runScenario(ctx context.Context, name string, update updater.SetFunc) {
if err != nil {
panic(err)
}
ticketId = resp.Ticket.Id
ticketId = resp.Id
}
//////////////////////////////////////////////////////////////////////////////
@ -111,11 +111,11 @@ func runScenario(ctx context.Context, name string, update updater.SetFunc) {
var assignment *pb.Assignment
{
req := &pb.GetAssignmentsRequest{
req := &pb.WatchAssignmentsRequest{
TicketId: ticketId,
}
stream, err := fe.GetAssignments(ctx, req)
stream, err := fe.WatchAssignments(ctx, req)
for assignment.GetConnection() == "" {
resp, err := stream.Recv()
if err != nil {

View File

@ -17,11 +17,12 @@ package director
import (
"context"
"fmt"
"google.golang.org/grpc"
"io"
"math/rand"
"time"
"google.golang.org/grpc"
"open-match.dev/open-match/examples/demo/components"
"open-match.dev/open-match/pkg/pb"
)
@ -67,12 +68,12 @@ func run(ds *components.DemoShared) {
ds.Update(s)
// See https://open-match.dev/site/docs/guides/api/
conn, err := grpc.Dial("om-backend.open-match.svc.cluster.local:50505", grpc.WithInsecure())
conn, err := grpc.Dial("open-match-backend.open-match.svc.cluster.local:50505", grpc.WithInsecure())
if err != nil {
panic(err)
}
defer conn.Close()
be := pb.NewBackendClient(conn)
be := pb.NewBackendServiceClient(conn)
//////////////////////////////////////////////////////////////////////////////
s.Status = "Match Match: Sending Request"
@ -82,17 +83,15 @@ func run(ds *components.DemoShared) {
{
req := &pb.FetchMatchesRequest{
Config: &pb.FunctionConfig{
Host: "om-function.open-match.svc.cluster.local",
Host: "om-function.open-match-demo.svc.cluster.local",
Port: 50502,
Type: pb.FunctionConfig_GRPC,
},
Profiles: []*pb.MatchProfile{
{
Name: "1v1",
Pools: []*pb.Pool{
{
Name: "Everyone",
},
Profile: &pb.MatchProfile{
Name: "1v1",
Pools: []*pb.Pool{
{
Name: "Everyone",
},
},
},
@ -132,9 +131,13 @@ func run(ds *components.DemoShared) {
}
req := &pb.AssignTicketsRequest{
TicketIds: ids,
Assignment: &pb.Assignment{
Connection: fmt.Sprintf("%d.%d.%d.%d:2222", rand.Intn(256), rand.Intn(256), rand.Intn(256), rand.Intn(256)),
Assignments: []*pb.AssignmentGroup{
{
TicketIds: ids,
Assignment: &pb.Assignment{
Connection: fmt.Sprintf("%d.%d.%d.%d:2222", rand.Intn(256), rand.Intn(256), rand.Intn(256), rand.Intn(256)),
},
},
},
}

View File

@ -1,24 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/examples/evaluator/golang/simple
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o simple .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/examples/evaluator/golang/simple/simple /app/
ENTRYPOINT ["/app/simple"]

View File

@ -1,113 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package evaluate
import (
"math"
"sort"
"github.com/golang/protobuf/ptypes"
"github.com/sirupsen/logrus"
harness "open-match.dev/open-match/pkg/harness/evaluator/golang"
"open-match.dev/open-match/pkg/pb"
)
type matchInp struct {
match *pb.Match
inp *pb.DefaultEvaluationCriteria
}
// Evaluate is where your custom evaluation logic lives.
// This sample evaluator sorts and deduplicates the input matches.
func Evaluate(p *harness.EvaluatorParams) ([]*pb.Match, error) {
matches := make([]*matchInp, 0, len(p.Matches))
nilEvlautionInputs := 0
for _, m := range p.Matches {
// Evaluation criteria is optional, but sort it lower than any matches which
// provided criteria.
inp := &pb.DefaultEvaluationCriteria{
Score: math.Inf(-1),
}
if a, ok := m.Extensions["evaluation_input"]; ok {
err := ptypes.UnmarshalAny(a, inp)
if err != nil {
p.Logger.WithFields(logrus.Fields{
"match_id": m.MatchId,
"error": err,
}).Error("Failed to unmarshal match's DefaultEvaluationCriteria. Rejecting match.")
continue
}
} else {
nilEvlautionInputs++
}
matches = append(matches, &matchInp{
match: m,
inp: inp,
})
}
if nilEvlautionInputs > 0 {
p.Logger.WithFields(logrus.Fields{
"count": nilEvlautionInputs,
}).Info("Some matches don't have the optional field evaluation_input set.")
}
sort.Sort(byScore(matches))
d := decollider{
ticketsUsed: map[string]struct{}{},
}
for _, m := range matches {
d.maybeAdd(m)
}
return d.results, nil
}
type decollider struct {
results []*pb.Match
ticketsUsed map[string]struct{}
}
func (d *decollider) maybeAdd(m *matchInp) {
for _, t := range m.match.GetTickets() {
if _, ok := d.ticketsUsed[t.Id]; ok {
return
}
}
for _, t := range m.match.GetTickets() {
d.ticketsUsed[t.Id] = struct{}{}
}
d.results = append(d.results, m.match)
}
type byScore []*matchInp
func (m byScore) Len() int {
return len(m)
}
func (m byScore) Swap(i, j int) {
m[i], m[j] = m[j], m[i]
}
func (m byScore) Less(i, j int) bool {
return m[i].inp.Score > m[j].inp.Score
}

View File

@ -1,24 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
simple "open-match.dev/open-match/examples/evaluator/golang/simple/evaluate"
harness "open-match.dev/open-match/pkg/harness/evaluator/golang"
)
func main() {
// Invoke the harness to setup a GRPC service that handles requests to run the evaluator.
harness.RunEvaluator(simple.Evaluate)
}

View File

@ -1,24 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/examples/functions/golang/pool
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o matchfunction .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/examples/functions/golang/pool/matchfunction /app/
ENTRYPOINT ["/app/matchfunction"]

View File

@ -1,84 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package mmf provides a sample match function that uses the GRPC harness to set up
// the match making function as a service. This sample is a reference
// to demonstrate the usage of the GRPC harness and should only be used as
// a starting point for your match function. You will need to modify the
// matchmaking logic in this function based on your game's requirements.
package mmf
import (
"github.com/golang/protobuf/ptypes"
"github.com/golang/protobuf/ptypes/any"
"github.com/pkg/errors"
"github.com/rs/xid"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
"open-match.dev/open-match/pkg/pb"
)
var (
matchName = "pool-based-match"
)
// MakeMatches is where your custom matchmaking logic lives.
// This is the core match making function that will be triggered by Open Match to generate matches.
// The goal of this function is to generate predictable matches that can be validated without flakiness.
// This match function loops through all the pools and generates one match per pool aggregating all players
// in that pool in the generated match.
func MakeMatches(params *mmfHarness.MatchFunctionParams) ([]*pb.Match, error) {
var result []*pb.Match
for pool, tickets := range params.PoolNameToTickets {
if len(tickets) != 0 {
roster := &pb.Roster{Name: pool}
for _, ticket := range tickets {
roster.TicketIds = append(roster.GetTicketIds(), ticket.GetId())
}
evaluationInput, err := ptypes.MarshalAny(&pb.DefaultEvaluationCriteria{
Score: scoreCalculator(tickets),
})
if err != nil {
return nil, errors.Wrap(err, "Failed to marshal DefaultEvaluationCriteria.")
}
result = append(result, &pb.Match{
MatchId: xid.New().String(),
MatchProfile: params.ProfileName,
MatchFunction: matchName,
Tickets: tickets,
Rosters: []*pb.Roster{roster},
Extensions: map[string]*any.Any{
"evaluation_input": evaluationInput,
},
})
}
}
return result, nil
}
// This match function defines the quality of a match as the sum of the Double
// Args values of all tickets per match. This is for testing purposes, and not
// an example of a good score calculation.
func scoreCalculator(tickets []*pb.Ticket) float64 {
matchScore := 0.0
for _, ticket := range tickets {
for _, v := range ticket.GetSearchFields().GetDoubleArgs() {
matchScore += v
}
}
return matchScore
}

View File

@ -1,124 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mmf
import (
"testing"
"open-match.dev/open-match/pkg/pb"
"github.com/golang/protobuf/ptypes"
"github.com/golang/protobuf/ptypes/any"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
)
func TestMakeMatches(t *testing.T) {
assert := assert.New(t)
tickets := []*pb.Ticket{
{
Id: "1",
SearchFields: &pb.SearchFields{
DoubleArgs: map[string]float64{
"level": 10,
"defense": 100,
},
},
},
{
Id: "2",
SearchFields: &pb.SearchFields{
DoubleArgs: map[string]float64{
"level": 10,
"defense": 50,
},
},
},
{
Id: "3",
SearchFields: &pb.SearchFields{
DoubleArgs: map[string]float64{
"level": 10,
"defense": 522,
},
},
}, {
Id: "4",
SearchFields: &pb.SearchFields{
DoubleArgs: map[string]float64{
"level": 10,
"mana": 1,
},
},
},
}
poolNameToTickets := map[string][]*pb.Ticket{
"pool1": tickets[:2],
"pool2": tickets[2:],
}
p := &mmfHarness.MatchFunctionParams{
Logger: &logrus.Entry{},
ProfileName: "test-profile",
Rosters: []*pb.Roster{},
PoolNameToTickets: poolNameToTickets,
}
matches, err := MakeMatches(p)
assert.Nil(err)
assert.Equal(len(matches), 2)
actual := []*pb.Match{}
for _, match := range matches {
actual = append(actual, &pb.Match{
MatchProfile: match.MatchProfile,
MatchFunction: match.MatchFunction,
Tickets: match.Tickets,
Rosters: match.Rosters,
Extensions: match.Extensions,
})
}
matchGen := func(poolName string, tickets []*pb.Ticket) *pb.Match {
tids := []string{}
for _, ticket := range tickets {
tids = append(tids, ticket.GetId())
}
evaluationInput, err := ptypes.MarshalAny(&pb.DefaultEvaluationCriteria{
Score: scoreCalculator(tickets),
})
if err != nil {
t.Fatal(err)
}
return &pb.Match{
MatchProfile: p.ProfileName,
MatchFunction: matchName,
Tickets: tickets,
Rosters: []*pb.Roster{{Name: poolName, TicketIds: tids}},
Extensions: map[string]*any.Any{
"evaluation_input": evaluationInput,
},
}
}
for poolName, tickets := range poolNameToTickets {
assert.Contains(actual, matchGen(poolName, tickets))
}
}

View File

@ -1,151 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mmf
import (
"fmt"
"time"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"open-match.dev/open-match/pkg/matchfunction"
"open-match.dev/open-match/pkg/pb"
)
var (
matchName = "roster-based-matchfunction"
emptyRosterSpot = "EMPTY_ROSTER_SPOT"
logger = logrus.WithFields(logrus.Fields{
"app": "matchfunction",
"component": "mmf.rosterbased",
})
)
// Run is this match function's implementation of the gRPC call defined in api/matchfunction.proto.
func (s *MatchFunctionService) Run(req *pb.RunRequest, stream pb.MatchFunction_RunServer) error {
// Fetch tickets for the pools specified in the Match Profile.
poolTickets, err := matchfunction.QueryPools(stream.Context(), s.mmlogicClient, req.GetProfile().GetPools())
if err != nil {
return err
}
// Generate proposals.
proposals, err := makeMatches(req.GetProfile(), poolTickets)
if err != nil {
return err
}
logger.WithFields(logrus.Fields{
"proposals": proposals,
}).Trace("proposals returned by match function")
// Stream the generated proposals back to Open Match.
for _, proposal := range proposals {
if err := stream.Send(&pb.RunResponse{Proposal: proposal}); err != nil {
return err
}
}
return nil
}
func makeMatches(p *pb.MatchProfile, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
// This roster based match function expects the match profile to have a
// populated roster specifying the empty slots for each pool name and also
// have the ticket pools referenced in the roster. It generates matches by
// populating players from the specified pools into rosters.
wantTickets, err := wantPoolTickets(p.Rosters)
if err != nil {
return nil, err
}
var matches []*pb.Match
count := 0
for {
insufficientTickets := false
matchTickets := []*pb.Ticket{}
matchRosters := []*pb.Roster{}
// Loop through each pool wanted in the rosters and pick the number of
// wanted players from the respective Pool.
for poolName, want := range wantTickets {
have, ok := poolTickets[poolName]
if !ok {
// A wanted Pool was not found in the Pools specified in the profile.
insufficientTickets = true
break
}
if len(have) < want {
// The Pool in the profile has fewer tickets than what the roster needs.
insufficientTickets = true
break
}
// Populate the wanted tickets from the Tickets in the corresponding Pool.
matchTickets = append(matchTickets, have[0:want]...)
poolTickets[poolName] = have[want:]
var ids []string
for _, ticket := range matchTickets {
ids = append(ids, ticket.Id)
}
matchRosters = append(matchRosters, &pb.Roster{
Name: poolName,
TicketIds: ids,
})
}
if insufficientTickets {
// Ran out of Tickets. Matches cannot be created from the remaining Tickets.
break
}
matches = append(matches, &pb.Match{
MatchId: fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), count),
MatchProfile: p.GetName(),
MatchFunction: matchName,
Tickets: matchTickets,
Rosters: matchRosters,
})
count++
}
return matches, nil
}
// wantPoolTickets parses the roster to return a map of the Pool name to the
// number of empty roster slots for that Pool.
func wantPoolTickets(rosters []*pb.Roster) (map[string]int, error) {
wantTickets := make(map[string]int)
for _, r := range rosters {
if _, ok := wantTickets[r.Name]; ok {
// We do not expect multiple Roster Pools to have the same name.
logger.Errorf("multiple rosters with same name not supported")
return nil, status.Error(codes.InvalidArgument, "multiple rosters with same name not supported")
}
wantTickets[r.Name] = 0
for _, slot := range r.TicketIds {
if slot == emptyRosterSpot {
wantTickets[r.Name] = wantTickets[r.Name] + 1
}
}
}
return wantTickets, nil
}

View File

@ -20,16 +20,14 @@
package main
import (
soloduel "open-match.dev/open-match/examples/functions/golang/soloduel/mmf"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
"open-match.dev/open-match/examples/functions/golang/soloduel/mmf"
)
const (
queryServiceAddr = "open-match-query.open-match.svc.cluster.local:50503" // Address of the QueryService endpoint.
serverPort = 50502 // The port for hosting the Match Function.
)
func main() {
// Invoke the harness to setup a GRPC service that handles requests to run the
// match function. The harness itself queries open match for player pools for
// the specified request and passes the pools to the match function to generate
// proposals.
mmfHarness.RunMatchFunction(&mmfHarness.FunctionSettings{
Func: soloduel.MakeMatches,
})
mmf.Start(queryServiceAddr, serverPort)
}

View File

@ -20,9 +20,11 @@ package mmf
import (
"fmt"
"log"
"time"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
"google.golang.org/grpc"
"open-match.dev/open-match/pkg/matchfunction"
"open-match.dev/open-match/pkg/pb"
)
@ -30,15 +32,17 @@ var (
matchName = "a-simple-1v1-matchfunction"
)
// MakeMatches is where your custom matchmaking logic lives.
func MakeMatches(p *mmfHarness.MatchFunctionParams) ([]*pb.Match, error) {
// This simple match function does the following things
// 1. Deduplicates the tickets from the pools into a single list.
// 2. Groups players into 1v1 matches.
// matchFunctionService implements pb.MatchFunctionServer, the server generated
// by compiling the protobuf, by fulfilling the pb.MatchFunctionServer interface.
type matchFunctionService struct {
grpc *grpc.Server
queryServiceClient pb.QueryServiceClient
port int
}
func makeMatches(poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
tickets := map[string]*pb.Ticket{}
for _, pool := range p.PoolNameToTickets {
for _, pool := range poolTickets {
for _, ticket := range pool {
tickets[ticket.GetId()] = ticket
}
@ -56,8 +60,8 @@ func MakeMatches(p *mmfHarness.MatchFunctionParams) ([]*pb.Match, error) {
if len(thisMatch) >= 2 {
matches = append(matches, &pb.Match{
MatchId: fmt.Sprintf("profile-%s-time-%s-num-%d", p.ProfileName, t, matchNum),
MatchProfile: p.ProfileName,
MatchId: fmt.Sprintf("profile-%s-time-%s-num-%d", matchName, t, matchNum),
MatchProfile: matchName,
MatchFunction: matchName,
Tickets: thisMatch,
})
@ -69,3 +73,33 @@ func MakeMatches(p *mmfHarness.MatchFunctionParams) ([]*pb.Match, error) {
return matches, nil
}
// Run is this match function's implementation of the gRPC call defined in api/matchfunction.proto.
func (s *matchFunctionService) Run(req *pb.RunRequest, stream pb.MatchFunction_RunServer) error {
// Fetch tickets for the pools specified in the Match Profile.
log.Printf("Generating proposals for function %v", req.GetProfile().GetName())
poolTickets, err := matchfunction.QueryPools(stream.Context(), s.queryServiceClient, req.GetProfile().GetPools())
if err != nil {
log.Printf("Failed to query tickets for the given pools, got %s", err.Error())
return err
}
// Generate proposals.
proposals, err := makeMatches(poolTickets)
if err != nil {
log.Printf("Failed to generate matches, got %s", err.Error())
return err
}
log.Printf("Streaming %v proposals to Open Match", len(proposals))
// Stream the generated proposals back to Open Match.
for _, proposal := range proposals {
if err := stream.Send(&pb.RunResponse{Proposal: proposal}); err != nil {
log.Printf("Failed to stream proposals to Open Match, got %s", err.Error())
return err
}
}
return nil
}

View File

@ -19,33 +19,24 @@ import (
"open-match.dev/open-match/pkg/pb"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
"github.com/stretchr/testify/require"
)
func TestMakeMatchesDeduplicate(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
poolNameToTickets := map[string][]*pb.Ticket{
"pool1": {{Id: "1"}},
"pool2": {{Id: "1"}},
}
p := &mmfHarness.MatchFunctionParams{
Logger: &logrus.Entry{},
ProfileName: "test-profile",
Rosters: []*pb.Roster{},
PoolNameToTickets: poolNameToTickets,
}
matches, err := MakeMatches(p)
assert.Nil(err)
assert.Equal(len(matches), 0)
matches, err := makeMatches(poolNameToTickets)
require.Nil(err)
require.Equal(len(matches), 0)
}
func TestMakeMatches(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
poolNameToTickets := map[string][]*pb.Ticket{
"pool1": {{Id: "1"}, {Id: "2"}, {Id: "3"}},
@ -53,21 +44,12 @@ func TestMakeMatches(t *testing.T) {
"pool3": {{Id: "5"}, {Id: "6"}, {Id: "7"}},
}
p := &mmfHarness.MatchFunctionParams{
Logger: &logrus.Entry{},
ProfileName: "test-profile",
Rosters: []*pb.Roster{},
PoolNameToTickets: poolNameToTickets,
}
matches, err := MakeMatches(p)
assert.Nil(err)
assert.Equal(len(matches), 3)
matches, err := makeMatches(poolNameToTickets)
require.Nil(err)
require.Equal(len(matches), 3)
for _, match := range matches {
assert.Equal(2, len(match.Tickets))
assert.Equal(matchName, match.MatchFunction)
assert.Equal(p.ProfileName, match.MatchProfile)
assert.Nil(match.Rosters)
require.Equal(2, len(match.Tickets))
require.Equal(matchName, match.MatchFunction)
}
}

View File

@ -0,0 +1,58 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package mmf provides a sample match function that uses the GRPC harness to set up 1v1 matches.
// This sample is a reference to demonstrate the usage of the GRPC harness and should only be used as
// a starting point for your match function. You will need to modify the
// matchmaking logic in this function based on your game's requirements.
package mmf
import (
"fmt"
"log"
"net"
"google.golang.org/grpc"
"open-match.dev/open-match/pkg/pb"
)
// Start creates and starts the Match Function server and also connects to Open
// Match's queryService service. This connection is used at runtime to fetch tickets
// for pools specified in MatchProfile.
func Start(queryServiceAddr string, serverPort int) {
// Connect to QueryService.
conn, err := grpc.Dial(queryServiceAddr, grpc.WithInsecure())
if err != nil {
log.Fatalf("Failed to connect to Open Match, got %s", err.Error())
}
defer conn.Close()
mmfService := matchFunctionService{
queryServiceClient: pb.NewQueryServiceClient(conn),
}
// Create and host a new gRPC service on the configured port.
server := grpc.NewServer()
pb.RegisterMatchFunctionServer(server, &mmfService)
ln, err := net.Listen("tcp", fmt.Sprintf(":%d", serverPort))
if err != nil {
log.Fatalf("TCP net listener initialization failed for port %v, got %s", serverPort, err.Error())
}
log.Printf("TCP net listener initialized for port %v", serverPort)
err = server.Serve(ln)
if err != nil {
log.Fatalf("gRPC serve failed, got %s", err.Error())
}
}

View File

@ -1,16 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: om-evaluator
namespace: open-match
spec:
containers:
- name: om-evaluator
image: "gcr.io/open-match-build/openmatch-evaluator-go-simple"
imagePullPolicy: Always
ports:
- name: grpc
containerPort: 50508
- name: http
containerPort: 51508
hostname: om-evaluator

View File

@ -1,16 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: om-function
namespace: open-match
spec:
containers:
- name: om-function
image: "gcr.io/open-match-build/openmatch-mmf-go-soloduel"
imagePullPolicy: Always
ports:
- name: grpc
containerPort: 50502
- name: http
containerPort: 51502
hostname: om-function

examples/scale/README.md (new file, 20 lines)
View File

@ -0,0 +1,20 @@
## How to use this framework
This is the framework that we use to benchmark Open Match against different matchmaking scenarios. For now (02/24/2020), this framework supports a Battle Royale, a Basic 1v1 matchmaking, and a Team Shooter scenario. You are welcome to write up your own `Scenario`, test it, and share the numbers you get with us.
1. The `Scenario` struct under the `scenarios/scenarios.go` file defines the parameters that this framework currently supports or plans to support.
2. Each subpackage `battleroyal`, `firstmatch`, and `teamshooter` implements the `GameScenario` interface defined in the `scenarios/scenarios.go` file. Feel free to write your own benchmark scenario by implementing the interface (a rough sketch follows this list).
- Ticket `func() *pb.Ticket` - Tickets generator
- Profiles `func() []*pb.MatchProfile` - Profiles generator
- MMF `MatchFunction(p *pb.MatchProfile, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error)` - Custom matchmaking logic using a MatchProfile and a map from pool name to the tickets in that pool.
- Evaluate `Evaluate(stream pb.Evaluator_EvaluateServer) error` - Custom logic implementation of the evaluator.
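As a rough sketch only, a custom scenario covering the four pieces above might look like the following. The method names and signatures mirror the bullets; check `scenarios/scenarios.go` for the authoritative interface definition, and note that the `EvaluateResponse` shape below assumes the 1.x proto where the evaluator returns accepted match IDs.
```go
package myscenario

import (
	"fmt"
	"io"

	"open-match.dev/open-match/pkg/pb"
)

// simpleScenario is an illustrative scenario implementation.
type simpleScenario struct{}

// Ticket generates one synthetic ticket per call.
func (s *simpleScenario) Ticket() *pb.Ticket {
	return &pb.Ticket{
		SearchFields: &pb.SearchFields{Tags: []string{"everyone"}},
	}
}

// Profiles returns the match profiles to fetch matches for.
func (s *simpleScenario) Profiles() []*pb.MatchProfile {
	return []*pb.MatchProfile{
		{Name: "everyone", Pools: []*pb.Pool{{Name: "all"}}},
	}
}

// MatchFunction pairs up tickets from the "all" pool, two per match.
func (s *simpleScenario) MatchFunction(p *pb.MatchProfile, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
	var matches []*pb.Match
	all := poolTickets["all"]
	for i := 0; i+1 < len(all); i += 2 {
		matches = append(matches, &pb.Match{
			MatchId:       fmt.Sprintf("%s-%d", p.GetName(), i/2),
			MatchProfile:  p.GetName(),
			MatchFunction: "sketch-1v1",
			Tickets:       all[i : i+2],
		})
	}
	return matches, nil
}

// Evaluate accepts every proposal by echoing back its match ID.
// (Assumes the 1.x EvaluateResponse, which carries a match_id field.)
func (s *simpleScenario) Evaluate(stream pb.Evaluator_EvaluateServer) error {
	for {
		req, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		if err := stream.Send(&pb.EvaluateResponse{MatchId: req.GetMatch().GetMatchId()}); err != nil {
			return err
		}
	}
}
```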
Follow the instructions below if you want to use any of the existing benchmarking scenarios; a consolidated command sketch follows the list.
1. Open the `scenarios.go` file under the scenarios directory.
2. Change the value of the `ActiveScenario` variable to the scenario that you would like Open Match to run against.
3. Make sure you have `kubectl` connected to an existing Kubernetes cluster and run `make push-images` followed by `make install-scale-chart` to push the images and install Open Match core along with the scale components in the cluster.
4. Run `make proxy`
- Open `localhost:3000` to see the Grafana dashboards.
- Open `localhost:9090` to see the Prometheus query server.
- Open `localhost:[COMPONENT_HTTP_ENDPOINT]/help` to see how to access the zpages.
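For reference, a minimal command sequence for steps 3 and 4 above might look like this (it assumes `kubectl` already points at your target cluster):
```bash
# Push images and install Open Match core plus the scale components.
make push-images -j$(nproc)
make install-scale-chart

# Proxy the dashboards locally.
make proxy
# Grafana:    http://localhost:3000
# Prometheus: http://localhost:9090
```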

View File

@ -20,14 +20,15 @@ import (
"io"
"math/rand"
"sync"
"sync/atomic"
"time"
"github.com/sirupsen/logrus"
"open-match.dev/open-match/examples/scale/profiles"
"go.opencensus.io/trace"
"open-match.dev/open-match/examples/scale/scenarios"
"open-match.dev/open-match/internal/appmain"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/logging"
"open-match.dev/open-match/internal/rpc"
"open-match.dev/open-match/internal/telemetry"
"open-match.dev/open-match/pkg/pb"
)
@ -37,30 +38,35 @@ var (
"component": "scale.backend",
})
// TODO: Add metrics to track matches created, tickets assigned, deleted.
matchCount uint64
assigned uint64
deleted uint64
activeScenario = scenarios.ActiveScenario
mIterations = telemetry.Counter("scale_backend_iterations", "fetch match iterations")
mFetchMatchCalls = telemetry.Counter("scale_backend_fetch_match_calls", "fetch match calls")
mFetchMatchSuccesses = telemetry.Counter("scale_backend_fetch_match_successes", "fetch match successes")
mFetchMatchErrors = telemetry.Counter("scale_backend_fetch_match_errors", "fetch match errors")
mMatchesReturned = telemetry.Counter("scale_backend_matches_returned", "matches returned")
mSumTicketsReturned = telemetry.Counter("scale_backend_sum_tickets_returned", "tickets in matches returned")
mMatchesAssigned = telemetry.Counter("scale_backend_matches_assigned", "matches assigned")
mMatchAssignsFailed = telemetry.Counter("scale_backend_match_assigns_failed", "match assigns failed")
mTicketsDeleted = telemetry.Counter("scale_backend_tickets_deleted", "tickets deleted")
mTicketDeletesFailed = telemetry.Counter("scale_backend_ticket_deletes_failed", "ticket deletes failed")
)
// Run triggers execution of functions that continuously fetch, assign and
// delete matches.
func Run() {
cfg, err := config.Read()
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatalf("cannot read configuration.")
}
func BindService(p *appmain.Params, b *appmain.Bindings) error {
go run(p.Config())
return nil
}
logging.ConfigureLogging(cfg)
func run(cfg config.View) {
beConn, err := rpc.GRPCClientFromConfig(cfg, "api.backend")
if err != nil {
logger.Fatalf("failed to connect to Open Match Backend, got %v", err)
}
defer beConn.Close()
be := pb.NewBackendClient(beConn)
be := pb.NewBackendServiceClient(beConn)
feConn, err := rpc.GRPCClientFromConfig(cfg, "api.frontend")
if err != nil {
@ -68,116 +74,134 @@ func Run() {
}
defer feConn.Close()
fe := pb.NewFrontendClient(feConn)
fe := pb.NewFrontendServiceClient(feConn)
// The buffered channels attempt to decouple fetch, assign and delete. It is
// best effort and these operations may still block each other if buffers are full.
matches := make(chan *pb.Match, 1000)
deleteIds := make(chan string, 1000)
w := logger.Writer()
defer w.Close()
go doFetch(cfg, be, matches)
go doAssign(be, matches, deleteIds)
go doDelete(fe, deleteIds)
matchesForAssignment := make(chan *pb.Match, 30000)
ticketsForDeletion := make(chan string, 30000)
// The above goroutines run forever and so the main goroutine needs to block.
select {}
}
for i := 0; i < 50; i++ {
go runAssignments(be, matchesForAssignment, ticketsForDeletion)
go runDeletions(fe, ticketsForDeletion)
}
// doFetch continuously fetches all profiles in a loop and queues up the fetched
// matches for assignment.
func doFetch(cfg config.View, be pb.BackendClient, matches chan *pb.Match) {
startTime := time.Now()
mprofiles := profiles.Generate(cfg)
for {
// Don't go faster than this, as it likely means that FetchMatches is throwing
// errors, and will continue doing so if queried very quickly.
for range time.Tick(time.Millisecond * 250) {
// Keep pulling matches from Open Match backend
profiles := activeScenario.Profiles()
var wg sync.WaitGroup
for _, p := range mprofiles {
for _, p := range profiles {
wg.Add(1)
p := p
go func(wg *sync.WaitGroup) {
go func(wg *sync.WaitGroup, p *pb.MatchProfile) {
defer wg.Done()
fetch(be, p, matches)
}(&wg)
runFetchMatches(be, p, matchesForAssignment)
}(&wg, p)
}
// Wait for all FetchMatches calls to complete before proceeding.
// Wait for all profiles to complete before proceeding.
wg.Wait()
logger.Infof("FetchedMatches:%v, AssignedTickets:%v, DeletedTickets:%v in time %v", atomic.LoadUint64(&matchCount), atomic.LoadUint64(&assigned), atomic.LoadUint64(&deleted), time.Since(startTime))
telemetry.RecordUnitMeasurement(context.Background(), mIterations)
}
}
func fetch(be pb.BackendClient, p *pb.MatchProfile, matches chan *pb.Match) {
func runFetchMatches(be pb.BackendServiceClient, p *pb.MatchProfile, matchesForAssignment chan<- *pb.Match) {
ctx, span := trace.StartSpan(context.Background(), "scale.backend/FetchMatches")
defer span.End()
req := &pb.FetchMatchesRequest{
Config: &pb.FunctionConfig{
Host: "om-function",
Port: 50502,
Type: pb.FunctionConfig_GRPC,
},
Profiles: []*pb.MatchProfile{p},
Profile: p,
}
stream, err := be.FetchMatches(context.Background(), req)
telemetry.RecordUnitMeasurement(ctx, mFetchMatchCalls)
stream, err := be.FetchMatches(ctx, req)
if err != nil {
logger.Errorf("FetchMatches failed, got %v", err)
telemetry.RecordUnitMeasurement(ctx, mFetchMatchErrors)
logger.WithError(err).Error("failed to get available stream client")
return
}
for {
// Pull the Match
resp, err := stream.Recv()
if err == io.EOF {
telemetry.RecordUnitMeasurement(ctx, mFetchMatchSuccesses)
return
}
if err != nil {
logger.Errorf("FetchMatches failed, got %v", err)
telemetry.RecordUnitMeasurement(ctx, mFetchMatchErrors)
logger.WithError(err).Error("failed to get matches from stream client")
return
}
matches <- resp.GetMatch()
atomic.AddUint64(&matchCount, 1)
telemetry.RecordNUnitMeasurement(ctx, mSumTicketsReturned, int64(len(resp.GetMatch().Tickets)))
telemetry.RecordUnitMeasurement(ctx, mMatchesReturned)
matchesForAssignment <- resp.GetMatch()
}
}
// doAssign continuously assigns matches that were queued in the matches channel
// by doFetch and after successful assignment, queues all the tickets to deleteIds
// channel for deletion by doDelete.
func doAssign(be pb.BackendClient, matches chan *pb.Match, deleteIds chan string) {
for match := range matches {
func runAssignments(be pb.BackendServiceClient, matchesForAssignment <-chan *pb.Match, ticketsForDeletion chan<- string) {
ctx := context.Background()
for m := range matchesForAssignment {
ids := []string{}
for _, t := range match.Tickets {
ids = append(ids, t.Id)
for _, t := range m.Tickets {
ids = append(ids, t.GetId())
}
req := &pb.AssignTicketsRequest{
TicketIds: ids,
Assignment: &pb.Assignment{
Connection: fmt.Sprintf("%d.%d.%d.%d:2222", rand.Intn(256), rand.Intn(256), rand.Intn(256), rand.Intn(256)),
},
if activeScenario.BackendAssignsTickets {
_, err := be.AssignTickets(context.Background(), &pb.AssignTicketsRequest{
Assignments: []*pb.AssignmentGroup{
{
TicketIds: ids,
Assignment: &pb.Assignment{
Connection: fmt.Sprintf("%d.%d.%d.%d:2222", rand.Intn(256), rand.Intn(256), rand.Intn(256), rand.Intn(256)),
},
},
},
})
if err != nil {
telemetry.RecordUnitMeasurement(ctx, mMatchAssignsFailed)
logger.WithError(err).Error("failed to assign tickets")
continue
}
telemetry.RecordUnitMeasurement(ctx, mMatchesAssigned)
}
if _, err := be.AssignTickets(context.Background(), req); err != nil {
logger.Errorf("AssignTickets failed, got %v", err)
continue
}
atomic.AddUint64(&assigned, uint64(len(ids)))
for _, id := range ids {
deleteIds <- id
ticketsForDeletion <- id
}
}
}
// doDelete deletes all the tickets whose ids get added to the deleteIds channel.
func doDelete(fe pb.FrontendClient, deleteIds chan string) {
for id := range deleteIds {
req := &pb.DeleteTicketRequest{
TicketId: id,
}
func runDeletions(fe pb.FrontendServiceClient, ticketsForDeletion <-chan string) {
ctx := context.Background()
if _, err := fe.DeleteTicket(context.Background(), req); err != nil {
logger.Errorf("DeleteTicket failed for ticket %v, got %v", id, err)
continue
}
for id := range ticketsForDeletion {
if activeScenario.BackendDeletesTickets {
req := &pb.DeleteTicketRequest{
TicketId: id,
}
atomic.AddUint64(&deleted, 1)
_, err := fe.DeleteTicket(context.Background(), req)
if err == nil {
telemetry.RecordUnitMeasurement(ctx, mTicketsDeleted)
} else {
telemetry.RecordUnitMeasurement(ctx, mTicketDeletesFailed)
logger.WithError(err).Error("failed to delete tickets")
}
}
}
}

View File

@ -1,4 +1,3 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -12,44 +11,52 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package app contains the common application initialization code for Open Match servers.
package app
package evaluator
import (
"fmt"
"net"
"github.com/sirupsen/logrus"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/logging"
"open-match.dev/open-match/internal/rpc"
"google.golang.org/grpc"
"open-match.dev/open-match/pkg/pb"
utilTesting "open-match.dev/open-match/internal/util/testing"
"open-match.dev/open-match/examples/scale/scenarios"
)
var (
logger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": "app.main",
"component": "scale.evaluator",
})
)
// RunApplication creates a server.
func RunApplication(serverName string, bindService func(*rpc.ServerParams, config.View) error) {
cfg, err := config.Read()
// Run triggers execution of an evaluator.
func Run() {
activeScenario := scenarios.ActiveScenario
server := grpc.NewServer(utilTesting.NewGRPCServerOptions(logger)...)
pb.RegisterEvaluatorServer(server, activeScenario.Evaluator)
ln, err := net.Listen("tcp", fmt.Sprintf(":%d", 50508))
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatalf("cannot read configuration.")
"port": 50508,
}).Fatal("net.Listen() error")
}
logging.ConfigureLogging(cfg)
p, err := rpc.NewServerParamsFromConfig(cfg, "api."+serverName)
logger.WithFields(logrus.Fields{
"port": 50508,
}).Info("TCP net listener initialized")
logger.Info("Serving gRPC endpoint")
err = server.Serve(ln)
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatalf("cannot construct server.")
}).Fatal("gRPC serve() error")
}
if err := bindService(p, cfg); err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatalf("failed to bind %s service.", serverName)
}
rpc.MustServeForever(p)
}

View File

@ -16,15 +16,18 @@ package frontend
import (
"context"
"math/rand"
"sync"
"sync/atomic"
"time"
"github.com/sirupsen/logrus"
"open-match.dev/open-match/examples/scale/tickets"
"go.opencensus.io/stats"
"go.opencensus.io/trace"
"open-match.dev/open-match/examples/scale/scenarios"
"open-match.dev/open-match/internal/appmain"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/logging"
"open-match.dev/open-match/internal/rpc"
"open-match.dev/open-match/internal/telemetry"
"open-match.dev/open-match/pkg/pb"
)
@ -33,61 +36,117 @@ var (
"app": "openmatch",
"component": "scale.frontend",
})
activeScenario = scenarios.ActiveScenario
mTicketsCreated = telemetry.Counter("scale_frontend_tickets_created", "tickets created")
mTicketCreationsFailed = telemetry.Counter("scale_frontend_ticket_creations_failed", "tickets created")
mRunnersWaiting = concurrentGauge(telemetry.Gauge("scale_frontend_runners_waiting", "runners waiting"))
mRunnersCreating = concurrentGauge(telemetry.Gauge("scale_frontend_runners_creating", "runners creating"))
)
// Run triggers execution of the scale frontend component that creates
// tickets at scale in Open Match.
func Run() {
cfg, err := config.Read()
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatal("cannot read configuration.")
}
func BindService(p *appmain.Params, b *appmain.Bindings) error {
go run(p.Config())
logging.ConfigureLogging(cfg)
doCreate(cfg)
return nil
}
func doCreate(cfg config.View) {
concurrent := cfg.GetInt("testConfig.concurrent-creates")
func run(cfg config.View) {
conn, err := rpc.GRPCClientFromConfig(cfg, "api.frontend")
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatal("failed to get Frontend connection")
}
fe := pb.NewFrontendServiceClient(conn)
defer conn.Close()
fe := pb.NewFrontendClient(conn)
ticketQPS := int(activeScenario.FrontendTicketCreatedQPS)
ticketTotal := activeScenario.FrontendTotalTicketsToCreate
var created uint64
var failed uint64
start := time.Now()
for {
var wg sync.WaitGroup
for i := 0; i <= concurrent; i++ {
wg.Add(1)
go func(wg *sync.WaitGroup) {
defer wg.Done()
req := &pb.CreateTicketRequest{
Ticket: tickets.Ticket(cfg),
}
totalCreated := 0
if _, err := fe.CreateTicket(context.Background(), req); err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Error("failed to create a ticket.")
atomic.AddUint64(&failed, 1)
return
}
atomic.AddUint64(&created, 1)
}(&wg)
for range time.Tick(time.Second) {
for i := 0; i < ticketQPS; i++ {
if ticketTotal == -1 || totalCreated < ticketTotal {
go runner(fe)
}
}
// Wait for all concurrent creates to complete.
wg.Wait()
logger.Infof("%v tickets created, %v failed in %v", created, failed, time.Since(start))
}
}
func runner(fe pb.FrontendServiceClient) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
g := stateGauge{}
defer g.stop()
g.start(mRunnersWaiting)
// A random sleep at the start of the worker evens calls out over the second
// period, and makes timing between ticket creation calls a more realistic
// poisson distribution.
time.Sleep(time.Duration(rand.Int63n(int64(time.Second))))
g.start(mRunnersCreating)
id, err := createTicket(ctx, fe)
if err != nil {
logger.WithError(err).Error("failed to create a ticket")
return
}
_ = id
}
func createTicket(ctx context.Context, fe pb.FrontendServiceClient) (string, error) {
ctx, span := trace.StartSpan(ctx, "scale.frontend/CreateTicket")
defer span.End()
req := &pb.CreateTicketRequest{
Ticket: activeScenario.Ticket(),
}
resp, err := fe.CreateTicket(ctx, req)
if err != nil {
telemetry.RecordUnitMeasurement(ctx, mTicketCreationsFailed)
return "", err
}
telemetry.RecordUnitMeasurement(ctx, mTicketsCreated)
return resp.Id, nil
}
// Allows concurrent modification of a gauge value by modifying the concurrent
// value with a delta.
func concurrentGauge(s *stats.Int64Measure) func(delta int64) {
m := sync.Mutex{}
v := int64(0)
return func(delta int64) {
m.Lock()
defer m.Unlock()
v += delta
telemetry.SetGauge(context.Background(), s, v)
}
}
// stateGauge will have a single value be applied to one gauge at a time.
type stateGauge struct {
f func(int64)
}
// start begins a stage measured in a gauge, stopping any previously started
// stage.
func (g *stateGauge) start(f func(int64)) {
g.stop()
g.f = f
f(1)
}
// stop finishes the current stage by decrementing the gauge.
func (g *stateGauge) stop() {
if g.f != nil {
g.f(-1)
g.f = nil
}
}

examples/scale/mmf/mmf.go (new file, 69 lines)
View File

@ -0,0 +1,69 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mmf
import (
"fmt"
"net"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
"open-match.dev/open-match/pkg/pb"
utilTesting "open-match.dev/open-match/internal/util/testing"
"open-match.dev/open-match/examples/scale/scenarios"
)
var (
logger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": "scale.mmf",
})
)
// Run triggers execution of a MMF.
func Run() {
activeScenario := scenarios.ActiveScenario
conn, err := grpc.Dial("open-match-query.open-match.svc.cluster.local:50503", utilTesting.NewGRPCDialOptions(logger)...)
if err != nil {
logger.Fatalf("Failed to connect to Open Match, got %v", err)
}
defer conn.Close()
server := grpc.NewServer(utilTesting.NewGRPCServerOptions(logger)...)
pb.RegisterMatchFunctionServer(server, activeScenario.MMF)
ln, err := net.Listen("tcp", fmt.Sprintf(":%d", 50502))
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
"port": 50502,
}).Fatal("net.Listen() error")
}
logger.WithFields(logrus.Fields{
"port": 50502,
}).Info("TCP net listener initialized")
logger.Info("Serving gRPC endpoint")
err = server.Serve(ln)
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatal("gRPC serve() error")
}
}
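Run blocks serving the MatchFunction gRPC endpoint on port 50502, so a scale-mmf binary only needs to call it from its main package. A hypothetical entry point (not part of this diff) might look like:

package main

import "open-match.dev/open-match/examples/scale/mmf"

func main() {
	mmf.Run() // blocks, serving the active scenario's MMF on :50502
}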

View File

@ -1,47 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package profiles
import (
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/testing/e2e"
"open-match.dev/open-match/pkg/pb"
)
// greedyProfiles generates a single profile with one Pool containing a single
// filter that covers the entire range of player ratings, thereby pulling in the
// entire player population on each profile execution.
func greedyProfiles(cfg config.View) []*pb.MatchProfile {
return []*pb.MatchProfile{
{
Name: "greedy",
Pools: []*pb.Pool{
{
Name: "all",
DoubleRangeFilters: []*pb.DoubleRangeFilter{
{
DoubleArg: e2e.DoubleArgMMR,
Min: float64(cfg.GetInt("testConfig.minRating")),
Max: float64(cfg.GetInt("testConfig.maxRating")),
},
},
},
},
Rosters: []*pb.Roster{
makeRosterSlots("all", cfg.GetInt("testConfig.ticketsPerMatch")),
},
},
}
}

View File

@ -1,81 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package profiles
import (
"fmt"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/testing/e2e"
"open-match.dev/open-match/pkg/pb"
)
// multifilterProfiles generates multiple profiles, each containing a single Pool
// that specifies multiple filters to pick a partition of the player population.
// Note that across all the profiles returned, the entire population is covered,
// and given the overlapping nature of the filters, multiple profiles returned by
// this method may match the same set of players.
func multifilterProfiles(cfg config.View) []*pb.MatchProfile {
regions := cfg.GetStringSlice("testConfig.regions")
ratingFilters := makeRangeFilters(&rangeConfig{
name: "Rating",
min: cfg.GetInt("testConfig.minRating"),
max: cfg.GetInt("testConfig.maxRating"),
rangeSize: cfg.GetInt("testConfig.multifilter.rangeSize"),
rangeOverlap: cfg.GetInt("testConfig.multifilter.rangeOverlap"),
})
latencyFilters := makeRangeFilters(&rangeConfig{
name: "Latency",
min: 0,
max: 100,
rangeSize: 70,
rangeOverlap: 0,
})
var profiles []*pb.MatchProfile
for _, region := range regions {
for _, latency := range latencyFilters {
for _, rating := range ratingFilters {
poolName := fmt.Sprintf("%s_%s_%s", region, rating.name, latency.name)
p := &pb.Pool{
Name: poolName,
DoubleRangeFilters: []*pb.DoubleRangeFilter{
{
DoubleArg: e2e.DoubleArgMMR,
Min: float64(rating.min),
Max: float64(rating.max),
},
{
DoubleArg: region,
Min: float64(latency.min),
Max: float64(latency.max),
},
},
}
prof := &pb.MatchProfile{
Name: fmt.Sprintf("Profile_%s", poolName),
Pools: []*pb.Pool{p},
Rosters: []*pb.Roster{makeRosterSlots(p.GetName(), cfg.GetInt("testConfig.ticketsPerMatch"))},
}
profiles = append(profiles, prof)
}
}
}
return profiles
}

View File

@ -1,94 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package profiles
import (
"fmt"
"math"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/testing/e2e"
"open-match.dev/open-match/pkg/pb"
)
// multipoolProfiles generates multiple profiles, each containing multiple player
// Pools that each specify multiple filters. Each profile also requests a roster
// with a configured number of players per pool to be placed in a match.
func multipoolProfiles(cfg config.View) []*pb.MatchProfile {
characters := cfg.GetStringSlice("testConfig.characters")
regions := cfg.GetStringSlice("testConfig.regions")
ratingFilters := makeRangeFilters(&rangeConfig{
name: "Rating",
min: cfg.GetInt("testConfig.minRating"),
max: cfg.GetInt("testConfig.maxRating"),
rangeSize: cfg.GetInt("testConfig.multipool.rangeSize"),
rangeOverlap: cfg.GetInt("testConfig.multipool.rangeOverlap"),
})
latencyFilters := makeRangeFilters(&rangeConfig{
name: "Latency",
min: 0,
max: 100,
rangeSize: 70,
rangeOverlap: 0,
})
var profiles []*pb.MatchProfile
for _, region := range regions {
for _, latency := range latencyFilters {
for _, rating := range ratingFilters {
var pools []*pb.Pool
var rosters []*pb.Roster
for _, character := range characters {
poolName := fmt.Sprintf("%s_%s_%s_%s", region, rating.name, latency.name, character)
p := &pb.Pool{
Name: poolName,
DoubleRangeFilters: []*pb.DoubleRangeFilter{
// TODO: Use StringEqualsFilter for the character args.
{
DoubleArg: character,
Min: 0,
Max: math.MaxFloat64,
},
{
DoubleArg: e2e.DoubleArgMMR,
Min: float64(rating.min),
Max: float64(rating.max),
},
{
DoubleArg: region,
Min: float64(latency.min),
Max: float64(latency.max),
},
},
}
rosters = append(rosters, makeRosterSlots(poolName, cfg.GetInt("testConfig.multipool.characterCount")))
pools = append(pools, p)
}
prof := &pb.MatchProfile{
Name: fmt.Sprintf("Profile_%s", fmt.Sprintf("%s_%s_%s", region, rating.name, latency.name)),
Pools: pools,
Rosters: rosters,
}
profiles = append(profiles, prof)
}
}
}
return profiles
}

View File

@ -1,53 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package profiles
import (
"github.com/sirupsen/logrus"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/pkg/pb"
)
var (
logger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": "scale.profiles",
})
// Greedy config type is used to pick profiles that pull in all players.
Greedy = "greedy"
// MultiFilter config type is used to pick profiles that have a Pool that uses multiple filters.
MultiFilter = "multifilter"
// MultiPool config type is used to pick profiles that have multiple Pools with multiple filters each.
MultiPool = "multipool"
// emptyRosterSpot is the string that represents an empty slot on a Roster.
emptyRosterSpot = "EMPTY_ROSTER_SPOT"
)
// Generate generates test profiles for scale demo
func Generate(cfg config.View) []*pb.MatchProfile {
profile := cfg.GetString("testConfig.profile")
switch profile {
case Greedy:
return greedyProfiles(cfg)
case MultiFilter:
return multifilterProfiles(cfg)
case MultiPool:
return multipoolProfiles(cfg)
}
logger.Warningf("Unexpected profile name %s, not returning any profiles", profile)
return nil
}

View File

@ -1,72 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package profiles
import (
"fmt"
"open-match.dev/open-match/pkg/pb"
)
type rangeFilter struct {
name string
min int
max int
}
type rangeConfig struct {
name string
min int
max int
rangeSize int
rangeOverlap int
}
// makeRosterSlots generates a roster with the specified name and with the
// specified number of empty roster slots.
func makeRosterSlots(name string, count int) *pb.Roster {
roster := &pb.Roster{
Name: name,
}
for i := 0; i <= count; i++ {
roster.TicketIds = append(roster.TicketIds, emptyRosterSpot)
}
return roster
}
// makeRangeFilters generates multiple filters over a given range based on
// the size of the range and the overlap specified for the filters.
func makeRangeFilters(config *rangeConfig) []*rangeFilter {
var filters []*rangeFilter
r := config.min
for r <= config.max {
max := r + config.rangeSize
if max > config.max {
r = config.max
}
filters = append(filters, &rangeFilter{
name: fmt.Sprintf("%s_%dto%d", config.name, r, max),
min: r,
max: max,
})
r = r + 1 + (config.rangeSize - config.rangeOverlap)
}
return filters
}

View File

@ -0,0 +1,141 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package battleroyal
import (
"fmt"
"io"
"math/rand"
"time"
"open-match.dev/open-match/pkg/pb"
)
const (
poolName = "all"
regionArg = "region"
)
func battleRoyalRegionName(i int) string {
return fmt.Sprintf("region_%d", i)
}
func Scenario() *BattleRoyalScenario {
return &BattleRoyalScenario{
regions: 20,
}
}
type BattleRoyalScenario struct {
regions int
}
func (b *BattleRoyalScenario) Profiles() []*pb.MatchProfile {
p := []*pb.MatchProfile{}
for i := 0; i < b.regions; i++ {
p = append(p, &pb.MatchProfile{
Name: battleRoyalRegionName(i),
Pools: []*pb.Pool{
{
Name: poolName,
StringEqualsFilters: []*pb.StringEqualsFilter{
{
StringArg: regionArg,
Value: battleRoyalRegionName(i),
},
},
},
},
})
}
return p
}
func (b *BattleRoyalScenario) Ticket() *pb.Ticket {
// Simple way to give an uneven distribution of region population: a is
// uniform over [1, regions] and r is uniform over [0, a), so low-numbered
// regions are eligible on every draw and end up far more populated.
a := rand.Intn(b.regions) + 1
r := rand.Intn(a)
return &pb.Ticket{
SearchFields: &pb.SearchFields{
StringArgs: map[string]string{
regionArg: battleRoyalRegionName(r),
},
},
}
}
func (b *BattleRoyalScenario) MatchFunction(p *pb.MatchProfile, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
const playersInMatch = 100
tickets := poolTickets[poolName]
var matches []*pb.Match
for i := 0; i+playersInMatch <= len(tickets); i += playersInMatch {
matches = append(matches, &pb.Match{
MatchId: fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), len(matches)),
Tickets: tickets[i : i+playersInMatch],
MatchProfile: p.GetName(),
MatchFunction: "battleRoyal",
})
}
return matches, nil
}
// Evaluate accepts all matches which don't contain a ticket already present in a
// previously accepted match. Essentially, first to claim the ticket wins.
func (b *BattleRoyalScenario) Evaluate(stream pb.Evaluator_EvaluateServer) error {
used := map[string]struct{}{}
// TODO: once the evaluator client supports sending and receiving at the
// same time, don't buffer, just send results immediately.
matchIDs := []string{}
outer:
for {
req, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("Error reading evaluator input stream: %w", err)
}
m := req.GetMatch()
for _, t := range m.Tickets {
if _, ok := used[t.Id]; ok {
continue outer
}
}
for _, t := range m.Tickets {
used[t.Id] = struct{}{}
}
matchIDs = append(matchIDs, m.GetMatchId())
}
for _, mID := range matchIDs {
err := stream.Send(&pb.EvaluateResponse{MatchId: mID})
if err != nil {
return fmt.Errorf("Error sending evaluator output stream: %w", err)
}
}
return nil
}
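The Ticket implementation above deliberately skews population toward low-numbered regions: region_0 is eligible on every draw of a, while region_19 is only possible when a equals the region count. A rough, illustrative simulation of that skew (assuming the 20 regions configured in Scenario):

counts := make([]int, 20)
for i := 0; i < 100000; i++ {
	a := rand.Intn(20) + 1 // same draws as Ticket()
	counts[rand.Intn(a)]++
}
fmt.Println(counts) // counts[0] is largest; higher-numbered regions receive progressively fewer tickets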

View File

@ -0,0 +1,111 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package firstmatch
import (
"fmt"
"io"
"time"
"open-match.dev/open-match/pkg/pb"
)
const (
poolName = "all"
)
func Scenario() *FirstMatchScenario {
return &FirstMatchScenario{}
}
type FirstMatchScenario struct {
}
func (_ *FirstMatchScenario) Profiles() []*pb.MatchProfile {
return []*pb.MatchProfile{
{
Name: "entirePool",
Pools: []*pb.Pool{
{
Name: poolName,
},
},
},
}
}
func (_ *FirstMatchScenario) Ticket() *pb.Ticket {
return &pb.Ticket{}
}
func (_ *FirstMatchScenario) MatchFunction(p *pb.MatchProfile, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
tickets := poolTickets[poolName]
var matches []*pb.Match
for i := 0; i+1 < len(tickets); i += 2 {
matches = append(matches, &pb.Match{
MatchId: fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), len(matches)),
Tickets: []*pb.Ticket{tickets[i], tickets[i+1]},
MatchProfile: p.GetName(),
MatchFunction: "rangeExpandingMatchFunction",
})
}
return matches, nil
}
// Evaluate accepts all matches which don't contain a ticket already present in a
// previously accepted match. Essentially, first to claim the ticket wins.
func (_ *FirstMatchScenario) Evaluate(stream pb.Evaluator_EvaluateServer) error {
used := map[string]struct{}{}
// TODO: once the evaluator client supports sending and receiving at the
// same time, don't buffer, just send results immediately.
matchIDs := []string{}
outer:
for {
req, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("Error reading evaluator input stream: %w", err)
}
m := req.GetMatch()
for _, t := range m.Tickets {
if _, ok := used[t.Id]; ok {
continue outer
}
}
for _, t := range m.Tickets {
used[t.Id] = struct{}{}
}
matchIDs = append(matchIDs, m.GetMatchId())
}
for _, mID := range matchIDs {
err := stream.Send(&pb.EvaluateResponse{MatchId: mID})
if err != nil {
return fmt.Errorf("Error sending evaluator output stream: %w", err)
}
}
return nil
}

View File

@ -0,0 +1,156 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package scenarios
import (
"sync"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
"open-match.dev/open-match/examples/scale/scenarios/battleroyal"
"open-match.dev/open-match/examples/scale/scenarios/firstmatch"
"open-match.dev/open-match/examples/scale/scenarios/teamshooter"
"open-match.dev/open-match/internal/util/testing"
"open-match.dev/open-match/pkg/matchfunction"
"open-match.dev/open-match/pkg/pb"
)
var (
queryServiceAddress = "open-match-query.open-match.svc.cluster.local:50503" // Address of the QueryService Endpoint.
logger = logrus.WithFields(logrus.Fields{
"app": "scale",
})
)
// GameScenario defines what tickets look like, and how they should be matched.
type GameScenario interface {
// Ticket creates a new ticket, with randomized parameters.
Ticket() *pb.Ticket
// Profiles lists all of the profiles that should run.
Profiles() []*pb.MatchProfile
// MatchFunction is the custom logic implementation of the match function.
MatchFunction(p *pb.MatchProfile, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error)
// Evaluate is the custom logic implementation of the evaluator.
Evaluate(stream pb.Evaluator_EvaluateServer) error
}
// ActiveScenario sets the scenario, with preset parameters, that we want to use for the current Open Match benchmark run.
var ActiveScenario = func() *Scenario {
var gs GameScenario = firstmatch.Scenario()
// TODO: Select which scenario to use based on some configuration or choice,
// so it's easier to run different scenarios without changing code.
gs = battleroyal.Scenario()
gs = teamshooter.Scenario()
return &Scenario{
FrontendTotalTicketsToCreate: -1,
FrontendTicketCreatedQPS: 100,
BackendAssignsTickets: true,
BackendDeletesTickets: true,
Ticket: gs.Ticket,
Profiles: gs.Profiles,
MMF: queryPoolsWrapper(gs.MatchFunction),
Evaluator: gs.Evaluate,
}
}()
// Scenario defines the controllable fields for Open Match benchmark scenarios
type Scenario struct {
// TODO: supports the following controllable parameters
// MatchFunction Configs
// MatchOverlapRatio float32
// TicketSearchFieldsUnitSize int
// TicketSearchFieldsNumber int
// GameFrontend Configs
// TicketExtensionSize int
// PendingTicketNumber int
// MatchExtensionSize int
FrontendTotalTicketsToCreate int // TotalTicketsToCreate = -1 lets scale-frontend create tickets forever
FrontendTicketCreatedQPS uint32
// GameBackend Configs
// ProfileNumber int
// FilterNumber int
BackendAssignsTickets bool
BackendDeletesTickets bool
Ticket func() *pb.Ticket
Profiles func() []*pb.MatchProfile
MMF matchFunction
Evaluator evaluatorFunction
}
type matchFunction func(*pb.RunRequest, pb.MatchFunction_RunServer) error
type evaluatorFunction func(pb.Evaluator_EvaluateServer) error
func (mmf matchFunction) Run(req *pb.RunRequest, srv pb.MatchFunction_RunServer) error {
return mmf(req, srv)
}
func (eval evaluatorFunction) Evaluate(srv pb.Evaluator_EvaluateServer) error {
return eval(srv)
}
func getQueryServiceGRPCClient() pb.QueryServiceClient {
conn, err := grpc.Dial(queryServiceAddress, testing.NewGRPCDialOptions(logger)...)
if err != nil {
logger.Fatalf("Failed to connect to Open Match, got %v", err)
}
return pb.NewQueryServiceClient(conn)
}
func queryPoolsWrapper(mmf func(req *pb.MatchProfile, pools map[string][]*pb.Ticket) ([]*pb.Match, error)) matchFunction {
var q pb.QueryServiceClient
var startQ sync.Once
return func(req *pb.RunRequest, stream pb.MatchFunction_RunServer) error {
startQ.Do(func() {
q = getQueryServiceGRPCClient()
})
poolTickets, err := matchfunction.QueryPools(stream.Context(), q, req.GetProfile().GetPools())
if err != nil {
return err
}
proposals, err := mmf(req.GetProfile(), poolTickets)
if err != nil {
return err
}
logger.WithFields(logrus.Fields{
"proposals": proposals,
}).Trace("proposals returned by match function")
for _, proposal := range proposals {
if err := stream.Send(&pb.RunResponse{Proposal: proposal}); err != nil {
return err
}
}
return nil
}
}
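A new scenario only has to satisfy the GameScenario interface defined above; queryPoolsWrapper and the matchFunction/evaluatorFunction adapters handle the gRPC plumbing. A deliberately trivial, purely illustrative implementation (not part of this change; assumes pb and io are imported as in the scenario packages):

type noopScenario struct{}

func (noopScenario) Ticket() *pb.Ticket { return &pb.Ticket{} }

func (noopScenario) Profiles() []*pb.MatchProfile {
	return []*pb.MatchProfile{{Name: "everything", Pools: []*pb.Pool{{Name: "all"}}}}
}

func (noopScenario) MatchFunction(p *pb.MatchProfile, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
	return nil, nil // never proposes a match
}

func (noopScenario) Evaluate(stream pb.Evaluator_EvaluateServer) error {
	for {
		if _, err := stream.Recv(); err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
	}
}

Wiring it in would then just mean assigning gs = noopScenario{} inside the ActiveScenario initializer.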

View File

@ -0,0 +1,330 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// TeamShooterScenario is a scenario designed to emulate the approximate
// behavior, as seen by Open Match, that a skill-based team game would have.
// It doesn't try to provide good matchmaking for real players. Three
// arguments are used:
// mode: The game mode the player wants to play in. mode is a hard partition.
// regions: Players may have good latency to one or more regions. A player will
// search for matches in all eligible regions.
// skill: Players have a random skill based on a normal distribution. Players
// will only be matched with other players who have a close skill value. The
// match functions have overlapping partitions of the skill brackets.
package teamshooter
import (
"fmt"
"io"
"math"
"math/rand"
"sort"
"time"
"github.com/golang/protobuf/ptypes"
"github.com/golang/protobuf/ptypes/any"
"github.com/golang/protobuf/ptypes/wrappers"
"open-match.dev/open-match/pkg/pb"
)
const (
poolName = "all"
skillArg = "skill"
modeArg = "mode"
)
// TeamShooterScenario provides the required methods for running a scenario.
type TeamShooterScenario struct {
// Names of available region tags.
regions []string
// Maximum regions a player can search in.
maxRegions int
// Number of tickets which form a match.
playersPerGame int
// Each pair of consecutive values defines a skill bracket that profiles
// are split on.
skillBoundaries []float64
// Maximum difference between two tickets to consider a match valid.
maxSkillDifference float64
// List of mode names.
modes []string
// Returns a random mode, with some weight.
randomMode func() string
}
// Scenario creates a new TeamShooterScenario.
func Scenario() *TeamShooterScenario {
modes, randomMode := weightedChoice(map[string]int{
"pl": 100, // Payload, very popular.
"cp": 25, // Capture point, 1/4 as popular.
})
regions := []string{}
for i := 0; i < 2; i++ {
regions = append(regions, fmt.Sprintf("region_%d", i))
}
return &TeamShooterScenario{
regions: regions,
maxRegions: 1,
playersPerGame: 12,
skillBoundaries: []float64{math.Inf(-1), 0, math.Inf(1)},
maxSkillDifference: 0.01,
modes: modes,
randomMode: randomMode,
}
}
// Profiles shards the player base on mode, region, and skill. Each skill
// bracket is widened by maxSkillDifference/2 on both ends so that players
// close to a boundary can still be matched by one of the adjacent profiles.
func (t *TeamShooterScenario) Profiles() []*pb.MatchProfile {
p := []*pb.MatchProfile{}
for _, region := range t.regions {
for _, mode := range t.modes {
for i := 0; i+1 < len(t.skillBoundaries); i++ {
skillMin := t.skillBoundaries[i] - t.maxSkillDifference/2
skillMax := t.skillBoundaries[i+1] + t.maxSkillDifference/2
p = append(p, &pb.MatchProfile{
Name: fmt.Sprintf("%s_%s_%v-%v", region, mode, skillMin, skillMax),
Pools: []*pb.Pool{
{
Name: poolName,
DoubleRangeFilters: []*pb.DoubleRangeFilter{
{
DoubleArg: skillArg,
Min: skillMin,
Max: skillMax,
},
},
TagPresentFilters: []*pb.TagPresentFilter{
{
Tag: region,
},
},
StringEqualsFilters: []*pb.StringEqualsFilter{
{
StringArg: modeArg,
Value: mode,
},
},
},
},
})
}
}
}
return p
}
// Ticket creates a randomized player.
func (t *TeamShooterScenario) Ticket() *pb.Ticket {
region := rand.Intn(len(t.regions))
numRegions := rand.Intn(t.maxRegions) + 1
tags := []string{}
for i := 0; i < numRegions; i++ {
tags = append(tags, t.regions[region])
// The Earth is actually a circle.
region = (region + 1) % len(t.regions)
}
return &pb.Ticket{
SearchFields: &pb.SearchFields{
DoubleArgs: map[string]float64{
skillArg: clamp(rand.NormFloat64(), -3, 3),
},
StringArgs: map[string]string{
modeArg: t.randomMode(),
},
Tags: tags,
},
}
}
// MatchFunction puts tickets into matches based on their skill, finding the
// required number of tickets for a game within the maximum skill difference.
// Match quality is the negative sum of squared skill deviations from the
// match's average skill, so tighter skill spreads score higher.
func (t *TeamShooterScenario) MatchFunction(p *pb.MatchProfile, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
skill := func(t *pb.Ticket) float64 {
return t.SearchFields.DoubleArgs[skillArg]
}
tickets := poolTickets[poolName]
var matches []*pb.Match
sort.Slice(tickets, func(i, j int) bool {
return skill(tickets[i]) < skill(tickets[j])
})
for i := 0; i+t.playersPerGame <= len(tickets); i++ {
mt := tickets[i : i+t.playersPerGame]
if skill(mt[len(mt)-1])-skill(mt[0]) < t.maxSkillDifference {
avg := float64(0)
for _, t := range mt {
avg += skill(t)
}
avg /= float64(len(mt))
q := float64(0)
for _, t := range mt {
diff := skill(t) - avg
q -= diff * diff
}
m, err := (&matchExt{
id: fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), len(matches)),
matchProfile: p.GetName(),
matchFunction: "skillmatcher",
tickets: mt,
quality: q,
}).pack()
if err != nil {
return nil, err
}
matches = append(matches, m)
}
}
return matches, nil
}
// Evaluate returns matches in order of highest quality, skipping any matches
// which contain tickets that are already used.
func (t *TeamShooterScenario) Evaluate(stream pb.Evaluator_EvaluateServer) error {
// Unpacked proposal matches.
proposals := []*matchExt{}
// Ticket ids which are used in a match.
used := map[string]struct{}{}
for {
req, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("Error reading evaluator input stream: %w", err)
}
p, err := unpackMatch(req.GetMatch())
if err != nil {
return err
}
proposals = append(proposals, p)
}
// Higher quality is better.
sort.Slice(proposals, func(i, j int) bool {
return proposals[i].quality > proposals[j].quality
})
outer:
for _, p := range proposals {
for _, t := range p.tickets {
if _, ok := used[t.Id]; ok {
continue outer
}
}
for _, t := range p.tickets {
used[t.Id] = struct{}{}
}
err := stream.Send(&pb.EvaluateResponse{MatchId: p.id})
if err != nil {
return fmt.Errorf("Error sending evaluator output stream: %w", err)
}
}
return nil
}
// matchExt presents the match and extension data in a native form, and allows
// easy conversion to and from proto format.
type matchExt struct {
id string
tickets []*pb.Ticket
quality float64
matchProfile string
matchFunction string
}
func unpackMatch(m *pb.Match) (*matchExt, error) {
v := &wrappers.DoubleValue{}
err := ptypes.UnmarshalAny(m.Extensions["quality"], v)
if err != nil {
return nil, fmt.Errorf("Error unpacking match quality: %w", err)
}
return &matchExt{
id: m.MatchId,
tickets: m.Tickets,
quality: v.Value,
matchProfile: m.MatchProfile,
matchFunction: m.MatchFunction,
}, nil
}
func (m *matchExt) pack() (*pb.Match, error) {
v := &wrappers.DoubleValue{Value: m.quality}
a, err := ptypes.MarshalAny(v)
if err != nil {
return nil, fmt.Errorf("Error packing match quality: %w", err)
}
return &pb.Match{
MatchId: m.id,
Tickets: m.tickets,
MatchProfile: m.matchProfile,
MatchFunction: m.matchFunction,
Extensions: map[string]*any.Any{
"quality": a,
},
}, nil
}
func clamp(v float64, min float64, max float64) float64 {
if v < min {
return min
}
if v > max {
return max
}
return v
}
// weightedChoice takes a map of values and their relative probabilities. It
// returns a list of the values, along with a function which returns random
// choices from the values with the weighted probability.
func weightedChoice(m map[string]int) ([]string, func() string) {
s := make([]string, 0, len(m))
total := 0
for k, v := range m {
s = append(s, k)
total += v
}
return s, func() string {
remainder := rand.Intn(total)
for k, v := range m {
remainder -= v
if remainder < 0 {
return k
}
}
panic("weightedChoice is broken.")
}
}
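weightedChoice is what drives randomMode above: with the weights {"pl": 100, "cp": 25}, roughly four out of five tickets request the payload mode. A quick illustrative check of the returned picker (not part of this change):

modes, randomMode := weightedChoice(map[string]int{"pl": 100, "cp": 25})
counts := map[string]int{}
for i := 0; i < 100000; i++ {
	counts[randomMode()]++
}
fmt.Println(modes, counts) // expect roughly 80% "pl" and 20% "cp"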

View File

@ -1,83 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tickets
import (
"math/rand"
"open-match.dev/open-match/internal/config"
"github.com/sirupsen/logrus"
"open-match.dev/open-match/internal/testing/e2e"
"open-match.dev/open-match/pkg/pb"
)
var (
logger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": "scale.tickets",
})
)
// Ticket generates a ticket based on the config for scale testing
func Ticket(cfg config.View) *pb.Ticket {
characters := cfg.GetStringSlice("testConfig.characters")
regions := cfg.GetStringSlice("testConfig.regions")
min := cfg.GetFloat64("testConfig.minRating")
max := cfg.GetFloat64("testConfig.maxRating")
latencyMap := latency(regions)
ticket := &pb.Ticket{
SearchFields: &pb.SearchFields{
DoubleArgs: map[string]float64{
e2e.DoubleArgMMR: normalDist(40, min, max, 20),
},
StringArgs: map[string]string{
e2e.Role: characters[rand.Intn(len(characters))],
},
},
}
for _, r := range regions {
ticket.SearchFields.DoubleArgs[r] = latencyMap[r]
}
return ticket
}
// latency generates a latency mapping of each region to a latency value. It picks
// one region with latency between 0ms to 100ms and sets latencies to all other regions
// to a value between 100ms to 300ms.
func latency(regions []string) map[string]float64 {
latencies := make(map[string]float64)
for _, r := range regions {
latencies[r] = normalDist(175, 100, 300, 75)
}
latencies[regions[rand.Intn(len(regions))]] = normalDist(25, 0, 100, 75)
return latencies
}
// normalDist generates a random integer in a normal distribution
func normalDist(avg float64, min float64, max float64, stdev float64) float64 {
sample := (rand.NormFloat64() * stdev) + avg
switch {
case sample > max:
sample = max
case sample < min:
sample = min
}
return sample
}

66
go.mod
View File

@ -15,43 +15,59 @@ module open-match.dev/open-match
// limitations under the License.
// When updating Go version, update Dockerfile.ci, Dockerfile.base-build, and go.mod
go 1.13.1
go 1.14
require (
cloud.google.com/go v0.40.0 // indirect
cloud.google.com/go v0.47.0 // indirect
contrib.go.opencensus.io/exporter/jaeger v0.1.0
contrib.go.opencensus.io/exporter/ocagent v0.5.0
contrib.go.opencensus.io/exporter/ocagent v0.6.0
contrib.go.opencensus.io/exporter/prometheus v0.1.0
contrib.go.opencensus.io/exporter/stackdriver v0.12.2
contrib.go.opencensus.io/exporter/zipkin v0.1.1
contrib.go.opencensus.io/exporter/stackdriver v0.12.8
github.com/Bose/minisentinel v0.0.0-20191213132324-b7726ed8ed71
github.com/TV4/logrus-stackdriver-formatter v0.1.0
github.com/alicebob/miniredis/v2 v2.8.1-0.20190618082157-e29950035715
github.com/cenkalti/backoff v2.1.1+incompatible
github.com/alicebob/miniredis/v2 v2.11.0
github.com/apache/thrift v0.13.0 // indirect
github.com/aws/aws-sdk-go v1.25.27 // indirect
github.com/cenkalti/backoff v2.2.1+incompatible
github.com/fsnotify/fsnotify v1.4.7
github.com/gogo/protobuf v1.3.0 // indirect
github.com/gogo/protobuf v1.3.1 // indirect
github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9 // indirect
github.com/golang/protobuf v1.3.2
github.com/gomodule/redigo v1.7.1-0.20190322064113-39e2c31b7ca3
github.com/google/gofuzz v1.0.0 // indirect
github.com/gomodule/redigo v2.0.1-0.20191111085604-09d84710e01a+incompatible
github.com/googleapis/gnostic v0.3.1 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0
github.com/grpc-ecosystem/grpc-gateway v1.9.6
github.com/imdario/mergo v0.3.7 // indirect
github.com/openzipkin/zipkin-go v0.1.6
github.com/grpc-ecosystem/go-grpc-middleware v1.1.0
github.com/grpc-ecosystem/grpc-gateway v1.12.0
github.com/imdario/mergo v0.3.8 // indirect
github.com/json-iterator/go v1.1.8 // indirect
github.com/konsorten/go-windows-terminal-sequences v1.0.2 // indirect
github.com/pelletier/go-toml v1.6.0 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pkg/errors v0.8.1
github.com/prometheus/client_golang v1.0.0
github.com/prometheus/client_golang v1.2.1
github.com/pseudomuto/protoc-gen-doc v1.3.2 // indirect
github.com/rs/xid v1.2.1
github.com/sirupsen/logrus v1.4.2
github.com/spf13/afero v1.2.2 // indirect
github.com/spf13/viper v1.4.0
github.com/stretchr/testify v1.3.0
go.opencensus.io v0.22.0
golang.org/x/net v0.0.0-20190522155817-f3200d17e092
google.golang.org/genproto v0.0.0-20190611190212-a7e196e89fd3
google.golang.org/grpc v1.21.1
github.com/spf13/afero v1.2.1 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/viper v1.5.0
github.com/stretchr/testify v1.4.0
go.opencensus.io v0.22.1
golang.org/x/crypto v0.0.0-20191105034135-c7e5f84aec59 // indirect
golang.org/x/net v0.0.0-20191105084925-a882066a44e0
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e
golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd // indirect
golang.org/x/time v0.0.0-20191024005414-555d28b269f0 // indirect
google.golang.org/api v0.13.0 // indirect
google.golang.org/appengine v1.6.5 // indirect
google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6
google.golang.org/grpc v1.25.0
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/api v0.0.0-20190708094356-59223ed9f6ce // kubernetes-1.12.10
k8s.io/apimachinery v0.0.0-20190221084156-01f179d85dbc // kubernetes-1.12.10
k8s.io/client-go v9.0.0+incompatible // kubernetes-1.12.10
gopkg.in/yaml.v2 v2.2.5 // indirect
k8s.io/api v0.0.0-20191004102255-dacd7df5a50b // kubernetes-1.13.12
k8s.io/apimachinery v0.0.0-20191004074956-01f8b7d1121a // kubernetes-1.13.12
k8s.io/client-go v0.0.0-20191004102537-eb5b9a8cfde7 // kubernetes-1.13.12
k8s.io/klog v1.0.0 // indirect
sigs.k8s.io/yaml v1.1.0 // indirect
)

298
go.sum
View File

@ -1,45 +1,71 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.40.0 h1:FjSY7bOj+WzJe6TZRVtXI2b9kAYvtNg4lMbcH2+MUkk=
cloud.google.com/go v0.40.0/go.mod h1:Tk58MuI9rbLMKlAjeO/bDnteAx7tX2gJIXw4T5Jwlro=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.47.0 h1:1JUtpcY9E7+eTospEwWS2QXP3DEn7poB3E2j0jN74mM=
cloud.google.com/go v0.47.0/go.mod h1:5p3Ky/7f3N10VBkhuR5LFtddroTiMyjZV/Kj5qOQFxU=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
contrib.go.opencensus.io/exporter/jaeger v0.1.0 h1:WNc9HbA38xEQmsI40Tjd/MNU/g8byN2Of7lwIjv0Jdc=
contrib.go.opencensus.io/exporter/jaeger v0.1.0/go.mod h1:VYianECmuFPwU37O699Vc1GOcy+y8kOsfaxHRImmjbA=
contrib.go.opencensus.io/exporter/ocagent v0.5.0 h1:TKXjQSRS0/cCDrP7KvkgU6SmILtF/yV2TOs/02K/WZQ=
contrib.go.opencensus.io/exporter/ocagent v0.5.0/go.mod h1:ImxhfLRpxoYiSq891pBrLVhN+qmP8BTVvdH2YLs7Gl0=
contrib.go.opencensus.io/exporter/ocagent v0.6.0 h1:Z1n6UAyr0QwM284yUuh5Zd8JlvxUGAhFZcgMJkMPrGM=
contrib.go.opencensus.io/exporter/ocagent v0.6.0/go.mod h1:zmKjrJcdo0aYcVS7bmEeSEBLPA9YJp5bjrofdU3pIXs=
contrib.go.opencensus.io/exporter/prometheus v0.1.0 h1:SByaIoWwNgMdPSgl5sMqM2KDE5H/ukPWBRo314xiDvg=
contrib.go.opencensus.io/exporter/prometheus v0.1.0/go.mod h1:cGFniUXGZlKRjzOyuZJ6mgB+PgBcCIa79kEKR8YCW+A=
contrib.go.opencensus.io/exporter/stackdriver v0.12.2 h1:jU1p9F07ASK11wYgSTPKtFlTvTtCDj6R1d3nRt0ZHDE=
contrib.go.opencensus.io/exporter/stackdriver v0.12.2/go.mod h1:iwB6wGarfphGGe/e5CWqyUk/cLzKnWsOKPVW3no6OTw=
contrib.go.opencensus.io/exporter/zipkin v0.1.1 h1:PR+1zWqY8ceXs1qDQQIlgXe+sdiwCf0n32bH4+Epk8g=
contrib.go.opencensus.io/exporter/zipkin v0.1.1/go.mod h1:GMvdSl3eJ2gapOaLKzTKE3qDgUkJ86k9k3yY2eqwkzc=
contrib.go.opencensus.io/resource v0.1.1/go.mod h1:F361eGI91LCmW1I/Saf+rX0+OFcigGlFvXwEGEnkRLA=
contrib.go.opencensus.io/exporter/stackdriver v0.12.8 h1:iXI5hr7pUwMx0IwMphpKz5Q3If/G5JiWFVZ5MPPxP9E=
contrib.go.opencensus.io/exporter/stackdriver v0.12.8/go.mod h1:XyyafDnFOsqoxHJgTFycKZMrRUrPThLh2iYTJF6uoO0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Bose/minisentinel v0.0.0-20191213132324-b7726ed8ed71 h1:J52um+Sp3v8TpSY0wOgpjr84np+xvrY3503DRirJ6wI=
github.com/Bose/minisentinel v0.0.0-20191213132324-b7726ed8ed71/go.mod h1:E4OavwrrOME3uj3Zm9Rla8ZDqlAR5GqKA+mMIPoilYk=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/FZambia/sentinel v1.0.0 h1:KJ0ryjKTZk5WMp0dXvSdNqp3lFaW1fNFuEYfrkLOYIc=
github.com/FZambia/sentinel v1.0.0/go.mod h1:ytL1Am/RLlAoAXG6Kj5LNuw/TRRQrv2rt2FT26vP5gI=
github.com/Masterminds/semver v1.4.2 h1:WBLTQ37jOCzSLtXNdoo8bNM8876KhNqOKvrlGITgsTc=
github.com/Masterminds/semver v1.4.2/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Masterminds/sprig v2.15.0+incompatible h1:0gSxPGWS9PAr7U2NsQ2YQg6juRDINkUyuvbb4b2Xm8w=
github.com/Masterminds/sprig v2.15.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/TV4/logrus-stackdriver-formatter v0.1.0 h1:nFea8RiX7ecTnWPM+9FIqwZYJdcGo58CHMGIVdYzMXg=
github.com/TV4/logrus-stackdriver-formatter v0.1.0/go.mod h1:wwS7hOiBvP6SBD0UXCa767+VhHkaXrfX0MzUojYcN0Q=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alicebob/gopher-json v0.0.0-20180125190556-5a6b3ba71ee6 h1:45bxf7AZMwWcqkLzDAQugVEwedisr5nRJ1r+7LYnv0U=
github.com/alicebob/gopher-json v0.0.0-20180125190556-5a6b3ba71ee6/go.mod h1:SGnFV6hVsYE877CKEZ6tDNTjaSXYUk6QqoIK6PrAtcc=
github.com/alicebob/miniredis/v2 v2.8.1-0.20190618082157-e29950035715 h1:orxqmCgZI1neQETsk5EIA/RkVTDm4Yw9x2f7RynLck4=
github.com/alicebob/miniredis/v2 v2.8.1-0.20190618082157-e29950035715/go.mod h1:gUxwu+6dLLmJHIXOOBlgcXqbcpPPp+NzOnBzgqFIGYA=
github.com/apache/thrift v0.12.0 h1:pODnxUFNcjP9UTLZGTdeh+j16A8lJbRvD3rOtrk/7bs=
github.com/alicebob/miniredis/v2 v2.11.0 h1:Dz6uJ4w3Llb1ZiFoqyzF9aLuzbsEWCeKwstu9MzmSAk=
github.com/alicebob/miniredis/v2 v2.11.0/go.mod h1:UA48pmi7aSazcGAvcdKcBB49z521IC9VjTTRz2nIaJE=
github.com/antihax/optional v0.0.0-20180407024304-ca021399b1a6/go.mod h1:V8iCPQYkqmusNa815XgQio277wI47sdRh1dUOLdyC6Q=
github.com/aokoli/goutils v1.0.1 h1:7fpzNGoJ3VA8qcrm++XEE1QUe0mIwNeLa02Nwq7RDkg=
github.com/aokoli/goutils v1.0.1/go.mod h1:SijmP0QR8LtwsmDs8Yii5Z/S4trXFGFC2oO5g9DP+DQ=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0 h1:5hryIiq9gtn+MiLVn0wP37kb/uTeRZgN08WoCsAhIhI=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/aws/aws-sdk-go v1.19.18 h1:Hb3+b9HCqrOrbAtFstUWg7H5TQ+/EcklJtE8VShVs8o=
github.com/aws/aws-sdk-go v1.19.18/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.23.20/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.25.27 h1:UANmajXi1Vn7eZ9GgdDtkFjxDiaHY6tUixCiB6Bj128=
github.com/aws/aws-sdk-go v1.25.27/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0 h1:HWo1m869IqiPhD389kmkxeTalrjNbbJTC8LXupb+sl0=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/cenkalti/backoff v2.1.1+incompatible h1:tKJnvO2kl0zmb/jA5UKAt4VoEVw1qxKWjE/Bpp46npY=
github.com/cenkalti/backoff v2.1.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/census-instrumentation/opencensus-proto v0.2.0 h1:LzQXZOgg4CQfE6bFvXGM30YZL1WW/M337pXml+GrcZ4=
github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cenkalti/backoff v2.2.1+incompatible h1:tNowT99t7UNflLxfYYSlKYsBpXdEet03Pg2g16Swow4=
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/census-instrumentation/opencensus-proto v0.2.1 h1:glEXhBS5PSLLv4IXzLA5yPRVX4bilULVyxxbrfOtDAk=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.0 h1:yTUvW7Vhb89inJ+8irsUqiWjh8iT6sQPZiQzI6ReGkA=
github.com/cespare/xxhash/v2 v2.1.0/go.mod h1:dgIUBU3pDso/gPgZ1osOZ0iQf77oPR28Tjxl5dIMyVM=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
@ -49,6 +75,7 @@ github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/davecgh/go-spew v0.0.0-20161028175848-04cdfd42973b/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@ -58,139 +85,182 @@ github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.0.14/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v0.1.0 h1:EQciDnbrYxy13PgWoY8AqoxGiPrpgBZ1R8UNe3ddc+A=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.0 h1:G8O7TerXerS4F6sx9OV7/nRfJdnXgHZu/S/7F2SN+UE=
github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9 h1:uHTyIjqVhYRhLbJ8nIiOJHkEZZ+5YoOsAbD3sk82NiE=
github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/gomodule/redigo v1.7.1-0.20190322064113-39e2c31b7ca3 h1:6amM4HsNPOvMLVc2ZnyqrjeQ92YAVWn7T4WBKK87inY=
github.com/gomodule/redigo v1.7.1-0.20190322064113-39e2c31b7ca3/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4=
github.com/gomodule/redigo v2.0.0+incompatible/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4=
github.com/gomodule/redigo v2.0.1-0.20191111085604-09d84710e01a+incompatible h1:1mCVU17Wc8oyVUlx1ZXpnWz1DNP6v0R5z5ElKCTvVrY=
github.com/gomodule/redigo v2.0.1-0.20191111085604-09d84710e01a+incompatible/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/gofuzz v1.0.0 h1:A8PeW59pxE9IoFRqBp37U+mSNaQoZ46F1f0f863XSXw=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/googleapis/gax-go/v2 v2.0.4 h1:hU4mGcQI4DaAYW+IbTun+2qEZVFxK0ySjQLTbS0VQKc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v0.0.0-20161128191214-064e2069ce9c/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gnostic v0.3.1 h1:WeAefnSUHlBb0iJKwxFDZdbfGwkd7xRNuV+IpXMJhYk=
github.com/googleapis/gnostic v0.3.1/go.mod h1:on+2t9HRStVgn95RSsFWFz+6Q0Snyqv1awfrALZdbtU=
github.com/gorilla/context v1.1.1 h1:AWwleXJkX/nhcU9bZSnZoi3h/qGYqQAGhq6zZe/aQW8=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2 h1:Pgr17XVTNXAk3q/r4CpKzC5xBM/qW1uVLV+IhRZpIIk=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 h1:+ngKgrYPPJrOjhax5N+uePQ0Fh1Z7PheYoUI/0nzkPA=
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0 h1:Iju5GlWwrvL6UBg4zJJt3btmonfrMlCDdsejg4CZE7c=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.1.0 h1:THDBEeQ9xZ8JEaCLyLQqXMMdRqNr0QAUJTIkQAUtFjg=
github.com/grpc-ecosystem/go-grpc-middleware v1.1.0/go.mod h1:f5nM7jw/oeRSadq3xCzHAvxcr8HZnzsqU6ILg/0NiiE=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.6 h1:8p0pcgLlw2iuZVsdHdPaMUXFOA+6gDixcXbHEMzSyW8=
github.com/grpc-ecosystem/grpc-gateway v1.9.6/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.4/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.12.0 h1:SFRyYOyhgiU1kJG/PmbkWP/iSlizvDJEz531dq5kneg=
github.com/grpc-ecosystem/grpc-gateway v1.12.0/go.mod h1:8XEsbTttt/W+VvjtQhLACqCisSPWTxCZ7sBRjU6iH9c=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/imdario/mergo v0.3.7 h1:Y+UAYTZ7gDEuOfhxKWy+dvb5dRQ6rJjFSdX2HZY1/gI=
github.com/imdario/mergo v0.3.7/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/huandu/xstrings v1.0.0 h1:pO2K/gKgKaat5LdpAhxhluX2GPQMaI3W5FUz/I/UnWk=
github.com/huandu/xstrings v1.0.0/go.mod h1:4qWG/gcEcfX4z/mBDHJ++3ReCw9ibxbsNJbcucJdbSo=
github.com/imdario/mergo v0.3.4/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.8 h1:CGgOkSJeqMRmt0D9XLWExdT4m4F1vd3FV3VPt+0VxkQ=
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.6 h1:MrUvLMLTMxbqFJ9kzlvat/rYZqZnW3u4wkLzWTaFwKs=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.8 h1:QiWkFLKq0T7mpzwOTu6BzNDbfTE8OLrYhVKYMLF46Ok=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2 h1:DB17ag19krx9CFsz4o3enTrPXyIXCl+2iCXH/aMAp9s=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0 h1:LLgXmsheXeRoUOBOjtwPQCWIYqM/LU1ayDtDePerRcY=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/matryer/is v1.2.0 h1:92UTHpy8CDwaJ08GqLDzhhuixiBUUD1p3AU6PHddz4A=
github.com/matryer/is v1.2.0/go.mod h1:2fLPjFQM9rhQ15aVEtbuwhJinnOqrmgXPNdZsdwlWXA=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-proto-validators v0.0.0-20180403085117-0950a7990007 h1:28i1IjGcx8AofiB4N3q5Yls55VEaitzuEPkFJEVgGkA=
github.com/mwitkow/go-proto-validators v0.0.0-20180403085117-0950a7990007/go.mod h1:m2XC9Qq0AlmmVksL6FktJCdTYyLk7V3fKyp0sl1yWQo=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/openzipkin/zipkin-go v0.1.6 h1:yXiysv1CSK7Q5yjGy1710zZGnsbMUIjluWBxtLXHPBo=
github.com/opentracing/opentracing-go v1.1.0 h1:pWlfV3Bxv7k65HYwkikxat0+s3pV4bsqf19k25Ur8rU=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0 h1:vrDKnkGzuGvhNAL56c7DBz29ZL+KxnoR0x7enabFceM=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.2.1 h1:JnMpQc6ppsNgw9QPAGF6Dod479itz7lvlsMzzNayLOI=
github.com/prometheus/client_golang v1.2.1/go.mod h1:XMU6Z2MjaRKVu/dC1qupJI9SiNkDYzz3xecMgSW/F+U=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90 h1:S/YWwWx/RA8rT8tKFRuGUZhuA90OyIBpPCXkcbwU8DE=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1 h1:K0MGApIoQvMw27RTdJkPbr3JZ7DNbtxQNyi5STVM6Kw=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.7.0 h1:L+1lyG48J1zAQXA3RBX/nG/B3gjlHq0zTt2tlbJLyCY=
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2 h1:6LJUbpNm42llc4HRCuvApCSWB/WfhuNo9K98Q9sNGfs=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.5 h1:3+auTFlqw+ZaQYJARz6ArODtkaIwtvBTx3N2NehQlL8=
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/pseudomuto/protoc-gen-doc v1.3.2 h1:61vWZuxYa8D7Rn4h+2dgoTNqnluBmJya2MgbqO32z6g=
github.com/pseudomuto/protoc-gen-doc v1.3.2/go.mod h1:y5+P6n3iGrbKG+9O04V5ld71in3v/bX88wUwgt+U8EA=
github.com/pseudomuto/protokit v0.2.0 h1:hlnBDcy3YEDXH7kc9gV+NLaN0cDzhDvD1s7Y6FZ8RpM=
github.com/pseudomuto/protokit v0.2.0/go.mod h1:2PdH30hxVHsup8KpBTOXTBeMVhJZVio3Q8ViKSAXT0Q=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
@@ -199,44 +269,68 @@ github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6Mwd
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/afero v1.2.1 h1:qgMbHoJbPbw579P+1zVY+6n4nIFuIchaIjzZ/I/Yq8M=
github.com/spf13/afero v1.2.1/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/viper v1.4.0 h1:yXHLWeravcrgGyFSyCgdYpXQ9dR9c/WED3pg1RhxqEU=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.5.0 h1:GpsTwfsQ27oS/Aha/6d1oD7tpKIqWnOA6tgOX9HHkt4=
github.com/spf13/viper v1.5.0/go.mod h1:AkYRkVJF8TkSG/xet6PzXX+l39KhhXa2pdqVSxnTcn4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v0.0.0-20170130113145-4d4bfba8f1d1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yuin/gopher-lua v0.0.0-20190206043414-8bfc7677f583 h1:SZPG5w7Qxq7bMcMVl6e3Ht2X7f+AAGQdzjkbyOnNNZ8=
github.com/yuin/gopher-lua v0.0.0-20190206043414-8bfc7677f583/go.mod h1:gqRgreBUhTSL0GeU64rtZ3Uq3wtjOa/TB2YfrtkCbVQ=
github.com/yuin/gopher-lua v0.0.0-20191213034115-f46add6fdb5c h1:RCby8AaF+weuP1M+nwMQ4uQYO2shgD6UFAKvnXszwTw=
github.com/yuin/gopher-lua v0.0.0-20191213034115-f46add6fdb5c/go.mod h1:gqRgreBUhTSL0GeU64rtZ3Uq3wtjOa/TB2YfrtkCbVQ=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0 h1:C9hSCOW830chIVkdja34wa6Ky+IzWllkUinR+BtRZd4=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.1 h1:8dP3SGL7MPB94crU3bEPplMPe83FI4EouesJUeFHv50=
go.opencensus.io v0.22.1/go.mod h1:Ap50jQcDJrx6rB6VgeeFPtuPIf3wMRvRfrfYDO6+BmA=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180501155221-613d6eafa307/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191105034135-c7e5f84aec59 h1:PyXRxSVbvzDGuqYXjHndV7xDzJ7w2K8KD9Ef8GB7KOE=
golang.org/x/crypto v0.0.0-20191105034135-c7e5f84aec59/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191002040644-a1355ae1e2c3/go.mod h1:NOZ3BPKG0ec/BKJQgnvsSFpcKLM5xXVWnvZS97DWHgE=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -247,21 +341,30 @@ golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73r
golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092 h1:4QSRKanuywn15aTZvI/mIDEgPQpswuFndXpOj3rKEco=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190912160710-24e19bdeb0f2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191002035440-2ec189313ef0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191105084925-a882066a44e0 h1:QPlSTtPE2k6PZPasQUbzuK3p9JbS+vMXYVto8g/yrsg=
golang.org/x/net v0.0.0-20191105084925-a882066a44e0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU=
golang.org/x/sync v0.0.0-20190412183630-56d357773e84/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -270,17 +373,26 @@ golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190204203706-41f3e6584952/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b h1:ag/x1USPSsqHud38I9BAC88qdNLDHHtQ4mlgQIZPPNA=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190712062909-fae7ac547cb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190912141932-bc967efca4b8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd h1:3x5uuvBgE6oaXJjCOvpCC1IpgJogqQ+PqGGU3ZxAgII=
golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0 h1:/5xXl8Y5W96D+TtHSlonuFqGHIWVuyCkGJLwGh9JJFs=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -288,36 +400,65 @@ golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGm
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190927191325-030b2cf1153e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191010171213-8abd42400456/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/api v0.3.2/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.5.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.6.0 h1:2tJEkRfnZL5g1GeBUlITh/rqT5HG3sFcoVCUUxmgJ2g=
google.golang.org/api v0.6.0/go.mod h1:btoxGiFvQNVUZQ8W08zLtrVS08CNpINPEfxXxgJL1Q4=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.10.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0 h1:Q3Ui3V3/CVinFWFiW39Iw0kMuVrRzYX0wN6OPFp0lTA=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0 h1:KxkO13IPW4Lslp2bz+KHP2E3gtFlrIGNThxkZQ3g+4c=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20181107211654-5fc9ac540362/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190611190212-a7e196e89fd3 h1:0LGHEA/u5XLibPOx6D7D8FBT/ax6wT57vNKY0QckCwo=
google.golang.org/genproto v0.0.0-20190611190212-a7e196e89fd3/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190716160619-c506a9f90610/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20190927181202-20e1ac93f88c/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191009194640-548a555dbc03/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6 h1:UXl+Zk3jqqcbEVV7ace5lrt4YdA4tXiz3f/KbmD29Vo=
google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1 h1:j6XxA85m/6txkUCHvzlV5f+HBNl/1r5cZ2A/3IEFOO8=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.22.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.25.0 h1:ItERT+UbGdX+s4u+nQNlVM/Q7cbmf7icKfvzbWqVtq0=
google.golang.org/grpc v1.25.0/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
@@ -325,16 +466,25 @@ gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5 h1:ymVxjfMaHvXD8RqPRmzHHsB3VvucivSkIAvJFDI5O3c=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/api v0.0.0-20190708094356-59223ed9f6ce h1:5dyrqZX6sr+cG/e26KF2p6GuTIABBACjjGSOSdjS9sI=
k8s.io/api v0.0.0-20190708094356-59223ed9f6ce/go.mod h1:iuAfoD4hCxJ8Onx9kaTIt30j7jUFS00AXQi6QMi99vA=
k8s.io/apimachinery v0.0.0-20190221084156-01f179d85dbc h1:7z9/6jKWBqkK9GI1RRB0B5fZcmkatLQ/nv8kysch24o=
k8s.io/apimachinery v0.0.0-20190221084156-01f179d85dbc/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0=
k8s.io/client-go v9.0.0+incompatible h1:2kqW3X2xQ9SbFvWZjGEHBLlWc1LG9JIJNXWkuqwdZ3A=
k8s.io/client-go v9.0.0+incompatible/go.mod h1:7vJpHMYJwNQCWgzmNV+VYUl1zCObLyodBc8nIyt8L5s=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
k8s.io/api v0.0.0-20191004102255-dacd7df5a50b h1:38Nx0U83WjBqn1hUWxlgKc7mvH7WhyHfypxeW3zWwCQ=
k8s.io/api v0.0.0-20191004102255-dacd7df5a50b/go.mod h1:iuAfoD4hCxJ8Onx9kaTIt30j7jUFS00AXQi6QMi99vA=
k8s.io/apimachinery v0.0.0-20191004074956-01f8b7d1121a h1:lDydUqHrbL/1l5ZQrqD1RIlabhmX8aiZEtxVUb+30iU=
k8s.io/apimachinery v0.0.0-20191004074956-01f8b7d1121a/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0=
k8s.io/client-go v0.0.0-20191004102537-eb5b9a8cfde7 h1:WyPHgjjXvF4zVVwKGZKKiJGBUW45AuN44uSOuH8euuE=
k8s.io/client-go v0.0.0-20191004102537-eb5b9a8cfde7/go.mod h1:7vJpHMYJwNQCWgzmNV+VYUl1zCObLyodBc8nIyt8L5s=
k8s.io/klog v1.0.0 h1:Pt+yjF5aB1xDSVbau4VsWe+dQNzA0qv1LlXdC2dF6Q8=
k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=

View File

@@ -0,0 +1,148 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: open-match-demo
labels:
app: open-match-demo
release: open-match-demo
---
kind: Service
apiVersion: v1
metadata:
name: om-function
namespace: open-match-demo
labels:
app: open-match-customize
component: matchfunction
release: open-match-demo
spec:
selector:
app: open-match-customize
component: matchfunction
release: open-match-demo
clusterIP: None
type: ClusterIP
ports:
- name: grpc
protocol: TCP
port: 50502
- name: http
protocol: TCP
port: 51502
---
kind: Service
apiVersion: v1
metadata:
name: om-demo
namespace: open-match-demo
labels:
app: open-match-demo
component: demo
release: open-match-demo
spec:
selector:
app: open-match-demo
component: demo
type: ClusterIP
ports:
- name: http
protocol: TCP
port: 51507
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: om-function
namespace: open-match-demo
labels:
app: open-match-customize
component: matchfunction
release: open-match-demo
spec:
replicas: 3
selector:
matchLabels:
app: open-match-customize
component: matchfunction
template:
metadata:
namespace: open-match-demo
labels:
app: open-match-customize
component: matchfunction
release: open-match-demo
spec:
containers:
- name: om-function
image: "gcr.io/open-match-public-images/openmatch-mmf-go-soloduel:0.0.0-dev"
ports:
- name: grpc
containerPort: 50502
- name: http
containerPort: 51502
imagePullPolicy: Always
resources:
requests:
memory: 100Mi
cpu: 100m
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: om-demo
namespace: open-match-demo
labels:
app: open-match-demo
component: demo
release: open-match-demo
spec:
replicas: 1
selector:
matchLabels:
app: open-match-demo
component: demo
template:
metadata:
namespace: open-match-demo
labels:
app: open-match-demo
component: demo
release: open-match-demo
spec:
containers:
- name: om-demo
image: "gcr.io/open-match-public-images/openmatch-demo-first-match:0.0.0-dev"
imagePullPolicy: Always
ports:
- name: http
containerPort: 51507
livenessProbe:
httpGet:
scheme: HTTP
path: /healthz
port: 51507
initialDelaySeconds: 5
periodSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
scheme: HTTP
path: /healthz?readiness=true
port: 51507
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 2

View File

@@ -1,20 +1,18 @@
### Open Match Helm Chart Templates
This directory contains the [helm](https://helm.sh/ "helm") chart templates used to customize and deploy Open Match.
Templates under the `templates/` directory are for the core components in Open Match - e.g. the backend, frontend, mmlogic, and synchronizer, as well as some security policies and configmaps, are all defined under this folder.
Templates under the `templates/` directory are for the core components in Open Match - e.g. the backend, frontend, query, and synchronizer, as well as some security policies and configmaps, are all defined under this folder.
Open Match also provides templates for optional components that are disabled by default under the `subcharts/` directory.
1. `open-match-demo` contains the template for a sample director.
2. `open-match-customize` contains flexible templates to deploy your own matchfunction and evaluator.
3. `open-match-telemetry` contains monitoring support for Open Match; you may choose to enable/disable [jaeger](https://www.jaegertracing.io/ "jaeger"), [prometheus](http://prometheus.io "prometheus"), [stackdriver](https://cloud.google.com/stackdriver/ "stackdriver"), [zipkin](https://zipkin.io/ "zipkin"), and [grafana](https://grafana.com/ "grafana") by overriding the config values in the provided templates.
4. `open-match-test` contains templates of the end-to-end in-cluster tests and distributed stress tests for Open Match.
1. `open-match-customize` contains flexible templates to deploy your own matchfunction and evaluator.
2. `open-match-telemetry` contains monitoring support for Open Match; you may choose to enable/disable [jaeger](https://www.jaegertracing.io/ "jaeger"), [prometheus](http://prometheus.io "prometheus"), [stackdriver](https://cloud.google.com/stackdriver/ "stackdriver"), and [grafana](https://grafana.com/ "grafana") by overriding the config values in the provided templates.
You may control the behavior of Open Match by overriding the configs in the `install/helm/open-match/values.yaml` file. Here are a few examples:
```diff
# install/helm/open-match/values.yaml
# 1. Configs under the `global` section affect all components - including components in the subcharts.
# 2. Configs under the subchart name - e.g. `open-match-test` - affect only the settings in that subchart.
# 2. Configs under the subchart name - e.g. `open-match-customize` - affect only the settings in that subchart.
# 3. Otherwise, the configs are for core components (templates in the parent chart) only.
# Overrides spec.type of a specific Kubernetes Service
@@ -40,8 +38,8 @@ global:
+ enabled: true
# Enables an optional component in Open Match
# Equivalent helm cli flag --set open-match-demo.enabled=true
open-match-demo:
# Equivalent helm cli flag --set open-match-telemetry.enabled=true
open-match-telemetry:
- enabled: false
+ enabled: true
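For illustration only (not part of the chart), a minimal override file along these lines could enable the optional subcharts and switch the service type, then be applied with something like `helm install open-match ./install/helm/open-match -f my-values.yaml` (release name, chart path, and the file name `my-values.yaml` are assumptions, not settings shipped with Open Match):

```yaml
# my-values.yaml - hypothetical override file; values are illustrative only
global:
  kubernetes:
    service:
      # Overrides spec.type of the Open Match Kubernetes Services
      # (the per-component defaults are ClusterIP)
      portType: LoadBalancer
# Enable optional subcharts; equivalent to --set open-match-telemetry.enabled=true etc.
open-match-telemetry:
  enabled: true
open-match-customize:
  enabled: true
```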

View File

@@ -12,10 +12,27 @@
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
appVersion: "0.0.0-dev"
version: 0.0.0-dev
apiVersion: v2
appVersion: "1.1.0"
version: 1.1.0
name: open-match
dependencies:
- name: redis
version: 9.5.0
repository: https://charts.helm.sh/stable
condition: open-match-core.redis.enabled
- name: open-match-telemetry
version: 0.0.0-dev
condition: open-match-telemetry.enabled
repository: "file://./subcharts/open-match-telemetry"
- name: open-match-customize
version: 0.0.0-dev
condition: open-match-customize.enabled
repository: "file://./subcharts/open-match-customize"
- name: open-match-scale
version: 0.0.0-dev
condition: open-match-scale.enabled
repository: "file://./subcharts/open-match-scale"
description: Flexible, extensible, and scalable video game matchmaking.
keywords:
- kubernetes
@@ -33,4 +50,3 @@ maintainers:
url: https://groups.google.com/forum/#!forum/open-match-discuss
engine: gotpl
icon: https://open-match.dev/site/images/logo.svg
tillerVersion: ">2.10.0"

View File

@@ -1,21 +0,0 @@
dependencies:
- name: redis
repository: https://kubernetes-charts.storage.googleapis.com/
version: 8.0.9
- name: open-match-demo
repository: file://./subcharts/open-match-demo
version: 0.0.0-dev
- name: open-match-telemetry
repository: file://./subcharts/open-match-telemetry
version: 0.0.0-dev
- name: open-match-customize
repository: file://./subcharts/open-match-customize
version: 0.0.0-dev
- name: open-match-test
repository: file://./subcharts/open-match-test
version: 0.0.0-dev
- name: open-match-scale
repository: file://./subcharts/open-match-scale
version: 0.0.0-dev
digest: sha256:58df48dae8d884f81dc06a3a2150082d68b271fc204abb19da12d1bf892647e1
generated: "2019-09-18T01:18:15.939673104-07:00"

View File

@@ -1,39 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
dependencies:
- name: redis
version: 8.0.9
repository: https://kubernetes-charts.storage.googleapis.com/
condition: open-match-core.enabled
- name: open-match-demo
version: 0.0.0-dev
condition: open-match-demo.enabled
repository: "file://./subcharts/open-match-demo"
- name: open-match-telemetry
version: 0.0.0-dev
condition: open-match-telemetry.enabled
repository: "file://./subcharts/open-match-telemetry"
- name: open-match-customize
version: 0.0.0-dev
condition: open-match-customize.enabled
repository: "file://./subcharts/open-match-customize"
- name: open-match-test
version: 0.0.0-dev
condition: open-match-test.enabled
repository: "file://./subcharts/open-match-test"
- name: open-match-scale
version: 0.0.0-dev
condition: open-match-scale.enabled
repository: "file://./subcharts/open-match-scale"

View File

@@ -0,0 +1,20 @@
{*
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*}
{{/* vim: set filetype=mustache: */}}
{{- define "openmatchcustomize.function.hostName" -}}
{{- .Values.function.hostName | default (printf "%s-function" (include "openmatch.fullname" . ) ) -}}
{{- end -}}

View File

@@ -12,10 +12,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Ugly workaround to split out MMF and evaluator
# TODO: Reconsider helm chart structure and move things out after v0.8 release
{{- if index .Values "evaluator" "enabled" }}
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.evaluator.hostName }}
name: {{ include "openmatch.evaluator.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@@ -43,20 +46,20 @@ spec:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: {{ .Values.evaluator.hostName }}
name: {{ include "openmatch.evaluator.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .Values.evaluator.hostName }}
name: {{ include "openmatch.evaluator.hostName" . }}
{{- include "openmatch.HorizontalPodAutoscaler.spec.common" . | nindent 2 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.evaluator.hostName }}
name: {{ include "openmatch.evaluator.hostName" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "openmatch.name" . }}
@@ -80,13 +83,13 @@ spec:
release: {{ .Release.Name }}
spec:
volumes:
{{- include "openmatch.volumes.configs" . | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.evaluatorConfigs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.evaluator.hostName }}
- name: {{ include "openmatch.evaluator.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" . | nindent 10 }}
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.evaluatorConfigs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}
image: "{{ .Values.global.image.registry }}/{{ .Values.evaluator.image}}:{{ .Values.global.image.tag }}"
ports:
@@ -95,4 +98,5 @@ spec:
- name: http
containerPort: {{ .Values.evaluator.httpPort }}
{{- include "openmatch.container.common" . | nindent 8 }}
{{- include "kubernetes.probe" (dict "port" .Values.evaluator.httpPort "isHTTPS" .Values.global.tls.enabled) | nindent 8 }}
{{- end }}

View File

@@ -12,10 +12,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Ugly workaround to split out MMF and evaluator
# TODO: Reconsider helm chart structure and move things out after v0.8 release
{{- if index .Values "function" "enabled" }}
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.function.hostName }}
name: {{ include "openmatchcustomize.function.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@@ -43,20 +46,20 @@ spec:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: {{ .Values.function.hostName }}
name: {{ include "openmatchcustomize.function.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .Values.function.hostName }}
name: {{ include "openmatchcustomize.function.hostName" . }}
{{- include "openmatch.HorizontalPodAutoscaler.spec.common" . | nindent 2 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.function.hostName }}
name: {{ include "openmatchcustomize.function.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@@ -81,13 +84,13 @@ spec:
release: {{ .Release.Name }}
spec:
volumes:
{{- include "openmatch.volumes.configs" . | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.mmfConfigs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.function.hostName }}
- name: {{ include "openmatchcustomize.function.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" . | nindent 10 }}
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.mmfConfigs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}
image: "{{ .Values.global.image.registry }}/{{ .Values.function.image}}:{{ .Values.global.image.tag }}"
ports:
@@ -96,4 +99,5 @@ spec:
- name: http
containerPort: {{ .Values.function.httpPort }}
{{- include "openmatch.container.common" . | nindent 8 }}
{{- include "kubernetes.probe" (dict "port" .Values.function.httpPort "isHTTPS" .Values.global.tls.enabled) | nindent 8 }}
{{- end }}

View File

@@ -12,25 +12,48 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Default values for open-match-test.
# Default values for open-match-customize.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
function:
enabled: false
replicas: 3
portType: ClusterIP
image: openmatch-mmf-go-soloduel
evaluator:
enabled: false
replicas: 3
portType: ClusterIP
image: openmatch-evaluator-go-simple
image: openmatch-default-evaluator
configs:
evaluatorConfigs:
# We use harness to implement the MMFs. MMF itself only requires one configmap but harness expects two,
# so we mount the same configmap twice to bypass this restriction.
# TODO: Remove this bit after deprecating the harness dependency on configmap
om-configmap-default:
default:
volumeName: om-config-volume-default
mountPath: /app/config/default
customize-configmap:
volumeName: customize-config-volume
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.default" . }}'
customize:
volumeName: om-config-volume-override
mountPath: /app/config/override
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.override" . }}'
mmfConfigs:
# We use harness to implement the MMFs. MMF itself only requires one configmap but harness expects two,
# so we mount the same configmap twice to bypass this restriction.
# TODO: Remove this bit after deprecating the harness dependency on configmap
default:
volumeName: om-config-volume-default
mountPath: /app/config/default
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.default" . }}'
customize:
volumeName: om-config-volume-override
mountPath: /app/config/override
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.override" . }}'
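As a hedged sketch only, overriding these values from the parent chart might look like the snippet below; `my-mmf` and `gcr.io/my-project` are placeholders for your own match function image and registry, not values shipped with the chart:

```yaml
# Hypothetical parent-chart override; the image name and registry are placeholders.
open-match-customize:
  enabled: true
  function:
    enabled: true
    replicas: 3
    image: my-mmf            # resolved as {registry}/{image}:{tag} by the deployment template
  evaluator:
    enabled: true
    image: openmatch-default-evaluator
global:
  image:
    registry: gcr.io/my-project   # placeholder registry
    tag: 0.0.0-dev
```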

View File

@@ -1,75 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.demo.hostName }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
app: {{ template "openmatch.name" . }}
component: demo
release: {{ .Release.Name }}
spec:
selector:
app: {{ template "openmatch.name" . }}
component: demo
type: {{ coalesce .Values.global.kubernetes.service.portType .Values.demo.portType }}
ports:
- name: http
protocol: TCP
port: {{ .Values.demo.httpPort }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.demo.hostName }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
app: {{ template "openmatch.name" . }}
component: demo
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.demo.replicas }}
selector:
matchLabels:
app: {{ template "openmatch.name" . }}
component: demo
template:
metadata:
namespace: {{ .Release.Namespace }}
annotations:
{{- include "openmatch.chartmeta" . | nindent 8 }}
labels:
app: {{ template "openmatch.name" . }}
component: demo
release: {{ .Release.Name }}
spec:
volumes:
{{- include "openmatch.volumes.configs" . | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
containers:
- name: {{ .Values.demo.hostName }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" . | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}
image: "{{ .Values.global.image.registry }}/{{ .Values.demo.image}}:{{ .Values.global.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.demo.httpPort }}
{{- include "kubernetes.probe" (dict "port" .Values.demo.httpPort) | nindent 8 }}

View File

@@ -1,38 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Default values for open-match-demo.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
demo:
hostName: om-demo
httpPort: 51507
portType: ClusterIP
replicas: 1
image: openmatch-demo-first-match
image:
registry: gcr.io/open-match-public-images
tag: 0.0.0-dev
pullPolicy: Always
# TODO: Split tls configs into a separate config file. For now Open Match assumes core components share the same secure mode
# with the mmfs and evaluator, so we have to copy these secure settings and define a new configmap for it whenever we want
# to create a new evaluator and mmf. We should create a global configmap for the security settings for all subcharts
# under the /install/helm/open-match directory to avoid copy&paste files around.
configs:
demo-configmap:
mountPath: /app/config/om
volumeName: demo-config-volume

View File

@@ -0,0 +1,934 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"links": [],
"panels": [
{
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 16,
"title": "Iterations",
"type": "row"
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"fill": 1,
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 1
},
"id": 4,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(scale_backend_iterations[5m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Iterations per second",
"refId": "A"
}
],
"thresholds": [
{
"colorMode": "ok",
"fill": true,
"line": true,
"op": "gt",
"value": 1,
"yaxis": "left"
}
],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Fetch Match Iterations",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 9
},
"id": 14,
"panels": [],
"title": "Tickets",
"type": "row"
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"description": "",
"fill": 1,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 10
},
"id": 2,
"legend": {
"avg": true,
"current": true,
"hideEmpty": false,
"hideZero": false,
"max": true,
"min": true,
"show": true,
"total": false,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"repeat": null,
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(scale_frontend_tickets_created[5m]))",
"format": "time_series",
"instant": false,
"interval": "",
"intervalFactor": 1,
"legendFormat": "Tickets Created per second",
"refId": "A"
},
{
"expr": "sum(rate(scale_frontend_ticket_creations_failed[5m]))",
"format": "time_series",
"hide": false,
"instant": false,
"intervalFactor": 1,
"legendFormat": "Ticket Creations Failed per second",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Scale Frontend Ticket Creation",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"fill": 1,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 10
},
"id": 12,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(scale_backend_sum_tickets_returned[5m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Backend Tickets in Matches pers second",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Tickets In Matches",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"cacheTimeout": null,
"dashLength": 10,
"dashes": false,
"fill": 1,
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 19
},
"id": 24,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(scale_frontend_runners_waiting)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Runners Waiting To Start",
"refId": "A"
},
{
"expr": "sum(scale_frontend_runners_creating)",
"format": "time_series",
"instant": false,
"intervalFactor": 1,
"legendFormat": "Runners Creating Ticket",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Outstanding Frontend Runners",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"fill": 1,
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 19
},
"id": 22,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(scale_backend_tickets_deleted[5m]))",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "Backend Tickets Deleted per second",
"refId": "B"
},
{
"expr": "sum(rate(scale_backend_ticket_deletes_failed[5m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Backend Ticket Deletions Failed per second",
"refId": "C"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Ticket Deletion",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 27
},
"id": 18,
"panels": [],
"title": "Fetch MatchCalls",
"type": "row"
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"fill": 1,
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 28
},
"id": 6,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(scale_backend_fetch_match_calls[5m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Fetch Match Calls Started per second",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Fetch Match Calls Started",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": null,
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"fill": 1,
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 28
},
"id": 19,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(scale_backend_fetch_match_successes[5m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Fetch Match Calls Succeeding per second",
"refId": "A"
},
{
"expr": "sum(rate(scale_backend_fetch_match_errors[5m]))",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "Fetch Match Calls Ending In Errors per second",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Fetch Match Results",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 36
},
"id": 21,
"panels": [],
"title": "Matches",
"type": "row"
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"fill": 1,
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 37
},
"id": 8,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(scale_backend_matches_returned[5m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Matches Returned From Fetch Matches per second",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Matches Made",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"fill": 1,
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 37
},
"id": 10,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(scale_backend_matches_assigned[5m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Matches Assigned per second",
"refId": "A"
},
{
"expr": "sum(rate(scale_backend_match_assigns_failed[5m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "Match Assignments Failed per second",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Match Assignment",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"refresh": "",
"schemaVersion": 18,
"style": "dark",
"tags": [],
"templating": {
"list": []
},
"time": {
"from": "now-15m",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"1s",
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "",
"title": "Scale",
"uid": "PCNBQKPWk",
"version": 1
}


@@ -0,0 +1,42 @@
{*
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*}
{{/* vim: set filetype=mustache: */}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "openmatchscale.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{- define "openmatchscale.scaleBackend.hostName" -}}
{{- .Values.scaleBackend.hostName | default (printf "%s-backend" (include "openmatchscale.fullname" . ) ) -}}
{{- end -}}
{{- define "openmatchscale.scaleFrontend.hostName" -}}
{{- .Values.scaleFrontend.hostName | default (printf "%s-frontend" (include "openmatchscale.fullname" . ) ) -}}
{{- end -}}
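For reference, a rough sketch of how the helpers above resolve, assuming a release named om-demo and a chart name of open-match-scale (both names are assumptions, not taken from this diff):
# Hypothetical rendering with .Release.Name = "om-demo", .Chart.Name = "open-match-scale",
# and neither fullnameOverride nor the hostName values set:
#   include "openmatchscale.fullname" .               => om-demo-open-match-scale
#   include "openmatchscale.scaleBackend.hostName" .  => om-demo-open-match-scale-backend
#   include "openmatchscale.scaleFrontend.hostName" . => om-demo-open-match-scale-frontend
# The derived fullname is truncated to 63 characters before the -backend/-frontend
# suffix is appended; setting .Values.scaleBackend.hostName, .Values.scaleFrontend.hostName,
# or fullnameOverride bypasses the derived name.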


@@ -15,7 +15,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.scaleBackend.hostName }}
name: {{ include "openmatchscale.scaleBackend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@@ -34,7 +34,7 @@ spec:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.scaleBackend.hostName }}
name: {{ include "openmatchscale.scaleBackend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@@ -52,19 +52,20 @@ spec:
namespace: {{ .Release.Namespace }}
annotations:
{{- include "openmatch.chartmeta" . | nindent 8 }}
{{- include "prometheus.annotations" (dict "port" .Values.scaleBackend.httpPort "prometheus" .Values.global.telemetry.prometheus) | nindent 8 }}
labels:
app: {{ template "openmatch.name" . }}
component: scaleBackend
release: {{ .Release.Name }}
spec:
volumes:
{{- include "openmatch.volumes.configs" . | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.configs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.scaleBackend.hostName }}
- name: {{ include "openmatchscale.scaleBackend.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" . | nindent 10 }}
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.configs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}
image: "{{ .Values.global.image.registry }}/{{ .Values.scaleBackend.image}}:{{ .Values.global.image.tag }}"
ports:


@ -1,54 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: ConfigMap
metadata:
name: scale-configmap
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
app: {{ template "openmatch.name" . }}
component: config
release: {{ .Release.Name }}
data:
matchmaker_config.yaml: |-
api:
frontend:
hostname: "{{ .Values.frontend.hostName }}"
grpcport: "{{ .Values.frontend.grpcPort }}"
backend:
hostname: "{{ .Values.backend.hostName }}"
grpcport: "{{ .Values.backend.grpcPort }}"
testConfig:
profile: "{{ .Values.testConfig.profile }}"
concurrentCreates: "{{ .Values.testConfig.concurrentCreates }}"
regions:
{{- range .Values.testConfig.regions }}
- {{ . }}
{{- end }}
characters:
{{- range .Values.testConfig.characters }}
- {{ . }}
{{- end }}
minRating: "{{ .Values.testConfig.minRating }}"
maxRating: "{{ .Values.testConfig.maxRating }}"
ticketsPerMatch: "{{ .Values.testConfig.ticketsPerMatch }}"
multifilter:
rangeSize: "{{ .Values.testConfig.multifilter.rangeSize }}"
rangeOverlap: "{{ .Values.testConfig.multifilter.rangeOverlap }}"
multipool:
rangeSize: "{{ .Values.testConfig.multipool.rangeSize }}"
rangeOverlap: "{{ .Values.testConfig.multipool.rangeOverlap }}"
characterCount: "{{ .Values.testConfig.multipool.characterCount }}"


@@ -15,7 +15,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.scaleFrontend.hostName }}
name: {{ include "openmatchscale.scaleFrontend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@@ -31,10 +31,10 @@ spec:
protocol: TCP
port: {{ .Values.scaleFrontend.httpPort }}
---
apiVersion: apps/v1
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ .Values.scaleFrontend.hostName }}
name: {{ include "openmatchscale.scaleFrontend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@@ -52,19 +52,20 @@ spec:
namespace: {{ .Release.Namespace }}
annotations:
{{- include "openmatch.chartmeta" . | nindent 8 }}
{{- include "prometheus.annotations" (dict "port" .Values.scaleFrontend.httpPort "prometheus" .Values.global.telemetry.prometheus) | nindent 8 }}
labels:
app: {{ template "openmatch.name" . }}
component: scaleFrontend
release: {{ .Release.Name }}
spec:
volumes:
{{- include "openmatch.volumes.configs" . | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.configs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.scaleFrontend.hostName }}
- name: {{ include "openmatchscale.scaleFrontend.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" . | nindent 10 }}
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.configs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}
image: "{{ .Values.global.image.registry }}/{{ .Values.scaleFrontend.image}}:{{ .Values.global.image.tag }}"
ports:


@@ -0,0 +1,25 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{{- if .Values.global.telemetry.grafana.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "openmatchscale.fullname" . }}-dashboard
namespace: {{ .Release.Namespace }}
labels:
grafana_dashboard: "1"
data:
{{- (.Files.Glob "dashboards/*.json").AsConfig | nindent 2 }}
{{- end }}
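A sketch of what this template could render for a single bundled dashboard; the file name dashboards/scale.json, the release name om-demo, and the namespace are assumptions, not taken from this diff:
apiVersion: v1
kind: ConfigMap
metadata:
  name: om-demo-open-match-scale-dashboard   # "<openmatchscale.fullname>-dashboard"
  namespace: open-match
  labels:
    grafana_dashboard: "1"
data:
  scale.json: |
    { ...dashboard JSON shipped under dashboards/... }
Grafana deployments that run a dashboard-provisioning sidecar typically watch for ConfigMaps carrying the grafana_dashboard label and load each data entry as a dashboard.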


@@ -13,40 +13,25 @@
# limitations under the License.
scaleFrontend:
hostName: om-scale-frontend
hostName:
httpPort: 51509
replicas: 1
image: openmatch-scale-frontend
scaleBackend:
hostName: om-scale-backend
httpPort: 51510
hostName:
httpPort: 51509
replicas: 1
image: openmatch-scale-backend
configs:
scale-configmap:
mountPath: /app/config/om
volumeName: scale-config-volume
testConfig:
profile: greedy
concurrentCreates: 500
regions:
- region.europe-west1
- region.europe-west2
- region.europe-west3
- region.europe-west4
characters:
- cleric
- knight
minRating: 0
maxRating: 100
ticketsPerMatch: 8
multifilter:
rangeSize: 10
rangeOverlap: 5
multipool:
rangeSize: 10
rangeOverlap: 5
characterCount: 4
default:
volumeName: om-config-volume-default
mountPath: /app/config/default
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.default" . }}'
override:
volumeName: om-config-volume-override
mountPath: /app/config/override
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.override" . }}'


@@ -12,8 +12,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
apiVersion: v2
appVersion: "0.0.0-dev"
description: A chart to deploy telemetry support for Open Match
name: open-match-telemetry
version: 0.0.0-dev
dependencies:
- name: prometheus
version: 9.2.0
repository: https://charts.helm.sh/stable
condition: global.telemetry.prometheus.enabled,prometheus.enabled
- name: grafana
version: 4.0.1
repository: https://charts.helm.sh/stable
condition: global.telemetry.grafana.enabled,grafana.enabled
- name: jaeger
version: 0.13.3
repository: https://charts.helm.sh/stable
condition: global.telemetry.jaeger.enabled,jaeger.enabled
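Each dependency is switched on through the condition paths listed above. A values.yaml fragment like the following (paths taken from those conditions; the boolean values are assumptions) would enable Prometheus and Grafana while leaving Jaeger disabled:
global:
  telemetry:
    prometheus:
      enabled: true
    grafana:
      enabled: true
    jaeger:
      enabled: false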


@@ -20,5 +20,3 @@ Steps
Some templates came from the Grafana Labs site.
To update, copy and paste the URL into the import dashboard page, then click "Share" and save the JSON to a file.
Do not download the JSON directly from the website.
go-processes.json - https://grafana.com/dashboards/6671

Some files were not shown because too many files have changed in this diff.