Compare commits


448 Commits

Author SHA1 Message Date
a818dc6c06 Release changes for v1.7.0-rc.1 (#1534)
* Update k8s.io packages (#1531)

* update supported version of k8s.io/client-go

* update tutorial deps

* add context

* release changes for v1.7.0-rc.1
2023-02-11 14:10:17 +05:30
2e6aa4f36f adding Mark and Joseph (#1533) 2023-02-09 20:31:37 +05:30
50b4063bee add Content-Type and Transfer-Encoding to matchfunction:run POST request (#1530) 2023-01-27 15:06:03 +05:30
31a4a45d73 Bump github.com/gogo/protobuf from 1.3.1 to 1.3.2 (#1529)
Bumps [github.com/gogo/protobuf](https://github.com/gogo/protobuf) from 1.3.1 to 1.3.2.
- [Release notes](https://github.com/gogo/protobuf/releases)
- [Commits](https://github.com/gogo/protobuf/compare/v1.3.1...v1.3.2)

---
updated-dependencies:
- dependency-name: github.com/gogo/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-17 15:19:36 -05:00
67be35006c update to google.golang.org/protobuf and deprecation changes (#1444)
* update to google.golang.org/protobuf and deprecation changes

* update go version to 1.17

* updating deprecated grpc.Insecure() to insecure.NewCredentials() (see the sketch after this entry)

* update go to 1.18 to solve dependency issue

* lint disable typecheck

* adding create ticket test case condition

* limiting number of parallel tests

* tutorials dependency update

* update grpc protobuf files

* update go version to 1.19.3

* update go version to 1.19 in tutorials

* added command in make file to update tutorial-deps

* update deps

* update tutorial-deps

* add gke-gcloud-auth-plugin

* command make tutorial-deps

* fix tutorial-deps

* fix go.sum file

* fix deps
2023-01-17 13:58:56 -05:00
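
A minimal sketch of the credentials migration mentioned in the bullets of the entry above, assuming a plaintext gRPC connection; the target address is illustrative:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Before: conn, err := grpc.Dial(target, grpc.WithInsecure())
	conn, err := grpc.Dial(
		"om-frontend.open-match.svc.cluster.local:50504",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatalf("failed to dial: %v", err)
	}
	defer conn.Close()
}
```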
d0ce1b317f update helm chart repo link (#1524) 2023-01-05 12:43:20 -05:00
93cd5c7a9f Added Redis Enterprise Deploy Instructions (#1517) 2022-12-16 16:55:45 -05:00
3193921816 Update CODEOWNERS (#1513)
Add Andrew Grundy as CODEOWNER

Co-authored-by: Mridul Goswami <mridulgoswami@google.com>
2022-12-14 15:33:39 +05:30
7a3bb82089 Add Redis Enterprise tutorial for Open Match (#1512)
* Add tutorial for Open Match coupled with Redis Enterprise for data-layer

* Fixed namespace and removed duplicate command

* update formatting and additional step to wipe the cluster of a previous open match core install

* Update README.md

removed TODO

* make changes to solution/matchfunction/matchfunction.yaml for the correct namespace
2022-12-13 14:10:44 -05:00
a4eb6d6cbd add logging level configuration (#1511)
Allow users to set the logging level to something other than the hard-coded debug

Co-authored-by: Mridul Goswami <mridulgoswami@google.com>
2022-12-12 19:34:18 +05:30
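
A hedged sketch of what a configurable logging level can look like with logrus (the logging library used in this codebase); the level string would normally come from configuration and is illustrative here:

```go
package main

import (
	"github.com/sirupsen/logrus"
)

func main() {
	// The level string would normally be read from config rather than hard-coded.
	lvl, err := logrus.ParseLevel("info")
	if err != nil {
		lvl = logrus.DebugLevel // fall back to the previous hard-coded default
	}
	logrus.SetLevel(lvl)
	logrus.Debug("only emitted when the configured level allows it")
	logrus.Info("emitted at the configured info level")
}
```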
927a976a10 shifted e2e tests to project root (#1481) 2022-12-05 10:13:38 -05:00
33efc848ff Add open-match-override setting (#1490)
* Add open-match-override setting

* Added enabled

Co-authored-by: Jon Foust <38893532+syntxerror@users.noreply.github.com>
Co-authored-by: Mridul Goswami <mridulgoswami@google.com>
2022-12-05 13:58:29 +00:00
04c019c6cb Default values of configs (#1508)
* setting validation and default values of configs

* config check in internal/config package
2022-12-02 12:10:42 -05:00
1e51ad859c specify hpa for individual service (#1506) 2022-11-01 10:29:46 -04:00
fdd8783a34 Ticket metrics panels (#1499)
* change in calculation of active tickets

* grafana panels for new ticket metrics

* updated create cluster and proxy commands
2022-10-04 14:09:17 -04:00
036be6455d Added metrics for ticket behavior (#1491) (#1494)
* Added metrics for total number of tickets and total number of backfills (#1, #4 of proposed metrics)

* Fixed totalBackfillTicketsView Name

* Added metric for keeping track of tickets in pending state

* altered name of total tickets to total 'active' tickets to remove confusion

* updated pending tickets metric name

* Register totalActiveTicketsView and pendingTotalTicketsView
2022-09-21 12:38:47 -04:00
5d5f4de7a7 lower GOLANG_TEST_COUNT to 3 which allows test to pass locally. Patch fix for now (#1488) 2022-08-26 13:39:17 +05:30
a9f985d217 Add custom annotations to Service Account (#1469)
Co-authored-by: Jon Foust <38893532+syntxerror@users.noreply.github.com>
2022-08-23 14:01:11 -04:00
6598a55e74 Added persistent field to store any config/metadata in ticket and backfill (#1475)
* update persistent field when updating whole backfill

* Added persistent field in ticket and backfill
2022-08-18 01:27:34 -04:00
4d6da1632a Updated github.com/gogo/protobuf due to security vulnerability (#1459) 2022-08-02 13:41:53 -04:00
40a06447d0 Fix typo (#1436)
Co-authored-by: Jon Foust <38893532+syntxerror@users.noreply.github.com>
Co-authored-by: Mridul Goswami <mridulgoswami@google.com>
2022-07-18 22:48:47 +05:30
a9d122f50c Update WatchAssignment function (#1476)
* removed for loop from watchassignment function and shifted ctx.Done to callback function

* update  GKE version to regular supported
2022-07-18 11:55:38 -04:00
73ec73f2e8 add mridulji as codeowner (#1468) 2022-06-27 20:55:21 +05:30
361f8ff3db Added step to release template to update tutorial references for current version (#1464) 2022-06-22 12:12:44 -04:00
8297cac2b8 Set default value of assignedDeleteTimeout (#1465)
Co-authored-by: Jon Foust <38893532+syntxerror@users.noreply.github.com>
2022-06-21 12:44:42 -04:00
120a114647 Using uuid instead of time value to make unique matchId. (#1437)
* use uuid for matchId instead of time value, because matchfunction seems to be called concurrently, so I got 'multiple match functions used same match_id:' errors.

* use uuid for matchId instead of time value, because matchfunction seems to be called concurrently, so I got 'multiple match functions used same match_id:' errors.

* Revert "use uuid for matchId instead time value because matchfunction seems to be called concurrently so I got 'multiple match functions used same match_id:' errors."

This reverts commit 99b4e92ab9f1bc44feae3475702e769c83320f5a.

* use uuid for matchId instead of time value, because matchfunction seems to be called concurrently, so I got 'multiple match functions used same match_id:' errors.

Co-authored-by: Mridul Goswami <mridulgoswami@google.com>
Co-authored-by: Jon Foust <38893532+syntxerror@users.noreply.github.com>
2022-06-21 11:13:30 -04:00
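
A hedged sketch of the change described in the entry above: deriving the match ID from a UUID rather than a timestamp so that concurrently running match functions cannot collide on the same ID. The ID layout is illustrative, not the repository's exact format.

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

// matchID builds a unique match ID for a profile.
func matchID(profileName string) string {
	// Before (collision-prone when MMFs run concurrently):
	//   fmt.Sprintf("profile-%s-time-%v", profileName, time.Now())
	return fmt.Sprintf("profile-%s-%s", profileName, uuid.New().String())
}

func main() {
	fmt.Println(matchID("mode-demo"))
}
```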
7af54ee1bc update telemetry helm chart versions and gke version (#1462)
* update telemetry helm chart versions and gke version

* split configmaps for grafana dashboards

* splitting grafana dashboard configMaps by reading filenames

* renaming label grafana dashboard for converting into string

Co-authored-by: Jon Foust <38893532+syntxerror@users.noreply.github.com>
2022-06-21 10:33:09 -04:00
68cecb91e5 Adjust Helm README spaces for portType ClusterIP (#1445)
Co-authored-by: Jon Foust <38893532+syntxerror@users.noreply.github.com>
2022-06-07 15:47:07 -04:00
67dc60dba8 Update bugreport.md (#1455)
Remove unnecessary colon
2022-06-07 13:37:03 -04:00
09d1ff7171 fixing 404 response parser (#1461) 2022-06-07 11:32:30 -04:00
b6e5114715 Redis helm chart version change to 16.3.1 (#1440)
* upgrade helm version

* upgrade redis chart version

* required changes for latest redis chart version

Co-authored-by: Jon Foust <38893532+syntxerror@users.noreply.github.com>
2022-03-03 11:38:31 -05:00
23d2fd5042 Update CODEOWNERS 2022-03-03 10:06:51 -05:00
2b73d52e0c AssignTickets empty check and test cases added (#1438) 2022-02-07 09:52:16 -05:00
47c34587dc docker build optimization by using mount cache for go dependencies (#1435) 2022-01-25 19:31:45 -05:00
76937b6350 Redis default values update (#1430)
Set resource requests for each Redis component and set 'slaveCount' to 3 (this actually sets the total number of pods, and a minimum of 3 is required for a robust Redis Sentinel deployment; see: https://redis.io/topics/sentinel#fundamental-things-to-know-about-sentinel-before-deploying)
2021-11-29 11:06:48 -05:00
2e03c1a197 fix outdated apiVersion (#1419)
Recent k8s API versions removed Deployment from extensions/v1beta1; it's now in apps/v1
2021-09-22 13:05:31 -04:00
eca40e3298 re-enable workload identity (#1403) 2021-08-24 00:21:56 -04:00
902c9d69b4 Update development.md (#1406)
Update to the new main branch naming convention.
2021-08-23 21:27:16 -04:00
67767cf1cd updated default gke version. updated grpc version in go.mod files (#1402) 2021-07-28 16:29:42 -04:00
6f46731b15 Respond to AcknowledgeBackfill with the tickets that were assigned (#1382)
Fixes #1381
2021-06-09 13:15:28 -04:00
0d1a77c5de add andrewgrundy as codeowner (#1380) 2021-04-29 20:36:12 -04:00
f2a23f5ba1 add mode to profile name for range of game modes (#1375) 2021-04-16 17:01:24 -04:00
3fa588c1f8 Add backfill scenario to scale tests (#1339)
* Implement backfill querying

* Update location for stable and incubator charts

* Add MMF backfill example

* Simplify MMF backfill example

* Add backfill scenario to scale tests

* Update backfill scenario

* Improve backfill scenario

Co-authored-by: Alexander Apalikov <alexander.apalikov@globant.com>
2021-04-13 17:17:52 -04:00
cc08f39205 Sentinel fix (2) (#1369)
* update master

* fixing config.json

* add override for sentinel.usePassword

* Update go.sum

removed leftover from conflict
2021-04-07 14:35:50 -04:00
ec9cf00bcf Revert "Sentinel fix (#1367)" (#1368)
This reverts commit 8b8617f68d5aec70b1016912d07acfe31e3d12ab.
2021-04-05 13:03:23 -04:00
8b8617f68d Sentinel fix (#1367)
* update master

* override sentinel.usePassword to false

Co-authored-by: jonfoust <38893532+jonfoust@users.noreply.github.com>
2021-04-03 03:22:03 -04:00
ce9b989e58 Update to gRPC Gateway v2 (#1358) 2021-03-22 12:20:34 -04:00
5c00395c78 Updating jonfoust username to syntxerror as a code reviewer (#1363)
Co-authored-by: jonfoust <38893532+jonfoust@users.noreply.github.com>
2021-03-19 15:26:22 -07:00
faf3eded1f Return 404 when deleting ticket/backfill ticket that does not exist (#1352) 2021-02-16 15:36:23 -05:00
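
A hedged sketch of the behavior described in the entry above: a delete on a missing ticket or backfill surfaces a gRPC NotFound status, which the REST gateway maps to HTTP 404. The function and helper names here are illustrative, not the actual service code.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteTicket returns a NotFound status when the ticket does not exist.
func deleteTicket(id string, exists func(string) bool) error {
	if !exists(id) {
		return status.Errorf(codes.NotFound, "ticket %q not found", id)
	}
	// ... remove the ticket from the state store ...
	return nil
}

func main() {
	err := deleteTicket("ticket-123", func(string) bool { return false })
	fmt.Println(status.Code(err)) // NotFound
}
```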
250d44aefd Fix WatchAssignments causes memory leaks (#1350) 2021-01-29 16:02:23 -08:00
13fdf5960f Make tests output readable (#1349) 2021-01-27 16:33:31 +03:00
aa5a1f9da1 Fix minor typos (#1347) 2021-01-22 01:49:07 +03:00
ad1ca16218 Add string err comparisons to backfill e2e (#1344)
Make failure output more readable.
2021-01-20 17:00:56 +03:00
7d849f3f04 Backfill: Skip not found errors on Backend (#1341)
Backfill: Skip not found errors on Backend
There could be the case where a backfill returned by the MMF was deleted
in CleanupBackfills.
Add a UT to check that the error is skipped
2021-01-20 01:30:01 +03:00
05c8c8aa76 Fix leftover after 1080 PR (#1342) 2021-01-19 12:48:15 -08:00
f50c9eec80 Minor fixing some typos (#1343) 2021-01-19 11:26:17 -08:00
c6f23f01ca Improve proto comments (#1340) 2021-01-19 11:05:21 -08:00
21efdb6691 Move Cleanup Backfills after main SynchronizerCycle & add workers pool (#1334)
Use workers in the cleanup process. Move backfill cleanup to the end of the sync cycle.
TestCleanUpExpiredBackfills calls FetchMatches twice.
2021-01-19 10:43:08 +03:00
81a1dc38b6 add fix in helm chart to use custom redis instance (#1330)
Co-authored-by: Alexander Apalikov <alexander.apalikov@globant.com>
2021-01-18 16:52:22 +03:00
d0ddf22658 Expired backfills can not be updated or acknowledged (#1335)
* do not acknowledge expired backfills

* use NoError

* parse ZSCORE response to float, not to int

Co-authored-by: Alexander Apalikov <alexander.apalikov@globant.com>
2021-01-18 15:48:54 +03:00
ee247c6c1a Updated release steps. Added additional step to publish release notes to OM Blog (#1338) 2021-01-15 17:26:56 -05:00
a17eb3bc72 Fix proto comments for better markdown output (#1331) 2021-01-15 11:26:14 -08:00
3d194f541e Add help comments in Makefile (#1332)
* Add help comments in Makefile

* Delete utilities subtitle

Co-authored-by: Alexander Apalikov <alexander.apalikov@globant.com>

* Reorder subtitle definition

Co-authored-by: Alexander Apalikov <alexander.apalikov@globant.com>
2021-01-14 16:43:51 +03:00
3a0cd7611b Move the Redis chart to bitnami as update to 12.3.3 (#1315) 2021-01-12 11:58:52 -08:00
c13b461795 Make redis lock expiration configurable (#1325) 2021-01-07 13:29:27 -08:00
b9e55fc727 Add pod tolerations, nodeSelector and affinity in helm for subcharts (#1311)
Fix #1015

Co-authored-by: Scott Redig <sredig@google.com>
2021-01-07 13:05:30 -08:00
dd1386a55b Clean up expired backfills (#1297)
Add `CleanupBackfills()` call to synchronizer.
Put delete backfill logic to statestore.
Add mutex to DeleteBackfillCompletely and update deleteBackfill test.
Remove goroutine.
* use new context in CleanupBackfills().
* move cleanup to the start of the Synchronizer sync cycle.
Co-authored-by: Alexander <alexander.apalikov@globant.com>
2020-12-30 15:41:33 +03:00
defac9065b Frontend acknowledge backfill (#1293)
* Frontend: Add AcknowledgeBackfill method
Update Tickets associated with backfill, remove all assigned

Add Mutex lock, UpdateBackfill accordingly after the UpdateAssignments call.
The new function name, doUpdateAcknowledgmentTimestamp, seems more reasonable
as it only does Redis timestamp updates.

* Add Generation autoincrement test in test helper func
Add more logic as in doAssign() function
Deindex tickets and add error logging for all NotFound
tickets.
2020-12-28 16:37:29 +03:00
f203384fbf Add comments to MMF backfill example (#1320)
Add comments to MatchMaking Function with Backfill example.
It creates matches with Backfills first, then full 1v1 matches, and if one player is left over it creates a match with a new Backfill in it.

Co-authored-by: Alexander Apalikov <alexander.apalikov@globant.com>
2020-12-22 22:14:53 +03:00
7ef9c052bd Update backend service (#1318)
Add missing Backfill indexing on Create or Update Backfill on the backend.
Release tickets when a backfill generation mismatch happens.
Refactored: new doRelease() function for tickets.

Co-authored-by: Alexander Apalikov <alexander.apalikov@globant.com>
2020-12-22 21:43:00 +03:00
ea744b8b51 Fix install-scale-chart target (#1322)
* Implement backfill querying

* Update location for stable and incubator charts

* Add MMF backfill example

* Simplify MMF backfill example

* Render jaeger configuration if it is enabled

Helm fails to install open-match chart with disabled jaeger because it cannot find
openmatch.jaeger.agent template which is declared in jaeger subchart. Helm is not able to
find that template because jaeger subchart is not loaded because it is marked as disabled
in open-match chart dependencies.

* Update install-scale-chart target

Currently open-match-scale subchart is installed separately from open-match chart but they are tightly coupled.
Pods declared in scale subchart have dependencies on service accounts, config maps provisioned by open-match
chart. So the problem is that helm renders incorrect service account, config map names. It can be fixed by
specifying explicit names in install-scale-chart target.

Co-authored-by: Alexander Apalikov <alexander.apalikov@globant.com>
2020-12-22 21:14:27 +03:00
1a8fc62833 add @sawagh to codeowners (#1319) 2020-12-21 20:51:50 +03:00
1d5574b8a3 MMF backfill example (#1317)
* Implement backfill querying

* Update location for stable and incubator charts

* Add MMF backfill example

* Simplify MMF backfill example

Co-authored-by: Alexander Apalikov <alexander.apalikov@globant.com>
2020-12-18 18:28:32 +03:00
75a3d43477 Fix typo (#1305)
And trigger e2e-cluster tests on master.
2020-12-18 12:30:34 +03:00
252fc8090d Backfill: Autoincrement generation on every Backfill update (#1308)
* Backfill: Autoincrement generation on every Backfill update

In order for the Backfill Cache to work in QueryBackfill, every update should
store the backfill with a new Generation.

In the future Generation could be renamed to Version field in Backfill,
one change at a time.

* Update Generation on Backend and Frontend Updates

No updates on AcknowledgeBackfill.

* Fix tests after merging master

Add initial Generation as 1 everywhere - on CreateBackfill from Backend
and Frontend.

* Add missing license header
2020-12-17 18:12:24 +03:00
2c617f2cb6 Update location for stable and incubator charts (#1314)
* Implement backfill querying

* Update location for stable and incubator charts
2020-12-17 13:36:16 +03:00
fcd590eca6 Implement backfill querying (#1310) 2020-12-16 23:15:51 +03:00
4b3147511b create CODEOWNERS
list of those with review perms for easy PR review notifications
2020-12-14 15:29:58 -08:00
c85af44567 Frontend: UpdateBackfill and DeleteBackfill handlers (#1292) 2020-12-03 10:58:24 -08:00
688262111d Create, Update backfill after MMF run (#1299) 2020-12-02 17:51:27 -08:00
26d1aa236a Redis: Backfill last acknowledged (#1288) 2020-12-01 21:52:52 -08:00
fff37cd82c Update autogenerated protobuf files and Swagger for Frontend (#1295) 2020-11-30 10:26:34 -08:00
98a227b515 Update go.sum (#1296) 2020-11-30 09:59:35 -08:00
88cd95fe57 Backfill indexing (#1290) 2020-11-29 22:16:09 -08:00
248494c04c Frontend Create Backfill (#1279) 2020-11-25 22:54:56 -08:00
aa4398e786 Improve comments for RPC funcs (#1287) 2020-11-23 11:25:28 -08:00
fc5c3629e8 fixed the wrong spelling (#1291) 2020-11-23 09:43:30 -08:00
8d86709632 Makefile update to make api/api.md target commands universal across various environments (#1283) 2020-11-19 22:22:54 -08:00
0a273674b9 Update supported gke version for create-gke-cluster target (#1289) 2020-11-19 22:05:39 -08:00
e2247a7f53 Add Backfill support to internal statestore (#1273) 2020-11-19 14:20:55 -08:00
b269896c23 Undo change that I shouldn't have been able to do 2020-11-16 16:35:07 -08:00
a210185098 Testing change to build system, DO NOT SUBMIT 2020-11-16 16:33:49 -08:00
4df95deb54 Added test for unavailable gRPC function (#1282) 2020-11-12 21:29:40 -08:00
a9b8eec9e0 Updating the dependencies for the project (#1281)
* Updated dependencies
* Updated tutorials dependencies
* Updated tests
2020-11-12 15:21:07 -05:00
afa59327a4 Add ability to filter backfills (#1278) 2020-11-08 21:57:57 -08:00
d86b6c5121 Add comments when displaying makefile usage (#1276) 2020-11-03 10:50:33 -08:00
2eb2921914 Adding Exclude property to DoubleRangeFilter and test coverage. (#1268) 2020-11-02 13:46:09 -08:00
80d882b7c7 Consider backfill's id when de-colliding matches (#1277) 2020-11-02 11:53:27 -08:00
0f34e31778 New fields in protobuf definitions (#1272) 2020-10-30 11:45:51 -07:00
d45eb74510 Revert "Unavailable gRPC match functions forces us to wait the proposalCollectionInterval before failing (#1271)" (#1275)
This reverts commit 1765ab7b7e8fcc24015f5c40938c661f82bdbc9a.
2020-10-29 11:58:32 -07:00
1765ab7b7e Unavailable gRPC match functions forces us to wait the proposalCollectionInterval before failing (#1271) 2020-10-28 11:30:30 -07:00
6f05e526fb Improved tests for statestore - redis (#1264) 2020-10-12 19:21:51 -07:00
496d156faa Added unary interceptor and removed extra logs (#1255)
* added unary interceptor and removed logs from frontend service

* removed extra logs from backend service

* updated evaluator logging

* updated query logging


linter fix

* fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-21 15:02:29 -07:00
3a3d618c43 Replaced GS bucket links with substitution variables (#1262) 2020-09-21 12:22:03 -07:00
e1cbd855f5 Added time to assignment metrics to backend (#1241)
* Added time to assignment metrics to backend

- The time to match for tickets is now recorded as a metric

* Fixed formatting errors

* Fixed minor review changes

- Renamed function to calculate time to assignment
- Moved from callback to returning tickets from UpdateAssignments

* Return only successfully assigned tickets

* Fixed linting errors
2020-09-15 11:18:17 -07:00
10b36705f0 Tests update: use require assertion (#1257)
* use require in filter package


fix

* use require in rpc package

* use require in tools/certgen package

* use require in mmf package

* use require in telemetry and logging


fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-09 14:24:18 -07:00
a6fc4724bc Fix spelling in Proto files (#1256)
Regenerated dependent Swagger and Golang files.
2020-09-09 12:20:29 -07:00
511337088a Reduce logging in statestore - redis (#1248)
* reduce logging in statestore - redis  #1228


fix

* added grpc interceptors to log errors

lint fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-02 12:50:39 -07:00
5f67bb36a6 Use require in app tests and improve error messages (#1253) 2020-08-31 13:17:29 -07:00
94d2105809 Use require in tests to avoid nil pointer exceptions (#1249)
* use require in tests to avoid nil pointer exceptions

* statestore tests: replaced assert with require
2020-08-28 12:19:53 -07:00
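
A short sketch of the testing pattern described in the entry above, using testify: require stops the test at the first failure, so a nil result is never dereferenced by later assertions. The helper and type names are hypothetical stand-ins for the code under test.

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/require"
)

type ticket struct{ State string }

// createTicket is a hypothetical helper standing in for the code under test.
func createTicket() (*ticket, error) { return &ticket{State: "ready"}, nil }

func TestCreateTicket(t *testing.T) {
	tk, err := createTicket()
	require.NoError(t, err) // unlike assert, require ends the test here on failure...
	require.NotNil(t, tk)   // ...so the lines below never dereference a nil pointer
	require.Equal(t, "ready", tk.State)
}
```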
d85f1f4bc7 Added a PR template (#1250) 2020-08-25 14:16:36 -07:00
79e9afeca7 Use Helm release to name resources (#1246)
* Fix indent of TLS certificate annotations

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Small whitespace fixes

Picked up the VSCode Yaml auto-formatter.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Don't pass 'query' config to open-match-customize

It's not used.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Don't pass frontend/backend to open-match-scale

They're not used.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Allow redis to derive resource names from the release

This ensures that multiple OpenMatch installs in a single namespace do
not attempt to install Redis stacks with the same resource names.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Include release names in PodSecurityPolicies

This avoids conflicts between multiple Open Match installations in the
same namespace.

`openmatch.fullname` named template per Helm default chart.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the Service Account name release-dependent

This makes the existing global.kubernetes.serviceAccount value an
override if specified, but if left unspecified, an appropriate name will
be chosen.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the RBAC resource names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the TLS Secret names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the CI-test resource names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make all Pod/Service names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make Grafana dashboard names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make open-match-scale slightly more standalone

This makes the hostname templates more standard in their case, because
there is no need to coordinate the hostname with the superchart.

This chart still uses a lot of templates from the open-match chart
though, so it's not yet standalone-installable.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make ConfigMap default names release-dependent

A specific ConfigMap can be applied in the same way it was previously,
by overriding configs.default.configName and
configs.override.configName, in which case it is up to the person doing
the deployment to manage name conflicts.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Use correct Jaeger service names for subcharts

This fixes an existing issue where the Jaeger connection URLs in
the configuration would be incorrect if your Helm chart was not
installed as a release named "open-match".

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Populate Grafana Datasource using a ConfigMap

This allows us to access the Prometheus subchart's named template to get
the correct Service name for the datasource.

This fixes an existing issue where the Prometheus data source URL in
Grafana would be incorrect if your Helm chart was not installed as
a release named "open-match".

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>
2020-08-17 12:04:26 -07:00
3334f7f74a Make: fix create-gke-cluster, create clusterRole (#1234)
If there are multiple `gcloud auth list` accounts the command would fail,
adding grep active to fix.
2020-07-10 10:57:16 -07:00
85ce954eb9 Update backend_service.go (#1233)
Fixed typo
2020-07-09 11:45:33 -07:00
679cfb5839 Rename Ignore list to Pending Release (#1230)
Fix naming across all code. Swagger changes left.

Co-authored-by: Scott Redig <sredig@google.com>
2020-07-08 13:56:30 -07:00
c53a5b7c88 Update Swagger JSONs as well as go proto files (#1231)
Output of run make presubmit on master.

Co-authored-by: Scott Redig <sredig@google.com>
2020-07-08 12:52:51 -07:00
cfb316169a Use supported GKE cluster version (#1232)
Update Makefile.
2020-07-08 12:25:53 -07:00
a9365b5333 fix release.sh not knowing the right images (#1219) 2020-06-01 11:05:27 -07:00
93df53201c Only install ci components when running ci (#1213) 2020-05-08 16:06:22 -07:00
eb86841423 Add release all tickets API (#1215) 2020-05-08 15:07:45 -07:00
771f706317 Fix up gRPC service documentation (#1212) 2020-05-08 14:36:41 -07:00
a9f9a2f2e6 Remove alpha software warning (#1214) 2020-05-08 13:43:54 -07:00
068632285e Give assigned tickets a time to live, default 10 minutes (#1211) 2020-05-08 12:24:27 -07:00
113461114e Improve error message for overrunning mmfs (#1207) 2020-05-08 11:50:48 -07:00
0ac7ae13ac Rework config value naming (#1206) 2020-05-08 11:09:03 -07:00
29a2dbcf99 Unified images used in helm chart and release artifacts (#1184) 2020-05-08 10:42:16 -07:00
48d3b5c0ee Added Grafana dashboard of Open Match concepts (#1193)
Dependency on #1192, resolved #1124.

Added a dashboard in Matchmaking concepts, also removed the ticket dashboard.

https://snapshot.raintank.io/dashboard/snapshot/GzXuMdqx554TB6XsNm3al4d6IEyJrEY3
2020-05-08 10:15:34 -07:00
a5fa651106 Add grpc call options to matchfunction query functions (#1205) 2020-05-07 18:24:38 -07:00
cd84d74ff9 Fix race in e2e test (#1209) 2020-05-07 15:15:19 -07:00
8c2aa1ea81 Fix evaluator not running in mmf matchid collision test (#1210) 2020-05-07 14:53:12 -07:00
493ff8e520 Refactor internal telemetry package (#1192)
This commit refactored the internal telemetry package. The pattern used in internal/app/xxx/xxx.go follows the one used in opencensus-go. Besides adding the metrics covered in #1124, this commit also introduced changes to make the telemetry settings more efficient and easier to turn on/off.

In this factorization, a recorded metric can be cast into different views through different aggregation methods. Since the metric is what consumes most of the resources, this makes the telemetry setup more efficient than before.
Also removed some metrics that were meaningful for debugging in v0.8 but have become useless at the current stage.
2020-05-06 18:42:20 -07:00
8363bc5fc9 Refactor e2e testing and improve coverage (#1204) 2020-05-05 20:06:32 -07:00
144f646b7f Test tutorials (#1176) 2020-05-05 12:15:11 -07:00
b518b5cc1b Have the test instance host the mmf and evaluator (#1196) 2020-04-23 15:02:11 -07:00
af0b9fd5f7 Remove errant closing of already closed listeners (#1195) 2020-04-23 10:24:52 -07:00
5f4b522ecd Large refactor of rpc and appmain (#1194) 2020-04-21 14:07:09 -07:00
12625d7f53 Moved customized configmap values to default (#1191) 2020-04-20 15:11:13 -07:00
3248c8c4ad Refactor application binding (#1189) 2020-04-15 11:15:49 -07:00
10c0c59997 Use consistent main code for mmf and evaluator (#1185) 2020-04-09 18:37:32 -07:00
c17e3e62c0 Removed make all commands and pinned dependency versions (#1181)
* Removed make all commands

* oops
2020-04-03 12:01:32 -07:00
8e91be6201 Update development.md doc (#1182) 2020-04-02 15:50:00 -07:00
f6c837d6cd Removed make all commands and pinned dependency versions (#1181)
* Removed make all commands

* oops
2020-04-02 13:22:58 -07:00
3c8908aae0 Fix create-gke-cluster version (#1179) 2020-03-30 21:59:10 -07:00
0689d92d9c Fix the tutorials to using the new API, and be tested (#1175)
* Better follow API guidelines

* Fix tutorials

* don't include makefile fix which is broken
2020-03-27 11:58:28 -07:00
3c9a8f5568 Better follow API guidelines (#1173) 2020-03-26 15:56:34 -07:00
30204a2d15 run presubmit to update files (#1172) 2020-03-26 15:21:53 -07:00
a5b6c0c566 Have evaluator client and synchronizer return error when observing invalid match IDs (#1167)
* Have evaluator client and synchronizer return error when observing invalid match IDs

* update

* update

* update

* update

* presubmit
2020-03-26 13:59:21 -07:00
4a00baf847 Implement assignment groups and graceful failure (#1170) 2020-03-26 12:38:40 -07:00
d74262f3ba Fix broken scale dashboard (#1166) 2020-03-21 15:46:15 -07:00
2262652ea9 Add AUTH tests to Redis implementation (#1050)
* Enable and establish Redis connections via Sentinel

* Reimplement direct redis master connect

* Add AUTH tests for Redis implementation

* fix

* update
2020-03-20 17:12:55 -07:00
e15fd47535 Add a built in created time field for Tickets and the ability to filter Tickets by created time. (#1162) 2020-03-20 15:31:17 -07:00
670f38d36e forbid assignment on ticket create (#1160) 2020-03-19 13:47:45 -07:00
f0a85633a5 update third party files (#1163) 2020-03-19 13:18:55 -07:00
6cb47ce191 Enable and establish Redis connections via Sentinel (#1038)
* Enable and establish Redis connections via Sentinel

* Reimplement direct redis master connect

* Enable and establish Redis connections via Sentinel

* feedbacks
2020-03-14 23:55:41 -07:00
529c01330e Use testing.Cleanup instead of manual cleanup. (#1158) 2020-03-12 14:20:16 -07:00
b36a348db7 Remove omerror, replacing with errgroup (#1157)
Turns out there's already a commonly used package for this pattern.
2020-03-12 14:00:32 -07:00
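
A minimal sketch of the errgroup pattern the entry above switches to: several goroutines run concurrently, the first error cancels the shared context and becomes the returned error, and Wait blocks until every goroutine finishes.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	g, ctx := errgroup.WithContext(context.Background())
	for i := 0; i < 3; i++ {
		i := i // capture loop variable
		g.Go(func() error {
			select {
			case <-ctx.Done():
				return ctx.Err() // another goroutine already failed
			default:
				fmt.Println("worker", i, "done")
				return nil
			}
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("first error:", err)
	}
}
```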
5e277265ad Removed unused set package (#1156) 2020-03-12 13:25:22 -07:00
4420d7add2 Added QueryTicketIds method to QueryService (#1151)
* Added QueryTicketIds method to QueryService

* comment
2020-03-09 15:11:16 -07:00
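
A hedged sketch of calling the QueryTicketIds RPC added in the entry above, assuming the generated QueryService client in open-match.dev/open-match/pkg/pb; the endpoint, pool contents, and response field are my reading of that API and are illustrative only.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"open-match.dev/open-match/pkg/pb"
)

func main() {
	conn, err := grpc.Dial("open-match-query.open-match.svc.cluster.local:50503",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pb.NewQueryServiceClient(conn)
	stream, err := client.QueryTicketIds(context.Background(), &pb.QueryTicketIdsRequest{
		Pool: &pb.Pool{Name: "everyone"},
	})
	if err != nil {
		log.Fatal(err)
	}
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(resp.Ids) // only ticket IDs, no full ticket payloads
	}
}
```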
3de052279b Optimized MULTI EXEC queries to reduce Redis CPU consumption (#1131)
* Mysterious code to optimize Redis cpu usage

* resolve comments

* update

* fix cloudbuild
2020-03-06 23:23:59 -08:00
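
A hedged sketch of the Redis pipelining pattern the entry above works with: using redigo, commands are queued inside MULTI/EXEC via Send and flushed in a single round trip with Do. The key names are illustrative, not the state store's actual schema.

```go
package main

import (
	"fmt"
	"log"

	"github.com/gomodule/redigo/redis"
)

func main() {
	conn, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Send only buffers commands; errors surface when Do flushes the batch.
	conn.Send("MULTI")
	conn.Send("SADD", "allTickets", "ticket-1")
	conn.Send("HSET", "ticket-1", "search", "mode.demo")
	reply, err := conn.Do("EXEC") // one round trip for the whole transaction
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply)
}
```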
7a4aa3589f Removed ticket auto-expiring logic from statestore (#1146) 2020-03-06 17:20:49 -08:00
bca6f487cc Remove legacy volume mounts from om-demo yaml file (#1147) 2020-03-06 08:32:47 -08:00
d0c373a850 Drafted a short README for the benchmarking framework (#1092)
* Drafted a short README for the benchmarking framework

* update

* update

* update
2020-03-05 13:33:05 -08:00
deb2947ae2 Disable swaggerui via helm (#1144) 2020-03-05 12:08:37 -08:00
d889278151 Replace redis indexing with in memory cache (#1135) 2020-03-02 16:23:55 -08:00
1b63fa53dc Update to go 1.14 (#1133) 2020-02-26 14:13:37 -08:00
af02e4818f Do some randomization on return order of tickets (#1127) 2020-02-20 15:34:40 -08:00
cda2d3185f Add filter package, and rework query testing (#1126) 2020-02-20 14:05:42 -08:00
2317977602 Move default evaluator to internal from testing (#1122) 2020-02-14 14:49:57 -08:00
9ef83ed344 Removed scale chart configmap (#1120) 2020-02-12 13:16:18 -08:00
33bd633b1d Disabled redis when generating static yaml resources except core (#1119) 2020-02-11 14:40:38 -08:00
1af8cf1e79 have scale-frontend use individual go routines for each ticket (#1116) 2020-02-10 13:52:28 -08:00
0ef46fc4d4 implement a scenario which behaves like a team based shooter game (#1115) 2020-02-10 11:21:12 -08:00
79daf50531 Enabled more golangci tests to improve code health (#1089)
* Enabled more golangci tests

* update

* update

* update
2020-02-06 14:07:41 -08:00
a9c327b430 Move scale scenarios into unique packages (#1110) 2020-02-06 12:56:21 -08:00
2c637c97b8 Reduced Redis PING check frequency on Redis pool (#1109)
* Reduced Redis PING check frequency on Redis pool

* fix lint

* update

* update comment

* update comment
2020-02-05 18:00:31 -08:00
668b10030b Update Grafana dashboard for more detailed metrics (#1108)
* Update Grafana dashboard for more detailed metrics

* update cpu usage chart

* update
2020-02-05 17:13:17 -08:00
1c7fd24a34 Remove stats processor from scale tests (#1107) 2020-02-05 13:41:56 -08:00
be0cebd457 Disabled cloudbuild cacher to avoid build flakiness (#1103) 2020-02-04 11:34:23 -08:00
fe7bb4da8f Revert "Release 0.9.0 (#1096)" (#1097)
This reverts commit e80de171a0a6e742d42264f4ab4ecd9231cd3edc.
2020-02-03 16:19:16 -08:00
e80de171a0 Release 0.9.0 (#1096) 2020-02-03 15:42:21 -08:00
fdd707347e Update generated files (#1095) 2020-02-03 15:21:26 -08:00
6ef1382414 Fix leaking of client connections by config.Cacher (#1093)
* Fix leaking of client connections by config.Cacher

* fix link
2020-02-03 14:44:10 -08:00
d67a65e648 Reuse query client in scale tests (#1091)
It was previously not reusing it, so the clients would leak over time.
2020-02-03 13:12:02 -08:00
d3e008cd1e Update proto descriptions to reflect API changes (#1090)
* Update proto descriptions to reflect API changes
2020-02-03 11:15:01 -08:00
d93db94ad9 chartredisfix (#1088) 2020-02-03 09:18:13 -08:00
1bd63a01c7 feature: release tickets api (#1059) 2020-01-31 14:03:17 -08:00
cf8d49052c Deprecated mmf harness (#1086) 2020-01-31 11:13:29 -08:00
fca5359eee Used master HEAD in tutorials' go.mod file and fixed go build errors (#1085) 2020-01-30 15:52:23 -08:00
07637135a9 Deprecate Rosters, remove from Match, MatchProfiles (#1084) 2020-01-30 14:52:41 -08:00
8c86a4e643 Add omerrors and use it in backend_service and evaluator_client (#1081)
Two methods are added:

- ProtoFromErr: returns a grpc status given an error, with some reasoned handling for special cases. This will be used to set errors onto the FetchMatchesSummary in a followup PR.
- WaitOnErrors: this allows some number of functions to run that will all return errors. The first to return an error will be the error returned overall, and it ensures all goroutines finish.

WaitOnErrors is used to simplify code in backend_service and the grpc portion of evaluator_client.

Also, I realized that synchronizeSend should better specify which context is being used where.
2020-01-30 12:42:02 -08:00
31858e0ce5 Changed evaluator API from returning matches to matchids (#1082)
* Changed evaluator API from returning matches to matchids

* update proto desc
2020-01-30 10:10:35 -08:00
fc0b6dc510 Changed Synchronizer proto to return matchIDs instead (#1080)
This commit changed the Synchronizer proto to return matchIDs instead. Also bumped up the numbers of the unnamed channels in the synchronizer starting from m3c and changed the channel type starting from m4c to chan string, as the next step of the API change is to have the evaluator return the match ids instead.
2020-01-29 19:26:06 -08:00
edade67a6d Added sync.Map to backend and synchronizer (#1078)
This is an intermediate step to resolve #939. Leaving a bunch of TODOs in this PR and will fix them after the proto change.
2020-01-29 18:34:01 -08:00
c92c4ef07a Starts streaming when sending requests from synchronizer to evaluator (#1075)
This commit started to stream when calling the evaluator.Evaluate method such that the synchronizer is able to process the data more efficiently.
2020-01-29 17:23:38 -08:00
0b8425184b Stream proposals from mmf to synchronizer (#1077)
This improves efficiency for overall system latency, and sets up for better mmf error handling.

The overall structure of the fetch matches call has been reworked. The different go routines now set an explicit err variable. So once we have FetchSummary, we can just set the mmf err variable on it. Synchronizer calls which err will always result in an error here (as it's relatively fatal), while mmf and evaluator errors will be passed gently to the client.

One thing this code isn't doing anymore is checking if an mmf returns a match with no tickets. This seems fine to me, but willing to discuss if anyone disagrees.

Deleted the tests for the following reasons:

TestDoFetchMatchesInChannel didn't actually test fetching matches, it only tested creating a client. Since callMmf now both creates the client and makes the call, this code now blocks actually trying to make a connection. I'm not worried about having full branch test coverage on err statements...
TestDoFetchMatchesFilterChannel tested merging of mmf runs. Since there's only one mmf run now, it's no longer necessary.
2020-01-29 14:44:15 -08:00
338a03cce5 Removed synchronizer dashboard and synced grpc dashboard with API changes (#1074)
The previous dashboards don't work with our changes on the API surface.
https://snapshot.raintank.io/dashboard/snapshot/5A6ToilbqqWbeYpuf36jFCrVv3zFFK1V

This commit:

Removed the unused synchronizer dashboard.
Updated the field matches to use QueryService, BackendService and FrontendService instead of the outdated naming.
Resolved #1018
2020-01-28 15:08:16 -08:00
b7850ab81d Remove assignment.error (#1073) 2020-01-28 13:10:41 -08:00
faa730bda8 Remove c# protos and respective makefile commands (#1072) 2020-01-28 12:12:55 -08:00
76ef9546af Add battle royal scale test scenario (#1063)
Tickets choose one of the 20 regions, with a skewed probability. (probability eg: https://play.golang.org/p/V3wfvph34hM) One profile per region, which forms matches of 100 players.
2020-01-28 11:39:57 -08:00
bff8934cd3 Added the ability to specify your own Redis instance via helm (#1069)
Resolved #836
2020-01-28 10:51:41 -08:00
3a5608b547 Remove inaccurate default documentation on range filter (#1071)
Instead, this is actually just relying on the proto's default values of 0 for each. As such it shouldn't be documented.
2020-01-27 16:18:03 -08:00
b7eec77a36 Rename Backend and Frontend API to BackendService and FrontendService (#1065)
Depends on and aligns with #1055. After this commit, we'll still have the om-backend, om-frontend, and om-query images, but with the API surface renamed.

Backend -> BackendService
Frontend -> FrontendService
2020-01-27 15:52:35 -08:00
82a011ea52 Rename Mmlogic to Queryservice (#1055)
Resolved #996.

Manually rename the file name under internal/app/mmlogic and cmd/mmlogic from mmlogic.go to query.go to keep the image name consistent with our backend and frontend naming.

TODO: Rename backend and frontend API to BackendService and FrontendService instead.
2020-01-27 15:27:17 -08:00
92210b1a13 Redis grafana dashboard (#1062)
* Redis grafana dashboard

* Alert notifiers

* update

* update

* update

* update
2020-01-23 21:31:58 -08:00
f46c0b8f3d Revamp go processes dashboard (#1064)
* Revamp go processes dashboard

* added cpu usage chart
2020-01-23 20:08:41 -08:00
a19baf3457 Revamp gRPC grafana dashboard (#1060)
* dashboard prototype

* Remove storage dashboard

* fix

* update
2020-01-23 19:36:56 -08:00
8e1fbaf938 Change backend.FetchMatches proto from taking multiple profiles to one instead (#1056) 2020-01-17 20:08:29 -08:00
957471cf83 Run scale test assignment and deletes in parallel (#1058)
Start 50 go routines for each at the beginning of the test, and pass them from fetch matches with a buffer.

Gets the redis state store first match to handle >500 tickets per second:
https://snapshot.raintank.io/dashboard/snapshot/yO88xrIUe1bFR29iNZt4YuM0xuBb8PX9
2020-01-17 17:09:55 -08:00
e24c4b9884 Fix off by 1 error in first match scale test (#1057) 2020-01-17 16:33:43 -08:00
34cc4987e8 Add a first match scenario to the scale tests (#1054)
This first match scenario runs one pool with all tickets, pairing tickets into 1v1 matches with no logic.

Metrics example: https://snapshot.raintank.io/dashboard/snapshot/JZQvjGLgZlezuZfNxPAh8n098JQuCyPW
2020-01-17 11:39:37 -08:00
8e8f2d688b Add gRPC CSharp bindings (#1051)
* Add gRPC CSharp bindings

* update
2020-01-16 16:54:56 -08:00
f347639df4 🤦 (#1048) 2020-01-15 09:47:06 -08:00
75c74681cb Make scale grafana dashboard optional to install (#1044)
* Optionally enable grafana dashboard for scale chart

* Make scale grafana dashboard optional to install
2020-01-15 09:21:45 -08:00
5b18dcf6f3 Add metric support to the scale tests (#1042) 2020-01-14 17:31:46 -08:00
3bcf327a41 Remove locust (#1041) 2020-01-14 13:53:09 -08:00
9f59844e0d Remove zipkin references from Open Match (#1040) 2020-01-14 12:23:31 -08:00
5a32cef2e9 Update Makefile and .ignore files (#1031)
This commit updated the Makefile and .ignore files for the evaluator and mmf binaries.

Also moved the evaluator to test/evaluator folder - I had it accidentally placed under the test/customize/evaluator dir because of a bad merge when working on deprecating the harness.
2020-01-13 18:59:57 -08:00
b9e2e88ef4 Implement basic tunable parameters logic for benchmarking scenarios (#1030)
This commit implements the knobs to control ShouldCreateTicketForever, ShouldAssignTicket, ShouldDeleteTicket, TicketCreatedQPS, and CreateTicketNumber. Also removed the roster-based-mmf from the repo since it is only used for the scale test and there is no need to build its image in every CI run.

After this commit is checked in, users are able to configure the knobs via the new benchmarking framework and run make install-scale-chart to install it.

TODO:

Implement the filter number and profile number logic. This requires a rewrite for examples/scale/tickets and examples/scale/profiles package.
2020-01-08 17:20:37 -08:00
41632e6b8d Increase Redis ping time tolerance and provision more resources for CI (#1034) 2020-01-08 15:32:32 -08:00
188457c21f Added mmf and evaluator for the basic benchmarking scenario (#1029)
* Added mmf and evaluator for the basic benchmarking scenario

* update

* update

* fix
2020-01-07 11:08:12 -08:00
4daea744d5 Added a fixed development password for Redis (#989)
* Added a fixed development password for Redis

* update
2020-01-02 23:30:35 -08:00
1f3dd4bcbf Implement a prototype for Open Match benchmarking framework (#1027)
* Implement a prototype for Open Match benchmarking framework

* update

* update

* update
2019-12-27 18:00:47 -08:00
d82fc4fec6 Add pod tolerations, nodeSelector and affinity in helm (#1015) 2019-12-27 13:02:36 -08:00
8cb43950a1 Move ignorelists.ttl from Redis section to Open Match core (#1028) 2019-12-27 12:27:09 -08:00
9934a7e9da Rewrite synchronizer and corresponding backend (#1024) 2019-12-20 16:40:53 -08:00
8db449b307 Templatize stress test configurations (#1019)
* Templatize stress test configurations

* Update

* presubmit
2019-12-17 11:02:22 -08:00
b78d4672a6 Update client-go to kubernetes-1.13.12 (#1020) 2019-12-11 18:08:10 -08:00
e048b97c71 Moved MMF for end-to-end in-cluster testing to internal (#1014)
* Moved MMF for end-to-end in-cluster testing to internal

* Fix
2019-12-11 16:55:43 -08:00
f56263b074 Deprecate evaluator harness (#1012)
* Have applications read in config from custom input

* Moved original evaluator example to internal package

* Deprecate evaluator harness
2019-12-11 16:04:16 -08:00
aaca99c211 Update README.md (#1016) 2019-12-09 18:12:18 -08:00
9c1b0bcc0e Have applications read in config from custom input (#1007) 2019-12-09 13:26:58 -08:00
80675c32f6 Split up stress test into backend/frontend structure (#1009) 2019-12-09 12:09:00 -08:00
4e408b1abc Show how to generate install/yaml files in dev guide (#1010) 2019-12-08 11:37:52 -08:00
fd4f154a0e Remove unnecessary variables and indirection from synchronizer (#1008) 2019-12-06 15:12:18 -08:00
3e2d20edc0 Have synchronizerClient use cacher, to update on config changes (#1006)
This also aligns better with patterns for other clients, and removes some synchronization complexity for this type.
2019-12-05 13:57:57 -08:00
40ba558eb2 Improve Evaluator tutorials experience (#1005)
* Improve Evaluator tutorials experience

* Improve Evaluator tutorials experience
2019-12-04 17:59:04 -08:00
72bcd72d5c Fix Redis Err: Max Clients Reached error (#999)
This commit fixed an issue where Open Match may throw Err: max clients reached errors from the Redis side under load testing scenarios. At this point, Open Match should be able to scale with 1600 profiles and 5000 tickets in the statestore.

The reason we got those errors from Redis is that, by default, Redis sets its maxClient connections limit to 10k. However, Open Match had the maxIdle number set to 5000 per pod, which exceeded Redis's limit and failed the API calls. This commit manually overrides the maxClient number to 100k, reduces the maxIdle number to 200, and configures the file descriptors' limit to 10k by setting sysctl -w net.core.somaxconn=100000 using the initContainer if enabled.
2019-12-04 17:18:09 -08:00
b276ed1a08 Fixed terraform google provider version to 2.9 (#1004)
* Fixed terraform google provider version to 2.8

* Update versions.tf

* Update versions.tf
2019-12-04 13:20:36 -08:00
d977486dc5 Add more metrics to monitor synchronizer time windows performance (#1001) 2019-12-03 18:43:28 -08:00
1f74497bdd Reduced in-cluster test flakiness and stabilized gRPC client connections (#1003) 2019-12-02 16:43:37 -08:00
57e9540faa Use helm to test Open Match in a k8s cluster (#988) 2019-11-25 16:29:05 -08:00
a0be7dcec5 Cherry-picked MMF server changes to upstream (#1000) 2019-11-25 16:09:06 -08:00
391cc4dc72 More cleanups (#984) 2019-11-22 11:16:52 -08:00
2c8779c5d7 Improve README instructions and code templates for the tutorials (#997) 2019-11-21 17:52:08 -08:00
e5aafc5ed7 Added a Grafana dashboard to track Redis client connection gauges (#994) 2019-11-21 09:21:44 -08:00
8554601a70 Update Scale package to sync with the latest config and API changes (#992) 2019-11-20 15:06:19 -08:00
a75833b85a Update release note and release process template (#987) 2019-11-19 14:53:33 -08:00
f01105995d Update gRPC middlewares used in the internal/rpc library (#993) 2019-11-19 13:55:11 -08:00
f949de7dce Update master branch tutorials to use v0.8.0 tags (#985) 2019-11-15 10:16:08 -08:00
335bf73904 Remove redundant matchmaker scaffold and update tutorials (#979) 2019-11-14 13:59:26 -08:00
7a1dcbdf93 More cleanup (#976) 2019-11-13 13:43:01 -08:00
0a65bdefe5 Fix typo in folder name (#975) 2019-11-13 10:15:16 -08:00
bcf0e6b9fb Harden the open-match parent chart (#972) 2019-11-13 09:51:37 -08:00
1f5df7abef Ignore reaper error (#974) 2019-11-13 08:25:02 -08:00
7005d40939 Add solution folder to Matchmaker 102 tutorial (#973) 2019-11-13 02:00:12 -08:00
3536913559 Add logging to the default evaluator (#964) 2019-11-13 01:34:05 -08:00
103213f940 Add the solution for Matchmaker 101 tutorial to a separate solution folder. (#971) 2019-11-13 00:25:09 -08:00
3b8efce53d Add a tutorial for using the default evaluator (#961) 2019-11-13 00:05:38 -08:00
580ed235d7 Generate static yaml to install open match demo (#969)
* Generate static yaml to install open match demo

* Update Makefile to sync with the latest demo update
2019-11-12 22:02:44 -08:00
23cc35ae68 Publish helm index.yaml file to helm install open-match (#962) 2019-11-12 21:42:59 -08:00
c002e75fde A Tutorial to customize the evaluator (#970) 2019-11-12 19:01:04 -08:00
6e6f063958 Update tutorial modules to use v0.8 rc (#963) 2019-11-12 16:12:57 -08:00
8d31b5af07 Fix namespace dependency on CI (#967) 2019-11-12 15:54:21 -08:00
f1a5cd9b81 Have MMF and Evaluator in customize chart use different configs (#959) 2019-11-08 15:44:58 -08:00
d3d906c8be Define Makefile and RBAC rules for open-match-demo namespace migration (#958) 2019-11-08 14:51:42 -08:00
6068507370 Move Match Function installation to the matchmaker.yaml - since customization.yaml is now optional when using default evaluator installation steps (#957) 2019-11-08 14:30:29 -08:00
04b06fcf90 Split out MMF and Evaluator install from open-match-demo (#956) 2019-11-08 11:20:42 -08:00
0c25ac9139 Turn off subcharts by default (#954) 2019-11-08 09:15:49 -08:00
0565a014ad Disable WI in create-gke-cluster step (#947) 2019-11-06 19:10:13 -08:00
57e59c3821 Bumped helm version and dependencies versions for k8s 1.16 support (#938) 2019-11-06 18:27:59 -08:00
608d5bce71 Disable Redis initContainer by default (#941) 2019-11-06 17:12:23 -08:00
52b8754eb8 Update go.mod dependencies (#949) 2019-11-06 13:31:08 -08:00
a10817f550 Fix scale test based on the config changes (#948) 2019-11-06 12:39:38 -08:00
817a0968e7 Update release template (#944) 2019-11-06 11:11:25 -08:00
043a984bab Remove k8s probes in example mmfs and evaluator (#942) 2019-11-06 10:54:17 -08:00
02d8d1f1fe Optimize developer workflow (#943) 2019-11-06 10:28:45 -08:00
242d799c18 Enabled telemetry when generating assets (#945) 2019-11-04 18:05:27 -08:00
33189f9154 Added jaeger tracing to Open Match core services (#934) 2019-11-01 13:02:46 -07:00
9ef7cb6277 Matchmaker tutorial modifications to improve tutorial experience (#935) 2019-11-01 12:49:12 -07:00
b05c9f5574 Proposal: Make extension a map of string to any (#901)
Replaces the extension field (and match.evaluator_input) with a map of string to any.  The previous concept was that different components would read from specific extension fields.   However nothing was enforcing this behavior.  (fun fact: the very first use of this was incorrect, I used extension instead of evaluator_input when updating the default evaluator, but caught myself in the review.)

Instead have a map of string to proto.  This allows any producer to add whatever values
it wants, and the consumers to look for the specific values they want.

# Pros:
Better composability, and less "forced" duplication.  Now various components can simply add information which is required by the other components processing each message.  If there is a unified system, they can use one extension.  If it's a system composed of various parts, then they simply add and use the protos required by the connected pieces.

This allows data to flow through OM better: if there are systems which are composed together outside of OM core, they don't need custom fields or manipulation. E.g., if I pass my match to a well-known system which returns assignments from Agones and requires a specific extension, and I add a layer of processing beforehand which requires its own extension, the match doesn't need to be modified to have the required extension when being passed to the Agones allocator.  Or if there's data on tickets which needs to flow through OM to the director, it can just be added.

# Cons:
- Very simple use cases for OM are a bit more complex.
- JSON specifically now needs to add a map around the any around the actual data.
- Users who have a single extension type have a bit more work to do.  I think the recommendation should be to use the empty string, "", for such cases.
- Read, Modify, Set operations on extension data are more complicated.
2019-10-31 16:58:22 -07:00
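
A hedged sketch of the "map of string to Any" extension concept described in the proposal above: a producer packs an arbitrary proto message into the extensions map under a key of its choosing, and a consumer unpacks only the keys it understands. The key name and payload type here are illustrative.

```go
package main

import (
	"fmt"
	"log"

	"google.golang.org/protobuf/types/known/anypb"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
	// Producer side: wrap a payload (here, a double value) into an Any.
	payload, err := anypb.New(wrapperspb.Double(42.0))
	if err != nil {
		log.Fatal(err)
	}
	extensions := map[string]*anypb.Any{
		"evaluation_score": payload,
	}

	// Consumer side: unpack only the key it cares about.
	var score wrapperspb.DoubleValue
	if err := extensions["evaluation_score"].UnmarshalTo(&score); err != nil {
		log.Fatal(err)
	}
	fmt.Println(score.GetValue())
}
```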
8a29f15fe0 Unify helm image tags (#919) 2019-10-31 16:37:09 -07:00
5fa0cc700c Refactor evaluator client in the synchronizer (#933)
This fixes a number of bugs:
- If it's a grpc connection, the evaluator client also creates an http connection.
- Even if it's an http connection, it never sets it to use that http connection, always trying to recreate the client and failing.
- The http evaluator code does not use the http client which it created.
- If the config is updated, the old evaluator connection details are still used.
- If the client errors, it may still use a broken client.

Refactors the file into several completely separate components:
- A grpc based client.
- An http based client.
- A deferred client which selects between creating a grpc or http client, and will detect changes to the config and recreate the client.

As a result of these changes, the type of connection is explicit based on config.  If a grpc port is present, it will always use that connection, never trying to create an http client.

The actual code to create clients and make requests is mostly unchanged.
2019-10-31 16:05:28 -07:00
d579de63aa Update proto comments to reflect latest API changes (#932) 2019-10-31 15:28:34 -07:00
797352a3fc Add Cacher, which invalidates a cache when config changes (#931)
I will use this in the evaluator_client, where we want to re-use a client if possible, but get a new client if the used values change. This solves the generic problem on a meta level, instead of having to manually remember and compare the config values used. It also prevents programming errors where a new config value is read, but the code doesn't properly detect if it has changed.

The ForceReset method will be used when the evaluator client has an error, so that the system will recover from a client which is stuck in an error state on the next call.

I anticipate there will be other places to use this inside open match.
2019-10-31 14:39:17 -07:00
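
A hedged, simplified sketch of the caching pattern described above (not the actual internal implementation): a value is built lazily from config, reused while the relevant config is unchanged, and rebuilt when the config changes or ForceReset is called.

```go
package main

import "sync"

type cacher struct {
	mu      sync.Mutex
	lastCfg string
	value   interface{}
	build   func(cfg string) interface{}
}

// Get returns the cached value, rebuilding it if the config it was built from changed.
func (c *cacher) Get(cfg string) interface{} {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.value == nil || cfg != c.lastCfg {
		c.value = c.build(cfg)
		c.lastCfg = cfg
	}
	return c.value
}

// ForceReset drops the cached value, e.g. after the client it wraps hit an error.
func (c *cacher) ForceReset() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.value = nil
}

func main() {
	c := &cacher{build: func(cfg string) interface{} { return "client-for-" + cfg }}
	_ = c.Get("evaluator:50508") // built once
	_ = c.Get("evaluator:50508") // reused
	c.ForceReset()               // next Get rebuilds
}
```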
68882a79bb Update release.md (#928) 2019-10-31 14:06:14 -07:00
11bf81e146 Tutorial to build a matchmaker that uses multiple pools per match profile (#929) 2019-10-31 13:46:38 -07:00
02aa992ac7 Fix cloudbuild (#922) 2019-10-31 13:04:59 -07:00
f5b651669c Added protoc-gen-doc plugin to generate API references (#925) 2019-10-31 11:34:24 -07:00
b9522a8bb5 Add a template for a basic OM based Matchmaker. The tutorials will use this as starting point. (#927) 2019-10-31 10:26:55 -07:00
8c6fbcbe49 Update the MMF101 to be a mode based matchmaker (#926) 2019-10-31 09:42:46 -07:00
99141686c9 Move Evaluator to tutorial namespace and enable tutorial configuration for Open Match (#923) 2019-10-30 19:21:23 -07:00
3cf9c2ad6a Added jaeger sample configuration to the telemetry binder (#902) 2019-10-30 18:10:04 -07:00
755c0e82f1 Fetch ignore list before querying for Tickets to fix a race where an ignored ticket could be returned as it moves out of the ignore list upon assignment (#918) 2019-10-30 17:41:27 -07:00
18bc9f31fd Implement Tutorial for authoring a basic match function (#913) 2019-10-30 17:23:05 -07:00
05325d3b77 Rename bindStackdriver to bindStackdriverMetrics (#921) 2019-10-30 16:32:15 -07:00
3c7f73ed03 Install override yaml file by default when using helm upgrade (#920) 2019-10-30 15:20:53 -07:00
525d35b341 Upgrade to helm3 (#896) 2019-10-29 18:34:23 -07:00
d3e8638a3b Added Storage Dashboard in Grafana (#909) 2019-10-29 17:26:15 -07:00
af19404eef Split up override configmap from open-match-core static yaml (#916) 2019-10-29 16:55:22 -07:00
2f9e1c2209 Remove struct import from protos (#915) 2019-10-29 14:00:02 -07:00
669f7d63b7 Remove evaluator config for default config yaml (#914) 2019-10-29 13:14:10 -07:00
8740494f3e Added Makefile proxy and helm default configs for jaeger (#900) 2019-10-28 17:36:40 -07:00
3899bd2fcd Have internal/config/config.go read in file through absolute path (#910) 2019-10-28 13:20:34 -07:00
dac6ac141e Update RosterBased match function to not use the harness (#904)
Update RosterBased match function to not use the harness
2019-10-25 16:28:23 -07:00
c859e04bf9 Remove global configmap (#907) 2019-10-25 15:44:15 -07:00
7a48467cb5 Config change first step (#906) 2019-10-25 15:09:04 -07:00
74992cdf79 Added time metrics to statestore wrapper (#899) 2019-10-25 11:36:35 -07:00
dd21919c00 Fix csharp dependency (#903) 2019-10-25 10:34:09 -07:00
031b39e9c2 Added templates for bug/feature reports (#898) 2019-10-24 11:19:01 -07:00
3f8f858d85 Update helm templates to follow k8s spec rules (#897) 2019-10-24 10:53:32 -07:00
1dbd3a5a45 Create a privileged service account for Redis and disable THP to prevent Redis memory leaks (#884) 2019-10-22 19:11:36 -07:00
fc94a7c451 Add csharp generated code and update .csproj dependency (#813) 2019-10-22 15:28:13 -07:00
60d20ebae5 Update resource yamls to use k8s v1.16 API (#891) 2019-10-21 11:16:07 -07:00
e369ac3c0b Update tutorial READMEs with more concrete steps (#888)
* Update tutorial READMEs with more concrete steps
2019-10-17 11:41:58 -07:00
2c35ecb304 Rename telemetry util function to match what it actually does (#874)
* Rename telemetry util function to match what it actually does

* Improve comments
2019-10-16 17:47:59 -07:00
7350524e78 Added open-match namespace in yaml installation files (#886)
* Create open-match namespace in yaml installation files when open-match-core is enabled

* Rename and comment
2019-10-16 16:13:38 -07:00
2aee5d128d Configure Open Match to improve scaling (#881) 2019-10-15 14:28:35 -07:00
4773b7b7cf Added a Grafana dashboard to monitor Redis connection latency (#876) 2019-10-15 13:37:26 -07:00
1abdace01e Pin Dockerfile base-build to go version 1.13.1. (#875)
I had a problem just now where my local go test would pass, but docker files would fail to build. My system had an old version of golang:latest cached, which failed to build properly due to missing a new method. So instead pin the dockerfile to a specific version. This way builds will be more deterministic: If a new version of Go comes out, the new features won't work for anyone until this is updated, at which point everyone's local cache will be invalidated.
2019-10-14 18:32:25 -07:00
bbcf8d47b4 Update release template to install open match without telemetry services (#868) 2019-10-14 15:05:26 -07:00
d65dee6be0 Change k8s service type into headless service and use DNS resolver to get endpoint (#864)
* Implement client side load balancing with DNS resolver for Open Match headless service deployment

* Update comments
2019-10-14 14:44:34 -07:00
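To illustrate the client-side load balancing described in #864 above, here is a minimal Go sketch of dialing a headless Kubernetes service through gRPC's DNS resolver with round-robin balancing. The service name, namespace, and port below are placeholders, not the actual Open Match values.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hypothetical headless-service address; a headless service returns one
	// DNS A record per pod, so the dns resolver sees every backend.
	const addr = "dns:///om-query.open-match.svc.cluster.local:51503"

	conn, err := grpc.Dial(addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		// Spread RPCs across the resolved pod IPs instead of pinning to one.
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
	)
	if err != nil {
		log.Fatalf("failed to dial %s: %v", addr, err)
	}
	defer conn.Close()
	log.Println("client connection created with DNS-based round-robin balancing")
}
```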
02e6b3bbde Rename attribute to *_arg, FloatRangeFilter to DoubleRangeFilter (#873)
Part of #765

Helps with #512 as the fields names not matching removes most incompatible query cases.
2019-10-14 13:16:31 -07:00
77090d1a5b Tutorial (#871)
* Tutorial skeleton

* Update

* Yet another update

* Add header
2019-10-14 12:20:43 -07:00
ce3a7bf389 Remove properties fields from messages.proto (#870)
Part of moving to the new API proposal: #765
2019-10-11 14:09:38 -07:00
bcf710b755 Remove evaluator client test (#866)
Part of moving to the new API proposal: #765

This didn't break earlier when it should have, because the test wasn't working.

As this is covered by the end-to-end tests anyway, delete.
2019-10-11 13:13:55 -07:00
2b7eec8c07 Remove index configuration (#862)
Part of moving to the new API proposal: #765
2019-10-11 12:54:41 -07:00
99164df2db Have mmf harness pass extension instead of properties (#865)
Part of moving to the new API proposal: #765
2019-10-11 12:00:35 -07:00
efa1ce5a0b Update telemetry/metrics to support adding histogram view (#845)
* Update telemetry/metrics to support adding histogram view

* Update comment
2019-10-11 11:42:48 -07:00
cb610a92b1 Define minimalistic pod resources to deploy sample mmf and evaluator (#867)
* Define minimalistic pod resources to deploy sample mmf and evaluator

* newline
2019-10-11 11:10:14 -07:00
89691b5512 Remove internal dependencies from the demo (#859)
* Remove internal dependencies from the demo
2019-10-10 19:32:08 -07:00
252d473d72 Generate helm charts at runtime and delete install/helm/open-match/charts repo (#824)
* Generate helm charts at runtime
2019-10-10 19:15:49 -07:00
56cfb8e66e Remove last references to ticket.properties (#863)
Part of moving to the new API proposal: #765
2019-10-10 18:50:17 -07:00
5f32d4b765 Switch Default Evaluator to use proto any (#854)
Added a new proto to messages, DefaultEvaluationCriteria.

Reworked the default evaluator to use it.
Changed the pool mmf to output it.
2019-10-10 16:47:56 -07:00
658aee8874 Use SearchField's doubles for double indexing (#860)
Part of moving to the new API proposal: #765

This replaces the double indexing and fixes the spots that break because of it.
Future PRs will remove the old indexing and the configuration passing it required, and then also remove the properties altogether. (There is still a handful of places that use properties but don't actually use them for indexing.)
2019-10-10 15:54:33 -07:00
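As a rough illustration of the SearchFields-based indexing referenced in #860 and #873 above, the sketch below builds a ticket with a double search field and a pool filtering on it. It assumes the post-#765 shape of the generated `pb` package; field names may differ slightly between releases.

```go
package main

import (
	"fmt"

	"open-match.dev/open-match/pkg/pb"
)

func main() {
	// Ticket indexed through SearchFields rather than the removed
	// free-form properties struct.
	ticket := &pb.Ticket{
		SearchFields: &pb.SearchFields{
			DoubleArgs: map[string]float64{"attribute.mmr": 1250},
		},
	}

	// Pool whose DoubleRangeFilter queries the same *_arg field name.
	pool := &pb.Pool{
		Name: "ranked-1000-1500",
		DoubleRangeFilters: []*pb.DoubleRangeFilter{
			{DoubleArg: "attribute.mmr", Min: 1000, Max: 1500},
		},
	}

	fmt.Printf("ticket mmr=%v, pool=%s\n",
		ticket.SearchFields.DoubleArgs["attribute.mmr"], pool.Name)
}
```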
6ac7910fb1 Remove property usage from e2e/ticket_test (#861)
Part of moving to the new API proposal: #765
2019-10-10 12:54:22 -07:00
fcf7c81c84 Commit missed proto build changes (#858)
Missed this with my previous change.
2019-10-09 18:30:14 -07:00
e3d630729c Replace boolean filter with tag filter (#855)
Part of moving to the new API proposal: #765
2019-10-09 17:41:59 -07:00
da9d48ddb1 Remove unnecessary properties and filters in demo (#856)
This was possible since the move to read all tickets with no filters. It simplifies the demo
complexity a bit.

Tangential, but useful to moving to the new API proposal: #765
2019-10-09 12:05:47 -07:00
3727b0d5d8 Update third party code (#851) 2019-10-08 19:06:57 -07:00
cc1b70dd2e Use SearchField's strings for string indexing (#853)
Part of moving to the new API proposal: #765
2019-10-08 16:33:28 -07:00
91090af431 Remove outdated and unused filter package (#852) 2019-10-08 15:58:56 -07:00
6661df62ae Adding Any fields to messages.proto (#846)
See #765 for detailed discussion.

Followup changes will involve wiring up the search_fields, then transitioning the existing tests and examples, then removing the old struct fields.
2019-10-08 15:36:33 -07:00
f63a93b139 Fix cloudbuild (#850) 2019-10-07 16:39:08 -07:00
31648c35f3 Port over an end-to-end test with complete game logic to in-cluster test (#808)
* Port over an end-to-end test with complete logic to in-cluster test

* Fix discrepancy in MMF host name
2019-10-04 12:33:57 -07:00
e96a6e8af7 Reflect latest artifact changes in the release shell script (#831) 2019-10-01 10:48:48 -07:00
d9912c3e28 DeleteFromIgnoreList when tickets got assigned or deleted (#830)
* DeleteFromIgnoreList when tickets got assigned or deleted
2019-09-30 18:28:10 -07:00
8933255ec2 Fix OpenCensus backend_matches_fetched metric (#829)
* Fix OpenCensus backend_matches_fetched metric

* Update
2019-09-30 18:10:23 -07:00
3617d3cdbd Use StringEquals properly in e2e tests (#825)
* Use StringEquals properly in e2e tests
2019-09-30 17:57:26 -07:00
736a979b47 Fix create-gke-cluster command (#844)
* Fix create-gke-cluster command
2019-09-30 15:59:53 -07:00
aa99d7860e Add release note approval to release process. Improve ordering in release notes template (#826) 2019-09-27 12:48:40 -07:00
22ad6fed6b HealthCheck workaround (#827)
* HealthCheck workaround

* Update
2019-09-25 15:23:42 -07:00
39e495512b Make CI compatible with go1.13 (#823)
* Make CI compatible with go1.13

* bump version
2019-09-25 10:48:59 -07:00
291e60a735 Fix Scale tests (#828) 2019-09-23 16:20:53 -07:00
b29dfae9cf Reorder make rule dependencies to fix presubmit (#822) 2019-09-20 16:02:15 -07:00
377b40041d Rename indices' const names to reflect their filter types and unify ticketIndice value (#807)
* Rename indices' names for testing to reflect their filter types

* Fix golangci error
2019-09-19 18:20:53 -07:00
6f7b7640c2 Pass synchronizer id on the context of the call to synchronizer (#817)
Pass synchronizer id on the context of the call to synchronizer
2019-09-19 16:01:47 -07:00
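The commit above passes an identifier on the context of the call to the synchronizer; a small Go sketch of that pattern using gRPC metadata follows. The header name and helper functions are hypothetical, not taken from the Open Match source.

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

// attachSyncID puts a synchronization-window id on the outgoing gRPC context
// so the synchronizer can correlate the call. (Hypothetical helper.)
func attachSyncID(ctx context.Context, id string) context.Context {
	return metadata.AppendToOutgoingContext(ctx, "synchronizer-id", id)
}

// readSyncID is the server-side counterpart that reads the id back off the
// incoming context.
func readSyncID(ctx context.Context) (string, bool) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		return "", false
	}
	vals := md.Get("synchronizer-id")
	if len(vals) == 0 {
		return "", false
	}
	return vals[0], true
}

func main() {
	ctx := attachSyncID(context.Background(), "sync-window-42")
	// Outgoing metadata is only consumed by the gRPC client; this just shows
	// that the context now carries the header.
	md, _ := metadata.FromOutgoingContext(ctx)
	fmt.Println(md.Get("synchronizer-id"))
	_ = readSyncID // a server handler would call this on its incoming context
}
```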
039eefb690 Open Match scale testing improvements (#819)
* Open Match scale testing improvements
2019-09-19 15:44:40 -07:00
5de0ae1fc4 Bump default ignore list TTL to 60 seconds to give the backend sufficient time to allocate DGS (#816) 2019-09-19 15:21:58 -07:00
1e5560603a Enable Synchronizer by default (#809) 2019-09-18 11:53:20 -07:00
61449fe2cf Implement Roster based MMF that populates roster pools with tickets from the pools supplied. (#806)
* update

* Implement the Roster based match function for scale tests
2019-09-17 17:48:47 -07:00
21cf0697fe Implement test backend that will fetch matches, assign tickets and delete tickets at scale (#804) 2019-09-17 17:14:50 -07:00
12e5a37816 Add end-to-end test for new filter types (#805)
* Add end-to-end test for new filter types
2019-09-17 16:25:09 -07:00
e658cc0d84 Add csproj baseline (#794)
* Add csproj baseline

* Automate CSharp packing via Docker

* Build csharp locally

* Rm dockerfile

* Modify gitignore
2019-09-17 16:09:41 -07:00
9e89735d79 Implement test frontend that creates tickets in Open Match continuously (#803) 2019-09-17 15:47:40 -07:00
5cbbfef1cc Implement the profiles package that will be used by the scale backend to generate profiles for different scenarios for scale testing (#802) 2019-09-17 15:14:10 -07:00
1c0e4ff94e Add implementation for Tickets library to generate fake tickets for scale test (#801) 2019-09-17 11:00:41 -07:00
79862c9950 Add placeholder components for Open Match scale benchmarking (#797) 2019-09-16 18:01:37 -07:00
8ac27d7975 Change protobuf namespace to openmatch (#799)
This resolves #723

It would be very weird for other protobuf packages to be importing "api" for Open Match. This changes to a more reasonable unique name.

Eg, tensorflow uses the package "tensorflow" https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/config.proto
2019-09-16 16:41:00 -07:00
86b8cb5aa8 Add string equals filtering and indexing (#798)
Part of implementing #681
2019-09-16 15:54:41 -07:00
fdea3c8f1e Move gke-metadata-server workaround out from install/helm directory (#793)
* Move gke-metadata-server workaround out from install/helm directory
2019-09-16 11:50:17 -07:00
61a28df3e5 Stop using environment variables for Redis connection (#792)
* Stop using environment variables for Redis connection
2019-09-13 11:02:22 -07:00
13fe3fe5a9 Add CSharp namespace (#779) 2019-09-12 09:53:36 -07:00
a674fb1c02 Add bool filtering and indexing (#791)
Add bool filtering and indexing
2019-09-10 14:51:05 -07:00
75ffc83b98 Rename Filter to FloatRangeFilter (#790)
This sets things up for other filter types.
2019-09-06 17:04:21 -07:00
7dc4de6a14 Store indexes used for a ticket, and use them to deindex (#789)
This has two primary advantages:
The redis key of the index can be decided at ticket index / query time. This is important for string equal indexing, where the plan is to concatenate the OM index name and the string value to form the redis key.
Improved correctness when indexes are changed: The ticket will now clean up the indexes it was created with, preventing old indices from existing after all the tickets that used them are gone.

This does add an extra read when deindexing a ticket, but I think the correctness improvement alone is worth that.

Other notes:
Turns out the indexes need to be in a list of interface{} to concatenate with the redis key for the cache, so I changed my mind about computing the list in extractIndexedFields. So extractIndexedFields instead just returns the map of index to values.

Improved a test's assertions by using ElementsMatch.
2019-09-06 16:06:21 -07:00
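A toy, in-memory Go sketch of the idea in #789 above: record the exact index keys a ticket was written into, and use that stored list (not the current configuration) when deindexing. The real implementation uses Redis sorted sets; the types and key names here are illustrative only.

```go
package main

import "fmt"

// indexStore is an in-memory stand-in for the Redis sorted sets.
type indexStore struct {
	indexes  map[string]map[string]float64 // index key -> ticket id -> value
	byTicket map[string][]string           // ticket id -> index keys it was written into
}

func newIndexStore() *indexStore {
	return &indexStore{
		indexes:  map[string]map[string]float64{},
		byTicket: map[string][]string{},
	}
}

// index writes the ticket into each index key and remembers which keys were used.
func (s *indexStore) index(ticketID string, values map[string]float64) {
	for key, v := range values {
		if s.indexes[key] == nil {
			s.indexes[key] = map[string]float64{}
		}
		s.indexes[key][ticketID] = v
		s.byTicket[ticketID] = append(s.byTicket[ticketID], key)
	}
}

// deindex removes the ticket from the keys it was actually indexed under,
// so stale indexes get cleaned up even if the configured set has changed.
func (s *indexStore) deindex(ticketID string) {
	for _, key := range s.byTicket[ticketID] {
		delete(s.indexes[key], ticketID)
	}
	delete(s.byTicket, ticketID)
}

func main() {
	s := newIndexStore()
	s.index("ticket-1", map[string]float64{"ri$mmr": 1200, "ri$level": 30})
	s.deindex("ticket-1")
	fmt.Println(len(s.indexes["ri$mmr"]), len(s.byTicket)) // 0 0
}
```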
f02283e2a6 Add all tickets indexed, used on pools with no filters (#785)
Resolves #767
2019-09-06 14:00:00 -07:00
d1fe7f1ac4 Improve synchronizer logging (#784)
* Improve synchronizer logging

* Improve synchronizer logging
2019-09-06 13:43:51 -07:00
84eb9b27ef Use config value for Redis hostname and port (#783) 2019-09-06 13:18:44 -07:00
707de22912 Separate redis and OM index concepts (#781)
> Currently OM filters directly match filtering fields in redis, and OM ticket properties directly map to values in redis. This change breaks that direct connection. In followup changes, I will be adding other index and filter types. They will be translated into redis sorted set values, so that conversion will take place within these methods. Eg, bool values will be turned into 0 and 1, and bool equal filters will do a range to capture those values.
> 
> This does a couple other minor things:
> 
> * Removes a test case that indexed fields have to be numbers, which is going to be wrong after other filter types are added anyways.
> * Adds a prefix to the redis key for the index. This will be important as other index types are added to avoid collisions.
2019-09-06 12:55:30 -07:00
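A minimal Go sketch of the translation layer #781 describes, going no further than the commit message itself: OM-level values become sorted-set scores (bools map to 0/1), and a bool-equality filter becomes a score range. The prefixed key shown is made up for illustration.

```go
package main

import "fmt"

// toScore converts an OM-level bool into a Redis sorted-set score.
func toScore(v bool) float64 {
	if v {
		return 1
	}
	return 0
}

// boolEqualRange is the score range a bool-equals filter would query.
func boolEqualRange(want bool) (min, max float64) {
	s := toScore(want)
	return s, s
}

func main() {
	// Illustrative prefixed key, avoiding collisions with other index types.
	key := "ri$bool$mode.ranked"
	min, max := boolEqualRange(true)
	fmt.Printf("ZRANGEBYSCORE %s %v %v\n", key, min, max)
}
```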
780e3abf10 Fix demo bug (#780) 2019-09-05 16:35:57 -07:00
524b7d333f Use secure websockets when demo page is on https (#775)
This fixes the scenario where the demo is behind an https proxy. In that scenario, it would
previously try to connect via unsecured websockets, which doesn't work. Specifically, this
is the case for Google Cloud Console's Web Preview.

Tested=Manually, bridging locally and also with the Cloud Console.
2019-09-05 09:22:40 -07:00
c544b9a239 Fix the namespace bug in install/yaml (#769) 2019-09-03 23:57:21 -07:00
04b6f1a5ad Have filter tickets take pool instead of list of filters (#773)
Part of the work for #681

This changes the API of FilterTickets to take a pool instead of filter list. Following the API
in the proposal, different filter types will be different fields on the Pool message.
2019-09-03 16:18:57 -07:00
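Sketching the API shape #773 describes, with stand-in types rather than the real generated ones: the filter function receives the whole pool, so new filter kinds can be added as Pool fields without changing its signature.

```go
package main

import "fmt"

// Minimal stand-in types; the real ones live in the generated pb package.
type DoubleRangeFilter struct {
	DoubleArg string
	Min, Max  float64
}

type Pool struct {
	DoubleRangeFilters []DoubleRangeFilter
	// Follow-up filter types (string-equals, tag, ...) become further fields here.
}

type Ticket struct {
	ID         string
	DoubleArgs map[string]float64
}

// filterTickets takes the whole pool, as described in #773, rather than a bare
// filter slice, so adding a new filter type does not change this signature.
func filterTickets(pool Pool, tickets []Ticket) []Ticket {
	var out []Ticket
	for _, t := range tickets {
		ok := true
		for _, f := range pool.DoubleRangeFilters {
			v, found := t.DoubleArgs[f.DoubleArg]
			if !found || v < f.Min || v > f.Max {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	pool := Pool{DoubleRangeFilters: []DoubleRangeFilter{{DoubleArg: "mmr", Min: 1000, Max: 1500}}}
	tickets := []Ticket{
		{ID: "a", DoubleArgs: map[string]float64{"mmr": 1250}},
		{ID: "b", DoubleArgs: map[string]float64{"mmr": 2000}},
	}
	fmt.Println(len(filterTickets(pool, tickets))) // 1
}
```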
13952ea54e Fix md-test by adding whitelist value for swagger.io (#774) 2019-09-03 16:00:48 -07:00
a61f4a643e Align Kubernetes API versions and Update the rest of the module versions (#772) 2019-08-30 14:53:00 -07:00
949fa28505 Evaluator http test and implementation (#754)
* Change evaluator API from unary to bidirectional streaming
2019-08-23 16:24:04 -07:00
85cc481f5d Change synchronizer API from unary to bidirectional streaming (#750)
* Change synchronizer API from unary to bidirectional streaming

* bug fix

* reformat

* Update

* Update

* update
2019-08-22 16:20:29 -07:00
c3cbcd7625 Change evaluator API from unary to bidirectional streaming and disable HTTP support for the evaluator (#745)
* Change evaluator API from unary to bidirectional streaming

* Bug fix

* Yet another bug fix

* Update

* Update
2019-08-22 15:37:51 -07:00
e01fc12549 Reformat install/terraform (#751) 2019-08-22 13:25:15 -07:00
e1682100fa Fix swaggerdoc error (#752) 2019-08-22 13:07:17 -07:00
603aef207f Remove dependency on gogo/protobuf (#755) 2019-08-22 12:48:13 -07:00
baf403ac44 Replace json with jsonpb (#761) 2019-08-22 12:32:55 -07:00
b1da77eaba Change backend.FetchMatches from unary to streaming (#743) 2019-08-16 14:51:10 -07:00
bb82a397d2 Ignore globals when linting everywhere. #749
The exclusions list is ever growing because there are many valid
use cases for global variables. The standard library uses them
all over the place. Removing the check, and instead relying on
code review to spot bad uses of globals.
2019-08-16 11:55:12 -07:00
abd2c1434c Explicitly ignore gateway files in golangci (#748) 2019-08-16 11:20:16 -07:00
bc9dc27210 Suppress golangci error (#746)
* Turn off checking shadowing

* Suppress golangci output
2019-08-16 10:11:10 -07:00
084461d387 Change mmf API from unary call to streaming (#740)
* Test to replicate the grpcLimit error

* Change mmf from unary to streaming

* Resolve comment

* Resolve comments
2019-08-15 16:43:22 -07:00
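For context on the unary-to-streaming change in #740 above, here is a rough sketch of a streaming MMF `Run` handler. It assumes the `openmatch` pb package shape from around this time (message and field names may differ between releases), and the match contents and port are placeholders.

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"

	"open-match.dev/open-match/pkg/pb"
)

type matchFunctionService struct{}

// Run streams each proposal back instead of returning one unary response,
// which keeps large result sets under gRPC message-size limits.
func (s *matchFunctionService) Run(req *pb.RunRequest, stream pb.MatchFunction_RunServer) error {
	proposals := []*pb.Match{
		{MatchId: "example-match-1", MatchProfile: "example-profile", MatchFunction: "example-mmf"},
	}
	for _, m := range proposals {
		if err := stream.Send(&pb.RunResponse{Proposal: m}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	lis, err := net.Listen("tcp", ":50502")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	pb.RegisterMatchFunctionServer(srv, &matchFunctionService{})
	log.Fatal(srv.Serve(lis))
}
```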
bc7d014db6 Express open-match-build infrastructure as Terraform template (#729)
* Replicate infrastructure configs in terraform

* Express open-match-build infrastructure as Terraform template

* Import changes
2019-08-15 12:56:45 -07:00
230ae76bb4 Set default logging.rpc value to false (#734)
* Fix rpc enabled config alias

* Set default logging.rpc value to false
2019-08-15 12:35:59 -07:00
ebbe5aa6ce Add a breaking API change issue template (#737) 2019-08-15 11:54:19 -07:00
9b350c690c Disable stress testing in CI (#738) 2019-08-15 11:26:07 -07:00
80b817f488 Fix cloudbuild dependency (#733) 2019-08-15 10:41:11 -07:00
df7021de1b Add comments for values.yaml file in the parent chart (#727)
* Checkpoint

* Comments for values.yaml file
2019-08-13 11:27:53 -07:00
5c8f218000 Remove CI subnet workaround (#728)
* Remove CI subnet workaround

* Apply changes
2019-08-06 13:54:19 -07:00
3f538df971 Fix Makefile targets (#726)
* Makefile dependency fix

* Resolve comments
2019-08-05 14:44:05 -07:00
1e856658c9 Fix proto file's go package. (#725)
Fixes #724

Also removed redundant instruction to build messages.pb.go, and unused instruction to build message.pb.gw.go.
2019-08-05 11:10:40 -07:00
eb6697052d Reflects IAM role changes in terraform config (#709)
* Reflects IAM role changes in terraform config

* Resolve comments

* Resolve comments

* Resolve comments

* Update
2019-08-02 18:26:33 -07:00
31d3464a31 Helm README (#713)
* Helm README

* Indentation

* Review comments
2019-08-02 17:54:19 -07:00
c96b65d52b Make load testing upload test results to GCS (#674)
* Delete old helm config and use new config in CI

* Fix tiller dependency

* Fix cloudbuild

* fix yaml postfix

* Make stress test upload results to GCS

* hi

* Update tgz

* Fix bad merge

* Enable test

* Update charts

* Done

* Test

* Fix

* Fix gcpProjectId

* Update charts

* Update

* Fix

* Update

* Add time
2019-08-02 17:08:51 -07:00
9d601351cc Make synchronizer proto properly internal (#722)
* Move generated files from internal/pb to internal/ipb to avoid name conflict.
* No longer serve / generate the HTTP/JSON endpoint nor the swagger files.
* The proto package is now an internal package.

resolves #534
2019-08-02 16:26:37 -07:00
7272ca8b93 Let end-to-end tests run in-cluster (#706)
* Replace LoadBalancer in CI with NodePort

* Fix
2019-08-02 15:18:12 -07:00
b463d2e0fd Remove unnecessary dependencies to speed up CI (#715) 2019-08-02 14:57:58 -07:00
07da543f8e Autogenerate image commands based on a single list (#714)
With this change, anything added to the cmd/ folder is automatically
made into an image and included in the image commands. Additional
images are also included. This does remove some commands that are
for pushing specific sets of images, but it seems rare, if ever (per yfei1
and sawagh), that anyone actually uses these commands.

Includes some commenting to hopefully alleviate the magic being added.
2019-08-02 14:38:52 -07:00
0d54c39828 Update instructions for release-0.6 (#651) 2019-08-02 14:09:16 -07:00
5469c8bc69 Unify SHORT_SHA and VERSION_SUFFIX (#712) 2019-08-02 11:03:52 -07:00
c837211cd1 Skip using PodSecurityPolicy in CI runs (#717)
* Skip psp in CI run

* Update

* Works

* metadataserver psp

* Update chart
2019-08-02 10:34:40 -07:00
5729e72214 Improve context propagation for synchronizer (#697) 2019-08-01 15:13:23 -07:00
66910632da Distinguish RELEASE_NAME and CHART_NAME in Makefile (#711) 2019-08-01 12:54:52 -07:00
c832074112 Makefile bug fixes (#708)
* Bug fixes
2019-07-31 17:27:11 -07:00
a6d526b36b Remove unused app engine and html makefile stuff (#666)
I assume this was left over from the website being in this repo.
2019-07-31 16:10:12 -07:00
13e017ba65 Remove image artifacts from cloudbuild (#707)
* Remove image artifacts from cloudbuild

* Update cloudbuild.yaml
2019-07-31 14:34:33 -07:00
3784300d22 Payload logging (#696) 2019-07-31 12:53:59 -07:00
31fd18e39b CI Reap Namespace (#705) 2019-07-31 12:32:31 -07:00
a54d1fcf21 Let CI runs in one cluster under unique namespaces (#701) 2019-07-31 12:11:37 -07:00
72a435758e Replace all-proto with variables containing all proto files (#703)
This moves the all-proto target from being phony to containing a concrete list of targets. This means other targets can depend on $(ALL_PROTO) without becoming phony themselves.
2019-07-30 14:33:09 -07:00
6848fa71c2 Remove duplicate step from cloudbuild (#704) 2019-07-30 13:40:29 -07:00
987d90cc44 Have the demo use the template Dockerfile and sit in cmd (#700)
This also gives it a more distinguished name as there are likely to be other demos (with other components) in the near future. It is also sitting in a larger namespace (the cmd folder) which helps too.
2019-07-29 22:12:55 -07:00
baf943fdd3 Fix logger name in the appmain.go (#699) 2019-07-29 15:53:12 -07:00
c7ce1b047b Use templated Dockerfile and make for cmd images (#676)
This leads up to being able to swap out the standard Dockerfile for one which uses a locally built binary. It is also just cleaner, as there is less redundancy across Dockerfiles.

Disables cgo for builds, as it doesn't work with distroless.

Stops relying on the phony protos target, which causes extra rebuilds of all the protos. (Using variable expansion should fix this issue in a separate PR.)

All commands are now named run, because ARG arbitrarily doesn't work for ENTRYPOINT.
2019-07-26 18:10:59 -07:00
36a194e761 Unify server setup and call log configuration (#695) 2019-07-26 13:26:02 -07:00
605511d177 Enable workload identity in terraform and Makefile (#691)
* Enable workload identity in terraform and Makefile

* Applied changes and added tftstate file
2019-07-26 11:28:04 -07:00
2a08732508 Merge netlistener into util package. (#690) 2019-07-26 08:53:21 -07:00
e1c2b96cb5 Add HorizontalPodAutoscaler policies. (#645) 2019-07-26 07:53:57 -07:00
1bd84355b7 Replace context.Background() in tests to prepare for multi-tenancy. (#687) 2019-07-26 07:13:09 -07:00
a9014fbf78 Fix make test targets (#670) 2019-07-26 01:12:05 -07:00
8050c61618 Add helm wait and disable verify unreliable URLs. (#693) 2019-07-25 13:43:51 -07:00
8b765871c4 Add HTTP client logging support and flags for test. (#688) 2019-07-24 14:11:42 -07:00
88786ecbd1 Breakout synchronizer state into it's own struct. (#686) 2019-07-24 11:25:23 -07:00
f41c175f29 Expand hostname:port and hostname in certgen. (#677) 2019-07-23 15:16:09 -07:00
3607809371 Delete old helm config and use new config for CI (#667)
* Delete old helm config and use new config in CI

* Fix dependency

* Bug fixes

* Fix

* Disable tls
2019-07-23 14:36:24 -07:00
d21ae712a7 Add build/cmd to Makefile (#673)
This is part of the larger goal of simplifying and speeding up builds.

General strategy here:

1. Add make build/cmd target, to build what is currently in /cmd. <- this PR
2. In the basebuild Dockerfile, run make build/cmd, and replace the separate Dockerfiles for the services in cmd/ with one Dockerfile (with an arg for which service) which copies from build/cmd// to the Dockerfile and runs it.
3. Reconcile the Docker images not included in the /cmd to follow this pattern as well (in individual PRs.)
4. Clean up redundant ways to build things.
5. (If it improves speed) add local build variation for faster builds.
2019-07-22 14:21:06 -07:00
f3f1908318 Add /help, /configz, /debug/* pages. (#662) 2019-07-17 07:33:09 -07:00
9cb4a9ce6e Remove cloudbuild artifacts (#668) 2019-07-17 07:00:29 -07:00
8dad7fd7d0 Configure helm install using subcharts (#652)
* Split up charts

* Fix build/chart/

* Revert Makefile

* Bug fixes

* Checkpoint

* Update

* Revert Makefile
2019-07-16 15:35:52 -07:00
f70cfee14a Cache dependency downloads by adding only go module to Dockerfile (#664)
First copy only the go.sum and go.mod then download dependencies. Docker
caching is [in]validated by the input files changes. So when the dependencies
for the project don't change, the previous image layer can be re-used. go.sum
is included as its hashing verifies the expected files are downloaded.

I'm ignoring cases where the go.mod is missing a dep for now. Later go commands will
fetch the missing deps, and they should make their way into go.mod sooner rather than later.
If it's a mess, it can be cleaned up later.

The time comparison of building all images from before and after is:
clean build: 7m22s -> 7m22s
after small change to demo: 5m59 -> 5m7s

So no speed increase for purely fresh builds (as expected), but saves a minute when deps haven't changed from the last build.
2019-07-16 14:38:01 -07:00
36f92b4336 Instrument HTTP clients and servers. (#663) 2019-07-16 12:54:19 -07:00
164dfdde67 Open locally serving TCP ports in unit tests to avoid triggering firewall screens. (#660) 2019-07-16 11:32:12 -07:00
c0d6531f3f Simplify HTTP client construction. (#661) 2019-07-16 10:01:56 -07:00
584 changed files with 51173 additions and 18567 deletions

View File

@ -18,6 +18,7 @@
*.exe
*.exe~
*.dll
*.nupkg
*.so
*.dylib
@ -32,10 +33,6 @@
*swo
*~
# Load testing residuals
test/stress/*.csv
test/stress/__pycache__
# Ping data files
*.ping
*.pings
@ -51,6 +48,8 @@ detritus/
# Dotnet Core ignores
*.swp
*.pdb
*.deps.json
*.*~
project.lock.json
.DS_Store
@ -81,6 +80,9 @@ bld/
msbuild.log
msbuild.err
msbuild.wrn
csharp/OpenMatch/obj
Chart.lock
# Visual Studio 2015
.vs/
@ -114,15 +116,15 @@ creds.json
# Open Match Binaries
cmd/backend/backend
cmd/frontend/frontend
cmd/mmlogic/mmlogic
cmd/query/query
cmd/synchronizer/synchronizer
cmd/minimatch/minimatch
cmd/swaggerui/swaggerui
tools/certgen/certgen
examples/demo/demo
examples/functions/golang/soloduel/soloduel
examples/functions/golang/pool/pool
examples/evaluator/golang/simple/simple
test/evaluator/evaluator
test/matchfunction/matchfunction
tools/reaper/reaper
# Open Match Build Directory
@ -130,3 +132,6 @@ build/
# Secrets Directories
install/helm/open-match/secrets/
# Helm tar charts
install/helm/open-match/charts/

1
.github/CODEOWNERS vendored Normal file
View File

@ -0,0 +1 @@
* @laremere @aLekSer @HazWard @calebatwd @syntxerror @sawagh @amg84 @scosgrave @mridulji @markmandel @joeholley

25
.github/ISSUE_TEMPLATE/apichange.md vendored Normal file
View File

@ -0,0 +1,25 @@
---
name: Breaking API change
about: Details of a breaking API change proposal.
title: 'API change: <>'
labels: breaking api change
assignees: ''
---
## Overview
<High level description of this change>
## Motivation
<What is the primary motivation for this API change>
## Impact
<What usage does this impact? Add details here such that a consumer of Open
Match API can clearly tell if this will impact them>
## Change Proto
<Add snippet of the proposed change proto>

30
.github/ISSUE_TEMPLATE/bugreport.md vendored Normal file
View File

@ -0,0 +1,30 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: kind/bug
assignees: ''
---
<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately via
-->
**What happened**:
**What you expected to happen**:
**How to reproduce it (as minimally and precisely as possible)**:
**Anything else we need to know?**:
**Output of `kubectl version`**:
**Cloud Provider/Platform (AKS, GKE, Minikube etc.)**:
**Open Match Release Version**:
**Install Method(yaml/helm)**:

View File

@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: kind/feature
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@ -1,5 +1,5 @@
---
name: release
name: Publish a Release
about: Instructions and checklist for creating a release.
title: 'Release X.Y.Z-rc.N'
labels: kind/release
@ -8,80 +8,90 @@ assignees: ''
# Open Match Release Process
Follow these instructions to create an Open Match release. The output of the
Follow these instructions to create an Open Match release. The output of the
release process is new images and new configuration.
## Getting setup
*note: the commands below are pasted from the 0.5 release. make the necessary
changes to match your naming & environment.*
**NOTE: The instructions below are NOT strictly copy-pastable and assume 0.5**
**release. Please update the version number for your commands.**
The Git flow for pushing a new release is similar to the development process
but there are some small differences.
**1. Clone your fork of the Open Match repository.**
### 1. Clone Repository
```shell
# Clone your fork of the Open Match repository.
git clone git@github.com:afeddersen/open-match.git
```
**2. Move into the new open-match directory.**
```shell
# Change directory to the git repository.
cd open-match
```
**3. Configure a remote that points to the upstream repository. This is required to sync changes you make in a fork with the original repository. Note: Upstream is the gatekeeper of the project or the source of truth to which you wish to contribute.**
```shell
# Add a remote, you'll be pushing to this.
git remote add upstream https://github.com/googleforgames/open-match.git
```
**3. Fetch the branches and their respective commits from the upstream repo.**
### 2. Release Branch
If you're creating the first release of the version, that would be `0.5.0-rc.1`
then you'll need to create the release branch.
```shell
git fetch upstream
# Create a local release branch.
git checkout -b release-0.5 upstream/master
# Push the branch upstream.
git push upstream release-0.5
```
**4. Create a local release branch that tracks upstream and check it out.**
otherwise there should already be a `release-0.5` branch so run,
```shell
# Checkout the release branch.
git checkout -b release-0.5 upstream/release-0.5
```
**NOTE: The branch name must be in the format, `release-X.Y` otherwise**
**some artifacts will not be pushed.**
## Releases & Versions
Open Match uses Semantic Versioning 2.0.0. If you're not familiar please
Open Match uses Semantic Versioning 2.0.0. If you're not familiar please
see the documentation - [https://semver.org/](https://semver.org/).
Full Release / Stable Release:
* The final software product. Stable, reliable, etc...
* Naming example: 1.0.0
* The final software product. Stable, reliable, etc...
* Example: 1.0.0, 1.1.0
Release Candidate (RC):
* A release candidate (RC) is a version with the potential to be the final
product but it hasn't been validated by automated and/or manual tests.
* Naming example: 1.0.0-rc.1
* Example: 1.0.0-rc.1
Hot Fixes:
* Code developed to correct a major software bug or fault
that's been discovered after the full release.
* Naming example: 1.0.1
* Example: 1.0.1
Preview:
* Rare, a one off release cut from the master branch to provide early access
to APIs or some other major change.
* **NOTE: There's no branch for this release.**
* Example: 0.5-preview.1
**NOTE: Semantic versioning is enforced by `go mod`. A non-compliant version**
**tag will cause `go get` to break for users.**
# Detailed Instructions
## Find and replace
Below this point you will see {version} used as a placeholder for future
releases. Find {version} and replace with the current release (e.g. 0.5.0)
## Create a release branch in the upstream repository
releases. Find {version} and replace with the current release (e.g. 0.5.0)
## Create a release branch in the upstream open-match repository
**Note: This step is performed by the person who starts the release. It is
only required once.**
@ -89,7 +99,7 @@ only required once.**
- [ ] Create the branch in the **upstream** repository. It should be named
release-X.Y. Example: release-0.5. At this point there's effectively a code
freeze for this version and all work on master will be included in a future
version. If you're on the branch that you created in the *getting setup*
version. If you're on the branch that you created in the *getting setup*
section above you should be able to push upstream.
```shell
@ -98,22 +108,29 @@ git push origin release-0.5
- [ ] Announce a PR freeze on release-X.Y branch on [open-match-discuss@](mailing-list-post).
- [ ] Open the [`Makefile`](makefile-version) and change BASE_VERSION entry.
- [ ] Open the [`install/helm/open-match/Chart.yaml`](om-chart-yaml-version) and [`install/helm/open-match-example/Chart.yaml`](om-example-chart-yaml-version) and change the `appVersion` and `version` entries.
- [ ] Open the [`install/helm/open-match/values.yaml`](om-values-yaml-version) and [`install/helm/open-match-example/values.yaml`](om-example-values-yaml-version) and change the `tag` entries.
- [ ] Open the [`site/config.toml`] and change the `release_branch` and `release_version` entries.
- [ ] Open the [`install/helm/open-match/Chart.yaml`](om-chart-yaml-version) and change the `appVersion` and `version` entries.
- [ ] Open the [`install/helm/open-match/values.yaml`](om-values-yaml-version) and change the `tag` entries.
- [ ] Open the [`cloudbuild.yaml`] and change the `_OM_VERSION` entry.
- [ ] Run `make clean release`
- [ ] There might be additional references to the old version but be careful not to change it for places that have it for historical purposes.
- [ ] Create a PR with the changes and include the release candidate name.
- [ ] Run `make release`
- [ ] Run `make api/api.md` in open-match repo to update the auto-generated API references in open-match-docs repo.
- [ ] Create a PR with the changes, include the release candidate name, and point it to the release branch.
- [ ] Go to [open-match-build](https://pantheon.corp.google.com/cloud-build/triggers?project=open-match-build) and update all *post submit* triggers' `_GCB_LATEST_VERSION` value to the `X.Y` of the release. This value should only increase as it's used to determine the latest stable version.
- [ ] Merge your changes once the PR is approved.
## Create a release branch in the upstream open-match-docs repository
- [ ] Open [`Makefile`](makefile-version) and change BASE_VERSION entry.
- [ ] Open [`cloudbuild.yaml`] and change the `_OM_VERSION` entry.
- [ ] Open [`site/config.toml`] and change the `release_version` entry.
- [ ] Open [`site/static/swaggerui/config.json`] and change the `api/VERSION/...` entries
- [ ] Create a PR with the changes, include the release candidate name, and point it to the release branch.
## Complete Milestone
**Note: This step is performed by the person who starts the release. It is
**Note: This step is performed by the person who starts the release. It is
only required once.**
- [ ] Create the next [version milestone](https://github.com/googleforgames/open-match/milestones) and use [semantic versioning](https://semver.org/) when naming it to be consistent with the [Go community](https://blog.golang.org/versioning-proposal).
- [ ] Create a *draft* [release](https://github.com/googleforgames/open-match/releases).
- [ ] Create a *draft* [release](https://github.com/googleforgames/open-match/releases). Note that github has both "Pre-release" and "draft" as different concepts for a release. Until the release is finalized, only use "Save draft", and do not use "Publish release".
- [ ] Use the [release template](https://github.com/googleforgames/open-match/blob/master/docs/governance/templates/release.md)
- [ ] `Tag` = v{version}. Example: v0.5.0. Append -rc.# for release candidates. Example: v0.5.0-rc.1.
- [ ] `Target` = release-X.Y. Example: release-0.5.
@ -129,18 +146,27 @@ only required once.**
- [ ] Review all closed issues against the milestone. Put the user visible changes into the release notes using the suggested format. https://github.com/googleforgames/open-match/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aclosed+milestone%3Av{version}
- [ ] Verify the [milestone](https://github.com/googleforgames/open-match/milestones) is effectively 100% at this point with the exception of the release issue itself.
TODO: Add guidelines for labeling issues.
## Build Artifacts
- [ ] Go to [Cloud Build](https://pantheon.corp.google.com/cloud-build/triggers?project=open-match-build), under Post Submit click "Run Trigger".
- [ ] Go to the History section and find the "Post Submit" build that's running. Wait for it to go Green. If it's red, fix the error and repeat this section. Take note of the docker image version tag for the next step. Example: 0.5.0-a4706cb.
- [ ] Go to the History section and find the "Post Submit" build of the merged commit that's running. Wait for it to go Green. If it's red, fix the error and repeat this section. Take note of the docker image version tag for the next step. Example: 0.5.0-a4706cb.
- [ ] Run `./docs/governance/templates/release.sh {source version tag} {version}` to copy the images to open-match-public-images.
- [ ] If this is a new minor version in the newest major version then run `./docs/governance/templates/release.sh {source version tag} latest`.
- [ ] Once the images have successfully been pushed to the registry, modify the line `open-match.dev/open-match v0.0.0-dev` in all `go.mod` files in the [Tutorials](https://github.com/googleforgames/open-match/tree/main/tutorials) directory to use the current release version. This includes all solution subdirectories as well.
- [ ] Use the files under the `build/release/` directory for the Open Match installation guide. Make sure the artifacts work as expected - these are the artifacts that will be published to the GCS bucket and used in our release assets.
- [ ] Copy the files from `build/release/` generated from `make release` to the release draft you created. You can drag and drop the files using the Github UI.
- [ ] Run `make delete-gke-cluster create-gke-cluster` and run through the instructions under the [README](readme-deploy), verify the pods are healthy. You'll need to adjust the path to the `build/release/install.yaml` and `build/release/install-demo.yaml` in your local clone since you haven't published them yet.
- [ ] Open the [`README.md`](readme-deploy) update the version references and submit. (Release candidates can ignore this step.)
- [ ] Publish the [Release](om-release) in Github.
- [ ] Update [Slack invitation link](https://slack.com/help/articles/201330256-invite-new-members-to-your-workspace#share-an-invite-link) in [open-match.dev](https://open-match.dev/site/docs/contribute/#get-involved).
- [ ] Test Open Match installation under GKE and Minikube environments using YAML files and Helm. Follow the [First Match](https://development.open-match.dev/site/docs/getting-started/first_match/) guide, run `make proxy-demo`, and open `localhost:51507` to make sure everything works.
- [ ] Minikube: Run `make create-mini-cluster` to create a local cluster with latest Kubernetes API version.
- [ ] GKE: Run `make create-gke-cluster` to create a GKE cluster.
- [ ] Helm: Run `helm install open-match -n open-match open-match/open-match`
- [ ] Update usage requirements in the Installation doc - e.g. supported minikube version, kubectl version, golang version, etc.
## Finalize
- [ ] Save the release as a draft.
- [ ] Circulate the draft release to active contributors. Where reasonable, get everyone's ok on the release notes before continuing.
- [ ] Publish the [Release](om-release) in Github. This will notify repository watchers.
- [ ] Publish the [Release](om-release) on Open Match [Blog](https://open-match.dev/site/blog/).
## Announce
@ -151,9 +177,7 @@ TODO: Add guidelines for labeling issues.
[mailing-list-post]: https://groups.google.com/forum/#!newtopic/open-match-discuss
[release-template]: https://github.com/googleforgames/open-match/blob/master/docs/governance/templates/release.md
[makefile-version]: https://github.com/googleforgames/open-match/blob/master/Makefile#L53
[om-example-chart-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match/Chart.yaml#L16
[om-example-values-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match/values.yaml#L16
[om-example-chart-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match-example/Chart.yaml#L16
[om-example-values-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match-example/values.yaml#L16
[om-chart-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match/Chart.yaml#L16
[om-values-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match/values.yaml#L16
[om-release]: https://github.com/googleforgames/open-match/releases/new
[readme-deploy]: https://github.com/googleforgames/open-match/blob/master/README.md#deploy-to-kubernetes

16
.github/pull_request_template.md vendored Normal file
View File

@ -0,0 +1,16 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
If this is your first time, please read our contributor guidelines: https://github.com/googleforgames/open-match/blob/master/CONTRIBUTING.md and developer guide https://github.com/googleforgames/open-match/blob/master/docs/development.md
-->
**What this PR does / Why we need it**:
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Closes #<issue number>`, or `Closes (paste link of issue)`.
-->
Closes #
**Special notes for your reviewer**:

18
.gitignore vendored
View File

@ -16,6 +16,7 @@
*.exe
*.exe~
*.dll
*.nupkg
*.so
*.dylib
@ -30,10 +31,6 @@
*swo
*~
# Load testing residuals
test/stress/*.csv
test/stress/__pycache__
# Ping data files
*.ping
*.pings
@ -49,6 +46,8 @@ detritus/
# Dotnet Core ignores
*.swp
*.pdb
*.deps.json
*.*~
project.lock.json
.DS_Store
@ -79,6 +78,8 @@ bld/
msbuild.log
msbuild.err
msbuild.wrn
csharp/OpenMatch/obj
Chart.lock
# Visual Studio 2015
.vs/
@ -111,16 +112,19 @@ creds.json
# Open Match Binaries
cmd/backend/backend
cmd/frontend/frontend
cmd/mmlogic/mmlogic
cmd/query/query
cmd/synchronizer/synchronizer
cmd/minimatch/minimatch
cmd/swaggerui/swaggerui
tools/certgen/certgen
examples/demo/demo
examples/functions/golang/soloduel/soloduel
examples/functions/golang/pool/pool
examples/evaluator/golang/simple/simple
test/evaluator/evaluator
test/matchfunction/matchfunction
tools/reaper/reaper
# Secrets Directories
install/helm/open-match/secrets/
# Helm tar charts
install/helm/open-match/charts/

View File

@ -16,6 +16,8 @@
# with their default values.
# https://github.com/golangci/golangci-lint#config-file
service:
golangci-lint-version: 1.18.0
# options for analysis running
run:
@ -45,7 +47,7 @@ run:
# won't be reported. Default value is empty list, but there is
# no need to include all autogenerated files, we confidently recognize
# autogenerated files. If it's not please let us know.
skip-files:
skip-files: '.*\.gw\.go'
# output configuration options
output:
@ -165,18 +167,15 @@ linters-settings:
linters:
enable-all: true
disable:
- goimports
- stylecheck
- gocritic
- dupl
- funlen
- gochecknoglobals
- goconst
- gocyclo
- gosec
- lll
- staticcheck
- scopelint
- prealloc
- gofmt
- interfacer # deprecated - "A tool that suggests interfaces is prone to bad suggestions"
- lll
- typecheck
#linters:
# enable-all: true
@ -191,42 +190,11 @@ issues:
# Excluding configuration per-path, per-linter, per-text and per-source
exclude-rules:
- path: internal[/\\]config[/\\]
linters:
- gochecknoglobals
- path: consts\.go
linters:
- gochecknoglobals
# Exclude some linters from running on test files
- path: _test\.go
linters:
- errcheck
- bodyclose
# The following are allowed global variable patterns.
# Generally it's ok to have constants or variables that effectively act as constants such as a static logger or flag values.
# The filters below specify the source code pattern that's allowed when declaring a global
# 'source: "flag."' will match 'var destFlag = flag.String("dest", "", "")'
- source: "flag."
linters:
- gochecknoglobals
- source: "telemetry."
linters:
- gochecknoglobals
- source: "View."
linters:
- gochecknoglobals
- source: "tag."
linters:
- gochecknoglobals
- source: "logrus."
linters:
- gochecknoglobals
- source: "stats."
linters:
- gochecknoglobals
- source: "serviceAddressList"
linters:
- gochecknoglobals
# Exclude known linters from partially hard-vendored code,
# which is impossible to exclude via "nolint" comments.

View File

@ -1,60 +0,0 @@
# Release history
## v0.4.0 (alpha)
### Release notes
- Thanks to completion of Issues [#42](issues/42) and [#45](issues/45), there is no longer a need to use the `openmatch-base` image when building components of Open Match. Each standalone application is now self-contained in its `Dockerfile` and `cloudbuild.yaml` files, and builds have been substantially simplified. **Note**: The default `Dockerfile` and `cloudbuild.yaml` now tag their images with the version number, not `dev`, and the YAML files in the `install` directory now reflect this.
- This paves the way for CI/CD in an upcoming version.
- This paves the way for public images in an upcoming version!
## v0.3.0 (alpha)
This update is focused on the Frontend API and Player Records, including more robust code for indexing, deindexing, reading, writing, and expiring player requests from Open Match state storage. All Frontend API function arguments have changed, although many only slightly. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!
### Release notes
- The Frontend API calls have all been changed to reflect the fact that they operate on Players in state storage. To queue a game client, call 'CreatePlayer' in Open Match; to get updates, 'GetUpdates'; and to stop matching, 'DeletePlayer'. The calls are now much more obviously related to how Open Match sees players: they are database records that it creates on demand, updates using MMFs and the Backend API, and deletes when the player is no longer looking for a match.
- The Player record in state storage has changed to a more complete hash format, and it no longer makes sense to remove a player's assignment from the Frontend as a separate action to removing their record entirely. `DeleteAssignment()` has therefore been removed. Just use `DeletePlayer` instead; you'll always want the client to re-request matching with its latest attributes anyway.
- There is now a module for [indexing and deindexing players in state storage](internal/statestorage/redis/playerindices/playerindices.go). This is *much* more efficient, as well as cleaner and more maintainable, than the previous implementation, which was **hard-coded to index everything** you passed in to the Frontend API at a specific JSON object depth.
- This paves the way for dynamically choosing your indices without restarting the matchmaker. This will be implemented if there is demand. Pull Requests are welcome!
- Two internal timestamp-based indices have replaced the previous `timestamp` index. `created` is used to calculate how long a player has been waiting for a match, `accessed` is used to determine when a player needs to be expired out of state storage. Both are prefixed by the string `OM_METADATA` so it should be easy to spot them.
- A call to the Frontend API `GetUpdates()` gRPC endpoint returns a stream of player messages. This is used to send updates to state storage for the `Assignment`, `Status`, and `Error` Player fields in near-realtime. **It is the responsibility of the game client to disconnect** from the stream when it has gotten the results it was waiting for!
- Moved the rest of the gRPC messages into a shared [`messages.proto` file](api/protobuf-spec/messages.proto).
- Added documentation to Frontend API gRPC calls to the [`frontend.proto` file](api/protobuf-spec/frontend.proto).
- [Issue #41](https://github.com/googleforgames/open-match/issues/41)|[PR #48](https://github.com/googleforgames/open-match/pull/48) There is now a HA Redis install available in `install/yaml/01-redis-failover.yaml`. This would be used as a drop-in replacement for a single-instance Redis configuration in `install/yaml/01-redis.yaml`. The HA configuration requires that you install the [Redis Operator](https://github.com/spotahome/redis-operator) (note: **currently alpha**, use at your own risk) in your Kubernetes cluster.
- As part of this change, the kubernetes service name is now `redis` not `redis-sentinel` to denote that it is accessed using a standard Redis client.
- Open Match uses a new feature of the go module [logrus](github.com/sirupsen/logrus) to include filenames and line numbers. If you have an older version in your local build environment, you may need to delete the module and `go get github.com/sirupsen/logrus` again. When building using the provided `cloudbuild.yaml` and `Dockerfile`s this is handled for you.
- The program that was formerly in `examples/frontendclient` has been expanded and has been moved to the `test` directory under [`test/cmd/frontendclient/`](test/cmd/frontendclient/).
- The client load generator program has been moved from `test/cmd/client` to [`test/cmd/clientloadgen/`](test/cmd/clientloadgen/) to better reflect what it does.
- [Issue #45](https://github.com/googleforgames/open-match/issues/45) The process for moving the build files (`Dockerfile` and `cloudbuild.yaml`) for each component, example, and test program to their respective directories and out of the repository root has started but won't be completed until a future version.
- Put some basic notes in the [production guide](docs/production.md)
- Added a basic [roadmap](docs/roadmap.md)
## v0.2.0 (alpha)
This is a pretty large update. Custom MMFs or evaluators from 0.1.0 may need some tweaking to work with this version. Some Backend API function arguments have changed. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!
v0.2.0 focused on adding additional functionality to Backend API calls and on **reducing the amount of boilerplate code required to make a custom Matchmaking Function**. For this, a new internal API for use by MMFs called the [Matchmaking Logic API (MMLogic API)](README.md#matchmaking-logic-mmlogic-api) has been added. Many of the core components and examples had to be updated to use the new Backend API arguments and the modules to support them, so we recommend you rebuild and redeploy all the components to use v0.2.0.
### Release notes
- MMLogic API is now available. Deploy it to kubernetes using the [appropriate json file]() and check out the [gRPC API specification](api/protobuf-spec/mmlogic.proto) to see how to use it. To write a client against this API, you'll need to compile the protobuf files to your language of choice. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory.
- When using the MMLogic API to filter players into pools, it will attempt to report back the number of players that matched the filters and how long the filters took to query state storage.
- An [example MMF](examples/functions/python3/mmlogic-simple/harness.py) using it has been written in Python3. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory. By default the [example backend client](examples/backendclient/main.go) is now configured to use this MMF, so make sure you have it available before you try to run the latest backend client.
- An [example MMF](examples/functions/php/mmlogic-simple/harness.py) using it has been contributed by Ilya Hrankouski in PHP (thanks!).
- The [example golang MMF](examples/functions/golang/manual-simple/) has been updated to use the latest data schemas for MatchObjects, and renamed to `manual-simple` to denote that it is manually manipulating Redis, not using the MMLogic API.
- The API specs have been split into separate files per API and the protobuf messages are in a separate file. Things were renamed slightly as a result, and you will need to update your API clients. The Frontend API hasn't had its messages moved to the shared messages file yet, but this will happen in an upcoming version.
- The message model for using the Backend API has changed slightly - for calls that make MatchObjects, the expectation is that you will provide a MatchObject with a few fields populated, and it will then be shuttled along through state storage to your MMF and back out again, with various processes 'filling in the blanks' of your MatchObject, which is then returned to your code calling the Backend API. Read the [gRPC API specification](api/protobuf-spec/backend.proto) for more information.
- As part of this, compiled protobuf golang modules now live in the [`internal/pb`](internal/pb) directory. There's a handy [bash script](api/protoc-go.sh) for compiling them from the `api/protobuf-spec` directory into this new `internal/pb` directory for development in your local golang environment if you need it.
- As part of this Backend API message shift and the advent of the MMLogic API, 'player pools' and 'rosters' are now first-class data structures in MatchObjects for those who wish to use them. You can ignore them if you like, but if you want to use some of the MMLogic API calls to automate tasks for you - things like filtering a pool of players according to attributes or adding all the players in your rosters to the ignorelist so other MMFs don't try to grab them - you'll need to put your data into the [protobuf messages](api/protobuf-spec/messages.proto) so Open Match knows how to read them. The sample backend client [test profile JSON](examples/backendclient/profiles/testprofile.json) has been updated to use this format if you want to see an example.
- Rosters were formerly space-delimited lists of player IDs. They are now first-class repeated protobuf message fields in the [Roster message format](api/protobuf-spec/messages.proto). That means that in most languages, you can access the roster as a list of players using your native language data structures (more info can be found in the [guide for using protocol buffers in your language of choice](https://developers.google.com/protocol-buffers/docs/reference/overview)). If you don't care about the new fields or the new functionality, you can just leave all the other fields but the player ID unset.
- Open Match is transitioning to using [protocol buffer messages](https://developers.google.com/protocol-buffers/) as its internal data format. There is now a Redis state storage [golang module](internal/statestorage/redis/redispb/) for marshaling and unmarshaling MatchObject messages to and from Redis. It isn't very clean code right now but will get worked on for the next couple releases.
- Ignorelists now exist, and have a Redis state storage [golang module](internal/statestorage/redis/ignorelist/) for CRUD access. Currently three ignorelists are defined in the [config file](config/matchmaker_config.json) with their respective parameters. These are implemented as [Sorted Sets in Redis](https://redis.io/commands#sorted_set).
- For those who only want to stand up Open Match and aren't interested in individually tweaking the required kubernetes resources, there are now [three YAML files](install/yaml) that can be used to install Redis, install Open Match, and (optionally) install Prometheus. You'll still need the `sed` [instructions from the Developer Guide](docs/development.md#running-open-match-in-a-development-environment) to substitute in the name of your Docker container registry.
- A super-simple module has been created for doing intersections, unions, and differences of lists of player IDs. It lives in `internal/set/set.go`.
### Roadmap
- It has become clear from talking to multiple users that the software they write to talk to the Backend API needs a name. 'Backend API Client' is technically correct, but given how many APIs are in Open Match and the overwhelming use of 'Client' to refer to a Game Client in the industry, we're currently calling this a 'Director', as its primary purpose is to 'direct' which profiles are sent to the backend, and 'direct' the resulting MatchObjects to game servers. Further discussion / suggestions are welcome.
- We'll be entering the design stage on longer-running MMFs before the end of the year. We'll get a proposal together and on the github repo as a request for comments, so please keep your eye out for that.
- Match profiles providing multiple MMFs to run isn't planned anymore. Just send multiple copies of the profile with different MMFs specified via the backendapi.
- Redis Sentinel will likely not be supported. Instead, replicated instances and HAProxy may be the HA solution of choice. There's an [outstanding issue to investigate and implement](https://github.com/googleforgames/open-match/issues/41) if it fills our needs, feel free to contribute!
## v0.1.0 (alpha)
Initial release.

View File

@ -12,9 +12,17 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM golang:latest
# When updating Go version, update Dockerfile.ci, Dockerfile.base-build, and go.mod
FROM golang:1.19.5
ENV GO111MODULE=on
WORKDIR /go/src/open-match.dev/open-match
COPY . .
# First copy only the go.sum and go.mod then download dependencies. Docker
# caching is [in]validated by the input files changes. So when the dependencies
# for the project don't change, the previous image layer can be re-used. go.sum
# is included as its hashing verifies the expected files are downloaded.
COPY go.sum go.mod ./
RUN go mod download
COPY . .

View File

@ -15,7 +15,7 @@
FROM debian
RUN apt-get update
RUN apt-get install -y -qq git make python3 virtualenv curl sudo unzip apt-transport-https ca-certificates curl software-properties-common gnupg2 bc
RUN apt-get install -y -qq git make python3 virtualenv curl sudo unzip apt-transport-https ca-certificates curl software-properties-common gnupg2
# Docker
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
@ -31,14 +31,18 @@ RUN sudo apt-get install -y -qq docker-ce docker-ce-cli containerd.io
RUN export CLOUD_SDK_REPO="cloud-sdk-stretch" && \
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update -y && apt-get install google-cloud-sdk google-cloud-sdk-app-engine-go -y -qq
apt-get update -y && apt-get install google-cloud-sdk google-cloud-sdk-app-engine-go -y -qq && \
sudo apt-get update -y && \
sudo apt-get install -y google-cloud-sdk-gke-gcloud-auth-plugin
# Install Golang
# https://github.com/docker-library/golang/blob/fd272b2b72db82a0bd516ce3d09bba624651516c/1.12/stretch/Dockerfile
# https://github.com/docker-library/golang/blob/master/1.14/stretch/Dockerfile
RUN mkdir -p /toolchain/golang
WORKDIR /toolchain/golang
RUN sudo rm -rf /usr/local/go/
RUN curl -L https://storage.googleapis.com/golang/go1.12.6.linux-amd64.tar.gz | sudo tar -C /usr/local -xz
# When updating Go version, update Dockerfile.ci, Dockerfile.base-build, and go.mod
RUN curl -L https://golang.org/dl/go1.19.5.linux-amd64.tar.gz | sudo tar -C /usr/local -xz
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH

View File

@ -14,20 +14,26 @@
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/backend/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
WORKDIR /go/src/open-match.dev/open-match
ARG IMAGE_TITLE
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
make "build/cmd/${IMAGE_TITLE}"
FROM gcr.io/distroless/static:nonroot
ARG IMAGE_TITLE
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/backend/backend /app/
ENTRYPOINT ["/app/backend"]
COPY --from=builder --chown=nonroot "/go/src/open-match.dev/open-match/build/cmd/${IMAGE_TITLE}/" "/app/"
ENTRYPOINT ["/app/run"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Open Match Backend API"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md

Makefile: 957 changes. File diff suppressed because it is too large.

View File

@ -24,13 +24,9 @@ The [Open Match Development guide](docs/development.md) has detailed instruction
on getting the source code, making changes, testing and submitting a pull request
to Open Match.
## Disclaimer
This software is currently alpha, and subject to change.
## Support
* [Slack Channel](https://open-match.slack.com/) ([Signup](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))
* [Slack Channel](https://open-match.slack.com/) ([Signup](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLTM5ZWQxNjc1YWI3MzJmN2RiMWJmYWI0ZjFiNzNkZmNkMWQ3YWU5OGVkNzA5Yzc4OGVkOGU5MTc0OTA5ZTA5NDU))
* [File an Issue](https://github.com/googleforgames/open-match/issues/new)
* [Mailing list](https://groups.google.com/forum/#!forum/open-match-discuss)

View File

@ -6,4 +6,10 @@ gRPC has first-class support for [many languages](https://grpc.io/docs/) and pro
For HTTP/HTTPS, Open Match uses a gRPC proxy to serve the API. Since HTTP does not provide a structure for requests/responses, we use Swagger to provide a schema. You can view the Swagger docs for each service in this directory's `*.swagger.json` files. In addition, each server will host its swagger doc via `GET /swagger.json` if you want to dynamically load them at runtime.
Lastly, Open Match supports insecure and TLS modes for serving the API. TLS mode is strongly preferred in production, but insecure mode can be used for testing and local development. To help with certificate management, see `tools/certgen` to create self-signed certificates.
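As the paragraph above notes, each server hosts its Swagger doc at `GET /swagger.json`. A minimal sketch of loading it dynamically at runtime follows; the address is a placeholder for wherever your Open Match service is actually listening.

```go
// Minimal sketch of dynamically loading a service's Swagger schema at runtime.
// The host and port below are placeholders, not a documented default.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:51505/swagger.json") // placeholder address
	if err != nil {
		log.Fatalf("fetching swagger doc: %v", err)
	}
	defer resp.Body.Close()

	doc, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading swagger doc: %v", err)
	}
	fmt.Printf("fetched %d bytes of swagger schema\n", len(doc))
}
```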
# Open Match API Development Guide
Open Match proto comments follow the same format as [this file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto).
If you plan to change the proto definitions, please update the comments and run `make api/api.md` to reflect the latest changes in open-match-docs.

View File

@ -13,14 +13,15 @@
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
import "protoc-gen-openapiv2/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
option (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_swagger) = {
info: {
title: "Backend"
version: "1.0"
@ -54,8 +55,7 @@ option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
// Configuration for the Match Function to be triggered by Open Match to
// generate proposals.
// FunctionConfig specifies a MMF address and client type for Backend to establish connections with the MMF
message FunctionConfig {
string host = 1;
int32 port = 2;
@ -67,48 +67,102 @@ message FunctionConfig {
}
message FetchMatchesRequest {
// Configuration of the MatchFunction to be executed for the given list of MatchProfiles
// A configuration for the MatchFunction server of this FetchMatches call.
FunctionConfig config = 1;
// MatchProfiles for which this MatchFunction should be executed.
repeated MatchProfile profiles = 2;
// A MatchProfile that will be sent to the MatchFunction server of this FetchMatches call.
MatchProfile profile = 2;
}
message FetchMatchesResponse {
// Result Match for the requested MatchProfile.
// Note that OpenMatch will validate the proposals, a valid match should contain at least one ticket.
repeated Match matches = 1;
// A Match generated by the user-defined MMF with the specified MatchProfiles.
// A valid Match response will contain at least one ticket.
Match match = 1;
}
message AssignTicketsRequest {
// List of Ticket IDs for which the Assignment is to be made.
message ReleaseTicketsRequest{
// TicketIds is a list of string representing Open Match generated Ids to be re-enabled for MMF querying
// because they are no longer awaiting assignment from a previous match result
repeated string ticket_ids = 1;
}
message ReleaseTicketsResponse {}
message ReleaseAllTicketsRequest{}
message ReleaseAllTicketsResponse {}
// AssignmentGroup contains an Assignment and the Tickets to which it should be applied.
message AssignmentGroup {
// TicketIds is a list of strings representing Open Match generated Ids which apply to an Assignment.
repeated string ticket_ids = 1;
// Assignment to be associated with the Ticket IDs.
// An Assignment specifies game connection related information to be associated with the TicketIds.
Assignment assignment = 2;
}
message AssignTicketsResponse {}
// AssignmentFailure contains the id of the Ticket that failed the Assignment and the failure status.
message AssignmentFailure {
enum Cause {
UNKNOWN = 0;
TICKET_NOT_FOUND = 1;
}
// The service implementing the Backent API that is called to generate matches
// and make assignments for Tickets.
service Backend {
// FetchMatch triggers execution of the specfied MatchFunction for each of the
// specified MatchProfiles. Each MatchFunction execution returns a set of
// proposals which are then evaluated to generate results. FetchMatch method
// streams these results back to the caller.
rpc FetchMatches(FetchMatchesRequest) returns (FetchMatchesResponse) {
string ticket_id = 1;
Cause cause = 2;
}
message AssignTicketsRequest {
// Assignments is a list of assignment groups that contain assignment and the Tickets to which they should be applied.
repeated AssignmentGroup assignments = 1;
}
message AssignTicketsResponse {
// Failures is a list of all the Tickets that failed assignment along with the cause of failure.
repeated AssignmentFailure failures = 1;
}
// The BackendService implements APIs to generate matches and handle ticket assignments.
service BackendService {
// FetchMatches triggers a MatchFunction with the specified MatchProfile and
// returns a set of matches generated by the Match Making Function, and
// accepted by the evaluator.
// Tickets in matches returned by FetchMatches are moved from active to
// pending, and will not be returned by query.
rpc FetchMatches(FetchMatchesRequest) returns (stream FetchMatchesResponse) {
option (google.api.http) = {
post: "/v1/backend/matches:fetch"
post: "/v1/backendservice/matches:fetch"
body: "*"
};
}
// AssignTickets sets the specified Assignment on the Tickets for the Ticket
// IDs passed.
// AssignTickets overwrites the Assignment field of the input TicketIds.
rpc AssignTickets(AssignTicketsRequest) returns (AssignTicketsResponse) {
option (google.api.http) = {
post: "/v1/backend/tickets:assign"
post: "/v1/backendservice/tickets:assign"
body: "*"
};
}
// ReleaseTickets moves tickets from the pending state, to the active state.
// This enables them to be returned by query, and find different matches.
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc ReleaseTickets(ReleaseTicketsRequest) returns (ReleaseTicketsResponse) {
option (google.api.http) = {
post: "/v1/backendservice/tickets:release"
body: "*"
};
}
// ReleaseAllTickets moves all tickets from the pending state, to the active
// state. This enables them to be returned by query, and find different
// matches.
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc ReleaseAllTickets(ReleaseAllTicketsRequest) returns (ReleaseAllTicketsResponse) {
option (google.api.http) = {
post: "/v1/backendservice/tickets:releaseall"
body: "*"
};
}
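To show how the `BackendService` defined above is consumed, here is a minimal sketch of a caller (for example, a Director) driving `FetchMatches` and `AssignTickets` through the generated `open-match.dev/open-match/pkg/pb` package. The addresses, profile contents, and connection string are placeholders, and this is an illustration rather than a reference client.

```go
// A minimal sketch of a caller using the BackendService defined above.
// Addresses, the profile, and the assignment connection are placeholders.
package main

import (
	"context"
	"io"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "open-match.dev/open-match/pkg/pb"
)

func main() {
	conn, err := grpc.Dial("open-match-backend.open-match.svc.cluster.local:50505", // placeholder
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dialing backend: %v", err)
	}
	defer conn.Close()
	backend := pb.NewBackendServiceClient(conn)

	// FetchMatches takes a single MatchProfile plus the MMF's address and
	// streams back one FetchMatchesResponse per generated Match.
	stream, err := backend.FetchMatches(context.Background(), &pb.FetchMatchesRequest{
		Config: &pb.FunctionConfig{
			Host: "my-matchfunction.open-match.svc.cluster.local", // placeholder MMF address
			Port: 50502,
			Type: pb.FunctionConfig_GRPC,
		},
		Profile: &pb.MatchProfile{
			Name:  "example-profile",
			Pools: []*pb.Pool{{Name: "everyone"}},
		},
	})
	if err != nil {
		log.Fatalf("FetchMatches: %v", err)
	}

	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("receiving match: %v", err)
		}
		match := resp.GetMatch()

		// Collect the ticket IDs and assign them all to one game server via
		// an AssignmentGroup, as defined in the proto above.
		var ids []string
		for _, t := range match.GetTickets() {
			ids = append(ids, t.GetId())
		}
		_, err = backend.AssignTickets(context.Background(), &pb.AssignTicketsRequest{
			Assignments: []*pb.AssignmentGroup{{
				TicketIds:  ids,
				Assignment: &pb.Assignment{Connection: "10.0.0.1:7777"}, // placeholder
			}},
		})
		if err != nil {
			log.Printf("AssignTickets for match %s: %v", match.GetMatchId(), err)
		}
	}
}
```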

View File

@ -13,6 +13,11 @@
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"tags": [
{
"name": "BackendService"
}
],
"schemes": [
"http",
"https"
@ -24,22 +29,38 @@
"application/json"
],
"paths": {
"/v1/backend/matches:fetch": {
"/v1/backendservice/matches:fetch": {
"post": {
"summary": "FetchMatch triggers execution of the specfied MatchFunction for each of the\nspecified MatchProfiles. Each MatchFunction execution returns a set of\nproposals which are then evaluated to generate results. FetchMatch method\nstreams these results back to the caller.",
"operationId": "FetchMatches",
"summary": "FetchMatches triggers a MatchFunction with the specified MatchProfile and\nreturns a set of matches generated by the Match Making Function, and\naccepted by the evaluator.\nTickets in matches returned by FetchMatches are moved from active to\npending, and will not be returned by query.",
"operationId": "BackendService_FetchMatches",
"responses": {
"200": {
"description": "A successful response.",
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/definitions/apiFetchMatchesResponse"
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchFetchMatchesResponse"
},
"error": {
"$ref": "#/definitions/rpcStatus"
}
},
"title": "Stream result of openmatchFetchMatchesResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
@ -48,31 +69,38 @@
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiFetchMatchesRequest"
"$ref": "#/definitions/openmatchFetchMatchesRequest"
}
}
],
"tags": [
"Backend"
"BackendService"
]
}
},
"/v1/backend/tickets:assign": {
"/v1/backendservice/tickets:assign": {
"post": {
"summary": "AssignTickets sets the specified Assignment on the Tickets for the Ticket\nIDs passed.",
"operationId": "AssignTickets",
"summary": "AssignTickets overwrites the Assignment field of the input TicketIds.",
"operationId": "BackendService_AssignTickets",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiAssignTicketsResponse"
"$ref": "#/definitions/openmatchAssignTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
@ -81,18 +109,170 @@
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiAssignTicketsRequest"
"$ref": "#/definitions/openmatchAssignTicketsRequest"
}
}
],
"tags": [
"Backend"
"BackendService"
]
}
},
"/v1/backendservice/tickets:release": {
"post": {
"summary": "ReleaseTickets moves tickets from the pending state, to the active state.\nThis enables them to be returned by query, and find different matches.\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "BackendService_ReleaseTickets",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchReleaseTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchReleaseTicketsRequest"
}
}
],
"tags": [
"BackendService"
]
}
},
"/v1/backendservice/tickets:releaseall": {
"post": {
"summary": "ReleaseAllTickets moves all tickets from the pending state, to the active\nstate. This enables them to be returned by query, and find different\nmatches.\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "BackendService_ReleaseAllTickets",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchReleaseAllTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchReleaseAllTicketsRequest"
}
}
],
"tags": [
"BackendService"
]
}
}
},
"definitions": {
"apiAssignTicketsRequest": {
"AssignmentFailureCause": {
"type": "string",
"enum": [
"UNKNOWN",
"TICKET_NOT_FOUND"
],
"default": "UNKNOWN"
},
"DoubleRangeFilterExclude": {
"type": "string",
"enum": [
"NONE",
"MIN",
"MAX",
"BOTH"
],
"default": "NONE",
"title": "- NONE: No bounds should be excluded when evaluating the filter, i.e.: MIN \u003c= x \u003c= MAX\n - MIN: Only the minimum bound should be excluded when evaluating the filter, i.e.: MIN \u003c x \u003c= MAX\n - MAX: Only the maximum bound should be excluded when evaluating the filter, i.e.: MIN \u003c= x \u003c MAX\n - BOTH: Both bounds should be excluded when evaluating the filter, i.e.: MIN \u003c x \u003c MAX"
},
"openmatchAssignTicketsRequest": {
"type": "object",
"properties": {
"assignments": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchAssignmentGroup"
},
"description": "Assignments is a list of assignment groups that contain assignment and the Tickets to which they should be applied."
}
}
},
"openmatchAssignTicketsResponse": {
"type": "object",
"properties": {
"failures": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchAssignmentFailure"
},
"description": "Failures is a list of all the Tickets that failed assignment along with the cause of failure."
}
}
},
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchAssignmentFailure": {
"type": "object",
"properties": {
"ticket_id": {
"type": "string"
},
"cause": {
"$ref": "#/definitions/AssignmentFailureCause"
}
},
"description": "AssignmentFailure contains the id of the Ticket that failed the Assignment and the failure status."
},
"openmatchAssignmentGroup": {
"type": "object",
"properties": {
"ticket_ids": {
@ -100,84 +280,100 @@
"items": {
"type": "string"
},
"description": "List of Ticket IDs for which the Assignment is to be made."
"description": "TicketIds is a list of strings representing Open Match generated Ids which apply to an Assignment."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment to be associated with the Ticket IDs."
}
}
},
"apiAssignTicketsResponse": {
"type": "object"
},
"apiAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment specifies game connection related information to be associated with the TicketIds."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "AssignmentGroup contains an Assignment and the Tickets to which it should be applied."
},
"apiFetchMatchesRequest": {
"openmatchBackfill": {
"type": "object",
"properties": {
"config": {
"$ref": "#/definitions/apiFunctionConfig",
"title": "Configuration of the MatchFunction to be executed for the given list of MatchProfiles"
},
"profiles": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatchProfile"
},
"description": "MatchProfiles for which this MatchFunction should be executed."
}
}
},
"apiFetchMatchesResponse": {
"type": "object",
"properties": {
"matches": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "Result Match for the requested MatchProfile.\nNote that OpenMatch will validate the proposals, a valid match should contain at least one ticket."
}
}
},
"apiFilter": {
"type": "object",
"properties": {
"attribute": {
"id": {
"type": "string",
"description": "Name of the ticket attribute this Filter operates on."
"description": "Id represents an auto-generated Id issued by Open Match."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by\nthe Match Function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"persistent_field": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be kept persistent \nthroughout the life-cycle of a backfill. \nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
},
"generation": {
"type": "string",
"format": "int64",
"description": "Generation gets incremented on GameServers update operations.\nPrevents the MMF from overriding a newer version from the game server.\nDo NOT read or write to this field, it is for internal tracking, and changing the value will cause bugs."
}
},
"description": "Represents a backfill entity which is used to fill partially full matches.\n\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal."
},
"openmatchDoubleRangeFilter": {
"type": "object",
"properties": {
"double_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.double_args this Filter operates on."
},
"max": {
"type": "number",
"format": "double",
"description": "Maximum value. Defaults to positive infinity (any value above minv)."
"description": "Maximum value."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value. Defaults to 0."
"description": "Minimum value."
},
"exclude": {
"$ref": "#/definitions/DoubleRangeFilterExclude",
"description": "Defines the bounds to apply when filtering tickets by their search_fields.double_args value.\nBETA FEATURE WARNING: This field and the associated values are\nnot finalized and still subject to possible change or removal."
}
},
"description": "A hard filter used to query a subset of Tickets meeting the filtering\ncriteria."
"title": "Filters numerical values to only those within a range.\n double_arg: \"foo\"\n max: 10\n min: 5\nmatches:\n {\"foo\": 5}\n {\"foo\": 7.5}\n {\"foo\": 10}\ndoes not match:\n {\"foo\": 4}\n {\"foo\": 10.01}\n {\"foo\": \"7.5\"}\n {}"
},
"apiFunctionConfig": {
"openmatchFetchMatchesRequest": {
"type": "object",
"properties": {
"config": {
"$ref": "#/definitions/openmatchFunctionConfig",
"description": "A configuration for the MatchFunction server of this FetchMatches call."
},
"profile": {
"$ref": "#/definitions/openmatchMatchProfile",
"description": "A MatchProfile that will be sent to the MatchFunction server of this FetchMatches call."
}
}
},
"openmatchFetchMatchesResponse": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/openmatchMatch",
"description": "A Match generated by the user-defined MMF with the specified MatchProfiles.\nA valid Match response will contain at least one ticket."
}
}
},
"openmatchFunctionConfig": {
"type": "object",
"properties": {
"host": {
@ -188,12 +384,12 @@
"format": "int32"
},
"type": {
"$ref": "#/definitions/apiFunctionConfigType"
"$ref": "#/definitions/openmatchFunctionConfigType"
}
},
"description": "Configuration for the Match Function to be triggered by Open Match to\ngenerate proposals."
"title": "FunctionConfig specifies a MMF address and client type for Backend to establish connections with the MMF"
},
"apiFunctionConfigType": {
"openmatchFunctionConfigType": {
"type": "string",
"enum": [
"GRPC",
@ -201,7 +397,7 @@
],
"default": "GRPC"
},
"apiMatch": {
"openmatchMatch": {
"type": "object",
"properties": {
"match_id": {
@ -219,183 +415,209 @@
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/apiTicket"
"$ref": "#/definitions/openmatchTicket"
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"title": "Set of Rosters that comprise this Match"
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Match properties for this Match. Open Match does not interpret this field."
"backfill": {
"$ref": "#/definitions/openmatchBackfill",
"description": "Backfill request which contains additional information to the match\nand contains an association to a GameServer.\nBETA FEATURE WARNING: This field is not finalized and still subject\nto possible change or removal."
},
"allocate_gameserver": {
"type": "boolean",
"description": "AllocateGameServer signalise Director that Backfill is new and it should \nallocate a GameServer, this Backfill would be assigned.\nBETA FEATURE WARNING: This field is not finalized and still subject\nto possible change or removal."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least \none ticket to be considered as valid."
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least\none ticket to be considered as valid."
},
"apiMatchProfile": {
"openmatchMatchProfile": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of this match profile."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Set of properties associated with this MatchProfile. (Optional)\nOpen Match does not interpret these properties but passes them through to\nthe MatchFunction."
},
"pools": {
"type": "array",
"items": {
"$ref": "#/definitions/apiPool"
"$ref": "#/definitions/openmatchPool"
},
"description": "Set of pools to be queried when generating a match for this MatchProfile.\nThe pool names can be used in empty Rosters to specify composition of a\nmatch."
"description": "Set of pools to be queried when generating a match for this MatchProfile."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Set of Rosters for this match request. Could be empty Rosters used to\nindicate the composition of the generated Match or they could be partially\npre-populated Ticket list to be used in scenarios such as backfill / join\nin progress."
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "A MatchProfile is Open Match's representation of a Match specification. It is\nused to indicate the criteria for selecting players for a match. A\nMatchProfile is the input to the API to get matches and is passed to the\nMatchFunction. It contains all the information required by the MatchFunction\nto generate match proposals."
},
"apiPool": {
"openmatchPool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Pool."
},
"filters": {
"double_range_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiFilter"
"$ref": "#/definitions/openmatchDoubleRangeFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected players must\nmatch every Filter."
"description": "Set of Filters indicating the filtering criteria. Selected tickets must\nmatch every Filter."
},
"string_equals_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchStringEqualsFilter"
}
},
"tag_present_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTagPresentFilter"
}
},
"created_before": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created before the specified time are selected."
},
"created_after": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created after the specified time are selected."
}
}
},
"description": "Pool specfies a set of criteria that are used to select a subset of Tickets\nthat meet all the criteria."
},
"apiRoster": {
"openmatchReleaseAllTicketsRequest": {
"type": "object"
},
"openmatchReleaseAllTicketsResponse": {
"type": "object"
},
"openmatchReleaseTicketsRequest": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
},
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
"title": "TicketIds is a list of string representing Open Match generated Ids to be re-enabled for MMF querying\nbecause they are no longer awaiting assignment from a previous match result"
}
}
},
"openmatchReleaseTicketsResponse": {
"type": "object"
},
"openmatchSearchFields": {
"type": "object",
"properties": {
"double_args": {
"type": "object",
"additionalProperties": {
"type": "number",
"format": "double"
},
"description": "Float arguments. Filterable on ranges."
},
"string_args": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "String arguments. Filterable on equality."
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Filterable on presence or absence of given value."
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that a Tickets\nassociated with a Match can be labelled to belong to a team, sub-team etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress etc."
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"apiTicket": {
"openmatchStringEqualsFilter": {
"type": "object",
"properties": {
"string_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.string_args this Filter operates on."
},
"value": {
"type": "string"
}
},
"title": "Filters strings exactly equaling a value.\n string_arg: \"foo\"\n value: \"bar\"\nmatches:\n {\"foo\": \"bar\"}\ndoes not match:\n {\"foo\": \"baz\"}\n {\"bar\": \"foo\"}\n {}"
},
"openmatchTagPresentFilter": {
"type": "object",
"properties": {
"tag": {
"type": "string"
}
},
"title": "Filters to the tag being present on the search_fields.\n tag: \"foo\"\nmatches:\n [\"foo\"]\n [\"bar\",\"foo\"]\ndoes not match:\n [\"bar\"]\n []"
},
"openmatchTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
"description": "Id represents an auto-generated Id issued by Open Match."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"persistent_field": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be kept persistent \nthroughout the life-cycle of a ticket. \nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"@type": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of that\nvariants, absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
"additionalProperties": {},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := anypb.New(foo)\n if err != nil {\n ...\n }\n ...\n foo := \u0026pb.Foo{}\n if err := any.UnmarshalTo(foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"rpcStatus": {
"type": "object",
@ -403,11 +625,11 @@
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
"description": "The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client."
},
"details": {
"type": "array",
@ -417,8 +639,7 @@
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
"description": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). Each `Status` message contains\nthree pieces of data: error code, error message, and error details.\n\nYou can find out more about this error model and how to work with it in the\n[API Design Guide](https://cloud.google.com/apis/design/errors)."
}
},
"externalDocs": {

View File

@ -13,14 +13,15 @@
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
import "protoc-gen-openapiv2/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
option (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_swagger) = {
info: {
title: "Evaluator"
version: "1.0"
@ -51,25 +52,26 @@ option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
}
// TODO Add annotations for security_defintiions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/internal/proto/examplepb/a_bit_of_everything.proto
};
message EvaluateRequest {
// List of Matches to evaluate.
repeated api.Match matches = 1;
// A Matches proposed by the Match Function representing a candidate of the final results.
Match match = 1;
}
message EvaluateResponse {
// Accepted list of Matches.
repeated api.Match matches = 1;
// A Match ID representing a shortlisted match returned by the evaluator as the final result.
string match_id = 2;
// Deprecated fields
reserved 1;
}
// The service implementing the Evaluator API that is called to evaluate
// matches generated by MMFs and shortlist them to accepted results.
// The Evaluator service implements APIs used to evaluate and shortlist matches proposed by MMFs.
service Evaluator {
// Evaluate accepts a list of proposed matches, evaluates them for quality,
// collisions etc. and returns matches that should be accepted as results.
rpc Evaluate(EvaluateRequest) returns (EvaluateResponse) {
// Evaluate evaluates a list of proposed matches based on quality, collision status, and etc, then shortlist the matches and returns the final results.
rpc Evaluate(stream EvaluateRequest) returns (stream EvaluateResponse) {
option (google.api.http) = {
post: "/v1/evaluator/matches:evaluate"
body: "*"

View File

@ -13,6 +13,11 @@
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"tags": [
{
"name": "Evaluator"
}
],
"schemes": [
"http",
"https"
@ -26,29 +31,46 @@
"paths": {
"/v1/evaluator/matches:evaluate": {
"post": {
"summary": "Evaluate accepts a list of proposed matches, evaluates them for quality,\ncollisions etc. and returns matches that should be accepted as results.",
"operationId": "Evaluate",
"summary": "Evaluate evaluates a list of proposed matches based on quality, collision status, and etc, then shortlist the matches and returns the final results.",
"operationId": "Evaluator_Evaluate",
"responses": {
"200": {
"description": "A successful response.",
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/definitions/apiEvaluateResponse"
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchEvaluateResponse"
},
"error": {
"$ref": "#/definitions/rpcStatus"
}
},
"title": "Stream result of openmatchEvaluateResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"description": " (streaming inputs)",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiEvaluateRequest"
"$ref": "#/definitions/openmatchEvaluateRequest"
}
}
],
@ -59,49 +81,80 @@
}
},
"definitions": {
"apiAssignment": {
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"apiEvaluateRequest": {
"openmatchBackfill": {
"type": "object",
"properties": {
"matches": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
"id": {
"type": "string",
"description": "Id represents an auto-generated Id issued by Open Match."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "List of Matches to evaluate."
"description": "Customized information not inspected by Open Match, to be used by\nthe Match Function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"persistent_field": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be kept persistent \nthroughout the life-cycle of a backfill. \nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
},
"generation": {
"type": "string",
"format": "int64",
"description": "Generation gets incremented on GameServers update operations.\nPrevents the MMF from overriding a newer version from the game server.\nDo NOT read or write to this field, it is for internal tracking, and changing the value will cause bugs."
}
},
"description": "Represents a backfill entity which is used to fill partially full matches.\n\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal."
},
"openmatchEvaluateRequest": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/openmatchMatch",
"description": "A Matches proposed by the Match Function representing a candidate of the final results."
}
}
},
"apiEvaluateResponse": {
"openmatchEvaluateResponse": {
"type": "object",
"properties": {
"matches": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "Accepted list of Matches."
"match_id": {
"type": "string",
"description": "A Match ID representing a shortlisted match returned by the evaluator as the final result."
}
}
},
"apiMatch": {
"openmatchMatch": {
"type": "object",
"properties": {
"match_id": {
@ -119,139 +172,103 @@
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/apiTicket"
"$ref": "#/definitions/openmatchTicket"
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"title": "Set of Rosters that comprise this Match"
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Match properties for this Match. Open Match does not interpret this field."
"backfill": {
"$ref": "#/definitions/openmatchBackfill",
"description": "Backfill request which contains additional information to the match\nand contains an association to a GameServer.\nBETA FEATURE WARNING: This field is not finalized and still subject\nto possible change or removal."
},
"allocate_gameserver": {
"type": "boolean",
"description": "AllocateGameServer signalise Director that Backfill is new and it should \nallocate a GameServer, this Backfill would be assigned.\nBETA FEATURE WARNING: This field is not finalized and still subject\nto possible change or removal."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least \none ticket to be considered as valid."
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least\none ticket to be considered as valid."
},
"apiRoster": {
"openmatchSearchFields": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
"double_args": {
"type": "object",
"additionalProperties": {
"type": "number",
"format": "double"
},
"description": "Float arguments. Filterable on ranges."
},
"ticket_ids": {
"string_args": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "String arguments. Filterable on equality."
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
"description": "Filterable on presence or absence of given value."
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that a Tickets\nassociated with a Match can be labelled to belong to a team, sub-team etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress etc."
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"apiTicket": {
"openmatchTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
"description": "Id represents an auto-generated Id issued by Open Match."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"persistent_field": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be kept persistent \nthroughout the life-cycle of a ticket. \nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"@type": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of that\nvariants, absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
"additionalProperties": {},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := anypb.New(foo)\n if err != nil {\n ...\n }\n ...\n foo := \u0026pb.Foo{}\n if err := any.UnmarshalTo(foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"rpcStatus": {
"type": "object",
@@ -259,11 +276,11 @@
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
"description": "The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client."
},
"details": {
"type": "array",
@@ -273,8 +290,7 @@
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
"description": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). Each `Status` message contains\nthree pieces of data: error code, error message, and error details.\n\nYou can find out more about this error model and how to work with it in the\n[API Design Guide](https://cloud.google.com/apis/design/errors)."
}
},
"externalDocs": {

View File

@@ -12,8 +13,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package golang contains the go files required to run the evaluator harness
// as a GRPC service. To use this harness, you should author the evaluation
// function and pass that in as the callback when setting up the evaluator
// harness service.
package golang
syntax = "proto3";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
// A DefaultEvaluationCriteria is used for a match's evaluation_input when using
// the default evaluator.
message DefaultEvaluationCriteria {
double score = 1;
}
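As a rough sketch of how a match function might use this message (the "evaluation_input" extensions key mirrors the comment above; the helper and the idea of scoring each proposal are illustrative assumptions, not code from this repository):

// Sketch: attach a DefaultEvaluationCriteria score to a match proposal so the
// default evaluator can compare overlapping proposals.
package evaluator

import (
	"google.golang.org/protobuf/types/known/anypb"

	pb "open-match.dev/open-match/pkg/pb"
)

func withDefaultEvaluationScore(match *pb.Match, score float64) (*pb.Match, error) {
	// Pack the criteria into an Any so it can travel in the match extensions.
	criteria, err := anypb.New(&pb.DefaultEvaluationCriteria{Score: score})
	if err != nil {
		return nil, err
	}
	if match.Extensions == nil {
		match.Extensions = map[string]*anypb.Any{}
	}
	// Key name taken from the comment above; the default evaluator reads it (assumption).
	match.Extensions["evaluation_input"] = criteria
	return match, nil
}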

View File

@@ -13,14 +13,16 @@
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
import "protoc-gen-openapiv2/options/annotations.proto";
import "google/protobuf/empty.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
option (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_swagger) = {
info: {
title: "Frontend"
version: "1.0"
@@ -51,80 +53,170 @@ option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/internal/proto/examplepb/a_bit_of_everything.proto
};
message CreateTicketRequest {
// Ticket object with the properties of the Ticket to be created.
Ticket ticket = 1;
}
message CreateTicketResponse {
// Ticket object for the created Ticket - with the ticket ID populated.
// A Ticket object with SearchFields defined.
Ticket ticket = 1;
}
message DeleteTicketRequest {
// Ticket ID of the Ticket to be deleted.
// A TicketId of a generated Ticket to be deleted.
string ticket_id = 1;
}
message DeleteTicketResponse {}
message GetTicketRequest {
// Ticket ID of the Ticket to fetch.
// A TicketId of a generated Ticket.
string ticket_id = 1;
}
message GetAssignmentsRequest {
// Ticket ID of the Ticket to get updates on.
message WatchAssignmentsRequest {
// A TicketId of a generated Ticket to get updates on.
string ticket_id = 1;
}
message GetAssignmentsResponse {
// The updated Ticket object.
message WatchAssignmentsResponse {
// An updated Assignment of the requested Ticket.
Assignment assignment = 1;
}
// The Frontend service enables creating Tickets for matchmaking and fetching
// the status of these Tickets.
service Frontend {
// CreateTicket will create a new ticket, assign a Ticket ID to it and put the
// Ticket in state storage. It will then look through the 'properties' field
// for the attributes defined as indices the matchmakaking config. If the
// attributes exist and are valid integers, they will be indexed. Creating a
// ticket adds the Ticket to the pool of Tickets considered for matchmaking.
rpc CreateTicket(CreateTicketRequest) returns (CreateTicketResponse) {
// BETA FEATURE WARNING: This Request message is not finalized and still subject
// to possible change or removal.
message AcknowledgeBackfillRequest {
// An existing ID of Backfill to acknowledge.
string backfill_id = 1;
// An updated Assignment of the requested Backfill.
Assignment assignment = 2;
}
// BETA FEATURE WARNING: This Request message is not finalized and still subject
// to possible change or removal.
message AcknowledgeBackfillResponse {
// The Backfill that was acknowledged.
Backfill backfill = 1;
// All of the Tickets that were successfully assigned
repeated Ticket tickets = 2;
}
// BETA FEATURE WARNING: This Request message is not finalized and still subject
// to possible change or removal.
message CreateBackfillRequest {
// An empty Backfill object.
Backfill backfill = 1;
}
// BETA FEATURE WARNING: This Request message is not finalized and still subject
// to possible change or removal.
message DeleteBackfillRequest {
// An existing ID of Backfill to delete.
string backfill_id = 1;
}
// BETA FEATURE WARNING: This Request message is not finalized and still subject
// to possible change or removal.
message GetBackfillRequest {
// An existing ID of Backfill to retrieve.
string backfill_id = 1;
}
// UpdateBackfillRequest - update searchFields, extensions and set assignment.
//
// BETA FEATURE WARNING: This Request message is not finalized and still subject
// to possible change or removal.
message UpdateBackfillRequest {
// A Backfill object with ID set and fields to update.
Backfill backfill = 1;
}
// The FrontendService implements APIs to manage and query the status of Tickets.
service FrontendService {
// CreateTicket assigns a unique TicketId to the input Ticket and records it in state storage.
// A ticket is considered ready for matchmaking once it is created.
// - If a TicketId exists in a Ticket request, an auto-generated TicketId will override this field.
// - If SearchFields exist in a Ticket, CreateTicket will also index these fields such that one can query the ticket with the query.QueryTickets function.
rpc CreateTicket(CreateTicketRequest) returns (Ticket) {
option (google.api.http) = {
post: "/v1/frontend/tickets"
post: "/v1/frontendservice/tickets"
body: "*"
};
}
// DeleteTicket removes the Ticket from state storage and from corresponding
// configured indices and lazily removes the ticket from state storage.
// Deleting a ticket immediately stops the ticket from being
// considered for future matchmaking requests, yet when the ticket itself will be deleted
// is undeterministic. Users may still be able to assign/get a ticket after calling DeleteTicket on it.
rpc DeleteTicket(DeleteTicketRequest) returns (DeleteTicketResponse) {
// DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.
// The client should delete the Ticket when finished matchmaking with it.
rpc DeleteTicket(DeleteTicketRequest) returns (google.protobuf.Empty) {
option (google.api.http) = {
delete: "/v1/frontend/tickets/{ticket_id}"
delete: "/v1/frontendservice/tickets/{ticket_id}"
};
}
// GetTicket fetches the ticket associated with the specified Ticket ID.
// GetTicket gets the Ticket associated with the specified TicketId.
rpc GetTicket(GetTicketRequest) returns (Ticket) {
option (google.api.http) = {
get: "/v1/frontend/tickets/{ticket_id}"
get: "/v1/frontendservice/tickets/{ticket_id}"
};
}
// GetAssignments streams matchmaking results from Open Match for the
// provided Ticket ID.
rpc GetAssignments(GetAssignmentsRequest)
returns (stream GetAssignmentsResponse) {
// WatchAssignments streams back the Assignment of the specified TicketId whenever it is updated.
// - If the Assignment is not updated, WatchAssignments will retry using the configured backoff strategy.
rpc WatchAssignments(WatchAssignmentsRequest)
returns (stream WatchAssignmentsResponse) {
option (google.api.http) = {
get: "/v1/frontend/tickets/{ticket_id}/assignments"
get: "/v1/frontendservice/tickets/{ticket_id}/assignments"
};
}
// AcknowledgeBackfill is used to notify OpenMatch about GameServer connection info.
// This triggers an assignment process.
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc AcknowledgeBackfill(AcknowledgeBackfillRequest) returns (AcknowledgeBackfillResponse) {
option (google.api.http) = {
post: "/v1/frontendservice/backfills/{backfill_id}/acknowledge"
body: "*"
};
}
// CreateBackfill creates a new Backfill object.
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc CreateBackfill(CreateBackfillRequest) returns (Backfill) {
option (google.api.http) = {
post: "/v1/frontendservice/backfills"
body: "*"
};
}
// DeleteBackfill receives a backfill ID and deletes its resource.
// Any tickets waiting for this backfill will be returned to the active pool, no longer pending.
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc DeleteBackfill(DeleteBackfillRequest) returns (google.protobuf.Empty) {
option (google.api.http) = {
delete: "/v1/frontendservice/backfills/{backfill_id}"
};
}
// GetBackfill returns a backfill object by its ID.
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc GetBackfill(GetBackfillRequest) returns (Backfill) {
option (google.api.http) = {
get: "/v1/frontendservice/backfills/{backfill_id}"
};
}
// UpdateBackfill updates search_fields and extensions for the backfill with the provided id.
// Any tickets waiting for this backfill will be returned to the active pool, no longer pending.
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc UpdateBackfill(UpdateBackfillRequest) returns (Backfill) {
option (google.api.http) = {
patch: "/v1/frontendservice/backfills"
body: "*"
};
}
}
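To make the service surface above concrete, here is a hedged Go sketch of a client creating a Ticket and then watching for its Assignment over gRPC. The frontend address, the insecure transport, and the example tag are assumptions; only the RPC and message names come from the proto above.

// Sketch of a game client talking to the FrontendService defined above.
package main

import (
	"context"
	"io"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "open-match.dev/open-match/pkg/pb"
)

func main() {
	// Address and transport security are deployment-specific assumptions.
	conn, err := grpc.Dial("open-match-frontend.open-match.svc.cluster.local:50504",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial frontend: %v", err)
	}
	defer conn.Close()
	fe := pb.NewFrontendServiceClient(conn)

	// Create a Ticket; Open Match assigns the TicketId.
	ticket, err := fe.CreateTicket(context.Background(), &pb.CreateTicketRequest{
		Ticket: &pb.Ticket{
			SearchFields: &pb.SearchFields{Tags: []string{"mode.demo"}},
		},
	})
	if err != nil {
		log.Fatalf("CreateTicket: %v", err)
	}

	// Stream Assignment updates until a connection arrives or the stream ends.
	stream, err := fe.WatchAssignments(context.Background(),
		&pb.WatchAssignmentsRequest{TicketId: ticket.Id})
	if err != nil {
		log.Fatalf("WatchAssignments: %v", err)
	}
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("recv: %v", err)
		}
		if conn := resp.GetAssignment().GetConnection(); conn != "" {
			log.Printf("assigned to %s", conn)
			break
		}
	}
}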

View File

@@ -13,6 +13,11 @@
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"tags": [
{
"name": "FrontendService"
}
],
"schemes": [
"http",
"https"
@@ -24,22 +29,240 @@
"application/json"
],
"paths": {
"/v1/frontend/tickets": {
"/v1/frontendservice/backfills": {
"post": {
"summary": "CreateTicket will create a new ticket, assign a Ticket ID to it and put the\nTicket in state storage. It will then look through the 'properties' field\nfor the attributes defined as indices the matchmakaking config. If the\nattributes exist and are valid integers, they will be indexed. Creating a\nticket adds the Ticket to the pool of Tickets considered for matchmaking.",
"operationId": "CreateTicket",
"summary": "CreateBackfill creates a new Backfill object.\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "FrontendService_CreateBackfill",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiCreateTicketResponse"
"$ref": "#/definitions/openmatchBackfill"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"description": "BETA FEATURE WARNING: This Request message is not finalized and still subject\nto possible change or removal.",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchCreateBackfillRequest"
}
}
],
"tags": [
"FrontendService"
]
},
"patch": {
"summary": "UpdateBackfill updates search_fields and extensions for the backfill with the provided id.\nAny tickets waiting for this backfill will be returned to the active pool, no longer pending.\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "FrontendService_UpdateBackfill",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchBackfill"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"description": "UpdateBackfillRequest - update searchFields, extensions and set assignment.\n\nBETA FEATURE WARNING: This Request message is not finalized and still subject\nto possible change or removal.",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchUpdateBackfillRequest"
}
}
],
"tags": [
"FrontendService"
]
}
},
"/v1/frontendservice/backfills/{backfill_id}": {
"get": {
"summary": "GetBackfill returns a backfill object by its ID.\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "FrontendService_GetBackfill",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchBackfill"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "backfill_id",
"description": "An existing ID of Backfill to retrieve.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"FrontendService"
]
},
"delete": {
"summary": "DeleteBackfill receives a backfill ID and deletes its resource.\nAny tickets waiting for this backfill will be returned to the active pool, no longer pending.\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "FrontendService_DeleteBackfill",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"type": "object",
"properties": {}
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "backfill_id",
"description": "An existing ID of Backfill to delete.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"FrontendService"
]
}
},
"/v1/frontendservice/backfills/{backfill_id}/acknowledge": {
"post": {
"summary": "AcknowledgeBackfill is used to notify OpenMatch about GameServer connection info\nThis triggers an assignment process.\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "FrontendService_AcknowledgeBackfill",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchAcknowledgeBackfillResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "backfill_id",
"description": "An existing ID of Backfill to acknowledge.",
"in": "path",
"required": true,
"type": "string"
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"type": "object",
"properties": {
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An updated Assignment of the requested Backfill."
}
},
"description": "BETA FEATURE WARNING: This Request message is not finalized and still subject\nto possible change or removal."
}
}
],
"tags": [
"FrontendService"
]
}
},
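A hedged sketch of the acknowledge step this endpoint describes: once a game server has been allocated for a Backfill, the caller reports the connection so Open Match can assign the waiting tickets. The helper name and wiring are illustrative; the request and response fields come from the definitions above.

// Sketch: notify Open Match that a GameServer is ready for a Backfill.
package director

import (
	"context"

	pb "open-match.dev/open-match/pkg/pb"
)

func acknowledge(ctx context.Context, fe pb.FrontendServiceClient, backfillID, connection string) ([]*pb.Ticket, error) {
	resp, err := fe.AcknowledgeBackfill(ctx, &pb.AcknowledgeBackfillRequest{
		BackfillId: backfillID,
		Assignment: &pb.Assignment{Connection: connection},
	})
	if err != nil {
		return nil, err
	}
	// Tickets returned here were successfully assigned to the acknowledged Backfill.
	return resp.Tickets, nil
}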
"/v1/frontendservice/tickets": {
"post": {
"summary": "CreateTicket assigns an unique TicketId to the input Ticket and record it in state storage.\nA ticket is considered as ready for matchmaking once it is created.\n - If a TicketId exists in a Ticket request, an auto-generated TicketId will override this field.\n - If SearchFields exist in a Ticket, CreateTicket will also index these fields such that one can query the ticket with query.QueryTickets function.",
"operationId": "FrontendService_CreateTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/openmatchTicket"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
@@ -48,257 +271,327 @@
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiCreateTicketRequest"
"$ref": "#/definitions/openmatchCreateTicketRequest"
}
}
],
"tags": [
"Frontend"
"FrontendService"
]
}
},
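For completeness, the same CreateTicket call can also be issued through this HTTP/JSON path via the grpc-gateway. A minimal Go sketch, assuming a reachable gateway host and port (both assumptions):

// Sketch: CreateTicket through the HTTP/JSON gateway path above.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// The request body is an openmatchCreateTicketRequest; the tag is an example value.
	body := []byte(`{"ticket": {"search_fields": {"tags": ["mode.demo"]}}}`)
	resp, err := http.Post(
		"http://open-match-frontend.open-match.svc.cluster.local:51504/v1/frontendservice/tickets",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		log.Fatalf("POST tickets: %v", err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	// On success the response body is an openmatchTicket with "id" populated.
	fmt.Println(resp.Status, string(out))
}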
"/v1/frontend/tickets/{ticket_id}": {
"/v1/frontendservice/tickets/{ticket_id}": {
"get": {
"summary": "GetTicket fetches the ticket associated with the specified Ticket ID.",
"operationId": "GetTicket",
"summary": "GetTicket get the Ticket associated with the specified TicketId.",
"operationId": "FrontendService_GetTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiTicket"
"$ref": "#/definitions/openmatchTicket"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "ticket_id",
"description": "Ticket ID of the Ticket to fetch.",
"description": "A TicketId of a generated Ticket.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"Frontend"
"FrontendService"
]
},
"delete": {
"summary": "DeleteTicket removes the Ticket from state storage and from corresponding\nconfigured indices and lazily removes the ticket from state storage.\nDeleting a ticket immediately stops the ticket from being\nconsidered for future matchmaking requests, yet when the ticket itself will be deleted\nis undeterministic. Users may still be able to assign/get a ticket after calling DeleteTicket on it.",
"operationId": "DeleteTicket",
"summary": "DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.\nThe client should delete the Ticket when finished matchmaking with it.",
"operationId": "FrontendService_DeleteTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiDeleteTicketResponse"
"type": "object",
"properties": {}
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "ticket_id",
"description": "Ticket ID of the Ticket to be deleted.",
"description": "A TicketId of a generated Ticket to be deleted.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"Frontend"
"FrontendService"
]
}
},
"/v1/frontend/tickets/{ticket_id}/assignments": {
"/v1/frontendservice/tickets/{ticket_id}/assignments": {
"get": {
"summary": "GetAssignments streams matchmaking results from Open Match for the\nprovided Ticket ID.",
"operationId": "GetAssignments",
"summary": "WatchAssignments stream back Assignment of the specified TicketId if it is updated.\n - If the Assignment is not updated, GetAssignment will retry using the configured backoff strategy.",
"operationId": "FrontendService_WatchAssignments",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/apiGetAssignmentsResponse"
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchWatchAssignmentsResponse"
},
"error": {
"$ref": "#/definitions/rpcStatus"
}
},
"title": "Stream result of openmatchWatchAssignmentsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "ticket_id",
"description": "Ticket ID of the Ticket to get updates on.",
"description": "A TicketId of a generated Ticket to get updates on.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"Frontend"
"FrontendService"
]
}
}
},
"definitions": {
"apiAssignment": {
"openmatchAcknowledgeBackfillResponse": {
"type": "object",
"properties": {
"backfill": {
"$ref": "#/definitions/openmatchBackfill",
"description": "The Backfill that was acknowledged."
},
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTicket"
},
"title": "All of the Tickets that were successfully assigned"
}
},
"description": "BETA FEATURE WARNING: This Request message is not finalized and still subject\nto possible change or removal."
},
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"apiCreateTicketRequest": {
"type": "object",
"properties": {
"ticket": {
"$ref": "#/definitions/apiTicket",
"description": "Ticket object with the properties of the Ticket to be created."
}
}
},
"apiCreateTicketResponse": {
"type": "object",
"properties": {
"ticket": {
"$ref": "#/definitions/apiTicket",
"description": "Ticket object for the created Ticket - with the ticket ID populated."
}
}
},
"apiDeleteTicketResponse": {
"type": "object"
},
"apiGetAssignmentsResponse": {
"type": "object",
"properties": {
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "The updated Ticket object."
}
}
},
"apiTicket": {
"openmatchBackfill": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
"description": "Id represents an auto-generated Id issued by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by\nthe Match Function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"persistent_field": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be kept persistent \nthroughout the life-cycle of a backfill. \nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
},
"generation": {
"type": "string",
"format": "int64",
"description": "Generation gets incremented on GameServers update operations.\nPrevents the MMF from overriding a newer version from the game server.\nDo NOT read or write to this field, it is for internal tracking, and changing the value will cause bugs."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "Represents a backfill entity which is used to fill partially full matches.\n\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal."
},
"openmatchCreateBackfillRequest": {
"type": "object",
"properties": {
"backfill": {
"$ref": "#/definitions/openmatchBackfill",
"description": "An empty Backfill object."
}
},
"description": "BETA FEATURE WARNING: This Request message is not finalized and still subject\nto possible change or removal."
},
"openmatchCreateTicketRequest": {
"type": "object",
"properties": {
"ticket": {
"$ref": "#/definitions/openmatchTicket",
"description": "A Ticket object with SearchFields defined."
}
}
},
"openmatchSearchFields": {
"type": "object",
"properties": {
"double_args": {
"type": "object",
"additionalProperties": {
"type": "number",
"format": "double"
},
"description": "Float arguments. Filterable on ranges."
},
"string_args": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "String arguments. Filterable on equality."
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Filterable on presence or absence of given value."
}
},
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"openmatchTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Id represents an auto-generated Id issued by Open Match."
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"persistent_field": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be kept persistent \nthroughout the life-cycle of a ticket. \nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
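Tying the openmatchTicket and openmatchSearchFields schemas together, here is a hedged Go sketch of a Ticket that uses all three SearchFields categories plus one extension. The field keys ("mmr", "region") and the tag are made-up examples, not fields Open Match itself defines.

// Sketch: building a Ticket that exercises every SearchFields category in the schema above.
package tickets

import (
	"google.golang.org/protobuf/types/known/anypb"
	"google.golang.org/protobuf/types/known/wrapperspb"

	pb "open-match.dev/open-match/pkg/pb"
)

func exampleTicket() (*pb.Ticket, error) {
	// Extension payloads are opaque to Open Match; a string wrapper is used here for illustration.
	partyID, err := anypb.New(wrapperspb.String("party-1234"))
	if err != nil {
		return nil, err
	}
	return &pb.Ticket{
		SearchFields: &pb.SearchFields{
			DoubleArgs: map[string]float64{"mmr": 1430},   // filterable on ranges
			StringArgs: map[string]string{"region": "eu"}, // filterable on equality
			Tags:       []string{"mode.ranked"},           // filterable on presence/absence
		},
		Extensions: map[string]*anypb.Any{"party": partyID}, // not inspected by Open Match
	}, nil
}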
"openmatchUpdateBackfillRequest": {
"type": "object",
"properties": {
"backfill": {
"$ref": "#/definitions/openmatchBackfill",
"description": "A Backfill object with ID set and fields to update."
}
},
"description": "UpdateBackfillRequest - update searchFields, extensions and set assignment.\n\nBETA FEATURE WARNING: This Request message is not finalized and still subject\nto possible change or removal."
},
"openmatchWatchAssignmentsResponse": {
"type": "object",
"properties": {
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An updated Assignment of the requested Ticket."
}
}
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"@type": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of that\nvariants, absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
"additionalProperties": {},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := anypb.New(foo)\n if err != nil {\n ...\n }\n ...\n foo := \u0026pb.Foo{}\n if err := any.UnmarshalTo(foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"rpcStatus": {
"type": "object",
@@ -306,11 +599,11 @@
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
"description": "The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client."
},
"details": {
"type": "array",
@@ -320,47 +613,7 @@
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"apiGetAssignmentsResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/apiGetAssignmentsResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of apiGetAssignmentsResponse"
"description": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). Each `Status` message contains\nthree pieces of data: error code, error message, and error details.\n\nYou can find out more about this error model and how to work with it in the\n[API Design Guide](https://cloud.google.com/apis/design/errors)."
}
},
"externalDocs": {

View File

@@ -13,14 +13,15 @@
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
import "protoc-gen-openapiv2/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
option (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_swagger) = {
info: {
title: "Match Function"
version: "1.0"
@@ -55,23 +56,23 @@ option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
};
message RunRequest {
// The MatchProfile that describes the Match that this MatchFunction needs to
// generate proposals for.
// A MatchProfile defines constraints of Tickets in a Match and shapes the Match proposed by the MatchFunction.
MatchProfile profile = 1;
}
message RunResponse {
// The proposal generated by this MatchFunction Run.
// Note that OpenMatch will validate the proposals, a valid match should contain at least one ticket.
repeated Match proposals = 1;
// A Proposal represents a Match candidate that satisfies the constraints defined in the input Profile.
// A valid Proposal response will contain at least one ticket.
Match proposal = 1;
}
// This proto defines the API for running Match Functions as long-lived,
// 'serving' functions.
// The MatchFunction service implements APIs to run user-defined matchmaking logics.
service MatchFunction {
// This is the function that is executed when by the Open Match backend to
// generate Match proposals.
rpc Run(RunRequest) returns (RunResponse) {
// DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.
// Run pulls Tickets that satisfy Profile constraints from QueryService,
// runs matchmaking logic against them, then constructs and streams back
// match candidates to the Backend service.
rpc Run(RunRequest) returns (stream RunResponse) {
option (google.api.http) = {
post: "/v1/matchfunction:run"
body: "*"

View File

@ -13,6 +13,11 @@
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"tags": [
{
"name": "MatchFunction"
}
],
"schemes": [
"http",
"https"
@ -26,20 +31,36 @@
"paths": {
"/v1/matchfunction:run": {
"post": {
"summary": "This is the function that is executed when by the Open Match backend to\ngenerate Match proposals.",
"operationId": "Run",
"summary": "DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.\nRun pulls Tickets that satisfy Profile constraints from QueryService,\nruns matchmaking logic against them, then constructs and streams back\nmatch candidates to the Backend service.",
"operationId": "MatchFunction_Run",
"responses": {
"200": {
"description": "A successful response.",
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/definitions/apiRunResponse"
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchRunResponse"
},
"error": {
"$ref": "#/definitions/rpcStatus"
}
},
"title": "Stream result of openmatchRunResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
@ -48,7 +69,7 @@
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiRunRequest"
"$ref": "#/definitions/openmatchRunRequest"
}
}
],
@ -59,45 +80,97 @@
}
},
"definitions": {
"apiAssignment": {
"DoubleRangeFilterExclude": {
"type": "string",
"enum": [
"NONE",
"MIN",
"MAX",
"BOTH"
],
"default": "NONE",
"title": "- NONE: No bounds should be excluded when evaluating the filter, i.e.: MIN \u003c= x \u003c= MAX\n - MIN: Only the minimum bound should be excluded when evaluating the filter, i.e.: MIN \u003c x \u003c= MAX\n - MAX: Only the maximum bound should be excluded when evaluating the filter, i.e.: MIN \u003c= x \u003c MAX\n - BOTH: Both bounds should be excluded when evaluating the filter, i.e.: MIN \u003c x \u003c MAX"
},
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"apiFilter": {
"openmatchBackfill": {
"type": "object",
"properties": {
"attribute": {
"id": {
"type": "string",
"description": "Name of the ticket attribute this Filter operates on."
"description": "Id represents an auto-generated Id issued by Open Match."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by\nthe Match Function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"persistent_field": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be kept persistent \nthroughout the life-cycle of a backfill. \nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
},
"generation": {
"type": "string",
"format": "int64",
"description": "Generation gets incremented on GameServers update operations.\nPrevents the MMF from overriding a newer version from the game server.\nDo NOT read or write to this field, it is for internal tracking, and changing the value will cause bugs."
}
},
"description": "Represents a backfill entity which is used to fill partially full matches.\n\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal."
},
"openmatchDoubleRangeFilter": {
"type": "object",
"properties": {
"double_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.double_args this Filter operates on."
},
"max": {
"type": "number",
"format": "double",
"description": "Maximum value. Defaults to positive infinity (any value above minv)."
"description": "Maximum value."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value. Defaults to 0."
"description": "Minimum value."
},
"exclude": {
"$ref": "#/definitions/DoubleRangeFilterExclude",
"description": "Defines the bounds to apply when filtering tickets by their search_fields.double_args value.\nBETA FEATURE WARNING: This field and the associated values are\nnot finalized and still subject to possible change or removal."
}
},
"description": "A hard filter used to query a subset of Tickets meeting the filtering\ncriteria."
"title": "Filters numerical values to only those within a range.\n double_arg: \"foo\"\n max: 10\n min: 5\nmatches:\n {\"foo\": 5}\n {\"foo\": 7.5}\n {\"foo\": 10}\ndoes not match:\n {\"foo\": 4}\n {\"foo\": 10.01}\n {\"foo\": \"7.5\"}\n {}"
},
"apiMatch": {
"openmatchMatch": {
"type": "object",
"properties": {
"match_id": {
@ -115,204 +188,206 @@
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/apiTicket"
"$ref": "#/definitions/openmatchTicket"
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"title": "Set of Rosters that comprise this Match"
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Match properties for this Match. Open Match does not interpret this field."
"backfill": {
"$ref": "#/definitions/openmatchBackfill",
"description": "Backfill request which contains additional information to the match\nand contains an association to a GameServer.\nBETA FEATURE WARNING: This field is not finalized and still subject\nto possible change or removal."
},
"allocate_gameserver": {
"type": "boolean",
"description": "AllocateGameServer signalise Director that Backfill is new and it should \nallocate a GameServer, this Backfill would be assigned.\nBETA FEATURE WARNING: This field is not finalized and still subject\nto possible change or removal."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least \none ticket to be considered as valid."
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least\none ticket to be considered as valid."
},
"apiMatchProfile": {
"openmatchMatchProfile": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of this match profile."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Set of properties associated with this MatchProfile. (Optional)\nOpen Match does not interpret these properties but passes them through to\nthe MatchFunction."
},
"pools": {
"type": "array",
"items": {
"$ref": "#/definitions/apiPool"
"$ref": "#/definitions/openmatchPool"
},
"description": "Set of pools to be queried when generating a match for this MatchProfile.\nThe pool names can be used in empty Rosters to specify composition of a\nmatch."
"description": "Set of pools to be queried when generating a match for this MatchProfile."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Set of Rosters for this match request. Could be empty Rosters used to\nindicate the composition of the generated Match or they could be partially\npre-populated Ticket list to be used in scenarios such as backfill / join\nin progress."
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "A MatchProfile is Open Match's representation of a Match specification. It is\nused to indicate the criteria for selecting players for a match. A\nMatchProfile is the input to the API to get matches and is passed to the\nMatchFunction. It contains all the information required by the MatchFunction\nto generate match proposals."
},
"apiPool": {
"openmatchPool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Pool."
},
"filters": {
"double_range_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiFilter"
"$ref": "#/definitions/openmatchDoubleRangeFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected players must\nmatch every Filter."
"description": "Set of Filters indicating the filtering criteria. Selected tickets must\nmatch every Filter."
},
"string_equals_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchStringEqualsFilter"
}
},
"tag_present_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTagPresentFilter"
}
},
"created_before": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created before the specified time are selected."
},
"created_after": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created after the specified time are selected."
}
},
"description": "Pool specfies a set of criteria that are used to select a subset of Tickets\nthat meet all the criteria."
},
"openmatchRunRequest": {
"type": "object",
"properties": {
"profile": {
"$ref": "#/definitions/openmatchMatchProfile",
"description": "A MatchProfile defines constraints of Tickets in a Match and shapes the Match proposed by the MatchFunction."
}
}
},
"apiRoster": {
"openmatchRunResponse": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
"proposal": {
"$ref": "#/definitions/openmatchMatch",
"description": "A Proposal represents a Match candidate that satifies the constraints defined in the input Profile.\nA valid Proposal response will contain at least one ticket."
}
}
},
"openmatchSearchFields": {
"type": "object",
"properties": {
"double_args": {
"type": "object",
"additionalProperties": {
"type": "number",
"format": "double"
},
"description": "Float arguments. Filterable on ranges."
},
"ticket_ids": {
"string_args": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "String arguments. Filterable on equality."
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
"description": "Filterable on presence or absence of given value."
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that a Tickets\nassociated with a Match can be labelled to belong to a team, sub-team etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress etc."
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"apiRunRequest": {
"openmatchStringEqualsFilter": {
"type": "object",
"properties": {
"profile": {
"$ref": "#/definitions/apiMatchProfile",
"description": "The MatchProfile that describes the Match that this MatchFunction needs to\ngenerate proposals for."
"string_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.string_args this Filter operates on."
},
"value": {
"type": "string"
}
}
},
"title": "Filters strings exactly equaling a value.\n string_arg: \"foo\"\n value: \"bar\"\nmatches:\n {\"foo\": \"bar\"}\ndoes not match:\n {\"foo\": \"baz\"}\n {\"bar\": \"foo\"}\n {}"
},
"apiRunResponse": {
"openmatchTagPresentFilter": {
"type": "object",
"properties": {
"proposals": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "The proposal generated by this MatchFunction Run.\nNote that OpenMatch will validate the proposals, a valid match should contain at least one ticket."
"tag": {
"type": "string"
}
}
},
"title": "Filters to the tag being present on the search_fields.\n tag: \"foo\"\nmatches:\n [\"foo\"]\n [\"bar\",\"foo\"]\ndoes not match:\n [\"bar\"]\n []"
},
"apiTicket": {
"openmatchTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
"description": "Id represents an auto-generated Id issued by Open Match."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"persistent_field": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be kept persistent \nthroughout the life-cycle of a ticket. \nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"@type": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of that\nvariants, absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
"additionalProperties": {},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := anypb.New(foo)\n if err != nil {\n ...\n }\n ...\n foo := \u0026pb.Foo{}\n if err := any.UnmarshalTo(foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"rpcStatus": {
"type": "object",
@ -320,11 +395,11 @@
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
"description": "The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client."
},
"details": {
"type": "array",
@ -334,8 +409,7 @@
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
"description": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). Each `Status` message contains\nthree pieces of data: error code, error message, and error details.\n\nYou can find out more about this error model and how to work with it in the\n[API Design Guide](https://cloud.google.com/apis/design/errors)."
}
},
"externalDocs": {

View File

@ -13,77 +13,172 @@
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "google/rpc/status.proto";
import "google/protobuf/struct.proto";
import "google/protobuf/any.proto";
import "google/protobuf/timestamp.proto";
// A Ticket is a basic matchmaking entity in Open Match. In order to enter
// matchmaking using Open Match, the client should generate a Ticket, passing in
// the properties to be associated with this Ticket. Open Match will generate an
// ID for a Ticket during creation. A Ticket could be used to represent an
// individual 'Player' or a 'Group' of players. Open Match will not interpret
// what the Ticket represents but just treat it as a matchmaking unit with a set
// of properties. Open Match stores the Ticket in state storage and enables an
// Assignment to be associated with this Ticket.
// A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent
// an individual 'Player', a 'Group' of players, or any other concepts unique to
// your use case. Open Match will not interpret what the Ticket represents but
// just treat it as a matchmaking unit with a set of SearchFields. Open Match
// stores the Ticket in state storage and enables an Assignment to be set on the
// Ticket.
message Ticket {
// The Ticket ID generated by Open Match.
// Id represents an auto-generated Id issued by Open Match.
string id = 1;
// Properties contains custom info about the ticket. Top level values can be
// used in indexing and filtering to find tickets.
google.protobuf.Struct properties = 2;
// Assignment associated with the Ticket.
// An Assignment represents a game server assignment associated with a Ticket,
// or whatever finalized matched state means for your use case.
// Open Match does not require or inspect any fields on Assignment.
Assignment assignment = 3;
// Search fields are the fields which Open Match is aware of, and can be used
// when specifying filters.
SearchFields search_fields = 4;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 5;
// Customized information not inspected by Open Match, to be kept persistent
// throughout the life-cycle of a ticket.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> persistent_field = 6;
// Create time is the time the Ticket was created. It is populated by Open
// Match at the time of Ticket creation.
google.protobuf.Timestamp create_time = 7;
// Deprecated fields.
reserved 2;
}
// An Assignment object represents the assignment associated with a Ticket. Open
// match does not require or inspect any fields on assignment.
// Search fields are the fields which Open Match is aware of, and can be used
// when specifying filters.
message SearchFields {
// Float arguments. Filterable on ranges.
map<string, double> double_args = 1;
// String arguments. Filterable on equality.
map<string, string> string_args = 2;
// Filterable on presence or absence of given value.
repeated string tags = 3;
}
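As a quick illustration of how these indexed fields are populated from the client side, here is a minimal Go sketch (field and package names assume the generated open-match.dev/open-match/pkg/pb package; the mmr/region/mode values are made up):

    package main

    import (
        "fmt"

        "open-match.dev/open-match/pkg/pb"
    )

    func main() {
        // Only SearchFields are indexed and filterable by Open Match; game-specific data
        // travels opaquely in extensions / persistent_field, and Id is assigned by
        // Open Match at Ticket creation time.
        ticket := &pb.Ticket{
            SearchFields: &pb.SearchFields{
                DoubleArgs: map[string]float64{"mmr": 1230.5},      // filterable via DoubleRangeFilter
                StringArgs: map[string]string{"region": "eu-west"}, // filterable via StringEqualsFilter
                Tags:       []string{"mode.ctf"},                   // filterable via TagPresentFilter
            },
        }
        fmt.Println(ticket)
    }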
// An Assignment represents a game server assignment associated with a Ticket.
// Open Match does not require or inspect any fields on assignment.
message Assignment {
// Connection information for this Assignment.
string connection = 1;
// Other details to be sent to the players.
google.protobuf.Struct properties = 2;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 4;
// Error when finding an Assignment for this Ticket.
google.rpc.Status error = 3;
// Deprecated fields.
reserved 2, 3;
}
// A hard filter used to query a subset of Tickets meeting the filtering
// criteria.
message Filter {
// Name of the ticket attribute this Filter operates on.
string attribute = 1;
// Filters numerical values to only those within a range.
// double_arg: "foo"
// max: 10
// min: 5
// matches:
// {"foo": 5}
// {"foo": 7.5}
// {"foo": 10}
// does not match:
// {"foo": 4}
// {"foo": 10.01}
// {"foo": "7.5"}
// {}
message DoubleRangeFilter {
// Name of the ticket's search_fields.double_args this Filter operates on.
string double_arg = 1;
// Maximum value. Defaults to positive infinity (any value above minv).
// Maximum value.
double max = 2;
// Minimum value. Defaults to 0.
// Minimum value.
double min = 3;
enum Exclude {
// No bounds should be excluded when evaluating the filter, i.e.: MIN <= x <= MAX
NONE = 0;
// Only the minimum bound should be excluded when evaluating the filter, i.e.: MIN < x <= MAX
MIN = 1;
// Only the maximum bound should be excluded when evaluating the filter, i.e.: MIN <= x < MAX
MAX = 2;
// Both bounds should be excluded when evaluating the filter, i.e.: MIN < x < MAX
BOTH = 3;
}
// Defines the bounds to apply when filtering tickets by their search_fields.double_args value.
// BETA FEATURE WARNING: This field and the associated values are
// not finalized and still subject to possible change or removal.
Exclude exclude = 4;
}
// Filters strings exactly equaling a value.
// string_arg: "foo"
// value: "bar"
// matches:
// {"foo": "bar"}
// does not match:
// {"foo": "baz"}
// {"bar": "foo"}
// {}
message StringEqualsFilter {
// Name of the ticket's search_fields.string_args this Filter operates on.
string string_arg = 1;
string value = 2;
}
// Filters to the tag being present on the search_fields.
// tag: "foo"
// matches:
// ["foo"]
// ["bar","foo"]
// does not match:
// ["bar"]
// []
message TagPresentFilter {
string tag = 1;
}
// Pool specifies a set of criteria that are used to select a subset of Tickets
// that meet all the criteria.
message Pool {
// A developer-chosen human-readable name for this Pool.
string name = 1;
// Set of Filters indicating the filtering criteria. Selected players must
// Set of Filters indicating the filtering criteria. Selected tickets must
// match every Filter.
repeated Filter filters = 2;
}
repeated DoubleRangeFilter double_range_filters = 2;
// A Roster is a named collection of Ticket IDs. It exists so that Tickets
// associated with a Match can be labelled to belong to a team, sub-team etc. It
// can also be used to represent the current state of a Match in scenarios such
// as backfill, join-in-progress etc.
message Roster {
// A developer-chosen human-readable name for this Roster.
string name = 1;
repeated StringEqualsFilter string_equals_filters = 4;
// Tickets belonging to this Roster.
repeated string ticket_ids = 2;
repeated TagPresentFilter tag_present_filters = 5;
// If specified, only Tickets created before the specified time are selected.
google.protobuf.Timestamp created_before = 6;
// If specified, only Tickets created after the specified time are selected.
google.protobuf.Timestamp created_after = 7;
// Deprecated fields.
reserved 3;
}
// A MatchProfile is Open Match's representation of a Match specification. It is
@ -95,27 +190,22 @@ message MatchProfile {
// Name of this match profile.
string name = 1;
// Set of properties associated with this MatchProfile. (Optional)
// Open Match does not interpret these properties but passes them through to
// the MatchFunction.
google.protobuf.Struct properties = 2;
// Set of pools to be queried when generating a match for this MatchProfile.
// The pool names can be used in empty Rosters to specify composition of a
// match.
repeated Pool pools = 3;
// Set of Rosters for this match request. Could be empty Rosters used to
// indicate the composition of the generated Match or they could be partially
// pre-populated Ticket list to be used in scenarios such as backfill / join
// in progress.
repeated Roster rosters = 4;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 5;
// Deprecated fields.
reserved 2, 4;
}
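A minimal Go sketch of composing the Pool and MatchProfile shapes above, including the new Exclude bound control on DoubleRangeFilter (generated names are assumed from the pkg/pb go_package; the names and values are illustrative only):

    package main

    import (
        "time"

        "google.golang.org/protobuf/types/known/timestamppb"

        "open-match.dev/open-match/pkg/pb"
    )

    func main() {
        // A Pool combines all three filter kinds; a Ticket must satisfy every Filter to be selected.
        pool := &pb.Pool{
            Name: "ranked-eu",
            DoubleRangeFilters: []*pb.DoubleRangeFilter{{
                DoubleArg: "mmr",
                Min:       1000,
                Max:       2000,
                // Exclude the maximum bound, i.e. MIN <= x < MAX (beta field, may change).
                Exclude: pb.DoubleRangeFilter_MAX,
            }},
            StringEqualsFilters: []*pb.StringEqualsFilter{{StringArg: "region", Value: "eu-west"}},
            TagPresentFilters:   []*pb.TagPresentFilter{{Tag: "mode.ctf"}},
            // Optional bound on Ticket create_time.
            CreatedAfter: timestamppb.New(time.Now().Add(-10 * time.Minute)),
        }

        // The MatchProfile is what is handed to backend.FetchMatches and what arrives
        // at the MatchFunction in RunRequest.profile.
        profile := &pb.MatchProfile{
            Name:  "ranked-eu-profile",
            Pools: []*pb.Pool{pool},
        }
        _ = profile
    }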
// A Match is used to represent a completed match object. It can be generated by
// a MatchFunction as a proposal or can be returned by OpenMatch as a result in
// response to the FetchMatches call.
// When a match is returned by the FetchMatches call, it should contain at least
// When a match is returned by the FetchMatches call, it should contain at least
// one ticket to be considered as valid.
message Match {
// A Match ID that should be passed through the stack for tracing.
@ -130,9 +220,55 @@ message Match {
// Tickets belonging to this match.
repeated Ticket tickets = 4;
// Set of Rosters that comprise this Match
repeated Roster rosters = 5;
// Customized information not inspected by Open Match, to be used by the match
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 7;
// Match properties for this Match. Open Match does not interpret this field.
google.protobuf.Struct properties = 6;
// Backfill request which contains additional information to the match
// and contains an association to a GameServer.
// BETA FEATURE WARNING: This field is not finalized and still subject
// to possible change or removal.
Backfill backfill = 8;
// AllocateGameServer signals the Director that the Backfill is new and a
// GameServer should be allocated; this Backfill will be assigned to it.
// BETA FEATURE WARNING: This field is not finalized and still subject
// to possible change or removal.
bool allocate_gameserver = 9;
// Deprecated fields.
reserved 5, 6;
}
// Represents a backfill entity which is used to fill partially full matches.
//
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
message Backfill {
// Id represents an auto-generated Id issued by Open Match.
string id = 1;
// Search fields are the fields which Open Match is aware of, and can be used
// when specifying filters.
SearchFields search_fields = 2;
// Customized information not inspected by Open Match, to be used by
// the Match Function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> extensions = 3;
// Customized information not inspected by Open Match, to be kept persistent
// throughout the life-cycle of a backfill.
// Optional, depending on the requirements of the connected systems.
map<string, google.protobuf.Any> persistent_field = 4;
// Create time is the time the Backfill was created. It is populated by Open
// Match at the time of Backfill creation.
google.protobuf.Timestamp create_time = 5;
// Generation gets incremented on GameServer update operations.
// Prevents the MMF from overriding a newer version from the game server.
// Do NOT read or write to this field; it is for internal tracking, and changing the value will cause bugs.
int64 generation = 6;
}
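A short, hedged Go sketch of how a MatchFunction might attach a new Backfill to an under-filled proposal so the Director allocates a GameServer for it (beta surface; field names assume the generated pkg/pb package and may change along with the feature; the open_slots convention is a made-up example):

    package mmf

    import (
        "open-match.dev/open-match/pkg/pb"
    )

    // proposeWithBackfill builds a proposal for an under-filled match and attaches a new
    // Backfill so the Director can allocate a GameServer for it.
    func proposeWithBackfill(matchID string, tickets []*pb.Ticket, openSlots int) *pb.Match {
        backfill := &pb.Backfill{
            // Id, create_time, and generation are managed by Open Match; the MMF leaves them unset.
            SearchFields: &pb.SearchFields{
                DoubleArgs: map[string]float64{"open_slots": float64(openSlots)},
            },
        }
        return &pb.Match{
            MatchId:  matchID,
            Tickets:  tickets,
            Backfill: backfill,
            // Signal the Director that this Backfill is new and a GameServer should be allocated.
            AllocateGameserver: true,
        }
    }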

View File

@ -1,78 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "MM Logic (Data Layer)"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message QueryTicketsRequest {
// The Pool representing the set of Filters to be queried.
Pool pool = 1;
}
message QueryTicketsResponse {
// The Tickets that meet the Filter criteria requested by the Pool.
repeated Ticket tickets = 1;
}
// The MMLogic API provides utility functions for common MMF functionality such
// as retrieving Tickets from state storage.
service MmLogic {
// QueryTickets gets the list of Tickets that match every Filter in the
// specified Pool.
rpc QueryTickets(QueryTicketsRequest) returns (stream QueryTicketsResponse) {
option (google.api.http) = {
post: "/v1/mmlogic/tickets:query"
body: "*"
};
}
}

View File

@ -1,303 +0,0 @@
{
"swagger": "2.0",
"info": {
"title": "MM Logic (Data Layer)",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/mmlogic/tickets:query": {
"post": {
"summary": "QueryTickets gets the list of Tickets that match every Filter in the\nspecified Pool.",
"operationId": "QueryTickets",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/apiQueryTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiQueryTicketsRequest"
}
}
],
"tags": [
"MmLogic"
]
}
}
},
"definitions": {
"apiAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
},
"apiFilter": {
"type": "object",
"properties": {
"attribute": {
"type": "string",
"description": "Name of the ticket attribute this Filter operates on."
},
"max": {
"type": "number",
"format": "double",
"description": "Maximum value. Defaults to positive infinity (any value above minv)."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value. Defaults to 0."
}
},
"description": "A hard filter used to query a subset of Tickets meeting the filtering\ncriteria."
},
"apiPool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Pool."
},
"filters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected players must\nmatch every Filter."
}
}
},
"apiQueryTicketsRequest": {
"type": "object",
"properties": {
"pool": {
"$ref": "#/definitions/apiPool",
"description": "The Pool representing the set of Filters to be queried."
}
}
},
"apiQueryTicketsResponse": {
"type": "object",
"properties": {
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/apiTicket"
},
"description": "The Tickets that meet the Filter criteria requested by the Pool."
}
}
},
"apiTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of that\nvariants, absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"apiQueryTicketsResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/apiQueryTicketsResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of apiQueryTicketsResponse"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}

125
api/query.proto Normal file
View File

@ -0,0 +1,125 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package openmatch;
option go_package = "open-match.dev/open-match/pkg/pb";
option csharp_namespace = "OpenMatch";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-openapiv2/options/annotations.proto";
option (grpc.gateway.protoc_gen_openapiv2.options.openapiv2_swagger) = {
info: {
title: "MM Logic (Data Layer)"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/internal/proto/examplepb/a_bit_of_everything.proto
};
message QueryTicketsRequest {
// The Pool representing the set of Filters to be queried.
Pool pool = 1;
}
message QueryTicketsResponse {
// Tickets that meet all the filtering criteria requested by the pool.
repeated Ticket tickets = 1;
}
message QueryTicketIdsRequest {
// The Pool representing the set of Filters to be queried.
Pool pool = 1;
}
message QueryTicketIdsResponse {
// TicketIDs that meet all the filtering criteria requested by the pool.
repeated string ids = 1;
}
// BETA FEATURE WARNING: This message is not finalized and
// still subject to possible change or removal.
message QueryBackfillsRequest {
// The Pool representing the set of Filters to be queried.
Pool pool = 1;
}
// BETA FEATURE WARNING: This message is not finalized and
// still subject to possible change or removal.
message QueryBackfillsResponse {
// Backfills that meet all the filtering criteria requested by the pool.
repeated Backfill backfills = 1;
}
// The QueryService service implements helper APIs for Match Function to query Tickets from state storage.
service QueryService {
// QueryTickets gets a list of Tickets that match all Filters of the input Pool.
// - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.
// QueryTickets pages the Tickets by `queryPageSize` and streams back responses.
// - queryPageSize defaults to 1000 if not set, and has a minimum of 10 and a maximum of 10000.
rpc QueryTickets(QueryTicketsRequest) returns (stream QueryTicketsResponse) {
option (google.api.http) = {
post: "/v1/queryservice/tickets:query"
body: "*"
};
}
// QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.
// - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.
// QueryTicketIds pages the TicketIDs by `queryPageSize` and streams back responses.
// - queryPageSize defaults to 1000 if not set, and has a minimum of 10 and a maximum of 10000.
rpc QueryTicketIds(QueryTicketIdsRequest) returns (stream QueryTicketIdsResponse) {
option (google.api.http) = {
post: "/v1/queryservice/ticketids:query"
body: "*"
};
}
// QueryBackfills gets a list of Backfills.
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
rpc QueryBackfills(QueryBackfillsRequest) returns (stream QueryBackfillsResponse) {
option (google.api.http) = {
post: "/v1/queryservice/backfills:query"
body: "*"
};
}
}
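The comments above describe QueryService's paging and streaming behavior. As an illustrative sketch only, not part of this change, a Go client built against the generated open-match.dev/open-match/pkg/pb package could drain the QueryTickets stream roughly as follows; the service address and the tag used in the pool are placeholder assumptions.

// querytickets_sketch.go: hypothetical client that drains a QueryTickets stream.
package main

import (
	"context"
	"io"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"open-match.dev/open-match/pkg/pb"
)

func main() {
	// Placeholder address; point this at the query service's gRPC endpoint in your cluster.
	conn, err := grpc.Dial("open-match-query.open-match.svc.cluster.local:50503",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("failed to connect to the query service: %v", err)
	}
	defer conn.Close()

	client := pb.NewQueryServiceClient(conn)

	// Query Tickets carrying a hypothetical "mode.demo" tag. Responses arrive in
	// pages of up to queryPageSize Tickets until the stream is exhausted.
	stream, err := client.QueryTickets(context.Background(), &pb.QueryTicketsRequest{
		Pool: &pb.Pool{
			TagPresentFilters: []*pb.TagPresentFilter{{Tag: "mode.demo"}},
		},
	})
	if err != nil {
		log.Fatalf("QueryTickets failed: %v", err)
	}

	var tickets []*pb.Ticket
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("error receiving QueryTickets page: %v", err)
		}
		tickets = append(tickets, resp.Tickets...)
	}
	log.Printf("received %d tickets", len(tickets))
}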

501
api/query.swagger.json Normal file

@ -0,0 +1,501 @@
{
"swagger": "2.0",
"info": {
"title": "MM Logic (Data Layer)",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"tags": [
{
"name": "QueryService"
}
],
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/queryservice/backfills:query": {
"post": {
"summary": "QueryBackfills gets a list of Backfills.\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "QueryService_QueryBackfills",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchQueryBackfillsResponse"
},
"error": {
"$ref": "#/definitions/rpcStatus"
}
},
"title": "Stream result of openmatchQueryBackfillsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"description": "BETA FEATURE WARNING: This Request messages are not finalized and \nstill subject to possible change or removal.",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchQueryBackfillsRequest"
}
}
],
"tags": [
"QueryService"
]
}
},
"/v1/queryservice/ticketids:query": {
"post": {
"summary": "QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.\n - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.\nQueryTicketIds pages the TicketIDs by `queryPageSize` and stream back responses.\n - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.",
"operationId": "QueryService_QueryTicketIds",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchQueryTicketIdsResponse"
},
"error": {
"$ref": "#/definitions/rpcStatus"
}
},
"title": "Stream result of openmatchQueryTicketIdsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchQueryTicketIdsRequest"
}
}
],
"tags": [
"QueryService"
]
}
},
"/v1/queryservice/tickets:query": {
"post": {
"summary": "QueryTickets gets a list of Tickets that match all Filters of the input Pool.\n - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.\nQueryTickets pages the Tickets by `queryPageSize` and stream back responses.\n - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.",
"operationId": "QueryService_QueryTickets",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/openmatchQueryTicketsResponse"
},
"error": {
"$ref": "#/definitions/rpcStatus"
}
},
"title": "Stream result of openmatchQueryTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"type": "string",
"format": "string"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/openmatchQueryTicketsRequest"
}
}
],
"tags": [
"QueryService"
]
}
}
},
"definitions": {
"DoubleRangeFilterExclude": {
"type": "string",
"enum": [
"NONE",
"MIN",
"MAX",
"BOTH"
],
"default": "NONE",
"title": "- NONE: No bounds should be excluded when evaluating the filter, i.e.: MIN \u003c= x \u003c= MAX\n - MIN: Only the minimum bound should be excluded when evaluating the filter, i.e.: MIN \u003c x \u003c= MAX\n - MAX: Only the maximum bound should be excluded when evaluating the filter, i.e.: MIN \u003c= x \u003c MAX\n - BOTH: Both bounds should be excluded when evaluating the filter, i.e.: MIN \u003c x \u003c MAX"
},
"openmatchAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchBackfill": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Id represents an auto-generated Id issued by Open Match."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by\nthe Match Function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"persistent_field": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be kept persistent \nthroughout the life-cycle of a backfill. \nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
},
"generation": {
"type": "string",
"format": "int64",
"description": "Generation gets incremented on GameServers update operations.\nPrevents the MMF from overriding a newer version from the game server.\nDo NOT read or write to this field, it is for internal tracking, and changing the value will cause bugs."
}
},
"description": "Represents a backfill entity which is used to fill partially full matches.\n\nBETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal."
},
"openmatchDoubleRangeFilter": {
"type": "object",
"properties": {
"double_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.double_args this Filter operates on."
},
"max": {
"type": "number",
"format": "double",
"description": "Maximum value."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value."
},
"exclude": {
"$ref": "#/definitions/DoubleRangeFilterExclude",
"description": "Defines the bounds to apply when filtering tickets by their search_fields.double_args value.\nBETA FEATURE WARNING: This field and the associated values are\nnot finalized and still subject to possible change or removal."
}
},
"title": "Filters numerical values to only those within a range.\n double_arg: \"foo\"\n max: 10\n min: 5\nmatches:\n {\"foo\": 5}\n {\"foo\": 7.5}\n {\"foo\": 10}\ndoes not match:\n {\"foo\": 4}\n {\"foo\": 10.01}\n {\"foo\": \"7.5\"}\n {}"
},
"openmatchPool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Pool."
},
"double_range_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchDoubleRangeFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected tickets must\nmatch every Filter."
},
"string_equals_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchStringEqualsFilter"
}
},
"tag_present_filters": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTagPresentFilter"
}
},
"created_before": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created before the specified time are selected."
},
"created_after": {
"type": "string",
"format": "date-time",
"description": "If specified, only Tickets created after the specified time are selected."
}
},
"description": "Pool specfies a set of criteria that are used to select a subset of Tickets\nthat meet all the criteria."
},
"openmatchQueryBackfillsRequest": {
"type": "object",
"properties": {
"pool": {
"$ref": "#/definitions/openmatchPool",
"description": "The Pool representing the set of Filters to be queried."
}
},
"description": "BETA FEATURE WARNING: This Request messages are not finalized and \nstill subject to possible change or removal."
},
"openmatchQueryBackfillsResponse": {
"type": "object",
"properties": {
"backfills": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchBackfill"
},
"description": "Backfills that meet all the filtering criteria requested by the pool."
}
},
"description": "BETA FEATURE WARNING: This Request messages are not finalized and \nstill subject to possible change or removal."
},
"openmatchQueryTicketIdsRequest": {
"type": "object",
"properties": {
"pool": {
"$ref": "#/definitions/openmatchPool",
"description": "The Pool representing the set of Filters to be queried."
}
}
},
"openmatchQueryTicketIdsResponse": {
"type": "object",
"properties": {
"ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "TicketIDs that meet all the filtering criteria requested by the pool."
}
}
},
"openmatchQueryTicketsRequest": {
"type": "object",
"properties": {
"pool": {
"$ref": "#/definitions/openmatchPool",
"description": "The Pool representing the set of Filters to be queried."
}
}
},
"openmatchQueryTicketsResponse": {
"type": "object",
"properties": {
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/openmatchTicket"
},
"description": "Tickets that meet all the filtering criteria requested by the pool."
}
}
},
"openmatchSearchFields": {
"type": "object",
"properties": {
"double_args": {
"type": "object",
"additionalProperties": {
"type": "number",
"format": "double"
},
"description": "Float arguments. Filterable on ranges."
},
"string_args": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "String arguments. Filterable on equality."
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Filterable on presence or absence of given value."
}
},
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"openmatchStringEqualsFilter": {
"type": "object",
"properties": {
"string_arg": {
"type": "string",
"description": "Name of the ticket's search_fields.string_args this Filter operates on."
},
"value": {
"type": "string"
}
},
"title": "Filters strings exactly equaling a value.\n string_arg: \"foo\"\n value: \"bar\"\nmatches:\n {\"foo\": \"bar\"}\ndoes not match:\n {\"foo\": \"baz\"}\n {\"bar\": \"foo\"}\n {}"
},
"openmatchTagPresentFilter": {
"type": "object",
"properties": {
"tag": {
"type": "string"
}
},
"title": "Filters to the tag being present on the search_fields.\n tag: \"foo\"\nmatches:\n [\"foo\"]\n [\"bar\",\"foo\"]\ndoes not match:\n [\"bar\"]\n []"
},
"openmatchTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Id represents an auto-generated Id issued by Open Match."
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
"description": "Search fields are the fields which Open Match is aware of, and can be used\nwhen specifying filters."
},
"extensions": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
},
"persistent_field": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufAny"
},
"description": "Customized information not inspected by Open Match, to be kept persistent \nthroughout the life-cycle of a ticket. \nOptional, depending on the requirements of the connected systems."
},
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",
"properties": {
"@type": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
}
},
"additionalProperties": {},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := anypb.New(foo)\n if err != nil {\n ...\n }\n ...\n foo := \u0026pb.Foo{}\n if err := any.UnmarshalTo(foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). Each `Status` message contains\nthree pieces of data: error code, error message, and error details.\n\nYou can find out more about this error model and how to work with it in the\n[API Design Guide](https://cloud.google.com/apis/design/errors)."
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}
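The swagger above also documents the HTTP/JSON mapping exposed through grpc-gateway, for example POST /v1/queryservice/tickets:query, whose streaming response wraps each page in a result/error object. Below is a minimal sketch, under assumed host, port, and pool values, of calling that REST endpoint from Go and decoding the streamed pages; none of these values are defined by this change.

// query_rest_sketch.go: hypothetical caller of the QueryTickets REST mapping.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

// streamItem mirrors the "Stream result of openmatchQueryTicketsResponse" schema:
// each streamed JSON object carries either a result page or an rpcStatus error.
type streamItem struct {
	Result struct {
		Tickets []struct {
			Id string `json:"id"`
		} `json:"tickets"`
	} `json:"result"`
	Error *struct {
		Code    int    `json:"code"`
		Message string `json:"message"`
	} `json:"error"`
}

func main() {
	// Placeholder host and port for the query service's HTTP gateway.
	url := "http://open-match-query.open-match.svc.cluster.local:51503/v1/queryservice/tickets:query"

	// Request body matching openmatchQueryTicketsRequest: a Pool with one tag_present_filter.
	body := []byte(`{"pool": {"tag_present_filters": [{"tag": "mode.demo"}]}}`)

	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	// The gateway streams one JSON object per page; decode them until the body ends.
	dec := json.NewDecoder(resp.Body)
	total := 0
	for {
		var item streamItem
		err := dec.Decode(&item)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("decode failed: %v", err)
		}
		if item.Error != nil {
			log.Fatalf("stream error %d: %s", item.Error.Code, item.Error.Message)
		}
		total += len(item.Result.Tickets)
	}
	fmt.Printf("received %d tickets\n", total)
}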


@ -1,103 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "internal/pb";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "Synchronizer"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_defintiions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message RegisterRequest {
}
message RegisterResponse {
// Identifier for this request valid for the current synchronization cycle.
string id = 1;
}
message EvaluateProposalsRequest {
// List of proposals to evaluate in the current synchronization cycle.
repeated Match matches = 1;
// Identifier for this request issued during request registration.
string id = 2;
}
message EvaluateProposalsResponse {
// Results from evaluating proposals for this request.
repeated Match matches = 1;
}
// The service implementing the Synchronizer API that synchronizes the evaluation
// of proposals returned from Match functions.
service Synchronizer {
// Register associates this request with the current synchronization cycle and
// returns an identifier for this registration. The caller returns this
// identifier back in the evaluation request. This enables synchronizer to
// identify stale evaluation requests belonging to a prior window.
rpc Register(RegisterRequest) returns (RegisterResponse) {
option (google.api.http) = {
get: "/v1/synchronizer/register"
};
}
// EvaluateProposals accepts a list of proposals and a registration identifier
// for this request. If the synchronization cycle to which the request was
// registered is completed, this request fails otherwise the proposals are
// added to the list of proposals to be evaluated in the current cycle. At the
// end of the cycle, the user defined evaluation method is triggered and the
// matches accepted by it are returned as results.
rpc EvaluateProposals(EvaluateProposalsRequest) returns (EvaluateProposalsResponse) {
option (google.api.http) = {
post: "/v1/synchronizer/proposals:evaluate"
body: "*"
};
}
}


@ -1,320 +0,0 @@
{
"swagger": "2.0",
"info": {
"title": "Synchronizer",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/synchronizer/proposals:evaluate": {
"post": {
"summary": "EvaluateProposals accepts a list of proposals and a registration identifier\nfor this request. If the synchronization cycle to which the request was\nregistered is completed, this request fails otherwise the proposals are\nadded to the list of proposals to be evaluated in the current cycle. At the\n end of the cycle, the user defined evaluation method is triggered and the\nmatches accepted by it are returned as results.",
"operationId": "EvaluateProposals",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiEvaluateProposalsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiEvaluateProposalsRequest"
}
}
],
"tags": [
"Synchronizer"
]
}
},
"/v1/synchronizer/register": {
"get": {
"summary": "Register associates this request with the current synchronization cycle and\nreturns an identifier for this registration. The caller returns this\nidentifier back in the evaluation request. This enables synchronizer to\nidentify stale evaluation requests belonging to a prior window.",
"operationId": "Register",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiRegisterResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"tags": [
"Synchronizer"
]
}
}
},
"definitions": {
"apiAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
},
"apiEvaluateProposalsRequest": {
"type": "object",
"properties": {
"matches": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "List of proposals to evaluate in the current synchronization cycle."
},
"id": {
"type": "string",
"description": "Identifier for this request issued during request registration."
}
}
},
"apiEvaluateProposalsResponse": {
"type": "object",
"properties": {
"matches": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "Results from evaluating proposals for this request."
}
}
},
"apiMatch": {
"type": "object",
"properties": {
"match_id": {
"type": "string",
"description": "A Match ID that should be passed through the stack for tracing."
},
"match_profile": {
"type": "string",
"description": "Name of the match profile that generated this Match."
},
"match_function": {
"type": "string",
"description": "Name of the match function that generated this Match."
},
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/apiTicket"
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
},
"title": "Set of Rosters that comprise this Match"
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Match properties for this Match. Open Match does not interpret this field."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least \none ticket to be considered as valid."
},
"apiRegisterResponse": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Identifier for this request valid for the current synchronization cycle."
}
}
},
"apiRoster": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
},
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that a Tickets\nassociated with a Match can be labelled to belong to a team, sub-team etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress etc."
},
"apiTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of that\nvariants, absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}


@ -48,19 +48,19 @@
steps:
- id: 'Docker Image: open-match-build'
name: gcr.io/kaniko-project/executor
args: ['--destination=gcr.io/$PROJECT_ID/open-match-build', '--cache=true', '--cache-ttl=48h', '--dockerfile=Dockerfile.ci', '.']
name: gcr.io/cloud-builders/docker
args: ['build', '-t', 'gcr.io/$PROJECT_ID/open-match-build', '-f', 'Dockerfile.ci', '.']
waitFor: ['-']
- id: 'Build: Clean'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'clean']
args: ['make', 'clean-third-party', 'clean-protos', 'clean-swagger-docs']
waitFor: ['Docker Image: open-match-build']
- id: 'Test: Markdown'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'md-test']
waitFor: ['Build: Clean']
# - id: 'Test: Markdown'
# name: 'gcr.io/$PROJECT_ID/open-match-build'
# args: ['make', 'md-test']
# waitFor: ['Build: Clean']
- id: 'Setup: Download Dependencies'
name: 'gcr.io/$PROJECT_ID/open-match-build'
@ -70,15 +70,7 @@ steps:
path: '/go'
waitFor: ['Build: Clean']
- id: 'Build: Install Kubernetes Tools'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'install-kubernetes-tools']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Clean']
- id: 'Build: Install Toolchain'
- id: 'Build: Initialize Toolchain'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'install-toolchain']
volumes:
@ -86,13 +78,23 @@ steps:
path: '/go'
waitFor: ['Setup: Download Dependencies']
- id: 'Test: Terraform Configuration'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'terraform-test']
waitFor: ['Build: Initialize Toolchain']
- id: 'Build: Deployment Configs'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'SHORT_SHA=${SHORT_SHA}', 'update-chart-deps', 'install/yaml/']
waitFor: ['Build: Initialize Toolchain']
- id: 'Build: Assets'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'assets', '-j12']
args: ['make', '_CHARTS_BUCKET=${_CHARTS_BUCKET}', 'assets', '-j12']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Install Toolchain']
waitFor: ['Build: Deployment Configs']
- id: 'Build: Binaries'
name: 'gcr.io/$PROJECT_ID/open-match-build'
@ -115,95 +117,60 @@ steps:
args: ['make', '_GCB_POST_SUBMIT=${_GCB_POST_SUBMIT}', '_GCB_LATEST_VERSION=${_GCB_LATEST_VERSION}', 'SHORT_SHA=${SHORT_SHA}', 'BRANCH_NAME=${BRANCH_NAME}', 'push-images', '-j8']
waitFor: ['Build: Assets']
- id: 'Build: Deployment Configs'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'VERSION_SUFFIX=$SHORT_SHA', 'clean-install-yaml', 'install/yaml/']
waitFor: ['Build: Install Toolchain']
- id: 'Lint: Format, Vet, Charts'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'lint']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Assets', 'Build: Deployment Configs']
- id: 'Test: Terraform Configuration'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'terraform-test']
waitFor: ['Build: Install Toolchain']
- id: 'Test: Create Cluster'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'SHORT_SHA=${SHORT_SHA}', 'delete-gke-cluster', 'create-gke-cluster', 'push-helm']
waitFor: ['Build: Install Kubernetes Tools']
waitFor: ['Build: Assets']
- id: 'Test: Deploy Open Match'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'SHORT_SHA=${SHORT_SHA}', 'install-ci-chart']
waitFor: ['Test: Create Cluster', 'Build: Docker Images']
- id: 'Test: End-to-End Cluster'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GOPROXY=off', 'SHORT_SHA=${SHORT_SHA}', 'test-e2e-cluster']
waitFor: ['Test: Deploy Open Match', 'Build: Assets']
volumes:
- name: 'go-vol'
path: '/go'
- id: 'Test: Delete Cluster'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'SHORT_SHA=${SHORT_SHA}', 'GCLOUD_EXTRA_FLAGS=--async', 'GCP_PROJECT_ID=${PROJECT_ID}', 'ci-reap-clusters', 'delete-gke-cluster']
waitFor: ['Test: End-to-End Cluster']
args: ['make', 'SHORT_SHA=${SHORT_SHA}', 'OPEN_MATCH_KUBERNETES_NAMESPACE=open-match-${BUILD_ID}', 'OPEN_MATCH_RELEASE_NAME=open-match-${BUILD_ID}', 'auth-gke-cluster', 'delete-chart', 'ci-reap-namespaces', 'install-ci-chart']
waitFor: ['Build: Docker Images']
- id: 'Deploy: Deployment Configs'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', '_GCB_POST_SUBMIT=${_GCB_POST_SUBMIT}', '_GCB_LATEST_VERSION=${_GCB_LATEST_VERSION}', 'VERSION_SUFFIX=${SHORT_SHA}', 'BRANCH_NAME=${BRANCH_NAME}', 'ci-deploy-artifacts']
args: ['make', '_GCB_POST_SUBMIT=${_GCB_POST_SUBMIT}', '_GCB_LATEST_VERSION=${_GCB_LATEST_VERSION}', 'SHORT_SHA=${SHORT_SHA}', 'BRANCH_NAME=${BRANCH_NAME}', '_CHARTS_BUCKET=${_CHARTS_BUCKET}', 'ci-deploy-artifacts']
waitFor: ['Lint: Format, Vet, Charts', 'Test: Deploy Open Match']
volumes:
- name: 'go-vol'
path: '/go'
- id: 'Test: End-to-End Cluster'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GOPROXY=off', 'SHORT_SHA=${SHORT_SHA}', 'OPEN_MATCH_KUBERNETES_NAMESPACE=open-match-${BUILD_ID}', 'test-e2e-cluster']
waitFor: ['Test: Deploy Open Match', 'Build: Assets']
volumes:
- name: 'go-vol'
path: '/go'
- id: 'Test: Delete Open Match'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GCLOUD_EXTRA_FLAGS=--async', 'SHORT_SHA=${SHORT_SHA}', 'OPEN_MATCH_KUBERNETES_NAMESPACE=open-match-${BUILD_ID}', 'OPEN_MATCH_RELEASE_NAME=open-match-${BUILD_ID}', 'GCP_PROJECT_ID=${PROJECT_ID}', 'delete-chart']
waitFor: ['Test: End-to-End Cluster']
artifacts:
objects:
location: gs://open-match-build-artifacts/output/
location: '${_ARTIFACTS_BUCKET}'
paths:
- cmd/backend/backend
- cmd/frontend/frontend
- cmd/mmlogic/mmlogic
- cmd/synchronizer/synchronizer
- cmd/minimatch/minimatch
- cmd/swaggerui/swaggerui
- install/yaml/install.yaml
- install/yaml/install-demo.yaml
- install/yaml/01-redis-chart.yaml
- install/yaml/02-open-match.yaml
- install/yaml/01-open-match-core.yaml
- install/yaml/02-open-match-demo.yaml
- install/yaml/03-prometheus-chart.yaml
- install/yaml/04-grafana-chart.yaml
- install/yaml/05-jaeger-chart.yaml
- examples/functions/golang/soloduel/soloduel
- examples/functions/golang/pool/pool
- examples/evaluator/golang/simple/simple
- tools/certgen/certgen
- tools/reaper/reaper
images:
- 'gcr.io/$PROJECT_ID/openmatch-backend:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-frontend:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-mmlogic:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-synchronizer:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-minimatch:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-demo:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-mmf-go-soloduel:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-mmf-go-pool:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-evaluator-go-simple:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-swaggerui:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-reaper:${_OM_VERSION}-${SHORT_SHA}'
- install/yaml/06-open-match-override-configmap.yaml
substitutions:
_OM_VERSION: "0.6.0"
_OM_VERSION: "1.7.0-rc.1"
_GCB_POST_SUBMIT: "0"
_GCB_LATEST_VERSION: "undefined"
logsBucket: 'gs://open-match-build-logs/'
_ARTIFACTS_BUCKET: "gs://open-match-build-artifacts/output/"
_LOGS_BUCKET: "gs://open-match-build-logs/"
_CHARTS_BUCKET: "gs://open-match-chart"
logsBucket: '${_LOGS_BUCKET}'
options:
sourceProvenanceHash: ['SHA256']
machineType: 'N1_HIGHCPU_32'


@ -16,10 +16,10 @@
package main
import (
"open-match.dev/open-match/internal/app"
"open-match.dev/open-match/internal/app/backend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
app.RunApplication("backend", backend.BindService)
appmain.RunApplication("backend", backend.BindService)
}


@ -0,0 +1,24 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"open-match.dev/open-match/internal/app/evaluator/defaulteval"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("evaluator", defaulteval.BindService)
}


@ -0,0 +1,31 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"open-match.dev/open-match/examples/demo"
"open-match.dev/open-match/examples/demo/components"
"open-match.dev/open-match/examples/demo/components/clients"
"open-match.dev/open-match/examples/demo/components/director"
"open-match.dev/open-match/examples/demo/components/uptime"
)
func main() {
demo.Run(map[string]func(*components.DemoShared){
"uptime": uptime.Run,
"clients": clients.Run,
"director": director.Run,
})
}


@ -1,56 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/frontend/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/frontend/frontend /app/
ENTRYPOINT ["/app/frontend"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Open Match Frontend API"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_TIME}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/googleforgames/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="1" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/googleforgames/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"


@ -16,10 +16,10 @@
package main
import (
"open-match.dev/open-match/internal/app"
"open-match.dev/open-match/internal/app/frontend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
app.RunApplication("frontend", frontend.BindService)
appmain.RunApplication("frontend", frontend.BindService)
}


@ -1,56 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/minimatch/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/minimatch/minimatch /app/
ENTRYPOINT ["/app/minimatch"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Mini Match"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_TIME}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/googleforgames/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="1" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/googleforgames/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"


@ -16,10 +16,10 @@
package main
import (
"open-match.dev/open-match/internal/app"
"open-match.dev/open-match/internal/app/minimatch"
"open-match.dev/open-match/internal/appmain"
)
func main() {
app.RunApplication("minimatch", minimatch.BindService)
appmain.RunApplication("minimatch", minimatch.BindService)
}


@ -1,56 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/mmlogic/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/mmlogic/mmlogic /app/
ENTRYPOINT ["/app/mmlogic"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Open Match Data API"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_TIME}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/googleforgames/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="1" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/googleforgames/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"

View File

@ -12,14 +12,14 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the mmlogic service for Open Match.
// Package main is the query service for Open Match.
package main
import (
"open-match.dev/open-match/internal/app"
"open-match.dev/open-match/internal/app/mmlogic"
"open-match.dev/open-match/internal/app/query"
"open-match.dev/open-match/internal/appmain"
)
func main() {
app.RunApplication("mmlogic", mmlogic.BindService)
appmain.RunApplication("query", query.BindService)
}

View File

@ -12,9 +12,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package examples defines the constants that some of the examples may share.
package examples
package main
const (
MatchScore = "match_score"
import (
"open-match.dev/open-match/examples/scale/backend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("scale", backend.BindService)
}

View File

@ -1,4 +1,3 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -12,13 +11,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package e2e
package main
import (
"open-match.dev/open-match/internal/testing/e2e"
"testing"
scaleEvaluator "open-match.dev/open-match/examples/scale/evaluator"
)
func TestMain(m *testing.M) {
e2e.RunMain(m)
func main() {
scaleEvaluator.Run()
}

View File

@ -0,0 +1,24 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"open-match.dev/open-match/examples/scale/frontend"
"open-match.dev/open-match/internal/appmain"
)
func main() {
appmain.RunApplication("scale", frontend.BindService)
}

View File

@ -12,12 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package e2e
package main
import (
"testing"
scaleMmf "open-match.dev/open-match/examples/scale/mmf"
)
func TestMain(m *testing.M) {
RunMain(m)
func main() {
scaleMmf.Run()
}

View File

@ -1,61 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/swaggerui/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
COPY api/*.json /go/src/open-match.dev/open-match/third_party/swaggerui/api/
# Since we copy the swagger docs to the container, point to them so they are served locally.
# This is important because if there are local changes we want those reflected in the container.
RUN sed -i 's|https://open-match.dev/api/v.*/|/api/|g' /go/src/open-match.dev/open-match/third_party/swaggerui/config.json
FROM gcr.io/distroless/static:nonroot
WORKDIR /app
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/swaggerui/swaggerui /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/third_party/swaggerui/ /app/static
ENTRYPOINT ["/app/swaggerui"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Open Match Swagger UI"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_TIME}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/GoogleCloudPlatform/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="1" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/GoogleCloudPlatform/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"

View File

@ -2,7 +2,7 @@
"urls": [
{"name": "Frontend", "url": "https://open-match.dev/api/v0.0.0-dev/frontend.swagger.json"},
{"name": "Backend", "url": "https://open-match.dev/api/v0.0.0-dev/backend.swagger.json"},
{"name": "Mmlogic", "url": "https://open-match.dev/api/v0.0.0-dev/mmlogic.swagger.json"},
{"name": "Query", "url": "https://open-match.dev/api/v0.0.0-dev/query.swagger.json"},
{"name": "MatchFunction", "url": "https://open-match.dev/api/v0.0.0-dev/matchfunction.swagger.json"},
{"name": "Synchronizer", "url": "https://open-match.dev/api/v0.0.0-dev/synchronizer.swagger.json"},
{"name": "Evaluator", "url": "https://open-match.dev/api/v0.0.0-dev/evaluator.swagger.json"}

View File

@ -1,56 +0,0 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/synchronizer/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/synchronizer/synchronizer /app/
ENTRYPOINT ["/app/synchronizer"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Open Match Synchronizer API"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_TIME}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/googleforgames/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="1" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/googleforgames/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"

View File

@ -16,10 +16,10 @@
package main
import (
"open-match.dev/open-match/internal/app"
"open-match.dev/open-match/internal/app/synchronizer"
"open-match.dev/open-match/internal/appmain"
)
func main() {
app.RunApplication("synchronizer", synchronizer.BindService)
appmain.RunApplication("synchronizer", synchronizer.BindService)
}

View File

@ -9,14 +9,12 @@ To build Open Match you'll need the following applications installed.
* [Git](https://git-scm.com/downloads)
* [Go](https://golang.org/doc/install)
* [Python3 with virtualenv](https://wiki.python.org/moin/BeginnersGuide/Download)
* Make (Mac: install [XCode](https://itunes.apple.com/us/app/xcode/id497799835))
* [Docker](https://docs.docker.com/install/) including the
[post-install steps](https://docs.docker.com/install/linux/linux-postinstall/).
Optional Software
* [Google Cloud Platform](gcloud.md)
* [Visual Studio Code](https://code.visualstudio.com/Download) as an IDE.
Vim and Emacs work too.
* [VirtualBox](https://www.virtualbox.org/wiki/Downloads) recommended for
@ -27,8 +25,7 @@ running:
```bash
sudo apt-get update
sudo apt-get install -y -q python3 python3-virtualenv virtualenv make \
google-cloud-sdk git unzip tar
sudo apt-get install -y -q make google-cloud-sdk git unzip tar
```
*It's recommended that you install Go using their instructions because package
@ -49,15 +46,13 @@ make
*Typically for contributing you'll want to
[create a fork](https://help.github.com/en/articles/fork-a-repo) and use that
but for the purpose of this guide we'll be using the upstream/master.*
but for the purpose of this guide we'll be using the upstream/main.*
## Building
## Building code and images
```bash
# Reset workspace
make clean
# Compile all the binaries
make all -j$(nproc)
# Run tests
make test
# Build all the images.
@ -66,6 +61,8 @@ make build-images -j$(nproc)
make push-images -j$(nproc)
# Push images to Docker Hub
make REGISTRY=mydockerusername push-images -j$(nproc)
# Generate Kubernetes installation YAML files (Note that the trailing '/' is needed here)
make install/yaml/
```
_**-j$(nproc)** is a flag to tell make to parallelize the commands based on
@ -85,11 +82,9 @@ default context the Makefile will honor that._
# GKE cluster: make create-gke-cluster/delete-gke-cluster
# or create a local Minikube cluster
make create-gke-cluster
# Step 2: Download helm and install Tiller in the cluster
make push-helm
# Step 3: Build and Push Open Match Images to gcr.io
# Step 2: Build and Push Open Match Images to gcr.io
make push-images -j$(nproc)
# Install Open Match in the cluster.
# Step 3: Install Open Match in the cluster.
make install-chart
# Create a proxy to Open Match pods so that you can access them locally.
@ -103,19 +98,36 @@ make proxy
make delete-chart
```
## Interaction
## Iterating
While iterating on the project, you may need to:
1. Install/Run everything
2. Make some code changes
3. Make sure the changes compile by running `make test`
4. Build and push Docker images to your personal registry by running `make push-images -j$(nproc)`
5. Deploy the code change by running `make install-chart`
6. Verify it's working by [looking at the logs](#accessing-logs) or looking at the monitoring dashboard by running `make proxy-grafana`
7. Tear down Open Match by running `make delete-chart`
Before integrating with Open Match you can manually interact with it to get a feel for how it works.
## Accessing logs
To look at Open Match core services' logs, run:
```bash
# Replace open-match-frontend with the service name that you would like to access
kubectl logs -n open-match svc/open-match-frontend
```
`make proxy-ui` exposes the Swagger UI for Open Match locally on your computer.
You can then go to http://localhost:51500 and view the API as well as interactively call Open Match.
## API References
While integrating with Open Match you may want to understand its API surface and concepts, or interact with it to get a feel for how it works.
The APIs are defined in `proto` format under the `api/` folder, with references available at [open-match.dev](https://open-match.dev/site/docs/reference/api/).
You can also run `make proxy-ui` to expose the Swagger UI for Open Match locally on your computer after [deploying it to Kubernetes](#deploying-to-kubernetes), then go to http://localhost:51500 and view the REST APIs as well as interactively call Open Match.
By default you will be talking to the frontend server, but you can change the target API URL to any of the following:
* api/frontend.swagger.json
* api/backend.swagger.json
* api/synchronizer.swagger.json
* api/mmlogic.swagger.json
* api/query.swagger.json
For a more current list, refer to the api/ directory of this repository. Note that matchfunction.swagger.json is not supported.
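If you prefer to exercise the gRPC surface directly, here is a minimal Go sketch. It reuses the `FrontendServiceClient` calls that appear in the demo client changes further down in this comparison; the in-cluster frontend address and the insecure dial option are assumptions that only hold for code running inside a test cluster.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"open-match.dev/open-match/pkg/pb"
)

func main() {
	// Assumed in-cluster frontend address; see the demo client changes below.
	conn, err := grpc.Dial("open-match-frontend.open-match.svc.cluster.local:50504", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	fe := pb.NewFrontendServiceClient(conn)
	// CreateTicket returns the stored Ticket, including its generated Id.
	ticket, err := fe.CreateTicket(context.Background(), &pb.CreateTicketRequest{Ticket: &pb.Ticket{}})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created ticket %s", ticket.Id)
}
```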
@ -142,55 +154,9 @@ export GOPATH=$HOME/workspace/
## Pull Requests
If you want to submit a Pull Request, there are some tools to help prepare your
change.
```bash
# Runs code generators, tests, and linters.
make presubmit
```
`make presubmit` catches most of the issues your change can run into. If the
submit checks fail you can run it locally via,
```bash
make local-cloud-build
```
If you want to submit a Pull Request, `make presubmit` can catch most of the issues your change can run into.
Our [continuous integration](https://console.cloud.google.com/cloud-build/builds?project=open-match-build)
runs against all PRs. In order to see your build results you'll need to
become a member of
[open-match-discuss@googlegroups.com](https://groups.google.com/forum/#!forum/open-match-discuss).
## Makefile
The Makefile is the core of Open Match's build process. There are a lot of
commands, but here's a list of the important ones and patterns to remember them.
```bash
# Help
make
# Reset workspace (delete all build artifacts)
make clean
# Delete auto-generated protobuf code and swagger API docs.
make clean-protos clean-swagger-docs
# make clean-* deletes some part of the build outputs.
# Build all Docker images
make build-images
# Build frontend docker image.
make build-frontend-image
# Formats, Vets, and tests the codebase.
make fmt vet test
# Same as above also regenerates autogen files.
make presubmit
# Run website on http://localhost:8080
make run-site
# Proxy all Open Match processes to view them.
make proxy
```

View File

@ -1,26 +0,0 @@
# Create a GKE Cluster
Below are the steps to create a GKE cluster in Google Cloud Platform.
* Create a GCP project via [Google Cloud Console](https://console.cloud.google.com/).
* Billing must be enabled. If you're a new customer you can get some [free credits](https://cloud.google.com/free/).
* When you create a project you'll need to set a Project ID; if you forget it, you can see it at https://console.cloud.google.com/iam-admin/settings/project.
* Install [Google Cloud SDK](https://cloud.google.com/sdk/) which is the command line tool to work against your project.
Here are the next steps using the gcloud tool.
```bash
# Login to your Google Account for GCP
gcloud auth login
gcloud config set project $YOUR_GCP_PROJECT_ID
# Enable necessary GCP services
gcloud services enable containerregistry.googleapis.com
gcloud services enable container.googleapis.com
# Test that everything is good, this command should work.
gcloud compute zones list
# Create a GKE Cluster in this project
gcloud container clusters create --machine-type n1-standard-2 open-match-dev-cluster --zone us-west1-a --tags open-match
```

View File

@ -2,44 +2,49 @@
This is the {version} release of Open Match.
Check the [README](https://github.com/googleforgames/open-match/tree/release-{version}) for details on features, installation and usage.
Check the [official website](https://open-match.dev) for details on features, installation and usage.
Release Notes
-------------
{ insert enhancements from the changelog and/or security and breaking changes }
**Feature Highlights**
{ highlight here the most notable changes and themes at a high level}
**Breaking Changes**
* API Changed #PR
{ detail any behaviors or API surfaces which worked in a previous version which will no longer work correctly }
**Enhancements**
* New Harness #PR
> Future releases towards 1.0.0 may still have breaking changes.
**Security Fixes**
* Reduced privileges required for MMF. #PR
{ list any changes which fix vulnerabilities in open match }
See [CHANGELOG](https://github.com/googleforgames/open-match/blob/release-{version}/CHANGELOG.md) for more details on changes.
**Enhancements**
{ go into details on improvements and changes }
Usage Requirements
-------------
* Tested against Kubernetes Version { a list of k8s versions}
* Golang Version = v{ required golang version }
Images
------
```bash
# Servers
docker pull gcr.io/open-match-public-images/openmatch-backendapi:{version}
docker pull gcr.io/open-match-public-images/openmatch-frontendapi:{version}
docker pull gcr.io/open-match-public-images/openmatch-mmforc:{version}
docker pull gcr.io/open-match-public-images/openmatch-mmlogicapi:{version}
docker pull gcr.io/open-match-public-images/openmatch-backend:{version}
docker pull gcr.io/open-match-public-images/openmatch-frontend:{version}
docker pull gcr.io/open-match-public-images/openmatch-query:{version}
docker pull gcr.io/open-match-public-images/openmatch-synchronizer:{version}
# Evaluators
docker pull gcr.io/open-match-public-images/openmatch-evaluator-serving:{version}
docker pull gcr.io/open-match-public-images/openmatch-evaluator-go-simple:{version}
# Sample Match Making Functions
docker pull gcr.io/open-match-public-images/openmatch-mmf-go-simple:{version}
docker pull gcr.io/open-match-public-images/openmatch-mmf-go-soloduel:{version}
docker pull gcr.io/open-match-public-images/openmatch-mmf-go-pool:{version}
# Test Clients
docker pull gcr.io/open-match-public-images/openmatch-backendclient:{version}
docker pull gcr.io/open-match-public-images/openmatch-clientloadgen:{version}
docker pull gcr.io/open-match-public-images/openmatch-frontendclient:{version}
docker pull gcr.io/open-match-public-images/openmatch-demo-first-match:{version}
```
_This software is currently alpha, and subject to change. Not to be used in production systems._
@ -47,15 +52,10 @@ _This software is currently alpha, and subject to change. Not to be used in prod
Installation
------------
To deploy Open Match in your Kubernetes cluster run the following commands:
* Follow [Open Match Installation Guide](https://open-match.dev/site/docs/installation/) to setup Open Match in your cluster.
```bash
# Grant yourself cluster-admin permissions so that you can deploy service accounts.
kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$(YOUR_KUBERNETES_USER_NAME)
# Place all Open Match components in their own namespace.
kubectl create namespace open-match
# Install Open Match and monitoring services.
kubectl apply -f https://github.com/googleforgames/open-match/releases/download/v{version}/install.yaml --namespace open-match
# Install the demo.
kubectl apply -f https://github.com/googleforgames/open-match/releases/download/v{version}/install-demo.yaml --namespace open-match
```
API Definitions
------------
- gRPC API Definitions are available in [API references](https://open-match.dev/site/docs/reference/api/) - _Preferred_
- HTTP API Definitions are available in [SwaggerUI](https://open-match.dev/site/swaggerui/index.html)

View File

@ -12,24 +12,13 @@ SOURCE_VERSION=$1
DEST_VERSION=$2
SOURCE_PROJECT_ID=open-match-build
DEST_PROJECT_ID=open-match-public-images
IMAGE_NAMES="openmatch-backendapi openmatch-frontendapi openmatch-mmforc openmatch-mmlogicapi openmatch-evaluator-serving openmatch-mmf-go-simple openmatch-backendclient openmatch-clientloadgen openmatch-frontendclient"
IMAGE_NAMES=$(make list-images)
for name in $IMAGE_NAMES
do
source_image=gcr.io/$SOURCE_PROJECT_ID/$name:$SOURCE_VERSION
dest_image=gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION
source_image=gcr.io/$SOURCE_PROJECT_ID/openmatch-$name:$SOURCE_VERSION
dest_image=gcr.io/$DEST_PROJECT_ID/openmatch-$name:$DEST_VERSION
docker pull $source_image
docker tag $source_image $dest_image
docker push $dest_image
done
echo "=============================================================="
echo "=============================================================="
echo "=============================================================="
echo "=============================================================="
echo "Add these lines to your release notes:"
for name in $IMAGE_NAMES
do
echo "docker pull gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION"
done

docs/hugo_apiheader.txt Normal file
View File

@ -0,0 +1,7 @@
---
title: "Open Match API References"
linkTitle: "Open Match API References"
weight: 2
description:
This document provides API references for Open Match services.
---

View File

@ -37,7 +37,7 @@ func New() *ByteSub {
}
}
// AnnounceLatest writes b to all of the subscribers, with caviets listed in Subscribe.
// AnnounceLatest writes b to all of the subscribers, with caveats listed in Subscribe.
func (s *ByteSub) AnnounceLatest(b []byte) {
s.r.Lock()
defer s.r.Unlock()
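For context, here is a hedged usage sketch of `ByteSub`, pieced together from the calls visible elsewhere in this comparison (`New()` above, `AnnounceLatest`, and the `bs.Subscribe(ws.Request().Context(), ws)` call in the demo server changes); the exact `Subscribe` signature is an assumption, not something documented in this diff.

```go
package main

import (
	"context"
	"os"
	"time"

	"open-match.dev/open-match/examples/demo/bytesub"
)

func main() {
	bs := bytesub.New()

	// Assumption: Subscribe takes a context and an io.Writer-style sink, mirroring
	// the demo server's websocket handler call bs.Subscribe(ws.Request().Context(), ws).
	go bs.Subscribe(context.Background(), os.Stdout)

	// AnnounceLatest fans the latest value out to every subscriber.
	bs.AnnounceLatest([]byte(`{"status":"ok"}`))

	time.Sleep(100 * time.Millisecond) // give the subscriber a moment to flush
}
```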

View File

@ -51,7 +51,7 @@ func TestFastAndSlow(t *testing.T) {
for count := 0; true; count++ {
if v := <-slow; v == "3" {
if count > 1 {
t.Error("Expected to recieve at most 1 other value on slow before recieving the latest value.")
t.Error("Expected to receive at most 1 other value on slow before receiving the latest value.")
}
break
}

View File

@ -20,12 +20,11 @@ import (
"math/rand"
"time"
"google.golang.org/grpc"
"open-match.dev/open-match/examples/demo/components"
"open-match.dev/open-match/examples/demo/updater"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/rpc"
"open-match.dev/open-match/pkg/pb"
"open-match.dev/open-match/pkg/structs"
)
func Run(ds *components.DemoShared) {
@ -35,7 +34,7 @@ func Run(ds *components.DemoShared) {
name := fmt.Sprintf("fakeplayer_%d", i)
go func() {
for !isContextDone(ds.Ctx) {
runScenario(ds.Ctx, ds.Cfg, name, u.ForField(name))
runScenario(ds.Ctx, name, u.ForField(name))
}
}()
}
@ -55,7 +54,7 @@ type status struct {
Assignment *pb.Assignment
}
func runScenario(ctx context.Context, cfg config.View, name string, update updater.SetFunc) {
func runScenario(ctx context.Context, name string, update updater.SetFunc) {
defer func() {
r := recover()
if r != nil {
@ -81,12 +80,13 @@ func runScenario(ctx context.Context, cfg config.View, name string, update updat
s.Status = "Connecting to Open Match frontend"
update(s)
conn, err := rpc.GRPCClientFromConfig(cfg, "api.frontend")
// See https://open-match.dev/site/docs/guides/api/
conn, err := grpc.Dial("open-match-frontend.open-match.svc.cluster.local:50504", grpc.WithInsecure())
if err != nil {
panic(err)
}
defer conn.Close()
fe := pb.NewFrontendClient(conn)
fe := pb.NewFrontendServiceClient(conn)
//////////////////////////////////////////////////////////////////////////////
s.Status = "Creating Open Match Ticket"
@ -95,19 +95,14 @@ func runScenario(ctx context.Context, cfg config.View, name string, update updat
var ticketId string
{
req := &pb.CreateTicketRequest{
Ticket: &pb.Ticket{
Properties: structs.Struct{
"name": structs.String(name),
"mode.demo": structs.Number(1),
}.S(),
},
Ticket: &pb.Ticket{},
}
resp, err := fe.CreateTicket(ctx, req)
if err != nil {
panic(err)
}
ticketId = resp.Ticket.Id
ticketId = resp.Id
}
//////////////////////////////////////////////////////////////////////////////
@ -116,11 +111,11 @@ func runScenario(ctx context.Context, cfg config.View, name string, update updat
var assignment *pb.Assignment
{
req := &pb.GetAssignmentsRequest{
req := &pb.WatchAssignmentsRequest{
TicketId: ticketId,
}
stream, err := fe.GetAssignments(ctx, req)
stream, err := fe.WatchAssignments(ctx, req)
for assignment.GetConnection() == "" {
resp, err := stream.Recv()
if err != nil {
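The hunk above is cut off by the diff context; below is a hedged sketch of the complete wait loop, written as a hypothetical helper. The `resp.Assignment` field name and the `clients` package name are assumptions; everything else mirrors the lines shown above.

```go
package clients // assumed package name for this demo component

import (
	"context"

	"open-match.dev/open-match/pkg/pb"
)

// watchAssignment is a hypothetical helper that blocks until the ticket
// receives a non-empty connection string from WatchAssignments.
func watchAssignment(ctx context.Context, fe pb.FrontendServiceClient, ticketId string) (*pb.Assignment, error) {
	stream, err := fe.WatchAssignments(ctx, &pb.WatchAssignmentsRequest{TicketId: ticketId})
	if err != nil {
		return nil, err
	}
	var assignment *pb.Assignment
	for assignment.GetConnection() == "" {
		resp, err := stream.Recv()
		if err != nil {
			return nil, err
		}
		assignment = resp.Assignment // assumption: the streamed response wraps the latest Assignment
	}
	return assignment, nil
}
```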

View File

@ -18,11 +18,9 @@ import (
"context"
"open-match.dev/open-match/examples/demo/updater"
"open-match.dev/open-match/internal/config"
)
type DemoShared struct {
Ctx context.Context
Cfg config.View
Update updater.SetFunc
}

View File

@ -17,11 +17,13 @@ package director
import (
"context"
"fmt"
"io"
"math/rand"
"time"
"google.golang.org/grpc"
"open-match.dev/open-match/examples/demo/components"
"open-match.dev/open-match/internal/rpc"
"open-match.dev/open-match/pkg/pb"
)
@ -65,12 +67,13 @@ func run(ds *components.DemoShared) {
s.Status = "Connecting to backend"
ds.Update(s)
conn, err := rpc.GRPCClientFromConfig(ds.Cfg, "api.backend")
// See https://open-match.dev/site/docs/guides/api/
conn, err := grpc.Dial("open-match-backend.open-match.svc.cluster.local:50505", grpc.WithInsecure())
if err != nil {
panic(err)
}
defer conn.Close()
be := pb.NewBackendClient(conn)
be := pb.NewBackendServiceClient(conn)
//////////////////////////////////////////////////////////////////////////////
s.Status = "Match Match: Sending Request"
@ -80,35 +83,35 @@ func run(ds *components.DemoShared) {
{
req := &pb.FetchMatchesRequest{
Config: &pb.FunctionConfig{
Host: ds.Cfg.GetString("api.functions.hostname"),
Port: int32(ds.Cfg.GetInt("api.functions.grpcport")),
Host: "om-function.open-match-demo.svc.cluster.local",
Port: 50502,
Type: pb.FunctionConfig_GRPC,
},
Profiles: []*pb.MatchProfile{
{
Name: "1v1",
Pools: []*pb.Pool{
{
Name: "Everyone",
Filters: []*pb.Filter{
{
Attribute: "mode.demo",
Min: -100,
Max: 100,
},
},
},
Profile: &pb.MatchProfile{
Name: "1v1",
Pools: []*pb.Pool{
{
Name: "Everyone",
},
},
},
}
resp, err := be.FetchMatches(ds.Ctx, req)
stream, err := be.FetchMatches(ds.Ctx, req)
if err != nil {
panic(err)
}
matches = resp.Matches
for {
resp, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
panic(err)
}
matches = append(matches, resp.GetMatch())
}
}
//////////////////////////////////////////////////////////////////////////////
@ -128,9 +131,13 @@ func run(ds *components.DemoShared) {
}
req := &pb.AssignTicketsRequest{
TicketIds: ids,
Assignment: &pb.Assignment{
Connection: fmt.Sprintf("%d.%d.%d.%d:2222", rand.Intn(256), rand.Intn(256), rand.Intn(256), rand.Intn(256)),
Assignments: []*pb.AssignmentGroup{
{
TicketIds: ids,
Assignment: &pb.Assignment{
Connection: fmt.Sprintf("%d.%d.%d.%d:2222", rand.Intn(256), rand.Intn(256), rand.Intn(256), rand.Intn(256)),
},
},
},
}

View File

@ -12,44 +12,28 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package main
// Package demo contains the core startup code for running a demo.
package demo
import (
"bytes"
"context"
"encoding/json"
"fmt"
"github.com/sirupsen/logrus"
"golang.org/x/net/websocket"
"log"
"net/http"
"golang.org/x/net/websocket"
"open-match.dev/open-match/examples/demo/bytesub"
"open-match.dev/open-match/examples/demo/components"
"open-match.dev/open-match/examples/demo/components/clients"
"open-match.dev/open-match/examples/demo/components/director"
"open-match.dev/open-match/examples/demo/components/uptime"
"open-match.dev/open-match/examples/demo/updater"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/logging"
"open-match.dev/open-match/internal/telemetry"
)
var (
logger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": "examples.demo",
})
)
func main() {
cfg, err := config.Read()
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatalf("cannot read configuration.")
}
logging.ConfigureLogging(cfg)
logger.Info("Initializing Server")
// Run starts the provided components, and hosts a webserver for observing the
// output of those components.
func Run(comps map[string]func(*components.DemoShared)) {
log.Print("Initializing Server")
fileServe := http.FileServer(http.Dir("/app/static"))
http.Handle("/static/", http.StripPrefix("/static/", fileServe))
@ -78,25 +62,16 @@ func main() {
bs.Subscribe(ws.Request().Context(), ws)
}))
logger.Info("Starting Server")
log.Print("Starting Server")
go startComponents(cfg, u)
address := fmt.Sprintf(":%d", cfg.GetInt("api.demo.httpport"))
err = http.ListenAndServe(address, nil)
logger.WithError(err).Warning("HTTP server closed.")
}
func startComponents(cfg config.View, u *updater.Updater) {
for name, f := range map[string]func(*components.DemoShared){
"uptime": uptime.Run,
"clients": clients.Run,
"director": director.Run,
} {
for name, f := range comps {
go f(&components.DemoShared{
Ctx: context.Background(),
Cfg: cfg,
Update: u.ForField(name),
})
}
address := fmt.Sprintf(":%d", 51507)
err := http.ListenAndServe(address, nil)
log.Printf("HTTP server closed: %s", err.Error())
}
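Since `main()` no longer lives in this file, a caller now wires the components in itself. Below is a hedged sketch of such a caller; the `open-match.dev/open-match/examples/demo` import path is assumed from the directory layout, while the component map mirrors the one the removed `startComponents` used.

```go
package main

import (
	"open-match.dev/open-match/examples/demo" // assumed import path for the refactored Run
	"open-match.dev/open-match/examples/demo/components"
	"open-match.dev/open-match/examples/demo/components/clients"
	"open-match.dev/open-match/examples/demo/components/director"
	"open-match.dev/open-match/examples/demo/components/uptime"
)

func main() {
	demo.Run(map[string]func(*components.DemoShared){
		"uptime":   uptime.Run,
		"clients":  clients.Run,
		"director": director.Run,
	})
}
```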

View File

@ -13,7 +13,11 @@
// limitations under the License.
window.onload = function() {
const ws = new WebSocket("ws://" + window.location.host + "/connect");
let protocol = "ws://";
if (window.location.protocol == "https:") {
protocol = "wss://";
}
const ws = new WebSocket(protocol + window.location.host + "/connect");
ws.onopen = function (event) {
return false;

View File

@ -37,7 +37,7 @@ type Updater struct {
type SetFunc func(v interface{})
// New creates an Updater. Set is called when fields update, using the json
// sererialized value of Updater's tree. All updates after ctx is canceled are
// serialized value of Updater's tree. All updates after ctx is canceled are
// ignored.
func New(ctx context.Context, set func([]byte)) *Updater {
f := func(v interface{}) {

View File

@ -1,54 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package evaluate
import (
"open-match.dev/open-match/examples"
harness "open-match.dev/open-match/pkg/harness/evaluator/golang"
"open-match.dev/open-match/pkg/pb"
)
// Evaluate is where your custom evaluation logic lives.
// This sample evaluator sorts and deduplicates the input matches.
func Evaluate(p *harness.EvaluatorParams) ([]*pb.Match, error) {
scoreInDescendingOrder := func(a, b *pb.Match) bool {
return a.GetProperties().GetFields()[examples.MatchScore].GetNumberValue() > b.GetProperties().GetFields()[examples.MatchScore].GetNumberValue()
}
by(scoreInDescendingOrder).Sort(p.Matches)
results := []*pb.Match{}
dedup := map[string]bool{}
for _, match := range p.Matches {
if isNonCollidingMatch(match, dedup) {
for _, ticket := range match.GetTickets() {
dedup[ticket.GetId()] = true
}
results = append(results, match)
}
}
return results, nil
}
func isNonCollidingMatch(match *pb.Match, validTickets map[string]bool) bool {
for _, ticket := range match.GetTickets() {
id := ticket.GetId()
if _, ok := validTickets[id]; ok {
return false
}
}
return true
}

View File

@ -1,105 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package evaluate
import (
"testing"
"github.com/stretchr/testify/assert"
"open-match.dev/open-match/examples"
harness "open-match.dev/open-match/pkg/harness/evaluator/golang"
"open-match.dev/open-match/pkg/pb"
"open-match.dev/open-match/pkg/structs"
)
func TestEvaluate(t *testing.T) {
ticket1 := &pb.Ticket{Id: "1"}
ticket2 := &pb.Ticket{Id: "2"}
ticket3 := &pb.Ticket{Id: "3"}
ticket12Score1 := &pb.Match{
Tickets: []*pb.Ticket{ticket1, ticket2},
Properties: structs.Struct{
examples.MatchScore: structs.Number(1),
}.S(),
}
ticket12Score10 := &pb.Match{
Tickets: []*pb.Ticket{ticket2, ticket1},
Properties: structs.Struct{
examples.MatchScore: structs.Number(10),
}.S(),
}
ticket123Score5 := &pb.Match{
Tickets: []*pb.Ticket{ticket1, ticket2, ticket3},
Properties: structs.Struct{
examples.MatchScore: structs.Number(5),
}.S(),
}
ticket3Score50 := &pb.Match{
Tickets: []*pb.Ticket{ticket3},
Properties: structs.Struct{
examples.MatchScore: structs.Number(50),
}.S(),
}
tests := []struct {
description string
testMatches []*pb.Match
wantMatches []*pb.Match
}{
{
description: "test empty request returns empty response",
testMatches: []*pb.Match{},
wantMatches: []*pb.Match{},
},
{
description: "test input matches output when receiving one match",
testMatches: []*pb.Match{ticket12Score1},
wantMatches: []*pb.Match{ticket12Score1},
},
{
description: "test deduplicates and expect the one with higher score",
testMatches: []*pb.Match{ticket12Score1, ticket12Score10},
wantMatches: []*pb.Match{ticket12Score10},
},
{
description: "test first returns matches with higher score",
testMatches: []*pb.Match{ticket123Score5, ticket12Score10},
wantMatches: []*pb.Match{ticket12Score10},
},
{
description: "test evaluator returns two matches with the highest score",
testMatches: []*pb.Match{ticket12Score1, ticket12Score10, ticket123Score5, ticket3Score50},
wantMatches: []*pb.Match{ticket12Score10, ticket3Score50},
},
}
for _, test := range tests {
test := test
t.Run(test.description, func(t *testing.T) {
t.Parallel()
gotMatches, err := Evaluate(&harness.EvaluatorParams{Matches: test.testMatches})
assert.Nil(t, err)
assert.Equal(t, len(test.wantMatches), len(gotMatches))
for _, match := range gotMatches {
assert.Contains(t, test.wantMatches, match)
}
})
}
}

View File

@ -1,53 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package evaluate
import (
"sort"
"open-match.dev/open-match/pkg/pb"
)
// by is the type of a "less" function that defines the ordering of its Match arguments.
type by func(p1, p2 *pb.Match) bool
// matchSorter joins a By function and a slice of Matches to be sorted.
type matchSorter struct {
matches []*pb.Match
by func(a, b *pb.Match) bool // Closure used in the Less method.
}
// Sort is a method on the function type, By, that sorts the argument slice according to the function.
func (by by) Sort(matches []*pb.Match) {
sort.Sort(&matchSorter{
matches: matches,
by: by, // The Sort method's receiver is the function (closure) that defines the sort order.
})
}
// Len is part of sort.Interface.
func (s *matchSorter) Len() int {
return len(s.matches)
}
// Swap is part of sort.Interface.
func (s *matchSorter) Swap(i, j int) {
s.matches[i], s.matches[j] = s.matches[j], s.matches[i]
}
// Less is part of sort.Interface. It is implemented by calling the "by" closure in the sorter.
func (s *matchSorter) Less(i, j int) bool {
return s.by(s.matches[i], s.matches[j])
}

View File

@ -1,4 +1,4 @@
# Copyright 2019 Google LLC
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -14,11 +14,11 @@
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/examples/functions/golang/pool
WORKDIR /go/src/open-match.dev/open-match/examples/functions/golang/backfill
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o matchfunction .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/examples/functions/golang/pool/matchfunction /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/examples/functions/golang/backfill/matchfunction /app/
ENTRYPOINT ["/app/matchfunction"]
ENTRYPOINT ["/app/matchfunction"]

View File

@ -1,4 +1,4 @@
// Copyright 2019 Google LLC
// Copyright 2020 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main a sample match function that uses the GRPC harness to set up
// Package main defines a sample match function that uses the GRPC harness to set up
// the match making function as a service. This sample is a reference
// to demonstrate the usage of the GRPC harness and should only be used as
// a starting point for your match function. You will need to modify the
@ -20,16 +20,14 @@
package main
import (
pool "open-match.dev/open-match/examples/functions/golang/pool/mmf"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
"open-match.dev/open-match/examples/functions/golang/backfill/mmf"
)
const (
queryServiceAddr = "open-match-query.open-match.svc.cluster.local:50503" // Address of the QueryService endpoint.
serverPort = 50502 // The port for hosting the Match Function.
)
func main() {
// Invoke the harness to setup a GRPC service that handles requests to run the
// match function. The harness itself queries open match for player pools for
// the specified request and passes the pools to the match function to generate
// proposals.
mmfHarness.RunMatchFunction(&mmfHarness.FunctionSettings{
Func: pool.MakeMatches,
})
mmf.Start(queryServiceAddr, serverPort)
}

View File

@ -0,0 +1,297 @@
// Copyright 2020 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package mmf provides a sample match function that uses the GRPC harness to set up 1v1 matches.
// This sample is a reference to demonstrate the usage of backfill and should only be used as
// a starting point for your match function. You will need to modify the
// matchmaking logic in this function based on your game's requirements.
package mmf
import (
"fmt"
"time"
"log"
"google.golang.org/grpc"
"google.golang.org/protobuf/types/known/anypb"
"google.golang.org/protobuf/types/known/timestamppb"
"google.golang.org/protobuf/types/known/wrapperspb"
"open-match.dev/open-match/pkg/matchfunction"
"open-match.dev/open-match/pkg/pb"
)
const (
playersPerMatch = 2
openSlotsKey = "open-slots"
matchName = "backfill-matchfunction"
)
// matchFunctionService implements pb.MatchFunctionServer, the server generated
// by compiling the protobuf, by fulfilling the pb.MatchFunctionServer interface.
type matchFunctionService struct {
grpc *grpc.Server
queryServiceClient pb.QueryServiceClient
port int
}
func (s *matchFunctionService) Run(req *pb.RunRequest, stream pb.MatchFunction_RunServer) error {
log.Printf("Generating proposals for function %v", req.GetProfile().GetName())
var proposals []*pb.Match
profile := req.GetProfile()
pools := profile.GetPools()
for _, p := range pools {
tickets, err := matchfunction.QueryPool(stream.Context(), s.queryServiceClient, p)
if err != nil {
log.Printf("Failed to query tickets for the given pool, got %s", err.Error())
return err
}
backfills, err := matchfunction.QueryBackfillPool(stream.Context(), s.queryServiceClient, p)
if err != nil {
log.Printf("Failed to query backfills for the given pool, got %s", err.Error())
return err
}
matches, err := makeMatches(profile, p, tickets, backfills)
if err != nil {
log.Printf("Failed to generate matches, got %s", err.Error())
return err
}
proposals = append(proposals, matches...)
}
log.Printf("Streaming %v proposals to Open Match", len(proposals))
// Stream the generated proposals back to Open Match.
for _, proposal := range proposals {
if err := stream.Send(&pb.RunResponse{Proposal: proposal}); err != nil {
log.Printf("Failed to stream proposals to Open Match, got %s", err.Error())
return err
}
}
return nil
}
// makeMatches handles backfills first, then makes full matches, and finally makes a match with a backfill
// if any tickets are left
func makeMatches(profile *pb.MatchProfile, pool *pb.Pool, tickets []*pb.Ticket, backfills []*pb.Backfill) ([]*pb.Match, error) {
var matches []*pb.Match
newMatches, remainingTickets, err := handleBackfills(profile, tickets, backfills, len(matches))
if err != nil {
return nil, err
}
matches = append(matches, newMatches...)
newMatches, remainingTickets = makeFullMatches(profile, remainingTickets, len(matches))
matches = append(matches, newMatches...)
if len(remainingTickets) > 0 {
match, err := makeMatchWithBackfill(profile, pool, remainingTickets, len(matches))
if err != nil {
return nil, err
}
matches = append(matches, match)
}
return matches, nil
}
// handleBackfills looks at each backfill's openSlots, which is the number of tickets still required,
// acquires that many tickets, decreases openSlots in the backfill, and makes a match with the updated backfill and the associated tickets.
func handleBackfills(profile *pb.MatchProfile, tickets []*pb.Ticket, backfills []*pb.Backfill, lastMatchId int) ([]*pb.Match, []*pb.Ticket, error) {
matchId := lastMatchId
var matches []*pb.Match
for _, b := range backfills {
openSlots, err := getOpenSlots(b)
if err != nil {
return nil, tickets, err
}
var matchTickets []*pb.Ticket
for openSlots > 0 && len(tickets) > 0 {
matchTickets = append(matchTickets, tickets[0])
tickets = tickets[1:]
openSlots--
}
if len(matchTickets) > 0 {
err := setOpenSlots(b, openSlots)
if err != nil {
return nil, tickets, err
}
matchId++
match := newMatch(matchId, profile.Name, matchTickets, b)
matches = append(matches, &match)
}
}
return matches, tickets, nil
}
// makeMatchWithBackfill makes a partially filled match and creates a backfill for it with openSlots = playersPerMatch-len(tickets).
func makeMatchWithBackfill(profile *pb.MatchProfile, pool *pb.Pool, tickets []*pb.Ticket, lastMatchId int) (*pb.Match, error) {
if len(tickets) == 0 {
return nil, fmt.Errorf("tickets are required")
}
if len(tickets) >= playersPerMatch {
return nil, fmt.Errorf("too many tickets")
}
matchId := lastMatchId
searchFields := newSearchFields(pool)
backfill, err := newBackfill(searchFields, playersPerMatch-len(tickets))
if err != nil {
return nil, err
}
matchId++
match := newMatch(matchId, profile.Name, tickets, backfill)
// indicates that this is a new match and a new game server should be allocated for it
match.AllocateGameserver = true
return &match, nil
}
// makeFullMatches makes matches without backfill
func makeFullMatches(profile *pb.MatchProfile, tickets []*pb.Ticket, lastMatchId int) ([]*pb.Match, []*pb.Ticket) {
ticketNum := 0
matchId := lastMatchId
var matches []*pb.Match
for ticketNum < playersPerMatch && len(tickets) >= playersPerMatch {
ticketNum++
if ticketNum == playersPerMatch {
matchId++
match := newMatch(matchId, profile.Name, tickets[:playersPerMatch], nil)
matches = append(matches, &match)
tickets = tickets[playersPerMatch:]
ticketNum = 0
}
}
return matches, tickets
}
// newSearchFields creates search fields based on the pool's search criteria. This is just an example of how it can be done.
func newSearchFields(pool *pb.Pool) *pb.SearchFields {
searchFields := pb.SearchFields{}
rangeFilters := pool.GetDoubleRangeFilters()
if rangeFilters != nil {
doubleArgs := make(map[string]float64)
for _, f := range rangeFilters {
doubleArgs[f.DoubleArg] = (f.Max - f.Min) / 2
}
if len(doubleArgs) > 0 {
searchFields.DoubleArgs = doubleArgs
}
}
stringFilters := pool.GetStringEqualsFilters()
if stringFilters != nil {
stringArgs := make(map[string]string)
for _, f := range stringFilters {
stringArgs[f.StringArg] = f.Value
}
if len(stringArgs) > 0 {
searchFields.StringArgs = stringArgs
}
}
tagFilters := pool.GetTagPresentFilters()
if tagFilters != nil {
tags := make([]string, len(tagFilters))
for _, f := range tagFilters {
tags = append(tags, f.Tag)
}
if len(tags) > 0 {
searchFields.Tags = tags
}
}
return &searchFields
}
func newBackfill(searchFields *pb.SearchFields, openSlots int) (*pb.Backfill, error) {
b := pb.Backfill{
SearchFields: searchFields,
Generation: 0,
CreateTime: timestamppb.Now(),
}
err := setOpenSlots(&b, int32(openSlots))
return &b, err
}
func newMatch(num int, profile string, tickets []*pb.Ticket, b *pb.Backfill) pb.Match {
t := time.Now().Format("2006-01-02T15:04:05.00")
return pb.Match{
MatchId: fmt.Sprintf("profile-%s-time-%s-num-%d", profile, t, num),
MatchProfile: profile,
MatchFunction: matchName,
Tickets: tickets,
Backfill: b,
}
}
func setOpenSlots(b *pb.Backfill, val int32) error {
if b.Extensions == nil {
b.Extensions = make(map[string]*anypb.Any)
}
any, err := anypb.New(&wrapperspb.Int32Value{Value: val})
if err != nil {
return err
}
b.Extensions[openSlotsKey] = any
return nil
}
func getOpenSlots(b *pb.Backfill) (int32, error) {
if b == nil {
return 0, fmt.Errorf("expected backfill is not nil")
}
if b.Extensions != nil {
if any, ok := b.Extensions[openSlotsKey]; ok {
var val wrapperspb.Int32Value
err := any.UnmarshalTo(&val)
if err != nil {
return 0, err
}
return val.Value, nil
}
}
return playersPerMatch, nil
}

View File

@ -0,0 +1,141 @@
// Copyright 2020 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mmf
import (
"testing"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/types/known/anypb"
"google.golang.org/protobuf/types/known/wrapperspb"
"open-match.dev/open-match/pkg/pb"
)
func TestHandleBackfills(t *testing.T) {
for _, tc := range []struct {
name string
tickets []*pb.Ticket
backfills []*pb.Backfill
lastMatchId int
expectedMatchLen int
expectedTicketLen int
expectedOpenSlots int32
expectedErr bool
}{
{name: "returns no matches when no backfills specified", expectedMatchLen: 0, expectedTicketLen: 0},
{name: "returns no matches when no tickets specified", expectedMatchLen: 0, expectedTicketLen: 0},
{name: "returns a match with open slots decreased", tickets: []*pb.Ticket{{Id: "1"}}, backfills: []*pb.Backfill{withOpenSlots(1)}, expectedMatchLen: 1, expectedTicketLen: 0, expectedOpenSlots: playersPerMatch - 2},
} {
testCase := tc
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
profile := pb.MatchProfile{Name: "matchProfile"}
matches, tickets, err := handleBackfills(&profile, testCase.tickets, testCase.backfills, testCase.lastMatchId)
require.Equal(t, testCase.expectedErr, err != nil)
require.Equal(t, testCase.expectedTicketLen, len(tickets))
if err != nil {
require.Equal(t, 0, len(matches))
} else {
for _, m := range matches {
require.NotNil(t, m.Backfill)
openSlots, err := getOpenSlots(m.Backfill)
require.NoError(t, err)
require.Equal(t, testCase.expectedOpenSlots, openSlots)
}
}
})
}
}
func TestMakeMatchWithBackfill(t *testing.T) {
for _, testCase := range []struct {
name string
tickets []*pb.Ticket
lastMatchId int
expectedOpenSlots int32
expectedErr bool
}{
{name: "returns an error when length of tickets is greater then playerPerMatch", tickets: []*pb.Ticket{{Id: "1"}, {Id: "2"}, {Id: "3"}, {Id: "4"}, {Id: "5"}}, expectedErr: true},
{name: "returns an error when length of tickets is equal to playerPerMatch", tickets: []*pb.Ticket{{Id: "1"}, {Id: "2"}, {Id: "3"}, {Id: "4"}}, expectedErr: true},
{name: "returns an error when no tickets are provided", expectedErr: true},
{name: "returns a match with backfill", tickets: []*pb.Ticket{{Id: "1"}}, expectedOpenSlots: playersPerMatch - 1},
} {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
pool := pb.Pool{}
profile := pb.MatchProfile{Name: "matchProfile"}
match, err := makeMatchWithBackfill(&profile, &pool, testCase.tickets, testCase.lastMatchId)
require.Equal(t, testCase.expectedErr, err != nil)
if err == nil {
require.NotNil(t, match)
require.NotNil(t, match.Backfill)
require.True(t, match.AllocateGameserver)
require.Equal(t, "", match.Backfill.Id)
openSlots, err := getOpenSlots(match.Backfill)
require.Nil(t, err)
require.Equal(t, testCase.expectedOpenSlots, openSlots)
}
})
}
}
func TestMakeFullMatches(t *testing.T) {
for _, testCase := range []struct {
name string
tickets []*pb.Ticket
lastMatchId int
expectedMatchLen int
expectedTicketLen int
}{
{name: "returns no matches when there are no tickets", tickets: []*pb.Ticket{}, expectedMatchLen: 0, expectedTicketLen: 0},
{name: "returns no matches when length of tickets is less then playersPerMatch", tickets: []*pb.Ticket{{Id: "1"}}, expectedMatchLen: 0, expectedTicketLen: 1},
{name: "returns a match when length of tickets is greater then playersPerMatch", tickets: []*pb.Ticket{{Id: "1"}, {Id: "2"}}, expectedMatchLen: 1, expectedTicketLen: 0},
} {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
profile := pb.MatchProfile{Name: "matchProfile"}
matches, tickets := makeFullMatches(&profile, testCase.tickets, testCase.lastMatchId)
require.Equal(t, testCase.expectedMatchLen, len(matches))
require.Equal(t, testCase.expectedTicketLen, len(tickets))
for _, m := range matches {
require.Nil(t, m.Backfill)
require.Equal(t, playersPerMatch, len(m.Tickets))
}
})
}
}
func withOpenSlots(openSlots int) *pb.Backfill {
val, err := anypb.New(&wrapperspb.Int32Value{Value: int32(openSlots)})
if err != nil {
panic(err)
}
return &pb.Backfill{
Extensions: map[string]*anypb.Any{
openSlotsKey: val,
},
}
}

View File

@ -0,0 +1,59 @@
// Copyright 2020 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package mmf provides a sample match function that uses the GRPC harness to set up 1v1 matches.
// This sample is a reference to demonstrate the usage of backfill and should only be used as
// a starting point for your match function. You will need to modify the
// matchmaking logic in this function based on your game's requirements.
package mmf
import (
"fmt"
"log"
"net"
"google.golang.org/grpc"
"open-match.dev/open-match/pkg/pb"
)
func Start(queryServiceAddr string, serverPort int) {
// Connect to QueryService.
conn, err := grpc.Dial(queryServiceAddr, grpc.WithInsecure())
if err != nil {
log.Fatalf("Failed to connect to Open Match, got %s", err.Error())
}
defer conn.Close()
mmfService := matchFunctionService{
queryServiceClient: pb.NewQueryServiceClient(conn),
}
// Create and host a new gRPC service on the configured port.
server := grpc.NewServer()
pb.RegisterMatchFunctionServer(server, &mmfService)
ln, err := net.Listen("tcp", fmt.Sprintf(":%d", serverPort))
if err != nil {
log.Fatalf("TCP net listener initialization failed for port %v, got %s", serverPort, err.Error())
}
log.Printf("TCP net listener initialized for port %v", serverPort)
err = server.Serve(ln)
if err != nil {
log.Fatalf("gRPC serve failed, got %s", err.Error())
}
}

View File

@ -1,74 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package mmf provides a sample match function that uses the GRPC harness to set up
// the match making function as a service. This sample is a reference
// to demonstrate the usage of the GRPC harness and should only be used as
// a starting point for your match function. You will need to modify the
// matchmaking logic in this function based on your game's requirements.
package mmf
import (
"github.com/rs/xid"
"open-match.dev/open-match/examples"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
"open-match.dev/open-match/pkg/pb"
"open-match.dev/open-match/pkg/structs"
)
var (
matchName = "pool-based-match"
)
// MakeMatches is where your custom matchmaking logic lives.
// This is the core match making function that will be triggered by Open Match to generate matches.
// The goal of this function is to generate predictable matches that can be validated without flakiness.
// This match function loops through all the pools and generates one match per pool aggregating all players
// in that pool in the generated match.
func MakeMatches(params *mmfHarness.MatchFunctionParams) ([]*pb.Match, error) {
var result []*pb.Match
for pool, tickets := range params.PoolNameToTickets {
if len(tickets) != 0 {
roster := &pb.Roster{Name: pool}
for _, ticket := range tickets {
roster.TicketIds = append(roster.GetTicketIds(), ticket.GetId())
}
result = append(result, &pb.Match{
MatchId: xid.New().String(),
MatchProfile: params.ProfileName,
MatchFunction: matchName,
Tickets: tickets,
Rosters: []*pb.Roster{roster},
Properties: structs.Struct{
examples.MatchScore: structs.Number(scoreCalculator(tickets)),
}.S(),
})
}
}
return result, nil
}
// This match function defines the quality of a match as the sum of the attribute values of all tickets per match
func scoreCalculator(tickets []*pb.Ticket) float64 {
matchScore := 0.0
for _, ticket := range tickets {
for _, v := range ticket.GetProperties().GetFields() {
matchScore += v.GetNumberValue()
}
}
return matchScore
}

View File

@ -1,109 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mmf
import (
"testing"
"open-match.dev/open-match/examples"
"open-match.dev/open-match/pkg/pb"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
"open-match.dev/open-match/pkg/structs"
)
func TestMakeMatches(t *testing.T) {
assert := assert.New(t)
tickets := []*pb.Ticket{
{
Id: "1",
Properties: structs.Struct{
"level": structs.Number(10),
"defense": structs.Number(100),
}.S(),
},
{
Id: "2",
Properties: structs.Struct{
"level": structs.Number(10),
"attack": structs.Number(50),
}.S(),
},
{
Id: "3",
Properties: structs.Struct{
"level": structs.Number(10),
"speed": structs.Number(522),
}.S(),
}, {
Id: "4",
Properties: structs.Struct{
"level": structs.Number(10),
"mana": structs.Number(1),
}.S(),
},
}
poolNameToTickets := map[string][]*pb.Ticket{
"pool1": tickets[:2],
"pool2": tickets[2:],
}
p := &mmfHarness.MatchFunctionParams{
Logger: &logrus.Entry{},
ProfileName: "test-profile",
Rosters: []*pb.Roster{},
PoolNameToTickets: poolNameToTickets,
}
matches, err := MakeMatches(p)
assert.Nil(err)
assert.Equal(len(matches), 2)
actual := []*pb.Match{}
for _, match := range matches {
actual = append(actual, &pb.Match{
MatchProfile: match.MatchProfile,
MatchFunction: match.MatchFunction,
Tickets: match.Tickets,
Rosters: match.Rosters,
Properties: match.Properties,
})
}
matchGen := func(poolName string, tickets []*pb.Ticket) *pb.Match {
tids := []string{}
for _, ticket := range tickets {
tids = append(tids, ticket.GetId())
}
return &pb.Match{
MatchProfile: p.ProfileName,
MatchFunction: matchName,
Tickets: tickets,
Rosters: []*pb.Roster{{Name: poolName, TicketIds: tids}},
Properties: structs.Struct{
examples.MatchScore: structs.Number(scoreCalculator(tickets)),
}.S(),
}
}
for poolName, tickets := range poolNameToTickets {
assert.Contains(actual, matchGen(poolName, tickets))
}
}

View File

@ -20,16 +20,14 @@
package main
import (
soloduel "open-match.dev/open-match/examples/functions/golang/soloduel/mmf"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
"open-match.dev/open-match/examples/functions/golang/soloduel/mmf"
)
const (
queryServiceAddr = "open-match-query.open-match.svc.cluster.local:50503" // Address of the QueryService endpoint.
serverPort = 50502 // The port for hosting the Match Function.
)
func main() {
// Invoke the harness to setup a GRPC service that handles requests to run the
// match function. The harness itself queries open match for player pools for
// the specified request and passes the pools to the match function to generate
// proposals.
mmfHarness.RunMatchFunction(&mmfHarness.FunctionSettings{
Func: soloduel.MakeMatches,
})
mmf.Start(queryServiceAddr, serverPort)
}

View File

@ -20,9 +20,11 @@ package mmf
import (
"fmt"
"log"
"time"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
"google.golang.org/grpc"
"open-match.dev/open-match/pkg/matchfunction"
"open-match.dev/open-match/pkg/pb"
)
@ -30,15 +32,17 @@ var (
matchName = "a-simple-1v1-matchfunction"
)
// MakeMatches is where your custom matchmaking logic lives.
func MakeMatches(p *mmfHarness.MatchFunctionParams) ([]*pb.Match, error) {
// This simple match function does the following things
// 1. Deduplicates the tickets from the pools into a single list.
// 2. Groups players into 1v1 matches.
// matchFunctionService implements pb.MatchFunctionServer, the gRPC server
// interface generated from the protobuf definition.
type matchFunctionService struct {
grpc *grpc.Server
queryServiceClient pb.QueryServiceClient
port int
}
func makeMatches(poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
tickets := map[string]*pb.Ticket{}
for _, pool := range p.PoolNameToTickets {
for _, pool := range poolTickets {
for _, ticket := range pool {
tickets[ticket.GetId()] = ticket
}
@ -56,8 +60,8 @@ func MakeMatches(p *mmfHarness.MatchFunctionParams) ([]*pb.Match, error) {
if len(thisMatch) >= 2 {
matches = append(matches, &pb.Match{
MatchId: fmt.Sprintf("profile-%s-time-%s-num-%d", p.ProfileName, t, matchNum),
MatchProfile: p.ProfileName,
MatchId: fmt.Sprintf("profile-%s-time-%s-num-%d", matchName, t, matchNum),
MatchProfile: matchName,
MatchFunction: matchName,
Tickets: thisMatch,
})
@ -69,3 +73,33 @@ func MakeMatches(p *mmfHarness.MatchFunctionParams) ([]*pb.Match, error) {
return matches, nil
}
// Run is this match function's implementation of the gRPC call defined in api/matchfunction.proto.
func (s *matchFunctionService) Run(req *pb.RunRequest, stream pb.MatchFunction_RunServer) error {
// Fetch tickets for the pools specified in the Match Profile.
log.Printf("Generating proposals for function %v", req.GetProfile().GetName())
poolTickets, err := matchfunction.QueryPools(stream.Context(), s.queryServiceClient, req.GetProfile().GetPools())
if err != nil {
log.Printf("Failed to query tickets for the given pools, got %s", err.Error())
return err
}
// Generate proposals.
proposals, err := makeMatches(poolTickets)
if err != nil {
log.Printf("Failed to generate matches, got %s", err.Error())
return err
}
log.Printf("Streaming %v proposals to Open Match", len(proposals))
// Stream the generated proposals back to Open Match.
for _, proposal := range proposals {
if err := stream.Send(&pb.RunResponse{Proposal: proposal}); err != nil {
log.Printf("Failed to stream proposals to Open Match, got %s", err.Error())
return err
}
}
return nil
}
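
For context, this Run method is not called directly by game services; a director asks the Open Match backend to execute it via FetchMatches. The following is a minimal, hypothetical director sketch, not part of this change; the backend address, profile name, and pool are illustrative assumptions.

// Hypothetical director sketch: it asks the Open Match backend to run the match
// function above and logs the proposals streamed back.
package main

import (
	"context"
	"io"
	"log"

	"google.golang.org/grpc"
	"open-match.dev/open-match/pkg/pb"
)

func main() {
	// Assumed address of the Open Match backend service; adjust for your deployment.
	conn, err := grpc.Dial("open-match-backend.open-match.svc.cluster.local:50505", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("failed to connect to Open Match backend: %v", err)
	}
	defer conn.Close()
	be := pb.NewBackendServiceClient(conn)

	stream, err := be.FetchMatches(context.Background(), &pb.FetchMatchesRequest{
		Config: &pb.FunctionConfig{
			Host: "open-match-function", // where the match function above is hosted
			Port: 50502,
			Type: pb.FunctionConfig_GRPC,
		},
		// Illustrative profile: one pool with no filters, i.e. every ticket.
		Profile: &pb.MatchProfile{Name: "everyone", Pools: []*pb.Pool{{Name: "everyone"}}},
	})
	if err != nil {
		log.Fatalf("FetchMatches failed: %v", err)
	}
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("failed to receive match: %v", err)
		}
		log.Printf("received match %s with %d tickets", resp.GetMatch().GetMatchId(), len(resp.GetMatch().GetTickets()))
	}
}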

View File

@ -19,33 +19,24 @@ import (
"open-match.dev/open-match/pkg/pb"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
mmfHarness "open-match.dev/open-match/pkg/harness/function/golang"
"github.com/stretchr/testify/require"
)
func TestMakeMatchesDeduplicate(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
poolNameToTickets := map[string][]*pb.Ticket{
"pool1": {{Id: "1"}},
"pool2": {{Id: "1"}},
}
p := &mmfHarness.MatchFunctionParams{
Logger: &logrus.Entry{},
ProfileName: "test-profile",
Rosters: []*pb.Roster{},
PoolNameToTickets: poolNameToTickets,
}
matches, err := MakeMatches(p)
assert.Nil(err)
assert.Equal(len(matches), 0)
matches, err := makeMatches(poolNameToTickets)
require.Nil(err)
require.Equal(len(matches), 0)
}
func TestMakeMatches(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
poolNameToTickets := map[string][]*pb.Ticket{
"pool1": {{Id: "1"}, {Id: "2"}, {Id: "3"}},
@ -53,21 +44,12 @@ func TestMakeMatches(t *testing.T) {
"pool3": {{Id: "5"}, {Id: "6"}, {Id: "7"}},
}
p := &mmfHarness.MatchFunctionParams{
Logger: &logrus.Entry{},
ProfileName: "test-profile",
Rosters: []*pb.Roster{},
PoolNameToTickets: poolNameToTickets,
}
matches, err := MakeMatches(p)
assert.Nil(err)
assert.Equal(len(matches), 3)
matches, err := makeMatches(poolNameToTickets)
require.Nil(err)
require.Equal(len(matches), 3)
for _, match := range matches {
assert.Equal(2, len(match.Tickets))
assert.Equal(matchName, match.MatchFunction)
assert.Equal(p.ProfileName, match.MatchProfile)
assert.Nil(match.Rosters)
require.Equal(2, len(match.Tickets))
require.Equal(matchName, match.MatchFunction)
}
}

View File

@ -0,0 +1,58 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package mmf provides a sample match function that sets up 1v1 matches as a gRPC service.
// This sample is a reference to demonstrate how to host a match function and should only be used as
// a starting point for your match function. You will need to modify the
// matchmaking logic in this function based on your game's requirements.
package mmf
import (
"fmt"
"log"
"net"
"google.golang.org/grpc"
"open-match.dev/open-match/pkg/pb"
)
// Start creates and starts the Match Function server and also connects to Open
// Match's QueryService. This connection is used at runtime to fetch tickets
// for pools specified in MatchProfile.
func Start(queryServiceAddr string, serverPort int) {
// Connect to QueryService.
conn, err := grpc.Dial(queryServiceAddr, grpc.WithInsecure())
if err != nil {
log.Fatalf("Failed to connect to Open Match, got %s", err.Error())
}
defer conn.Close()
mmfService := matchFunctionService{
queryServiceClient: pb.NewQueryServiceClient(conn),
}
// Create and host a new gRPC service on the configured port.
server := grpc.NewServer()
pb.RegisterMatchFunctionServer(server, &mmfService)
ln, err := net.Listen("tcp", fmt.Sprintf(":%d", serverPort))
if err != nil {
log.Fatalf("TCP net listener initialization failed for port %v, got %s", serverPort, err.Error())
}
log.Printf("TCP net listener initialized for port %v", serverPort)
err = server.Serve(ln)
if err != nil {
log.Fatalf("gRPC serve failed, got %s", err.Error())
}
}

20
examples/scale/README.md Normal file
View File

@ -0,0 +1,20 @@
## How to use this framework
This is the framework that we use to benchmark Open Match against different matchmaking scenarios. For now (02/24/2020), this framework supports a Battle Royale, a basic 1v1, and a Team Shooter scenario. You are welcome to write your own `Scenario`, test it, and share the numbers you get with us.
1. The `Scenario` struct under the `scenarios/scenarios.go` file defines the parameters that this framework currently supports or plans to support.
2. Each subpackage `battleroyal`, `firstmatch`, and `teamshooter` implements the `GameScenario` interface defined in the `scenarios/scenarios.go` file. Feel free to write your own benchmark scenario by implementing the interface; a minimal sketch appears after the instructions below.
- Ticket `func() *pb.Ticket` - Tickets generator
- Profiles `func() []*pb.MatchProfile` - Profiles generator
- MMF `MatchFunction(p *pb.MatchProfile, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error)` - Custom matchmaking logic using a MatchProfile and a map from pool name to the tickets in that pool.
- Evaluate `Evaluate(stream pb.Evaluator_EvaluateServer) error` - Custom logic implementation of the evaluator.
Follow the instructions below if you want to use any of the existing benchmarking scenarios.
1. Open the `scenarios.go` file under the scenarios directory.
2. Change the value of the `ActiveScenario` variable to the scenario that you would like Open Match to run against.
3. Make sure you have `kubectl` connected to an existing Kubernetes cluster and run `make push-images` followed by `make install-scale-chart` to push the images and install Open Match core along with the scale components in the cluster.
4. Run `make proxy`
- Open `localhost:3000` to see the Grafana dashboards.
- Open `localhost:9090` to see the Prometheus query server.
- Open `localhost:[COMPONENT_HTTP_ENDPOINT]/help` to see how to access the zpages.
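
For illustration, here is a minimal sketch of a custom scenario. The package name and pairing logic are hypothetical and not part of the framework; the method set assumes the `GameScenario` interface shape defined in `scenarios/scenarios.go`.

// Package myscenario is a hypothetical example of a custom benchmark scenario.
package myscenario

import (
	"fmt"
	"io"
	"time"

	"open-match.dev/open-match/pkg/pb"
)

// Scenario pairs any two tickets from a single pool.
type Scenario struct{}

// Ticket returns an empty ticket; a real scenario would randomize search fields.
func (*Scenario) Ticket() *pb.Ticket { return &pb.Ticket{} }

// Backfill returns nil because this sketch does not exercise backfills.
func (*Scenario) Backfill() *pb.Backfill { return nil }

// Profiles returns one profile whose single, unfiltered pool matches every ticket.
func (*Scenario) Profiles() []*pb.MatchProfile {
	return []*pb.MatchProfile{{Name: "everyone", Pools: []*pb.Pool{{Name: "all"}}}}
}

// MatchFunction groups tickets two at a time.
func (*Scenario) MatchFunction(p *pb.MatchProfile, poolBackfills map[string][]*pb.Backfill, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
	tickets := poolTickets["all"]
	var matches []*pb.Match
	for i := 0; i+1 < len(tickets); i += 2 {
		matches = append(matches, &pb.Match{
			MatchId:       fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), len(matches)),
			Tickets:       []*pb.Ticket{tickets[i], tickets[i+1]},
			MatchProfile:  p.GetName(),
			MatchFunction: "myscenario",
		})
	}
	return matches, nil
}

// Evaluate naively accepts every proposal; see the firstmatch scenario for an
// evaluator that rejects proposals reusing an already-claimed ticket.
func (*Scenario) Evaluate(stream pb.Evaluator_EvaluateServer) error {
	for {
		req, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		if err := stream.Send(&pb.EvaluateResponse{MatchId: req.GetMatch().GetMatchId()}); err != nil {
			return err
		}
	}
}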

View File

@ -0,0 +1,251 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package backend
import (
"context"
"fmt"
"io"
"math/rand"
"sync"
"time"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"open-match.dev/open-match/examples/scale/scenarios"
"open-match.dev/open-match/internal/appmain"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/rpc"
"open-match.dev/open-match/internal/telemetry"
"open-match.dev/open-match/pkg/pb"
)
var (
logger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": "scale.backend",
})
activeScenario = scenarios.ActiveScenario
mIterations = telemetry.Counter("scale_backend_iterations", "fetch match iterations")
mFetchMatchCalls = telemetry.Counter("scale_backend_fetch_match_calls", "fetch match calls")
mFetchMatchSuccesses = telemetry.Counter("scale_backend_fetch_match_successes", "fetch match successes")
mFetchMatchErrors = telemetry.Counter("scale_backend_fetch_match_errors", "fetch match errors")
mMatchesReturned = telemetry.Counter("scale_backend_matches_returned", "matches returned")
mSumTicketsReturned = telemetry.Counter("scale_backend_sum_tickets_returned", "tickets in matches returned")
mMatchesAssigned = telemetry.Counter("scale_backend_matches_assigned", "matches assigned")
mMatchAssignsFailed = telemetry.Counter("scale_backend_match_assigns_failed", "match assigns failed")
mBackfillsDeleted = telemetry.Counter("scale_backend_backfills_deleted", "backfills deleted")
mBackfillDeletesFailed = telemetry.Counter("scale_backend_backfill_deletes_failed", "backfill deletes failed")
)
// BindService starts the scale backend, which continuously fetches, assigns and
// deletes matches.
func BindService(p *appmain.Params, b *appmain.Bindings) error {
go run(p.Config())
return nil
}
func run(cfg config.View) {
beConn, err := rpc.GRPCClientFromConfig(cfg, "api.backend")
if err != nil {
logger.Fatalf("failed to connect to Open Match Backend, got %v", err)
}
defer beConn.Close()
be := pb.NewBackendServiceClient(beConn)
feConn, err := rpc.GRPCClientFromConfig(cfg, "api.frontend")
if err != nil {
logger.Fatalf("failed to connect to Open Match Frontend, got %v", err)
}
defer feConn.Close()
fe := pb.NewFrontendServiceClient(feConn)
w := logger.Writer()
defer w.Close()
matchesToAssign := make(chan *pb.Match, 30000)
if activeScenario.BackendAssignsTickets {
for i := 0; i < 100; i++ {
go runAssignments(be, matchesToAssign)
}
}
backfillsToDelete := make(chan *pb.Backfill, 30000)
if activeScenario.BackendDeletesBackfills {
for i := 0; i < 100; i++ {
go runDeleteBackfills(fe, backfillsToDelete)
}
}
matchesToAcknowledge := make(chan *pb.Match, 30000)
if activeScenario.BackendAcknowledgesBackfills {
for i := 0; i < 100; i++ {
go runAcknowledgeBackfills(fe, matchesToAcknowledge, backfillsToDelete)
}
}
// Don't go faster than this, as it likely means that FetchMatches is throwing
// errors, and will continue doing so if queried very quickly.
for range time.Tick(time.Millisecond * 250) {
// Keep pulling matches from Open Match backend
profiles := activeScenario.Profiles()
var wg sync.WaitGroup
for _, p := range profiles {
wg.Add(1)
go func(wg *sync.WaitGroup, p *pb.MatchProfile) {
defer wg.Done()
runFetchMatches(be, p, matchesToAssign, matchesToAcknowledge)
}(&wg, p)
}
// Wait for all profiles to complete before proceeding.
wg.Wait()
telemetry.RecordUnitMeasurement(context.Background(), mIterations)
}
}
func runFetchMatches(be pb.BackendServiceClient, p *pb.MatchProfile, matchesToAssign chan<- *pb.Match, matchesToAcknowledge chan<- *pb.Match) {
ctx, span := trace.StartSpan(context.Background(), "scale.backend/FetchMatches")
defer span.End()
req := &pb.FetchMatchesRequest{
Config: &pb.FunctionConfig{
Host: "open-match-function",
Port: 50502,
Type: pb.FunctionConfig_GRPC,
},
Profile: p,
}
telemetry.RecordUnitMeasurement(ctx, mFetchMatchCalls)
stream, err := be.FetchMatches(ctx, req)
if err != nil {
telemetry.RecordUnitMeasurement(ctx, mFetchMatchErrors)
logger.WithError(err).Error("failed to get available stream client")
return
}
for {
// Pull the Match
resp, err := stream.Recv()
if err == io.EOF {
telemetry.RecordUnitMeasurement(ctx, mFetchMatchSuccesses)
return
}
if err != nil {
telemetry.RecordUnitMeasurement(ctx, mFetchMatchErrors)
logger.WithError(err).Error("failed to get matches from stream client")
return
}
telemetry.RecordNUnitMeasurement(ctx, mSumTicketsReturned, int64(len(resp.GetMatch().Tickets)))
telemetry.RecordUnitMeasurement(ctx, mMatchesReturned)
if activeScenario.BackendAssignsTickets {
matchesToAssign <- resp.GetMatch()
}
if activeScenario.BackendAcknowledgesBackfills {
matchesToAcknowledge <- resp.GetMatch()
}
}
}
func runDeleteBackfills(fe pb.FrontendServiceClient, backfillsToDelete <-chan *pb.Backfill) {
for b := range backfillsToDelete {
if !activeScenario.BackfillDeleteCond(b) {
continue
}
ctx := context.Background()
_, err := fe.DeleteBackfill(ctx, &pb.DeleteBackfillRequest{BackfillId: b.Id})
if err != nil {
logger.WithError(err).Errorf("failed to delete backfill: %s", b.Id)
telemetry.RecordUnitMeasurement(ctx, mBackfillDeletesFailed)
} else {
telemetry.RecordUnitMeasurement(ctx, mBackfillsDeleted)
}
}
}
func runAcknowledgeBackfills(fe pb.FrontendServiceClient, matchesToAcknowledge <-chan *pb.Match, backfillsToDelete chan<- *pb.Backfill) {
for m := range matchesToAcknowledge {
backfillId := m.Backfill.GetId()
if backfillId == "" {
continue
}
err := acknowledgeBackfill(fe, backfillId)
if err != nil {
logger.WithError(err).Errorf("failed to acknowledge backfill: %s", backfillId)
continue
}
if activeScenario.BackendDeletesBackfills {
backfillsToDelete <- m.Backfill
}
}
}
func acknowledgeBackfill(fe pb.FrontendServiceClient, backfillId string) error {
ctx, span := trace.StartSpan(context.Background(), "scale.frontend/AcknowledgeBackfill")
defer span.End()
_, err := fe.AcknowledgeBackfill(ctx, &pb.AcknowledgeBackfillRequest{
BackfillId: backfillId,
Assignment: &pb.Assignment{
Connection: fmt.Sprintf("%d.%d.%d.%d:2222", rand.Intn(256), rand.Intn(256), rand.Intn(256), rand.Intn(256)),
},
})
return err
}
func runAssignments(be pb.BackendServiceClient, matchesToAssign <-chan *pb.Match) {
ctx := context.Background()
for m := range matchesToAssign {
ids := []string{}
for _, t := range m.Tickets {
ids = append(ids, t.GetId())
}
_, err := be.AssignTickets(context.Background(), &pb.AssignTicketsRequest{
Assignments: []*pb.AssignmentGroup{
{
TicketIds: ids,
Assignment: &pb.Assignment{
Connection: fmt.Sprintf("%d.%d.%d.%d:2222", rand.Intn(256), rand.Intn(256), rand.Intn(256), rand.Intn(256)),
},
},
},
})
if err != nil {
telemetry.RecordUnitMeasurement(ctx, mMatchAssignsFailed)
logger.WithError(err).Error("failed to assign tickets")
continue
}
telemetry.RecordUnitMeasurement(ctx, mMatchesAssigned)
}
}

View File

@ -1,4 +1,3 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -12,44 +11,52 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package app contains the common application initialization code for Open Match servers.
package app
package evaluator
import (
"fmt"
"net"
"github.com/sirupsen/logrus"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/logging"
"open-match.dev/open-match/internal/rpc"
"google.golang.org/grpc"
"open-match.dev/open-match/pkg/pb"
utilTesting "open-match.dev/open-match/internal/util/testing"
"open-match.dev/open-match/examples/scale/scenarios"
)
var (
logger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": "app.main",
"component": "scale.evaluator",
})
)
// RunApplication creates a server.
func RunApplication(serverName string, bindService func(*rpc.ServerParams, config.View) error) {
cfg, err := config.Read()
// Run triggers execution of an evaluator.
func Run() {
activeScenario := scenarios.ActiveScenario
server := grpc.NewServer(utilTesting.NewGRPCServerOptions(logger)...)
pb.RegisterEvaluatorServer(server, activeScenario.Evaluator)
ln, err := net.Listen("tcp", fmt.Sprintf(":%d", 50508))
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatalf("cannot read configuration.")
"port": 50508,
}).Fatal("net.Listen() error")
}
logging.ConfigureLogging(cfg)
p, err := rpc.NewServerParamsFromConfig(cfg, "api."+serverName)
logger.WithFields(logrus.Fields{
"port": 50508,
}).Info("TCP net listener initialized")
logger.Info("Serving gRPC endpoint")
err = server.Serve(ln)
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatalf("cannot construct server.")
}).Fatal("gRPC serve() error")
}
if err := bindService(p, cfg); err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatalf("failed to bind %s service.", serverName)
}
rpc.MustServeForever(p)
}

View File

@ -0,0 +1,241 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package frontend
import (
"context"
"math/rand"
"sync"
"time"
"github.com/sirupsen/logrus"
"go.opencensus.io/stats"
"go.opencensus.io/trace"
"open-match.dev/open-match/examples/scale/scenarios"
"open-match.dev/open-match/internal/appmain"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/rpc"
"open-match.dev/open-match/internal/telemetry"
"open-match.dev/open-match/pkg/pb"
)
var (
logger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": "scale.frontend",
})
activeScenario = scenarios.ActiveScenario
mTicketsCreated = telemetry.Counter("scale_frontend_tickets_created", "tickets created")
mTicketCreationsFailed = telemetry.Counter("scale_frontend_ticket_creations_failed", "tickets created")
mRunnersWaiting = concurrentGauge(telemetry.Gauge("scale_frontend_runners_waiting", "runners waiting"))
mRunnersCreating = concurrentGauge(telemetry.Gauge("scale_frontend_runners_creating", "runners creating"))
mTicketsDeleted = telemetry.Counter("scale_frontend_tickets_deleted", "tickets deleted")
mTicketDeletesFailed = telemetry.Counter("scale_frontend_ticket_deletes_failed", "ticket deletes failed")
mBackfillsCreated = telemetry.Counter("scale_frontend_backfills_created", "backfills_created")
mBackfillCreationsFailed = telemetry.Counter("scale_frontend_backfill_creations_failed", "backfill creations failed")
mTicketsTimeToAssignment = telemetry.HistogramWithBounds("scale_frontend_tickets_time_to_assignment", "tickets time to assignment", stats.UnitMilliseconds, []float64{0.01, 0.05, 0.1, 0.3, 0.6, 0.8, 1, 2, 3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30, 40, 50, 65, 80, 100, 130, 160, 200, 250, 300, 400, 500, 650, 800, 1000, 2000, 5000, 10000, 20000, 50000, 100000, 200000, 500000, 1000000})
)
type ticketToWatch struct {
id string
createdAt time.Time
}
// BindService starts the scale frontend component that creates
// tickets at scale in Open Match.
func BindService(p *appmain.Params, b *appmain.Bindings) error {
go run(p.Config())
return nil
}
func run(cfg config.View) {
conn, err := rpc.GRPCClientFromConfig(cfg, "api.frontend")
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatal("failed to get Frontend connection")
}
fe := pb.NewFrontendServiceClient(conn)
if activeScenario.FrontendCreatesBackfillsOnStart {
createBackfills(fe, activeScenario.FrontendTotalBackfillsToCreate)
}
ticketQPS := int(activeScenario.FrontendTicketCreatedQPS)
ticketTotal := activeScenario.FrontendTotalTicketsToCreate
totalCreated := 0
for range time.Tick(time.Second) {
for i := 0; i < ticketQPS; i++ {
if ticketTotal == -1 || totalCreated < ticketTotal {
go runner(fe)
}
}
}
}
func runner(fe pb.FrontendServiceClient) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
g := stateGauge{}
defer g.stop()
g.start(mRunnersWaiting)
// A random sleep at the start of the worker spreads calls out over the one-second
// period and makes the timing between ticket creation calls a more realistic
// Poisson distribution.
time.Sleep(time.Duration(rand.Int63n(int64(time.Second))))
g.start(mRunnersCreating)
createdAt := time.Now()
id, err := createTicket(ctx, fe)
if err != nil {
logger.WithError(err).Error("failed to create a ticket")
return
}
err = watchAssignments(ctx, fe, ticketToWatch{id: id, createdAt: createdAt})
if err != nil {
logger.WithError(err).Errorf("failed to get ticket assignment: %s", id)
} else {
ms := time.Since(createdAt).Nanoseconds() / 1e6
stats.Record(ctx, mTicketsTimeToAssignment.M(ms))
}
if activeScenario.FrontendDeletesTickets {
err = deleteTicket(ctx, fe, id)
if err != nil {
logger.WithError(err).Errorf("failed to delete ticket: %s", id)
}
}
}
func createTicket(ctx context.Context, fe pb.FrontendServiceClient) (string, error) {
ctx, span := trace.StartSpan(ctx, "scale.frontend/CreateTicket")
defer span.End()
req := &pb.CreateTicketRequest{
Ticket: activeScenario.Ticket(),
}
resp, err := fe.CreateTicket(ctx, req)
if err != nil {
telemetry.RecordUnitMeasurement(ctx, mTicketCreationsFailed)
return "", err
}
telemetry.RecordUnitMeasurement(ctx, mTicketsCreated)
return resp.Id, nil
}
func watchAssignments(ctx context.Context, fe pb.FrontendServiceClient, ticket ticketToWatch) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
stream, err := fe.WatchAssignments(ctx, &pb.WatchAssignmentsRequest{TicketId: ticket.id})
if err != nil {
return err
}
var a *pb.Assignment
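// a starts as nil; generated protobuf getters are nil-safe, so a.GetConnection()
// simply returns "" on the first loop iteration.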
for a.GetConnection() == "" {
resp, err := stream.Recv()
if err != nil {
return err
}
a = resp.Assignment
}
return nil
}
func createBackfills(fe pb.FrontendServiceClient, numBackfillsToCreate int) error {
for i := 0; i < numBackfillsToCreate; i++ {
err := createBackfill(fe)
if err != nil {
return err
}
}
return nil
}
func createBackfill(fe pb.FrontendServiceClient) error {
ctx, span := trace.StartSpan(context.Background(), "scale.frontend/CreateBackfill")
defer span.End()
req := pb.CreateBackfillRequest{
Backfill: activeScenario.Backfill(),
}
_, err := fe.CreateBackfill(ctx, &req)
if err != nil {
telemetry.RecordUnitMeasurement(ctx, mBackfillCreationsFailed)
logger.WithError(err).Error("failed to create backfill")
return err
}
telemetry.RecordUnitMeasurement(ctx, mBackfillsCreated)
return nil
}
func deleteTicket(ctx context.Context, fe pb.FrontendServiceClient, ticketId string) error {
_, err := fe.DeleteTicket(ctx, &pb.DeleteTicketRequest{TicketId: ticketId})
if err != nil {
telemetry.RecordUnitMeasurement(ctx, mTicketDeletesFailed)
} else {
telemetry.RecordUnitMeasurement(ctx, mTicketsDeleted)
}
return err
}
// concurrentGauge allows concurrent modification of a gauge value by applying
// deltas to a shared running total under a mutex.
func concurrentGauge(s *stats.Int64Measure) func(delta int64) {
m := sync.Mutex{}
v := int64(0)
return func(delta int64) {
m.Lock()
defer m.Unlock()
v += delta
telemetry.SetGauge(context.Background(), s, v)
}
}
// stateGauge applies a single value to at most one gauge at a time.
type stateGauge struct {
f func(int64)
}
// start begins a stage measured in a gauge, stopping any previously started
// stage.
func (g *stateGauge) start(f func(int64)) {
g.stop()
g.f = f
f(1)
}
// stop finishes the current stage by decrementing the gauge.
func (g *stateGauge) stop() {
if g.f != nil {
g.f(-1)
g.f = nil
}
}

69
examples/scale/mmf/mmf.go Normal file
View File

@ -0,0 +1,69 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mmf
import (
"fmt"
"net"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
"open-match.dev/open-match/pkg/pb"
utilTesting "open-match.dev/open-match/internal/util/testing"
"open-match.dev/open-match/examples/scale/scenarios"
)
var (
logger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": "scale.mmf",
})
)
// Run triggers execution of a MMF.
func Run() {
activeScenario := scenarios.ActiveScenario
conn, err := grpc.Dial("open-match-query.open-match.svc.cluster.local:50503", utilTesting.NewGRPCDialOptions(logger)...)
if err != nil {
logger.Fatalf("Failed to connect to Open Match, got %v", err)
}
defer conn.Close()
server := grpc.NewServer(utilTesting.NewGRPCServerOptions(logger)...)
pb.RegisterMatchFunctionServer(server, activeScenario.MMF)
ln, err := net.Listen("tcp", fmt.Sprintf(":%d", 50502))
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
"port": 50502,
}).Fatal("net.Listen() error")
}
logger.WithFields(logrus.Fields{
"port": 50502,
}).Info("TCP net listener initialized")
logger.Info("Serving gRPC endpoint")
err = server.Serve(ln)
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
}).Fatal("gRPC serve() error")
}
}

View File

@ -0,0 +1,270 @@
// Copyright 2020 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package backfill
import (
"fmt"
"io"
"time"
"google.golang.org/protobuf/types/known/anypb"
"google.golang.org/protobuf/types/known/wrapperspb"
"open-match.dev/open-match/pkg/pb"
)
const (
poolName = "all"
openSlotsKey = "open-slots"
)
func Scenario() *BackfillScenario {
ticketsPerMatch := 4
return &BackfillScenario{
TicketsPerMatch: ticketsPerMatch,
MaxTicketsPerNotFullMatch: 3,
BackfillDeleteCond: func(b *pb.Backfill) bool {
openSlots := getOpenSlots(b, ticketsPerMatch)
return openSlots <= 0
},
}
}
type BackfillScenario struct {
TicketsPerMatch int
MaxTicketsPerNotFullMatch int
BackfillDeleteCond func(*pb.Backfill) bool
}
func (s *BackfillScenario) Profiles() []*pb.MatchProfile {
return []*pb.MatchProfile{
{
Name: "entirePool",
Pools: []*pb.Pool{
{
Name: poolName,
},
},
},
}
}
func (s *BackfillScenario) Ticket() *pb.Ticket {
return &pb.Ticket{}
}
func (s *BackfillScenario) Backfill() *pb.Backfill {
return &pb.Backfill{}
}
func (s *BackfillScenario) MatchFunction(p *pb.MatchProfile, poolBackfills map[string][]*pb.Backfill, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
return statefullMMF(p, poolBackfills, poolTickets, s.TicketsPerMatch, s.MaxTicketsPerNotFullMatch)
}
// statefullMMF is an MMF implementation used in the scenario where we want the MMF to create a match that is not yet full and fill it later.
// 1. The first FetchMatches is called
// 2. MMF grabs maxTicketsPerNotFullMatch tickets and makes a match and new backfill for it
// 3. MMF sets backfill's open slots to ticketsPerMatch - maxTicketsPerNotFullMatch
// 4. MMF returns the match as a result
// 5. The second FetchMatches is called
// 6. MMF gets previously created backfill
// 7. MMF gets backfill's open slots value
// 8. MMF grabs openSlots tickets and makes a match with previously created backfill
// 9. MMF sets backfill's open slots to 0
// 10. MMF returns the match as a result
func statefullMMF(p *pb.MatchProfile, poolBackfills map[string][]*pb.Backfill, poolTickets map[string][]*pb.Ticket, ticketsPerMatch int, maxTicketsPerNotFullMatch int) ([]*pb.Match, error) {
var matches []*pb.Match
for pool, backfills := range poolBackfills {
tickets, ok := poolTickets[pool]
if !ok || len(tickets) == 0 {
// no tickets in pool
continue
}
// process backfills first
for _, b := range backfills {
l := len(tickets)
if l == 0 {
// no tickets left
break
}
openSlots := getOpenSlots(b, ticketsPerMatch)
if openSlots <= 0 {
// no free open slots
continue
}
if l > openSlots {
l = openSlots
}
setOpenSlots(b, openSlots-l)
matches = append(matches, &pb.Match{
MatchId: fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), len(matches)),
Tickets: tickets[0:l],
MatchProfile: p.GetName(),
MatchFunction: "backfill",
Backfill: b,
})
tickets = tickets[l:]
}
// create not full matches with backfill
for {
l := len(tickets)
if l == 0 {
// no tickets left
break
}
if l > maxTicketsPerNotFullMatch {
l = maxTicketsPerNotFullMatch
}
b := pb.Backfill{}
setOpenSlots(&b, ticketsPerMatch-l)
matches = append(matches, &pb.Match{
MatchId: fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), len(matches)),
Tickets: tickets[0:l],
MatchProfile: p.GetName(),
MatchFunction: "backfill",
Backfill: &b,
AllocateGameserver: true,
})
tickets = tickets[l:]
}
}
return matches, nil
}
func getOpenSlots(b *pb.Backfill, defaultVal int) int {
if b.Extensions == nil {
return defaultVal
}
any, ok := b.Extensions[openSlotsKey]
if !ok {
return defaultVal
}
var val wrapperspb.Int32Value
err := any.UnmarshalTo(&val)
if err != nil {
panic(err)
}
return int(val.Value)
}
func setOpenSlots(b *pb.Backfill, val int) {
if b.Extensions == nil {
b.Extensions = make(map[string]*anypb.Any)
}
any, err := anypb.New(&wrapperspb.Int32Value{Value: int32(val)})
if err != nil {
panic(err)
}
b.Extensions[openSlotsKey] = any
}
// statelessMMF is an MMF implementation used in the scenario where we want the MMF to fill backfills created by a game server. It doesn't create
// or update any backfill.
// 1. FetchMatches is called
// 2. MMF gets a backfill
// 3. MMF grabs ticketsPerMatch tickets and makes a match with the backfill
// 4. MMF returns the match as a result
func statelessMMF(p *pb.MatchProfile, poolBackfills map[string][]*pb.Backfill, poolTickets map[string][]*pb.Ticket, ticketsPerMatch int) ([]*pb.Match, error) {
var matches []*pb.Match
for pool, backfills := range poolBackfills {
tickets, ok := poolTickets[pool]
if !ok || len(tickets) == 0 {
// no tickets in pool
continue
}
for _, b := range backfills {
l := len(tickets)
if l == 0 {
// no tickets left
break
}
if l > ticketsPerMatch && ticketsPerMatch > 0 {
l = ticketsPerMatch
}
matches = append(matches, &pb.Match{
MatchId: fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), len(matches)),
Tickets: tickets[0:l],
MatchProfile: p.GetName(),
MatchFunction: "backfill",
Backfill: b,
})
tickets = tickets[l:]
}
}
return matches, nil
}
func (s *BackfillScenario) Evaluate(stream pb.Evaluator_EvaluateServer) error {
tickets := map[string]struct{}{}
backfills := map[string]struct{}{}
matchIds := []string{}
outer:
for {
req, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("failed to read evaluator input stream: %w", err)
}
m := req.GetMatch()
if _, ok := backfills[m.Backfill.Id]; ok {
continue outer
}
for _, t := range m.Tickets {
if _, ok := tickets[t.Id]; ok {
continue outer
}
}
for _, t := range m.Tickets {
tickets[t.Id] = struct{}{}
}
matchIds = append(matchIds, m.GetMatchId())
}
for _, id := range matchIds {
err := stream.Send(&pb.EvaluateResponse{MatchId: id})
if err != nil {
return fmt.Errorf("failed to sending evaluator output stream: %w", err)
}
}
return nil
}
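
To make the open-slots bookkeeping above concrete, here is a small, self-contained round-trip sketch, not part of this change: it stores the remaining slot count in Backfill.Extensions as a wrapped Int32Value under the open-slots key and reads it back, mirroring what setOpenSlots and getOpenSlots do.

package main

import (
	"fmt"

	"google.golang.org/protobuf/types/known/anypb"
	"google.golang.org/protobuf/types/known/wrapperspb"

	"open-match.dev/open-match/pkg/pb"
)

func main() {
	// A backfill created for a not-full match: 4 tickets per match, 3 already
	// seated, so one slot remains open.
	b := &pb.Backfill{}
	a, err := anypb.New(&wrapperspb.Int32Value{Value: 4 - 3})
	if err != nil {
		panic(err)
	}
	b.Extensions = map[string]*anypb.Any{"open-slots": a}

	// Reading the value back mirrors getOpenSlots.
	var v wrapperspb.Int32Value
	if err := b.Extensions["open-slots"].UnmarshalTo(&v); err != nil {
		panic(err)
	}
	fmt.Println("open slots:", v.Value) // open slots: 1
}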

View File

@ -0,0 +1,145 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package battleroyal
import (
"fmt"
"io"
"math/rand"
"time"
"open-match.dev/open-match/pkg/pb"
)
const (
poolName = "all"
regionArg = "region"
)
func battleRoyalRegionName(i int) string {
return fmt.Sprintf("region_%d", i)
}
func Scenario() *BattleRoyalScenario {
return &BattleRoyalScenario{
regions: 20,
}
}
type BattleRoyalScenario struct {
regions int
}
func (b *BattleRoyalScenario) Profiles() []*pb.MatchProfile {
p := []*pb.MatchProfile{}
for i := 0; i < b.regions; i++ {
p = append(p, &pb.MatchProfile{
Name: battleRoyalRegionName(i),
Pools: []*pb.Pool{
{
Name: poolName,
StringEqualsFilters: []*pb.StringEqualsFilter{
{
StringArg: regionArg,
Value: battleRoyalRegionName(i),
},
},
},
},
})
}
return p
}
func (b *BattleRoyalScenario) Ticket() *pb.Ticket {
// Simple way to give an uneven distribution of region population.
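// Drawing an upper bound a uniformly and then a region below it biases the result
// toward low-numbered regions: with 20 regions, region_0 can be produced by every
// value of a, while region_19 is only possible when a == 20.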
a := rand.Intn(b.regions) + 1
r := rand.Intn(a)
return &pb.Ticket{
SearchFields: &pb.SearchFields{
StringArgs: map[string]string{
regionArg: battleRoyalRegionName(r),
},
},
}
}
func (b *BattleRoyalScenario) Backfill() *pb.Backfill {
return nil
}
func (b *BattleRoyalScenario) MatchFunction(p *pb.MatchProfile, poolBackfills map[string][]*pb.Backfill, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
const playersInMatch = 100
tickets := poolTickets[poolName]
var matches []*pb.Match
for i := 0; i+playersInMatch <= len(tickets); i += playersInMatch {
matches = append(matches, &pb.Match{
MatchId: fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), len(matches)),
Tickets: tickets[i : i+playersInMatch],
MatchProfile: p.GetName(),
MatchFunction: "battleRoyal",
})
}
return matches, nil
}
// fifoEvaluate accepts all matches which don't contain the same ticket as in a
// previously accepted match. Essentially first to claim the ticket wins.
func (b *BattleRoyalScenario) Evaluate(stream pb.Evaluator_EvaluateServer) error {
used := map[string]struct{}{}
// TODO: once the evaluator client supports sending and receiving at the
// same time, don't buffer, just send results immediately.
matchIDs := []string{}
outer:
for {
req, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("Error reading evaluator input stream: %w", err)
}
m := req.GetMatch()
for _, t := range m.Tickets {
if _, ok := used[t.Id]; ok {
continue outer
}
}
for _, t := range m.Tickets {
used[t.Id] = struct{}{}
}
matchIDs = append(matchIDs, m.GetMatchId())
}
for _, mID := range matchIDs {
err := stream.Send(&pb.EvaluateResponse{MatchId: mID})
if err != nil {
return fmt.Errorf("Error sending evaluator output stream: %w", err)
}
}
return nil
}

View File

@ -0,0 +1,115 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package firstmatch
import (
"fmt"
"io"
"time"
"open-match.dev/open-match/pkg/pb"
)
const (
poolName = "all"
)
func Scenario() *FirstMatchScenario {
return &FirstMatchScenario{}
}
type FirstMatchScenario struct {
}
func (*FirstMatchScenario) Profiles() []*pb.MatchProfile {
return []*pb.MatchProfile{
{
Name: "entirePool",
Pools: []*pb.Pool{
{
Name: poolName,
},
},
},
}
}
func (*FirstMatchScenario) Ticket() *pb.Ticket {
return &pb.Ticket{}
}
func (*FirstMatchScenario) Backfill() *pb.Backfill {
return nil
}
func (*FirstMatchScenario) MatchFunction(p *pb.MatchProfile, poolBackfills map[string][]*pb.Backfill, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
tickets := poolTickets[poolName]
var matches []*pb.Match
for i := 0; i+1 < len(tickets); i += 2 {
matches = append(matches, &pb.Match{
MatchId: fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), len(matches)),
Tickets: []*pb.Ticket{tickets[i], tickets[i+1]},
MatchProfile: p.GetName(),
MatchFunction: "rangeExpandingMatchFunction",
})
}
return matches, nil
}
// fifoEvaluate accepts all matches which don't contain the same ticket as in a
// previously accepted match. Essentially first to claim the ticket wins.
func (*FirstMatchScenario) Evaluate(stream pb.Evaluator_EvaluateServer) error {
used := map[string]struct{}{}
// TODO: once the evaluator client supports sending and receiving at the
// same time, don't buffer, just send results immediately.
matchIDs := []string{}
outer:
for {
req, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("Error reading evaluator input stream: %w", err)
}
m := req.GetMatch()
for _, t := range m.Tickets {
if _, ok := used[t.Id]; ok {
continue outer
}
}
for _, t := range m.Tickets {
used[t.Id] = struct{}{}
}
matchIDs = append(matchIDs, m.GetMatchId())
}
for _, mID := range matchIDs {
err := stream.Send(&pb.EvaluateResponse{MatchId: mID})
if err != nil {
return fmt.Errorf("Error sending evaluator output stream: %w", err)
}
}
return nil
}

View File

@ -0,0 +1,177 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package scenarios
import (
"sync"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
"open-match.dev/open-match/examples/scale/scenarios/backfill"
"open-match.dev/open-match/examples/scale/scenarios/firstmatch"
"open-match.dev/open-match/internal/util/testing"
"open-match.dev/open-match/pkg/matchfunction"
"open-match.dev/open-match/pkg/pb"
)
var (
queryServiceAddress = "open-match-query.open-match.svc.cluster.local:50503" // Address of the QueryService Endpoint.
logger = logrus.WithFields(logrus.Fields{
"app": "scale",
})
)
// GameScenario defines what tickets look like, and how they should be matched.
type GameScenario interface {
// Ticket creates a new ticket, with randomized parameters.
Ticket() *pb.Ticket
// Backfill creates a new backfill, with randomized parameters.
Backfill() *pb.Backfill
// Profiles lists all of the profiles that should run.
Profiles() []*pb.MatchProfile
// MatchFunction is the custom logic implementation of the match function.
MatchFunction(p *pb.MatchProfile, poolBackfills map[string][]*pb.Backfill, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error)
// Evaluate is the custom logic implementation of the evaluator.
Evaluate(stream pb.Evaluator_EvaluateServer) error
}
// ActiveScenario sets the scenario with preset parameters that we want to use for current Open Match benchmark run.
var ActiveScenario = func() *Scenario {
var gs GameScenario = firstmatch.Scenario()
// TODO: Select which scenario to use based on some configuration or choice,
// so it's easier to run different scenarios without changing code.
//gs = battleroyal.Scenario()
//gs = teamshooter.Scenario()
s := backfill.Scenario()
gs = s
return &Scenario{
FrontendTotalTicketsToCreate: -1,
FrontendTicketCreatedQPS: 100,
FrontendCreatesBackfillsOnStart: true,
FrontendTotalBackfillsToCreate: 1000,
FrontendDeletesTickets: true,
BackendAssignsTickets: false,
BackendAcknowledgesBackfills: true,
BackendDeletesBackfills: true,
Ticket: gs.Ticket,
Backfill: gs.Backfill,
BackfillDeleteCond: s.BackfillDeleteCond,
Profiles: gs.Profiles,
MMF: queryPoolsWrapper(gs.MatchFunction),
Evaluator: gs.Evaluate,
}
}()
// Scenario defines the controllable fields for Open Match benchmark scenarios
type Scenario struct {
// TODO: supports the following controllable parameters
// MatchFunction Configs
// MatchOverlapRatio float32
// TicketSearchFieldsUnitSize int
// TicketSearchFieldsNumber int
// GameFrontend Configs
// TicketExtensionSize int
// PendingTicketNumber int
// MatchExtensionSize int
FrontendTicketCreatedQPS uint32
FrontendTotalTicketsToCreate int // TotalTicketsToCreate = -1 lets scale-frontend create tickets forever
FrontendTotalBackfillsToCreate int
FrontendCreatesBackfillsOnStart bool
FrontendDeletesTickets bool
// GameBackend Configs
// ProfileNumber int
// FilterNumber int
BackendAssignsTickets bool
BackendAcknowledgesBackfills bool
BackendDeletesBackfills bool
Ticket func() *pb.Ticket
Backfill func() *pb.Backfill
BackfillDeleteCond func(*pb.Backfill) bool
Profiles func() []*pb.MatchProfile
MMF matchFunction
Evaluator evaluatorFunction
}
type matchFunction func(*pb.RunRequest, pb.MatchFunction_RunServer) error
type evaluatorFunction func(pb.Evaluator_EvaluateServer) error
func (mmf matchFunction) Run(req *pb.RunRequest, srv pb.MatchFunction_RunServer) error {
return mmf(req, srv)
}
func (eval evaluatorFunction) Evaluate(srv pb.Evaluator_EvaluateServer) error {
return eval(srv)
}
func getQueryServiceGRPCClient() pb.QueryServiceClient {
conn, err := grpc.Dial(queryServiceAddress, testing.NewGRPCDialOptions(logger)...)
if err != nil {
logger.Fatalf("Failed to connect to Open Match, got %v", err)
}
return pb.NewQueryServiceClient(conn)
}
func queryPoolsWrapper(mmf func(req *pb.MatchProfile, poolBackfills map[string][]*pb.Backfill, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error)) matchFunction {
var q pb.QueryServiceClient
var startQ sync.Once
return func(req *pb.RunRequest, stream pb.MatchFunction_RunServer) error {
startQ.Do(func() {
q = getQueryServiceGRPCClient()
})
poolTickets, err := matchfunction.QueryPools(stream.Context(), q, req.GetProfile().GetPools())
if err != nil {
return err
}
poolBackfills, err := matchfunction.QueryBackfillPools(stream.Context(), q, req.GetProfile().GetPools())
if err != nil {
return err
}
proposals, err := mmf(req.GetProfile(), poolBackfills, poolTickets)
if err != nil {
return err
}
logger.WithFields(logrus.Fields{
"proposals": proposals,
}).Trace("proposals returned by match function")
for _, proposal := range proposals {
if err := stream.Send(&pb.RunResponse{Proposal: proposal}); err != nil {
return err
}
}
return nil
}
}

View File

@ -0,0 +1,336 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// TeamShooterScenario is a scenario designed to emulate the approximate behavior,
// as seen by Open Match, of a skill-based team game. It doesn't try to provide
// good matchmaking for real players. There are three arguments used:
// mode: The game mode the players want to play in. mode is a hard partition.
// regions: Players may have good latency to one or more regions. A player will
// search for matches in all eligible regions.
// skill: Players have a random skill based on a normal distribution. Players
// will only be matched with other players who have a close skill value. The
// match functions have overlapping partitions of the skill brackets.
package teamshooter
import (
"fmt"
"io"
"math"
"math/rand"
"sort"
"time"
"google.golang.org/protobuf/types/known/anypb"
"google.golang.org/protobuf/types/known/wrapperspb"
"open-match.dev/open-match/pkg/pb"
)
const (
poolName = "all"
skillArg = "skill"
modeArg = "mode"
)
// TeamShooterScenario provides the required methods for running a scenario.
type TeamShooterScenario struct {
// Names of available region tags.
regions []string
// Maximum regions a player can search in.
maxRegions int
// Number of tickets which form a match.
playersPerGame int
// For each pair of consecutive values, the boundary used to split profiles by
// skill.
skillBoundaries []float64
// Maximum difference between two tickets to consider a match valid.
maxSkillDifference float64
// List of mode names.
modes []string
// Returns a random mode, with some weight.
randomMode func() string
}
// Scenario creates a new TeamShooterScenario.
func Scenario() *TeamShooterScenario {
modes, randomMode := weightedChoice(map[string]int{
"pl": 100, // Payload, very popular.
"cp": 25, // Capture point, 1/4 as popular.
})
regions := []string{}
for i := 0; i < 2; i++ {
regions = append(regions, fmt.Sprintf("region_%d", i))
}
return &TeamShooterScenario{
regions: regions,
maxRegions: 1,
playersPerGame: 12,
skillBoundaries: []float64{math.Inf(-1), 0, math.Inf(1)},
maxSkillDifference: 0.01,
modes: modes,
randomMode: randomMode,
}
}
// Profiles shards the player base on mode, region, and skill.
func (t *TeamShooterScenario) Profiles() []*pb.MatchProfile {
p := []*pb.MatchProfile{}
for _, region := range t.regions {
for _, mode := range t.modes {
for i := 0; i+1 < len(t.skillBoundaries); i++ {
skillMin := t.skillBoundaries[i] - t.maxSkillDifference/2
skillMax := t.skillBoundaries[i+1] + t.maxSkillDifference/2
p = append(p, &pb.MatchProfile{
Name: fmt.Sprintf("%s_%s_%v-%v", region, mode, skillMin, skillMax),
Pools: []*pb.Pool{
{
Name: poolName,
DoubleRangeFilters: []*pb.DoubleRangeFilter{
{
DoubleArg: skillArg,
Min: skillMin,
Max: skillMax,
},
},
TagPresentFilters: []*pb.TagPresentFilter{
{
Tag: region,
},
},
StringEqualsFilters: []*pb.StringEqualsFilter{
{
StringArg: modeArg,
Value: mode,
},
},
},
},
})
}
}
}
return p
}
// Ticket creates a randomized player.
func (t *TeamShooterScenario) Ticket() *pb.Ticket {
region := rand.Intn(len(t.regions))
numRegions := rand.Intn(t.maxRegions) + 1
tags := []string{}
for i := 0; i < numRegions; i++ {
tags = append(tags, t.regions[region])
// The Earth is actually a circle.
region = (region + 1) % len(t.regions)
}
return &pb.Ticket{
SearchFields: &pb.SearchFields{
DoubleArgs: map[string]float64{
skillArg: clamp(rand.NormFloat64(), -3, 3),
},
StringArgs: map[string]string{
modeArg: t.randomMode(),
},
Tags: tags,
},
}
}
func (t *TeamShooterScenario) Backfill() *pb.Backfill {
return nil
}
// MatchFunction puts tickets into matches based on their skill, finding the
// required number of tickets for a game within the maximum skill difference.
func (t *TeamShooterScenario) MatchFunction(p *pb.MatchProfile, poolBackfills map[string][]*pb.Backfill, poolTickets map[string][]*pb.Ticket) ([]*pb.Match, error) {
skill := func(t *pb.Ticket) float64 {
return t.SearchFields.DoubleArgs[skillArg]
}
tickets := poolTickets[poolName]
var matches []*pb.Match
sort.Slice(tickets, func(i, j int) bool {
return skill(tickets[i]) < skill(tickets[j])
})
for i := 0; i+t.playersPerGame <= len(tickets); i++ {
mt := tickets[i : i+t.playersPerGame]
if skill(mt[len(mt)-1])-skill(mt[0]) < t.maxSkillDifference {
avg := float64(0)
for _, t := range mt {
avg += skill(t)
}
avg /= float64(len(mt))
q := float64(0)
for _, t := range mt {
diff := skill(t) - avg
q -= diff * diff
}
m, err := (&matchExt{
id: fmt.Sprintf("profile-%v-time-%v-%v", p.GetName(), time.Now().Format("2006-01-02T15:04:05.00"), len(matches)),
matchProfile: p.GetName(),
matchFunction: "skillmatcher",
tickets: mt,
quality: q,
}).pack()
if err != nil {
return nil, err
}
matches = append(matches, m)
}
}
return matches, nil
}
// Evaluate returns matches in order of highest quality, skipping any matches
// which contain tickets that are already used.
func (t *TeamShooterScenario) Evaluate(stream pb.Evaluator_EvaluateServer) error {
// Unpacked proposal matches.
proposals := []*matchExt{}
// Ticket ids which are used in a match.
used := map[string]struct{}{}
for {
req, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("Error reading evaluator input stream: %w", err)
}
p, err := unpackMatch(req.GetMatch())
if err != nil {
return err
}
proposals = append(proposals, p)
}
// Higher quality is better.
sort.Slice(proposals, func(i, j int) bool {
return proposals[i].quality > proposals[j].quality
})
outer:
for _, p := range proposals {
for _, t := range p.tickets {
if _, ok := used[t.Id]; ok {
continue outer
}
}
for _, t := range p.tickets {
used[t.Id] = struct{}{}
}
err := stream.Send(&pb.EvaluateResponse{MatchId: p.id})
if err != nil {
return fmt.Errorf("Error sending evaluator output stream: %w", err)
}
}
return nil
}
// matchExt presents the match and extension data in a native form, and allows
// easy conversion to and from proto format.
type matchExt struct {
id string
tickets []*pb.Ticket
quality float64
matchProfile string
matchFunction string
}
func unpackMatch(m *pb.Match) (*matchExt, error) {
v := &wrapperspb.DoubleValue{}
err := m.Extensions["quality"].UnmarshalTo(v)
if err != nil {
return nil, fmt.Errorf("Error unpacking match quality: %w", err)
}
return &matchExt{
id: m.MatchId,
tickets: m.Tickets,
quality: v.Value,
matchProfile: m.MatchProfile,
matchFunction: m.MatchFunction,
}, nil
}
func (m *matchExt) pack() (*pb.Match, error) {
v := &wrapperspb.DoubleValue{Value: m.quality}
a, err := anypb.New(v)
if err != nil {
return nil, fmt.Errorf("Error packing match quality: %w", err)
}
return &pb.Match{
MatchId: m.id,
Tickets: m.tickets,
MatchProfile: m.matchProfile,
MatchFunction: m.matchFunction,
Extensions: map[string]*anypb.Any{
"quality": a,
},
}, nil
}
func clamp(v float64, min float64, max float64) float64 {
if v < min {
return min
}
if v > max {
return max
}
return v
}
// weightedChoice takes a map of values, and their relative probability. It
// returns a list of the values, along with a function which will return random
// choices from the values with the weighted probability.
func weightedChoice(m map[string]int) ([]string, func() string) {
s := make([]string, 0, len(m))
total := 0
for k, v := range m {
s = append(s, k)
total += v
}
return s, func() string {
remainder := rand.Intn(total)
for k, v := range m {
remainder -= v
if remainder < 0 {
return k
}
}
panic("weightedChoice is broken.")
}
}

138
go.mod
View File

@ -14,39 +14,115 @@ module open-match.dev/open-match
// See the License for the specific language governing permissions and
// limitations under the License.
go 1.12
// When updating Go version, update Dockerfile.ci, Dockerfile.base-build, and go.mod
go 1.19
require (
cloud.google.com/go v0.40.0
contrib.go.opencensus.io/exporter/jaeger v0.1.0
contrib.go.opencensus.io/exporter/ocagent v0.5.0
contrib.go.opencensus.io/exporter/prometheus v0.1.0
contrib.go.opencensus.io/exporter/stackdriver v0.12.2
contrib.go.opencensus.io/exporter/zipkin v0.1.1
contrib.go.opencensus.io/exporter/jaeger v0.2.1
contrib.go.opencensus.io/exporter/ocagent v0.7.0
contrib.go.opencensus.io/exporter/prometheus v0.2.0
contrib.go.opencensus.io/exporter/stackdriver v0.13.4
github.com/Bose/minisentinel v0.0.0-20200130220412-917c5a9223bb
github.com/TV4/logrus-stackdriver-formatter v0.1.0
github.com/alicebob/miniredis/v2 v2.8.1-0.20190618082157-e29950035715
github.com/cenkalti/backoff v2.1.1+incompatible
github.com/fsnotify/fsnotify v1.4.7
github.com/go-logfmt/logfmt v0.4.0 // indirect
github.com/gogo/protobuf v1.2.1
github.com/golang/protobuf v1.3.1
github.com/gomodule/redigo v1.7.1-0.20190322064113-39e2c31b7ca3
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0
github.com/grpc-ecosystem/grpc-gateway v1.9.2
github.com/imdario/mergo v0.3.7 // indirect
github.com/openzipkin/zipkin-go v0.1.6
github.com/pkg/errors v0.8.1
github.com/prometheus/client_golang v1.0.0
github.com/alicebob/miniredis/v2 v2.14.1
github.com/cenkalti/backoff v2.2.1+incompatible
github.com/fsnotify/fsnotify v1.4.9
github.com/go-redsync/redsync/v4 v4.3.0
github.com/golang/protobuf v1.5.2
github.com/gomodule/redigo v2.0.1-0.20191111085604-09d84710e01a+incompatible
github.com/grpc-ecosystem/go-grpc-middleware v1.2.2
github.com/grpc-ecosystem/grpc-gateway/v2 v2.15.0
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.8.0
github.com/rs/xid v1.2.1
github.com/sirupsen/logrus v1.4.2
github.com/spf13/viper v1.4.0
github.com/stretchr/testify v1.3.0
go.opencensus.io v0.22.0
golang.org/x/net v0.0.0-20190522155817-f3200d17e092
google.golang.org/genproto v0.0.0-20190611190212-a7e196e89fd3
google.golang.org/grpc v1.21.1
k8s.io/api v0.0.0-20190624085159-95846d7ef82a
k8s.io/apimachinery v0.0.0-20190624085041-961b39a1baa0
k8s.io/client-go v0.0.0-20190620085101-78d2af792bab
k8s.io/utils v0.0.0-20190607212802-c55fbcfc754a // indirect
github.com/sirupsen/logrus v1.7.0
github.com/spf13/viper v1.7.1
github.com/stretchr/testify v1.8.1
go.opencensus.io v0.24.0
golang.org/x/net v0.3.1-0.20221206200815-1e63c2f08a10
golang.org/x/sync v0.1.0
google.golang.org/genproto v0.0.0-20221207170731-23e4bf6bdc37
google.golang.org/grpc v1.51.0
google.golang.org/protobuf v1.28.1
k8s.io/api v0.26.1 // kubernetes-1.14.10
k8s.io/apimachinery v0.26.1
k8s.io/client-go v0.26.1
)
require (
cloud.google.com/go/compute v1.13.0 // indirect
cloud.google.com/go/compute/metadata v0.2.1 // indirect
cloud.google.com/go/container v1.7.0 // indirect
cloud.google.com/go/monitoring v1.8.0 // indirect
cloud.google.com/go/trace v1.4.0 // indirect
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 // indirect
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d // indirect
github.com/alicebob/gopher-json v0.0.0-20200520072559-a9ecdc9d1d3a // indirect
github.com/aws/aws-sdk-go v1.35.26 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/census-instrumentation/opencensus-proto v0.2.1 // indirect
github.com/cespare/xxhash/v2 v2.1.1 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful/v3 v3.9.0 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/go-stack/stack v1.8.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.2.0 // indirect
github.com/googleapis/gax-go/v2 v2.7.0 // indirect
github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.0 // indirect
github.com/hashicorp/golang-lru v0.5.1 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/imdario/mergo v0.3.11 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/magiconair/properties v1.8.1 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.1.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/opentracing/opentracing-go v1.1.0 // indirect
github.com/pelletier/go-toml v1.8.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.14.0 // indirect
github.com/prometheus/procfs v0.2.0 // indirect
github.com/prometheus/statsd_exporter v0.15.0 // indirect
github.com/spf13/afero v1.4.1 // indirect
github.com/spf13/cast v1.3.0 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/uber/jaeger-client-go v2.25.0+incompatible // indirect
github.com/yuin/gopher-lua v0.0.0-20191220021717-ab39c6098bdb // indirect
golang.org/x/oauth2 v0.3.0 // indirect
golang.org/x/sys v0.3.0 // indirect
golang.org/x/term v0.3.0 // indirect
golang.org/x/text v0.5.0 // indirect
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 // indirect
google.golang.org/api v0.103.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
gopkg.in/alecthomas/kingpin.v2 v2.2.6 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.51.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/klog/v2 v2.80.1 // indirect
k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 // indirect
k8s.io/utils v0.0.0-20221107191617-1a15be271d1d // indirect
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)

go.sum (819 changes)
File diff suppressed because it is too large

@@ -0,0 +1,148 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: open-match-demo
  labels:
    app: open-match-demo
    release: open-match-demo
---
kind: Service
apiVersion: v1
metadata:
  name: om-function
  namespace: open-match-demo
  labels:
    app: open-match-customize
    component: matchfunction
    release: open-match-demo
spec:
  selector:
    app: open-match-customize
    component: matchfunction
    release: open-match-demo
  clusterIP: None
  type: ClusterIP
  ports:
  - name: grpc
    protocol: TCP
    port: 50502
  - name: http
    protocol: TCP
    port: 51502
---
kind: Service
apiVersion: v1
metadata:
  name: om-demo
  namespace: open-match-demo
  labels:
    app: open-match-demo
    component: demo
    release: open-match-demo
spec:
  selector:
    app: open-match-demo
    component: demo
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 51507
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: om-function
  namespace: open-match-demo
  labels:
    app: open-match-customize
    component: matchfunction
    release: open-match-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: open-match-customize
      component: matchfunction
  template:
    metadata:
      namespace: open-match-demo
      labels:
        app: open-match-customize
        component: matchfunction
        release: open-match-demo
    spec:
      containers:
      - name: om-function
        image: "gcr.io/open-match-public-images/openmatch-mmf-go-soloduel:0.0.0-dev"
        ports:
        - name: grpc
          containerPort: 50502
        - name: http
          containerPort: 51502
        imagePullPolicy: Always
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: om-demo
  namespace: open-match-demo
  labels:
    app: open-match-demo
    component: demo
    release: open-match-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: open-match-demo
      component: demo
  template:
    metadata:
      namespace: open-match-demo
      labels:
        app: open-match-demo
        component: demo
        release: open-match-demo
    spec:
      containers:
      - name: om-demo
        image: "gcr.io/open-match-public-images/openmatch-demo-first-match:0.0.0-dev"
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 51507
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /healthz
            port: 51507
          initialDelaySeconds: 5
          periodSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /healthz?readiness=true
            port: 51507
          initialDelaySeconds: 10
          periodSeconds: 10
          failureThreshold: 2

@@ -0,0 +1,29 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gce:podsecuritypolicy:gke-metadata-server-workaround
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gce:podsecuritypolicy:privileged
subjects:
- kind: ServiceAccount
  name: gke-metadata-server
  namespace: kube-system

@@ -1,24 +1,67 @@
- Open Match Helm Chart
- =====================
+ ### Open Match Helm Chart Templates
+ This directory contains the [helm](https://helm.sh/ "helm") chart templates used to customize and deploy Open Match.
- Open Match provides a Helm chart to quickly
+ Templates under the `templates/` directory are for the core components in Open Match - e.g. backend, frontend, query, and synchronizer; some security policies and configmaps are also defined under this folder.
- ```bash
- # Install Helm and Tiller
- # See https://github.com/helm/helm/releases for
- cd /tmp && curl -Lo helm.tar.gz https://storage.googleapis.com/kubernetes-helm/helm-v2.13.0-linux-amd64.tar.gz && tar xvzf helm.tar.gz --strip-components 1 && mv helm $(PREFIX)/bin/helm && mv tiller $(PREFIX)/bin/tiller
+ Open Match also provides templates for optional components that are disabled by default under the `subcharts/` directory.
+ 1. `open-match-customize` contains flexible templates to deploy your own matchfunction and evaluator.
+ 2. `open-match-telemetry` contains monitoring support for Open Match; you may choose to enable/disable [jaeger](https://www.jaegertracing.io/ "jaeger"), [prometheus](http://prometheus.io "prometheus"), [stackdriver](https://cloud.google.com/stackdriver/ "stackdriver"), and [grafana](https://grafana.com/ "grafana") by overriding the config values in the provided templates.
- # Install Helm to Kubernetes Cluster
- kubectl create serviceaccount --namespace kube-system tiller
- helm init --service-account tiller --force-upgrade
- kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- # Run if RBAC is enabled.
- kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
+ You may control the behavior of Open Match by overriding the configs in the `install/helm/open-match/values.yaml` file. Here are a few examples:
- # Deploy Open Match
- helm upgrade --install --wait --debug open-match \
-   install/helm/open-match \
-   --namespace=open-match \
-   --set openmatch.image.registry=$(REGISTRY) \
-   --set openmatch.image.tag=$(TAG)
- ```
```diff
# install/helm/open-match/values.yaml
# 1. Configs under the `global` section affect all components - including components in the subcharts.
# 2. Configs under the subchart name - e.g. `open-match-customize` - only affect the settings in that subchart.
# 3. Otherwise, the configs are for core components (templates in the parent chart) only.

# Overrides spec.type of a specific Kubernetes Service
# Equivalent helm cli flag --set swaggerui.portType=LoadBalancer
swaggerui:
-  portType: ClusterIP
+  portType: LoadBalancer

# Overrides spec.type of all Open Match components - including components in the subcharts
# Equivalent helm cli flag --set global.kubernetes.service.portType=LoadBalancer
global:
  kubernetes:
    service:
-      portType: ClusterIP
+      portType: LoadBalancer

# Enables grafana support in Open Match
# Equivalent helm cli flag --set global.telemetry.grafana.enabled=true
global:
  telemetry:
    grafana:
-      enabled: false
+      enabled: true

# Enables an optional component in Open Match
# Equivalent helm cli flag --set open-match-telemetry.enabled=true
open-match-telemetry:
-  enabled: false
+  enabled: true

# Enables rpc logging in Open Match
# Equivalent helm cli flag --set global.logging.rpc.enabled=true
global:
  logging:
    rpc:
-      enabled: false
+      enabled: true

# Instructs Open Match to use customized matchfunction and evaluator images
# Equivalent helm cli flag --set open-match-customize.image.registry=[XXX],open-match-customize.image.tag=[XXX]
open-match-customize:
  enabled: true
+  image:
+    registry: [YOUR_REGISTRY_URL]
+    tag: [YOUR_IMAGE_TAG]
+  function:
+    image: [YOUR_MATCHFUNCTION_IMAGE_NAME]
+  evaluator:
+    image: [YOUR_EVALUATOR_IMAGE_NAME]
```
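For quick experiments, the same overrides can also be passed on the command line instead of editing `values.yaml`. Below is a minimal sketch, assuming Helm 3 and a locally checked-out chart at `install/helm/open-match`; the release name and namespace shown are illustrative, not taken from this diff.

```bash
# Illustrative only: enable the optional telemetry subchart plus Grafana, and
# expose swaggerui through a LoadBalancer, using --set instead of editing values.yaml.
helm upgrade --install open-match install/helm/open-match \
  --create-namespace --namespace open-match \
  --set open-match-telemetry.enabled=true \
  --set global.telemetry.grafana.enabled=true \
  --set swaggerui.portType=LoadBalancer
```

The flag names mirror the `# Equivalent helm cli flag` comments in the example above.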
Please see [Helm - Chart Template Guide](https://helm.sh/docs/chart_template_guide/#the-chart-template-developer-s-guide "Chart Template Guide") for the advanced usages and our [Makefile](https://github.com/googleforgames/open-match/blob/master/Makefile#L358 "Makefile") for how we use the helm charts to deploy Open Match.

@@ -12,10 +12,27 @@
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
appVersion: "0.6.0"
version: 0.6.0
apiVersion: v2
appVersion: "1.7.0-rc.1"
version: 1.7.0-rc.1
name: open-match
dependencies:
- name: redis
  version: 16.3.1
  repository: https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami
  condition: open-match-core.redis.enabled
- name: open-match-telemetry
  version: 0.0.0-dev
  condition: open-match-telemetry.enabled
  repository: "file://./subcharts/open-match-telemetry"
- name: open-match-customize
  version: 0.0.0-dev
  condition: open-match-customize.enabled
  repository: "file://./subcharts/open-match-customize"
- name: open-match-scale
  version: 0.0.0-dev
  condition: open-match-scale.enabled
  repository: "file://./subcharts/open-match-scale"
description: Flexible, extensible, and scalable video game matchmaking.
keywords:
- kubernetes
@@ -33,4 +50,3 @@ maintainers:
url: https://groups.google.com/forum/#!forum/open-match-discuss
engine: gotpl
icon: https://open-match.dev/site/images/logo.svg
tillerVersion: ">2.10.0"

@@ -1,12 +0,0 @@
# Install Open Match using Helm
This chart installs the Open Match application and defines deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
To deploy this chart run:
```bash
helm upgrade --install --wait --debug open-match install/helm/open-match \
--namespace=open-match \
--set openmatch.image.registry=$(REGISTRY) \
--set openmatch.image.tag=$(TAG)
```

File diff suppressed because it is too large

@@ -1,487 +0,0 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"iteration": 1562708434300,
"links": [],
"panels": [
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 10,
"panels": [],
"title": "Server",
"type": "row"
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"description": "",
"fill": 1,
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 1
},
"id": 6,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum by (grpc_server_method)(rate(grpc_io_server_completed_rpcs[$timewindow]))",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{grpc_server_method}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Request Rate",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"description": "",
"fill": 1,
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 1
},
"id": 12,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "histogram_quantile(0.95, sum(rate(grpc_io_server_server_latency_bucket[$timewindow])) by (grpc_server_method, le))",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{grpc_server_method}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "95%-ile Latency",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": null,
"format": "ms",
"label": null,
"logBase": 2,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 9
},
"id": 8,
"panels": [],
"title": "Client",
"type": "row"
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"description": "",
"fill": 1,
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 10
},
"id": 4,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum by (grpc_client_method)(rate(grpc_io_client_completed_rpcs[$timewindow]))",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{grpc_client_method}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Client Request Rate",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"decimals": null,
"format": "reqps",
"label": "",
"logBase": 10,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"description": "",
"fill": 1,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 10
},
"id": 2,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "histogram_quantile(0.95, sum(rate(grpc_io_client_roundtrip_latency_bucket[$timewindow])) by (grpc_client_method, le))",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{grpc_client_method}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "95%-ile Client Latency",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "ms",
"label": null,
"logBase": 2,
"max": null,
"min": "0",
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"schemaVersion": 18,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"allValue": null,
"current": {
"text": "5m",
"value": "5m"
},
"hide": 0,
"includeAll": false,
"label": "Time Window",
"multi": false,
"name": "timewindow",
"options": [
{
"selected": true,
"text": "5m",
"value": "5m"
},
{
"selected": false,
"text": "10m",
"value": "10m"
},
{
"selected": false,
"text": "15m",
"value": "15m"
},
{
"selected": false,
"text": "30m",
"value": "30m"
},
{
"selected": false,
"text": "1h",
"value": "1h"
},
{
"selected": false,
"text": "4h",
"value": "4h"
}
],
"query": "5m,10m,15m,30m,1h,4h",
"skipUrlSync": false,
"type": "custom"
}
]
},
"time": {
"from": "now-30m",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "",
"title": "gRPC",
"uid": "nlrmG_Cmk",
"version": 1
}

Some files were not shown because too many files have changed in this diff.