Compare commits


416 Commits

Author SHA1 Message Date
2b94ebb1d4 0.6.0-rc.1 Release () 2019-07-15 15:45:55 -07:00
52610974de Update Open Match Developer Documentation () 2019-07-15 13:43:51 -07:00
041572eef6 Move monitoring/ to telemetry/. () 2019-07-15 12:44:27 -07:00
e28fe42f3b Fix metrics flushing, add OC agent, and refactor multi-closing. () 2019-07-15 11:39:04 -07:00
880e340859 Merge Logrus Loggers () 2019-07-15 06:38:16 -07:00
88a659544e Enable stress test for v0.6 ()
* Checkpoint

* Fix

* README

* Fix
2019-07-12 16:56:14 -07:00
9381918163 An attempt to fix the flake test in frontend_service_test () 2019-07-12 11:51:46 -07:00
1c41052bd6 Obsolete e2e test setup files ()
* Obsolete e2e test setup files

* Review
2019-07-12 11:03:45 -07:00
819ae18478 Add metrics for Open Match () 2019-07-11 17:17:31 -07:00
ad96f42b94 Add TLS support to the Helm chart. () 2019-07-11 15:41:41 -07:00
1d778c079c Add Terraform Linting () 2019-07-11 14:47:45 -07:00
4bbfafd761 Stress test automation ()
* Stress test automation

* Use distroless

* Review
2019-07-11 12:10:15 -07:00
28e5d0a1d1 Update Terraform setup and README ()
* Update Terraform README

* Enable GCP API using Terraform

* Review comment

* Update secure-gke.tf

* Update link
2019-07-11 10:56:42 -07:00
a394c8b22e Refactor Helm deployment templates to share common config. () 2019-07-10 13:59:43 -07:00
93276f4d02 Add config based metrics and logging for gRPC. () 2019-07-10 11:44:19 -07:00
310d98a078 Fix Open Match logo in Helm chart () 2019-07-10 10:51:53 -07:00
a84eda4dab Monitoring Dashboard () 2019-07-09 18:08:04 -07:00
ce038bc6dd Remove OPEN_MATCH_DEMO_KUBERNETES_NAMESPACE ()
It doesn't work anyway, and getting it to work would require a lot more than what is here.

This resolves 
2019-07-09 15:06:54 -07:00
74fb195f41 FetchMatches e2e tests ()
* Add mmf and evaluator setup for e2e in cluster tests

* Add comments
2019-07-09 13:26:00 -07:00
1dc3fc8b6b Remove unused config values () 2019-07-09 12:08:27 -07:00
de469cb349 Config cleanup and improved health checking () 2019-07-09 11:12:31 -07:00
7462f32125 Update () 2019-07-09 10:09:58 -07:00
3268461a21 Use struct in assignment properties ()
Fixing because I noticed that we were still using string here.
Also, properties stated that it was optional and that Open Match didn't interpret the contents, which is true for all of Assignment's fields, so that is now clarified.
2019-07-09 00:07:11 -07:00
3897cd295e Open Match demo creating tickets, matches, and assignments ()
This is the working end to end demo!

There are 3 components to the demo:

- Uptime just counts up once every second.
- Clients simulates 5 game clients, which create a ticket, then wait for assignment on that ticket.
- Director simulates a single director, requesting matches and giving fake assignments to all of the tickets.
To run: make create-gke-cluster push-helm push-images install-chart proxy-demo
2019-07-08 18:10:46 -07:00
5a9212a46e Use google.rpc.Status for the assignment error. () 2019-07-08 15:56:40 -07:00
04cfddefd0 Add error to MMF harness signature. () 2019-07-08 14:55:55 -07:00
b7872489ae Use plurals for repeated proto fields ()
The style guide for protos states that repeated fields should have a plural name: https://developers.google.com/protocol-buffers/docs/style#repeated-fields
2019-07-08 13:45:18 -07:00
6d65841b77 Make default build-*-image rule point to cmd/*/Dockerfile () 2019-07-08 13:23:31 -07:00
b4fb725008 Break out rpc client cache for reuse. () 2019-07-08 11:44:27 -07:00
50d9a0c234 Add mmf and evaluator setup for e2e in cluster tests ()
* Add mmf and evaluator setup for e2e in cluster tests

* Fix

* Fix

* Fix
2019-07-08 10:21:44 -07:00
a68fd5ed1e Update helm dep and remove unused helm template () 2019-07-08 09:40:28 -07:00
be58fae864 Add structs package which simplifies proto struct literals () 2019-07-03 11:31:52 -07:00
f22ad9afc5 Demo scaffolding with uptime counter ()
This hooks up the demo webpage to connect a websocket.  It includes several 
related minor changes to get things working properly:

- "make proxy-demo" was broken because it was referencing helm's open-match-demo, which was merged with open-match.
- setup bookkeeping, such as health checks, configuration, logging.
- Turn on the demo in values.yaml, and only include one replica. (more than one demo instance would collide.)
2019-07-02 16:40:48 -07:00
6b1b84c54e Updater logic for the demo ()
Updater allows concurrent processes to update different fields on a json object
which is serialized and passed to a func([]byte). In the demo, different SetFunc
for different fields on the base json object will be passed to the relevant components
(clients, director, game servers, etc). These components will run and pass their state
to the updater.

This updater will be combined with bytesub by passing bytesub's AnnounceLatest
method into the base updater New. This way the demo state of each component
will be passed to all current dashboard viewers.
2019-07-02 16:12:31 -07:00
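The updater pattern described above can be sketched in Go. The names (`updater`, `setFunc`) and the map-backed state below are illustrative assumptions, not the actual demo code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

// updater serializes a set of named fields into one JSON object and
// passes the result to a publish callback, as described above.
type updater struct {
	mu      sync.Mutex
	fields  map[string]interface{}
	publish func([]byte)
}

func newUpdater(publish func([]byte)) *updater {
	return &updater{fields: map[string]interface{}{}, publish: publish}
}

// setFunc returns a setter bound to one field; each concurrent
// component gets its own setter and never touches the others' fields.
func (u *updater) setFunc(field string) func(interface{}) {
	return func(v interface{}) {
		u.mu.Lock()
		defer u.mu.Unlock()
		u.fields[field] = v
		b, _ := json.Marshal(u.fields)
		u.publish(b)
	}
}

func main() {
	var last []byte
	u := newUpdater(func(b []byte) { last = b })
	setClients := u.setFunc("clients")
	setClients(5)
	fmt.Println(string(last)) // prints {"clients":5}
}
```

In the real demo, the publish callback would be bytesub's AnnounceLatest, fanning the serialized state out to dashboard viewers.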
043ffd69e3 Add post validation to keep backend from sending matches with empty tickets ()
* Add post validation to keep backend from sending matches with empty tickets
2019-07-01 17:42:14 -07:00
e5f7d3bafe Implements a score-based evaluator for end to end test ()
* Implements a score-based evaluator for end to end test

* Add more tests

* Fix

* Add more tests
2019-06-28 12:40:56 -07:00
8f88ba151e Add preview deployment script () 2019-06-28 12:14:44 -07:00
9c83062a41 Add score to mmf () 2019-06-28 11:39:38 -07:00
7b31bdcedf Config based Swagger UI () 2019-06-28 10:46:36 -07:00
269e6cd0ad Helm charts and Makefile commands for in-cluster end to end tests. ()
* Add e2eevaluator and e2ematchfunction setup

* Checkpoint

* Update

* Fix
2019-06-28 10:23:20 -07:00
864f13f2e8 SwaggerUI now logs to Stackdriver and reads from matchmaker_config.yaml () 2019-06-27 16:45:32 -07:00
e3a9f59ad9 Add e2eevaluator and e2ematchfunction setup ()
* Add e2e setup

* Fix

* Fix
2019-06-27 15:56:02 -07:00
8a3f6e43b8 Add skip ticket checks to FetchMatches and Statestore service ()
* Add skip ticket checks to FetchMatches and Statestore service

* Fix

* Fix golangci

* Add ignore list tests
2019-06-27 13:25:25 -07:00
2b8597c72e Enable E2E Cluster Tests () 2019-06-27 10:39:15 -07:00
c403f28c04 Reduce CI Latency by Improving Waits and Reducing Docker Pulls () 2019-06-27 09:33:49 -07:00
317f914daa Enable error handling for evaluation ()
add error handling to evaluation
2019-06-26 08:13:41 -07:00
16fbc015b2 Reduce CI Times () 2019-06-25 16:37:42 -07:00
d1d9114ddb E2E Test Framework for k8s and in-memory clusters. () 2019-06-25 15:42:47 -07:00
e6622ff585 Add evaluator to E2E Minimatch tests () 2019-06-25 14:52:02 -07:00
99fb4a8fcf Codify Open Match Continuous Integration as a Terraform template. () 2019-06-25 14:29:19 -07:00
e0ebb139bf Implement core synchronizer functionality. ()
Implement core synchronizer functionality.
2019-06-25 14:06:50 -07:00
31dcbe39f7 Terraform documentation and change default project. () 2019-06-25 12:46:29 -07:00
76a1cd8427 Introduce Synchronizer Client that delays connecting to the synchronizer at runtime when processing FetchMatches () 2019-06-25 10:50:03 -07:00
ac6c00c89d Add Helm Chart component for Open Match Demo Evaluator () 2019-06-25 10:30:55 -07:00
a7d97fdf0d Fix compile issue with _setup referencing _test vars. () 2019-06-25 09:43:33 -07:00
f08121cf25 Expose services as backed by a LoadBalancer. () 2019-06-25 09:19:54 -07:00
cd6dd410ee Light reduction of log spam from errors. () 2019-06-24 06:59:54 -07:00
5f0a2409e8 Add calls to synchronizer to backend service. Currently set them to default disabled. () 2019-06-24 06:26:42 -07:00
d445a0b2d5 Make FetchMatches return direct result instead of streaming ()
* Make FetchMatches return direct result instead of streaming

* Fix
2019-06-22 18:25:05 -07:00
1526827e3c Refactor fetch matches tests to support more incoming test scenarios ()
* Checkpoint

* Refactor fetch matches tests to support more incoming test scenarios
2019-06-21 23:29:00 -07:00
82e60e861f Deindex ticket after assignment () 2019-06-21 16:23:50 -07:00
5900a1c542 Improve logging on server shutdown. () 2019-06-21 16:01:09 -07:00
a02aa99c7a Use distroless nonroot images. () 2019-06-21 15:29:25 -07:00
2f3f8b7f56 Rename Synchronizer methods in proto to better align with their functionality () 2019-06-21 12:44:32 -07:00
a7eb1719cc Merge demo chart into open-match chart. Also create a open-match repository. () 2019-06-21 11:57:06 -07:00
ea24b702c8 Reduce the size of the default chart for Open Match. () 2019-06-21 10:39:35 -07:00
e7ab30dc63 Fix CI: Change min GKE cluster version to not point to a specific version since they can go away at any time. () 2019-06-21 10:22:14 -07:00
8b88f26e4e Test e2e QueryTickets behaviors ()
* Test e2e QueryTickets behaviors

* Fix

* fix

* Fix angry bot

* Update
2019-06-20 18:10:19 -07:00
d5f60ae202 Simplify mmf config proto definitions ()
* Simplify mmf config proto definitions

* Update
2019-06-20 15:45:47 -07:00
113ee00a6c Add e2e tests to Assignment logic and refine redis.UpdateAssignment workflow ()
* Add e2e tests to Assignment logic
2019-06-20 14:16:11 -07:00
c083f1735a Make create ticket return precondition failure code when receiving n… ()
* Make create ticket return precondition failure code when receiving non-number properties

* Update based on feedback
2019-06-20 13:09:23 -07:00
52ad8de602 Consolidate e2e service start up code () 2019-06-19 17:18:46 -07:00
3daebfc39d Fix canary tagging. () 2019-06-19 14:56:16 -07:00
3e5da9f7d5 Fix Swagger UI errors. () 2019-06-19 13:14:02 -07:00
951e82b6a2 Add canary tagging. () 2019-06-19 12:58:16 -07:00
d201242610 Use pb getter to avoid program panics when required fields are missing () 2019-06-19 10:51:47 -07:00
1328a109e5 Move generated pb files from internal/pb to pkg/pb ()
* Move generated pb files from internal/pb to pkg/pb

* Update base on feedback
2019-06-18 18:04:27 -07:00
2415194e68 Reorganize e2e structure ()
* Reorganize e2e structure

* Fix golangci error

* Split up setup code based on feedback
2019-06-18 17:37:46 -07:00
b2214f7b9b Update module dependencies ()
* Update dependency version

* Update

* Fix makefile
2019-06-18 16:55:58 -07:00
98220fdc0b Add replicas to Open Match deployments to ensure statelessness. () 2019-06-18 14:52:24 -07:00
b2bf00631a Add more unit tests to frontend service () 2019-06-18 11:15:26 -07:00
49ac68c32a Update miniredis version () 2019-06-18 10:36:25 -07:00
7b3d6d38d3 Terraform Configs () 2019-06-18 06:56:28 -07:00
a1271ff820 Add more unit tests to backend ()
* Add more unit tests to backend

* Fix typo

* Fix typo
2019-06-17 18:22:05 -07:00
2932144d80 Add Evaluator Proto, Evaluator Harness and an example Evaluator using the harness. ()
* Add Evaluator Proto, Evaluator Harness and an example Evaluator using the harness.

This change just adds a skeleton of the sample evaluator to set up
building the harness. The actual evaluation logic for the example, tests and wiring up the example in the helm charts, demos and e2e tests etc., will follow in future PRs.

* fix golint issues
2019-06-17 17:18:14 -07:00
3a14bf3641 Add bytesub for broadcasting demo state ()
This will be used by the demo. The demo will have a central state, which will be updated, serialized to json, and then announced on an instance of ByteSub. Demo webpage clients will subscribe using a websocket to receive the latest state.
2019-06-17 16:32:31 -07:00
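The announce-latest behavior can be sketched as follows; `byteSub`, `Subscribe`, and `AnnounceLatest` are names modeled on the description above as assumptions, and the real bytesub package may differ:

```go
package main

import (
	"fmt"
	"sync"
)

// byteSub broadcasts the latest byte slice to all current subscribers.
type byteSub struct {
	mu   sync.Mutex
	subs map[chan []byte]bool
}

func newByteSub() *byteSub {
	return &byteSub{subs: map[chan []byte]bool{}}
}

// Subscribe registers a buffered channel that receives announcements.
func (b *byteSub) Subscribe() chan []byte {
	b.mu.Lock()
	defer b.mu.Unlock()
	c := make(chan []byte, 1)
	b.subs[c] = true
	return c
}

// AnnounceLatest pushes the newest state to every subscriber, dropping
// any stale value first so slow readers only see the latest snapshot.
func (b *byteSub) AnnounceLatest(v []byte) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for c := range b.subs {
		select {
		case <-c: // drop stale value
		default:
		}
		c <- v
	}
}

func main() {
	b := newByteSub()
	c := b.Subscribe()
	b.AnnounceLatest([]byte(`{"uptime":1}`))
	b.AnnounceLatest([]byte(`{"uptime":2}`))
	fmt.Println(string(<-c)) // prints {"uptime":2}
}
```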
7d0ec363e5 Move e2e test cases from /app folder to /e2e folder () 2019-06-17 15:09:52 -07:00
dcff6326b1 Rename Evaluator component to Synchronizer ()
The Synchronizer exposes APIs to give a synchronization window context
and to add matches to evaluate for that synchronization window. The
actual evaluator will be re-introduced as the component authored by the
customer that the Synchronizer triggers whenever the window expires.
2019-06-17 09:50:05 -07:00
ffd77212b0 Add test for frontend.GetAssignment method ()
* Refactor frontend service for unit tests
2019-06-15 00:35:53 -07:00
ea3e529b0d Add deployment phases to CI () 2019-06-14 16:46:14 -07:00
db2c298a48 Prepare CI for Cluster E2E Testing () 2019-06-14 14:50:13 -07:00
85d5f9fdbb Fix open match logo () 2019-06-14 11:50:18 -07:00
401329030a Delete the website, moved to open-match-docs. () 2019-06-14 07:20:11 -07:00
d9e20f9c29 Refactor frontend service for unit tests ()
* Refactor frontend service for unit tests

* Add more tests

* Fix

* Fix
2019-06-13 20:01:37 -07:00
f95164148f Temporarily disable md-test CI check () 2019-06-13 19:15:33 -07:00
ab39bcc93d Disable website autopush. Moved to open-match-docs. () 2019-06-13 15:28:26 -07:00
d1ae3e9620 Refactor mmlogic service for unit tests ()
* Refactor mmlogic service for unit tests

* Checkpoint

* Add test samples

* Update

* Fix golangci

* Update
2019-06-13 14:51:26 -07:00
de83c9f06a Refactor backend service for unit tests ()
* Refactor backend service for unit tests

* Refactor frontend service for unit tests

* Golangci fix

* Fix npe

* Add tests

* Fix bad merge

* Rewrite

* Fix hexakosioihexekontahexaphobia

* Fix type

* Add more tests

* go mod
2019-06-13 14:40:13 -07:00
9fb445fda6 Create cluster reaper for Open Match e2e tests. () 2019-06-13 13:55:54 -07:00
050367eb88 Use absolute paths in the makefile. Fix macos sed bug. () 2019-06-13 13:23:05 -07:00
40d288964b Fix a bug in redis connect ()
* Fix a bug in redis connect

* weird

* Fix go mod
2019-06-13 11:27:20 -07:00
e4c87c2c3a PodSecurityPolicy for Open Match () 2019-06-13 06:54:46 -07:00
bd2927bcc5 Add image for demo ()
This is barebones work to get an image for the demo working. This image will eventually contain the demo goroutines that emulate the clients and present a dashboard for the state of the demo. Will follow up with actual logic in the demo itself.

TESTED=Manually started om and the demo, ran proxy-demo and got the expected 404.
2019-06-12 15:15:42 -07:00
271e745a61 Read paging size and don't blow up on misconfiguration ()
This makes the behavior follow what the documentation on the min/max constants says.
It also defines a new default value that is more reasonable than the minimum.
2019-06-12 14:16:39 -07:00
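A sketch of the clamping behavior described above; the specific constant values are assumptions for illustration, not the project's actual limits:

```go
package main

import "fmt"

// Assumed illustrative bounds; the real min/max/default constants
// live in the project's config code.
const (
	minPageSize     = 10
	maxPageSize     = 10000
	defaultPageSize = 1000
)

// clampPageSize applies the documented min/max bounds instead of
// blowing up on a misconfigured (or zero) page size.
func clampPageSize(configured int) int {
	switch {
	case configured <= 0:
		return defaultPageSize // unset or nonsense: use the default
	case configured < minPageSize:
		return minPageSize
	case configured > maxPageSize:
		return maxPageSize
	default:
		return configured
	}
}

func main() {
	fmt.Println(clampPageSize(0), clampPageSize(3), clampPageSize(500), clampPageSize(99999))
	// prints 1000 10 500 10000
}
```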
98c15e78ad Fix program panics when calling proto subfields () 2019-06-12 12:02:15 -07:00
fbbe3cd2b4 Add tests for example match functions ()
* Add more tests

* Update

* Fix
2019-06-12 11:48:28 -07:00
878ef89c40 Remove unnecessary test files () 2019-06-12 11:11:33 -07:00
a9a5a29e58 Add filter package ()
This package will be useful for a simple definition of how filters work, as well as a way to process tickets without relying on indexes. This will allow other filter types (eg, strings) to be added without a redis implementation. See package documentation for some more details.

I plan on replacing the current redis indexing with an "all tickets" index to remove the edge cases it gets wrong that we don't want to spend time fixing for v0.6. Instead this package will be responsible for filtering which tickets to return. This also removes the index configuration problem from v0.6. Then for v0.7, once the indexing and database solutions are chosen, we can go back to implementing the correct way to index tickets.

Also included are test cases, separated in their own definition. These test cases should be used in the future for end to end tests, and for tests on indexes. This will help ensure that the system as a whole maintains the behavior specified here.
2019-06-11 16:32:44 -07:00
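A minimal sketch of index-free filtering as described above; `ticket`, `filter`, and `inRange` are hypothetical stand-ins for the real proto types:

```go
package main

import "fmt"

// ticket and filter are simplified stand-ins for the real protos.
type ticket struct {
	ID         string
	Properties map[string]float64
}

type filter struct {
	Attribute string
	Min, Max  float64
}

// inRange reports whether t passes every filter, evaluating tickets
// directly rather than consulting a redis index.
func inRange(t ticket, filters []filter) bool {
	for _, f := range filters {
		v, ok := t.Properties[f.Attribute]
		if !ok || v < f.Min || v > f.Max {
			return false
		}
	}
	return true
}

func main() {
	pool := []ticket{
		{ID: "a", Properties: map[string]float64{"mmr": 1200}},
		{ID: "b", Properties: map[string]float64{"mmr": 2400}},
	}
	fs := []filter{{Attribute: "mmr", Min: 1000, Max: 2000}}
	for _, t := range pool {
		if inRange(t, fs) {
			fmt.Println(t.ID) // prints a
		}
	}
}
```

Because the predicate is just Go code over the ticket, new filter types (e.g. strings) can be added without touching the redis implementation.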
8cd7cd0035 Update the behavior of backend.FetchMatches ()
* Update the behavior of backend.FetchMatches

* Update comments

* You fail I fail

* Fix

* Update

* Implement with buffered channel
2019-06-11 16:16:01 -07:00
92495071d2 Update golangci version and enable it in presubmit check ()
* bringitback

* Disable body close in golangci presubmit check
2019-06-11 13:39:02 -07:00
a766b38d62 Fix binauthz policy to allow for elasticsearch image. () 2019-06-10 20:58:26 -07:00
6c941909e8 Fix helm deletion error () 2019-06-10 15:53:22 -07:00
77dc8f8c47 Adds backend service tests ()
* Add more tests

* Adds backend service tests

* Fix golangci bot err

* update based on feedback

* Update

* Rename tests
2019-06-07 13:19:26 -07:00
a804e1009b Fix Shadowcheck ()
* Fix shadow check error

* Update
2019-06-07 11:08:52 -07:00
3b2efc39c7 e2e test with random port ()
* Checkpoint

* e2e tests with random ports

* Update based on feedback

* Fix

* Cleanup for review

* Update
2019-06-06 18:08:30 -07:00
b8054633bf Fix KinD make commands ()
The "v" was missing from the kind urls, so it wasn't properly downloaded.

Additionally, KinD does not update the kube config, so the makefile can't automatically configure future kubecfg commands to work properly. As an easy fix, just tell the user to run the commands themselves.

See  for KinD context.

This fixes 
2019-06-06 14:59:30 -07:00
336fad9079 Move harness to pkg/ directory and reorganize examples/ ()
* Move sample mmf and harness to pkg/ directory

* Fix makefile error

* Fix cloudbuild
2019-06-05 17:16:48 -07:00
ce59eedd29 Cleanup redundant createStore methods using statestoreTesting helpers ()
* Cleanup redundant createStore method using statestoreTesting helpers

* Fix unparam error

* Fix unparam error

* Update

* Fix bad merge
2019-06-05 16:40:53 -07:00
83c0913c34 Remove functionName from harness.FunctionSetting () 2019-06-05 16:19:26 -07:00
6b50cdd804 Remove test-hook that bypasses statestorage package to directly initialize Redis storage () 2019-06-05 16:07:53 -07:00
f427303505 Reuse existing helper functions to grab GRPC clients ()
* Reuse existing helper functions to grab GRPC clients

* Update based on feedback
2019-06-05 15:27:00 -07:00
269dd9bc2f Add Swagger UI to enable interactive calls. () 2019-06-05 08:31:29 -07:00
d501dbcde6 Add automation to apply Swagger UI directory. () 2019-06-05 07:18:27 -07:00
04c4e376b5 Use cos_containerd for node pool image type. () 2019-06-05 06:34:03 -07:00
3e61359f05 Create third party folder with grpc-gateway *.proto dependencies ()
* Create third party folder with grpc-gateway *.proto dependencies

* Enable automated third_party download

* third_party
2019-06-04 17:34:04 -07:00
8275ed76c5 Consolidate statestore/public.New signature ()
* Consolidate statestore.New signature

* Fix bad merge

* Fix bad merge
2019-06-04 17:22:55 -07:00
e8b2525262 Distinguish between example and demo. ()
There will be many example MMFs, but we specifically want a runnable demo. (The demo is an example, but not all examples are the demo.)

I changed the install/helm chart from example to demo, as it now specifically installs only the demo, with changes in the Makefile to reflect that.
I also added new make commands to build and push the demo images. Currently it only contains the example mmf, but it will in the near future contain the demo driver image.

Tested = Created a GKE cluster and installed via the make commands.
2019-06-04 16:52:55 -07:00
3517b7725c Add Kubernetes health checks. () 2019-06-04 16:01:30 -07:00
6cd521abf7 Remove TestServerBinding from component tests () 2019-06-04 15:18:29 -07:00
924fccfeb3 Update to new repository location ()
CI is broken.
2019-06-04 12:51:43 -07:00
c17ca7a10c Update GetAssignment and UpdateAssignment methods to meet design need ()
* Update assignment methods

* nolint on exponential backoff strat
2019-06-04 11:28:17 -07:00
b11863071f Revert presubmit () 2019-06-04 11:15:14 -07:00
20dbcea99f Fix short sha tags in docker builds. () 2019-06-04 11:02:20 -07:00
13505956a0 Enable golangci in presubmit () 2019-06-03 19:57:02 -07:00
3d04025860 Have sample MMF create simple 1v1 matches ()
The previous MMF required a weird behavior where you needed to set tickets in rosters
to be overridden. It would also break if there weren't enough tickets to fill the rosters. This
is a simpler example which takes tickets from the pool and assigns them into 1v1 matches.
2019-06-03 17:51:20 -07:00
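A sketch of the simpler pairing behavior, with tickets reduced to plain IDs (an assumption; the real sample MMF works with ticket protos from the pool):

```go
package main

import "fmt"

// make1v1Matches pairs tickets from the pool into 1v1 matches, as the
// simplified sample MMF above does; a leftover odd ticket is left
// unmatched rather than breaking.
func make1v1Matches(pool []string) [][2]string {
	var matches [][2]string
	for i := 0; i+1 < len(pool); i += 2 {
		matches = append(matches, [2]string{pool[i], pool[i+1]})
	}
	return matches
}

func main() {
	fmt.Println(make1v1Matches([]string{"t1", "t2", "t3", "t4", "t5"}))
	// prints [[t1 t2] [t3 t4]]
}
```

Unlike the previous roster-override approach, this never fails when there are too few tickets: it simply produces fewer matches.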
d0f7f2c4d3 Implemented backend rest config logic ()
* Implemented backend rest config logic

* Remove unnecessary logs and let logic return rpc status errors

* Fix chatty bot

* Fix bad merge
2019-05-31 17:12:03 -07:00
272e7642b1 Move back to helm2 and adjust cluster size again. () 2019-05-31 15:38:27 -07:00
f3f80a70bd Reorganize backend service and e2e test for incoming rest config logic ()
* Reorganize backend service and e2e test for incoming rest config logic
2019-05-31 12:57:50 -07:00
80bcd9487f Make backoff strategy configurable () 2019-05-31 11:32:26 -07:00
2e25daf474 Increase the size of the default cluster. We are hitting vCPU limits () 2019-05-30 17:51:03 -07:00
9fe32eef96 Implements frontend and backend Assignments method with tests ()
* Implements backend AssignTickets method

* implement backend get assignment method with tests

* Fix test comments

* Remove redundant log

* Go mod tidy
2019-05-30 20:18:17 -04:00
0446159872 Modify FunctionConfig to indicate mmf server type ()
* Make backend service support secure mmf server
2019-05-30 19:59:29 -04:00
2ef8614687 Switch to Helm 3-alpha1 () 2019-05-30 16:03:28 -07:00
de8279dfe0 Add server/client test helpers and a basic test. () 2019-05-30 15:42:39 -07:00
8fedc2900f Implements evaluator harness and a default example ()
* Implements evaluator harness and a default example
2019-05-30 18:17:35 -04:00
0f95adce20 Update copyright headers () 2019-05-30 14:53:02 -07:00
4f851094a0 Fix some subtle TLS bugs and remove clientCredentialsFromFileData () 2019-05-30 06:37:01 -07:00
9cae854771 Enable most of the golangci checks and fix internal/set tests ()
* Enable most of the golangci checks and fix internal/set tests
2019-05-29 18:33:54 -04:00
603089f404 Add Binary Authorization commands. () 2019-05-29 14:59:03 -07:00
d024b46487 Enable GKE Autoscaling () 2019-05-29 14:22:33 -07:00
a2616870c7 Splitup host and prefix variables () 2019-05-29 14:27:37 -04:00
6d8b516026 Expose minimatch config to test context () 2019-05-29 14:13:21 -04:00
b4d3e84e3d Update tools, help, and scope some global vars. () 2019-05-29 07:48:21 -07:00
6b370f56c8 E2E tests for Open Match using Minimatch. ()
* E2E tests for Open Match using Minimatch.

The test binary starts all core Open Match services and a sample MMF in
proc. It then uses some test data to create tickets, query for Pools and
fetch matches and validate that the pools and matches have expected
tickets.
2019-05-25 00:23:01 -07:00
d5da3d16b7 Fix minor issues that were encountered in an E2E test case for generating matches. ()
* Fix minor issues that were encountered in an E2E test case for generating matches. Here are the issues this change fixes:

1. Add error check to avoid accessing failed connection in tcp net listener
2. Divide by zero error in the redis state storage library if page size is 0.
3. Ability to support null hostname when connecting to mmlogic client in match function harness.
4. Set min / max page size limits to initialize correctly if page size is set to unsupported values.

* review changes
2019-05-22 22:44:03 -07:00
31469bb0f9 Modify the Function Configuration proto to improve code readability. ()
The current proto names both the variable bearing the function config and the config proto itself "grpc", due to which proto generation produces structures representing the variable and the type that differ only by an '_'. Code using this style turns out confusing and hard to read. Renaming the type so it differs from the variable improves code readability.
2019-05-21 22:52:07 -07:00
e8adc57f76 Add input validation to FetchMatches. The call now fails with ()
InvalidArgument if match configuration is missing or unsupported.
2019-05-21 14:36:24 -07:00
0da8d0d221 Fix 'make delete-chart' - use ignore-not-found properly ()
Without '=true', it complains that crd is not a supported kubectl command (because it thinks delete is an arg to ignore-not-found.)
2019-05-21 10:15:27 -07:00
08d9210588 Fix: Frontend config specifies grpcport properly. ()
It appears the code already reads this properly. It's just that the missing config probably defaulted to 0, which still works when assigning a port.
2019-05-20 16:09:11 -07:00
8882d8c9a1 Fix mmf harness reading port from kub config () 2019-05-20 15:40:23 -07:00
6c65e924ec Use proto3's struct for properties, not string. ()
This makes the API more usable for both json clients (who no longer
have to encode json into a string and put that into their json) and also
for proto clients (who no longer have to use json...). When converted
to json, struct will be encoded directly into an object, which is much
more convenient.

The majority of the rest of this change is fixing tickets in tests.

This removes the dependency on the json parser that was used for reading
from properties.

I also added a test on indexing and filtering tickets from the state
store, because I can't help myself and I have a problem.
2019-05-20 13:33:55 -07:00
beba937ac5 Add a new Evaluator service to Open Match core components to perform Match evaluation. ()
Evaluator component synchronizes all the generated proposals, evaluates
them for quality, deduplicates proposals, and generates result
matches. Backend service can scale with the number of backend requests,
but the evaluator acts as the single-point aggregator for results;
hence it needs to be a separate service.

This change only introduces the scaffolding for the evaluator, adding
it as a core component to Open Match build, deployment, and other
tooling. This change does not actually wire the evaluator up into the
match generation flow. The change that adds the core evaluator logic
will follow.
2019-05-17 16:24:18 -07:00
caa755272b Implement FetchMatches on Backend Service () 2019-05-17 16:04:16 -07:00
ea60386fa0 Implement clientwrapper for harness and testing ()
* Implement clientwrapper for harness and testing

* Disable clientAuthAndVerify on tlsServer

* Remove stale temp file codes from clients_test.go
2019-05-17 15:31:55 -07:00
9000ae8de4 Have redis filter query return full tickets ()
* Have redis filter query and return full tickets

* break out paging logic from redis filter

* Per code review, return grpc statuses
2019-05-17 15:21:23 -07:00
4e2b64722f Splits up stress test utility codes and implements mmlogic stress test () 2019-05-17 12:27:50 -07:00
4c0f24217f Add validation for markdown links () 2019-05-17 09:20:03 -07:00
872b7be6a5 Fixing URL for getting started guide. () 2019-05-17 08:52:00 -07:00
53f2ee208f Proto changes for implementing Backend Service () 2019-05-16 13:49:32 -07:00
b5eaf153e8 Implement Frontend Service Create / Get / Delete Ticket () 2019-05-15 22:39:26 -07:00
d3c7eb2000 Rename serving functions and params with 'Server' prefix () 2019-05-15 14:29:29 -07:00
23243e2815 Add SwaggerUI to website () 2019-05-15 09:02:08 -07:00
3be97908b2 Add make targets for creating TLS certificates. () 2019-05-15 08:28:43 -07:00
5892f81214 Implement the Mmlogic Service () 2019-05-14 18:15:50 -07:00
40892c9b2e Add option to serve with a trusted CA cert. () 2019-05-14 17:18:26 -07:00
9691d3f001 Refactor serving package to move to serving/rpc path. ()
* move serving codes to rpc directory to hide tls util methods

* nolint on unused tls util codes
2019-05-14 15:36:32 -07:00
9808066375 Move open match core components from future folders to actual destination. ()
* Remove future
2019-05-14 14:34:19 -07:00
534716eef4 Fix golangci errors ()
* Fix golangci errors
2019-05-14 13:35:54 -07:00
ece4a602d0 Move binaries to cmd/ () 2019-05-14 13:04:08 -07:00
8eb72d98b2 Delete old Open Match () 2019-05-14 11:04:52 -07:00
6da8a34b67 Ignore errors from make delete-kind-cluster () 2019-05-14 10:37:39 -07:00
b03189e34c Delete evaluator () 2019-05-14 06:40:13 -07:00
9766871a87 new mmf service impl ()
* mmf service impl
2019-05-13 19:16:08 -07:00
4eac4cb29a New MMF harness Makefile and skeletons ()
* New MMF harness Makefile and skeletons

* Not simple
2019-05-13 16:28:10 -07:00
439286523d Remove old mmf codes ()
* Remove old mmf codes

* cleanup makefile

* Cleanup cloudbuild

* makes cloudbuild great again

* Delete unmarshal.go

* Delete unmarshal_test.go
2019-05-13 15:46:37 -07:00
e0058c7c08 Get started guide ()
* Get started guide and fix development guide typo
2019-05-13 15:08:58 -07:00
ec40f26e62 Fix make all dependency () 2019-05-13 11:37:30 -07:00
17134f0a40 Break up tls util codes ()
* Break up tls util codes
2019-05-13 11:16:18 -07:00
add2464b33 Basic experimental knative instructions () 2019-05-13 10:58:17 -07:00
b72b4f9b54 Redis implementation for State Storage methods for Tickets () 2019-05-10 16:03:56 -07:00
abdc3aca28 Clean up unused bindata () 2019-05-10 10:43:26 -07:00
3ab724e848 Initialize state storage in Frontend, Backend and MMLogic services () 2019-05-09 23:38:03 -07:00
3c8d0ce1b0 Fix the URL for install yamls. () 2019-05-09 15:58:11 -07:00
c0166e3176 Refactor protos to match CRUD operations () 2019-05-09 15:05:03 -07:00
3623adb90e Fix URLs, Post Submit, add gofmt to presubmit () 2019-05-09 11:12:31 -07:00
fba1bcf445 Fix post commit () 2019-05-09 06:57:17 -07:00
fdd865200a Set CORS policy on open-match.dev () 2019-05-08 14:40:27 -07:00
b0fc8f261f Remove helm chart autopush () 2019-05-08 14:11:08 -07:00
bdd3503d80 Add post commit for website auto push. () 2019-05-08 13:25:20 -07:00
81fd2bab83 Add abstraction to storage layer to decouple Redis from Open Match Services () 2019-05-08 12:54:34 -07:00
212a416789 Add Monitoring Configuration to Open Match () 2019-05-08 12:13:25 -07:00
2425e28129 Improve development guide and remove old user guide. () 2019-05-08 11:33:29 -07:00
3993a2f584 Delete old open match. gRPC harness will remain for now because it does not have an equivalent yet. () 2019-05-08 10:21:32 -07:00
05cb4e503f Fix broken link. () 2019-05-08 09:36:18 -07:00
1cf11e7d81 Add image-spec annotations () 2019-05-07 19:40:28 -07:00
1985ecefed Fix issues with website before launch () () 2019-05-07 19:17:57 -07:00
b7ebb60325 Update helm chart dependencies, and add Jaeger for tracing. () 2019-05-07 16:04:38 -07:00
e4651d9382 Add minimatch binary to gitignore () 2019-05-07 15:09:37 -07:00
04a574688a Monitoring package for Open Match () 2019-05-07 13:49:50 -07:00
d4a901fc71 Update frontend stress to use new API. () 2019-05-07 13:27:50 -07:00
5de79f90cf Fix mmlogic readiness probe. () 2019-05-07 11:20:34 -07:00
e42c8a0232 Add KinD support for OM deployments. () 2019-05-07 11:03:26 -07:00
1503ffae3a Add make proxy*, update tools, and cleanup make output. () 2019-05-07 10:42:50 -07:00
a842da5563 Download includes before using protoc tools. () 2019-05-07 07:07:08 -07:00
c3d6efef72 Use golang vanity url: open-match.dev/open-match () () 2019-05-06 07:57:43 -07:00
0516ab0800 Minimatch for 0.6.0 () 2019-05-06 07:33:47 -07:00
668bfd6104 updating release documentation and small edits to README ()
* updating release documentation and small edits to README

* Update release.md

* Update README.md
2019-05-03 19:03:35 -07:00
ef933ed6ef Add stubbed abstracted Redis client. ()
* Add stubbed abstracted Redis client.

* Make this work
2019-05-03 15:01:32 -07:00
3ee24e3f28 Add documentation on how to update the docs. () 2019-05-03 11:32:59 -07:00
d0bd794a61 Move the future/fake_frontend to use the future/pb. () 2019-05-03 10:58:41 -07:00
37bbf470de Replace helm chart for the new binaries. () 2019-05-03 10:34:14 -07:00
412cb8557a Move all deprecated Makefile targets to the bottom. () 2019-05-03 09:31:29 -07:00
38e81a9fd1 Revamp the website to include the basics and prepare for launch. () 2019-05-02 15:59:48 -07:00
cb24baf107 Add gRPC/TLS and HTTPS serving support. () 2019-05-02 15:12:13 -07:00
c6f6526823 Add top level Swagger annotations for API. () 2019-05-02 13:28:31 -07:00
41e441050f Add main() for new binaries and wire up to CI () 2019-05-02 12:44:29 -07:00
235e25c748 Remove 404ing pages. () 2019-05-01 20:19:25 -07:00
93ca5e885c make test now does coverage () 2019-05-01 19:59:18 -07:00
5d67fb1548 Add Root CA support to certgen. () () 2019-05-01 17:35:55 -07:00
faa6e17607 Rename CreateTicketsResponse to CreateTicketResponse for consistency. () 2019-05-01 14:53:08 -07:00
6a0c648a8f Add missing copyright headers and godoc package comments. () 2019-05-01 14:23:35 -07:00
8516e3e036 Frontendapi load testing impl ()
* Frontendapi Stress Tests
2019-05-01 11:55:09 -07:00
e476078b9f Auto-generate new protobufs and APIs. () 2019-04-30 14:24:21 -07:00
0d405df761 Fix up some lint errors. () 2019-04-30 12:53:58 -07:00
06c1208ab2 Remove log alias from logrus imports () 2019-04-30 12:38:15 -07:00
af335032a8 Add protoc-gen-swagger proto options to includes. () 2019-04-30 11:34:36 -07:00
a8be8afce2 Add insecure gRPC serving to future/. () 2019-04-30 08:46:42 -07:00
e524121b4b Clarify in release issue to use release notes drafted in rc1 of a re… ()
* Clarify in release issue to use release notes drafted in rc1 of a release. Also improve wording around instructions to create the release.
2019-04-29 17:45:17 -07:00
871abeee69 Initial protos for Open Match 0.6 () 2019-04-29 16:39:49 -07:00
b9af86b829 Allow creating global loggers. () 2019-04-29 16:15:14 -07:00
6a9572c416 Fix the trailing slash issue in Makefile () 2019-04-29 15:44:55 -07:00
636eb07869 Use n1-highcpu-32 machine and cache base image for 2 days in CI () () 2019-04-29 15:04:38 -07:00
d56c983c17 Add readiness probe and remove redis sanity check 2019-04-29 14:02:32 -07:00
a8e857c0ba Introduce internal/future/ directory with its first file, fake_frontend.go. () () 2019-04-29 12:55:42 -07:00
75da6e1f4a Update master to use 0.0.0-dev for version () 2019-04-29 10:41:54 -07:00
e1fba5c1e8 Update the vanity url to open-match.dev/open-match () 2019-04-29 10:04:40 -07:00
d9911bdfdd golangci support () 2019-04-29 09:36:29 -07:00
175293fdf9 Consolidate build and push docker images for better CPU utilization () 2019-04-29 07:00:41 -07:00
01407fbcad Rephrase makefile set-redis-password command () 2019-04-29 06:29:11 -07:00
edad339a76 Make matchmaker_config.yaml a part of Helm chart config ()
* Move config pkg under internal/; matchmaker_config.yaml is a part of Helm chart config now

* ignore data race in some tests
2019-04-25 21:51:20 -07:00
c57c841dfc Add ListenerHolder.AddrString() to avoid bad IPv6 assumptions. () 2019-04-25 21:31:15 -07:00
54dc0e0518 Update gcloud.md steps to work with free tier ()
While following documentation to create a cluster for Open Match using GCP, the command line example says to create using n1-standard-4 machine type. 
It seems like it is not possible when on a free tier, at least on the default settings: 

`C:\Program Files (x86)\Google\Cloud SDK>gcloud container clusters create --machine-type n1-standard-4 open-match-dev-cluster --zone us-west1-a --tags open-match`

`ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Insufficient regional quota to satisfy request: resource "CPUS": request requires '12.0' and is short '4.0'. project has a quota of '8.0' with '8.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=XXXXX`

To remove as many potential hurdles as possible for people new to GCP, I would suggest replacing it with n1-standard-2, which works straight away, as I expect Open Match can work with it.
2019-04-25 09:07:24 -07:00
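A free-tier-friendly variant of that command might look like the sketch below. This is an illustration based on the quota math in the error above, not a verified recommendation; the cluster name, zone, and tags are carried over from the original example:

```shell
# Sketch: the same cluster-creation command with n1-standard-2 instead of
# n1-standard-4. With the default 3 nodes, 3 x 2 vCPUs = 6 CPUs, which fits
# the free tier's default regional quota of 8 (3 x 4 = 12 does not).
# Requires an authenticated gcloud SDK with a project configured.
gcloud container clusters create open-match-dev-cluster \
    --machine-type n1-standard-2 \
    --zone us-west1-a \
    --tags open-match
```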
fa4e8887d0 Update README.md to add Windows batch line version ()
* Fixing a few typos
* On the Windows shell, you can't use backticks to capture the result of a program call as a string; I believe the standard way to do that is to use the **for** command to capture the result in a variable and then use it. I included that version in the "Deploying Open Match" section, as the current version does not work on Windows.
2019-04-25 08:12:16 -07:00
8384cb00b2 Self-Certificate generation for Open Match. () 2019-04-25 07:40:35 -07:00
b9502a59a0 Fix REST proxy and added proxy health check tests ()
* Add proxy tests with swagger and healthcheck handlers

* Block grpcserver Start() until fully initialized

* Serve each .swagger.json file on its corresponding REST endpoint

* Resolve comments

* Add waitgroup to fully initialize the grpcserver
2019-04-24 15:30:31 -07:00
139d345915 Fix win64 and binary dependencies. () 2019-04-24 13:36:03 -07:00
5ea5b29af4 Make release issue a github issue template () 2019-04-24 12:34:50 -07:00
812afb2d06 Backend Client should keep generating matches, displaying failures if any () 2019-04-24 10:51:29 -07:00
cc82527eb5 Update documentation to reflect 0.5.0 changes. () 2019-04-24 09:38:28 -07:00
80b20623fb Include cloudbuild in places version needs to be set. Specify rc for all versions () 2019-04-23 17:05:28 -07:00
a270eab4b4 Update release notes based on feedback. () () 2019-04-23 16:07:14 -07:00
36f7dcc242 Makefile for windows () 2019-04-23 15:22:13 -07:00
a4706cbb73 Automatically publish development.open-match.dev on Post Commit () () 2019-04-23 13:37:51 -07:00
c09cc8e27f make clean now deletes build/ () 2019-04-23 11:15:35 -07:00
be55bfd1e8 Increase version to 0.5.0-rc1 ()
* Increase version to 0.5.0-rc1

* Increment version to 0.5.0-rc1 in cloudbuild.yaml
2019-04-22 18:09:36 -07:00
8389a62cf1 Release Process for Open Match () 2019-04-22 14:12:02 -07:00
af8895e629 Remove knative link because it fails in tests. () 2019-04-22 11:47:51 -07:00
2a3241307f Properly set tag and repository when making install/yaml/ () 2019-04-22 11:13:03 -07:00
f777a4f407 Publish all install/yaml/*.yaml files. ()
* Publish all install/yaml/*.yaml files.

* Update instructions and add publish post commit.

* Add yaml/
2019-04-22 10:02:49 -07:00
88ca8d7b7c Documentation () 2019-04-21 17:23:54 -07:00
3a09ce142a Fix namespace issues in example yaml () 2019-04-19 17:03:56 -07:00
8d8fdf0494 Add vanity url redirection support. () 2019-04-19 16:33:00 -07:00
45b0a7c38e Remove deprecated examples, evaluator and mmforc () 2019-04-19 15:34:21 -07:00
4cbee9d8a7 Remove deprecated artifacts from build pipeline () 2019-04-19 14:46:49 -07:00
55afac2c93 Embed profile config in the container to be used for standalone executions. ()
Embed profile config in the container to be used for standalone executions. Will create a separate issue to figure out a better way to do this.
2019-04-19 14:07:41 -07:00
8077dbcdba Changes to make the demo steps easier () 2019-04-19 11:30:30 -07:00
f1fc02755b Update theme and logo for Open Match website () 2019-04-19 11:01:31 -07:00
0cce1745bc Changes to Backend API and Backend Client to support GRPC Function Ha… ()
* Changes to Backend API and Backend Client to support GRPC Function Harness
2019-04-19 10:41:15 -07:00
d57b5f1872 Helm chart changes to not install mmforc and deploy function Service () ()
* Helm chart changes to not install mmforc and deploy function Service
2019-04-19 10:17:06 -07:00
1355e5c79e Fix lint issues in helm chart and improve lint coverage. () 2019-04-19 09:49:42 -07:00
4809f2801f Add Open Match Logo () 2019-04-19 08:28:13 -07:00
68d323f3ea 2nd pass of lint errors. () 2019-04-19 05:42:57 -07:00
b99160e356 Fix grpc harness startup panic due to http proxy not being set up () 2019-04-18 20:02:04 -07:00
98d4c31c61 Fix most of the lint errors from golangci. () 2019-04-18 18:15:46 -07:00
b4beb68920 Reduce log spam in backendapi () 2019-04-18 15:39:41 -07:00
b41a704886 Bump versions of dependencies () 2019-04-18 14:12:05 -07:00
88a692cdf3 Evaluator Harness and Sample golang Evaluator ()
* Evaluator Harness and sample serving Evaluator
2019-04-18 12:35:37 -07:00
623519bbb4 Core Logic for the GRPC Harness () ()
Core Logic for the MatchFunction GRPC Harness
2019-04-18 12:16:38 -07:00
655abfbb26 Example MMF demonstrating the use of the GRPC harness () 2019-04-18 10:10:06 -07:00
ac81b74fad Add Kaniko build cache ()
* Add Kaniko build cache - partly resolves 
2019-04-18 00:30:02 -07:00
ba62520d9c Prevent sudo on Makefile for commands that require auth. () 2019-04-17 20:58:16 -07:00
0205186e6f Remove install/yaml/ it will be moved to release artifacts. ()
* Remove install/yaml/ it will be moved to release artifacts.

* Add the ignore files.

* Create install/yaml/ directory for output targets.
2019-04-17 17:50:39 -07:00
ef2b1ea0a8 Implement REST proxy initializations and modified tests accordingly ()
This commit resolves  and generates swagger.json files for API visualization
2019-04-17 17:28:36 -07:00
1fe2bd4900 Add 'make presubmit' to keep generated files up to date. () 2019-04-17 17:04:05 -07:00
5333ef2092 Enable cloudbuild dev site to fix local cloud build error () 2019-04-17 16:17:01 -07:00
09b727b555 Remove the deprecated deployment mechanism for openmatch components () 2019-04-17 15:45:38 -07:00
c542d6d1c3 Serving GRPC Harness and example MMF scaffolding () ()
* Serving GRPC Harness and example MMF scaffolding

* Serving GRPC Harness and example MMF scaffolding

* Update logger field to add function name

* Update harness to use the TCP listener
2019-04-17 14:57:01 -07:00
8f3f7625ec Increases parallelism of the build () 2019-04-17 13:07:39 -07:00
6a4f309bd5 Remove temp files. () 2019-04-17 12:41:38 -07:00
26f5426b61 Disable logrus.SetReportCaller() () 2019-04-17 12:26:43 -07:00
f464b0bd7b Fix port allocation race condition during tests. () 2019-04-17 11:54:56 -07:00
092b7e634c Move GOPROXY=off to CI only. () 2019-04-17 11:01:37 -07:00
454a3d6cca Bump required Go version because of a dependency. () 2019-04-15 20:24:09 -07:00
50e3ede4b9 Remove use of GOPATH from Makefile () 2019-04-15 16:19:31 -07:00
6c36145e9b Mini Match () 2019-04-12 16:16:42 -07:00
47644004db Add link tests for website and removed broken links. () 2019-04-12 15:26:32 -07:00
1dec4d7555 Unify gRPC server initialization () 2019-04-12 12:47:27 -07:00
1c6f43a95f Add a link to the build queue. () 2019-04-12 11:38:50 -07:00
0cea4ed713 Add temporary redirect site for Open Match () 2019-04-12 11:24:23 -07:00
db912b2d68 Add reduced permissions for mmforc service account. () 2019-04-12 10:25:19 -07:00
726b1d4063 CI with Makefile () 2019-04-12 07:51:10 -07:00
468aef3835 Ignore files for Mini Match. () 2019-04-11 15:16:26 -07:00
c6e257ae76 Unified gRPC server initialization ()
* Unified gRPC server initialization

* Fix closure and review feedback
2019-04-11 15:06:07 -07:00
8e071020fa Kubernetes YAML configs for Open Match. () 2019-04-11 14:28:27 -07:00
c032e8f382 Detect sudo invocations to Makefile () 2019-04-11 14:09:52 -07:00
2af432c3d7 Fix build artifacts
Fix build artifacts issue 
2019-04-11 13:23:44 -07:00
4ddceb62ee fixed bugs in py3 mmf ()
fix py3 mmf image
2019-04-11 06:32:59 -07:00
ddb4521444 Add license preamble to proto and dockerfiles. () 2019-04-10 20:24:31 -07:00
86918e69eb Replace CURDIR with REPOSITORY ROOT 2019-04-10 16:32:01 -07:00
2d6db7e546 Remove manual stats that ocgrpc interceptor already records. 2019-04-10 16:21:42 -07:00
fc52ef6428 REST Implementation 2019-04-10 15:34:49 -07:00
1bfb30be6f Fix redis connection bugs and segfault in backendclient. () 2019-04-10 13:27:41 -07:00
9ee341baf2 Move configs from backendclient image to ConfigMap. () 2019-04-10 12:59:12 -07:00
7869e6eb81 Add opencensus metrics for Redis 2019-04-10 12:36:35 -07:00
7edca56f56 Disable php-proto building since it's missing gRPC client 2019-04-10 10:06:42 -07:00
eaedaa0265 Split up README.md and add project logo. 2019-04-10 08:26:21 -07:00
9cc8312ff5 Rename Function to MatchFunction and modify related protos () 2019-04-10 08:15:40 -07:00
2f0a1ad05b updating app.yaml 2019-04-09 20:47:33 -07:00
2ff77ac90b Fix 'make create-gke-cluster' ()
It is missing a dash on one of the arguments, which breaks things.
2019-04-09 15:59:16 -07:00
2a3cfea505 Add base package file for godoc index and go get. 2019-04-09 14:16:54 -07:00
b8326c2a91 Fix build dependencies to build/site/ 2019-04-09 14:05:03 -07:00
ccc9d87692 Disable the PHP example during the CI build. 2019-04-09 12:01:34 -07:00
bba49f3ec4 Simplify the go package path for proto definitions 2019-04-09 11:41:29 -07:00
632157806f Remove symlinks to config files because they are mounted via ConfigMaps. 2019-04-09 11:11:36 -07:00
6e039cb797 Delete images and scripts obsoleted by Makefile. 2019-04-09 10:40:53 -07:00
8db062f7b9 Use Request/Response protos in gRPC servers. 2019-04-03 21:11:42 -07:00
f379a5eb46 Disable 'Lint: Kubernetes Configs'
It is currently failing.
2019-04-03 18:28:24 -07:00
f3160cfc5c generate install.yaml with Helm
fixed helm templates

changes in helm templates

adding redis auth to the helm chart

helm templates changes

makefile: gen-install

make set-redis-password

make gen-install

fixing indentation in Makefile

remove old redis installation

use public images in install/yaml/

remove helm chart meta from static install yaml files

fixing cloudbuild

remove helm chart meta from static install yaml files

workaround for broken om-configmap data formatting

make gen-prometheus-install

drop namespace in OM resources definitions

override default matchmaker_config at Helm chart installation

fixed Makefile after rebase

matchmaker config: use latest public images

1) Install Redis in the same namespace as Open Match; 2) make namespace and Helm release names consistent in all places
2019-04-03 13:40:13 -07:00
442a1ff013 Update dependencies and resolve issue 2019-04-02 20:21:14 -07:00
0fb75ab35e Delete old cloudbuild.yaml files, obsoleted by PR 2019-04-02 11:23:14 -07:00
6308b218cc Minimize dependency on Viper and make config read-only. 2019-04-02 07:46:18 -07:00
624ba5c018 [charts/open-match] fix mmlogicapi service selector 2019-04-01 18:10:15 -07:00
82d034f8e4 Fix dependency issues in the build. 2019-04-01 11:05:57 -07:00
97eed146da update protoc version to 3.7.1
This fixes the bug outlined here https://github.com/protocolbuffers/protobuf/issues/5875
2019-04-01 09:49:19 -07:00
6dd23ff6ad Merge pull request from jeremyje/master
Merge 040wip into master.
2019-03-29 14:29:22 -07:00
03c7db7680 Merge 040wip 2019-03-28 11:12:07 -07:00
e5538401f6 Update protobuf definitions 2019-03-26 17:45:52 -07:00
eaa811f9ac Add example helm chart, replace example dashboard. 2019-03-26 17:45:28 -07:00
3b1c6b9141 Merge 2019-03-26 15:26:17 -07:00
34f9eb9bd3 Building again 2019-03-26 12:31:19 -07:00
3ad7f75fb4 Attempt to fix the build 2019-03-26 12:31:19 -07:00
78bd48118d Tweaks 2019-03-26 12:31:19 -07:00
3e71894111 Merge 2019-03-26 12:31:19 -07:00
36decb4068 Merge 2019-03-26 12:31:19 -07:00
f79b782a3a Go Modules 2019-03-26 11:14:48 -07:00
db186e55ff Move Dockfiles to build C#, Golang, PHP, and Python3 MMFs. 2019-03-26 09:54:10 -07:00
957465ce51 Remove dead code that was moved to internal/app/mmlogicapi/apisrv/ 2019-03-25 16:14:25 -07:00
478eb61589 Delete unnecessary copy of protos in frontendclient. 2019-03-25 16:13:56 -07:00
6d2a5b743b Remove executable bit from files that are not executable. 2019-03-13 09:31:24 -07:00
9c943d5a10 Fix comment 2019-03-12 22:04:42 -07:00
8293d44ee0 Fix typos in comments, set and playerindices 2019-03-12 22:04:42 -07:00
a3bd862e76 store optional Redis password inside the Secret 2019-03-12 21:52:59 -07:00
c424d5eac9 Update .gcloudignore to include .gitignore's filters so that Cloud Build packages don't upload binaries. 2019-03-11 16:29:50 +09:00
2e6f5173e0 Add Prometheus service discovery annotations to the Open Match servers. 2019-03-11 16:25:21 +09:00
ee4bba44ec Makefile for simpler development 2019-03-11 16:14:00 +09:00
8e923a4328 Use grpc error codes for responses. 2019-03-11 16:13:06 +09:00
52efa04ee6 Add RPC dashboard and instructions to add more dashboards. 2019-03-07 10:58:53 -08:00
67d4965648 Helm charts for open-match, prometheus, and grafana 2019-03-06 17:09:09 -08:00
7a7b1cb305 Open Match CI support via Cloud Build 2019-03-04 09:41:19 -08:00
377a9621ff Improve error handling of Redis open connection failures. 2019-02-27 19:35:23 -08:00
432dd5a504 Consolidate Ctrl+Break handling into its own go package. 2019-02-27 17:52:58 +01:00
7446f5b1eb Move out Ctrl+Break wait signal to its own package. 2019-02-27 17:52:58 +01:00
15ea999628 Remove init() methods from OM servers since they aren't needed. 2019-02-27 08:58:39 +01:00
b5367ea3aa Add config/ in the search path for configuration so that PWD/config can be used as a ConfigMap mount path. 2019-02-25 16:49:35 -08:00
e022c02cb6 golang mmf serving harness 2019-02-25 04:54:02 -05:00
a13455d5b0 Move application logic from cmd/ to internal/app/ 2019-02-24 13:56:48 +01:00
16741409e7 Cleaner builds using svn for github 2019-02-19 09:24:50 -05:00
d7e8f8b3fa Testing 2019-02-19 07:30:26 -05:00
8c97c8f141 Testing2 2019-02-19 07:26:11 -05:00
6a8755a13d Testing 2019-02-19 07:24:10 -05:00
4ed6d275a3 remove player from ignorelists on frontend.DeletePlayer call 2019-02-19 20:01:29 +09:00
cb49eb8646 Merge remote-tracking branch 'origin/calebatwd/knative-rest-mmf' into 040wip 2019-02-16 04:01:01 -05:00
a7458dabf2 Fix test/example paths 2019-02-14 10:56:33 +09:00
5856b7d873 Merge branch '040wip' of https://github.com/GoogleCloudPlatform/open-match into 040wip 2019-02-11 01:23:06 -05:00
7733824c21 Remove matchmaking config file from base image 2019-02-11 01:22:23 -05:00
f1d261044b Add function port to config 2019-02-11 01:21:28 -05:00
95820431ab Update dev instructions 2019-02-11 01:20:55 -05:00
0002ecbdb2 Review feedback. 2019-02-09 15:28:48 +09:00
2eb51b5270 Fix build and test breakages 2019-02-09 15:28:48 +09:00
1847f79571 Convert JSON k8s deployment configs to YAML. 2019-02-09 15:17:22 +09:00
58ff12f3f8 Add stackdriver format support via TV4/logrus-stackdriver-formatter. Simply set format in config to stackdriver 2019-02-09 15:14:00 +09:00
b0b7b4bd15 Update git ignore to ignore goland ide files 2019-02-09 15:09:00 +09:00
f3f1f36099 Comment type 2019-02-08 14:21:36 -08:00
f8cfb1b90f Add rest call support to job scheduling. This is a prototype implementation to support knative experimentation. 2019-02-08 14:20:29 -08:00
393e1d6de2 added configurable backoff to MatchObject and Player watchers 2019-02-08 16:19:52 +09:00
a11556433b Merge branch 'master' into 040wip 2019-02-08 01:48:54 -05:00
3ee9c05db7 Merge upstream changes 2019-02-08 01:47:43 -05:00
de7ba2db6b added demo attr to player indices 2019-02-03 20:17:13 -08:00
8393454158 fixes for configmap 2019-02-03 20:17:13 -08:00
6b93ac7663 configmap for matchmaker config 2019-02-03 20:17:13 -08:00
fe2410e9d8 PHP MMF: move cfg values to env vars 2019-02-03 20:17:13 -08:00
d8ecf1c439 doc update 2019-02-03 20:17:13 -08:00
8577f6bd4d Move cfg values to env vars for MMFs 2019-02-03 20:17:13 -08:00
470be06d16 fixed set.Difference() 2019-01-29 22:38:18 -08:00
c6e4dae79b fix google cloud knative url 2019-01-25 11:38:46 -08:00
dd794fd004 py3 mmf empty pools bugfix 2019-01-23 19:57:16 -05:00
f234433e33 write to error if all pools are empty in py3 mmf 2019-01-23 19:57:16 -05:00
d52773543d check for empty pools in py3 mmf 2019-01-23 19:57:16 -05:00
bd4ab0b530 mmlogic GetPlayerPool bugfix 2019-01-23 14:18:00 +03:00
6b9cd11be3 fix py3 mmf 2019-01-16 18:01:10 +03:00
1443bd1e80 PHP MMF: move cfg values to env vars 2019-01-16 13:41:44 +03:00
3fd8081dc5 doc update 2019-01-15 11:58:42 -05:00
dda949a6a4 Move cfg values to env vars for MMFs 2019-01-15 11:25:02 -05:00
3fcedbf13b Remove enum status states. No justification yet. 2018-11-26 17:42:08 -08:00
274edaae2e Grpc code for calling functions in mmforc 2018-11-26 17:40:25 -08:00
8ed865d300 Initial function messages plus protoc regen 2018-11-26 17:05:42 -08:00
424 changed files with 32809 additions and 16999 deletions
.dockerignore
.gcloudignore
.github/ISSUE_TEMPLATE
.gitignore
.golangci.yaml
CHANGELOG.md
Dockerfile.base
Dockerfile.base-build
Dockerfile.ci
Dockerfile.mmf_php
Dockerfile.mmf_py3
Makefile
README.md
api
cloudbuild.yaml
cloudbuild_base.yaml
cloudbuild_mmf_php.yaml
cloudbuild_mmf_py3.yaml
cmd
config
deployments/k8s
doc.go
docs
examples
go.mod
go.sum
install
internal
pkg
test
third_party
tools

132
.dockerignore Normal file

@@ -0,0 +1,132 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
.git
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, build with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# vim swap files
*swp
*swo
*~
# Load testing residuals
test/stress/*.csv
test/stress/__pycache__
# Ping data files
*.ping
*.pings
*.population*
*.percent
*.cities
populations
# Discarded code snippets
build.sh
*-fast.yaml
detritus/
# Dotnet Core ignores
*.swp
*.*~
project.lock.json
.DS_Store
*.pyc
nupkg/
# Visual Studio Code
.vscode
# User-specific files
*.suo
*.user
*.userosscache
*.sln.docstates
# Build results
[Dd]ebug/
[Dd]ebugPublic/
[Rr]elease/
[Rr]eleases/
x64/
x86/
build/
bld/
[Bb]in/
[Oo]bj/
[Oo]ut/
msbuild.log
msbuild.err
msbuild.wrn
# Visual Studio 2015
.vs/
# Goland
.idea/
# Nodejs files placed when building Hugo, ok to allow if we actually start using Nodejs.
package.json
package-lock.json
site/resources/_gen/
# Node Modules
node_modules/
# Install YAML files, Helm is the source of truth for configuration.
install/yaml/
# Temp Directories
tmp/
# Terraform context
.terraform
*.tfstate
*.tfstate.backup
# Credential Files
creds.json
# Open Match Binaries
cmd/backend/backend
cmd/frontend/frontend
cmd/mmlogic/mmlogic
cmd/synchronizer/synchronizer
cmd/minimatch/minimatch
cmd/swaggerui/swaggerui
tools/certgen/certgen
examples/demo/demo
examples/functions/golang/soloduel/soloduel
examples/functions/golang/pool/pool
examples/evaluator/golang/simple/simple
tools/reaper/reaper
# Open Match Build Directory
build/
# Secrets Directories
install/helm/open-match/secrets/

@@ -1,3 +1,17 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file specifies files that are *not* uploaded to Google Cloud Platform
# using gcloud. It follows the same syntax as .gitignore, with the addition of
# "#!include" directives (which insert the entries of the given .gitignore-style
@@ -12,3 +26,4 @@
# below:
.git
.gitignore
#!include:.gitignore

159
.github/ISSUE_TEMPLATE/release.md vendored Normal file

@@ -0,0 +1,159 @@
---
name: release
about: Instructions and checklist for creating a release.
title: 'Release X.Y.Z-rc.N'
labels: kind/release
assignees: ''
---
# Open Match Release Process
Follow these instructions to create an Open Match release. The output of the
release process is new images and new configuration.
## Getting setup
*Note: the commands below are pasted from the 0.5 release. Make the necessary
changes to match your naming & environment.*
The Git flow for pushing a new release is similar to the development process
but there are some small differences.
**1. Clone your fork of the Open Match repository.**
```shell
git clone git@github.com:afeddersen/open-match.git
```
**2. Move into the new open-match directory.**
```shell
cd open-match
```
**3. Configure a remote that points to the upstream repository. This is required to sync changes you make in a fork with the original repository. Note: Upstream is the gatekeeper of the project or the source of truth to which you wish to contribute.**
```shell
git remote add upstream https://github.com/googleforgames/open-match.git
```
**4. Fetch the branches and their respective commits from the upstream repo.**
```shell
git fetch upstream
```
**5. Create a local release branch that tracks upstream and check it out.**
```shell
git checkout -b release-0.5 upstream/release-0.5
```
## Releases & Versions
Open Match uses Semantic Versioning 2.0.0. If you're not familiar please
see the documentation - [https://semver.org/](https://semver.org/).
Full Release / Stable Release:
* The final software product. Stable, reliable, etc...
* Naming example: 1.0.0
Release Candidate (RC):
* A release candidate (RC) is a version with the potential to be the final
product, but it hasn't yet been validated by automated and/or manual tests.
* Naming example: 1.0.0-rc.1
Hot Fixes:
* Code developed to correct a major software bug or fault
that's been discovered after the full release.
* Naming example: 1.0.1
# Detailed Instructions
## Find and replace
Below this point you will see {version} used as a placeholder for future
releases. Find {version} and replace it with the current release (e.g. 0.5.0).
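The find-and-replace step can be scripted. A minimal sketch, assuming the placeholder appears literally as `{version}` (here `notes.md` is a hypothetical stand-in for whichever file carries the placeholder):

```shell
# Substitute the literal {version} placeholder with the release number.
printf 'Open Match {version} has been released!\n' > notes.md
sed -i 's/{version}/0.5.0/g' notes.md
cat notes.md   # prints: Open Match 0.5.0 has been released!
```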
## Create a release branch in the upstream repository
**Note: This step is performed by the person who starts the release. It is
only required once.**
- [ ] Create the branch in the **upstream** repository. It should be named
release-X.Y. Example: release-0.5. At this point there's effectively a code
freeze for this version and all work on master will be included in a future
version. If you're on the branch that you created in the *getting setup*
section above you should be able to push upstream.
```shell
git push upstream release-0.5
```
- [ ] Announce a PR freeze on the release-X.Y branch on [open-match-discuss@][mailing-list-post].
- [ ] Open the [`Makefile`][makefile-version] and change the BASE_VERSION entry.
- [ ] Open the [`install/helm/open-match/Chart.yaml`][om-chart-yaml-version] and [`install/helm/open-match-example/Chart.yaml`][om-example-chart-yaml-version] and change the `appVersion` and `version` entries.
- [ ] Open the [`install/helm/open-match/values.yaml`][om-values-yaml-version] and [`install/helm/open-match-example/values.yaml`][om-example-values-yaml-version] and change the `tag` entries.
- [ ] Open the [`site/config.toml`] and change the `release_branch` and `release_version` entries.
- [ ] Open the [`cloudbuild.yaml`] and change the `_OM_VERSION` entry.
- [ ] Run `make clean release`
- [ ] There might be additional references to the old version but be careful not to change it for places that have it for historical purposes.
- [ ] Create a PR with the changes and include the release candidate name.
- [ ] Merge your changes once the PR is approved.
## Complete Milestone
**Note: This step is performed by the person who starts the release. It is
only required once.**
- [ ] Create the next [version milestone](https://github.com/googleforgames/open-match/milestones) and use [semantic versioning](https://semver.org/) when naming it to be consistent with the [Go community](https://blog.golang.org/versioning-proposal).
- [ ] Create a *draft* [release](https://github.com/googleforgames/open-match/releases).
- [ ] Use the [release template](https://github.com/googleforgames/open-match/blob/master/docs/governance/templates/release.md)
- [ ] `Tag` = v{version}. Example: v0.5.0. Append -rc.# for release candidates. Example: v0.5.0-rc.1.
- [ ] `Target` = release-X.Y. Example: release-0.5.
- [ ] `Release Title` = `Tag`
- [ ] `Write` section will contain the contents from the [release template](https://github.com/googleforgames/open-match/blob/master/docs/governance/templates/release.md).
- [ ] Add the milestone to all PRs and issues that were merged since the last milestone. Look at the [releases page](https://github.com/googleforgames/open-match/releases) and look for the "X commits to master since this release" for the diff.
- [ ] Review all [milestone-less closed issues](https://github.com/googleforgames/open-match/issues?q=is%3Aissue+is%3Aclosed+no%3Amilestone) and assign the appropriate milestone.
- [ ] Review all [issues in milestone](https://github.com/googleforgames/open-match/milestones) for proper [labels](https://github.com/googleforgames/open-match/labels) (ex: area/build).
- [ ] Review all [milestone-less closed PRs](https://github.com/googleforgames/open-match/pulls?q=is%3Apr+is%3Aclosed+no%3Amilestone) and assign the appropriate milestone.
- [ ] Review all [PRs in milestone](https://github.com/googleforgames/open-match/milestones) for proper [labels](https://github.com/googleforgames/open-match/labels) (ex: area/build).
- [ ] View all open entries in milestone and move them to a future milestone if they aren't getting closed in time. https://github.com/googleforgames/open-match/milestones/v{version}
- [ ] Review all closed PRs against the milestone. Put the user visible changes into the release notes using the suggested format. https://github.com/googleforgames/open-match/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aclosed+is%3Amerged+milestone%3Av{version}
- [ ] Review all closed issues against the milestone. Put the user visible changes into the release notes using the suggested format. https://github.com/googleforgames/open-match/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aclosed+milestone%3Av{version}
- [ ] Verify the [milestone](https://github.com/googleforgames/open-match/milestones) is effectively 100% at this point with the exception of the release issue itself.
TODO: Add guidelines for labeling issues.
## Build Artifacts
- [ ] Go to [Cloud Build](https://pantheon.corp.google.com/cloud-build/triggers?project=open-match-build), under Post Submit click "Run Trigger".
- [ ] Go to the History section and find the "Post Submit" build that's running. Wait for it to go green. If it's red, fix the error and repeat this section. Take note of the docker image version tag for the next step. Example: 0.5.0-a4706cb.
- [ ] Run `./docs/governance/templates/release.sh {source version tag} {version}` to copy the images to open-match-public-images.
- [ ] If this is a new minor version in the newest major version then run `./docs/governance/templates/release.sh {source version tag} latest`.
- [ ] Copy the files from `build/release/` generated from `make release` to the release draft you created. You can drag and drop the files using the Github UI.
- [ ] Run `make delete-gke-cluster create-gke-cluster` and run through the instructions under the [README][readme-deploy]; verify the pods are healthy. You'll need to adjust the path to the `build/release/install.yaml` and `build/release/install-demo.yaml` in your local clone since you haven't published them yet.
- [ ] Open the [`README.md`][readme-deploy], update the version references, and submit. (Release candidates can ignore this step.)
- [ ] Publish the [Release][om-release] in Github.
## Announce
- [ ] Send an email to the [mailing list][mailing-list-post] with the release details (copy-paste the release blog post).
- [ ] Send a chat on the [Slack channel][om-slack]. "Open Match {version} has been released! Check it out at {release url}."
[om-slack]: https://open-match.slack.com/
[mailing-list-post]: https://groups.google.com/forum/#!newtopic/open-match-discuss
[release-template]: https://github.com/googleforgames/open-match/blob/master/docs/governance/templates/release.md
[makefile-version]: https://github.com/googleforgames/open-match/blob/master/Makefile#L53
[om-chart-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match/Chart.yaml#L16
[om-values-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match/values.yaml#L16
[om-example-chart-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match-example/Chart.yaml#L16
[om-example-values-yaml-version]: https://github.com/googleforgames/open-match/blob/master/install/helm/open-match-example/values.yaml#L16
[om-release]: https://github.com/googleforgames/open-match/releases/new
[readme-deploy]: https://github.com/googleforgames/open-match/blob/master/README.md#deploy-to-kubernetes

63
.gitignore vendored

@@ -1,3 +1,17 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Binaries for programs and plugins
*.exe
*.exe~
@@ -16,6 +30,10 @@
*swo
*~
# Load testing residuals
test/stress/*.csv
test/stress/__pycache__
# Ping data files
*.ping
*.pings
@@ -24,9 +42,6 @@
*.cities
populations
# local config files
*.json
# Discarded code snippets
build.sh
*-fast.yaml
@@ -67,3 +82,45 @@ msbuild.wrn
# Visual Studio 2015
.vs/
# Goland
.idea/
# Nodejs files placed when building Hugo, ok to allow if we actually start using Nodejs.
package.json
package-lock.json
site/resources/_gen/
# Node Modules
node_modules/
# Install YAML files
install/yaml/
# Temp Directories
tmp/
# Terraform context
.terraform
*.tfstate.backup
# Credential Files
creds.json
# Open Match Binaries
cmd/backend/backend
cmd/frontend/frontend
cmd/mmlogic/mmlogic
cmd/synchronizer/synchronizer
cmd/minimatch/minimatch
cmd/swaggerui/swaggerui
tools/certgen/certgen
examples/demo/demo
examples/functions/golang/soloduel/soloduel
examples/functions/golang/pool/pool
examples/evaluator/golang/simple/simple
tools/reaper/reaper
# Secrets Directories
install/helm/open-match/secrets/

258
.golangci.yaml Normal file

@@ -0,0 +1,258 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file contains all available configuration options
# with their default values.
# https://github.com/golangci/golangci-lint#config-file
# options for analysis running
run:
# default concurrency is a available CPU number
concurrency: 4
# timeout for analysis, e.g. 30s, 5m, default is 1m
deadline: 5m
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
# include test files or not, default is true
tests: true
# list of build tags, all linters use it. Default is empty list.
build-tags:
# which dirs to skip: they won't be analyzed;
# can use regexp here: generated.*, regexp is applied on full path;
# default value is empty list, but next dirs are always skipped independently
# from this option's value:
# vendor$, third_party$, testdata$, examples$, Godeps$, builtin$
skip-dirs:
# which files to skip: they will be analyzed, but issues from them
# won't be reported. Default value is empty list, but there is
# no need to include all autogenerated files, we confidently recognize
# autogenerated files. If it's not please let us know.
skip-files:
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle, default is "colored-line-number"
format: colored-line-number
# print lines of code with issue, default is true
print-issued-lines: true
# print linter name in the end of issue text, default is true
print-linter-name: true
# all available settings of specific linters
linters-settings:
errcheck:
# report about not checking of errors in type assetions: `a := b.(MyStruct)`;
# default is false: such cases aren't reported by default.
check-type-assertions: true
# report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
# default is false: such cases aren't reported by default.
check-blank: true
govet:
# report about shadowed variables
check-shadowing: true
# settings per analyzer
settings:
printf: # analyzer name, run `go tool vet help` to see all analyzers
funcs: # run `go tool vet help printf` to see available settings for `printf` analyzer
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Infof
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Warnf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Errorf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Fatalf
golint:
# minimal confidence for issues, default is 0.8
min-confidence: 0.8
gofmt:
# simplify code: gofmt with `-s` option, true by default
simplify: true
gocyclo:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
goconst:
# minimal length of string constant, 3 by default
min-len: 3
# minimal occurrences count to trigger, 3 by default
min-occurrences: 3
depguard:
list-type: blacklist
include-go-root: false
packages:
- github.com/davecgh/go-spew/spew
misspell:
# Correct spellings using locale preferences for US or UK.
# Default is to use a neutral variety of English.
# Setting locale to US will correct the British spelling of 'colour' to 'color'.
locale: US
ignore-words:
- someword
lll:
# max line length, lines longer will be reported. Default is 120.
# '\t' is counted as 1 character by default, and can be changed with the tab-width option
line-length: 120
# tab width in spaces. Default to 1.
tab-width: 1
unused:
# treat code as a program (not a library) and report unused exported identifiers; default is false.
# XXX: if you enable this setting, unused will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find funcs usages. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find external interfaces. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
nakedret:
# make an issue if func has more lines of code than this setting and it has naked returns; default is 30
max-func-lines: 30
prealloc:
# XXX: we don't recommend using this linter before doing performance profiling.
# For most programs usage of prealloc will be a premature optimization.
# Report preallocation suggestions only on simple loops that have no returns/breaks/continues/gotos in them.
# True by default.
simple: true
range-loops: true # Report preallocation suggestions on range loops, true by default
for-loops: false # Report preallocation suggestions on for loops, false by default
gocritic:
# Which checks should be enabled; can't be combined with 'disabled-checks';
# See https://go-critic.github.io/overview#checks-overview
# To check which checks are enabled run `GL_DEBUG=gocritic golangci-lint run`
# By default list of stable checks is used.
# enabled-checks:
# - rangeValCopy
# Enable multiple checks by tags, run `GL_DEBUG=gocritic golangci-lint` run to see all tags and checks.
# Empty list by default. See https://github.com/go-critic/go-critic#usage -> section "Tags".
enabled-tags:
- performance
settings: # settings passed to gocritic
captLocal: # must be valid enabled check name
paramsOnly: true
rangeValCopy:
sizeThreshold: 32
linters:
enable-all: true
disable:
- goimports
- stylecheck
- gocritic
- dupl
- gocyclo
- gosec
- lll
- staticcheck
- scopelint
- prealloc
- gofmt
- interfacer # deprecated - "A tool that suggests interfaces is prone to bad suggestions"
#linters:
# enable-all: true
issues:
# List of regexps of issue texts to exclude, empty list by default.
# But independently from this option we use default exclude patterns,
# it can be disabled by `exclude-use-default: false`. To list all
# excluded by default patterns execute `golangci-lint run --help`
exclude:
- abcdef
# Excluding configuration per-path, per-linter, per-text and per-source
exclude-rules:
- path: internal[/\\]config[/\\]
linters:
- gochecknoglobals
- path: consts\.go
linters:
- gochecknoglobals
# Exclude some linters from running on test files
- path: _test\.go
linters:
- errcheck
- bodyclose
# The following are allowed global variable patterns.
# Generally it's ok to have constants or variables that effectively act as constants such as a static logger or flag values.
# The filters below specify the source code pattern that's allowed when declaring a global
# 'source: "flag."' will match 'var destFlag = flag.String("dest", "", "")'
- source: "flag."
linters:
- gochecknoglobals
- source: "telemetry."
linters:
- gochecknoglobals
- source: "View."
linters:
- gochecknoglobals
- source: "tag."
linters:
- gochecknoglobals
- source: "logrus."
linters:
- gochecknoglobals
- source: "stats."
linters:
- gochecknoglobals
- source: "serviceAddressList"
linters:
- gochecknoglobals
# Exclude known linters from partially hard-vendored code,
# which is impossible to exclude via "nolint" comments.
- path: internal/hmac/
text: "weak cryptographic primitive"
linters:
- gosec
# Exclude some staticcheck messages
- linters:
- staticcheck
text: "SA9003:"
# Exclude lll issues for long lines with go:generate
- linters:
- lll
source: "^//go:generate "
# Independently from option `exclude` we use default exclude patterns,
# it can be disabled by this option. To list all
# excluded by default patterns execute `golangci-lint run --help`.
# Default value for this option is true.
exclude-use-default: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0
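As an illustration of the `gochecknoglobals` exclude-rules above: a global whose declaration matches an allowed source pattern such as `flag.` passes, while an arbitrary mutable global is reported. A minimal sketch (the variable names here are illustrative, not from the Open Match codebase):

```go
package main

import (
	"flag"
	"fmt"
)

// 'source: "flag."' in the exclude-rules above matches this declaration,
// so gochecknoglobals would not report it.
var destFlag = flag.String("dest", "", "destination address")

// No allowed pattern matches this mutable global, so it would be reported.
var mutableCounter int

func main() {
	mutableCounter++
	fmt.Println(*destFlag, mutableCounter)
}
```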

@@ -1,5 +1,12 @@
# Release history
## v0.4.0 (alpha)
### Release notes
- Thanks to completion of Issues [#42](issues/42) and [#45](issues/45), there is no longer a need to use the `openmatch-base` image when building components of Open Match. Each standalone application is now self-contained in its `Dockerfile` and `cloudbuild.yaml` files, and builds have been substantially simplified. **Note**: The default `Dockerfile` and `cloudbuild.yaml` now tag their images with the version number, not `dev`, and the YAML files in the `install` directory now reflect this.
- This paves the way for CI/CD in an upcoming version.
- This paves the way for public images in an upcoming version!
## v0.3.0 (alpha)
This update is focused on the Frontend API and Player Records, including more robust code for indexing, deindexing, reading, writing, and expiring player requests from Open Match state storage. All Frontend API function arguments have changed, although many only slightly. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!
@@ -12,12 +19,12 @@
- A call to the Frontend API `GetUpdates()` gRPC endpoint returns a stream of player messages. This is used to send updates to state storage for the `Assignment`, `Status`, and `Error` Player fields in near-realtime. **It is the responsibility of the game client to disconnect** from the stream when it has gotten the results it was waiting for!
- Moved the rest of the gRPC messages into a shared [`messages.proto` file](api/protobuf-spec/messages.proto).
- Added documentation to Frontend API gRPC calls to the [`frontend.proto` file](api/protobuf-spec/frontend.proto).
- [Issue #41](https://github.com/GoogleCloudPlatform/open-match/issues/41)|[PR #48](https://github.com/GoogleCloudPlatform/open-match/pull/48) There is now a HA Redis install available in `install/yaml/01-redis-failover.yaml`. This would be used as a drop-in replacement for a single-instance Redis configuration in `install/yaml/01-redis.yaml`. The HA configuration requires that you install the [Redis Operator](https://github.com/spotahome/redis-operator) (note: **currently alpha**, use at your own risk) in your Kubernetes cluster.
- [Issue #41](https://github.com/googleforgames/open-match/issues/41)|[PR #48](https://github.com/googleforgames/open-match/pull/48) There is now a HA Redis install available in `install/yaml/01-redis-failover.yaml`. This would be used as a drop-in replacement for a single-instance Redis configuration in `install/yaml/01-redis.yaml`. The HA configuration requires that you install the [Redis Operator](https://github.com/spotahome/redis-operator) (note: **currently alpha**, use at your own risk) in your Kubernetes cluster.
- As part of this change, the kubernetes service name is now `redis` not `redis-sentinel` to denote that it is accessed using a standard Redis client.
- Open Match uses a new feature of the Go module [logrus](https://github.com/sirupsen/logrus) to include filenames and line numbers. If you have an older version in your local build environment, you may need to delete the module and `go get github.com/sirupsen/logrus` again. When building using the provided `cloudbuild.yaml` and `Dockerfile`s this is handled for you.
- The program that was formerly in `examples/frontendclient` has been expanded and moved to the `test` directory under [`test/cmd/frontendclient/`](test/cmd/frontendclient/).
- The client load generator program has been moved from `test/cmd/client` to [`test/cmd/clientloadgen/`](test/cmd/clientloadgen/) to better reflect what it does.
- [Issue #45](https://github.com/GoogleCloudPlatform/open-match/issues/45) The process for moving the build files (`Dockerfile` and `cloudbuild.yaml`) for each component, example, and test program to their respective directories and out of the repository root has started but won't be completed until a future version.
- [Issue #45](https://github.com/googleforgames/open-match/issues/45) The process for moving the build files (`Dockerfile` and `cloudbuild.yaml`) for each component, example, and test program to their respective directories and out of the repository root has started but won't be completed until a future version.
- Put some basic notes in the [production guide](docs/production.md)
- Added a basic [roadmap](docs/roadmap.md)
@@ -47,7 +54,7 @@
- It has become clear from talking to multiple users that the software they write to talk to the Backend API needs a name. 'Backend API Client' is technically correct, but given how many APIs are in Open Match and the overwhelming use of 'Client' to refer to a Game Client in the industry, we're currently calling this a 'Director', as its primary purpose is to 'direct' which profiles are sent to the backend, and 'direct' the resulting MatchObjects to game servers. Further discussion / suggestions are welcome.
- We'll be entering the design stage on longer-running MMFs before the end of the year. We'll get a proposal together and on the github repo as a request for comments, so please keep your eye out for that.
- Match profiles providing multiple MMFs to run isn't planned anymore. Just send multiple copies of the profile with different MMFs specified via the backendapi.
- Redis Sentinel will likely not be supported. Instead, replicated instances and HAProxy may be the HA solution of choice. There's an [outstanding issue to investigate and implement](https://github.com/GoogleCloudPlatform/open-match/issues/41) if it fills our needs, feel free to contribute!
- Redis Sentinel will likely not be supported. Instead, replicated instances and HAProxy may be the HA solution of choice. There's an [outstanding issue to investigate and implement](https://github.com/googleforgames/open-match/issues/41) if it fills our needs, feel free to contribute!
## v0.1.0 (alpha)
Initial release.

@ -1,7 +0,0 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY config config
COPY internal internal
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/internal
RUN go get -d -v ...

20
Dockerfile.base-build Normal file

@@ -0,0 +1,20 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM golang:latest
ENV GO111MODULE=on
WORKDIR /go/src/open-match.dev/open-match
COPY . .
RUN go mod download

55
Dockerfile.ci Normal file

@@ -0,0 +1,55 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM debian
RUN apt-get update
RUN apt-get install -y -qq git make python3 virtualenv curl sudo unzip apt-transport-https ca-certificates curl software-properties-common gnupg2 bc
# Docker
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
RUN sudo apt-key fingerprint 0EBFCD88
RUN sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
stretch \
stable"
RUN sudo apt-get update
RUN sudo apt-get install -y -qq docker-ce docker-ce-cli containerd.io
# Cloud SDK
RUN export CLOUD_SDK_REPO="cloud-sdk-stretch" && \
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update -y && apt-get install google-cloud-sdk google-cloud-sdk-app-engine-go -y -qq
# Install Golang
# https://github.com/docker-library/golang/blob/fd272b2b72db82a0bd516ce3d09bba624651516c/1.12/stretch/Dockerfile
RUN mkdir -p /toolchain/golang
WORKDIR /toolchain/golang
RUN sudo rm -rf /usr/local/go/
RUN curl -L https://storage.googleapis.com/golang/go1.12.6.linux-amd64.tar.gz | sudo tar -C /usr/local -xz
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN sudo mkdir -p "$GOPATH/src" "$GOPATH/bin" \
&& sudo chmod -R 777 "$GOPATH"
# Prepare toolchain and workspace
RUN mkdir -p /toolchain
WORKDIR /workspace
ENV OPEN_MATCH_CI_MODE=1
ENV KUBECONFIG=$HOME/.kube/config
RUN mkdir -p $HOME/.kube/

@@ -1,21 +0,0 @@
FROM php:7.2-cli
RUN apt-get update && apt-get install -y -q zip unzip zlib1g-dev && apt-get clean
RUN cd /usr/local/bin && curl -sS https://getcomposer.org/installer | php
RUN cd /usr/local/bin && mv composer.phar composer
RUN pecl install grpc
RUN echo "extension=grpc.so" > /usr/local/etc/php/conf.d/30-grpc.ini
RUN pecl install protobuf
RUN echo "extension=protobuf.so" > /usr/local/etc/php/conf.d/30-protobuf.ini
WORKDIR /usr/src/open-match
COPY examples/functions/php/mmlogic-simple examples/functions/php/mmlogic-simple
COPY config config
WORKDIR /usr/src/open-match/examples/functions/php/mmlogic-simple
RUN composer install
CMD [ "php", "./harness.php" ]

@@ -1,9 +0,0 @@
# Python application builder steps
FROM python:3.5.3 as builder
WORKDIR /usr/src/open-match
COPY examples/functions/python3/mmlogic-simple examples/functions/python3/mmlogic-simple
COPY config config
WORKDIR /usr/src/open-match/examples/functions/python3/mmlogic-simple
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "./harness.py"]

1099
Makefile Normal file

File diff suppressed because it is too large

280
README.md

@@ -1,271 +1,43 @@
# Open Match
![Open Match](https://github.com/googleforgames/open-match-docs/blob/master/site/static/images/logo-with-name.png)
Open Match is an open source game matchmaking framework designed to allow game creators to build matchmakers of any size easily and with as much possibility for sharing and code re-use as possible. It's designed to be flexible (run it anywhere Kubernetes runs), extensible (match logic can be customized to work for any game), and scalable.
[![GoDoc](https://godoc.org/open-match.dev/open-match?status.svg)](https://godoc.org/open-match.dev/open-match)
[![Go Report Card](https://goreportcard.com/badge/open-match.dev/open-match)](https://goreportcard.com/report/open-match.dev/open-match)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/googleforgames/open-match/blob/master/LICENSE)
[![GitHub release](https://img.shields.io/github/release-pre/googleforgames/open-match.svg)](https://github.com/googleforgames/open-match/releases)
[![Follow on Twitter](https://img.shields.io/twitter/follow/Open_Match.svg?style=social&logo=twitter)](https://twitter.com/intent/follow?screen_name=Open_Match)
Matchmaking is a complicated process, and when large player populations are involved, many popular matchmaking approaches touch on significant areas of computer science including graph theory and massively concurrent processing. Open Match is an effort to provide a foundation upon which these difficult problems can be addressed by the wider game development community. As Josh Menke — famous for working on matchmaking for many popular triple-A franchises — put it:
Open Match is an open source game matchmaking framework that simplifies building
a scalable and extensible Matchmaker. It is designed to give the game developer
full control over how to make matches while removing the burden of dealing with
the challenges of running a production service at scale.
["Matchmaking, a lot of it actually really is just really good engineering. There's a lot of really hard networking and plumbing problems that need to be solved, depending on the size of your audience."](https://youtu.be/-pglxege-gU?t=830)
Please visit [Open Match website](https://open-match.dev/site/docs/) for user
documentation, demo instructions etc.
This project attempts to solve the networking and plumbing problems, so game developers can focus on the logic to match players into great games.
## Contributing to Open Match
Open Match is in active development and we would love your contribution! Please
read the [contributing guide](CONTRIBUTING.md) for guidelines on contributing to
Open Match.
The [Open Match Development guide](docs/development.md) has detailed instructions
on getting the source code, making changes, testing and submitting a pull request
to Open Match.
## Disclaimer
This software is currently alpha, and subject to change. Open Match has already been used to run [production workloads within Google](https://cloud.google.com/blog/topics/inside-google-cloud/no-tricks-just-treats-globally-scaling-the-halloween-multiplayer-doodle-with-open-match-on-google-cloud), but it's still early days on the way to our final goal. There's plenty left to write and we welcome contributions. **We strongly encourage you to engage with the community through the [Slack or Mailing lists](#get-involved) if you're considering using Open Match in production before the 1.0 release, as the documentation is likely to lag behind the latest version a bit while we focus on getting out of alpha/beta as soon as possible.**
## Version
[The current stable version in master is 0.3.1 (alpha)](https://github.com/GoogleCloudPlatform/open-match/releases/tag/v0.3.1-alpha). At this time only bugfixes and doc update pull requests will be considered.
Version 0.4.0 is in active development; please target code changes to the 040wip branch.
This software is currently alpha, and subject to change.
# Core Concepts
## Support
[Watch the introduction of Open Match at Unite Berlin 2018 on YouTube](https://youtu.be/qasAmy_ko2o)
Open Match is designed to support massively concurrent matchmaking, and to be scalable to player populations of hundreds of millions or more. It attempts to apply stateless web tech microservices patterns to game matchmaking. If you're not sure what that means, that's okay — it is fully open source and designed to be customizable to fit into your online game architecture — so have a look at the code and modify it as you see fit.
## Glossary
### General
* **DGS** — Dedicated game server
* **Client** — The game client program the player uses when playing the game
* **Session** — In Open Match, players are matched together, then assigned to a server which hosts the game _session_. Depending on context, this may be referred to as a _match_, _map_, or just _game_ elsewhere in the industry.
### Open Match
* **Component** — One of the discrete processes in an Open Match deployment. Open Match is composed of multiple scalable microservices called _components_.
* **State Storage** — The storage software used by Open Match to hold all the matchmaking state. Open Match ships with [Redis](https://redis.io/) as the default state storage.
* **MMFOrc** — Matchmaker function orchestrator. This Open Match core component is in charge of kicking off custom matchmaking functions (MMFs) and evaluator processes.
* **MMF** — Matchmaking function. This is the customizable matchmaking logic.
* **MMLogic API** — An API that provides MMF SDK functionality. It is optional - you can also do all the state storage read and write operations yourself if you have a good reason to do so.
* **Director** — The software you (as a developer) write against the Open Match Backend API. The _Director_ decides which MMFs to run, and is responsible for sending MMF results to a DGS to host the session.
### Data Model
* **Player** — An ID and list of attributes with values for a player who wants to participate in matchmaking.
* **Roster** — A list of player objects. Used to hold all the players on a single team.
* **Filter** — A _filter_ is used to narrow down the players to only those who have an attribute value within a certain integer range. All attributes are integer values in Open Match because [that is how indices are implemented](internal/statestorage/redis/playerindices/playerindices.go). A _filter_ is defined in a _player pool_.
* **Player Pool** — A list of all the players who fit all the _filters_ defined in the pool.
* **Match Object** — A protobuffer message format that contains the _profile_ and the results of the matchmaking function. Sent to the backend API from your game backend with the _roster_(s) empty and then returned from your MMF with the matchmaking results filled in.
* **Profile** — The json blob containing all the parameters used by your MMF to select which players go into a roster together.
* **Assignment** — Refers to assigning a player or group of players to a dedicated game server instance. Open Match offers a path to send dedicated game server connection details from your backend to your game clients after a match has been made.
* **Ignore List** — Removing players from matchmaking consideration is accomplished using _ignore lists_. They contain lists of player IDs that your MMF should not include when making matches.
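The data model above can be sketched in plain Go. The types below are illustrative stand-ins, not the real protobuf definitions from `api/protobuf-spec/`; they show how integer-ranged filters and an ignore list combine to produce a player pool:

```go
package main

import "fmt"

// Player: an ID plus integer-valued attributes (attributes are integers in
// Open Match because of how the state-storage indices are implemented).
type Player struct {
	ID         string
	Attributes map[string]int64
}

// Filter: narrows players to those whose attribute falls in [Min, Max].
type Filter struct {
	Attribute string
	Min, Max  int64
}

// PoolPlayers returns the players matching every filter, skipping ignored
// IDs -- roughly what a Player Pool plus an Ignore List gives an MMF.
func PoolPlayers(players []Player, filters []Filter, ignore map[string]bool) []Player {
	var pool []Player
	for _, p := range players {
		if ignore[p.ID] {
			continue
		}
		ok := true
		for _, f := range filters {
			v, present := p.Attributes[f.Attribute]
			if !present || v < f.Min || v > f.Max {
				ok = false
				break
			}
		}
		if ok {
			pool = append(pool, p)
		}
	}
	return pool
}

func main() {
	players := []Player{
		{ID: "a", Attributes: map[string]int64{"mmr": 1200}},
		{ID: "b", Attributes: map[string]int64{"mmr": 2400}}, // out of range
		{ID: "c", Attributes: map[string]int64{"mmr": 1300}}, // ignored below
	}
	filters := []Filter{{Attribute: "mmr", Min: 1000, Max: 1500}}
	pool := PoolPlayers(players, filters, map[string]bool{"c": true})
	fmt.Println(len(pool)) // only "a" survives both the filter and the ignore list
}
```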
## Requirements
* [Kubernetes](https://kubernetes.io/) cluster — tested with version 1.9.
* [Redis 4+](https://redis.io/) — tested with 4.0.11.
* Open Match is compiled against the latest release of [Golang](https://golang.org/) — tested with 1.10.9.
## Components
Open Match is a set of processes designed to run on Kubernetes. It contains these **core** components:
1. Frontend API
1. Backend API
1. Matchmaker Function Orchestrator (MMFOrc) (may be deprecated in future versions)
It includes these **optional** (but recommended) components:
1. Matchmaking Logic (MMLogic) API
It also explicitly depends on these two **customizable** components.
1. Matchmaking "Function" (MMF)
1. Evaluator (may be optional in future versions)
While **core** components are fully open source and _can_ be modified, they are designed to support the majority of matchmaking scenarios *without needing to change the source code*. The Open Match repository ships with simple **customizable** MMF and Evaluator examples, but it is expected that most users will want full control over the logic in these, so they have been designed to be as easy to modify or replace as possible.
### Frontend API
The Frontend API accepts the player data and puts it in state storage so your Matchmaking Function (MMF) can access it.
The Frontend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/frontend.proto`. At the most basic level, it expects clients to connect and send:
* A **unique ID** for the group of players (the group can contain any number of players, including only one).
* A **json blob** containing all player-related data you want to use in your matchmaking function.
The client is expected to maintain a connection, waiting for an update from the API that contains the details required to connect to a dedicated game server instance (an 'assignment'). There are also basic functions for removing an ID from the matchmaking pool or an existing match.
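The two pieces of data a client sends can be sketched as a small struct. The field names here are illustrative only; the real message is defined in `api/protobuf-spec/frontend.proto`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GroupRequest mirrors what the text above describes: a unique ID for the
// group of players and an opaque json blob of player data for the MMF.
// (Hypothetical names, not the actual protobuf fields.)
type GroupRequest struct {
	ID         string `json:"id"`
	Properties string `json:"properties"`
}

// NewGroupRequest packs arbitrary integer player attributes into the blob.
func NewGroupRequest(id string, attrs map[string]int64) (GroupRequest, error) {
	blob, err := json.Marshal(attrs)
	if err != nil {
		return GroupRequest{}, err
	}
	return GroupRequest{ID: id, Properties: string(blob)}, nil
}

func main() {
	req, err := NewGroupRequest("group-1234", map[string]int64{"mmr": 1450})
	if err != nil {
		panic(err)
	}
	fmt.Println(req.ID, req.Properties)
}
```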
### Backend API
The Backend API writes match objects to state storage which the Matchmaking Functions (MMFs) access to decide which players should be matched. It returns the results from those MMFs.
The Backend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/backend.proto`. At the most basic level, it expects to be connected to your online infrastructure (probably to your server scaling manager or **director**, or even directly to a dedicated game server), and to receive:
* A **unique ID** for a matchmaking profile.
* A **json blob** containing all the matching-related data and filters you want to use in your matchmaking function.
* An optional list of **roster**s to hold the resulting teams chosen by your matchmaking function.
* An optional set of **filters** that define player pools your matchmaking function will choose players from.
Your game backend is expected to maintain a connection, waiting for 'filled' match objects containing a roster of players. The Backend API also provides a return path for your game backend to return dedicated game server connection details (an 'assignment') to the game client, and to delete these 'assignments'.
### Matchmaking Function Orchestrator (MMFOrc)
The MMFOrc kicks off your custom matchmaking function (MMF) for every unique profile submitted to the Backend API in a match object. It also runs the Evaluator to resolve conflicts in case more than one of your profiles matched the same players.
The MMFOrc exists to orchestrate/schedule your **custom components**, running them as often as required to meet the demands of your game. MMFOrc runs in an endless loop, submitting MMFs and Evaluator jobs to Kubernetes.
### Matchmaking Logic (MMLogic) API
The MMLogic API provides a series of gRPC functions that act as a Matchmaking Function SDK. Much of the basic, boilerplate code for an MMF is the same regardless of what players you want to match together. The MMLogic API offers a gRPC interface for many common MMF tasks, such as:
1. Reading a profile from state storage.
1. Running filters on players in state storage. It automatically removes players on ignore lists as well!
1. Removing chosen players from consideration by other MMFs (by adding them to an ignore list). It does it automatically for you when writing your results!
1. Writing the matchmaking results to state storage.
1. (Optional, NYI) Exporting MMF stats for metrics collection.
More details about the available gRPC calls can be found in the [API Specification](api/protobuf-spec/messages.proto).
**Note**: using the MMLogic API is **optional**. It tries to simplify the development of MMFs, but if you want to take care of these tasks on your own, you can make few or no calls to the MMLogic API as long as your MMF still completes all the required tasks. Read the [Matchmaking Functions section](#matchmaking-functions-mmfs) for more details on the work an MMF must do.
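As a sketch of how these calls fit together, the loop below mocks the MMLogic API with an in-memory class. The class and its method names are hypothetical stand-ins for the real gRPC client; only the sequence of steps mirrors the list above.

```python
class InMemoryMMLogic:
    """Hypothetical in-memory stand-in for the MMLogic API gRPC client."""

    def __init__(self, profiles, players, ignore_list=None):
        self.profiles = profiles              # profile_id -> profile dict
        self.players = players                # player_id -> attribute dict
        self.ignore = set(ignore_list or [])  # players already claimed
        self.results = {}                     # result_key -> match object

    def get_profile(self, profile_id):
        # Step 1: read a profile from state storage.
        return self.profiles[profile_id]

    def filter_players(self, pool):
        # Step 2: run the pool's filters, skipping ignore-listed players.
        def matches(attrs):
            return all(f["minv"] <= attrs.get(f["attribute"], 0) <= f["maxv"]
                       for f in pool["filters"])
        return [pid for pid, attrs in self.players.items()
                if pid not in self.ignore and matches(attrs)]

    def write_results(self, key, match):
        # Steps 3 and 4: write the match and ignore-list its players.
        self.results[key] = match
        self.ignore.update(match["players"])


mml = InMemoryMMLogic(
    profiles={"p1": {"pools": [{"name": "all",
                                "filters": [{"attribute": "mmr",
                                             "minv": 0, "maxv": 3000}]}]}},
    players={"alice": {"mmr": 1200}, "bob": {"mmr": 1300}, "eve": {"mmr": 9999}},
)

profile = mml.get_profile("p1")
candidates = mml.filter_players(profile["pools"][0])
mml.write_results("p1.results", {"players": candidates[:2]})
```

A real MMF would make these calls over gRPC against state storage instead of an in-memory dict, but the read-filter-write-ignore sequence is the same.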
### Evaluator
The Evaluator resolves conflicts when multiple MMFs select the same player(s).
The Evaluator is a component run by the Matchmaker Function Orchestrator (MMFOrc) after the matchmaking functions have been run and some proposed results are available. The Evaluator looks at all the proposals, and if multiple proposals contain the same player(s), it breaks the tie. In many simple matchmaking setups with only a few game modes and well-tuned matchmaking functions, the Evaluator may functionally be a no-op or first-in-first-out algorithm. In complex matchmaking setups where, for example, a player can queue for multiple types of matches, the Evaluator provides the critical customizability to evaluate all available proposals and approve those that will be passed to your game servers.
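For illustration, a first-in-first-out tie-break over conflicting proposals can be sketched in a few lines (the proposal shape here is hypothetical):

```python
def fifo_evaluate(proposals):
    """Approve proposals in arrival order; reject any proposal that reuses a
    player already claimed by an earlier approved proposal."""
    claimed = set()
    approved = []
    for proposal in proposals:
        players = set(proposal["players"])
        if players & claimed:
            continue  # conflict: an earlier proposal already took a player
        claimed |= players
        approved.append(proposal["id"])
    return approved


proposals = [
    {"id": "m1", "players": ["alice", "bob"]},
    {"id": "m2", "players": ["bob", "carol"]},  # conflicts with m1 over bob
    {"id": "m3", "players": ["carol", "dave"]},
]
approved_ids = fifo_evaluate(proposals)  # ["m1", "m3"]
```

A custom Evaluator would replace the arrival-order rule with game-specific scoring, but the claim-and-skip structure stays the same.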
Large-scale concurrent matchmaking functions is a complex topic, and users who wish to do this are encouraged to engage with the [Open Match community](https://github.com/GoogleCloudPlatform/open-match#get-involved) about patterns and best practices.
### Matchmaking Functions (MMFs)
Matchmaking Functions (MMFs) are run by the Matchmaker Function Orchestrator (MMFOrc), once per profile it sees in state storage. The MMF is run as a Job in Kubernetes, and has full access to read and write from state storage. At a high level, the encouraged pattern is to write an MMF in whatever language you are comfortable in that can do the following things:
- [x] Be packaged in a (Linux) Docker container.
- [x] Read/write from the Open Match state storage — Open Match ships with Redis as the default state storage.
- [x] Read a profile you wrote to state storage using the Backend API.
- [x] Select from the player data you wrote to state storage using the Frontend API. It must respect all the ignore lists defined in the matchmaker config.
- [ ] Run your custom logic to try to find a match.
- [x] Write the match object it creates to state storage at a specified key.
- [x] Remove the players it selected from consideration by other MMFs by adding them to the appropriate ignore list.
- [x] Notify the MMFOrc of completion.
- [x] (Optional, but recommended) Export stats for metrics collection.
**Open Match offers [matchmaking logic API](#matchmaking-logic-mmlogic-api) calls for handling the checked items, as long as you are able to format your input and output in the data schema Open Match expects (defined in the [protobuf messages](api/protobuf-spec/messages.proto)).** You can do this work yourself if you don't want to or can't use the data schema Open Match is looking for. However, the data formats expected by Open Match are pretty generalized and will work with most common matchmaking scenarios and game types. If you have questions about how to fit your data into the formats specified, feel free to ask us in the [Slack or mailing group](#get-involved).
Example MMFs are provided in these languages:
- [C#](examples/functions/csharp/simple) (doesn't use the MMLogic API)
- [Python3](examples/functions/python3/mmlogic-simple) (MMLogic API enabled)
- [PHP](examples/functions/php/mmlogic-simple) (MMLogic API enabled)
- [golang](examples/functions/golang/manual-simple) (doesn't use the MMLogic API)
## Open Source Software integrations
### Structured logging
Logging for Open Match uses the [Golang logrus module](https://github.com/sirupsen/logrus) to provide structured logs. Logs are output to `stdout` in each component, as expected by Docker and Kubernetes. Level and format are configurable via `config/matchmaker_config.json`. If you have a specific log aggregator as your final destination, we recommend you have a look at the logrus documentation, as there is probably a log formatter that plays nicely with your stack.
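The core components emit these structured logs in Go via logrus; the stdlib Python sketch below is only an analogy, showing the one-JSON-object-per-line shape a log aggregator would ingest (the field names are hypothetical):

```python
import json
import logging
import sys

class JSONLineFormatter(logging.Formatter):
    # Analogous to logrus's JSONFormatter: one JSON object per log line.
    def format(self, record):
        return json.dumps({
            "level": record.levelname.lower(),
            "msg": record.getMessage(),
            "component": getattr(record, "component", "unknown"),
        })

handler = logging.StreamHandler(sys.stdout)  # stdout, as Docker/Kubernetes expect
handler.setFormatter(JSONLineFormatter())
log = logging.getLogger("openmatch-sketch")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("player entered pool", extra={"component": "frontendapi"})

# Format a record directly to inspect the emitted line.
line = handler.format(logging.LogRecord(
    "openmatch-sketch", logging.INFO, __file__, 0,
    "player entered pool", None, None))
```

Because each line is a self-contained JSON object, aggregators can index on fields like `level` and `component` without any extra parsing rules.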
### Instrumentation for metrics
Open Match uses [OpenCensus](https://opencensus.io/) for metrics instrumentation. The [gRPC](https://grpc.io/) integrations are built-in, and Golang redigo module integrations are incoming, but [haven't been merged into the official repo](https://github.com/opencensus-integrations/redigo/pull/1). All of the core components expose HTTP `/metrics` endpoints on the port defined in `config/matchmaker_config.json` (default: 9555) for Prometheus to scrape. If you would like to export to a different metrics aggregation platform, we suggest you have a look at the OpenCensus documentation; an exporter may already be written for your platform, and switching to it may be as simple as changing a few lines of code.
**Note:** A standard for instrumentation of MMFs is planned.
### Redis setup
By default, Open Match expects you to run Redis *somewhere*. Connection information can be put in the config file (`matchmaker_config.json`) for any Redis instance reachable from the [Kubernetes namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). Unless configured otherwise, Open Match runs in the Kubernetes `default` namespace. In most instances, we expect users will run a copy of Redis in a pod in Kubernetes, with a service pointing to it.
* HA configurations for Redis aren't implemented by the provided Kubernetes resource definition files, but Open Match expects the Redis service to be named `redis`, which provides an easier path to multi-instance deployments.
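A minimal Service matching that naming expectation could look like the sketch below (the selector label is hypothetical; the real definitions ship with the installation YAMLs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis          # Open Match looks up the Redis service by this name
spec:
  selector:
    app: redis         # hypothetical label on your Redis pod
  ports:
    - port: 6379       # default Redis port
```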
## Additional examples
**Note:** These examples will be expanded on in future releases.
The following examples of how to call the APIs are provided in the repository. Both have `Dockerfile` and `cloudbuild.yaml` files in their respective directories:
* `test/cmd/frontendclient/main.go` acts as a client to the Frontend API, putting a player into the queue with simulated latencies from major metropolitan cities and a couple of other matchmaking attributes. It then waits for you to manually put a value in Redis to simulate a server connection string being written using the Backend API 'CreateAssignments' call, and displays that value on stdout for you to verify.
* `examples/backendclient/main.go` calls the Backend API and passes in the profile found in `backendstub/profiles/testprofile.json` to the `ListMatches` API endpoint, then continually prints the results until you exit, or there are insufficient players to make a match based on the profile.
## Usage
Documentation and usage guides on how to set up and customize Open Match.
### Precompiled container images
Once we reach a 1.0 release, we plan to produce publicly available (Linux) Docker container images of major releases in a public image registry. Until then, refer to the 'Compiling from source' section below.
### Compiling from source
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild.yaml` files for each component in the corresponding `cmd/<COMPONENT>` directories.
All the core components for Open Match are written in Golang and use the [Dockerfile multistage builder pattern](https://docs.docker.com/develop/develop-images/multistage-build/). This pattern uses intermediate Docker containers as a Golang build environment while producing lightweight, minimized container images as final build artifacts. When the project is ready for production, we will modify the `Dockerfile`s to uncomment the last build stage. Although this pattern is great for production container images, it removes most of the utilities required to troubleshoot issues during development.
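As a hedged sketch (not the repository's actual `Dockerfile`s, which live in the `cmd/<COMPONENT>` directories), the multistage pattern looks roughly like this; per the note above, the final stage is the part currently left commented out during development:

```dockerfile
# Build stage: full Golang toolchain for compiling the component.
FROM golang:latest as builder
WORKDIR /go/src/open-match
COPY . .
# The component path below is illustrative.
RUN CGO_ENABLED=0 go build -o /component ./cmd/frontendapi

# Final stage: copy only the static binary into a minimal image.
# Small and production-friendly, but with no shell or debugging utilities.
FROM gcr.io/distroless/static
COPY --from=builder /component /component
ENTRYPOINT ["/component"]
```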
## Configuration
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration. To this end, there is a single centralized config file located in `<REPO_ROOT>/config/`, which is symlinked to each component's subdirectory for convenience when building locally. When `docker build`ing the component container images, the Dockerfile copies the centralized config file into the component directory.
We plan to replace this with a Kubernetes-managed config with dynamic reloading; please join the discussion in [Issue #42](issues/42).
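Since every component reads the same file, startup can be sketched like this (the JSON excerpt is hypothetical apart from the defaults mentioned in this document: the metrics port 9555 and the Redis service named `redis`):

```python
import json

# Hypothetical excerpt of matchmaker_config.json; the real file lives in
# <REPO_ROOT>/config/ and is symlinked into each component's directory.
raw_config = """
{
  "logging": {"level": "info", "format": "json"},
  "metrics": {"port": 9555, "endpoint": "/metrics"},
  "redis": {"hostname": "redis", "port": 6379}
}
"""

config = json.loads(raw_config)

# Every component derives the same shared settings at startup.
redis_address = "{}:{}".format(config["redis"]["hostname"], config["redis"]["port"])
metrics_port = config["metrics"]["port"]
```

Because all components parse one shared file, a value changed in `config/` flows to every component on the next build or deploy.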
### Guides
* [Production guide](./docs/production.md) Many best practices will be written here before the 1.0 release; right now it's a scattered collection of notes. **WIP**
* [Development guide](./docs/development.md)
### Reference
* [FAQ](./docs/faq.md)
## Get involved
* [Slack channel](https://open-match.slack.com/)
* [Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU)
* [File an Issue](https://github.com/googleforgames/open-match/issues/new)
* [Mailing list](https://groups.google.com/forum/#!forum/open-match-discuss)
* [Managed Service Survey](https://goo.gl/forms/cbrFTNCmy9rItSv72)
## Code of Conduct
Participation in this project comes under the [Contributor Covenant Code of Conduct](code-of-conduct.md).
## Development and Contribution
Please read the [contributing](CONTRIBUTING.md) guide for directions on submitting Pull Requests to Open Match.
See the [Development guide](docs/development.md) for documentation for development and building Open Match from source.
The [Release Process](docs/governance/release_process.md) documentation covers the project's upcoming release calendar and release process. (NYI)
Open Match is in active development - we would love your help in shaping its future!
## This all sounds great, but can you explain Docker and/or Kubernetes to me?
### Docker
- [Docker's official "Getting Started" guide](https://docs.docker.com/get-started/)
- [Katacoda's free, interactive Docker course](https://www.katacoda.com/courses/docker)
### Kubernetes
- [You should totally read this comic, and interactive tutorial](https://cloud.google.com/kubernetes-engine/kubernetes-comic/)
- [Katacoda's free, interactive Kubernetes course](https://www.katacoda.com/courses/kubernetes)
## License
Apache 2.0
# Planned improvements
See the [provisional roadmap](docs/roadmap.md) for more information on upcoming releases.
## Documentation
- [ ] “Writing your first matchmaker” getting started guide will be included in an upcoming version.
- [ ] Documentation for using the example customizable components and the `backendstub` and `frontendstub` applications to do an end-to-end (e2e) test will be written. This all works now, but needs to be written up.
- [ ] Documentation on release process and release calendar.
## State storage
- [X] All state storage operations should be isolated from core components into the `statestorage/` modules. This is necessary precursor work to enabling Open Match state storage to use software other than Redis.
- [X] [The Redis deployment should have an example HA configuration](https://github.com/GoogleCloudPlatform/open-match/issues/41)
- [X] Redis watch should be unified to watch a hash and stream updates. The code for this is written and validated but not committed yet.
- [ ] We don't want to support two redis watcher code paths, but we will until golang protobuf reflection is a bit more usable. [Design doc](https://docs.google.com/document/d/19kfhro7-CnBdFqFk7l4_HmwaH2JT_Rhw5-2FLWLEGGk/edit#heading=h.q3iwtwhfujjx), [github issue](https://github.com/golang/protobuf/issues/364)
- [X] Player/Group records generated when a client enters the matchmaking pool need to be removed after a certain amount of time with no activity. When using Redis, this will be implemented as an expiration on the player record.
## Instrumentation / Metrics / Analytics
- [ ] Instrumentation of MMFs is in the planning stages. Since MMFs are by design meant to be completely customizable (to the point of allowing any process that can be packaged in a Docker container), metrics/stats will need to have an expected format and formalized outgoing pathway. The current thinking is that metrics should be written to a particular key in state storage in a format compatible with OpenCensus, and will be collected, aggregated, and exported to Prometheus using another process.
- [ ] [OpenCensus tracing](https://opencensus.io/core-concepts/tracing/) will be implemented in an upcoming version. This is likely going to require knative.
- [X] Read logrus logging configuration from matchmaker_config.json.
## Security
- [ ] The Kubernetes service account used by the MMFOrc should be updated to have the minimum required permissions. [Issue 52](issues/52)
## Kubernetes
- [ ] Autoscaling isn't turned on for the Frontend or Backend API Kubernetes deployments by default.
- [ ] A [Helm](https://helm.sh/) chart to stand up Open Match may be provided in an upcoming version. For now just use the [installation YAMLs](./install/yaml).
- [ ] A knative-based implementation of MMFs is in the planning stages.
## CI / CD / Build
- [ ] We plan to host 'official' docker images for all release versions of the core components in publicly available docker registries soon. This is tracked in [Issue #45](issues/45) and is blocked by [Issue 42](issues/42).
- [ ] CI/CD for this repo and the associated status tags are planned.
- [ ] Golang unit tests will be shipped in an upcoming version.
- [ ] A full load-testing and e2e testing suite will be included in an upcoming version.
## Will not Implement
- [X] Defining multiple images inside a profile for the purposes of experimentation adds another layer of complexity to profiles. This can instead be handled outside of Open Match with custom match functions, in collaboration with a director (the component that calls the Backend API to schedule matchmaking).
### Special Thanks
- Thanks to https://jbt.github.io/markdown-editor/ for help in marking this document down.

# Open Match API
Open Match API is exposed via [gRPC](https://grpc.io/) and HTTP REST with [Swagger](https://swagger.io/tools/swagger-codegen/).
gRPC has first-class support for [many languages](https://grpc.io/docs/) and provides the best performance. It is an RPC protocol built on top of HTTP/2 and provides TLS for secure transport.
For HTTP/HTTPS, Open Match uses a gRPC proxy to serve the API. Since HTTP does not provide a structure for requests/responses, we use Swagger to provide a schema. You can view the Swagger docs for each service in this directory's `*.swagger.json` files. In addition, each server hosts its Swagger doc via `GET /swagger.json` if you want to load it dynamically at runtime.
Lastly, Open Match supports insecure and TLS mode for serving the API. TLS mode is strongly preferred in production, but insecure mode can be used for testing and local development. To help with certificate management, see `tools/certgen` to create self-signed certificates.

api/backend.proto (new file)
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "Backend"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
// Configuration for the Match Function to be triggered by Open Match to
// generate proposals.
message FunctionConfig {
string host = 1;
int32 port = 2;
Type type = 3;
enum Type {
GRPC = 0;
REST = 1;
}
}
message FetchMatchesRequest {
// Configuration of the MatchFunction to be executed for the given list of MatchProfiles
FunctionConfig config = 1;
// MatchProfiles for which this MatchFunction should be executed.
repeated MatchProfile profiles = 2;
}
message FetchMatchesResponse {
// Result Match for the requested MatchProfile.
// Note that Open Match will validate the proposals; a valid match should contain at least one ticket.
repeated Match matches = 1;
}
message AssignTicketsRequest {
// List of Ticket IDs for which the Assignment is to be made.
repeated string ticket_ids = 1;
// Assignment to be associated with the Ticket IDs.
Assignment assignment = 2;
}
message AssignTicketsResponse {}
// The service implementing the Backend API that is called to generate matches
// and make assignments for Tickets.
service Backend {
// FetchMatches triggers execution of the specified MatchFunction for each of the
// specified MatchProfiles. Each MatchFunction execution returns a set of
// proposals which are then evaluated to generate results. The FetchMatches
// method streams these results back to the caller.
rpc FetchMatches(FetchMatchesRequest) returns (FetchMatchesResponse) {
option (google.api.http) = {
post: "/v1/backend/matches:fetch"
body: "*"
};
}
// AssignTickets sets the specified Assignment on the Tickets for the Ticket
// IDs passed.
rpc AssignTickets(AssignTicketsRequest) returns (AssignTicketsResponse) {
option (google.api.http) = {
post: "/v1/backend/tickets:assign"
body: "*"
};
}
}

api/backend.swagger.json (new file)
{
"swagger": "2.0",
"info": {
"title": "Backend",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/backend/matches:fetch": {
"post": {
"summary": "FetchMatch triggers execution of the specfied MatchFunction for each of the\nspecified MatchProfiles. Each MatchFunction execution returns a set of\nproposals which are then evaluated to generate results. FetchMatch method\nstreams these results back to the caller.",
"operationId": "FetchMatches",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiFetchMatchesResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiFetchMatchesRequest"
}
}
],
"tags": [
"Backend"
]
}
},
"/v1/backend/tickets:assign": {
"post": {
"summary": "AssignTickets sets the specified Assignment on the Tickets for the Ticket\nIDs passed.",
"operationId": "AssignTickets",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiAssignTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiAssignTicketsRequest"
}
}
],
"tags": [
"Backend"
]
}
}
},
"definitions": {
"apiAssignTicketsRequest": {
"type": "object",
"properties": {
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of Ticket IDs for which the Assignment is to be made."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment to be associated with the Ticket IDs."
}
}
},
"apiAssignTicketsResponse": {
"type": "object"
},
"apiAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
},
"apiFetchMatchesRequest": {
"type": "object",
"properties": {
"config": {
"$ref": "#/definitions/apiFunctionConfig",
"title": "Configuration of the MatchFunction to be executed for the given list of MatchProfiles"
},
"profiles": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatchProfile"
},
"description": "MatchProfiles for which this MatchFunction should be executed."
}
}
},
"apiFetchMatchesResponse": {
"type": "object",
"properties": {
"matches": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "Result Match for the requested MatchProfile.\nNote that OpenMatch will validate the proposals, a valid match should contain at least one ticket."
}
}
},
"apiFilter": {
"type": "object",
"properties": {
"attribute": {
"type": "string",
"description": "Name of the ticket attribute this Filter operates on."
},
"max": {
"type": "number",
"format": "double",
"description": "Maximum value. Defaults to positive infinity (any value above minv)."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value. Defaults to 0."
}
},
"description": "A hard filter used to query a subset of Tickets meeting the filtering\ncriteria."
},
"apiFunctionConfig": {
"type": "object",
"properties": {
"host": {
"type": "string"
},
"port": {
"type": "integer",
"format": "int32"
},
"type": {
"$ref": "#/definitions/apiFunctionConfigType"
}
},
"description": "Configuration for the Match Function to be triggered by Open Match to\ngenerate proposals."
},
"apiFunctionConfigType": {
"type": "string",
"enum": [
"GRPC",
"REST"
],
"default": "GRPC"
},
"apiMatch": {
"type": "object",
"properties": {
"match_id": {
"type": "string",
"description": "A Match ID that should be passed through the stack for tracing."
},
"match_profile": {
"type": "string",
"description": "Name of the match profile that generated this Match."
},
"match_function": {
"type": "string",
"description": "Name of the match function that generated this Match."
},
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/apiTicket"
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
},
"title": "Set of Rosters that comprise this Match"
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Match properties for this Match. Open Match does not interpret this field."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least \none ticket to be considered as valid."
},
"apiMatchProfile": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of this match profile."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Set of properties associated with this MatchProfile. (Optional)\nOpen Match does not interpret these properties but passes them through to\nthe MatchFunction."
},
"pools": {
"type": "array",
"items": {
"$ref": "#/definitions/apiPool"
},
"description": "Set of pools to be queried when generating a match for this MatchProfile.\nThe pool names can be used in empty Rosters to specify composition of a\nmatch."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
},
"description": "Set of Rosters for this match request. Could be empty Rosters used to\nindicate the composition of the generated Match or they could be partially\npre-populated Ticket list to be used in scenarios such as backfill / join\nin progress."
}
},
"description": "A MatchProfile is Open Match's representation of a Match specification. It is\nused to indicate the criteria for selecting players for a match. A\nMatchProfile is the input to the API to get matches and is passed to the\nMatchFunction. It contains all the information required by the MatchFunction\nto generate match proposals."
},
"apiPool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Pool."
},
"filters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected players must\nmatch every Filter."
}
}
},
"apiRoster": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
},
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that a Tickets\nassociated with a Match can be labelled to belong to a team, sub-team etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress etc."
},
"apiTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. 
Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of those\nvariants; absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. 
If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}

78
api/evaluator.proto Normal file

@@ -0,0 +1,78 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "Evaluator"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message EvaluateRequest {
// List of Matches to evaluate.
repeated api.Match matches = 1;
}
message EvaluateResponse {
// Accepted list of Matches.
repeated api.Match matches = 1;
}
// The service implementing the Evaluator API, which is called to evaluate
// matches generated by MMFs and shortlist them into the accepted results.
service Evaluator {
// Evaluate accepts a list of proposed matches, evaluates them for quality,
// collisions etc. and returns matches that should be accepted as results.
rpc Evaluate(EvaluateRequest) returns (EvaluateResponse) {
option (google.api.http) = {
post: "/v1/evaluator/matches:evaluate"
body: "*"
};
}
}
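The HTTP mapping above accepts a JSON body at `POST /v1/evaluator/matches:evaluate`. As a minimal sketch of that body — assuming no generated `pkg/pb` client is available, with hand-written mirror structs and illustrative match values (`m-1`, `profile-a`, `mmf-a` are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hand-written mirrors of EvaluateRequest/Match, for illustration only;
// a real client would use the generated pkg/pb types. Only a few Match
// fields are shown.
type Match struct {
	MatchId       string `json:"match_id"`
	MatchProfile  string `json:"match_profile"`
	MatchFunction string `json:"match_function"`
}

type EvaluateRequest struct {
	Matches []Match `json:"matches"`
}

// evaluateRequestBody builds the JSON body for the Evaluate call.
func evaluateRequestBody() string {
	req := EvaluateRequest{Matches: []Match{{
		MatchId:       "m-1",
		MatchProfile:  "profile-a",
		MatchFunction: "mmf-a",
	}}}
	body, _ := json.Marshal(req)
	return string(body)
}

func main() {
	// This body would be POSTed to /v1/evaluator/matches:evaluate; the
	// response carries the accepted subset in the same "matches" shape.
	fmt.Println(evaluateRequestBody())
}
```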

284
api/evaluator.swagger.json Normal file

@@ -0,0 +1,284 @@
{
"swagger": "2.0",
"info": {
"title": "Evaluator",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/evaluator/matches:evaluate": {
"post": {
"summary": "Evaluate accepts a list of proposed matches, evaluates them for quality,\ncollisions etc. and returns matches that should be accepted as results.",
"operationId": "Evaluate",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiEvaluateResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiEvaluateRequest"
}
}
],
"tags": [
"Evaluator"
]
}
}
},
"definitions": {
"apiAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nMatch does not require or inspect any fields on an Assignment."
},
"apiEvaluateRequest": {
"type": "object",
"properties": {
"matches": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "List of Matches to evaluate."
}
}
},
"apiEvaluateResponse": {
"type": "object",
"properties": {
"matches": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "Accepted list of Matches."
}
}
},
"apiMatch": {
"type": "object",
"properties": {
"match_id": {
"type": "string",
"description": "A Match ID that should be passed through the stack for tracing."
},
"match_profile": {
"type": "string",
"description": "Name of the match profile that generated this Match."
},
"match_function": {
"type": "string",
"description": "Name of the match function that generated this Match."
},
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/apiTicket"
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
},
"title": "Set of Rosters that comprise this Match"
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Match properties for this Match. Open Match does not interpret this field."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by Open Match as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least\none ticket to be considered valid."
},
"apiRoster": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
},
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that Tickets\nassociated with a Match can be labelled as belonging to a team, sub-team, etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress, etc."
},
"apiTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. 
Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of those\nvariants; absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. 
If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}
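Errors from these HTTP mappings arrive as `rpcStatus` payloads (a `code` from [google.rpc.Code], a developer-facing `message`, and optional `details`). A minimal sketch of decoding one on the client side, assuming a hand-written mirror struct rather than a generated type (the sample payload is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// RPCStatus mirrors the rpcStatus definition above; the details field is
// omitted here since its entries are protobufAny values.
type RPCStatus struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

// decodeStatus parses an error payload returned by one of the HTTP mappings.
func decodeStatus(payload []byte) (RPCStatus, error) {
	var st RPCStatus
	err := json.Unmarshal(payload, &st)
	return st, err
}

func main() {
	// code 5 is NOT_FOUND in google.rpc.Code; the message text is made up.
	st, err := decodeStatus([]byte(`{"code":5,"message":"ticket not found"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(st.Code, st.Message)
}
```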

130
api/frontend.proto Normal file

@@ -0,0 +1,130 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "Frontend"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message CreateTicketRequest {
// Ticket object with the properties of the Ticket to be created.
Ticket ticket = 1;
}
message CreateTicketResponse {
// Ticket object for the created Ticket - with the ticket ID populated.
Ticket ticket = 1;
}
message DeleteTicketRequest {
// Ticket ID of the Ticket to be deleted.
string ticket_id = 1;
}
message DeleteTicketResponse {}
message GetTicketRequest {
// Ticket ID of the Ticket to fetch.
string ticket_id = 1;
}
message GetAssignmentsRequest {
// Ticket ID of the Ticket to get updates on.
string ticket_id = 1;
}
message GetAssignmentsResponse {
// The Assignment currently associated with the requested Ticket.
Assignment assignment = 1;
}
// The Frontend service enables creating Tickets for matchmaking and fetching
// the status of these Tickets.
service Frontend {
// CreateTicket will create a new ticket, assign a Ticket ID to it and put the
// Ticket in state storage. It will then look through the 'properties' field
// for the attributes defined as indices in the matchmaking config. If the
// attributes exist and are valid integers, they will be indexed. Creating a
// ticket adds the Ticket to the pool of Tickets considered for matchmaking.
rpc CreateTicket(CreateTicketRequest) returns (CreateTicketResponse) {
option (google.api.http) = {
post: "/v1/frontend/tickets"
body: "*"
};
}
// DeleteTicket immediately removes the Ticket from the configured indices and
// lazily removes it from state storage. Deleting a ticket immediately stops it
// from being considered for future matchmaking requests, but exactly when the
// ticket itself is deleted is nondeterministic. Users may still be able to
// assign or get a ticket after calling DeleteTicket on it.
rpc DeleteTicket(DeleteTicketRequest) returns (DeleteTicketResponse) {
option (google.api.http) = {
delete: "/v1/frontend/tickets/{ticket_id}"
};
}
// GetTicket fetches the ticket associated with the specified Ticket ID.
rpc GetTicket(GetTicketRequest) returns (Ticket) {
option (google.api.http) = {
get: "/v1/frontend/tickets/{ticket_id}"
};
}
// GetAssignments streams matchmaking results from Open Match for the
// provided Ticket ID.
rpc GetAssignments(GetAssignmentsRequest)
returns (stream GetAssignmentsResponse) {
option (google.api.http) = {
get: "/v1/frontend/tickets/{ticket_id}/assignments"
};
}
}
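Per the CreateTicket comment above, top-level values in the Ticket's `properties` Struct can be indexed for filtering. A minimal sketch of the `POST /v1/frontend/tickets` body, assuming hand-written mirror types instead of the generated `pkg/pb` client (the `mmr` attribute name is a hypothetical indexed property):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative mirrors of Ticket/CreateTicketRequest; a real client would
// use the generated pkg/pb types. The properties Struct is a free-form JSON
// object whose top-level values may be indexed per the matchmaking config.
type Ticket struct {
	Properties map[string]interface{} `json:"properties"`
}

type CreateTicketRequest struct {
	Ticket Ticket `json:"ticket"`
}

// createTicketBody builds the JSON body for the CreateTicket call.
func createTicketBody() string {
	req := CreateTicketRequest{Ticket: Ticket{
		Properties: map[string]interface{}{
			"mmr": 1500, // hypothetical top-level attribute for indexing
		},
	}}
	body, _ := json.Marshal(req)
	return string(body)
}

func main() {
	// POST this to /v1/frontend/tickets; the response echoes the Ticket with
	// its Open Match-generated ID populated. The returned ticket_id is then
	// used with GET/DELETE /v1/frontend/tickets/{ticket_id} and the streaming
	// GET /v1/frontend/tickets/{ticket_id}/assignments endpoint.
	fmt.Println(createTicketBody())
}
```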

370
api/frontend.swagger.json Normal file

@@ -0,0 +1,370 @@
{
"swagger": "2.0",
"info": {
"title": "Frontend",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/frontend/tickets": {
"post": {
"summary": "CreateTicket will create a new ticket, assign a Ticket ID to it and put the\nTicket in state storage. It will then look through the 'properties' field\nfor the attributes defined as indices in the matchmaking config. If the\nattributes exist and are valid integers, they will be indexed. Creating a\nticket adds the Ticket to the pool of Tickets considered for matchmaking.",
"operationId": "CreateTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiCreateTicketResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiCreateTicketRequest"
}
}
],
"tags": [
"Frontend"
]
}
},
"/v1/frontend/tickets/{ticket_id}": {
"get": {
"summary": "GetTicket fetches the ticket associated with the specified Ticket ID.",
"operationId": "GetTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiTicket"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "ticket_id",
"description": "Ticket ID of the Ticket to fetch.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"Frontend"
]
},
"delete": {
"summary": "DeleteTicket immediately removes the Ticket from the configured indices and\nlazily removes it from state storage. Deleting a ticket immediately stops it\nfrom being considered for future matchmaking requests, but exactly when the\nticket itself is deleted is nondeterministic. Users may still be able to\nassign or get a ticket after calling DeleteTicket on it.",
"operationId": "DeleteTicket",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiDeleteTicketResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "ticket_id",
"description": "Ticket ID of the Ticket to be deleted.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"Frontend"
]
}
},
"/v1/frontend/tickets/{ticket_id}/assignments": {
"get": {
"summary": "GetAssignments streams matchmaking results from Open Match for the\nprovided Ticket ID.",
"operationId": "GetAssignments",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/apiGetAssignmentsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "ticket_id",
"description": "Ticket ID of the Ticket to get updates on.",
"in": "path",
"required": true,
"type": "string"
}
],
"tags": [
"Frontend"
]
}
}
},
"definitions": {
"apiAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nMatch does not require or inspect any fields on an Assignment."
},
"apiCreateTicketRequest": {
"type": "object",
"properties": {
"ticket": {
"$ref": "#/definitions/apiTicket",
"description": "Ticket object with the properties of the Ticket to be created."
}
}
},
"apiCreateTicketResponse": {
"type": "object",
"properties": {
"ticket": {
"$ref": "#/definitions/apiTicket",
"description": "Ticket object for the created Ticket - with the ticket ID populated."
}
}
},
"apiDeleteTicketResponse": {
"type": "object"
},
"apiGetAssignmentsResponse": {
"type": "object",
"properties": {
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "The Assignment currently associated with the requested Ticket."
}
}
},
"apiTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. 
Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of that\nvariants, absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. 
If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"apiGetAssignmentsResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/apiGetAssignmentsResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of apiGetAssignmentsResponse"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}

api/matchfunction.proto

@ -0,0 +1,80 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "Match Function"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message RunRequest {
// The MatchProfile that describes the Match that this MatchFunction needs to
// generate proposals for.
MatchProfile profile = 1;
}
message RunResponse {
// The proposals generated by this MatchFunction Run.
// Note that Open Match will validate the proposals; a valid match must contain at least one ticket.
repeated Match proposals = 1;
}
// This proto defines the API for running Match Functions as long-lived,
// 'serving' functions.
service MatchFunction {
// This is the function that is executed by the Open Match backend to
// generate Match proposals.
rpc Run(RunRequest) returns (RunResponse) {
option (google.api.http) = {
post: "/v1/matchfunction:run"
body: "*"
};
}
}

@ -0,0 +1,345 @@
{
"swagger": "2.0",
"info": {
"title": "Match Function",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/matchfunction:run": {
"post": {
"summary": "This is the function that is executed when by the Open Match backend to\ngenerate Match proposals.",
"operationId": "Run",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiRunResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiRunRequest"
}
}
],
"tags": [
"MatchFunction"
]
}
}
},
"definitions": {
"apiAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
},
"apiFilter": {
"type": "object",
"properties": {
"attribute": {
"type": "string",
"description": "Name of the ticket attribute this Filter operates on."
},
"max": {
"type": "number",
"format": "double",
"description": "Maximum value. Defaults to positive infinity (any value above minv)."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value. Defaults to 0."
}
},
"description": "A hard filter used to query a subset of Tickets meeting the filtering\ncriteria."
},
"apiMatch": {
"type": "object",
"properties": {
"match_id": {
"type": "string",
"description": "A Match ID that should be passed through the stack for tracing."
},
"match_profile": {
"type": "string",
"description": "Name of the match profile that generated this Match."
},
"match_function": {
"type": "string",
"description": "Name of the match function that generated this Match."
},
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/apiTicket"
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
},
"title": "Set of Rosters that comprise this Match"
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Match properties for this Match. Open Match does not interpret this field."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least \none ticket to be considered as valid."
},
"apiMatchProfile": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of this match profile."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Set of properties associated with this MatchProfile. (Optional)\nOpen Match does not interpret these properties but passes them through to\nthe MatchFunction."
},
"pools": {
"type": "array",
"items": {
"$ref": "#/definitions/apiPool"
},
"description": "Set of pools to be queried when generating a match for this MatchProfile.\nThe pool names can be used in empty Rosters to specify composition of a\nmatch."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
},
"description": "Set of Rosters for this match request. Could be empty Rosters used to\nindicate the composition of the generated Match or they could be partially\npre-populated Ticket list to be used in scenarios such as backfill / join\nin progress."
}
},
"description": "A MatchProfile is Open Match's representation of a Match specification. It is\nused to indicate the criteria for selecting players for a match. A\nMatchProfile is the input to the API to get matches and is passed to the\nMatchFunction. It contains all the information required by the MatchFunction\nto generate match proposals."
},
"apiPool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Pool."
},
"filters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected players must\nmatch every Filter."
}
}
},
"apiRoster": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
},
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that a Tickets\nassociated with a Match can be labelled to belong to a team, sub-team etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress etc."
},
"apiRunRequest": {
"type": "object",
"properties": {
"profile": {
"$ref": "#/definitions/apiMatchProfile",
"description": "The MatchProfile that describes the Match that this MatchFunction needs to\ngenerate proposals for."
}
}
},
"apiRunResponse": {
"type": "object",
"properties": {
"proposals": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "The proposal generated by this MatchFunction Run.\nNote that OpenMatch will validate the proposals, a valid match should contain at least one ticket."
}
}
},
"apiTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. 
Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of that\nvariants, absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. 
If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}

api/messages.proto

@ -0,0 +1,138 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
import "google/rpc/status.proto";
import "google/protobuf/struct.proto";
// A Ticket is a basic matchmaking entity in Open Match. In order to enter
// matchmaking using Open Match, the client should generate a Ticket, passing in
// the properties to be associated with this Ticket. Open Match will generate an
// ID for a Ticket during creation. A Ticket could be used to represent an
// individual 'Player' or a 'Group' of players. Open Match will not interpret
// what the Ticket represents but just treat it as a matchmaking unit with a set
// of properties. Open Match stores the Ticket in state storage and enables an
// Assignment to be associated with this Ticket.
message Ticket {
// The Ticket ID generated by Open Match.
string id = 1;
// Properties contains custom info about the ticket. Top level values can be
// used in indexing and filtering to find tickets.
google.protobuf.Struct properties = 2;
// Assignment associated with the Ticket.
Assignment assignment = 3;
}
// An Assignment object represents the assignment associated with a Ticket. Open
// Match does not require or inspect any fields on an Assignment.
message Assignment {
// Connection information for this Assignment.
string connection = 1;
// Other details to be sent to the players.
google.protobuf.Struct properties = 2;
// Error when finding an Assignment for this Ticket.
google.rpc.Status error = 3;
}
// A hard filter used to query a subset of Tickets meeting the filtering
// criteria.
message Filter {
// Name of the ticket attribute this Filter operates on.
string attribute = 1;
// Maximum value. Defaults to positive infinity (any value above min).
double max = 2;
// Minimum value. Defaults to 0.
double min = 3;
}
message Pool {
// A developer-chosen human-readable name for this Pool.
string name = 1;
// Set of Filters indicating the filtering criteria. Selected players must
// match every Filter.
repeated Filter filters = 2;
}
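A sketch of how Filter and Pool compose: every Filter in a Pool must match (a logical AND), with the documented defaults of min 0 and max positive infinity. The types below and the treat-unset-max-as-infinity reading are assumptions for illustration, not Open Match's actual filtering code:

```go
package main

import (
	"fmt"
	"math"
)

// Filter is a hand-written stand-in for api.Filter: a hard numeric range
// over one ticket attribute.
type Filter struct {
	Attribute string
	Min       float64 // documented default: 0
	Max       float64 // documented default: positive infinity
}

// Matches reports whether the attribute map satisfies this filter. An unset
// (zero) Max is read as +Inf per the field comment; this interpretation of
// proto3 zero values is an assumption.
func (f Filter) Matches(attrs map[string]float64) bool {
	v, ok := attrs[f.Attribute]
	if !ok {
		return false
	}
	upper := f.Max
	if upper == 0 {
		upper = math.Inf(1)
	}
	return v >= f.Min && v <= upper
}

// InPool applies Pool semantics: a ticket is selected only if it matches
// every Filter in the pool.
func InPool(filters []Filter, attrs map[string]float64) bool {
	for _, f := range filters {
		if !f.Matches(attrs) {
			return false
		}
	}
	return true
}

func main() {
	pool := []Filter{{Attribute: "mmr", Min: 1000, Max: 2000}}
	fmt.Println(InPool(pool, map[string]float64{"mmr": 1500})) // true
	fmt.Println(InPool(pool, map[string]float64{"mmr": 2500})) // false
}
```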
// A Roster is a named collection of Ticket IDs. It exists so that Tickets
// associated with a Match can be labelled to belong to a team, sub-team etc. It
// can also be used to represent the current state of a Match in scenarios such
// as backfill, join-in-progress etc.
message Roster {
// A developer-chosen human-readable name for this Roster.
string name = 1;
// Tickets belonging to this Roster.
repeated string ticket_ids = 2;
}
// A MatchProfile is Open Match's representation of a Match specification. It is
// used to indicate the criteria for selecting players for a match. A
// MatchProfile is the input to the API to get matches and is passed to the
// MatchFunction. It contains all the information required by the MatchFunction
// to generate match proposals.
message MatchProfile {
// Name of this match profile.
string name = 1;
// Set of properties associated with this MatchProfile. (Optional)
// Open Match does not interpret these properties but passes them through to
// the MatchFunction.
google.protobuf.Struct properties = 2;
// Set of pools to be queried when generating a match for this MatchProfile.
// The pool names can be used in empty Rosters to specify composition of a
// match.
repeated Pool pools = 3;
// Set of Rosters for this match request. Could be empty Rosters used to
// indicate the composition of the generated Match or they could be partially
// pre-populated Ticket list to be used in scenarios such as backfill / join
// in progress.
repeated Roster rosters = 4;
}
// A Match is used to represent a completed match object. It can be generated by
// a MatchFunction as a proposal or can be returned by Open Match as a result in
// response to the FetchMatches call.
// When a match is returned by the FetchMatches call, it must contain at least
// one ticket to be considered valid.
message Match {
// A Match ID that should be passed through the stack for tracing.
string match_id = 1;
// Name of the match profile that generated this Match.
string match_profile = 2;
// Name of the match function that generated this Match.
string match_function = 3;
// Tickets belonging to this match.
repeated Ticket tickets = 4;
// Set of Rosters that comprise this Match
repeated Roster rosters = 5;
// Match properties for this Match. Open Match does not interpret this field.
google.protobuf.Struct properties = 6;
}
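The relationship between these messages, a MatchProfile's empty Rosters indicating the composition of a generated Match, can be sketched with plain Python dicts mirroring the JSON form of the messages. This is a hedged illustration only; the profile name "mode-deathmatch", the roster names, and the round-robin fill strategy are hypothetical, not from the Open Match codebase:

```python
# Hypothetical MatchProfile: one pool, two empty Rosters describing the
# desired composition of the generated Match.
profile = {
    "name": "mode-deathmatch",
    "pools": [{"name": "everyone",
               "filters": [{"attribute": "mmr", "min": 0, "max": 3000}]}],
    "rosters": [{"name": "red", "ticket_ids": []},
                {"name": "blue", "ticket_ids": []}],
}

def propose_match(profile, ticket_ids):
    """Fill the profile's empty Rosters round-robin with queried ticket IDs,
    producing a Match proposal shaped like the Match message above."""
    rosters = [dict(r, ticket_ids=[]) for r in profile["rosters"]]
    for i, tid in enumerate(ticket_ids):
        rosters[i % len(rosters)]["ticket_ids"].append(tid)
    return {
        "match_profile": profile["name"],
        "match_function": "round-robin-mmf",  # hypothetical MMF name
        "rosters": rosters,
    }

match = propose_match(profile, ["t1", "t2", "t3", "t4"])
```

A real MatchFunction would also populate `tickets` and `properties`; the sketch only shows how empty Rosters become filled ones.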

api/mmlogic.proto Normal file (78 lines)

@@ -0,0 +1,78 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "pkg/pb";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "MM Logic (Data Layer)"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message QueryTicketsRequest {
// The Pool representing the set of Filters to be queried.
Pool pool = 1;
}
message QueryTicketsResponse {
// The Tickets that meet the Filter criteria requested by the Pool.
repeated Ticket tickets = 1;
}
// The MMLogic API provides utility functions for common MMF functionality such
// as retrieving Tickets from state storage.
service MmLogic {
// QueryTickets gets the list of Tickets that match every Filter in the
// specified Pool.
rpc QueryTickets(QueryTicketsRequest) returns (stream QueryTicketsResponse) {
option (google.api.http) = {
post: "/v1/mmlogic/tickets:query"
body: "*"
};
}
}
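The `google.api.http` annotation above maps QueryTickets onto a plain HTTP POST, with `body: "*"` meaning the entire QueryTicketsRequest message is taken from the request body. A hedged sketch of the transcoded JSON payload (the attribute name "mmr", the bounds, and the host placeholder are hypothetical):

```python
import json

# JSON body for POST /v1/mmlogic/tickets:query, mirroring
# QueryTicketsRequest{pool: ...} above.
request_body = {
    "pool": {
        "name": "competitive",
        "filters": [{"attribute": "mmr", "min": 1000.0, "max": 2000.0}],
    }
}
payload = json.dumps(request_body)
# e.g. POST http://<mmlogic-host>/v1/mmlogic/tickets:query with this payload.
# Because QueryTickets returns a stream, the transcoded response is a
# sequence of {"result": <QueryTicketsResponse>} chunks.
```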

api/mmlogic.swagger.json Normal file (303 lines)

@@ -0,0 +1,303 @@
{
"swagger": "2.0",
"info": {
"title": "MM Logic (Data Layer)",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/mmlogic/tickets:query": {
"post": {
"summary": "QueryTickets gets the list of Tickets that match every Filter in the\nspecified Pool.",
"operationId": "QueryTickets",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/apiQueryTicketsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiQueryTicketsRequest"
}
}
],
"tags": [
"MmLogic"
]
}
}
},
"definitions": {
"apiAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
}
},
      "description": "An Assignment object represents the assignment associated with a Ticket. Open\nMatch does not require or inspect any fields on assignment."
},
"apiFilter": {
"type": "object",
"properties": {
"attribute": {
"type": "string",
"description": "Name of the ticket attribute this Filter operates on."
},
"max": {
"type": "number",
"format": "double",
          "description": "Maximum value. Defaults to positive infinity (any value above min)."
},
"min": {
"type": "number",
"format": "double",
"description": "Minimum value. Defaults to 0."
}
},
"description": "A hard filter used to query a subset of Tickets meeting the filtering\ncriteria."
},
"apiPool": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Pool."
},
"filters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiFilter"
},
"description": "Set of Filters indicating the filtering criteria. Selected players must\nmatch every Filter."
}
}
},
"apiQueryTicketsRequest": {
"type": "object",
"properties": {
"pool": {
"$ref": "#/definitions/apiPool",
"description": "The Pool representing the set of Filters to be queried."
}
}
},
"apiQueryTicketsResponse": {
"type": "object",
"properties": {
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/apiTicket"
},
"description": "The Tickets that meet the Filter criteria requested by the Pool."
}
}
},
"apiTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. 
Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of that\nvariants, absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. 
If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"apiQueryTicketsResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/apiQueryTicketsResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of apiQueryTicketsResponse"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}
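The `apiFilter` semantics documented above (min defaults to 0, max to positive infinity, and a Pool selects only tickets matching every Filter) can be sketched as follows. This mirrors the documented defaults only; it is a hypothetical illustration, not the actual Open Match query implementation:

```python
import math

def matches_filter(value, f):
    """A ticket property value matches when min <= value <= max,
    with min defaulting to 0 and max to positive infinity."""
    lo = f.get("min", 0.0)
    hi = f.get("max", math.inf)
    return lo <= value <= hi

def ticket_in_pool(ticket_properties, pool):
    # Filters are logically AND-ed: the ticket must match every Filter,
    # and must carry the attribute each Filter operates on.
    return all(
        f["attribute"] in ticket_properties
        and matches_filter(ticket_properties[f["attribute"]], f)
        for f in pool.get("filters", [])
    )

# Hypothetical pool with a single mmr band filter.
pool = {"name": "mid-tier",
        "filters": [{"attribute": "mmr", "min": 1000, "max": 2000}]}
```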

@@ -1,17 +0,0 @@
** REST compatibility
Follow the guidelines at https://cloud.google.com/endpoints/docs/grpc/transcoding
to keep the gRPC service definitions friendly to REST transcoding. An excerpt:
"Transcoding involves mapping HTTP/JSON requests and their parameters to gRPC
methods and their parameters and return types (we'll look at exactly how you
do this in the following sections). Because of this, while it's possible to
map an HTTP/JSON request to any arbitrary API method, it's simplest and most
intuitive to do so if the gRPC API itself is structured in a
resource-oriented way, just like a traditional HTTP REST API. In other
words, the API service should be designed so that it uses a small number of
standard methods (corresponding to HTTP verbs like GET, PUT, and so on) that
operate on the service's resources (and collections of resources, which are
themselves a type of resource).
These standard methods are List, Get, Create, Update, and Delete."
It is for these reasons we don't have gRPC calls that support bi-directional streaming in Open Match.

@@ -1,56 +0,0 @@
syntax = 'proto3';
package api;
option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
// The protobuf messages sent in the gRPC calls are defined in 'messages.proto'.
import 'api/protobuf-spec/messages.proto';
service Backend {
// Calls to ask the matchmaker to run a matchmaking function.
// Run the MMF once. Return a MatchObject that fits this profile.
// - id
// - properties
// - [optional] roster, any fields you fill are available to your MMF.
// - [optional] pools, any fields you fill are available to your MMF.
// OUTPUT: MatchObject message with these fields populated:
// - id
// - properties
// - error. Empty if no error was encountered
// - rosters, if you choose to fill them in your MMF. (Recommended)
// - pools, if you used the MMLogicAPI in your MMF. (Recommended, and provides stats)
rpc CreateMatch(messages.MatchObject) returns (messages.MatchObject) {}
// Continually run MMF and stream MatchObjects that fit this profile until
// the backend client closes the connection. Same inputs/outputs as CreateMatch.
rpc ListMatches(messages.MatchObject) returns (stream messages.MatchObject) {}
// Delete a MatchObject from state storage manually. (MatchObjects in state
// storage will also automatically expire after a while, defined in the config)
// INPUT: MatchObject message with the 'id' field populated.
// (All other fields are ignored.)
rpc DeleteMatch(messages.MatchObject) returns (messages.Result) {}
// Calls for communication of connection info to players.
// Write the connection info for the list of players in the
// Assignments.messages.Rosters to state storage. The Frontend API is
// responsible for sending anything sent here to the game clients.
// Sending a player to this function kicks off a process that removes
// the player from future matchmaking functions by adding them to the
// 'deindexed' player list and then deleting their player ID from state storage
// indexes.
// INPUT: Assignments message with these fields populated:
// - assignment, anything you write to this string is sent to Frontend API
// - rosters. You can send any number of rosters, containing any number of
// player messages. All players from all rosters will be sent the assignment.
// The only field in the Roster's Player messages used by CreateAssignments is
// the id field. All other fields in the Player messages are silently ignored.
rpc CreateAssignments(messages.Assignments) returns (messages.Result) {}
// Remove DGS connection info from state storage for players.
// INPUT: Roster message with the 'players' field populated.
// The only field in the Roster's Player messages used by
// DeleteAssignments is the 'id' field. All others are silently ignored. If
// you need to delete multiple rosters, make multiple calls.
rpc DeleteAssignments(messages.Roster) returns (messages.Result) {}
}

@@ -1,65 +0,0 @@
syntax = 'proto3';
package api;
option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
import 'api/protobuf-spec/messages.proto';
service Frontend {
// Call to start matchmaking for a player
// CreatePlayer will put the player in state storage, and then look
// through the 'properties' field for the attributes you have defined as
// indices in your matchmaker config. If the attributes exist and are valid
// integers, they will be indexed.
// INPUT: Player message with these fields populated:
// - id
// - properties
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
rpc CreatePlayer(messages.Player) returns (messages.Result) {}
// Call to stop matchmaking for a player
// DeletePlayer removes the player from state storage by doing the
// following:
// 1) Delete player from configured indices. This effectively removes the
// player from matchmaking when using recommended MMF patterns.
// Everything after this is just cleanup to save state storage space.
// 2) 'Lazily' delete the player's state storage record. This is kicked
// off in the background and may take some time to complete.
// 3) 'Lazily' delete the player's metadata indices (like the timestamp when
// they called CreatePlayer, and the last time the record was accessed). This
// is also kicked off in the background and may take some time to complete.
// INPUT: Player message with the 'id' field populated.
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
rpc DeletePlayer(messages.Player) returns (messages.Result) {}
// Calls to access matchmaking results for a player
// GetUpdates streams matchmaking results from Open Match for the
// provided player ID.
// INPUT: Player message with the 'id' field populated.
// OUTPUT: a stream of player objects with one or more of the following
// fields populated, if an update to that field is seen in state storage:
// - 'assignment': string that usually contains game server connection information.
// - 'status': string to communicate current matchmaking status to the client.
// - 'error': string to pass along error information to the client.
//
// During normal operation, the expectation is that the 'assignment' field
// will be updated by a Backend process calling the 'CreateAssignments' Backend API
// endpoint. 'Status' and 'Error' are free for developers to use as they see fit.
// Even if you had multiple players enter a matchmaking request as a group, the
// Backend API 'CreateAssignments' call will write the results to state
// storage separately under each player's ID. OM expects you to make all game
// clients 'GetUpdates' with their own ID from the Frontend API to get
// their results.
//
// NOTE: This call generates a small amount of load on the Frontend API and state
// storage while watching the player record for updates. You are expected
// to close the stream from your client after receiving your matchmaking
// results (or a reasonable timeout), or you will continue to
// generate load on OM until you do!
// NOTE: Just bear in mind that every update will send egress traffic from
// Open Match to game clients! Frugality is recommended.
rpc GetUpdates(messages.Player) returns (stream messages.Player) {}
}

@@ -1,94 +0,0 @@
syntax = 'proto3';
package messages;
option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
// Open Match's internal representation and wire protocol format for "MatchObjects".
// In order to request a match using the Backend API, your backend code should generate
// a new MatchObject with an ID and properties filled in (for more details about valid
// values for these fields, see the documentation). Open Match then sends the Match
// Object through to your matchmaking function, where you add players to 'rosters' and
// store any schemaless data you wish in the 'properties' field. The MatchObject
// is then sent, populated, out through the Backend API to your backend code.
//
// MatchObjects contain a number of fields, but many gRPC calls that take a
// MatchObject as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
message MatchObject{
string id = 1; // By convention, an Xid
string properties = 2; // By convention, a JSON-encoded string
string error = 3; // Last error encountered.
repeated Roster rosters = 4; // Rosters of players.
repeated PlayerPool pools = 5; // 'Hard' filters, and the players who match them.
}
// Data structure to hold a list of players in a match.
message Roster{
string name = 1; // Arbitrary developer-chosen, human-readable string. By convention, set to team name.
repeated Player players = 2; // Player profiles on this roster.
}
// A 'hard' filter to apply to the player pool.
message Filter{
string name = 1; // Arbitrary developer-chosen, human-readable name of this filter. Appears in logs and metrics.
string attribute = 2; // Name of the player attribute this filter operates on.
int64 maxv = 3; // Maximum value. Defaults to positive infinity (any value above minv).
int64 minv = 4; // Minimum value. Defaults to 0.
Stats stats = 5; // Statistics for the last time the filter was applied.
}
// Holds statistics
message Stats{
int64 count = 1; // Number of results.
double elapsed = 2; // How long it took to get the results.
}
// PlayerPools are defined by a set of 'hard' filters, and can be filled in
// with the players that match those filters.
//
// PlayerPools contain a number of fields, but many gRPC calls that take a
// PlayerPool as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
message PlayerPool{
string name = 1; // Arbitrary developer-chosen, human-readable string.
repeated Filter filters = 2; // Filters are logical AND-ed (a player must match every filter).
Roster roster = 3; // Roster of players that match all filters.
Stats stats = 4; // Statistics for the last time this Pool was retrieved from state storage.
}
// Open Match's internal representation and wire protocol format for "Players".
// In order to enter matchmaking using the Frontend API, your client code should generate
// a consistent Player (same result for each client every time they launch) with an ID and
// properties filled in (for more details about valid values for these fields,
// see the documentation).
// Players contain a number of fields, but the gRPC calls that take a
// Player as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
message Player{
message Attribute{
string name = 1; // Name should match a Filter.attribute field.
int64 value = 2;
}
string id = 1; // By convention, an Xid
string properties = 2; // By convention, a JSON-encoded string
string pool = 3; // Optionally used to specify the PlayerPool in which to find a player.
repeated Attribute attributes = 4; // Attributes of this player.
string assignment = 5; // By convention, ip:port of a DGS to connect to
string status = 6; // Arbitrary developer-chosen string.
string error = 7; // Arbitrary developer-chosen string.
}
// Simple message to return success/failure and error status.
message Result{
bool success = 1;
string error = 2;
}
// IlInput is an empty message reserved for future use.
message IlInput{
}
message Assignments{
repeated Roster rosters = 1;
string assignment = 10;
}

@@ -1,74 +0,0 @@
syntax = 'proto3';
package api;
option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
// The protobuf messages sent in the gRPC calls are defined in 'messages.proto'.
import 'api/protobuf-spec/messages.proto';
// The MMLogic API provides utility functions for common MMF functionality, such
// as retrieving profiles and players from state storage, writing results to state storage,
// and exposing metrics and statistics.
service MmLogic {
// Profile and match object functions
// Send GetProfile a match object with the ID field populated, it will return a
// 'filled' one.
// Note: filters are assumed to have been checked for validity by the
// backend API when accepting a profile
rpc GetProfile(messages.MatchObject) returns (messages.MatchObject) {}
// CreateProposal is called by MMFs that wish to write their results to
// a proposed MatchObject, which can be sent out through the Backend API once it has
// been approved (by default, by the evaluator process).
// - adds all players in all Rosters to the proposed player ignore list
// - writes the proposed match to the provided key
// - adds that key to the list of proposals to be considered
// INPUT:
// * TO RETURN A MATCHOBJECT AFTER A SUCCESSFUL MMF RUN
// To create a match, send a MatchObject message with these fields populated:
// - id, set to the value of the MMF_PROPOSAL_ID env var
// - properties
// - error. You must explicitly set this to an empty string if your MMF
// was successful.
// - roster, with the playerIDs filled in the 'players' repeated field.
// - [optional] pools, set to the output from the 'GetPlayerPools' call,
// will populate the pools with stats about how many players the filters
// matched and how long the filters took to run, which will be sent out
// the backend api along with your match results.
// * TO RETURN AN ERROR
// To report a failure or error, send a MatchObject message with these
// fields populated:
// - id, set to the value of the MMF_ERROR_ID env var.
// - error, set to a string value describing the error your MMF encountered.
// - [optional] properties, anything you put here is returned to the
// backend along with your error.
// - [optional] rosters, anything you put here is returned to the
// backend along with your error.
// - [optional] pools, set to the output from the 'GetPlayerPools' call,
// will populate the pools with stats about how many players the filters
// matched and how long the filters took to run, which will be sent out
// the backend api along with your match results.
// OUTPUT: a Result message with a boolean success value and an error string
// if an error was encountered
rpc CreateProposal(messages.MatchObject) returns (messages.Result) {}
// Player listing and filtering functions
//
// GetPlayerPool gets the list of players that match every Filter in the
// PlayerPool, excluding players in any configured ignore lists. It
// combines the results and returns the resulting player pool.
rpc GetPlayerPool(messages.PlayerPool) returns (stream messages.PlayerPool) {}
// Ignore List functions
//
// IlInput is an empty message reserved for future use.
rpc GetAllIgnoredPlayers(messages.IlInput) returns (messages.Roster) {}
// ListIgnoredPlayers retrieves players from the ignore list specified in the
// config file under 'ignoreLists.proposed.name'.
rpc ListIgnoredPlayers(messages.IlInput) returns (messages.Roster) {}
// NYI
// UpdateMetrics sends stats about the MMF run to export to a metrics aggregation tool
// like Prometheus or StackDriver.
// rpc UpdateMetrics(messages.NYI) returns (messages.Results) {}
}

@@ -1,3 +0,0 @@
python3 -m grpc_tools.protoc -I . --python_out=. --grpc_python_out=. mmlogic.proto
python3 -m grpc_tools.protoc -I . --python_out=. --grpc_python_out=. messages.proto
cp *pb2* $OM/examples/functions/python3/simple/.

@@ -1,26 +0,0 @@
#!/bin/bash
# Script to compile golang versions of the OM proto files
#
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
cd $GOPATH/src
protoc \
${GOPATH}/src/github.com/GoogleCloudPlatform/open-match/api/protobuf-spec/backend.proto \
${GOPATH}/src/github.com/GoogleCloudPlatform/open-match/api/protobuf-spec/frontend.proto \
${GOPATH}/src/github.com/GoogleCloudPlatform/open-match/api/protobuf-spec/mmlogic.proto \
${GOPATH}/src/github.com/GoogleCloudPlatform/open-match/api/protobuf-spec/messages.proto \
-I ${GOPATH}/src/github.com/GoogleCloudPlatform/open-match/ \
--go_out=plugins=grpc:$GOPATH/src
cd -

api/synchronizer.proto

@ -0,0 +1,103 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package api;
option go_package = "internal/pb";
import "api/messages.proto";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
option (grpc.gateway.protoc_gen_swagger.options.openapiv2_swagger) = {
info: {
title: "Synchronizer"
version: "1.0"
contact: {
name: "Open Match"
url: "https://open-match.dev"
email: "open-match-discuss@googlegroups.com"
}
license: {
name: "Apache 2.0 License"
url: "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
}
external_docs: {
url: "https://open-match.dev/site/docs/"
description: "Open Match Documentation"
}
schemes: HTTP
schemes: HTTPS
consumes: "application/json"
produces: "application/json"
responses: {
key: "404"
value: {
description: "Returned when the resource does not exist."
schema: { json_schema: { type: STRING } }
}
}
// TODO Add annotations for security_definitions.
// See
// https://github.com/grpc-ecosystem/grpc-gateway/blob/master/examples/proto/examplepb/a_bit_of_everything.proto
};
message RegisterRequest {
}
message RegisterResponse {
// Identifier for this request valid for the current synchronization cycle.
string id = 1;
}
message EvaluateProposalsRequest {
// List of proposals to evaluate in the current synchronization cycle.
repeated Match matches = 1;
// Identifier for this request issued during request registration.
string id = 2;
}
message EvaluateProposalsResponse {
// Results from evaluating proposals for this request.
repeated Match matches = 1;
}
// The service implementing the Synchronizer API that synchronizes the evaluation
// of proposals returned from Match functions.
service Synchronizer {
// Register associates this request with the current synchronization cycle and
// returns an identifier for this registration. The caller passes this
// identifier back in the evaluation request, which enables the synchronizer to
// identify stale evaluation requests belonging to a prior cycle.
rpc Register(RegisterRequest) returns (RegisterResponse) {
option (google.api.http) = {
get: "/v1/synchronizer/register"
};
}
// EvaluateProposals accepts a list of proposals and a registration identifier
// for this request. If the synchronization cycle to which the request was
// registered has completed, this request fails; otherwise the proposals are
// added to the list of proposals to be evaluated in the current cycle. At the
// end of the cycle, the user-defined evaluation method is triggered and the
// matches accepted by it are returned as results.
rpc EvaluateProposals(EvaluateProposalsRequest) returns (EvaluateProposalsResponse) {
option (google.api.http) = {
post: "/v1/synchronizer/proposals:evaluate"
body: "*"
};
}
}
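The Register/EvaluateProposals contract above can be sketched with a toy in-memory synchronizer. This illustrates only the cycle semantics (stale-request rejection) under assumed names like `closeCycle`; it is not the real gRPC service, which runs the user-defined evaluator over proposals from all registered requests:

```go
package main

import (
	"errors"
	"fmt"
)

// Match stands in for the api.Match proto message (simplified for illustration).
type Match struct{ ID string }

// synchronizer models the cycle semantics: requests register into the current
// cycle, and evaluation requests carrying an id from a completed cycle fail.
type synchronizer struct {
	cycle      int
	registered map[string]int // request id -> cycle it was registered in
	nextID     int
}

func newSynchronizer() *synchronizer {
	return &synchronizer{registered: map[string]int{}}
}

// Register associates the request with the current cycle and returns its id.
func (s *synchronizer) Register() string {
	s.nextID++
	id := fmt.Sprintf("req-%d", s.nextID)
	s.registered[id] = s.cycle
	return id
}

// EvaluateProposals accepts proposals only while the request's cycle is open.
func (s *synchronizer) EvaluateProposals(id string, proposals []Match) ([]Match, error) {
	if s.registered[id] != s.cycle {
		return nil, errors.New("stale request: synchronization cycle already completed")
	}
	// The real service gathers proposals from every registered request and
	// triggers the user-defined evaluator at cycle end; here we echo them back.
	return proposals, nil
}

// closeCycle ends the current cycle, invalidating earlier registrations.
func (s *synchronizer) closeCycle() { s.cycle++ }

func main() {
	s := newSynchronizer()
	id := s.Register()
	matches, err := s.EvaluateProposals(id, []Match{{ID: "m1"}})
	fmt.Println(len(matches), err) // accepted while the cycle is open
	s.closeCycle()
	_, err = s.EvaluateProposals(id, nil)
	fmt.Println(err != nil) // a request from a prior cycle is rejected
}
```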

@ -0,0 +1,320 @@
{
"swagger": "2.0",
"info": {
"title": "Synchronizer",
"version": "1.0",
"contact": {
"name": "Open Match",
"url": "https://open-match.dev",
"email": "open-match-discuss@googlegroups.com"
},
"license": {
"name": "Apache 2.0 License",
"url": "https://github.com/googleforgames/open-match/blob/master/LICENSE"
}
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/synchronizer/proposals:evaluate": {
"post": {
"summary": "EvaluateProposals accepts a list of proposals and a registration identifier\nfor this request. If the synchronization cycle to which the request was\nregistered is completed, this request fails otherwise the proposals are\nadded to the list of proposals to be evaluated in the current cycle. At the\nend of the cycle, the user defined evaluation method is triggered and the\nmatches accepted by it are returned as results.",
"operationId": "EvaluateProposals",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiEvaluateProposalsResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiEvaluateProposalsRequest"
}
}
],
"tags": [
"Synchronizer"
]
}
},
"/v1/synchronizer/register": {
"get": {
"summary": "Register associates this request with the current synchronization cycle and\nreturns an identifier for this registration. The caller returns this\nidentifier back in the evaluation request. This enables synchronizer to\nidentify stale evaluation requests belonging to a prior window.",
"operationId": "Register",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiRegisterResponse"
}
},
"404": {
"description": "Returned when the resource does not exist.",
"schema": {
"format": "string"
}
}
},
"tags": [
"Synchronizer"
]
}
}
},
"definitions": {
"apiAssignment": {
"type": "object",
"properties": {
"connection": {
"type": "string",
"description": "Connection information for this Assignment."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Other details to be sent to the players."
},
"error": {
"$ref": "#/definitions/rpcStatus",
"description": "Error when finding an Assignment for this Ticket."
}
},
"description": "An Assignment object represents the assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
},
"apiEvaluateProposalsRequest": {
"type": "object",
"properties": {
"matches": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "List of proposals to evaluate in the current synchronization cycle."
},
"id": {
"type": "string",
"description": "Identifier for this request issued during request registration."
}
}
},
"apiEvaluateProposalsResponse": {
"type": "object",
"properties": {
"matches": {
"type": "array",
"items": {
"$ref": "#/definitions/apiMatch"
},
"description": "Results from evaluating proposals for this request."
}
}
},
"apiMatch": {
"type": "object",
"properties": {
"match_id": {
"type": "string",
"description": "A Match ID that should be passed through the stack for tracing."
},
"match_profile": {
"type": "string",
"description": "Name of the match profile that generated this Match."
},
"match_function": {
"type": "string",
"description": "Name of the match function that generated this Match."
},
"tickets": {
"type": "array",
"items": {
"$ref": "#/definitions/apiTicket"
},
"description": "Tickets belonging to this match."
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/apiRoster"
},
"title": "Set of Rosters that comprise this Match"
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Match properties for this Match. Open Match does not interpret this field."
}
},
"description": "A Match is used to represent a completed match object. It can be generated by\na MatchFunction as a proposal or can be returned by OpenMatch as a result in\nresponse to the FetchMatches call.\nWhen a match is returned by the FetchMatches call, it should contain at least \none ticket to be considered as valid."
},
"apiRegisterResponse": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Identifier for this request valid for the current synchronization cycle."
}
}
},
"apiRoster": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "A developer-chosen human-readable name for this Roster."
},
"ticket_ids": {
"type": "array",
"items": {
"type": "string"
},
"description": "Tickets belonging to this Roster."
}
},
"description": "A Roster is a named collection of Ticket IDs. It exists so that a Tickets\nassociated with a Match can be labelled to belong to a team, sub-team etc. It\ncan also be used to represent the current state of a Match in scenarios such\nas backfill, join-in-progress etc."
},
"apiTicket": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "The Ticket ID generated by Open Match."
},
"properties": {
"$ref": "#/definitions/protobufStruct",
"description": "Properties contains custom info about the ticket. Top level values can be\nused in indexing and filtering to find tickets."
},
"assignment": {
"$ref": "#/definitions/apiAssignment",
"description": "Assignment associated with the Ticket."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. In order to enter\nmatchmaking using Open Match, the client should generate a Ticket, passing in\nthe properties to be associated with this Ticket. Open Match will generate an\nID for a Ticket during creation. A Ticket could be used to represent an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof properties. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string",
"description": "A URL/resource name that uniquely identifies the type of the serialized\nprotocol buffer message. This string must contain at least\none \"/\" character. The last segment of the URL's path must represent\nthe fully qualified name of the type (as in\n`path/google.protobuf.Duration`). The name should be in a canonical form\n(e.g., leading \".\" is not accepted).\n\nIn practice, teams usually precompile into the binary all types that they\nexpect it to use in the context of Any. However, for URLs which use the\nscheme `http`, `https`, or no scheme, one can optionally set up a type\nserver that maps type URLs to message definitions as follows:\n\n* If no scheme is provided, `https` is assumed.\n* An HTTP GET on the URL must yield a [google.protobuf.Type][]\n value in binary format, or produce an error.\n* Applications are allowed to cache lookup results based on the\n URL, or have them precompiled into a binary to avoid any\n lookup. Therefore, binary compatibility needs to be preserved\n on changes to types. (Use versioned type names to manage\n breaking changes.)\n\nNote: this functionality is not currently available in the official\nprotobuf release, and it is not used for type URLs beginning with\ntype.googleapis.com.\n\nSchemes other than `http`, `https` (or the empty scheme) might be\nused with implementation specific semantics."
},
"value": {
"type": "string",
"format": "byte",
"description": "Must be a valid serialized protocol buffer of the above specified type."
}
},
"description": "`Any` contains an arbitrary serialized protocol buffer message along with a\nURL that describes the type of the serialized message.\n\nProtobuf library provides support to pack/unpack Any values in the form\nof utility functions or additional generated methods of the Any type.\n\nExample 1: Pack and unpack a message in C++.\n\n Foo foo = ...;\n Any any;\n any.PackFrom(foo);\n ...\n if (any.UnpackTo(\u0026foo)) {\n ...\n }\n\nExample 2: Pack and unpack a message in Java.\n\n Foo foo = ...;\n Any any = Any.pack(foo);\n ...\n if (any.is(Foo.class)) {\n foo = any.unpack(Foo.class);\n }\n\n Example 3: Pack and unpack a message in Python.\n\n foo = Foo(...)\n any = Any()\n any.Pack(foo)\n ...\n if any.Is(Foo.DESCRIPTOR):\n any.Unpack(foo)\n ...\n\n Example 4: Pack and unpack a message in Go\n\n foo := \u0026pb.Foo{...}\n any, err := ptypes.MarshalAny(foo)\n ...\n foo := \u0026pb.Foo{}\n if err := ptypes.UnmarshalAny(any, foo); err != nil {\n ...\n }\n\nThe pack methods provided by protobuf library will by default use\n'type.googleapis.com/full.type.name' as the type URL and the unpack\nmethods only use the fully qualified type name after the last '/'\nin the type URL, for example \"foo.bar.com/x/y.z\" will yield type\nname \"y.z\".\n\n\nJSON\n====\nThe JSON representation of an `Any` value uses the regular\nrepresentation of the deserialized, embedded message, with an\nadditional field `@type` which contains the type URL. Example:\n\n package google.profile;\n message Person {\n string first_name = 1;\n string last_name = 2;\n }\n\n {\n \"@type\": \"type.googleapis.com/google.profile.Person\",\n \"firstName\": \u003cstring\u003e,\n \"lastName\": \u003cstring\u003e\n }\n\nIf the embedded message type is well-known and has a custom JSON\nrepresentation, that representation will be embedded adding a field\n`value` which holds the custom JSON in addition to the `@type`\nfield. Example (for message [google.protobuf.Duration][]):\n\n {\n \"@type\": \"type.googleapis.com/google.protobuf.Duration\",\n \"value\": \"1.212s\"\n }"
},
"protobufListValue": {
"type": "object",
"properties": {
"values": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufValue"
},
"description": "Repeated field of dynamically typed values."
}
},
"description": "`ListValue` is a wrapper around a repeated field of values.\n\nThe JSON representation for `ListValue` is JSON array."
},
"protobufNullValue": {
"type": "string",
"enum": [
"NULL_VALUE"
],
"default": "NULL_VALUE",
"description": "`NullValue` is a singleton enumeration to represent the null value for the\n`Value` type union.\n\n The JSON representation for `NullValue` is JSON `null`.\n\n - NULL_VALUE: Null value."
},
"protobufStruct": {
"type": "object",
"properties": {
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/protobufValue"
},
"description": "Unordered map of dynamically typed values."
}
},
"description": "`Struct` represents a structured data value, consisting of fields\nwhich map to dynamically typed values. In some languages, `Struct`\nmight be supported by a native representation. For example, in\nscripting languages like JS a struct is represented as an\nobject. The details of that representation are described together\nwith the proto support for the language.\n\nThe JSON representation for `Struct` is JSON object."
},
"protobufValue": {
"type": "object",
"properties": {
"null_value": {
"$ref": "#/definitions/protobufNullValue",
"description": "Represents a null value."
},
"number_value": {
"type": "number",
"format": "double",
"description": "Represents a double value."
},
"string_value": {
"type": "string",
"description": "Represents a string value."
},
"bool_value": {
"type": "boolean",
"format": "boolean",
"description": "Represents a boolean value."
},
"struct_value": {
"$ref": "#/definitions/protobufStruct",
"description": "Represents a structured value."
},
"list_value": {
"$ref": "#/definitions/protobufListValue",
"description": "Represents a repeated `Value`."
}
},
"description": "`Value` represents a dynamically typed value which can be either\nnull, a number, a string, a boolean, a recursive struct value, or a\nlist of values. A producer of value is expected to set one of that\nvariants, absence of any variant indicates an error.\n\nThe JSON representation for `Value` is JSON value."
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32",
"description": "The status code, which should be an enum value of\n[google.rpc.Code][google.rpc.Code]."
},
"message": {
"type": "string",
"description": "A developer-facing error message, which should be in English. Any\nuser-facing error message should be localized and sent in the\n[google.rpc.Status.details][google.rpc.Status.details] field, or localized\nby the client."
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
},
"description": "A list of messages that carry the error details. There is a common set of\nmessage types for APIs to use."
}
},
"description": "- Simple to use and understand for most users\n- Flexible enough to meet unexpected needs\n\n# Overview\n\nThe `Status` message contains three pieces of data: error code, error\nmessage, and error details. The error code should be an enum value of\n[google.rpc.Code][google.rpc.Code], but it may accept additional error codes\nif needed. The error message should be a developer-facing English message\nthat helps developers *understand* and *resolve* the error. If a localized\nuser-facing error message is needed, put the localized message in the error\ndetails or localize it in the client. The optional error details may contain\narbitrary information about the error. There is a predefined set of error\ndetail types in the package `google.rpc` that can be used for common error\nconditions.\n\n# Language mapping\n\nThe `Status` message is the logical representation of the error model, but it\nis not necessarily the actual wire format. When the `Status` message is\nexposed in different client libraries and different wire protocols, it can be\nmapped differently. For example, it will likely be mapped to some exceptions\nin Java, but more likely mapped to some error codes in C.\n\n# Other uses\n\nThe error model and the `Status` message can be used in a variety of\nenvironments, either with or without APIs, to provide a\nconsistent developer experience across different environments.\n\nExample uses of this error model include:\n\n- Partial errors. If a service needs to return partial errors to the client,\n it may embed the `Status` in the normal response to indicate the partial\n errors.\n\n- Workflow errors. A typical workflow has multiple steps. Each step may\n have a `Status` message for error reporting.\n\n- Batch operations. If a client uses batch request and batch response, the\n `Status` message should be used directly inside batch response, one for\n each error sub-response.\n\n- Asynchronous operations. If an API call embeds asynchronous operation\n results in its response, the status of those operations should be\n represented directly using the `Status` message.\n\n- Logging. If some API errors are stored in logs, the message `Status` could\n be used directly after any stripping needed for security/privacy reasons.",
"title": "The `Status` type defines a logical error model that is suitable for\ndifferent programming environments, including REST APIs and RPC APIs. It is\nused by [gRPC](https://github.com/grpc). The error model is designed to be:"
}
},
"externalDocs": {
"description": "Open Match Documentation",
"url": "https://open-match.dev/site/docs/"
}
}

cloudbuild.yaml

@ -0,0 +1,210 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# Open Match Script for Google Cloud Build #
################################################################################
# To run this locally:
# cloud-build-local --config=cloudbuild.yaml --dryrun=false --substitutions=_OM_VERSION=DEV .
# To run this remotely:
# gcloud builds submit --config=cloudbuild.yaml --substitutions=_OM_VERSION=DEV .
# Requires gcloud to be installed to work. (https://cloud.google.com/sdk/)
# gcloud auth login
# gcloud components install cloud-build-local
# This YAML contains all the build steps for building Open Match.
# All PRs are verified against this script to prevent build breakages and regressions.
# Conventions
# Each build step is ID'ed with "Prefix: Description".
# The prefix portion determines what kind of step it is and its impact.
# Docker Image: Read-Only, outputs a docker image.
# Lint: Read-Only, verifies correctness and formatting of a file.
# Build: Read-Write, outputs a build artifact. OK to run in parallel if the artifact will not collide with another one.
# Generate: Read-Write, outputs files within /workspace that are used in other build steps. Do not run these in parallel.
# Setup: Read-Write, similar to Generate but runs before any other step.
# Some useful things to know about Cloud Build.
# The root of this repository is always stored in /workspace.
# Any modifications that occur within /workspace are persisted between build steps; anything else is forgotten.
# If a build step has intermediate files that need to be persisted for a future step, use volumes.
# An example of this is the go-vol volume, which stores the pkg/ data for go mod.
# More information here: https://cloud.google.com/cloud-build/docs/build-config#build_steps
# A build step is basically a docker image that is tuned for Cloud Build,
# https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/go
steps:
- id: 'Docker Image: open-match-build'
name: gcr.io/kaniko-project/executor
args: ['--destination=gcr.io/$PROJECT_ID/open-match-build', '--cache=true', '--cache-ttl=48h', '--dockerfile=Dockerfile.ci', '.']
waitFor: ['-']
- id: 'Build: Clean'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'clean']
waitFor: ['Docker Image: open-match-build']
- id: 'Test: Markdown'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'md-test']
waitFor: ['Build: Clean']
- id: 'Setup: Download Dependencies'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'sync-deps']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Clean']
- id: 'Build: Install Kubernetes Tools'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'install-kubernetes-tools']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Clean']
- id: 'Build: Install Toolchain'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'install-toolchain']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Setup: Download Dependencies']
- id: 'Build: Assets'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'assets', '-j12']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Install Toolchain']
- id: 'Build: Binaries'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GOPROXY=off', 'build', 'all', '-j12']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Assets']
- id: 'Test: Services'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GOPROXY=off', 'GOLANG_TEST_COUNT=10', 'test']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Assets']
- id: 'Build: Docker Images'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', '_GCB_POST_SUBMIT=${_GCB_POST_SUBMIT}', '_GCB_LATEST_VERSION=${_GCB_LATEST_VERSION}', 'SHORT_SHA=${SHORT_SHA}', 'BRANCH_NAME=${BRANCH_NAME}', 'push-images', '-j8']
waitFor: ['Build: Assets']
- id: 'Build: Deployment Configs'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'VERSION_SUFFIX=$SHORT_SHA', 'clean-install-yaml', 'install/yaml/']
waitFor: ['Build: Install Toolchain']
- id: 'Lint: Format, Vet, Charts'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'lint']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Assets', 'Build: Deployment Configs']
- id: 'Test: Terraform Configuration'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'terraform-test']
waitFor: ['Build: Install Toolchain']
- id: 'Test: Create Cluster'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'SHORT_SHA=${SHORT_SHA}', 'delete-gke-cluster', 'create-gke-cluster', 'push-helm']
waitFor: ['Build: Install Kubernetes Tools']
- id: 'Test: Deploy Open Match'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'SHORT_SHA=${SHORT_SHA}', 'install-ci-chart']
waitFor: ['Test: Create Cluster', 'Build: Docker Images']
- id: 'Test: End-to-End Cluster'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GOPROXY=off', 'SHORT_SHA=${SHORT_SHA}', 'test-e2e-cluster']
waitFor: ['Test: Deploy Open Match', 'Build: Assets']
volumes:
- name: 'go-vol'
path: '/go'
- id: 'Test: Delete Cluster'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'SHORT_SHA=${SHORT_SHA}', 'GCLOUD_EXTRA_FLAGS=--async', 'GCP_PROJECT_ID=${PROJECT_ID}', 'ci-reap-clusters', 'delete-gke-cluster']
waitFor: ['Test: End-to-End Cluster']
- id: 'Deploy: Deployment Configs'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', '_GCB_POST_SUBMIT=${_GCB_POST_SUBMIT}', '_GCB_LATEST_VERSION=${_GCB_LATEST_VERSION}', 'VERSION_SUFFIX=${SHORT_SHA}', 'BRANCH_NAME=${BRANCH_NAME}', 'ci-deploy-artifacts']
waitFor: ['Lint: Format, Vet, Charts', 'Test: Deploy Open Match']
volumes:
- name: 'go-vol'
path: '/go'
artifacts:
objects:
location: gs://open-match-build-artifacts/output/
paths:
- cmd/backend/backend
- cmd/frontend/frontend
- cmd/mmlogic/mmlogic
- cmd/synchronizer/synchronizer
- cmd/minimatch/minimatch
- cmd/swaggerui/swaggerui
- install/yaml/install.yaml
- install/yaml/install-demo.yaml
- install/yaml/01-redis-chart.yaml
- install/yaml/02-open-match.yaml
- install/yaml/03-prometheus-chart.yaml
- install/yaml/04-grafana-chart.yaml
- install/yaml/05-jaeger-chart.yaml
- examples/functions/golang/soloduel/soloduel
- examples/functions/golang/pool/pool
- examples/evaluator/golang/simple/simple
- tools/certgen/certgen
- tools/reaper/reaper
images:
- 'gcr.io/$PROJECT_ID/openmatch-backend:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-frontend:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-mmlogic:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-synchronizer:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-minimatch:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-demo:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-mmf-go-soloduel:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-mmf-go-pool:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-evaluator-go-simple:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-swaggerui:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-reaper:${_OM_VERSION}-${SHORT_SHA}'
substitutions:
_OM_VERSION: "0.6.0-rc.1"
_GCB_POST_SUBMIT: "0"
_GCB_LATEST_VERSION: "undefined"
logsBucket: 'gs://open-match-build-logs/'
options:
sourceProvenanceHash: ['SHA256']
machineType: 'N1_HIGHCPU_32'
timeout: 2500s

@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-base:dev',
'-f', 'Dockerfile.base',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-base:dev']

@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-php-mmlogic-simple',
'-f', 'Dockerfile.mmf_php',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-php-mmlogic-simple']

@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-py3-mmlogic-simple:dev',
'-f', 'Dockerfile.mmf_py3',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-py3-mmlogic-simple:dev']

cmd/backend/Dockerfile

@ -0,0 +1,56 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/backend/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/backend/backend /app/
ENTRYPOINT ["/app/backend"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Open Match Backend API"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_DATE}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/googleforgames/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="1" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/googleforgames/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"

cmd/backend/backend.go

@ -0,0 +1,24 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the backend service for Open Match.
package main
import (
"open-match.dev/open-match/internal/app/backend"
)
func main() {
backend.RunApplication()
}

@ -1,10 +0,0 @@
# Golang application builder steps
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi/backendapi .
ENTRYPOINT ["./backendapi"]

@ -1,447 +0,0 @@
/*
package apisrv provides an implementation of the gRPC server defined in
../../../api/protobuf-spec/backend.proto
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"context"
"errors"
"fmt"
"net"
"time"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
backend "github.com/GoogleCloudPlatform/open-match/internal/pb"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/ignorelist"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/redispb"
"github.com/gogo/protobuf/jsonpb"
"github.com/gogo/protobuf/proto"
log "github.com/sirupsen/logrus"
"go.opencensus.io/plugin/ocgrpc"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
"github.com/tidwall/gjson"
"github.com/gomodule/redigo/redis"
"github.com/rs/xid"
"github.com/spf13/viper"
"google.golang.org/grpc"
)
// Logrus structured logging setup
var (
beLogFields = log.Fields{
"app": "openmatch",
"component": "backend",
}
beLog = log.WithFields(beLogFields)
)
// BackendAPI implements backend API Server, the server generated by compiling
// the protobuf, by fulfilling the API Client interface.
type BackendAPI struct {
grpc *grpc.Server
cfg *viper.Viper
pool *redis.Pool
}
type backendAPI BackendAPI
// New returns an instantiated service
func New(cfg *viper.Viper, pool *redis.Pool) *BackendAPI {
s := BackendAPI{
pool: pool,
grpc: grpc.NewServer(grpc.StatsHandler(&ocgrpc.ServerHandler{})),
cfg: cfg,
}
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(BeLogLines, KeySeverity))
backend.RegisterBackendServer(s.grpc, (*backendAPI)(&s))
beLog.Info("Successfully registered gRPC server")
return &s
}
// Open starts the API gRPC service listening on the configured port.
func (s *BackendAPI) Open() error {
ln, err := net.Listen("tcp", ":"+s.cfg.GetString("api.backend.port"))
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"port": s.cfg.GetInt("api.backend.port"),
}).Error("net.Listen() error")
return err
}
beLog.WithFields(log.Fields{"port": s.cfg.GetInt("api.backend.port")}).Info("TCP net listener initialized")
go func() {
err := s.grpc.Serve(ln)
if err != nil {
beLog.WithFields(log.Fields{"error": err.Error()}).Error("gRPC serve() error")
}
beLog.Info("serving gRPC endpoints")
}()
return nil
}
// CreateMatch is this service's implementation of the CreateMatch gRPC method
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject) (*backend.MatchObject, error) {
// Get a cancel-able context
ctx, cancel := context.WithCancel(c)
defer cancel()
// Create context for tagging OpenCensus metrics.
funcName := "CreateMatch"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Generate a request to fill the profile. Make a unique request ID.
moID := xid.New().String()
requestKey := moID + "." + profile.Id
/*
// Debugging logs
beLog.Info("Pools nil? ", (profile.Pools == nil))
beLog.Info("Pools empty? ", (len(profile.Pools) == 0))
beLog.Info("Rosters nil? ", (profile.Rosters == nil))
beLog.Info("Rosters empty? ", (len(profile.Rosters) == 0))
beLog.Info("config set for json.pools?", s.cfg.IsSet("jsonkeys.pools"))
beLog.Info("contents key?", s.cfg.GetString("jsonkeys.pools"))
beLog.Info("contents exist?", gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.pools")).Exists())
*/
// Case where no protobuf pools were passed; check if there's a JSON version in the properties.
// This is for backwards compatibility; it is recommended that you populate the protobuf's
// 'pools' field directly and pass it to CreateMatch/ListMatches.
if profile.Pools == nil && s.cfg.IsSet("jsonkeys.pools") &&
gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.pools")).Exists() {
poolsJSON := fmt.Sprintf("{\"pools\": %v}", gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.pools")).String())
ppLog := beLog.WithFields(log.Fields{"jsonkey": s.cfg.GetString("jsonkeys.pools")})
ppLog.Info("poolsJSON: ", poolsJSON)
ppools := &backend.MatchObject{}
err := jsonpb.UnmarshalString(poolsJSON, ppools)
if err != nil {
ppLog.Error("failed to parse JSON to protobuf pools")
} else {
profile.Pools = ppools.Pools
ppLog.Info("parsed JSON to protobuf pools")
}
}
// Case where no protobuf rosters were passed; check if there's a JSON version in the properties.
// This is for backwards compatibility; it is recommended that you populate the
// protobuf's 'rosters' field directly and pass it to CreateMatch/ListMatches.
if profile.Rosters == nil && s.cfg.IsSet("jsonkeys.rosters") &&
gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.rosters")).Exists() {
rostersJSON := fmt.Sprintf("{\"rosters\": %v}", gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.rosters")).String())
rLog := beLog.WithFields(log.Fields{"jsonkey": s.cfg.GetString("jsonkeys.rosters")})
prosters := &backend.MatchObject{}
err := jsonpb.UnmarshalString(rostersJSON, prosters)
if err != nil {
rLog.Error("failed to parse JSON to protobuf rosters")
} else {
profile.Rosters = prosters.Rosters
rLog.Info("parsed JSON to protobuf rosters")
}
}
// Add fields for all subsequent logging
beLog = beLog.WithFields(log.Fields{
"profileID": profile.Id,
"func": funcName,
"matchObjectID": moID,
"requestKey": requestKey,
})
beLog.Info("gRPC call executing")
beLog.Info("profile is")
beLog.Info(profile)
// Write profile to state storage
err := redispb.MarshalToRedis(ctx, s.pool, profile, s.cfg.GetInt("redis.expirations.matchobject"))
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage failure to create match profile")
// Failure! Return empty match object and the error
stats.Record(fnCtx, BeGrpcErrors.M(1))
return &backend.MatchObject{}, err
}
beLog.Info("Profile written to state storage")
// Queue the request ID to be sent to an MMF
_, err = redisHelpers.Update(ctx, s.pool, s.cfg.GetString("queues.profiles.name"), requestKey)
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage failure to queue profile")
// Failure! Return empty match object and the error
stats.Record(fnCtx, BeGrpcErrors.M(1))
return &backend.MatchObject{}, err
}
beLog.Info("Profile added to processing queue")
// Get and return the match object; it will be written to the requestKey when the MMF has finished.
var ok bool
newMO := backend.MatchObject{Id: requestKey}
watchChan := redispb.Watcher(ctx, s.pool, newMO) // Watcher() runs the appropriate Redis commands.
errString := "Error retrieving matchmaking results from state storage"
timeout := time.Duration(s.cfg.GetInt("api.backend.timeout")) * time.Second
select {
case <-time.After(timeout):
// TODO:Timeout: deal with the fallout. There are some edge cases here.
// When there is a timeout, need to send a stop to the watch channel.
stats.Record(fnCtx, BeGrpcRequests.M(1))
return profile, errors.New(errString + ": timeout exceeded")
case newMO, ok = <-watchChan:
if !ok {
// ok is false if watchChan has been closed by redispb.Watcher()
newMO.Error = newMO.Error + "; channel closed - was the context cancelled?"
} else {
// 'ok' was true, so properties should contain the results from redis.
// Do basic error checking on the returned JSON
if !gjson.Valid(newMO.Properties) {
newMO.Error = "retrieved properties json was malformed"
}
}
// TODO test that this is the correct condition for an empty error.
if newMO.Error != "" {
stats.Record(fnCtx, BeGrpcErrors.M(1))
return &newMO, errors.New(newMO.Error)
}
// Got results; close the channel so the Watcher() function stops querying redis.
}
beLog.Info("Matchmaking results received, returning to backend client")
stats.Record(fnCtx, BeGrpcRequests.M(1))
return &newMO, err
}
// ListMatches is this service's implementation of the ListMatches gRPC method
// defined in api/protobuf-spec/backend.proto
// This is the streaming version of CreateMatch - continually submitting the
// profile to be filled until the requesting service ends the connection.
func (s *backendAPI) ListMatches(p *backend.MatchObject, matchStream backend.Backend_ListMatchesServer) error {
// Call CreateMatch in an infinite loop as long as the stream is open.
ctx := matchStream.Context() // https://talks.golang.org/2015/gotham-grpc.slide#30
// Create context for tagging OpenCensus metrics.
funcName := "ListMatches"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"profileID": p.Id,
}).Info("gRPC call executing. Calling CreateMatch. Looping until cancelled.")
for {
select {
case <-ctx.Done():
// Context cancelled, probably because the client cancelled their request, time to exit.
beLog.WithFields(log.Fields{
"profileID": p.Id,
}).Info("gRPC Context cancelled; client is probably finished receiving matches")
// TODO: need to make sure that in-flight matches don't get leaked here.
stats.Record(fnCtx, BeGrpcRequests.M(1))
return nil
default:
// Retrieve results from Redis
requestProfile := proto.Clone(p).(*backend.MatchObject)
/*
beLog.Debug("new profile requested!")
beLog.Debug(requestProfile)
beLog.Debug(&requestProfile)
*/
mo, err := s.CreateMatch(ctx, requestProfile)
beLog = beLog.WithFields(log.Fields{"func": funcName})
if err != nil {
beLog.WithFields(log.Fields{"error": err.Error()}).Error("Failure calling CreateMatch")
stats.Record(fnCtx, BeGrpcErrors.M(1))
return err
}
beLog.WithFields(log.Fields{"matchProperties": fmt.Sprintf("%v", mo)}).Debug("Streaming back match object")
matchStream.Send(mo)
// TODO: This should be tunable, but there should be SOME sleep here, to give a requestor a window
// to cleanly close the connection after receiving a match object when they know they don't want to
// request any more matches.
time.Sleep(2 * time.Second)
}
}
}
// DeleteMatch is this service's implementation of the DeleteMatch gRPC method
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) DeleteMatch(ctx context.Context, mo *backend.MatchObject) (*backend.Result, error) {
// Create context for tagging OpenCensus metrics.
funcName := "DeleteMatch"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"matchObjectID": mo.Id,
}).Info("gRPC call executing")
err := redisHelpers.Delete(ctx, s.pool, mo.Id)
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, BeGrpcErrors.M(1))
return &backend.Result{Success: false, Error: err.Error()}, err
}
beLog.WithFields(log.Fields{
"matchObjectID": mo.Id,
}).Info("Match Object deleted.")
stats.Record(fnCtx, BeGrpcRequests.M(1))
return &backend.Result{Success: true, Error: ""}, err
}
// CreateAssignments is this service's implementation of the CreateAssignments gRPC method
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) CreateAssignments(ctx context.Context, a *backend.Assignments) (*backend.Result, error) {
// Make a map of players and what assignments we want to send them.
playerIDs := make([]string, 0)
players := make(map[string]string, 0)
for _, roster := range a.Rosters { // Loop through all rosters
for _, player := range roster.Players { // Loop through all players in this roster
if player.Id != "" {
if player.Assignment == "" {
// No player-specific assignment, so use the default one in
// the Assignment message.
player.Assignment = a.Assignment
}
players[player.Id] = player.Assignment
beLog.Debug(fmt.Sprintf("playerid %v assignment %v", player.Id, player.Assignment))
}
}
playerIDs = append(playerIDs, getPlayerIdsFromRoster(roster)...)
}
// Create context for tagging OpenCensus metrics.
funcName := "CreateAssignments"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"numAssignments": len(players),
}).Info("gRPC call executing")
// TODO: These two calls are done in two different transactions; could be
// combined as an optimization but probably not particularly necessary
// Send the players their assignments.
err := redisHelpers.UpdateMultiFields(ctx, s.pool, players, "assignment")
// Move these players from the proposed list to the deindexed list.
ignorelist.Move(ctx, s.pool, playerIDs, "proposed", "deindexed")
// Issue encountered
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, BeGrpcErrors.M(1))
stats.Record(fnCtx, BeAssignmentFailures.M(int64(len(players))))
return &backend.Result{Success: false, Error: err.Error()}, err
}
// Success!
beLog.WithFields(log.Fields{
"numPlayers": len(players),
}).Info("Assignments complete")
stats.Record(fnCtx, BeGrpcRequests.M(1))
stats.Record(fnCtx, BeAssignments.M(int64(len(players))))
return &backend.Result{Success: true, Error: ""}, err
}
// DeleteAssignments is this service's implementation of the DeleteAssignments gRPC method
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) DeleteAssignments(ctx context.Context, r *backend.Roster) (*backend.Result, error) {
assignments := getPlayerIdsFromRoster(r)
// Create context for tagging OpenCensus metrics.
funcName := "DeleteAssignments"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"numAssignments": len(assignments),
}).Info("gRPC call executing")
err := redisHelpers.DeleteMultiFields(ctx, s.pool, assignments, "assignment")
// Issue encountered
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, BeGrpcErrors.M(1))
stats.Record(fnCtx, BeAssignmentDeletionFailures.M(int64(len(assignments))))
return &backend.Result{Success: false, Error: err.Error()}, err
}
// Success!
stats.Record(fnCtx, BeGrpcRequests.M(1))
stats.Record(fnCtx, BeAssignmentDeletions.M(int64(len(assignments))))
return &backend.Result{Success: true, Error: ""}, err
}
// getPlayerIdsFromRoster returns the slice of player ID strings contained in
// the input roster.
func getPlayerIdsFromRoster(r *backend.Roster) []string {
playerIDs := make([]string, 0)
for _, p := range r.Players {
playerIDs = append(playerIDs, p.Id)
}
return playerIDs
}

@ -1,178 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"go.opencensus.io/stats"
"go.opencensus.io/stats/view"
"go.opencensus.io/tag"
)
// OpenCensus Measures. These are exported as metrics to your monitoring system
// https://godoc.org/go.opencensus.io/stats
//
// When making opencensus stats, the 'name' param, with forward slashes changed
// to underscores, is appended to the 'namespace' value passed to the
// prometheus exporter to become the Prometheus metric name. You can also look
// into having Prometheus rewrite your metric names on scrape.
//
// For example:
// - defining the prometheus export namespace "open_match" when instantiating the exporter:
// pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "open_match"})
// - and naming the request counter "backend/requests_total":
// MGrpcRequests := stats.Int64("backendapi/requests_total", ...
// - results in the prometheus metric name:
// open_match_backendapi_requests_total
// - [note] when using opencensus views to aggregate the metrics into
// distribution buckets and such, multiple metrics
// will be generated with appended types ("<metric>_bucket",
// "<metric>_count", "<metric>_sum", for example)
//
// In addition, OpenCensus stats propagated to Prometheus have the following
// auto-populated labels pulled from kubernetes, which we should avoid to
// prevent overloading and having to use the HonorLabels param in Prometheus.
//
// - Information about the k8s pod being monitored:
// "pod" (name of the monitored k8s pod)
// "namespace" (k8s namespace of the monitored pod)
// - Information about how Prometheus is gathering the metrics:
// "instance" (IP and port number being scraped by prometheus)
// "job" (name of the k8s service being scraped by prometheus)
// "endpoint" (name of the k8s port in the k8s service being scraped by prometheus)
//
var (
// API instrumentation
BeGrpcRequests = stats.Int64("backendapi/requests_total", "Number of requests to the gRPC Backend API endpoints", "1")
BeGrpcErrors = stats.Int64("backendapi/errors_total", "Number of errors generated by the gRPC Backend API endpoints", "1")
BeGrpcLatencySecs = stats.Float64("backendapi/latency_seconds", "Latency in seconds of the gRPC Backend API endpoints", "1")
// Logging instrumentation
// There's no need to record this measurement directly if you use
// the logrus hook provided in metrics/helper.go after instantiating the
// logrus instance in your application code.
// https://godoc.org/github.com/sirupsen/logrus#LevelHooks
BeLogLines = stats.Int64("backendapi/logs_total", "Number of Backend API lines logged", "1")
// Failure instrumentation
BeFailures = stats.Int64("backendapi/failures_total", "Number of Backend API failures", "1")
// Counting operations
BeAssignments = stats.Int64("backendapi/assignments_total", "Number of players assigned to matches", "1")
BeAssignmentFailures = stats.Int64("backendapi/assignment/failures_total", "Number of player match assignment failures", "1")
BeAssignmentDeletions = stats.Int64("backendapi/assignment/deletions_total", "Number of player match assignment deletions", "1")
BeAssignmentDeletionFailures = stats.Int64("backendapi/assignment/deletions/failures_total", "Number of player match assignment deletion failures", "1")
)
var (
// KeyMethod is used to tag a measure with the currently running API method.
KeyMethod, _ = tag.NewKey("method")
// KeySeverity is used to tag the severity of a log message.
KeySeverity, _ = tag.NewKey("severity")
)
var (
// Latency in buckets:
// [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
latencyDistribution = view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000)
)
// Package metrics provides some convenience views.
// You need to register the views for the data to actually be collected.
// Note: The OpenCensus View 'Description' is exported to Prometheus as the HELP string.
// Note: If you get a "Failed to export to Prometheus: inconsistent label
// cardinality" error, chances are you forgot to set the tags specified in the
// view for a given measure when you tried to do a stats.Record()
var (
BeLatencyView = &view.View{
Name: "backend/latency",
Measure: BeGrpcLatencySecs,
Description: "The distribution of backend latencies",
Aggregation: latencyDistribution,
TagKeys: []tag.Key{KeyMethod},
}
BeRequestCountView = &view.View{
Name: "backend/grpc/requests",
Measure: BeGrpcRequests,
Description: "The number of successful backend gRPC requests",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
BeErrorCountView = &view.View{
Name: "backend/grpc/errors",
Measure: BeGrpcErrors,
Description: "The number of gRPC errors",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
BeLogCountView = &view.View{
Name: "log_lines/total",
Measure: BeLogLines,
Description: "The number of lines logged",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeySeverity},
}
BeFailureCountView = &view.View{
Name: "failures",
Measure: BeFailures,
Description: "The number of failures",
Aggregation: view.Count(),
}
BeAssignmentCountView = &view.View{
Name: "backend/assignments",
Measure: BeAssignments,
Description: "The number of successful player match assignments",
Aggregation: view.Count(),
}
BeAssignmentFailureCountView = &view.View{
Name: "backend/assignments/failures",
Measure: BeAssignmentFailures,
Description: "The number of player match assignment failures",
Aggregation: view.Count(),
}
BeAssignmentDeletionCountView = &view.View{
Name: "backend/assignments/deletions",
Measure: BeAssignmentDeletions,
Description: "The number of successful player match assignment deletions",
Aggregation: view.Count(),
}
BeAssignmentDeletionFailureCountView = &view.View{
Name: "backend/assignments/deletions/failures",
Measure: BeAssignmentDeletionFailures,
Description: "The number of player match assignment deletion failures",
Aggregation: view.Count(),
}
)
// DefaultBackendAPIViews are the default backend API OpenCensus measure views.
var DefaultBackendAPIViews = []*view.View{
BeLatencyView,
BeRequestCountView,
BeErrorCountView,
BeLogCountView,
BeFailureCountView,
BeAssignmentCountView,
BeAssignmentFailureCountView,
BeAssignmentDeletionCountView,
BeAssignmentDeletionFailureCountView,
}

@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-backendapi:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-backendapi:dev']

@ -1,14 +0,0 @@
/*
BackendAPI contains the unique files required to run the API endpoints for
Open Match's backend. It is assumed you'll either integrate calls to these
endpoints directly into your dedicated game server (simple use case), or call
these endpoints from other, established services in your infrastructure (more
complicated use cases).
Note that the main package for backendapi does very little except read the
config and set up logging and metrics, then start the server. Almost all the
work is being done by backendapi/apisrv, which implements the gRPC server
defined in the backendapi/proto/backend.pb.go file.
*/
package main

@ -1,103 +0,0 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in
${OM_ROOT}/internal/pb/backend.pb.go
All the actual important bits are in the API Server source code: apisrv/apisrv.go
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"errors"
"os"
"os/signal"
"github.com/GoogleCloudPlatform/open-match/cmd/backendapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
)
var (
// Logrus structured logging setup
beLogFields = log.Fields{
"app": "openmatch",
"component": "backend",
}
beLog = log.WithFields(beLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(apisrv.BeLogLines, apisrv.KeySeverity))
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocServerViews := apisrv.DefaultBackendAPIViews // BackendAPI OpenCensus views.
ocServerViews = append(ocServerViews, ocgrpc.DefaultServerViews...) // gRPC OpenCensus views.
ocServerViews = append(ocServerViews, config.CfgVarCountView) // config loader view.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocServerViews = append(ocServerViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
beLog.WithFields(log.Fields{"viewscount": len(ocServerViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocServerViews)
}
func main() {
// Connect to redis
pool := redishelpers.ConnectionPool(cfg)
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
beLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server
err := srv.Open()
if err != nil {
beLog.WithFields(log.Fields{"error": err.Error()}).Fatal("Failed to start gRPC server")
}
// Exit when we see a signal
terminate := make(chan os.Signal, 1)
signal.Notify(terminate, os.Interrupt)
<-terminate
beLog.Info("Shutting down gRPC server")
}

@ -1 +0,0 @@
../../config/matchmaker_config.json

56
cmd/frontend/Dockerfile Normal file

@ -0,0 +1,56 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/frontend/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/frontend/frontend /app/
ENTRYPOINT ["/app/frontend"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Open Match Frontend API"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_DATE}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/googleforgames/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="1" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/googleforgames/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"

24
cmd/frontend/frontend.go Normal file

@ -0,0 +1,24 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the frontend service for Open Match.
package main
import (
"open-match.dev/open-match/internal/app/frontend"
)
func main() {
frontend.RunApplication()
}

@ -1,10 +0,0 @@
# Golang application builder steps
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/frontendapi .
ENTRYPOINT ["./frontendapi"]

@ -1,239 +0,0 @@
/*
package apisrv provides an implementation of the gRPC server defined in ../../../api/protobuf-spec/frontend.proto.
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"context"
"errors"
"net"
"time"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
frontend "github.com/GoogleCloudPlatform/open-match/internal/pb"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/playerindices"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/redispb"
log "github.com/sirupsen/logrus"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
"github.com/gomodule/redigo/redis"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
"google.golang.org/grpc"
)
// Logrus structured logging setup
var (
feLogFields = log.Fields{
"app": "openmatch",
"component": "frontend",
}
feLog = log.WithFields(feLogFields)
)
// FrontendAPI implements frontend.ApiServer, the server generated by compiling
// the protobuf, by fulfilling the frontend.APIClient interface.
type FrontendAPI struct {
grpc *grpc.Server
cfg *viper.Viper
pool *redis.Pool
}
type frontendAPI FrontendAPI
// New returns an instantiated service
func New(cfg *viper.Viper, pool *redis.Pool) *FrontendAPI {
s := FrontendAPI{
pool: pool,
grpc: grpc.NewServer(grpc.StatsHandler(&ocgrpc.ServerHandler{})),
cfg: cfg,
}
// Add a hook to the logger to auto-count log lines for metrics output thru OpenCensus
log.AddHook(metrics.NewHook(FeLogLines, KeySeverity))
// Register gRPC server
frontend.RegisterFrontendServer(s.grpc, (*frontendAPI)(&s))
feLog.Info("Successfully registered gRPC server")
return &s
}
// Open starts the API gRPC service listening on the configured port.
func (s *FrontendAPI) Open() error {
ln, err := net.Listen("tcp", ":"+s.cfg.GetString("api.frontend.port"))
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"port": s.cfg.GetInt("api.frontend.port"),
}).Error("net.Listen() error")
return err
}
feLog.WithFields(log.Fields{"port": s.cfg.GetInt("api.frontend.port")}).Info("TCP net listener initialized")
go func() {
err := s.grpc.Serve(ln)
if err != nil {
feLog.WithFields(log.Fields{"error": err.Error()}).Error("gRPC serve() error")
}
feLog.Info("serving gRPC endpoints")
}()
return nil
}
// CreatePlayer is this service's implementation of the CreatePlayer gRPC method defined in frontend.proto
func (s *frontendAPI) CreatePlayer(ctx context.Context, group *frontend.Player) (*frontend.Result, error) {
// Create context for tagging OpenCensus metrics.
funcName := "CreatePlayer"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Write group
err := redispb.MarshalToRedis(ctx, s.pool, group, s.cfg.GetInt("redis.expirations.player"))
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
// Index group
err = playerindices.Create(ctx, s.pool, s.cfg, *group)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
// Return success.
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// DeletePlayer is this service's implementation of the DeletePlayer gRPC method defined in frontend.proto
func (s *frontendAPI) DeletePlayer(ctx context.Context, group *frontend.Player) (*frontend.Result, error) {
// Create context for tagging OpenCensus metrics.
funcName := "DeletePlayer"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Deindex this player; once deindexed, MMFs can no longer find them. We can then
// delete their actual player object from Redis later.
err := playerindices.Delete(ctx, s.pool, s.cfg, group.Id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
// Kick off delete but don't wait for it to complete.
go s.deletePlayer(group.Id)
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// deletePlayer is a 'lazy' player delete.
// It should always be called as a goroutine, and only after confirmation
// that the player has been deindexed (and therefore MMFs can't find the
// player to read them anyway).
// As a final action, it also kicks off a lazy delete of the player's metadata.
func (s *frontendAPI) deletePlayer(id string) {
err := redisHelpers.Delete(context.Background(), s.pool, id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Warn("Error deleting player from state storage, this could leak state storage memory but is usually not a fatal error")
}
go playerindices.DeleteMeta(context.Background(), s.pool, id)
}
// GetUpdates is this service's implementation of the GetUpdates gRPC method defined in frontend.proto
func (s *frontendAPI) GetUpdates(p *frontend.Player, assignmentStream frontend.Frontend_GetUpdatesServer) error {
// Get cancellable context
ctx, cancel := context.WithCancel(assignmentStream.Context())
defer cancel()
// Create context for tagging OpenCensus metrics.
funcName := "GetAssignment"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// get and return connection string
watchChan := redispb.PlayerWatcher(ctx, s.pool, *p) // PlayerWatcher() runs the appropriate Redis commands.
timeoutChan := time.After(time.Duration(s.cfg.GetInt("api.frontend.timeout")) * time.Second)
for {
select {
case <-ctx.Done():
// Context cancelled
feLog.WithFields(log.Fields{
"playerid": p.Id,
}).Info("client closed connection successfully")
stats.Record(fnCtx, FeGrpcRequests.M(1))
return nil
case <-timeoutChan: // Timeout reached without client closing connection
// TODO: deal with the fallout
err := errors.New("server timeout reached without client closing connection")
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"playerid": p.Id,
}).Error("State storage error")
// Count errors for metrics
errTag, _ := tag.NewKey("errtype")
fnCtx, _ := tag.New(ctx, tag.Insert(errTag, "watch_timeout"))
stats.Record(fnCtx, FeGrpcErrors.M(1))
//TODO: we could generate a frontend.player message with an error
//field and stream it to the client before throwing the error here
//if we wanted to send more useful client retry information
return err
case a := <-watchChan:
feLog.WithFields(log.Fields{
"assignment": a.Assignment,
"playerid": a.Id,
"status": a.Status,
"error": a.Error,
}).Info("updating client")
if err := assignmentStream.Send(&a); err != nil {
feLog.WithFields(log.Fields{"error": err.Error()}).Error("stream Send() error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return err
}
stats.Record(fnCtx, FeGrpcStreamedResponses.M(1))
// Reset timeout.
timeoutChan = time.After(time.Duration(s.cfg.GetInt("api.frontend.timeout")) * time.Second)
}
}
}

@ -1,149 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"go.opencensus.io/stats"
"go.opencensus.io/stats/view"
"go.opencensus.io/tag"
)
// OpenCensus Measures. These are exported as metrics to your monitoring system
// https://godoc.org/go.opencensus.io/stats
//
// When making opencensus stats, the 'name' param, with forward slashes changed
// to underscores, is appended to the 'namespace' value passed to the
// prometheus exporter to become the Prometheus metric name. You can also look
// into having Prometheus rewrite your metric names on scrape.
//
// For example:
// - defining the prometheus export namespace "open_match" when instantiating the exporter:
// pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "open_match"})
// - and naming the request counter "frontend/requests_total":
// MGrpcRequests := stats.Int64("frontendapi/requests_total", ...
// - results in the prometheus metric name:
// open_match_frontendapi_requests_total
// - [note] when using opencensus views to aggregate the metrics into
// distribution buckets and such, multiple metrics
// will be generated with appended types ("<metric>_bucket",
// "<metric>_count", "<metric>_sum", for example)
//
// In addition, OpenCensus stats propagated to Prometheus have the following
// auto-populated labels pulled from kubernetes, which we should avoid
// setting ourselves to prevent conflicts and having to use the HonorLabels param in Prometheus.
//
// - Information about the k8s pod being monitored:
// "pod" (name of the monitored k8s pod)
// "namespace" (k8s namespace of the monitored pod)
// - Information about how Prometheus is gathering the metrics:
// "instance" (IP and port number being scraped by prometheus)
// "job" (name of the k8s service being scraped by prometheus)
// "endpoint" (name of the k8s port in the k8s service being scraped by prometheus)
//
var (
// API instrumentation
FeGrpcRequests = stats.Int64("frontendapi/requests_total", "Number of requests to the gRPC Frontend API endpoints", "1")
FeGrpcStreamedResponses = stats.Int64("frontendapi/streamed_responses_total", "Number of responses streamed back from the gRPC Frontend API endpoints", "1")
FeGrpcErrors = stats.Int64("frontendapi/errors_total", "Number of errors generated by the gRPC Frontend API endpoints", "1")
FeGrpcLatencySecs = stats.Float64("frontendapi/latency_seconds", "Latency in seconds of the gRPC Frontend API endpoints", "1")
// Logging instrumentation
// There's no need to record this measurement directly if you use
// the logrus hook provided in metrics/helper.go after instantiating the
// logrus instance in your application code.
// https://godoc.org/github.com/sirupsen/logrus#LevelHooks
FeLogLines = stats.Int64("frontendapi/logs_total", "Number of Frontend API lines logged", "1")
// Failure instrumentation
FeFailures = stats.Int64("frontendapi/failures_total", "Number of Frontend API failures", "1")
)
var (
// KeyMethod is used to tag a measure with the currently running API method.
KeyMethod, _ = tag.NewKey("method")
KeySeverity, _ = tag.NewKey("severity")
)
var (
// Latency in buckets:
// [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
latencyDistribution = view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000)
)
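The distribution bounds above partition recorded latencies into histogram buckets: a value lands in the bucket of the first bound it falls below, or in the overflow bucket past the last bound. A minimal sketch of that bucketing rule (an illustration of the semantics, not the OpenCensus library's internal implementation):

```go
package main

import "fmt"

// bucketIndex returns the index of the first distribution bound that a
// recorded value falls below; values at or above every bound land in the
// final overflow bucket. Illustrative sketch of histogram bucketing.
func bucketIndex(bounds []float64, v float64) int {
	for i, b := range bounds {
		if v < b {
			return i
		}
	}
	return len(bounds)
}

func main() {
	bounds := []float64{0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000}
	fmt.Println(bucketIndex(bounds, 120)) // falls between the 100 and 200 bounds
}
```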
// Package metrics provides some convenience views.
// You need to register the views for the data to actually be collected.
// Note: The OpenCensus View 'Description' is exported to Prometheus as the HELP string.
// Note: If you get a "Failed to export to Prometheus: inconsistent label
// cardinality" error, chances are you forgot to set the tags specified in the
// view for a given measure when you tried to do a stats.Record()
var (
FeLatencyView = &view.View{
Name: "frontend/latency",
Measure: FeGrpcLatencySecs,
Description: "The distribution of frontend latencies",
Aggregation: latencyDistribution,
TagKeys: []tag.Key{KeyMethod},
}
FeRequestCountView = &view.View{
Name: "frontend/grpc/requests",
Measure: FeGrpcRequests,
Description: "The number of successful frontend gRPC requests",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
FeStreamedResponseCountView = &view.View{
Name: "frontend/grpc/streamed_responses",
Measure: FeGrpcStreamedResponses,
Description: "The number of successful streamed gRPC responses",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
FeErrorCountView = &view.View{
Name: "frontend/grpc/errors",
Measure: FeGrpcErrors,
Description: "The number of gRPC errors",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
FeLogCountView = &view.View{
Name: "log_lines/total",
Measure: FeLogLines,
Description: "The number of lines logged",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeySeverity},
}
FeFailureCountView = &view.View{
Name: "failures",
Measure: FeFailures,
Description: "The number of failures",
Aggregation: view.Count(),
}
)
// DefaultFrontendAPIViews are the default frontend API OpenCensus measure views.
var DefaultFrontendAPIViews = []*view.View{
FeLatencyView,
FeRequestCountView,
FeStreamedResponseCountView,
FeErrorCountView,
FeLogCountView,
FeFailureCountView,
}
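The comment block at the top of this file describes how an OpenCensus measure name becomes a Prometheus metric name: the exporter namespace is prepended and forward slashes become underscores. That naming rule can be sketched as a small helper (illustrative only, not part of the exporter's API):

```go
package main

import (
	"fmt"
	"strings"
)

// promMetricName mimics the naming rule described above: the exporter
// namespace, an underscore, then the measure name with '/' replaced by '_'.
func promMetricName(namespace, measureName string) string {
	return namespace + "_" + strings.ReplaceAll(measureName, "/", "_")
}

func main() {
	// "frontendapi/requests_total" under the "open_match" namespace
	// becomes "open_match_frontendapi_requests_total".
	fmt.Println(promMetricName("open_match", "frontendapi/requests_total"))
}
```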

@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-frontendapi:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-frontendapi:dev']

@ -1,14 +0,0 @@
/*
FrontendAPI contains the unique files required to run the API endpoints for
Open Match's frontend. It is assumed you'll either integrate calls to these
endpoints directly into your game client (simple use case), or call these
endpoints from other, established platform services in your infrastructure
(more complicated use cases).
Note that the main package for frontendapi does very little except read the
config and set up logging and metrics, then start the server. Almost all the
work is being done by frontendapi/apisrv, which implements the gRPC server
defined in the frontendapi/proto/frontend.pb.go file.
*/
package main

@ -1,105 +0,0 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in
${OM_ROOT}/internal/pb/frontend.pb.go
All the actual important bits are in the API Server source code: apisrv/apisrv.go
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"errors"
"os"
"os/signal"
"github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
)
var (
// Logrus structured logging setup
feLogFields = log.Fields{
"app": "openmatch",
"component": "frontend",
}
feLog = log.WithFields(feLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(apisrv.FeLogLines, apisrv.KeySeverity))
// Add a hook to the logger to log the filename & line number.
log.SetReportCaller(true)
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocServerViews := apisrv.DefaultFrontendAPIViews // FrontendAPI OpenCensus views.
ocServerViews = append(ocServerViews, ocgrpc.DefaultServerViews...) // gRPC OpenCensus views.
ocServerViews = append(ocServerViews, config.CfgVarCountView) // config loader view.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocServerViews = append(ocServerViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
feLog.WithFields(log.Fields{"viewscount": len(ocServerViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocServerViews)
}
func main() {
// Connect to redis
pool := redishelpers.ConnectionPool(cfg)
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
feLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server
err := srv.Open()
if err != nil {
feLog.WithFields(log.Fields{"error": err.Error()}).Fatal("Failed to start gRPC server")
}
// Exit when we see a signal
terminate := make(chan os.Signal, 1)
signal.Notify(terminate, os.Interrupt)
<-terminate
feLog.Info("Shutting down gRPC server")
}

@ -1 +0,0 @@
../../config/matchmaker_config.json

56
cmd/minimatch/Dockerfile Normal file

@ -0,0 +1,56 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/minimatch/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/minimatch/minimatch /app/
ENTRYPOINT ["/app/minimatch"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Mini Match"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_DATE}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/googleforgames/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="${VCS_REF}" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/googleforgames/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"

24
cmd/minimatch/main.go Normal file

@ -0,0 +1,24 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the minimatch in-process testing binary for Open Match.
package main
import (
"open-match.dev/open-match/internal/app/minimatch"
)
func main() {
minimatch.RunApplication()
}

@ -1,21 +0,0 @@
# Golang application builder steps
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
# Necessary to get a specific version of the golang k8s client
RUN go get github.com/tools/godep
RUN go get k8s.io/client-go/...
WORKDIR /go/src/k8s.io/client-go
RUN git checkout v7.0.0
RUN godep restore ./...
RUN rm -rf vendor/
RUN rm -rf /go/src/github.com/golang/protobuf/
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmforc/
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
# Uncomment to build production images (removes all troubleshooting tools)
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmforc/mmforc .
CMD ["./mmforc"]

@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmforc:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmforc:dev']

@ -1,404 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Note: the example only works with the code within the same release/branch.
// This is based on the example from the official k8s golang client repository:
// k8s.io/client-go/examples/create-update-delete-deployment/
package main
import (
"context"
"errors"
"os"
"strconv"
"strings"
"time"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/tidwall/gjson"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
"github.com/gomodule/redigo/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
batchv1 "k8s.io/api/batch/v1"
apiv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
//"k8s.io/kubernetes/pkg/api"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
// Uncomment the following line to load the gcp plugin (only required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
)
var (
// Logrus structured logging setup
mmforcLogFields = log.Fields{
"app": "openmatch",
"component": "mmforc",
}
mmforcLog = log.WithFields(mmforcLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(MmforcLogLines, KeySeverity))
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocMmforcViews := DefaultMmforcViews // mmforc OpenCensus views.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocMmforcViews = append(ocMmforcViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
mmforcLog.WithFields(log.Fields{"viewscount": len(ocMmforcViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocMmforcViews)
}
func main() {
pool := redisHelpers.ConnectionPool(cfg)
redisConn := pool.Get()
defer redisConn.Close()
// Get k8s credentials so we can start k8s Jobs
mmforcLog.Info("Attempting to acquire k8s credentials")
config, err := rest.InClusterConfig()
if err != nil {
panic(err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err)
}
mmforcLog.Info("K8s credentials acquired")
start := time.Now()
checkProposals := true
// main loop; kick off matchmaker functions for profiles in the profile
// queue and an evaluator when proposals are in the proposals queue
for {
ctx, cancel := context.WithCancel(context.Background())
_ = cancel
// Get profiles and kick off a job for each
mmforcLog.WithFields(log.Fields{
"profileQueueName": cfg.GetString("queues.profiles.name"),
"pullCount": cfg.GetInt("queues.profiles.pullCount"),
"query": "SPOP",
"component": "statestorage",
}).Debug("Retreiving match profiles")
results, err := redis.Strings(redisConn.Do("SPOP",
cfg.GetString("queues.profiles.name"), cfg.GetInt("queues.profiles.pullCount")))
if err != nil {
panic(err)
}
if len(results) > 0 {
mmforcLog.WithFields(log.Fields{
"numProfiles": len(results),
}).Info("Starting MMF jobs...")
for _, profile := range results {
// Kick off the job asynchronously
go mmfunc(ctx, profile, cfg, clientset, pool)
// Count the number of jobs running
redisHelpers.Increment(context.Background(), pool, "concurrentMMFs")
}
} else {
mmforcLog.WithFields(log.Fields{
"profileQueueName": cfg.GetString("queues.profiles.name"),
}).Info("Unable to retreive match profiles from statestorage - have you entered any?")
}
// Check to see if we should run the evaluator.
// Get number of running MMFs
r, err := redisHelpers.Retrieve(context.Background(), pool, "concurrentMMFs")
if err != nil {
if err.Error() == "redigo: nil returned" {
// No MMFs have run since we last evaluated; reset timer and loop
mmforcLog.Debug("Number of concurrentMMFs is nil")
start = time.Now()
time.Sleep(1000 * time.Millisecond)
}
continue
}
numRunning, err := strconv.Atoi(r)
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Issue retrieving number of currently running MMFs")
}
// We are ready to evaluate either when all MMFs are complete, or the
// timeout is reached.
//
// Tuning how frequently the evaluator runs is a complex topic and
// probably only of interest to users running large-scale production
// workloads with many concurrently running matchmaking functions
// that have some overlap in their matchmaking player pools. Suffice it
// to say that under load, this switch should almost always trigger the
// timeout-interval code path. The concurrentMMFs check of how many are
// still running is meant as a dead man's switch to prevent waiting to
// run the evaluator when all your MMFs are already finished.
switch {
case time.Since(start).Seconds() >= float64(cfg.GetInt("evaluator.interval")):
mmforcLog.WithFields(log.Fields{
"interval": cfg.GetInt("evaluator.interval"),
}).Info("Maximum evaluator interval exceeded")
checkProposals = true
// Opencensus tagging
ctx, _ = tag.New(ctx, tag.Insert(KeyEvalReason, "interval_exceeded"))
case numRunning <= 0:
mmforcLog.Info("All MMFs complete")
checkProposals = true
numRunning = 0
ctx, _ = tag.New(ctx, tag.Insert(KeyEvalReason, "mmfs_completed"))
}
if checkProposals {
// Make sure there are proposals in the queue. No need to run the
// evaluator if there are none.
checkProposals = false
mmforcLog.Info("Checking statestorage for match object proposals")
results, err := redisHelpers.Count(context.Background(), pool, cfg.GetString("queues.proposals.name"))
switch {
case err != nil:
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Couldn't retrieve the length of the proposal queue from statestorage!")
case results == 0:
mmforcLog.WithFields(log.Fields{}).Warn("No proposals in the queue!")
default:
mmforcLog.WithFields(log.Fields{
"numProposals": results,
}).Info("Proposals available, evaluating!")
go evaluator(ctx, cfg, clientset)
}
err = redisHelpers.Delete(context.Background(), pool, "concurrentMMFs")
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Error deleting concurrent MMF counter!")
}
start = time.Now()
}
// TODO: Make this tunable via config.
// A sleep here is not critical but just a useful safety valve in case
// things are broken, to keep the main loop from going all-out and spamming the log.
mainSleep := 1000
mmforcLog.WithFields(log.Fields{
"ms": mainSleep,
}).Info("Sleeping...")
time.Sleep(time.Duration(mainSleep) * time.Millisecond)
} // End main for loop
}
// mmfunc generates a k8s job that runs the specified mmf container image.
// resultsID is the redis key that the Backend API is monitoring for results; we can 'short circuit' and write errors directly to this key if we can't run the MMF for some reason.
func mmfunc(ctx context.Context, resultsID string, cfg *viper.Viper, clientset *kubernetes.Clientset, pool *redis.Pool) {
// Generate the various keys/names, some of which must be populated to the k8s job.
imageName := cfg.GetString("defaultImages.mmf.name") + ":" + cfg.GetString("defaultImages.mmf.tag")
jobType := "mmf"
ids := strings.Split(resultsID, ".") // comes in as dot-concatinated moID and profID.
moID := ids[0]
profID := ids[1]
timestamp := strconv.Itoa(int(time.Now().Unix()))
jobName := timestamp + "." + moID + "." + profID + "." + jobType
propID := "proposal." + timestamp + "." + moID + "." + profID
// Extra fields for structured logging
lf := log.Fields{"jobName": jobName}
if cfg.GetBool("debug") { // Log a lot more info.
lf = log.Fields{
"jobType": jobType,
"backendMatchObject": moID,
"profile": profID,
"jobTimestamp": timestamp,
"containerImage": imageName,
"jobName": jobName,
"profileImageJSONKey": cfg.GetString("jsonkeys.mmfImage"),
}
}
mmfuncLog := mmforcLog.WithFields(lf)
// Read the full profile from redis and access any keys that are important to deciding how MMFs are run.
// TODO: convert this to using redispb and directly access the protobuf message instead of retrieving as a map?
profile, err := redisHelpers.RetrieveAll(ctx, pool, profID)
if err != nil {
// Log failure to read this profile and return - won't run an MMF for an unreadable profile.
mmfuncLog.WithFields(log.Fields{"error": err.Error()}).Error("Failure retreiving profile from statestorage")
return
}
// Got profile from state storage, make sure it is valid
if gjson.Valid(profile["properties"]) {
profileImage := gjson.Get(profile["properties"], cfg.GetString("jsonkeys.mmfImage"))
if profileImage.Exists() {
imageName = profileImage.String()
mmfuncLog = mmfuncLog.WithFields(log.Fields{"containerImage": imageName})
} else {
mmfuncLog.Warn("Failed to read image name from profile at configured json key, using default image instead")
}
}
mmfuncLog.Info("Attempting to create mmf k8s job")
// Kick off k8s job
envvars := []apiv1.EnvVar{
{Name: "MMF_PROFILE_ID", Value: profID},
{Name: "MMF_PROPOSAL_ID", Value: propID},
{Name: "MMF_REQUEST_ID", Value: moID},
{Name: "MMF_ERROR_ID", Value: resultsID},
{Name: "MMF_TIMESTAMP", Value: timestamp},
}
err = submitJob(clientset, jobType, jobName, imageName, envvars)
if err != nil {
// Record failure & log
stats.Record(ctx, mmforcMmfFailures.M(1))
mmfuncLog.WithFields(log.Fields{"error": err.Error()}).Error("MMF job submission failure!")
} else {
// Record Success
stats.Record(ctx, mmforcMmfs.M(1))
}
}
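mmfunc derives the k8s job name and the proposal key from the incoming resultsID plus a unix timestamp. A self-contained sketch of that naming scheme (a hypothetical helper, not the exported API; the IDs and timestamp below are made up):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// jobNames derives the k8s job name and proposal key the way mmfunc does:
// resultsID arrives as "<moID>.<profID>", and both derived names embed a
// unix timestamp so repeated runs for the same profile stay distinct.
func jobNames(resultsID string, now int64) (jobName, propID string, err error) {
	ids := strings.Split(resultsID, ".")
	if len(ids) < 2 {
		return "", "", fmt.Errorf("malformed resultsID %q; expected '<moID>.<profID>'", resultsID)
	}
	moID, profID := ids[0], ids[1]
	ts := strconv.FormatInt(now, 10)
	jobName = ts + "." + moID + "." + profID + ".mmf"
	propID = "proposal." + ts + "." + moID + "." + profID
	return jobName, propID, nil
}

func main() {
	j, p, err := jobNames("mo123.prof456", 1562800000)
	if err != nil {
		panic(err)
	}
	fmt.Println(j)
	fmt.Println(p)
}
```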
// evaluator generates a k8s job that runs the specified evaluator container image.
func evaluator(ctx context.Context, cfg *viper.Viper, clientset *kubernetes.Clientset) {
imageName := cfg.GetString("defaultImages.evaluator.name") + ":" + cfg.GetString("defaultImages.evaluator.tag")
// Generate the job name
timestamp := strconv.Itoa(int(time.Now().Unix()))
jobType := "evaluator"
jobName := timestamp + "." + jobType
mmforcLog.WithFields(log.Fields{
"jobName": jobName,
"containerImage": imageName,
}).Info("Attempting to create evaluator k8s job")
// Kick off k8s job
envvars := []apiv1.EnvVar{{Name: "MMF_TIMESTAMP", Value: timestamp}}
err = submitJob(clientset, jobType, jobName, imageName, envvars)
if err != nil {
// Record failure & log
stats.Record(ctx, mmforcEvalFailures.M(1))
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
"jobName": jobName,
"containerImage": imageName,
}).Error("Evaluator job submission failure!")
} else {
// Record success
stats.Record(ctx, mmforcEvals.M(1))
}
}
// submitJob submits a job to kubernetes
func submitJob(clientset *kubernetes.Clientset, jobType string, jobName string, imageName string, envvars []apiv1.EnvVar) error {
// DEPRECATED: will be removed in a future version. Please switch to using the 'MMF_*' environment variables.
v := strings.Split(jobName, ".")
envvars = append(envvars, apiv1.EnvVar{Name: "PROFILE", Value: strings.Join(v[:len(v)-1], ".")})
job := &batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
},
Spec: batchv1.JobSpec{
Completions: int32Ptr(1),
Template: apiv1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"app": jobType,
},
Annotations: map[string]string{
// Unused; here as an example.
// Later we can put things more complicated than
// env vars here and read them using k8s downward API
// volumes
"profile": jobName,
},
},
Spec: apiv1.PodSpec{
RestartPolicy: "Never",
Containers: []apiv1.Container{
{
Name: jobType,
Image: imageName,
ImagePullPolicy: "Always",
Env: envvars,
},
},
},
},
},
}
// Get the namespace for the job from the current namespace, otherwise, use default
namespace := os.Getenv("METADATA_NAMESPACE")
if len(namespace) == 0 {
namespace = apiv1.NamespaceDefault
}
// Submit kubernetes job
jobsClient := clientset.BatchV1().Jobs(namespace)
result, err := jobsClient.Create(job)
if err != nil {
// TODO: replace queued profiles if things go south
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Couldn't create k8s job!")
return err
}
mmforcLog.WithFields(log.Fields{
"jobName": result.GetObjectMeta().GetName(),
}).Info("Created job.")
return nil
}
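The deprecated PROFILE value computed at the top of submitJob is just the job name with its trailing ".<jobType>" segment removed. In isolation (an illustrative helper, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// legacyProfileEnv reproduces the deprecated PROFILE env var value from
// submitJob: split the job name on '.', drop the final jobType segment,
// and rejoin the rest.
func legacyProfileEnv(jobName string) string {
	v := strings.Split(jobName, ".")
	return strings.Join(v[:len(v)-1], ".")
}

func main() {
	// A job named "<timestamp>.<moID>.<profID>.mmf" yields
	// "<timestamp>.<moID>.<profID>" as the legacy PROFILE value.
	fmt.Println(legacyProfileEnv("1562800000.mo123.prof456.mmf"))
}
```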
// readability helpers used by submitJob
func int32Ptr(i int32) *int32 { return &i }
func strPtr(i string) *string { return &i }

@@ -1 +0,0 @@
../../config/matchmaker_config.json

@@ -1,128 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"go.opencensus.io/stats"
"go.opencensus.io/stats/view"
"go.opencensus.io/tag"
)
// OpenCensus Measures. These are exported as metrics to your monitoring system
// https://godoc.org/go.opencensus.io/stats
//
// When making opencensus stats, the 'name' param, with forward slashes changed
// to underscores, is appended to the 'namespace' value passed to the
// prometheus exporter to become the Prometheus metric name. You can also look
// into having Prometheus rewrite your metric names on scrape.
//
// For example:
// - defining the prometheus export namespace "open_match" when instantiating the exporter:
// pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "open_match"})
// - and naming the request counter "backend/requests_total":
// MGrpcRequests := stats.Int64("backendapi/requests_total", ...
// - results in the prometheus metric name:
// open_match_backendapi_requests_total
// - [note] when using opencensus views to aggregate the metrics into
// distribution buckets and such, multiple metrics
// will be generated with appended types ("<metric>_bucket",
// "<metric>_count", "<metric>_sum", for example)
//
// In addition, OpenCensus stats propagated to Prometheus have the following
// auto-populated labels pulled from kubernetes, which we should avoid reusing
// to prevent label clashes and having to use the HonorLabels param in Prometheus.
//
// - Information about the k8s pod being monitored:
// "pod" (name of the monitored k8s pod)
// "namespace" (k8s namespace of the monitored pod)
// - Information about how prometheus is gathering the metrics:
// "instance" (IP and port number being scraped by prometheus)
// "job" (name of the k8s service being scraped by prometheus)
// "endpoint" (name of the k8s port in the k8s service being scraped by prometheus)
//
var (
// Logging instrumentation
// There's no need to record this measurement directly if you use
// the logrus hook provided in metrics/helper.go after instantiating the
// logrus instance in your application code.
// https://godoc.org/github.com/sirupsen/logrus#LevelHooks
MmforcLogLines = stats.Int64("mmforc/logs_total", "Number of Backend API lines logged", "1")
// Counting operations
mmforcMmfs = stats.Int64("mmforc/mmfs_total", "Number of mmf jobs submitted to kubernetes", "1")
mmforcMmfFailures = stats.Int64("mmforc/mmf/failures_total", "Number of failures attempting to submit mmf jobs to kubernetes", "1")
mmforcEvals = stats.Int64("mmforc/evaluators_total", "Number of evaluator jobs submitted to kubernetes", "1")
mmforcEvalFailures = stats.Int64("mmforc/evaluator/failures_total", "Number of failures attempting to submit evaluator jobs to kubernetes", "1")
)
var (
// KeyEvalReason is used to tag which code path caused the evaluator to run.
KeyEvalReason, _ = tag.NewKey("evalReason")
// KeySeverity is used to tag the severity of a log message.
KeySeverity, _ = tag.NewKey("severity")
)
var (
// Latency in buckets:
// [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
latencyDistribution = view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000)
)
// Package metrics provides some convenience views.
// You need to register the views for the data to actually be collected.
// Note: The OpenCensus View 'Description' is exported to Prometheus as the HELP string.
// Note: If you get a "Failed to export to Prometheus: inconsistent label
// cardinality" error, chances are you forgot to set the tags specified in the
// view for a given measure when you tried to do a stats.Record()
var (
mmforcMmfsCountView = &view.View{
Name: "mmforc/mmfs",
Measure: mmforcMmfs,
Description: "The number of mmf jobs submitted to kubernetes",
Aggregation: view.Count(),
}
mmforcMmfFailuresCountView = &view.View{
Name: "mmforc/mmf/failures",
Measure: mmforcMmfFailures,
Description: "The number of mmf jobs that failed submission to kubernetes",
Aggregation: view.Count(),
}
mmforcEvalsCountView = &view.View{
Name: "mmforc/evaluators",
Measure: mmforcEvals,
Description: "The number of evaluator jobs submitted to kubernetes",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyEvalReason},
}
mmforcEvalFailuresCountView = &view.View{
Name: "mmforc/evaluator/failures",
Measure: mmforcEvalFailures,
Description: "The number of evaluator jobs that failed submission to kubernetes",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyEvalReason},
}
)
// DefaultMmforcViews are the default matchmaker orchestrator OpenCensus measure views.
var DefaultMmforcViews = []*view.View{
mmforcEvalsCountView,
mmforcMmfFailuresCountView,
mmforcMmfsCountView,
mmforcEvalFailuresCountView,
}

56
cmd/mmlogic/Dockerfile Normal file

@@ -0,0 +1,56 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/mmlogic/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/mmlogic/mmlogic /app/
ENTRYPOINT ["/app/mmlogic"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Open Match Data API"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_DATE}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/googleforgames/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="${VCS_REF}" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/googleforgames/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"

24
cmd/mmlogic/mmlogic.go Normal file

@@ -0,0 +1,24 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the mmlogic service for Open Match.
package main
import (
"open-match.dev/open-match/internal/app/mmlogic"
)
func main() {
mmlogic.RunApplication()
}

@@ -1,10 +0,0 @@
# Golang application builder steps
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmlogicapi
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/frontendapi .
ENTRYPOINT ["./mmlogicapi"]

@@ -1,596 +0,0 @@
/*
package apisrv provides an implementation of the gRPC server defined in ../../../api/protobuf-spec/mmlogic.proto.
Most of the documentation for what these calls should do is in that file!
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"context"
"errors"
"fmt"
"math"
"net"
"strconv"
"time"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
mmlogic "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/GoogleCloudPlatform/open-match/internal/set"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/ignorelist"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/redispb"
log "github.com/sirupsen/logrus"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
"github.com/gomodule/redigo/redis"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
"google.golang.org/grpc"
)
// Logrus structured logging setup
var (
mlLogFields = log.Fields{
"app": "openmatch",
"component": "mmlogic",
}
mlLog = log.WithFields(mlLogFields)
)
// MmlogicAPI implements mmlogic.ApiServer, the server generated by compiling
// the protobuf, by fulfilling the mmlogic.APIClient interface.
type MmlogicAPI struct {
grpc *grpc.Server
cfg *viper.Viper
pool *redis.Pool
}
type mmlogicAPI MmlogicAPI
// New returns an instantiated service
func New(cfg *viper.Viper, pool *redis.Pool) *MmlogicAPI {
s := MmlogicAPI{
pool: pool,
grpc: grpc.NewServer(grpc.StatsHandler(&ocgrpc.ServerHandler{})),
cfg: cfg,
}
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(MlLogLines, KeySeverity))
// Register gRPC server
mmlogic.RegisterMmLogicServer(s.grpc, (*mmlogicAPI)(&s))
mlLog.Info("Successfully registered gRPC server")
return &s
}
// Open starts the api grpc service listening on the configured port.
func (s *MmlogicAPI) Open() error {
ln, err := net.Listen("tcp", ":"+s.cfg.GetString("api.mmlogic.port"))
if err != nil {
mlLog.WithFields(log.Fields{
"error": err.Error(),
"port": s.cfg.GetInt("api.mmlogic.port"),
}).Error("net.Listen() error")
return err
}
mlLog.WithFields(log.Fields{"port": s.cfg.GetInt("api.mmlogic.port")}).Info("TCP net listener initialized")
go func() {
mlLog.Info("serving gRPC endpoints")
err := s.grpc.Serve(ln)
if err != nil {
mlLog.WithFields(log.Fields{"error": err.Error()}).Error("gRPC serve() error")
}
}()
return nil
}
// GetProfile is this service's implementation of the gRPC call defined in
// mmlogicapi/proto/mmlogic.proto
func (s *mmlogicAPI) GetProfile(c context.Context, profile *mmlogic.MatchObject) (*mmlogic.MatchObject, error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Create context for tagging OpenCensus metrics.
funcName := "GetProfile"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
// Get profile.
mlLog.WithFields(log.Fields{"profileid": profile.Id}).Info("Attempting retrieval of profile")
err := redispb.UnmarshalFromRedis(c, s.pool, profile)
mlLog.Debug("returned profile from redispb", profile)
if err != nil {
mlLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"profileid": profile.Id,
}).Error("State storage error")
stats.Record(fnCtx, MlGrpcErrors.M(1))
return profile, err
}
mlLog.WithFields(log.Fields{"profileid": profile.Id}).Debug("Retrieved profile from state storage")
mlLog.Debug(profile)
stats.Record(fnCtx, MlGrpcRequests.M(1))
//return out, err
return profile, err
}
// CreateProposal is this service's implementation of the gRPC call defined in
// mmlogicapi/proto/mmlogic.proto
func (s *mmlogicAPI) CreateProposal(c context.Context, prop *mmlogic.MatchObject) (*mmlogic.Result, error) {
// Retrieve configured redis keys.
list := "proposed"
proposalq := s.cfg.GetString("queues.proposals.name")
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Create context for tagging OpenCensus metrics.
funcName := "CreateProposal"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
// Log what kind of results we received.
cpLog := mlLog.WithFields(log.Fields{"id": prop.Id})
if len(prop.Error) == 0 {
cpLog.Info("writing MMF proposal to state storage")
} else {
cpLog.Info("writing MMF error to state storage")
}
// Write all non-id fields from the protobuf message to state storage.
err := redispb.MarshalToRedis(c, s.pool, prop, s.cfg.GetInt("redis.expirations.matchobject"))
if err != nil {
stats.Record(fnCtx, MlGrpcErrors.M(1))
return &mmlogic.Result{Success: false, Error: err.Error()}, err
}
// Proposals need two more actions: players added to ignorelist, and adding
// the proposalkey to the proposal queue for the evaluator to read.
if len(prop.Error) == 0 {
// look for players to add to the ignorelist
cpLog.Info("parsing rosters")
playerIDs := make([]string, 0)
for _, roster := range prop.Rosters {
playerIDs = append(playerIDs, getPlayerIdsFromRoster(roster)...)
}
// If players were on the roster, add them to the ignorelist
if len(playerIDs) > 0 {
cpLog.WithFields(log.Fields{
"count": len(playerIDs),
"ignorelist": list,
}).Info("adding players to ignorelist")
err := ignorelist.Add(redisConn, list, playerIDs)
if err != nil {
cpLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"ignorelist": list,
}).Error("State storage error")
// record error.
stats.Record(fnCtx, MlGrpcErrors.M(1))
return &mmlogic.Result{Success: false, Error: err.Error()}, err
}
} else {
cpLog.Warn("found no players in rosters, not adding any players to the proposed ignorelist")
}
// add propkey to proposalsq
pqLog := cpLog.WithFields(log.Fields{
"component": "statestorage",
"queue": proposalq,
})
pqLog.Info("adding proposal to queue")
_, err = redisConn.Do("SADD", proposalq, prop.Id)
if err != nil {
pqLog.WithFields(log.Fields{"error": err.Error()}).Error("State storage error")
// record error.
stats.Record(fnCtx, MlGrpcErrors.M(1))
return &mmlogic.Result{Success: false, Error: err.Error()}, err
}
}
// Mark this MMF as finished by decrementing the concurrent MMFs.
// This is used to trigger the evaluator early if all MMFs have finished
// before its next scheduled run.
cmLog := cpLog.WithFields(log.Fields{
"component": "statestorage",
"key": "concurrentMMFs",
})
cmLog.Info("marking MMF finished for evaluator")
_, err = redishelpers.Decrement(fnCtx, s.pool, "concurrentMMFs")
if err != nil {
cmLog.WithFields(log.Fields{"error": err.Error()}).Error("State storage error")
// record error.
stats.Record(fnCtx, MlGrpcErrors.M(1))
return &mmlogic.Result{Success: false, Error: err.Error()}, err
}
stats.Record(fnCtx, MlGrpcRequests.M(1))
return &mmlogic.Result{Success: true, Error: ""}, err
}
// GetPlayerPool is this service's implementation of the gRPC call defined in
// mmlogicapi/proto/mmlogic.proto
// API_GetPlayerPoolServer returns multiple PlayerPool messages - they should
// all be reassembled into one set on the calling side, as they are just
// paginated subsets of the player pool.
func (s *mmlogicAPI) GetPlayerPool(pool *mmlogic.PlayerPool, stream mmlogic.MmLogic_GetPlayerPoolServer) error {
// TODO: quit if context is cancelled
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Create context for tagging OpenCensus metrics.
funcName := "GetPlayerPool"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
mlLog.WithFields(log.Fields{
"filterCount": len(pool.Filters),
"pool": pool.Name,
"funcName": funcName,
}).Info("attempting to retrieve player pool from state storage")
// One working Roster per filter in the set. Combined at the end.
filteredRosters := make(map[string][]string)
// Temp store the results so we can also populate some field values in the final return roster.
filteredResults := make(map[string]map[string]int64)
overlap := make([]string, 0)
fnStart := time.Now()
// Loop over all filters, get results, combine
for _, thisFilter := range pool.Filters {
filterStart := time.Now()
results, err := s.applyFilter(ctx, thisFilter)
thisFilter.Stats = &mmlogic.Stats{Count: int64(len(results)), Elapsed: time.Since(filterStart).Seconds()}
mlLog.WithFields(log.Fields{
"count": int64(len(results)),
"elapsed": time.Since(filterStart).Seconds(),
"filterName": thisFilter.Name,
}).Debug("Filter stats")
if err != nil {
mlLog.WithFields(log.Fields{"error": err.Error(), "filterName": thisFilter.Name}).Debug("Error applying filter")
if len(results) == 0 {
// One simple optimization here: check the count returned by a
// ZCOUNT query for each filter before doing anything. If any of the
// filters return a ZCOUNT of 0, then the logical AND of all filters will
// contain no players and we can short-circuit and quit.
mlLog.WithFields(log.Fields{
"count": 0,
"filterName": thisFilter.Name,
"pool": pool.Name,
}).Warn("returning empty pool")
// Fill in the stats for this player pool.
pool.Stats = &mmlogic.Stats{Count: int64(len(results)), Elapsed: time.Since(filterStart).Seconds()}
// Send the empty pool and exit.
if err = stream.Send(pool); err != nil {
stats.Record(fnCtx, MlGrpcErrors.M(1))
return err
}
stats.Record(fnCtx, MlGrpcRequests.M(1))
return nil
}
}
// Make an array of only the player IDs; used to do set.Unions and find the
// logical AND
m := make([]string, len(results))
i := 0
for playerID := range results {
m[i] = playerID
i++
}
// Store the array of player IDs as well as the full results for later
// retrieval
filteredRosters[thisFilter.Attribute] = m
filteredResults[thisFilter.Attribute] = results
overlap = m
}
// Player must be in every filtered pool to be returned
for field, thesePlayers := range filteredRosters {
overlap = set.Intersection(overlap, thesePlayers)
_ = field
//mlLog.WithFields(log.Fields{"count": len(overlap), "field": field}).Debug("Amount of overlap")
}
// Get contents of all ignore lists and remove those players from the pool.
il, err := s.allIgnoreLists(ctx, &mmlogic.IlInput{})
if err != nil {
mlLog.Error(err)
}
mlLog.WithFields(log.Fields{"count": len(overlap)}).Debug("Pool size before applying ignorelists")
mlLog.WithFields(log.Fields{"count": len(il)}).Debug("Ignorelist size")
playerList := set.Difference(overlap, il) // removes ignorelist from the Roster
mlLog.WithFields(log.Fields{"count": len(playerList)}).Debug("Final Pool size")
// Reformat the playerList as a gRPC PlayerPool message. Send partial results as we go.
// This is pretty aggressive in the partial result 'page'
// sizes it sends, and that is partially because it assumes you're running
// everything on a local network. If you aren't, you may need to tune this
// pageSize.
pageSize := s.cfg.GetInt("redis.results.pageSize")
pageCount := int(math.Ceil((float64(len(playerList)) / float64(pageSize)))) // Divides and rounds up on any remainder
//TODO: change if removing filtersets from rosters in favor of it being in pools
partialRoster := mmlogic.Roster{Name: fmt.Sprintf("%v.partialRoster", pool.Name)}
pool.Stats = &mmlogic.Stats{Count: int64(len(playerList)), Elapsed: time.Since(fnStart).Seconds()}
for i := 0; i < len(playerList); i++ {
// Add one additional player result to the partial pool.
player := &mmlogic.Player{Id: playerList[i], Attributes: []*mmlogic.Player_Attribute{}}
// Collect all the filtered attributes into the player protobuf.
for attribute, fr := range filteredResults {
if value, ok := fr[playerList[i]]; ok {
player.Attributes = append(player.Attributes, &mmlogic.Player_Attribute{Name: attribute, Value: value})
}
}
partialRoster.Players = append(partialRoster.Players, player)
// Check if we've filled in enough players to fill a page of results.
if ((i+1)%pageSize == 0) || i == (len(playerList)-1) {
pageName := fmt.Sprintf("%v.page%v/%v", pool.Name, i/pageSize+1, pageCount)
poolChunk := &mmlogic.PlayerPool{
Name: pageName,
Filters: pool.Filters,
Stats: pool.Stats,
Roster: &partialRoster,
}
if err = stream.Send(poolChunk); err != nil {
stats.Record(fnCtx, MlGrpcErrors.M(1))
return err
}
partialRoster.Players = []*mmlogic.Player{}
}
}
mlLog.WithFields(log.Fields{"count": len(playerList), "pool": pool.Name}).Debug("player pool streaming complete")
stats.Record(fnCtx, MlGrpcRequests.M(1))
return nil
}
// applyFilter is a sequential query of every entry in the Redis sorted set
// that fall between the minimum and maximum values passed in through the filter
// argument. This can likely be sped up later using concurrent access, but
// with small enough player pools (less than the 'redis.queryArgs.count' config
// parameter) the amount of work is identical, so this is fine as a starting point.
// If the provided field is not indexed or the provided range is too large, a nil result
// is returned and this filter should be disregarded when applying filter overlaps.
func (s *mmlogicAPI) applyFilter(c context.Context, filter *mmlogic.Filter) (map[string]int64, error) {
type pName string
pool := make(map[string]int64)
// Default maximum value is positive infinity (i.e. highest possible number in redis)
// https://redis.io/commands/zrangebyscore
maxv := strconv.FormatInt(filter.Maxv, 10) // Convert int64 to a string
if filter.Maxv == 0 { // No max specified, set to +inf
maxv = "+inf"
}
mlLog.WithFields(log.Fields{"filterField": filter.Attribute}).Debug("In applyFilter")
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Check how many expected matches for this filter before we start retrieving.
cmd := "ZCOUNT"
count, err := redis.Int64(redisConn.Do(cmd, filter.Attribute, filter.Minv, maxv))
//DEBUG: count, err := redis.Int64(redisConn.Do(cmd, "BLARG", filter.Minv, maxv))
mlLog := mlLog.WithFields(log.Fields{
"query": cmd,
"field": filter.Attribute,
"minv": filter.Minv,
"maxv": maxv,
"count": count,
})
if err != nil {
mlLog.WithFields(log.Fields{"error": err.Error()}).Error("state storage error")
return nil, err
}
if count == 0 {
err = errors.New("filter applies to no players")
mlLog.Error(err.Error())
return nil, err
} else if count > 500000 {
// 500,000 results is an arbitrary number; OM doesn't encourage
// patterns where MMFs look at this large of a pool.
err = errors.New("filter applies to too many players")
mlLog.Error(err.Error())
for i := 0; i < int(count); i++ {
// Send back a placeholder pool sized to the count; the calling function only uses it to calculate the number of results
pool[strconv.Itoa(i)] = 0
}
return pool, err
} else if count < 100000 {
mlLog.Info("filter processed")
} else {
// Send a warning to the logs.
mlLog.Warn("filter applies to a large number of players")
}
// Amount of results look okay and no redis error, begin
// var init for player retrieval
cmd = "ZRANGEBYSCORE"
offset := 0
// Loop, retrieving players in chunks.
for len(pool) == offset {
results, err := redis.Int64Map(redisConn.Do(cmd, filter.Attribute, filter.Minv, maxv, "WITHSCORES", "LIMIT", offset, s.cfg.GetInt("redis.queryArgs.count")))
if err != nil {
mlLog.WithFields(log.Fields{
"query": cmd,
"field": filter.Attribute,
"minv": filter.Minv,
"maxv": maxv,
"offset": offset,
"count": s.cfg.GetInt("redis.queryArgs.count"),
"error": err.Error(),
}).Error("statestorage error")
}
// Increment the offset for the next query by the 'count' config value
offset = offset + s.cfg.GetInt("redis.queryArgs.count")
// Add all results to this player pool
for k, v := range results {
if _, ok := pool[k]; ok {
// Redis returned the same player more than once; this is not
// actually a problem, it just indicates that players are being
// added/removed from the index as it is queried. We take the
// tradeoff in consistency for speed, as it won't cause issues
// in matchmaking results as long as ignorelists are respected.
offset--
}
pool[k] = v
}
}
// Log completion and return
//mlLog.WithFields(log.Fields{
// "poolSize": len(pool),
// "field": filter.Attribute,
// "minv": filter.Minv,
// "maxv": maxv,
//}).Debug("Player pool filter processed")
return pool, nil
}
// GetAllIgnoredPlayers is this service's implementation of the gRPC call defined in
// mmlogicapi/proto/mmlogic.proto
// This is a wrapper around allIgnoreLists, and converts the []string return
// value of that function to a gRPC Roster message to send out over the wire.
func (s *mmlogicAPI) GetAllIgnoredPlayers(c context.Context, in *mmlogic.IlInput) (*mmlogic.Roster, error) {
// Create context for tagging OpenCensus metrics.
funcName := "GetAllIgnoredPlayers"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
il, err := s.allIgnoreLists(c, in)
stats.Record(fnCtx, MlGrpcRequests.M(1))
return createRosterfromPlayerIds(il), err
}
// ListIgnoredPlayers is this service's implementation of the gRPC call defined in
// mmlogicapi/proto/mmlogic.proto
func (s *mmlogicAPI) ListIgnoredPlayers(c context.Context, olderThan *mmlogic.IlInput) (*mmlogic.Roster, error) {
// TODO: is this supposed to be able to take any list?
ilName := "proposed"
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Create context for tagging OpenCensus metrics.
funcName := "ListIgnoredPlayers"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
mlLog.WithFields(log.Fields{"ignorelist": ilName}).Info("Attempting to get ignorelist")
// retrieve ignore list
il, err := ignorelist.Retrieve(redisConn, s.cfg, ilName)
if err != nil {
mlLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"key": ilName,
}).Error("State storage error")
stats.Record(fnCtx, MlGrpcErrors.M(1))
return &mmlogic.Roster{}, err
}
// TODO: fix this
mlLog.Debug(fmt.Sprintf("Retrieval success %v", il))
stats.Record(fnCtx, MlGrpcRequests.M(1))
return createRosterfromPlayerIds(il), err
}
// allIgnoreLists combines all the ignore lists and returns them.
func (s *mmlogicAPI) allIgnoreLists(c context.Context, in *mmlogic.IlInput) (allIgnored []string, err error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
mlLog.Info("Attempting to get and combine ignorelists")
// Loop through all ignorelists configured in the config file.
for il := range s.cfg.GetStringMap("ignoreLists") {
ilCfg := s.cfg.Sub(fmt.Sprintf("ignoreLists.%v", il))
thisIl, err := ignorelist.Retrieve(redisConn, ilCfg, il)
if err != nil {
// Return the error instead of panicking inside a serving process.
mlLog.WithFields(log.Fields{"error": err.Error(), "ignorelist": il}).Error("State storage error")
return nil, err
}
// Join this ignorelist to the others we've retrieved
allIgnored = set.Union(allIgnored, thisIl)
}
return allIgnored, err
}
// Functions for getting or setting player IDs to/from rosters
// Probably should get moved to an internal module in a future version.
func getPlayerIdsFromRoster(r *mmlogic.Roster) []string {
playerIDs := make([]string, 0)
for _, p := range r.Players {
playerIDs = append(playerIDs, p.Id)
}
return playerIDs
}
func createRosterfromPlayerIds(playerIDs []string) *mmlogic.Roster {
players := make([]*mmlogic.Player, 0)
for _, id := range playerIDs {
players = append(players, &mmlogic.Player{Id: id})
}
return &mmlogic.Roster{Players: players}
}

@@ -1,139 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"go.opencensus.io/stats"
"go.opencensus.io/stats/view"
"go.opencensus.io/tag"
)
// OpenCensus Measures. These are exported as metrics to your monitoring system
// https://godoc.org/go.opencensus.io/stats
//
// When making opencensus stats, the 'name' param, with forward slashes changed
// to underscores, is appended to the 'namespace' value passed to the
// prometheus exporter to become the Prometheus metric name. You can also look
// into having Prometheus rewrite your metric names on scrape.
//
// For example:
// - defining the prometheus export namespace "open_match" when instantiating the exporter:
// pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "open_match"})
// - and naming the request counter "mmlogic/requests_total":
// MGrpcRequests := stats.Int64("mmlogicapi/requests_total", ...
// - results in the prometheus metric name:
// open_match_mmlogicapi_requests_total
// - [note] when using opencensus views to aggregate the metrics into
// distribution buckets and such, multiple metrics
// will be generated with appended types ("<metric>_bucket",
// "<metric>_count", "<metric>_sum", for example)
//
// In addition, OpenCensus stats propagated to Prometheus have the following
// auto-populated labels pulled from kubernetes, which we should avoid reusing
// to prevent label clashes and having to use the HonorLabels param in Prometheus.
//
// - Information about the k8s pod being monitored:
// "pod" (name of the monitored k8s pod)
// "namespace" (k8s namespace of the monitored pod)
// - Information about how prometheus is gathering the metrics:
// "instance" (IP and port number being scraped by prometheus)
// "job" (name of the k8s service being scraped by prometheus)
// "endpoint" (name of the k8s port in the k8s service being scraped by prometheus)
//
var (
// API instrumentation
MlGrpcRequests = stats.Int64("mmlogicapi/requests_total", "Number of requests to the gRPC Frontend API endpoints", "1")
MlGrpcErrors = stats.Int64("mmlogicapi/errors_total", "Number of errors generated by the gRPC Frontend API endpoints", "1")
MlGrpcLatencySecs = stats.Float64("mmlogicapi/latency_seconds", "Latency in seconds of the gRPC Frontend API endpoints", "1")
// Logging instrumentation
// There's no need to record this measurement directly if you use
// the logrus hook provided in metrics/helper.go after instantiating the
// logrus instance in your application code.
// https://godoc.org/github.com/sirupsen/logrus#LevelHooks
MlLogLines = stats.Int64("mmlogicapi/logs_total", "Number of Frontend API lines logged", "1")
// Failure instrumentation
MlFailures = stats.Int64("mmlogicapi/failures_total", "Number of Frontend API failures", "1")
)
var (
// KeyMethod is used to tag a measure with the currently running API method.
KeyMethod, _ = tag.NewKey("method")
KeySeverity, _ = tag.NewKey("severity")
)
var (
// Latency in buckets:
// [>=0ms, >=25ms, >=50ms, >=75ms, >=100ms, >=200ms, >=400ms, >=600ms, >=800ms, >=1s, >=2s, >=4s, >=6s]
latencyDistribution = view.Distribution(0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000)
)
// Package metrics provides some convenience views.
// You need to register the views for the data to actually be collected.
// Note: The OpenCensus View 'Description' is exported to Prometheus as the HELP string.
// Note: If you get a "Failed to export to Prometheus: inconsistent label
// cardinality" error, chances are you forgot to set the tags specified in the
// view for a given measure when you tried to do a stats.Record()
var (
MlLatencyView = &view.View{
Name: "mmlogic/latency",
Measure: MlGrpcLatencySecs,
Description: "The distribution of mmlogic latencies",
Aggregation: latencyDistribution,
TagKeys: []tag.Key{KeyMethod},
}
MlRequestCountView = &view.View{
Name: "mmlogic/grpc/requests",
Measure: MlGrpcRequests,
Description: "The number of successful mmlogic gRPC requests",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
MlErrorCountView = &view.View{
Name: "mmlogic/grpc/errors",
Measure: MlGrpcErrors,
Description: "The number of gRPC errors",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
MlLogCountView = &view.View{
Name: "log_lines/total",
Measure: MlLogLines,
Description: "The number of lines logged",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeySeverity},
}
MlFailureCountView = &view.View{
Name: "failures",
Measure: MlFailures,
Description: "The number of failures",
Aggregation: view.Count(),
}
)
// DefaultMmlogicAPIViews are the default mmlogic API OpenCensus measure views.
var DefaultMmlogicAPIViews = []*view.View{
MlLatencyView,
MlRequestCountView,
MlErrorCountView,
MlLogCountView,
MlFailureCountView,
}
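The `latencyDistribution` boundaries above define histogram buckets in milliseconds. As a rough, self-contained sketch of how a measurement maps onto those boundaries (the `bucketFor` helper is illustrative only, not part of OpenCensus, and ignores OpenCensus's exact boundary-inclusivity rules):

```go
package main

import "fmt"

// bounds mirrors the latencyDistribution boundaries above (milliseconds).
var bounds = []float64{0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000, 4000, 6000}

// bucketFor returns the index of the first boundary the value does not
// exceed, i.e. roughly which histogram bucket a measurement lands in.
func bucketFor(ms float64) int {
	for i, b := range bounds {
		if ms <= b {
			return i
		}
	}
	return len(bounds) // past the last boundary: overflow bucket
}

func main() {
	fmt.Println(bucketFor(30))   // lands at the 50ms boundary
	fmt.Println(bucketFor(7000)) // beyond 6s: overflow bucket
}
```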

@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmlogicapi:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmlogicapi:dev']

@ -1,104 +0,0 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in
${OM_ROOT}/internal/pb/mmlogic.pb.go
All the actual important bits are in the API Server source code: apisrv/apisrv.go
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"errors"
"os"
"os/signal"
"github.com/GoogleCloudPlatform/open-match/cmd/mmlogicapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
)
var (
// Logrus structured logging setup
mlLogFields = log.Fields{
"app": "openmatch",
"component": "mmlogic",
}
mlLog = log.WithFields(mlLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(apisrv.MlLogLines, apisrv.KeySeverity))
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
mlLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
mlLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocServerViews := apisrv.DefaultMmlogicAPIViews // Matchmaking logic API OpenCensus views.
ocServerViews = append(ocServerViews, ocgrpc.DefaultServerViews...) // gRPC OpenCensus views.
ocServerViews = append(ocServerViews, config.CfgVarCountView) // config loader view.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocServerViews = append(ocServerViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
mlLog.WithFields(log.Fields{"viewscount": len(ocServerViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocServerViews)
}
func main() {
// Connect to redis
pool := redisHelpers.ConnectionPool(cfg)
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
mlLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server
err := srv.Open()
if err != nil {
mlLog.WithFields(log.Fields{"error": err.Error()}).Fatal("Failed to start gRPC server")
}
// Exit when we see a signal
terminate := make(chan os.Signal, 1)
signal.Notify(terminate, os.Interrupt)
<-terminate
mlLog.Info("Shutting down gRPC server")
}

@ -1 +0,0 @@
../../config/matchmaker_config.json

61
cmd/swaggerui/Dockerfile Normal file

@ -0,0 +1,61 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/swaggerui/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
COPY api/*.json /go/src/open-match.dev/open-match/third_party/swaggerui/api/
# Since we copy the swagger docs into the container, point the config at them so they are served locally.
# This is important because if there are local changes we want those reflected in the container.
RUN sed -i 's|https://open-match.dev/api/v.*/|/api/|g' /go/src/open-match.dev/open-match/third_party/swaggerui/config.json
FROM gcr.io/distroless/static:nonroot
WORKDIR /app
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/swaggerui/swaggerui /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/third_party/swaggerui/ /app/static
ENTRYPOINT ["/app/swaggerui"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Open Match Swagger UI"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_DATE}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/GoogleCloudPlatform/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="1" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/GoogleCloudPlatform/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"

10
cmd/swaggerui/config.json Normal file

@ -0,0 +1,10 @@
{
"urls": [
{"name": "Frontend", "url": "https://open-match.dev/api/v0.0.0-dev/frontend.swagger.json"},
{"name": "Backend", "url": "https://open-match.dev/api/v0.0.0-dev/backend.swagger.json"},
{"name": "Mmlogic", "url": "https://open-match.dev/api/v0.0.0-dev/mmlogic.swagger.json"},
{"name": "MatchFunction", "url": "https://open-match.dev/api/v0.0.0-dev/matchfunction.swagger.json"},
{"name": "Synchronizer", "url": "https://open-match.dev/api/v0.0.0-dev/synchronizer.swagger.json"},
{"name": "Evaluator", "url": "https://open-match.dev/api/v0.0.0-dev/evaluator.swagger.json"}
]
}

@ -0,0 +1,24 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is a simple webserver for hosting Open Match Swagger UI.
package main
import (
"open-match.dev/open-match/internal/app/swaggerui"
)
func main() {
swaggerui.RunApplication()
}

@ -0,0 +1,56 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/open-match.dev/open-match/cmd/synchronizer/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static:nonroot
WORKDIR /app/
COPY --from=builder --chown=nonroot /go/src/open-match.dev/open-match/cmd/synchronizer/synchronizer /app/
ENTRYPOINT ["/app/synchronizer"]
# Docker Image Arguments
ARG BUILD_DATE
ARG VCS_REF
ARG BUILD_VERSION
ARG IMAGE_TITLE="Open Match Synchronizer API"
# Standardized Docker Image Labels
# https://github.com/opencontainers/image-spec/blob/master/annotations.md
LABEL \
org.opencontainers.image.created="${BUILD_DATE}" \
org.opencontainers.image.authors="Google LLC <open-match-discuss@googlegroups.com>" \
org.opencontainers.image.url="https://open-match.dev/" \
org.opencontainers.image.documentation="https://open-match.dev/site/docs/" \
org.opencontainers.image.source="https://github.com/googleforgames/open-match" \
org.opencontainers.image.version="${BUILD_VERSION}" \
org.opencontainers.image.revision="1" \
org.opencontainers.image.vendor="Google LLC" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.ref.name="" \
org.opencontainers.image.title="${IMAGE_TITLE}" \
org.opencontainers.image.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.schema-version="1.0" \
org.label-schema.build-date=$BUILD_DATE \
org.label-schema.url="http://open-match.dev/" \
org.label-schema.vcs-url="https://github.com/googleforgames/open-match" \
org.label-schema.version=$BUILD_VERSION \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vendor="Google LLC" \
org.label-schema.name="${IMAGE_TITLE}" \
org.label-schema.description="Flexible, extensible, and scalable video game matchmaking." \
org.label-schema.usage="https://open-match.dev/site/docs/"

@ -0,0 +1,24 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package main is the synchronizer service for Open Match.
package main
import (
"open-match.dev/open-match/internal/app/synchronizer"
)
func main() {
synchronizer.RunApplication()
}

@ -1,110 +0,0 @@
{
"debug": true,
"logging":{
"level": "debug",
"format": "text",
"source": true
},
"api": {
"backend": {
"hostname": "om-backendapi",
"port": 50505,
"timeout": 90
},
"frontend": {
"hostname": "om-frontendapi",
"port": 50504,
"timeout": 300
},
"mmlogic": {
"hostname": "om-mmlogicapi",
"port": 50503
}
},
"evalutor": {
"interval": 10
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
"reportingPeriod": 5
},
"queues": {
"profiles": {
"name": "profileq",
"pullCount": 100
},
"proposals": {
"name": "proposalq"
}
},
"ignoreLists": {
"proposed": {
"name": "proposed",
"offset": 0,
"duration": 800
},
"deindexed": {
"name": "deindexed",
"offset": 0,
"duration": 800
},
"expired": {
"name": "OM_METADATA.accessed",
"offset": 800,
"duration": 0
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/open-match-public-images/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple",
"tag": "dev"
}
},
"redis": {
"user": "",
"password": "",
"pool" : {
"maxIdle" : 3,
"maxActive" : 0,
"idleTimeout" : 60
},
"queryArgs":{
"count": 10000
},
"results": {
"pageSize": 10000
},
"expirations": {
"player": 43200,
"matchobject":43200
}
},
"jsonkeys": {
"mmfImage": "imagename",
"rosters": "properties.rosters",
"pools": "properties.pools"
},
"playerIndices": [
"char.cleric",
"char.knight",
"char.paladin",
"map.aleroth",
"map.oasis",
"mmr.rating",
"mode.battleroyale",
"mode.ctf",
"region.europe-east1",
"region.europe-west1",
"region.europe-west2",
"region.europe-west3",
"region.europe-west4",
"role.dps",
"role.support",
"role.tank"
]
}

@ -1,53 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-backendapi",
"labels":{
"app":"openmatch",
"component": "backend"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "backend"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "backend"
}
},
"spec":{
"containers":[
{
"name":"om-backend",
"image":"gcr.io/open-match-public-images/openmatch-backendapi:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "grpc",
"containerPort": 50505
},
{
"name": "metrics",
"containerPort": 9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
}
}
]
}
}
}
}

@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-backendapi"
},
"spec": {
"selector": {
"app": "openmatch",
"component": "backend"
},
"ports": [
{
"protocol": "TCP",
"port": 50505,
"targetPort": "grpc"
}
]
}
}

@ -1,53 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-frontendapi",
"labels":{
"app":"openmatch",
"component": "frontend"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "frontend"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "frontend"
}
},
"spec":{
"containers":[
{
"name":"om-frontendapi",
"image":"gcr.io/open-match-public-images/openmatch-frontendapi:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "grpc",
"containerPort": 50504
},
{
"name": "metrics",
"containerPort": 9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
}
}
]
}
}
}
}

@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-frontendapi"
},
"spec": {
"selector": {
"app": "openmatch",
"component": "frontend"
},
"ports": [
{
"protocol": "TCP",
"port": 50504,
"targetPort": "grpc"
}
]
}
}

@ -1,27 +0,0 @@
{
"apiVersion": "monitoring.coreos.com/v1",
"kind": "ServiceMonitor",
"metadata": {
"name": "openmatch-metrics",
"labels": {
"app": "openmatch",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "openmatch",
"agent": "opencensus",
"destination": "prometheus"
}
},
"endpoints": [
{
"port": "metrics",
"interval": "10s"
}
]
}
}

@ -1,78 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-frontend-metrics",
"labels": {
"app": "openmatch",
"component": "frontend",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"app": "openmatch",
"component": "frontend"
},
"ports": [
{
"name": "metrics",
"targetPort": 9555,
"port": 19555
}
]
}
}
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-backend-metrics",
"labels": {
"app": "openmatch",
"component": "backend",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"app": "openmatch",
"component": "backend"
},
"ports": [
{
"name": "metrics",
"targetPort": 9555,
"port": 29555
}
]
}
}
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-mmforc-metrics",
"labels": {
"app": "openmatch",
"component": "mmforc",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"app": "openmatch",
"component": "mmforc"
},
"ports": [
{
"name": "metrics",
"targetPort": 9555,
"port": 39555
}
]
}
}

@ -1,59 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-mmforc",
"labels":{
"app":"openmatch",
"component": "mmforc"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "mmforc"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "mmforc"
}
},
"spec":{
"containers":[
{
"name":"om-mmforc",
"image":"gcr.io/open-match-public-images/openmatch-mmforc:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "metrics",
"containerPort":9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
},
"env":[
{
"name":"METADATA_NAMESPACE",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.namespace"
}
}
}
]
}
]
}
}
}
}

@ -1,19 +0,0 @@
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRoleBinding",
"metadata": {
"name": "mmf-sa"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "default",
"namespace": "default"
}
],
"roleRef": {
"kind": "ClusterRole",
"name": "cluster-admin",
"apiGroup": "rbac.authorization.k8s.io"
}
}

@ -1,53 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-mmlogicapi",
"labels":{
"app":"openmatch",
"component": "mmlogic"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "mmlogic"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "mmlogic"
}
},
"spec":{
"containers":[
{
"name":"om-mmlogic",
"image":"gcr.io/open-match-public-images/openmatch-mmlogicapi:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "grpc",
"containerPort": 50503
},
{
"name": "metrics",
"containerPort": 9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
}
}
]
}
}
}
}

@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-mmlogicapi"
},
"spec": {
"selector": {
"app": "openmatch",
"component": "mmlogic"
},
"ports": [
{
"protocol": "TCP",
"port": 50503,
"targetPort": "grpc"
}
]
}
}

@ -1,20 +0,0 @@
{
"apiVersion": "monitoring.coreos.com/v1",
"kind": "Prometheus",
"metadata": {
"name": "prometheus"
},
"spec": {
"serviceMonitorSelector": {
"matchLabels": {
"app": "openmatch"
}
},
"serviceAccountName": "prometheus",
"resources": {
"requests": {
"memory": "400Mi"
}
}
}
}

@ -1,266 +0,0 @@
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRoleBinding",
"metadata": {
"name": "prometheus-operator"
},
"roleRef": {
"apiGroup": "rbac.authorization.k8s.io",
"kind": "ClusterRole",
"name": "prometheus-operator"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "prometheus-operator",
"namespace": "default"
}
]
}
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "prometheus"
}
}
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRole",
"metadata": {
"name": "prometheus"
},
"rules": [
{
"apiGroups": [
""
],
"resources": [
"nodes",
"services",
"endpoints",
"pods"
],
"verbs": [
"get",
"list",
"watch"
]
},
{
"apiGroups": [
""
],
"resources": [
"configmaps"
],
"verbs": [
"get"
]
},
{
"nonResourceURLs": [
"/metrics"
],
"verbs": [
"get"
]
}
]
}
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRoleBinding",
"metadata": {
"name": "prometheus"
},
"roleRef": {
"apiGroup": "rbac.authorization.k8s.io",
"kind": "ClusterRole",
"name": "prometheus"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "prometheus",
"namespace": "default"
}
]
}
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRole",
"metadata": {
"name": "prometheus-operator"
},
"rules": [
{
"apiGroups": [
"extensions"
],
"resources": [
"thirdpartyresources"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
"apiextensions.k8s.io"
],
"resources": [
"customresourcedefinitions"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
"monitoring.coreos.com"
],
"resources": [
"alertmanagers",
"prometheuses",
"prometheuses/finalizers",
"servicemonitors"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
"apps"
],
"resources": [
"statefulsets"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
""
],
"resources": [
"configmaps",
"secrets"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
""
],
"resources": [
"pods"
],
"verbs": [
"list",
"delete"
]
},
{
"apiGroups": [
""
],
"resources": [
"services",
"endpoints"
],
"verbs": [
"get",
"create",
"update"
]
},
{
"apiGroups": [
""
],
"resources": [
"nodes"
],
"verbs": [
"list",
"watch"
]
},
{
"apiGroups": [
""
],
"resources": [
"namespaces"
],
"verbs": [
"list"
]
}
]
}
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "prometheus-operator"
}
}
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"labels": {
"k8s-app": "prometheus-operator"
},
"name": "prometheus-operator"
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"k8s-app": "prometheus-operator"
}
},
"spec": {
"containers": [
{
"args": [
"--kubelet-service=kube-system/kubelet",
"--config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1"
],
"image": "quay.io/coreos/prometheus-operator:v0.17.0",
"name": "prometheus-operator",
"ports": [
{
"containerPort": 8080,
"name": "http"
}
],
"resources": {
"limits": {
"cpu": "200m",
"memory": "100Mi"
},
"requests": {
"cpu": "100m",
"memory": "50Mi"
}
}
}
],
"securityContext": {
"runAsNonRoot": true,
"runAsUser": 65534
},
"serviceAccountName": "prometheus-operator"
}
}
}
}

@ -1,22 +0,0 @@
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "prometheus"
},
"spec": {
"type": "NodePort",
"ports": [
{
"name": "web",
"nodePort": 30900,
"port": 9090,
"protocol": "TCP",
"targetPort": "web"
}
],
"selector": {
"prometheus": "prometheus"
}
}
}

@ -1,38 +0,0 @@
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "redis-master"
},
"spec": {
"selector": {
"matchLabels": {
"app": "mm",
"tier": "storage"
}
},
"replicas": 1,
"template": {
"metadata": {
"labels": {
"app": "mm",
"tier": "storage"
}
},
"spec": {
"containers": [
{
"name": "redis-master",
"image": "redis:4.0.11",
"ports": [
{
"name": "redis",
"containerPort": 6379
}
]
}
]
}
}
}
}

@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "redis"
},
"spec": {
"selector": {
"app": "mm",
"tier": "storage"
},
"ports": [
{
"protocol": "TCP",
"port": 6379,
"targetPort": "redis"
}
]
}
}

16
doc.go Normal file

@ -0,0 +1,16 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package openmatch provides flexible, extensible, and scalable video game matchmaking.
package openmatch // import "open-match.dev/open-match"

@ -1,184 +1,196 @@
# Compiling from source
# Development Guide
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild.yaml` files for each component in their respective directories. Note that most of them build from a 'base' image called `openmatch-devbase`. You can find a `Dockerfile` and `cloudbuild_base.yaml` file for this in the repository root. Build it first!
Open Match is a collection of [Go](https://golang.org/) gRPC services that run
within [Kubernetes](https://kubernetes.io).
Note: Although Google Cloud Platform includes some free usage, you may incur charges following this guide if you use GCP products.
## Install Prerequisites
## Security Disclaimer
**This project has not completed a first-line security audit, and there are definitely going to be some service accounts that are too permissive. This should be fine for testing/development in a local environment, but it absolutely should not be used as-is in a production environment without your team/organization evaluating its permissions.**
To build Open Match you'll need the following applications installed.
## Before getting started
**NOTE**: Before starting with this guide, you'll need to update all the URIs from the tutorial's gcr.io container image registry to the URI for your own image registry. If you are using the gcr.io registry on GCP, the default URI is `gcr.io/<PROJECT_NAME>`. Here's an example Linux command to do the replacement for you (replace `<PROJECT_NAME>` with your registry URI; run it from the repository root directory):
```
# Linux
egrep -lR 'open-match-public-images' . | xargs sed -i -e 's|open-match-public-images|<PROJECT_NAME>|g'
```
```
# Mac OS, you can delete the .backup files after if all looks good
egrep -lR 'open-match-public-images' . | xargs sed -i'.backup' -e 's|open-match-public-images|<PROJECT_NAME>|g'
* [Git](https://git-scm.com/downloads)
* [Go](https://golang.org/doc/install)
* [Python3 with virtualenv](https://wiki.python.org/moin/BeginnersGuide/Download)
* Make (Mac: install [XCode](https://itunes.apple.com/us/app/xcode/id497799835))
* [Docker](https://docs.docker.com/install/) including the
[post-install steps](https://docs.docker.com/install/linux/linux-postinstall/).
Optional Software
* [Google Cloud Platform](gcloud.md)
* [Visual Studio Code](https://code.visualstudio.com/Download) for IDE.
Vim and Emacs work too.
* [VirtualBox](https://www.virtualbox.org/wiki/Downloads) recommended for
[Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/).
On Debian-based Linux you can install all the required packages (except Go) by
running:
```bash
sudo apt-get update
sudo apt-get install -y -q python3 python3-virtualenv virtualenv make \
google-cloud-sdk git unzip tar
```
## Example of building using Google Cloud Builder
*It's recommended that you install Go using their instructions because package
managers tend to lag behind the latest Go releases.*
The [Quickstart for Docker](https://cloud.google.com/cloud-build/docs/quickstart-docker) guide explains how to set up a project, enable billing, enable Cloud Build, and install the Cloud SDK if you haven't done these things before. Once you get to 'Preparing source files' you are ready to continue with the steps below.
## Get the Code
* Clone this repo to a local machine or Google Cloud Shell session, and cd into it.
* In Linux, you can run the following one-line bash script to compile all the images for the first time, and push them to your gcr.io registry. You must enable the [Container Registry API](https://console.cloud.google.com/flows/enableapi?apiid=containerregistry.googleapis.com) first.
```
# First, build the 'base' image. Some other images depend on this so it must complete first.
gcloud builds submit --config cloudbuild_base.yaml
# Build all other images.
for dfile in $(find . -name "Dockerfile" -iregex "./\(cmd\|test\|examples\)/.*"); do cd $(dirname ${dfile}); gcloud builds submit --config cloudbuild.yaml & cd -; done
```
Note: as of v0.3.0 alpha, the Python and PHP MMF examples still depend on the previous way of building until [issue #42, introducing new config management](https://github.com/GoogleCloudPlatform/open-match/issues/42) is resolved (apologies for the inconvenience):
```
gcloud builds submit --config cloudbuild_mmf_py3.yaml
gcloud builds submit --config cloudbuild_mmf_php.yaml
```
* Once the cloud builds have completed, you can verify that all the builds succeeded in the cloud console or by checking the list of images in your **gcr.io** registry:
```
gcloud container images list
```
(your registry name will be different)
```
NAME
gcr.io/open-match-public-images/openmatch-backendapi
gcr.io/open-match-public-images/openmatch-devbase
gcr.io/open-match-public-images/openmatch-evaluator
gcr.io/open-match-public-images/openmatch-frontendapi
gcr.io/open-match-public-images/openmatch-mmf-golang-manual-simple
gcr.io/open-match-public-images/openmatch-mmf-php-mmlogic-simple
gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple
gcr.io/open-match-public-images/openmatch-mmforc
gcr.io/open-match-public-images/openmatch-mmlogicapi
```
## Example of starting a GKE cluster
A cluster with mostly default settings will work for this development guide. In the Cloud SDK command below we start it with machines that have 4 vCPUs. Alternatively, you can use the 'Create Cluster' button in the [Google Cloud Console](https://console.cloud.google.com/kubernetes).
```
gcloud container clusters create --machine-type n1-standard-4 open-match-dev-cluster --zone <ZONE>
```bash
# Create a directory for the project.
mkdir -p $HOME/workspace
cd $HOME/workspace
# Download the source code.
git clone https://github.com/googleforgames/open-match.git
cd open-match
# Print the help for the Makefile commands.
make
```
If you don't know which zone to launch the cluster in (`<ZONE>`), you can list all available zones by running the following command.
*Typically for contributing you'll want to
[create a fork](https://help.github.com/en/articles/fork-a-repo) and use that,
but for the purposes of this guide we'll be using upstream/master.*
```
gcloud compute zones list
## Building
```bash
# Reset workspace
make clean
# Compile all the binaries
make all -j$(nproc)
# Run tests
make test
# Build all the images.
make build-images -j$(nproc)
# Push images to gcr.io (requires Google Cloud SDK installed)
make push-images -j$(nproc)
# Push images to Docker Hub
make REGISTRY=mydockerusername push-images -j$(nproc)
```
## Configuration
_**-j$(nproc)** is a flag to tell make to parallelize the commands based on
the number of CPUs on your machine._
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration (if you would like to help us design the replacement config solution, please join the [discussion](https://github.com/GoogleCloudPlatform/open-match/issues/42)). To this end, there is a single centralized config file located in `<REPO_ROOT>/config/` which is symlinked into each component's subdirectory for convenience when building locally. Note: [there is an issue with symlinks on Windows](../issues/57).
## Deploying to Kubernetes

Kubernetes comes in many flavors and Open Match can be used in any of them.

_We support GKE ([setup guide](gcloud.md)), Minikube, and Kubernetes in Docker (KinD) in the Makefile. As long as kubectl is configured to talk to your Kubernetes cluster as the default context, the Makefile will honor that._

The rest of this guide assumes you have a cluster (the example uses GKE, but any cluster works with a little tweaking), that kubectl is configured to administer that cluster, and that you have built all the Docker container images described by the `Dockerfiles` in the repository root directory and tagged them `dev`. It also assumes you are in the `<REPO_ROOT>/deployments/k8s/` directory.
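As a quick sanity check of the note above, you can print the context the Makefile will use. This is a small sketch; it prints a fallback message when kubectl is unavailable or no context is set:

```shell
# The Makefile honors kubectl's current context. Print it to confirm which
# cluster you are about to operate on before applying anything.
CONTEXT=$(command -v kubectl >/dev/null 2>&1 && kubectl config current-context 2>/dev/null || echo "kubectl not found or no context set")
echo "$CONTEXT"
```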
* Start a copy of redis and a service in front of it:
```
kubectl apply -f redis_deployment.json
kubectl apply -f redis_service.json
```
* Run the **core components**: the frontend API, the backend API, the matchmaker function orchestrator (MMFOrc), and the matchmaking logic API.
**NOTE**: In order to kick off jobs, the matchmaker function orchestrator needs a service account with permission to administer the cluster. The permissions granted here are broader than necessary and should be reduced to the minimum required before launch, but they are acceptable for closed testing:
```
kubectl apply -f backendapi_deployment.json
kubectl apply -f backendapi_service.json
kubectl apply -f frontendapi_deployment.json
kubectl apply -f frontendapi_service.json
kubectl apply -f mmforc_deployment.json
kubectl apply -f mmforc_serviceaccount.json
kubectl apply -f mmlogicapi_deployment.json
kubectl apply -f mmlogicapi_service.json
```
* [optional, but recommended] Configure the OpenCensus metrics services:
```
kubectl apply -f metrics_services.json
```
* [optional] On GKE, applying the Kubernetes Prometheus Operator resource definition files fails unless you have a cluster-admin rolebinding, which you can create by running the following command first. See https://github.com/coreos/prometheus-operator/issues/357
```
kubectl create clusterrolebinding projectowner-cluster-admin-binding --clusterrole=cluster-admin --user=<GCP_ACCOUNT>
```
* [optional, uses beta software] If using Prometheus as your metrics gathering backend, configure the [Prometheus Kubernetes Operator](https://github.com/coreos/prometheus-operator):
```
kubectl apply -f prometheus_operator.json
kubectl apply -f prometheus.json
kubectl apply -f prometheus_service.json
kubectl apply -f metrics_servicemonitor.json
```
You should now be able to see the core component pods running with `kubectl get pods`. To see the core component metrics in the Prometheus Web UI, run `kubectl port-forward <PROMETHEUS_POD_NAME> 9090:9090` in your local shell, then open http://localhost:9090/targets in your browser to see which services Prometheus is collecting from.
## Running Open Match in a development environment

```bash
# Step 1: Create a Kubernetes (k8s) cluster
# KinD cluster: make create-kind-cluster/delete-kind-cluster
# GKE cluster: make create-gke-cluster/delete-gke-cluster
# or create a local Minikube cluster
make create-gke-cluster
# Step 2: Download helm and install Tiller in the cluster
make push-helm
# Step 3: Build and Push Open Match Images to gcr.io
make push-images -j$(nproc)
# Step 4: Install Open Match in the cluster.
make install-chart
```

Here's an example output from `kubectl get all` if everything started correctly and you included all the optional components (note: this could become out of date with upcoming versions; apologies if that happens):

```
NAME READY STATUS RESTARTS AGE
pod/om-backendapi-84bc9d8fff-q89kr 1/1 Running 0 9m
pod/om-frontendapi-55d5bb7946-c5ccb 1/1 Running 0 9m
pod/om-mmforc-85bfd7f4f6-wmwhc 1/1 Running 0 9m
pod/om-mmlogicapi-6488bc7fc6-g74dm 1/1 Running 0 9m
pod/prometheus-operator-5c8774cdd8-7c5qm 1/1 Running 0 9m
pod/prometheus-prometheus-0 2/2 Running 0 9m
pod/redis-master-9b6b86c46-b7ggn 1/1 Running 0 9m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 19m
service/om-backend-metrics ClusterIP 10.59.254.43 <none> 29555/TCP 9m
service/om-backendapi ClusterIP 10.59.240.211 <none> 50505/TCP 9m
service/om-frontend-metrics ClusterIP 10.59.246.228 <none> 19555/TCP 9m
service/om-frontendapi ClusterIP 10.59.250.59 <none> 50504/TCP 9m
service/om-mmforc-metrics ClusterIP 10.59.240.59 <none> 39555/TCP 9m
service/om-mmlogicapi ClusterIP 10.59.248.3 <none> 50503/TCP 9m
service/prometheus NodePort 10.59.252.212 <none> 9090:30900/TCP 9m
service/prometheus-operated ClusterIP None <none> 9090/TCP 9m
service/redis ClusterIP 10.59.249.197 <none> 6379/TCP 9m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/om-backendapi 1 1 1 1 9m
deployment.extensions/om-frontendapi 1 1 1 1 9m
deployment.extensions/om-mmforc 1 1 1 1 9m
deployment.extensions/om-mmlogicapi 1 1 1 1 9m
deployment.extensions/prometheus-operator 1 1 1 1 9m
deployment.extensions/redis-master 1 1 1 1 9m

NAME DESIRED CURRENT READY AGE
replicaset.extensions/om-backendapi-84bc9d8fff 1 1 1 9m
replicaset.extensions/om-frontendapi-55d5bb7946 1 1 1 9m
replicaset.extensions/om-mmforc-85bfd7f4f6 1 1 1 9m
replicaset.extensions/om-mmlogicapi-6488bc7fc6 1 1 1 9m
replicaset.extensions/prometheus-operator-5c8774cdd8 1 1 1 9m
replicaset.extensions/redis-master-9b6b86c46 1 1 1 9m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/om-backendapi 1 1 1 1 9m
deployment.apps/om-frontendapi 1 1 1 1 9m
deployment.apps/om-mmforc 1 1 1 1 9m
deployment.apps/om-mmlogicapi 1 1 1 1 9m
deployment.apps/prometheus-operator 1 1 1 1 9m
deployment.apps/redis-master 1 1 1 1 9m

NAME DESIRED CURRENT READY AGE
replicaset.apps/om-backendapi-84bc9d8fff 1 1 1 9m
replicaset.apps/om-frontendapi-55d5bb7946 1 1 1 9m
replicaset.apps/om-mmforc-85bfd7f4f6 1 1 1 9m
replicaset.apps/om-mmlogicapi-6488bc7fc6 1 1 1 9m
replicaset.apps/prometheus-operator-5c8774cdd8 1 1 1 9m
replicaset.apps/redis-master-9b6b86c46 1 1 1 9m

NAME DESIRED CURRENT AGE
statefulset.apps/prometheus-prometheus 1 1 9m
```

```bash
# Create a proxy to the Open Match pods so that you can access them locally.
# This command consumes a terminal window that you can kill via Ctrl+C.
# You can run `curl -X POST http://localhost:51504/v1/frontend/tickets` to send
# a CreateTicket request to the frontend service in the cluster.
# Then try visiting http://localhost:3000/ to view the graphs.
make proxy

# Teardown the install
make delete-chart
```
### End-to-End testing

## Interaction

Before integrating with Open Match you can manually interact with it to get a feel for how it works.

**Note**: The programs described below are bare-bones manual testing programs with no automation and no claim of code coverage. This part of the documentation is sparse because we expect to discard all of these tools before the 1.0 release and replace them with a fully automated end-to-end test suite and a collection of load-testing tools with extensive stats output and tracing capabilities. Tracing has to be integrated first, which will happen in an upcoming release.

In the end: *caveat emptor*. These tools all work and are quite small, so they are fairly easy for developers to understand from the code and logging output. They are provided as-is, purely as a reference point for how to begin experimenting with Open Match integrations.
`make proxy-ui` exposes the Swagger UI for Open Match locally on your computer.
You can then go to http://localhost:51500 and view the API as well as interactively call Open Match.
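The Swagger UI and the proxied REST gateway use the same endpoints. As a sketch, here is how the frontend tickets URL is composed; the port matches the `make proxy` output elsewhere in this guide, the path comes from api/frontend.swagger.json, and the JSON body shape shown in the comment is illustrative:

```shell
# Compose the frontend REST gateway endpoint exposed by `make proxy`.
OM_FRONTEND="http://localhost:51504"
TICKETS_URL="$OM_FRONTEND/v1/frontend/tickets"
echo "POST $TICKETS_URL"
# With `make proxy` running against a live cluster, you would send it with:
# curl -X POST "$TICKETS_URL" -H 'Content-Type: application/json' -d '{"ticket": {}}'
```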
* `examples/frontendclient` is a fake client for the Frontend API. It pretends to be a group of real game clients connecting to Open Match. It requests a game, then dumps the results each player receives to the screen until you press the enter key. **Note**: If you're using the rest of these test programs, you're probably using the Backend Client below. The default profiles that command sends to the backend look for many more than one player, so if you want to see meaningful results from running this Frontend Client, you'll need to generate a bunch of fake players with the client load simulation tool at the same time. Otherwise, expect to wait until it times out, as your matchmaker never has enough players to make a successful match.
* `examples/backendclient` is a fake client for the Backend API. It pretends to be a dedicated game server backend connecting to Open Match and sending in a match profile to fill. Once it receives a match object with a roster, it also issues a call to assign the player IDs and prints an example connection string. If it never seems to get a match, make sure you're adding players to the pool using the other two tools. Note: building this image requires that you first build the 'base' dev image (look for `cloudbuild_base.yaml` and `Dockerfile.base` in the root directory) and then update the first step to point to that image in your registry. This will be simplified in a future release. **Note**: If you run this by itself, expect it to wait about 30 seconds, then return a result of 'insufficient players' and exit; this is working as intended. Use the client load simulation tool below to add players to the pool, or you'll never be able to make a successful match.
* `test/cmd/client` is a (VERY) basic client load simulation tool. It does **not** test the Frontend API - in fact, it ignores it and writes players directly to state storage on its own. It doesn't do anything but loop endlessly, writing players into state storage so you can test your backend integration, and run your custom MMFs and Evaluators (which are only triggered when there are players in the pool).
By default you will be talking to the frontend server, but you can change the target API URL to any of the following:

* api/frontend.swagger.json
* api/backend.swagger.json
* api/synchronizer.swagger.json
* api/mmlogic.swagger.json

For a more current list, refer to the api/ directory of this repository. Note that matchfunction.swagger.json is not supported.

### Resources

* [Prometheus Operator spec](https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md)
## IDE Support
Open Match is a standard Go project, so any IDE that understands Go should
work. We use [Go Modules](https://github.com/golang/go/wiki/Modules), which is a
relatively new feature in Go, so make sure the IDE you are using was built
around Summer 2019 or later. The latest version of
[Visual Studio Code](https://code.visualstudio.com/download) supports it.
If your IDE is too old you can create a
[Go workspace](https://golang.org/doc/code.html#Workspaces).
```bash
# Create the Go workspace in $HOME/workspace/ directory.
mkdir -p $HOME/workspace/src/open-match.dev/
cd $HOME/workspace/src/open-match.dev/
# Download the source code.
git clone https://github.com/googleforgames/open-match.git
cd open-match
export GOPATH=$HOME/workspace/
```
## Pull Requests
If you want to submit a Pull Request, there are some tools to help prepare your
change.
```bash
# Runs code generators, tests, and linters.
make presubmit
```
`make presubmit` catches most of the issues your change can run into. If the
submit checks fail, you can run them locally via:
```bash
make local-cloud-build
```
Our [continuous integration](https://console.cloud.google.com/cloud-build/builds?project=open-match-build)
runs against all PRs. In order to see your build results you'll need to
become a member of
[open-match-discuss@googlegroups.com](https://groups.google.com/forum/#!forum/open-match-discuss).
## Makefile
The Makefile is the core of Open Match's build process. There are a lot of
commands, but here is a list of the important ones and patterns for remembering them.
```bash
# Help
make
# Reset workspace (delete all build artifacts)
make clean
# Delete auto-generated protobuf code and swagger API docs.
make clean-protos clean-swagger-docs
# make clean-* deletes some part of the build outputs.
# Build all Docker images
make build-images
# Build frontend docker image.
make build-frontend-image
# Formats, Vets, and tests the codebase.
make fmt vet test
# Same as above also regenerates autogen files.
make presubmit
# Run website on http://localhost:8080
make run-site
# Proxy all Open Match processes to view them.
make proxy
```

# Create a GKE Cluster
Below are the steps to create a GKE cluster in Google Cloud Platform.
* Create a GCP project via [Google Cloud Console](https://console.cloud.google.com/).
* Billing must be enabled. If you're a new customer you can get some [free credits](https://cloud.google.com/free/).
* When you create a project you'll need to set a Project ID; if you forget it, you can find it at https://console.cloud.google.com/iam-admin/settings/project.
* Install [Google Cloud SDK](https://cloud.google.com/sdk/) which is the command line tool to work against your project.
Here are the next steps using the gcloud tool.
```bash
# Login to your Google Account for GCP
gcloud auth login
gcloud config set project $YOUR_GCP_PROJECT_ID
# Enable necessary GCP services
gcloud services enable containerregistry.googleapis.com
gcloud services enable container.googleapis.com
# Test that everything is good, this command should work.
gcloud compute zones list
# Create a GKE Cluster in this project
gcloud container clusters create --machine-type n1-standard-2 open-match-dev-cluster --zone us-west1-a --tags open-match
```

# v{version}
This is the {version} release of Open Match.
Check the [README](https://github.com/googleforgames/open-match/tree/release-{version}) for details on features, installation and usage.
Release Notes
-------------
{ insert enhancements from the changelog and/or security and breaking changes }
**Breaking Changes**
* API Changed #PR
**Enhancements**
* New Harness #PR
**Security Fixes**
* Reduced privileges required for MMF. #PR
See [CHANGELOG](https://github.com/googleforgames/open-match/blob/release-{version}/CHANGELOG.md) for more details on changes.
Images
------
```bash
# Servers
docker pull gcr.io/open-match-public-images/openmatch-backendapi:{version}
docker pull gcr.io/open-match-public-images/openmatch-frontendapi:{version}
docker pull gcr.io/open-match-public-images/openmatch-mmforc:{version}
docker pull gcr.io/open-match-public-images/openmatch-mmlogicapi:{version}
# Evaluators
docker pull gcr.io/open-match-public-images/openmatch-evaluator-serving:{version}
# Sample Match Making Functions
docker pull gcr.io/open-match-public-images/openmatch-mmf-go-simple:{version}
# Test Clients
docker pull gcr.io/open-match-public-images/openmatch-backendclient:{version}
docker pull gcr.io/open-match-public-images/openmatch-clientloadgen:{version}
docker pull gcr.io/open-match-public-images/openmatch-frontendclient:{version}
```
_This software is currently alpha, and subject to change. Not to be used in production systems._
Installation
------------
To deploy Open Match in your Kubernetes cluster run the following commands:
```bash
# Grant yourself cluster-admin permissions so that you can deploy service accounts.
kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$YOUR_KUBERNETES_USER_NAME
# Place all Open Match components in their own namespace.
kubectl create namespace open-match
# Install Open Match and monitoring services.
kubectl apply -f https://github.com/googleforgames/open-match/releases/download/v{version}/install.yaml --namespace open-match
# Install the demo.
kubectl apply -f https://github.com/googleforgames/open-match/releases/download/v{version}/install-demo.yaml --namespace open-match
```

#!/bin/bash
# Usage:
# ./release.sh 0.5.0-82d034f unstable
# ./release.sh [SOURCE VERSION] [DEST VERSION]
# This is a basic shell script to publish the latest Open Match images.
# There are no guardrails yet, so use with care.
# Purge Images
# docker rmi $(docker images -a -q)
# 0.4.0-82d034f
SOURCE_VERSION=$1
DEST_VERSION=$2
SOURCE_PROJECT_ID=open-match-build
DEST_PROJECT_ID=open-match-public-images
IMAGE_NAMES="openmatch-backendapi openmatch-frontendapi openmatch-mmforc openmatch-mmlogicapi openmatch-evaluator-serving openmatch-mmf-go-simple openmatch-backendclient openmatch-clientloadgen openmatch-frontendclient"
for name in $IMAGE_NAMES
do
  source_image=gcr.io/$SOURCE_PROJECT_ID/$name:$SOURCE_VERSION
  dest_image=gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION
  docker pull $source_image
  docker tag $source_image $dest_image
  docker push $dest_image
done
echo "=============================================================="
echo "=============================================================="
echo "=============================================================="
echo "=============================================================="
echo "Add these lines to your release notes:"
for name in $IMAGE_NAMES
do
  echo "docker pull gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION"
done
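Since the script has no guardrails, a dry-run pass can help verify the image names before anything is pulled or pushed. This is a hypothetical sketch, not part of the repository; it only prints the docker commands the script would run, using a shortened image list for illustration:

```shell
# Dry run: print the retag/push commands without touching any registry.
SOURCE_VERSION=0.5.0-82d034f
DEST_VERSION=unstable
SOURCE_PROJECT_ID=open-match-build
DEST_PROJECT_ID=open-match-public-images
for name in openmatch-backendapi openmatch-frontendapi
do
  src=gcr.io/$SOURCE_PROJECT_ID/$name:$SOURCE_VERSION
  dst=gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION
  echo "docker pull $src"
  echo "docker tag $src $dst"
  echo "docker push $dst"
done
```

Once the printed commands look right, the real script can be run with the same SOURCE/DEST versions.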
