Compare commits


158 Commits

Author SHA1 Message Date
be55bfd1e8 Increase version to 0.5.0-rc1 (#268)
* Increase version to 0.5.0-rc1

* Increment version to 0.5.0-rc1 in cloudbuild.yaml
2019-04-22 18:09:36 -07:00
8389a62cf1 Release Process for Open Match (#235) 2019-04-22 14:12:02 -07:00
af8895e629 Remove knative link because it fails in tests. (#265) 2019-04-22 11:47:51 -07:00
2a3241307f Properly set tag and repository when making install/yaml/ (#258) 2019-04-22 11:13:03 -07:00
f777a4f407 Publish all install/yaml/*.yaml files. (#264)
* Publish all install/yaml/*.yaml files.

* Update instructions and add publish post commit.

* Add yaml/
2019-04-22 10:02:49 -07:00
88ca8d7b7c DOcumentation (#259) 2019-04-21 17:23:54 -07:00
3a09ce142a Fix namespace issues in example yaml (#257) 2019-04-19 17:03:56 -07:00
8d8fdf0494 Add vanity url redirection support. (#239) 2019-04-19 16:33:00 -07:00
45b0a7c38e Remove deprecated examples, evaluator and mmforc (#249) 2019-04-19 15:34:21 -07:00
4cbee9d8a7 Remove deprecated artifacts from build pipeline (#255) 2019-04-19 14:46:49 -07:00
55afac2c93 Embed profile config in the container to be used for standalone executions. (#254)
Embed profile config in the container to be used for standalone executions. Will create a separate issue to figure out a better way to do this.
2019-04-19 14:07:41 -07:00
8077dbcdba Changes to make the demo steps easier (#253) 2019-04-19 11:30:30 -07:00
f1fc02755b Update theme and logo for Open Match website (#240) 2019-04-19 11:01:31 -07:00
0cce1745bc Changes to Backend API and Backend Client to support GRPC Function Ha… (#246)
* Changes to Backend API and Backend Client to support GRPC Function Harness
2019-04-19 10:41:15 -07:00
d57b5f1872 Helm chart changes to not install mmforc and deploy function Service (#227) (#248)
* Helm chart changes to not install mmforc and deploy function Service
2019-04-19 10:17:06 -07:00
1355e5c79e Fix lint issues in helm chart and improve lint coverage. (#252) 2019-04-19 09:49:42 -07:00
4809f2801f Add Open Match Logo (#251) 2019-04-19 08:28:13 -07:00
68d323f3ea 2nd pass of lint errors. (#247) 2019-04-19 05:42:57 -07:00
b99160e356 Fix grpc harness startup panic due to http proxy not being set up (#244) 2019-04-18 20:02:04 -07:00
98d4c31c61 Fix most of the lint errors from golangci. (#243) 2019-04-18 18:15:46 -07:00
b4beb68920 Reduce log spam in backendapi (#234) 2019-04-18 15:39:41 -07:00
b41a704886 Bump versions of dependencies (#241) 2019-04-18 14:12:05 -07:00
88a692cdf3 Evaluator Harness and Sample golang Evaluator (#238)
* Evaluator Harness and sample serving Evaluator
2019-04-18 12:35:37 -07:00
623519bbb4 Core Logic for the GRPC Harness (#125) (#237)
Core Logic for the MatchFunction GRPC Harness
2019-04-18 12:16:38 -07:00
655abfbb26 Example MMF demonstrating the use of the GRPC harness (#236) 2019-04-18 10:10:06 -07:00
ac81b74fad Add Kaniko build cache (#230)
* Add Kaniko build cache - partly resolves #231
2019-04-18 00:30:02 -07:00
ba62520d9c Prevent sudo on Makefile for commands that require auth. (#225) 2019-04-17 20:58:16 -07:00
0205186e6f Remove install/yaml/ it will be moved to release artifacts. (#232)
* Remove install/yaml/ it will be moved to release artifacts.

* Add the ignore files.

* Create install/yaml/ directory for output targets.
2019-04-17 17:50:39 -07:00
ef2b1ea0a8 Implement REST proxy initializations and modified tests accordingly (#210)
This commit resolves #196 and generates swagger.json files for API visualization
2019-04-17 17:28:36 -07:00
1fe2bd4900 Add 'make presubmit' to keep generated files up to date. (#223) 2019-04-17 17:04:05 -07:00
5333ef2092 Enable cloudbuild dev site to fix local cloud build error (#219) 2019-04-17 16:17:01 -07:00
09b727b555 Remove the deprecated deployment mechanism for openmatch components (#224) 2019-04-17 15:45:38 -07:00
c542d6d1c3 Serving GRPC Harness and example MMF scaffolding (#112) (#216)
* Serving GRPC Harness and example MMF scaffolding

* Serving GRPC Harness and example MMF scaffolding

* Update logger field to add function name

* Update harness to use the TCP listener
2019-04-17 14:57:01 -07:00
8f3f7625ec Increases paralellism of the build (#203) 2019-04-17 13:07:39 -07:00
6a4f309bd5 Remove temp files. (#220) 2019-04-17 12:41:38 -07:00
26f5426b61 Disable logrus.SetReportCaller() (#222) 2019-04-17 12:26:43 -07:00
f464b0bd7b Fix port allocation race condition during tests. (#215) 2019-04-17 11:54:56 -07:00
092b7e634c Move GOPROXY=off to CI only. (#209) 2019-04-17 11:01:37 -07:00
454a3d6cca Bump required Go version because of a dependency. (#207) 2019-04-15 20:24:09 -07:00
50e3ede4b9 Remove use of GOPATH from Makefile (#208) 2019-04-15 16:19:31 -07:00
6c36145e9b Mini Match (#152) 2019-04-12 16:16:42 -07:00
47644004db Add link tests for website and removed broken links. (#202) 2019-04-12 15:26:32 -07:00
1dec4d7555 Unify gRPC server initialization (#198) 2019-04-12 12:47:27 -07:00
1c6f43a95f Add a link to the build queue. (#199) 2019-04-12 11:38:50 -07:00
0cea4ed713 Add temporary redirect site for Open Match (#200) 2019-04-12 11:24:23 -07:00
db912b2d68 Add reduced permissions for mmforc service account. (#197) 2019-04-12 10:25:19 -07:00
726b1d4063 CI with Makefile (#188) 2019-04-12 07:51:10 -07:00
468aef3835 Ignore files for Mini Match. (#194) 2019-04-11 15:16:26 -07:00
c6e257ae76 Unified gRPC server initialization (#195)
* Unified gRPC server initialization

* Fix closure and review feedback
2019-04-11 15:06:07 -07:00
8e071020fa Kubernetes YAML configs for Open Match. (#190) 2019-04-11 14:28:27 -07:00
c032e8f382 Detect sudo invocations to Makefile #164 (#187) 2019-04-11 14:09:52 -07:00
2af432c3d7 Fix build artifacts
Fix build artifacts issue #180
2019-04-11 13:23:44 -07:00
4ddceb62ee fixed bugs in py3 mmf (#193)
fix py3 mmf image
2019-04-11 06:32:59 -07:00
ddb4521444 Add license preamble to proto and dockerfiles. (#186) 2019-04-10 20:24:31 -07:00
86918e69eb Replace CURDIR with REPOSITORY ROOT #156 2019-04-10 16:32:01 -07:00
2d6db7e546 Remove manual stats that ocgrpc interceptor already records. 2019-04-10 16:21:42 -07:00
fc52ef6428 REST Implementation 2019-04-10 15:34:49 -07:00
1bfb30be6f Fix redis connection bugs and segfault in backendclient. (#178) 2019-04-10 13:27:41 -07:00
9ee341baf2 Move configs from backendclient image to ConfigMap. (#175) 2019-04-10 12:59:12 -07:00
7869e6eb81 Add opencensus metrics for Redis 2019-04-10 12:36:35 -07:00
7edca56f56 Disable php-proto building since it's missing gRPC client 2019-04-10 10:06:42 -07:00
eaedaa0265 Split up README.md and add project logo. 2019-04-10 08:26:21 -07:00
9cc8312ff5 Rename Function to MatchFunction and modify related protos (#159) 2019-04-10 08:15:40 -07:00
2f0a1ad05b updating app.yaml 2019-04-09 20:47:33 -07:00
2ff77ac90b Fix 'make create-gke-cluster' (#154)
It is missing a dash on one of the arguments, which breaks things.
2019-04-09 15:59:16 -07:00
2a3cfea505 Add base package file for godoc index and go get. 2019-04-09 14:16:54 -07:00
b8326c2a91 Fix build dependencies to build/site/ 2019-04-09 14:05:03 -07:00
ccc9d87692 Disable the PHP example during the CI build. 2019-04-09 12:01:34 -07:00
bba49f3ec4 Simplify the go package path for proto definitions 2019-04-09 11:41:29 -07:00
632157806f Remove symlinks to config files because they are mounted via ConfigMaps. 2019-04-09 11:11:36 -07:00
6e039cb797 Delete images and scripts obsoleted by Makefile. 2019-04-09 10:40:53 -07:00
8db062f7b9 Use Request/Response protos in gRPC servers. 2019-04-03 21:11:42 -07:00
f379a5eb46 Disable 'Lint: Kubernetes Configs'
It is currently failing.
2019-04-03 18:28:24 -07:00
f3160cfc5c generate install.yaml with Helm
fixed helm templates

changes in helm templates

adding redis auth to the helm chart

helm templates changes

makefile: gen-install

make set-redis-password

make gen-install

fixing indentation in Makefile

remove old redis installation

use public images in install/yaml/

remove helm chart meta from static install yaml files

fixing cloudbuild

remove helm chart meta from static install yaml files

workaround for broken om-configmap data formatting

make gen-prometheus-install

drop namespace in OM resources definitions

override default matchmaker_config at Helm chart installation

fixed Makefile after rebase

matchmaker config: use latest public images

1) install Redis in same namespace with Open-match;2) Making namespace and Helm release names consistent in all places
2019-04-03 13:40:13 -07:00
442a1ff013 Update dependencies and resolve issue #149 2019-04-02 20:21:14 -07:00
0fb75ab35e Delete old cloudbuild.yaml files, obsoleted by PR #98 2019-04-02 11:23:14 -07:00
6308b218cc Minimize dependency on Viper and make config read-only. 2019-04-02 07:46:18 -07:00
624ba5c018 [charts/open-match] fix mmlogicapi service selector 2019-04-01 18:10:15 -07:00
82d034f8e4 Fix dependency issues in the build. 2019-04-01 11:05:57 -07:00
97eed146da update protoc version to 3.7.1
This fixes the bug outlined here https://github.com/protocolbuffers/protobuf/issues/5875
2019-04-01 09:49:19 -07:00
6dd23ff6ad Merge pull request #135 from jeremyje/master
Merge 040wip into master.
2019-03-29 14:29:22 -07:00
03c7db7680 Merge 040wip 2019-03-28 11:12:07 -07:00
e5538401f6 Update protobuf definitions 2019-03-26 17:45:52 -07:00
eaa811f9ac Add example helm chart, replace example dashboard. 2019-03-26 17:45:28 -07:00
3b1c6b9141 Merge 2019-03-26 15:26:17 -07:00
34f9eb9bd3 Building again 2019-03-26 12:31:19 -07:00
3ad7f75fb4 Attempt to fix the build 2019-03-26 12:31:19 -07:00
78bd48118d Tweaks 2019-03-26 12:31:19 -07:00
3e71894111 Merge 2019-03-26 12:31:19 -07:00
36decb4068 Merge 2019-03-26 12:31:19 -07:00
f79b782a3a Go Modules 2019-03-26 11:14:48 -07:00
db186e55ff Move Dockfiles to build C#, Golang, PHP, and Python3 MMFs. 2019-03-26 09:54:10 -07:00
957465ce51 Remove dead code that was moved to internal/app/mmlogicapi/apisrv/ 2019-03-25 16:14:25 -07:00
478eb61589 Delete unnecessary copy of protos in frontendclient. 2019-03-25 16:13:56 -07:00
6d2a5b743b Remote executable bit from files that are not executable. 2019-03-13 09:31:24 -07:00
9c943d5a10 Fix comment 2019-03-12 22:04:42 -07:00
8293d44ee0 Fix typos in comments, set and playerindices 2019-03-12 22:04:42 -07:00
a3bd862e76 store optional Redis password inside the Secret 2019-03-12 21:52:59 -07:00
c424d5eac9 Update .gcloudignore to include .gitignore's filters so that Cloud Build packages don't upload binaries. 2019-03-11 16:29:50 +09:00
2e6f5173e0 Add Prometheus service discovery annotations to the Open Match servers. 2019-03-11 16:25:21 +09:00
ee4bba44ec Makefile for simpler development 2019-03-11 16:14:00 +09:00
8e923a4328 Use grpc error codes for responses. 2019-03-11 16:13:06 +09:00
52efa04ee6 Add RPC dashboard and instructions to add more dashboards. 2019-03-07 10:58:53 -08:00
67d4965648 Helm charts for open-match, prometheus, and grafana 2019-03-06 17:09:09 -08:00
7a7b1cb305 Open Match CI support via Cloud Build 2019-03-04 09:41:19 -08:00
377a9621ff Improve error handling of Redis open connection failures. 2019-02-27 19:35:23 -08:00
432dd5a504 Consolidate Ctrl+Break handling into it's own go package. 2019-02-27 17:52:58 +01:00
7446f5b1eb Move out Ctrl+Break wait signal to it's own package. 2019-02-27 17:52:58 +01:00
15ea999628 Remove init() methods from OM servers since they aren't needed. 2019-02-27 08:58:39 +01:00
b5367ea3aa Add config/ in the search path for configuration so that PWD/config can be used as a ConfigMap mount path. 2019-02-25 16:49:35 -08:00
e022c02cb6 golang mmf serving harness 2019-02-25 04:54:02 -05:00
a13455d5b0 Move application logic from cmd/ to internal/app/ 2019-02-24 13:56:48 +01:00
16741409e7 Cleaner builds using svn for github 2019-02-19 09:24:50 -05:00
d7e8f8b3fa Testing 2019-02-19 07:30:26 -05:00
8c97c8f141 Testing2 2019-02-19 07:26:11 -05:00
6a8755a13d Testing 2019-02-19 07:24:10 -05:00
4ed6d275a3 remove player from ignorelists on frontend.DeletePlayer call 2019-02-19 20:01:29 +09:00
cb49eb8646 Merge remote-tracking branch 'origin/calebatwd/knative-rest-mmf' into 040wip 2019-02-16 04:01:01 -05:00
a7458dabf2 Fix test/example paths 2019-02-14 10:56:33 +09:00
5856b7d873 Merge branch '040wip' of https://github.com/GoogleCloudPlatform/open-match into 040wip 2019-02-11 01:23:06 -05:00
7733824c21 Remove matchmaking config file from base image 2019-02-11 01:22:23 -05:00
f1d261044b Add function port to config 2019-02-11 01:21:28 -05:00
95820431ab Update dev instuctions 2019-02-11 01:20:55 -05:00
0002ecbdb2 Review feedback. 2019-02-09 15:28:48 +09:00
2eb51b5270 Fix build and test breakages 2019-02-09 15:28:48 +09:00
1847f79571 Convert JSON k8s deployment configs to YAML. 2019-02-09 15:17:22 +09:00
58ff12f3f8 Add stackdriver format support via TV4/logrus-stackdriver-formatter. Simply set format in config to stackdriver 2019-02-09 15:14:00 +09:00
b0b7b4bd15 Update git ignore to ignore goland ide files 2019-02-09 15:09:00 +09:00
f3f1f36099 Comment type 2019-02-08 14:21:36 -08:00
f8cfb1b90f Add rest call support to job scheduling. This is a prototype implementation to support knative experimentation. 2019-02-08 14:20:29 -08:00
393e1d6de2 added configurable backoff to MatchObject and Player watchers 2019-02-08 16:19:52 +09:00
a11556433b Merge branch 'master' into 040wip 2019-02-08 01:48:54 -05:00
3ee9c05db7 Merge upstream changes 2019-02-08 01:47:43 -05:00
de7ba2db6b added demo attr to player indices 2019-02-03 20:17:13 -08:00
8393454158 fixes for configmap 2019-02-03 20:17:13 -08:00
6b93ac7663 configmap for matchmaker config 2019-02-03 20:17:13 -08:00
fe2410e9d8 PHP MMF: move cfg values to env vars 2019-02-03 20:17:13 -08:00
d8ecf1c439 doc update 2019-02-03 20:17:13 -08:00
8577f6bd4d Move cfg values to env vars for MMFs 2019-02-03 20:17:13 -08:00
470be06d16 fixed set.Difference() 2019-01-29 22:38:18 -08:00
c6e4dae79b fix google cloud knative url 2019-01-25 11:38:46 -08:00
23f83eddd1 mmlogic GetPlayerPool bugfix 2019-01-23 19:57:36 -05:00
dd794fd004 py3 mmf empty pools bugfix 2019-01-23 19:57:16 -05:00
f234433e33 write to error if all pools are empty in py3 mmf 2019-01-23 19:57:16 -05:00
d52773543d check for empty pools in py3 mmf 2019-01-23 19:57:16 -05:00
bd4ab0b530 mmlogic GetPlayerPool bugfix 2019-01-23 14:18:00 +03:00
6b9cd11be3 fix py3 mmf 2019-01-16 18:01:10 +03:00
1443bd1e80 PHP MMF: move cfg values to env vars 2019-01-16 13:41:44 +03:00
3fd8081dc5 doc update 2019-01-15 11:58:42 -05:00
dda949a6a4 Move cfg values to env vars for MMFs 2019-01-15 11:25:02 -05:00
128f0a2941 Merge branch 'master' of https://github.com/GoogleCloudPlatform/open-match 2019-01-15 09:42:01 -05:00
5f8a57398a Fix cloud build issue caused by 5f827b5c7c81c79ef9341cbebb51880f74b78a35 2019-01-15 09:41:38 -05:00
327d64611b This time with working hyperlink 2019-01-14 23:44:10 +09:00
5b4cdce610 Bump version number 2019-01-14 23:43:11 +09:00
56e08e82d4 Revert accidental file type change 2019-01-14 09:32:13 -05:00
3fcedbf13b Remove enum status states. No justification yet. 2018-11-26 17:42:08 -08:00
274edaae2e Grpc code for calling functions in mmforc 2018-11-26 17:40:25 -08:00
8ed865d300 Initial function messages plus protoc regen 2018-11-26 17:05:42 -08:00
8450 changed files with 350361 additions and 9851 deletions

.dockerignore Normal file (96 lines)

@@ -0,0 +1,96 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, build with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# vim swap files
*swp
*swo
*~
# Ping data files
*.ping
*.pings
*.population*
*.percent
*.cities
populations
# Discarded code snippets
build.sh
*-fast.yaml
detritus/
# Dotnet Core ignores
*.swp
*.*~
project.lock.json
.DS_Store
*.pyc
nupkg/
# Visual Studio Code
.vscode
# User-specific files
*.suo
*.user
*.userosscache
*.sln.docstates
# Build results
[Dd]ebug/
[Dd]ebugPublic/
[Rr]elease/
[Rr]eleases/
x64/
x86/
build/
bld/
[Bb]in/
[Oo]bj/
[Oo]ut/
msbuild.log
msbuild.err
msbuild.wrn
# Visual Studio 2015
.vs/
# Goland
.idea/
# Nodejs files placed when building Hugo, ok to allow if we actually start using Nodejs.
package.json
package-lock.json
site/resources/_gen/
# Node Modules
node_modules/
# Install YAML files, Helm is the source of truth for configuration.
install/yaml/
# Temp Directories
tmp/
# Compiled Binaries
cmd/minimatch/minimatch
cmd/backendapi/backendapi
cmd/frontendapi/frontendapi
cmd/mmlogicapi/mmlogicapi
examples/backendclient/backendclient
examples/evaluators/golang/serving/serving
examples/functions/golang/grpc-serving/grpc-serving
test/cmd/clientloadgen/clientloadgen
test/cmd/frontendclient/frontendclient
build/

.gcloudignore Normal file (15 lines)

@@ -0,0 +1,15 @@
# This file specifies files that are *not* uploaded to Google Cloud Platform
# using gcloud. It follows the same syntax as .gitignore, with the addition of
# "#!include" directives (which insert the entries of the given .gitignore-style
# file at that point).
#
# For more information, run:
# $ gcloud topic gcloudignore
#
.gcloudignore
# If you would like to upload your .git directory, .gitignore file or files
# from your .gitignore file, remove the corresponding line
# below:
.git
.gitignore
#!include:.gitignore
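The `#!include` directive above splices another ignore file's entries in at that point. A minimal shell sketch of that expansion (the helper name and temp-file paths are made up for illustration; the real logic lives inside gcloud):

```shell
# Hypothetical sketch of how a "#!include:<file>" directive expands:
# lines from the referenced file are spliced in at that point.
expand_gcloudignore() {
  while IFS= read -r line; do
    case "$line" in
      "#!include:"*) cat "${line#"#!include:"}" ;;  # splice the referenced file
      *) printf '%s\n' "$line" ;;
    esac
  done < "$1"
}

# Demo with throwaway files
printf '*.log\nbuild/\n' > /tmp/demo.gitignore
printf '.git\n#!include:/tmp/demo.gitignore\n' > /tmp/demo.gcloudignore
expand_gcloudignore /tmp/demo.gcloudignore
```

For the demo input this prints `.git`, then the two spliced `.gitignore` entries.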

.gitignore vendored (29 lines)

@@ -64,3 +64,32 @@ msbuild.wrn
# Visual Studio 2015
.vs/
# Goland
.idea/
# Nodejs files placed when building Hugo, ok to allow if we actually start using Nodejs.
package.json
package-lock.json
site/resources/_gen/
# Node Modules
node_modules/
# Install YAML files
install/yaml/
# Temp Directories
tmp/
# Compiled Binaries
cmd/minimatch/minimatch
cmd/backendapi/backendapi
cmd/frontendapi/frontendapi
cmd/mmlogicapi/mmlogicapi
examples/backendclient/backendclient
examples/evaluators/golang/serving/serving
examples/functions/golang/grpc-serving/grpc-serving
test/cmd/clientloadgen/clientloadgen
test/cmd/frontendclient/frontendclient


@@ -1,5 +1,12 @@
# Release history
## v0.4.0 (alpha)
### Release notes
- Thanks to the completion of Issues [#42](issues/42) and [#45](issues/45), there is no longer a need to use the `openmatch-base` image when building components of Open Match. Each standalone application is now self-contained in its `Dockerfile` and `cloudbuild.yaml` files, and builds have been substantially simplified. **Note**: The default `Dockerfile` and `cloudbuild.yaml` now tag their images with the version number, not `dev`, and the YAML files in the `install` directory now reflect this.
- This paves the way for CI/CD in an upcoming version.
- This paves the way for public images in an upcoming version!
## v0.3.0 (alpha)
This update is focused on the Frontend API and Player Records, including more robust code for indexing, deindexing, reading, writing, and expiring player requests from Open Match state storage. All Frontend API function arguments have changed, although many only slightly. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!


@ -1,7 +0,0 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY config config
COPY internal internal
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/internal
RUN go get -d -v ...

Dockerfile.base-build Normal file (21 lines)

@@ -0,0 +1,21 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM golang:latest
ENV GO111MODULE=on
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY . .
RUN go mod download
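This base image copies the source and pre-downloads Go modules, so per-component builds can reuse the cached dependency layer. A hypothetical sketch of a component Dockerfile building on it (the `cmd/minimatch` path and entrypoint are illustrative, taken from the binary names in the `.gitignore` above; the build flags mirror the Makefile's `GO_BUILD_COMMAND` — this is not the repository's actual `cmd/minimatch/Dockerfile`):

```dockerfile
# Hypothetical sketch: build one component on top of the shared base image.
FROM open-match-base-build
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/minimatch
# Mirrors the Makefile's GO_BUILD_COMMAND; modules were already fetched in the base layer.
RUN CGO_ENABLED=0 GO111MODULE=on go build -a -installsuffix cgo .
ENTRYPOINT ["./minimatch"]
```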

Dockerfile.ci Normal file (55 lines)

@@ -0,0 +1,55 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM debian
RUN apt-get update
RUN apt-get install -y -qq git make python3 virtualenv curl sudo unzip apt-transport-https ca-certificates curl software-properties-common gnupg2
# Docker
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
RUN sudo apt-key fingerprint 0EBFCD88
RUN sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
stretch \
stable"
RUN sudo apt-get update
RUN sudo apt-get install -y -qq docker-ce docker-ce-cli containerd.io
# Cloud SDK
RUN export CLOUD_SDK_REPO="cloud-sdk-stretch" && \
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update -y && apt-get install google-cloud-sdk -y -qq
# Install Golang
# https://github.com/docker-library/golang/blob/fd272b2b72db82a0bd516ce3d09bba624651516c/1.12/stretch/Dockerfile
RUN mkdir -p /toolchain/golang
WORKDIR /toolchain/golang
RUN sudo rm -rf /usr/local/go/
RUN curl -L https://storage.googleapis.com/golang/go1.12.1.linux-amd64.tar.gz | sudo tar -C /usr/local -xz
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN sudo mkdir -p "$GOPATH/src" "$GOPATH/bin" \
&& sudo chmod -R 777 "$GOPATH"
# Prepare toolchain and workspace
RUN mkdir -p /toolchain
RUN mkdir -p /workspace
WORKDIR /workspace
ENV ALLOW_BUILD_WITH_SUDO=1


@ -1,21 +0,0 @@
FROM php:7.2-cli
RUN apt-get update && apt-get install -y -q zip unzip zlib1g-dev && apt-get clean
RUN cd /usr/local/bin && curl -sS https://getcomposer.org/installer | php
RUN cd /usr/local/bin && mv composer.phar composer
RUN pecl install grpc
RUN echo "extension=grpc.so" > /usr/local/etc/php/conf.d/30-grpc.ini
RUN pecl install protobuf
RUN echo "extension=protobuf.so" > /usr/local/etc/php/conf.d/30-protobuf.ini
WORKDIR /usr/src/open-match
COPY examples/functions/php/mmlogic-simple examples/functions/php/mmlogic-simple
COPY config config
WORKDIR /usr/src/open-match/examples/functions/php/mmlogic-simple
RUN composer install
CMD [ "php", "./harness.php" ]


@ -1,9 +0,0 @@
# Golang application builder steps
FROM python:3.5.3 as builder
WORKDIR /usr/src/open-match
COPY examples/functions/python3/mmlogic-simple examples/functions/python3/mmlogic-simple
COPY config config
WORKDIR /usr/src/open-match/examples/functions/python3/mmlogic-simple
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "./harness.py"]

Makefile Normal file (728 lines)

@@ -0,0 +1,728 @@
################################################################################
# Open Match Makefile #
################################################################################
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## NOTICE: There's 2 variables you need to make sure are set.
## GCP_PROJECT_ID if you're working against GCP.
## Or $REGISTRY if you want to use your own custom docker registry.
##
## Basic Deployment
## make create-gke-cluster OR make create-mini-cluster
## make push-helm
## make REGISTRY=gcr.io/$PROJECT_ID push-images -j$(nproc)
## make install-chart
## Generate Files
## make all-protos
##
## Building
## make all -j$(nproc)
##
## Access monitoring
## make proxy-prometheus
## make proxy-grafana
##
## Run those tools
## make run-backendclient
## make run-frontendclient
## make run-clientloadgen
##
## Teardown
## make delete-mini-cluster
## make delete-gke-cluster
##
# http://makefiletutorial.com/
BASE_VERSION = 0.5.0-rc1
VERSION_SUFFIX = $(shell git rev-parse --short=7 HEAD | tr -d [:punct:])
BRANCH_NAME = $(shell git rev-parse --abbrev-ref HEAD | tr -d [:punct:])
VERSION = $(BASE_VERSION)-$(VERSION_SUFFIX)
PROTOC_VERSION = 3.7.1
HELM_VERSION = 2.13.1
HUGO_VERSION = 0.55.2
KUBECTL_VERSION = 1.14.1
NODEJS_VERSION = 10.15.3
SKAFFOLD_VERSION = latest
MINIKUBE_VERSION = latest
HTMLTEST_VERSION = 0.10.1
GOLANGCI_VERSION = 1.16.0
PROTOC_RELEASE_BASE = https://github.com/protocolbuffers/protobuf/releases/download/v$(PROTOC_VERSION)/protoc-$(PROTOC_VERSION)
GO = GO111MODULE=on go
# Defines the absolute local directory of the open-match project
REPOSITORY_ROOT := $(realpath $(dir $(abspath $(MAKEFILE_LIST))))
GO_BUILD_COMMAND = CGO_ENABLED=0 $(GO) build -a -installsuffix cgo .
BUILD_DIR = $(REPOSITORY_ROOT)/build
TOOLCHAIN_DIR = $(BUILD_DIR)/toolchain
TOOLCHAIN_BIN = $(TOOLCHAIN_DIR)/bin
PROTOC := $(TOOLCHAIN_BIN)/protoc
PROTOC_INCLUDES := $(TOOLCHAIN_DIR)/include/
GCP_PROJECT_ID ?=
GCP_PROJECT_FLAG = --project=$(GCP_PROJECT_ID)
OPEN_MATCH_PUBLIC_IMAGES_PROJECT_ID = open-match-public-images
OM_SITE_GCP_PROJECT_ID = open-match-site
OM_SITE_GCP_PROJECT_FLAG = --project=$(OM_SITE_GCP_PROJECT_ID)
REGISTRY ?= gcr.io/$(GCP_PROJECT_ID)
TAG := $(VERSION)
ALTERNATE_TAG := dev
GKE_CLUSTER_NAME = om-cluster
GCP_REGION = us-west1
GCP_ZONE = us-west1-a
EXE_EXTENSION =
LOCAL_CLOUD_BUILD_PUSH = # --push
KUBECTL_RUN_ENV = --env='REDIS_SERVICE_HOST=$$(OM_REDIS_MASTER_SERVICE_HOST)' --env='REDIS_SERVICE_PORT=$$(OM_REDIS_MASTER_SERVICE_PORT)'
GCP_LOCATION_FLAG = --zone $(GCP_ZONE)
# Flags to simulate behavior of newer versions of Kubernetes
KUBERNETES_COMPAT = --no-enable-basic-auth --no-issue-client-certificate --enable-ip-alias --metadata disable-legacy-endpoints=true --enable-autoupgrade
GO111MODULE = on
PROMETHEUS_PORT = 9090
GRAFANA_PORT = 3000
SITE_PORT = 8080
HELM = $(TOOLCHAIN_BIN)/helm
TILLER = $(TOOLCHAIN_BIN)/tiller
MINIKUBE = $(TOOLCHAIN_BIN)/minikube
KUBECTL = $(TOOLCHAIN_BIN)/kubectl
HTMLTEST = $(TOOLCHAIN_BIN)/htmltest
SERVICE = default
OPEN_MATCH_CHART_NAME = open-match
OPEN_MATCH_KUBERNETES_NAMESPACE = open-match
OPEN_MATCH_EXAMPLE_CHART_NAME = open-match-example
OPEN_MATCH_EXAMPLE_KUBERNETES_NAMESPACE = open-match
REDIS_NAME = om-redis
GCLOUD_ACCOUNT_EMAIL = $(shell gcloud auth list --format yaml | grep account: | cut -c 10-)
_GCB_POST_SUBMIT ?= 0
# Make port forwards accessible outside of the proxy machine.
PORT_FORWARD_ADDRESS_FLAG = --address 0.0.0.0
DASHBOARD_PORT = 9092
export PATH := $(REPOSITORY_ROOT)/node_modules/.bin/:$(TOOLCHAIN_BIN):$(TOOLCHAIN_DIR)/nodejs/bin:$(PATH)
# Get the project from gcloud if it's not set.
ifeq ($(GCP_PROJECT_ID),)
export GCP_PROJECT_ID = $(shell gcloud config list --format 'value(core.project)')
endif
ifeq ($(OS),Windows_NT)
# TODO: Windows packages are here but things are broken since many paths are Linux based and zip vs tar.gz.
HELM_PACKAGE = https://storage.googleapis.com/kubernetes-helm/helm-v$(HELM_VERSION)-windows-amd64.zip
MINIKUBE_PACKAGE = https://storage.googleapis.com/minikube/releases/$(MINIKUBE_VERSION)/minikube-windows-amd64.exe
SKAFFOLD_PACKAGE = https://storage.googleapis.com/skaffold/releases/$(SKAFFOLD_VERSION)/skaffold-windows-amd64.exe
EXE_EXTENSION = .exe
PROTOC_PACKAGE = $(PROTOC_RELEASE_BASE)-win64.zip
KUBECTL_PACKAGE = https://storage.googleapis.com/kubernetes-release/release/v$(KUBECTL_VERSION)/bin/windows/amd64/kubectl.exe
HUGO_PACKAGE = https://github.com/gohugoio/hugo/releases/download/v$(HUGO_VERSION)/hugo_extended_$(HUGO_VERSION)_Windows-64bit.zip
NODEJS_PACKAGE = https://nodejs.org/dist/v$(NODEJS_VERSION)/node-v$(NODEJS_VERSION)-win-x64.zip
NODEJS_PACKAGE_NAME = nodejs.zip
HTMLTEST_PACKAGE = https://github.com/wjdp/htmltest/releases/download/v$(HTMLTEST_VERSION)/htmltest_$(HTMLTEST_VERSION)_windows_amd64.zip
GOLANGCI_PACKAGE = https://github.com/golangci/golangci-lint/releases/download/v$(GOLANGCI_VERSION)/golangci-lint-$(GOLANGCI_VERSION)-windows-amd64.zip
else
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Linux)
HELM_PACKAGE = https://storage.googleapis.com/kubernetes-helm/helm-v$(HELM_VERSION)-linux-amd64.tar.gz
MINIKUBE_PACKAGE = https://storage.googleapis.com/minikube/releases/$(MINIKUBE_VERSION)/minikube-linux-amd64
SKAFFOLD_PACKAGE = https://storage.googleapis.com/skaffold/releases/$(SKAFFOLD_VERSION)/skaffold-linux-amd64
PROTOC_PACKAGE = $(PROTOC_RELEASE_BASE)-linux-x86_64.zip
KUBECTL_PACKAGE = https://storage.googleapis.com/kubernetes-release/release/v$(KUBECTL_VERSION)/bin/linux/amd64/kubectl
HUGO_PACKAGE = https://github.com/gohugoio/hugo/releases/download/v$(HUGO_VERSION)/hugo_extended_$(HUGO_VERSION)_Linux-64bit.tar.gz
NODEJS_PACKAGE = https://nodejs.org/dist/v$(NODEJS_VERSION)/node-v$(NODEJS_VERSION)-linux-x64.tar.gz
NODEJS_PACKAGE_NAME = nodejs.tar.gz
HTMLTEST_PACKAGE = https://github.com/wjdp/htmltest/releases/download/v$(HTMLTEST_VERSION)/htmltest_$(HTMLTEST_VERSION)_linux_amd64.tar.gz
GOLANGCI_PACKAGE = https://github.com/golangci/golangci-lint/releases/download/v$(GOLANGCI_VERSION)/golangci-lint-$(GOLANGCI_VERSION)-linux-amd64.tar.gz
endif
ifeq ($(UNAME_S),Darwin)
HELM_PACKAGE = https://storage.googleapis.com/kubernetes-helm/helm-v$(HELM_VERSION)-darwin-amd64.tar.gz
MINIKUBE_PACKAGE = https://storage.googleapis.com/minikube/releases/$(MINIKUBE_VERSION)/minikube-darwin-amd64
SKAFFOLD_PACKAGE = https://storage.googleapis.com/skaffold/releases/$(SKAFFOLD_VERSION)/skaffold-darwin-amd64
PROTOC_PACKAGE = $(PROTOC_RELEASE_BASE)-osx-x86_64.zip
KUBECTL_PACKAGE = https://storage.googleapis.com/kubernetes-release/release/v$(KUBECTL_VERSION)/bin/darwin/amd64/kubectl
HUGO_PACKAGE = https://github.com/gohugoio/hugo/releases/download/v$(HUGO_VERSION)/hugo_extended_$(HUGO_VERSION)_macOS-64bit.tar.gz
NODEJS_PACKAGE = https://nodejs.org/dist/v$(NODEJS_VERSION)/node-v$(NODEJS_VERSION)-darwin-x64.tar.gz
NODEJS_PACKAGE_NAME = nodejs.tar.gz
HTMLTEST_PACKAGE = https://github.com/wjdp/htmltest/releases/download/v$(HTMLTEST_VERSION)/htmltest_$(HTMLTEST_VERSION)_osx_amd64.tar.gz
GOLANGCI_PACKAGE = https://github.com/golangci/golangci-lint/releases/download/v$(GOLANGCI_VERSION)/golangci-lint-$(GOLANGCI_VERSION)-darwin-amd64.tar.gz
endif
endif
help:
@cat Makefile | grep ^\#\# | grep -v ^\#\#\# |cut -c 4-
local-cloud-build: gcloud
cloud-build-local --config=cloudbuild.yaml --dryrun=false $(LOCAL_CLOUD_BUILD_PUSH) --substitutions SHORT_SHA=$(VERSION_SUFFIX),_GCB_POST_SUBMIT=$(_GCB_POST_SUBMIT),BRANCH_NAME=$(BRANCH_NAME) .
push-images: push-service-images push-client-images push-mmf-example-images push-evaluator-example-images
push-service-images: push-minimatch-image push-frontendapi-image push-backendapi-image push-mmlogicapi-image
push-mmf-example-images: push-mmf-go-grpc-serving-simple-image
push-client-images: push-backendclient-image push-clientloadgen-image push-frontendclient-image
push-evaluator-example-images: push-evaluator-serving-image
push-minimatch-image: docker build-minimatch-image
docker push $(REGISTRY)/openmatch-minimatch:$(TAG)
docker push $(REGISTRY)/openmatch-minimatch:$(ALTERNATE_TAG)
push-frontendapi-image: docker build-frontendapi-image
docker push $(REGISTRY)/openmatch-frontendapi:$(TAG)
docker push $(REGISTRY)/openmatch-frontendapi:$(ALTERNATE_TAG)
push-backendapi-image: docker build-backendapi-image
docker push $(REGISTRY)/openmatch-backendapi:$(TAG)
docker push $(REGISTRY)/openmatch-backendapi:$(ALTERNATE_TAG)
push-mmlogicapi-image: docker build-mmlogicapi-image
docker push $(REGISTRY)/openmatch-mmlogicapi:$(TAG)
docker push $(REGISTRY)/openmatch-mmlogicapi:$(ALTERNATE_TAG)
push-mmf-go-grpc-serving-simple-image: docker build-mmf-go-grpc-serving-simple-image
docker push $(REGISTRY)/openmatch-mmf-go-grpc-serving-simple:$(TAG)
docker push $(REGISTRY)/openmatch-mmf-go-grpc-serving-simple:$(ALTERNATE_TAG)
push-backendclient-image: docker build-backendclient-image
docker push $(REGISTRY)/openmatch-backendclient:$(TAG)
docker push $(REGISTRY)/openmatch-backendclient:$(ALTERNATE_TAG)
push-clientloadgen-image: docker build-clientloadgen-image
docker push $(REGISTRY)/openmatch-clientloadgen:$(TAG)
docker push $(REGISTRY)/openmatch-clientloadgen:$(ALTERNATE_TAG)
push-frontendclient-image: docker build-frontendclient-image
docker push $(REGISTRY)/openmatch-frontendclient:$(TAG)
docker push $(REGISTRY)/openmatch-frontendclient:$(ALTERNATE_TAG)
push-evaluator-serving-image: docker build-evaluator-serving-image
docker push $(REGISTRY)/openmatch-evaluator-serving:$(TAG)
docker push $(REGISTRY)/openmatch-evaluator-serving:$(ALTERNATE_TAG)
build-images: build-service-images build-client-images build-mmf-example-images build-evaluator-example-images
build-service-images: build-minimatch-image build-frontendapi-image build-backendapi-image build-mmlogicapi-image
build-client-images: build-backendclient-image build-clientloadgen-image build-frontendclient-image
build-mmf-example-images: build-mmf-go-grpc-serving-simple-image
build-evaluator-example-images: build-evaluator-serving-image
build-base-build-image: docker
docker build -f Dockerfile.base-build -t open-match-base-build .
build-minimatch-image: docker build-base-build-image
docker build -f cmd/minimatch/Dockerfile -t $(REGISTRY)/openmatch-minimatch:$(TAG) -t $(REGISTRY)/openmatch-minimatch:$(ALTERNATE_TAG) .
build-frontendapi-image: docker build-base-build-image
docker build -f cmd/frontendapi/Dockerfile -t $(REGISTRY)/openmatch-frontendapi:$(TAG) -t $(REGISTRY)/openmatch-frontendapi:$(ALTERNATE_TAG) .
build-backendapi-image: docker build-base-build-image
docker build -f cmd/backendapi/Dockerfile -t $(REGISTRY)/openmatch-backendapi:$(TAG) -t $(REGISTRY)/openmatch-backendapi:$(ALTERNATE_TAG) .
build-mmlogicapi-image: docker build-base-build-image
docker build -f cmd/mmlogicapi/Dockerfile -t $(REGISTRY)/openmatch-mmlogicapi:$(TAG) -t $(REGISTRY)/openmatch-mmlogicapi:$(ALTERNATE_TAG) .
build-mmf-go-grpc-serving-simple-image: docker build-base-build-image
docker build -f examples/functions/golang/grpc-serving/Dockerfile -t $(REGISTRY)/openmatch-mmf-go-grpc-serving-simple:$(TAG) -t $(REGISTRY)/openmatch-mmf-go-grpc-serving-simple:$(ALTERNATE_TAG) .
build-backendclient-image: docker build-base-build-image
docker build -f examples/backendclient/Dockerfile -t $(REGISTRY)/openmatch-backendclient:$(TAG) -t $(REGISTRY)/openmatch-backendclient:$(ALTERNATE_TAG) .
build-clientloadgen-image: docker build-base-build-image
docker build -f test/cmd/clientloadgen/Dockerfile -t $(REGISTRY)/openmatch-clientloadgen:$(TAG) -t $(REGISTRY)/openmatch-clientloadgen:$(ALTERNATE_TAG) .
build-frontendclient-image: docker build-base-build-image
docker build -f test/cmd/frontendclient/Dockerfile -t $(REGISTRY)/openmatch-frontendclient:$(TAG) -t $(REGISTRY)/openmatch-frontendclient:$(ALTERNATE_TAG) .
build-evaluator-serving-image: docker build-base-build-image
docker build -f examples/evaluators/golang/serving/Dockerfile -t $(REGISTRY)/openmatch-evaluator-serving:$(TAG) -t $(REGISTRY)/openmatch-evaluator-serving:$(ALTERNATE_TAG) .
clean-images: docker
-docker rmi -f open-match-base-build
-docker rmi -f $(REGISTRY)/openmatch-minimatch:$(TAG) $(REGISTRY)/openmatch-minimatch:$(ALTERNATE_TAG)
-docker rmi -f $(REGISTRY)/openmatch-frontendapi:$(TAG) $(REGISTRY)/openmatch-frontendapi:$(ALTERNATE_TAG)
-docker rmi -f $(REGISTRY)/openmatch-backendapi:$(TAG) $(REGISTRY)/openmatch-backendapi:$(ALTERNATE_TAG)
-docker rmi -f $(REGISTRY)/openmatch-mmlogicapi:$(TAG) $(REGISTRY)/openmatch-mmlogicapi:$(ALTERNATE_TAG)
-docker rmi -f $(REGISTRY)/openmatch-mmf-go-grpc-serving-simple:$(TAG) $(REGISTRY)/openmatch-mmf-go-grpc-serving-simple:$(ALTERNATE_TAG)
-docker rmi -f $(REGISTRY)/openmatch-backendclient:$(TAG) $(REGISTRY)/openmatch-backendclient:$(ALTERNATE_TAG)
-docker rmi -f $(REGISTRY)/openmatch-clientloadgen:$(TAG) $(REGISTRY)/openmatch-clientloadgen:$(ALTERNATE_TAG)
-docker rmi -f $(REGISTRY)/openmatch-frontendclient:$(TAG) $(REGISTRY)/openmatch-frontendclient:$(ALTERNATE_TAG)
-docker rmi -f $(REGISTRY)/openmatch-evaluator-serving:$(TAG) $(REGISTRY)/openmatch-evaluator-serving:$(ALTERNATE_TAG)
install-redis: build/toolchain/bin/helm$(EXE_EXTENSION)
$(HELM) upgrade --install --wait --debug $(REDIS_NAME) stable/redis --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE)
chart-deps: build/toolchain/bin/helm$(EXE_EXTENSION)
(cd install/helm/open-match; $(HELM) dependency update)
lint-chart: build/toolchain/bin/helm$(EXE_EXTENSION)
(cd install/helm; $(HELM) lint open-match; $(HELM) lint open-match-example)
print-chart: build/toolchain/bin/helm$(EXE_EXTENSION)
(cd install/helm; $(HELM) install --dry-run --debug open-match; $(HELM) install --dry-run --debug open-match-example)
install-chart: build/toolchain/bin/helm$(EXE_EXTENSION)
$(HELM) upgrade --install --wait --debug $(OPEN_MATCH_CHART_NAME) install/helm/open-match \
--namespace=$(OPEN_MATCH_KUBERNETES_NAMESPACE) \
--set openmatch.image.registry=$(REGISTRY) \
--set openmatch.image.tag=$(TAG)
install-example-chart: build/toolchain/bin/helm$(EXE_EXTENSION)
$(HELM) upgrade --install --wait --debug $(OPEN_MATCH_EXAMPLE_CHART_NAME) install/helm/open-match-example \
--namespace=$(OPEN_MATCH_KUBERNETES_NAMESPACE) \
--set openmatch.image.registry=$(REGISTRY) \
--set openmatch.image.tag=$(TAG)
delete-example-chart: build/toolchain/bin/helm$(EXE_EXTENSION)
-$(HELM) delete --purge $(OPEN_MATCH_EXAMPLE_CHART_NAME)
dry-chart: build/toolchain/bin/helm$(EXE_EXTENSION)
$(HELM) upgrade --install --wait --debug --dry-run $(OPEN_MATCH_CHART_NAME) install/helm/open-match \
--namespace=$(OPEN_MATCH_KUBERNETES_NAMESPACE) \
--set openmatch.image.registry=$(REGISTRY) \
--set openmatch.image.tag=$(TAG)
delete-chart: build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/kubectl$(EXE_EXTENSION)
-$(HELM) delete --purge $(OPEN_MATCH_CHART_NAME)
-$(KUBECTL) delete crd prometheuses.monitoring.coreos.com
-$(KUBECTL) delete crd servicemonitors.monitoring.coreos.com
-$(KUBECTL) delete crd prometheusrules.monitoring.coreos.com
update-helm-deps: build/toolchain/bin/helm$(EXE_EXTENSION)
(cd install/helm/open-match; $(HELM) dependencies update)
install/yaml/: install/yaml/install.yaml install/yaml/install-example.yaml install/yaml/01-redis-chart.yaml install/yaml/02-open-match.yaml install/yaml/03-prometheus-chart.yaml install/yaml/04-grafana-chart.yaml
install/yaml/01-redis-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template --name $(OPEN_MATCH_CHART_NAME) --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) \
--set redis.fullnameOverride='$(REDIS_NAME)' \
--set openmatch.config.install=false \
--set openmatch.backendapi.install=false \
--set openmatch.frontendapi.install=false \
--set openmatch.mmlogicapi.install=false \
--set prometheus.enabled=false \
--set grafana.enabled=false \
install/helm/open-match > install/yaml/01-redis-chart.yaml
install/yaml/02-open-match.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template --name $(OPEN_MATCH_CHART_NAME) --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) \
--set redis.fullnameOverride='$(REDIS_NAME)' \
--set redis.enabled=false \
--set prometheus.enabled=false \
--set grafana.enabled=false \
--set openmatch.image.registry=$(REGISTRY) \
--set openmatch.image.tag=$(TAG) \
--set openmatch.noChartMeta=true \
install/helm/open-match > install/yaml/02-open-match.yaml
install/yaml/03-prometheus-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template --name $(OPEN_MATCH_CHART_NAME) --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) \
--set redis.enabled=false \
--set openmatch.config.install=false \
--set openmatch.backendapi.install=false \
--set openmatch.frontendapi.install=false \
--set openmatch.mmlogicapi.install=false \
--set grafana.enabled=false \
install/helm/open-match > install/yaml/03-prometheus-chart.yaml
install/yaml/04-grafana-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template --name $(OPEN_MATCH_CHART_NAME) --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) \
--set redis.enabled=false \
--set openmatch.config.install=false \
--set openmatch.backendapi.install=false \
--set openmatch.frontendapi.install=false \
--set openmatch.mmlogicapi.install=false \
--set prometheus.enabled=false \
--set grafana.enabled=true \
install/helm/open-match > install/yaml/04-grafana-chart.yaml
install/yaml/install.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template --name $(OPEN_MATCH_CHART_NAME) --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) \
--set openmatch.image.registry=$(REGISTRY) \
--set openmatch.image.tag=$(TAG) \
--set redis.enabled=true \
--set prometheus.enabled=true \
--set grafana.enabled=true \
install/helm/open-match > install/yaml/install.yaml
install/yaml/install-example.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template --name $(OPEN_MATCH_EXAMPLE_CHART_NAME) --namespace $(OPEN_MATCH_EXAMPLE_KUBERNETES_NAMESPACE) \
--set openmatch.image.registry=$(REGISTRY) \
--set openmatch.image.tag=$(TAG) \
install/helm/open-match-example > install/yaml/install-example.yaml
set-redis-password:
@stty -echo; \
printf "Redis password: "; \
read REDIS_PASSWORD; \
stty echo; \
printf "\n"; \
REDIS_PASSWORD=$$(printf "$$REDIS_PASSWORD" | base64); \
printf "apiVersion: v1\nkind: Secret\nmetadata:\n name: $(REDIS_NAME)\n namespace: $(OPEN_MATCH_KUBERNETES_NAMESPACE)\ndata:\n redis-password: $$REDIS_PASSWORD\n" | \
$(KUBECTL) replace -f - --force
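# Note: Kubernetes stores Secret "data" values base64-encoded, which is why the
# recipe above pipes the password through base64 before building the manifest.
# For example (password "opensesame" is an invented illustration):
#   $ printf "opensesame" | base64
#   b3BlbnNlc2FtZQ==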
install-toolchain: build/toolchain/bin/protoc$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-go$(EXE_EXTENSION) build/toolchain/bin/kubectl$(EXE_EXTENSION) build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/minikube$(EXE_EXTENSION) build/toolchain/bin/skaffold$(EXE_EXTENSION) build/toolchain/bin/hugo$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION) build/toolchain/bin/htmltest$(EXE_EXTENSION)
build/toolchain/bin/helm$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
mkdir -p $(TOOLCHAIN_DIR)/temp-helm
cd $(TOOLCHAIN_DIR)/temp-helm && curl -Lo helm.tar.gz $(HELM_PACKAGE) && tar xzf helm.tar.gz --strip-components 1
mv $(TOOLCHAIN_DIR)/temp-helm/helm$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/helm$(EXE_EXTENSION)
mv $(TOOLCHAIN_DIR)/temp-helm/tiller$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/tiller$(EXE_EXTENSION)
rm -rf $(TOOLCHAIN_DIR)/temp-helm/
build/toolchain/bin/hugo$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
mkdir -p $(TOOLCHAIN_DIR)/temp-hugo
cd $(TOOLCHAIN_DIR)/temp-hugo && curl -Lo hugo.tar.gz $(HUGO_PACKAGE) && tar xzf hugo.tar.gz
mv $(TOOLCHAIN_DIR)/temp-hugo/hugo$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/hugo$(EXE_EXTENSION)
rm -rf $(TOOLCHAIN_DIR)/temp-hugo/
build/toolchain/bin/minikube$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
curl -Lo minikube$(EXE_EXTENSION) $(MINIKUBE_PACKAGE)
chmod +x minikube$(EXE_EXTENSION)
mv minikube$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/minikube$(EXE_EXTENSION)
build/toolchain/bin/kubectl$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
curl -Lo kubectl$(EXE_EXTENSION) $(KUBECTL_PACKAGE)
chmod +x kubectl$(EXE_EXTENSION)
mv kubectl$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/kubectl$(EXE_EXTENSION)
build/toolchain/bin/skaffold$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
curl -Lo skaffold$(EXE_EXTENSION) $(SKAFFOLD_PACKAGE)
chmod +x skaffold$(EXE_EXTENSION)
mv skaffold$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/skaffold$(EXE_EXTENSION)
build/toolchain/bin/htmltest$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
mkdir -p $(TOOLCHAIN_DIR)/temp-htmltest
cd $(TOOLCHAIN_DIR)/temp-htmltest && curl -Lo htmltest.tar.gz $(HTMLTEST_PACKAGE) && tar xzf htmltest.tar.gz
mv $(TOOLCHAIN_DIR)/temp-htmltest/htmltest$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/htmltest$(EXE_EXTENSION)
rm -rf $(TOOLCHAIN_DIR)/temp-htmltest/
build/toolchain/bin/golangci-lint$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
mkdir -p $(TOOLCHAIN_DIR)/temp-golangci
cd $(TOOLCHAIN_DIR)/temp-golangci && curl -Lo golangci.tar.gz $(GOLANGCI_PACKAGE) && tar xvzf golangci.tar.gz --strip-components 1
mv $(TOOLCHAIN_DIR)/temp-golangci/golangci-lint$(EXE_EXTENSION) $(TOOLCHAIN_BIN)/golangci-lint$(EXE_EXTENSION)
rm -rf $(TOOLCHAIN_DIR)/temp-golangci/
build/toolchain/bin/protoc$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
curl -o $(TOOLCHAIN_DIR)/protoc-temp.zip -L $(PROTOC_PACKAGE)
(cd $(TOOLCHAIN_DIR); unzip -q -o protoc-temp.zip)
rm $(TOOLCHAIN_DIR)/protoc-temp.zip $(TOOLCHAIN_DIR)/readme.txt
build/toolchain/bin/protoc-gen-go$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_BIN)
cd $(TOOLCHAIN_BIN) && $(GO) build -pkgdir . github.com/golang/protobuf/protoc-gen-go
build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION):
mkdir -p $(TOOLCHAIN_DIR)/googleapis-temp/
mkdir -p $(TOOLCHAIN_BIN)
curl -o $(TOOLCHAIN_DIR)/googleapis-temp/googleapis.zip -L \
https://github.com/googleapis/googleapis/archive/master.zip
(cd $(TOOLCHAIN_DIR)/googleapis-temp/; unzip -q -o googleapis.zip)
cp -rf $(TOOLCHAIN_DIR)/googleapis-temp/googleapis-master/google/api/ \
$(PROTOC_INCLUDES)/google/api
rm -rf $(TOOLCHAIN_DIR)/googleapis-temp
cd $(TOOLCHAIN_BIN) && $(GO) build -pkgdir . github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
cd $(TOOLCHAIN_BIN) && $(GO) build -pkgdir . github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
build/archives/$(NODEJS_PACKAGE_NAME):
mkdir -p build/archives/
cd build/archives/ && curl -L -o $(NODEJS_PACKAGE_NAME) $(NODEJS_PACKAGE)
build/toolchain/nodejs/: build/archives/$(NODEJS_PACKAGE_NAME)
mkdir -p build/toolchain/nodejs/
cd build/toolchain/nodejs/ && tar xzf ../../archives/$(NODEJS_PACKAGE_NAME) --strip-components 1
push-helm: build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KUBECTL) create serviceaccount --namespace kube-system tiller
$(HELM) init --service-account tiller --force-upgrade
$(KUBECTL) create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
ifneq ($(strip $(shell $(KUBECTL) get clusterroles | grep -i rbac)),)
$(KUBECTL) patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
endif
echo "Waiting for Tiller to become ready..."
$(KUBECTL) wait deployment --timeout=60s --for condition=available -l app=helm,name=tiller --namespace kube-system
delete-helm: build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/kubectl$(EXE_EXTENSION)
-$(HELM) reset
-$(KUBECTL) delete serviceaccount --namespace kube-system tiller
-$(KUBECTL) delete clusterrolebinding tiller-cluster-rule
ifneq ($(strip $(shell $(KUBECTL) get clusterroles | grep -i rbac)),)
-$(KUBECTL) delete deployment --namespace kube-system tiller-deploy
endif
echo "Waiting for Tiller to go away..."
-$(KUBECTL) wait deployment --timeout=60s --for delete -l app=helm,name=tiller --namespace kube-system
# Fake target for docker
docker: no-sudo
# Fake target for gcloud
gcloud: no-sudo
auth-docker: gcloud docker
gcloud $(GCP_PROJECT_FLAG) auth configure-docker
auth-gke-cluster: gcloud
gcloud $(GCP_PROJECT_FLAG) container clusters get-credentials $(GKE_CLUSTER_NAME) $(GCP_LOCATION_FLAG)
create-gke-cluster: build/toolchain/bin/kubectl$(EXE_EXTENSION) gcloud
gcloud $(GCP_PROJECT_FLAG) container clusters create $(GKE_CLUSTER_NAME) $(GCP_LOCATION_FLAG) --machine-type n1-standard-4 --tags open-match $(KUBERNETES_COMPAT)
$(KUBECTL) create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$(GCLOUD_ACCOUNT_EMAIL)
delete-gke-cluster: gcloud
gcloud $(GCP_PROJECT_FLAG) container clusters delete $(GKE_CLUSTER_NAME) $(GCP_LOCATION_FLAG) --quiet
create-mini-cluster: build/toolchain/bin/minikube$(EXE_EXTENSION)
$(MINIKUBE) start --memory 6144 --cpus 4 --disk-size 50g
delete-mini-cluster: build/toolchain/bin/minikube$(EXE_EXTENSION)
$(MINIKUBE) delete
all-protos: golang-protos reverse-golang-protos swagger-def-protos
golang-protos: internal/pb/backend.pb.go internal/pb/frontend.pb.go internal/pb/matchfunction.pb.go internal/pb/messages.pb.go internal/pb/mmlogic.pb.go
reverse-golang-protos: internal/pb/backend.pb.gw.go internal/pb/frontend.pb.gw.go internal/pb/matchfunction.pb.gw.go internal/pb/messages.pb.gw.go internal/pb/mmlogic.pb.gw.go
swagger-def-protos: internal/swagger/frontend.proto internal/swagger/backend.proto internal/swagger/mmlogic.proto internal/swagger/matchfunction.proto
internal/pb/%.pb.go: api/protobuf-spec/%.proto build/toolchain/bin/protoc$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-go$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION)
$(PROTOC) $< \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--go_out=plugins=grpc:$(REPOSITORY_ROOT)
internal/pb/%.pb.gw.go: api/protobuf-spec/%.proto build/toolchain/bin/protoc$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-go$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION)
$(PROTOC) $< \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--grpc-gateway_out=logtostderr=true,allow_delete_body=true:$(REPOSITORY_ROOT)
internal/swagger/%.proto: api/protobuf-spec/%.proto build/toolchain/bin/protoc$(EXE_EXTENSION) build/toolchain/bin/protoc-gen-grpc-gateway$(EXE_EXTENSION)
$(PROTOC) $< \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--swagger_out=logtostderr=true,allow_delete_body=true:.
# The include structure of the protos needs to be called out so the dependency chain is run through properly.
internal/pb/backend.pb.go: internal/pb/messages.pb.go
internal/pb/frontend.pb.go: internal/pb/messages.pb.go
internal/pb/mmlogic.pb.go: internal/pb/messages.pb.go
internal/pb/matchfunction.pb.go: internal/pb/messages.pb.go
build:
$(GO) build ./...
test:
$(GO) test ./... -race
test-in-ci:
$(GO) test ./... -race -test.count 25 -cover
fmt:
$(GO) fmt ./...
vet:
$(GO) vet ./...
# Blocked on https://github.com/golangci/golangci-lint/issues/500
golangci: build/toolchain/bin/golangci-lint$(EXE_EXTENSION)
build/toolchain/bin/golangci-lint$(EXE_EXTENSION) run -v --config=.golangci.yaml
lint: fmt vet lint-chart
cmd/minimatch/minimatch: internal/pb/backend.pb.go internal/pb/frontend.pb.go internal/pb/mmlogic.pb.go internal/pb/matchfunction.pb.go internal/pb/messages.pb.go
cd cmd/minimatch; $(GO_BUILD_COMMAND)
cmd/backendapi/backendapi: internal/pb/backend.pb.go
cd cmd/backendapi; $(GO_BUILD_COMMAND)
cmd/frontendapi/frontendapi: internal/pb/frontend.pb.go
cd cmd/frontendapi; $(GO_BUILD_COMMAND)
cmd/mmlogicapi/mmlogicapi: internal/pb/mmlogic.pb.go
cd cmd/mmlogicapi; $(GO_BUILD_COMMAND)
examples/backendclient/backendclient: internal/pb/backend.pb.go
cd examples/backendclient; $(GO_BUILD_COMMAND)
examples/evaluators/golang/serving/serving: internal/pb/messages.pb.go
cd examples/evaluators/golang/serving; $(GO_BUILD_COMMAND)
examples/functions/golang/grpc-serving/grpc-serving: internal/pb/messages.pb.go
cd examples/functions/golang/grpc-serving; $(GO_BUILD_COMMAND)
test/cmd/clientloadgen/clientloadgen:
cd test/cmd/clientloadgen; $(GO_BUILD_COMMAND)
test/cmd/frontendclient/frontendclient: internal/pb/frontend.pb.go internal/pb/messages.pb.go
cd test/cmd/frontendclient; $(GO_BUILD_COMMAND)
node_modules/: build/toolchain/nodejs/
-rm -r package.json package-lock.json
-rm -rf node_modules/
echo "{}" > package.json
-rm -f package-lock.json
$(TOOLCHAIN_DIR)/nodejs/bin/npm install postcss-cli autoprefixer
build/site/: build/toolchain/bin/hugo$(EXE_EXTENSION) node_modules/
rm -rf build/site/
mkdir -p build/site/
cd site/ && ../build/toolchain/bin/hugo$(EXE_EXTENSION) --config=config.toml --source . --destination $(BUILD_DIR)/site/public/
# Only copy the root directory since that has the AppEngine serving code.
-cp -f site/* $(BUILD_DIR)/site
-cp -f site/.gcloudignore $(BUILD_DIR)/site/.gcloudignore
#cd $(BUILD_DIR)/site && "SERVICE=$(SERVICE) envsubst < app.yaml > .app.yaml"
cp $(BUILD_DIR)/site/app.yaml $(BUILD_DIR)/site/.app.yaml
site-test: TEMP_SITE_DIR := /tmp/open-match-site
site-test: build/site/ build/toolchain/bin/htmltest$(EXE_EXTENSION)
rm -rf $(TEMP_SITE_DIR)
mkdir -p $(TEMP_SITE_DIR)/site/
cp -rf $(REPOSITORY_ROOT)/build/site/public/* $(TEMP_SITE_DIR)/site/
$(HTMLTEST) --conf $(REPOSITORY_ROOT)/site/htmltest.yaml $(TEMP_SITE_DIR)
browse-site: build/site/
cd $(BUILD_DIR)/site && dev_appserver.py .app.yaml
deploy-dev-site: build/site/ gcloud
cd $(BUILD_DIR)/site && gcloud $(OM_SITE_GCP_PROJECT_FLAG) app deploy .app.yaml --promote --version=$(VERSION_SUFFIX) --quiet
ci-deploy-dev-site: build/site/ gcloud
ifeq ($(_GCB_POST_SUBMIT),1)
echo "Deploying website to development.open-match.dev..."
# TODO: Install GAE SDK and use the Service Account to deploy to GAE.
#cd $(BUILD_DIR)/site && gcloud $(OM_SITE_GCP_PROJECT_FLAG) app deploy .app.yaml --promote --version=$(VERSION_SUFFIX) --quiet
else
echo "Not deploying development.open-match.dev because this is not a post commit change."
endif
deploy-redirect-site: gcloud
cd $(REPOSITORY_ROOT)/site/redirect/ && gcloud $(OM_SITE_GCP_PROJECT_FLAG) app deploy app.yaml --promote --quiet
run-site: build/toolchain/bin/hugo$(EXE_EXTENSION)
cd site/ && ../build/toolchain/bin/hugo$(EXE_EXTENSION) server --debug --watch --enableGitInfo . --baseURL=http://localhost:$(SITE_PORT)/ --bind 0.0.0.0 --port $(SITE_PORT) --disableFastRender
ci-deploy-artifacts: install/yaml/ gcloud
ifeq ($(_GCB_POST_SUBMIT),1)
#gsutil cp -a public-read $(REPOSITORY_ROOT)/install/yaml/* gs://open-match-chart/install/$(VERSION_SUFFIX)/
gsutil cp -a public-read $(REPOSITORY_ROOT)/install/yaml/* gs://open-match-chart/install/yaml/$(BRANCH_NAME)-latest/
else
echo "Not deploying artifacts because this is not a post commit change."
endif
all: service-binaries client-binaries example-binaries
service-binaries: cmd/minimatch/minimatch cmd/backendapi/backendapi cmd/frontendapi/frontendapi cmd/mmlogicapi/mmlogicapi
client-binaries: examples/backendclient/backendclient test/cmd/clientloadgen/clientloadgen test/cmd/frontendclient/frontendclient
example-binaries: example-mmf-binaries example-evaluator-binaries
example-mmf-binaries: examples/functions/golang/grpc-serving/grpc-serving
example-evaluator-binaries: examples/evaluators/golang/serving/serving
# For presubmit we want to update the protobuf generated files and verify that tests are good.
presubmit: sync-deps clean-protos all-protos fmt vet build test
build/release/: presubmit clean-install-yaml install/yaml/
mkdir -p $(BUILD_DIR)/release/
cp install/yaml/* $(BUILD_DIR)/release/
release: REGISTRY = gcr.io/$(OPEN_MATCH_PUBLIC_IMAGES_PROJECT_ID)
release: TAG = $(BASE_VERSION)
release: build/release/
clean-release:
rm -rf build/release/
clean-site:
rm -rf build/site/
clean-protos:
rm -rf internal/pb/
rm -rf api/protobuf_spec/
clean-binaries:
rm -rf cmd/minimatch/minimatch
rm -rf cmd/backendapi/backendapi
rm -rf cmd/frontendapi/frontendapi
rm -rf cmd/mmlogicapi/mmlogicapi
rm -rf examples/backendclient/backendclient
rm -rf examples/evaluators/golang/serving/serving
rm -rf examples/functions/golang/grpc-serving/grpc-serving
rm -rf test/cmd/clientloadgen/clientloadgen
rm -rf test/cmd/frontendclient/frontendclient
clean-toolchain:
rm -rf build/toolchain/
clean-nodejs:
rm -rf build/toolchain/nodejs/
rm -rf node_modules/
rm -rf package.json
rm -rf package-lock.json
clean-install-yaml:
rm -f install/yaml/install.yaml
rm -f install/yaml/install-example.yaml
rm -f install/yaml/01-redis-chart.yaml
rm -f install/yaml/02-open-match.yaml
rm -f install/yaml/03-prometheus-chart.yaml
rm -f install/yaml/04-grafana-chart.yaml
clean: clean-images clean-binaries clean-site clean-release clean-toolchain clean-protos clean-nodejs clean-install-yaml
run-backendclient: build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KUBECTL) run om-backendclient --rm --restart=Never --image-pull-policy=Always -i --tty --image=$(REGISTRY)/openmatch-backendclient:$(TAG) --namespace=$(OPEN_MATCH_KUBERNETES_NAMESPACE) $(KUBECTL_RUN_ENV)
run-frontendclient: build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KUBECTL) run om-frontendclient --rm --restart=Never --image-pull-policy=Always -i --tty --image=$(REGISTRY)/openmatch-frontendclient:$(TAG) --namespace=$(OPEN_MATCH_KUBERNETES_NAMESPACE) $(KUBECTL_RUN_ENV)
run-clientloadgen: build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KUBECTL) run om-clientloadgen --rm --restart=Never --image-pull-policy=Always -i --tty --image=$(REGISTRY)/openmatch-clientloadgen:$(TAG) --namespace=$(OPEN_MATCH_KUBERNETES_NAMESPACE) $(KUBECTL_RUN_ENV)
proxy-grafana: build/toolchain/bin/kubectl$(EXE_EXTENSION)
echo "User: admin"
echo "Password: openmatch"
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=grafana,release=$(OPEN_MATCH_CHART_NAME)" --output jsonpath='{.items[0].metadata.name}') $(GRAFANA_PORT):3000 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-prometheus: build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KUBECTL) port-forward --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) $(shell $(KUBECTL) get pod --namespace $(OPEN_MATCH_KUBERNETES_NAMESPACE) --selector="app=prometheus,component=server,release=$(OPEN_MATCH_CHART_NAME)" --output jsonpath='{.items[0].metadata.name}') $(PROMETHEUS_PORT):9090 $(PORT_FORWARD_ADDRESS_FLAG)
proxy-dashboard: build/toolchain/bin/kubectl$(EXE_EXTENSION)
$(KUBECTL) port-forward --namespace kube-system $(shell $(KUBECTL) get pod --namespace kube-system --selector="app=kubernetes-dashboard" --output jsonpath='{.items[0].metadata.name}') $(DASHBOARD_PORT):9090 $(PORT_FORWARD_ADDRESS_FLAG)
sync-deps:
$(GO) mod download
sleep-10:
sleep 10
# Prevents users from running with sudo.
# There's an exception for Google Cloud Build because it runs as root.
no-sudo:
ifndef ALLOW_BUILD_WITH_SUDO
ifeq ($(shell whoami),root)
echo "ERROR: Running Makefile as root (or sudo)"
echo "Please follow the instructions at https://docs.docker.com/install/linux/linux-postinstall/ if you are trying to sudo run the Makefile because of the 'Cannot connect to the Docker daemon' error."
echo "NOTE: sudo/root do not have the authentication token to talk to any GCP service via gcloud."
exit 1
endif
endif
.PHONY: docker gcloud deploy-redirect-site sync-deps sleep-10 proxy-dashboard proxy-prometheus proxy-grafana clean clean-toolchain clean-binaries clean-protos presubmit test test-in-ci vet

README.md
# Open Match
![Open Match](site/static/images/logo-with-name.png)
[![GoDoc](https://godoc.org/github.com/GoogleCloudPlatform/open-match?status.svg)](https://godoc.org/github.com/GoogleCloudPlatform/open-match)
[![Go Report Card](https://goreportcard.com/badge/github.com/GoogleCloudPlatform/open-match)](https://goreportcard.com/report/github.com/GoogleCloudPlatform/open-match)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/GoogleCloudPlatform/open-match/blob/master/LICENSE)
Open Match is an open source game matchmaking framework designed to allow game creators to build matchmakers of any size easily and with as much possibility for sharing and code re-use as possible. It's designed to be flexible, extensible, and scalable.
Matchmaking begins when a player tells the game that they want to play. Every player has a set of attributes such as skill, location, playtime, and win-loss ratio, which may factor into how they are paired with other players. Typically there is a trade-off between the quality of the match and the time spent waiting. Since Open Match is designed to scale with the player population, it should be possible to keep match quality high even with a very large player count.
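The quality-versus-wait trade-off is often implemented by relaxing matching constraints as a ticket ages. The sketch below is illustrative only: the function name and all numeric thresholds are invented, and nothing here is part of the Open Match API.

```go
// Toy "quality vs. wait time" rule: the acceptable skill gap between
// players widens as a ticket waits, trading match quality for shorter
// queue times.
package main

import "fmt"

// maxSkillGap returns the widest skill difference we will accept for a
// player who has been waiting for waitSeconds. The base tolerance and
// growth rate are made-up numbers for illustration.
func maxSkillGap(waitSeconds int) int {
	const baseGap = 50      // accepted gap for a brand-new ticket
	const growthPer10s = 25 // extra gap allowed per 10 seconds of waiting
	return baseGap + (waitSeconds/10)*growthPer10s
}

func main() {
	for _, wait := range []int{0, 30, 120} {
		fmt.Printf("waited %3ds -> accept skill gap up to %d\n", wait, maxSkillGap(wait))
	}
}
```

A real matchmaker would weigh many attributes at once, but the shape is the same: constraints start tight and loosen on a schedule you tune for your game.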
Under the covers, matchmaking approaches touch on significant areas of computer science including graph theory and massively concurrent processing. Open Match is an effort to provide a foundation upon which these difficult problems can be addressed by the wider game development community. As Josh Menke &mdash; famous for working on matchmaking for many popular triple-A franchises &mdash; put it:
["Matchmaking, a lot of it actually really is just really good engineering. There's a lot of really hard networking and plumbing problems that need to be solved, depending on the size of your audience."](https://youtu.be/-pglxege-gU?t=830)
This project attempts to solve the networking and plumbing problems, so game developers can focus on the logic to match players into great games.
## Disclaimer
This software is currently alpha and subject to change. Open Match has already been used to run [production workloads within Google](https://cloud.google.com/blog/topics/inside-google-cloud/no-tricks-just-treats-globally-scaling-the-halloween-multiplayer-doodle-with-open-match-on-google-cloud), but it's still early days on the way to our final goal. There's plenty left to write and we welcome contributions. **We strongly encourage you to engage with the community through the [Slack or Mailing lists](#get-involved) if you're considering using Open Match in production before the 1.0 release, as the documentation is likely to lag behind the latest version a bit while we focus on getting out of alpha/beta as soon as possible.**
## Running Open Match
The Open Match framework is a collection of servers that run within Kubernetes (the [puppet master](https://en.wikipedia.org/wiki/Puppet_Master_(gaming)) for your server cluster).
## Version
[The current stable version in master is 0.3.0 (alpha)](https://github.com/GoogleCloudPlatform/open-match/releases/tag/030). At this time only bugfixes and doc update pull requests will be considered.
Version 0.4.0 is in active development; please target code changes to the 040wip branch.
# Core Concepts
[Watch the introduction of Open Match at Unite Berlin 2018 on YouTube](https://youtu.be/qasAmy_ko2o)
## Deploy to Kubernetes
If you have an [existing Kubernetes cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster) you can run these commands to install Open Match:
```bash
# Grant yourself cluster-admin permissions so that you can deploy service accounts.
kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$(YOUR_KUBERNETES_USER_NAME)
# Place all Open Match components in their own namespace.
kubectl create namespace open-match
# Install Open Match and monitoring services.
kubectl apply -f https://storage.googleapis.com/open-match-chart/install/yaml/master-latest/install.yaml --namespace open-match
# Install the example MMF and Evaluator.
kubectl apply -f https://storage.googleapis.com/open-match-chart/install/yaml/master-latest/install-example.yaml --namespace open-match
```
Open Match is designed to support massively concurrent matchmaking, and to be scalable to player populations of hundreds of millions or more. It attempts to apply stateless web tech microservices patterns to game matchmaking. If you're not sure what that means, that's okay &mdash; it is fully open source and designed to be customizable to fit into your online game architecture &mdash; so have a look at the code and modify it as you see fit.
To delete Open Match:
```bash
# Delete the open-match namespace that holds all the Open Match configuration.
kubectl delete namespace open-match
```
## Glossary
### General
* **DGS** &mdash; Dedicated game server
* **Client** &mdash; The game client program the player uses when playing the game
* **Session** &mdash; In Open Match, players are matched together, then assigned to a server which hosts the game _session_. Depending on context, this may be referred to as a _match_, _map_, or just _game_ elsewhere in the industry.
### Open Match
* **Component** &mdash; One of the discrete processes in an Open Match deployment. Open Match is composed of multiple scalable microservices called _components_.
* **State Storage** &mdash; The storage software used by Open Match to hold all the matchmaking state. Open Match ships with [Redis](https://redis.io/) as the default state storage.
* **MMFOrc** &mdash; Matchmaker function orchestrator. This Open Match core component is in charge of kicking off custom matchmaking functions (MMFs) and evaluator processes.
* **MMF** &mdash; Matchmaking function. This is the customizable matchmaking logic.
* **MMLogic API** &mdash; An API that provides MMF SDK functionality. It is optional - you can also do all the state storage read and write operations yourself if you have a good reason to do so.
* **Director** &mdash; The software you (as a developer) write against the Open Match Backend API. The _Director_ decides which MMFs to run, and is responsible for sending MMF results to a DGS to host the session.
### Data Model
* **Player** &mdash; An ID and list of attributes with values for a player who wants to participate in matchmaking.
* **Roster** &mdash; A list of player objects. Used to hold all the players on a single team.
* **Filter** &mdash; A _filter_ is used to narrow down the players to only those who have an attribute value within a certain integer range. All attributes are integer values in Open Match because [that is how indices are implemented](internal/statestorage/redis/playerindices/playerindices.go). A _filter_ is defined in a _player pool_.
* **Player Pool** &mdash; A list of all the players who fit all the _filters_ defined in the pool.
* **Match Object** &mdash; A protobuffer message format that contains the _profile_ and the results of the matchmaking function. Sent to the backend API from your game backend with the _roster_(s) empty and then returned from your MMF with the matchmaking results filled in.
* **Profile** &mdash; The json blob containing all the parameters used by your MMF to select which players go into a roster together.
* **Assignment** &mdash; Refers to assigning a player or group of players to a dedicated game server instance. Open Match offers a path to send dedicated game server connection details from your backend to your game clients after a match has been made.
* **Ignore List** &mdash; Removing players from matchmaking consideration is accomplished using _ignore lists_. They contain lists of player IDs that your MMF should not include when making matches.
## Development
Open Match can be deployed locally or in the cloud for development. Below are the steps to build, push, and deploy the binaries to Kubernetes.
### Deploy to Minikube (Locally)
[Minikube](https://kubernetes.io/docs/setup/minikube/) is Kubernetes in a VM. It's mainly used for development.
```bash
# Create a Minikube Cluster and install Helm
make create-mini-cluster push-helm
# Deploy Open Match with example functions
make REGISTRY=gcr.io/open-match-public-images TAG=latest install-chart install-example-chart
```
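To make the glossary's _filter_, _player pool_, and _ignore list_ terms concrete, here is a small illustrative sketch. The attribute names and in-memory dictionaries are hypothetical stand-ins; the real implementation indexes players in Redis:

```python
# Illustrative sketch of Open Match's data-model concepts, not the real API.
# Players carry integer attributes; a filter is an inclusive integer range.
players = {
    "p1": {"mmr": 1200, "ping_us_east": 40},
    "p2": {"mmr": 1350, "ping_us_east": 300},
    "p3": {"mmr": 1280, "ping_us_east": 55},
}

def apply_filter(players, attribute, min_value, max_value):
    """Return the IDs of players whose attribute falls in [min_value, max_value]."""
    return {
        pid for pid, attrs in players.items()
        if min_value <= attrs.get(attribute, 0) <= max_value
    }

# A player pool is the intersection of all its filters, minus any ignored players.
pool = apply_filter(players, "mmr", 1100, 1400) & apply_filter(players, "ping_us_east", 0, 100)
ignore_list = {"p3"}  # e.g. players already proposed by another MMF
pool -= ignore_list
print(sorted(pool))  # p1 passes both filters; p2 fails the ping filter, p3 is ignored
```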
## Requirements
* [Kubernetes](https://kubernetes.io/) cluster &mdash; tested with version 1.9.
* [Redis 4+](https://redis.io/) &mdash; tested with 4.0.11.
* Open Match is compiled against the latest release of [Golang](https://golang.org/) &mdash; tested with 1.10.9.
## Components
Open Match is a set of processes designed to run on Kubernetes. It contains these **core** components:
1. Frontend API
1. Backend API
1. Matchmaker Function Orchestrator (MMFOrc) (may be deprecated in future versions)
### Deploy to Google Cloud Platform (Cloud)
Create a GCP project via the [Google Cloud Console](https://console.cloud.google.com/). Billing must be enabled, but if you're a new customer you can get some [free credits](https://cloud.google.com/free/). When you create a project you'll need to set a Project ID; if you forget it, you can look it up at https://console.cloud.google.com/iam-admin/settings/project.
Now install the [Google Cloud SDK](https://cloud.google.com/sdk/), the command line tool for working against your project. The following commands log you into your GCP project.
```bash
# Login to your Google Account for GCP.
gcloud auth login
gcloud config set project $YOUR_GCP_PROJECT_ID
# Enable GCP services
gcloud services enable containerregistry.googleapis.com
gcloud services enable container.googleapis.com
# Test that everything is good, this command should work.
gcloud compute zones list
```
Please follow the instructions to [Setup Local Open Match Repository](#local-repository-setup). Once everything is set up you can deploy Open Match by creating a cluster in Google Kubernetes Engine (GKE).
```bash
# Create a GKE Cluster and install Helm
make create-gke-cluster push-helm
# Deploy Open Match with example functions
make REGISTRY=gcr.io/open-match-build TAG=0.4.0-e98e1b6 install-chart install-example-chart
```
To generate matches using a test client, run the following command:
```bash
make REGISTRY=gcr.io/open-match-build TAG=0.4.0-e98e1b6 run-backendclient
```
Once deployed you can view the jobs in [Cloud Console](https://console.cloud.google.com/kubernetes/workload).
It includes these **optional** (but recommended) components:
1. Matchmaking Logic (MMLogic) API
It also explicitly depends on these two **customizable** components:
1. Matchmaking "Function" (MMF)
1. Evaluator (may be optional in future versions)
While **core** components are fully open source and _can_ be modified, they are designed to support the majority of matchmaking scenarios *without needing to change the source code*. The Open Match repository ships with simple **customizable** MMF and Evaluator examples, but it is expected that most users will want full control over the logic in these, so they have been designed to be as easy to modify or replace as possible.
### Frontend API
The Frontend API accepts the player data and puts it in state storage so your Matchmaking Function (MMF) can access it.
The Frontend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/frontend.proto`. At the most basic level, it expects clients to connect and send:
* A **unique ID** for the group of players (the group can contain any number of players, including only one).
* A **json blob** containing all player-related data you want to use in your matchmaking function.
The client is expected to maintain a connection, waiting for an update from the API that contains the details required to connect to a dedicated game server instance (an 'assignment'). There are also basic functions for removing an ID from the matchmaking pool or an existing match.
### Local Repository Setup
Here are the instructions to set up a local repository for Open Match.
```bash
# Install Open Match Toolchain Dependencies (for Debian; other OSes including Mac OS X have similar dependencies)
sudo apt-get update; sudo apt-get install -y -q python3 python3-virtualenv virtualenv make google-cloud-sdk git unzip tar
# Set up your repository like a Go workspace, https://golang.org/doc/code.html#Workspaces
# This requirement will go away soon.
mkdir -p $HOME/workspace/src/github.com/GoogleCloudPlatform/
cd $HOME/workspace/src/github.com/GoogleCloudPlatform/
export GOPATH=$HOME/workspace
export GO111MODULE=on
git clone https://github.com/GoogleCloudPlatform/open-match.git
cd open-match
```
### Backend API
The Backend API writes match objects to state storage which the Matchmaking Functions (MMFs) access to decide which players should be matched. It returns the results from those MMFs.
The Backend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/backend.proto`. At the most basic level, it expects to be connected to your online infrastructure (probably to your server scaling manager or **director**, or even directly to a dedicated game server), and to receive:
* A **unique ID** for a matchmaking profile.
* A **json blob** containing all the matching-related data and filters you want to use in your matchmaking function.
* An optional list of **roster**s to hold the resulting teams chosen by your matchmaking function.
* An optional set of **filters** that define player pools your matchmaking function will choose players from.
Your game backend is expected to maintain a connection, waiting for 'filled' match objects containing a roster of players. The Backend API also provides a return path for your game backend to return dedicated game server connection details (an 'assignment') to the game client, and to delete these 'assignments'.
### Compiling From Source
The easiest way to build Open Match is to use the [Makefile](Makefile). Please follow the instructions to [Setup Local Open Match Repository](#local-repository-setup).
[Docker](https://docs.docker.com/install/) and [Go 1.12+](https://golang.org/dl/) are also required.
To build all the artifacts of Open Match you can simply run the following commands.
```bash
# Downloads all the tools needed to build Open Match
make install-toolchain
# Generates protocol buffer code files
make all-protos
# Builds all the binaries
make all
# Builds all the images.
make build-images
```
Once built, you can use a command like `docker images` to see all the images that were built.
Before creating a pull request you can run `make local-cloud-build` to simulate a Cloud Build run to check for regressions.
The directory structure is a typical Go structure, so if you followed the repository setup above you should be able to work on this project within your IDE.
Lastly, this project uses go modules so you'll want to set `export GO111MODULE=on` in your `~/.bashrc`.
The [Build Queue](https://console.cloud.google.com/cloud-build/builds?project=open-match-build) runs against all PRs and requires membership to [open-match-discuss@googlegroups.com](https://groups.google.com/forum/#!forum/open-match-discuss).
### Matchmaking Function Orchestrator (MMFOrc)
The MMFOrc kicks off your custom matchmaking function (MMF) for every unique profile submitted to the Backend API in a match object. It also runs the Evaluator to resolve conflicts in case more than one of your profiles matched the same players.
The MMFOrc exists to orchestrate/schedule your **custom components**, running them as often as required to meet the demands of your game. MMFOrc runs in an endless loop, submitting MMFs and Evaluator jobs to Kubernetes.
### Matchmaking Logic (MMLogic) API
The MMLogic API provides a series of gRPC functions that act as a Matchmaking Function SDK. Much of the basic, boilerplate code for an MMF is the same regardless of what players you want to match together. The MMLogic API offers a gRPC interface for many common MMF tasks, such as:
1. Reading a profile from state storage.
1. Running filters on players in state storage. It automatically removes players on ignore lists as well!
1. Removing chosen players from consideration by other MMFs (by adding them to an ignore list). It does this automatically for you when writing your results!
1. Writing the matchmaking results to state storage.
1. (Optional, NYI) Exporting MMF stats for metrics collection.
More details about the available gRPC calls can be found in the [API Specification](api/protobuf-spec/messages.proto).
**Note**: using the MMLogic API is **optional**. It tries to simplify the development of MMFs, but if you want to take care of these tasks on your own, you can make few or no calls to the MMLogic API as long as your MMF still completes all the required tasks. Read the [Matchmaking Functions section](#matchmaking-functions-mmfs) for more details of what work an MMF must do.
### Evaluator
The Evaluator resolves conflicts when multiple MMFs select the same player(s).
The Evaluator is a component run by the Matchmaker Function Orchestrator (MMFOrc) after the matchmaker functions have been run, and some proposed results are available. The Evaluator looks at all the proposals, and if multiple proposals contain the same player(s), it breaks the tie. In many simple matchmaking setups with only a few game modes and well-tuned matchmaking functions, the Evaluator may functionally be a no-op or first-in-first-out algorithm. In complex matchmaking setups where, for example, a player can queue for multiple types of matches, the Evaluator provides the critical customizability to evaluate all available proposals and approve those that will be passed to your game servers.
Large-scale concurrent matchmaking functions are a complex topic, and users who wish to build them are encouraged to engage with the [Open Match community](https://github.com/GoogleCloudPlatform/open-match#get-involved) about patterns and best practices.
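As a sketch of the simplest approach described above, a first-in-first-out evaluator approves proposals in arrival order and drops any later proposal that reuses an already-approved player. This is purely illustrative; a real evaluator would read proposals from state storage rather than an in-memory list:

```python
def evaluate_fifo(proposals):
    """Approve proposals first-in-first-out, rejecting any that reuse a player.

    Each proposal is a (match_id, player_ids) pair; earlier proposals win ties.
    """
    approved, assigned_players = [], set()
    for match_id, player_ids in proposals:
        if assigned_players.isdisjoint(player_ids):
            approved.append(match_id)
            assigned_players.update(player_ids)
        # Otherwise the proposal conflicts with an earlier one and is dropped;
        # its players stay available for the next MMF run.
    return approved

proposals = [
    ("match-a", {"p1", "p2"}),
    ("match-b", {"p2", "p3"}),  # conflicts with match-a over p2
    ("match-c", {"p4", "p5"}),
]
print(evaluate_fifo(proposals))  # ['match-a', 'match-c']
```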
### Matchmaking Functions (MMFs)
Matchmaking Functions (MMFs) are run by the Matchmaker Function Orchestrator (MMFOrc) &mdash; once per profile it sees in state storage. The MMF is run as a Job in Kubernetes, and has full access to read and write from state storage. At a high level, the encouraged pattern is to write a MMF in whatever language you are comfortable in that can do the following things:
- [x] Be packaged in a (Linux) Docker container.
- [x] Read/write from the Open Match state storage &mdash; Open Match ships with Redis as the default state storage.
- [x] Read a profile you wrote to state storage using the Backend API.
- [x] Select from the player data you wrote to state storage using the Frontend API. It must respect all the ignore lists defined in the matchmaker config.
- [ ] Run your custom logic to try to find a match.
- [x] Write the match object it creates to state storage at a specified key.
- [x] Remove the players it selected from consideration by other MMFs by adding them to the appropriate ignore list.
- [x] Notify the MMFOrc of completion.
- [x] (Optional, but recommended) Export stats for metrics collection.
**Open Match offers [matchmaking logic API](#matchmaking-logic-mmlogic-api) calls for handling the checked items, as long as you are able to format your input and output in the data schema Open Match expects (defined in the [protobuf messages](api/protobuf-spec/messages.proto)).** You can do this work yourself if you don't want to or can't use the data schema Open Match is looking for. However, the data formats expected by Open Match are pretty generalized and will work with most common matchmaking scenarios and game types. If you have questions about how to fit your data into the formats specified, feel free to ask us in the [Slack or mailing group](#get-involved).
Example MMFs are provided in these languages:
- [C#](examples/functions/csharp/simple) (doesn't use the MMLogic API)
- [Python3](examples/functions/python3/mmlogic-simple) (MMLogic API enabled)
- [PHP](examples/functions/php/mmlogic-simple) (MMLogic API enabled)
- [golang](examples/functions/golang/manual-simple) (doesn't use the MMLogic API)
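The checklist above can be sketched end to end roughly as follows. Everything here is illustrative: the `state` dictionary, field names, and helper logic are hypothetical stand-ins for the MMLogic API or Redis calls a real MMF would make:

```python
def run_mmf(profile, state):
    """Hypothetical MMF skeleton: read a profile, pick players, write results.

    `state` stands in for Open Match's state storage (Redis in practice).
    """
    # 1. Read the profile that the Backend API wrote to state storage.
    pool_filters = profile["filters"]  # e.g. {"mmr": (1100, 1400)}
    team_size = profile["team_size"]

    # 2. Select eligible players, respecting the ignore list.
    eligible = [
        pid for pid, attrs in state["players"].items()
        if pid not in state["ignore_list"]
        and all(lo <= attrs.get(k, 0) <= hi for k, (lo, hi) in pool_filters.items())
    ]
    if len(eligible) < team_size:
        return None  # 3. Not enough players: no match this run.

    roster = eligible[:team_size]
    # 4. Write the match object, and add the chosen players to the ignore
    #    list so other MMFs don't select them.
    state["matches"][profile["id"]] = {"roster": roster}
    state["ignore_list"].update(roster)
    return roster

state = {
    "players": {"p1": {"mmr": 1200}, "p2": {"mmr": 1300}, "p3": {"mmr": 900}},
    "ignore_list": set(),
    "matches": {},
}
profile = {"id": "profile-1", "filters": {"mmr": (1100, 1400)}, "team_size": 2}
result = run_mmf(profile, state)
print(result)  # ['p1', 'p2'] -- p3 fails the mmr filter
```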
## Open Source Software integrations
### Structured logging
Logging for Open Match uses the [Golang logrus module](https://github.com/sirupsen/logrus) to provide structured logs. Logs are output to `stdout` in each component, as expected by Docker and Kubernetes. Level and format are configurable via config/matchmaker_config.json. If you have a specific log aggregator as your final destination, we recommend you have a look at the logrus documentation as there is probably a log formatter that plays nicely with your stack.
### Instrumentation for metrics
Open Match uses [OpenCensus](https://opencensus.io/) for metrics instrumentation. The [gRPC](https://grpc.io/) integrations are built-in, and Golang redigo module integrations are incoming, but [haven't been merged into the official repo](https://github.com/opencensus-integrations/redigo/pull/1). All of the core components expose HTTP `/metrics` endpoints on the port defined in `config/matchmaker_config.json` (default: 9555) for Prometheus to scrape. If you would like to export to a different metrics aggregation platform, we suggest you have a look at the OpenCensus documentation &mdash; there may be one written for you already, and switching to it may be as simple as changing a few lines of code.
**Note:** A standard for instrumentation of MMFs is planned.
### Redis setup
By default, Open Match expects you to run Redis *somewhere*. Connection information can be put in the config file (`matchmaker_config.json`) for any Redis instance reachable from the [Kubernetes namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). By default, Open Match sensibly runs in the Kubernetes `default` namespace. In most instances, we expect users will run a copy of Redis in a pod in Kubernetes, with a service pointing to it.
* HA configurations for Redis aren't implemented by the provided Kubernetes resource definition files, but Open Match expects the Redis service to be named `redis`, which provides an easier path to multi-instance deployments.
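As an illustration of reading the Redis connection details from a JSON config file, here is a minimal sketch. The key names shown are assumptions for illustration; consult the actual `config/matchmaker_config.json` in the repository for the real layout:

```python
import json

# Illustrative config snippet; the real matchmaker_config.json has more keys.
config_text = """
{
  "redis": {
    "hostname": "redis",
    "port": 6379
  }
}
"""

config = json.loads(config_text)
redis_host = config["redis"]["hostname"]  # matches the `redis` Kubernetes service name
redis_port = config["redis"]["port"]
print(f"{redis_host}:{redis_port}")  # redis:6379
```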
## Additional examples
**Note:** These examples will be expanded on in future releases.
The following examples of how to call the APIs are provided in the repository. Both have `Dockerfile` and `cloudbuild.yaml` files in their respective directories:
* `test/cmd/frontendclient/main.go` acts as a client to the Frontend API, putting a player into the queue with simulated latencies from major metropolitan cities and a couple of other matchmaking attributes. It then waits for you to manually put a value in Redis to simulate a server connection string being written using the backend API 'CreateAssignments' call, and displays that value on stdout for you to verify.
* `examples/backendclient/main.go` calls the Backend API and passes in the profile found in `backendstub/profiles/testprofile.json` to the `ListMatches` API endpoint, then continually prints the results until you exit, or there are insufficient players to make a match based on the profile.
## Usage
Documentation and usage guides on how to set up and customize Open Match.
### Precompiled container images
Once we reach a 1.0 release, we plan to produce publicly available (Linux) Docker container images of major releases in a public image registry. Until then, refer to the 'Compiling from source' section below.
### Compiling from source
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild.yaml` files for each component in the corresponding `cmd/<COMPONENT>` directories.
All the core components for Open Match are written in Golang and use the [Dockerfile multistage builder pattern](https://docs.docker.com/develop/develop-images/multistage-build/). This pattern uses intermediate Docker containers as a Golang build environment while producing lightweight, minimized container images as final build artifacts. When the project is ready for production, we will modify the `Dockerfile`s to uncomment the last build stage. Although this pattern is great for production container images, it removes most of the utilities required to troubleshoot issues during development.
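A multistage `Dockerfile` of the kind described follows this general shape. The paths, base images, and binary names below are placeholders for illustration, not the repository's actual files:

```dockerfile
# Stage 1: full Golang build environment (hypothetical paths).
FROM golang:1.12 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /component ./cmd/component

# Stage 2: minimal runtime image; only the compiled binary is copied over.
# During development this stage can be commented out to keep debug utilities.
FROM gcr.io/distroless/static
COPY --from=builder /component /component
ENTRYPOINT ["/component"]
```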
## Configuration
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration. To this end, there is a single centralized config file located in the `<REPO_ROOT>/config/` which is symlinked to each component's subdirectory for convenience when building locally. When `docker build`ing the component container images, the Dockerfile copies the centralized config file into the component directory.
We plan to replace this with a Kubernetes-managed config with dynamic reloading, please join the discussion in [Issue #42](issues/42).
### Guides
* [Production guide](./docs/production.md) &mdash; lots of best practices to be written here before the 1.0 release; right now it's a scattered collection of notes. **WIP**
* [Development guide](./docs/development.md)
### Reference
* [FAQ](./docs/faq.md)
## Get involved
* [Slack Channel](https://open-match.slack.com/) ([Signup](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))
* [File an Issue](https://github.com/GoogleCloudPlatform/open-match/issues/new)
* [Mailing list](https://groups.google.com/forum/#!forum/open-match-discuss)
* [Managed Service Survey](https://goo.gl/forms/cbrFTNCmy9rItSv72)
## Code of Conduct
Participation in this project comes under the [Contributor Covenant Code of Conduct](code-of-conduct.md)
## Development and Contribution
Please read the [contributing](CONTRIBUTING.md) guide for directions on submitting Pull Requests to Open Match.
See the [Development guide](docs/development.md) for documentation on developing and building Open Match from source.
The [Release Process](docs/governance/release_process.md) documentation describes the project's upcoming release calendar and release process.
Open Match is in active development - we would love your help in shaping its future!
## Documentation
For more information on the technical underpinnings of Open Match you can refer to the [docs/](docs/) directory.
## This all sounds great, but can you explain Docker and/or Kubernetes to me?
### Docker
- [Docker's official "Getting Started" guide](https://docs.docker.com/get-started/)
- [Katacoda's free, interactive Docker course](https://www.katacoda.com/courses/docker)
### Kubernetes
- [You should totally read this comic, and interactive tutorial](https://cloud.google.com/kubernetes-engine/kubernetes-comic/)
- [Katacoda's free, interactive Kubernetes course](https://www.katacoda.com/courses/kubernetes)
## License
Apache 2.0
# Planned improvements
See the [provisional roadmap](docs/roadmap.md) for more information on upcoming releases.
## Documentation
- [ ] “Writing your first matchmaker” getting started guide will be included in an upcoming version.
- [ ] Documentation for using the example customizable components and the `backendstub` and `frontendstub` applications to do an end-to-end (e2e) test will be written. This all works now, but needs to be written up.
- [ ] Documentation on release process and release calendar.
## State storage
- [X] All state storage operations should be isolated from core components into the `statestorage/` modules. This is necessary precursor work to enabling Open Match state storage to use software other than Redis.
- [X] [The Redis deployment should have an example HA configuration](https://github.com/GoogleCloudPlatform/open-match/issues/41)
- [X] Redis watch should be unified to watch a hash and stream updates. The code for this is written and validated but not committed yet.
- [ ] We don't want to support two redis watcher code paths, but we will until golang protobuf reflection is a bit more usable. [Design doc](https://docs.google.com/document/d/19kfhro7-CnBdFqFk7l4_HmwaH2JT_Rhw5-2FLWLEGGk/edit#heading=h.q3iwtwhfujjx), [github issue](https://github.com/golang/protobuf/issues/364)
- [X] Player/Group records generated when a client enters the matchmaking pool need to be removed after a certain amount of time with no activity. When using Redis, this will be implemented as an expiration on the player record.
## Instrumentation / Metrics / Analytics
- [ ] Instrumentation of MMFs is in the planning stages. Since MMFs are by design meant to be completely customizable (to the point of allowing any process that can be packaged in a Docker container), metrics/stats will need to have an expected format and formalized outgoing pathway. Currently the thought is that metrics should be written to a particular key in state storage in a format compatible with OpenCensus, and will be collected, aggregated, and exported to Prometheus using another process.
- [ ] [OpenCensus tracing](https://opencensus.io/core-concepts/tracing/) will be implemented in an upcoming version. This is likely going to require knative.
- [X] Read logrus logging configuration from matchmaker_config.json.
## Security
- [ ] The Kubernetes service account used by the MMFOrc should be updated to have min required permissions. [Issue 52](issues/52)
## Kubernetes
- [ ] Autoscaling isn't turned on for the Frontend or Backend API Kubernetes deployments by default.
- [ ] A [Helm](https://helm.sh/) chart to stand up Open Match may be provided in an upcoming version. For now just use the [installation YAMLs](./install/yaml).
- [ ] A knative-based implementation of MMFs is in the planning stages.
## CI / CD / Build
- [ ] We plan to host 'official' docker images for all release versions of the core components in publicly available docker registries soon. This is tracked in [Issue #45](issues/45) and is blocked by [Issue 42](issues/42).
- [ ] CI/CD for this repo and the associated status tags are planned.
- [ ] Golang unit tests will be shipped in an upcoming version.
- [ ] A full load-testing and e2e testing suite will be included in an upcoming version.
## Will not Implement
- [X] Defining multiple images inside a profile for the purposes of experimentation adds another layer of complexity into profiles. This can instead be handled outside of Open Match with custom match functions, in collaboration with a director (the component that calls the Backend API to schedule matchmaking).
### Special Thanks
- Thanks to https://jbt.github.io/markdown-editor/ for help in marking this document down.

This directory contains the API specification files for Open Match.
* [Protobuf .proto files for all APIs](./protobuf-spec/)
These proto files are copied to the container image during `docker build` for the Open Match core components. The `Dockerfiles` handle the compilation for you transparently, and copy the resulting `SPEC.pb.go` files to the appropriate place in your final container image.
References:
* [gRPC](https://grpc.io/)
* [Language Guide (proto3)](https://developers.google.com/protocol-buffers/docs/proto3)
Manual gRPC compilation command, from the directory containing the proto:
```bash
protoc -I . ./<filename>.proto --go_out=plugins=grpc:.
```
If you want to regenerate the golang gRPC modules (for local Open Match core component development, for example), the `protoc_go.sh` file in this directory may be of use to you!

## REST compatibility
Follow the guidelines at https://cloud.google.com/endpoints/docs/grpc/transcoding to keep the gRPC service definitions friendly to REST transcoding. An excerpt:
"Transcoding involves mapping HTTP/JSON requests and their parameters to gRPC
methods and their parameters and return types (we'll look at exactly how you
do this in the following sections). Because of this, while it's possible to
map an HTTP/JSON request to any arbitrary API method, it's simplest and most
intuitive to do so if the gRPC API itself is structured in a
resource-oriented way, just like a traditional HTTP REST API. In other
words, the API service should be designed so that it uses a small number of
standard methods (corresponding to HTTP verbs like GET, PUT, and so on) that
operate on the service's resources (and collections of resources, which are
themselves a type of resource).
These standard methods are List, Get, Create, Update, and Delete."
"Transcoding involves mapping HTTP/JSON requests and their parameters to gRPC methods and their parameters and return types (we'll look at exactly how you do this in the following sections). Because of this, while it's possible to map an HTTP/JSON request to any arbitrary API method, it's simplest and most intuitive to do so if the gRPC API itself is structured in a resource-oriented way, just like a traditional HTTP REST API. In other words, the API service should be designed so that it uses a small number of standard methods (corresponding to HTTP verbs like GET, PUT, and so on) that operate on the service's resources (and collections of resources, which are themselves a type of resource). These standard methods are List, Get, Create, Update, and Delete."
It is for these reasons we don't have gRPC calls that support bi-directional streaming in Open Match.
## REST API Usage
The Open Match gateway proxy transcodes any REST calls to its underlying gRPC service. Follow the [examples](https://cloud.google.com/endpoints/docs/grpc-service-config/reference/rpc/google.api#httprule) for further details.
A typical REST call to Open Match backend's `CreateAssignments` service via HTTP POST request
```
/v1/backend/assignments/123? \
assignment.rosters.name=foo&assignment.rosters.players.id=1&assignment.rosters.players.id=2
```
is equivalent to
```go
CreateAssignmentsRequest(
Assignments(
name: '123',
rosters: [
Roster(name: 'foo', [Player(id: 1), Player(id: 2)])
]
)
)
```

// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = 'proto3';
package api;
option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
option go_package = "internal/pb";
// The protobuf messages sent in the gRPC calls are defined in 'messages.proto'.
import 'api/protobuf-spec/messages.proto';
import 'google/api/annotations.proto';
message MmfConfig {
enum Type {
GRPC = 0;
REST = 1; // REST support will be added in the future.
}
string name = 1; // Developer-chosen, human-readable string. (Optional)
string host = 2; // Host or DNS name for the service providing this MMF. Must be resolvable by the backend API.
int32 port = 3; // Port number for service providing this MMF.
Type type = 4; // Type of MMF call
}
message CreateMatchRequest {
messages.MatchObject match = 1;
MmfConfig mmfcfg = 2;
}
message CreateMatchResponse {
messages.MatchObject match = 1;
}
message ListMatchesRequest {
messages.MatchObject match = 1;
MmfConfig mmfcfg = 2;
}
message ListMatchesResponse {
messages.MatchObject match = 1;
}
message DeleteMatchRequest {
messages.MatchObject match = 1;
}
message DeleteMatchResponse {
}
message CreateAssignmentsRequest {
messages.Assignments assignment = 1;
}
message CreateAssignmentsResponse {
}
message DeleteAssignmentsRequest {
messages.Roster roster = 1;
}
message DeleteAssignmentsResponse {
}
service Backend {
// Calls to ask the matchmaker to run a matchmaking function.
@ -20,37 +86,65 @@ service Backend {
// - error. Empty if no error was encountered
// - rosters, if you choose to fill them in your MMF. (Recommended)
// - pools, if you used the MMLogicAPI in your MMF. (Recommended, and provides stats)
rpc CreateMatch(CreateMatchRequest) returns (CreateMatchResponse) {
option (google.api.http) = {
put: "/v1/backend/matches"
body: "*"
};
}
// Continually run MMF and stream MatchObjects that fit this profile until
// the backend client closes the connection. Same inputs/outputs as CreateMatch.
rpc ListMatches(ListMatchesRequest) returns (stream ListMatchesResponse) {
option (google.api.http).get = "/v1/backend/matches/{match.id}/{match.properties}";
}
// Delete a MatchObject from state storage manually. (MatchObjects in state
// storage will also automatically expire after a while, defined in the config)
// INPUT: MatchObject message with the 'id' field populated.
// (All other fields are ignored.)
rpc DeleteMatch(DeleteMatchRequest) returns (DeleteMatchResponse) {
option (google.api.http) = {
delete: "/v1/backend/matches"
body: "*"
additional_bindings {
delete: "/v1/backend/matches/{match.id}"
}
};
}
// Calls for communication of connection info to players.
// Write the connection info for the list of players in the
// Assignments.messages.Rosters to state storage. The Frontend API is
// responsible for sending anything sent here to the game clients.
// Sending a player to this function kicks off a process that removes
// the player from future matchmaking functions by adding them to the
// 'deindexed' player list and then deleting their player ID from state storage
// indexes.
// INPUT: Assignments message with these fields populated:
// - assignment, anything you write to this string is sent to Frontend API
// - rosters. You can send any number of rosters, containing any number of
// player messages. All players from all rosters will be sent the assignment.
// The only field in the Roster's Player messages used by CreateAssignments is
// the id field. All other fields in the Player messages are silently ignored.
rpc CreateAssignments(CreateAssignmentsRequest) returns (CreateAssignmentsResponse) {
option (google.api.http) = {
put: "/v1/backend/assignments"
body: "*"
};
}
// Remove DGS connection info from state storage for players.
// INPUT: Roster message with the 'players' field populated.
// The only field in the Roster's Player messages used by
// DeleteAssignments is the 'id' field. All others are silently ignored. If
// you need to delete multiple rosters, make multiple calls.
rpc DeleteAssignments(DeleteAssignmentsRequest) returns (DeleteAssignmentsResponse) {
option (google.api.http) = {
delete: "/v1/backend/assignments"
body: "*"
additional_bindings {
delete: "/v1/backend/assignments"
}
};
}
}

View File

@ -0,0 +1,535 @@
{
"swagger": "2.0",
"info": {
"title": "api/protobuf-spec/backend.proto",
"version": "version not set"
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/backend/assignments": {
"delete": {
"summary": "Remove DGS connection info from state storage for players.\nINPUT: Roster message with the 'players' field populated.\n The only field in the Roster's Player messages used by\n DeleteAssignments is the 'id' field. All others are silently ignored. If\n you need to delete multiple rosters, make multiple calls.",
"operationId": "DeleteAssignments2",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiDeleteAssignmentsResponse"
}
}
},
"parameters": [
{
"name": "roster.name",
"in": "query",
"required": false,
"type": "string"
}
],
"tags": [
"Backend"
]
},
"put": {
"summary": "Write the connection info for the list of players in the\nAssignments.messages.Rosters to state storage. The Frontend API is\nresponsible for sending anything sent here to the game clients.\nSending a player to this function kicks off a process that removes\nthe player from future matchmaking functions by adding them to the\n'deindexed' player list and then deleting their player ID from state storage\nindexes.\nINPUT: Assignments message with these fields populated:\n - assignment, anything you write to this string is sent to Frontend API\n - rosters. You can send any number of rosters, containing any number of\n player messages. All players from all rosters will be sent the assignment.\n The only field in the Roster's Player messages used by CreateAssignments is\n the id field. All other fields in the Player messages are silently ignored.",
"operationId": "CreateAssignments",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiCreateAssignmentsResponse"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiCreateAssignmentsRequest"
}
}
],
"tags": [
"Backend"
]
}
},
"/v1/backend/matches": {
"delete": {
"summary": "Delete a MatchObject from state storage manually. (MatchObjects in state\nstorage will also automatically expire after a while, defined in the config)\nINPUT: MatchObject message with the 'id' field populated.\n(All other fields are ignored.)",
"operationId": "DeleteMatch",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiDeleteMatchResponse"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiDeleteMatchRequest"
}
}
],
"tags": [
"Backend"
]
},
"put": {
"summary": "Run MMF once. Return a matchobject that fits this profile.\nINPUT: MatchObject message with these fields populated:\n - id\n - properties\n - [optional] roster, any fields you fill are available to your MMF.\n - [optional] pools, any fields you fill are available to your MMF.\nOUTPUT: MatchObject message with these fields populated:\n - id\n - properties\n - error. Empty if no error was encountered\n - rosters, if you choose to fill them in your MMF. (Recommended)\n - pools, if you used the MMLogicAPI in your MMF. (Recommended, and provides stats)",
"operationId": "CreateMatch",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiCreateMatchResponse"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiCreateMatchRequest"
}
}
],
"tags": [
"Backend"
]
}
},
"/v1/backend/matches/{match.id}": {
"delete": {
"summary": "Delete a MatchObject from state storage manually. (MatchObjects in state\nstorage will also automatically expire after a while, defined in the config)\nINPUT: MatchObject message with the 'id' field populated.\n(All other fields are ignored.)",
"operationId": "DeleteMatch2",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiDeleteMatchResponse"
}
}
},
"parameters": [
{
"name": "match.id",
"in": "path",
"required": true,
"type": "string"
},
{
"name": "match.properties",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "match.error",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "match.status",
"in": "query",
"required": false,
"type": "string"
}
],
"tags": [
"Backend"
]
}
},
"/v1/backend/matches/{match.id}/{match.properties}": {
"get": {
"summary": "Continually run MMF and stream MatchObjects that fit this profile until\nthe backend client closes the connection. Same inputs/outputs as CreateMatch.",
"operationId": "ListMatches",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/apiListMatchesResponse"
}
}
},
"parameters": [
{
"name": "match.id",
"in": "path",
"required": true,
"type": "string"
},
{
"name": "match.properties",
"in": "path",
"required": true,
"type": "string"
},
{
"name": "match.error",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "match.status",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "mmfcfg.name",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "mmfcfg.host",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "mmfcfg.port",
"in": "query",
"required": false,
"type": "integer",
"format": "int32"
},
{
"name": "mmfcfg.type",
"in": "query",
"required": false,
"type": "string",
"enum": [
"GRPC",
"REST"
],
"default": "GRPC"
}
],
"tags": [
"Backend"
]
}
}
},
"definitions": {
"PlayerAttribute": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"type": "string",
"format": "int64"
}
}
},
"apiCreateAssignmentsRequest": {
"type": "object",
"properties": {
"assignment": {
"$ref": "#/definitions/messagesAssignments"
}
}
},
"apiCreateAssignmentsResponse": {
"type": "object"
},
"apiCreateMatchRequest": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/messagesMatchObject"
},
"mmfcfg": {
"$ref": "#/definitions/apiMmfConfig"
}
}
},
"apiCreateMatchResponse": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/messagesMatchObject"
}
}
},
"apiDeleteAssignmentsRequest": {
"type": "object",
"properties": {
"roster": {
"$ref": "#/definitions/messagesRoster"
}
}
},
"apiDeleteAssignmentsResponse": {
"type": "object"
},
"apiDeleteMatchRequest": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/messagesMatchObject"
}
}
},
"apiDeleteMatchResponse": {
"type": "object"
},
"apiListMatchesResponse": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/messagesMatchObject"
}
}
},
"apiMmfConfig": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"host": {
"type": "string"
},
"port": {
"type": "integer",
"format": "int32"
},
"type": {
"$ref": "#/definitions/apiMmfConfigType"
}
}
},
"apiMmfConfigType": {
"type": "string",
"enum": [
"GRPC",
"REST"
],
"default": "GRPC"
},
"messagesAssignments": {
"type": "object",
"properties": {
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesRoster"
}
},
"assignment": {
"type": "string"
}
}
},
"messagesFilter": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"attribute": {
"type": "string"
},
"maxv": {
"type": "string",
"format": "int64"
},
"minv": {
"type": "string",
"format": "int64"
},
"stats": {
"$ref": "#/definitions/messagesStats"
}
},
"description": "A 'hard' filter to apply to the player pool."
},
"messagesMatchObject": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"properties": {
"type": "string"
},
"error": {
"type": "string"
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesRoster"
}
},
"pools": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesPlayerPool"
}
},
"status": {
"type": "string"
}
},
"description": "Open Match's internal representation and wire protocol format for \"MatchObjects\".\nIn order to request a match using the Backend API, your backend code should generate\na new MatchObject with an ID and properties filled in (for more details about valid\nvalues for these fields, see the documentation). Open Match then sends the Match\nObject through to your matchmaking function, where you add players to 'rosters' and\nstore any schemaless data you wish in the 'properties' field. The MatchObject\nis then sent, populated, out through the Backend API to your backend code.\n\nMatchObjects contain a number of fields, but many gRPC calls that take a\nMatchObject as input only require a few of them to be filled in. Check the\ngRPC function in question for more details."
},
"messagesPlayer": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"properties": {
"type": "string"
},
"pool": {
"type": "string"
},
"attributes": {
"type": "array",
"items": {
"$ref": "#/definitions/PlayerAttribute"
}
},
"assignment": {
"type": "string"
},
"status": {
"type": "string"
},
"error": {
"type": "string"
}
},
"description": "Open Match's internal representation and wire protocol format for \"Players\".\nIn order to enter matchmaking using the Frontend API, your client code should generate\na consistent Player message (same result for each client every time they launch) with an ID and\nproperties filled in (for more details about valid values for these fields,\nsee the documentation).\nPlayers contain a number of fields, but the gRPC calls that take a\nPlayer as input only require a few of them to be filled in. Check the\ngRPC function in question for more details."
},
"messagesPlayerPool": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"filters": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesFilter"
}
},
"roster": {
"$ref": "#/definitions/messagesRoster"
},
"stats": {
"$ref": "#/definitions/messagesStats"
}
},
"description": "PlayerPools are defined by a set of 'hard' filters, and can be filled in\nwith the players that match those filters.\n\nPlayerPools contain a number of fields, but many gRPC calls that take a\nPlayerPool as input only require a few of them to be filled in. Check the\ngRPC function in question for more details."
},
"messagesRoster": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"players": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesPlayer"
}
}
},
"description": "Data structure to hold a list of players in a match."
},
"messagesStats": {
"type": "object",
"properties": {
"count": {
"type": "string",
"format": "int64"
},
"elapsed": {
"type": "number",
"format": "double"
}
},
"title": "Holds statistics"
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string"
},
"value": {
"type": "string",
"format": "byte"
}
}
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"apiListMatchesResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/apiListMatchesResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of apiListMatchesResponse"
}
}
}

View File

@ -1,7 +1,45 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = 'proto3';
package api;
option go_package = "internal/pb";
import 'api/protobuf-spec/messages.proto';
import 'google/api/annotations.proto';
message CreatePlayerRequest {
messages.Player player = 1;
}
message CreatePlayerResponse {
}
message DeletePlayerRequest {
messages.Player player = 1;
}
message DeletePlayerResponse {
}
message GetUpdatesRequest {
messages.Player player = 1;
}
message GetUpdatesResponse {
messages.Player player = 1;
}
service Frontend {
// Call to start matchmaking for a player
@ -15,7 +53,12 @@ service Frontend {
// - properties
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
rpc CreatePlayer(CreatePlayerRequest) returns (CreatePlayerResponse) {
option (google.api.http) = {
put: "/v1/frontend/players"
body: "*"
};
}
// Call to stop matchmaking for a player
@ -32,7 +75,9 @@ service Frontend {
// INPUT: Player message with the 'id' field populated.
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
rpc DeletePlayer(DeletePlayerRequest) returns (DeletePlayerResponse) {
option (google.api.http).delete = "/v1/frontend/players/{player.id}";
}
// Calls to access matchmaking results for a player
@ -61,5 +106,7 @@ service Frontend {
// generate load on OM until you do!
// NOTE: Just bear in mind that every update will send egress traffic from
// Open Match to game clients! Frugality is recommended.
rpc GetUpdates(GetUpdatesRequest) returns (stream GetUpdatesResponse) {
option (google.api.http).get = "/v1/frontend/players/{player.id}";
}
}

View File

@ -0,0 +1,272 @@
{
"swagger": "2.0",
"info": {
"title": "api/protobuf-spec/frontend.proto",
"version": "version not set"
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/frontend/players": {
"put": {
"summary": "CreatePlayer will put the player in state storage, and then look\nthrough the 'properties' field for the attributes you have defined as\nindices in your matchmaker config. If the attributes exist and are valid\nintegers, they will be indexed.\nINPUT: Player message with these fields populated:\n - id\n - properties\nOUTPUT: Result message denoting success or failure (and an error if\nnecessary)",
"operationId": "CreatePlayer",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiCreatePlayerResponse"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiCreatePlayerRequest"
}
}
],
"tags": [
"Frontend"
]
}
},
"/v1/frontend/players/{player.id}": {
"get": {
"summary": "GetUpdates streams matchmaking results from Open Match for the\nprovided player ID.\nINPUT: Player message with the 'id' field populated.\nOUTPUT: a stream of player objects with one or more of the following\nfields populated, if an update to that field is seen in state storage:\n - 'assignment': string that usually contains game server connection information.\n - 'status': string to communicate current matchmaking status to the client.\n - 'error': string to pass along error information to the client.",
"description": "During normal operation, the expectation is that the 'assignment' field\nwill be updated by a Backend process calling the 'CreateAssignments' Backend API\nendpoint. 'Status' and 'Error' are free for developers to use as they see fit. \nEven if you had multiple players enter a matchmaking request as a group, the\nBackend API 'CreateAssignments' call will write the results to state\nstorage separately under each player's ID. OM expects you to make all game\nclients 'GetUpdates' with their own ID from the Frontend API to get\ntheir results.\n\nNOTE: This call generates a small amount of load on the Frontend API and state\n storage while watching the player record for updates. You are expected\n to close the stream from your client after receiving your matchmaking\n results (or a reasonable timeout), or you will continue to\n generate load on OM until you do!\nNOTE: Just bear in mind that every update will send egress traffic from\n Open Match to game clients! Frugality is recommended.",
"operationId": "GetUpdates",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/apiGetUpdatesResponse"
}
}
},
"parameters": [
{
"name": "player.id",
"in": "path",
"required": true,
"type": "string"
},
{
"name": "player.properties",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "player.pool",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "player.assignment",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "player.status",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "player.error",
"in": "query",
"required": false,
"type": "string"
}
],
"tags": [
"Frontend"
]
},
"delete": {
"summary": "DeletePlayer removes the player from state storage by doing the\nfollowing:\n 1) Delete player from configured indices. This effectively removes the\n player from matchmaking when using recommended MMF patterns.\n Everything after this is just cleanup to save state storage space.\n 2) 'Lazily' delete the player's state storage record. This is kicked\n off in the background and may take some time to complete.\n 3) 'Lazily' delete the player's metadata indices (like the timestamp when\n they called CreatePlayer, and the last time the record was accessed). This\n is also kicked off in the background and may take some time to complete.\nINPUT: Player message with the 'id' field populated.\nOUTPUT: Result message denoting success or failure (and an error if\nnecessary)",
"operationId": "DeletePlayer",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiDeletePlayerResponse"
}
}
},
"parameters": [
{
"name": "player.id",
"in": "path",
"required": true,
"type": "string"
},
{
"name": "player.properties",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "player.pool",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "player.assignment",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "player.status",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "player.error",
"in": "query",
"required": false,
"type": "string"
}
],
"tags": [
"Frontend"
]
}
}
},
"definitions": {
"PlayerAttribute": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"type": "string",
"format": "int64"
}
}
},
"apiCreatePlayerRequest": {
"type": "object",
"properties": {
"player": {
"$ref": "#/definitions/messagesPlayer"
}
}
},
"apiCreatePlayerResponse": {
"type": "object"
},
"apiDeletePlayerResponse": {
"type": "object"
},
"apiGetUpdatesResponse": {
"type": "object",
"properties": {
"player": {
"$ref": "#/definitions/messagesPlayer"
}
}
},
"messagesPlayer": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"properties": {
"type": "string"
},
"pool": {
"type": "string"
},
"attributes": {
"type": "array",
"items": {
"$ref": "#/definitions/PlayerAttribute"
}
},
"assignment": {
"type": "string"
},
"status": {
"type": "string"
},
"error": {
"type": "string"
}
},
"description": "Open Match's internal representation and wire protocol format for \"Players\".\nIn order to enter matchmaking using the Frontend API, your client code should generate\na consistent Player message (same result for each client every time they launch) with an ID and\nproperties filled in (for more details about valid values for these fields,\nsee the documentation).\nPlayers contain a number of fields, but the gRPC calls that take a\nPlayer as input only require a few of them to be filled in. Check the\ngRPC function in question for more details."
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string"
},
"value": {
"type": "string",
"format": "byte"
}
}
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"apiGetUpdatesResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/apiGetUpdatesResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of apiGetUpdatesResponse"
}
}
}

View File

@ -0,0 +1,48 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = 'proto3';
package api;
option go_package = "internal/pb";
// The protobuf messages sent in the gRPC calls are defined in 'messages.proto'.
import 'api/protobuf-spec/messages.proto';
import 'google/api/annotations.proto';
// Request message sent to the MMF.
message RunRequest {
string profile_id = 1; // Developer-chosen profile name; used as the state storage key for the match object.
string proposal_id = 2; // The ID under which the generated proposal should be stored.
string result_id = 3; // Final result ID. The MMF needs to know this in case of errors where proposal generation can be short-circuited.
messages.MatchObject match_object = 4; // The match object containing the details of the match to be generated.
string timestamp = 5;
}
message RunResponse {
}
// The MMF proto defines the API for running MMFs as long-lived, 'serving'
// functions inside the Kubernetes cluster.
service MatchFunction {
// The assumption is that there will be one service for each MMF that is
// being served. Build your MMF in the appropriate serving harness, deploy it
// to the K8s cluster with a unique service name, then connect to that service
// and call 'Run()' to execute the function.
rpc Run(RunRequest) returns (RunResponse) {
option (google.api.http) = {
put: "/v1/function"
body: "*"
};
}
}

View File

@ -0,0 +1,217 @@
{
"swagger": "2.0",
"info": {
"title": "api/protobuf-spec/matchfunction.proto",
"version": "version not set"
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/function": {
"put": {
"summary": "The assumption is that there will be one service for each MMF that is\nbeing served. Build your MMF in the appropriate serving harness, deploy it\nto the K8s cluster with a unique service name, then connect to that service\nand call 'Run()' to execute the function.",
"operationId": "Run",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiRunResponse"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiRunRequest"
}
}
],
"tags": [
"MatchFunction"
]
}
}
},
"definitions": {
"PlayerAttribute": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"type": "string",
"format": "int64"
}
}
},
"apiRunRequest": {
"type": "object",
"properties": {
"profile_id": {
"type": "string"
},
"proposal_id": {
"type": "string"
},
"result_id": {
"type": "string"
},
"match_object": {
"$ref": "#/definitions/messagesMatchObject"
},
"timestamp": {
"type": "string"
}
},
"description": "Request message sent to the MMF."
},
"apiRunResponse": {
"type": "object"
},
"messagesFilter": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"attribute": {
"type": "string"
},
"maxv": {
"type": "string",
"format": "int64"
},
"minv": {
"type": "string",
"format": "int64"
},
"stats": {
"$ref": "#/definitions/messagesStats"
}
},
"description": "A 'hard' filter to apply to the player pool."
},
"messagesMatchObject": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"properties": {
"type": "string"
},
"error": {
"type": "string"
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesRoster"
}
},
"pools": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesPlayerPool"
}
},
"status": {
"type": "string"
}
},
"description": "Open Match's internal representation and wire protocol format for \"MatchObjects\".\nIn order to request a match using the Backend API, your backend code should generate\na new MatchObject with an ID and properties filled in (for more details about valid\nvalues for these fields, see the documentation). Open Match then sends the Match\nObject through to your matchmaking function, where you add players to 'rosters' and\nstore any schemaless data you wish in the 'properties' field. The MatchObject\nis then sent, populated, out through the Backend API to your backend code.\n\nMatchObjects contain a number of fields, but many gRPC calls that take a\nMatchObject as input only require a few of them to be filled in. Check the\ngRPC function in question for more details."
},
"messagesPlayer": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"properties": {
"type": "string"
},
"pool": {
"type": "string"
},
"attributes": {
"type": "array",
"items": {
"$ref": "#/definitions/PlayerAttribute"
}
},
"assignment": {
"type": "string"
},
"status": {
"type": "string"
},
"error": {
"type": "string"
}
},
"description": "Open Match's internal representation and wire protocol format for \"Players\".\nIn order to enter matchmaking using the Frontend API, your client code should generate\na consistent (same result for each client every time they launch) with an ID and\nproperties filled in (for more details about valid values for these fields,\nsee the documentation).\nPlayers contain a number of fields, but the gRPC calls that take a\nPlayer as input only require a few of them to be filled in. Check the\ngRPC function in question for more details."
},
"messagesPlayerPool": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"filters": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesFilter"
}
},
"roster": {
"$ref": "#/definitions/messagesRoster"
},
"stats": {
"$ref": "#/definitions/messagesStats"
}
},
"description": "PlayerPools are defined by a set of 'hard' filters, and can be filled in\nwith the players that match those filters.\n\nPlayerPools contain a number of fields, but many gRPC calls that take a\nPlayerPool as input only require a few of them to be filled in. Check the\ngRPC function in question for more details."
},
"messagesRoster": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"players": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesPlayer"
}
}
},
"description": "Data structure to hold a list of players in a match."
},
"messagesStats": {
"type": "object",
"properties": {
"count": {
"type": "string",
"format": "int64"
},
"elapsed": {
"type": "number",
"format": "double"
}
},
"title": "Holds statistics"
}
}
}
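The Player description above says client code should generate a consistent ID (same result for each client every time they launch). A minimal sketch of one way to satisfy that requirement, assuming a stable account identifier is available on the client — the hashing scheme here is illustrative and not part of Open Match:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// consistentPlayerID derives a stable player ID from a stable account
// identifier. Open Match only requires that the same client produce the
// same ID on every launch; the actual derivation is up to the game.
func consistentPlayerID(accountID string) string {
	sum := sha256.Sum256([]byte("player:" + accountID))
	return hex.EncodeToString(sum[:8]) // short, deterministic identifier
}

func main() {
	// Same input always yields the same ID across launches.
	fmt.Println(consistentPlayerID("account-42") == consistentPlayerID("account-42"))
}
```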


@ -1,6 +1,20 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = 'proto3';
package messages;
option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
option go_package = "internal/pb";
// Open Match's internal representation and wire protocol format for "MatchObjects".
// In order to request a match using the Backend API, your backend code should generate
@ -9,37 +23,38 @@ option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
// Object through to your matchmaking function, where you add players to 'rosters' and
// store any schemaless data you wish in the 'properties' field. The MatchObject
// is then sent, populated, out through the Backend API to your backend code.
//
//
// MatchObjects contain a number of fields, but many gRPC calls that take a
// MatchObject as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
message MatchObject{
message MatchObject {
string id = 1; // By convention, an Xid
string properties = 2; // By convention, a JSON-encoded string
string error = 3; // Last error encountered.
repeated Roster rosters = 4; // Rosters of players.
repeated PlayerPool pools = 5; // 'Hard' filters, and the players who match them.
string error = 3; // Last error encountered.
repeated Roster rosters = 4; // Rosters of players.
repeated PlayerPool pools = 5; // 'Hard' filters, and the players who match them.
string status = 6; // Resulting status of the match function
}
// Data structure to hold a list of players in a match.
message Roster{
string name = 1; // Arbitrary developer-chosen, human-readable string. By convention, set to team name.
// Data structure to hold a list of players in a match.
message Roster {
string name = 1; // Arbitrary developer-chosen, human-readable string. By convention, set to team name.
repeated Player players = 2; // Player profiles on this roster.
}
// A 'hard' filter to apply to the player pool.
message Filter{
string name = 1; // Arbitrary developer-chosen, human-readable name of this filter. Appears in logs and metrics.
message Filter {
string name = 1; // Arbitrary developer-chosen, human-readable name of this filter. Appears in logs and metrics.
string attribute = 2; // Name of the player attribute this filter operates on.
int64 maxv = 3; // Maximum value. Defaults to positive infinity (any value above minv).
int64 minv = 4; // Minimum value. Defaults to 0.
Stats stats = 5; // Statistics for the last time the filter was applied.
int64 minv = 4; // Minimum value. Defaults to 0.
Stats stats = 5; // Statistics for the last time the filter was applied.
}
// Holds statistics
message Stats{
message Stats {
int64 count = 1; // Number of results.
double elapsed = 2; // How long it took to get the results.
double elapsed = 2; // How long it took to get the results.
}
// PlayerPools are defined by a set of 'hard' filters, and can be filled in
@ -48,47 +63,40 @@ message Stats{
// PlayerPools contain a number of fields, but many gRPC calls that take a
// PlayerPool as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
message PlayerPool{
message PlayerPool {
string name = 1; // Arbitrary developer-chosen, human-readable string.
repeated Filter filters = 2; // Filters are logical AND-ed (a player must match every filter).
Roster roster = 3; // Roster of players that match all filters.
Stats stats = 4; // Statistics for the last time this Pool was retrieved from state storage.
Stats stats = 4; // Statistics for the last time this Pool was retrieved from state storage.
}
// Open Match's internal representation and wire protocol format for "Players".
// In order to enter matchmaking using the Frontend API, your client code should generate
// a consistent Player object (same result for each client every time they launch) with an ID and
// a consistent Player object (same result for each client every time they launch) with an ID and
// properties filled in (for more details about valid values for these fields,
// see the documentation).
// see the documentation).
// Players contain a number of fields, but the gRPC calls that take a
// Player as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
message Player{
message Attribute{
string name = 1; // Name should match a Filter.attribute field.
message Player {
message Attribute {
string name = 1; // Name should match a Filter.attribute field.
int64 value = 2;
}
string id = 1; // By convention, an Xid
string properties = 2; // By convention, a JSON-encoded string
string pool = 3; // Optionally used to specify the PlayerPool in which to find a player.
repeated Attribute attributes= 4; // Attributes of this player.
string assignment = 5; // By convention, ip:port of a DGS to connect to
string status = 6; // Arbitrary developer-chosen string.
string pool = 3; // Optionally used to specify the PlayerPool in which to find a player.
repeated Attribute attributes = 4; // Attributes of this player.
string assignment = 5; // By convention, ip:port of a DGS to connect to
string status = 6; // Arbitrary developer-chosen string.
string error = 7; // Arbitrary developer-chosen string.
}
// Simple message to return success/failure and error status.
message Result{
bool success = 1;
string error = 2;
}
// IlInput is an empty message reserved for future use.
message IlInput{
message IlInput {
}
message Assignments{
message Assignments {
repeated Roster rosters = 1;
string assignment = 10;
string assignment = 10;
}
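The Filter and PlayerPool comments above specify the matching semantics: each Filter bounds one player attribute between minv and maxv, and a pool's filters are logically AND-ed ("a player must match every filter"). A small sketch of those semantics, using hand-written stand-ins for the generated pb types — the field names mirror the proto, but the matching helper itself is hypothetical:

```go
package main

import "fmt"

// Stand-in for messages.Filter: bounds one attribute to [Minv, Maxv].
type Filter struct {
	Name       string
	Attribute  string
	Minv, Maxv int64
}

// Stand-in for messages.Player, with attributes keyed by name.
type Player struct {
	ID         string
	Attributes map[string]int64
}

// matches reports whether p satisfies every filter, per the PlayerPool
// comment's AND semantics. A missing attribute fails the filter.
func matches(p Player, filters []Filter) bool {
	for _, f := range filters {
		v, ok := p.Attributes[f.Attribute]
		if !ok || v < f.Minv || v > f.Maxv {
			return false
		}
	}
	return true
}

func main() {
	pool := []Filter{
		{Name: "mmr-bracket", Attribute: "mmr", Minv: 1000, Maxv: 1500},
		{Name: "low-ping", Attribute: "ping", Minv: 0, Maxv: 80},
	}
	p := Player{ID: "player-1", Attributes: map[string]int64{"mmr": 1200, "ping": 45}}
	fmt.Println(matches(p, pool)) // prints true: both filters are satisfied
}
```

Note the proto's documented defaults (maxv defaults to positive infinity, minv to 0) are not modeled here; both bounds are set explicitly.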


@ -1,9 +1,63 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = 'proto3';
package api;
option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
option go_package = "internal/pb";
// The protobuf messages sent in the gRPC calls are defined in 'messages.proto'.
import 'api/protobuf-spec/messages.proto';
import 'google/api/annotations.proto';
message GetProfileRequest {
messages.MatchObject match = 1;
}
message GetProfileResponse {
messages.MatchObject match = 1;
}
message CreateProposalRequest {
messages.MatchObject match = 1;
}
message CreateProposalResponse {
}
message GetPlayerPoolRequest {
messages.PlayerPool player_pool = 1;
}
message GetPlayerPoolResponse {
messages.PlayerPool player_pool = 1;
}
message GetAllIgnoredPlayersRequest {
messages.IlInput ignore_player = 1;
}
message GetAllIgnoredPlayersResponse {
messages.Roster roster = 1;
}
message ListIgnoredPlayersRequest {
messages.IlInput ignore_player = 1;
}
message ListIgnoredPlayersResponse {
messages.Roster roster = 1;
}
// The MMLogic API provides utility functions for common MMF functionality, such
// as retrieving profiles and players from state storage, writing results to state storage,
@ -15,7 +69,9 @@ service MmLogic {
// 'filled' one.
// Note: filters are assumed to have been checked for validity by the
// backendapi when accepting a profile
rpc GetProfile(messages.MatchObject) returns (messages.MatchObject) {}
rpc GetProfile(GetProfileRequest) returns (GetProfileResponse) {
option (google.api.http).get = "/v1/logic/match-profiles/{match.id}";
}
// CreateProposal is called by MMFs that wish to write their results to
// a proposed MatchObject, that can be sent out the Backend API once it has
@ -50,22 +106,29 @@ service MmLogic {
// the backend api along with your match results.
// OUTPUT: a Result message with a boolean success value and an error string
// if an error was encountered
rpc CreateProposal(messages.MatchObject) returns (messages.Result) {}
rpc CreateProposal(CreateProposalRequest) returns (CreateProposalResponse) {
option (google.api.http) = {
put: "/v1/logic/match-proposals"
body: "*"
};
}
// Player listing and filtering functions
//
// RetrievePlayerPool gets the list of players that match every Filter in the
// PlayerPool, excluding players in any configured ignore lists. It
// combines the results, and returns the resulting player pool.
rpc GetPlayerPool(messages.PlayerPool) returns (stream messages.PlayerPool) {}
rpc GetPlayerPool(GetPlayerPoolRequest) returns (stream GetPlayerPoolResponse) {
option (google.api.http).get = "/v1/logic/player-pools/{player_pool.name}";
}
// Ignore List functions
//
// IlInput is an empty message reserved for future use.
rpc GetAllIgnoredPlayers(messages.IlInput) returns (messages.Roster) {}
rpc GetAllIgnoredPlayers(GetAllIgnoredPlayersRequest) returns (GetAllIgnoredPlayersResponse) {}
// ListIgnoredPlayers retrieves players from the ignore list specified in the
// config file under 'ignoreLists.proposed.name'.
rpc ListIgnoredPlayers(messages.IlInput) returns (messages.Roster) {}
rpc ListIgnoredPlayers(ListIgnoredPlayersRequest) returns (ListIgnoredPlayersResponse) {}
// NYI
// UpdateMetrics sends stats about the MMF run to export to a metrics aggregation tool
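The google.api.http annotations added above are what grpc-gateway uses to expose these RPCs over REST: for GetProfile, the match.id request field is bound to a path segment and the remaining scalar fields become query parameters. A sketch of that URL mapping — the base address is a placeholder, and in practice the gateway performs this translation automatically:

```go
package main

import (
	"fmt"
	"net/url"
)

// getProfileURL builds the REST URL that the annotation
// (google.api.http).get = "/v1/logic/match-profiles/{match.id}"
// implies: match.id goes in the path, other scalar fields (here just
// match.properties) go in the query string.
func getProfileURL(base, matchID, properties string) string {
	q := url.Values{}
	if properties != "" {
		q.Set("match.properties", properties)
	}
	u := fmt.Sprintf("%s/v1/logic/match-profiles/%s", base, url.PathEscape(matchID))
	if enc := q.Encode(); enc != "" {
		u += "?" + enc
	}
	return u
}

func main() {
	// "localhost:8080" is a placeholder; the real mmlogic address comes
	// from the deployment's service configuration.
	fmt.Println(getProfileURL("http://localhost:8080", "profile-1", ""))
}
```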


@ -0,0 +1,380 @@
{
"swagger": "2.0",
"info": {
"title": "api/protobuf-spec/mmlogic.proto",
"version": "version not set"
},
"schemes": [
"http",
"https"
],
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {
"/v1/logic/match-profiles/{match.id}": {
"get": {
"summary": "Send GetProfile a match object with the ID field populated, it will return a\n 'filled' one.\n Note: filters are assumed to have been checked for validity by the\n backendapi when accepting a profile",
"operationId": "GetProfile",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiGetProfileResponse"
}
}
},
"parameters": [
{
"name": "match.id",
"in": "path",
"required": true,
"type": "string"
},
{
"name": "match.properties",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "match.error",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "match.status",
"in": "query",
"required": false,
"type": "string"
}
],
"tags": [
"MmLogic"
]
}
},
"/v1/logic/match-proposals": {
"put": {
"summary": "CreateProposal is called by MMFs that wish to write their results to\na proposed MatchObject, that can be sent out the Backend API once it has\nbeen approved (by default, by the evaluator process).\n - adds all players in all Rosters to the proposed player ignore list\n - writes the proposed match to the provided key\n - adds that key to the list of proposals to be considered\nINPUT: \n * TO RETURN A MATCHOBJECT AFTER A SUCCESSFUL MMF RUN\n To create a match MatchObject message with these fields populated:\n - id, set to the value of the MMF_PROPOSAL_ID env var\n - properties\n - error. You must explicitly set this to an empty string if your MMF\n - roster, with the playerIDs filled in the 'players' repeated field. \n - [optional] pools, set to the output from the 'GetPlayerPools' call,\n will populate the pools with stats about how many players the filters\n matched and how long the filters took to run, which will be sent out\n the backend api along with your match results.\n was successful.\n * TO RETURN AN ERROR \n To report a failure or error, send a MatchObject message with these\n these fields populated:\n - id, set to the value of the MMF_ERROR_ID env var. \n - error, set to a string value describing the error your MMF encountered.\n - [optional] properties, anything you put here is returned to the\n backend along with your error.\n - [optional] rosters, anything you put here is returned to the\n backend along with your error.\n - [optional] pools, set to the output from the 'GetPlayerPools' call,\n will populate the pools with stats about how many players the filters\n matched and how long the filters took to run, which will be sent out\n the backend api along with your match results.\nOUTPUT: a Result message with a boolean success value and an error string\nif an error was encountered",
"operationId": "CreateProposal",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/apiCreateProposalResponse"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/apiCreateProposalRequest"
}
}
],
"tags": [
"MmLogic"
]
}
},
"/v1/logic/player-pools/{player_pool.name}": {
"get": {
"summary": "Player listing and filtering functions",
"description": "RetrievePlayerPool gets the list of players that match every Filter in the\nPlayerPool, .excluding players in any configured ignore lists. It\ncombines the results, and returns the resulting player pool.",
"operationId": "GetPlayerPool",
"responses": {
"200": {
"description": "A successful response.(streaming responses)",
"schema": {
"$ref": "#/x-stream-definitions/apiGetPlayerPoolResponse"
}
}
},
"parameters": [
{
"name": "player_pool.name",
"in": "path",
"required": true,
"type": "string"
},
{
"name": "player_pool.roster.name",
"in": "query",
"required": false,
"type": "string"
},
{
"name": "player_pool.stats.count",
"in": "query",
"required": false,
"type": "string",
"format": "int64"
},
{
"name": "player_pool.stats.elapsed",
"in": "query",
"required": false,
"type": "number",
"format": "double"
}
],
"tags": [
"MmLogic"
]
}
}
},
"definitions": {
"PlayerAttribute": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"value": {
"type": "string",
"format": "int64"
}
}
},
"apiCreateProposalRequest": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/messagesMatchObject"
}
}
},
"apiCreateProposalResponse": {
"type": "object"
},
"apiGetAllIgnoredPlayersResponse": {
"type": "object",
"properties": {
"roster": {
"$ref": "#/definitions/messagesRoster"
}
}
},
"apiGetPlayerPoolResponse": {
"type": "object",
"properties": {
"player_pool": {
"$ref": "#/definitions/messagesPlayerPool"
}
}
},
"apiGetProfileResponse": {
"type": "object",
"properties": {
"match": {
"$ref": "#/definitions/messagesMatchObject"
}
}
},
"apiListIgnoredPlayersResponse": {
"type": "object",
"properties": {
"roster": {
"$ref": "#/definitions/messagesRoster"
}
}
},
"messagesFilter": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"attribute": {
"type": "string"
},
"maxv": {
"type": "string",
"format": "int64"
},
"minv": {
"type": "string",
"format": "int64"
},
"stats": {
"$ref": "#/definitions/messagesStats"
}
},
"description": "A 'hard' filter to apply to the player pool."
},
"messagesIlInput": {
"type": "object",
"description": "IlInput is an empty message reserved for future use."
},
"messagesMatchObject": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"properties": {
"type": "string"
},
"error": {
"type": "string"
},
"rosters": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesRoster"
}
},
"pools": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesPlayerPool"
}
},
"status": {
"type": "string"
}
},
"description": "Open Match's internal representation and wire protocol format for \"MatchObjects\".\nIn order to request a match using the Backend API, your backend code should generate\na new MatchObject with an ID and properties filled in (for more details about valid\nvalues for these fields, see the documentation). Open Match then sends the Match\nObject through to your matchmaking function, where you add players to 'rosters' and\nstore any schemaless data you wish in the 'properties' field. The MatchObject\nis then sent, populated, out through the Backend API to your backend code.\n\nMatchObjects contain a number of fields, but many gRPC calls that take a\nMatchObject as input only require a few of them to be filled in. Check the\ngRPC function in question for more details."
},
"messagesPlayer": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"properties": {
"type": "string"
},
"pool": {
"type": "string"
},
"attributes": {
"type": "array",
"items": {
"$ref": "#/definitions/PlayerAttribute"
}
},
"assignment": {
"type": "string"
},
"status": {
"type": "string"
},
"error": {
"type": "string"
}
},
"description": "Open Match's internal representation and wire protocol format for \"Players\".\nIn order to enter matchmaking using the Frontend API, your client code should generate\na consistent (same result for each client every time they launch) with an ID and\nproperties filled in (for more details about valid values for these fields,\nsee the documentation).\nPlayers contain a number of fields, but the gRPC calls that take a\nPlayer as input only require a few of them to be filled in. Check the\ngRPC function in question for more details."
},
"messagesPlayerPool": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"filters": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesFilter"
}
},
"roster": {
"$ref": "#/definitions/messagesRoster"
},
"stats": {
"$ref": "#/definitions/messagesStats"
}
},
"description": "PlayerPools are defined by a set of 'hard' filters, and can be filled in\nwith the players that match those filters.\n\nPlayerPools contain a number of fields, but many gRPC calls that take a\nPlayerPool as input only require a few of them to be filled in. Check the\ngRPC function in question for more details."
},
"messagesRoster": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"players": {
"type": "array",
"items": {
"$ref": "#/definitions/messagesPlayer"
}
}
},
"description": "Data structure to hold a list of players in a match."
},
"messagesStats": {
"type": "object",
"properties": {
"count": {
"type": "string",
"format": "int64"
},
"elapsed": {
"type": "number",
"format": "double"
}
},
"title": "Holds statistics"
},
"protobufAny": {
"type": "object",
"properties": {
"type_url": {
"type": "string"
},
"value": {
"type": "string",
"format": "byte"
}
}
},
"runtimeStreamError": {
"type": "object",
"properties": {
"grpc_code": {
"type": "integer",
"format": "int32"
},
"http_code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"http_status": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"$ref": "#/definitions/protobufAny"
}
}
}
}
},
"x-stream-definitions": {
"apiGetPlayerPoolResponse": {
"type": "object",
"properties": {
"result": {
"$ref": "#/definitions/apiGetPlayerPoolResponse"
},
"error": {
"$ref": "#/definitions/runtimeStreamError"
}
},
"title": "Stream result of apiGetPlayerPoolResponse"
}
}
}


@ -1,3 +0,0 @@
python3 -m grpc_tools.protoc -I . --python_out=. --grpc_python_out=. mmlogic.proto
python3 -m grpc_tools.protoc -I . --python_out=. --grpc_python_out=. messages.proto
cp *pb2* $OM/examples/functions/python3/simple/.


@ -1,26 +0,0 @@
#!/bin/bash
# Script to compile golang versions of the OM proto files
#
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
cd $GOPATH/src
protoc \
${GOPATH}/src/github.com/GoogleCloudPlatform/open-match/api/protobuf-spec/backend.proto \
${GOPATH}/src/github.com/GoogleCloudPlatform/open-match/api/protobuf-spec/frontend.proto \
${GOPATH}/src/github.com/GoogleCloudPlatform/open-match/api/protobuf-spec/mmlogic.proto \
${GOPATH}/src/github.com/GoogleCloudPlatform/open-match/api/protobuf-spec/messages.proto \
-I ${GOPATH}/src/github.com/GoogleCloudPlatform/open-match/ \
--go_out=plugins=grpc:$GOPATH/src
cd -

cloudbuild.yaml Normal file

@ -0,0 +1,207 @@
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
# Open Match Script for Google Cloud Build #
################################################################################
# To run this locally:
# cloud-build-local --config=cloudbuild.yaml --dryrun=false --substitutions=_OM_VERSION=DEV .
# To run this remotely:
# gcloud builds submit --config=cloudbuild.yaml --substitutions=_OM_VERSION=DEV .
# Requires gcloud to be installed to work. (https://cloud.google.com/sdk/)
# gcloud auth login
# gcloud components install cloud-build-local
# This YAML contains all the build steps for building Open Match.
# All PRs are verified against this script to prevent build breakages and regressions.
# Conventions
# Each build step is ID'ed with "Prefix: Description".
# The prefix portion determines what kind of step it is and its impact.
# Docker Image: Read-Only, outputs a docker image.
# Lint: Read-Only, verifies correctness and formatting of a file.
# Build: Read-Write, outputs a build artifact. Ok to run in parallel if the artifact will not collide with another one.
# Generate: Read-Write, outputs files within /workspace that are used in other build steps. Do not run these in parallel.
# Setup: Read-Write, similar to generate but steps that run before any other step.
# Some useful things to know about Cloud Build.
# The root of this repository is always stored in /workspace.
# Any modifications that occur within /workspace are persisted between build steps; anything else is forgotten.
# If a build step has intermediate files that need to be persisted for a future step then use volumes.
# An example of this is the go-vol which is where the pkg/ data for go mod is stored.
# More information here: https://cloud.google.com/cloud-build/docs/build-config#build_steps
# A build step is basically a docker image that is tuned for Cloud Build,
# https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/go
steps:
# Blocked by https://github.com/GoogleContainerTools/kaniko/issues/477
- id: 'Docker Image: open-match-build'
name: gcr.io/kaniko-project/executor
args: ['--destination=gcr.io/$PROJECT_ID/open-match-build', '--cache=true', '--cache-ttl=6h', '--dockerfile=Dockerfile.ci', '.']
waitFor: ['-']
- id: 'Build: Clean'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'clean']
waitFor: ['Docker Image: open-match-build']
- id: 'Setup: Download Dependencies'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'sync-deps']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Clean']
- id: 'Build: Install Toolchain'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'install-toolchain']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Setup: Download Dependencies']
- id: 'Build: Protocol Buffers'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'all-protos']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Install Toolchain']
- id: 'Build: Binaries'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GOPROXY=off', 'all', '-j8']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Protocol Buffers']
- id: 'Test: Core'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'GOPROXY=off', 'test-in-ci']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Protocol Buffers']
- id: 'Build: Docker Images'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'VERSION_SUFFIX=$SHORT_SHA', 'build-images', '-j8']
waitFor: ['Build: Protocol Buffers']
- id: 'Build: Push Images'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'VERSION_SUFFIX=$SHORT_SHA', 'push-images', '-j8']
waitFor: ['Build: Docker Images']
- id: 'Build: Deployment Configs'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'VERSION_SUFFIX=$SHORT_SHA', 'clean-install-yaml', 'install/yaml/']
waitFor: ['Build: Install Toolchain']
- id: 'Lint: Format, Vet, Charts'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'lint']
volumes:
- name: 'go-vol'
path: '/go'
waitFor: ['Build: Protocol Buffers', 'Build: Deployment Configs']
- id: 'Build: Website'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'build/site/']
waitFor: ['Build: Install Toolchain']
- id: 'Test: Website'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', 'site-test']
waitFor: ['Build: Website']
- id: 'Deploy: Website'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', '_GCB_POST_SUBMIT=${_GCB_POST_SUBMIT}', 'VERSION_SUFFIX=$SHORT_SHA', 'BRANCH_NAME=$BRANCH_NAME', 'ci-deploy-dev-site']
waitFor: ['Test: Website', 'Build: Binaries']
volumes:
- name: 'go-vol'
path: '/go'
- id: 'Deploy: Deployment Configs'
name: 'gcr.io/$PROJECT_ID/open-match-build'
args: ['make', '_GCB_POST_SUBMIT=${_GCB_POST_SUBMIT}', 'VERSION_SUFFIX=$SHORT_SHA', 'BRANCH_NAME=$BRANCH_NAME', 'ci-deploy-artifacts']
waitFor: ['Lint: Format, Vet, Charts', 'Build: Binaries']
volumes:
- name: 'go-vol'
path: '/go'
#- id: 'Deploy: Create Cluster'
# name: 'gcr.io/$PROJECT_ID/open-match-build'
# args: ['make', 'create-gke-cluster', 'push-helm']
# waitFor: ['Build: Docker Images']
#- id: 'Deploy: Install Charts'
# name: 'gcr.io/$PROJECT_ID/open-match-build'
# args: ['make', 'sleep-10', 'install-chart', 'install-example-chart']
# waitFor: ['Deploy: Create Cluster']
#- id: 'Deploy: Teardown Cluster'
# name: 'gcr.io/$PROJECT_ID/open-match-build'
# args: ['make', 'sleep-10', 'delete-gke-cluster']
# waitFor: ['Deploy: Install Charts']
artifacts:
objects:
location: gs://open-match-build-artifacts/output/
paths:
- cmd/minimatch/minimatch
- cmd/backendapi/backendapi
- cmd/frontendapi/frontendapi
- cmd/mmlogicapi/mmlogicapi
- examples/functions/golang/grpc-serving/grpc-serving
- examples/evaluators/golang/serving/serving
- examples/backendclient/backendclient
- test/cmd/clientloadgen/clientloadgen
- test/cmd/frontendclient/frontendclient
- install/yaml/install.yaml
- install/yaml/install-example.yaml
- install/yaml/01-redis-chart.yaml
- install/yaml/02-open-match.yaml
- install/yaml/03-prometheus-chart.yaml
- install/yaml/04-grafana-chart.yaml
images:
- 'gcr.io/$PROJECT_ID/openmatch-minimatch:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-backendapi:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-frontendapi:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-mmlogicapi:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-evaluator-serving:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-mmf-go-grpc-serving-simple:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-backendclient:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-clientloadgen:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-frontendclient:${_OM_VERSION}-${SHORT_SHA}'
substitutions:
_OM_VERSION: "0.5.0-rc1"
_GCB_POST_SUBMIT: "0"
logsBucket: 'gs://open-match-build-logs/'
options:
sourceProvenanceHash: ['SHA256']
machineType: 'N1_HIGHCPU_8'
# TODO: The build is slow because we don't vendor. go get takes a very long time.
# Also we are rebuilding a lot of code unnecessarily. This should improve once
# we have new hermetic and reproducible Dockerfiles.
timeout: 1200s
# TODO Build Steps
# config/matchmaker_config.yaml: Lint this file so it's verified as a valid YAML file.
# examples/profiles/*.json: Verify valid JSON files.
#
# Consolidate many of these build steps via Makefile.
# Caching of dependencies is a serious problem. Cloud Build does not complete within 20 minutes!


@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-base:dev',
'-f', 'Dockerfile.base',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-base:dev']


@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-php-mmlogic-simple',
'-f', 'Dockerfile.mmf_php',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-php-mmlogic-simple']


@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-py3-mmlogic-simple:dev',
'-f', 'Dockerfile.mmf_py3',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-py3-mmlogic-simple:dev']


@ -1,10 +1,23 @@
# Golang application builder steps
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi
COPY . .
RUN go get -d -v
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi/backendapi .
ENTRYPOINT ["./backendapi"]
FROM gcr.io/distroless/static
COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi/backendapi .
ENTRYPOINT ["/backendapi"]


@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-backendapi:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-backendapi:dev']


@ -23,81 +23,9 @@ limitations under the License.
package main
import (
"errors"
"os"
"os/signal"
"github.com/GoogleCloudPlatform/open-match/cmd/backendapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
"github.com/GoogleCloudPlatform/open-match/internal/app/backendapi"
)
var (
// Logrus structured logging setup
beLogFields = log.Fields{
"app": "openmatch",
"component": "backend",
}
beLog = log.WithFields(beLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(apisrv.BeLogLines, apisrv.KeySeverity))
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocServerViews := apisrv.DefaultBackendAPIViews // BackendAPI OpenCensus views.
ocServerViews = append(ocServerViews, ocgrpc.DefaultServerViews...) // gRPC OpenCensus views.
ocServerViews = append(ocServerViews, config.CfgVarCountView) // config loader view.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocServerViews = append(ocServerViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
beLog.WithFields(log.Fields{"viewscount": len(ocServerViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocServerViews)
}
func main() {
// Connect to redis
pool := redishelpers.ConnectionPool(cfg)
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
beLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server
err := srv.Open()
if err != nil {
beLog.WithFields(log.Fields{"error": err.Error()}).Fatal("Failed to start gRPC server")
}
// Exit when we see a signal
terminate := make(chan os.Signal, 1)
signal.Notify(terminate, os.Interrupt)
<-terminate
beLog.Info("Shutting down gRPC server")
backendapi.RunApplication()
}


@ -1,105 +0,0 @@
{
"logging":{
"level": "debug",
"format": "text",
"source": true
},
"api": {
"backend": {
"hostname": "om-backendapi",
"port": 50505,
"timeout": 30
},
"frontend": {
"hostname": "om-frontendapi",
"port": 50504,
"timeout": 300
},
"mmlogic": {
"hostname": "om-mmlogicapi",
"port": 50503
}
},
"evaluator": {
"interval": 10
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
"reportingPeriod": 5
},
"queues": {
"profiles": {
"name": "profileq",
"pullCount": 100
},
"proposals": {
"name": "proposalq"
}
},
"ignoreLists": {
"proposed": {
"name": "proposed",
"offset": 0,
"duration": 800
},
"deindexed": {
"name": "deindexed",
"offset": 0,
"duration": 800
},
"expired": {
"name": "OM_METADATA.accessed",
"offset": 800,
"duration": 0
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/open-match-public-images/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple",
"tag": "py3"
}
},
"redis": {
"user": "",
"password": "",
"pool" : {
"maxIdle" : 3,
"maxActive" : 0,
"idleTimeout" : 60
},
"queryArgs":{
"count": 10000
},
"results": {
"pageSize": 10000
}
},
"jsonkeys": {
"mmfImage": "imagename",
"rosters": "properties.rosters",
"pools": "properties.pools"
},
"playerIndices": [
"char.cleric",
"char.knight",
"char.paladin",
"map.aleroth",
"map.oasis",
"mmr.rating",
"mode.battleroyale",
"mode.ctf",
"region.europe-east1",
"region.europe-west1",
"region.europe-west2",
"region.europe-west3",
"region.europe-west4",
"role.dps",
"role.support",
"role.tank"
]
}


@ -1,10 +1,23 @@
# Golang application builder steps
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi
COPY . .
RUN go get -d -v
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/frontendapi .
ENTRYPOINT ["./frontendapi"]
FROM gcr.io/distroless/static
COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/frontendapi .
ENTRYPOINT ["/frontendapi"]


@ -1,239 +0,0 @@
/*
package apisrv provides an implementation of the gRPC server defined in ../../../api/protobuf-spec/frontend.proto.
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apisrv
import (
"context"
"errors"
"net"
"time"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
frontend "github.com/GoogleCloudPlatform/open-match/internal/pb"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/playerindices"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/redispb"
log "github.com/sirupsen/logrus"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
"github.com/gomodule/redigo/redis"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
"google.golang.org/grpc"
)
// Logrus structured logging setup
var (
feLogFields = log.Fields{
"app": "openmatch",
"component": "frontend",
}
feLog = log.WithFields(feLogFields)
)
// FrontendAPI implements frontend.ApiServer, the server generated by compiling
// the protobuf, by fulfilling the frontend.APIClient interface.
type FrontendAPI struct {
grpc *grpc.Server
cfg *viper.Viper
pool *redis.Pool
}
type frontendAPI FrontendAPI
// New returns an instantiated service
func New(cfg *viper.Viper, pool *redis.Pool) *FrontendAPI {
s := FrontendAPI{
pool: pool,
grpc: grpc.NewServer(grpc.StatsHandler(&ocgrpc.ServerHandler{})),
cfg: cfg,
}
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(FeLogLines, KeySeverity))
// Register gRPC server
frontend.RegisterFrontendServer(s.grpc, (*frontendAPI)(&s))
feLog.Info("Successfully registered gRPC server")
return &s
}
// Open starts the api grpc service listening on the configured port.
func (s *FrontendAPI) Open() error {
ln, err := net.Listen("tcp", ":"+s.cfg.GetString("api.frontend.port"))
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"port": s.cfg.GetInt("api.frontend.port"),
}).Error("net.Listen() error")
return err
}
feLog.WithFields(log.Fields{"port": s.cfg.GetInt("api.frontend.port")}).Info("TCP net listener initialized")
go func() {
err := s.grpc.Serve(ln)
if err != nil {
feLog.WithFields(log.Fields{"error": err.Error()}).Error("gRPC serve() error")
}
feLog.Info("serving gRPC endpoints")
}()
return nil
}
// CreatePlayer is this service's implementation of the CreatePlayer gRPC method defined in frontend.proto
func (s *frontendAPI) CreatePlayer(ctx context.Context, group *frontend.Player) (*frontend.Result, error) {
// Create context for tagging OpenCensus metrics.
funcName := "CreatePlayer"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Write group
err := redispb.MarshalToRedis(ctx, s.pool, group, s.cfg.GetInt("redis.expirations.player"))
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
// Index group
err = playerindices.Create(ctx, s.pool, s.cfg, *group)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
// Return success.
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// DeletePlayer is this service's implementation of the DeletePlayer gRPC method defined in frontend.proto
func (s *frontendAPI) DeletePlayer(ctx context.Context, group *frontend.Player) (*frontend.Result, error) {
// Create context for tagging OpenCensus metrics.
funcName := "DeletePlayer"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Deindex this player; at that point they don't show up in MMFs anymore. We can then delete
// their actual player object from Redis later.
err := playerindices.Delete(ctx, s.pool, s.cfg, group.Id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
// Kick off delete but don't wait for it to complete.
go s.deletePlayer(group.Id)
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// deletePlayer is a 'lazy' player delete
// It should always be called as a goroutine and should only be called after
// confirmation that a player has been deindexed (and therefore MMFs can't
// find the player to read them anyway)
// As a final action, it also kicks off a lazy delete of the player's metadata
func (s *frontendAPI) deletePlayer(id string) {
err := redisHelpers.Delete(context.Background(), s.pool, id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Warn("Error deleting player from state storage, this could leak state storage memory but is usually not a fatal error")
}
go playerindices.DeleteMeta(context.Background(), s.pool, id)
}
// GetUpdates is this service's implementation of the GetUpdates gRPC method defined in frontend.proto
func (s *frontendAPI) GetUpdates(p *frontend.Player, assignmentStream frontend.Frontend_GetUpdatesServer) error {
// Get cancellable context
ctx, cancel := context.WithCancel(assignmentStream.Context())
defer cancel()
// Create context for tagging OpenCensus metrics.
funcName := "GetAssignment"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// get and return connection string
watchChan := redispb.PlayerWatcher(ctx, s.pool, *p) // watcher() runs the appropriate Redis commands.
timeoutChan := time.After(time.Duration(s.cfg.GetInt("api.frontend.timeout")) * time.Second)
for {
select {
case <-ctx.Done():
// Context cancelled
feLog.WithFields(log.Fields{
"playerid": p.Id,
}).Info("client closed connection successfully")
stats.Record(fnCtx, FeGrpcRequests.M(1))
return nil
case <-timeoutChan: // Timeout reached without client closing connection
// TODO:deal with the fallout
err := errors.New("server timeout reached without client closing connection")
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"playerid": p.Id,
}).Error("State storage error")
// Count errors for metrics
errTag, _ := tag.NewKey("errtype")
fnCtx, _ := tag.New(ctx, tag.Insert(errTag, "watch_timeout"))
stats.Record(fnCtx, FeGrpcErrors.M(1))
//TODO: we could generate a frontend.player message with an error
//field and stream it to the client before throwing the error here
//if we wanted to send more useful client retry information
return err
case a := <-watchChan:
feLog.WithFields(log.Fields{
"assignment": a.Assignment,
"playerid": a.Id,
"status": a.Status,
"error": a.Error,
}).Info("updating client")
assignmentStream.Send(&a)
stats.Record(fnCtx, FeGrpcStreamedResponses.M(1))
// Reset timeout.
timeoutChan = time.After(time.Duration(s.cfg.GetInt("api.frontend.timeout")) * time.Second)
}
}
}


@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-frontendapi:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-frontendapi:dev']


@ -19,87 +19,13 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"errors"
"os"
"os/signal"
"github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
"github.com/GoogleCloudPlatform/open-match/internal/app/frontendapi"
)
var (
// Logrus structured logging setup
feLogFields = log.Fields{
"app": "openmatch",
"component": "frontend",
}
feLog = log.WithFields(feLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(apisrv.FeLogLines, apisrv.KeySeverity))
// Add a hook to the logger to log the filename & line number.
log.SetReportCaller(true)
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocServerViews := apisrv.DefaultFrontendAPIViews // FrontendAPI OpenCensus views.
ocServerViews = append(ocServerViews, ocgrpc.DefaultServerViews...) // gRPC OpenCensus views.
ocServerViews = append(ocServerViews, config.CfgVarCountView) // config loader view.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocServerViews = append(ocServerViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
feLog.WithFields(log.Fields{"viewscount": len(ocServerViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocServerViews)
}
func main() {
// Connect to redis
pool := redishelpers.ConnectionPool(cfg)
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
feLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server
err := srv.Open()
if err != nil {
feLog.WithFields(log.Fields{"error": err.Error()}).Fatal("Failed to start gRPC server")
}
// Exit when we see a signal
terminate := make(chan os.Signal, 1)
signal.Notify(terminate, os.Interrupt)
<-terminate
feLog.Info("Shutting down gRPC server")
frontendapi.RunApplication()
}


@ -1,105 +0,0 @@
{
"logging":{
"level": "debug",
"format": "text",
"source": true
},
"api": {
"backend": {
"hostname": "om-backendapi",
"port": 50505,
"timeout": 30
},
"frontend": {
"hostname": "om-frontendapi",
"port": 50504,
"timeout": 300
},
"mmlogic": {
"hostname": "om-mmlogicapi",
"port": 50503
}
},
"evaluator": {
"interval": 10
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
"reportingPeriod": 5
},
"queues": {
"profiles": {
"name": "profileq",
"pullCount": 100
},
"proposals": {
"name": "proposalq"
}
},
"ignoreLists": {
"proposed": {
"name": "proposed",
"offset": 0,
"duration": 800
},
"deindexed": {
"name": "deindexed",
"offset": 0,
"duration": 800
},
"expired": {
"name": "OM_METADATA.accessed",
"offset": 800,
"duration": 0
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/open-match-public-images/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple",
"tag": "py3"
}
},
"redis": {
"user": "",
"password": "",
"pool" : {
"maxIdle" : 3,
"maxActive" : 0,
"idleTimeout" : 60
},
"queryArgs":{
"count": 10000
},
"results": {
"pageSize": 10000
}
},
"jsonkeys": {
"mmfImage": "imagename",
"rosters": "properties.rosters",
"pools": "properties.pools"
},
"playerIndices": [
"char.cleric",
"char.knight",
"char.paladin",
"map.aleroth",
"map.oasis",
"mmr.rating",
"mode.battleroyale",
"mode.ctf",
"region.europe-east1",
"region.europe-west1",
"region.europe-west2",
"region.europe-west3",
"region.europe-west4",
"role.dps",
"role.support",
"role.tank"
]
}

cmd/minimatch/Dockerfile (new file)

@ -0,0 +1,10 @@
FROM open-match-base-build as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/minimatch/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static
COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/minimatch/minimatch .
ENTRYPOINT ["/minimatch"]

cmd/minimatch/main.go (new file)

@ -0,0 +1,29 @@
/*
This application is a minified version of Open Match.
All the actual important bits are in the API Server source code: apisrv/apisrv.go
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"github.com/GoogleCloudPlatform/open-match/internal/app/minimatch"
)
func main() {
minimatch.RunApplication()
}


@ -1,21 +0,0 @@
# Golang application builder steps
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
# Necessary to get a specific version of the golang k8s client
RUN go get github.com/tools/godep
RUN go get k8s.io/client-go/...
WORKDIR /go/src/k8s.io/client-go
RUN git checkout v7.0.0
RUN godep restore ./...
RUN rm -rf vendor/
RUN rm -rf /go/src/github.com/golang/protobuf/
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmforc/
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
# Uncomment to build production images (removes all troubleshooting tools)
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmforc/mmforc .
CMD ["./mmforc"]


@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmforc:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmforc:dev']


@ -1,404 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Note: the example only works with the code within the same release/branch.
// This is based on the example from the official k8s golang client repository:
// k8s.io/client-go/examples/create-update-delete-deployment/
package main
import (
"context"
"errors"
"os"
"strconv"
"strings"
"time"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/tidwall/gjson"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
"github.com/gomodule/redigo/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
batchv1 "k8s.io/api/batch/v1"
apiv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
//"k8s.io/kubernetes/pkg/api"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
// Uncomment the following line to load the gcp plugin (only required to authenticate against GKE clusters).
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
)
var (
// Logrus structured logging setup
mmforcLogFields = log.Fields{
"app": "openmatch",
"component": "mmforc",
}
mmforcLog = log.WithFields(mmforcLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(MmforcLogLines, KeySeverity))
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocMmforcViews := DefaultMmforcViews // mmforc OpenCensus views.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocMmforcViews = append(ocMmforcViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
mmforcLog.WithFields(log.Fields{"viewscount": len(ocMmforcViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocMmforcViews)
}
func main() {
pool := redisHelpers.ConnectionPool(cfg)
redisConn := pool.Get()
defer redisConn.Close()
// Get k8s credentials so we can start k8s Jobs
mmforcLog.Info("Attempting to acquire k8s credentials")
config, err := rest.InClusterConfig()
if err != nil {
panic(err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err)
}
mmforcLog.Info("K8s credentials acquired")
start := time.Now()
checkProposals := true
// main loop; kick off matchmaker functions for profiles in the profile
// queue and an evaluator when proposals are in the proposals queue
for {
ctx, cancel := context.WithCancel(context.Background())
_ = cancel
// Get profiles and kick off a job for each
mmforcLog.WithFields(log.Fields{
"profileQueueName": cfg.GetString("queues.profiles.name"),
"pullCount": cfg.GetInt("queues.profiles.pullCount"),
"query": "SPOP",
"component": "statestorage",
}).Debug("Retrieving match profiles")
results, err := redis.Strings(redisConn.Do("SPOP",
cfg.GetString("queues.profiles.name"), cfg.GetInt("queues.profiles.pullCount")))
if err != nil {
panic(err)
}
if len(results) > 0 {
mmforcLog.WithFields(log.Fields{
"numProfiles": len(results),
}).Info("Starting MMF jobs...")
for _, profile := range results {
// Kick off the job asynchronously
go mmfunc(ctx, profile, cfg, clientset, pool)
// Count the number of jobs running
redisHelpers.Increment(context.Background(), pool, "concurrentMMFs")
}
} else {
mmforcLog.WithFields(log.Fields{
"profileQueueName": cfg.GetString("queues.profiles.name"),
}).Info("Unable to retrieve match profiles from statestorage - have you entered any?")
}
// Check to see if we should run the evaluator.
// Get number of running MMFs
r, err := redisHelpers.Retrieve(context.Background(), pool, "concurrentMMFs")
if err != nil {
if err.Error() == "redigo: nil returned" {
// No MMFs have run since we last evaluated; reset timer and loop
mmforcLog.Debug("Number of concurrentMMFs is nil")
start = time.Now()
time.Sleep(1000 * time.Millisecond)
}
continue
}
numRunning, err := strconv.Atoi(r)
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Issue retrieving number of currently running MMFs")
}
// We are ready to evaluate either when all MMFs are complete, or the
// timeout is reached.
//
// Tuning how frequently the evaluator runs is a complex topic and
// probably only of interest to users running large-scale production
// workloads with many concurrently running matchmaking functions,
// which have some overlap in the matchmaking player pools. Suffice to
// say that under load, this switch should almost always trigger the
// timeout interval code path. The concurrentMMFs check to see how
// many are still running is meant as a deadman's switch to prevent
// waiting to run the evaluator when all your MMFs are already
// finished.
switch {
case time.Since(start).Seconds() >= float64(cfg.GetInt("evaluator.interval")):
mmforcLog.WithFields(log.Fields{
"interval": cfg.GetInt("evaluator.interval"),
}).Info("Maximum evaluator interval exceeded")
checkProposals = true
// Opencensus tagging
ctx, _ = tag.New(ctx, tag.Insert(KeyEvalReason, "interval_exceeded"))
case numRunning <= 0:
mmforcLog.Info("All MMFs complete")
checkProposals = true
numRunning = 0
ctx, _ = tag.New(ctx, tag.Insert(KeyEvalReason, "mmfs_completed"))
}
if checkProposals {
// Make sure there are proposals in the queue. No need to run the
// evaluator if there are none.
checkProposals = false
mmforcLog.Info("Checking statestorage for match object proposals")
results, err := redisHelpers.Count(context.Background(), pool, cfg.GetString("queues.proposals.name"))
switch {
case err != nil:
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Couldn't retrieve the length of the proposal queue from statestorage!")
case results == 0:
mmforcLog.WithFields(log.Fields{}).Warn("No proposals in the queue!")
default:
mmforcLog.WithFields(log.Fields{
"numProposals": results,
}).Info("Proposals available, evaluating!")
go evaluator(ctx, cfg, clientset)
}
err = redisHelpers.Delete(context.Background(), pool, "concurrentMMFs")
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Error deleting concurrent MMF counter!")
}
start = time.Now()
}
// TODO: Make this tunable via config.
// A sleep here is not critical but just a useful safety valve in case
// things are broken, to keep the main loop from going all-out and spamming the log.
mainSleep := 1000
mmforcLog.WithFields(log.Fields{
"ms": mainSleep,
}).Info("Sleeping...")
time.Sleep(time.Duration(mainSleep) * time.Millisecond)
} // End main for loop
}
// mmfunc generates a k8s job that runs the specified mmf container image.
// resultsID is the redis key that the Backend API is monitoring for results; we can 'short circuit' and write errors directly to this key if we can't run the MMF for some reason.
func mmfunc(ctx context.Context, resultsID string, cfg *viper.Viper, clientset *kubernetes.Clientset, pool *redis.Pool) {
// Generate the various keys/names, some of which must be populated to the k8s job.
imageName := cfg.GetString("defaultImages.mmf.name") + ":" + cfg.GetString("defaultImages.mmf.tag")
jobType := "mmf"
ids := strings.Split(resultsID, ".") // comes in as dot-concatenated moID and profID.
moID := ids[0]
profID := ids[1]
timestamp := strconv.Itoa(int(time.Now().Unix()))
jobName := timestamp + "." + moID + "." + profID + "." + jobType
propID := "proposal." + timestamp + "." + moID + "." + profID
// Extra fields for structured logging
lf := log.Fields{"jobName": jobName}
if cfg.GetBool("debug") { // Log a lot more info.
lf = log.Fields{
"jobType": jobType,
"backendMatchObject": moID,
"profile": profID,
"jobTimestamp": timestamp,
"containerImage": imageName,
"jobName": jobName,
"profileImageJSONKey": cfg.GetString("jsonkeys.mmfImage"),
}
}
mmfuncLog := mmforcLog.WithFields(lf)
// Read the full profile from redis and access any keys that are important to deciding how MMFs are run.
// TODO: convert this to using redispb and directly access the protobuf message instead of retrieving as a map?
profile, err := redisHelpers.RetrieveAll(ctx, pool, profID)
if err != nil {
// Log failure to read this profile and return - won't run an MMF for an unreadable profile.
mmfuncLog.WithFields(log.Fields{"error": err.Error()}).Error("Failure retrieving profile from statestorage")
return
}
// Got profile from state storage, make sure it is valid
if gjson.Valid(profile["properties"]) {
profileImage := gjson.Get(profile["properties"], cfg.GetString("jsonkeys.mmfImage"))
if profileImage.Exists() {
imageName = profileImage.String()
mmfuncLog = mmfuncLog.WithFields(log.Fields{"containerImage": imageName})
} else {
mmfuncLog.Warn("Failed to read image name from profile at configured json key, using default image instead")
}
}
mmfuncLog.Info("Attempting to create mmf k8s job")
// Kick off k8s job
envvars := []apiv1.EnvVar{
{Name: "MMF_PROFILE_ID", Value: profID},
{Name: "MMF_PROPOSAL_ID", Value: propID},
{Name: "MMF_REQUEST_ID", Value: moID},
{Name: "MMF_ERROR_ID", Value: resultsID},
{Name: "MMF_TIMESTAMP", Value: timestamp},
}
err = submitJob(clientset, jobType, jobName, imageName, envvars)
if err != nil {
// Record failure & log
stats.Record(ctx, mmforcMmfFailures.M(1))
mmfuncLog.WithFields(log.Fields{"error": err.Error()}).Error("MMF job submission failure!")
} else {
// Record Success
stats.Record(ctx, mmforcMmfs.M(1))
}
}
// evaluator generates a k8s job that runs the specified evaluator container image.
func evaluator(ctx context.Context, cfg *viper.Viper, clientset *kubernetes.Clientset) {
imageName := cfg.GetString("defaultImages.evaluator.name") + ":" + cfg.GetString("defaultImages.evaluator.tag")
// Generate the job name
timestamp := strconv.Itoa(int(time.Now().Unix()))
jobType := "evaluator"
jobName := timestamp + "." + jobType
mmforcLog.WithFields(log.Fields{
"jobName": jobName,
"containerImage": imageName,
}).Info("Attempting to create evaluator k8s job")
// Kick off k8s job
envvars := []apiv1.EnvVar{{Name: "MMF_TIMESTAMP", Value: timestamp}}
err = submitJob(clientset, jobType, jobName, imageName, envvars)
if err != nil {
// Record failure & log
stats.Record(ctx, mmforcEvalFailures.M(1))
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
"jobName": jobName,
"containerImage": imageName,
}).Error("Evaluator job submission failure!")
} else {
// Record success
stats.Record(ctx, mmforcEvals.M(1))
}
}
// submitJob submits a job to kubernetes
func submitJob(clientset *kubernetes.Clientset, jobType string, jobName string, imageName string, envvars []apiv1.EnvVar) error {
// DEPRECATED: will be removed in a future version. Please switch to using the 'MMF_*' environment variables.
v := strings.Split(jobName, ".")
envvars = append(envvars, apiv1.EnvVar{Name: "PROFILE", Value: strings.Join(v[:len(v)-1], ".")})
job := &batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
},
Spec: batchv1.JobSpec{
Completions: int32Ptr(1),
Template: apiv1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"app": jobType,
},
Annotations: map[string]string{
// Unused; here as an example.
// Later we can put things more complicated than
// env vars here and read them using k8s downward API
// volumes
"profile": jobName,
},
},
Spec: apiv1.PodSpec{
RestartPolicy: "Never",
Containers: []apiv1.Container{
{
Name: jobType,
Image: imageName,
ImagePullPolicy: "Always",
Env: envvars,
},
},
},
},
},
}
// Get the namespace for the job from the current namespace, otherwise, use default
namespace := os.Getenv("METADATA_NAMESPACE")
if len(namespace) == 0 {
namespace = apiv1.NamespaceDefault
}
// Submit kubernetes job
jobsClient := clientset.BatchV1().Jobs(namespace)
result, err := jobsClient.Create(job)
if err != nil {
// TODO: replace queued profiles if things go south
mmforcLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Couldn't create k8s job!")
// Return early: result is nil on failure, so it must not be dereferenced below.
return err
}
mmforcLog.WithFields(log.Fields{
"jobName": result.GetObjectMeta().GetName(),
}).Info("Created job.")
return nil
}
// readability functions used by generateJobSpec
func int32Ptr(i int32) *int32 { return &i }
func strPtr(i string) *string { return &i }
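The one-line pointer helpers above exist because Kubernetes API structs such as `batchv1.JobSpec` use pointer fields (e.g. `Completions *int32`), so a literal value cannot be assigned directly. A minimal sketch of the idiom, independent of the Kubernetes types:

```go
package main

import "fmt"

// int32Ptr and strPtr return pointers to literal values; Kubernetes
// API structs such as batchv1.JobSpec take pointer fields, so plain
// literals cannot be passed to them directly.
func int32Ptr(i int32) *int32 { return &i }
func strPtr(s string) *string { return &s }

func main() {
	// e.g. batchv1.JobSpec{Completions: int32Ptr(1)}
	completions := int32Ptr(1)
	fmt.Println(*completions, *strPtr("evaluator"))
}
```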


@ -1,105 +0,0 @@
{
"logging":{
"level": "debug",
"format": "text",
"source": true
},
"api": {
"backend": {
"hostname": "om-backendapi",
"port": 50505,
"timeout": 30
},
"frontend": {
"hostname": "om-frontendapi",
"port": 50504,
"timeout": 300
},
"mmlogic": {
"hostname": "om-mmlogicapi",
"port": 50503
}
},
"evalutor": {
"interval": 10
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
"reportingPeriod": 5
},
"queues": {
"profiles": {
"name": "profileq",
"pullCount": 100
},
"proposals": {
"name": "proposalq"
}
},
"ignoreLists": {
"proposed": {
"name": "proposed",
"offset": 0,
"duration": 800
},
"deindexed": {
"name": "deindexed",
"offset": 0,
"duration": 800
},
"expired": {
"name": "OM_METADATA.accessed",
"offset": 800,
"duration": 0
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/open-match-public-images/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple",
"tag": "py3"
}
},
"redis": {
"user": "",
"password": "",
"pool" : {
"maxIdle" : 3,
"maxActive" : 0,
"idleTimeout" : 60
},
"queryArgs":{
"count": 10000
},
"results": {
"pageSize": 10000
}
},
"jsonkeys": {
"mmfImage": "imagename",
"rosters": "properties.rosters",
"pools": "properties.pools"
},
"playerIndices": [
"char.cleric",
"char.knight",
"char.paladin",
"map.aleroth",
"map.oasis",
"mmr.rating",
"mode.battleroyale",
"mode.ctf",
"region.europe-east1",
"region.europe-west1",
"region.europe-west2",
"region.europe-west3",
"region.europe-west4",
"role.dps",
"role.support",
"role.tank"
]
}


@ -1,10 +1,23 @@
# Golang application builder steps
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmlogicapi
COPY . .
RUN go get -d -v
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmlogicapi/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/frontendapi .
ENTRYPOINT ["./mmlogicapi"]
FROM gcr.io/distroless/static
COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmlogicapi/mmlogicapi .
ENTRYPOINT ["/mmlogicapi"]


@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmlogicapi:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmlogicapi:dev']


@ -22,83 +22,9 @@ limitations under the License.
package main
import (
"errors"
"os"
"os/signal"
"github.com/GoogleCloudPlatform/open-match/cmd/mmlogicapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"go.opencensus.io/plugin/ocgrpc"
"github.com/GoogleCloudPlatform/open-match/internal/app/mmlogicapi"
)
var (
// Logrus structured logging setup
mlLogFields = log.Fields{
"app": "openmatch",
"component": "mmlogic",
}
mlLog = log.WithFields(mlLogFields)
// Viper config management setup
cfg = viper.New()
err = errors.New("")
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output thru OpenCensus
log.AddHook(metrics.NewHook(apisrv.MlLogLines, apisrv.KeySeverity))
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
mlLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") == true {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
mlLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
// want to register is in an array, so append any views you want from other
// packages to a single array here.
ocServerViews := apisrv.DefaultMmlogicAPIViews // Matchmaking logic API OpenCensus views.
ocServerViews = append(ocServerViews, ocgrpc.DefaultServerViews...) // gRPC OpenCensus views.
ocServerViews = append(ocServerViews, config.CfgVarCountView) // config loader view.
// Waiting on https://github.com/opencensus-integrations/redigo/pull/1
// ocServerViews = append(ocServerViews, redis.ObservabilityMetricViews...) // redis OpenCensus views.
mlLog.WithFields(log.Fields{"viewscount": len(ocServerViews)}).Info("Loaded OpenCensus views")
metrics.ConfigureOpenCensusPrometheusExporter(cfg, ocServerViews)
}
func main() {
// Connect to redis
pool := redisHelpers.ConnectionPool(cfg)
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
mlLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server
err := srv.Open()
if err != nil {
mlLog.WithFields(log.Fields{"error": err.Error()}).Fatal("Failed to start gRPC server")
}
// Exit when we see a signal
terminate := make(chan os.Signal, 1)
signal.Notify(terminate, os.Interrupt)
<-terminate
mlLog.Info("Shutting down gRPC server")
mmlogicapi.RunApplication()
}


@ -1,105 +0,0 @@
{
"logging":{
"level": "debug",
"format": "text",
"source": true
},
"api": {
"backend": {
"hostname": "om-backendapi",
"port": 50505,
"timeout": 30
},
"frontend": {
"hostname": "om-frontendapi",
"port": 50504,
"timeout": 300
},
"mmlogic": {
"hostname": "om-mmlogicapi",
"port": 50503
}
},
"evalutor": {
"interval": 10
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
"reportingPeriod": 5
},
"queues": {
"profiles": {
"name": "profileq",
"pullCount": 100
},
"proposals": {
"name": "proposalq"
}
},
"ignoreLists": {
"proposed": {
"name": "proposed",
"offset": 0,
"duration": 800
},
"deindexed": {
"name": "deindexed",
"offset": 0,
"duration": 800
},
"expired": {
"name": "OM_METADATA.accessed",
"offset": 800,
"duration": 0
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/open-match-public-images/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple",
"tag": "py3"
}
},
"redis": {
"user": "",
"password": "",
"pool" : {
"maxIdle" : 3,
"maxActive" : 0,
"idleTimeout" : 60
},
"queryArgs":{
"count": 10000
},
"results": {
"pageSize": 10000
}
},
"jsonkeys": {
"mmfImage": "imagename",
"rosters": "properties.rosters",
"pools": "properties.pools"
},
"playerIndices": [
"char.cleric",
"char.knight",
"char.paladin",
"map.aleroth",
"map.oasis",
"mmr.rating",
"mode.battleroyale",
"mode.ctf",
"region.europe-east1",
"region.europe-west1",
"region.europe-west2",
"region.europe-west3",
"region.europe-west4",
"role.dps",
"role.support",
"role.tank"
]
}


@ -18,6 +18,7 @@ limitations under the License.
package config
import (
"github.com/fsnotify/fsnotify"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"go.opencensus.io/stats"
@ -41,17 +42,22 @@ var (
// REDIS_SENTINEL_PORT_6379_TCP_PORT=6379
// REDIS_SENTINEL_PORT_6379_TCP_PROTO=tcp
// REDIS_SENTINEL_SERVICE_HOST=10.55.253.195
//
// MMFs are expected to get their configuration from env vars instead
// of reading the config file. So, config parameters that are required
// by MMFs should be populated to env vars.
envMappings = map[string]string{
"redis.user": "REDIS_USER",
"redis.password": "REDIS_PASSWORD",
"redis.hostname": "REDIS_SERVICE_HOST",
"redis.port": "REDIS_SERVICE_PORT",
"redis.pool.maxIdle": "REDIS_POOL_MAXIDLE",
"redis.pool.maxActive": "REDIS_POOL_MAXACTIVE",
"redis.pool.idleTimeout": "REDIS_POOL_IDLETIMEOUT",
"api.mmlogic.hostname": "OM_MMLOGICAPI_SERVICE_HOST",
"api.mmlogic.port": "OM_MMLOGICAPI_SERVICE_PORT",
}
// Viper config management setup
cfg = viper.New()
// OpenCensus
cfgVarCount = stats.Int64("config/vars_total", "Number of config vars read during initialization", "1")
// CfgVarCountView is the Open Census view for the cfgVarCount measure.
@ -65,8 +71,8 @@ var (
// Read reads a config file into a viper.Viper instance and associates environment vars defined in
// config.envMappings
func Read() (*viper.Viper, error) {
func Read() (View, error) {
cfg := viper.New()
// Viper config management initialization
// Support either json or yaml file types (json for backwards compatibility
// with previous versions)
@ -74,6 +80,7 @@ func Read() (*viper.Viper, error) {
cfg.SetConfigType("yaml")
cfg.SetConfigName("matchmaker_config")
cfg.AddConfigPath(".")
cfg.AddConfigPath("config")
// Read in config file using Viper
err := cfg.ReadInConfig()
@ -105,9 +112,7 @@ func Read() (*viper.Viper, error) {
"envvar": envVar,
"module": "config",
}).Info("Binding environment var as a config variable")
}
}
// Look for updates to the config; in Kubernetes, this is implemented using
@ -116,5 +121,12 @@ func Read() (*viper.Viper, error) {
// More details about Open Match's use of Kubernetes ConfigMaps at:
// https://github.com/GoogleCloudPlatform/open-match/issues/42
cfg.WatchConfig() // Watch and re-read config file.
// Write a log when the configuration changes.
cfg.OnConfigChange(func(event fsnotify.Event) {
cfgLog.WithFields(log.Fields{
"filename": event.Name,
"operation": event.Op,
}).Info("Server configuration changed.")
})
return cfg, err
}

config/config_test.go (new file)

@ -0,0 +1,33 @@
/*
Package config contains convenience functions for reading and managing configuration.
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"testing"
)
func TestReadConfig(t *testing.T) {
cfg, err := Read()
if err != nil {
t.Fatalf("cannot load config, %s", err)
}
if cfg.GetString("metrics.endpoint") != "/metrics" {
t.Errorf("cfg.GetString('metrics.endpoint') = %s, expected '/metrics'", cfg.GetString("metrics.endpoint"))
}
}


@ -1,110 +0,0 @@
{
"debug": true,
"logging":{
"level": "debug",
"format": "text",
"source": true
},
"api": {
"backend": {
"hostname": "om-backendapi",
"port": 50505,
"timeout": 90
},
"frontend": {
"hostname": "om-frontendapi",
"port": 50504,
"timeout": 300
},
"mmlogic": {
"hostname": "om-mmlogicapi",
"port": 50503
}
},
"evalutor": {
"interval": 10
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
"reportingPeriod": 5
},
"queues": {
"profiles": {
"name": "profileq",
"pullCount": 100
},
"proposals": {
"name": "proposalq"
}
},
"ignoreLists": {
"proposed": {
"name": "proposed",
"offset": 0,
"duration": 800
},
"deindexed": {
"name": "deindexed",
"offset": 0,
"duration": 800
},
"expired": {
"name": "OM_METADATA.accessed",
"offset": 800,
"duration": 0
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/open-match-public-images/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple",
"tag": "dev"
}
},
"redis": {
"user": "",
"password": "",
"pool" : {
"maxIdle" : 3,
"maxActive" : 0,
"idleTimeout" : 60
},
"queryArgs":{
"count": 10000
},
"results": {
"pageSize": 10000
},
"expirations": {
"player": 43200,
"matchobject":43200
}
},
"jsonkeys": {
"mmfImage": "imagename",
"rosters": "properties.rosters",
"pools": "properties.pools"
},
"playerIndices": [
"char.cleric",
"char.knight",
"char.paladin",
"map.aleroth",
"map.oasis",
"mmr.rating",
"mode.battleroyale",
"mode.ctf",
"region.europe-east1",
"region.europe-west1",
"region.europe-west2",
"region.europe-west3",
"region.europe-west4",
"role.dps",
"role.support",
"role.tank"
]
}


@ -0,0 +1,90 @@
# kubectl create configmap om-configmap --from-file=config/matchmaker_config.yaml
debug: true
logging:
level: debug
format: text
source: false
api:
backend:
hostname: om-backendapi
port: 50505
backoff: "[2 32] *2 ~0.33 <30"
proxyport: 51505
frontend:
hostname: om-frontendapi
port: 50504
backoff: "[2 32] *2 ~0.33 <300"
proxyport: 51504
mmlogic:
hostname: om-mmlogicapi
port: 50503
proxyport: 51503
functions:
port: 50502
proxyport: 51502
evaluator:
# Evaluator intervals are in milliseconds
pollIntervalMs: 1000
maxWaitMs: 10000
metrics:
port: 9555
endpoint: /metrics
reportingPeriod: 5
queues:
proposals:
name: proposalq
ignoreLists:
proposed:
name: proposed
offset: 0
duration: 800
deindexed:
name: deindexed
offset: 0
duration: 800
expired:
name: OM_METADATA.accessed
offset: 800
duration: 0
redis:
pool:
maxIdle: 3
maxActive: 0
idleTimeout: 60
queryArgs:
count: 10000
results:
pageSize: 10000
expirations:
player: 43200
matchobject: 43200
jsonkeys:
rosters: properties.rosters
pools: properties.pools
playerIndices:
- char.cleric
- char.knight
- char.paladin
- map.aleroth
- map.oasis
- mmr.rating
- mode.battleroyale
- mode.ctf
- mode.demo
- region.europe-east1
- region.europe-west1
- region.europe-west2
- region.europe-west3
- region.europe-west4
- role.dps
- role.support
- role.tank

config/view.go (new file)

@ -0,0 +1,46 @@
/*
Package config contains convenience functions for reading and managing configuration.
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"time"
"github.com/spf13/viper"
)
// View is a read-only view of the Open Match configuration.
// New accessors from Viper should be added here.
type View interface {
IsSet(string) bool
GetString(string) string
GetInt(string) int
GetInt64(string) int64
GetStringSlice(string) []string
GetBool(string) bool
GetDuration(string) time.Duration
GetStringMap(string) map[string]interface{}
}
// Sub returns a subset of configuration filtered by the key.
func Sub(v View, key string) View {
vcfg, ok := v.(*viper.Viper)
if ok {
return vcfg.Sub(key)
}
return nil
}
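`Sub` narrows a `View` to one configuration subtree, so a component can be handed only the keys it needs. A stdlib-only sketch of the key-prefix filtering it performs over a flat key space (the real implementation simply delegates to `viper.Sub`; the `sub` function below is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// sub mimics config.Sub for a flat key/value map: it keeps only the
// keys under the given prefix and strips the prefix from them.
func sub(cfg map[string]string, key string) map[string]string {
	out := map[string]string{}
	prefix := key + "."
	for k, v := range cfg {
		if strings.HasPrefix(k, prefix) {
			out[strings.TrimPrefix(k, prefix)] = v
		}
	}
	return out
}

func main() {
	cfg := map[string]string{
		"redis.pool.maxIdle":     "3",
		"redis.pool.idleTimeout": "60",
		"metrics.port":           "9555",
	}
	pool := sub(cfg, "redis.pool")
	fmt.Println(pool["maxIdle"], pool["idleTimeout"], len(pool))
}
```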

config/view_test.go (new file)

@ -0,0 +1,50 @@
/*
Package config contains convenience functions for reading and managing configuration.
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"testing"
"github.com/spf13/viper"
)
func TestSubFromViper(t *testing.T) {
v := viper.New()
v.Set("a", "a")
v.Set("b", "b")
v.Set("c", "c")
v.Set("a.a", "a.a")
v.Set("a.b", "a.b")
av := Sub(v, "a")
if av == nil {
t.Fatalf("Sub(%v, 'a') => %v", v, av)
}
if av.GetString("a") != "a.a" {
t.Errorf("av.GetString('a') = %s, expected 'a.a'", av.GetString("a"))
}
if av.GetString("a.a") != "" {
t.Errorf("av.GetString('a.a') = %s, expected ''", av.GetString("a.a"))
}
if av.GetString("b") != "a.b" {
t.Errorf("av.GetString('b') = %s, expected 'a.b'", av.GetString("b"))
}
if av.GetString("c") != "" {
t.Errorf("av.GetString('c') = %s, expected ''", av.GetString("c"))
}
}


@ -1,53 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-backendapi",
"labels":{
"app":"openmatch",
"component": "backend"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "backend"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "backend"
}
},
"spec":{
"containers":[
{
"name":"om-backend",
"image":"gcr.io/open-match-public-images/openmatch-backendapi:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "grpc",
"containerPort": 50505
},
{
"name": "metrics",
"containerPort": 9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
}
}
]
}
}
}
}


@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-backendapi"
},
"spec": {
"selector": {
"app": "openmatch",
"component": "backend"
},
"ports": [
{
"protocol": "TCP",
"port": 50505,
"targetPort": "grpc"
}
]
}
}


@ -1,53 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-frontendapi",
"labels":{
"app":"openmatch",
"component": "frontend"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "frontend"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "frontend"
}
},
"spec":{
"containers":[
{
"name":"om-frontendapi",
"image":"gcr.io/open-match-public-images/openmatch-frontendapi:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "grpc",
"containerPort": 50504
},
{
"name": "metrics",
"containerPort": 9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
}
}
]
}
}
}
}


@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-frontendapi"
},
"spec": {
"selector": {
"app": "openmatch",
"component": "frontend"
},
"ports": [
{
"protocol": "TCP",
"port": 50504,
"targetPort": "grpc"
}
]
}
}


@ -1,27 +0,0 @@
{
"apiVersion": "monitoring.coreos.com/v1",
"kind": "ServiceMonitor",
"metadata": {
"name": "openmatch-metrics",
"labels": {
"app": "openmatch",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "openmatch",
"agent": "opencensus",
"destination": "prometheus"
}
},
"endpoints": [
{
"port": "metrics",
"interval": "10s"
}
]
}
}


@ -1,78 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-frontend-metrics",
"labels": {
"app": "openmatch",
"component": "frontend",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"app": "openmatch",
"component": "frontend"
},
"ports": [
{
"name": "metrics",
"targetPort": 9555,
"port": 19555
}
]
}
}
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-backend-metrics",
"labels": {
"app": "openmatch",
"component": "backend",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"app": "openmatch",
"component": "backend"
},
"ports": [
{
"name": "metrics",
"targetPort": 9555,
"port": 29555
}
]
}
}
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-mmforc-metrics",
"labels": {
"app": "openmatch",
"component": "mmforc",
"agent": "opencensus",
"destination": "prometheus"
}
},
"spec": {
"selector": {
"app": "openmatch",
"component": "mmforc"
},
"ports": [
{
"name": "metrics",
"targetPort": 9555,
"port": 39555
}
]
}
}


@ -1,59 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-mmforc",
"labels":{
"app":"openmatch",
"component": "mmforc"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "mmforc"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "mmforc"
}
},
"spec":{
"containers":[
{
"name":"om-mmforc",
"image":"gcr.io/open-match-public-images/openmatch-mmforc:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "metrics",
"containerPort":9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
},
"env":[
{
"name":"METADATA_NAMESPACE",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.namespace"
}
}
}
]
}
]
}
}
}
}


@ -1,19 +0,0 @@
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRoleBinding",
"metadata": {
"name": "mmf-sa"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "default",
"namespace": "default"
}
],
"roleRef": {
"kind": "ClusterRole",
"name": "cluster-admin",
"apiGroup": "rbac.authorization.k8s.io"
}
}


@ -1,53 +0,0 @@
{
"apiVersion":"extensions/v1beta1",
"kind":"Deployment",
"metadata":{
"name":"om-mmlogicapi",
"labels":{
"app":"openmatch",
"component": "mmlogic"
}
},
"spec":{
"replicas":1,
"selector":{
"matchLabels":{
"app":"openmatch",
"component": "mmlogic"
}
},
"template":{
"metadata":{
"labels":{
"app":"openmatch",
"component": "mmlogic"
}
},
"spec":{
"containers":[
{
"name":"om-mmlogic",
"image":"gcr.io/open-match-public-images/openmatch-mmlogicapi:dev",
"imagePullPolicy":"Always",
"ports": [
{
"name": "grpc",
"containerPort": 50503
},
{
"name": "metrics",
"containerPort": 9555
}
],
"resources":{
"requests":{
"memory":"100Mi",
"cpu":"100m"
}
}
}
]
}
}
}
}


@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-mmlogicapi"
},
"spec": {
"selector": {
"app": "openmatch",
"component": "mmlogic"
},
"ports": [
{
"protocol": "TCP",
"port": 50503,
"targetPort": "grpc"
}
]
}
}


@ -1,20 +0,0 @@
{
"apiVersion": "monitoring.coreos.com/v1",
"kind": "Prometheus",
"metadata": {
"name": "prometheus"
},
"spec": {
"serviceMonitorSelector": {
"matchLabels": {
"app": "openmatch"
}
},
"serviceAccountName": "prometheus",
"resources": {
"requests": {
"memory": "400Mi"
}
}
}
}


@ -1,266 +0,0 @@
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRoleBinding",
"metadata": {
"name": "prometheus-operator"
},
"roleRef": {
"apiGroup": "rbac.authorization.k8s.io",
"kind": "ClusterRole",
"name": "prometheus-operator"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "prometheus-operator",
"namespace": "default"
}
]
}
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "prometheus"
}
}
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRole",
"metadata": {
"name": "prometheus"
},
"rules": [
{
"apiGroups": [
""
],
"resources": [
"nodes",
"services",
"endpoints",
"pods"
],
"verbs": [
"get",
"list",
"watch"
]
},
{
"apiGroups": [
""
],
"resources": [
"configmaps"
],
"verbs": [
"get"
]
},
{
"nonResourceURLs": [
"/metrics"
],
"verbs": [
"get"
]
}
]
}
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRoleBinding",
"metadata": {
"name": "prometheus"
},
"roleRef": {
"apiGroup": "rbac.authorization.k8s.io",
"kind": "ClusterRole",
"name": "prometheus"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "prometheus",
"namespace": "default"
}
]
}
{
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRole",
"metadata": {
"name": "prometheus-operator"
},
"rules": [
{
"apiGroups": [
"extensions"
],
"resources": [
"thirdpartyresources"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
"apiextensions.k8s.io"
],
"resources": [
"customresourcedefinitions"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
"monitoring.coreos.com"
],
"resources": [
"alertmanagers",
"prometheuses",
"prometheuses/finalizers",
"servicemonitors"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
"apps"
],
"resources": [
"statefulsets"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
""
],
"resources": [
"configmaps",
"secrets"
],
"verbs": [
"*"
]
},
{
"apiGroups": [
""
],
"resources": [
"pods"
],
"verbs": [
"list",
"delete"
]
},
{
"apiGroups": [
""
],
"resources": [
"services",
"endpoints"
],
"verbs": [
"get",
"create",
"update"
]
},
{
"apiGroups": [
""
],
"resources": [
"nodes"
],
"verbs": [
"list",
"watch"
]
},
{
"apiGroups": [
""
],
"resources": [
"namespaces"
],
"verbs": [
"list"
]
}
]
}
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "prometheus-operator"
}
}
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"labels": {
"k8s-app": "prometheus-operator"
},
"name": "prometheus-operator"
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"k8s-app": "prometheus-operator"
}
},
"spec": {
"containers": [
{
"args": [
"--kubelet-service=kube-system/kubelet",
"--config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1"
],
"image": "quay.io/coreos/prometheus-operator:v0.17.0",
"name": "prometheus-operator",
"ports": [
{
"containerPort": 8080,
"name": "http"
}
],
"resources": {
"limits": {
"cpu": "200m",
"memory": "100Mi"
},
"requests": {
"cpu": "100m",
"memory": "50Mi"
}
}
}
],
"securityContext": {
"runAsNonRoot": true,
"runAsUser": 65534
},
"serviceAccountName": "prometheus-operator"
}
}
}
}


@ -1,22 +0,0 @@
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "prometheus"
},
"spec": {
"type": "NodePort",
"ports": [
{
"name": "web",
"nodePort": 30900,
"port": 9090,
"protocol": "TCP",
"targetPort": "web"
}
],
"selector": {
"prometheus": "prometheus"
}
}
}


@ -1,38 +0,0 @@
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "redis-master"
},
"spec": {
"selector": {
"matchLabels": {
"app": "mm",
"tier": "storage"
}
},
"replicas": 1,
"template": {
"metadata": {
"labels": {
"app": "mm",
"tier": "storage"
}
},
"spec": {
"containers": [
{
"name": "redis-master",
"image": "redis:4.0.11",
"ports": [
{
"name": "redis",
"containerPort": 6379
}
]
}
]
}
}
}
}


@ -1,20 +0,0 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "redis"
},
"spec": {
"selector": {
"app": "mm",
"tier": "storage"
},
"ports": [
{
"protocol": "TCP",
"port": 6379,
"targetPort": "redis"
}
]
}
}

doc.go (new file)

@ -0,0 +1,18 @@
/*
* Copyright 2019 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// Package openmatch provides flexible, extensible, and scalable video game matchmaking.
package openmatch

docs/building.md (new file)

@ -0,0 +1,84 @@
## Building
Documentation and usage guides on how to set up and customize Open Match.
### Precompiled container images
Once we reach a 1.0 release, we plan to produce publicly available (Linux) Docker container images of major releases in a public image registry. Until then, refer to the 'Compiling from source' section below.
### Compiling from source
The easiest way to build Open Match is to use the Makefile. Before you can use the Makefile, make sure you have the following dependencies:
```bash
# Install Open Match toolchain dependencies (Debian; other OSes, including Mac OS X, have similar dependencies)
sudo apt-get update; sudo apt-get install -y -q python3 python3-virtualenv virtualenv make google-cloud-sdk git unzip tar
# Set up your repository like a Go workspace, https://golang.org/doc/code.html#Workspaces
# This requirement will go away soon.
mkdir -p workspace/src/github.com/GoogleCloudPlatform/
cd workspace/src/github.com/GoogleCloudPlatform/
export GOPATH=$HOME/workspace
export GO111MODULE=on
git clone https://github.com/GoogleCloudPlatform/open-match.git
cd open-match
```
[Docker](https://docs.docker.com/install/) and [Go 1.11+](https://golang.org/dl/) are also required. If your distro is new enough you can probably run `sudo apt-get install -y golang` or download the newest version from https://golang.org/.
To build all the artifacts of Open Match, run the following commands.
```bash
# Downloads all the tools needed to build Open Match
make install-toolchain
# Generates protocol buffer code files
make all-protos
# Builds all the binaries
make all
# Builds all the images.
make build-images
```
Once built, you can use a command like `docker images` to see all the images that were built.
Before creating a pull request you can run `make local-cloud-build` to simulate a Cloud Build run to check for regressions.
The directory structure is a typical Go layout, so if you do the following you should be able to work on this project within your IDE.
```bash
cd $GOPATH
mkdir -p src/github.com/GoogleCloudPlatform/
cd src/github.com/GoogleCloudPlatform/
# If you're going to contribute you'll want to fork open-match, see CONTRIBUTING.md for details.
git clone https://github.com/GoogleCloudPlatform/open-match.git
cd open-match
# Open IDE in this directory.
```
Lastly, this project uses Go modules, so you'll want to set `export GO111MODULE=on` before building.
## Zero to Open Match
To deploy Open Match quickly to a Kubernetes cluster run these commands.
```bash
# Downloads all the tools.
make install-toolchain
# Create a GKE Cluster
make create-gke-cluster
# OR Create a Minikube Cluster
make create-mini-cluster
# Install Helm
make push-helm
# Build and push images
make push-images -j4
# Deploy Open Match with example functions
make install-chart install-example-chart
```
## Docker Image Builds
All the core components for Open Match are written in Golang and use the [Dockerfile multistage builder pattern](https://docs.docker.com/develop/develop-images/multistage-build/). This pattern uses intermediate Docker containers as a Golang build environment while producing lightweight, minimized container images as final build artifacts. When the project is ready for production, we will modify the `Dockerfile`s to uncomment the last build stage. Although this pattern is great for production container images, it removes most of the utilities required to troubleshoot issues during development.
## Configuration
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration. To this end, there is a single centralized config file located in the `<REPO_ROOT>/config/` which is symlinked to each component's subdirectory for convenience when building locally. When `docker build`ing the component container images, the Dockerfile copies the centralized config file into the component directory.
We plan to replace this with a Kubernetes-managed config with dynamic reloading, please join the discussion in [Issue #42](issues/42).

docs/concepts.md (new file)

@ -0,0 +1,136 @@
# Core Concepts
[Watch the introduction of Open Match at Unite Berlin 2018 on YouTube](https://youtu.be/qasAmy_ko2o)
Open Match is designed to support massively concurrent matchmaking, and to be scalable to player populations of hundreds of millions or more. It attempts to apply stateless web tech microservices patterns to game matchmaking. If you're not sure what that means, that's okay &mdash; it is fully open source and designed to be customizable to fit into your online game architecture &mdash; so have a look at the code and modify it as you see fit.
## Glossary
### General
* **DGS** &mdash; Dedicated game server
* **Client** &mdash; The game client program the player uses when playing the game
* **Session** &mdash; In Open Match, players are matched together, then assigned to a server which hosts the game _session_. Depending on context, this may be referred to as a _match_, _map_, or just _game_ elsewhere in the industry.
### Open Match
* **Component** &mdash; One of the discrete processes in an Open Match deployment. Open Match is composed of multiple scalable microservices called _components_.
* **State Storage** &mdash; The storage software used by Open Match to hold all the matchmaking state. Open Match ships with [Redis](https://redis.io/) as the default state storage.
* **MMFOrc** &mdash; Matchmaker function orchestrator. This Open Match core component is in charge of kicking off custom matchmaking functions (MMFs) and evaluator processes.
* **MMF** &mdash; Matchmaking function. This is the customizable matchmaking logic.
* **MMLogic API** &mdash; An API that provides MMF SDK functionality. It is optional - you can also do all the state storage read and write operations yourself if you have a good reason to do so.
* **Director** &mdash; The software you (as a developer) write against the Open Match Backend API. The _Director_ decides which MMFs to run, and is responsible for sending MMF results to a DGS to host the session.
### Data Model
* **Player** &mdash; An ID and list of attributes with values for a player who wants to participate in matchmaking.
* **Roster** &mdash; A list of player objects. Used to hold all the players on a single team.
* **Filter** &mdash; A _filter_ is used to narrow down the players to only those who have an attribute value within a certain integer range. All attributes are integer values in Open Match because [that is how indices are implemented](internal/statestorage/redis/playerindices/playerindices.go). A _filter_ is defined in a _player pool_.
* **Player Pool** &mdash; A list of all the players who fit all the _filters_ defined in the pool.
* **Match Object** &mdash; A protobuffer message format that contains the _profile_ and the results of the matchmaking function. Sent to the backend API from your game backend with the _roster_(s) empty and then returned from your MMF with the matchmaking results filled in.
* **Profile** &mdash; The json blob containing all the parameters used by your MMF to select which players go into a roster together.
* **Assignment** &mdash; Refers to assigning a player or group of players to a dedicated game server instance. Open Match offers a path to send dedicated game server connection details from your backend to your game clients after a match has been made.
* **Ignore List** &mdash; Removing players from matchmaking consideration is accomplished using _ignore lists_. They contain lists of player IDs that your MMF should not include when making matches.
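To make the relationships between these terms concrete, here is a hypothetical profile fragment (the field names are illustrative assumptions, not the exact protobuf schema) showing a _player pool_ with a _filter_ and two empty _rosters_ to be filled by the MMF:

```json
{
  "name": "ranked-2v2",
  "pools": [
    {
      "name": "competitive",
      "filters": [
        { "attribute": "mmr", "minv": 1200, "maxv": 1800 }
      ]
    }
  ],
  "rosters": [
    { "name": "team1", "players": [] },
    { "name": "team2", "players": [] }
  ]
}
```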
## Requirements
* [Kubernetes](https://kubernetes.io/) cluster &mdash; tested with version 1.11.7.
* [Redis 4+](https://redis.io/) &mdash; tested with 4.0.11.
* Open Match is compiled against the latest release of [Golang](https://golang.org/) &mdash; tested with 1.11.5.
## Components
Open Match is a set of processes designed to run on Kubernetes. It contains these **core** components:
1. Frontend API
1. Backend API
1. Matchmaker Function Orchestrator (MMFOrc) (may be deprecated in future versions)
It includes these **optional** (but recommended) components:
1. Matchmaking Logic (MMLogic) API
It also explicitly depends on these two **customizable** components.
1. Matchmaking "Function" (MMF)
1. Evaluator (may be optional in future versions)
While **core** components are fully open source and _can_ be modified, they are designed to support the majority of matchmaking scenarios *without needing to change the source code*. The Open Match repository ships with simple **customizable** MMF and Evaluator examples, but it is expected that most users will want full control over the logic in these, so they have been designed to be as easy to modify or replace as possible.
### Frontend API
The Frontend API accepts the player data and puts it in state storage so your Matchmaking Function (MMF) can access it.
The Frontend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/frontend.proto`. At the most basic level, it expects clients to connect and send:
* A **unique ID** for the group of players (the group can contain any number of players, including only one).
* A **json blob** containing all player-related data you want to use in your matchmaking function.
The client is expected to maintain a connection, waiting for an update from the API that contains the details required to connect to a dedicated game server instance (an 'assignment'). There are also basic functions for removing an ID from the matchmaking pool or an existing match.
### Backend API
The Backend API writes match objects to state storage which the Matchmaking Functions (MMFs) access to decide which players should be matched. It returns the results from those MMFs.
The Backend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/backend.proto`. At the most basic level, it expects to be connected to your online infrastructure (probably to your server scaling manager or **director**, or even directly to a dedicated game server), and to receive:
* A **unique ID** for a matchmaking profile.
* A **json blob** containing all the matching-related data and filters you want to use in your matchmaking function.
* An optional list of **roster**s to hold the resulting teams chosen by your matchmaking function.
* An optional set of **filters** that define player pools your matchmaking function will choose players from.
Your game backend is expected to maintain a connection, waiting for 'filled' match objects containing a roster of players. The Backend API also provides a return path for your game backend to return dedicated game server connection details (an 'assignment') to the game client, and to delete these 'assignments'.
### Matchmaking Function Orchestrator (MMFOrc)
The MMFOrc kicks off your custom matchmaking function (MMF) for every unique profile submitted to the Backend API in a match object. It also runs the Evaluator to resolve conflicts in case more than one of your profiles matched the same players.
The MMFOrc exists to orchestrate/schedule your **custom components**, running them as often as required to meet the demands of your game. MMFOrc runs in an endless loop, submitting MMFs and Evaluator jobs to Kubernetes.
### Matchmaking Logic (MMLogic) API
The MMLogic API provides a series of gRPC functions that act as a Matchmaking Function SDK. Much of the basic, boilerplate code for an MMF is the same regardless of what players you want to match together. The MMLogic API offers a gRPC interface for many common MMF tasks, such as:
1. Reading a profile from state storage.
1. Running filters on players in state storage. It automatically removes players on ignore lists as well!
1. Removing chosen players from consideration by other MMFs (by adding them to an ignore list). This happens automatically when you write your results!
1. Writing the matchmaking results to state storage.
1. (Optional, NYI) Exporting MMF stats for metrics collection.
More details about the available gRPC calls can be found in the [API Specification](api/protobuf-spec/messages.proto).
**Note**: using the MMLogic API is **optional**. It tries to simplify the development of MMFs, but if you want to take care of these tasks on your own, you can make few or no calls to the MMLogic API as long as your MMF still completes all the required tasks. Read the [Matchmaking Functions section](#matchmaking-functions-mmfs) for more details of what work an MMF must do.
### Evaluator
The Evaluator resolves conflicts when multiple MMFs select the same player(s).
The Evaluator is a component run by the Matchmaker Function Orchestrator (MMFOrc) after the matchmaking functions have been run and some proposed results are available. The Evaluator looks at all the proposals, and if multiple proposals contain the same player(s), it breaks the tie. In many simple matchmaking setups with only a few game modes and well-tuned matchmaking functions, the Evaluator may functionally be a no-op or first-in-first-out algorithm. In complex matchmaking setups where, for example, a player can queue for multiple types of matches, the Evaluator provides the critical customizability to evaluate all available proposals and approve those that will be passed to your game servers.
Large-scale concurrent matchmaking functions is a complex topic, and users who wish to do this are encouraged to engage with the [Open Match community](https://github.com/GoogleCloudPlatform/open-match#get-involved) about patterns and best practices.
### Matchmaking Functions (MMFs)
Matchmaking Functions (MMFs) are run by the Matchmaker Function Orchestrator (MMFOrc) &mdash; once per profile it sees in state storage. The MMF is run as a Job in Kubernetes, and has full access to read and write from state storage. At a high level, the encouraged pattern is to write an MMF in whatever language you are comfortable in that can do the following things:
- [x] Be packaged in a (Linux) Docker container.
- [x] Read/write from the Open Match state storage &mdash; Open Match ships with Redis as the default state storage.
- [x] Read a profile you wrote to state storage using the Backend API.
- [x] Select from the player data you wrote to state storage using the Frontend API. It must respect all the ignore lists defined in the matchmaker config.
- [ ] Run your custom logic to try to find a match.
- [x] Write the match object it creates to state storage at a specified key.
- [x] Remove the players it selected from consideration by other MMFs by adding them to the appropriate ignore list.
- [x] Notify the MMFOrc of completion.
- [x] (Optional, but recommended) Export stats for metrics collection.
**Open Match offers [matchmaking logic API](#matchmaking-logic-mmlogic-api) calls for handling the checked items, as long as you are able to format your input and output in the data schema Open Match expects (defined in the [protobuf messages](api/protobuf-spec/messages.proto)).** You can do this work yourself if you don't want to, or can't, use the data schema Open Match is looking for. However, the data formats expected by Open Match are fairly generalized and will work with most common matchmaking scenarios and game types. If you have questions about how to fit your data into the formats specified, feel free to ask us in the [Slack or mailing group](#get-involved).
Example MMFs are provided in these languages:
- [C#](examples/functions/csharp/simple) (doesn't use the MMLogic API)
- [Python3](examples/functions/python3/mmlogic-simple) (MMLogic API enabled)
- [PHP](examples/functions/php/mmlogic-simple) (MMLogic API enabled)
- [golang](examples/functions/golang/manual-simple) (doesn't use the MMLogic API)
## Additional examples
**Note:** These examples will be expanded on in future releases.
The following examples of how to call the APIs are provided in the repository. Both have `Dockerfile` and `cloudbuild.yaml` files in their respective directories:
* `test/cmd/frontendclient/main.go` acts as a client to the Frontend API, putting a player into the queue with simulated latencies from major metropolitan cities and a couple of other matchmaking attributes. It then waits for you to manually put a value in Redis to simulate a server connection string being written using the Backend API 'CreateAssignments' call, and displays that value on stdout for you to verify.
* `examples/backendclient/main.go` calls the Backend API and passes in the profile found in `backendstub/profiles/testprofile.json` to the `ListMatches` API endpoint, then continually prints the results until you exit, or there are insufficient players to make a match based on the profile.


# Development Guide
This doc explains how to set up a development environment so you can get started contributing to Open Match. If you instead want to write a matchmaker that _uses_ Open Match, you probably want to read the [User Guide](user_guide.md).
# Compiling from source
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild.yaml` files for each component in their respective directories. Note that most of them build from a 'base' image called `openmatch-devbase`. You can find a `Dockerfile` and `cloudbuild_base.yaml` file for this in the repository root. Build it first!
Note: Although Google Cloud Platform includes some free usage, you may incur charges.
**NOTE**: Before starting with this guide, you'll need to update all the URIs from the tutorial's gcr.io container image registry with the URI for your own image registry. If you are using the gcr.io registry on GCP, the default URI is `gcr.io/<PROJECT_NAME>`. Here's an example command to do the replacement for you (replace YOUR_REGISTRY_URI with your URI; run this from the repository root directory):
```
# Linux
egrep -lR 'matchmaker-dev-201405' . | xargs sed -i -e 's|matchmaker-dev-201405|<PROJECT_NAME>|g'
```
```
# Mac OS, you can delete the .backup files after if all looks good
egrep -lR 'matchmaker-dev-201405' . | xargs sed -i'.backup' -e 's|matchmaker-dev-201405|<PROJECT_NAME>|g'
```
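The substitution can be rehearsed safely on a scratch directory before touching the repository; `my-project` below is a stand-in for your own project name:

```bash
# Rehearse the registry replacement on a throwaway file (Linux sed syntax).
workdir=$(mktemp -d)
echo 'image: gcr.io/matchmaker-dev-201405/openmatch-frontendapi' > "$workdir/deploy.yaml"
egrep -lR 'matchmaker-dev-201405' "$workdir" | xargs sed -i -e 's|matchmaker-dev-201405|my-project|g'
cat "$workdir/deploy.yaml"    # image: gcr.io/my-project/openmatch-frontendapi
rm -rf "$workdir"
```

Once the output looks right, run the same `egrep | xargs sed` pipeline from the repository root.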
## Example of building using Google Cloud Builder
(your registry name will be different)
```
NAME
gcr.io/matchmaker-dev-201405/openmatch-backendapi
gcr.io/matchmaker-dev-201405/openmatch-devbase
gcr.io/matchmaker-dev-201405/openmatch-evaluator
gcr.io/matchmaker-dev-201405/openmatch-frontendapi
gcr.io/matchmaker-dev-201405/openmatch-mmf-golang-manual-simple
gcr.io/matchmaker-dev-201405/openmatch-mmf-php-mmlogic-simple
gcr.io/matchmaker-dev-201405/openmatch-mmf-py3-mmlogic-simple
gcr.io/matchmaker-dev-201405/openmatch-mmforc
gcr.io/matchmaker-dev-201405/openmatch-mmlogicapi
```
## Example of starting a GKE cluster
* Start a copy of redis and a service in front of it:
```
kubectl apply -f redis_deployment.yaml
kubectl apply -f redis_service.yaml
```
* Run the **core components**: the frontend API, the backend API, the matchmaker function orchestrator (MMFOrc), and the matchmaking logic API.
**NOTE**: In order to kick off jobs, the matchmaker function orchestrator needs a service account with permission to administer the cluster. This should be updated to the minimum required permissions before launch; it is pretty permissive, but acceptable for closed testing:
```
kubectl apply -f backendapi_deployment.yaml
kubectl apply -f backendapi_service.yaml
kubectl apply -f frontendapi_deployment.yaml
kubectl apply -f frontendapi_service.yaml
kubectl apply -f mmforc_deployment.yaml
kubectl apply -f mmforc_serviceaccount.yaml
kubectl apply -f mmlogicapi_deployment.yaml
kubectl apply -f mmlogicapi_service.yaml
```
* [optional, but recommended] Configure the OpenCensus metrics services:
```
kubectl apply -f metrics_services.yaml
```
* [optional] Trying to apply the Kubernetes Prometheus Operator resource definition files without a cluster-admin rolebinding on GKE doesn't work without running the following command first. See https://github.com/coreos/prometheus-operator/issues/357
```
* [optional, uses beta software] If using Prometheus as your metrics gathering backend, configure the [Prometheus Kubernetes Operator](https://github.com/coreos/prometheus-operator):
```
kubectl apply -f prometheus_operator.yaml
kubectl apply -f prometheus.yaml
kubectl apply -f prometheus_service.yaml
kubectl apply -f metrics_servicemonitor.yaml
```
You should now be able to see the core component pods running using `kubectl get pods`, and the core component metrics in the Prometheus Web UI by running `kubectl port-forward <PROMETHEUS_POD_NAME> 9090:9090` in your local shell, then opening http://localhost:9090/targets in your browser to see which services Prometheus is collecting from.
In the end: *caveat emptor*. These tools all work and are quite small, and as such are fairly easy for developers to understand by looking at the code and logging output. They are provided as-is just as a reference point of how to begin experimenting with Open Match integrations.
* `test/cmd/frontendclient/` is a fake client for the Frontend API. It pretends to be a group of real game clients connecting to Open Match. It requests a game, then dumps out the results each player receives to the screen until you press the enter key. **Note**: If you're using the rest of these test programs, you're probably using the Backend Client below. The default profiles that command sends to the backend look for many more than one player, so if you want to see meaningful results from running this Frontend Client, you're going to need to generate a bunch of fake players using the client load simulation tool at the same time. Otherwise, expect to wait until it times out as your matchmaker never has enough players to make a successful match.
* `examples/backendclient` is a fake client for the Backend API. It pretends to be a dedicated game server backend connecting to Open Match and sending in a match profile to fill. Once it receives a match object with a roster, it will also issue a call to assign the player IDs, and gives an example connection string. If it never seems to get a match, make sure you're adding players to the pool using the other two tools. Note: building this image requires that you first build the 'base' dev image (look for `cloudbuild_base.yaml` and `Dockerfile.base` in the root directory) and then update the first step to point to that image in your registry. This will be simplified in a future release. **Note**: If you run this by itself, expect it to wait about 30 seconds, then return a result of 'insufficient players' and exit - this is working as intended. Use the client load simulation tool below to add players to the pool or you'll never be able to make a successful match.
* `test/cmd/clientloadgen/` is a (VERY) basic client load simulation tool. It does **not** test the Frontend API - in fact, it ignores it and writes players directly to state storage on its own. It doesn't do anything but loop endlessly, writing players into state storage so you can test your backend integration, and run your custom MMFs and Evaluators (which are only triggered when there are players in the pool).
### Resources


# v{version}
This is the {version} release of Open Match.
Check the [README](https://github.com/GoogleCloudPlatform/open-match/tree/release-{version}) for details on features, installation and usage.
Release Notes
-------------
{ insert enhancements from the changelog and/or security and breaking changes }
**Breaking Changes**
* API Changed #PR
**Enhancements**
* New Harness #PR
**Security Fixes**
* Reduced privileges required for MMF. #PR
See [CHANGELOG](https://github.com/GoogleCloudPlatform/open-match/blob/release-{version}/CHANGELOG.md) for more details on changes.
Images
------
```bash
# Servers
docker pull gcr.io/open-match-public-images/openmatch-backendapi:{version}
docker pull gcr.io/open-match-public-images/openmatch-frontendapi:{version}
docker pull gcr.io/open-match-public-images/openmatch-mmforc:{version}
docker pull gcr.io/open-match-public-images/openmatch-mmlogicapi:{version}
# Evaluators
docker pull gcr.io/open-match-public-images/openmatch-evaluator-serving:{version}
# Sample Match Making Functions
docker pull gcr.io/open-match-public-images/openmatch-mmf-go-grpc-serving-simple:{version}
# Test Clients
docker pull gcr.io/open-match-public-images/openmatch-backendclient:{version}
docker pull gcr.io/open-match-public-images/openmatch-clientloadgen:{version}
docker pull gcr.io/open-match-public-images/openmatch-frontendclient:{version}
```
_This software is currently alpha, and subject to change. Not to be used in production systems._
Installation
------------
To deploy Open Match in your Kubernetes cluster run the following commands:
```bash
# Grant yourself cluster-admin permissions so that you can deploy service accounts.
kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$(YOUR_KUBERNETES_USER_NAME)
# Place all Open Match components in their own namespace.
kubectl create namespace open-match
# Install Open Match and monitoring services.
kubectl apply -f https://github.com/GoogleCloudPlatform/open-match/releases/download/v{version}/install.yaml --namespace open-match
# Install the example MMF and Evaluator.
kubectl apply -f https://github.com/GoogleCloudPlatform/open-match/releases/download/v{version}/install-example.yaml --namespace open-match
```


#!/bin/bash
# Usage:
# ./release.sh 0.5.0-82d034f unstable
# ./release.sh [SOURCE VERSION] [DEST VERSION]
# This is a basic shell script to publish the latest Open Match images.
# There are no guardrails yet, so use with care.
# Purge Images
# docker rmi $(docker images -a -q)
# 0.4.0-82d034f
SOURCE_VERSION=$1
DEST_VERSION=$2
SOURCE_PROJECT_ID=open-match-build
DEST_PROJECT_ID=open-match-public-images
IMAGE_NAMES="openmatch-backendapi openmatch-frontendapi openmatch-mmforc openmatch-mmlogicapi openmatch-evaluator-simple openmatch-mmf-cs-mmlogic-simple openmatch-mmf-go-mmlogic-simple openmatch-mmf-go-grpc-serving-simple openmatch-mmf-py3-mmlogic-simple openmatch-backendclient openmatch-clientloadgen openmatch-frontendclient"
for name in $IMAGE_NAMES
do
source_image=gcr.io/$SOURCE_PROJECT_ID/$name:$SOURCE_VERSION
dest_image=gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION
dest_image_latest=gcr.io/$DEST_PROJECT_ID/$name:latest
docker pull $source_image
docker tag $source_image $dest_image
docker tag $source_image $dest_image_latest
docker push $dest_image
docker push $dest_image_latest
done


# Release {version}
<!--
This is the release issue template. Make a copy of the markdown in this page
and copy it into a release issue. Fill in relevant values, found inside {}
{version} should be replaced with the version ie: 0.5.0.
There are 3 types of releases:
* Release Candidates - 1.0.0-rc1
* Full Releases - 1.2.0
* Hot Fixes - 1.0.1
# Release Candidate and Full Release Process
1. Create a Release Issue from the [release issue template](./release_issue.md).
1. Label the issue `kind/release`, and attach it to the milestone that it matches.
1. Complete all items in the release issue checklist.
1. Close the release issue.
# Hot Fix Process
1. Hotfixes will occur as needed, to be determined by those with commit access on the repository.
1. Create a Release Issue from the [release issue template](./release_issue.md).
1. Label the issue `kind/release`, and attach it to the next upcoming milestone.
1. Complete all items in the release issue checklist.
1. Close the release issue.
-->
Complete Milestone
------------------
- [ ] Create the next version milestone, use [semantic versioning](https://semver.org/) when naming it to be consistent with the [Go community](https://blog.golang.org/versioning-proposal).
- [ ] Visit the [milestone](https://github.com/GoogleCloudPlatform/open-match/milestone).
- [ ] Open a document for a draft [release notes](release.md).
- [ ] Add the milestone tag to all PRs and issues that were merged since the last milestone. Look at the [releases page](https://github.com/GoogleCloudPlatform/open-match/releases) and look for the "X commits to master since this release" for the diff. The link resolves to https://github.com/GoogleCloudPlatform/open-match/compare/v{version}...master.
- [ ] Review all [milestone-less closed issues](https://github.com/GoogleCloudPlatform/open-match/issues?q=is%3Aissue+is%3Aclosed+no%3Amilestone) and assign the appropriate milestone.
- [ ] Review all [issues in milestone](https://github.com/GoogleCloudPlatform/open-match/milestones) for proper [labels](https://github.com/GoogleCloudPlatform/open-match/labels) (ex: area/build).
- [ ] Review all [milestone-less closed PRs](https://github.com/GoogleCloudPlatform/open-match/pulls?q=is%3Apr+is%3Aclosed+no%3Amilestone) and assign the appropriate milestone.
- [ ] Review all [PRs in milestone](https://github.com/GoogleCloudPlatform/open-match/milestones) for proper [labels](https://github.com/GoogleCloudPlatform/open-match/labels) (ex: area/build).
- [ ] View all open entries in milestone and move them to a future milestone if they aren't getting closed in time. https://github.com/GoogleCloudPlatform/open-match/milestones/v{version}
- [ ] Review all closed PRs against the milestone. Put the user visible changes into the release notes using the suggested format. https://github.com/GoogleCloudPlatform/open-match/pulls?q=is%3Apr+is%3Aclosed+milestone%3Av{version}
- [ ] Review all closed issues against the milestone. Put the user visible changes into the release notes using the suggested format. https://github.com/GoogleCloudPlatform/open-match/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aclosed+milestone%3Av{version}
- [ ] Verify the [milestone](https://github.com/GoogleCloudPlatform/open-match/milestones) is effectively 100% at this point with the exception of the release issue itself.
TODO: Add details for appropriate tagging for issues.
Build Artifacts
---------------
- [ ] Create a PR to bump the version.
- [ ] Open the [`Makefile`](makefile-version) and change BASE_VERSION value. Release candidates use the -rc# suffix.
- [ ] Open the [`install/helm/open-match/Chart.yaml`](om-chart-yaml-version) and [`install/helm/open-match-example/Chart.yaml`](om-example-chart-yaml-version) and change the `appVersion` and `version` entries.
- [ ] Open the [`install/helm/open-match/values.yaml`](om-values-yaml-version) and [`install/helm/open-match-example/values.yaml`](om-example-values-yaml-version) and change the `tag` entries.
- [ ] Open the [`site/config.toml`] and change the `release_branch` and `release_version` entries.
- [ ] Open the [`README.md`](readme-deploy) and update the version references.
- [ ] Run `make clean release`
- [ ] There might be additional references to the old version; be careful not to change those that keep it for historical purposes.
- [ ] Submit the pull request.
- [ ] Take note of the git hash in master, `git checkout master && git pull && git rev-parse HEAD`
- [ ] Go to [Cloud Build](https://pantheon.corp.google.com/cloud-build/triggers?project=open-match-build), under Post Submit click "Run Trigger".
- [ ] Go to the History section and find the "Post Submit" build that's running. Wait for it to go green. If it's red, fix the error and repeat this section. Take note of the version tag for the next step.
- [ ] Run `./docs/governance/templates/release.sh {source version tag} {version}` to copy the images to open-match-public-images.
- [ ] Create a *draft* release with the [release template][release-template]
- [ ] Make a `tag` with the release version. The tag must be v{version}. Example: v0.5.0. Append -rc# for release candidates. Example: v0.5.0-rc1.
- [ ] Copy the files from `build/release/` generated from `make release` from earlier as release artifacts.
- [ ] Run `make delete-gke-cluster create-gke-cluster push-helm sleep-10 install-chart install-example-chart` and verify that the pods are all healthy.
- [ ] Run `make delete-gke-cluster create-gke-cluster` and run through the instructions under the [README](readme-deploy), verify the pods are healthy. You'll need to adjust the path to the `install/yaml/install.yaml` and `install/yaml/install-example.yaml` in your local clone since you haven't published them yet.
- [ ] Publish the [Release](om-release) in Github.
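The tag naming rule above can be checked mechanically; the pattern below encodes v{version} with an optional -rc# suffix (this helper is a sketch, not part of the release tooling):

```bash
# Validate a candidate release tag such as v0.5.0 or v0.5.0-rc1.
tag="v0.5.0-rc1"
if echo "$tag" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+(-rc[0-9]+)?$'; then
  echo "tag OK: $tag"
else
  echo "bad tag: $tag" >&2
  exit 1
fi
```

A bare version like `0.5.0` (missing the `v` prefix) fails the check, which is the mistake this rule is meant to catch.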
Announce
--------
- [ ] Send an email to the [mailing list](mailing-list-post) with the release details (copy-paste the release blog post)
- [ ] Send a chat on the [Slack channel](om-slack). "Open Match {version} has been released! Check it out at {release url}."
[om-slack]: https://open-match.slack.com/
[mailing-list-post]: https://groups.google.com/forum/#!newtopic/open-match-discuss
[release-template]: https://github.com/GoogleCloudPlatform/open-match/blob/master/docs/governance/templates/release.md
[makefile-version]: https://github.com/GoogleCloudPlatform/open-match/blob/master/Makefile#L53
[om-chart-yaml-version]: https://github.com/GoogleCloudPlatform/open-match/blob/master/install/helm/open-match/Chart.yaml#L16
[om-values-yaml-version]: https://github.com/GoogleCloudPlatform/open-match/blob/master/install/helm/open-match/values.yaml#L16
[om-example-chart-yaml-version]: https://github.com/GoogleCloudPlatform/open-match/blob/master/install/helm/open-match-example/Chart.yaml#L16
[om-example-values-yaml-version]: https://github.com/GoogleCloudPlatform/open-match/blob/master/install/helm/open-match-example/values.yaml#L16
[om-release]: https://github.com/GoogleCloudPlatform/open-match/releases/new
[readme-deploy]: https://github.com/GoogleCloudPlatform/open-match/blob/master/README.md#deploy-to-kubernetes

docs/integrations.md Normal file

@ -0,0 +1,17 @@
## Open Source Software integrations
### Structured logging
Logging for Open Match uses the [Golang logrus module](https://github.com/sirupsen/logrus) to provide structured logs. Logs are output to `stdout` in each component, as expected by Docker and Kubernetes. Level and format are configurable via config/matchmaker_config.json. If you have a specific log aggregator as your final destination, we recommend you have a look at the logrus documentation as there is probably a log formatter that plays nicely with your stack.
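The shape of one of these structured log lines can be sketched with just the standard library. This is an illustrative sketch, not Open Match's actual logging code: the field names mimic logrus's `JSONFormatter` output (minus the timestamp), and the `component` field is an assumed example of a structured field.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// entry mirrors the shape of a logrus JSONFormatter record: a severity
// level, a message, and arbitrary structured fields (here just "component").
type entry struct {
	Level     string `json:"level"`
	Msg       string `json:"msg"`
	Component string `json:"component"`
}

// formatEntry renders one structured log line the way a JSON formatter
// would, so it can be parsed by any log aggregator.
func formatEntry(component, msg string) string {
	b, _ := json.Marshal(entry{Level: "info", Msg: msg, Component: component})
	return string(b)
}

func main() {
	// Each component writes lines like this to stdout for Docker/Kubernetes.
	fmt.Println(formatEntry("frontendapi", "player entered matchmaking pool"))
}
```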
### Instrumentation for metrics
Open Match uses [OpenCensus](https://opencensus.io/) for metrics instrumentation. The [gRPC](https://grpc.io/) integrations are built-in, and Golang redigo module integrations are incoming, but [haven't been merged into the official repo](https://github.com/opencensus-integrations/redigo/pull/1). All of the core components expose HTTP `/metrics` endpoints on the port defined in `config/matchmaker_config.json` (default: 9555) for Prometheus to scrape. If you would like to export to a different metrics aggregation platform, we suggest you have a look at the OpenCensus documentation &mdash; there may be one written for you already, and switching to it may be as simple as changing a few lines of code.
**Note:** A standard for instrumentation of MMFs is planned.
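A Prometheus scrape job for these endpoints might look like the following sketch. The port 9555 is the default noted above; the service names are assumptions for illustration, so substitute the ones from your deployment:

```yaml
scrape_configs:
  - job_name: 'open-match'
    metrics_path: /metrics
    static_configs:
      - targets:
          - 'om-frontendapi:9555'
          - 'om-backendapi:9555'
```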
### Redis setup
By default, Open Match expects you to run Redis *somewhere*. Connection information can be put in the config file (`matchmaker_config.json`) for any Redis instance reachable from the [Kubernetes namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). By default, Open Match sensibly runs in the Kubernetes `default` namespace. In most instances, we expect users will run a copy of Redis in a pod in Kubernetes, with a service pointing to it.
* HA configurations for Redis aren't implemented by the provided Kubernetes resource definition files, but Open Match only expects the Redis service to be named `redis`, so swapping in a multi-instance deployment is straightforward.
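As a minimal sketch, the relevant connection fields in `matchmaker_config.json` look like this. The key names follow the defaults used by the example code elsewhere in this changeset, and the values are the in-cluster defaults; check your config file for the authoritative layout:

```json
{
  "REDIS_SERVICE_HOST": "redis",
  "REDIS_SERVICE_PORT": "6379"
}
```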


@ -11,7 +11,12 @@ All the usual guidance around hardening and securing Kubernetes are applicable t
* Note that the MMFOrc process has cluster management permissions by default. Before moving to production, you should create a role with access only to create Kubernetes Jobs and configure the MMFOrc to use it.
### Kubernetes Jobs (MMFOrc)
The 0.3.0 MMFOrc component runs your MMFs as Kubernetes Jobs. You should periodically delete these jobs to keep the cluster running smoothly. How often you need to delete them is dependent on how many you are running. There are a number of open source solutions that can do this for you. ***Note that once you delete a job, you won't have access to that job's logs anymore unless you're sending your logs from Kubernetes to a log aggregator like Google Stackdriver. This can make it a challenge to troubleshoot issues.***
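One approach (a Kubernetes feature gate at the time of writing, so verify it is enabled on your cluster before relying on it) is to set a TTL on finished Jobs so the cluster cleans them up itself; the 3600-second value below is just a placeholder:

```yaml
apiVersion: batch/v1
kind: Job
spec:
  # Delete this Job (and its pods) one hour after it finishes.
  ttlSecondsAfterFinished: 3600
```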
### CPU and Memory limits
For any production Kubernetes deployment, it is good practice to profile your processes and select [resource limits and requests](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) according to your results. For example, you'll likely want to set adequate resource requests based on your expected player base and some load testing for the Redis state storage pods. This will help Kubernetes avoid scheduling other intensive processes on the same underlying node and keep you from running into resource contention issues. Another example might be an MMF with a particularly large memory or CPU footprint - maybe you have one that searches a lot of players for a potential match. This would be a good candidate for resource limits and requests in Kubernetes to both ensure it gets the CPU and RAM it needs to complete quickly, and to make sure it's not scheduled alongside another intensive Kubernetes pod.
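As a sketch, limits and requests are set per container in the pod spec. The numbers below are placeholders to be replaced with your profiling results, and the image tag is illustrative:

```yaml
containers:
  - name: redis
    image: redis:4
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "1"
        memory: "2Gi"
```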
### State storage
The default state storage for Open Match is a _single instance_ of Redis. Although it _is_ possible to go to production with this solution if you're willing to accept the potential downsides, for most deployments an HA Redis configuration would better fit your needs. An example YAML file for creating a [self-healing HA Redis deployment on Kubernetes](../install/yaml/01-redis-failover.yaml) is available. Regardless of which configuration you use, it is probably a good idea to put some [resource requests](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) in your Kubernetes resource definition as mentioned above.
You can find more discussion in the [state storage readme doc](../internal/statestorage/redis/README.md).
## Open Match config
Debug logging and the extra debug code paths should be disabled in the `config/matchmaker_config.json` file (as of the time of this writing, 0.3.0).

docs/references.md Normal file

@ -0,0 +1,15 @@
### Guides
* [Production guide](./docs/production.md) Lots of best practices to be written here before the 1.0 release; right now it's a scattered collection of notes. **WIP**
* [Development guide](./docs/development.md)
## This all sounds great, but can you explain Docker and/or Kubernetes to me?
### Docker
- [Docker's official "Getting Started" guide](https://docs.docker.com/get-started/)
- [Katacoda's free, interactive Docker course](https://www.katacoda.com/courses/docker)
### Kubernetes
- [You should totally read this comic and interactive tutorial](https://cloud.google.com/kubernetes-engine/kubernetes-comic/)
- [Katacoda's free, interactive Kubernetes course](https://www.katacoda.com/courses/kubernetes)


@ -4,7 +4,7 @@ Releases are scheduled for every 6 weeks. **Every release is a stable, long-ter
Our current thinking is to wait to take Open Match out of alpha/beta (and label it 1.0) until it can be used out-of-the-box, standalone, for developers that don't have any existing platform services. Which is to say, the majority of **established game developers likely won't have any reason to wait for the 1.0 release if Open Match already handles your needs**. If you already have live platform services that you plan to integrate Open Match with (player authentication, a group invite system, dedicated game servers, metrics collection, logging aggregation, etc), then a lot of the features planned between 0.4.0 and 1.0 likely aren't of much interest to you anyway.
## Upcoming releases
* **0.4.0** &mdash; Agones Integration & MMF on [Knative](https://cloud.google.com/Knative/)
* **0.4.0** &mdash; Agones Integration & MMF on [Knative](https://cloud.google.com/knative/)
MMF instrumentation
Match object expiration / lazy deletion
API autoscaling by default
@ -18,3 +18,43 @@ Our current thinking is to wait to take Open Match out of alpha/beta (and label
* The next version (0.4.0) will focus on making MMFs run on serverless platforms - specifically Knative. This will just be first steps, as Knative is still pretty early. We want to get a proof of concept working so we can roadmap out the future "MMF on Knative" experience. Our intention is to keep MMFs as compatible as possible with the current Kubernetes job-based way of doing them. Our hope is that by the time Knative is mature, we'll be able to provide a [Knative build](https://github.com/Knative/build) pipeline that will take existing MMFs and build them as Knative functions. In the meantime, we'll map out a relatively painless (but not yet fully automated) way to make an existing MMF into a Kubernetes Deployment that looks as similar to what [Knative serving](https://github.com/knative/serving) is shaping up to be, in an effort to make the eventual switchover painless. Basically all of this is just _optimizing MMFs to make them spin up faster and take fewer resources_, **we're not planning to change what MMFs do or the interfaces they need to fulfill**. Existing MMFs will continue to run as-is, and in the future moving them to Knative should be both **optional** and **largely automated**.
* 0.4.0 represents the natural stopping point for adding new functionality until we have more community uptake and direction. We don't anticipate many API changes in 0.4.0 and beyond. Maybe new API calls for new functionality, but we're unlikely to see big shifts in existing calls through 1.0 and its point releases. We'll issue a new major release version if we decide we need those changes.
* The 0.5.0 version and beyond will be focused on operationalizing the out-of-the-box experience. Metrics, analytics, a default dashboard, additional tooling, and a load testing suite are all planned. We want it to be easy for operators to see KPIs and know what's going on with Open Match.
# Planned improvements
See the [provisional roadmap](docs/roadmap.md) for more information on upcoming releases.
## Documentation
- [ ] “Writing your first matchmaker” getting started guide will be included in an upcoming version.
- [ ] Documentation for using the example customizable components and the `backendstub` and `frontendstub` applications to do an end-to-end (e2e) test will be written. This all works now, but needs to be written up.
- [ ] Documentation on release process and release calendar.
## State storage
- [X] All state storage operations should be isolated from core components into the `statestorage/` modules. This is necessary precursor work to enabling Open Match state storage to use software other than Redis.
- [X] [The Redis deployment should have an example HA configuration](https://github.com/GoogleCloudPlatform/open-match/issues/41)
- [X] Redis watch should be unified to watch a hash and stream updates. The code for this is written and validated but not committed yet.
- [ ] We don't want to support two redis watcher code paths, but we will until golang protobuf reflection is a bit more usable. [Design doc](https://docs.google.com/document/d/19kfhro7-CnBdFqFk7l4_HmwaH2JT_Rhw5-2FLWLEGGk/edit#heading=h.q3iwtwhfujjx), [github issue](https://github.com/golang/protobuf/issues/364)
- [X] Player/Group records generated when a client enters the matchmaking pool need to be removed after a certain amount of time with no activity. When using Redis, this will be implemented as an expiration on the player record.
## Instrumentation / Metrics / Analytics
- [ ] Instrumentation of MMFs is in the planning stages. Since MMFs are by design meant to be completely customizable (to the point of allowing any process that can be packaged in a Docker container), metrics/stats will need to have an expected format and a formalized outgoing pathway. The current thinking is that metrics should be written to a particular key in state storage in a format compatible with OpenCensus, then collected, aggregated, and exported to Prometheus by another process.
- [ ] [OpenCensus tracing](https://opencensus.io/core-concepts/tracing/) will be implemented in an upcoming version. This is likely going to require Knative.
- [X] Read logrus logging configuration from matchmaker_config.json.
## Security
- [ ] The Kubernetes service account used by the MMFOrc should be updated to have min required permissions. [Issue 52](issues/52)
## Kubernetes
- [ ] Autoscaling isn't turned on for the Frontend or Backend API Kubernetes deployments by default.
- [X] A [Helm](https://helm.sh/) chart to stand up Open Match may be provided in an upcoming version. For now just use the [installation YAMLs](./install/yaml).
- [ ] A knative-based implementation of MMFs is in the planning stages.
## CI / CD / Build
- [X] We plan to host 'official' docker images for all release versions of the core components in publicly available docker registries soon. This is tracked in [Issue #45](issues/45) and is blocked by [Issue 42](issues/42).
- [X] CI/CD for this repo and the associated status tags are planned.
- [ ] Golang unit tests will be shipped in an upcoming version.
- [ ] A full load-testing and e2e testing suite will be included in an upcoming version.
## Will not Implement
- [X] Defining multiple images inside a profile for the purposes of experimentation adds another layer of complexity to profiles. This can instead be handled outside of Open Match with custom match functions in collaboration with a director (the thing that calls the backend to schedule matchmaking).
### Special Thanks
- Thanks to https://jbt.github.io/markdown-editor/ for help in marking this document down.

examples/backendclient/Dockerfile Executable file → Normal file

@ -1,8 +1,24 @@
#FROM golang:1.10.3 as builder
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/backendclient
COPY ./ ./
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o backendclient .
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
CMD ["./backendclient"]
FROM open-match-base-build as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/backendclient/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static
COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/examples/backendclient/backendclient .
COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/examples/backendclient/profiles profiles
ENTRYPOINT ["/backendclient"]


@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-backendclient:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-backendclient:dev']


@ -4,7 +4,6 @@ assumes that the backend api is up and can be accessed through a k8s service
named om-backendapi
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
@ -25,15 +24,26 @@ import (
"context"
"encoding/json"
"errors"
"flag"
"io"
"io/ioutil"
"log"
"net"
"os"
backend "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/gobs/pretty"
"github.com/tidwall/gjson"
"google.golang.org/grpc"
"google.golang.org/grpc/status"
)
var (
filename = flag.String("file", "profiles/testprofile.json", "JSON file from which to read match properties")
beCall = flag.String("call", "ListMatches", "Open Match backend match request gRPC call to test")
server = flag.String("backend", "om-backendapi:50505", "Hostname and IP of the Open Match backend")
assignment = flag.String("assignment", "example.server.dgs:12345", "Assignment to send to matched players")
delAssignments = flag.Bool("rm", false, "Delete assignments. Leave off to be able to manually validate assignments in state storage")
verbose = flag.Bool("verbose", false, "Print out as much as possible")
)
func bytesToString(data []byte) string {
@ -41,25 +51,31 @@ func bytesToString(data []byte) string {
}
func ppJSON(s string) {
buf := new(bytes.Buffer)
json.Indent(buf, []byte(s), "", " ")
log.Println(buf)
if *verbose {
buf := new(bytes.Buffer)
json.Indent(buf, []byte(s), "", " ")
log.Println(buf)
}
return
}
func main() {
flag.Parse()
log.Print("Parsing flags:")
log.Printf(" [flags] Reading properties from file at %v", *filename)
log.Printf(" [flags] Using OM Backend address %v", *server)
log.Printf(" [flags] Using OM Backend %v call", *beCall)
log.Printf(" [flags] Assigning players to %v", *assignment)
log.Printf(" [flags] Deleting assignments? %v", *delAssignments)
if !(*beCall == "CreateMatch" || *beCall == "ListMatches") {
log.Printf(" [flags] Unknown OM Backend call %v! Exiting...", *beCall)
return
}
// Read the profile
filename := "profiles/testprofile.json"
/*
if len(os.Args) > 1 {
filename = os.Args[1]
}
log.Println("Reading profile from ", filename)
*/
jsonFile, err := os.Open(filename)
jsonFile, err := os.Open(*filename)
if err != nil {
panic("Failed to open file specified at command line. Did you forget to specify one?")
log.Fatalf("Failed to open file %v", *filename)
}
defer jsonFile.Close()
@ -71,98 +87,137 @@ func main() {
}
jsonProfile := buffer.String()
pbProfile := &backend.MatchObject{}
/*
err = jsonpb.UnmarshalString(jsonProfile, pbProfile)
if err != nil {
log.Println(err)
}
*/
pbProfile := &pb.MatchObject{}
pbProfile.Properties = jsonProfile
log.Println("Requesting matches that fit profile:")
ppJSON(jsonProfile)
//jsonProfile := bytesToString(jsonData)
// Connect gRPC client
ip, err := net.LookupHost("om-backendapi")
if err != nil {
panic(err)
}
conn, err := grpc.Dial(ip[0]+":50505", grpc.WithInsecure())
conn, err := grpc.Dial(*server, grpc.WithInsecure())
if err != nil {
log.Fatalf("failed to connect: %s", err.Error())
}
client := backend.NewBackendClient(conn)
log.Println("API client connected to", ip[0]+":50505")
profileName := "test-dm-usc1f"
_ = profileName
client := pb.NewBackendClient(conn)
log.Println("Backend client connected to", *server)
var profileName string
if gjson.Get(jsonProfile, "name").Exists() {
profileName = gjson.Get(jsonProfile, "name").String()
} else {
profileName = "testprofilename"
log.Println("JSON Profile does not contain a name; using ", profileName)
}
pbProfile.Id = profileName
pbProfile.Properties = jsonProfile
mmfcfg := &pb.MmfConfig{Name: profileName}
mmfcfg.Type = pb.MmfConfig_GRPC
mmfcfg.Host = gjson.Get(jsonProfile, "hostname").String()
mmfcfg.Port = int32(gjson.Get(jsonProfile, "port").Int())
req := &pb.CreateMatchRequest{
Match: pbProfile,
Mmfcfg: mmfcfg,
}
log.Println("Backend Request:")
ppJSON(jsonProfile)
pretty.PrettyPrint(mmfcfg)
log.Printf("Establishing HTTPv2 stream...")
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
//match, err := client.CreateMatch(ctx, pbProfile)
for {
log.Println("Attempting to send ListMatches call")
stream, err := client.ListMatches(ctx, pbProfile)
matchChan := make(chan *pb.MatchObject)
doneChan := make(chan bool)
go func() {
// Watch for results and print as they come in.
log.Println("Watching for match results...")
for {
select {
case match := <-matchChan:
if match.Error == "insufficient players" {
log.Println("Waiting for a larger player pool...")
}
// Validate JSON before trying to parse it
if !gjson.Valid(string(match.Properties)) {
log.Println(errors.New("invalid json"))
}
log.Println("Received match:")
pretty.PrettyPrint(match)
// Assign players in this match to our server
log.Println("Assigning players to DGS at", *assignment)
assign := &pb.Assignments{Rosters: match.Rosters, Assignment: *assignment}
log.Printf("Waiting for matches...")
_, err = client.CreateAssignments(context.Background(), &pb.CreateAssignmentsRequest{
Assignment: assign,
})
if err != nil {
log.Println(err)
}
log.Println("Success!")
if *delAssignments {
log.Println("deleting assignments")
for _, a := range assign.Rosters {
_, err = client.DeleteAssignments(context.Background(), &pb.DeleteAssignmentsRequest{Roster: a})
if err != nil {
log.Println(err)
}
log.Println("Success Deleting Assignments!")
}
} else {
log.Println("Not deleting assignments [demo mode].")
}
}
if *beCall == "CreateMatch" {
// Got a result; done here.
log.Println("Got single result from CreateMatch, exiting...")
doneChan <- true
return
}
}
}()
// Make the requested backend call: CreateMatch calls once, ListMatches continually calls.
log.Printf("Attempting %v() call", *beCall)
switch *beCall {
case "CreateMatch":
resp, err := client.CreateMatch(ctx, req)
if err != nil {
panic(err)
}
log.Printf("CreateMatch returned; processing match")
matchChan <- resp.Match
<-doneChan
case "ListMatches":
stream, err := client.ListMatches(ctx, &pb.ListMatchesRequest{
Mmfcfg: req.Mmfcfg,
Match: req.Match,
})
if err != nil {
log.Fatalf("Attempting to open stream for ListMatches(_) = _, %v", err)
}
//for i := 0; i < 2; i++ {
for {
log.Printf("Waiting for matches...")
match, err := stream.Recv()
resp, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
log.Fatalf("Error reading stream for ListMatches(_) = _, %v", err)
stat, ok := status.FromError(err)
if ok {
log.Printf("Error reading stream for ListMatches() returned status: %s %s", stat.Code().String(), stat.Message())
} else {
log.Printf("Error reading stream for ListMatches() returned status: %s", err)
}
break
}
if match.Properties == "{error: insufficient_players}" {
log.Println("Waiting for a larger player pool...")
//break
}
// Validate JSON before trying to parse it
if !gjson.Valid(string(match.Properties)) {
log.Println(errors.New("invalid json"))
}
log.Println("Received match:")
ppJSON(match.Properties)
//fmt.Println(match) // Debug
// Assign players in this match to our server
connstring := "example.com:12345"
if len(os.Args) >= 2 {
connstring = os.Args[1]
log.Printf("Player assignment '%v' specified at commandline", connstring)
}
log.Println("Assigning players to DGS at", connstring)
assign := &backend.Assignments{Rosters: match.Rosters, Assignment: connstring}
log.Printf("Waiting for matches...")
_, err = client.CreateAssignments(context.Background(), assign)
if err != nil {
log.Println(err)
}
log.Println("Success! Not deleting assignments [demo mode].")
matchChan <- resp.Match
}
//log.Println("deleting assignments")
//playerstr = strings.Join(players[0:len(players)/2], " ")
//roster.PlayerIds = playerstr
//_, err = client.DeleteAssignments(context.Background(), roster)
}
}


@ -1,7 +1,8 @@
{
"imagename":"gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple:dev",
"name":"testprofilev1",
"id":"testprofile",
"hostname": "om-function",
"port": 50502,
"properties":{
"pools": [
{


@ -0,0 +1,23 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/evaluators/golang/serving/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
FROM gcr.io/distroless/static
COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/examples/evaluators/golang/serving/serving .
ENTRYPOINT ["/serving"]


@ -0,0 +1,125 @@
/*
This is a sample Evaluator built using the Evaluator Harness. It evaluates
multiple proposals and approves a subset of them. This sample demonstrates
how to build a basic Evaluator using the Evaluator Harness. This example
over-simplifies the actual evaluation decisions and hence should not be
used as-is for a real scenario.
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"context"
"fmt"
harness "github.com/GoogleCloudPlatform/open-match/internal/harness/evaluator/golang"
"github.com/GoogleCloudPlatform/open-match/internal/pb"
)
func main() {
// This invokes the harness to set up the Evaluator. The harness abstracts
// fetching proposals ready for evaluation and the process of transforming
// approved proposals into results that Open Match can relay to the caller
// requesting matches.
harness.RunEvaluator(Evaluate)
}
// Evaluate is where your custom evaluation logic lives.
// Input:
// - proposals : List of all the proposals to be considered for evaluation. Each proposal will have
// Rosters comprising the players belonging to that proposal.
// Output:
// - (proposals) : List of approved proposal IDs that can be returned as match results.
func Evaluate(ctx context.Context, proposals []*pb.MatchObject) ([]string, error) {
// Map of approved and overloaded proposals. Using maps for easier lookup.
approvedProposals := map[string]bool{}
overloadedProposals := map[string]bool{}
// Map of all the players encountered in the proposals. Each entry maps a player id to
// the first match in which the player was encountered.
allPlayers := map[string]string{}
// Iterate over each proposal to either add to approved map or overloaded map.
for _, proposal := range proposals {
proposalID := proposal.Id
approved := true
players := getPlayersInProposal(proposal)
// Iterate over each player in the proposal to check if the player was encountered before.
for _, playerID := range players {
if propID, found := allPlayers[playerID]; found {
// Player was encountered in an earlier proposal. Mark the current proposal as overloaded (not approved).
// Also, the first proposal where the player was encountered may have been marked approved. Remove that proposal
// from the approved proposals and add it to the overloaded proposals, since we encountered its player in the current proposal too.
approved = false
delete(approvedProposals, propID)
overloadedProposals[propID] = true
} else {
// Player encountered for the first time, add to all players map with the current proposal.
allPlayers[playerID] = proposalID
}
}
// Only classify the proposal after checking every player in it; otherwise a
// proposal could be left in the approved map by an early player even though
// a later player overloads it.
if approved {
approvedProposals[proposalID] = true
} else {
overloadedProposals[proposalID] = true
}
}
// Convert the maps to lists of overloaded, approved proposals.
overloadedList := []string{}
approvedList := []string{}
for k := range overloadedProposals {
overloadedList = append(overloadedList, k)
}
for k := range approvedProposals {
approvedList = append(approvedList, k)
}
// Select proposals to approve from the overloaded proposals list.
chosen, err := chooseProposals(overloadedList)
if err != nil {
return nil, fmt.Errorf("Failed to select approved list from overloaded proposals, %v", err)
}
// Add the chosen proposals to the approved list.
approvedList = append(approvedList, chosen...)
return approvedList, nil
}
// chooseProposals should look through all overloaded proposals (that is, have a player that is also
// in another proposed match) and choose the proposals to approve. This is where the core evaluation
// logic will be added.
func chooseProposals(overloaded []string) ([]string, error) {
// As a basic example, we pick the first overloaded proposal for approval.
approved := []string{}
if len(overloaded) > 0 {
approved = append(approved, overloaded[0])
}
return approved, nil
}
func getPlayersInProposal(proposal *pb.MatchObject) []string {
var players []string
for _, r := range proposal.Rosters {
for _, p := range r.Players {
players = append(players, p.Id)
}
}
return players
}


@ -1,10 +0,0 @@
# Golang application builder steps
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/evaluators/golang/simple
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/mmfstub/mmfstub mmfstub
ENTRYPOINT ["./simple"]


@ -1,11 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-evaluator:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-evaluator:dev']


@ -1,302 +0,0 @@
// The evaluator is, generalized, a weighted graph problem
// https://www.google.co.jp/search?q=computer+science+weighted+graph&oq=computer+science+weighted+graph
// However, it's up to the developer to decide what values in their matchmaking
// decision process are the weights as well as what to prioritize (make as many
// groups as possible is a common goal). The default evaluator makes naive
// decisions under the assumption that most use cases would rather spend their
// time tweaking the profiles sent to matchmaking such that they are less and less
// likely to choose the same players.
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"context"
"fmt"
"log"
"os"
"strings"
"time"
om_messages "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/redispb"
"github.com/gobs/pretty"
"github.com/gomodule/redigo/redis"
"github.com/spf13/viper"
)
func main() {
//Init Logger
lgr := log.New(os.Stdout, "MMFEval: ", log.LstdFlags)
lgr.Println("Initializing example MMF proposal evaluator")
// Read config
lgr.Println("Initializing config...")
cfg, err := readConfig("matchmaker_config", map[string]interface{}{
"REDIS_SERVICE_HOST": "redis",
"REDIS_SERVICE_PORT": "6379",
"auth": map[string]string{
// Read from k8s secret eventually
// Probably doesn't need a map, just here for reference
"password": "12fa",
},
})
if err != nil {
panic(nil)
}
// Connect to redis
// As per https://www.iana.org/assignments/uri-schemes/prov/redis
// redis://user:secret@localhost:6379/0?foo=bar&qux=baz // redis pool docs: https://godoc.org/github.com/gomodule/redigo/redis#Pool
redisURL := "redis://" + cfg.GetString("REDIS_SERVICE_HOST") + ":" + cfg.GetString("REDIS_SERVICE_PORT")
lgr.Println("Connecting to redis at", redisURL)
pool := redis.Pool{
MaxIdle: 3,
MaxActive: 0,
IdleTimeout: 60 * time.Second,
Dial: func() (redis.Conn, error) { return redis.DialURL(redisURL) },
}
redisConn := pool.Get()
defer redisConn.Close()
// TODO: write some code to allow the context to be safely cancelled
/*
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
*/
start := time.Now()
proposedMatchIds, overloadedPlayers, overloadedMatches, approvedMatches, err := stub(cfg, &pool)
overloadedPlayerList, overloadedMatchList, approvedMatchList := generateLists(overloadedPlayers, overloadedMatches, approvedMatches)
fmt.Println("overloadedPlayers")
pretty.PrettyPrint(overloadedPlayers)
fmt.Println("overloadePlayerList")
pretty.PrettyPrint(overloadedPlayerList)
fmt.Println("overloadedMatchList")
pretty.PrettyPrint(overloadedMatchList)
fmt.Println("approvedMatchList")
pretty.PrettyPrint(approvedMatchList)
approved, rejected, err := chooseMatches(overloadedMatchList)
approvedMatchList = append(approvedMatchList, approved...)
// run redis commands to approve matches
for _, proposalIndex := range approvedMatchList {
// The match object was already written by the MMF, just change the
// name to what the Backend API (apisrv.go) is looking for.
proposedID := proposedMatchIds[proposalIndex]
// Incoming proposal keys look like this:
// proposal.1542600048.80e43fa085844eebbf53fc736150ef96.testprofile
// format:
// "proposal".timestamp.unique_matchobject_id.profile_name
values := strings.Split(proposedID, ".")
moID, proID := values[2], values[3]
backendID := moID + "." + proID
fmt.Printf("approving proposal #%+v:%+v\n", proposalIndex, moID)
fmt.Println("RENAME", proposedID, backendID)
_, err = redisConn.Do("RENAME", proposedID, backendID)
if err != nil {
// RENAME only fails if the source key doesn't exist
fmt.Printf("err = %+v\n", err)
}
}
//TODO: Need to requeue for another job run here.
for _, proposalIndex := range rejected {
fmt.Println("rejecting ", proposalIndex)
proposedID := proposedMatchIds[proposalIndex]
fmt.Printf("proposedID = %+v\n", proposedID)
values := strings.Split(proposedID, ".")
fmt.Printf("values = %+v\n", values)
timestamp, moID, proID := values[0], values[1], values[2]
fmt.Printf("timestamp = %+v\n", timestamp)
fmt.Printf("moID = %+v\n", moID)
fmt.Printf("proID = %+v\n", proID)
}
lgr.Printf("Finished in %v seconds.", time.Since(start).Seconds())
}
// chooseMatches looks through all match proposals that are overloaded (that
// is, have a player that is also in another proposed match) and chooses those
// to approve and those to reject.
// TODO: this needs a complete overhaul in a 'real' graph search
func chooseMatches(overloaded []int) ([]int, []int, error) {
// Super naive: approve the first overloaded match and reject all the others.
fmt.Printf("overloaded = %+v\n", overloaded)
fmt.Printf("len(overloaded) = %+v\n", len(overloaded))
if len(overloaded) > 0 {
fmt.Printf("overloaded[0:1] = %+v\n", overloaded[0:1])
fmt.Printf("overloaded[1:] = %+v\n", overloaded[1:])
return overloaded[0:1], overloaded[1:], nil
}
return []int{}, overloaded, nil
}
func readConfig(filename string, defaults map[string]interface{}) (*viper.Viper, error) {
/*
Examples of redis-related env vars as written by k8s
REDIS_SENTINEL_PORT_6379_TCP=tcp://10.55.253.195:6379
REDIS_SENTINEL_PORT=tcp://10.55.253.195:6379
REDIS_SENTINEL_PORT_6379_TCP_ADDR=10.55.253.195
REDIS_SERVICE_PORT=6379
REDIS_SENTINEL_PORT_6379_TCP_PORT=6379
REDIS_SENTINEL_PORT_6379_TCP_PROTO=tcp
REDIS_SERVICE_HOST=10.55.253.195
*/
v := viper.New()
for key, value := range defaults {
v.SetDefault(key, value)
}
v.SetConfigName(filename)
v.SetConfigType("json")
v.AddConfigPath(".")
v.AutomaticEnv()
// Optional read from config if it exists
err := v.ReadInConfig()
if err != nil {
//lgr.Printf("error when reading config: %v\n", err)
//lgr.Println("continuing...")
err = nil
}
return v, err
}
func stub(cfg *viper.Viper, pool *redis.Pool) ([]string, map[string][]int, map[int][]int, map[int]bool, error) {
//Init Logger
lgr := log.New(os.Stdout, "MMFEvalStub: ", log.LstdFlags)
lgr.Println("Initializing example MMF proposal evaluator")
// Get redis connection
redisConn := pool.Get()
defer redisConn.Close()
// Put some config vars into other vars for readability
proposalq := cfg.GetString("queues.proposals.name")
lgr.Println("SCARD", proposalq)
numProposals, err := redis.Int(redisConn.Do("SCARD", proposalq))
if err != nil {
lgr.Println(err)
}
lgr.Println("SPOP", proposalq, numProposals)
proposals, err := redis.Strings(redisConn.Do("SPOP", proposalq, numProposals))
if err != nil {
lgr.Println(err)
}
fmt.Printf("proposals = %+v\n", proposals)
// This is far from efficient, but we expect a fairly small set of players
// under consideration at any given time.
// Map that implements a set: https://golang.org/doc/effective_go.html#maps
overloadedPlayers := make(map[string][]int)
overloadedMatches := make(map[int][]int)
approvedMatches := make(map[int]bool)
allPlayers := make(map[string]int)
// Loop through each proposal, and look for 'overloaded' players (players in multiple proposals)
for index, propKey := range proposals {
approvedMatches[index] = true // This proposal is approved until proven otherwise
playerList, err := getProposedPlayers(pool, propKey)
if err != nil {
lgr.Println(err)
}
for _, pID := range playerList {
if allPlayers[pID] != 0 {
// Seen this player at least once before; gather the indices of all the match
// proposals with this player
overloadedPlayers[pID] = append(overloadedPlayers[pID], index)
overloadedMatches[index] = []int{}
delete(approvedMatches, index)
if len(overloadedPlayers[pID]) == 1 {
adjustedIndex := allPlayers[pID] - 1
overloadedPlayers[pID] = append(overloadedPlayers[pID], adjustedIndex)
overloadedMatches[adjustedIndex] = []int{}
delete(approvedMatches, adjustedIndex)
}
} else {
// First time seeing this player. Track which match proposal had them in case we see
// them again
// Terrible indexing hack: the default int value is 0, but so is the
// lowest proposal index. Since we need to interpret 0 as
// 'unset' in this context, add one to the index, and remove it if/when we put this
// player in the overloadedPlayers map.
adjustedIndex := index + 1
allPlayers[pID] = adjustedIndex
}
}
}
return proposals, overloadedPlayers, overloadedMatches, approvedMatches, err
}
// getProposedPlayers is a function that may be moved to an API call in the future.
func getProposedPlayers(pool *redis.Pool, propKey string) ([]string, error) {
// Get the proposal match object from redis
mo := &om_messages.MatchObject{Id: propKey}
err := redispb.UnmarshalFromRedis(context.Background(), pool, mo)
if err != nil {
return nil, err
}
// Loop through all rosters, appending player IDs to a list.
playerList := make([]string, 0)
for _, r := range mo.Rosters {
for _, p := range r.Players {
playerList = append(playerList, p.Id)
}
}
return playerList, err
}
func propToRoster(in []string) []interface{} {
// Convert []string to []interface{} so it can be passed as variadic input
// https://golang.org/doc/faq#convert_slice_of_interface
out := make([]interface{}, len(in))
for i, v := range in {
values := strings.Split(v, ".")
timestamp, moID, proID := values[0], values[1], values[2]
out[i] = "roster." + timestamp + "." + moID + "." + proID
}
return out
}
func generateLists(overloadedPlayers map[string][]int, overloadedMatches map[int][]int, approvedMatches map[int]bool) ([]string, []int, []int) {
// Make a slice of overloaded players from the map.
overloadedPlayerList := make([]string, 0, len(overloadedPlayers))
for k := range overloadedPlayers {
overloadedPlayerList = append(overloadedPlayerList, k)
}
// Make a slice of overloaded matches from the set.
overloadedMatchList := make([]int, 0, len(overloadedMatches))
for k := range overloadedMatches {
overloadedMatchList = append(overloadedMatchList, k)
}
// Make a slice of approved matches from the set.
approvedMatchList := make([]int, 0, len(approvedMatches))
for k := range approvedMatches {
approvedMatchList = append(approvedMatchList, k)
}
return overloadedPlayerList, overloadedMatchList, approvedMatchList
}

View File

@ -1,105 +0,0 @@
{
"logging":{
"level": "debug",
"format": "text",
"source": true
},
"api": {
"backend": {
"hostname": "om-backendapi",
"port": 50505,
"timeout": 30
},
"frontend": {
"hostname": "om-frontendapi",
"port": 50504,
"timeout": 300
},
"mmlogic": {
"hostname": "om-mmlogicapi",
"port": 50503
}
},
"evalutor": {
"interval": 10
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
"reportingPeriod": 5
},
"queues": {
"profiles": {
"name": "profileq",
"pullCount": 100
},
"proposals": {
"name": "proposalq"
}
},
"ignoreLists": {
"proposed": {
"name": "proposed",
"offset": 0,
"duration": 800
},
"deindexed": {
"name": "deindexed",
"offset": 0,
"duration": 800
},
"expired": {
"name": "OM_METADATA.accessed",
"offset": 800,
"duration": 0
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/open-match-public-images/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple",
"tag": "py3"
}
},
"redis": {
"user": "",
"password": "",
"pool" : {
"maxIdle" : 3,
"maxActive" : 0,
"idleTimeout" : 60
},
"queryArgs":{
"count": 10000
},
"results": {
"pageSize": 10000
}
},
"jsonkeys": {
"mmfImage": "imagename",
"rosters": "properties.rosters",
"pools": "properties.pools"
},
"playerIndices": [
"char.cleric",
"char.knight",
"char.paladin",
"map.aleroth",
"map.oasis",
"mmr.rating",
"mode.battleroyale",
"mode.ctf",
"region.europe-east1",
"region.europe-west1",
"region.europe-west2",
"region.europe-west3",
"region.europe-west4",
"role.dps",
"role.support",
"role.tank"
]
}

View File

@ -1,29 +0,0 @@
using System.Collections.Generic;
namespace mmfdotnet
{
/// <summary>
/// A deserialization target for the simple match function profile example
/// </summary>
public class Profile
{
public Properties Properties { get; set; }
}
public class Properties
{
public Dictionary<string, string> PlayerPool { get; set; }
public Dictionary<string, int> Roster { get; set; }
}
/// <summary>
/// The output of the match function is a collection of team names and contained players
/// </summary>
public class Result
{
public Dictionary<string, List<string>> Teams { get; set; }
}
}

View File

@ -1,16 +0,0 @@
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "mmfdotnet.dll"]

View File

@ -1,150 +0,0 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using Newtonsoft.Json;
using StackExchange.Redis;
namespace mmfdotnet
{
/// <summary>
/// An example of a simple match function
/// </summary>
/// <remarks>
/// Compatible with example profiles found here: https://github.com/GoogleCloudPlatform/open-match/tree/master/examples/backendclient/profiles
/// </remarks>
class Program
{
static void Main(string[] args)
{
string host = Environment.GetEnvironmentVariable("REDIS_SERVICE_HOST");
string port = Environment.GetEnvironmentVariable("REDIS_SERVICE_PORT");
// Single connection to the open match redis cluster
Console.WriteLine($"Connecting to redis...{host}:{port}");
StringBuilder builder = new StringBuilder();
StringWriter writer = new StringWriter(builder);
ConnectionMultiplexer redis;
try
{
redis = ConnectionMultiplexer.Connect($"{host}:{port}", writer);
}
catch (Exception e)
{
writer.WriteLine(e);
throw;
}
finally
{
writer.Flush();
Console.WriteLine(writer.GetStringBuilder());
}
IDatabase db = redis.GetDatabase();
try
{
FindMatch(db);
}
finally
{
// Decrement the number of running MMFs since this one is finished
Console.WriteLine("DECR concurrentMMFs");
db.StringDecrement("concurrentMMFs");
}
}
private static void FindMatch(IDatabase db)
{
// PROFILE is passed via the k8s downward API through an env set to jobName.
string jobName = Environment.GetEnvironmentVariable("PROFILE");
Console.WriteLine("PROFILE from job name " + jobName);
string[] tokens = jobName.Split('.');
string timestamp = tokens[0];
string moId = tokens[1];
string profileKey = tokens[2];
string resultsKey = $"proposal.{jobName}";
string rosterKey = $"roster.{jobName}";
string errorKey = $"{moId}.{profileKey}";
Console.WriteLine($"Looking for a profile in key " + profileKey);
string profileJson = db.StringGet(profileKey);
Profile profile = JsonConvert.DeserializeObject<Profile>(profileJson);
if (profile.Properties.PlayerPool.Count < 1)
{
Console.WriteLine("Insufficient filters");
db.StringSet(errorKey, "{ \"error\": \"insufficient_filters\"}");
return;
}
// Filter the player pool into sets matching the given filters
List<List<string>> filteredIds = new List<List<string>>();
foreach (KeyValuePair<string, string> filter in profile.Properties.PlayerPool)
{
string[] range = filter.Value.Split('-');
int min = int.Parse(range[0]);
int max = int.Parse(range[1]);
Console.WriteLine($"Filtering {filter.Key} for {min} to {max}");
List<string> idsFound = new List<string>();
// TODO: Only poll a reasonable number (not the whole table)
RedisValue[] set = db.SortedSetRangeByRank(filter.Key, min, max);
Console.WriteLine($"Found {set.Count()} matching");
filteredIds.Add(Array.ConvertAll(set, item => item.ToString()).ToList());
}
// Find the union of the player sets (TODO: optimize)
List<string> overlap = new List<string>();
foreach (List<string> set in filteredIds)
{
overlap = overlap.Union(set).ToList();
}
Console.WriteLine($"Overlapping players in pool: {overlap.Count}");
int rosterSize = profile.Properties.Roster.Values.Sum();
if (overlap.Count < rosterSize)
{
Console.WriteLine("Insufficient players");
db.StringSet(errorKey, "{ \"error\": \"insufficient_players\"}");
return;
}
// Split the players into teams based on the profile roster information
Result result = new Result()
{
Teams = new Dictionary<string, List<string>>()
};
List<string> roster = new List<string>();
foreach (KeyValuePair<string, int> team in profile.Properties.Roster)
{
Console.WriteLine($"Attempting to fill team {team.Key} with {team.Value} players");
// Take up to team.Value players, or as many as remain in the pool
List<string> group = overlap.Take(team.Value).ToList();
result.Teams.Add(team.Key, group);
Console.WriteLine($"Team {team.Key} roster: " + string.Join(" ", group));
roster.AddRange(group);
}
// Write the match object that will be sent back to the DGS
// In this example, the output is not a modified profile, but rather, just the team rosters
db.StringSet(resultsKey, JsonConvert.SerializeObject(result));
// Write the flattened roster that will be sent to the evaluator
db.StringSet(rosterKey, string.Join(" ", roster));
// Finally, write the results key to the proposal queue to trigger the evaluation of these results
string proposalQueueKey = "proposalq";
db.SetAdd(proposalQueueKey, jobName);
}
}
}
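The C# `FindMatch` above unions the per-filter player sets, then fills each team in order with `Take()`. The same two steps can be sketched in Go (function names here are hypothetical, for illustration only):

```go
package main

import "fmt"

// union merges player-ID slices, preserving first-seen order and
// dropping duplicates, like the repeated overlap.Union(set) calls.
func union(sets ...[]string) []string {
	seen := make(map[string]bool)
	out := []string{}
	for _, s := range sets {
		for _, id := range s {
			if !seen[id] {
				seen[id] = true
				out = append(out, id)
			}
		}
	}
	return out
}

// fillTeams assigns players to teams in order, mirroring the C#
// example's overlap.Take(team.Value) per roster entry.
func fillTeams(pool []string, rosterSizes map[string]int) map[string][]string {
	teams := make(map[string][]string)
	i := 0
	for name, size := range rosterSizes {
		end := i + size
		if end > len(pool) {
			end = len(pool)
		}
		teams[name] = pool[i:end]
		i = end
	}
	return teams
}

func main() {
	pool := union([]string{"a", "b"}, []string{"b", "c", "d"})
	fmt.Println(pool) // [a b c d]
	fmt.Println(fillTeams(pool, map[string]int{"red": 2}))
}
```

Note the C# original also checks `overlap.Count < rosterSize` first and writes an `insufficient_players` error instead of filling partial teams; a real port would keep that guard.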

View File

@ -1,10 +0,0 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp2.0</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="StackExchange.Redis" Version="1.2.6"/>
<PackageReference Include="Newtonsoft.Json" Version="11.0.2"/>
</ItemGroup>
</Project>

View File

@ -0,0 +1,23 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM open-match-base-build as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/functions/golang/grpc-serving
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o matchfunction .
FROM gcr.io/distroless/static
COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/examples/functions/golang/grpc-serving/matchfunction .
ENTRYPOINT ["/matchfunction"]

View File

@ -0,0 +1,136 @@
/*
This is a sample match function that uses the GRPC harness to set up
the match making function as a service. This sample is a reference
to demonstrate the usage of the GRPC harness and should only be used as
a starting point for your match function. You will need to modify the
matchmaking logic in this function based on your game's requirements.
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"context"
"errors"
"math/rand"
"time"
harness "github.com/GoogleCloudPlatform/open-match/internal/harness/matchfunction/golang"
"github.com/GoogleCloudPlatform/open-match/internal/pb"
log "github.com/sirupsen/logrus"
)
func main() {
// Invoke the harness to setup a GRPC service that handles requests to run the
// match function. The harness itself queries open match for player pools for
// the specified request and passes the pools to the match function to generate
// proposals.
harness.ServeMatchFunction(&harness.HarnessParams{
FunctionName: "simple-matchfunction",
ServicePortConfigName: "api.functions.port",
ProxyPortConfigName: "api.functions.proxyport",
Func: makeMatches,
})
}
// makeMatches is where your custom matchmaking logic lives.
// Input:
// - profile : 'Properties' of a MatchObject specified in the ListMatches or CreateMatch call.
// - rosters : An array of Rosters. By convention, your input Roster contains players already in
// the match, and the names of pools to search when trying to fill an empty slot.
// - pools : An array of PlayerPool messages. Contains all the players returned by the MMLogic API
// upon querying for the player pools. Ignorelists were already applied as of the
// time the player pool was queried.
// Output:
// - (results) JSON blob to populate in the MatchObject 'Properties' field sent to the ListMatches
// or CreateMatch call.
// - (rosters) Populated team rosters. Use is optional but recommended;
// you'll need to construct at least one Roster with all the players you're selecting
// as it is used to add those players to the ignorelist. You'll also need all
// the players you want to assign to your DGS in Roster(s) when you call the
// BackendAPI CreateAssignments() endpoint. Might as well put them in rosters now.
// - error : Use if you need to return an unrecoverable error.
func makeMatches(ctx context.Context, logger *log.Entry, profile string, rosters []*pb.Roster, pools []*pb.PlayerPool) (string, []*pb.Roster, error) {
// Open Match will try to marshal your JSON roster to an array of protobuf Roster objects. It's
// up to you if you want to fill these protobuf Roster objects or just write your Rosters in your
// custom JSON blob. This example uses the protobuf Rosters.
// Used for tracking metrics.
var selectedPlayerCount int64
// Loop through all the team rosters sent in the call to create a match.
for ti, team := range rosters {
logger.Infof(" Attempting to fill team: %v", team.Name)
// Loop through all the players slots on this team roster.
for si, slot := range team.Players {
// Loop through all the pools and check if there is a pool with a matching name to the
// poolName for this player slot. Just one example of a way for your matchmaker to
// specify which pools your MMF should search through to fill a given player slot.
// Optional, feel free to change as you see fit.
for _, pool := range pools {
if slot.Pool == pool.Name && len(pool.Roster.Players) > 0 {
/////////////////////////////////////////////////////////
// These next few lines are where you would put your custom logic, such as
// searching the pool for players with similar latencies or skill ratings
// to the players you have already selected. This example doesn't do anything
// but choose at random!
logger.Infof("Looking for player in pool: %v, size: %v", pool.Name, len(pool.Roster.Players))
randPlayerIndex := rand.New(rand.NewSource(time.Now().UnixNano())).Intn(len(pool.Roster.Players))
// Get random player with this index
selectedPlayer := pool.Roster.Players[randPlayerIndex]
logger.Infof("Selected player index %v: %v", randPlayerIndex, selectedPlayer.Id)
// Remove this player from the array as they are now used up.
pool.Roster.Players[randPlayerIndex] = pool.Roster.Players[0]
// This is a functional pop from a set.
_, pool.Roster.Players = pool.Roster.Players[0], pool.Roster.Players[1:]
// Write the player to the slot and loop.
rosters[ti].Players[si] = selectedPlayer
selectedPlayerCount++
break
/////////////////////////////////////////////////////////
} else {
// There weren't enough players left in the pool to fill all the slots, so this example errors out.
// For this example, this is an error condition and so the match result will have the error
// populated. If your game can handle partial results, customize this to NOT return an error
// and instead populate the result with any properties that may be needed to evaluate the proposal.
return "", rosters, errors.New("insufficient players")
}
}
}
}
logger.Info(" Rosters complete.")
// You can send back any arbitrary JSON in the first return value (the 'results' string). It
// will get sent back out the backend API in the Properties field of the MatchObject message.
// In this example, all the selected players are populated to the Rosters array, so we'll just
// pass back the input profile back as the results. If there was anything else arbitrary as
// output from the MMF, it could easily be included here.
results := profile
logger.Infof("Selected %v players", selectedPlayerCount)
return results, rosters, nil
}
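`makeMatches` removes the chosen player from the pool with a swap-to-front followed by a re-slice (the "functional pop" comment). That trick, isolated into a sketch (the `removeAt` helper is hypothetical, not in the source):

```go
package main

import "fmt"

// removeAt deletes the element at index i from players in O(1) without
// preserving order: copy players[0] into the hole at i, then drop the
// head. This mirrors the swap-and-pop used in makeMatches above.
func removeAt(players []string, i int) []string {
	players[i] = players[0]
	return players[1:]
}

func main() {
	pool := []string{"p0", "p1", "p2", "p3"}
	pool = removeAt(pool, 2) // removes "p2"
	fmt.Println(pool)        // [p1 p0 p3]
}
```

The order of the remaining players is scrambled, which is fine here because the next selection is random anyway; if order mattered, a `copy(players[i:], players[i+1:])` shift would be the usual alternative.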

View File

@ -1,10 +0,0 @@
# Golang application builder steps
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/functions/golang/manual-simple
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o mmf .
#FROM scratch
#COPY --from=builder /go/src/github.com/GoogleCloudPlatform/mmfstub/mmfstub mmfstub
CMD ["./mmf"]

View File

@ -1,10 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-golang-manual-simple',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-golang-manual-simple']

View File

@ -1,355 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"encoding/json"
"fmt"
"os"
"strings"
"time"
"github.com/GoogleCloudPlatform/open-match/config"
messages "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/GoogleCloudPlatform/open-match/internal/set"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/ignorelist"
"github.com/gogo/protobuf/jsonpb"
"github.com/gomodule/redigo/redis"
"github.com/spf13/viper"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
/*
Here are the things a MMF needs to do:
*Read/write from the Open Match state storage — Open Match ships with Redis as
the default state storage.
*Be packaged in a (Linux) Docker container.
*Read a profile you wrote to state storage using the Backend API.
*Select from the player data you wrote to state storage using the Frontend API.
*Run your custom logic to try to find a match.
*Write the match object it creates to state storage at a specified key.
*Remove the players it selected from consideration by other MMFs.
*Notify the MMForc of completion.
*(Optional & NYI, but recommended) Export stats for metrics collection.
*/
func main() {
// Read config file.
cfg, err := config.Read()
if err != nil {
panic(err)
}
// As per https://www.iana.org/assignments/uri-schemes/prov/redis
// redis://user:secret@localhost:6379/0?foo=bar&qux=baz
redisURL := "redis://" + os.Getenv("REDIS_SERVICE_HOST") + ":" + os.Getenv("REDIS_SERVICE_PORT")
fmt.Println("Connecting to Redis at", redisURL)
redisConn, err := redis.DialURL(redisURL)
if err != nil {
panic(err)
}
defer redisConn.Close()
// decrement the number of running MMFs once finished
defer func() {
fmt.Println("DECR concurrentMMFs")
_, err = redisConn.Do("DECR", "concurrentMMFs")
if err != nil {
fmt.Println(err)
}
}()
// Environment vars set by the MMForc
jobName := os.Getenv("PROFILE")
timestamp := os.Getenv("MMF_TIMESTAMP")
proposalKey := os.Getenv("MMF_PROPOSAL_ID")
profileKey := os.Getenv("MMF_PROFILE_ID")
errorKey := os.Getenv("MMF_ERROR_ID")
rosterKey := os.Getenv("MMF_ROSTER_ID")
_ = jobName
_ = timestamp
_ = proposalKey
_ = profileKey
_ = errorKey
_ = rosterKey
fmt.Println("MMF request inserted at ", timestamp)
fmt.Println("Looking for profile in key", profileKey)
fmt.Println("Placing results in MatchObjectID", proposalKey)
// Retrieve profile from Redis.
// NOTE: This can also be done with a call to the MMLogic API.
profile, err := redis.StringMap(redisConn.Do("HGETALL", profileKey))
if err != nil {
panic(err)
}
fmt.Println("=========Profile")
p, err := json.MarshalIndent(profile, "", " ")
fmt.Println(string(p))
// select players
const numPlayers = 8
// ZRANGE is 0-indexed
pools := gjson.Get(profile["properties"], cfg.GetString("jsonkeys.pools"))
fmt.Println("=========Pools")
fmt.Printf("pool.String() = %+v\n", pools.String())
// Parse all the pools.
// NOTE: When using pool definitions like these that are using the
// PlayerPool protobuf message data schema, you can avoid all of this by
// using the MMLogic API call to automatically parse the pools, run the
// filters, and return the results in one gRPC call per pool.
//
// ex: poolRosters["defaultPool"]["mmr.rating"]=[]string{"abc", "def", "ghi"}
poolRosters := make(map[string]map[string][]string)
// Loop through each pool.
pools.ForEach(func(_, pool gjson.Result) bool {
pName := gjson.Get(pool.String(), "name").String()
pFilters := gjson.Get(pool.String(), "filters")
poolRosters[pName] = make(map[string][]string)
// Loop through each filter for this pool
pFilters.ForEach(func(_, filter gjson.Result) bool {
// Note: This only works when running only one filter on each attribute!
searchKey := gjson.Get(filter.String(), "attribute").String()
min := int64(0)
max := int64(time.Now().Unix())
poolRosters[pName][searchKey] = make([]string, 0)
// Parse the min and max values.
if minv := gjson.Get(filter.String(), "minv"); minv.Exists() {
min = minv.Int()
}
if maxv := gjson.Get(filter.String(), "maxv"); maxv.Exists() {
max = maxv.Int()
}
fmt.Printf("%v: %v: [%v-%v]\n", pName, searchKey, min, max)
// NOTE: This only pulls the first 50000 matches for a given index!
// This is an example, and probably shouldn't be used outside of
// testing without some performance tuning based on the size of
// your indexes. In production, this could be run concurrently on
// multiple parts of the index, and combined.
// NOTE: It is recommended you also send back some stats about this
// query along with your MMF, which can be useful when your backend
// API client is deciding which profiles to send. This example does
// not return stats, but when using the MMLogic API, this is done
// for you.
poolRosters[pName][searchKey], err = redis.Strings(
redisConn.Do("ZRANGEBYSCORE", searchKey, min, max, "LIMIT", "0", "50000"))
if err != nil {
panic(err)
}
return true // keep iterating
})
return true // keep iterating
})
// Get ignored players.
combinedIgnoreList := make([]string, 0)
// Loop through all ignorelists configured in the config file.
for il := range cfg.GetStringMap("ignoreLists") {
ilCfg := cfg.Sub(fmt.Sprintf("ignoreLists.%v", il))
thisIl, err := ignorelist.Retrieve(redisConn, ilCfg, il)
if err != nil {
panic(err)
}
// Join this ignorelist to the others we've retrieved
combinedIgnoreList = set.Union(combinedIgnoreList, thisIl)
}
// Cycle through all filters for each pool, and calculate the overlap
// (players that match all filters)
overlaps := make(map[string][]string)
// Loop through pools
for pName, p := range poolRosters {
fmt.Println(pName)
// Var init
overlaps[pName] = make([]string, 0)
first := true // Flag used to initialize the overlap on the first iteration.
// Loop through rosters that matched each filter
for fName, roster := range p {
if first {
first = false
overlaps[pName] = roster
}
// Calculate overlap
overlaps[pName] = set.Intersection(overlaps[pName], roster)
// Print out for visibility/debugging
fmt.Printf(" filtering: %-20v | participants remaining: %-5v\n", fName, len(overlaps[pName]))
}
// Remove players on ignorelists
overlaps[pName] = set.Difference(overlaps[pName], combinedIgnoreList)
fmt.Printf(" removing: %-21v | participants remaining: %-5v\n", "(ignorelists)", len(overlaps[pName]))
}
// Loop through each roster in the profile and fill in players.
rosters := gjson.Get(profile["properties"], cfg.GetString("jsonkeys.rosters"))
fmt.Println("=========Rosters")
fmt.Printf("rosters.String() = %+v\n", rosters.String())
// Parse all the rosters in the profile, adding players if we can.
// NOTE: This is using roster definitions that follow the Roster protobuf
// message data schema.
profileRosters := make(map[string][]string)
//proposedRosters := make([]string, 0)
mo := &messages.MatchObject{}
mo.Rosters = make([]*messages.Roster, 0)
// List of all player IDs on all proposed rosters, used to add players to
// the ignore list.
// NOTE: when using the MMLogic API, writing your final proposal to state
// storage will automatically add players to the ignorelist, so you don't
// need to track them separately and add them to the ignore list yourself.
playerList := make([]string, 0)
rosters.ForEach(func(_, roster gjson.Result) bool {
rName := gjson.Get(roster.String(), "name").String()
fmt.Println(rName)
rPlayers := gjson.Get(roster.String(), "players")
profileRosters[rName] = make([]string, 0)
pbRoster := messages.Roster{Name: rName, Players: []*messages.Player{}}
rPlayers.ForEach(func(_, player gjson.Result) bool {
// TODO: This is where you would put your own custom matchmaking
// logic. MMFs have full access to the state storage in Redis, so
// you can choose some participants from the pool according to your
// favored strategy. You have complete freedom to read the
// participant's records from Redis and make decisions accordingly.
//
// This example just chooses the players in the order they were
// returned from state storage.
//fmt.Printf(" %v\n", player.String()) //DEBUG
proposedPlayer := player.String()
// Get the name of the pool that the profile wanted this player pulled from.
desiredPool := gjson.Get(player.String(), "pool").String()
if _, ok := overlaps[desiredPool]; ok {
// There are players that match all the desired filters.
if len(overlaps[desiredPool]) > 0 {
// Propose the next player returned from state storage for this
// slot in the match rosters.
// Functionally, a pop from the overlap array into the proposed slot.
playerID := ""
playerID, overlaps[desiredPool] = overlaps[desiredPool][0], overlaps[desiredPool][1:]
proposedPlayer, err = sjson.Set(proposedPlayer, "id", playerID)
if err != nil {
panic(err)
}
profileRosters[rName] = append(profileRosters[rName], proposedPlayer)
fmt.Printf(" proposing: %v\n", proposedPlayer)
pbRoster.Players = append(pbRoster.Players, &messages.Player{Id: playerID, Pool: desiredPool})
playerList = append(playerList, playerID)
} else {
// Not enough players, exit.
fmt.Println("Not enough players in the pool to fill all player slots in requested roster", rName)
fmt.Printf("%+v\n", roster.String())
fmt.Println("SET", errorKey, `{"error": "insufficient_players"}`)
redisConn.Do("SET", errorKey, `{"error": "insufficient_players"}`)
os.Exit(1)
}
}
return true
})
//proposedRoster, err := sjson.Set(roster.String(), "players", profileRosters[rName])
mo.Rosters = append(mo.Rosters, &pbRoster)
//fmt.Sprintf("[%v]", strings.Join(profileRosters[rName], ",")))
//if err != nil {
// panic(err)
//}
//proposedRosters = append(proposedRosters, proposedRoster)
return true
})
// Write back the match object to state storage so the evaluator can look at it, and update the ignorelist.
// NOTE: the MMLogic API CreateProposal automates most of this for you, as
// long as you send it properly formatted data (i.e. data that fits the schema of
// the protobuf messages)
// Add proposed players to the ignorelist so other MMFs won't consider them.
fmt.Printf("Adding %v players to ignorelist\n", len(playerList))
err = ignorelist.Add(redisConn, "proposed", playerList)
if err != nil {
fmt.Println("Unable to add proposed players to the ignorelist")
panic(err)
}
// Write the match object that will be sent back to the DGS
jmarshaler := jsonpb.Marshaler{}
moJSON, err := jmarshaler.MarshalToString(mo)
proposedRosters := gjson.Get(moJSON, "rosters")
fmt.Println("===========Proposal")
// Set the properties field.
// This is a filthy hack due to the way sjson escapes & quotes values it inserts.
// Better in most cases than trying to marshal the JSON into giant multi-dimensional
// interface maps only to dump it back out to a string after.
// Note: this hack isn't necessary for most users, who just use this same
// data directly from the protobuf message 'rosters' field, or write custom
// rosters directly to the JSON properties when choosing players. This is here
// for backwards compatibility with backends that haven't been updated to take
// advantage of the new rosters field in the MatchObject protobuf message introduced
// in 0.2.0.
profile["properties"], err = sjson.Set(profile["properties"], cfg.GetString("jsonkeys.rosters"), proposedRosters.String())
if err != nil {
fmt.Println("problem setting rosters with sjson")
fmt.Println(err)
}
profile["properties"] = strings.Replace(profile["properties"], "\\", "", -1)
profile["properties"] = strings.Replace(profile["properties"], "]\"", "]", -1)
profile["properties"] = strings.Replace(profile["properties"], "\"[", "[", -1)
fmt.Printf("Proposed ID: %v | Properties: %v\n", proposalKey, profile["properties"])
// Write the roster that will be sent to the evaluator. This needs to be written to the
// "rosters" key of the match object, in the protobuf format for an array of
// rosters protobuf messages. You can write this output by hand (not recommended)
// or use the MMLogic API call CreateProposal with a filled-out MatchObject
// protobuf message and let it do the work for you.
profile["rosters"] = proposedRosters.String()
fmt.Println("===========Redis")
// Start writing proposed results to Redis.
redisConn.Send("MULTI")
for key, value := range profile {
if key != "id" {
fmt.Println("HSET", proposalKey, key, value)
redisConn.Send("HSET", proposalKey, key, value)
}
}
// Finally, write the proposal key to trigger the evaluation of these results
fmt.Println("SADD", cfg.GetString("queues.proposals.name"), proposalKey)
redisConn.Send("SADD", cfg.GetString("queues.proposals.name"), proposalKey)
_, err = redisConn.Do("EXEC")
if err != nil {
panic(err)
}
}
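The backslash-stripping workaround above exists because sjson.Set always quotes and escapes a string value it inserts. With only the standard library, an already-serialized JSON fragment can be embedded without re-escaping by using json.RawMessage. This is a minimal sketch of that idea, not part of the MMF; the embedRaw helper and the sample roster value are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// embedRaw places an already-serialized JSON fragment under key without
// re-escaping it into a quoted string (the problem the string-replace
// workaround above exists to undo).
func embedRaw(key, fragment string) (string, error) {
	doc := map[string]json.RawMessage{
		// RawMessage is marshaled verbatim instead of being quoted.
		key: json.RawMessage(fragment),
	}
	out, err := json.Marshal(doc)
	return string(out), err
}

func main() {
	rosters := `[{"players":[{"id":"player1","pool":"defaultPool"}]}]`
	out, err := embedRaw("rosters", rosters)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The same effect is available in the tidwall libraries the MMF already uses via sjson.SetRaw, which inserts a raw JSON fragment rather than a quoted string.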


@@ -1,105 +0,0 @@
{
"logging":{
"level": "debug",
"format": "text",
"source": true
},
"api": {
"backend": {
"hostname": "om-backendapi",
"port": 50505,
"timeout": 30
},
"frontend": {
"hostname": "om-frontendapi",
"port": 50504,
"timeout": 300
},
"mmlogic": {
"hostname": "om-mmlogicapi",
"port": 50503
}
},
"evalutor": {
"interval": 10
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
"reportingPeriod": 5
},
"queues": {
"profiles": {
"name": "profileq",
"pullCount": 100
},
"proposals": {
"name": "proposalq"
}
},
"ignoreLists": {
"proposed": {
"name": "proposed",
"offset": 0,
"duration": 800
},
"deindexed": {
"name": "deindexed",
"offset": 0,
"duration": 800
},
"expired": {
"name": "OM_METADATA.accessed",
"offset": 800,
"duration": 0
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/open-match-public-images/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple",
"tag": "py3"
}
},
"redis": {
"user": "",
"password": "",
"pool" : {
"maxIdle" : 3,
"maxActive" : 0,
"idleTimeout" : 60
},
"queryArgs":{
"count": 10000
},
"results": {
"pageSize": 10000
}
},
"jsonkeys": {
"mmfImage": "imagename",
"rosters": "properties.rosters",
"pools": "properties.pools"
},
"playerIndices": [
"char.cleric",
"char.knight",
"char.paladin",
"map.aleroth",
"map.oasis",
"mmr.rating",
"mode.battleroyale",
"mode.ctf",
"region.europe-east1",
"region.europe-west1",
"region.europe-west2",
"region.europe-west3",
"region.europe-west4",
"role.dps",
"role.support",
"role.tank"
]
}


@@ -1,12 +0,0 @@
{
"require": {
"grpc/grpc": "v1.9.0"
},
"autoload": {
"psr-4": {
"Api\\": "proto/Api",
"Messages\\": "proto/Messages",
"GPBMetadata\\": "proto/GPBMetadata"
}
}
}
