Compare commits


89 Commits

Author SHA1 Message Date
56e08e82d4 Revert accidental file type change 2019-01-14 09:32:13 -05:00
2df027c9f6 Bold release numbers 2019-01-10 00:28:31 -05:00
913af84931 Use public repo URL 2019-01-09 02:18:53 -05:00
de6064f9fd Use public repo URL 2019-01-09 02:18:22 -05:00
867c55a409 Fix registry URL and add symlink issue 2019-01-09 02:15:11 -05:00
36420be2ce Revert accidental removal of symlink 2019-01-09 02:14:32 -05:00
16e9dda64a Bugfix for no commandline args 2019-01-09 02:14:07 -05:00
1ef9a896bf Revert accidental commit of empty file 2019-01-09 02:13:30 -05:00
75f2b84ded Up default timeout 2019-01-09 02:03:47 -05:00
2268baf1ba revert accidential commit of local change 2019-01-09 02:00:36 -05:00
9e43d989ea Remove debug sleep command 2019-01-09 00:10:47 -05:00
869725baee Bump k8s version 2019-01-08 23:56:07 -05:00
ae26ac3cd3 Merge remote-tracking branch 'origin/master' into 030wip 2019-01-08 23:41:55 -05:00
826af77396 Point to public registry and update tag 2019-01-08 23:37:38 -05:00
294d03e18b Roadmap 2019-01-08 22:39:08 -05:00
b27116aedd 030 RC2 2019-01-08 02:19:53 -05:00
074c0584f5 030 RC1 issue thread updates https://github.com/GoogleCloudPlatform/open-match/pull/55 2019-01-07 23:35:42 -05:00
210e00703a production guide now has placeholder notes, low hanging fruit 2019-01-07 23:35:14 -05:00
3ffbddbdd8 Updates to add optional TTL to redis objects 2019-01-05 23:37:38 -05:00
5f827b5c7c doesn't work 2019-01-05 23:01:33 -05:00
a161e6dba9 030 WIP first pass 2018-12-30 05:31:49 -05:00
7e70683d9b fix broken sed command 2018-12-30 04:34:27 -05:00
38bd94c078 Merge NoFr1ends commit 6a5dc1c 2018-12-30 04:16:48 -05:00
83366498d3 Update Docs 2018-12-30 03:45:39 -05:00
929e089e4d rename api call 2018-12-30 03:35:25 -05:00
a6b56b19d2 Merge branch to address issue #42 2018-12-28 04:01:59 -05:00
c2b6fdc198 Updates to FEClient and protos 2018-12-28 02:48:03 -05:00
43a4f046f0 Update config 2018-12-27 03:14:40 -05:00
b79bc2591c Remove references to connstring 2018-12-27 03:07:26 -05:00
61198fd168 No unused code 2018-12-27 03:04:18 -05:00
c1dd3835fe Updated logging 2018-12-27 02:55:16 -05:00
f3c9e87653 updates to documentation and builds 2018-12-27 02:28:43 -05:00
0064116c34 Further deletion and fix indexing for empty fields 2018-12-27 02:09:20 -05:00
298fe18f29 Updates to player deletion logic, metadata indices 2018-12-27 01:27:39 -05:00
6c539ab2a4 Remove manual filenames in logs 2018-12-26 07:43:54 -05:00
b6c59a7a0a Player watcher for FEAPI brought over from Doodle 2018-12-26 07:29:28 -05:00
f0536cedde Merge Ilya's updates 2018-12-26 00:18:00 -05:00
48fa4ba962 Update Redis HA details 2018-12-25 23:58:54 -05:00
39ff99b65e rename 'redis-sentinel' to just 'redis' 2018-12-26 13:51:24 +09:00
78c7b3b949 redis failover deployment 2018-12-26 13:51:24 +09:00
6a5dc1c508 Fix typo in development guide 2018-12-26 13:49:54 +09:00
9f84ec9bc9 First pass. Works but hacky. 2018-12-25 23:47:30 -05:00
e48b7db56f #51 Fix parsing of empty matchobject fields 2018-12-26 13:45:40 +09:00
bffd54727c Merge branch 'udptest' into test_agones 2018-12-19 02:59:04 -05:00
ab90f5f6e0 got udp test workign 2018-12-19 02:56:20 -05:00
632415c746 simple udp client & server to integrate with agones 2018-12-18 23:58:02 +03:00
0882c63eb1 Update messages; more redis code sequestered to redis module 2018-12-16 08:12:42 -05:00
ee6716c60e Merge PL 47 2018-12-15 23:56:35 -05:00
bb5ad8a596 Merge 951bc8509d5eb8fceb138135c001c6a7b7f9bb25 into 275fa2d125e91fd25981124387f6388431f73874 2018-12-15 19:32:28 +00:00
951bc8509d Remove strings import as it's no longer used 2018-12-15 14:11:31 -05:00
ab8cd21633 Update to use Xid instead of UUID. 2018-12-15 14:11:05 -05:00
721cd2f7ae Still needs make file or the like and updated instructions 2018-12-10 14:05:00 +09:00
13cd1da631 Merge remote-tracking branch 'origin/json-logging' into feupdate 2018-12-06 23:28:35 -05:00
275fa2d125 Awkward wording 2018-12-07 13:17:39 +09:00
4a8e018599 Fix merge conflict 2018-12-06 22:04:52 -05:00
c1b5d44947 Update current version number 2018-12-06 22:01:14 -05:00
ae9db9fae8 Merge remote-tracking branch 'origin/master' 2018-12-06 21:56:43 -05:00
104fbd19cd Header level tweaks 2018-12-06 02:54:40 -05:00
3b2571fced Doc updates for 0.2.0 2018-12-06 02:53:16 -05:00
486c64798b Merge tag '020rc2' into feupdate 2018-12-06 02:14:58 -05:00
3fb17c5f22 Merge remote-tracking branch 'origin/master' into 020rc2 2018-12-06 02:12:55 -05:00
3f42e3d986 Finalizing 0.2.0 updates to dev doc 2018-12-06 01:16:26 -05:00
0c74debbb3 Updated docs for 0.2.0 2018-12-05 03:59:57 -05:00
1854ee0ba1 Fix formatting 2018-12-04 01:07:31 -05:00
99d9d7e2b5 Update for 0.2.0 Release 2018-12-02 21:48:48 -05:00
e286435e19 0.2.0 RC2 release notes 2018-11-28 22:40:07 -05:00
52f9e2810f WIP indexing 2018-11-28 04:10:08 -05:00
db60d7ac5f Merge from 0.2.0 2018-11-28 02:23:26 -05:00
b17dccac3b Merge manual golang MMF & README.md updates 2018-11-27 01:57:14 -05:00
b9bb0b1aeb Tested 2018-11-27 01:55:48 -05:00
a6f2edbbae Fully working 2018-11-27 00:12:36 -05:00
55db5c5ba3 Writing proposal 2018-11-25 22:51:08 -05:00
b4f696484f Move set operations to module for use in example MMF 2018-11-24 03:46:07 -05:00
12935d2cab Rename 2018-11-24 02:33:58 -05:00
a0cff79878 Parsing filters now 2018-11-23 10:29:17 -05:00
7a3c5937f2 Updates to simple mmf for 020 2018-11-23 10:03:48 -05:00
f430720d2f example of MMF done in PHP 2018-11-22 13:24:55 +09:00
34010986f7 Caleb fixes from https://github.com/GoogleCloudPlatform/open-match/pull/39 2018-11-21 07:46:08 -05:00
d8a8d16bfc Iterate over attributes, not properties. Thanks Ilya 2018-11-21 07:45:14 -05:00
243f53336c Update backendclient build directions, thanks Ilya 2018-11-20 08:29:24 -05:00
d188be60c8 Fix for new pb module, thanks Ilya 2018-11-20 08:21:09 -05:00
f1541a8cee Remove development config, thanks Ilya 2018-11-20 08:11:27 -05:00
cd1c4c768e ReDucTor caught a typo 2018-11-20 00:51:37 -05:00
967b6cc695 Update dev doc to include MMLogic API 2018-11-20 00:19:10 -05:00
906c0861c7 Make mmlogic deploy json match other APIs 2018-11-20 00:17:47 -05:00
4e0bb5c07d Add DGS (Dedicated Game Server) to glossary 2018-11-20 14:10:41 +09:00
b57dd3e668 https://github.com/GoogleCloudPlatform/open-match/pull/36 2018-11-20 00:09:59 -05:00
b2897ca159 Remove unused file 2018-11-19 06:44:18 -05:00
326dd6c6dd Add logging config to support json and level selection for logrus 2018-11-17 16:11:33 -08:00
141 changed files with 7057 additions and 3191 deletions

.gitignore (vendored)

@@ -26,6 +26,8 @@ populations
# Discarded code snippets
build.sh
*-fast.yaml
detritus/
# Dotnet Core ignores
*.swp

CHANGELOG.md (new file)

@@ -0,0 +1,53 @@
# Release history
## v0.3.0 (alpha)
This update is focused on the Frontend API and Player Records, including more robust code for indexing, deindexing, reading, writing, and expiring player requests from Open Match state storage. All Frontend API function arguments have changed, although many only slightly. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!
### Release notes
- The Frontend API calls have all been changed to reflect the fact that they operate on Players in state storage. To queue a game client, call 'CreatePlayer'; to get updates, 'GetUpdates'; and to stop matching, 'DeletePlayer'. The calls are now much more obviously related to how Open Match sees players: they are database records that it creates on demand, updates using MMFs and the Backend API, and deletes when the player is no longer looking for a match.
- The Player record in state storage has changed to a more complete hash format, and it no longer makes sense to remove a player's assignment from the Frontend as a separate action from removing their record entirely. `DeleteAssignment()` has therefore been removed. Just use `DeletePlayer` instead; you'll always want the client to re-request matching with its latest attributes anyway.
- There is now a module for [indexing and deindexing players in state storage](internal/statestorage/redis/playerindices/playerindices.go). This is *much* more efficient, as well as cleaner and more maintainable, than the previous implementation, which was **hard-coded to index everything** you passed in to the Frontend API at a specific JSON object depth.
- This paves the way for dynamically choosing your indices without restarting the matchmaker. This will be implemented if there is demand. Pull Requests are welcome!
- Two internal timestamp-based indices have replaced the previous `timestamp` index. `created` is used to calculate how long a player has been waiting for a match; `accessed` is used to determine when a player needs to be expired out of state storage. Both are prefixed by the string `OM_METADATA`, so it should be easy to spot them.
- A call to the Frontend API `GetUpdates()` gRPC endpoint returns a stream of player messages. This is used to send updates to state storage for the `Assignment`, `Status`, and `Error` Player fields in near-realtime. **It is the responsibility of the game client to disconnect** from the stream when it has gotten the results it was waiting for!
- Moved the rest of the gRPC messages into a shared [`messages.proto` file](api/protobuf-spec/messages.proto).
- Added documentation to Frontend API gRPC calls to the [`frontend.proto` file](api/protobuf-spec/frontend.proto).
- [Issue #41](https://github.com/GoogleCloudPlatform/open-match/issues/41)|[PR #48](https://github.com/GoogleCloudPlatform/open-match/pull/48) There is now an HA Redis install available in `install/yaml/01-redis-failover.yaml`. It can be used as a drop-in replacement for the single-instance Redis configuration in `install/yaml/01-redis.yaml`. The HA configuration requires that you install the [Redis Operator](https://github.com/spotahome/redis-operator) (note: **currently alpha**, use at your own risk) in your Kubernetes cluster.
- As part of this change, the Kubernetes service name is now `redis`, not `redis-sentinel`, to denote that it is accessed using a standard Redis client.
- Open Match uses a new feature of the Go module [logrus](https://github.com/sirupsen/logrus) to include filenames and line numbers. If you have an older version in your local build environment, you may need to delete the module and `go get github.com/sirupsen/logrus` again. When building using the provided `cloudbuild.yaml` and `Dockerfile`s this is handled for you.
- The program that was formerly in `examples/frontendclient` has been expanded and moved to the `test` directory under [`test/cmd/frontendclient/`](test/cmd/frontendclient/).
- The client load generator program has been moved from `test/cmd/client` to [`test/cmd/clientloadgen/`](test/cmd/clientloadgen/) to better reflect what it does.
- [Issue #45](https://github.com/GoogleCloudPlatform/open-match/issues/45) The process for moving the build files (`Dockerfile` and `cloudbuild.yaml`) for each component, example, and test program to their respective directories and out of the repository root has started but won't be completed until a future version.
- Put some basic notes in the [production guide](docs/production.md)
- Added a basic [roadmap](docs/roadmap.md)
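The reworked Frontend API surface described in these notes can be pictured as a small gRPC service. The sketch below is illustrative only (the message types `Player` and `Result` are assumed names); the authoritative definition lives in [`frontend.proto`](api/protobuf-spec/frontend.proto):

```proto
// Hypothetical, simplified sketch of the v0.3.0 Frontend API surface.
service Frontend {
  // Queue a game client: creates the player record in state storage.
  rpc CreatePlayer (Player) returns (Result) {}
  // Stream near-realtime updates to the Assignment, Status, and Error
  // fields; the game client must disconnect once it has its result.
  rpc GetUpdates (Player) returns (stream Player) {}
  // Stop matching: deletes (and deindexes) the player record.
  rpc DeletePlayer (Player) returns (Result) {}
}
```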
## v0.2.0 (alpha)
This is a pretty large update. Custom MMFs or Evaluators from 0.1.0 may need some tweaking to work with this version, and some Backend API function arguments have changed. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!
v0.2.0 focused on adding additional functionality to Backend API calls and on **reducing the amount of boilerplate code required to make a custom Matchmaking Function**. For this, a new internal API for use by MMFs called the [Matchmaking Logic API (MMLogic API)](README.md#matchmaking-logic-mmlogic-api) has been added. Many of the core components and examples had to be updated to use the new Backend API arguments and the modules to support them, so we recommend you rebuild and redeploy all the components to use v0.2.0.
### Release notes
- MMLogic API is now available. Deploy it to kubernetes using the [appropriate json file]() and check out the [gRPC API specification](api/protobuf-spec/mmlogic.proto) to see how to use it. To write a client against this API, you'll need to compile the protobuf files to your language of choice. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory.
- When you use the MMLogic API to filter players into pools, it will attempt to report back the number of players that matched the filters and how long the filters took to query state storage.
- An [example MMF](examples/functions/python3/mmlogic-simple/harness.py) using it has been written in Python3. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory. By default the [example backend client](examples/backendclient/main.go) is now configured to use this MMF, so make sure you have it available before you try to run the latest backend client.
- An [example MMF](examples/functions/php/mmlogic-simple/harness.php) using it has been contributed by Ilya Hrankouski in PHP (thanks!).
- The [example golang MMF](examples/functions/golang/manual-simple/) has been updated to use the latest data schemas for MatchObjects, and renamed to `manual-simple` to denote that it is manually manipulating Redis, not using the MMLogic API.
- The API specs have been split into separate files per API, and the protobuf messages are in a separate file. Things were renamed slightly as a result, and you will need to update your API clients. The Frontend API hasn't had its messages moved to the shared messages file yet, but this will happen in an upcoming version.
- The message model for using the Backend API has changed slightly - for calls that make MatchObjects, the expectation is that you will provide a MatchObject with a few fields populated, and it will then be shuttled along through state storage to your MMF and back out again, with various processes 'filling in the blanks' of your MatchObject, which is then returned to your code calling the Backend API. Read the [gRPC API specification](api/protobuf-spec/backend.proto) for more information.
- As part of this, compiled protobuf golang modules now live in the [`internal/pb`](internal/pb) directory. There's a handy [bash script](api/protoc-go.sh) for compiling them from the `api/protobuf-spec` directory into this new `internal/pb` directory for development in your local golang environment if you need it.
- As part of this Backend API message shift and the advent of the MMLogic API, 'player pools' and 'rosters' are now first-class data structures in MatchObjects for those who wish to use them. You can ignore them if you like, but if you want to use some of the MMLogic API calls to automate tasks for you - things like filtering a pool of players according to attributes, or adding all the players in your rosters to the ignorelist so other MMFs don't try to grab them - you'll need to put your data into the [protobuf messages](api/protobuf-spec/messages.proto) so Open Match knows how to read them. The sample backend client [test profile JSON](examples/backendclient/profiles/testprofile.json) has been updated to use this format if you want to see an example.
- Rosters were formerly space-delimited lists of player IDs. They are now first-class repeated protobuf message fields in the [Roster message format](api/protobuf-spec/messages.proto). That means that in most languages, you can access the roster as a list of players using your native language data structures (more info can be found in the [guide for using protocol buffers in your language of choice](https://developers.google.com/protocol-buffers/docs/reference/overview)). If you don't care about the new fields or the new functionality, you can just leave all the other fields but the player ID unset.
- Open Match is transitioning to using [protocol buffer messages](https://developers.google.com/protocol-buffers/) as its internal data format. There is now a Redis state storage [golang module](internal/statestorage/redis/redispb/) for marshaling and unmarshaling MatchObject messages to and from Redis. It isn't very clean code right now but will get worked on for the next couple releases.
- Ignorelists now exist, and have a Redis state storage [golang module](internal/statestorage/redis/ignorelist/) for CRUD access. Currently three ignorelists are defined in the [config file](config/matchmaker_config.json) with their respective parameters. These are implemented as [Sorted Sets in Redis](https://redis.io/commands#sorted_set).
- For those who only want to stand up Open Match and aren't interested in individually tweaking the required kubernetes resources, there are now [three YAML files](install/yaml) that can be used to install Redis, install Open Match, and (optionally) install Prometheus. You'll still need the `sed` [instructions from the Developer Guide](docs/development.md#running-open-match-in-a-development-environment) to substitute in the name of your Docker container registry.
- A super-simple module has been created for doing intersections, unions, and differences of lists of player IDs. It lives in `internal/set/set.go`.
### Roadmap
- It has become clear from talking to multiple users that the software they write to talk to the Backend API needs a name. 'Backend API Client' is technically correct, but given how many APIs are in Open Match and the overwhelming use of 'Client' to refer to a Game Client in the industry, we're currently calling this a 'Director', as its primary purpose is to 'direct' which profiles are sent to the backend, and 'direct' the resulting MatchObjects to game servers. Further discussion / suggestions are welcome.
- We'll be entering the design stage on longer-running MMFs before the end of the year. We'll get a proposal together and on the github repo as a request for comments, so please keep your eye out for that.
- Match profiles specifying multiple MMFs to run are no longer planned. Just send multiple copies of the profile with different MMFs specified via the Backend API.
- Redis Sentinel will likely not be supported. Instead, replicated instances and HAProxy may be the HA solution of choice. There's an [outstanding issue to investigate and implement](https://github.com/GoogleCloudPlatform/open-match/issues/41) if it fills our needs, feel free to contribute!
## v0.1.0 (alpha)
Initial release.

Dockerfile.mmf_php (new file)

@@ -0,0 +1,21 @@
FROM php:7.2-cli
RUN apt-get update && apt-get install -y -q zip unzip zlib1g-dev && apt-get clean
RUN cd /usr/local/bin && curl -sS https://getcomposer.org/installer | php
RUN cd /usr/local/bin && mv composer.phar composer
RUN pecl install grpc
RUN echo "extension=grpc.so" > /usr/local/etc/php/conf.d/30-grpc.ini
RUN pecl install protobuf
RUN echo "extension=protobuf.so" > /usr/local/etc/php/conf.d/30-protobuf.ini
WORKDIR /usr/src/open-match
COPY examples/functions/php/mmlogic-simple examples/functions/php/mmlogic-simple
COPY config config
WORKDIR /usr/src/open-match/examples/functions/php/mmlogic-simple
RUN composer install
CMD [ "php", "./harness.php" ]

README.md

@@ -1,20 +1,19 @@
# Open Match
Open Match is an open source game matchmaker designed to allow game creators to re-use a common matchmaker framework. Its designed to be flexible (run it anywhere Kubernetes runs), extensible (match logic can be customized to work for any game), and scalable.
Open Match is an open source game matchmaking framework designed to allow game creators to build matchmakers of any size easily and with as much possibility for sharing and code re-use as possible. It's designed to be flexible (run it anywhere Kubernetes runs), extensible (match logic can be customized to work for any game), and scalable.
Matchmaking is a complicated process, and when large player populations are involved, many popular matchmaking approaches touch on significant areas of computer science including graph theory and massively concurrent processing. Open Match is an effort to provide a foundation upon which these difficult problems can be addressed by the wider game development community. As Josh Menke — famous for working on matchmaking for many popular triple-A franchises — put it:
["Matchmaking, a lot of it actually really is just really good engineering. There's a lot of really hard networking and plumbing problems that need to be solved, depending on the size of your audience."](https://youtu.be/-pglxege-gU?t=830)
This project attempts to solve the networking and plumbing problems, so game developers can focus on the logic to match players into great games.
## Disclaimer
This software is currently alpha, and subject to change. **It is not yet ready to be used in production.**
This software is currently alpha, and subject to change. Although Open Match has already been used to run [production workloads within Google](https://cloud.google.com/blog/topics/inside-google-cloud/no-tricks-just-treats-globally-scaling-the-halloween-multiplayer-doodle-with-open-match-on-google-cloud), it's still early days on the way to our final goal. There's plenty left to write and we welcome contributions. **We strongly encourage you to engage with the community through the [Slack or Mailing lists](#get-involved) if you're considering using Open Match in production before the 1.0 release, as the documentation is likely to lag behind the latest version a bit while we focus on getting out of alpha/beta as soon as possible.**
## Version
The current stable version in master is 0.1.0.
The 0.2.0 RC1 is now available.
[The current stable version in master is 0.3.0 (alpha)](https://github.com/GoogleCloudPlatform/open-match/releases/tag/030). At this time only bugfixes and doc update pull requests will be considered.
Version 0.4.0 is in active development; please target code changes to the 040wip branch.
# Core Concepts
@@ -24,19 +23,33 @@ Open Match is designed to support massively concurrent matchmaking, and to be sc
## Glossary
* **MMF** — Matchmaking function. This is the customizable matchmaking logic.
* **Component** — One of the discrete processes in an Open Match deployment. Open Match is composed of multiple scalable microservices called 'components'.
* **Roster** — A list of all the players in a match.
* **Profile** — The json blob containing all the parameters used to select which players go into a roster.
* **Match Object** — A protobuffer message format that contains the Profile and the results of the matchmaking function. Sent to the backend API from yoru game backend with an empty roster and then returned from your MMF with the matchmaking results filled in.
* **MMFOrc** — Matchmaker function orchestrator. This Open Match core component is in charge of kicking off custom matchmaking functions (MMFs) and evaluator processes.
### General
* **DGS** — Dedicated game server
* **Client** — The game client program the player uses when playing the game
* **Session** — In Open Match, players are matched together, then assigned to a server which hosts the game _session_. Depending on context, this may be referred to as a _match_, _map_, or just _game_ elsewhere in the industry.
### Open Match
* **Component** — One of the discrete processes in an Open Match deployment. Open Match is composed of multiple scalable microservices called _components_.
* **State Storage** — The storage software used by Open Match to hold all the matchmaking state. Open Match ships with [Redis](https://redis.io/) as the default state storage.
* **MMFOrc** — Matchmaker function orchestrator. This Open Match core component is in charge of kicking off custom matchmaking functions (MMFs) and evaluator processes.
* **MMF** — Matchmaking function. This is the customizable matchmaking logic.
* **MMLogic API** — An API that provides MMF SDK functionality. It is optional - you can also do all the state storage read and write operations yourself if you have a good reason to do so.
* **Director** — The software you (as a developer) write against the Open Match Backend API. The _Director_ decides which MMFs to run, and is responsible for sending MMF results to a DGS to host the session.
### Data Model
* **Player** — An ID and list of attributes with values for a player who wants to participate in matchmaking.
* **Roster** — A list of player objects. Used to hold all the players on a single team.
* **Filter** — A _filter_ is used to narrow down the players to only those who have an attribute value within a certain integer range. All attributes are integer values in Open Match because [that is how indices are implemented](internal/statestorage/redis/playerindices/playerindices.go). A _filter_ is defined in a _player pool_.
* **Player Pool** — A list of all the players who fit all the _filters_ defined in the pool.
* **Match Object** — A protobuffer message format that contains the _profile_ and the results of the matchmaking function. Sent to the backend API from your game backend with the _roster_(s) empty and then returned from your MMF with the matchmaking results filled in.
* **Profile** — The json blob containing all the parameters used by your MMF to select which players go into a roster together.
* **Assignment** — Refers to assigning a player or group of players to a dedicated game server instance. Open Match offers a path to send dedicated game server connection details from your backend to your game clients after a match has been made.
* **Ignore List** — Removing players from matchmaking consideration is accomplished using _ignore lists_. They contain lists of player IDs that your MMF should not include when making matches.
## Requirements
* [Kubernetes](https://kubernetes.io/) cluster — tested with version 1.9.
* [Redis 4+](https://redis.io/) — tested with 4.0.11.
* Open Match is compiled against the latest release of [Golang](https://golang.org/) — tested with 1.10.3.
* Open Match is compiled against the latest release of [Golang](https://golang.org/) — tested with 1.10.9.
## Components
@@ -44,15 +57,17 @@ Open Match is a set of processes designed to run on Kubernetes. It contains thes
1. Frontend API
1. Backend API
1. Matchmaker Function Orchestrator (MMFOrc)
1. Matchmaker Function Orchestrator (MMFOrc) (may be deprecated in future versions)
It includes these **optional** (but recommended) components:
1. Matchmaking Logic (MMLogic) API
It also explicitly depends on these two **customizable** components.
1. Matchmaking "Function" (MMF)
1. Evaluator (may be deprecated in future versions)
1. Evaluator (may be optional in future versions)
While **core** components are fully open source and *can* be modified, they are designed to support the majority of matchmaking scenarios *without need to change the source code*. The Open Match repository ships with simple **customizable** example MMF and Evaluator processes, but it is expected that most users will want full control over the logic in these, so they have been designed to be as easy to modify or replace as possible.
While **core** components are fully open source and _can_ be modified, they are designed to support the majority of matchmaking scenarios *without need to change the source code*. The Open Match repository ships with simple **customizable** MMF and Evaluator examples, but it is expected that most users will want full control over the logic in these, so they have been designed to be as easy to modify or replace as possible.
### Frontend API
@@ -66,18 +81,20 @@ The client is expected to maintain a connection, waiting for an update from the
### Backend API
The Backend API puts match profiles in state storage which the Matchmaking Function (MMF) can access and use to decide which players should be put into a match together, then return those matches to dedicated game server instances.
The Backend API writes match objects to state storage which the Matchmaking Functions (MMFs) access to decide which players should be matched. It returns the results from those MMFs.
The Backend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/backend.proto`. At the most basic level, it expects to be connected to your online infrastructure (probably to your server scaling manager or scheduler, or even directly to a dedicated game server), and to receive:
The Backend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/backend.proto`. At the most basic level, it expects to be connected to your online infrastructure (probably to your server scaling manager or **director**, or even directly to a dedicated game server), and to receive:
* A **unique ID** for a matchmaking profile.
* A **json blob** containing all the match-related data you want to use in your matchmaking function, in an 'empty' match object.
* A **json blob** containing all the matching-related data and filters you want to use in your matchmaking function.
* An optional list of **roster**s to hold the resulting teams chosen by your matchmaking function.
* An optional set of **filters** that define player pools your matchmaking function will choose players from.
Your game backend is expected to maintain a connection, waiting for 'filled' match objects containing a roster of players. The Backend API also provides a return path for your game backend to return dedicated game server connection details (an 'assignment') to the game client, and to delete these 'assignments'.
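Putting the pieces above together, a match object submitted to the Backend API might look something like the following. This is an illustrative sketch only - the exact field names belong to the real schema in [`backend.proto`](api/protobuf-spec/backend.proto) and the sample at [`examples/backendclient/profiles/testprofile.json`](examples/backendclient/profiles/testprofile.json):

```json
{
  "id": "profile.simple-1v1",
  "properties": "{ \"mode\": \"1v1\" }",
  "pools": [
    {
      "name": "defaultPool",
      "filters": [
        { "attribute": "mmr", "min": 1000, "max": 2000 }
      ]
    }
  ],
  "rosters": [
    { "name": "red",  "players": [] },
    { "name": "blue", "players": [] }
  ]
}
```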
### Matchmaking Function Orchestrator (MMFOrc)
The MMFOrc kicks off your custom matchmaking function (MMF) for every profile submitted to the Backend API. It also runs the Evaluator to resolve conflicts in case more than one of your profiles matched the same players.
The MMFOrc kicks off your custom matchmaking function (MMF) for every unique profile submitted to the Backend API in a match object. It also runs the Evaluator to resolve conflicts in case more than one of your profiles matched the same players.
The MMFOrc exists to orchestrate/schedule your **custom components**, running them as often as required to meet the demands of your game. MMFOrc runs in an endless loop, submitting MMFs and Evaluator jobs to Kubernetes.
@@ -86,20 +103,20 @@ The MMFOrc exists to orchestrate/schedule your **custom components**, running th
The MMLogic API provides a series of gRPC functions that act as a Matchmaking Function SDK. Much of the basic, boilerplate code for an MMF is the same regardless of what players you want to match together. The MMLogic API offers a gRPC interface for many common MMF tasks, such as:
1. Reading a profile from state storage.
1. Running filters on players in state strorage.
1. Removing chosen players from consideration by other MMFs (by adding them to an ignore list).
1. Running filters on players in state storage. It automatically removes players on ignore lists as well!
1. Removing chosen players from consideration by other MMFs (by adding them to an ignore list). This happens automatically when you write your results!
1. Writing the matchmaking results to state storage.
1. (Optional, NYI) Exporting MMF stats for metrics collection.
More details about the available gRPC calls can be found in the [API Specification](api/protobuf-spec/messages.proto).
**Note**: using the MMLogic API is **optional**. It tries to simplify the development of MMFs, but if you want to take care of these tasks on your own, you can make few or no calls to the MMLogic API as long as your MMF still completes all the required tasks. Read the [Matchmaking Functions section](#matchmaking-functions-mmfs) for more details of what work an MMF must do.
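To make that automation concrete, here is a minimal in-memory sketch of the filter-plus-ignore-list step the MMLogic API performs against state storage for you. All types and attribute names (`mmr`, `ping_nyc`) are hypothetical illustrations, not the real API:

```go
// Sketch of the filter-and-ignore-list flow the MMLogic API automates.
// Hypothetical in-memory stand-in for Redis state storage; the real API
// filters indexed player attributes server-side.
package main

import "fmt"

// Player mirrors the attribute idea from messages.proto: integer
// attributes that filters can range-match against.
type Player struct {
	ID         string
	Attributes map[string]int64
}

// Filter matches players whose named attribute falls in [Min, Max].
type Filter struct {
	Attribute string
	Min, Max  int64
}

// filterPool returns players passing every filter, skipping anyone on an
// ignore list -- the step the MMLogic API performs for you automatically.
func filterPool(pool []Player, filters []Filter, ignored map[string]bool) []Player {
	var out []Player
	for _, p := range pool {
		if ignored[p.ID] {
			continue
		}
		ok := true
		for _, f := range filters {
			v, has := p.Attributes[f.Attribute]
			if !has || v < f.Min || v > f.Max {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	pool := []Player{
		{ID: "p1", Attributes: map[string]int64{"mmr": 1200, "ping_nyc": 40}},
		{ID: "p2", Attributes: map[string]int64{"mmr": 2400, "ping_nyc": 35}},
		{ID: "p3", Attributes: map[string]int64{"mmr": 1300, "ping_nyc": 300}},
	}
	filters := []Filter{{"mmr", 1000, 2000}, {"ping_nyc", 0, 100}}
	ignored := map[string]bool{} // players already claimed by other MMFs
	for _, p := range filterPool(pool, filters, ignored) {
		fmt.Println(p.ID) // prints "p1"
	}
}
```

The real MMLogic API additionally reports how many players matched each filter and how long the query took.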
### Evaluator
The Evaluator resolves conflicts when multiple MMFs select the same player(s).
The Evaluator is a component run by the Matchmaker Function Orchestrator (MMFOrc) after the matchmaker functions have been run, and some proposed results are available. The Evaluator looks at all the proposals, and if multiple proposals contain the same player(s), it breaks the tie. In many simple matchmaking setups with only a few game modes and well-tuned matchmaking functions, the Evaluator may functionally be a no-op or first-in-first-out algorithm. In complex matchmaking setups where, for example, a player can queue for multiple types of matches, the Evaluator provides the critical customizability to evaluate all available proposals and approve those that will be passed to your game servers.
Large-scale concurrent matchmaking functions is a complex topic, and users who wish to do this are encouraged to engage with the [Open Match community](https://github.com/GoogleCloudPlatform/open-match#get-involved) about patterns and best practices.
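As an illustration of the simplest case mentioned above, here is a hedged sketch of a first-in-first-out Evaluator. The `Proposal` type is hypothetical; a real Evaluator reads proposed MatchObjects from state storage:

```go
// Sketch of a first-in-first-out Evaluator: approve each proposal unless it
// shares a player with an earlier-approved one. Hypothetical types; the real
// Evaluator reads MatchObject proposals from state storage.
package main

import "fmt"

type Proposal struct {
	MatchID string
	Players []string
}

// evaluateFIFO approves proposals in arrival order, rejecting any whose
// players were already claimed by an approved proposal.
func evaluateFIFO(proposals []Proposal) (approved []Proposal) {
	claimed := map[string]bool{}
	for _, prop := range proposals {
		conflict := false
		for _, p := range prop.Players {
			if claimed[p] {
				conflict = true
				break
			}
		}
		if conflict {
			continue // rejected; its players stay available for other proposals
		}
		for _, p := range prop.Players {
			claimed[p] = true
		}
		approved = append(approved, prop)
	}
	return approved
}

func main() {
	props := []Proposal{
		{"match-1", []string{"alice", "bob"}},
		{"match-2", []string{"bob", "carol"}}, // conflicts on bob; rejected
		{"match-3", []string{"carol", "dave"}},
	}
	for _, m := range evaluateFIFO(props) {
		fmt.Println(m.MatchID) // prints "match-1" then "match-3"
	}
}
```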
### Matchmaking Functions (MMFs)
Matchmaking Functions (MMFs) are run by the Matchmaker Function Orchestrator (MMFOrc) — once per profile it sees in state storage. The MMF is run as a Job in Kubernetes, and has full access to read and write from state storage. At a high level, the encouraged pattern is to write an MMF in whatever language you are comfortable in that can do the following things:
- [x] Be packaged in a (Linux) Docker container.
- [x] Read/write from the Open Match state storage — Open Match ships with Redis as the default state storage.
- [x] Read a profile you wrote to state storage using the Backend API.
- [x] Select from the player data you wrote to state storage using the Frontend API. It must respect all the ignore lists defined in the matchmaker config.
- [ ] Run your custom logic to try to find a match.
- [x] Write the match object it creates to state storage at a specified key.
- [x] Remove the players it selected from consideration by other MMFs by adding them to the appropriate ignore list.
- [x] Notify the MMFOrc of completion.
- [x] (Optional, but recommended) Export stats for metrics collection.
**Open Match offers [matchmaking logic API](#matchmaking-logic-mmlogic-api) calls for handling the checked items, as long as you are able to format your input and output in the data schema Open Match expects (defined in the [protobuf messages](api/protobuf-spec/messages.proto)).** You'll need to do this work yourself if you don't want to or can't use the data schema Open Match is looking for. However, the data formats expected by Open Match are pretty generalized and will work with most common matchmaking scenarios and game types. If you have questions about how to fit your data into the formats specified, feel free to ask us in the [Slack or mailing group](#get-involved).
Example MMFs are provided in these languages:
- [C#](examples/functions/csharp/simple) (doesn't use the MMLogic API)
- [Python3](examples/functions/python3/mmlogic-simple) (MMLogic API enabled)
- [PHP](examples/functions/php/mmlogic-simple) (MMLogic API enabled)
- [golang](examples/functions/golang/manual-simple) (doesn't use the MMLogic API)
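The checklist above can also be sketched end-to-end. This is a minimal stand-in using an in-memory map instead of Redis; the `count` profile field and all key names are hypothetical, and a real MMF would run as a Kubernetes Job talking to state storage (directly or via the MMLogic API):

```go
// Skeleton of the MMF responsibilities, against a hypothetical in-memory
// stand-in for Redis state storage.
package main

import (
	"encoding/json"
	"fmt"
)

type store struct {
	profiles   map[string]string // profile key -> JSON profile
	players    map[string]bool   // queued player IDs
	ignoreList map[string]bool   // players claimed by other MMFs
	matches    map[string]string // result key -> JSON match object
}

// runMMF mirrors the checklist: read the profile, select unclaimed players,
// write the match object, and add the chosen players to the ignore list.
func runMMF(s *store, profileKey, resultKey string) error {
	profile, ok := s.profiles[profileKey]
	if !ok {
		return fmt.Errorf("no profile at %q", profileKey)
	}
	var want struct {
		Count int `json:"count"` // hypothetical profile field
	}
	if err := json.Unmarshal([]byte(profile), &want); err != nil {
		return err
	}
	var roster []string
	for id := range s.players {
		if s.ignoreList[id] || len(roster) >= want.Count {
			continue
		}
		roster = append(roster, id)
	}
	if len(roster) < want.Count {
		return fmt.Errorf("insufficient players")
	}
	match, _ := json.Marshal(map[string]any{"roster": roster})
	s.matches[resultKey] = string(match) // write result at the specified key
	for _, id := range roster {
		s.ignoreList[id] = true // remove from consideration by other MMFs
	}
	return nil
}

func main() {
	s := &store{
		profiles:   map[string]string{"profile:1v1": `{"count": 2}`},
		players:    map[string]bool{"p1": true, "p2": true, "p3": true},
		ignoreList: map[string]bool{},
		matches:    map[string]string{},
	}
	if err := runMMF(s, "profile:1v1", "match:result:1"); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("wrote:", s.matches["match:result:1"] != "") // prints "wrote: true"
}
```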
## Open Source Software integrations
### Structured logging
Logging for Open Match uses the [Golang logrus module](https://github.com/sirupsen/logrus) to provide structured logs. Logs are output to `stdout` in each component, as expected by Docker and Kubernetes. Level and format are configurable via `config/matchmaker_config.json`. If you have a specific log aggregator as your final destination, we recommend you have a look at the logrus documentation, as there is probably a log formatter that plays nicely with your stack.
### Instrumentation for metrics
Open Match uses [OpenCensus](https://opencensus.io/) for metrics instrumentation.
By default, Open Match expects you to run Redis *somewhere*. Connection information can be put in the config file (`matchmaker_config.json`) for any Redis instance reachable from the [Kubernetes namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). By default, Open Match sensibly runs in the Kubernetes `default` namespace. In most instances, we expect users will run a copy of Redis in a pod in Kubernetes, with a service pointing to it.
* HA configurations for Redis aren't implemented by the provided Kubernetes resource definition files, but Open Match expects the Redis service to be named `redis`, which provides an easier path to multi-instance deployments.
## Additional examples
The following examples of how to call the APIs are provided in the repository. Both have `Dockerfile` and `cloudbuild.yaml` files in their respective directories:
* `test/cmd/frontendclient/main.go` acts as a client to the Frontend API, putting a player into the queue with simulated latencies from major metropolitan cities and a couple of other matchmaking attributes. It then waits for you to manually put a value in Redis to simulate a server connection string being written using the Backend API 'CreateAssignments' call, and displays that value on stdout for you to verify.
* `examples/backendclient/main.go` calls the Backend API and passes in the profile found in `backendstub/profiles/testprofile.json` to the `ListMatches` API endpoint, then continually prints the results until you exit, or there are insufficient players to make a match based on the profile.
## Usage
Documentation and usage guides on how to set up and customize Open Match.
### Precompiled container images
Once we reach a 1.0 release, we plan to produce publicly available (Linux) Docker container images of major releases in a public image registry. Until then, refer to the 'Compiling from source' section below.
### Compiling from source
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild.yaml` files for each component in the corresponding `cmd/<COMPONENT>` directories.
All the core components for Open Match are written in Golang and use the [Dockerfile multistage builder pattern](https://docs.docker.com/develop/develop-images/multistage-build/). This pattern uses intermediate Docker containers as a Golang build environment while producing lightweight, minimized container images as final build artifacts. When the project is ready for production, we will modify the `Dockerfile`s to uncomment the last build stage. Although this pattern is great for production container images, it removes most of the utilities required to troubleshoot issues during development.
## Configuration
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration. To this end, there is a single centralized config file located in `<REPO_ROOT>/config/`, which is symlinked to each component's subdirectory for convenience when building locally. When `docker build`ing the component container images, the Dockerfile copies the centralized config file into the component directory.
We plan to replace this with a Kubernetes-managed config with dynamic reloading; please join the discussion in [Issue #42](issues/42).
### Guides
* [Production guide](./docs/production.md) Lots of best practices to be written here before the 1.0 release; right now it's a scattered collection of notes. **WIP**
* [Development guide](./docs/development.md)
### Reference
## License
Apache 2.0
# Planned improvements
See the [provisional roadmap](docs/roadmap.md) for more information on upcoming releases.
## Documentation
- [ ] “Writing your first matchmaker” getting started guide will be included in an upcoming version.
- [ ] Documentation on release process and release calendar.
## State storage
- [X] All state storage operations should be isolated from core components into the `statestorage/` modules. This is necessary precursor work to enabling Open Match state storage to use software other than Redis.
- [X] [The Redis deployment should have an example HA configuration](https://github.com/GoogleCloudPlatform/open-match/issues/41)
- [X] Redis watch should be unified to watch a hash and stream updates. The code for this is written and validated but not committed yet.
- [ ] We don't want to support two redis watcher code paths, but we will until golang protobuf reflection is a bit more usable. [Design doc](https://docs.google.com/document/d/19kfhro7-CnBdFqFk7l4_HmwaH2JT_Rhw5-2FLWLEGGk/edit#heading=h.q3iwtwhfujjx), [github issue](https://github.com/golang/protobuf/issues/364)
- [X] Player/Group records generated when a client enters the matchmaking pool need to be removed after a certain amount of time with no activity. When using Redis, this will be implemented as an expiration on the player record.
## Instrumentation / Metrics / Analytics
- [ ] Instrumentation of MMFs is in the planning stages. Since MMFs are by design meant to be completely customizable (to the point of allowing any process that can be packaged in a Docker container), metrics/stats will need to have an expected format and formalized outgoing pathway. The current thinking is that metrics should be written to a particular key in state storage in a format compatible with OpenCensus, and will be collected, aggregated, and exported to Prometheus using another process.
- [ ] [OpenCensus tracing](https://opencensus.io/core-concepts/tracing/) will be implemented in an upcoming version. This is likely going to require knative.
- [X] Read logrus logging configuration from matchmaker_config.json.
## Security
- [ ] The Kubernetes service account used by the MMFOrc should be updated to have min required permissions. [Issue 52](issues/52)
## Kubernetes
- [ ] Autoscaling isn't turned on for the Frontend or Backend API Kubernetes deployments by default.
- [ ] A [Helm](https://helm.sh/) chart to stand up Open Match may be provided in an upcoming version. For now just use the [installation YAMLs](./install/yaml).
- [ ] A knative-based implementation of MMFs is in the planning stages.
## CI / CD / Build
- [ ] We plan to host 'official' docker images for all release versions of the core components in publicly available docker registries soon. This is tracked in [Issue #45](issues/45) and is blocked by [Issue 42](issues/42).
- [ ] CI/CD for this repo and the associated status tags are planned.
- [ ] Golang unit tests will be shipped in an upcoming version.
- [ ] A full load-testing and e2e testing suite will be included in an upcoming version.
## Will not Implement
- [X] Match profiles should be able to define multiple MMF container images to run, but this is not currently supported. This enables A/B testing and several other scenarios.
- [X] Defining multiple images inside a profile for the purposes of experimentation adds another layer of complexity into profiles that can instead be handled outside of Open Match with custom match functions in collaboration with a 'director' (the component that calls the Backend API to schedule matchmaking).
### Special Thanks
- Thanks to https://jbt.github.io/markdown-editor/ for help in marking this document down.


# Release history
## v0.2.0 RC1 (alpha)
This is a pretty large update. Custom MMFs or evaluators from 0.1.0 may need some tweaking to work with this version. Some Backend API function arguments have changed. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!
v0.2.0 focused on adding additional functionality to Backend API calls and on **reducing the amount of boilerplate code required to make a custom Matchmaking Function**. For this, a new internal API for use by MMFs called the [Matchmaking Logic API (MMLogic API)](README.md#matchmaking-logic-mmlogic-api) has been added. Many of the core components and examples had to be updated to use the new Backend API arguments and the modules to support them, so we recommend you rebuild and redeploy all the components to use v0.2.0.
### Release notes
- MMLogic API is now available. Deploy it to kubernetes using the [appropriate json file]() and check out the [gRPC API specification](api/protobuf-spec/mmlogic.proto) to see how to use it. To write a client against this API, you'll need to compile the protobuf files to your language of choice. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory.
- When using the MMLogic API to filter players into pools, it will attempt to report back the number of players that matched the filters and how long the filters took to query state storage.
- An [example MMF](examples/functions/python3/mmlogic-simple/harness.py) using it has been written in Python3. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory. By default the [example backend client](examples/backendclient/main.go) is now configured to use this MMF, so make sure you have it available before you try to run the latest backend client.
- The API specs have been split into separate files per API and the protobuf messages are in a separate file. Things were renamed slightly as a result, and you will need to update your API clients. The Frontend API hasn't had its messages moved to the shared messages file yet, but this will happen in an upcoming version.
- The message model for using the Backend API has changed slightly - for calls that make MatchObjects, the expectation is that you will provide a MatchObject with a few fields populated, and it will then be shuttled along through state storage to your MMF and back out again, with various processes 'filling in the blanks' of your MatchObject, which is then returned to your code calling the Backend API. Read the [gRPC API specification](api/protobuf-spec/backend.proto) for more information.
- As part of this, compiled protobuf golang modules now live in the [internal/pb](internal/pb) directory. There's a handy [bash script](api/protoc-go.sh) for compiling them in your local golang environment if you need it.
- As part of this Backend API message shift and the advent of the MMLogic API, 'player pools' and 'rosters' are now first-class data structures in MatchObjects for those who wish to use them. You can ignore them if you like, but if you want to use some of the MMLogic API calls to automate tasks for you - things like filtering a pool of players according to attributes or adding all the players in your rosters to the ignorelist so other MMFs don't try to grab them - you'll need to put your data into the [protobuf messages](api/protobuf-spec/messages.proto) so Open Match knows how to read them. The sample backend client [test profile JSON](examples/backendclient/profiles/testprofile.json) has been updated to use this format if you want to see an example.
- Rosters were formerly space-delimited lists of player IDs. They are now first-class repeated protobuf message fields in the [Roster message format](api/protobuf-spec/messages.proto). That means that in most languages, you can access the roster as a list of players using your native language data structures (more info can be found in the [guide for using protocol buffers in your language of choice](https://developers.google.com/protocol-buffers/docs/reference/overview)). If you don't care about the new fields or the new functionality, you can just leave all the other fields but the player ID unset.
- Open Match is transitioning to using [protocol buffer messages](https://developers.google.com/protocol-buffers/) as its internal data format. There is now a Redis state storage [golang module](internal/statestorage/redis/redispb/) for marshaling and unmarshaling MatchObject messages to and from Redis. It isn't very clean code right now but will get worked on for the next couple releases.
- Ignorelists now exist, and have a Redis state storage [golang module](internal/statestorage/redis/ignorelist/) for CRUD access. Currently three ignorelists are defined in the [config file](config/matchmaker_config.json) with their respective parameters. These are implemented as [Sorted Sets in Redis](https://redis.io/commands#sorted_set).
- For those who only want to stand up Open Match and aren't interested in individually tweaking the required kubernetes resources, there are now [three YAML files](install/yaml) that can be used to install Redis, install Open Match, and (optionally) install Prometheus. You'll still need the `sed` [instructions from the Developer Guide](docs/development.md#running-open-match-in-a-development-environment) to substitute in the name of your Docker container registry.
### Roadmap
- It has become clear from talking to multiple users that the software they write to talk to the Backend API needs a name. 'Backend API Client' is technically correct, but given how many APIs are in Open Match and the overwhelming use of 'Client' to refer to a Game Client in the industry, we're currently calling this a 'Director', as its primary purpose is to 'direct' which profiles are sent to the backend, and 'direct' the resulting MatchObjects to game servers. Further discussion / suggestions are welcome.
- We'll be entering the design stage on longer-running MMFs before the end of the year. We'll get a proposal together and on the github repo as a request for comments, so please keep your eye out for that.
- Match profiles providing multiple MMFs to run isn't planned anymore. Just send multiple copies of the profile with different MMFs specified via the Backend API.
- Evaluators will be examined for removal in an upcoming version. There's a good chance this logic should live in the Director.
- Redis Sentinel will likely not be supported. Instead, replicated instances and HAProxy may be the HA solution of choice.
## v0.1.0 (alpha)
Initial release.


Excerpt from the Backend API specification ([backend.proto](api/protobuf-spec/backend.proto)), inside `service Backend`:
// - rosters, if you choose to fill them in your MMF. (Recommended)
// - pools, if you used the MMLogicAPI in your MMF. (Recommended, and provides stats)
rpc CreateMatch(messages.MatchObject) returns (messages.MatchObject) {}
// Continually run MMF and stream MatchObjects that fit this profile until
// the backend client closes the connection. Same inputs/outputs as CreateMatch.
rpc ListMatches(messages.MatchObject) returns (stream messages.MatchObject) {}
// Delete a MatchObject from state storage manually. (MatchObjects in state
// storage will also automatically expire after a while, defined in the config)
// INPUT: MatchObject message with the 'id' field populated.
// (All other fields are ignored.)
rpc DeleteMatch(messages.MatchObject) returns (messages.Result) {}
// Calls for communication of connection info to players.
// Write the connection info for the list of players in the
// Assignments.messages.Rosters to state storage. The Frontend API is
// responsible for sending anything sent here to the game clients.
// Sending a player to this function kicks off a process that removes
// the player from future matchmaking functions by adding them to the
// 'deindexed' player list and then deleting their player ID from state storage
// indexes.
// INPUT: Assignments message with these fields populated:
// - assignment, anything you write to this string is sent to Frontend API
// - rosters. You can send any number of rosters, containing any number of
// player messages. All players from all rosters will be sent the assignment.
// The only field in the Roster's Player messages used by CreateAssignments is
// the id field. All other fields in the Player messages are silently ignored.
rpc CreateAssignments(messages.Assignments) returns (messages.Result) {}
// Remove DGS connection info from state storage for players.
// INPUT: Roster message with the 'players' field populated.
// The only field in the Roster's Player messages used by
// DeleteAssignments is the 'id' field. All others are silently ignored. If
// you need to delete multiple rosters, make multiple calls.
rpc DeleteAssignments(messages.Roster) returns (messages.Result) {}


Excerpt from the Frontend API specification:
// TODO: In a future version, these messages will be moved/merged with those in om_messages.proto
syntax = 'proto3';
package api;
option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
import 'api/protobuf-spec/messages.proto';
service Frontend {
// CreatePlayer will put the player in state storage, and then look
// through the 'properties' field for the attributes you have defined as
// indices in your matchmaker config. If the attributes exist and are valid
// integers, they will be indexed.
// INPUT: Player message with these fields populated:
// - id
// - properties
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
rpc CreatePlayer(messages.Player) returns (messages.Result) {}
// DeletePlayer removes the player from state storage by doing the
// following:
// 1) Delete player from configured indices. This effectively removes the
// player from matchmaking when using recommended MMF patterns.
// Everything after this is just cleanup to save state storage space.
// 2) 'Lazily' delete the player's state storage record. This is kicked
// off in the background and may take some time to complete.
// 3) 'Lazily' delete the player's metadata indices (like the timestamp when
// they called CreatePlayer, and the last time the record was accessed). This
// is also kicked off in the background and may take some time to complete.
// INPUT: Player message with the 'id' field populated.
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
rpc DeletePlayer(messages.Player) returns (messages.Result) {}
// GetUpdates streams matchmaking results from Open Match for the
// provided player ID.
// INPUT: Player message with the 'id' field populated.
// OUTPUT: a stream of player objects with one or more of the following
// fields populated, if an update to that field is seen in state storage:
// - 'assignment': string that usually contains game server connection information.
// - 'status': string to communicate current matchmaking status to the client.
// - 'error': string to pass along error information to the client.
//
// During normal operation, the expectation is that the 'assignment' field
// will be updated by a Backend process calling the 'CreateAssignments' Backend API
// endpoint. 'Status' and 'Error' are free for developers to use as they see fit.
// Even if you had multiple players enter a matchmaking request as a group, the
// Backend API 'CreateAssignments' call will write the results to state
// storage separately under each player's ID. OM expects you to make all game
// clients 'GetUpdates' with their own ID from the Frontend API to get
// their results.
//
// NOTE: This call generates a small amount of load on the Frontend API and state
// storage while watching the player record for updates. You are expected
// to close the stream from your client after receiving your matchmaking
// results (or a reasonable timeout), or you will continue to
// generate load on OM until you do!
// NOTE: Just bear in mind that every update will send egress traffic from
// Open Match to game clients! Frugality is recommended.
rpc GetUpdates(messages.Player) returns (stream messages.Player) {}
}
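The notes above on `GetUpdates` suggest a simple client pattern: consume the stream until an assignment arrives, then stop watching. Here is a hedged sketch with a channel standing in for the gRPC stream; the types are hypothetical, not the generated client:

```go
// Sketch of the client-side GetUpdates pattern: consume streamed player
// updates and stop watching as soon as an assignment arrives. A channel
// stands in for the gRPC stream; hypothetical types, not the real client.
package main

import "fmt"

type PlayerUpdate struct {
	ID         string
	Assignment string // e.g. "ip:port" of the DGS, set by the Backend
	Status     string
}

// watchUntilAssigned reads updates until one carries an assignment, then
// returns it. A real client would also enforce a timeout and close the
// gRPC stream to stop generating load on the Frontend API.
func watchUntilAssigned(updates <-chan PlayerUpdate) string {
	for u := range updates {
		if u.Assignment != "" {
			return u.Assignment
		}
		// Status-only updates keep the client informed while matchmaking runs.
	}
	return ""
}

func main() {
	updates := make(chan PlayerUpdate, 3)
	updates <- PlayerUpdate{ID: "p1", Status: "searching"}
	updates <- PlayerUpdate{ID: "p1", Status: "match found"}
	updates <- PlayerUpdate{ID: "p1", Assignment: "10.0.0.2:7777"}
	close(updates)
	fmt.Println(watchUntilAssigned(updates)) // prints "10.0.0.2:7777"
}
```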


Excerpt from the shared messages specification ([messages.proto](api/protobuf-spec/messages.proto)):
// MatchObject as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
message MatchObject{
string id = 1; // By convention, an Xid
string properties = 2; // By convention, a JSON-encoded string
string error = 3; // Last error encountered.
repeated Roster rosters = 4; // Rosters of players.
message PlayerPool{
// ... earlier fields elided ...
Stats stats = 4; // Statistics for the last time this Pool was retrieved from state storage.
}
// Open Match's internal representation and wire protocol format for "Players".
// In order to enter matchmaking using the Frontend API, your client code should generate
// a consistent Player message (the same result for each client every time they launch) with an ID and
// properties filled in (for more details about valid values for these fields,
// see the documentation).
// Players contain a number of fields, but the gRPC calls that take a
// Player as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
message Player{
message Attribute{
string name = 1; // Name should match a Filter.attribute field.
int64 value = 2;
}
string id = 1; // By convention, a UUID
string id = 1; // By convention, an Xid
string properties = 2; // By convention, a JSON-encoded string
string pool = 3; // Optionally used to specify the PlayerPool in which to find a player.
repeated Attribute attributes = 4; // Attributes of this player.
string assignment = 5; // By convention, ip:port of a DGS to connect to
string status = 6; // Arbitrary developer-chosen string.
string error = 7; // Arbitrary developer-chosen string.
}
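As the comments note, 'properties' is by convention a JSON-encoded string that the client fills in before entering matchmaking. A minimal sketch of building that blob with the standard library (the attribute names are illustrative, not an Open Match schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// playerProperties builds the JSON-encoded properties string a client
// would attach to its Player message before calling the Frontend API.
func playerProperties(attrs map[string]int64) (string, error) {
	b, err := json.Marshal(attrs)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	props, err := playerProperties(map[string]int64{"mmr": 1200, "ping": 35})
	fmt.Println(props, err)
}
```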
@ -78,13 +88,7 @@ message Result{
message IlInput{
}
// Simple message used to pass the connection string for the DGS to the player.
// DEPRECATED: Likely to be integrated into another protobuf message in a future version.
message ConnectionInfo{
string connection_string = 1; // Passed by the matchmaker to game clients without modification.
}
message Assignments{
repeated Roster rosters = 1;
ConnectionInfo connection_info = 2;
string assignment = 10;
}

View File

@ -55,7 +55,7 @@ service MmLogic {
// Player listing and filtering functions
//
// RetrievePlayerPool gets the list of players that match every Filter in the
// PlayerPool, and then removes all players it finds in the ignore list. It
// PlayerPool, excluding players in any configured ignore lists. It
// combines the results, and returns the resulting player pool.
rpc GetPlayerPool(messages.PlayerPool) returns (stream messages.PlayerPool) {}
@ -63,8 +63,8 @@ service MmLogic {
//
// IlInput is an empty message reserved for future use.
rpc GetAllIgnoredPlayers(messages.IlInput) returns (messages.Roster) {}
// RetrieveIgnoreList retrieves players from the ignore list specified in the
// config file under 'ignoreLists.proposedPlayers.key'.
// ListIgnoredPlayers retrieves players from the ignore list specified in the
// config file under 'ignoreLists.proposed.name'.
rpc ListIgnoredPlayers(messages.IlInput) returns (messages.Roster) {}
// NYI

View File

@ -2,9 +2,8 @@ steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-devbase:latest',
'--cache-from=gcr.io/$PROJECT_ID/openmatch-devbase:latest',
'--tag=gcr.io/$PROJECT_ID/openmatch-base:dev',
'-f', 'Dockerfile.base',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-devbase:latest']
images: ['gcr.io/$PROJECT_ID/openmatch-base:dev']

View File

@ -1,9 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf:dev',
'-f', 'Dockerfile.mmf',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf:dev']

cloudbuild_mmf_php.yaml Normal file
View File

@ -0,0 +1,9 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-php-mmlogic-simple',
'-f', 'Dockerfile.mmf_php',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-php-mmlogic-simple']

View File

@ -1,12 +1,9 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-mmf:py3' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf:py3',
'--cache-from=gcr.io/$PROJECT_ID/openmatch-mmf:py3',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-py3-mmlogic-simple:dev',
'-f', 'Dockerfile.mmf_py3',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf:py3']
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-py3-mmlogic-simple:dev']

View File

@ -1,10 +1,7 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY cmd/backendapi cmd/backendapi
COPY config config
COPY internal internal
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

View File

@ -1,5 +1,6 @@
/*
package apisrv provides an implementation of the gRPC server defined in ../../../api/protobuf-spec/backend.proto
package apisrv provides an implementation of the gRPC server defined in
../../../api/protobuf-spec/backend.proto
Copyright 2018 Google LLC
@ -24,7 +25,6 @@ import (
"errors"
"fmt"
"net"
"strings"
"time"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
@ -42,7 +42,7 @@ import (
"github.com/tidwall/gjson"
"github.com/gomodule/redigo/redis"
"github.com/google/uuid"
"github.com/rs/xid"
"github.com/spf13/viper"
"google.golang.org/grpc"
@ -53,7 +53,6 @@ var (
beLogFields = log.Fields{
"app": "openmatch",
"component": "backend",
"caller": "backend/apisrv/apisrv.go",
}
beLog = log.WithFields(beLogFields)
)
@ -83,7 +82,7 @@ func New(cfg *viper.Viper, pool *redis.Pool) *BackendAPI {
return &s
}
// Open opens the api grpc service, starting it listening on the configured port.
// Open starts the api grpc service listening on the configured port.
func (s *BackendAPI) Open() error {
ln, err := net.Listen("tcp", ":"+s.cfg.GetString("api.backend.port"))
if err != nil {
@ -108,7 +107,7 @@ func (s *BackendAPI) Open() error {
}
// CreateMatch is this service's implementation of the CreateMatch gRPC method
// defined in ../proto/backend.proto
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject) (*backend.MatchObject, error) {
// Get a cancel-able context
@ -120,7 +119,7 @@ func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Generate a request to fill the profile. Make a unique request ID.
moID := strings.Replace(uuid.New().String(), "-", "", -1)
moID := xid.New().String()
requestKey := moID + "." + profile.Id
/*
@ -135,8 +134,8 @@ func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject
*/
// Case where no protobuf pools were passed; check if there's a JSON version in the properties.
// This is for backwards compatibility, it is recommended you populate the
// pools before calling CreateMatch/ListMatches
// This is for backwards compatibility, it is recommended you populate the protobuf's
// 'pools' field directly and pass it to CreateMatch/ListMatches
if profile.Pools == nil && s.cfg.IsSet("jsonkeys.pools") &&
gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.pools")).Exists() {
poolsJSON := fmt.Sprintf("{\"pools\": %v}", gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.pools")).String())
@ -155,7 +154,7 @@ func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject
// Case where no protobuf roster was passed; check if there's a JSON version in the properties.
// This is for backwards compatibility, it is recommended you populate the
// pools before calling CreateMatch/ListMatches
// protobuf's 'rosters' field directly and pass it to CreateMatch/ListMatches
if profile.Rosters == nil && s.cfg.IsSet("jsonkeys.rosters") &&
gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.rosters")).Exists() {
rostersJSON := fmt.Sprintf("{\"rosters\": %v}", gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.rosters")).String())
@ -183,8 +182,7 @@ func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject
beLog.Info(profile)
// Write profile to state storage
//_, err := redisHelpers.Create(ctx, s.pool, profile.Id, profile.Properties)
err := redispb.MarshalToRedis(ctx, profile, s.pool)
err := redispb.MarshalToRedis(ctx, s.pool, profile, s.cfg.GetInt("redis.expirations.matchobject"))
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
@ -216,7 +214,7 @@ func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject
newMO := backend.MatchObject{Id: requestKey}
watchChan := redispb.Watcher(ctx, s.pool, newMO) // Watcher() runs the appropriate Redis commands.
errString := ("Error retrieving matchmaking results from state storage")
timeout := time.Duration(s.cfg.GetInt("interval.resultsTimeout")) * time.Second
timeout := time.Duration(s.cfg.GetInt("api.backend.timeout")) * time.Second
select {
case <-time.After(timeout):
@ -311,7 +309,7 @@ func (s *backendAPI) ListMatches(p *backend.MatchObject, matchStream backend.Bac
}
// DeleteMatch is this service's implementation of the DeleteMatch gRPC method
// defined in ../proto/backend.proto
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) DeleteMatch(ctx context.Context, mo *backend.MatchObject) (*backend.Result, error) {
// Create context for tagging OpenCensus metrics.
@ -323,7 +321,7 @@ func (s *backendAPI) DeleteMatch(ctx context.Context, mo *backend.MatchObject) (
"matchObjectID": mo.Id,
}).Info("gRPC call executing")
_, err := redisHelpers.Delete(ctx, s.pool, mo.Id)
err := redisHelpers.Delete(ctx, s.pool, mo.Id)
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
@ -343,14 +341,25 @@ func (s *backendAPI) DeleteMatch(ctx context.Context, mo *backend.MatchObject) (
}
// CreateAssignments is this service's implementation of the CreateAssignments gRPC method
// defined in ../proto/backend.proto
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) CreateAssignments(ctx context.Context, a *backend.Assignments) (*backend.Result, error) {
// TODO: make playerIDs a repeated protobuf message field and iterate over it
//assignments := strings.Split(a.Roster.PlayerIds, " ")
assignments := make([]string, 0)
for _, roster := range a.Rosters {
assignments = append(assignments, getPlayerIdsFromRoster(roster)...)
// Make a map of players and what assignments we want to send them.
playerIDs := make([]string, 0)
players := make(map[string]string, 0)
for _, roster := range a.Rosters { // Loop through all rosters
for _, player := range roster.Players { // Loop through all players in this roster
if player.Id != "" {
if player.Assignment == "" {
// No player-specific assignment, so use the default one in
// the Assignment message.
player.Assignment = a.Assignment
}
players[player.Id] = player.Assignment
beLog.Debug(fmt.Sprintf("playerid %v assignment %v", player.Id, player.Assignment))
}
}
playerIDs = append(playerIDs, getPlayerIdsFromRoster(roster)...)
}
// Create context for tagging OpenCensus metrics.
@ -359,30 +368,16 @@ func (s *backendAPI) CreateAssignments(ctx context.Context, a *backend.Assignmen
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"numAssignments": len(assignments),
"numAssignments": len(players),
}).Info("gRPC call executing")
// TODO: relocate this redis functionality to a module
redisConn := s.pool.Get()
defer redisConn.Close()
// TODO: These two calls are done in two different transactions; could be
// combined as an optimization but probably not particularly necessary
// Send the players their assignments.
err := redisHelpers.UpdateMultiFields(ctx, s.pool, players, "assignment")
// Create player assignments in a transaction.
redisConn.Send("MULTI")
for _, playerID := range assignments {
beLog.WithFields(log.Fields{
"query": "HSET",
"playerID": playerID,
s.cfg.GetString("jsonkeys.connstring"): a.ConnectionInfo.ConnectionString,
}).Debug("state storage operation")
redisConn.Send("HSET", playerID, s.cfg.GetString("jsonkeys.connstring"), a.ConnectionInfo.ConnectionString)
}
// Remove these players from the proposed list.
ignorelist.SendRemove(redisConn, "proposed", assignments)
// Add these players from the deindexed list.
ignorelist.SendAdd(redisConn, "deindexed", assignments)
// Send the multi-command transaction to Redis.
_, err := redisConn.Do("EXEC")
// Move these players from the proposed list to the deindexed list.
ignorelist.Move(ctx, s.pool, playerIDs, "proposed", "deindexed")
// Issue encountered
if err != nil {
@ -392,25 +387,23 @@ func (s *backendAPI) CreateAssignments(ctx context.Context, a *backend.Assignmen
}).Error("State storage error")
stats.Record(fnCtx, BeGrpcErrors.M(1))
stats.Record(fnCtx, BeAssignmentFailures.M(int64(len(assignments))))
stats.Record(fnCtx, BeAssignmentFailures.M(int64(len(players))))
return &backend.Result{Success: false, Error: err.Error()}, err
}
// Success!
beLog.WithFields(log.Fields{
"numAssignments": len(assignments),
"numPlayers": len(players),
}).Info("Assignments complete")
stats.Record(fnCtx, BeGrpcRequests.M(1))
stats.Record(fnCtx, BeAssignments.M(int64(len(assignments))))
stats.Record(fnCtx, BeAssignments.M(int64(len(players))))
return &backend.Result{Success: true, Error: ""}, err
}
// DeleteAssignments is this service's implementation of the DeleteAssignments gRPC method
// defined in ../proto/backend.proto
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) DeleteAssignments(ctx context.Context, r *backend.Roster) (*backend.Result, error) {
// TODO: make playerIDs a repeated protobuf message field and iterate over it
//assignments := strings.Split(a.PlayerIds, " ")
assignments := getPlayerIdsFromRoster(r)
// Create context for tagging OpenCensus metrics.
@ -422,18 +415,7 @@ func (s *backendAPI) DeleteAssignments(ctx context.Context, r *backend.Roster) (
"numAssignments": len(assignments),
}).Info("gRPC call executing")
// TODO: relocate this redis functionality to a module
redisConn := s.pool.Get()
defer redisConn.Close()
// Remove player assignments in a transaction
redisConn.Send("MULTI")
// TODO: make playerIDs a repeated protobuf message field and iterate over it
for _, playerID := range assignments {
beLog.WithFields(log.Fields{"query": "DEL", "key": playerID}).Debug("state storage operation")
redisConn.Send("DEL", playerID)
}
_, err := redisConn.Do("EXEC")
err := redisHelpers.DeleteMultiFields(ctx, s.pool, assignments, "assignment")
// Issue encountered
if err != nil {
@ -453,6 +435,8 @@ func (s *backendAPI) DeleteAssignments(ctx context.Context, r *backend.Roster) (
return &backend.Result{Success: true, Error: ""}, err
}
// getPlayerIdsFromRoster returns the slice of player ID strings contained in
// the input roster.
func getPlayerIdsFromRoster(r *backend.Roster) []string {
playerIDs := make([]string, 0)
for _, p := range r.Players {

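The per-player fallback in the CreateAssignments loop above (use the player's own assignment, else the default one from the Assignments message) can be sketched standalone with plain structs in place of the generated protobuf types:

```go
package main

import "fmt"

type player struct {
	ID         string
	Assignment string
}

// collectAssignments mirrors the CreateAssignments loop: every player
// with a non-empty ID gets either their own assignment or, if that is
// empty, the default assignment carried by the Assignments message.
func collectAssignments(rosters [][]player, defaultAssignment string) map[string]string {
	players := make(map[string]string)
	for _, roster := range rosters {
		for _, p := range roster {
			if p.ID == "" {
				continue // skip placeholder slots with no player
			}
			if p.Assignment == "" {
				p.Assignment = defaultAssignment
			}
			players[p.ID] = p.Assignment
		}
	}
	return players
}

func main() {
	rosters := [][]player{{
		{ID: "a"},                            // falls back to the default
		{ID: "b", Assignment: "10.0.0.2:7777"}, // keeps its own assignment
	}}
	fmt.Println(collectAssignments(rosters, "10.0.0.1:7777"))
}
```

The resulting map is what gets handed to `UpdateMultiFields` in the real code, one 'assignment' field write per player ID.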
View File

@ -1,9 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-backendapi:dev',
'-f', 'Dockerfile.backendapi',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-backendapi:dev']

View File

@ -1,6 +1,7 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in proto/backend.proto
running a gRPC server serving the APIService as defined in
${OM_ROOT}/internal/pb/backend.pb.go
All the actual important bits are in the API Server source code: apisrv/apisrv.go
@ -28,6 +29,7 @@ import (
"github.com/GoogleCloudPlatform/open-match/cmd/backendapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
@ -41,7 +43,6 @@ var (
beLogFields = log.Fields{
"app": "openmatch",
"component": "backend",
"caller": "backendapi/main.go",
}
beLog = log.WithFields(beLogFields)
@ -51,7 +52,6 @@ var (
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output thru OpenCensus
log.AddHook(metrics.NewHook(apisrv.BeLogLines, apisrv.KeySeverity))
@ -63,10 +63,8 @@ func init() {
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") == true {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
beLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
@ -88,7 +86,7 @@ func main() {
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
beLog.WithFields(log.Fields{"testfield": "test"}).Info("Attempting to start gRPC server")
beLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server

View File

@ -1,749 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: backend.proto
/*
Package backend is a generated protocol buffer package.
It is generated from these files:
backend.proto
It has these top-level messages:
Profile
MatchObject
Roster
Filter
Stats
PlayerPool
Player
Result
IlInput
Timestamp
ConnectionInfo
Assignments
*/
package backend
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
type Profile struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
Name string `protobuf:"bytes,3,opt,name=name" json:"name,omitempty"`
// When you send a Profile to the backendAPI, it looks to see if you populated
// this field with protobuf-encoded PlayerPool objects containing valid filter
// objects. If you did, they are used by OM. If you didn't, the backendAPI
// next looks in your properties blob at the key specified in the 'jsonkeys.pools'
// config value from config/matchmaker_config.json - If it finds valid player
// pool definitions at that key, it will try to unmarshal them into this field.
// If you didn't specify valid player pools in either place, OM assumes you
// know what you're doing and just leaves this unpopulated.
Pools []*PlayerPool `protobuf:"bytes,4,rep,name=pools" json:"pools,omitempty"`
}
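The fallback the comment above describes (no protobuf pools set, so look for JSON pool definitions under a configured key in the properties blob) can be sketched as follows. The real code uses gjson with a dotted config key; this sketch assumes a top-level key and uses only encoding/json:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// poolsFromProperties looks for a JSON array of pool definitions under
// the given key in the properties blob, returning false if the blob is
// not valid JSON, the key is absent, or the value is not an array.
func poolsFromProperties(properties, key string) ([]json.RawMessage, bool) {
	var blob map[string]json.RawMessage
	if err := json.Unmarshal([]byte(properties), &blob); err != nil {
		return nil, false
	}
	raw, ok := blob[key]
	if !ok {
		return nil, false
	}
	var pools []json.RawMessage
	if err := json.Unmarshal(raw, &pools); err != nil {
		return nil, false
	}
	return pools, true
}

func main() {
	props := `{"pools":[{"name":"everyone"}]}`
	pools, ok := poolsFromProperties(props, "pools")
	fmt.Println(len(pools), ok)
}
```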
func (m *Profile) Reset() { *m = Profile{} }
func (m *Profile) String() string { return proto.CompactTextString(m) }
func (*Profile) ProtoMessage() {}
func (*Profile) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Profile) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Profile) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
func (m *Profile) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *Profile) GetPools() []*PlayerPool {
if m != nil {
return m.Pools
}
return nil
}
// An MMF takes the Profile object above and generates a MatchObject.
type MatchObject struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
Rosters []*Roster `protobuf:"bytes,3,rep,name=rosters" json:"rosters,omitempty"`
Pools []*PlayerPool `protobuf:"bytes,4,rep,name=pools" json:"pools,omitempty"`
}
func (m *MatchObject) Reset() { *m = MatchObject{} }
func (m *MatchObject) String() string { return proto.CompactTextString(m) }
func (*MatchObject) ProtoMessage() {}
func (*MatchObject) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *MatchObject) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *MatchObject) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
func (m *MatchObject) GetRosters() []*Roster {
if m != nil {
return m.Rosters
}
return nil
}
func (m *MatchObject) GetPools() []*PlayerPool {
if m != nil {
return m.Pools
}
return nil
}
// Data structure to hold a list of players in a match.
type Roster struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Players []*Player `protobuf:"bytes,2,rep,name=players" json:"players,omitempty"`
}
func (m *Roster) Reset() { *m = Roster{} }
func (m *Roster) String() string { return proto.CompactTextString(m) }
func (*Roster) ProtoMessage() {}
func (*Roster) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *Roster) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *Roster) GetPlayers() []*Player {
if m != nil {
return m.Players
}
return nil
}
// A filter to apply to the player pool.
type Filter struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Attribute string `protobuf:"bytes,2,opt,name=attribute" json:"attribute,omitempty"`
Maxv int64 `protobuf:"varint,3,opt,name=maxv" json:"maxv,omitempty"`
Minv int64 `protobuf:"varint,4,opt,name=minv" json:"minv,omitempty"`
Stats *Stats `protobuf:"bytes,5,opt,name=stats" json:"stats,omitempty"`
}
func (m *Filter) Reset() { *m = Filter{} }
func (m *Filter) String() string { return proto.CompactTextString(m) }
func (*Filter) ProtoMessage() {}
func (*Filter) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Filter) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *Filter) GetAttribute() string {
if m != nil {
return m.Attribute
}
return ""
}
func (m *Filter) GetMaxv() int64 {
if m != nil {
return m.Maxv
}
return 0
}
func (m *Filter) GetMinv() int64 {
if m != nil {
return m.Minv
}
return 0
}
func (m *Filter) GetStats() *Stats {
if m != nil {
return m.Stats
}
return nil
}
type Stats struct {
Count int64 `protobuf:"varint,1,opt,name=count" json:"count,omitempty"`
Elapsed float64 `protobuf:"fixed64,2,opt,name=elapsed" json:"elapsed,omitempty"`
}
func (m *Stats) Reset() { *m = Stats{} }
func (m *Stats) String() string { return proto.CompactTextString(m) }
func (*Stats) ProtoMessage() {}
func (*Stats) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (m *Stats) GetCount() int64 {
if m != nil {
return m.Count
}
return 0
}
func (m *Stats) GetElapsed() float64 {
if m != nil {
return m.Elapsed
}
return 0
}
type PlayerPool struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Filters []*Filter `protobuf:"bytes,2,rep,name=filters" json:"filters,omitempty"`
Roster *Roster `protobuf:"bytes,3,opt,name=roster" json:"roster,omitempty"`
Stats *Stats `protobuf:"bytes,4,opt,name=stats" json:"stats,omitempty"`
}
func (m *PlayerPool) Reset() { *m = PlayerPool{} }
func (m *PlayerPool) String() string { return proto.CompactTextString(m) }
func (*PlayerPool) ProtoMessage() {}
func (*PlayerPool) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
func (m *PlayerPool) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *PlayerPool) GetFilters() []*Filter {
if m != nil {
return m.Filters
}
return nil
}
func (m *PlayerPool) GetRoster() *Roster {
if m != nil {
return m.Roster
}
return nil
}
func (m *PlayerPool) GetStats() *Stats {
if m != nil {
return m.Stats
}
return nil
}
// Data structure for a profile to pass to the matchmaking function.
type Player struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
Pool string `protobuf:"bytes,3,opt,name=pool" json:"pool,omitempty"`
Attributes []*Player_Attribute `protobuf:"bytes,4,rep,name=attributes" json:"attributes,omitempty"`
}
func (m *Player) Reset() { *m = Player{} }
func (m *Player) String() string { return proto.CompactTextString(m) }
func (*Player) ProtoMessage() {}
func (*Player) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
func (m *Player) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Player) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
func (m *Player) GetPool() string {
if m != nil {
return m.Pool
}
return ""
}
func (m *Player) GetAttributes() []*Player_Attribute {
if m != nil {
return m.Attributes
}
return nil
}
type Player_Attribute struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Value int64 `protobuf:"varint,2,opt,name=value" json:"value,omitempty"`
}
func (m *Player_Attribute) Reset() { *m = Player_Attribute{} }
func (m *Player_Attribute) String() string { return proto.CompactTextString(m) }
func (*Player_Attribute) ProtoMessage() {}
func (*Player_Attribute) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6, 0} }
func (m *Player_Attribute) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *Player_Attribute) GetValue() int64 {
if m != nil {
return m.Value
}
return 0
}
// Simple message to return success/failure and error status.
type Result struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error" json:"error,omitempty"`
}
func (m *Result) Reset() { *m = Result{} }
func (m *Result) String() string { return proto.CompactTextString(m) }
func (*Result) ProtoMessage() {}
func (*Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
func (m *Result) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *Result) GetError() string {
if m != nil {
return m.Error
}
return ""
}
// IlInput is an empty message reserved for future use.
type IlInput struct {
}
func (m *IlInput) Reset() { *m = IlInput{} }
func (m *IlInput) String() string { return proto.CompactTextString(m) }
func (*IlInput) ProtoMessage() {}
func (*IlInput) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
// Epoch timestamp in seconds.
type Timestamp struct {
Ts int64 `protobuf:"varint,1,opt,name=ts" json:"ts,omitempty"`
}
func (m *Timestamp) Reset() { *m = Timestamp{} }
func (m *Timestamp) String() string { return proto.CompactTextString(m) }
func (*Timestamp) ProtoMessage() {}
func (*Timestamp) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }
func (m *Timestamp) GetTs() int64 {
if m != nil {
return m.Ts
}
return 0
}
// Simple message used to pass the connection string for the DGS to the player.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
type Assignments struct {
Rosters []*Roster `protobuf:"bytes,1,rep,name=rosters" json:"rosters,omitempty"`
ConnectionInfo *ConnectionInfo `protobuf:"bytes,2,opt,name=connection_info,json=connectionInfo" json:"connection_info,omitempty"`
}
func (m *Assignments) Reset() { *m = Assignments{} }
func (m *Assignments) String() string { return proto.CompactTextString(m) }
func (*Assignments) ProtoMessage() {}
func (*Assignments) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{11} }
func (m *Assignments) GetRosters() []*Roster {
if m != nil {
return m.Rosters
}
return nil
}
func (m *Assignments) GetConnectionInfo() *ConnectionInfo {
if m != nil {
return m.ConnectionInfo
}
return nil
}
func init() {
proto.RegisterType((*Profile)(nil), "Profile")
proto.RegisterType((*MatchObject)(nil), "MatchObject")
proto.RegisterType((*Roster)(nil), "Roster")
proto.RegisterType((*Filter)(nil), "Filter")
proto.RegisterType((*Stats)(nil), "Stats")
proto.RegisterType((*PlayerPool)(nil), "PlayerPool")
proto.RegisterType((*Player)(nil), "Player")
proto.RegisterType((*Player_Attribute)(nil), "Player.Attribute")
proto.RegisterType((*Result)(nil), "Result")
proto.RegisterType((*IlInput)(nil), "IlInput")
proto.RegisterType((*Timestamp)(nil), "Timestamp")
proto.RegisterType((*ConnectionInfo)(nil), "ConnectionInfo")
proto.RegisterType((*Assignments)(nil), "Assignments")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for API service
type APIClient interface {
// Calls to ask the matchmaker to run a matchmaking function.
//
// Run MMF once. Return a matchobject that fits this profile.
CreateMatch(ctx context.Context, in *Profile, opts ...grpc.CallOption) (*MatchObject, error)
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection.
ListMatches(ctx context.Context, in *Profile, opts ...grpc.CallOption) (API_ListMatchesClient, error)
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
DeleteMatch(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (*Result, error)
// Call for communication of connection info to players.
//
// Write the connection info for the list of players in the
// Assignments.Rosters to state storage. The FrontendAPI is responsible for
// sending anything written here to the game clients.
// TODO: change this to be agnostic; return a 'result' instead of a connection
// string so it can be integrated with session service etc
CreateAssignments(ctx context.Context, in *Assignments, opts ...grpc.CallOption) (*Result, error)
// Remove DGS connection info from state storage for all players in the Roster.
DeleteAssignments(ctx context.Context, in *Roster, opts ...grpc.CallOption) (*Result, error)
}
type aPIClient struct {
cc *grpc.ClientConn
}
func NewAPIClient(cc *grpc.ClientConn) APIClient {
return &aPIClient{cc}
}
func (c *aPIClient) CreateMatch(ctx context.Context, in *Profile, opts ...grpc.CallOption) (*MatchObject, error) {
out := new(MatchObject)
err := grpc.Invoke(ctx, "/API/CreateMatch", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) ListMatches(ctx context.Context, in *Profile, opts ...grpc.CallOption) (API_ListMatchesClient, error) {
stream, err := grpc.NewClientStream(ctx, &_API_serviceDesc.Streams[0], c.cc, "/API/ListMatches", opts...)
if err != nil {
return nil, err
}
x := &aPIListMatchesClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type API_ListMatchesClient interface {
Recv() (*MatchObject, error)
grpc.ClientStream
}
type aPIListMatchesClient struct {
grpc.ClientStream
}
func (x *aPIListMatchesClient) Recv() (*MatchObject, error) {
m := new(MatchObject)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *aPIClient) DeleteMatch(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteMatch", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) CreateAssignments(ctx context.Context, in *Assignments, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/CreateAssignments", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteAssignments(ctx context.Context, in *Roster, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteAssignments", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for API service
type APIServer interface {
// Calls to ask the matchmaker to run a matchmaking function.
//
// Run MMF once. Return a matchobject that fits this profile.
CreateMatch(context.Context, *Profile) (*MatchObject, error)
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection.
ListMatches(*Profile, API_ListMatchesServer) error
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage also expire automatically after a configurable TTL.)
DeleteMatch(context.Context, *MatchObject) (*Result, error)
// Call for communication of connection info to players.
//
// Write the connection info for the list of players in the
// Assignments.Rosters to state storage. The FrontendAPI is responsible for
// sending anything written here to the game clients.
// TODO: change this to be agnostic; return a 'result' instead of a connection
// string so it can be integrated with session service etc
CreateAssignments(context.Context, *Assignments) (*Result, error)
// Remove DGS connection info from state storage for all players in the Roster.
DeleteAssignments(context.Context, *Roster) (*Result, error)
}
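The streaming contract above (call `Recv` until it returns `io.EOF`) is easy to get wrong on the client side. A minimal sketch of the consumption loop, using toy stand-in types rather than the generated ones (`MatchObject` here is a hypothetical one-field struct, not the real message):

```go
package main

import (
	"fmt"
	"io"
)

// Toy stand-in for the generated message type.
type MatchObject struct{ Id string }

// matchStream captures the Recv contract of API_ListMatchesClient.
type matchStream interface {
	Recv() (*MatchObject, error)
}

// fakeStream yields a fixed set of matches, then io.EOF, mimicking a
// server-streaming RPC that has ended normally.
type fakeStream struct {
	matches []*MatchObject
	i       int
}

func (s *fakeStream) Recv() (*MatchObject, error) {
	if s.i >= len(s.matches) {
		return nil, io.EOF
	}
	m := s.matches[s.i]
	s.i++
	return m, nil
}

// drain reads matches until the stream ends, as a caller of
// ListMatches would: io.EOF means clean end-of-stream, any other
// error is a real failure.
func drain(s matchStream) ([]string, error) {
	var ids []string
	for {
		m, err := s.Recv()
		if err == io.EOF {
			return ids, nil
		}
		if err != nil {
			return ids, err
		}
		ids = append(ids, m.Id)
	}
}

func main() {
	s := &fakeStream{matches: []*MatchObject{{Id: "a"}, {Id: "b"}}}
	ids, _ := drain(s)
	fmt.Println(ids)
}
```

The same loop applies to any gRPC server-streaming client in Go; only the message type changes.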
func RegisterAPIServer(s *grpc.Server, srv APIServer) {
s.RegisterService(&_API_serviceDesc, srv)
}
func _API_CreateMatch_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Profile)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateMatch(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateMatch",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateMatch(ctx, req.(*Profile))
}
return interceptor(ctx, in, info, handler)
}
func _API_ListMatches_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(Profile)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(APIServer).ListMatches(m, &aPIListMatchesServer{stream})
}
type API_ListMatchesServer interface {
Send(*MatchObject) error
grpc.ServerStream
}
type aPIListMatchesServer struct {
grpc.ServerStream
}
func (x *aPIListMatchesServer) Send(m *MatchObject) error {
return x.ServerStream.SendMsg(m)
}
func _API_DeleteMatch_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(MatchObject)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteMatch(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteMatch",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteMatch(ctx, req.(*MatchObject))
}
return interceptor(ctx, in, info, handler)
}
func _API_CreateAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Assignments)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateAssignments(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateAssignments",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateAssignments(ctx, req.(*Assignments))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Roster)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteAssignments(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteAssignments",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteAssignments(ctx, req.(*Roster))
}
return interceptor(ctx, in, info, handler)
}
var _API_serviceDesc = grpc.ServiceDesc{
ServiceName: "API",
HandlerType: (*APIServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateMatch",
Handler: _API_CreateMatch_Handler,
},
{
MethodName: "DeleteMatch",
Handler: _API_DeleteMatch_Handler,
},
{
MethodName: "CreateAssignments",
Handler: _API_CreateAssignments_Handler,
},
{
MethodName: "DeleteAssignments",
Handler: _API_DeleteAssignments_Handler,
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "ListMatches",
Handler: _API_ListMatches_Handler,
ServerStreams: true,
},
},
Metadata: "backend.proto",
}
func init() { proto.RegisterFile("backend.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 591 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x54, 0x51, 0x6f, 0xd3, 0x30,
0x10, 0x9e, 0x9b, 0x26, 0x59, 0x2f, 0x63, 0xa3, 0xd6, 0x1e, 0xa2, 0x31, 0x41, 0xe7, 0x07, 0x56,
0x04, 0x8a, 0xa0, 0x08, 0xb1, 0x17, 0x84, 0xaa, 0x21, 0xa4, 0x4a, 0x20, 0x2a, 0x8f, 0x77, 0x94,
0xa6, 0xee, 0xf0, 0x48, 0xed, 0xc8, 0x76, 0x2a, 0x78, 0x43, 0xf0, 0x9f, 0xf8, 0x2d, 0xfc, 0x1c,
0x14, 0x3b, 0x69, 0x53, 0x41, 0x25, 0xe0, 0xcd, 0xdf, 0xe7, 0xbb, 0xf3, 0x77, 0xdf, 0xe5, 0x02,
0xb7, 0x66, 0x69, 0xf6, 0x89, 0x89, 0x79, 0x52, 0x28, 0x69, 0x24, 0x29, 0x20, 0x9c, 0x2a, 0xb9,
0xe0, 0x39, 0xc3, 0x87, 0xd0, 0xe1, 0xf3, 0x18, 0x0d, 0xd0, 0xb0, 0x47, 0x3b, 0x7c, 0x8e, 0xef,
0x02, 0x14, 0x4a, 0x16, 0x4c, 0x19, 0xce, 0x74, 0xdc, 0xb1, 0x7c, 0x8b, 0xc1, 0x18, 0xba, 0x22,
0x5d, 0xb2, 0xd8, 0xb3, 0x37, 0xf6, 0x8c, 0xcf, 0xc0, 0x2f, 0xa4, 0xcc, 0x75, 0xdc, 0x1d, 0x78,
0xc3, 0x68, 0x14, 0x25, 0xd3, 0x3c, 0xfd, 0xc2, 0xd4, 0x54, 0xca, 0x9c, 0xba, 0x1b, 0xf2, 0x1d,
0x41, 0xf4, 0x36, 0x35, 0xd9, 0xc7, 0x77, 0xb3, 0x1b, 0x96, 0x99, 0x7f, 0x7e, 0xf6, 0x0c, 0x42,
0x25, 0xb5, 0x61, 0x4a, 0xc7, 0x9e, 0x7d, 0x24, 0x4c, 0xa8, 0xc5, 0xb4, 0xe1, 0xff, 0x46, 0xc5,
0x4b, 0x08, 0x5c, 0xd6, 0xba, 0x0d, 0xb4, 0xd5, 0x46, 0x58, 0xd8, 0x94, 0x4a, 0x80, 0x7b, 0xc3,
0x95, 0xa0, 0x0d, 0x4f, 0xbe, 0x22, 0x08, 0x5e, 0xf3, 0x7c, 0x57, 0x85, 0x53, 0xe8, 0xa5, 0xc6,
0x28, 0x3e, 0x2b, 0x0d, 0xab, 0x9b, 0xd8, 0x10, 0x55, 0xc6, 0x32, 0xfd, 0xbc, 0xb2, 0xd6, 0x79,
0xd4, 0x9e, 0x2d, 0xc7, 0xc5, 0x2a, 0xee, 0xd6, 0x1c, 0x17, 0x2b, 0x7c, 0x0a, 0xbe, 0x36, 0xa9,
0xd1, 0xb1, 0x3f, 0x40, 0xc3, 0x68, 0x14, 0x24, 0x57, 0x15, 0xa2, 0x8e, 0x24, 0xcf, 0xc1, 0xb7,
0x18, 0x1f, 0x83, 0x9f, 0xc9, 0x52, 0x18, 0xab, 0xc0, 0xa3, 0x0e, 0xe0, 0x18, 0x42, 0x96, 0xa7,
0x85, 0x66, 0x73, 0x2b, 0x00, 0xd1, 0x06, 0x92, 0x6f, 0x08, 0x60, 0x63, 0xc9, 0x2e, 0x07, 0x16,
0xb6, 0xbb, 0x8d, 0x03, 0xae, 0x5b, 0xda, 0xf0, 0xf8, 0x1e, 0x04, 0xce, 0x70, 0xdb, 0x46, 0x6b,
0x0e, 0x35, 0xbd, 0x51, 0xdf, 0xfd, 0x93, 0xfa, 0x1f, 0x08, 0x02, 0x27, 0xe2, 0x7f, 0xbe, 0xbc,
0x6a, 0x8a, 0xcd, 0x97, 0x57, 0x9d, 0xf1, 0x13, 0x80, 0xb5, 0xbf, 0xcd, 0xe0, 0xfb, 0xf5, 0xd4,
0x92, 0x71, 0x73, 0x43, 0x5b, 0x41, 0x27, 0xcf, 0xa0, 0x37, 0x6e, 0x8f, 0xe4, 0x37, 0x13, 0x8e,
0xc1, 0x5f, 0xa5, 0x79, 0xe9, 0x06, 0xe8, 0x51, 0x07, 0xc8, 0x05, 0x04, 0x94, 0xe9, 0x32, 0xb7,
0x0e, 0xeb, 0x32, 0xcb, 0x98, 0xd6, 0x36, 0x6d, 0x9f, 0x36, 0xb0, 0xca, 0x64, 0x4a, 0x49, 0x55,
0x8b, 0x77, 0x80, 0xf4, 0x20, 0x9c, 0xe4, 0x13, 0x51, 0x94, 0x86, 0xdc, 0x81, 0xde, 0x7b, 0xbe,
0x64, 0xda, 0xa4, 0xcb, 0xa2, 0xea, 0xdf, 0xe8, 0x7a, 0x78, 0x1d, 0xa3, 0xc9, 0x0b, 0x38, 0xbc,
0x94, 0x42, 0xb0, 0xcc, 0x70, 0x29, 0x26, 0x62, 0x21, 0xf1, 0x43, 0xe8, 0x67, 0x6b, 0xe6, 0x83,
0x36, 0x8a, 0x8b, 0xeb, 0x5a, 0xea, 0xed, 0xcd, 0xc5, 0x95, 0xe5, 0xc9, 0x0d, 0x44, 0x63, 0xad,
0xf9, 0xb5, 0x58, 0x32, 0x61, 0xb6, 0x16, 0x06, 0xed, 0x58, 0x98, 0x0b, 0x38, 0x6a, 0x95, 0xe7,
0x62, 0x21, 0xad, 0xf0, 0x68, 0x74, 0x94, 0x6c, 0x0b, 0xa1, 0x87, 0xd9, 0x16, 0x1e, 0xfd, 0x44,
0xe0, 0x8d, 0xa7, 0x13, 0x7c, 0x0e, 0xd1, 0xa5, 0x62, 0xa9, 0x61, 0x76, 0xb5, 0xf1, 0x7e, 0x52,
0xff, 0x55, 0x4e, 0x0e, 0x92, 0xd6, 0xb2, 0x93, 0x3d, 0xfc, 0x00, 0xa2, 0x37, 0x5c, 0x1b, 0x4b,
0x32, 0xbd, 0x3b, 0xf0, 0x31, 0xc2, 0xf7, 0x21, 0x7a, 0xc5, 0x72, 0xd6, 0xd4, 0xdc, 0x0a, 0x38,
0x09, 0x13, 0x37, 0x04, 0xb2, 0x87, 0x1f, 0x41, 0xdf, 0xbd, 0xdd, 0xee, 0xfa, 0x20, 0x69, 0xa1,
0x76, 0xf4, 0x39, 0xf4, 0x5d, 0xd5, 0x76, 0x74, 0x63, 0x49, 0x2b, 0x70, 0x16, 0xd8, 0x3f, 0xe4,
0xd3, 0x5f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x04, 0x1a, 0xf8, 0x0a, 0x32, 0x05, 0x00, 0x00,
}

View File

@ -1,4 +0,0 @@
/*
backend is a package compiled from the protocol buffer definition in <REPO_ROOT>/api/protobuf-spec/backend.proto. It is auto-generated and shouldn't be edited.
*/
package backend

View File

@ -1,11 +1,7 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY cmd/frontendapi cmd/frontendapi
COPY api/protobuf-spec/frontend.pb.go cmd/frontendapi/proto/
COPY config config
COPY internal internal
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

View File

@ -25,9 +25,11 @@ import (
"net"
"time"
frontend "github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/proto"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
playerq "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/playerq"
frontend "github.com/GoogleCloudPlatform/open-match/internal/pb"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/playerindices"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/redispb"
log "github.com/sirupsen/logrus"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
@ -44,7 +46,6 @@ var (
feLogFields = log.Fields{
"app": "openmatch",
"component": "frontend",
"caller": "frontendapi/apisrv/apisrv.go",
}
feLog = log.WithFields(feLogFields)
)
@ -70,12 +71,12 @@ func New(cfg *viper.Viper, pool *redis.Pool) *FrontendAPI {
log.AddHook(metrics.NewHook(FeLogLines, KeySeverity))
// Register gRPC server
frontend.RegisterAPIServer(s.grpc, (*frontendAPI)(&s))
frontend.RegisterFrontendServer(s.grpc, (*frontendAPI)(&s))
feLog.Info("Successfully registered gRPC server")
return &s
}
// Open opens the api grpc service, starting it listening on the configured port.
// Open starts the api grpc service listening on the configured port.
func (s *FrontendAPI) Open() error {
ln, err := net.Listen("tcp", ":"+s.cfg.GetString("api.frontend.port"))
if err != nil {
@ -98,22 +99,15 @@ func (s *FrontendAPI) Open() error {
return nil
}
// CreateRequest is this service's implementation of the CreateRequest gRPC method defined in ../proto/frontend.proto
func (s *frontendAPI) CreateRequest(c context.Context, g *frontend.Group) (*frontend.Result, error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// CreatePlayer is this service's implementation of the CreatePlayer gRPC method defined in frontend.proto
func (s *frontendAPI) CreatePlayer(ctx context.Context, group *frontend.Player) (*frontend.Result, error) {
// Create context for tagging OpenCensus metrics.
funcName := "CreateRequest"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
funcName := "CreatePlayer"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Write group
// TODO: Remove playerq module and just use redishelper module once
// indexing has its own implementation
err := playerq.Create(redisConn, g.Id, g.Properties)
err := redispb.MarshalToRedis(ctx, s.pool, group, s.cfg.GetInt("redis.expirations.player"))
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
@ -124,24 +118,8 @@ func (s *frontendAPI) CreateRequest(c context.Context, g *frontend.Group) (*fron
return &frontend.Result{Success: false, Error: err.Error()}, err
}
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// DeleteRequest is this service's implementation of the DeleteRequest gRPC method defined in
// frontendapi/proto/frontend.proto
func (s *frontendAPI) DeleteRequest(c context.Context, g *frontend.Group) (*frontend.Result, error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Create context for tagging OpenCensus metrics.
funcName := "DeleteRequest"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
// Write group
err := playerq.Delete(redisConn, g.Id)
// Index group
err = playerindices.Create(ctx, s.pool, s.cfg, *group)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
@ -152,16 +130,60 @@ func (s *frontendAPI) DeleteRequest(c context.Context, g *frontend.Group) (*fron
return &frontend.Result{Success: false, Error: err.Error()}, err
}
// Return success.
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// GetAssignment is this service's implementation of the GetAssignment gRPC method defined in
// frontendapi/proto/frontend.proto
func (s *frontendAPI) GetAssignment(c context.Context, p *frontend.PlayerId) (*frontend.ConnectionInfo, error) {
// DeletePlayer is this service's implementation of the DeletePlayer gRPC method defined in frontend.proto
func (s *frontendAPI) DeletePlayer(ctx context.Context, group *frontend.Player) (*frontend.Result, error) {
// Create context for tagging OpenCensus metrics.
funcName := "DeletePlayer"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Deindex this player; at that point they don't show up in MMFs anymore. We can then delete
// their actual player object from Redis later.
err := playerindices.Delete(ctx, s.pool, s.cfg, group.Id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
// Kick off delete but don't wait for it to complete.
go s.deletePlayer(group.Id)
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// deletePlayer is a 'lazy' player delete.
// It should always be called as a goroutine, and only after confirmation
// that the player has been deindexed (and therefore MMFs can't find the
// player to read them anyway).
// As a final action, it also kicks off a lazy delete of the player's metadata.
func (s *frontendAPI) deletePlayer(id string) {
err := redisHelpers.Delete(context.Background(), s.pool, id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Warn("Error deleting player from state storage; this could leak state storage memory but is usually not a fatal error")
}
go playerindices.DeleteMeta(context.Background(), s.pool, id)
}
// GetUpdates is this service's implementation of the GetUpdates gRPC method defined in frontend.proto
func (s *frontendAPI) GetUpdates(p *frontend.Player, assignmentStream frontend.Frontend_GetUpdatesServer) error {
// Get cancellable context
ctx, cancel := context.WithCancel(c)
ctx, cancel := context.WithCancel(assignmentStream.Context())
defer cancel()
// Create context for tagging OpenCensus metrics.
@ -169,132 +191,49 @@ func (s *frontendAPI) GetAssignment(c context.Context, p *frontend.PlayerId) (*f
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// get and return connection string
var connString string
watchChan := s.watcher(ctx, s.pool, p.Id) // watcher() runs the appropriate Redis commands.
watchChan := redispb.PlayerWatcher(ctx, s.pool, *p) // PlayerWatcher runs the appropriate Redis commands.
timeoutChan := time.After(time.Duration(s.cfg.GetInt("api.frontend.timeout")) * time.Second)
select {
case <-time.After(30 * time.Second): // TODO: Make this configurable.
err := errors.New("did not see matchmaking results in redis before timeout")
// TODO:Timeout: deal with the fallout
// When there is a timeout, need to send a stop to the watch channel.
// cancelling ctx isn't doing it.
//cancel()
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"playerid": p.Id,
}).Error("State storage error")
for {
errTag, _ := tag.NewKey("errtype")
fnCtx, _ := tag.New(ctx, tag.Insert(errTag, "watch_timeout"))
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.ConnectionInfo{ConnectionString: ""}, err
select {
case <-ctx.Done():
// Context cancelled
feLog.WithFields(log.Fields{
"playerid": p.Id,
}).Info("client closed connection successfully")
stats.Record(fnCtx, FeGrpcRequests.M(1))
return nil
case <-timeoutChan: // Timeout reached without client closing connection
// TODO: deal with the fallout
err := errors.New("server timeout reached without client closing connection")
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"playerid": p.Id,
}).Error("State storage error")
case connString = <-watchChan:
feLog.Debug(p.Id, "connString:", connString)
}
// Count errors for metrics
errTag, _ := tag.NewKey("errtype")
fnCtx, _ := tag.New(ctx, tag.Insert(errTag, "watch_timeout"))
stats.Record(fnCtx, FeGrpcErrors.M(1))
// TODO: we could generate a frontend.player message with an error
// field and stream it to the client before throwing the error here
// if we wanted to send more useful client retry information
return err
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.ConnectionInfo{ConnectionString: connString}, nil
}
// DeleteAssignment is this service's implementation of the DeleteAssignment gRPC method defined in
// frontendapi/proto/frontend.proto
func (s *frontendAPI) DeleteAssignment(c context.Context, p *frontend.PlayerId) (*frontend.Result, error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Create context for tagging OpenCensus metrics.
funcName := "DeleteAssignment"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
// Write group
err := playerq.Delete(redisConn, p.Id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// TODO: Everything below this line will be moved to the redis statestorage library
// in an upcoming version.
// ================================================
// watcher makes a channel and returns it immediately. It also launches an
// asynchronous goroutine that watches a redis key and returns the value of
// the 'connstring' field of that key once it exists on the channel.
//
// The pattern for this function is from 'Go Concurrency Patterns': a function
// that wraps a closure goroutine and returns a channel.
// reference: https://talks.golang.org/2012/concurrency.slide#25
func (s *frontendAPI) watcher(ctx context.Context, pool *redis.Pool, key string) <-chan string {
// Add the key as a field to all logs for the execution of this function.
feLog = feLog.WithFields(log.Fields{"key": key})
feLog.Debug("Watching key in statestorage for changes")
watchChan := make(chan string)
go func() {
// var declaration
var results string
var err = errors.New("haven't queried Redis yet")
// Loop, querying redis until this key has a value
for err != nil {
select {
case <-ctx.Done():
// Cleanup
close(watchChan)
return
default:
results, err = s.retrieveConnstring(ctx, pool, key, s.cfg.GetString("jsonkeys.connstring"))
if err != nil {
time.Sleep(5 * time.Second) // TODO: exponential backoff + jitter
}
}
case a := <-watchChan:
feLog.WithFields(log.Fields{
"assignment": a.Assignment,
"playerid": a.Id,
"status": a.Status,
"error": a.Error,
}).Info("updating client")
assignmentStream.Send(&a)
stats.Record(fnCtx, FeGrpcStreamedResponses.M(1))
// Reset timeout.
timeoutChan = time.After(time.Duration(s.cfg.GetInt("api.frontend.timeout")) * time.Second)
}
// Return value retrieved from Redis asynchronously and tell calling function we're done
feLog.Debug("Statestorage watched record update detected")
watchChan <- results
close(watchChan)
}()
return watchChan
}
// retrieveConnstring is a concurrent-safe, context-aware redis HGET of the 'connstring' field in the input key
// TODO: This will be moved to the redis statestorage module.
func (s *frontendAPI) retrieveConnstring(ctx context.Context, pool *redis.Pool, key string, field string) (string, error) {
// Add the key as a field to all logs for the execution of this function.
feLog = feLog.WithFields(log.Fields{"key": key})
cmd := "HGET"
feLog.WithFields(log.Fields{"query": cmd}).Debug("Statestorage operation")
// Get a connection to redis
redisConn, err := pool.GetContext(ctx)
defer redisConn.Close()
// Encountered an issue getting a connection from the pool.
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"query": cmd}).Error("Statestorage connection error")
return "", err
}
// Run redis query and return
return redis.String(redisConn.Do("HGET", key, field))
}

View File

@ -55,9 +55,10 @@ import (
//
var (
// API instrumentation
FeGrpcRequests = stats.Int64("frontendapi/requests_total", "Number of requests to the gRPC Frontend API endpoints", "1")
FeGrpcErrors = stats.Int64("frontendapi/errors_total", "Number of errors generated by the gRPC Frontend API endpoints", "1")
FeGrpcLatencySecs = stats.Float64("frontendapi/latency_seconds", "Latency in seconds of the gRPC Frontend API endpoints", "1")
FeGrpcRequests = stats.Int64("frontendapi/requests_total", "Number of requests to the gRPC Frontend API endpoints", "1")
FeGrpcStreamedResponses = stats.Int64("frontendapi/streamed_responses_total", "Number of responses streamed back from the gRPC Frontend API endpoints", "1")
FeGrpcErrors = stats.Int64("frontendapi/errors_total", "Number of errors generated by the gRPC Frontend API endpoints", "1")
FeGrpcLatencySecs = stats.Float64("frontendapi/latency_seconds", "Latency in seconds of the gRPC Frontend API endpoints", "1")
// Logging instrumentation
// There's no need to record this measurement directly if you use
@ -105,6 +106,14 @@ var (
TagKeys: []tag.Key{KeyMethod},
}
FeStreamedResponseCountView = &view.View{
Name: "frontend/grpc/streamed_responses",
Measure:     FeGrpcStreamedResponses,
Description: "The number of successful streamed gRPC responses",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
FeErrorCountView = &view.View{
Name: "frontend/grpc/errors",
Measure: FeGrpcErrors,
@ -133,6 +142,7 @@ var (
var DefaultFrontendAPIViews = []*view.View{
FeLatencyView,
FeRequestCountView,
FeStreamedResponseCountView,
FeErrorCountView,
FeLogCountView,
FeFailureCountView,

View File

@ -1,9 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-frontendapi:dev',
'-f', 'Dockerfile.frontendapi',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-frontendapi:dev']

View File

@ -1,7 +1,7 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in
frontendapi/proto/frontend.pb.go
${OM_ROOT}/internal/pb/frontend.pb.go
All the actual important bits are in the API Server source code: apisrv/apisrv.go
@ -28,6 +28,7 @@ import (
"github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
@ -41,7 +42,6 @@ var (
feLogFields = log.Fields{
"app": "openmatch",
"component": "frontend",
"caller": "frontendapi/main.go",
}
feLog = log.WithFields(feLogFields)
@ -51,10 +51,12 @@ var (
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output through OpenCensus
log.AddHook(metrics.NewHook(apisrv.FeLogLines, apisrv.KeySeverity))
// Add a hook to the logger to log the filename & line number.
log.SetReportCaller(true)
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
@ -63,10 +65,8 @@ func init() {
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") == true {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
feLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
@ -88,7 +88,7 @@ func main() {
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
feLog.WithFields(log.Fields{"testfield": "test"}).Info("Attempting to start gRPC server")
feLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server

View File

@ -1,4 +0,0 @@
/*
frontend is a package compiled from the protocol buffer definition in <REPO_ROOT>/api/protobuf-spec/frontend.proto. It is auto-generated and shouldn't be edited.
*/
package frontend

View File

@ -1,335 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: frontend.proto
/*
Package frontend is a generated protocol buffer package.
It is generated from these files:
frontend.proto
It has these top-level messages:
Group
PlayerId
ConnectionInfo
Result
*/
package frontend
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Data structure for a group of players to pass to the matchmaking function.
// Obviously, the group can be a group of one!
type Group struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *Group) Reset() { *m = Group{} }
func (m *Group) String() string { return proto.CompactTextString(m) }
func (*Group) ProtoMessage() {}
func (*Group) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Group) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Group) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
type PlayerId struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
}
func (m *PlayerId) Reset() { *m = PlayerId{} }
func (m *PlayerId) String() string { return proto.CompactTextString(m) }
func (*PlayerId) ProtoMessage() {}
func (*PlayerId) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *PlayerId) GetId() string {
if m != nil {
return m.Id
}
return ""
}
// Simple message used to pass the connection string for the DGS to the player.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
// Simple message to return success/failure and error status.
type Result struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error" json:"error,omitempty"`
}
func (m *Result) Reset() { *m = Result{} }
func (m *Result) String() string { return proto.CompactTextString(m) }
func (*Result) ProtoMessage() {}
func (*Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Result) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *Result) GetError() string {
if m != nil {
return m.Error
}
return ""
}
func init() {
proto.RegisterType((*Group)(nil), "Group")
proto.RegisterType((*PlayerId)(nil), "PlayerId")
proto.RegisterType((*ConnectionInfo)(nil), "ConnectionInfo")
proto.RegisterType((*Result)(nil), "Result")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for API service
type APIClient interface {
CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error)
DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error)
}
type aPIClient struct {
cc *grpc.ClientConn
}
func NewAPIClient(cc *grpc.ClientConn) APIClient {
return &aPIClient{cc}
}
func (c *aPIClient) CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/CreateRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error) {
out := new(ConnectionInfo)
err := grpc.Invoke(ctx, "/API/GetAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for API service
type APIServer interface {
CreateRequest(context.Context, *Group) (*Result, error)
DeleteRequest(context.Context, *Group) (*Result, error)
GetAssignment(context.Context, *PlayerId) (*ConnectionInfo, error)
DeleteAssignment(context.Context, *PlayerId) (*Result, error)
}
func RegisterAPIServer(s *grpc.Server, srv APIServer) {
s.RegisterService(&_API_serviceDesc, srv)
}
func _API_CreateRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_GetAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).GetAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/GetAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).GetAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
var _API_serviceDesc = grpc.ServiceDesc{
ServiceName: "API",
HandlerType: (*APIServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateRequest",
Handler: _API_CreateRequest_Handler,
},
{
MethodName: "DeleteRequest",
Handler: _API_DeleteRequest_Handler,
},
{
MethodName: "GetAssignment",
Handler: _API_GetAssignment_Handler,
},
{
MethodName: "DeleteAssignment",
Handler: _API_DeleteAssignment_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "frontend.proto",
}
func init() { proto.RegisterFile("frontend.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 260 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x90, 0x41, 0x4b, 0xfb, 0x40,
0x10, 0xc5, 0x9b, 0xfc, 0x69, 0xda, 0x0e, 0x34, 0xff, 0xba, 0x78, 0x08, 0x39, 0x88, 0xec, 0xa9,
0x20, 0xee, 0x41, 0x0f, 0x7a, 0xf1, 0x50, 0x2a, 0x94, 0xdc, 0x4a, 0xfc, 0x00, 0x52, 0x93, 0x69,
0x59, 0x88, 0xbb, 0x71, 0x66, 0x72, 0xf0, 0x0b, 0xf9, 0x39, 0xc5, 0x4d, 0x6b, 0x55, 0xc4, 0xe3,
0xfb, 0xed, 0x7b, 0x8f, 0x7d, 0x03, 0xe9, 0x96, 0xbc, 0x13, 0x74, 0xb5, 0x69, 0xc9, 0x8b, 0xd7,
0x37, 0x30, 0x5c, 0x91, 0xef, 0x5a, 0x95, 0x42, 0x6c, 0xeb, 0x2c, 0x3a, 0x8f, 0xe6, 0x93, 0x32,
0xb6, 0xb5, 0x3a, 0x03, 0x68, 0xc9, 0xb7, 0x48, 0x62, 0x91, 0xb3, 0x38, 0xf0, 0x2f, 0x44, 0xe7,
0x30, 0x5e, 0x37, 0x9b, 0x57, 0xa4, 0xa2, 0xfe, 0x99, 0xd5, 0x77, 0x90, 0x2e, 0xbd, 0x73, 0x58,
0x89, 0xf5, 0xae, 0x70, 0x5b, 0xaf, 0x2e, 0xe0, 0xa4, 0xfa, 0x24, 0x8f, 0x2c, 0x64, 0xdd, 0x6e,
0x1f, 0x98, 0x1d, 0x1f, 0x1e, 0x02, 0xd7, 0xb7, 0x90, 0x94, 0xc8, 0x5d, 0x23, 0x2a, 0x83, 0x11,
0x77, 0x55, 0x85, 0xcc, 0xc1, 0x3c, 0x2e, 0x0f, 0x52, 0x9d, 0xc2, 0x10, 0x89, 0x3c, 0xed, 0x7f,
0xd6, 0x8b, 0xab, 0xb7, 0x08, 0xfe, 0x2d, 0xd6, 0x85, 0xd2, 0x30, 0x5d, 0x12, 0x6e, 0x04, 0x4b,
0x7c, 0xe9, 0x90, 0x45, 0x25, 0x26, 0xac, 0xcc, 0x47, 0xa6, 0x6f, 0xd6, 0x83, 0x0f, 0xcf, 0x3d,
0x36, 0xf8, 0xa7, 0xe7, 0x12, 0xa6, 0x2b, 0x94, 0x05, 0xb3, 0xdd, 0xb9, 0x67, 0x74, 0xa2, 0x26,
0xe6, 0x30, 0x3a, 0xff, 0x6f, 0xbe, 0x6f, 0xd4, 0x03, 0x35, 0x87, 0x59, 0x5f, 0xf9, 0x7b, 0xe2,
0x58, 0xfc, 0x94, 0x84, 0xeb, 0x5f, 0xbf, 0x07, 0x00, 0x00, 0xff, 0xff, 0x2b, 0xde, 0x2c, 0x5b,
0x8f, 0x01, 0x00, 0x00,
}

View File

@ -1,5 +1,5 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
# Necessary to get a specific version of the golang k8s client
RUN go get github.com/tools/godep
@ -10,11 +10,8 @@ RUN godep restore ./...
RUN rm -rf vendor/
RUN rm -rf /go/src/github.com/golang/protobuf/
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/
COPY cmd/mmforc cmd/mmforc
COPY config config
COPY internal internal
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmforc/
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

View File

@ -1,12 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-mmforc:dev']
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmforc:dev',
'--cache-from=gcr.io/$PROJECT_ID/openmatch-mmforc:dev',
'-f', 'Dockerfile.mmforc',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmforc:dev']

View File

@ -28,6 +28,7 @@ import (
"time"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/tidwall/gjson"
@ -54,7 +55,6 @@ var (
mmforcLogFields = log.Fields{
"app": "openmatch",
"component": "mmforc",
"caller": "mmforc/main.go",
}
mmforcLog = log.WithFields(mmforcLogFields)
@ -64,9 +64,7 @@ var (
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output thru OpenCensus
log.SetFormatter(&log.JSONFormatter{})
log.AddHook(metrics.NewHook(MmforcLogLines, KeySeverity))
// Viper config management initialization
@ -77,10 +75,8 @@ func init() {
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") == true {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
mmforcLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
@ -185,9 +181,9 @@ func main() {
// waiting to run the evaluator when all your MMFs are already
// finished.
switch {
case time.Since(start).Seconds() >= float64(cfg.GetInt("interval.evaluator")):
case time.Since(start).Seconds() >= float64(cfg.GetInt("evaluator.interval")):
mmforcLog.WithFields(log.Fields{
"interval": cfg.GetInt("interval.evaluator"),
"interval": cfg.GetInt("evaluator.interval"),
}).Info("Maximum evaluator interval exceeded")
checkProposals = true
@ -219,7 +215,7 @@ func main() {
}).Info("Proposals available, evaluating!")
go evaluator(ctx, cfg, clientset)
}
_, err = redisHelpers.Delete(context.Background(), pool, "concurrentMMFs")
err = redisHelpers.Delete(context.Background(), pool, "concurrentMMFs")
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),

View File

@ -1,10 +1,7 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY cmd/mmlogicapi cmd/mmlogicapi
COPY config config
COPY internal internal
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmlogicapi
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

View File

@ -31,6 +31,8 @@ import (
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
mmlogic "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/GoogleCloudPlatform/open-match/internal/set"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/ignorelist"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/redispb"
log "github.com/sirupsen/logrus"
@ -49,7 +51,6 @@ var (
mlLogFields = log.Fields{
"app": "openmatch",
"component": "mmlogic",
"caller": "mmlogicapi/apisrv/apisrv.go",
}
mlLog = log.WithFields(mlLogFields)
)
@ -80,7 +81,7 @@ func New(cfg *viper.Viper, pool *redis.Pool) *MmlogicAPI {
return &s
}
// Open opens the api grpc service, starting it listening on the configured port.
// Open starts the api grpc service listening on the configured port.
func (s *MmlogicAPI) Open() error {
ln, err := net.Listen("tcp", ":"+s.cfg.GetString("api.mmlogic.port"))
if err != nil {
@ -164,7 +165,7 @@ func (s *mmlogicAPI) CreateProposal(c context.Context, prop *mmlogic.MatchObject
}
// Write all non-id fields from the protobuf message to state storage.
err := redispb.MarshalToRedis(c, prop, s.pool)
err := redispb.MarshalToRedis(c, s.pool, prop, s.cfg.GetInt("redis.expirations.matchobject"))
if err != nil {
stats.Record(fnCtx, MlGrpcErrors.M(1))
return &mmlogic.Result{Success: false, Error: err.Error()}, err
@ -219,20 +220,23 @@ func (s *mmlogicAPI) CreateProposal(c context.Context, prop *mmlogic.MatchObject
return &mmlogic.Result{Success: false, Error: err.Error()}, err
}
}
/*
// add propkey to proposalsq
_, err = redisConn.Do("SADD", proposalq, prop.Id)
if err != nil {
cpLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"key": proposalq,
"proposal": prop.Id,
}).Error("State storage error")
stats.Record(fnCtx, MlGrpcErrors.M(1))
return &mmlogic.Result{Success: false, Error: err.Error()}, err
}
*/
// Mark this MMF as finished by decrementing the concurrent MMFs.
// This is used to trigger the evaluator early if all MMFs have finished
// before its next scheduled run.
cmLog := cpLog.WithFields(log.Fields{
"component": "statestorage",
"key": "concurrentMMFs",
})
cmLog.Info("marking MMF finished for evaluator")
_, err = redishelpers.Decrement(fnCtx, s.pool, "concurrentMMFs")
if err != nil {
cmLog.WithFields(log.Fields{"error": err.Error()}).Error("State storage error")
// record error.
stats.Record(fnCtx, MlGrpcErrors.M(1))
return &mmlogic.Result{Success: false, Error: err.Error()}, err
}
stats.Record(fnCtx, MlGrpcRequests.M(1))
return &mmlogic.Result{Success: true, Error: ""}, err
@ -271,13 +275,6 @@ func (s *mmlogicAPI) GetPlayerPool(pool *mmlogic.PlayerPool, stream mmlogic.MmLo
filterStart := time.Now()
results, err := s.applyFilter(ctx, thisFilter)
if results == nil && err == nil {
// Filter applies to so many players that we can't filter on it.
// Ignore this filter and attempt to process all the rest.
thisFilter.Stats = &mmlogic.Stats{Elapsed: time.Since(filterStart).Seconds()}
continue
}
thisFilter.Stats = &mmlogic.Stats{Count: int64(len(results)), Elapsed: time.Since(filterStart).Seconds()}
mlLog.WithFields(log.Fields{
"count": int64(len(results)),
@ -313,7 +310,7 @@ func (s *mmlogicAPI) GetPlayerPool(pool *mmlogic.PlayerPool, stream mmlogic.MmLo
}
// Make an array of only the player IDs; used to do unions and find the
// Make an array of only the player IDs; used to do set.Unions and find the
// logical AND
m := make([]string, len(results))
i := 0
@ -331,7 +328,7 @@ func (s *mmlogicAPI) GetPlayerPool(pool *mmlogic.PlayerPool, stream mmlogic.MmLo
// Player must be in every filtered pool to be returned
for field, thesePlayers := range filteredRosters {
overlap = intersection(overlap, thesePlayers)
overlap = set.Intersection(overlap, thesePlayers)
_ = field
//mlLog.WithFields(log.Fields{"count": len(overlap), "field": field}).Debug("Amount of overlap")
@ -344,7 +341,7 @@ func (s *mmlogicAPI) GetPlayerPool(pool *mmlogic.PlayerPool, stream mmlogic.MmLo
}
mlLog.WithFields(log.Fields{"count": len(overlap)}).Debug("Pool size before applying ignorelists")
mlLog.WithFields(log.Fields{"count": len(il)}).Debug("Ignorelist size")
playerList := difference(overlap, il) // removes ignorelist from the Roster
playerList := set.Difference(overlap, il) // removes ignorelist from the Roster
mlLog.WithFields(log.Fields{"count": len(playerList)}).Debug("Final Pool size")
// Reformat the playerList as a gRPC PlayerPool message. Send partial results as we go.
@ -400,7 +397,9 @@ func (s *mmlogicAPI) GetPlayerPool(pool *mmlogic.PlayerPool, stream mmlogic.MmLo
// If the provided field is not indexed or the provided range is too large, a nil result
// is returned and this filter should be disregarded when applying filter overlaps.
func (s *mmlogicAPI) applyFilter(c context.Context, filter *mmlogic.Filter) (map[string]int64, error) {
type pName string
pool := make(map[string]int64)
// Default maximum value is positive infinity (i.e. highest possible number in redis)
// https://redis.io/commands/zrangebyscore
@ -438,8 +437,13 @@ func (s *mmlogicAPI) applyFilter(c context.Context, filter *mmlogic.Filter) (map
} else if count > 500000 {
// 500,000 results is an arbitrary number; OM doesn't encourage
// patterns where MMFs look at this large of a pool.
mlLog.Warn("filter applies to too many players, ignoring")
return nil, nil
err = errors.New("filter applies to too many players")
mlLog.Error(err.Error())
for i := 0; i < int(count); i++ {
// Fill a dummy pool with one entry per matched player; the calling
// function uses its length to calculate the number of results.
pool[strconv.Itoa(i)] = 0
}
return pool, err
} else if count < 100000 {
mlLog.Info("filter processed")
} else {
@ -451,7 +455,6 @@ func (s *mmlogicAPI) applyFilter(c context.Context, filter *mmlogic.Filter) (map
// var init for player retrieval
cmd = "ZRANGEBYSCORE"
offset := 0
pool := make(map[string]int64)
// Loop, retrieving players in chunks.
for len(pool) == offset {
@ -566,77 +569,12 @@ func (s *mmlogicAPI) allIgnoreLists(c context.Context, in *mmlogic.IlInput) (all
}
// Join this ignorelist to the others we've retrieved
allIgnored = union(allIgnored, thisIl)
allIgnored = set.Union(allIgnored, thisIl)
}
return allIgnored, err
}
// Set data structure functions.
// TODO: maybe move these into an internal module if they are useful elsewhere.
func intersection(a []string, b []string) (out []string) {
hash := make(map[string]bool)
for _, v := range a {
hash[v] = true
}
for _, v := range b {
if _, found := hash[v]; found {
out = append(out, v)
}
}
return out
}
func union(a []string, b []string) (out []string) {
hash := make(map[string]bool)
// collect all values from input args
for _, v := range a {
hash[v] = true
}
for _, v := range b {
hash[v] = true
}
// put values into string array
for k := range hash {
out = append(out, k)
}
return out
}
func difference(a []string, b []string) (out []string) {
hash := make(map[string]bool)
out = append([]string{}, a...)
for _, v := range b {
hash[v] = true
}
// Iterate through output, removing items found in b
for i := 0; i < len(out); i++ {
if _, found := hash[out[i]]; found {
// Remove this element by copying the last element of the array
// to this index and then slicing off the last element.
// https://stackoverflow.com/a/37335777/3113674
out[i] = out[len(out)-1]
out = out[:len(out)-1]
i-- // recheck this index; it now holds the element swapped in from the end
}
}
return out
}
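The three helpers above (now provided by the `internal/set` package as `set.Intersection`, `set.Union`, and `set.Difference`) are how `GetPlayerPool` combines results: intersect the filtered rosters, then subtract the ignorelist. A quick self-contained sketch of that flow, with the helpers reimplemented inline to mirror the behavior of the code above:

```go
package main

import (
	"fmt"
	"sort"
)

// intersection mirrors the helper above: elements present in both a and b.
func intersection(a, b []string) (out []string) {
	hash := make(map[string]bool)
	for _, v := range a {
		hash[v] = true
	}
	for _, v := range b {
		if hash[v] {
			out = append(out, v)
		}
	}
	return out
}

// difference mirrors the helper above: elements of a not present in b.
func difference(a, b []string) (out []string) {
	hash := make(map[string]bool)
	for _, v := range b {
		hash[v] = true
	}
	for _, v := range a {
		if !hash[v] {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	// Players matching two hypothetical filters, plus an ignorelist.
	clerics := []string{"p1", "p2", "p3"}
	rightLevel := []string{"p2", "p3", "p4"}
	ignorelist := []string{"p3"}

	overlap := intersection(clerics, rightLevel) // must be in every filtered pool
	pool := difference(overlap, ignorelist)      // remove ignored players
	sort.Strings(pool)
	fmt.Println(pool) // [p2]
}
```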
// Functions for getting or setting player IDs to/from rosters
// Probably should get moved to an internal module in a future version.
func getPlayerIdsFromRoster(r *mmlogic.Roster) []string {

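`applyFilter` above notes that the default maximum value is positive infinity, which redis `ZRANGEBYSCORE` spells as `+inf`. A hedged sketch of building the score arguments; the `filter` struct here is an illustrative stand-in for the `mmlogic.Filter` proto message, and treating a zero maximum as "unbounded" is an assumption for this example:

```go
package main

import (
	"fmt"
	"strconv"
)

// filter is an illustrative stand-in for the mmlogic.Filter proto message.
type filter struct {
	Attribute string
	MinValue  int64
	MaxValue  int64
}

// scoreArgs builds ZRANGEBYSCORE min/max arguments; a zero MaxValue is
// treated as unbounded, which redis expresses as +inf.
// https://redis.io/commands/zrangebyscore
func scoreArgs(f filter) (min, max string) {
	min = strconv.FormatInt(f.MinValue, 10)
	max = "+inf"
	if f.MaxValue > 0 {
		max = strconv.FormatInt(f.MaxValue, 10)
	}
	return min, max
}

func main() {
	min, max := scoreArgs(filter{Attribute: "char.cleric"})
	fmt.Println(min, max) // 0 +inf
	min, max = scoreArgs(filter{Attribute: "mmr", MinValue: 1200, MaxValue: 1800})
	fmt.Println(min, max) // 1200 1800
}
```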
View File

@ -1,12 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-mmlogicapi:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmlogicapi:dev',
'--cache-from=gcr.io/$PROJECT_ID/openmatch-mmlogicapi:dev',
'-f', 'Dockerfile.mmlogicapi',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmlogicapi:dev']

View File

@ -1,7 +1,7 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in
mmlogic/proto/mmlogic.pb.go
${OM_ROOT}/internal/pb/mmlogic.pb.go
All the actual important bits are in the API Server source code: apisrv/apisrv.go
@ -41,7 +41,6 @@ var (
mlLogFields = log.Fields{
"app": "openmatch",
"component": "mmlogic",
"caller": "mmlogicapi/main.go",
}
mlLog = log.WithFields(mlLogFields)
@ -88,7 +87,7 @@ func main() {
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
mlLog.WithFields(log.Fields{"testfield": "test"}).Info("Attempting to start gRPC server")
mlLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server

View File

@ -29,7 +29,6 @@ var (
logFields = log.Fields{
"app": "openmatch",
"component": "config",
"caller": "config/main.go",
}
cfgLog = log.WithFields(logFields)
@ -43,12 +42,11 @@ var (
// REDIS_SENTINEL_PORT_6379_TCP_PROTO=tcp
// REDIS_SENTINEL_SERVICE_HOST=10.55.253.195
envMappings = map[string]string{
"redis.hostname": "REDIS_SENTINEL_SERVICE_HOST",
"redis.port": "REDIS_SENTINEL_SERVICE_PORT",
"redis.hostname": "REDIS_SERVICE_HOST",
"redis.port": "REDIS_SERVICE_PORT",
"redis.pool.maxIdle": "REDIS_POOL_MAXIDLE",
"redis.pool.maxActive": "REDIS_POOL_MAXACTIVE",
"redis.pool.idleTimeout": "REDIS_POOL_IDLETIMEOUT",
"debug": "DEBUG",
}
// Viper config management setup
@ -70,7 +68,10 @@ var (
func Read() (*viper.Viper, error) {
// Viper config management initialization
// Support either json or yaml file types (json for backwards compatibility
// with previous versions)
cfg.SetConfigType("json")
cfg.SetConfigType("yaml")
cfg.SetConfigName("matchmaker_config")
cfg.AddConfigPath(".")
@ -109,5 +110,11 @@ func Read() (*viper.Viper, error) {
}
// Look for updates to the config; in Kubernetes, this is implemented using
// a ConfigMap that is written to the matchmaker_config.yaml file, which is
// what the Open Match components using Viper monitor for changes.
// More details about Open Match's use of Kubernetes ConfigMaps at:
// https://github.com/GoogleCloudPlatform/open-match/issues/42
cfg.WatchConfig() // Watch and re-read config file.
return cfg, err
}

View File

@ -1,19 +1,29 @@
{
"debug": true,
"logging":{
"level": "debug",
"format": "text",
"source": true
},
"api": {
"backend": {
"hostname": "om-backendapi",
"port": 50505
"port": 50505,
"timeout": 90
},
"frontend": {
"hostname": "om-frontendapi",
"port": 50504
"port": 50504,
"timeout": 300
},
"mmlogic": {
"hostname": "om-mmlogicapi",
"port": 50503
}
},
"evalutor": {
"interval": 10
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
@ -40,19 +50,19 @@
"duration": 800
},
"expired": {
"name": "timestamp",
"name": "OM_METADATA.accessed",
"offset": 800,
"duration": 0
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/matchmaker-dev-201405/openmatch-evaluator",
"name": "gcr.io/open-match-public-images/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/matchmaker-dev-201405/openmatch-mmf",
"tag": "py3"
"name": "gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple",
"tag": "dev"
}
},
"redis": {
@ -68,18 +78,17 @@
},
"results": {
"pageSize": 10000
}
},
"expirations": {
"player": 43200,
"matchobject":43200
}
},
"jsonkeys": {
"mmfImage": "imagename",
"rosters": "properties.rosters",
"connstring": "connstring",
"pools": "properties.pools"
},
"interval": {
"evaluator": 10,
"resultsTimeout": 30
},
"playerIndices": [
"char.cleric",
"char.knight",

View File

View File

@ -27,9 +27,8 @@
"containers":[
{
"name":"om-backend",
"image":"gcr.io/matchmaker-dev-201405/openmatch-backendapi:dev",
"image":"gcr.io/open-match-public-images/openmatch-backendapi:dev",
"imagePullPolicy":"Always",
"command": ["sleep", "30000"],
"ports": [
{
"name": "grpc",

View File

@ -27,7 +27,7 @@
"containers":[
{
"name":"om-frontendapi",
"image":"gcr.io/matchmaker-dev-201405/openmatch-frontendapi:dev",
"image":"gcr.io/open-match-public-images/openmatch-frontendapi:dev",
"imagePullPolicy":"Always",
"ports": [
{

View File

@ -27,9 +27,8 @@
"containers":[
{
"name":"om-mmforc",
"image":"gcr.io/matchmaker-dev-201405/openmatch-mmforc:dev",
"image":"gcr.io/open-match-public-images/openmatch-mmforc:dev",
"imagePullPolicy":"Always",
"command": ["sleep", "30000"],
"ports": [
{
"name": "metrics",

View File

@ -27,9 +27,8 @@
"containers":[
{
"name":"om-mmlogic",
"image":"gcr.io/matchmaker-dev-201405/openmatch-mmlogicapi:dev",
"image":"gcr.io/open-match-public-images/openmatch-mmlogicapi:dev",
"imagePullPolicy":"Always",
"command": ["sleep", "30000"],
"ports": [
{
"name": "grpc",
@ -52,23 +51,3 @@
}
}
}
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-mmlogicapi"
},
"spec": {
"selector": {
"app": "openmatch",
"component": "mmlogic"
},
"ports": [
{
"protocol": "TCP",
"port": 50503,
"targetPort": "grpc"
}
]
}
}

View File

@ -0,0 +1,20 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "om-mmlogicapi"
},
"spec": {
"selector": {
"app": "openmatch",
"component": "mmlogic"
},
"ports": [
{
"protocol": "TCP",
"port": 50503,
"targetPort": "grpc"
}
]
}
}

View File

@ -2,7 +2,7 @@
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "redis-sentinel"
"name": "redis"
},
"spec": {
"selector": {

View File

@ -1,21 +1,57 @@
# Compiling from source
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild_<name>.yaml` files for each component in the repository root.
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild.yaml` files for each component in their respective directories. Note that most of them build from a 'base' image called `openmatch-devbase`. You can find a `Dockerfile` and `cloudbuild_base.yaml` file for this in the repository root. Build it first!
Note: Although Google Cloud Platform includes some free usage, you may incur charges following this guide if you use GCP products.
**This project has not completed a first-line security audit, and there are definitely going to be some service accounts that are too permissive. This should be fine for testing/development in a local environment, but absolutely should not be used as-is in a production environment.**
## Security Disclaimer
**This project has not completed a first-line security audit, and there are definitely going to be some service accounts that are too permissive. This should be fine for testing/development in a local environment, but absolutely should not be used as-is in a production environment without your team/organization evaluating its permissions.**
## Before getting started
**NOTE**: Before starting with this guide, you'll need to update all the URIs pointing at the tutorial's gcr.io container image registry to the URI for your own image registry. If you are using the gcr.io registry on GCP, the default URI is `gcr.io/<PROJECT_NAME>`. Here's an example command in Linux to do the replacement for you (substitute your registry name for `<PROJECT_NAME>`; run it from the repository root directory):
```
# Linux
egrep -lR 'open-match-public-images' . | xargs sed -i -e 's|open-match-public-images|<PROJECT_NAME>|g'
```
```
# Mac OS, you can delete the .backup files after if all looks good
egrep -lR 'open-match-public-images' . | xargs sed -i'.backup' -e 's|open-match-public-images|<PROJECT_NAME>|g'
```
## Example of building using Google Cloud Builder
The [Quickstart for Docker](https://cloud.google.com/cloud-build/docs/quickstart-docker) guide explains how to set up a project, enable billing, enable Cloud Build, and install the Cloud SDK if you haven't done these things before. Once you get to 'Preparing source files' you are ready to continue with the steps below.
* Clone this repo to a local machine or Google Cloud Shell session, and cd into it.
* Run the following one-line bash script to compile all the images for the first time, and push them to your gcr.io registry. You must enable the [Container Registry API](https://console.cloud.google.com/flows/enableapi?apiid=containerregistry.googleapis.com) first.
```
for dfile in $(ls Dockerfile.*); do gcloud builds submit --config cloudbuild_${dfile##*.}.yaml; done
```
* In Linux, you can run the following one-line bash script to compile all the images for the first time, and push them to your gcr.io registry. You must enable the [Container Registry API](https://console.cloud.google.com/flows/enableapi?apiid=containerregistry.googleapis.com) first.
```
# First, build the 'base' image. Some other images depend on this so it must complete first.
gcloud builds submit --config cloudbuild_base.yaml
# Build all other images.
for dfile in $(find . -name "Dockerfile" -iregex "./\(cmd\|test\|examples\)/.*"); do cd $(dirname ${dfile}); gcloud builds submit --config cloudbuild.yaml & cd -; done
```
Note: as of v0.3.0 alpha, the Python and PHP MMF examples still depend on the previous way of building until [issue #42, introducing new config management](https://github.com/GoogleCloudPlatform/open-match/issues/42) is resolved (apologies for the inconvenience):
```
gcloud builds submit --config cloudbuild_mmf_py3.yaml
gcloud builds submit --config cloudbuild_mmf_php.yaml
```
* Once the cloud builds have completed, you can verify that all the builds succeeded in the cloud console or by checking the list of images in your **gcr.io** registry:
```
gcloud container images list
```
(your registry name will be different)
```
NAME
gcr.io/open-match-public-images/openmatch-backendapi
gcr.io/open-match-public-images/openmatch-devbase
gcr.io/open-match-public-images/openmatch-evaluator
gcr.io/open-match-public-images/openmatch-frontendapi
gcr.io/open-match-public-images/openmatch-mmf-golang-manual-simple
gcr.io/open-match-public-images/openmatch-mmf-php-mmlogic-simple
gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple
gcr.io/open-match-public-images/openmatch-mmforc
gcr.io/open-match-public-images/openmatch-mmlogicapi
```
## Example of starting a GKE cluster
A cluster with mostly default settings will work for this development guide. In the Cloud SDK command below we start it with machines that have 4 vCPUs. Alternatively, you can use the 'Create Cluster' button in [Google Cloud Console](https://console.cloud.google.com/kubernetes).
@ -32,76 +68,115 @@ gcloud compute zones list
## Configuration
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration. To this end, there is a single centralized config file located in `<REPO_ROOT>/config/` which is symlinked to each component's subdirectory for convenience when building locally.
**NOTE** 'defaultImages' container images names in the config file will need to be updated with **your container registry URI**. Here's an example command in Linux to do this (just replace YOUR_REGISTRY_URI with the appropriate location in your environment, should be run from the config directory):
```
sed -i 's|gcr.io/matchmaker-dev-201405|YOUR_REGISTRY_URI|g' matchmaker_config.json
```
For MacOS the `-i` flag creates backup files when changing the original file in place. You can use the following command, and then delete the `*.backup` files afterwards if you don't need them anymore:
```
sed -i'.backup' -e 's|gcr.io/matchmaker-dev-201405|YOUR_REGISTRY_URI|g' matchmaker_config.json
```
If you are using the gcr.io registry on GCP, the default URI is `gcr.io/<PROJECT_NAME>`.
We plan to replace this with a Kubernetes-managed config with dynamic reloading when development time allows. Pull requests are welcome!
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration (if you would like to help us design the replacement config solution, please join the [discussion](https://github.com/GoogleCloudPlatform/open-match/issues/42)). To this end, there is a single centralized config file located in `<REPO_ROOT>/config/` which is symlinked to each component's subdirectory for convenience when building locally. Note: [there is an issue with symlinks on Windows](../issues/57).
## Running Open Match in a development environment
The rest of this guide assumes you have a cluster (the example uses GKE, but any cluster works with a little tweaking), have kubectl configured to administer that cluster, and have built all the Docker container images described by the `Dockerfile`s in the repository root directory, giving them the docker tag 'dev'. It assumes you are in the `<REPO_ROOT>/deployments/k8s/` directory.
**NOTE** Kubernetes resources that use container images will need to be updated with **your container registry URI**. Here's an example command in Linux to do this (just replace YOUR_REGISTRY_URI with the appropriate location in your environment):
```
sed -i 's|gcr.io/matchmaker-dev-201405|YOUR_REGISTRY_URI|g' *deployment.json
```
For MacOS the `-i` flag creates backup files when changing the original file in place. You can use the following command, and then delete the `*.backup` files afterwards if you don't need them anymore:
```
sed -i'.backup' -e 's|gcr.io/matchmaker-dev-201405|YOUR_REGISTRY_URI|g' *deployment.json
```
If you are using the gcr.io registry on GCP, the default URI is `gcr.io/<PROJECT_NAME>`.
* Start a copy of redis and a service in front of it:
```
kubectl apply -f redis_deployment.json
kubectl apply -f redis_service.json
```
* Run the **core components**: the frontend API, the backend API, and the matchmaker function orchestrator (MMFOrc).
```
kubectl apply -f redis_deployment.json
kubectl apply -f redis_service.json
```
* Run the **core components**: the frontend API, the backend API, the matchmaker function orchestrator (MMFOrc), and the matchmaking logic API.
**NOTE** In order to kick off jobs, the matchmaker function orchestrator needs a service account with permission to administer the cluster. This should be updated to the minimum required permissions before launch; the account below is quite permissive, but acceptable for closed testing:
```
kubectl apply -f backendapi_deployment.json
kubectl apply -f backendapi_service.json
kubectl apply -f frontendapi_deployment.json
kubectl apply -f frontendapi_service.json
kubectl apply -f mmforc_deployment.json
kubectl apply -f mmforc_serviceaccount.json
```
```
kubectl apply -f backendapi_deployment.json
kubectl apply -f backendapi_service.json
kubectl apply -f frontendapi_deployment.json
kubectl apply -f frontendapi_service.json
kubectl apply -f mmforc_deployment.json
kubectl apply -f mmforc_serviceaccount.json
kubectl apply -f mmlogicapi_deployment.json
kubectl apply -f mmlogicapi_service.json
```
* [optional, but recommended] Configure the OpenCensus metrics services:
```
kubectl apply -f metrics_services.json
```
```
kubectl apply -f metrics_services.json
```
* [optional] Applying the Kubernetes Prometheus Operator resource definition files on GKE without a cluster-admin rolebinding doesn't work unless you run the following command first. See https://github.com/coreos/prometheus-operator/issues/357
```
kubectl create clusterrolebinding projectowner-cluster-admin-binding --clusterrole=cluster-admin --user=<GCP_ACCOUNT>
```
```
kubectl create clusterrolebinding projectowner-cluster-admin-binding --clusterrole=cluster-admin --user=<GCP_ACCOUNT>
```
* [optional, uses beta software] If using Prometheus as your metrics gathering backend, configure the [Prometheus Kubernetes Operator](https://github.com/coreos/prometheus-operator):
```
kubectl apply -f prometheus_operator.json
kubectl apply -f prometheus.json
kubectl apply -f prometheus_service.json
kubectl apply -f metrics_servicemonitor.json
```
```
kubectl apply -f prometheus_operator.json
kubectl apply -f prometheus.json
kubectl apply -f prometheus_service.json
kubectl apply -f metrics_servicemonitor.json
```
You should now be able to see the core component pods running using `kubectl get pods`, and the core component metrics in the Prometheus Web UI by running `kubectl port-forward <PROMETHEUS_POD_NAME> 9090:9090` in your local shell, then opening http://localhost:9090/targets in your browser to see which services Prometheus is collecting from.
Here's an example output from `kubectl get all` if everything started correctly, and you included all the optional components (note: this could become out-of-date with upcoming versions; apologies if that happens):
```
NAME READY STATUS RESTARTS AGE
pod/om-backendapi-84bc9d8fff-q89kr 1/1 Running 0 9m
pod/om-frontendapi-55d5bb7946-c5ccb 1/1 Running 0 9m
pod/om-mmforc-85bfd7f4f6-wmwhc 1/1 Running 0 9m
pod/om-mmlogicapi-6488bc7fc6-g74dm 1/1 Running 0 9m
pod/prometheus-operator-5c8774cdd8-7c5qm 1/1 Running 0 9m
pod/prometheus-prometheus-0 2/2 Running 0 9m
pod/redis-master-9b6b86c46-b7ggn 1/1 Running 0 9m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 19m
service/om-backend-metrics ClusterIP 10.59.254.43 <none> 29555/TCP 9m
service/om-backendapi ClusterIP 10.59.240.211 <none> 50505/TCP 9m
service/om-frontend-metrics ClusterIP 10.59.246.228 <none> 19555/TCP 9m
service/om-frontendapi ClusterIP 10.59.250.59 <none> 50504/TCP 9m
service/om-mmforc-metrics ClusterIP 10.59.240.59 <none> 39555/TCP 9m
service/om-mmlogicapi ClusterIP 10.59.248.3 <none> 50503/TCP 9m
service/prometheus NodePort 10.59.252.212 <none> 9090:30900/TCP 9m
service/prometheus-operated ClusterIP None <none> 9090/TCP 9m
service/redis ClusterIP 10.59.249.197 <none> 6379/TCP 9m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/om-backendapi 1 1 1 1 9m
deployment.extensions/om-frontendapi 1 1 1 1 9m
deployment.extensions/om-mmforc 1 1 1 1 9m
deployment.extensions/om-mmlogicapi 1 1 1 1 9m
deployment.extensions/prometheus-operator 1 1 1 1 9m
deployment.extensions/redis-master 1 1 1 1 9m
NAME DESIRED CURRENT READY AGE
replicaset.extensions/om-backendapi-84bc9d8fff 1 1 1 9m
replicaset.extensions/om-frontendapi-55d5bb7946 1 1 1 9m
replicaset.extensions/om-mmforc-85bfd7f4f6 1 1 1 9m
replicaset.extensions/om-mmlogicapi-6488bc7fc6 1 1 1 9m
replicaset.extensions/prometheus-operator-5c8774cdd8 1 1 1 9m
replicaset.extensions/redis-master-9b6b86c46 1 1 1 9m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/om-backendapi 1 1 1 1 9m
deployment.apps/om-frontendapi 1 1 1 1 9m
deployment.apps/om-mmforc 1 1 1 1 9m
deployment.apps/om-mmlogicapi 1 1 1 1 9m
deployment.apps/prometheus-operator 1 1 1 1 9m
deployment.apps/redis-master 1 1 1 1 9m
NAME DESIRED CURRENT READY AGE
replicaset.apps/om-backendapi-84bc9d8fff 1 1 1 9m
replicaset.apps/om-frontendapi-55d5bb7946 1 1 1 9m
replicaset.apps/om-mmforc-85bfd7f4f6 1 1 1 9m
replicaset.apps/om-mmlogicapi-6488bc7fc6 1 1 1 9m
replicaset.apps/prometheus-operator-5c8774cdd8 1 1 1 9m
replicaset.apps/redis-master-9b6b86c46 1 1 1 9m
NAME DESIRED CURRENT AGE
statefulset.apps/prometheus-prometheus 1 1 9m
```
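As a quick scripted version of that sanity check, the snippet below counts pods that aren't yet in the `Running` state. It runs here against a captured sample of the listing above; on a live cluster, feed it `kubectl get pods --no-headers` instead:

```shell
# Sample pod listing (captured from the example output above). On a live
# cluster, replace the variable with: sample=$(kubectl get pods --no-headers)
sample='om-backendapi-84bc9d8fff-q89kr 1/1 Running 0 9m
om-frontendapi-55d5bb7946-c5ccb 1/1 Running 0 9m
prometheus-prometheus-0 2/2 Running 0 9m'

# Column 3 of `kubectl get pods` output is the pod STATUS.
printf '%s\n' "$sample" | awk '$3 != "Running" {n++} END {print "pods not Running:", n+0}'
```

For the healthy sample above this prints `pods not Running: 0`; any nonzero count means something is still starting up or has failed.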
### End-to-End testing
**Note**: The programs provided below are bare-bones manual testing programs with no automation and no claim of code coverage. This part of the documentation is sparse because we expect to discard all of these tools and, before the 1.0 release, write a fully automated end-to-end test suite and a collection of load testing tools with extensive stats output and tracing capabilities. Tracing has to be integrated first, which will happen in an upcoming release.
In the end: *caveat emptor*. These tools all work and are quite small, and as such are fairly easy for developers to understand by looking at the code and logging output. They are provided as-is just as a reference point of how to begin experimenting with Open Match integrations.
* `examples/frontendclient` is a fake client for the Frontend API. It pretends to be a group of real game clients connecting to Open Match. It requests a game, then dumps the results each player receives to the screen until you press the enter key. **Note**: If you're using the rest of these test programs, you're probably using the Backend Client below. The default profiles that command sends to the backend look for many more than one player, so if you want to see meaningful results from this Frontend Client, you'll need to generate a bunch of fake players with the client load simulation tool at the same time. Otherwise, expect it to time out, as your matchmaker never has enough players to make a successful match.
* `examples/backendclient` is a fake client for the Backend API. It pretends to be a dedicated game server backend connecting to Open Match and sending in a match profile to fill. Once it receives a match object with a roster, it also issues a call to assign the player IDs, and gives an example connection string. If it never seems to get a match, make sure you're adding players to the pool using the other two tools. **Note**: building this image requires that you first build the 'base' dev image (look for `cloudbuild_base.yaml` and `Dockerfile.base` in the root directory) and then update the first step to point to that image in your registry. This will be simplified in a future release. **Note**: If you run this by itself, expect it to wait about 30 seconds, then return a result of 'insufficient players' and exit; this is working as intended. Use the client load simulation tool below to add players to the pool, or you'll never be able to make a successful match.
* `test/cmd/client` is a (VERY) basic client load simulation tool. It does **not** test the Frontend API - in fact, it ignores it and writes players directly to state storage on its own. It doesn't do anything but loop endlessly, writing players into state storage so you can test your backend integration, and run your custom MMFs and Evaluators (which are only triggered when there are players in the pool).
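Taken together, a typical manual test session exercises the three programs above in a fixed order. The paths below are real; the snippet only prints the suggested sequence, since each program has to be built and run inside the cluster as described above:

```shell
# Suggested run order for the manual test programs (paths are from this repo;
# how each one is built and deployed into the cluster is covered above).
steps='1. test/cmd/client          - fill the player pool with fake players
2. examples/backendclient   - send a profile and wait for match results
3. examples/frontendclient  - simulate a group of game clients'
printf '%s\n' "$steps"
```

Running the client load simulator first matters: both other tools will stall or time out against an empty player pool.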
### Resources

@@ -1 +1,28 @@
During alpha, please do not use Open Match as-is in production. To develop against it, please see the [development guide](development.md).
# "Productionizing" a deployment
Here are some steps that should be taken to productionize your Open Match deployment before exposing it to live public traffic. Some of these overlap with best practices for [productionizing Kubernetes](https://cloud.google.com/blog/products/gcp/exploring-container-security-running-a-tight-ship-with-kubernetes-engine-1-10) or cloud infrastructure more generally. Going forward, we will work to fold as many of these as possible into the default deployment strategy for Open Match.
**This is not an exhaustive list and addressing the items in this document alone shouldn't be considered sufficient. Every game is different and will have different production needs.**
## Kubernetes
All the usual guidance around hardening and securing Kubernetes is applicable to running Open Match. [Here is a guide to security for Google Kubernetes Engine on GCP](https://cloud.google.com/blog/products/gcp/exploring-container-security-running-a-tight-ship-with-kubernetes-engine-1-10), and a number of other guides are available from reputable sources on the internet.
### Minimum permissions on Kubernetes
* The components of Open Match should be run in a separate Kubernetes namespace if you're also using the cluster for other services. As of 0.3.0 they run in the 'default' namespace if you follow the development guide.
* Note that the default MMForc process has cluster management permissions. Before moving to production, you should create a role with access only to create Kubernetes Jobs, and configure the MMForc to use it.
### Kubernetes Jobs (MMFOrc)
The 0.3.0 MMFOrc component runs your MMFs as Kubernetes Jobs. You should periodically delete these jobs to keep the cluster running smoothly; how often depends on how many you are running. There are a number of open source solutions to do this for you. ***Note that once you delete a job, you won't have access to that job's logs anymore unless you're sending your logs from Kubernetes to a log aggregator like Google Stackdriver. This can make it a challenge to troubleshoot issues.***
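One lightweight approach (an illustrative sketch, not project tooling) is to sweep successfully completed Jobs with a field selector. The `kubectl` invocation is a standard pattern, but verify it against your cluster version; the guard below just prints the command when no cluster is reachable:

```shell
# Delete Kubernetes Jobs that completed successfully (i.e. finished MMF runs).
# WARNING: this discards those jobs' logs - ship them to a log aggregator
# (e.g. Stackdriver) first if you need them for troubleshooting.
cleanup='kubectl delete jobs --field-selector status.successful=1'
echo "would run: $cleanup"
command -v kubectl >/dev/null 2>&1 && $cleanup || true
```

Running this on a schedule (for example, as a CronJob bound to a narrowly-scoped role) pairs well with the minimum-permissions guidance above.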
## Open Match config
Debug logging and the extra debug code paths should be disabled in the `config/matchmaker_config.json` file (as of the time of this writing, 0.3.0).
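For example (purely illustrative; the actual keys in `config/matchmaker_config.json` may differ in your version), a pre-production step could flip an assumed debug flag off in a copy of the config:

```shell
# Illustrative only - the real matchmaker_config.json schema may differ.
cat <<'EOF' > /tmp/matchmaker_config.json
{
  "debug": true,
  "logging": { "level": "debug" }
}
EOF
# Disable the (assumed) debug flag for production.
sed -i 's/"debug": true/"debug": false/' /tmp/matchmaker_config.json
grep '"debug"' /tmp/matchmaker_config.json
```

Whatever mechanism you use, make the production config an explicit, reviewed artifact rather than hand-edited state on the cluster.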
## Public APIs for Open Match
In many cases, you may choose to configure your game clients to connect to the Open Match Frontend API, and in a few select cases (such as using it for P2P non-dedicated game server hosting), the game client may also need to connect to the Backend API. In these cases, it is important to secure the API endpoints against common attacks, such as DDoS or malformed packet floods.
* Using a cloud provider's Load Balancer in front of the Kubernetes Service is a common approach to enable vendor-specific DDoS protections. Check the documentation for your cloud vendor's Load Balancer for more details ([GCP's DDoS protection](https://cloud.google.com/armor/)).
* An API framework can be used to limit endpoint access to only game clients you have authenticated using your platform's authentication service. This may be accomplished with simple authentication tokens or a more complex scheme, depending on your needs.
## Testing
(as of 0.3.0) The provided test programs are just for validating that Open Match is operating correctly; they are command-line applications designed to be run from within the same cluster as Open Match and are therefore not a suitable test harness for doing production testing to make sure your matchmaker is ready to handle your live game. Instead, it is recommended that you integrate Open Match into your game client and test it using the actual game flow players will use if at all possible.
### Load testing
Ideally, you would already be making 'headless' game clients for automated QA and load testing of your game servers; it is recommended that you also code these testing clients to be able to act as a mock player connecting to Open Match. Load testing platform services is a huge topic and should reflect your actual game access patterns as closely as possible, which will be very game dependent.
**Note: It is never a good idea to do load testing against a cloud vendor without informing them first!**

docs/roadmap.md
@@ -0,0 +1,20 @@
# Roadmap [subject to change]
Releases are scheduled for every 6 weeks. **Every release is a stable, long-term-support version**. Even for alpha releases, best-effort support is available. With a little work and input from an experienced live services developer, you can go to production with any version on the [releases page](https://github.com/GoogleCloudPlatform/open-match/releases).
Our current thinking is to wait to take Open Match out of alpha/beta (and label it 1.0) until it can be used out-of-the-box, standalone, by developers that don't have any existing platform services. Which is to say, the majority of **established game developers likely won't have any reason to wait for the 1.0 release if Open Match already handles your needs**. If you already have live platform services that you plan to integrate Open Match with (player authentication, a group invite system, dedicated game servers, metrics collection, log aggregation, etc.), then a lot of the features planned between 0.4.0 and 1.0 likely aren't of much interest to you anyway.
## Upcoming releases
* **0.4.0** &mdash; Agones Integration & MMF on [Knative](https://cloud.google.com/Knative/)
  * MMF instrumentation
  * Match object expiration / lazy deletion
  * API autoscaling by default
  * API changes after this will likely be additions or very minor
* **0.5.0** &mdash; Tracing, Metrics, and KPI Dashboard
* **0.6.0** &mdash; Load testing suite
* **1.0.0** &mdash; API Formally Stable. Breaking API changes will require a new major version number.
* **1.1.0** &mdash; Canonical MMFs
## Philosophy
* The next version (0.4.0) will focus on making MMFs run on serverless platforms - specifically Knative. This will just be first steps, as Knative is still pretty early. We want to get a proof of concept working so we can roadmap out the future "MMF on Knative" experience. Our intention is to keep MMFs as compatible as possible with the current Kubernetes Job-based way of running them. Our hope is that by the time Knative is mature, we'll be able to provide a [Knative build](https://github.com/Knative/build) pipeline that will take existing MMFs and build them as Knative functions. In the meantime, we'll map out a relatively painless (but not yet fully automated) way to make an existing MMF into a Kubernetes Deployment that looks as similar as possible to what [Knative serving](https://github.com/knative/serving) is shaping up to be, in an effort to make the eventual switchover painless. Basically, all of this is just _optimizing MMFs to make them spin up faster and take fewer resources_; **we're not planning to change what MMFs do or the interfaces they need to fulfill**. Existing MMFs will continue to run as-is, and in the future moving them to Knative should be both **optional** and **largely automated**.
* 0.4.0 represents the natural stopping point for adding new functionality until we have more community uptake and direction. We don't anticipate many API changes in 0.4.0 and beyond. Maybe new API calls for new functionality, but we're unlikely to see big shifts in existing calls through 1.0 and its point releases. We'll issue a new major release version if we decide we need those changes.
* The 0.5.0 version and beyond will be focused on operationalizing the out-of-the-box experience. Metrics, analytics, a default dashboard, additional tooling, and a load testing suite are all planned. We want it to be easy for operators to see KPIs and know what's going on with Open Match.

@@ -1,5 +1,5 @@
#FROM golang:1.10.3 as builder
FROM gcr.io/matchmaker-dev-201405/openmatch-devbase as builder
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/backendclient
COPY ./ ./
RUN go get -d -v

@@ -1,11 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-devbase' ]
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-backendclient:dev',
'--cache-from=gcr.io/$PROJECT_ID/openmatch-devbase:latest',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-backendclient:dev']

@@ -25,7 +25,6 @@ import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"log"
@@ -33,7 +32,6 @@ import (
"os"
backend "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/gogo/protobuf/jsonpb"
"github.com/tidwall/gjson"
"google.golang.org/grpc"
)
@@ -53,10 +51,12 @@ func main() {
// Read the profile
filename := "profiles/testprofile.json"
if len(os.Args) > 1 {
filename = os.Args[1]
}
log.Println("Reading profile from ", filename)
/*
if len(os.Args) > 1 {
filename = os.Args[1]
}
log.Println("Reading profile from ", filename)
*/
jsonFile, err := os.Open(filename)
if err != nil {
panic("Failed to open file specified at command line. Did you forget to specify one?")
@@ -72,10 +72,12 @@ func main() {
jsonProfile := buffer.String()
pbProfile := &backend.MatchObject{}
err = jsonpb.UnmarshalString(jsonProfile, pbProfile)
if err != nil {
log.Println(err)
}
/*
err = jsonpb.UnmarshalString(jsonProfile, pbProfile)
if err != nil {
log.Println(err)
}
*/
pbProfile.Properties = jsonProfile
log.Println("Requesting matches that fit profile:")
@@ -101,17 +103,9 @@ func main() {
profileName = gjson.Get(jsonProfile, "name").String()
}
/*
// Test CreateMatch
p := &backend.MatchObject{
Id: profileName,
// Make a stub debug hostname from the current time
Properties: jsonProfile,
}
*/
pbProfile.Id = profileName
pbProfile.Properties = jsonProfile
//
//log.Printf("Looking for matches for profile for the next 5 seconds:")
log.Printf("Establishing HTTPv2 stream...")
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -123,9 +117,9 @@ func main() {
if err != nil {
log.Fatalf("Attempting to open stream for ListMatches(_) = _, %v", err)
}
log.Printf("Waiting for matches...")
//for i := 0; i < 2; i++ {
for {
log.Printf("Waiting for matches...")
match, err := stream.Recv()
if err == io.EOF {
break
@@ -137,7 +131,7 @@ func main() {
if match.Properties == "{error: insufficient_players}" {
log.Println("Waiting for a larger player pool...")
break
//break
}
// Validate JSON before trying to parse it
@@ -146,36 +140,23 @@ func main() {
}
log.Println("Received match:")
ppJSON(match.Properties)
fmt.Println(match)
//fmt.Println(match) // Debug
/*
// Get players from the json properties.roster field
log.Println("Gathering roster from received match...")
players := make([]string, 0)
result := gjson.Get(match.Properties, "properties.roster")
result.ForEach(func(teamName, teamRoster gjson.Result) bool {
teamRoster.ForEach(func(_, player gjson.Result) bool {
players = append(players, player.String())
return true // keep iterating
})
return true // keep iterating
})
//log.Printf("players = %+v\n", players)
// Assign players in this match to our server
connstring := "example.com:12345"
if len(os.Args) >= 2 {
connstring = os.Args[1]
log.Printf("Player assignment '%v' specified at commandline", connstring)
}
log.Println("Assigning players to DGS at", connstring)
// Assign players in this match to our server
log.Println("Assigning players to DGS at example.com:12345")
playerstr := strings.Join(players, " ")
roster := &backend.Roster{PlayerIds: playerstr}
ci := &backend.ConnectionInfo{ConnectionString: "example.com:12345"}
assign := &backend.Assignments{Roster: roster, ConnectionInfo: ci}
_, err = client.CreateAssignments(context.Background(), assign)
if err != nil {
panic(err)
}
*/
assign := &backend.Assignments{Rosters: match.Rosters, Assignment: connstring}
log.Printf("Waiting for matches...")
_, err = client.CreateAssignments(context.Background(), assign)
if err != nil {
log.Println(err)
}
log.Println("Success! Not deleting assignments [demo mode].")
}

@@ -1,4 +1,5 @@
{
"imagename":"gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple:dev",
"name":"testprofilev1",
"id":"testprofile",
"properties":{

@@ -1,2 +0,0 @@
// package backend should be a copy of the compiled gRPC protobuf file used by the backend API.
package backend

@@ -1,10 +1,7 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY examples/evaluators/golang/simple examples/evaluators/golang/simple
COPY config config
COPY internal internal
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/evaluators/golang/simple
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

@@ -1,9 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-evaluator:dev',
'-f', 'Dockerfile.evaluator',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-evaluator:dev']

View File

@@ -48,8 +48,8 @@ func main() {
// Read config
lgr.Println("Initializing config...")
cfg, err := readConfig("matchmaker_config", map[string]interface{}{
"REDIS_SENTINEL_SERVICE_HOST": "redis-sentinel",
"REDIS_SENTINEL_SERVICE_PORT": "6379",
"REDIS_SERVICE_HOST": "redis",
"REDIS_SERVICE_PORT": "6379",
"auth": map[string]string{
// Read from k8s secret eventually
// Probably doesn't need a map, just here for reference
@@ -63,7 +63,7 @@ func main() {
// Connect to redis
// As per https://www.iana.org/assignments/uri-schemes/prov/redis
// redis://user:secret@localhost:6379/0?foo=bar&qux=baz // redis pool docs: https://godoc.org/github.com/gomodule/redigo/redis#Pool
redisURL := "redis://" + cfg.GetString("REDIS_SENTINEL_SERVICE_HOST") + ":" + cfg.GetString("REDIS_SENTINEL_SERVICE_PORT")
redisURL := "redis://" + cfg.GetString("REDIS_SERVICE_HOST") + ":" + cfg.GetString("REDIS_SERVICE_PORT")
lgr.Println("Connecting to redis at", redisURL)
pool := redis.Pool{
MaxIdle: 3,
@@ -157,10 +157,10 @@ func readConfig(filename string, defaults map[string]interface{}) (*viper.Viper,
REDIS_SENTINEL_PORT_6379_TCP=tcp://10.55.253.195:6379
REDIS_SENTINEL_PORT=tcp://10.55.253.195:6379
REDIS_SENTINEL_PORT_6379_TCP_ADDR=10.55.253.195
REDIS_SENTINEL_SERVICE_PORT=6379
REDIS_SERVICE_PORT=6379
REDIS_SENTINEL_PORT_6379_TCP_PORT=6379
REDIS_SENTINEL_PORT_6379_TCP_PROTO=tcp
REDIS_SENTINEL_SERVICE_HOST=10.55.253.195
REDIS_SERVICE_HOST=10.55.253.195
*/
v := viper.New()
for key, value := range defaults {

@@ -1 +0,0 @@
../../test/cmd/client/city.percent

@@ -1 +0,0 @@
../../test/cmd/client/europe-west1.ping

@@ -1 +0,0 @@
../../test/cmd/client/europe-west2.ping

@@ -1 +0,0 @@
../../test/cmd/client/europe-west3.ping

@@ -1 +0,0 @@
../../test/cmd/client/europe-west4.ping

@@ -1,144 +0,0 @@
/*
Stubbed frontend api client. This should be run within a k8s cluster, and
assumes that the frontend api is up and can be accessed through a k8s service
called 'om-frontendapi'.
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"bufio"
"bytes"
"context"
"encoding/json"
"fmt"
"log"
"net"
"os"
"strconv"
"github.com/GoogleCloudPlatform/open-match/examples/frontendclient/player"
frontend "github.com/GoogleCloudPlatform/open-match/examples/frontendclient/proto"
"github.com/gobs/pretty"
"google.golang.org/grpc"
)
func bytesToString(data []byte) string {
return string(data[:])
}
func ppJSON(s string) {
buf := new(bytes.Buffer)
json.Indent(buf, []byte(s), "", " ")
log.Println(buf)
return
}
func main() {
// determine number of players to generate per group
numPlayers := 4 // default if nothing provided
var err error
if len(os.Args) > 1 {
numPlayers, err = strconv.Atoi(os.Args[1])
if err != nil {
panic(err)
}
}
player.New()
log.Printf("Generating %d players", numPlayers)
// Connect gRPC client
ip, err := net.LookupHost("om-frontendapi")
if err != nil {
panic(err)
}
_ = ip
conn, err := grpc.Dial(ip[0]+":50504", grpc.WithInsecure())
if err != nil {
log.Fatalf("failed to connect: %s", err.Error())
}
client := frontend.NewAPIClient(conn)
log.Println("API client connected!")
log.Printf("Establishing HTTPv2 stream...")
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Empty group to fill and then run through the CreateRequest gRPC endpoint
g := &frontend.Group{
Id: "",
Properties: "",
}
// Generate players for the group and put them in
for i := 0; i < numPlayers; i++ {
playerID, playerData, debug := player.Generate()
groupPlayer(g, playerID, playerData)
_ = debug // TODO. For now you could copy this into playerdata before creating player if you want it available in redis
pretty.PrettyPrint(playerID)
pretty.PrettyPrint(playerData)
}
g.Id = g.Id[:len(g.Id)-1] // Remove trailing whitespace
log.Printf("Finished grouping players")
// Test CreateRequest
log.Println("Testing CreateRequest")
results, err := client.CreateRequest(ctx, g)
if err != nil {
panic(err)
}
pretty.PrettyPrint(g.Id)
pretty.PrettyPrint(g.Properties)
pretty.PrettyPrint(results.Success)
// wait for a value to be inserted that will be returned by GetAssignment
test := "bitters"
fmt.Println("Pausing: go put a value to return in Redis using HSET", test, "connstring <YOUR_TEST_STRING>")
fmt.Println("Hit Enter to test GetAssignment...")
reader := bufio.NewReader(os.Stdin)
_, _ = reader.ReadString('\n')
connstring, err := client.GetAssignment(ctx, &frontend.PlayerId{Id: test})
pretty.PrettyPrint(connstring.ConnectionString)
// Test DeleteRequest
fmt.Println("Deleting Request")
results, err = client.DeleteRequest(ctx, g)
pretty.PrettyPrint(results.Success)
// Remove assignments key
fmt.Println("deleting the key", test)
results, err = client.DeleteAssignment(ctx, &frontend.PlayerId{Id: test})
pretty.PrettyPrint(results.Success)
return
}
func groupPlayer(g *frontend.Group, playerID string, playerData map[string]int) error {
//g.Properties = playerData
pdJSON, _ := json.Marshal(playerData)
buffer := new(bytes.Buffer) // convert byte array to buffer to send to json.Compact()
if err := json.Compact(buffer, pdJSON); err != nil {
log.Println(err)
}
g.Id = g.Id + playerID + " "
// TODO: actually aggregate group stats
g.Properties = buffer.String()
return nil
}

@@ -1,2 +0,0 @@
// package frontend should be a copy of the compiled gRPC protobuf file used by the frontend API.
package frontend

@@ -1,335 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: frontend.proto
/*
Package frontend is a generated protocol buffer package.
It is generated from these files:
frontend.proto
It has these top-level messages:
Group
PlayerId
ConnectionInfo
Result
*/
package frontend
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Data structure for a group of players to pass to the matchmaking function.
// Obviously, the group can be a group of one!
type Group struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *Group) Reset() { *m = Group{} }
func (m *Group) String() string { return proto.CompactTextString(m) }
func (*Group) ProtoMessage() {}
func (*Group) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Group) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Group) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
type PlayerId struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
}
func (m *PlayerId) Reset() { *m = PlayerId{} }
func (m *PlayerId) String() string { return proto.CompactTextString(m) }
func (*PlayerId) ProtoMessage() {}
func (*PlayerId) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *PlayerId) GetId() string {
if m != nil {
return m.Id
}
return ""
}
// Simple message used to pass the connection string for the DGS to the player.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
// Simple message to return success/failure and error status.
type Result struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error" json:"error,omitempty"`
}
func (m *Result) Reset() { *m = Result{} }
func (m *Result) String() string { return proto.CompactTextString(m) }
func (*Result) ProtoMessage() {}
func (*Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Result) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *Result) GetError() string {
if m != nil {
return m.Error
}
return ""
}
func init() {
proto.RegisterType((*Group)(nil), "Group")
proto.RegisterType((*PlayerId)(nil), "PlayerId")
proto.RegisterType((*ConnectionInfo)(nil), "ConnectionInfo")
proto.RegisterType((*Result)(nil), "Result")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for API service
type APIClient interface {
CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error)
DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error)
}
type aPIClient struct {
cc *grpc.ClientConn
}
func NewAPIClient(cc *grpc.ClientConn) APIClient {
return &aPIClient{cc}
}
func (c *aPIClient) CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/CreateRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error) {
out := new(ConnectionInfo)
err := grpc.Invoke(ctx, "/API/GetAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for API service
type APIServer interface {
CreateRequest(context.Context, *Group) (*Result, error)
DeleteRequest(context.Context, *Group) (*Result, error)
GetAssignment(context.Context, *PlayerId) (*ConnectionInfo, error)
DeleteAssignment(context.Context, *PlayerId) (*Result, error)
}
func RegisterAPIServer(s *grpc.Server, srv APIServer) {
s.RegisterService(&_API_serviceDesc, srv)
}
func _API_CreateRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_GetAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).GetAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/GetAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).GetAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
var _API_serviceDesc = grpc.ServiceDesc{
ServiceName: "API",
HandlerType: (*APIServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateRequest",
Handler: _API_CreateRequest_Handler,
},
{
MethodName: "DeleteRequest",
Handler: _API_DeleteRequest_Handler,
},
{
MethodName: "GetAssignment",
Handler: _API_GetAssignment_Handler,
},
{
MethodName: "DeleteAssignment",
Handler: _API_DeleteAssignment_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "frontend.proto",
}
func init() { proto.RegisterFile("frontend.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 260 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x90, 0x41, 0x4b, 0xfb, 0x40,
0x10, 0xc5, 0x9b, 0xfc, 0x69, 0xda, 0x0e, 0x34, 0xff, 0xba, 0x78, 0x08, 0x39, 0x88, 0xec, 0xa9,
0x20, 0xee, 0x41, 0x0f, 0x7a, 0xf1, 0x50, 0x2a, 0x94, 0xdc, 0x4a, 0xfc, 0x00, 0x52, 0x93, 0x69,
0x59, 0x88, 0xbb, 0x71, 0x66, 0x72, 0xf0, 0x0b, 0xf9, 0x39, 0xc5, 0x4d, 0x6b, 0x55, 0xc4, 0xe3,
0xfb, 0xed, 0x7b, 0x8f, 0x7d, 0x03, 0xe9, 0x96, 0xbc, 0x13, 0x74, 0xb5, 0x69, 0xc9, 0x8b, 0xd7,
0x37, 0x30, 0x5c, 0x91, 0xef, 0x5a, 0x95, 0x42, 0x6c, 0xeb, 0x2c, 0x3a, 0x8f, 0xe6, 0x93, 0x32,
0xb6, 0xb5, 0x3a, 0x03, 0x68, 0xc9, 0xb7, 0x48, 0x62, 0x91, 0xb3, 0x38, 0xf0, 0x2f, 0x44, 0xe7,
0x30, 0x5e, 0x37, 0x9b, 0x57, 0xa4, 0xa2, 0xfe, 0x99, 0xd5, 0x77, 0x90, 0x2e, 0xbd, 0x73, 0x58,
0x89, 0xf5, 0xae, 0x70, 0x5b, 0xaf, 0x2e, 0xe0, 0xa4, 0xfa, 0x24, 0x8f, 0x2c, 0x64, 0xdd, 0x6e,
0x1f, 0x98, 0x1d, 0x1f, 0x1e, 0x02, 0xd7, 0xb7, 0x90, 0x94, 0xc8, 0x5d, 0x23, 0x2a, 0x83, 0x11,
0x77, 0x55, 0x85, 0xcc, 0xc1, 0x3c, 0x2e, 0x0f, 0x52, 0x9d, 0xc2, 0x10, 0x89, 0x3c, 0xed, 0x7f,
0xd6, 0x8b, 0xab, 0xb7, 0x08, 0xfe, 0x2d, 0xd6, 0x85, 0xd2, 0x30, 0x5d, 0x12, 0x6e, 0x04, 0x4b,
0x7c, 0xe9, 0x90, 0x45, 0x25, 0x26, 0xac, 0xcc, 0x47, 0xa6, 0x6f, 0xd6, 0x83, 0x0f, 0xcf, 0x3d,
0x36, 0xf8, 0xa7, 0xe7, 0x12, 0xa6, 0x2b, 0x94, 0x05, 0xb3, 0xdd, 0xb9, 0x67, 0x74, 0xa2, 0x26,
0xe6, 0x30, 0x3a, 0xff, 0x6f, 0xbe, 0x6f, 0xd4, 0x03, 0x35, 0x87, 0x59, 0x5f, 0xf9, 0x7b, 0xe2,
0x58, 0xfc, 0x94, 0x84, 0xeb, 0x5f, 0xbf, 0x07, 0x00, 0x00, 0xff, 0xff, 0x2b, 0xde, 0x2c, 0x5b,
0x8f, 0x01, 0x00, 0x00,
}

View File

@ -18,8 +18,8 @@ namespace mmfdotnet
{
static void Main(string[] args)
{
string host = Environment.GetEnvironmentVariable("REDIS_SENTINEL_SERVICE_HOST");
string port = Environment.GetEnvironmentVariable("REDIS_SENTINEL_SERVICE_PORT");
string host = Environment.GetEnvironmentVariable("REDIS_SERVICE_HOST");
string port = Environment.GetEnvironmentVariable("REDIS_SERVICE_PORT");
// Single connection to the open match redis cluster
Console.WriteLine($"Connecting to redis...{host}:{port}");

View File

@ -1,10 +1,7 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY examples/functions/golang/simple examples/functions/golang/simple
COPY config config
COPY internal/statestorage internal/statestorage
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/functions/golang/simple
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/functions/golang/manual-simple
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o mmf .

View File

@ -0,0 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-golang-manual-simple',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-golang-manual-simple']

View File

@ -0,0 +1,355 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"encoding/json"
"fmt"
"os"
"strings"
"time"
"github.com/GoogleCloudPlatform/open-match/config"
messages "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/GoogleCloudPlatform/open-match/internal/set"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/ignorelist"
"github.com/gogo/protobuf/jsonpb"
"github.com/gomodule/redigo/redis"
"github.com/spf13/viper"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
/*
Here are the things a MMF needs to do:
*Read/write from the Open Match state storage — Open Match ships with Redis as
the default state storage.
*Be packaged in a (Linux) Docker container.
*Read a profile you wrote to state storage using the Backend API.
*Select from the player data you wrote to state storage using the Frontend API.
*Run your custom logic to try to find a match.
*Write the match object it creates to state storage at a specified key.
*Remove the players it selected from consideration by other MMFs.
*Notify the MMForc of completion.
*(Optional & NYI, but recommended) Export stats for metrics collection.
*/
func main() {
// Read config file.
cfg := viper.New()
cfg, err := config.Read()
// As per https://www.iana.org/assignments/uri-schemes/prov/redis
// redis://user:secret@localhost:6379/0?foo=bar&qux=baz
redisURL := "redis://" + os.Getenv("REDIS_SERVICE_HOST") + ":" + os.Getenv("REDIS_SERVICE_PORT")
fmt.Println("Connecting to Redis at", redisURL)
redisConn, err := redis.DialURL(redisURL)
if err != nil {
panic(err)
}
defer redisConn.Close()
// decrement the number of running MMFs once finished
defer func() {
fmt.Println("DECR concurrentMMFs")
_, err = redisConn.Do("DECR", "concurrentMMFs")
if err != nil {
fmt.Println(err)
}
}()
// Environment vars set by the MMForc
jobName := os.Getenv("PROFILE")
timestamp := os.Getenv("MMF_TIMESTAMP")
proposalKey := os.Getenv("MMF_PROPOSAL_ID")
profileKey := os.Getenv("MMF_PROFILE_ID")
errorKey := os.Getenv("MMF_ERROR_ID")
rosterKey := os.Getenv("MMF_ROSTER_ID")
_ = jobName
_ = timestamp
_ = proposalKey
_ = profileKey
_ = errorKey
_ = rosterKey
fmt.Println("MMF request inserted at ", timestamp)
fmt.Println("Looking for profile in key", profileKey)
fmt.Println("Placing results in MatchObjectID", proposalKey)
// Retrieve profile from Redis.
// NOTE: This can also be done with a call to the MMLogic API.
profile, err := redis.StringMap(redisConn.Do("HGETALL", profileKey))
if err != nil {
panic(err)
}
fmt.Println("=========Profile")
p, err := json.MarshalIndent(profile, "", " ")
fmt.Println(string(p))
// select players
const numPlayers = 8
// ZRANGE is 0-indexed
pools := gjson.Get(profile["properties"], cfg.GetString("jsonkeys.pools"))
fmt.Println("=========Pools")
fmt.Printf("pool.String() = %+v\n", pools.String())
// Parse all the pools.
// NOTE: When using pool definitions like these that are using the
// PlayerPool protobuf message data schema, you can avoid all of this by
// using the MMLogic API call to automatically parse the pools, run the
// filters, and return the results in one gRPC call per pool.
//
// ex: poolRosters["defaultPool"]["mmr.rating"]=[]string{"abc", "def", "ghi"}
poolRosters := make(map[string]map[string][]string)
// Loop through each pool.
pools.ForEach(func(_, pool gjson.Result) bool {
pName := gjson.Get(pool.String(), "name").String()
pFilters := gjson.Get(pool.String(), "filters")
poolRosters[pName] = make(map[string][]string)
// Loop through each filter for this pool
pFilters.ForEach(func(_, filter gjson.Result) bool {
// Note: This only works when running only one filter on each attribute!
searchKey := gjson.Get(filter.String(), "attribute").String()
min := int64(0)
max := int64(time.Now().Unix())
poolRosters[pName][searchKey] = make([]string, 0)
// Parse the min and max values.
if minv := gjson.Get(filter.String(), "minv"); minv.Exists() {
min = minv.Int()
}
if maxv := gjson.Get(filter.String(), "maxv"); maxv.Exists() {
max = maxv.Int()
}
fmt.Printf("%v: %v: [%v-%v]\n", pName, searchKey, min, max)
// NOTE: This only pulls the first 50000 matches for a given index!
// This is an example, and probably shouldn't be used outside of
// testing without some performance tuning based on the size of
// your indexes. In production, this could be run concurrently on
// multiple parts of the index, and combined.
// NOTE: It is recommended you also send back some stats about this
// query along with your MMF, which can be useful when your backend
// API client is deciding which profiles to send. This example does
// not return stats, but when using the MMLogic API, this is done
// for you.
poolRosters[pName][searchKey], err = redis.Strings(
redisConn.Do("ZRANGEBYSCORE", searchKey, min, max, "LIMIT", "0", "50000"))
if err != nil {
panic(err)
}
return true // keep iterating
})
return true // keep iterating
})
// Get ignored players.
combinedIgnoreList := make([]string, 0)
// Loop through all ignorelists configured in the config file.
for il := range cfg.GetStringMap("ignoreLists") {
ilCfg := cfg.Sub(fmt.Sprintf("ignoreLists.%v", il))
thisIl, err := ignorelist.Retrieve(redisConn, ilCfg, il)
if err != nil {
panic(err)
}
// Join this ignorelist to the others we've retrieved
combinedIgnoreList = set.Union(combinedIgnoreList, thisIl)
}
// Cycle through all filters for each pool, and calculate the overlap
// (players that match all filters)
overlaps := make(map[string][]string)
// Loop through pools
for pName, p := range poolRosters {
fmt.Println(pName)
// Var init
overlaps[pName] = make([]string, 0)
first := true // Flag used to initialize the overlap on the first iteration.
// Loop through rosters that matched each filter
for fName, roster := range p {
if first {
first = false
overlaps[pName] = roster
}
// Calculate overlap
overlaps[pName] = set.Intersection(overlaps[pName], roster)
// Print out for visibility/debugging
fmt.Printf(" filtering: %-20v | participants remaining: %-5v\n", fName, len(overlaps[pName]))
}
// Remove players on ignorelists
overlaps[pName] = set.Difference(overlaps[pName], combinedIgnoreList)
fmt.Printf(" removing: %-21v | participants remaining: %-5v\n", "(ignorelists)", len(overlaps[pName]))
}
// Loop through each roster in the profile and fill in players.
rosters := gjson.Get(profile["properties"], cfg.GetString("jsonkeys.rosters"))
fmt.Println("=========Rosters")
fmt.Printf("rosters.String() = %+v\n", rosters.String())
// Parse all the rosters in the profile, adding players if we can.
// NOTE: This is using roster definitions that follow the Roster protobuf
// message data schema.
profileRosters := make(map[string][]string)
//proposedRosters := make([]string, 0)
mo := &messages.MatchObject{}
mo.Rosters = make([]*messages.Roster, 0)
// List of all player IDs on all proposed rosters, used to add players to
// the ignore list.
// NOTE: when using the MMLogic API, writing your final proposal to state
// storage will automatically add players to the ignorelist, so you don't
// need to track them separately and add them to the ignore list yourself.
playerList := make([]string, 0)
rosters.ForEach(func(_, roster gjson.Result) bool {
rName := gjson.Get(roster.String(), "name").String()
fmt.Println(rName)
rPlayers := gjson.Get(roster.String(), "players")
profileRosters[rName] = make([]string, 0)
pbRoster := messages.Roster{Name: rName, Players: []*messages.Player{}}
rPlayers.ForEach(func(_, player gjson.Result) bool {
// TODO: This is where you would put your own custom matchmaking
// logic. MMFs have full access to the state storage in Redis, so
// you can choose some participants from the pool according to your
// favored strategy. You have complete freedom to read the
// participant's records from Redis and make decisions accordingly.
//
// This example just chooses the players in the order they were
// returned from state storage.
//fmt.Printf(" %v\n", player.String()) //DEBUG
proposedPlayer := player.String()
// Get the name of the pool that the profile wanted this player pulled from.
desiredPool := gjson.Get(player.String(), "pool").String()
if _, ok := overlaps[desiredPool]; ok {
// There are players that match all the desired filters.
if len(overlaps[desiredPool]) > 0 {
// Propose the next player returned from state storage for this
// slot in the match rosters.
// Functionally, a pop from the overlap array into the proposed slot.
playerID := ""
playerID, overlaps[desiredPool] = overlaps[desiredPool][0], overlaps[desiredPool][1:]
proposedPlayer, err = sjson.Set(proposedPlayer, "id", playerID)
if err != nil {
panic(err)
}
profileRosters[rName] = append(profileRosters[rName], proposedPlayer)
fmt.Printf(" proposing: %v\n", proposedPlayer)
pbRoster.Players = append(pbRoster.Players, &messages.Player{Id: playerID, Pool: desiredPool})
playerList = append(playerList, playerID)
} else {
// Not enough players, exit.
fmt.Println("Not enough players in the pool to fill all player slots in requested roster", rName)
fmt.Printf("%+v\n", roster.String())
fmt.Println("SET", errorKey, `{"error": "insufficient_players"}`)
redisConn.Do("SET", errorKey, `{"error": "insufficient_players"}`)
os.Exit(1)
}
}
return true
})
//proposedRoster, err := sjson.Set(roster.String(), "players", profileRosters[rName])
mo.Rosters = append(mo.Rosters, &pbRoster)
//fmt.Sprintf("[%v]", strings.Join(profileRosters[rName], ",")))
//if err != nil {
// panic(err)
//}
//proposedRosters = append(proposedRosters, proposedRoster)
return true
})
// Write back the match object to state storage so the evaluator can look at it, and update the ignorelist.
// NOTE: the MMLogic API CreateProposal automates most of this for you, as
// long as you send it properly formatted data (i.e. data that fits the schema of
// the protobuf messages)
// Add proposed players to the ignorelist so other MMFs won't consider them.
fmt.Printf("Adding %v players to ignorelist\n", len(playerList))
err = ignorelist.Add(redisConn, "proposed", playerList)
if err != nil {
fmt.Println("Unable to add proposed players to the ignorelist")
panic(err)
}
// Write the match object that will be sent back to the DGS
jmarshaler := jsonpb.Marshaler{}
moJSON, err := jmarshaler.MarshalToString(mo)
proposedRosters := gjson.Get(moJSON, "rosters")
fmt.Println("===========Proposal")
// Set the properties field.
// This is a filthy hack due to the way sjson escapes & quotes values it inserts.
// Better in most cases than trying to marshal the JSON into giant multi-dimensional
// interface maps only to dump it back out to a string after.
// Note: this hack isn't necessary for most users, who just use this same
// data directly from the protobuf message 'rosters' field, or write custom
// rosters directly to the JSON properties when choosing players. This is here
// for backwards compatibility with backends that haven't been updated to take
// advantage of the new rosters field in the MatchObject protobuf message introduced
// in 0.2.0.
profile["properties"], err = sjson.Set(profile["properties"], cfg.GetString("jsonkeys.rosters"), proposedRosters.String())
profile["properties"] = strings.Replace(profile["properties"], "\\", "", -1)
profile["properties"] = strings.Replace(profile["properties"], "]\"", "]", -1)
profile["properties"] = strings.Replace(profile["properties"], "\"[", "[", -1)
if err != nil {
fmt.Println("problem with sjson")
fmt.Println(err)
}
fmt.Printf("Proposed ID: %v | Properties: %v", proposalKey, profile["properties"])
// Write the roster that will be sent to the evaluator. This needs to be written to the
// "rosters" key of the match object, in the protobuf format for an array of
// rosters protobuf messages. You can write this output by hand (not recommended)
// or use the MMLogic API call CreateProposal with a filled-out MatchObject protobuf message
// and let it do the work for you.
profile["rosters"] = proposedRosters.String()
fmt.Println("===========Redis")
// Start writing proposed results to Redis.
redisConn.Send("MULTI")
for key, value := range profile {
if key != "id" {
fmt.Println("HSET", proposalKey, key, value)
redisConn.Send("HSET", proposalKey, key, value)
}
}
// Finally, write the proposal key to trigger the evaluation of these results
fmt.Println("SADD", cfg.GetString("queues.proposals.name"), proposalKey)
redisConn.Send("SADD", cfg.GetString("queues.proposals.name"), proposalKey)
_, err = redisConn.Do("EXEC")
if err != nil {
panic(err)
}
}

View File

@ -1,11 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['pull', 'gcr.io/$PROJECT_ID/openmatch-mmf:latest']
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf:$TAG_NAME',
'--cache-from', 'gcr.io/$PROJECT_ID/openmatch-mmf:latest',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf:$TAG_NAME']

View File

@ -1,244 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"fmt"
"log"
"os"
"strconv"
"strings"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/playerq"
"github.com/gobs/pretty"
"github.com/gomodule/redigo/redis"
intersect "github.com/juliangruber/go-intersect"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
/*
Here are the things a MMF needs to do:
*Read/write from the Open Match state storage — Open Match ships with Redis as the default state storage.
*Be packaged in a (Linux) Docker container.
*Read a profile you wrote to state storage using the Backend API.
*Select from the player data you wrote to state storage using the Frontend API.
*Run your custom logic to try to find a match.
*Write the match object it creates to state storage at a specified key.
*Remove the players it selected from consideration by other MMFs.
*(Optional, but recommended) Export stats for metrics collection.
*/
func main() {
// As per https://www.iana.org/assignments/uri-schemes/prov/redis
// redis://user:secret@localhost:6379/0?foo=bar&qux=baz
redisURL := "redis://" + os.Getenv("REDIS_SENTINEL_SERVICE_HOST") + ":" + os.Getenv("REDIS_SENTINEL_SERVICE_PORT")
//Single redis connection
fmt.Println("Connecting to Redis at", redisURL)
redisConn, err := redis.DialURL(redisURL)
check(err, "QUIT")
defer redisConn.Close()
defer func() {
// decrement the number of running MMFs since this one is finished
fmt.Println("DECR concurrentMMFs")
_, err = redisConn.Do("DECR", "concurrentMMFs")
if err != nil {
fmt.Println(err)
}
}()
// PROFILE is passed via the k8s downward API through an env set to jobName.
jobName := os.Getenv("PROFILE")
timestamp := os.Getenv("OM_TIMESTAMP")
moID := os.Getenv("OM_PROPOSAL_ID")
profileKey := os.Getenv("OM_PROFILE_ID")
errorKey := os.Getenv("OM_SHORTCIRCUIT_MATCHOBJECT_ID")
rosterKey := os.Getenv("OM_ROSTER_ID")
fmt.Println("MMF request inserted at ", timestamp)
fmt.Println("Looking for profile in key", profileKey)
fmt.Println("Placing results in MatchObjectID", moID)
profile, err := redis.String(redisConn.Do("GET", profileKey))
if err != nil {
panic(err)
}
fmt.Println("Got profile!")
fmt.Println(profile)
// Redis key under which to store results
resultsKey := "proposal." + jobName
rosterKey := "roster." + jobName
shortcutKey := moID + "." + profileKey
// select players
const numPlayers = 8
// ZRANGE is 0-indexed
defaultPool := gjson.Get(profile, "properties.playerPool")
fmt.Println("defaultPool")
fmt.Printf("defaultPool.String() = %+v\n", defaultPool.String())
filters := make(map[string]map[string]int)
defaultPool.ForEach(func(key, value gjson.Result) bool {
// Make a map entry for this filter.
searchKey := key.String()
filters[searchKey] = map[string]int{"min": 0, "max": 9999999999}
// Parse the min and max values. JSON format is "min-max"
r := strings.Split(value.String(), "-")
filters[searchKey]["min"], err = strconv.Atoi(r[0])
if err != nil {
log.Println(err)
}
filters[searchKey]["max"], err = strconv.Atoi(r[1])
if err != nil {
log.Println(err)
}
return true // keep iterating
})
pretty.PrettyPrint(filters)
if len(filters) < 1 {
fmt.Printf("No filters in the default pool for the profile, %v\n", len(filters))
fmt.Println("SET", moID+"."+profileKey, `{"error": "insufficient_filters"}`)
redisConn.Do("SET", moID+"."+profileKey, `{"error": "insufficient_filters"}`)
return
}
//init 2d array
stuff := make([][]string, len(filters))
for i := range stuff {
stuff[i] = make([]string, 0)
}
i := 0
for key, value := range filters {
// TODO: this needs a lot of time and effort on building sane values for how many IDs to pull at once.
// TODO: this should also be run concurrently per index we're filtering on
fmt.Printf("key = %+v\n", key)
fmt.Printf("value = %+v\n", value)
results, err := redis.Strings(redisConn.Do("ZRANGEBYSCORE", key, value["min"], value["max"], "LIMIT", "0", "10000"))
if err != nil {
panic(err)
}
// Store off these results in the 2d array used to calculate intersections below.
stuff[i] = append(stuff[i], results...)
i++
}
fmt.Println("overlap")
overlap := stuff[0]
for i := range stuff {
if i > 0 {
set := fmt.Sprint(intersect.Hash(overlap, stuff[i]))
overlap = strings.Split(set[1:len(set)-1], " ")
}
}
pretty.PrettyPrint(overlap)
// TODO: rigorous logic to put players in desired groups
teamRosters := make(map[string][]string)
rosterProfile := gjson.Get(profile, "properties.roster")
fmt.Printf("rosterProfile.String() = %+v\n", rosterProfile.String())
// TODO: get this by parsing the JSON instead of cheating
rosterSize := int(gjson.Get(profile, "properties.roster.blue").Int() + gjson.Get(profile, "properties.roster.red").Int())
matchRoster := make([]string, 0)
if len(overlap) < rosterSize {
fmt.Printf("Not enough players in the pool to fill %v player slots in requested roster", rosterSize)
fmt.Printf("rosterProfile.String() = %+v\n", rosterProfile.String())
fmt.Println("SET", moID+"."+profileKey, `{"error": "insufficient_players"}`)
redisConn.Do("SET", moID+"."+profileKey, `{"error": "insufficient_players"}`)
return
}
rosterProfile.ForEach(func(name, size gjson.Result) bool {
teamKey := name.String()
teamRosters[teamKey] = make([]string, size.Int())
for i := 0; i < int(size.Int()); i++ {
var playerID string
// Functionally a Pop from the overlap array into playerID
playerID, overlap = overlap[0], overlap[1:]
teamRosters[teamKey][i] = playerID
matchRoster = append(matchRoster, playerID)
}
return true
})
pretty.PrettyPrint(teamRosters)
pretty.PrettyPrint(matchRoster)
profile, err = sjson.Set(profile, "properties.roster", teamRosters)
if err != nil {
panic(err)
}
// Write the match object that will be sent back to the DGS
fmt.Println("Proposing the following group ", resultsKey)
fmt.Println("SET", resultsKey, profile)
_, err = redisConn.Do("SET", resultsKey, profile)
if err != nil {
panic(err)
}
// Write the roster that will be sent to the evaluator
fmt.Println("Sending the following roster to the evaluator under key ", rosterKey)
fmt.Println("SET", rosterKey, strings.Join(matchRoster, " "))
_, err = redisConn.Do("SET", rosterKey, strings.Join(matchRoster, " "))
if err != nil {
panic(err)
}
//TODO: make this auto-correcting if the player doesn't end up in a group.
for _, playerID := range matchRoster {
fmt.Printf("Attempting to remove player %v from indices\n", playerID)
// TODO: make playerq module available to everything
err := playerq.Deindex(redisConn, playerID)
if err != nil {
panic(err)
}
}
// Finally, write the proposal key to trigger the evaluation of these results
// TODO: read this from a config ala proposalq := cfg.GetString("queues.proposals.name")
proposalq := "proposalq"
fmt.Println("SADD", proposalq, jobName)
_, err = redisConn.Do("SADD", proposalq, jobName)
if err != nil {
panic(err)
}
// DEBUG
results, err := redis.Strings(redisConn.Do("SMEMBERS", proposalq))
if err != nil {
panic(err)
}
pretty.PrettyPrint(results)
}
func check(err error, action string) {
if err != nil {
if action == "QUIT" {
log.Fatal(err)
} else {
log.Print(err)
}
}
}

View File

@ -0,0 +1,12 @@
{
"require": {
"grpc/grpc": "v1.9.0"
},
"autoload": {
"psr-4": {
"Api\\": "proto/Api",
"Messages\\": "proto/Messages",
"GPBMetadata\\": "proto/GPBMetadata"
}
}
}

View File

@ -0,0 +1,131 @@
#!/usr/bin/env php
<?php
# Step 1 - Package this in a linux container image.
require dirname(__FILE__).'/vendor/autoload.php';
require 'mmf.php';
function dump_pb_message($msg) {
print($msg->serializeToJsonString() . "\n");
}
# Load config file
$cfg = json_decode(file_get_contents('matchmaker_config.json'), true);
# Step 2 - Talk to Redis. This example uses the MM Logic API in OM to read/write to/from redis.
# Establish grpc channel and make the API client stub
$api_conn_info = sprintf('%s:%s', $cfg['api']['mmlogic']['hostname'], $cfg['api']['mmlogic']['port']);
$mmlogic_api = new Api\MmLogicClient($api_conn_info, [
'credentials' => Grpc\ChannelCredentials::createInsecure(),
]);
# Step 3 - Read the profile written to the Backend API.
# Get profile from redis
$match_object = new Messages\MatchObject([
'id' => getenv('MMF_PROFILE_ID'
)]);
list($profile_pb, $status) = $mmlogic_api->GetProfile($match_object)->wait();
dump_pb_message($profile_pb);
$profile_dict = json_decode($profile_pb->getProperties(), true);
# Step 4 - Select the player data from Redis that we want for our matchmaking logic.
# Embedded in this profile are JSON representations of the filters for each player pool.
# JsonFilterSet() is able to read those directly. No need to marshal that
# JSON into the protobuf message format
$player_pools = [];
foreach ($profile_pb->getPools() as $empty_pool) {
$empty_pool_name = $empty_pool->getName();
# Dict to hold value-sorted field dictionary for easy retrieval of players by value
$player_pools[$empty_pool_name] = [];
printf("Retrieving pool '%s'\n", $empty_pool_name);
if (!$empty_pool->getStats()) {
$empty_pool->setStats(new Messages\Stats());
}
if ($cfg['debug']) {
$start = microtime(true);
}
# Pool filter results are streamed in chunks as they can be too large to send
# in one grpc message. Loop to get them all.
$call = $mmlogic_api->GetPlayerPool($empty_pool);
foreach ($call->responses() as $partial_results) {
if ($partial_results->getStats()) {
$empty_pool->getStats()->setCount($partial_results->getStats()->getCount());
$empty_pool->getStats()->setElapsed($partial_results->getStats()->getElapsed());
}
print ".\n";
$roster = $partial_results->getRoster();
if ($roster) {
foreach ($roster->getPlayers() as $player) {
if (!array_key_exists($player->getId(), $player_pools[$empty_pool_name])) {
$player_pools[$empty_pool_name][$player->getId()] = [];
}
foreach ($player->getAttributes() as $attr) {
$player_pools[$empty_pool_name][$player->getId()][$attr->getName()] = $attr->getValue();
}
}
}
}
if ($cfg['debug']) {
$end = microtime(true);
printf("\n'%s': count %06d | elapsed %0.3f\n", $empty_pool_name, count($player_pools[$empty_pool_name]), $end - $start);
}
}
#################################################################
# Step 5 - Run custom matchmaking logic to try to find a match
# This is in the file mmf.php
$results = make_matches($profile_dict, $player_pools);
#################################################################
# DEBUG
if ($cfg['debug']) {
print("======= match_properties\n");
var_export($results);
}
# Generate a MatchObject message to write to state storage with the results in it.
$mo = new Messages\MatchObject([
'id' => getenv('MMF_PROPOSAL_ID'),
'properties' => json_encode($results)
]);
$mo->setPools($profile_pb->getPools());
# Access the rosters in dict form within the properties json.
# It is stored at the key specified in the config file.
$rosters_dict = $results;
foreach (explode('.', $cfg['jsonkeys']['rosters']) as $partial_key) {
$rosters_dict = $rosters_dict[$partial_key] ?? [];
}
# Unmarshal the rosters into the MatchObject
foreach ($rosters_dict as $roster) {
$r = new Messages\Roster();
$r->mergeFromJsonString(json_encode($roster));
$mo->getRosters() []= $r;
}
# DEBUG: writing to the error key prevents the evaluator run
if ($cfg['debug']) {
print("======== MMF results:\n");
dump_pb_message($mo);
}
# Step 6 - Write the outcome of the matchmaking logic back to state storage.
# Step 7 - Remove the selected players from consideration by other MMFs.
# CreateProposal does both of these for you, and some other items as well.
list($result, $status) = $mmlogic_api->CreateProposal($mo)->wait();
printf("======== MMF write to state storage: %s\n", $result->getSuccess() ? 'true' : 'false');
dump_pb_message($result);
# [OPTIONAL] Step 8 - Export stats about this run.
# TODO
?>

View File

@ -0,0 +1 @@
../../../../config/matchmaker_config.json

View File

@ -0,0 +1,27 @@
<?php
function make_matches($profile_dict, $player_pools) {
###########################################################################
# This is the exciting part, and where most of your custom code would go! #
###########################################################################
foreach ($profile_dict['properties']['rosters'] as &$roster) {
foreach ($roster['players'] as &$player) {
if (array_key_exists('pool', $player)) {
$player['id'] = array_rand($player_pools[$player['pool']]);
printf("Selected player %s from pool %s (strategy: RANDOM)\n",
$player['id'],
$player['pool']
);
} else {
var_export($player);
}
}
unset($player);
}
unset($roster);
return $profile_dict;
}
?>
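To illustrate how `make_matches` consumes its inputs, here is a minimal, hypothetical invocation. The profile shape mirrors the rosters structure the function iterates; the pool contents and player IDs are invented for the sketch and are not part of the repository:

```php
<?php
require_once 'mmf.php';

// Hypothetical profile: one roster with two open slots drawing from 'defaultPool'.
$profile_dict = [
    'properties' => [
        'rosters' => [
            ['name' => 'red', 'players' => [
                ['pool' => 'defaultPool'],
                ['pool' => 'defaultPool'],
            ]],
        ],
    ],
];

// Hypothetical pools: playerID => attribute map, as built in Step 4 above.
$player_pools = [
    'defaultPool' => [
        'player-aaa' => ['mmr' => 1100],
        'player-bbb' => ['mmr' => 1350],
    ],
];

$results = make_matches($profile_dict, $player_pools);
// Each roster slot now carries a randomly selected 'id' from its pool.
```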

View File

@ -0,0 +1,122 @@
<?php
// GENERATED CODE -- DO NOT EDIT!
namespace Api;
/**
*/
class BackendClient extends \Grpc\BaseStub {
/**
* @param string $hostname hostname
* @param array $opts channel options
* @param \Grpc\Channel $channel (optional) re-use channel object
*/
public function __construct($hostname, $opts, $channel = null) {
parent::__construct($hostname, $opts, $channel);
}
/**
* Run MMF once. Return a matchobject that fits this profile.
* INPUT: MatchObject message with these fields populated:
* - id
* - properties
* - [optional] roster, any fields you fill are available to your MMF.
* - [optional] pools, any fields you fill are available to your MMF.
* OUTPUT: MatchObject message with these fields populated:
* - id
* - properties
* - error. Empty if no error was encountered
* - rosters, if you choose to fill them in your MMF. (Recommended)
* - pools, if you used the MMLogicAPI in your MMF. (Recommended, and provides stats)
* @param \Messages\MatchObject $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function CreateMatch(\Messages\MatchObject $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.Backend/CreateMatch',
$argument,
['\Messages\MatchObject', 'decode'],
$metadata, $options);
}
/**
* Continually run MMF and stream matchobjects that fit this profile until
* client closes the connection. Same inputs/outputs as CreateMatch.
* @param \Messages\MatchObject $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function ListMatches(\Messages\MatchObject $argument,
$metadata = [], $options = []) {
return $this->_serverStreamRequest('/api.Backend/ListMatches',
$argument,
['\Messages\MatchObject', 'decode'],
$metadata, $options);
}
/**
* Delete a matchobject from state storage manually. (Matchobjects in state
* storage will also automatically expire after a while)
* INPUT: MatchObject message with the 'id' field populated.
* (All other fields are ignored.)
* @param \Messages\MatchObject $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function DeleteMatch(\Messages\MatchObject $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.Backend/DeleteMatch',
$argument,
['\Messages\Result', 'decode'],
$metadata, $options);
}
/**
* Calls for communication of connection info to players.
*
* Write the connection info for the list of players in the
* Assignments.messages.Rosters to state storage. The FrontendAPI is
* responsible for sending anything sent here to the game clients.
* Sending a player to this function kicks off a process that removes
* the player from future matchmaking functions by adding them to the
* 'deindexed' player list and then deleting their player ID from state storage
* indexes.
* INPUT: Assignments message with these fields populated:
* - connection_info, anything you write to this string is sent to Frontend API
* - rosters. You can send any number of rosters, containing any number of
* player messages. All players from all rosters will be sent the connection_info.
* The only field in the Player object that is used by CreateAssignments is
* the id field. All others are silently ignored.
* @param \Messages\Assignments $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function CreateAssignments(\Messages\Assignments $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.Backend/CreateAssignments',
$argument,
['\Messages\Result', 'decode'],
$metadata, $options);
}
/**
* Remove DGS connection info from state storage for players.
* INPUT: Roster message with the 'players' field populated.
* The only field in the Player object that is used by
* DeleteAssignments is the 'id' field. All others are silently ignored. If
* you need to delete multiple rosters, make multiple calls.
* @param \Messages\Roster $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function DeleteAssignments(\Messages\Roster $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.Backend/DeleteAssignments',
$argument,
['\Messages\Result', 'decode'],
$metadata, $options);
}
}
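A minimal sketch of driving the stub above from backend code: request a match, then push connection info for the players it returned. The service address, profile ID, and DGS address are hypothetical; the calls assume the `grpc` extension and the generated classes are autoloaded.

```php
<?php
// Hypothetical Backend API address; use your deployment's service endpoint.
$client = new \Api\BackendClient('om-backendapi:50505', [
    'credentials' => \Grpc\ChannelCredentials::createInsecure(),
]);

$profile = new \Messages\MatchObject([
    'id' => 'profile.test',                       // hypothetical profile ID
    'properties' => json_encode(['rosters' => []]),
]);

list($match, $status) = $client->CreateMatch($profile)->wait();
if ($status->code !== \Grpc\STATUS_OK || $match->getError() !== '') {
    exit("match failed: " . $match->getError() . "\n");
}

// Send connection info to every player in the returned rosters.
$assignments = new \Messages\Assignments([
    'rosters' => $match->getRosters(),
    'connection_info' => new \Messages\ConnectionInfo([
        'connection_string' => '10.0.0.1:7777',   // hypothetical DGS address
    ]),
]);
list($result, $status) = $client->CreateAssignments($assignments)->wait();
```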

View File

@ -0,0 +1,73 @@
<?php
// GENERATED CODE -- DO NOT EDIT!
// Original file comments:
// TODO: In a future version, these messages will be moved/merged with those in om_messages.proto
namespace Api;
/**
*/
class FrontendClient extends \Grpc\BaseStub {
/**
* @param string $hostname hostname
* @param array $opts channel options
* @param \Grpc\Channel $channel (optional) re-use channel object
*/
public function __construct($hostname, $opts, $channel = null) {
parent::__construct($hostname, $opts, $channel);
}
/**
* @param \Api\Group $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function CreateRequest(\Api\Group $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.Frontend/CreateRequest',
$argument,
['\Messages\Result', 'decode'],
$metadata, $options);
}
/**
* @param \Api\Group $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function DeleteRequest(\Api\Group $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.Frontend/DeleteRequest',
$argument,
['\Messages\Result', 'decode'],
$metadata, $options);
}
/**
* @param \Api\PlayerId $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function GetAssignment(\Api\PlayerId $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.Frontend/GetAssignment',
$argument,
['\Messages\ConnectionInfo', 'decode'],
$metadata, $options);
}
/**
* @param \Api\PlayerId $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function DeleteAssignment(\Api\PlayerId $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.Frontend/DeleteAssignment',
$argument,
['\Messages\Result', 'decode'],
$metadata, $options);
}
}
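The player-facing flow with this stub is: create a matchmaking request for a Group, then poll GetAssignment until connection info appears. A hedged sketch, assuming a hypothetical service address and player ID; the polling interval and properties are illustrative only:

```php
<?php
// Hypothetical Frontend API address; use your deployment's service endpoint.
$client = new \Api\FrontendClient('om-frontendapi:50504', [
    'credentials' => \Grpc\ChannelCredentials::createInsecure(),
]);

$playerId = 'player-xyz';                         // hypothetical Xid
$group = new \Api\Group([
    'id' => $playerId,
    'properties' => json_encode(['mmr' => 1200]),
]);
list($result, $status) = $client->CreateRequest($group)->wait();

// Poll until the matchmaker writes connection info for this player.
do {
    sleep(2);
    $req = new \Api\PlayerId(['id' => $playerId]);
    list($info, $status) = $client->GetAssignment($req)->wait();
} while ($status->code !== \Grpc\STATUS_OK || $info->getConnectionString() === '');

printf("connect to %s\n", $info->getConnectionString());
$client->DeleteRequest($group)->wait();
```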

View File

@ -0,0 +1,102 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/frontend.proto
namespace Api;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* Data structure for a group of players to pass to the matchmaking function.
* Obviously, the group can be a group of one!
*
* Generated from protobuf message <code>api.Group</code>
*/
class Group extends \Google\Protobuf\Internal\Message
{
/**
* By convention, string of space-delimited playerIDs
*
* Generated from protobuf field <code>string id = 1;</code>
*/
private $id = '';
/**
* By convention, a JSON-encoded string
*
* Generated from protobuf field <code>string properties = 2;</code>
*/
private $properties = '';
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type string $id
* By convention, string of space-delimited playerIDs
* @type string $properties
* By convention, a JSON-encoded string
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Frontend::initOnce();
parent::__construct($data);
}
/**
* By convention, string of space-delimited playerIDs
*
* Generated from protobuf field <code>string id = 1;</code>
* @return string
*/
public function getId()
{
return $this->id;
}
/**
* By convention, string of space-delimited playerIDs
*
* Generated from protobuf field <code>string id = 1;</code>
* @param string $var
* @return $this
*/
public function setId($var)
{
GPBUtil::checkString($var, True);
$this->id = $var;
return $this;
}
/**
* By convention, a JSON-encoded string
*
* Generated from protobuf field <code>string properties = 2;</code>
* @return string
*/
public function getProperties()
{
return $this->properties;
}
/**
* By convention, a JSON-encoded string
*
* Generated from protobuf field <code>string properties = 2;</code>
* @param string $var
* @return $this
*/
public function setProperties($var)
{
GPBUtil::checkString($var, True);
$this->properties = $var;
return $this;
}
}

View File

@ -0,0 +1,134 @@
<?php
// GENERATED CODE -- DO NOT EDIT!
namespace Api;
/**
* The MMLogic API provides utility functions for common MMF functionality, such
* as retrieving profiles and players from state storage, writing results to state storage,
* and exposing metrics and statistics.
*/
class MmLogicClient extends \Grpc\BaseStub {
/**
* @param string $hostname hostname
* @param array $opts channel options
* @param \Grpc\Channel $channel (optional) re-use channel object
*/
public function __construct($hostname, $opts, $channel = null) {
parent::__construct($hostname, $opts, $channel);
}
/**
* Send GetProfile a match object with the ID field populated, and it will
* return a 'filled' one.
* Note: filters are assumed to have been checked for validity by the
* backendapi when accepting a profile
* @param \Messages\MatchObject $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function GetProfile(\Messages\MatchObject $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.MmLogic/GetProfile',
$argument,
['\Messages\MatchObject', 'decode'],
$metadata, $options);
}
/**
* CreateProposal is called by MMFs that wish to write their results to
* a proposed MatchObject, that can be sent out the Backend API once it has
* been approved (by default, by the evaluator process).
* - adds all players in all Rosters to the proposed player ignore list
* - writes the proposed match to the provided key
* - adds that key to the list of proposals to be considered
* INPUT:
*  * TO RETURN A MATCHOBJECT AFTER A SUCCESSFUL MMF RUN
*    To create a match, send a MatchObject message with these fields populated:
*  - id, set to the value of the MMF_PROPOSAL_ID env var
*  - properties
*  - error. You must explicitly set this to an empty string if your MMF
*    was successful.
*  - roster, with the playerIDs filled in the 'players' repeated field.
*  - [optional] pools, set to the output from the 'GetPlayerPool' call;
*    this will populate the pools with stats about how many players the filters
*    matched and how long the filters took to run, which will be sent out
*    the backend api along with your match results.
* * TO RETURN AN ERROR
* To report a failure or error, send a MatchObject message with these
* fields populated:
* - id, set to the value of the MMF_ERROR_ID env var.
* - error, set to a string value describing the error your MMF encountered.
* - [optional] properties, anything you put here is returned to the
* backend along with your error.
* - [optional] rosters, anything you put here is returned to the
* backend along with your error.
* - [optional] pools, set to the output from the 'GetPlayerPool' call;
*   this will populate the pools with stats about how many players the filters
*   matched and how long the filters took to run, which will be sent out
*   the backend api along with your match results.
* OUTPUT: a Result message with a boolean success value and an error string
* if an error was encountered
* @param \Messages\MatchObject $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function CreateProposal(\Messages\MatchObject $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.MmLogic/CreateProposal',
$argument,
['\Messages\Result', 'decode'],
$metadata, $options);
}
/**
* Player listing and filtering functions
*
* GetPlayerPool gets the list of players that match every Filter in the
* PlayerPool, removes all players it finds in the ignore lists, and streams
* back the resulting player pool.
* @param \Messages\PlayerPool $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function GetPlayerPool(\Messages\PlayerPool $argument,
$metadata = [], $options = []) {
return $this->_serverStreamRequest('/api.MmLogic/GetPlayerPool',
$argument,
['\Messages\PlayerPool', 'decode'],
$metadata, $options);
}
/**
* Ignore List functions
*
* IlInput is an empty message reserved for future use.
* @param \Messages\IlInput $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function GetAllIgnoredPlayers(\Messages\IlInput $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.MmLogic/GetAllIgnoredPlayers',
$argument,
['\Messages\Roster', 'decode'],
$metadata, $options);
}
/**
* ListIgnoredPlayers retrieves players from the ignore list specified in the
* config file under 'ignoreLists.proposedPlayers.key'.
* @param \Messages\IlInput $argument input argument
* @param array $metadata metadata
* @param array $options call options
*/
public function ListIgnoredPlayers(\Messages\IlInput $argument,
$metadata = [], $options = []) {
return $this->_simpleRequest('/api.MmLogic/ListIgnoredPlayers',
$argument,
['\Messages\Roster', 'decode'],
$metadata, $options);
}
}
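How an MMF typically uses this stub, mirroring Steps 2-7 of the example MMF above: fetch the profile, stream each player pool, then write a proposal. The service address is hypothetical; `MMF_PROPOSAL_ID` appears in the example above, while `MMF_PROFILE_ID` is an assumption about the environment Open Match provides to the MMF container:

```php
<?php
// Hypothetical MMLogic API address; use your deployment's service endpoint.
$client = new \Api\MmLogicClient('om-mmlogicapi:50503', [
    'credentials' => \Grpc\ChannelCredentials::createInsecure(),
]);

// Fetch the profile this MMF run should fill.
$req = new \Messages\MatchObject(['id' => getenv('MMF_PROFILE_ID')]);
list($profile, $status) = $client->GetProfile($req)->wait();

// GetPlayerPool is server-streaming: partial results arrive in chunks.
foreach ($profile->getPools() as $pool) {
    $call = $client->GetPlayerPool($pool);
    foreach ($call->responses() as $partial) {
        // ...accumulate $partial->getRoster()->getPlayers() as in Step 4 above
    }
}

// Write the proposal back; CreateProposal also ignore-lists the chosen players.
$proposal = new \Messages\MatchObject([
    'id' => getenv('MMF_PROPOSAL_ID'),
    'properties' => $profile->getProperties(),
    'error' => '',   // must be explicitly empty on success
]);
list($result, $status) = $client->CreateProposal($proposal)->wait();
printf("proposal accepted: %s\n", $result->getSuccess() ? 'true' : 'false');
```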

View File

@ -0,0 +1,65 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/frontend.proto
namespace Api;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* Generated from protobuf message <code>api.PlayerId</code>
*/
class PlayerId extends \Google\Protobuf\Internal\Message
{
/**
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
*/
private $id = '';
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type string $id
* By convention, an Xid
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Frontend::initOnce();
parent::__construct($data);
}
/**
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @return string
*/
public function getId()
{
return $this->id;
}
/**
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @param string $var
* @return $this
*/
public function setId($var)
{
GPBUtil::checkString($var, True);
$this->id = $var;
return $this;
}
}

View File

@ -0,0 +1,39 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/backend.proto
namespace GPBMetadata\Api\ProtobufSpec;
class Backend
{
public static $is_initialized = false;
public static function initOnce() {
$pool = \Google\Protobuf\Internal\DescriptorPool::getGeneratedPool();
if (static::$is_initialized == true) {
return;
}
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
$pool->internalAddGeneratedFile(hex2bin(
"0aa8030a1f6170692f70726f746f6275662d737065632f6261636b656e64" .
"2e70726f746f120361706932be020a074261636b656e64123d0a0b437265" .
"6174654d6174636812152e6d657373616765732e4d617463684f626a6563" .
"741a152e6d657373616765732e4d617463684f626a6563742200123f0a0b" .
"4c6973744d61746368657312152e6d657373616765732e4d617463684f62" .
"6a6563741a152e6d657373616765732e4d617463684f626a656374220030" .
"0112380a0b44656c6574654d6174636812152e6d657373616765732e4d61" .
"7463684f626a6563741a102e6d657373616765732e526573756c74220012" .
"3e0a1143726561746541737369676e6d656e747312152e6d657373616765" .
"732e41737369676e6d656e74731a102e6d657373616765732e526573756c" .
"74220012390a1144656c65746541737369676e6d656e747312102e6d6573" .
"73616765732e526f737465721a102e6d657373616765732e526573756c74" .
"220042375a356769746875622e636f6d2f476f6f676c65436c6f7564506c" .
"6174666f726d2f6f70656e2d6d617463682f696e7465726e616c2f706262" .
"0670726f746f33"
));
static::$is_initialized = true;
}
}

View File

@ -0,0 +1,38 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/frontend.proto
namespace GPBMetadata\Api\ProtobufSpec;
class Frontend
{
public static $is_initialized = false;
public static function initOnce() {
$pool = \Google\Protobuf\Internal\DescriptorPool::getGeneratedPool();
if (static::$is_initialized == true) {
return;
}
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
$pool->internalAddGeneratedFile(hex2bin(
"0a8b030a206170692f70726f746f6275662d737065632f66726f6e74656e" .
"642e70726f746f120361706922270a0547726f7570120a0a026964180120" .
"01280912120a0a70726f7065727469657318022001280922160a08506c61" .
"7965724964120a0a02696418012001280932df010a0846726f6e74656e64" .
"122f0a0d43726561746552657175657374120a2e6170692e47726f75701a" .
"102e6d657373616765732e526573756c742200122f0a0d44656c65746552" .
"657175657374120a2e6170692e47726f75701a102e6d657373616765732e" .
"526573756c742200123a0a0d47657441737369676e6d656e74120d2e6170" .
"692e506c6179657249641a182e6d657373616765732e436f6e6e65637469" .
"6f6e496e666f220012350a1044656c65746541737369676e6d656e74120d" .
"2e6170692e506c6179657249641a102e6d657373616765732e526573756c" .
"74220042375a356769746875622e636f6d2f476f6f676c65436c6f756450" .
"6c6174666f726d2f6f70656e2d6d617463682f696e7465726e616c2f7062" .
"620670726f746f33"
));
static::$is_initialized = true;
}
}

View File

@ -0,0 +1,54 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace GPBMetadata\Api\ProtobufSpec;
class Messages
{
public static $is_initialized = false;
public static function initOnce() {
$pool = \Google\Protobuf\Internal\DescriptorPool::getGeneratedPool();
if (static::$is_initialized == true) {
return;
}
$pool->internalAddGeneratedFile(hex2bin(
"0a9a070a206170692f70726f746f6275662d737065632f6d657373616765" .
"732e70726f746f12086d657373616765732284010a0b4d617463684f626a" .
"656374120a0a02696418012001280912120a0a70726f7065727469657318" .
"0220012809120d0a056572726f7218032001280912210a07726f73746572" .
"7318042003280b32102e6d657373616765732e526f7374657212230a0570" .
"6f6f6c7318052003280b32142e6d657373616765732e506c61796572506f" .
"6f6c22390a06526f73746572120c0a046e616d6518012001280912210a07" .
"706c617965727318022003280b32102e6d657373616765732e506c617965" .
"7222650a0646696c746572120c0a046e616d6518012001280912110a0961" .
"7474726962757465180220012809120c0a046d617876180320012803120c" .
"0a046d696e76180420012803121e0a05737461747318052001280b320f2e" .
"6d657373616765732e537461747322270a055374617473120d0a05636f75" .
"6e74180120012803120f0a07656c6170736564180220012801227f0a0a50" .
"6c61796572506f6f6c120c0a046e616d6518012001280912210a0766696c" .
"7465727318022003280b32102e6d657373616765732e46696c7465721220" .
"0a06726f7374657218032001280b32102e6d657373616765732e526f7374" .
"6572121e0a05737461747318042001280b320f2e6d657373616765732e53" .
"746174732290010a06506c61796572120a0a02696418012001280912120a" .
"0a70726f70657274696573180220012809120c0a04706f6f6c1803200128" .
"09122e0a0a6174747269627574657318042003280b321a2e6d6573736167" .
"65732e506c617965722e4174747269627574651a280a0941747472696275" .
"7465120c0a046e616d65180120012809120d0a0576616c75651802200128" .
"0322280a06526573756c74120f0a0773756363657373180120012808120d" .
"0a056572726f7218022001280922090a07496c496e707574222b0a0e436f" .
"6e6e656374696f6e496e666f12190a11636f6e6e656374696f6e5f737472" .
"696e6718012001280922630a0b41737369676e6d656e747312210a07726f" .
"737465727318012003280b32102e6d657373616765732e526f7374657212" .
"310a0f636f6e6e656374696f6e5f696e666f18022001280b32182e6d6573" .
"73616765732e436f6e6e656374696f6e496e666f42375a35676974687562" .
"2e636f6d2f476f6f676c65436c6f7564506c6174666f726d2f6f70656e2d" .
"6d617463682f696e7465726e616c2f7062620670726f746f33"
));
static::$is_initialized = true;
}
}

View File

@ -0,0 +1,39 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/mmlogic.proto
namespace GPBMetadata\Api\ProtobufSpec;
class Mmlogic
{
public static $is_initialized = false;
public static function initOnce() {
$pool = \Google\Protobuf\Internal\DescriptorPool::getGeneratedPool();
if (static::$is_initialized == true) {
return;
}
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
$pool->internalAddGeneratedFile(hex2bin(
"0aab030a1f6170692f70726f746f6275662d737065632f6d6d6c6f676963" .
"2e70726f746f120361706932c1020a074d6d4c6f676963123c0a0a476574" .
"50726f66696c6512152e6d657373616765732e4d617463684f626a656374" .
"1a152e6d657373616765732e4d617463684f626a6563742200123b0a0e43" .
"726561746550726f706f73616c12152e6d657373616765732e4d61746368" .
"4f626a6563741a102e6d657373616765732e526573756c742200123f0a0d" .
"476574506c61796572506f6f6c12142e6d657373616765732e506c617965" .
"72506f6f6c1a142e6d657373616765732e506c61796572506f6f6c220030" .
"01123d0a14476574416c6c49676e6f726564506c617965727312112e6d65" .
"7373616765732e496c496e7075741a102e6d657373616765732e526f7374" .
"65722200123b0a124c69737449676e6f726564506c617965727312112e6d" .
"657373616765732e496c496e7075741a102e6d657373616765732e526f73" .
"746572220042375a356769746875622e636f6d2f476f6f676c65436c6f75" .
"64506c6174666f726d2f6f70656e2d6d617463682f696e7465726e616c2f" .
"7062620670726f746f33"
));
static::$is_initialized = true;
}
}

View File

@ -0,0 +1,85 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* Generated from protobuf message <code>messages.Assignments</code>
*/
class Assignments extends \Google\Protobuf\Internal\Message
{
/**
* Generated from protobuf field <code>repeated .messages.Roster rosters = 1;</code>
*/
private $rosters;
/**
* Generated from protobuf field <code>.messages.ConnectionInfo connection_info = 2;</code>
*/
private $connection_info = null;
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type \Messages\Roster[]|\Google\Protobuf\Internal\RepeatedField $rosters
* @type \Messages\ConnectionInfo $connection_info
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
/**
* Generated from protobuf field <code>repeated .messages.Roster rosters = 1;</code>
* @return \Google\Protobuf\Internal\RepeatedField
*/
public function getRosters()
{
return $this->rosters;
}
/**
* Generated from protobuf field <code>repeated .messages.Roster rosters = 1;</code>
* @param \Messages\Roster[]|\Google\Protobuf\Internal\RepeatedField $var
* @return $this
*/
public function setRosters($var)
{
$arr = GPBUtil::checkRepeatedField($var, \Google\Protobuf\Internal\GPBType::MESSAGE, \Messages\Roster::class);
$this->rosters = $arr;
return $this;
}
/**
* Generated from protobuf field <code>.messages.ConnectionInfo connection_info = 2;</code>
* @return \Messages\ConnectionInfo
*/
public function getConnectionInfo()
{
return $this->connection_info;
}
/**
* Generated from protobuf field <code>.messages.ConnectionInfo connection_info = 2;</code>
* @param \Messages\ConnectionInfo $var
* @return $this
*/
public function setConnectionInfo($var)
{
GPBUtil::checkMessage($var, \Messages\ConnectionInfo::class);
$this->connection_info = $var;
return $this;
}
}

View File

@ -0,0 +1,68 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* Simple message used to pass the connection string for the DGS to the player.
* DEPRECATED: Likely to be integrated into another protobuf message in a future version.
*
* Generated from protobuf message <code>messages.ConnectionInfo</code>
*/
class ConnectionInfo extends \Google\Protobuf\Internal\Message
{
/**
* Passed by the matchmaker to game clients without modification.
*
* Generated from protobuf field <code>string connection_string = 1;</code>
*/
private $connection_string = '';
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type string $connection_string
* Passed by the matchmaker to game clients without modification.
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
/**
* Passed by the matchmaker to game clients without modification.
*
* Generated from protobuf field <code>string connection_string = 1;</code>
* @return string
*/
public function getConnectionString()
{
return $this->connection_string;
}
/**
* Passed by the matchmaker to game clients without modification.
*
* Generated from protobuf field <code>string connection_string = 1;</code>
* @param string $var
* @return $this
*/
public function setConnectionString($var)
{
GPBUtil::checkString($var, True);
$this->connection_string = $var;
return $this;
}
}

View File

@ -0,0 +1,203 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* A 'hard' filter to apply to the player pool.
*
* Generated from protobuf message <code>messages.Filter</code>
*/
class Filter extends \Google\Protobuf\Internal\Message
{
/**
* Arbitrary developer-chosen, human-readable name of this filter. Appears in logs and metrics.
*
* Generated from protobuf field <code>string name = 1;</code>
*/
private $name = '';
/**
* Name of the player attribute this filter operates on.
*
* Generated from protobuf field <code>string attribute = 2;</code>
*/
private $attribute = '';
/**
* Maximum value. Defaults to positive infinity (any value above minv).
*
* Generated from protobuf field <code>int64 maxv = 3;</code>
*/
private $maxv = 0;
/**
* Minimum value. Defaults to 0.
*
* Generated from protobuf field <code>int64 minv = 4;</code>
*/
private $minv = 0;
/**
* Statistics for the last time the filter was applied.
*
* Generated from protobuf field <code>.messages.Stats stats = 5;</code>
*/
private $stats = null;
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type string $name
* Arbitrary developer-chosen, human-readable name of this filter. Appears in logs and metrics.
* @type string $attribute
* Name of the player attribute this filter operates on.
* @type int|string $maxv
* Maximum value. Defaults to positive infinity (any value above minv).
* @type int|string $minv
* Minimum value. Defaults to 0.
* @type \Messages\Stats $stats
* Statistics for the last time the filter was applied.
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
/**
* Arbitrary developer-chosen, human-readable name of this filter. Appears in logs and metrics.
*
* Generated from protobuf field <code>string name = 1;</code>
* @return string
*/
public function getName()
{
return $this->name;
}
/**
* Arbitrary developer-chosen, human-readable name of this filter. Appears in logs and metrics.
*
* Generated from protobuf field <code>string name = 1;</code>
* @param string $var
* @return $this
*/
public function setName($var)
{
GPBUtil::checkString($var, True);
$this->name = $var;
return $this;
}
/**
* Name of the player attribute this filter operates on.
*
* Generated from protobuf field <code>string attribute = 2;</code>
* @return string
*/
public function getAttribute()
{
return $this->attribute;
}
/**
* Name of the player attribute this filter operates on.
*
* Generated from protobuf field <code>string attribute = 2;</code>
* @param string $var
* @return $this
*/
public function setAttribute($var)
{
GPBUtil::checkString($var, True);
$this->attribute = $var;
return $this;
}
/**
* Maximum value. Defaults to positive infinity (any value above minv).
*
* Generated from protobuf field <code>int64 maxv = 3;</code>
* @return int|string
*/
public function getMaxv()
{
return $this->maxv;
}
/**
* Maximum value. Defaults to positive infinity (any value above minv).
*
* Generated from protobuf field <code>int64 maxv = 3;</code>
* @param int|string $var
* @return $this
*/
public function setMaxv($var)
{
GPBUtil::checkInt64($var);
$this->maxv = $var;
return $this;
}
/**
* Minimum value. Defaults to 0.
*
* Generated from protobuf field <code>int64 minv = 4;</code>
* @return int|string
*/
public function getMinv()
{
return $this->minv;
}
/**
* Minimum value. Defaults to 0.
*
* Generated from protobuf field <code>int64 minv = 4;</code>
* @param int|string $var
* @return $this
*/
public function setMinv($var)
{
GPBUtil::checkInt64($var);
$this->minv = $var;
return $this;
}
/**
* Statistics for the last time the filter was applied.
*
* Generated from protobuf field <code>.messages.Stats stats = 5;</code>
* @return \Messages\Stats
*/
public function getStats()
{
return $this->stats;
}
/**
* Statistics for the last time the filter was applied.
*
* Generated from protobuf field <code>.messages.Stats stats = 5;</code>
* @param \Messages\Stats $var
* @return $this
*/
public function setStats($var)
{
GPBUtil::checkMessage($var, \Messages\Stats::class);
$this->stats = $var;
return $this;
}
}

View File

@ -0,0 +1,33 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* IlInput is an empty message reserved for future use.
*
* Generated from protobuf message <code>messages.IlInput</code>
*/
class IlInput extends \Google\Protobuf\Internal\Message
{
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
}

View File

@ -0,0 +1,213 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* Open Match's internal representation and wire protocol format for "MatchObjects".
* In order to request a match using the Backend API, your backend code should generate
* a new MatchObject with an ID and properties filled in (for more details about valid
* values for these fields, see the documentation). Open Match then sends the Match
* Object through to your matchmaking function, where you add players to 'rosters' and
* store any schemaless data you wish in the 'properties' field. The MatchObject
* is then sent, populated, out through the Backend API to your backend code.
*
* MatchObjects contain a number of fields, but many gRPC calls that take a
* MatchObject as input only require a few of them to be filled in. Check the
* gRPC function in question for more details.
*
* Generated from protobuf message <code>messages.MatchObject</code>
*/
class MatchObject extends \Google\Protobuf\Internal\Message
{
/**
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
*/
private $id = '';
/**
* By convention, a JSON-encoded string
*
* Generated from protobuf field <code>string properties = 2;</code>
*/
private $properties = '';
/**
* Last error encountered.
*
* Generated from protobuf field <code>string error = 3;</code>
*/
private $error = '';
/**
* Rosters of players.
*
* Generated from protobuf field <code>repeated .messages.Roster rosters = 4;</code>
*/
private $rosters;
/**
* 'Hard' filters, and the players who match them.
*
* Generated from protobuf field <code>repeated .messages.PlayerPool pools = 5;</code>
*/
private $pools;
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type string $id
* By convention, an Xid
* @type string $properties
* By convention, a JSON-encoded string
* @type string $error
* Last error encountered.
* @type \Messages\Roster[]|\Google\Protobuf\Internal\RepeatedField $rosters
* Rosters of players.
* @type \Messages\PlayerPool[]|\Google\Protobuf\Internal\RepeatedField $pools
* 'Hard' filters, and the players who match them.
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
/**
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @return string
*/
public function getId()
{
return $this->id;
}
/**
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @param string $var
* @return $this
*/
public function setId($var)
{
GPBUtil::checkString($var, True);
$this->id = $var;
return $this;
}
/**
* By convention, a JSON-encoded string
*
* Generated from protobuf field <code>string properties = 2;</code>
* @return string
*/
public function getProperties()
{
return $this->properties;
}
/**
* By convention, a JSON-encoded string
*
* Generated from protobuf field <code>string properties = 2;</code>
* @param string $var
* @return $this
*/
public function setProperties($var)
{
GPBUtil::checkString($var, True);
$this->properties = $var;
return $this;
}
/**
* Last error encountered.
*
* Generated from protobuf field <code>string error = 3;</code>
* @return string
*/
public function getError()
{
return $this->error;
}
/**
* Last error encountered.
*
* Generated from protobuf field <code>string error = 3;</code>
* @param string $var
* @return $this
*/
public function setError($var)
{
GPBUtil::checkString($var, True);
$this->error = $var;
return $this;
}
/**
* Rosters of players.
*
* Generated from protobuf field <code>repeated .messages.Roster rosters = 4;</code>
* @return \Google\Protobuf\Internal\RepeatedField
*/
public function getRosters()
{
return $this->rosters;
}
/**
* Rosters of players.
*
* Generated from protobuf field <code>repeated .messages.Roster rosters = 4;</code>
* @param \Messages\Roster[]|\Google\Protobuf\Internal\RepeatedField $var
* @return $this
*/
public function setRosters($var)
{
$arr = GPBUtil::checkRepeatedField($var, \Google\Protobuf\Internal\GPBType::MESSAGE, \Messages\Roster::class);
$this->rosters = $arr;
return $this;
}
/**
* 'Hard' filters, and the players who match them.
*
* Generated from protobuf field <code>repeated .messages.PlayerPool pools = 5;</code>
* @return \Google\Protobuf\Internal\RepeatedField
*/
public function getPools()
{
return $this->pools;
}
/**
* 'Hard' filters, and the players who match them.
*
* Generated from protobuf field <code>repeated .messages.PlayerPool pools = 5;</code>
* @param \Messages\PlayerPool[]|\Google\Protobuf\Internal\RepeatedField $var
* @return $this
*/
public function setPools($var)
{
$arr = GPBUtil::checkRepeatedField($var, \Google\Protobuf\Internal\GPBType::MESSAGE, \Messages\PlayerPool::class);
$this->pools = $arr;
return $this;
}
}
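As a rough illustration of the conventions the MatchObject docblock describes (an Xid-style `id`, a JSON-encoded `properties` string, empty `rosters` and `pools` to be filled in later), a backend might assemble the request fields like this. The `build_match_request` helper and the dict layout are a sketch for illustration, not part of the generated code:

```python
import json

def build_match_request(match_id, properties_dict):
    """Assemble MatchObject-style request fields.

    'id' is an opaque string (by convention an Xid) and 'properties'
    is a schemaless JSON-encoded string, matching the generated class.
    """
    return {
        "id": match_id,
        "properties": json.dumps(properties_dict),
        "rosters": [],  # filled in by the matchmaking function
        "pools": [],    # 'hard' filters plus the players that match them
    }

req = build_match_request("b4u2kcrhkgttgopmg000", {"mode": "ctf", "maxWait": 30})
print(json.loads(req["properties"])["mode"])  # -> ctf
```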

@@ -0,0 +1,169 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* Data structure to hold details about a player
*
* Generated from protobuf message <code>messages.Player</code>
*/
class Player extends \Google\Protobuf\Internal\Message
{
/**
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
*/
private $id = '';
/**
* By convention, a JSON-encoded string
*
* Generated from protobuf field <code>string properties = 2;</code>
*/
private $properties = '';
/**
* Optionally used to specify the PlayerPool in which to find a player.
*
* Generated from protobuf field <code>string pool = 3;</code>
*/
private $pool = '';
/**
* Attributes of this player.
*
* Generated from protobuf field <code>repeated .messages.Player.Attribute attributes = 4;</code>
*/
private $attributes;
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type string $id
* By convention, an Xid
* @type string $properties
* By convention, a JSON-encoded string
* @type string $pool
* Optionally used to specify the PlayerPool in which to find a player.
* @type \Messages\Player\Attribute[]|\Google\Protobuf\Internal\RepeatedField $attributes
* Attributes of this player.
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
/**
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @return string
*/
public function getId()
{
return $this->id;
}
/**
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @param string $var
* @return $this
*/
public function setId($var)
{
GPBUtil::checkString($var, True);
$this->id = $var;
return $this;
}
/**
* By convention, a JSON-encoded string
*
* Generated from protobuf field <code>string properties = 2;</code>
* @return string
*/
public function getProperties()
{
return $this->properties;
}
/**
* By convention, a JSON-encoded string
*
* Generated from protobuf field <code>string properties = 2;</code>
* @param string $var
* @return $this
*/
public function setProperties($var)
{
GPBUtil::checkString($var, True);
$this->properties = $var;
return $this;
}
/**
* Optionally used to specify the PlayerPool in which to find a player.
*
* Generated from protobuf field <code>string pool = 3;</code>
* @return string
*/
public function getPool()
{
return $this->pool;
}
/**
* Optionally used to specify the PlayerPool in which to find a player.
*
* Generated from protobuf field <code>string pool = 3;</code>
* @param string $var
* @return $this
*/
public function setPool($var)
{
GPBUtil::checkString($var, True);
$this->pool = $var;
return $this;
}
/**
* Attributes of this player.
*
* Generated from protobuf field <code>repeated .messages.Player.Attribute attributes = 4;</code>
* @return \Google\Protobuf\Internal\RepeatedField
*/
public function getAttributes()
{
return $this->attributes;
}
/**
* Attributes of this player.
*
* Generated from protobuf field <code>repeated .messages.Player.Attribute attributes = 4;</code>
* @param \Messages\Player\Attribute[]|\Google\Protobuf\Internal\RepeatedField $var
* @return $this
*/
public function setAttributes($var)
{
$arr = GPBUtil::checkRepeatedField($var, \Google\Protobuf\Internal\GPBType::MESSAGE, \Messages\Player\Attribute::class);
$this->attributes = $arr;
return $this;
}
}

@@ -0,0 +1,95 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages\Player;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* Generated from protobuf message <code>messages.Player.Attribute</code>
*/
class Attribute extends \Google\Protobuf\Internal\Message
{
/**
* Name should match a Filter.attribute field.
*
* Generated from protobuf field <code>string name = 1;</code>
*/
private $name = '';
/**
* Generated from protobuf field <code>int64 value = 2;</code>
*/
private $value = 0;
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type string $name
* Name should match a Filter.attribute field.
* @type int|string $value
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
/**
* Name should match a Filter.attribute field.
*
* Generated from protobuf field <code>string name = 1;</code>
* @return string
*/
public function getName()
{
return $this->name;
}
/**
* Name should match a Filter.attribute field.
*
* Generated from protobuf field <code>string name = 1;</code>
* @param string $var
* @return $this
*/
public function setName($var)
{
GPBUtil::checkString($var, True);
$this->name = $var;
return $this;
}
/**
* Generated from protobuf field <code>int64 value = 2;</code>
* @return int|string
*/
public function getValue()
{
return $this->value;
}
/**
* Generated from protobuf field <code>int64 value = 2;</code>
* @param int|string $var
* @return $this
*/
public function setValue($var)
{
GPBUtil::checkInt64($var);
$this->value = $var;
return $this;
}
}
// Adding a class alias for backwards compatibility with the previous class name.
class_alias(Attribute::class, \Messages\Player_Attribute::class);

@@ -0,0 +1,173 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* PlayerPools are defined by a set of 'hard' filters, and can be filled in
* with the players that match those filters.
* PlayerPools contain a number of fields, but many gRPC calls that take a
* PlayerPool as input only require a few of them to be filled in. Check the
* gRPC function in question for more details.
*
* Generated from protobuf message <code>messages.PlayerPool</code>
*/
class PlayerPool extends \Google\Protobuf\Internal\Message
{
/**
* Arbitrary developer-chosen, human-readable string.
*
* Generated from protobuf field <code>string name = 1;</code>
*/
private $name = '';
/**
* Filters are logical AND-ed (a player must match every filter).
*
* Generated from protobuf field <code>repeated .messages.Filter filters = 2;</code>
*/
private $filters;
/**
* Roster of players that match all filters.
*
* Generated from protobuf field <code>.messages.Roster roster = 3;</code>
*/
private $roster = null;
/**
* Statistics for the last time this Pool was retrieved from state storage.
*
* Generated from protobuf field <code>.messages.Stats stats = 4;</code>
*/
private $stats = null;
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type string $name
* Arbitrary developer-chosen, human-readable string.
* @type \Messages\Filter[]|\Google\Protobuf\Internal\RepeatedField $filters
* Filters are logical AND-ed (a player must match every filter).
* @type \Messages\Roster $roster
* Roster of players that match all filters.
* @type \Messages\Stats $stats
* Statistics for the last time this Pool was retrieved from state storage.
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
/**
* Arbitrary developer-chosen, human-readable string.
*
* Generated from protobuf field <code>string name = 1;</code>
* @return string
*/
public function getName()
{
return $this->name;
}
/**
* Arbitrary developer-chosen, human-readable string.
*
* Generated from protobuf field <code>string name = 1;</code>
* @param string $var
* @return $this
*/
public function setName($var)
{
GPBUtil::checkString($var, True);
$this->name = $var;
return $this;
}
/**
* Filters are logical AND-ed (a player must match every filter).
*
* Generated from protobuf field <code>repeated .messages.Filter filters = 2;</code>
* @return \Google\Protobuf\Internal\RepeatedField
*/
public function getFilters()
{
return $this->filters;
}
/**
* Filters are logical AND-ed (a player must match every filter).
*
* Generated from protobuf field <code>repeated .messages.Filter filters = 2;</code>
* @param \Messages\Filter[]|\Google\Protobuf\Internal\RepeatedField $var
* @return $this
*/
public function setFilters($var)
{
$arr = GPBUtil::checkRepeatedField($var, \Google\Protobuf\Internal\GPBType::MESSAGE, \Messages\Filter::class);
$this->filters = $arr;
return $this;
}
/**
* Roster of players that match all filters.
*
* Generated from protobuf field <code>.messages.Roster roster = 3;</code>
* @return \Messages\Roster
*/
public function getRoster()
{
return $this->roster;
}
/**
* Roster of players that match all filters.
*
* Generated from protobuf field <code>.messages.Roster roster = 3;</code>
* @param \Messages\Roster $var
* @return $this
*/
public function setRoster($var)
{
GPBUtil::checkMessage($var, \Messages\Roster::class);
$this->roster = $var;
return $this;
}
/**
* Statistics for the last time this Pool was retrieved from state storage.
*
* Generated from protobuf field <code>.messages.Stats stats = 4;</code>
* @return \Messages\Stats
*/
public function getStats()
{
return $this->stats;
}
/**
* Statistics for the last time this Pool was retrieved from state storage.
*
* Generated from protobuf field <code>.messages.Stats stats = 4;</code>
* @param \Messages\Stats $var
* @return $this
*/
public function setStats($var)
{
GPBUtil::checkMessage($var, \Messages\Stats::class);
$this->stats = $var;
return $this;
}
}
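A minimal sketch of the "logical AND" semantics the PlayerPool docblock describes: a player belongs to the pool only if every filter matches. The `(name, minimum, maximum)` shape used for a filter here is an assumption, since the Filter message itself is not shown in this diff:

```python
def player_matches_pool(attributes, filters):
    """Return True only if the player satisfies every filter (logical AND).

    'attributes' maps attribute name -> int value (cf. messages.Player.Attribute);
    each filter is assumed to be a (name, minimum, maximum) range.
    """
    return all(
        name in attributes and lo <= attributes[name] <= hi
        for name, lo, hi in filters
    )

pool_filters = [("mmr", 1000, 1500), ("ping", 0, 80)]
print(player_matches_pool({"mmr": 1200, "ping": 40}, pool_filters))  # True
print(player_matches_pool({"mmr": 1200, "ping": 95}, pool_filters))  # False
```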

@@ -0,0 +1,16 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
if (false) {
/**
* This class is deprecated. Use Messages\Player\Attribute instead.
* @deprecated
*/
class Player_Attribute {}
}
class_exists(Player\Attribute::class);
@trigger_error('Messages\Player_Attribute is deprecated and will be removed in the next major release. Use Messages\Player\Attribute instead', E_USER_DEPRECATED);

@@ -0,0 +1,87 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* Simple message to return success/failure and error status.
*
* Generated from protobuf message <code>messages.Result</code>
*/
class Result extends \Google\Protobuf\Internal\Message
{
/**
* Generated from protobuf field <code>bool success = 1;</code>
*/
private $success = false;
/**
* Generated from protobuf field <code>string error = 2;</code>
*/
private $error = '';
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type bool $success
* @type string $error
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
/**
* Generated from protobuf field <code>bool success = 1;</code>
* @return bool
*/
public function getSuccess()
{
return $this->success;
}
/**
* Generated from protobuf field <code>bool success = 1;</code>
* @param bool $var
* @return $this
*/
public function setSuccess($var)
{
GPBUtil::checkBool($var);
$this->success = $var;
return $this;
}
/**
* Generated from protobuf field <code>string error = 2;</code>
* @return string
*/
public function getError()
{
return $this->error;
}
/**
* Generated from protobuf field <code>string error = 2;</code>
* @param string $var
* @return $this
*/
public function setError($var)
{
GPBUtil::checkString($var, True);
$this->error = $var;
return $this;
}
}

@@ -0,0 +1,101 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* Data structure to hold a list of players in a match.
*
* Generated from protobuf message <code>messages.Roster</code>
*/
class Roster extends \Google\Protobuf\Internal\Message
{
/**
* Arbitrary developer-chosen, human-readable string. By convention, set to team name.
*
* Generated from protobuf field <code>string name = 1;</code>
*/
private $name = '';
/**
* Player profiles on this roster.
*
* Generated from protobuf field <code>repeated .messages.Player players = 2;</code>
*/
private $players;
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type string $name
* Arbitrary developer-chosen, human-readable string. By convention, set to team name.
* @type \Messages\Player[]|\Google\Protobuf\Internal\RepeatedField $players
* Player profiles on this roster.
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
/**
* Arbitrary developer-chosen, human-readable string. By convention, set to team name.
*
* Generated from protobuf field <code>string name = 1;</code>
* @return string
*/
public function getName()
{
return $this->name;
}
/**
* Arbitrary developer-chosen, human-readable string. By convention, set to team name.
*
* Generated from protobuf field <code>string name = 1;</code>
* @param string $var
* @return $this
*/
public function setName($var)
{
GPBUtil::checkString($var, True);
$this->name = $var;
return $this;
}
/**
* Player profiles on this roster.
*
* Generated from protobuf field <code>repeated .messages.Player players = 2;</code>
* @return \Google\Protobuf\Internal\RepeatedField
*/
public function getPlayers()
{
return $this->players;
}
/**
* Player profiles on this roster.
*
* Generated from protobuf field <code>repeated .messages.Player players = 2;</code>
* @param \Messages\Player[]|\Google\Protobuf\Internal\RepeatedField $var
* @return $this
*/
public function setPlayers($var)
{
$arr = GPBUtil::checkRepeatedField($var, \Google\Protobuf\Internal\GPBType::MESSAGE, \Messages\Player::class);
$this->players = $arr;
return $this;
}
}

@@ -0,0 +1,101 @@
<?php
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/protobuf-spec/messages.proto
namespace Messages;
use Google\Protobuf\Internal\GPBType;
use Google\Protobuf\Internal\RepeatedField;
use Google\Protobuf\Internal\GPBUtil;
/**
* Holds statistics
*
* Generated from protobuf message <code>messages.Stats</code>
*/
class Stats extends \Google\Protobuf\Internal\Message
{
/**
* Number of results.
*
* Generated from protobuf field <code>int64 count = 1;</code>
*/
private $count = 0;
/**
* How long it took to get the results.
*
* Generated from protobuf field <code>double elapsed = 2;</code>
*/
private $elapsed = 0.0;
/**
* Constructor.
*
* @param array $data {
* Optional. Data for populating the Message object.
*
* @type int|string $count
* Number of results.
* @type float $elapsed
* How long it took to get the results.
* }
*/
public function __construct($data = NULL) {
\GPBMetadata\Api\ProtobufSpec\Messages::initOnce();
parent::__construct($data);
}
/**
* Number of results.
*
* Generated from protobuf field <code>int64 count = 1;</code>
* @return int|string
*/
public function getCount()
{
return $this->count;
}
/**
* Number of results.
*
* Generated from protobuf field <code>int64 count = 1;</code>
* @param int|string $var
* @return $this
*/
public function setCount($var)
{
GPBUtil::checkInt64($var);
$this->count = $var;
return $this;
}
/**
* How long it took to get the results.
*
* Generated from protobuf field <code>double elapsed = 2;</code>
* @return float
*/
public function getElapsed()
{
return $this->elapsed;
}
/**
* How long it took to get the results.
*
* Generated from protobuf field <code>double elapsed = 2;</code>
* @param float $var
* @return $this
*/
public function setElapsed($var)
{
GPBUtil::checkDouble($var);
$this->elapsed = $var;
return $this;
}
}
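A sketch of how the two Stats fields would typically be populated: `count` from the number of results and `elapsed` (a double, in seconds) from a wall-clock measurement around the query. The `timed_query` helper is hypothetical:

```python
import time

def timed_query(fn):
    """Run fn() and return (results, stats), where stats mirrors
    messages.Stats: a 'count' of results and 'elapsed' seconds."""
    start = time.monotonic()
    results = fn()
    return results, {"count": len(results), "elapsed": time.monotonic() - start}

results, stats = timed_query(lambda: ["p1", "p2", "p3"])
print(stats["count"])  # 3
```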

@@ -67,8 +67,8 @@ with grpc.insecure_channel(api_conn_info) as channel:
for player in partial_results.roster.players:
if not player.id in player_pools[empty_pool.name]:
player_pools[empty_pool.name][player.id] = dict()
for prop in player.properties:
player_pools[empty_pool.name][player.id][prop.name] = prop.value
for attr in player.attributes:
player_pools[empty_pool.name][player.id][attr.name] = attr.value
except Exception as err:
print("Error encountered: %s" % err)
if cfg['debug']:

@@ -32,6 +32,7 @@ def makeMatches(profile_dict, player_pools):
for player in roster['players']:
if 'pool' in player:
player['id'] = random.choice(list(player_pools[player['pool']]))
del player_pools[player['pool']][player['id']]
print("Selected player %s from pool %s (strategy: RANDOM)" % (player['id'], player['pool']))
else:
print(player)
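The hunk above adds a `del` so a randomly selected player is removed from the pool and cannot be handed to another roster. In isolation, the selection strategy looks like this (hypothetical helper over a plain pool dict):

```python
import random

def pick_player(pool):
    """Randomly select a player id from a pool dict and remove it,
    so the same player cannot be assigned to another roster."""
    player_id = random.choice(list(pool))
    del pool[player_id]
    return player_id

random.seed(0)
pool = {"p1": {}, "p2": {}, "p3": {}}
chosen = pick_player(pool)
print(chosen, len(pool))  # one player gone from the pool
```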

@@ -1,66 +0,0 @@
#! /usr/bin/env python3
#Copyright 2018 Google LLC
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
import os
import grpc
import customlogic
import mmlogic_pb2
import mmlogic_pb2_grpc
#import simplejson as json
import ujson as json
import pprint as pp
cfg = ''
# Load config file
with open("matchmaker_config.json") as f:
cfg = json.loads(f.read())
api_conn_info = "%s:%d" % (cfg['api']['mmlogic']['hostname'], cfg['api']['mmlogic']['port'])
# Step 2 - Talk to Redis. This example uses the MM Logic API in OM to read/write to/from redis.
with grpc.insecure_channel(api_conn_info) as channel:
mmlogic_api = mmlogic_pb2_grpc.APIStub(channel)
# Step 3 - Read the profile written to the Backend API.
profile_dict = json.loads(mmlogic_api.GetProfile(mmlogic_pb2.Profile(id=os.environ["MMF_PROFILE_ID"])))
# Step 4 - Select the player data from Redis that we want for our matchmaking logic.
player_pools = dict() # holds pools returned by the associated filter
for p in profile_dict['properties']['playerPools']:
player_pools[p['id']] =mmlogic_api.GetPlayerPool(mmlogic_pb2.JsonFilterSet(id=p['id'],json=json.dumps(p)))
# Step 5 - Run custom matchmaking logic to try to find a match
# This is in the file customlogic.py
match_properties = json.dumps(customlogic.makeMatches(profile_dict, player_pools))
# Step 6 - Write the outcome of the matchmaking logic back to state storage.
# Step 7 - Remove the selected players from consideration by other MMFs.
# CreateProposal does both of these for you, and some other items as well.
success = mmlogic_api.CreateProposal(mmlogic_pb2.MMFResults(
id = os.environ["MMF_PROPOSAL_ID"],
matchobject = mmlogic_pb2.MatchObject(
properties = match_properties,
),
roster = mmlogic_pb2.Roster(
id = os.environ["MMF_ROSTER_ID"],
player = match_properties[cfg['jsonKeys']['roster']],
),
)
)
# [OPTIONAL] Step 8 - Export stats about this run.

@@ -0,0 +1,152 @@
apiVersion: storage.spotahome.com/v1alpha2
kind: RedisFailover
metadata:
name: redisfailover
labels:
tier: storage
spec:
hardAntiAffinity: true # Optional. Value by default. If true, the pods will not be scheduled on the same node.
sentinel:
replicas: 3 # Optional. 3 by default, can be set higher.
resources: # Optional. If not set, it won't be defined on created resources.
requests:
cpu: 100m
limits:
memory: 100Mi
customConfig: [] # Optional. Empty by default.
redis:
replicas: 3 # Optional. 3 by default, can be set higher.
image: redis # Optional. "redis" by default.
version: 4.0.11-alpine # Optional. "3.2-alpine" by default.
resources: # Optional. If not set, it won't be defined on created resources
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 400m
memory: 500Mi
exporter: false # Optional. False by default. Adds a redis-exporter container to export metrics.
exporterImage: oliver006/redis_exporter # Optional. oliver006/redis_exporter by default.
exporterVersion: v0.11.3 # Optional. v0.11.3 by default.
disableExporterProbes: false # Optional. False by default. Disables the readiness and liveness probes for the exporter.
storage:
emptyDir: {} # Optional. emptyDir by default.
customConfig: [] # Optional. Empty by default.
---
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-master-proxy-configmap
data:
haproxy.cfg: |
defaults REDIS
mode tcp
timeout connect 5s
timeout client 61s # should respect 'redis.pool.idleTimeout' from open-match config
timeout server 61s
global
stats socket ipv4@127.0.0.1:9999 level admin
stats timeout 2m
frontend fe_redis
bind *:17000 name redis
default_backend be_redis
backend be_redis
server redis-master-serv 127.0.0.1:6379
redis-master-finder.sh: |
#!/bin/sh
set -e
set -u
SENTINEL_HOST="rfs-redisfailover" # change this if RedisFailover name changes
LAST_MASTER_IP=""
LAST_MASTER_PORT=""
update_master_addr() {
# lookup current master address
local r="SENTINEL get-master-addr-by-name mymaster"
local r_out=$(echo $r | nc -q1 $SENTINEL_HOST 26379)
# parse output
local master_ip=$(echo "${r_out}" | tail -n+3 | head -n1 | tr -d '\r') # IP is on 3rd line
local master_port=$(echo "${r_out}" | tail -n+5 | head -n1 | tr -d '\r') # 5th line is port number
# update HAProxy cfg if needed
if [ "$master_ip" != "$LAST_MASTER_IP" ] || [ "$master_port" != "$LAST_MASTER_PORT" ]; then
local s="set server be_redis/redis-master-serv addr ${master_ip} port ${master_port}"
echo $s | nc 127.0.0.1 9999 # haproxy is in the same pod
LAST_MASTER_IP=$master_ip
LAST_MASTER_PORT=$master_port
echo "New master address is ${LAST_MASTER_IP}:${LAST_MASTER_PORT}"
fi
}
while :; do update_master_addr; sleep 1; done
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: redis-master-proxy
labels:
app: openmatch
component: redis
tier: storage
spec:
replicas: 1
selector:
matchLabels:
app: openmatch
component: redis
tier: storage
template:
metadata:
labels:
app: openmatch
component: redis
tier: storage
spec:
volumes:
- name: configmap
configMap:
name: redis-master-proxy-configmap
defaultMode: 0700
containers:
- name: redis-master-haproxy
image: haproxy:1.8-alpine
ports:
- name: haproxy
containerPort: 17000
- name: haproxy-stats
containerPort: 9999
volumeMounts:
- name: configmap
mountPath: /usr/local/etc/haproxy/haproxy.cfg
subPath: haproxy.cfg
- name: redis-master-finder
image: subfuzion/netcat # alpine image with only netcat-openbsd installed
imagePullPolicy: Always
command: ["redis-master-finder.sh"]
volumeMounts:
- name: configmap
mountPath: /usr/local/bin/redis-master-finder.sh
subPath: redis-master-finder.sh
resources:
requests:
memory: 20Mi
cpu: 100m
---
kind: Service
apiVersion: v1
metadata:
name: redis
spec:
selector:
app: openmatch
component: redis
tier: storage
ports:
- protocol: TCP
port: 6379
targetPort: haproxy
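The `tail -n+3 | head -n1` and `tail -n+5 | head -n1` pipeline in `redis-master-finder.sh` works because the RESP reply to `SENTINEL get-master-addr-by-name` is an array of two bulk strings, which puts the IP on line 3 and the port on line 5. The same parse in Python (illustrative only):

```python
def parse_master_addr(resp_reply):
    """Parse the RESP reply to SENTINEL get-master-addr-by-name.

    The reply is an array of two bulk strings, so after splitting on
    CRLF the IP sits on line 3 and the port on line 5 -- exactly what
    the tail/head pipeline in redis-master-finder.sh extracts.
    """
    lines = resp_reply.split("\r\n")
    return lines[2], int(lines[4])

reply = "*2\r\n$8\r\n10.0.0.5\r\n$4\r\n6379\r\n"
print(parse_master_addr(reply))  # ('10.0.0.5', 6379)
```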

@@ -2,7 +2,7 @@
kind: Service
apiVersion: v1
metadata:
name: redis-sentinel
name: redis
spec:
selector:
app: mm

@@ -33,7 +33,7 @@ spec:
spec:
containers:
- name: om-backend
image: gcr.io/unite-au-demo/openmatch-backendapi:dev
image: gcr.io/open-match-public-images/openmatch-backendapi:dev
imagePullPolicy: Always
ports:
- name: grpc
@@ -79,7 +79,7 @@ spec:
spec:
containers:
- name: om-frontendapi
image: gcr.io/unite-au-demo/openmatch-frontendapi:dev
image: gcr.io/open-match-public-images/openmatch-frontendapi:dev
imagePullPolicy: Always
ports:
- name: grpc
@@ -125,7 +125,7 @@ spec:
spec:
containers:
- name: om-mmforc
image: gcr.io/unite-au-demo/openmatch-mmforc:dev
image: gcr.io/open-match-public-images/openmatch-mmforc:dev
imagePullPolicy: Always
ports:
- name: metrics
@@ -139,3 +139,49 @@
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: om-mmlogicapi
labels:
app: openmatch
component: mmlogic
spec:
replicas: 1
selector:
matchLabels:
app: openmatch
component: mmlogic
template:
metadata:
labels:
app: openmatch
component: mmlogic
spec:
containers:
- name: om-mmlogic
image: gcr.io/open-match-public-images/openmatch-mmlogicapi:dev
imagePullPolicy: Always
ports:
- name: grpc
containerPort: 50503
- name: metrics
containerPort: 9555
resources:
requests:
memory: 100Mi
cpu: 100m
---
kind: Service
apiVersion: v1
metadata:
name: om-mmlogicapi
spec:
selector:
app: openmatch
component: mmlogic
ports:
- protocol: TCP
port: 50503
targetPort: grpc

install/yaml/README.md (new file, 73 lines)

@@ -0,0 +1,73 @@
# install/yaml
This directory contains Kubernetes YAML resource definitions, which should be applied in filename order. Only Redis and Open Match are required; Prometheus is optional.
```
kubectl apply -f 01-redis.yaml
kubectl apply -f 02-open-match.yaml
```
**Note**: On GKE, applying the Kubernetes Prometheus Operator resource definition files fails unless your account has a cluster-admin rolebinding; create one first with the following command. See https://github.com/coreos/prometheus-operator/issues/357
```
kubectl create clusterrolebinding projectowner-cluster-admin-binding --clusterrole=cluster-admin --user=<GCP_ACCOUNT>
```
```
kubectl apply -f 03-prometheus.yaml
```
[There is a known dependency ordering issue when applying the Prometheus resource; just wait a couple moments and apply it again.](https://github.com/GoogleCloudPlatform/open-match/issues/46)
[Accurate as of v0.2.0] If everything succeeded, the output of `kubectl get all` should look something like this:
```
NAME READY STATUS RESTARTS AGE
pod/om-backendapi-84bc9d8fff-q89kr 1/1 Running 0 9m
pod/om-frontendapi-55d5bb7946-c5ccb 1/1 Running 0 9m
pod/om-mmforc-85bfd7f4f6-wmwhc 1/1 Running 0 9m
pod/om-mmlogicapi-6488bc7fc6-g74dm 1/1 Running 0 9m
pod/prometheus-operator-5c8774cdd8-7c5qm 1/1 Running 0 9m
pod/prometheus-prometheus-0 2/2 Running 0 9m
pod/redis-master-9b6b86c46-b7ggn 1/1 Running 0 9m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 19m
service/om-backend-metrics ClusterIP 10.59.254.43 <none> 29555/TCP 9m
service/om-backendapi ClusterIP 10.59.240.211 <none> 50505/TCP 9m
service/om-frontend-metrics ClusterIP 10.59.246.228 <none> 19555/TCP 9m
service/om-frontendapi ClusterIP 10.59.250.59 <none> 50504/TCP 9m
service/om-mmforc-metrics ClusterIP 10.59.240.59 <none> 39555/TCP 9m
service/om-mmlogicapi ClusterIP 10.59.248.3 <none> 50503/TCP 9m
service/prometheus NodePort 10.59.252.212 <none> 9090:30900/TCP 9m
service/prometheus-operated ClusterIP None <none> 9090/TCP 9m
service/redis ClusterIP 10.59.249.197 <none> 6379/TCP 9m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/om-backendapi 1 1 1 1 9m
deployment.extensions/om-frontendapi 1 1 1 1 9m
deployment.extensions/om-mmforc 1 1 1 1 9m
deployment.extensions/om-mmlogicapi 1 1 1 1 9m
deployment.extensions/prometheus-operator 1 1 1 1 9m
deployment.extensions/redis-master 1 1 1 1 9m
NAME DESIRED CURRENT READY AGE
replicaset.extensions/om-backendapi-84bc9d8fff 1 1 1 9m
replicaset.extensions/om-frontendapi-55d5bb7946 1 1 1 9m
replicaset.extensions/om-mmforc-85bfd7f4f6 1 1 1 9m
replicaset.extensions/om-mmlogicapi-6488bc7fc6 1 1 1 9m
replicaset.extensions/prometheus-operator-5c8774cdd8 1 1 1 9m
replicaset.extensions/redis-master-9b6b86c46 1 1 1 9m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/om-backendapi 1 1 1 1 9m
deployment.apps/om-frontendapi 1 1 1 1 9m
deployment.apps/om-mmforc 1 1 1 1 9m
deployment.apps/om-mmlogicapi 1 1 1 1 9m
deployment.apps/prometheus-operator 1 1 1 1 9m
deployment.apps/redis-master 1 1 1 1 9m
NAME DESIRED CURRENT READY AGE
replicaset.apps/om-backendapi-84bc9d8fff 1 1 1 9m
replicaset.apps/om-frontendapi-55d5bb7946 1 1 1 9m
replicaset.apps/om-mmforc-85bfd7f4f6 1 1 1 9m
replicaset.apps/om-mmlogicapi-6488bc7fc6 1 1 1 9m
replicaset.apps/prometheus-operator-5c8774cdd8 1 1 1 9m
replicaset.apps/redis-master-9b6b86c46 1 1 1 9m
NAME DESIRED CURRENT AGE
statefulset.apps/prometheus-prometheus 1 1 9m
```

Some files were not shown because too many files have changed in this diff.