Compare commits


58 Commits

SHA1 Message Date
56e08e82d4 Revert accidental file type change 2019-01-14 09:32:13 -05:00
2df027c9f6 Bold release numbers 2019-01-10 00:28:31 -05:00
913af84931 Use public repo URL 2019-01-09 02:18:53 -05:00
de6064f9fd Use public repo URL 2019-01-09 02:18:22 -05:00
867c55a409 Fix registry URL and add symlink issue 2019-01-09 02:15:11 -05:00
36420be2ce Revert accidental removal of symlink 2019-01-09 02:14:32 -05:00
16e9dda64a Bugfix for no commandline args 2019-01-09 02:14:07 -05:00
1ef9a896bf Revert accidental commit of empty file 2019-01-09 02:13:30 -05:00
75f2b84ded Up default timeout 2019-01-09 02:03:47 -05:00
2268baf1ba revert accidential commit of local change 2019-01-09 02:00:36 -05:00
9e43d989ea Remove debug sleep command 2019-01-09 00:10:47 -05:00
869725baee Bump k8s version 2019-01-08 23:56:07 -05:00
ae26ac3cd3 Merge remote-tracking branch 'origin/master' into 030wip 2019-01-08 23:41:55 -05:00
826af77396 Point to public registry and update tag 2019-01-08 23:37:38 -05:00
294d03e18b Roadmap 2019-01-08 22:39:08 -05:00
b27116aedd 030 RC2 2019-01-08 02:19:53 -05:00
074c0584f5 030 RC1 issue thread updates https://github.com/GoogleCloudPlatform/open-match/pull/55 2019-01-07 23:35:42 -05:00
210e00703a production guide now has placeholder notes, low hanging fruit 2019-01-07 23:35:14 -05:00
3ffbddbdd8 Updates to add optional TTL to redis objects 2019-01-05 23:37:38 -05:00
5f827b5c7c doesn't work 2019-01-05 23:01:33 -05:00
a161e6dba9 030 WIP first pass 2018-12-30 05:31:49 -05:00
7e70683d9b fix broken sed command 2018-12-30 04:34:27 -05:00
38bd94c078 Merge NoFr1ends commit 6a5dc1c 2018-12-30 04:16:48 -05:00
83366498d3 Update Docs 2018-12-30 03:45:39 -05:00
929e089e4d rename api call 2018-12-30 03:35:25 -05:00
a6b56b19d2 Merge branch to address issue 2018-12-28 04:01:59 -05:00
c2b6fdc198 Updates to FEClient and protos 2018-12-28 02:48:03 -05:00
43a4f046f0 Update config 2018-12-27 03:14:40 -05:00
b79bc2591c Remove references to connstring 2018-12-27 03:07:26 -05:00
61198fd168 No unused code 2018-12-27 03:04:18 -05:00
c1dd3835fe Updated logging 2018-12-27 02:55:16 -05:00
f3c9e87653 updates to documentation and builds 2018-12-27 02:28:43 -05:00
0064116c34 Further deletion and fix indexing for empty fields 2018-12-27 02:09:20 -05:00
298fe18f29 Updates to player deletion logic, metadata indices 2018-12-27 01:27:39 -05:00
6c539ab2a4 Remove manual filenames in logs 2018-12-26 07:43:54 -05:00
b6c59a7a0a Player watcher for FEAPI brought over from Doodle 2018-12-26 07:29:28 -05:00
f0536cedde Merge Ilya's updates 2018-12-26 00:18:00 -05:00
48fa4ba962 Update Redis HA details 2018-12-25 23:58:54 -05:00
39ff99b65e rename 'redis-sentinel' to just 'redis' 2018-12-26 13:51:24 +09:00
78c7b3b949 redis failover deployment 2018-12-26 13:51:24 +09:00
6a5dc1c508 Fix typo in development guide 2018-12-26 13:49:54 +09:00
9f84ec9bc9 First pass. Works but hacky. 2018-12-25 23:47:30 -05:00
e48b7db56f Fix parsing of empty matchobject fields 2018-12-26 13:45:40 +09:00
bffd54727c Merge branch 'udptest' into test_agones 2018-12-19 02:59:04 -05:00
ab90f5f6e0 got udp test workign 2018-12-19 02:56:20 -05:00
632415c746 simple udp client & server to integrate with agones 2018-12-18 23:58:02 +03:00
0882c63eb1 Update messages; more redis code sequestered to redis module 2018-12-16 08:12:42 -05:00
ee6716c60e Merge PL 47 2018-12-15 23:56:35 -05:00
bb5ad8a596 Merge 951bc8509d5eb8fceb138135c001c6a7b7f9bb25 into 275fa2d125e91fd25981124387f6388431f73874 2018-12-15 19:32:28 +00:00
951bc8509d Remove strings import as it's no longer used 2018-12-15 14:11:31 -05:00
ab8cd21633 Update to use Xid instead of UUID. 2018-12-15 14:11:05 -05:00
721cd2f7ae Still needs make file or the like and updated instructions 2018-12-10 14:05:00 +09:00
13cd1da631 Merge remote-tracking branch 'origin/json-logging' into feupdate 2018-12-06 23:28:35 -05:00
275fa2d125 Awkward wording 2018-12-07 13:17:39 +09:00
486c64798b Merge tag '020rc2' into feupdate 2018-12-06 02:14:58 -05:00
52f9e2810f WIP indexing 2018-11-28 04:10:08 -05:00
db60d7ac5f Merge from 0.2.0 2018-11-28 02:23:26 -05:00
326dd6c6dd Add logging config to support json and level selection for logrus 2018-11-17 16:11:33 -08:00
107 changed files with 4042 additions and 2669 deletions
.gitignore, CHANGELOG.md, README.md
api/protobuf-spec
cloudbuild_base.yaml, cloudbuild_mmf_go.yaml, cloudbuild_mmf_php.yaml, cloudbuild_mmf_py3.yaml
cmd
config
deployments/k8s
docs
examples
install/yaml
internal
test/cmd

.gitignore

@ -27,6 +27,7 @@ populations
# Discarded code snippets
build.sh
*-fast.yaml
detritus/
# Dotnet Core ignores
*.swp

@ -1,20 +1,41 @@
# Release history
##v0.2.0 (alpha)
## v0.3.0 (alpha)
This update is focused on the Frontend API and Player Records, including more robust code for indexing, deindexing, reading, writing, and expiring player requests from Open Match state storage. All Frontend API function arguments have changed, although many only slightly. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!
### Release notes
- The Frontend API calls have all been changed to reflect the fact that they operate on Players in state storage. To queue a game client, call 'CreatePlayer' in Open Match; to get updates, call 'GetUpdates'; and to stop matching, call 'DeletePlayer'. The calls are now much more obviously related to how Open Match sees players: they are database records that it creates on demand, updates using MMFs and the Backend API, and deletes when the player is no longer looking for a match.
- The Player record in state storage has changed to a more complete hash format, and it no longer makes sense to remove a player's assignment from the Frontend as a separate action to removing their record entirely. `DeleteAssignment()` has therefore been removed. Just use `DeletePlayer` instead; you'll always want the client to re-request matching with its latest attributes anyway.
- There is now a module for [indexing and deindexing players in state storage](internal/statestorage/redis/playerindices/playerindices.go). This is *much* more efficient, as well as cleaner and more maintainable, than the previous implementation, which was **hard-coded to index everything** you passed in to the Frontend API at a specific JSON object depth.
- This paves the way for dynamically choosing your indices without restarting the matchmaker. This will be implemented if there is demand. Pull Requests are welcome!
- Two internal timestamp-based indices have replaced the previous `timestamp` index. `created` is used to calculate how long a player has been waiting for a match; `accessed` is used to determine when a player needs to be expired out of state storage. Both are prefixed by the string `OM_METADATA`, so they should be easy to spot.
- A call to the Frontend API `GetUpdates()` gRPC endpoint returns a stream of player messages. This is used to send updates to state storage for the `Assignment`, `Status`, and `Error` Player fields in near-realtime. **It is the responsibility of the game client to disconnect** from the stream when it has gotten the results it was waiting for!
- Moved the rest of the gRPC messages into a shared [`messages.proto` file](api/protobuf-spec/messages.proto).
- Added documentation to Frontend API gRPC calls to the [`frontend.proto` file](api/protobuf-spec/frontend.proto).
- [Issue #41](https://github.com/GoogleCloudPlatform/open-match/issues/41)|[PR #48](https://github.com/GoogleCloudPlatform/open-match/pull/48) There is now an HA Redis install available in `install/yaml/01-redis-failover.yaml`. This would be used as a drop-in replacement for the single-instance Redis configuration in `install/yaml/01-redis.yaml`. The HA configuration requires that you install the [Redis Operator](https://github.com/spotahome/redis-operator) (note: **currently alpha**, use at your own risk) in your Kubernetes cluster.
- As part of this change, the kubernetes service name is now `redis` not `redis-sentinel` to denote that it is accessed using a standard Redis client.
- Open Match uses a new feature of the Go module [logrus](https://github.com/sirupsen/logrus) to include filenames and line numbers. If you have an older version in your local build environment, you may need to delete the module and `go get github.com/sirupsen/logrus` again. When building using the provided `cloudbuild.yaml` and `Dockerfile`s, this is handled for you.
- The program that was formerly in `examples/frontendclient` has been expanded and moved to the `test` directory under [`test/cmd/frontendclient/`](test/cmd/frontendclient/).
- The client load generator program has been moved from `test/cmd/client` to [`test/cmd/clientloadgen/`](test/cmd/clientloadgen/) to better reflect what it does.
- [Issue #45](https://github.com/GoogleCloudPlatform/open-match/issues/45) The process for moving the build files (`Dockerfile` and `cloudbuild.yaml`) for each component, example, and test program to their respective directories and out of the repository root has started but won't be completed until a future version.
- Put some basic notes in the [production guide](docs/production.md)
- Added a basic [roadmap](docs/roadmap.md)
## v0.2.0 (alpha)
This is a pretty large update. Custom MMFs or evaluators from 0.1.0 may need some tweaking to work with this version. Some Backend API function arguments have changed. Please join the [Slack channel](https://open-match.slack.com/) if you need help ([Signup link](https://join.slack.com/t/open-match/shared_invite/enQtNDM1NjcxNTY4MTgzLWQzMzE1MGY5YmYyYWY3ZjE2MjNjZTdmYmQ1ZTQzMmNiNGViYmQyN2M4ZmVkMDY2YzZlOTUwMTYwMzI1Y2I2MjU))!
v0.2.0 focused on adding additional functionality to Backend API calls and on **reducing the amount of boilerplate code required to make a custom Matchmaking Function**. For this, a new internal API for use by MMFs called the [Matchmaking Logic API (MMLogic API)](README.md#matchmaking-logic-mmlogic-api) has been added. Many of the core components and examples had to be updated to use the new Backend API arguments and the modules to support them, so we recommend you rebuild and redeploy all the components to use v0.2.0.
### Release notes
- MMLogic API is now available. Deploy it to kubernetes using the [appropriate json file]() and check out the [gRPC API specification](api/protobuf-spec/mmlogic.proto) to see how to use it. To write a client against this API, you'll need to compile the protobuf files to your language of choice. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory.
- When using the MMLogic API to filter players into pools, it will attempt to report back the number of players that matched the filters and how long the filters took to query state storage.
- An [example MMF](examples/functions/python3/mmlogic-simple/harness.py) using it has been written in Python3. There is an associated cloudbuild.yaml file and Dockerfile for it in the root directory. By default the [example backend client](examples/backendclient/main.go) is now configured to use this MMF, so make sure you have it available before you try to run the latest backend client.
- An [example MMF](examples/functions/php/mmlogic-simple/harness.py) using it has been contributed by Ilya Hrankouski in PHP (thanks!).
- The [example golang MMF](examples/functions/golang/manual-simple/) has been updated to use the latest data schemas for MatchObjects, and renamed to `manual-simple` to denote that it is manually manipulating Redis, not using the MMLogic API.
- The API specs have been split into separate files per API and the protobuf messages are in a separate file. Things were renamed slightly as a result, and you will need to update your API clients. The Frontend API hasn't had its messages moved to the shared messages file yet, but this will happen in an upcoming version.
- The message model for using the Backend API has changed slightly - for calls that make MatchObjects, the expectation is that you will provide a MatchObject with a few fields populated, and it will then be shuttled along through state storage to your MMF and back out again, with various processes 'filling in the blanks' of your MatchObject, which is then returned to your code calling the Backend API. Read the [gRPC API specification](api/protobuf-spec/backend.proto) for more information.
- As part of this, compiled protobuf golang modules now live in the [`internal/pb`](internal/pb) directory. There's a handy [bash script](api/protoc-go.sh) for compiling them from the `api/protobuf-spec` directory into this new `internal/pb` directory for development in your local golang environment if you need it.
- As part of this Backend API message shift and the advent of the MMLogic API, 'player pools' and 'rosters' are now first-class data structures in MatchObjects for those who wish to use them. You can ignore them if you like, but if you want to use some of the MMLogic API calls to automate tasks for you - things like filtering a pool of players according to attributes or adding all the players in your rosters to the ignorelist so other MMFs don't try to grab them - you'll need to put your data into the [protobuf messages](api/protobuf-spec/messages.proto) so Open Match knows how to read them. The sample backend client [test profile JSON](examples/backendclient/profiles/testprofile.json) has been updated to use this format if you want to see an example.
- Rosters were formerly space-delimited lists of player IDs. They are now first-class repeated protobuf message fields in the [Roster message format](api/protobuf-spec/messages.proto). That means that in most languages, you can access the roster as a list of players using your native language data structures (more info can be found in the [guide for using protocol buffers in your language of choice](https://developers.google.com/protocol-buffers/docs/reference/overview)). If you don't care about the new fields or the new functionality, you can just leave all the other fields but the player ID unset.
- Open Match is transitioning to using [protocol buffer messages](https://developers.google.com/protocol-buffers/) as its internal data format. There is now a Redis state storage [golang module](internal/statestorage/redis/redispb/) for marshaling and unmarshaling MatchObject messages to and from Redis. It isn't very clean code right now but will get worked on for the next couple releases.
- Ignorelists now exist, and have a Redis state storage [golang module](internal/statestorage/redis/ignorelist/) for CRUD access. Currently three ignorelists are defined in the [config file](config/matchmaker_config.json) with their respective parameters. These are implemented as [Sorted Sets in Redis](https://redis.io/commands#sorted_set).
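As a rough illustration of the sorted-set pattern described in that last note, the sketch below uses redigo (the Redis client Open Match already imports). It is not the repository's `ignorelist` module: the key name and TTL handling here are assumptions for illustration only; the real names and parameters come from `config/matchmaker_config.json`.

```go
package example

import (
	"time"

	"github.com/gomodule/redigo/redis"
)

// addToIgnoreList stores player IDs in a Redis sorted set, scored by the
// time they were added, so stale entries can later be trimmed by score.
// The key name "proposed" is illustrative, not a real config value.
func addToIgnoreList(conn redis.Conn, playerIDs []string) error {
	now := time.Now().Unix()
	for _, id := range playerIDs {
		if _, err := conn.Do("ZADD", "proposed", now, id); err != nil {
			return err
		}
	}
	return nil
}

// trimIgnoreList drops entries older than ttlSeconds.
func trimIgnoreList(conn redis.Conn, ttlSeconds int64) error {
	cutoff := time.Now().Unix() - ttlSeconds
	_, err := conn.Do("ZREMRANGEBYSCORE", "proposed", "-inf", cutoff)
	return err
}
```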
@ -23,10 +44,10 @@
### Roadmap
- It has become clear from talking to multiple users that the software they write to talk to the Backend API needs a name. 'Backend API Client' is technically correct, but given how many APIs are in Open Match and the overwhelming use of 'Client' to refer to a Game Client in the industry, we're currently calling this a 'Director', as its primary purpose is to 'direct' which profiles are sent to the backend, and 'direct' the resulting MatchObjects to game servers. Further discussion / suggestions are welcome.
- We'll be entering the design stage on longer-running MMFs before the end of the year. We'll get a proposal together and on the github repo as a request for comments, so please keep your eye out for that.
- Match profiles providing multiple MMFs to run is no longer planned. Just send multiple copies of the profile with different MMFs specified via the Backend API.
- Redis Sentinel will likely not be supported. Instead, replicated instances and HAProxy may be the HA solution of choice. There's an [outstanding issue to investigate and implement](https://github.com/GoogleCloudPlatform/open-match/issues/41) if it fills our needs, feel free to contribute!
## v0.1.0 (alpha)
Initial release.

README.md

@ -1,6 +1,6 @@
# Open Match
Open Match is an open source game matchmaking framework designed to allow game creators to re-use a common matchmaker framework. Its designed to be flexible (run it anywhere Kubernetes runs), extensible (match logic can be customized to work for any game), and scalable.
Open Match is an open source game matchmaking framework designed to allow game creators to build matchmakers of any size easily and with as much possibility for sharing and code re-use as possible. It's designed to be flexible (run it anywhere Kubernetes runs), extensible (match logic can be customized to work for any game), and scalable.
Matchmaking is a complicated process, and when large player populations are involved, many popular matchmaking approaches touch on significant areas of computer science including graph theory and massively concurrent processing. Open Match is an effort to provide a foundation upon which these difficult problems can be addressed by the wider game development community. As Josh Menke — famous for working on matchmaking for many popular triple-A franchises — put it:
@ -12,7 +12,8 @@ This project attempts to solve the networking and plumbing problems, so game dev
This software is currently alpha, and subject to change. Although Open Match has already been used to run [production workloads within Google](https://cloud.google.com/blog/topics/inside-google-cloud/no-tricks-just-treats-globally-scaling-the-halloween-multiplayer-doodle-with-open-match-on-google-cloud), it's still early days on the way to our final goal. There's plenty left to write and we welcome contributions. **We strongly encourage you to engage with the community through the [Slack or Mailing lists](#get-involved) if you're considering using Open Match in production before the 1.0 release, as the documentation is likely to lag behind the latest version a bit while we focus on getting out of alpha/beta as soon as possible.**
## Version
[The current stable version in master is 0.2.0 (alpha)](https://github.com/GoogleCloudPlatform/open-match/releases/tag/020).
[The current stable version in master is 0.3.0 (alpha)](https://github.com/GoogleCloudPlatform/open-match/releases/tag/030). At this time only bugfixes and doc update pull requests will be considered.
Version 0.4.0 is in active development; please target code changes to the 040wip branch.
# Core Concepts
@ -22,20 +23,33 @@ Open Match is designed to support massively concurrent matchmaking, and to be sc
## Glossary
* **MMF** — Matchmaking function. This is the customizable matchmaking logic.
* **Component** — One of the discrete processes in an Open Match deployment. Open Match is composed of multiple scalable microservices called 'components'.
* **Roster** — A list of all the players in a match.
* **Profile** — The json blob containing all the parameters used to select which players go into a roster.
* **Match Object** — A protobuffer message format that contains the Profile and the results of the matchmaking function. Sent to the backend API from your game backend with an empty roster and then returned from your MMF with the matchmaking results filled in.
* **MMFOrc** — Matchmaker function orchestrator. This Open Match core component is in charge of kicking off custom matchmaking functions (MMFs) and evaluator processes.
* **State Storage** — The storage software used by Open Match to hold all the matchmaking state. Open Match ships with [Redis](https://redis.io/) as the default state storage.
* **Assignment** — Refers to assigning a player or group of players to a dedicated game server instance. Open Match offers a path to send dedicated game server connection details from your backend to your game clients after a match has been made.
### General
* **DGS** — Dedicated game server
* **Client** — The game client program the player uses when playing the game
* **Session** — In Open Match, players are matched together, then assigned to a server which hosts the game _session_. Depending on context, this may be referred to as a _match_, _map_, or just _game_ elsewhere in the industry.
### Open Match
* **Component** — One of the discrete processes in an Open Match deployment. Open Match is composed of multiple scalable microservices called _components_.
* **State Storage** — The storage software used by Open Match to hold all the matchmaking state. Open Match ships with [Redis](https://redis.io/) as the default state storage.
* **MMFOrc** — Matchmaker function orchestrator. This Open Match core component is in charge of kicking off custom matchmaking functions (MMFs) and evaluator processes.
* **MMF** — Matchmaking function. This is the customizable matchmaking logic.
* **MMLogic API** — An API that provides MMF SDK functionality. It is optional - you can also do all the state storage read and write operations yourself if you have a good reason to do so.
* **Director** — The software you (as a developer) write against the Open Match Backend API. The _Director_ decides which MMFs to run, and is responsible for sending MMF results to a DGS to host the session.
### Data Model
* **Player** — An ID and list of attributes with values for a player who wants to participate in matchmaking.
* **Roster** — A list of player objects. Used to hold all the players on a single team.
* **Filter** — A _filter_ is used to narrow down the players to only those who have an attribute value within a certain integer range. All attributes are integer values in Open Match because [that is how indices are implemented](internal/statestorage/redis/playerindices/playerindices.go). A _filter_ is defined in a _player pool_.
* **Player Pool** — A list of all the players who fit all the _filters_ defined in the pool.
* **Match Object** — A protobuffer message format that contains the _profile_ and the results of the matchmaking function. Sent to the backend API from your game backend with the _roster_(s) empty and then returned from your MMF with the matchmaking results filled in.
* **Profile** — The json blob containing all the parameters used by your MMF to select which players go into a roster together.
* **Assignment** — Refers to assigning a player or group of players to a dedicated game server instance. Open Match offers a path to send dedicated game server connection details from your backend to your game clients after a match has been made.
* **Ignore List** — Removing players from matchmaking consideration is accomplished using _ignore lists_. They contain lists of player IDs that your MMF should not include when making matches.
## Requirements
* [Kubernetes](https://kubernetes.io/) cluster — tested with version 1.9.
* [Redis 4+](https://redis.io/) — tested with 4.0.11.
* Open Match is compiled against the latest release of [Golang](https://golang.org/) — tested with 1.10.3.
* Open Match is compiled against the latest release of [Golang](https://golang.org/) — tested with 1.10.9.
## Components
@ -43,15 +57,17 @@ Open Match is a set of processes designed to run on Kubernetes. It contains thes
1. Frontend API
1. Backend API
1. Matchmaker Function Orchestrator (MMFOrc)
1. Matchmaker Function Orchestrator (MMFOrc) (may be deprecated in future versions)
It includes these **optional** (but recommended) components:
1. Matchmaking Logic (MMLogic) API
It also explicitly depends on these two **customizable** components.
1. Matchmaking "Function" (MMF)
1. Evaluator (may be deprecated in future versions)
1. Evaluator (may be optional in future versions)
While **core** components are fully open source and *can* be modified, they are designed to support the majority of matchmaking scenarios *without need to change the source code*. The Open Match repository ships with simple **customizable** example MMF and Evaluator processes, but it is expected that most users will want full control over the logic in these, so they have been designed to be as easy to modify or replace as possible.
While **core** components are fully open source and _can_ be modified, they are designed to support the majority of matchmaking scenarios *without needing to change the source code*. The Open Match repository ships with simple **customizable** MMF and Evaluator examples, but it is expected that most users will want full control over the logic in these, so they have been designed to be as easy to modify or replace as possible.
### Frontend API
@ -65,18 +81,20 @@ The client is expected to maintain a connection, waiting for an update from the
### Backend API
The Backend API puts match profiles in state storage which the Matchmaking Function (MMF) can access and use to decide which players should be put into a match together, then return those matches to dedicated game server instances.
The Backend API writes match objects to state storage which the Matchmaking Functions (MMFs) access to decide which players should be matched. It returns the results from those MMFs.
The Backend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/backend.proto`. At the most basic level, it expects to be connected to your online infrastructure (probably to your server scaling manager or scheduler, or even directly to a dedicated game server), and to receive:
The Backend API is a server application that implements the [gRPC](https://grpc.io/) service defined in `api/protobuf-spec/backend.proto`. At the most basic level, it expects to be connected to your online infrastructure (probably to your server scaling manager or **director**, or even directly to a dedicated game server), and to receive:
* A **unique ID** for a matchmaking profile.
* A **json blob** containing all the match-related data you want to use in your matchmaking function, in an 'empty' match object.
* A **json blob** containing all the matching-related data and filters you want to use in your matchmaking function.
* An optional list of **roster**s to hold the resulting teams chosen by your matchmaking function.
* An optional set of **filters** that define player pools your matchmaking function will choose players from.
Your game backend is expected to maintain a connection, waiting for 'filled' match objects containing a roster of players. The Backend API also provides a return path for your game backend to return dedicated game server connection details (an 'assignment') to the game client, and to delete these 'assignments'.
### Matchmaking Function Orchestrator (MMFOrc)
The MMFOrc kicks off your custom matchmaking function (MMF) for every profile submitted to the Backend API. It also runs the Evaluator to resolve conflicts in case more than one of your profiles matched the same players.
The MMFOrc kicks off your custom matchmaking function (MMF) for every unique profile submitted to the Backend API in a match object. It also runs the Evaluator to resolve conflicts in case more than one of your profiles matched the same players.
The MMFOrc exists to orchestrate/schedule your **custom components**, running them as often as required to meet the demands of your game. MMFOrc runs in an endless loop, submitting MMFs and Evaluator jobs to Kubernetes.
@ -85,8 +103,8 @@ The MMFOrc exists to orchestrate/schedule your **custom components**, running th
The MMLogic API provides a series of gRPC functions that act as a Matchmaking Function SDK. Much of the basic, boilerplate code for an MMF is the same regardless of what players you want to match together. The MMLogic API offers a gRPC interface for many common MMF tasks, such as:
1. Reading a profile from state storage.
1. Running filters on players in state storage.
1. Removing chosen players from consideration by other MMFs (by adding them to an ignore list).
1. Running filters on players in state storage. It automatically removes players on ignore lists as well!
1. Removing chosen players from consideration by other MMFs (by adding them to an ignore list). It does it automatically for you when writing your results!
1. Writing the matchmaking results to state storage.
1. (Optional, NYI) Exporting MMF stats for metrics collection.
@ -96,9 +114,9 @@ More details about the available gRPC calls can be found in the [API Specificati
### Evaluator
The Evaluator resolves conflicts when multiple matches want to include the same player(s).
The Evaluator resolves conflicts when multiple MMFs select the same player(s).
The Evaluator is a component run by the Matchmaker Function Orchestrator (MMFOrc) after the matchmaker functions have been run, and some proposed results are available. The Evaluator looks at all the proposed matches, and if multiple proposals contain the same player(s), it breaks the tie. In many simple matchmaking setups with only a few game modes and matchmaking functions that always look at different parts of the matchmaking pool, the Evaluator may functionally be a no-op or first-in-first-out algorithm. In complex matchmaking setups where, for example, a player can queue for multiple types of matches, the Evaluator provides the critical customizability to evaluate all available proposals and approve those that will be passed to your game servers.
The Evaluator is a component run by the Matchmaker Function Orchestrator (MMFOrc) after the matchmaker functions have been run, and some proposed results are available. The Evaluator looks at all the proposals, and if multiple proposals contain the same player(s), it breaks the tie. In many simple matchmaking setups with only a few game modes and well-tuned matchmaking functions, the Evaluator may functionally be a no-op or first-in-first-out algorithm. In complex matchmaking setups where, for example, a player can queue for multiple types of matches, the Evaluator provides the critical customizability to evaluate all available proposals and approve those that will be passed to your game servers.
Large-scale concurrent matchmaking functions is a complex topic, and users who wish to do this are encouraged to engage with the [Open Match community](https://github.com/GoogleCloudPlatform/open-match#get-involved) about patterns and best practices.
@ -109,10 +127,10 @@ Matchmaking Functions (MMFs) are run by the Matchmaker Function Orchestrator (MM
- [x] Be packaged in a (Linux) Docker container.
- [x] Read/write from the Open Match state storage — Open Match ships with Redis as the default state storage.
- [x] Read a profile you wrote to state storage using the Backend API.
- [x] Select from the player data you wrote to state storage using the Frontend API.
- [x] Select from the player data you wrote to state storage using the Frontend API. It must respect all the ignore lists defined in the matchmaker config.
- [ ] Run your custom logic to try to find a match.
- [x] Write the match object it creates to state storage at a specified key.
- [x] Remove the players it selected from consideration by other MMFs.
- [x] Remove the players it selected from consideration by other MMFs by adding them to the appropriate ignore list.
- [x] Notify the MMFOrc of completion.
- [x] (Optional, but recommended) Export stats for metrics collection.
@ -128,7 +146,7 @@ Example MMFs are provided in these languages:
### Structured logging
Logging for Open Match uses the [Golang logrus module](https://github.com/sirupsen/logrus) to provide structured logs. Logs are output to `stdout` in each component, as expected by Docker and Kubernetes. If you have a specific log aggregator as your final destination, we recommend you have a look at the logrus documentation as there is probably a log formatter that plays nicely with your stack.
Logging for Open Match uses the [Golang logrus module](https://github.com/sirupsen/logrus) to provide structured logs. Logs are output to `stdout` in each component, as expected by Docker and Kubernetes. Level and format are configurable via config/matchmaker_config.json. If you have a specific log aggregator as your final destination, we recommend you have a look at the logrus documentation as there is probably a log formatter that plays nicely with your stack.
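As an illustration of the setup this implies in each component, a minimal sketch follows. The `"json"` and `"debug"` values stand in for settings that would come from `config/matchmaker_config.json`; the exact key names there are not shown here.

```go
package main

import (
	log "github.com/sirupsen/logrus"
)

// configureLogging applies the level/format settings described above.
func configureLogging(format, level string) {
	if format == "json" {
		log.SetFormatter(&log.JSONFormatter{})
	}
	if lvl, err := log.ParseLevel(level); err == nil {
		log.SetLevel(lvl)
	}
	// Include caller file/line in entries (needs a recent logrus release).
	log.SetReportCaller(true)
}

func main() {
	configureLogging("json", "debug")
	log.WithFields(log.Fields{
		"app":       "openmatch",
		"component": "example",
	}).Info("structured logging configured")
}
```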
### Instrumentation for metrics
@ -140,7 +158,7 @@ Open Match uses [OpenCensus](https://opencensus.io/) for metrics instrumentation
By default, Open Match expects you to run Redis *somewhere*. Connection information can be put in the config file (`matchmaker_config.json`) for any Redis instance reachable from the [Kubernetes namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). By default, Open Match sensibly runs in the Kubernetes `default` namespace. In most instances, we expect users will run a copy of Redis in a pod in Kubernetes, with a service pointing to it.
* HA configurations for Redis aren't implemented by the provided Kubernetes resource definition files, but Open Match expects the Redis service to be named `redis-sentinel`, which provides an easier path to multi-instance deployments.
* HA configurations for Redis aren't implemented by the provided Kubernetes resource definition files, but Open Match expects the Redis service to be named `redis`, which provides an easier path to multi-instance deployments.
## Additional examples
@ -148,7 +166,7 @@ By default, Open Match expects you to run Redis *somewhere*. Connection informat
The following examples of how to call the APIs are provided in the repository. Both have a `Dockerfile` and `cloudbuild.yaml` files in their respective directories:
* `examples/frontendclient/main.go` acts as a client to the Frontend API, putting a player into the queue with simulated latencies from major metropolitan cities and a couple of other matchmaking attributes. It then waits for you to manually put a value in Redis to simulate a server connection string being written using the backend API 'CreateAssignments' call, and displays that value on stdout for you to verify.
* `test/cmd/frontendclient/main.go` acts as a client to the Frontend API, putting a player into the queue with simulated latencies from major metropolitan cities and a couple of other matchmaking attributes. It then waits for you to manually put a value in Redis to simulate a server connection string being written using the backend API 'CreateAssignments' call, and displays that value on stdout for you to verify.
* `examples/backendclient/main.go` calls the Backend API and passes in the profile found in `backendstub/profiles/testprofile.json` to the `ListMatches` API endpoint, then continually prints the results until you exit, or there are insufficient players to make a match based on the profile.
## Usage
@ -161,18 +179,17 @@ Once we reach a 1.0 release, we plan to produce publicly available (Linux) Docke
### Compiling from source
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild_COMPONENT.yaml` files for each component in the repository root.
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild.yaml` files for each component in the corresponding `cmd/<COMPONENT>` directories.
All the core components for Open Match are written in Golang and use the [Dockerfile multistage builder pattern](https://docs.docker.com/develop/develop-images/multistage-build/). This pattern uses intermediate Docker containers as a Golang build environment while producing lightweight, minimized container images as final build artifacts. When the project is ready for production, we will modify the `Dockerfile`s to uncomment the last build stage. Although this pattern is great for production container images, it removes most of the utilities required to troubleshoot issues during development.
### Configuration
## Configuration
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration. To this end, there is a single centralized config file located in `<REPO_ROOT>/config/`, which is symlinked to each component's subdirectory for convenience when building locally. When `docker build`ing the component container images, the Dockerfile copies the centralized config file into the component directory.
We plan to replace this with a Kubernetes-managed config with dynamic reloading when development time allows. Pull requests are welcome!
We plan to replace this with a Kubernetes-managed config with dynamic reloading; please join the discussion in [Issue #42](issues/42).
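For reference, a component might load that shared file with [viper](https://github.com/spf13/viper) (already used by the components) roughly as sketched below; the config key queried at the end is a hypothetical name used only for illustration.

```go
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	cfg := viper.New()
	cfg.SetConfigName("matchmaker_config") // file name without extension
	cfg.SetConfigType("json")
	cfg.AddConfigPath(".")      // symlinked copy in the component directory
	cfg.AddConfigPath("config") // centralized copy at the repository root

	if err := cfg.ReadInConfig(); err != nil {
		panic(err)
	}

	// "redis.hostname" is an illustrative key, not necessarily the exact
	// name used in the real config file.
	fmt.Println("redis host:", cfg.GetString("redis.hostname"))
}
```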
### Guides
* [Production guide](./docs/production.md) Lots of best practices to be written here before 1.0 release. **WIP**
* [Production guide](./docs/production.md) Lots of best practices to be written here before 1.0 release, right now it's a scattered collection of notes. **WIP**
* [Development guide](./docs/development.md)
### Reference
@ -213,8 +230,8 @@ Open Match is in active development - we would love your help in shaping its fut
Apache 2.0
# Planned improvements
See the [provisional roadmap](docs/roadmap.md) for more information on upcoming releases.
## Documentation
- [ ] “Writing your first matchmaker” getting started guide will be included in an upcoming version.
@ -222,25 +239,27 @@ Apache 2.0
- [ ] Documentation on release process and release calendar.
## State storage
- [ ] All state storage operations should be isolated from core components into the `statestorage/` modules. This is necessary precursor work to enabling Open Match state storage to use software other than Redis.
- [ ] [The Redis deployment should have an example HA configuration](https://github.com/GoogleCloudPlatform/open-match/issues/41)
- [ ] Redis watch should be unified to watch a hash and stream updates. The code for this is written and validated but not committed yet. We don't want to support two redis watcher code paths, so the backend watch of the match object should be switched to unify the way the frontend and backend watch keys. The backend part of this is in but the frontend part is in another branch and will be committed later.
- [ ] Player/Group records generated when a client enters the matchmaking pool need to be removed after a certain amount of time with no activity. When using Redis, this will be implemented as an expiration on the player record.
- [X] All state storage operations should be isolated from core components into the `statestorage/` modules. This is necessary precursor work to enabling Open Match state storage to use software other than Redis.
- [X] [The Redis deployment should have an example HA configuration](https://github.com/GoogleCloudPlatform/open-match/issues/41)
- [X] Redis watch should be unified to watch a hash and stream updates. The code for this is written and validated but not committed yet.
- [ ] We don't want to support two redis watcher code paths, but we will until golang protobuf reflection is a bit more usable. [Design doc](https://docs.google.com/document/d/19kfhro7-CnBdFqFk7l4_HmwaH2JT_Rhw5-2FLWLEGGk/edit#heading=h.q3iwtwhfujjx), [github issue](https://github.com/golang/protobuf/issues/364)
- [X] Player/Group records generated when a client enters the matchmaking pool need to be removed after a certain amount of time with no activity. When using Redis, this will be implemented as an expiration on the player record.
## Instrumentation / Metrics / Analytics
- [ ] Instrumentation of MMFs is in the planning stages. Since MMFs are by design meant to be completely customizable (to the point of allowing any process that can be packaged in a Docker container), metrics/stats will need to have an expected format and formalized outgoing pathway. Currently the thought is that metrics might be written to a particular key in state storage in a format compatible with OpenCensus, and then collected, aggregated, and exported to Prometheus by another process.
- [ ] [OpenCensus tracing](https://opencensus.io/core-concepts/tracing/) will be implemented in an upcoming version.
- [ ] Read logrus logging configuration from matchmaker_config.json.
- [ ] [OpenCensus tracing](https://opencensus.io/core-concepts/tracing/) will be implemented in an upcoming version. This is likely going to require knative.
- [X] Read logrus logging configuration from matchmaker_config.json.
## Security
- [ ] The Kubernetes service account used by the MMFOrc should be updated to have min required permissions.
- [ ] The Kubernetes service account used by the MMFOrc should be updated to have min required permissions. [Issue 52](issues/52)
## Kubernetes
- [ ] Autoscaling isn't turned on for the Frontend or Backend API Kubernetes deployments by default.
- [ ] A [Helm](https://helm.sh/) chart to stand up Open Match will be provided in an upcoming version. For now just use the [installation YAMLs](./install/yaml).
- [ ] A [Helm](https://helm.sh/) chart to stand up Open Match may be provided in an upcoming version. For now just use the [installation YAMLs](./install/yaml).
- [ ] A knative-based implementation of MMFs is in the planning stages.
## CI / CD / Build
- [ ] We plan to host 'official' docker images for all release versions of the core components in publicly available docker registries soon.
- [ ] We plan to host 'official' docker images for all release versions of the core components in publicly available docker registries soon. This is tracked in [Issue #45](issues/45) and is blocked by [Issue 42](issues/42).
- [ ] CI/CD for this repo and the associated status tags are planned.
- [ ] Golang unit tests will be shipped in an upcoming version.
- [ ] A full load-testing and e2e testing suite will be included in an upcoming version.

@ -21,35 +21,35 @@ service Backend {
// - rosters, if you choose to fill them in your MMF. (Recommended)
// - pools, if you used the MMLogicAPI in your MMF. (Recommended, and provides stats)
rpc CreateMatch(messages.MatchObject) returns (messages.MatchObject) {}
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection. Same inputs/outputs as CreateMatch.
// Continually run MMF and stream MatchObjects that fit this profile until
// the backend client closes the connection. Same inputs/outputs as CreateMatch.
rpc ListMatches(messages.MatchObject) returns (stream messages.MatchObject) {}
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
// Delete a MatchObject from state storage manually. (MatchObjects in state
// storage will also automatically expire after a while, defined in the config)
// INPUT: MatchObject message with the 'id' field populated.
// (All other fields are ignored.)
rpc DeleteMatch(messages.MatchObject) returns (messages.Result) {}
// Call fors communication of connection info to players.
// Calls for communication of connection info to players.
// Write the connection info for the list of players in the
// Assignments.messages.Rosters to state storage. The FrontendAPI is
// Assignments.messages.Rosters to state storage. The Frontend API is
// responsible for sending anything sent here to the game clients.
// Sending a player to this function kicks off a process that removes
// the player from future matchmaking functions by adding them to the
// 'deindexed' player list and then deleting their player ID from state storage
// indexes.
// INPUT: Assignments message with these fields populated:
// - connection_info, anything you write to this string is sent to Frontend API
// - assignment, anything you write to this string is sent to Frontend API
// - rosters. You can send any number of rosters, containing any number of
// player messages. All players from all rosters will be sent the connection_info.
// The only field in the Player object that is used by CreateAssignments is
// the id field. All others are silently ignored.
// player messages. All players from all rosters will be sent the assignment.
// The only field in the Roster's Player messages used by CreateAssignments is
// the id field. All other fields in the Player messages are silently ignored.
rpc CreateAssignments(messages.Assignments) returns (messages.Result) {}
// Remove DGS connection info from state storage for players.
// INPUT: Roster message with the 'players' field populated.
// The only field in the Player object that is used by
// The only field in the Roster's Player messages used by
// DeleteAssignments is the 'id' field. All others are silently ignored. If
// you need to delete multiple rosters, make multiple calls.
rpc DeleteAssignments(messages.Roster) returns (messages.Result) {}
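A minimal 'Director' against these calls might look like the sketch below. It assumes the compiled bindings in `internal/pb` expose the conventional generated gRPC names (`NewBackendClient`, `MatchObject`, `Assignments`); the address, profile contents, and connection string are placeholders, not values from the repository.

```go
package main

import (
	"context"
	"io"
	"log"

	"google.golang.org/grpc"

	pb "github.com/GoogleCloudPlatform/open-match/internal/pb"
)

func main() {
	// Placeholder address for the Backend API Kubernetes service.
	conn, err := grpc.Dial("om-backendapi:50505", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := pb.NewBackendClient(conn)

	// A profile MatchObject: only the id and a JSON properties blob are set;
	// the MMF fills in the rest.
	profile := &pb.MatchObject{
		Id:         "example-profile",
		Properties: `{"imaginary": "profile contents"}`,
	}

	stream, err := client.ListMatches(context.Background(), profile)
	if err != nil {
		log.Fatal(err)
	}
	for {
		match, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Hand the filled-in rosters to a game server, then report the
		// connection details back so the Frontend API can relay them.
		_, err = client.CreateAssignments(context.Background(), &pb.Assignments{
			Rosters:    match.Rosters,
			Assignment: "10.0.0.1:7777", // placeholder DGS connection string
		})
		if err != nil {
			log.Fatal(err)
		}
	}
}
```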

@ -1,23 +1,65 @@
// TODO: In a future version, these messages will be moved/merged with those in om_messages.proto
syntax = 'proto3';
package api;
option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
import 'api/protobuf-spec/messages.proto';
service Frontend {
rpc CreateRequest(Group) returns (messages.Result) {}
rpc DeleteRequest(Group) returns (messages.Result) {}
rpc GetAssignment(PlayerId) returns (messages.ConnectionInfo) {}
rpc DeleteAssignment(PlayerId) returns (messages.Result) {}
}
// Call to start matchmaking for a player
// Data structure for a group of players to pass to the matchmaking function.
// Obviously, the group can be a group of one!
message Group{
string id = 1; // By convention, string of space-delimited playerIDs
string properties = 2; // By convention, a JSON-encoded string
}
// CreatePlayer will put the player in state storage, and then look
// through the 'properties' field for the attributes you have defined as
// indices in your matchmaker config. If the attributes exist and are valid
// integers, they will be indexed.
// INPUT: Player message with these fields populated:
// - id
// - properties
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
rpc CreatePlayer(messages.Player) returns (messages.Result) {}
message PlayerId {
string id = 1; // By convention, a UUID
// Call to stop matchmaking for a player
// DeletePlayer removes the player from state storage by doing the
// following:
// 1) Delete player from configured indices. This effectively removes the
// player from matchmaking when using recommended MMF patterns.
// Everything after this is just cleanup to save state storage space.
// 2) 'Lazily' delete the player's state storage record. This is kicked
// off in the background and may take some time to complete.
// 3) 'Lazily' delete the player's metadata indices (like the timestamp when
// they called CreatePlayer, and the last time the record was accessed). This
// is also kicked off in the background and may take some time to complete.
// INPUT: Player message with the 'id' field populated.
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
rpc DeletePlayer(messages.Player) returns (messages.Result) {}
// Calls to access matchmaking results for a player
// GetUpdates streams matchmaking results from Open Match for the
// provided player ID.
// INPUT: Player message with the 'id' field populated.
// OUTPUT: a stream of player objects with one or more of the following
// fields populated, if an update to that field is seen in state storage:
// - 'assignment': string that usually contains game server connection information.
// - 'status': string to communicate current matchmaking status to the client.
// - 'error': string to pass along error information to the client.
//
// During normal operation, the expectation is that the 'assignment' field
// will be updated by a Backend process calling the 'CreateAssignments' Backend API
// endpoint. 'Status' and 'Error' are free for developers to use as they see fit.
// Even if you had multiple players enter a matchmaking request as a group, the
// Backend API 'CreateAssignments' call will write the results to state
// storage separately under each player's ID. OM expects each game client
// to call 'GetUpdates' with its own ID from the Frontend API to get
// its results.
//
// NOTE: This call generates a small amount of load on the Frontend API and state
// storage while watching the player record for updates. You are expected
// to close the stream from your client after receiving your matchmaking
// results (or a reasonable timeout), or you will continue to
// generate load on OM until you do!
// NOTE: Just bear in mind that every update will send egress traffic from
// Open Match to game clients! Frugality is recommended.
rpc GetUpdates(messages.Player) returns (stream messages.Player) {}
}
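The client-side flow these comments describe might look roughly like the following sketch. The generated names are assumed from this proto and the conventional Go gRPC codegen; the address, player properties, and timeout are placeholders.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"

	pb "github.com/GoogleCloudPlatform/open-match/internal/pb"
)

func main() {
	conn, err := grpc.Dial("om-frontendapi:50504", grpc.WithInsecure()) // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := pb.NewFrontendClient(conn)

	player := &pb.Player{
		Id:         "9m4e2mr0ui3e8a215n4g",           // an Xid, per convention
		Properties: `{"attributes": {"example": 1}}`, // illustrative properties
	}
	if _, err := client.CreatePlayer(context.Background(), player); err != nil {
		log.Fatal(err)
	}

	// Bound how long we watch; cancelling the context closes the stream,
	// which is the client's responsibility per the note above.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	stream, err := client.GetUpdates(ctx, &pb.Player{Id: player.Id})
	if err != nil {
		log.Fatal(err)
	}
	for {
		update, err := stream.Recv()
		if err != nil {
			log.Fatal(err) // deadline exceeded, disconnect, etc.
		}
		if update.Assignment != "" {
			log.Println("connect to:", update.Assignment)
			break // got our result; stop watching
		}
	}

	// Done matchmaking: remove the player record and its indices.
	if _, err := client.DeletePlayer(context.Background(), &pb.Player{Id: player.Id}); err != nil {
		log.Fatal(err)
	}
}
```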

@ -14,7 +14,7 @@ option go_package = "github.com/GoogleCloudPlatform/open-match/internal/pb";
// MatchObject as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
message MatchObject{
string id = 1; // By convention, a UUID
string id = 1; // By convention, an Xid
string properties = 2; // By convention, a JSON-encoded string
string error = 3; // Last error encountered.
repeated Roster rosters = 4; // Rosters of players.
@ -55,16 +55,26 @@ message PlayerPool{
Stats stats = 4; // Statistics for the last time this Pool was retrieved from state storage.
}
// Data structure to hold details about a player
// Open Match's internal representation and wire protocol format for "Players".
// In order to enter matchmaking using the Frontend API, your client code should generate
// a consistent Player message (same result for each client every time it launches) with an ID and
// properties filled in (for more details about valid values for these fields,
// see the documentation).
// Players contain a number of fields, but the gRPC calls that take a
// Player as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
message Player{
message Attribute{
string name = 1; // Name should match a Filter.attribute field.
int64 value = 2;
}
string id = 1; // By convention, a UUID
string id = 1; // By convention, an Xid
string properties = 2; // By convention, a JSON-encoded string
string pool = 3; // Optionally used to specify the PlayerPool in which to find a player.
repeated Attribute attributes= 4; // Attributes of this player.
string assignment = 5; // By convention, ip:port of a DGS to connect to
string status = 6; // Arbitrary developer-chosen string.
string error = 7; // Arbitrary developer-chosen string.
}
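For illustration, constructing one of these in Go with the generated bindings might look like the snippet below. Field values are made up, and the nested `Player_Attribute` type name is assumed from standard protoc-gen-go conventions.

```go
package main

import (
	"fmt"

	"github.com/rs/xid"

	pb "github.com/GoogleCloudPlatform/open-match/internal/pb"
)

func main() {
	p := &pb.Player{
		Id:         xid.New().String(),         // Xid, per the comment above
		Properties: `{"mode": {"example": 1}}`, // illustrative JSON properties
		Attributes: []*pb.Player_Attribute{
			{Name: "example", Value: 1}, // Name should match a Filter.attribute
		},
	}
	fmt.Println(p.Id)
}
```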
@ -78,13 +88,7 @@ message Result{
message IlInput{
}
// Simple message used to pass the connection string for the DGS to the player.
// DEPRECATED: Likely to be integrated into another protobuf message in a future version.
message ConnectionInfo{
string connection_string = 1; // Passed by the matchmaker to game clients without modification.
}
message Assignments{
repeated Roster rosters = 1;
ConnectionInfo connection_info = 2;
string assignment = 10;
}

@ -2,9 +2,8 @@ steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-devbase:latest',
'--cache-from=gcr.io/$PROJECT_ID/openmatch-devbase:latest',
'--tag=gcr.io/$PROJECT_ID/openmatch-base:dev',
'-f', 'Dockerfile.base',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-devbase:latest']
images: ['gcr.io/$PROJECT_ID/openmatch-base:dev']

@ -1,11 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-devbase:latest' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf:go',
'-f', 'Dockerfile.mmf_go',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf:go']

@ -2,8 +2,8 @@ steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf:php',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-php-mmlogic-simple',
'-f', 'Dockerfile.mmf_php',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf:php']
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-php-mmlogic-simple']

@ -2,8 +2,8 @@ steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf:py3',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-py3-mmlogic-simple:dev',
'-f', 'Dockerfile.mmf_py3',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf:py3']
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-py3-mmlogic-simple:dev']

@ -1,10 +1,7 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY cmd/backendapi cmd/backendapi
COPY config config
COPY internal internal
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

@ -1,5 +1,6 @@
/*
package apisrv provides an implementation of the gRPC server defined in ../../../api/protobuf-spec/backend.proto
package apisrv provides an implementation of the gRPC server defined in
../../../api/protobuf-spec/backend.proto
Copyright 2018 Google LLC
@ -24,7 +25,6 @@ import (
"errors"
"fmt"
"net"
"strings"
"time"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
@ -42,7 +42,7 @@ import (
"github.com/tidwall/gjson"
"github.com/gomodule/redigo/redis"
"github.com/google/uuid"
"github.com/rs/xid"
"github.com/spf13/viper"
"google.golang.org/grpc"
@ -53,7 +53,6 @@ var (
beLogFields = log.Fields{
"app": "openmatch",
"component": "backend",
"caller": "backend/apisrv/apisrv.go",
}
beLog = log.WithFields(beLogFields)
)
@ -108,7 +107,7 @@ func (s *BackendAPI) Open() error {
}
// CreateMatch is this service's implementation of the CreateMatch gRPC method
// defined in ../proto/backend.proto
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject) (*backend.MatchObject, error) {
// Get a cancel-able context
@ -120,7 +119,7 @@ func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Generate a request to fill the profile. Make a unique request ID.
moID := strings.Replace(uuid.New().String(), "-", "", -1)
moID := xid.New().String()
requestKey := moID + "." + profile.Id
/*
@ -135,8 +134,8 @@ func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject
*/
// Case where no protobuf pools message was passed; check if there's a JSON version in the properties.
// This is for backwards compatibility, it is recommended you populate the
// pools before calling CreateMatch/ListMatches
// This is for backwards compatibility, it is recommended you populate the protobuf's
// 'pools' field directly and pass it to CreateMatch/ListMatches
if profile.Pools == nil && s.cfg.IsSet("jsonkeys.pools") &&
gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.pools")).Exists() {
poolsJSON := fmt.Sprintf("{\"pools\": %v}", gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.pools")).String())
@ -155,7 +154,7 @@ func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject
// Case where no protobuf roster was passed; check if there's a JSON version in the properties.
// This is for backwards compatibility, it is recommended you populate the
// pools before calling CreateMatch/ListMatches
// protobuf's 'rosters' field directly and pass it to CreateMatch/ListMatches
if profile.Rosters == nil && s.cfg.IsSet("jsonkeys.rosters") &&
gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.rosters")).Exists() {
rostersJSON := fmt.Sprintf("{\"rosters\": %v}", gjson.Get(profile.Properties, s.cfg.GetString("jsonkeys.rosters")).String())
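Illustrative only (not part of this diff): for the backwards-compatible path above, pools and rosters can ride along in the profile's properties blob at the keys named by the jsonkeys.pools and jsonkeys.rosters config values. Assuming those values are simply "pools" and "rosters", a minimal properties string might look like the sketch below; the pool, filter, and roster contents are invented.

profile := &backend.MatchObject{
	Id: "example-profile-xid",
	// Only consulted when profile.Pools / profile.Rosters are nil.
	Properties: `{
	  "pools": [
	    {"name": "everyone", "filters": [
	      {"name": "mmr", "attribute": "mmr", "minv": 1000, "maxv": 1400}
	    ]}
	  ],
	  "rosters": [
	    {"name": "red", "players": []}
	  ]
	}`,
}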
@ -183,8 +182,7 @@ func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject
beLog.Info(profile)
// Write profile to state storage
//_, err := redisHelpers.Create(ctx, s.pool, profile.Id, profile.Properties)
err := redispb.MarshalToRedis(ctx, profile, s.pool)
err := redispb.MarshalToRedis(ctx, s.pool, profile, s.cfg.GetInt("redis.expirations.matchobject"))
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
@ -216,7 +214,7 @@ func (s *backendAPI) CreateMatch(c context.Context, profile *backend.MatchObject
newMO := backend.MatchObject{Id: requestKey}
watchChan := redispb.Watcher(ctx, s.pool, newMO) // Watcher() runs the appropriate Redis commands.
errString := ("Error retrieving matchmaking results from state storage")
timeout := time.Duration(s.cfg.GetInt("interval.resultsTimeout")) * time.Second
timeout := time.Duration(s.cfg.GetInt("api.backend.timeout")) * time.Second
select {
case <-time.After(timeout):
@ -311,7 +309,7 @@ func (s *backendAPI) ListMatches(p *backend.MatchObject, matchStream backend.Bac
}
// DeleteMatch is this service's implementation of the DeleteMatch gRPC method
// defined in ../proto/backend.proto
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) DeleteMatch(ctx context.Context, mo *backend.MatchObject) (*backend.Result, error) {
// Create context for tagging OpenCensus metrics.
@ -323,7 +321,7 @@ func (s *backendAPI) DeleteMatch(ctx context.Context, mo *backend.MatchObject) (
"matchObjectID": mo.Id,
}).Info("gRPC call executing")
_, err := redisHelpers.Delete(ctx, s.pool, mo.Id)
err := redisHelpers.Delete(ctx, s.pool, mo.Id)
if err != nil {
beLog.WithFields(log.Fields{
"error": err.Error(),
@ -343,12 +341,25 @@ func (s *backendAPI) DeleteMatch(ctx context.Context, mo *backend.MatchObject) (
}
// CreateAssignments is this service's implementation of the CreateAssignments gRPC method
// defined in ../proto/backend.proto
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) CreateAssignments(ctx context.Context, a *backend.Assignments) (*backend.Result, error) {
assignments := make([]string, 0)
for _, roster := range a.Rosters {
assignments = append(assignments, getPlayerIdsFromRoster(roster)...)
// Make a map of players and what assignments we want to send them.
playerIDs := make([]string, 0)
players := make(map[string]string, 0)
for _, roster := range a.Rosters { // Loop through all rosters
for _, player := range roster.Players { // Loop through all players in this roster
if player.Id != "" {
if player.Assignment == "" {
// No player-specific assignment, so use the default one in
// the Assignment message.
player.Assignment = a.Assignment
}
players[player.Id] = player.Assignment
beLog.Debug(fmt.Sprintf("playerid %v assignment %v", player.Id, player.Assignment))
}
}
playerIDs = append(playerIDs, getPlayerIdsFromRoster(roster)...)
}
// Create context for tagging OpenCensus metrics.
@ -357,30 +368,16 @@ func (s *backendAPI) CreateAssignments(ctx context.Context, a *backend.Assignmen
beLog = beLog.WithFields(log.Fields{"func": funcName})
beLog.WithFields(log.Fields{
"numAssignments": len(assignments),
"numAssignments": len(players),
}).Info("gRPC call executing")
// TODO: relocate this redis functionality to a module
redisConn := s.pool.Get()
defer redisConn.Close()
// TODO: These two calls are done in two different transactions; could be
// combined as an optimization but probably not particularly necessary
// Send the players their assignments.
err := redisHelpers.UpdateMultiFields(ctx, s.pool, players, "assignment")
// Create player assignments in a transaction.
redisConn.Send("MULTI")
for _, playerID := range assignments {
beLog.WithFields(log.Fields{
"query": "HSET",
"playerID": playerID,
s.cfg.GetString("jsonkeys.connstring"): a.ConnectionInfo.ConnectionString,
}).Debug("state storage operation")
redisConn.Send("HSET", playerID, s.cfg.GetString("jsonkeys.connstring"), a.ConnectionInfo.ConnectionString)
}
// Remove these players from the proposed list.
ignorelist.SendRemove(redisConn, "proposed", assignments)
// Add these players from the deindexed list.
ignorelist.SendAdd(redisConn, "deindexed", assignments)
// Send the multi-command transaction to Redis.
_, err := redisConn.Do("EXEC")
// Move these players from the proposed list to the deindexed list.
ignorelist.Move(ctx, s.pool, playerIDs, "proposed", "deindexed")
// Issue encountered
if err != nil {
@ -390,25 +387,23 @@ func (s *backendAPI) CreateAssignments(ctx context.Context, a *backend.Assignmen
}).Error("State storage error")
stats.Record(fnCtx, BeGrpcErrors.M(1))
stats.Record(fnCtx, BeAssignmentFailures.M(int64(len(assignments))))
stats.Record(fnCtx, BeAssignmentFailures.M(int64(len(players))))
return &backend.Result{Success: false, Error: err.Error()}, err
}
// Success!
beLog.WithFields(log.Fields{
"numAssignments": len(assignments),
"numPlayers": len(players),
}).Info("Assignments complete")
stats.Record(fnCtx, BeGrpcRequests.M(1))
stats.Record(fnCtx, BeAssignments.M(int64(len(assignments))))
stats.Record(fnCtx, BeAssignments.M(int64(len(players))))
return &backend.Result{Success: true, Error: ""}, err
}
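Illustrative only (not part of this diff): the refactor above replaces the inline MULTI/HSET transaction with redisHelpers.UpdateMultiFields. As a rough sketch of the behavior that helper is expected to cover (an assumption, not the module's actual implementation), setting one hash field across many player keys in a single redigo transaction looks roughly like this:

package redissketch

import "github.com/gomodule/redigo/redis"

// updateAssignmentField is a hypothetical stand-in: HSET the same field (e.g.
// "assignment") on every player key inside one MULTI/EXEC transaction.
func updateAssignmentField(pool *redis.Pool, players map[string]string, field string) error {
	conn := pool.Get()
	defer conn.Close()

	conn.Send("MULTI")
	for id, value := range players {
		conn.Send("HSET", id, field, value) // e.g. HSET <playerID> assignment <ip:port>
	}
	_, err := conn.Do("EXEC")
	return err
}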
// DeleteAssignments is this service's implementation of the DeleteAssignments gRPC method
// defined in ../proto/backend.proto
// defined in api/protobuf-spec/backend.proto
func (s *backendAPI) DeleteAssignments(ctx context.Context, r *backend.Roster) (*backend.Result, error) {
// TODO: make playerIDs a repeated protobuf message field and iterate over it
//assignments := strings.Split(a.PlayerIds, " ")
assignments := getPlayerIdsFromRoster(r)
// Create context for tagging OpenCensus metrics.
@ -420,18 +415,7 @@ func (s *backendAPI) DeleteAssignments(ctx context.Context, r *backend.Roster) (
"numAssignments": len(assignments),
}).Info("gRPC call executing")
// TODO: relocate this redis functionality to a module
redisConn := s.pool.Get()
defer redisConn.Close()
// Remove player assignments in a transaction
redisConn.Send("MULTI")
// TODO: make playerIDs a repeated protobuf message field and iterate over it
for _, playerID := range assignments {
beLog.WithFields(log.Fields{"query": "DEL", "key": playerID}).Debug("state storage operation")
redisConn.Send("DEL", playerID)
}
_, err := redisConn.Do("EXEC")
err := redisHelpers.DeleteMultiFields(ctx, s.pool, assignments, "assignment")
// Issue encountered
if err != nil {
@ -451,6 +435,8 @@ func (s *backendAPI) DeleteAssignments(ctx context.Context, r *backend.Roster) (
return &backend.Result{Success: true, Error: ""}, err
}
// getPlayerIdsFromRoster returns the slice of player ID strings contained in
// the input roster.
func getPlayerIdsFromRoster(r *backend.Roster) []string {
playerIDs := make([]string, 0)
for _, p := range r.Players {

@ -1,9 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-backendapi:dev',
'-f', 'Dockerfile.backendapi',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-backendapi:dev']

@ -1,6 +1,7 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in proto/backend.proto
running a gRPC server serving the APIService as defined in
${OM_ROOT}/internal/pb/backend.pb.go
All the actual important bits are in the API Server source code: apisrv/apisrv.go
@ -28,6 +29,7 @@ import (
"github.com/GoogleCloudPlatform/open-match/cmd/backendapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
@ -41,7 +43,6 @@ var (
beLogFields = log.Fields{
"app": "openmatch",
"component": "backend",
"caller": "backendapi/main.go",
}
beLog = log.WithFields(beLogFields)
@ -51,7 +52,6 @@ var (
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output thru OpenCensus
log.AddHook(metrics.NewHook(apisrv.BeLogLines, apisrv.KeySeverity))
@ -63,10 +63,8 @@ func init() {
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") == true {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
beLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
@ -88,7 +86,7 @@ func main() {
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
beLog.WithFields(log.Fields{"testfield": "test"}).Info("Attempting to start gRPC server")
beLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server

@ -1,749 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: backend.proto
/*
Package backend is a generated protocol buffer package.
It is generated from these files:
backend.proto
It has these top-level messages:
Profile
MatchObject
Roster
Filter
Stats
PlayerPool
Player
Result
IlInput
Timestamp
ConnectionInfo
Assignments
*/
package backend
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
type Profile struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
Name string `protobuf:"bytes,3,opt,name=name" json:"name,omitempty"`
// When you send a Profile to the backendAPI, it looks to see if you populated
// this field with protobuf-encoded PlayerPool objects containing valid Filter
// objects. If you did, they are used by OM. If you didn't, the backendAPI
// next looks in your properties blob at the key specified in the 'jsonkeys.pools'
// config value from config/matchmaker_config.json - If it finds valid player
// pool definitions at that key, it will try to unmarshal them into this field.
// If you didn't specify valid player pools in either place, OM assumes you
// know what you're doing and just leaves this unpopulated.
Pools []*PlayerPool `protobuf:"bytes,4,rep,name=pools" json:"pools,omitempty"`
}
func (m *Profile) Reset() { *m = Profile{} }
func (m *Profile) String() string { return proto.CompactTextString(m) }
func (*Profile) ProtoMessage() {}
func (*Profile) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Profile) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Profile) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
func (m *Profile) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *Profile) GetPools() []*PlayerPool {
if m != nil {
return m.Pools
}
return nil
}
// A MMF takes the Profile object above, and generates a MatchObject.
type MatchObject struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
Rosters []*Roster `protobuf:"bytes,3,rep,name=rosters" json:"rosters,omitempty"`
Pools []*PlayerPool `protobuf:"bytes,4,rep,name=pools" json:"pools,omitempty"`
}
func (m *MatchObject) Reset() { *m = MatchObject{} }
func (m *MatchObject) String() string { return proto.CompactTextString(m) }
func (*MatchObject) ProtoMessage() {}
func (*MatchObject) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *MatchObject) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *MatchObject) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
func (m *MatchObject) GetRosters() []*Roster {
if m != nil {
return m.Rosters
}
return nil
}
func (m *MatchObject) GetPools() []*PlayerPool {
if m != nil {
return m.Pools
}
return nil
}
// Data structure to hold a list of players in a match.
type Roster struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Players []*Player `protobuf:"bytes,2,rep,name=players" json:"players,omitempty"`
}
func (m *Roster) Reset() { *m = Roster{} }
func (m *Roster) String() string { return proto.CompactTextString(m) }
func (*Roster) ProtoMessage() {}
func (*Roster) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *Roster) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *Roster) GetPlayers() []*Player {
if m != nil {
return m.Players
}
return nil
}
// A filter to apply to the player pool.
type Filter struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Attribute string `protobuf:"bytes,2,opt,name=attribute" json:"attribute,omitempty"`
Maxv int64 `protobuf:"varint,3,opt,name=maxv" json:"maxv,omitempty"`
Minv int64 `protobuf:"varint,4,opt,name=minv" json:"minv,omitempty"`
Stats *Stats `protobuf:"bytes,5,opt,name=stats" json:"stats,omitempty"`
}
func (m *Filter) Reset() { *m = Filter{} }
func (m *Filter) String() string { return proto.CompactTextString(m) }
func (*Filter) ProtoMessage() {}
func (*Filter) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Filter) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *Filter) GetAttribute() string {
if m != nil {
return m.Attribute
}
return ""
}
func (m *Filter) GetMaxv() int64 {
if m != nil {
return m.Maxv
}
return 0
}
func (m *Filter) GetMinv() int64 {
if m != nil {
return m.Minv
}
return 0
}
func (m *Filter) GetStats() *Stats {
if m != nil {
return m.Stats
}
return nil
}
type Stats struct {
Count int64 `protobuf:"varint,1,opt,name=count" json:"count,omitempty"`
Elapsed float64 `protobuf:"fixed64,2,opt,name=elapsed" json:"elapsed,omitempty"`
}
func (m *Stats) Reset() { *m = Stats{} }
func (m *Stats) String() string { return proto.CompactTextString(m) }
func (*Stats) ProtoMessage() {}
func (*Stats) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (m *Stats) GetCount() int64 {
if m != nil {
return m.Count
}
return 0
}
func (m *Stats) GetElapsed() float64 {
if m != nil {
return m.Elapsed
}
return 0
}
type PlayerPool struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Filters []*Filter `protobuf:"bytes,2,rep,name=filters" json:"filters,omitempty"`
Roster *Roster `protobuf:"bytes,3,opt,name=roster" json:"roster,omitempty"`
Stats *Stats `protobuf:"bytes,4,opt,name=stats" json:"stats,omitempty"`
}
func (m *PlayerPool) Reset() { *m = PlayerPool{} }
func (m *PlayerPool) String() string { return proto.CompactTextString(m) }
func (*PlayerPool) ProtoMessage() {}
func (*PlayerPool) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
func (m *PlayerPool) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *PlayerPool) GetFilters() []*Filter {
if m != nil {
return m.Filters
}
return nil
}
func (m *PlayerPool) GetRoster() *Roster {
if m != nil {
return m.Roster
}
return nil
}
func (m *PlayerPool) GetStats() *Stats {
if m != nil {
return m.Stats
}
return nil
}
// Data structure for a profile to pass to the matchmaking function.
type Player struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
Pool string `protobuf:"bytes,3,opt,name=pool" json:"pool,omitempty"`
Attributes []*Player_Attribute `protobuf:"bytes,4,rep,name=attributes" json:"attributes,omitempty"`
}
func (m *Player) Reset() { *m = Player{} }
func (m *Player) String() string { return proto.CompactTextString(m) }
func (*Player) ProtoMessage() {}
func (*Player) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
func (m *Player) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Player) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
func (m *Player) GetPool() string {
if m != nil {
return m.Pool
}
return ""
}
func (m *Player) GetAttributes() []*Player_Attribute {
if m != nil {
return m.Attributes
}
return nil
}
type Player_Attribute struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Value int64 `protobuf:"varint,2,opt,name=value" json:"value,omitempty"`
}
func (m *Player_Attribute) Reset() { *m = Player_Attribute{} }
func (m *Player_Attribute) String() string { return proto.CompactTextString(m) }
func (*Player_Attribute) ProtoMessage() {}
func (*Player_Attribute) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6, 0} }
func (m *Player_Attribute) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *Player_Attribute) GetValue() int64 {
if m != nil {
return m.Value
}
return 0
}
// Simple message to return success/failure and error status.
type Result struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error" json:"error,omitempty"`
}
func (m *Result) Reset() { *m = Result{} }
func (m *Result) String() string { return proto.CompactTextString(m) }
func (*Result) ProtoMessage() {}
func (*Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
func (m *Result) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *Result) GetError() string {
if m != nil {
return m.Error
}
return ""
}
// IlInput is an empty message reserved for future use.
type IlInput struct {
}
func (m *IlInput) Reset() { *m = IlInput{} }
func (m *IlInput) String() string { return proto.CompactTextString(m) }
func (*IlInput) ProtoMessage() {}
func (*IlInput) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
// Epoch timestamp in seconds.
type Timestamp struct {
Ts int64 `protobuf:"varint,1,opt,name=ts" json:"ts,omitempty"`
}
func (m *Timestamp) Reset() { *m = Timestamp{} }
func (m *Timestamp) String() string { return proto.CompactTextString(m) }
func (*Timestamp) ProtoMessage() {}
func (*Timestamp) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }
func (m *Timestamp) GetTs() int64 {
if m != nil {
return m.Ts
}
return 0
}
// Simple message used to pass the connection string for the DGS to the player.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
type Assignments struct {
Rosters []*Roster `protobuf:"bytes,1,rep,name=rosters" json:"rosters,omitempty"`
ConnectionInfo *ConnectionInfo `protobuf:"bytes,2,opt,name=connection_info,json=connectionInfo" json:"connection_info,omitempty"`
}
func (m *Assignments) Reset() { *m = Assignments{} }
func (m *Assignments) String() string { return proto.CompactTextString(m) }
func (*Assignments) ProtoMessage() {}
func (*Assignments) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{11} }
func (m *Assignments) GetRosters() []*Roster {
if m != nil {
return m.Rosters
}
return nil
}
func (m *Assignments) GetConnectionInfo() *ConnectionInfo {
if m != nil {
return m.ConnectionInfo
}
return nil
}
func init() {
proto.RegisterType((*Profile)(nil), "Profile")
proto.RegisterType((*MatchObject)(nil), "MatchObject")
proto.RegisterType((*Roster)(nil), "Roster")
proto.RegisterType((*Filter)(nil), "Filter")
proto.RegisterType((*Stats)(nil), "Stats")
proto.RegisterType((*PlayerPool)(nil), "PlayerPool")
proto.RegisterType((*Player)(nil), "Player")
proto.RegisterType((*Player_Attribute)(nil), "Player.Attribute")
proto.RegisterType((*Result)(nil), "Result")
proto.RegisterType((*IlInput)(nil), "IlInput")
proto.RegisterType((*Timestamp)(nil), "Timestamp")
proto.RegisterType((*ConnectionInfo)(nil), "ConnectionInfo")
proto.RegisterType((*Assignments)(nil), "Assignments")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for API service
type APIClient interface {
// Calls to ask the matchmaker to run a matchmaking function.
//
// Run MMF once. Return a matchobject that fits this profile.
CreateMatch(ctx context.Context, in *Profile, opts ...grpc.CallOption) (*MatchObject, error)
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection.
ListMatches(ctx context.Context, in *Profile, opts ...grpc.CallOption) (API_ListMatchesClient, error)
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
DeleteMatch(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (*Result, error)
// Call for communication of connection info to players.
//
// Write the connection info for the list of players in the
// Assignments.Rosters to state storage. The FrontendAPI is responsible for
// sending anything written here to the game clients.
// TODO: change this to be agnostic; return a 'result' instead of a connection
// string so it can be integrated with session service etc
CreateAssignments(ctx context.Context, in *Assignments, opts ...grpc.CallOption) (*Result, error)
// Remove DGS connection info from state storage for all players in the Roster.
DeleteAssignments(ctx context.Context, in *Roster, opts ...grpc.CallOption) (*Result, error)
}
type aPIClient struct {
cc *grpc.ClientConn
}
func NewAPIClient(cc *grpc.ClientConn) APIClient {
return &aPIClient{cc}
}
func (c *aPIClient) CreateMatch(ctx context.Context, in *Profile, opts ...grpc.CallOption) (*MatchObject, error) {
out := new(MatchObject)
err := grpc.Invoke(ctx, "/API/CreateMatch", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) ListMatches(ctx context.Context, in *Profile, opts ...grpc.CallOption) (API_ListMatchesClient, error) {
stream, err := grpc.NewClientStream(ctx, &_API_serviceDesc.Streams[0], c.cc, "/API/ListMatches", opts...)
if err != nil {
return nil, err
}
x := &aPIListMatchesClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type API_ListMatchesClient interface {
Recv() (*MatchObject, error)
grpc.ClientStream
}
type aPIListMatchesClient struct {
grpc.ClientStream
}
func (x *aPIListMatchesClient) Recv() (*MatchObject, error) {
m := new(MatchObject)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *aPIClient) DeleteMatch(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteMatch", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) CreateAssignments(ctx context.Context, in *Assignments, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/CreateAssignments", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteAssignments(ctx context.Context, in *Roster, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteAssignments", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for API service
type APIServer interface {
// Calls to ask the matchmaker to run a matchmaking function.
//
// Run MMF once. Return a matchobject that fits this profile.
CreateMatch(context.Context, *Profile) (*MatchObject, error)
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection.
ListMatches(*Profile, API_ListMatchesServer) error
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
DeleteMatch(context.Context, *MatchObject) (*Result, error)
// Call for communication of connection info to players.
//
// Write the connection info for the list of players in the
// Assignments.Rosters to state storage. The FrontendAPI is responsible for
// sending anything written here to the game clients.
// TODO: change this to be agnostic; return a 'result' instead of a connection
// string so it can be integrated with session service etc
CreateAssignments(context.Context, *Assignments) (*Result, error)
// Remove DGS connection info from state storage for all players in the Roster.
DeleteAssignments(context.Context, *Roster) (*Result, error)
}
func RegisterAPIServer(s *grpc.Server, srv APIServer) {
s.RegisterService(&_API_serviceDesc, srv)
}
func _API_CreateMatch_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Profile)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateMatch(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateMatch",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateMatch(ctx, req.(*Profile))
}
return interceptor(ctx, in, info, handler)
}
func _API_ListMatches_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(Profile)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(APIServer).ListMatches(m, &aPIListMatchesServer{stream})
}
type API_ListMatchesServer interface {
Send(*MatchObject) error
grpc.ServerStream
}
type aPIListMatchesServer struct {
grpc.ServerStream
}
func (x *aPIListMatchesServer) Send(m *MatchObject) error {
return x.ServerStream.SendMsg(m)
}
func _API_DeleteMatch_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(MatchObject)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteMatch(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteMatch",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteMatch(ctx, req.(*MatchObject))
}
return interceptor(ctx, in, info, handler)
}
func _API_CreateAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Assignments)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateAssignments(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateAssignments",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateAssignments(ctx, req.(*Assignments))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteAssignments_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Roster)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteAssignments(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteAssignments",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteAssignments(ctx, req.(*Roster))
}
return interceptor(ctx, in, info, handler)
}
var _API_serviceDesc = grpc.ServiceDesc{
ServiceName: "API",
HandlerType: (*APIServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateMatch",
Handler: _API_CreateMatch_Handler,
},
{
MethodName: "DeleteMatch",
Handler: _API_DeleteMatch_Handler,
},
{
MethodName: "CreateAssignments",
Handler: _API_CreateAssignments_Handler,
},
{
MethodName: "DeleteAssignments",
Handler: _API_DeleteAssignments_Handler,
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "ListMatches",
Handler: _API_ListMatches_Handler,
ServerStreams: true,
},
},
Metadata: "backend.proto",
}
func init() { proto.RegisterFile("backend.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 591 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x54, 0x51, 0x6f, 0xd3, 0x30,
0x10, 0x9e, 0x9b, 0x26, 0x59, 0x2f, 0x63, 0xa3, 0xd6, 0x1e, 0xa2, 0x31, 0x41, 0xe7, 0x07, 0x56,
0x04, 0x8a, 0xa0, 0x08, 0xb1, 0x17, 0x84, 0xaa, 0x21, 0xa4, 0x4a, 0x20, 0x2a, 0x8f, 0x77, 0x94,
0xa6, 0xee, 0xf0, 0x48, 0xed, 0xc8, 0x76, 0x2a, 0x78, 0x43, 0xf0, 0x9f, 0xf8, 0x2d, 0xfc, 0x1c,
0x14, 0x3b, 0x69, 0x53, 0x41, 0x25, 0xe0, 0xcd, 0xdf, 0xe7, 0xbb, 0xf3, 0x77, 0xdf, 0xe5, 0x02,
0xb7, 0x66, 0x69, 0xf6, 0x89, 0x89, 0x79, 0x52, 0x28, 0x69, 0x24, 0x29, 0x20, 0x9c, 0x2a, 0xb9,
0xe0, 0x39, 0xc3, 0x87, 0xd0, 0xe1, 0xf3, 0x18, 0x0d, 0xd0, 0xb0, 0x47, 0x3b, 0x7c, 0x8e, 0xef,
0x02, 0x14, 0x4a, 0x16, 0x4c, 0x19, 0xce, 0x74, 0xdc, 0xb1, 0x7c, 0x8b, 0xc1, 0x18, 0xba, 0x22,
0x5d, 0xb2, 0xd8, 0xb3, 0x37, 0xf6, 0x8c, 0xcf, 0xc0, 0x2f, 0xa4, 0xcc, 0x75, 0xdc, 0x1d, 0x78,
0xc3, 0x68, 0x14, 0x25, 0xd3, 0x3c, 0xfd, 0xc2, 0xd4, 0x54, 0xca, 0x9c, 0xba, 0x1b, 0xf2, 0x1d,
0x41, 0xf4, 0x36, 0x35, 0xd9, 0xc7, 0x77, 0xb3, 0x1b, 0x96, 0x99, 0x7f, 0x7e, 0xf6, 0x0c, 0x42,
0x25, 0xb5, 0x61, 0x4a, 0xc7, 0x9e, 0x7d, 0x24, 0x4c, 0xa8, 0xc5, 0xb4, 0xe1, 0xff, 0x46, 0xc5,
0x4b, 0x08, 0x5c, 0xd6, 0xba, 0x0d, 0xb4, 0xd5, 0x46, 0x58, 0xd8, 0x94, 0x4a, 0x80, 0x7b, 0xc3,
0x95, 0xa0, 0x0d, 0x4f, 0xbe, 0x22, 0x08, 0x5e, 0xf3, 0x7c, 0x57, 0x85, 0x53, 0xe8, 0xa5, 0xc6,
0x28, 0x3e, 0x2b, 0x0d, 0xab, 0x9b, 0xd8, 0x10, 0x55, 0xc6, 0x32, 0xfd, 0xbc, 0xb2, 0xd6, 0x79,
0xd4, 0x9e, 0x2d, 0xc7, 0xc5, 0x2a, 0xee, 0xd6, 0x1c, 0x17, 0x2b, 0x7c, 0x0a, 0xbe, 0x36, 0xa9,
0xd1, 0xb1, 0x3f, 0x40, 0xc3, 0x68, 0x14, 0x24, 0x57, 0x15, 0xa2, 0x8e, 0x24, 0xcf, 0xc1, 0xb7,
0x18, 0x1f, 0x83, 0x9f, 0xc9, 0x52, 0x18, 0xab, 0xc0, 0xa3, 0x0e, 0xe0, 0x18, 0x42, 0x96, 0xa7,
0x85, 0x66, 0x73, 0x2b, 0x00, 0xd1, 0x06, 0x92, 0x6f, 0x08, 0x60, 0x63, 0xc9, 0x2e, 0x07, 0x16,
0xb6, 0xbb, 0x8d, 0x03, 0xae, 0x5b, 0xda, 0xf0, 0xf8, 0x1e, 0x04, 0xce, 0x70, 0xdb, 0x46, 0x6b,
0x0e, 0x35, 0xbd, 0x51, 0xdf, 0xfd, 0x93, 0xfa, 0x1f, 0x08, 0x02, 0x27, 0xe2, 0x7f, 0xbe, 0xbc,
0x6a, 0x8a, 0xcd, 0x97, 0x57, 0x9d, 0xf1, 0x13, 0x80, 0xb5, 0xbf, 0xcd, 0xe0, 0xfb, 0xf5, 0xd4,
0x92, 0x71, 0x73, 0x43, 0x5b, 0x41, 0x27, 0xcf, 0xa0, 0x37, 0x6e, 0x8f, 0xe4, 0x37, 0x13, 0x8e,
0xc1, 0x5f, 0xa5, 0x79, 0xe9, 0x06, 0xe8, 0x51, 0x07, 0xc8, 0x05, 0x04, 0x94, 0xe9, 0x32, 0xb7,
0x0e, 0xeb, 0x32, 0xcb, 0x98, 0xd6, 0x36, 0x6d, 0x9f, 0x36, 0xb0, 0xca, 0x64, 0x4a, 0x49, 0x55,
0x8b, 0x77, 0x80, 0xf4, 0x20, 0x9c, 0xe4, 0x13, 0x51, 0x94, 0x86, 0xdc, 0x81, 0xde, 0x7b, 0xbe,
0x64, 0xda, 0xa4, 0xcb, 0xa2, 0xea, 0xdf, 0xe8, 0x7a, 0x78, 0x1d, 0xa3, 0xc9, 0x0b, 0x38, 0xbc,
0x94, 0x42, 0xb0, 0xcc, 0x70, 0x29, 0x26, 0x62, 0x21, 0xf1, 0x43, 0xe8, 0x67, 0x6b, 0xe6, 0x83,
0x36, 0x8a, 0x8b, 0xeb, 0x5a, 0xea, 0xed, 0xcd, 0xc5, 0x95, 0xe5, 0xc9, 0x0d, 0x44, 0x63, 0xad,
0xf9, 0xb5, 0x58, 0x32, 0x61, 0xb6, 0x16, 0x06, 0xed, 0x58, 0x98, 0x0b, 0x38, 0x6a, 0x95, 0xe7,
0x62, 0x21, 0xad, 0xf0, 0x68, 0x74, 0x94, 0x6c, 0x0b, 0xa1, 0x87, 0xd9, 0x16, 0x1e, 0xfd, 0x44,
0xe0, 0x8d, 0xa7, 0x13, 0x7c, 0x0e, 0xd1, 0xa5, 0x62, 0xa9, 0x61, 0x76, 0xb5, 0xf1, 0x7e, 0x52,
0xff, 0x55, 0x4e, 0x0e, 0x92, 0xd6, 0xb2, 0x93, 0x3d, 0xfc, 0x00, 0xa2, 0x37, 0x5c, 0x1b, 0x4b,
0x32, 0xbd, 0x3b, 0xf0, 0x31, 0xc2, 0xf7, 0x21, 0x7a, 0xc5, 0x72, 0xd6, 0xd4, 0xdc, 0x0a, 0x38,
0x09, 0x13, 0x37, 0x04, 0xb2, 0x87, 0x1f, 0x41, 0xdf, 0xbd, 0xdd, 0xee, 0xfa, 0x20, 0x69, 0xa1,
0x76, 0xf4, 0x39, 0xf4, 0x5d, 0xd5, 0x76, 0x74, 0x63, 0x49, 0x2b, 0x70, 0x16, 0xd8, 0x3f, 0xe4,
0xd3, 0x5f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x04, 0x1a, 0xf8, 0x0a, 0x32, 0x05, 0x00, 0x00,
}

@ -1,4 +0,0 @@
/*
backend is a package compiled from the protobuffer in <REPO_ROOT>/api/protobuf-spec/backend.proto. It is auto-generated and shouldn't be edited.
*/
package backend

@ -1,10 +1,7 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY cmd/frontendapi cmd/frontendapi
COPY config config
COPY internal internal
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/frontendapi
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

@ -25,9 +25,11 @@ import (
"net"
"time"
frontend "github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/proto"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
playerq "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/playerq"
frontend "github.com/GoogleCloudPlatform/open-match/internal/pb"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/playerindices"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/redispb"
log "github.com/sirupsen/logrus"
"go.opencensus.io/stats"
"go.opencensus.io/tag"
@ -44,7 +46,6 @@ var (
feLogFields = log.Fields{
"app": "openmatch",
"component": "frontend",
"caller": "frontendapi/apisrv/apisrv.go",
}
feLog = log.WithFields(feLogFields)
)
@ -70,7 +71,7 @@ func New(cfg *viper.Viper, pool *redis.Pool) *FrontendAPI {
log.AddHook(metrics.NewHook(FeLogLines, KeySeverity))
// Register gRPC server
frontend.RegisterAPIServer(s.grpc, (*frontendAPI)(&s))
frontend.RegisterFrontendServer(s.grpc, (*frontendAPI)(&s))
feLog.Info("Successfully registered gRPC server")
return &s
}
@ -98,22 +99,15 @@ func (s *FrontendAPI) Open() error {
return nil
}
// CreateRequest is this service's implementation of the CreateRequest gRPC method defined in ../proto/frontend.proto
func (s *frontendAPI) CreateRequest(c context.Context, g *frontend.Group) (*frontend.Result, error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// CreatePlayer is this service's implementation of the CreatePlayer gRPC method defined in frontend.proto
func (s *frontendAPI) CreatePlayer(ctx context.Context, group *frontend.Player) (*frontend.Result, error) {
// Create context for tagging OpenCensus metrics.
funcName := "CreateRequest"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
funcName := "CreatePlayer"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Write group
// TODO: Remove playerq module and just use redishelper module once
// indexing has its own implementation
err := playerq.Create(redisConn, g.Id, g.Properties)
err := redispb.MarshalToRedis(ctx, s.pool, group, s.cfg.GetInt("redis.expirations.player"))
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
@ -124,24 +118,8 @@ func (s *frontendAPI) CreateRequest(c context.Context, g *frontend.Group) (*fron
return &frontend.Result{Success: false, Error: err.Error()}, err
}
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// DeleteRequest is this service's implementation of the DeleteRequest gRPC method defined in
// frontendapi/proto/frontend.proto
func (s *frontendAPI) DeleteRequest(c context.Context, g *frontend.Group) (*frontend.Result, error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Create context for tagging OpenCensus metrics.
funcName := "DeleteRequest"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
// Write group
err := playerq.Delete(redisConn, g.Id)
// Index group
err = playerindices.Create(ctx, s.pool, s.cfg, *group)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
@ -152,16 +130,60 @@ func (s *frontendAPI) DeleteRequest(c context.Context, g *frontend.Group) (*fron
return &frontend.Result{Success: false, Error: err.Error()}, err
}
// Return success.
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// GetAssignment is this service's implementation of the GetAssignment gRPC method defined in
// frontendapi/proto/frontend.proto
func (s *frontendAPI) GetAssignment(c context.Context, p *frontend.PlayerId) (*frontend.ConnectionInfo, error) {
// DeletePlayer is this service's implementation of the DeletePlayer gRPC method defined in frontend.proto
func (s *frontendAPI) DeletePlayer(ctx context.Context, group *frontend.Player) (*frontend.Result, error) {
// Create context for tagging OpenCensus metrics.
funcName := "DeletePlayer"
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// Deindex this player; at that point they don't show up in MMFs anymore. We can then delete
// their actual player object from Redis later.
err := playerindices.Delete(ctx, s.pool, s.cfg, group.Id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
// Kick off delete but don't wait for it to complete.
go s.deletePlayer(group.Id)
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
// deletePlayer is a 'lazy' player delete
// It should always be called as a goroutine and should only be called after
// confirmation that a player has been deindexed (and therefore MMFs can't
// find the player to read them anyway)
// As a final action, it also kicks off a lazy delete of the player's metadata
func (s *frontendAPI) deletePlayer(id string) {
err := redisHelpers.Delete(context.Background(), s.pool, id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Warn("Error deleting player from state storage, this could leak state storage memory but is usually not a fatal error")
}
go playerindices.DeleteMeta(context.Background(), s.pool, id)
}
// GetUpdates is this service's implementation of the GetUpdates gRPC method defined in frontend.proto
func (s *frontendAPI) GetUpdates(p *frontend.Player, assignmentStream frontend.Frontend_GetUpdatesServer) error {
// Get cancellable context
ctx, cancel := context.WithCancel(c)
ctx, cancel := context.WithCancel(assignmentStream.Context())
defer cancel()
// Create context for tagging OpenCensus metrics.
@ -169,132 +191,49 @@ func (s *frontendAPI) GetAssignment(c context.Context, p *frontend.PlayerId) (*f
fnCtx, _ := tag.New(ctx, tag.Insert(KeyMethod, funcName))
// get and return connection string
var connString string
watchChan := s.watcher(ctx, s.pool, p.Id) // watcher() runs the appropriate Redis commands.
watchChan := redispb.PlayerWatcher(ctx, s.pool, *p) // watcher() runs the appropriate Redis commands.
timeoutChan := time.After(time.Duration(s.cfg.GetInt("api.frontend.timeout")) * time.Second)
select {
case <-time.After(30 * time.Second): // TODO: Make this configurable.
err := errors.New("did not see matchmaking results in redis before timeout")
// TODO:Timeout: deal with the fallout
// When there is a timeout, need to send a stop to the watch channel.
// cancelling ctx isn't doing it.
//cancel()
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"playerid": p.Id,
}).Error("State storage error")
for {
errTag, _ := tag.NewKey("errtype")
fnCtx, _ := tag.New(ctx, tag.Insert(errTag, "watch_timeout"))
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.ConnectionInfo{ConnectionString: ""}, err
select {
case <-ctx.Done():
// Context cancelled
feLog.WithFields(log.Fields{
"playerid": p.Id,
}).Info("client closed connection successfully")
stats.Record(fnCtx, FeGrpcRequests.M(1))
return nil
case <-timeoutChan: // Timeout reached without client closing connection
// TODO:deal with the fallout
err := errors.New("server timeout reached without client closing connection")
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"playerid": p.Id,
}).Error("State storage error")
case connString = <-watchChan:
feLog.Debug(p.Id, "connString:", connString)
}
// Count errors for metrics
errTag, _ := tag.NewKey("errtype")
fnCtx, _ := tag.New(ctx, tag.Insert(errTag, "watch_timeout"))
stats.Record(fnCtx, FeGrpcErrors.M(1))
//TODO: we could generate a frontend.player message with an error
//field and stream it to the client before throwing the error here
//if we wanted to send more useful client retry information
return err
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.ConnectionInfo{ConnectionString: connString}, nil
}
// DeleteAssignment is this service's implementation of the DeleteAssignment gRPC method defined in
// frontendapi/proto/frontend.proto
func (s *frontendAPI) DeleteAssignment(c context.Context, p *frontend.PlayerId) (*frontend.Result, error) {
// Get redis connection from pool
redisConn := s.pool.Get()
defer redisConn.Close()
// Create context for tagging OpenCensus metrics.
funcName := "DeleteAssignment"
fnCtx, _ := tag.New(c, tag.Insert(KeyMethod, funcName))
// Write group
err := playerq.Delete(redisConn, p.Id)
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("State storage error")
stats.Record(fnCtx, FeGrpcErrors.M(1))
return &frontend.Result{Success: false, Error: err.Error()}, err
}
stats.Record(fnCtx, FeGrpcRequests.M(1))
return &frontend.Result{Success: true, Error: ""}, err
}
//TODO: Everything below this line will be moved to the redis statestorage library
// in an upcoming version.
// ================================================
// watcher makes a channel and returns it immediately. It also launches an
// asynchronous goroutine that watches a redis key and returns the value of
// the 'connstring' field of that key once it exists on the channel.
//
// The pattern for this function is from 'Go Concurrency Patterns', it is a function
// that wraps a closure goroutine, and returns a channel.
// reference: https://talks.golang.org/2012/concurrency.slide#25
func (s *frontendAPI) watcher(ctx context.Context, pool *redis.Pool, key string) <-chan string {
// Add the key as a field to all logs for the execution of this function.
feLog = feLog.WithFields(log.Fields{"key": key})
feLog.Debug("Watching key in statestorage for changes")
watchChan := make(chan string)
go func() {
// var declaration
var results string
var err = errors.New("haven't queried Redis yet")
// Loop, querying redis until this key has a value
for err != nil {
select {
case <-ctx.Done():
// Cleanup
close(watchChan)
return
default:
results, err = s.retrieveConnstring(ctx, pool, key, s.cfg.GetString("jsonkeys.connstring"))
if err != nil {
time.Sleep(5 * time.Second) // TODO: exp bo + jitter
}
}
case a := <-watchChan:
feLog.WithFields(log.Fields{
"assignment": a.Assignment,
"playerid": a.Id,
"status": a.Status,
"error": a.Error,
}).Info("updating client")
assignmentStream.Send(&a)
stats.Record(fnCtx, FeGrpcStreamedResponses.M(1))
// Reset timeout.
timeoutChan = time.After(time.Duration(s.cfg.GetInt("api.frontend.timeout")) * time.Second)
}
// Return value retrieved from Redis asynchronously and tell calling function we're done
feLog.Debug("Statestorage watched record update detected")
watchChan <- results
close(watchChan)
}()
return watchChan
}
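Illustrative only (not part of this diff): the 'Go Concurrency Patterns' shape the watcher above follows, reduced to a minimal self-contained form: a function that launches a closure goroutine and immediately returns a receive-only channel. The poll callback and the 5-second sleep are placeholders.

package watchsketch

import (
	"context"
	"time"
)

// watch returns a channel immediately; a background goroutine polls until a
// value exists (or the context is cancelled), sends it, and closes the channel.
func watch(ctx context.Context, poll func() (string, bool)) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for {
			select {
			case <-ctx.Done():
				return
			default:
				if v, ok := poll(); ok {
					select {
					case out <- v: // deliver the result to the caller
					case <-ctx.Done():
					}
					return
				}
				time.Sleep(5 * time.Second) // placeholder; real code would use backoff + jitter
			}
		}
	}()
	return out
}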
// retrieveConnstring is a concurrent-safe, context-aware redis HGET of the 'connstring' field in the input key
// TODO: This will be moved to the redis statestorage module.
func (s *frontendAPI) retrieveConnstring(ctx context.Context, pool *redis.Pool, key string, field string) (string, error) {
// Add the key as a field to all logs for the execution of this function.
feLog = feLog.WithFields(log.Fields{"key": key})
cmd := "HGET"
feLog.WithFields(log.Fields{"query": cmd}).Debug("Statestorage operation")
// Get a connection to redis
redisConn, err := pool.GetContext(ctx)
defer redisConn.Close()
// Encountered an issue getting a connection from the pool.
if err != nil {
feLog.WithFields(log.Fields{
"error": err.Error(),
"query": cmd}).Error("Statestorage connection error")
return "", err
}
// Run redis query and return
return redis.String(redisConn.Do("HGET", key, field))
}

@ -55,9 +55,10 @@ import (
//
var (
// API instrumentation
FeGrpcRequests = stats.Int64("frontendapi/requests_total", "Number of requests to the gRPC Frontend API endpoints", "1")
FeGrpcErrors = stats.Int64("frontendapi/errors_total", "Number of errors generated by the gRPC Frontend API endpoints", "1")
FeGrpcLatencySecs = stats.Float64("frontendapi/latency_seconds", "Latency in seconds of the gRPC Frontend API endpoints", "1")
FeGrpcRequests = stats.Int64("frontendapi/requests_total", "Number of requests to the gRPC Frontend API endpoints", "1")
FeGrpcStreamedResponses = stats.Int64("frontendapi/streamed_responses_total", "Number of responses streamed back from the gRPC Frontend API endpoints", "1")
FeGrpcErrors = stats.Int64("frontendapi/errors_total", "Number of errors generated by the gRPC Frontend API endpoints", "1")
FeGrpcLatencySecs = stats.Float64("frontendapi/latency_seconds", "Latency in seconds of the gRPC Frontend API endpoints", "1")
// Logging instrumentation
// There's no need to record this measurement directly if you use
@ -105,6 +106,14 @@ var (
TagKeys: []tag.Key{KeyMethod},
}
FeStreamedResponseCountView = &view.View{
Name: "frontend/grpc/streamed_responses",
Measure: FeGrpcStreamedResponses,
Description: "The number of successful streamed gRPC responses",
Aggregation: view.Count(),
TagKeys: []tag.Key{KeyMethod},
}
FeErrorCountView = &view.View{
Name: "frontend/grpc/errors",
Measure: FeGrpcErrors,
@ -133,6 +142,7 @@ var (
var DefaultFrontendAPIViews = []*view.View{
FeLatencyView,
FeRequestCountView,
FeStreamedResponseCountView,
FeErrorCountView,
FeLogCountView,
FeFailureCountView,

@ -1,9 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-frontendapi:dev',
'-f', 'Dockerfile.frontendapi',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-frontendapi:dev']

@ -1,7 +1,7 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in
frontendapi/proto/frontend.pb.go
${OM_ROOT}/internal/pb/frontend.pb.go
All the actual important bits are in the API Server source code: apisrv/apisrv.go
@ -28,6 +28,7 @@ import (
"github.com/GoogleCloudPlatform/open-match/cmd/frontendapi/apisrv"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redishelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
@ -41,7 +42,6 @@ var (
feLogFields = log.Fields{
"app": "openmatch",
"component": "frontend",
"caller": "frontendapi/main.go",
}
feLog = log.WithFields(feLogFields)
@ -51,10 +51,12 @@ var (
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output thru OpenCensus
log.AddHook(metrics.NewHook(apisrv.FeLogLines, apisrv.KeySeverity))
// Add a hook to the logger to log the filename & line number.
log.SetReportCaller(true)
// Viper config management initialization
cfg, err = config.Read()
if err != nil {
@ -63,10 +65,8 @@ func init() {
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") == true {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
feLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
@ -88,7 +88,7 @@ func main() {
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
feLog.WithFields(log.Fields{"testfield": "test"}).Info("Attempting to start gRPC server")
feLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server

@ -1,4 +0,0 @@
/*
frontend is a package compiled from the protobuffer in <REPO_ROOT>/api/protobuf-spec/frontend.proto. It is auto-generated and shouldn't be edited.
*/
package frontend

@ -1,335 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: frontend.proto
/*
Package frontend is a generated protocol buffer package.
It is generated from these files:
frontend.proto
It has these top-level messages:
Group
PlayerId
ConnectionInfo
Result
*/
package frontend
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Data structure for a group of players to pass to the matchmaking function.
// Obviously, the group can be a group of one!
type Group struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *Group) Reset() { *m = Group{} }
func (m *Group) String() string { return proto.CompactTextString(m) }
func (*Group) ProtoMessage() {}
func (*Group) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Group) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Group) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
type PlayerId struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
}
func (m *PlayerId) Reset() { *m = PlayerId{} }
func (m *PlayerId) String() string { return proto.CompactTextString(m) }
func (*PlayerId) ProtoMessage() {}
func (*PlayerId) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *PlayerId) GetId() string {
if m != nil {
return m.Id
}
return ""
}
// Simple message used to pass the connection string for the DGS to the player.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
// Simple message to return success/failure and error status.
type Result struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error" json:"error,omitempty"`
}
func (m *Result) Reset() { *m = Result{} }
func (m *Result) String() string { return proto.CompactTextString(m) }
func (*Result) ProtoMessage() {}
func (*Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Result) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *Result) GetError() string {
if m != nil {
return m.Error
}
return ""
}
func init() {
proto.RegisterType((*Group)(nil), "Group")
proto.RegisterType((*PlayerId)(nil), "PlayerId")
proto.RegisterType((*ConnectionInfo)(nil), "ConnectionInfo")
proto.RegisterType((*Result)(nil), "Result")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for API service
type APIClient interface {
CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error)
DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error)
}
type aPIClient struct {
cc *grpc.ClientConn
}
func NewAPIClient(cc *grpc.ClientConn) APIClient {
return &aPIClient{cc}
}
func (c *aPIClient) CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/CreateRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error) {
out := new(ConnectionInfo)
err := grpc.Invoke(ctx, "/API/GetAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for API service
type APIServer interface {
CreateRequest(context.Context, *Group) (*Result, error)
DeleteRequest(context.Context, *Group) (*Result, error)
GetAssignment(context.Context, *PlayerId) (*ConnectionInfo, error)
DeleteAssignment(context.Context, *PlayerId) (*Result, error)
}
func RegisterAPIServer(s *grpc.Server, srv APIServer) {
s.RegisterService(&_API_serviceDesc, srv)
}
func _API_CreateRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_GetAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).GetAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/GetAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).GetAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
var _API_serviceDesc = grpc.ServiceDesc{
ServiceName: "API",
HandlerType: (*APIServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateRequest",
Handler: _API_CreateRequest_Handler,
},
{
MethodName: "DeleteRequest",
Handler: _API_DeleteRequest_Handler,
},
{
MethodName: "GetAssignment",
Handler: _API_GetAssignment_Handler,
},
{
MethodName: "DeleteAssignment",
Handler: _API_DeleteAssignment_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "frontend.proto",
}
func init() { proto.RegisterFile("frontend.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 260 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x90, 0x41, 0x4b, 0xfb, 0x40,
0x10, 0xc5, 0x9b, 0xfc, 0x69, 0xda, 0x0e, 0x34, 0xff, 0xba, 0x78, 0x08, 0x39, 0x88, 0xec, 0xa9,
0x20, 0xee, 0x41, 0x0f, 0x7a, 0xf1, 0x50, 0x2a, 0x94, 0xdc, 0x4a, 0xfc, 0x00, 0x52, 0x93, 0x69,
0x59, 0x88, 0xbb, 0x71, 0x66, 0x72, 0xf0, 0x0b, 0xf9, 0x39, 0xc5, 0x4d, 0x6b, 0x55, 0xc4, 0xe3,
0xfb, 0xed, 0x7b, 0x8f, 0x7d, 0x03, 0xe9, 0x96, 0xbc, 0x13, 0x74, 0xb5, 0x69, 0xc9, 0x8b, 0xd7,
0x37, 0x30, 0x5c, 0x91, 0xef, 0x5a, 0x95, 0x42, 0x6c, 0xeb, 0x2c, 0x3a, 0x8f, 0xe6, 0x93, 0x32,
0xb6, 0xb5, 0x3a, 0x03, 0x68, 0xc9, 0xb7, 0x48, 0x62, 0x91, 0xb3, 0x38, 0xf0, 0x2f, 0x44, 0xe7,
0x30, 0x5e, 0x37, 0x9b, 0x57, 0xa4, 0xa2, 0xfe, 0x99, 0xd5, 0x77, 0x90, 0x2e, 0xbd, 0x73, 0x58,
0x89, 0xf5, 0xae, 0x70, 0x5b, 0xaf, 0x2e, 0xe0, 0xa4, 0xfa, 0x24, 0x8f, 0x2c, 0x64, 0xdd, 0x6e,
0x1f, 0x98, 0x1d, 0x1f, 0x1e, 0x02, 0xd7, 0xb7, 0x90, 0x94, 0xc8, 0x5d, 0x23, 0x2a, 0x83, 0x11,
0x77, 0x55, 0x85, 0xcc, 0xc1, 0x3c, 0x2e, 0x0f, 0x52, 0x9d, 0xc2, 0x10, 0x89, 0x3c, 0xed, 0x7f,
0xd6, 0x8b, 0xab, 0xb7, 0x08, 0xfe, 0x2d, 0xd6, 0x85, 0xd2, 0x30, 0x5d, 0x12, 0x6e, 0x04, 0x4b,
0x7c, 0xe9, 0x90, 0x45, 0x25, 0x26, 0xac, 0xcc, 0x47, 0xa6, 0x6f, 0xd6, 0x83, 0x0f, 0xcf, 0x3d,
0x36, 0xf8, 0xa7, 0xe7, 0x12, 0xa6, 0x2b, 0x94, 0x05, 0xb3, 0xdd, 0xb9, 0x67, 0x74, 0xa2, 0x26,
0xe6, 0x30, 0x3a, 0xff, 0x6f, 0xbe, 0x6f, 0xd4, 0x03, 0x35, 0x87, 0x59, 0x5f, 0xf9, 0x7b, 0xe2,
0x58, 0xfc, 0x94, 0x84, 0xeb, 0x5f, 0xbf, 0x07, 0x00, 0x00, 0xff, 0xff, 0x2b, 0xde, 0x2c, 0x5b,
0x8f, 0x01, 0x00, 0x00,
}
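For orientation, here is a minimal sketch of how a game client might drive this generated Frontend API client over gRPC. It is illustrative only: the import path, the `om-frontendapi:50504` address, and the group ID/properties values are assumptions based on the example clients elsewhere in this changeset, not code from the repository.

```go
package main

import (
	"context"
	"log"

	// Illustrative import path for the generated package shown above.
	frontend "github.com/GoogleCloudPlatform/open-match/examples/frontendclient/proto"
	"google.golang.org/grpc"
)

func main() {
	// Dial the Frontend API Kubernetes service (name and port are assumptions).
	conn, err := grpc.Dial("om-frontendapi:50504", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("failed to connect: %v", err)
	}
	defer conn.Close()
	client := frontend.NewAPIClient(conn)

	// Request matchmaking for a (hypothetical) group of one player.
	group := &frontend.Group{Id: "player-123", Properties: `{"mmr": 1200}`}
	if _, err := client.CreateRequest(context.Background(), group); err != nil {
		log.Fatalf("CreateRequest failed: %v", err)
	}

	// Fetch the assignment; a real client would poll until the DGS is ready.
	ci, err := client.GetAssignment(context.Background(), &frontend.PlayerId{Id: "player-123"})
	if err != nil {
		log.Fatalf("GetAssignment failed: %v", err)
	}
	log.Println("connect to:", ci.ConnectionString)
}
```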

@ -1,5 +1,5 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
# Necessary to get a specific version of the golang k8s client
RUN go get github.com/tools/godep
@ -10,11 +10,8 @@ RUN godep restore ./...
RUN rm -rf vendor/
RUN rm -rf /go/src/github.com/golang/protobuf/
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/
COPY cmd/mmforc cmd/mmforc
COPY config config
COPY internal internal
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmforc/
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

@ -1,9 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmforc:dev',
'-f', 'Dockerfile.mmforc',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmforc:dev']

@ -28,6 +28,7 @@ import (
"time"
"github.com/GoogleCloudPlatform/open-match/config"
"github.com/GoogleCloudPlatform/open-match/internal/logging"
"github.com/GoogleCloudPlatform/open-match/internal/metrics"
redisHelpers "github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis"
"github.com/tidwall/gjson"
@ -54,7 +55,6 @@ var (
mmforcLogFields = log.Fields{
"app": "openmatch",
"component": "mmforc",
"caller": "mmforc/main.go",
}
mmforcLog = log.WithFields(mmforcLogFields)
@ -64,9 +64,7 @@ var (
)
func init() {
// Logrus structured logging initialization
// Add a hook to the logger to auto-count log lines for metrics output thru OpenCensus
log.SetFormatter(&log.JSONFormatter{})
log.AddHook(metrics.NewHook(MmforcLogLines, KeySeverity))
// Viper config management initialization
@ -77,10 +75,8 @@ func init() {
}).Error("Unable to load config file")
}
if cfg.GetBool("debug") == true {
log.SetLevel(log.DebugLevel) // debug only, verbose - turn off in production!
mmforcLog.Warn("Debug logging configured. Not recommended for production!")
}
// Configure open match logging defaults
logging.ConfigureLogging(cfg)
// Configure OpenCensus exporter to Prometheus
// metrics.ConfigureOpenCensusPrometheusExporter expects that every OpenCensus view you
@ -185,9 +181,9 @@ func main() {
// waiting to run the evaluator when all your MMFs are already
// finished.
switch {
case time.Since(start).Seconds() >= float64(cfg.GetInt("interval.evaluator")):
case time.Since(start).Seconds() >= float64(cfg.GetInt("evaluator.interval")):
mmforcLog.WithFields(log.Fields{
"interval": cfg.GetInt("interval.evaluator"),
"interval": cfg.GetInt("evaluator.interval"),
}).Info("Maximum evaluator interval exceeded")
checkProposals = true
@ -219,7 +215,7 @@ func main() {
}).Info("Proposals available, evaluating!")
go evaluator(ctx, cfg, clientset)
}
_, err = redisHelpers.Delete(context.Background(), pool, "concurrentMMFs")
err = redisHelpers.Delete(context.Background(), pool, "concurrentMMFs")
if err != nil {
mmforcLog.WithFields(log.Fields{
"error": err.Error(),

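The `log.AddHook(metrics.NewHook(...))` call above wires logrus into OpenCensus so that log lines are counted as a metric, tagged by severity. A rough sketch of what such a hook can look like is below; the measure and tag key types are assumptions and may differ from the real `internal/metrics` implementation.

```go
package metrics

import (
	"context"

	log "github.com/sirupsen/logrus"
	"go.opencensus.io/stats"
	"go.opencensus.io/tag"
)

// Hook records one count per log entry against an OpenCensus measure, tagged by severity.
type Hook struct {
	measure *stats.Int64Measure
	keySev  tag.Key
}

// NewHook returns a logrus hook that counts log lines for metrics export.
func NewHook(m *stats.Int64Measure, sev tag.Key) Hook {
	return Hook{measure: m, keySev: sev}
}

// Levels registers the hook for every log level.
func (h Hook) Levels() []log.Level { return log.AllLevels }

// Fire is called by logrus once per entry; it records a count with a severity tag.
func (h Hook) Fire(e *log.Entry) error {
	ctx, err := tag.New(context.Background(), tag.Insert(h.keySev, e.Level.String()))
	if err != nil {
		return err
	}
	stats.Record(ctx, h.measure.M(1))
	return nil
}
```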
@ -1,10 +1,7 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY cmd/mmlogicapi cmd/mmlogicapi
COPY config config
COPY internal internal
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/mmlogicapi
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

@ -51,7 +51,6 @@ var (
mlLogFields = log.Fields{
"app": "openmatch",
"component": "mmlogic",
"caller": "mmlogicapi/apisrv/apisrv.go",
}
mlLog = log.WithFields(mlLogFields)
)
@ -166,7 +165,7 @@ func (s *mmlogicAPI) CreateProposal(c context.Context, prop *mmlogic.MatchObject
}
// Write all non-id fields from the protobuf message to state storage.
err := redispb.MarshalToRedis(c, prop, s.pool)
err := redispb.MarshalToRedis(c, s.pool, prop, s.cfg.GetInt("redis.expirations.matchobject"))
if err != nil {
stats.Record(fnCtx, MlGrpcErrors.M(1))
return &mmlogic.Result{Success: false, Error: err.Error()}, err

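The updated `MarshalToRedis` call above passes the connection pool and a TTL read from the `redis.expirations.matchobject` config key. As a hedged illustration of what applying such an expiration with redigo can look like (the actual `redispb` implementation may differ), the pattern is to write the object's fields as a hash and then set an expiration on the same key:

```go
package example

import "github.com/gomodule/redigo/redis"

// writeWithTTL writes fields as a Redis hash under key and, if ttlSeconds > 0,
// gives the key an expiration. Illustrative only; names are assumptions.
func writeWithTTL(pool *redis.Pool, key string, fields map[string]string, ttlSeconds int) error {
	conn := pool.Get()
	defer conn.Close()

	for f, v := range fields {
		if _, err := conn.Do("HSET", key, f, v); err != nil {
			return err
		}
	}
	// Treat a TTL of zero as "never expire", matching the optional-TTL behavior
	// described in the commit messages for this change.
	if ttlSeconds > 0 {
		if _, err := conn.Do("EXPIRE", key, ttlSeconds); err != nil {
			return err
		}
	}
	return nil
}
```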
@ -1,9 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmlogicapi:dev',
'-f', 'Dockerfile.mmlogicapi',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmlogicapi:dev']

@ -1,7 +1,7 @@
/*
This application handles all the startup and connection scaffolding for
running a gRPC server serving the APIService as defined in
mmlogic/proto/mmlogic.pb.go
${OM_ROOT}/internal/pb/mmlogic.pb.go
All the actual important bits are in the API Server source code: apisrv/apisrv.go
@ -41,7 +41,6 @@ var (
mlLogFields = log.Fields{
"app": "openmatch",
"component": "mmlogic",
"caller": "mmlogicapi/main.go",
}
mlLog = log.WithFields(mlLogFields)
@ -88,7 +87,7 @@ func main() {
defer pool.Close()
// Instantiate the gRPC server with the connections we've made
mlLog.WithFields(log.Fields{"testfield": "test"}).Info("Attempting to start gRPC server")
mlLog.Info("Attempting to start gRPC server")
srv := apisrv.New(cfg, pool)
// Run the gRPC server

@ -29,7 +29,6 @@ var (
logFields = log.Fields{
"app": "openmatch",
"component": "config",
"caller": "config/main.go",
}
cfgLog = log.WithFields(logFields)
@ -43,12 +42,11 @@ var (
// REDIS_SENTINEL_PORT_6379_TCP_PROTO=tcp
// REDIS_SENTINEL_SERVICE_HOST=10.55.253.195
envMappings = map[string]string{
"redis.hostname": "REDIS_SENTINEL_SERVICE_HOST",
"redis.port": "REDIS_SENTINEL_SERVICE_PORT",
"redis.hostname": "REDIS_SERVICE_HOST",
"redis.port": "REDIS_SERVICE_PORT",
"redis.pool.maxIdle": "REDIS_POOL_MAXIDLE",
"redis.pool.maxActive": "REDIS_POOL_MAXACTIVE",
"redis.pool.idleTimeout": "REDIS_POOL_IDLETIMEOUT",
"debug": "DEBUG",
}
// Viper config management setup
@ -70,7 +68,10 @@ var (
func Read() (*viper.Viper, error) {
// Viper config management initialization
// Support either json or yaml file types (json for backwards compatibility
// with previous versions)
cfg.SetConfigType("json")
cfg.SetConfigType("yaml")
cfg.SetConfigName("matchmaker_config")
cfg.AddConfigPath(".")
@ -109,5 +110,11 @@ func Read() (*viper.Viper, error) {
}
// Look for updates to the config; in Kubernetes, this is implemented using
// a ConfigMap that is written to the matchmaker_config.yaml file, which is
// what the Open Match components using Viper monitor for changes.
// More details about Open Match's use of Kubernetes ConfigMaps at:
// https://github.com/GoogleCloudPlatform/open-match/issues/42
cfg.WatchConfig() // Watch and re-read config file.
return cfg, err
}
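The config package now reads a YAML `matchmaker_config` file and watches it for changes (on Kubernetes, the file is backed by a ConfigMap). A minimal standalone sketch of that Viper pattern is below; the reload callback and the specific keys read are assumptions for illustration.

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
	"github.com/spf13/viper"
)

func main() {
	cfg := viper.New()
	// matchmaker_config.yaml in the working directory, mirroring config.Read().
	cfg.SetConfigType("yaml")
	cfg.SetConfigName("matchmaker_config")
	cfg.AddConfigPath(".")
	if err := cfg.ReadInConfig(); err != nil {
		log.Fatalf("unable to load config file: %v", err)
	}

	// Re-read the file when Kubernetes updates the mounted ConfigMap.
	cfg.WatchConfig()
	cfg.OnConfigChange(func(e fsnotify.Event) {
		log.Println("config file changed:", e.Name)
	})

	// Example reads against keys introduced in this change.
	log.Println("frontend timeout:", cfg.GetInt("api.frontend.timeout"))
	log.Println("matchobject TTL:", cfg.GetInt("redis.expirations.matchobject"))
}
```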

@ -1,19 +1,29 @@
{
"debug": true,
"logging":{
"level": "debug",
"format": "text",
"source": true
},
"api": {
"backend": {
"hostname": "om-backendapi",
"port": 50505
"port": 50505,
"timeout": 90
},
"frontend": {
"hostname": "om-frontendapi",
"port": 50504
"port": 50504,
"timeout": 300
},
"mmlogic": {
"hostname": "om-mmlogicapi",
"port": 50503
}
},
"evalutor": {
"interval": 10
},
"metrics": {
"port": 9555,
"endpoint": "/metrics",
@ -40,19 +50,19 @@
"duration": 800
},
"expired": {
"name": "timestamp",
"name": "OM_METADATA.accessed",
"offset": 800,
"duration": 0
}
},
"defaultImages": {
"evaluator": {
"name": "gcr.io/matchmaker-dev-201405/openmatch-evaluator",
"name": "gcr.io/open-match-public-images/openmatch-evaluator",
"tag": "dev"
},
"mmf": {
"name": "gcr.io/matchmaker-dev-201405/openmatch-mmf",
"tag": "py3"
"name": "gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple",
"tag": "dev"
}
},
"redis": {
@ -68,18 +78,17 @@
},
"results": {
"pageSize": 10000
}
},
"expirations": {
"player": 43200,
"matchobject":43200
}
},
"jsonkeys": {
"mmfImage": "imagename",
"rosters": "properties.rosters",
"connstring": "connstring",
"pools": "properties.pools"
},
"interval": {
"evaluator": 10,
"resultsTimeout": 30
},
"playerIndices": [
"char.cleric",
"char.knight",

@ -27,7 +27,7 @@
"containers":[
{
"name":"om-backend",
"image":"gcr.io/matchmaker-dev-201405/openmatch-backendapi:dev",
"image":"gcr.io/open-match-public-images/openmatch-backendapi:dev",
"imagePullPolicy":"Always",
"ports": [
{

@ -27,7 +27,7 @@
"containers":[
{
"name":"om-frontendapi",
"image":"gcr.io/matchmaker-dev-201405/openmatch-frontendapi:dev",
"image":"gcr.io/open-match-public-images/openmatch-frontendapi:dev",
"imagePullPolicy":"Always",
"ports": [
{

@ -27,7 +27,7 @@
"containers":[
{
"name":"om-mmforc",
"image":"gcr.io/matchmaker-dev-201405/openmatch-mmforc:dev",
"image":"gcr.io/open-match-public-images/openmatch-mmforc:dev",
"imagePullPolicy":"Always",
"ports": [
{

@ -27,7 +27,7 @@
"containers":[
{
"name":"om-mmlogic",
"image":"gcr.io/matchmaker-dev-201405/openmatch-mmlogicapi:dev",
"image":"gcr.io/open-match-public-images/openmatch-mmlogicapi:dev",
"imagePullPolicy":"Always",
"ports": [
{

@ -2,7 +2,7 @@
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "redis-sentinel"
"name": "redis"
},
"spec": {
"selector": {

@ -1,6 +1,6 @@
# Compiling from source
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild_<name>.yaml` files for each component in the repository root.
All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild.yaml` files for each component in their respective directories. Note that most of them build from a 'base' image called `openmatch-devbase`. You can find a `Dockerfile` and `cloudbuild_base.yaml` file for this in the repository root. Build it first!
Note: Although Google Cloud Platform includes some free usage, you may incur charges following this guide if you use GCP products.
@ -11,11 +11,11 @@ Note: Although Google Cloud Platform includes some free usage, you may incur cha
**NOTE**: Before starting with this guide, you'll need to update all the URIs from the tutorial's gcr.io container image registry with the URI for your own image registry. If you are using the gcr.io registry on GCP, the default URI is `gcr.io/<PROJECT_NAME>`. Here's an example command in Linux to do the replacement for you (replace YOUR_REGISTRY_URI with your URI; run this from the repository root directory):
```
# Linux
egrep -lR 'gcr.io/matchmaker-dev-201405' . | xargs sed -i -e 's|gcr.io/matchmaker-dev-201405|YOUR_REGISTRY_URI|g'
egrep -lR 'open-match-public-images' . | xargs sed -i -e 's|open-match-public-images|<PROJECT_NAME>|g'
```
```
# Mac OS, you can delete the .backup files after if all looks good
egrep -lR 'gcr.io/matchmaker-dev-201405' . | xargs sed -i'.backup' -e 's|gcr.io/matchmaker-dev-201405|YOUR_REGISTRY_URI|g'
egrep -lR 'open-match-public-images' . | xargs sed -i'.backup' -e 's|open-match-public-images|<PROJECT_NAME>|g'
```
## Example of building using Google Cloud Builder
@ -26,9 +26,14 @@ The [Quickstart for Docker](https://cloud.google.com/cloud-build/docs/quickstart
* In Linux, you can run the following one-line bash script to compile all the images for the first time, and push them to your gcr.io registry. You must enable the [Container Registry API](https://console.cloud.google.com/flows/enableapi?apiid=containerregistry.googleapis.com) first.
```
# First, build the 'base' image. Some other images depend on this so it must complete first.
gcloud build submit --config cloudbuild_base.yaml
gcloud builds submit --config cloudbuild_base.yaml
# Build all other images.
for dfile in $(ls Dockerfile.* | grep -v base); do gcloud builds submit --config cloudbuild_${dfile##*.}.yaml & done
for dfile in $(find . -name "Dockerfile" -iregex "./\(cmd\|test\|examples\)/.*"); do cd $(dirname ${dfile}); gcloud builds submit --config cloudbuild.yaml & cd -; done
```
Note: as of v0.3.0 alpha, the Python and PHP MMF examples still depend on the previous way of building until [issue #42, introducing new config management](https://github.com/GoogleCloudPlatform/open-match/issues/42) is resolved (apologies for the inconvenience):
```
gcloud builds submit --config cloudbuild_mmf_py3.yaml
gcloud builds submit --config cloudbuild_mmf_php.yaml
```
* Once the cloud builds have completed, you can verify that all the builds succeeded in the cloud console or by checking the list of images in your **gcr.io** registry:
```
@ -37,26 +42,16 @@ The [Quickstart for Docker](https://cloud.google.com/cloud-build/docs/quickstart
(your registry name will be different)
```
NAME
gcr.io/matchmaker-dev-201405/openmatch-backendapi
gcr.io/matchmaker-dev-201405/openmatch-devbase
gcr.io/matchmaker-dev-201405/openmatch-evaluator
gcr.io/matchmaker-dev-201405/openmatch-frontendapi
gcr.io/matchmaker-dev-201405/openmatch-mmf
gcr.io/matchmaker-dev-201405/openmatch-mmforc
gcr.io/matchmaker-dev-201405/openmatch-mmlogicapi
gcr.io/open-match-public-images/openmatch-backendapi
gcr.io/open-match-public-images/openmatch-devbase
gcr.io/open-match-public-images/openmatch-evaluator
gcr.io/open-match-public-images/openmatch-frontendapi
gcr.io/open-match-public-images/openmatch-mmf-golang-manual-simple
gcr.io/open-match-public-images/openmatch-mmf-php-mmlogic-simple
gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple
gcr.io/open-match-public-images/openmatch-mmforc
gcr.io/open-match-public-images/openmatch-mmlogicapi
```
* The default example MMF images all use the same name (`openmatch-mmf`), with different image tags designating the different examples. You can check that these exist by running this command (again, substituting your **gcr.io** registry):
```
gcloud container images list-tags gcr.io/matchmaker-dev-201405/openmatch-mmf
```
You should see tags for several of the example MMFs. By default, Open Match will try to use the `openmatch-mmf:py3` image in the examples below, so it is important that the image build was successful and a `py3` image tag exists in your **gcr.io** registry before you continue:
```
DIGEST TAGS TIMESTAMP
5345475e026c php 2018-12-05T00:06:47
e5c274c3509c go 2018-12-05T00:02:17
1b3ec3176d0f py3 2018-12-05T00:02:07
```
## Example of starting a GKE cluster
A cluster with mostly default settings will work for this development guide. In the Cloud SDK command below we start it with machines that have 4 vCPUs. Alternatively, you can use the 'Create Cluster' button in the [Google Cloud Console](https://console.cloud.google.com/kubernetes).
@ -73,7 +68,7 @@ gcloud compute zones list
## Configuration
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration (if you would like to help us design the replacement config solution, please join the [discussion](https://github.com/GoogleCloudPlatform/open-match/issues/42)). To this end, there is a single centralized config file located in `<REPO_ROOT>/config/`, which is symlinked to each component's subdirectory for convenience when building locally.
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration (if you would like to help us design the replacement config solution, please join the [discussion](https://github.com/GoogleCloudPlatform/open-match/issues/42)). To this end, there is a single centralized config file located in `<REPO_ROOT>/config/`, which is symlinked to each component's subdirectory for convenience when building locally. Note: [there is an issue with symlinks on Windows](../issues/57).
## Running Open Match in a development environment
@ -93,8 +88,8 @@ The rest of this guide assumes you have a cluster (example is using GKE, but wor
kubectl apply -f frontendapi_service.json
kubectl apply -f mmforc_deployment.json
kubectl apply -f mmforc_serviceaccount.json
kubectl apply -f mmlogic_deployment.json
kubectl apply -f mmlogic_service.json
kubectl apply -f mmlogicapi_deployment.json
kubectl apply -f mmlogicapi_service.json
```
* [optional, but recommended] Configure the OpenCensus metrics services:
```
@ -135,7 +130,7 @@ service/om-mmforc-metrics ClusterIP 10.59.240.59 <none> 39555/TC
service/om-mmlogicapi ClusterIP 10.59.248.3 <none> 50503/TCP 9m
service/prometheus NodePort 10.59.252.212 <none> 9090:30900/TCP 9m
service/prometheus-operated ClusterIP None <none> 9090/TCP 9m
service/redis-sentinel ClusterIP 10.59.249.197 <none> 6379/TCP 9m
service/redis ClusterIP 10.59.249.197 <none> 6379/TCP 9m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/om-backendapi 1 1 1 1 9m
@ -179,7 +174,7 @@ statefulset.apps/prometheus-prometheus 1 1 9m
In the end: *caveat emptor*. These tools all work and are quite small, and as such are fairly easy for developers to understand by looking at the code and logging output. They are provided as-is just as a reference point of how to begin experimenting with Open Match integrations.
* `examples/frontendclient` is a fake client for the Frontend API. It pretends to be a real game client connecting to Open Match and requests a game, then dumps out the connection string it receives. Note that it doesn't actually test the return path by looking for arbitrary results from your matchmaking function; it pauses and tells you the name of a key to set a connection string in directly using a redis-cli client. **Note**: If you're using the rest of these test programs, you're probably using the Backend Client below. The default profiles that sends to the backend look for way more than one player, so if you want to see meaningful results from running this Frontend Client, you're going to need to generate a bunch of fake players using the client load simulation tool at the same time. Otherwise, expect to wait until it times out as your matchmaker never has enough players to make a successful match.
* `examples/frontendclient` is a fake client for the Frontend API. It pretends to be a group of real game clients connecting to Open Match. It requests a game, then dumps out the results each player receives to the screen until you press the enter key. **Note**: If you're using the rest of these test programs, you're probably using the Backend Client below. The default profiles that the backend client sends to the backend look for many more than one player, so if you want to see meaningful results from running this Frontend Client, you're going to need to generate a bunch of fake players using the client load simulation tool at the same time. Otherwise, expect to wait until it times out as your matchmaker never has enough players to make a successful match.
* `examples/backendclient` is a fake client for the Backend API. It pretends to be a dedicated game server backend connecting to openmatch and sending in a match profile to fill. Once it receives a match object with a roster, it will also issue a call to assign the player IDs, and gives an example connection string. If it never seems to get a match, make sure you're adding players to the pool using the other two tools. Note: building this image requires that you first build the 'base' dev image (look for `cloudbuild_base.yaml` and `Dockerfile.base` in the root directory) and then update the first step to point to that image in your registry. This will be simplified in a future release. **Note**: If you run this by itself, expect it to wait about 30 seconds, then return a result of 'insufficient players' and exit - this is working as intended. Use the client load simulation tool below to add players to the pool or you'll never be able to make a successful match.
* `test/cmd/client` is a (VERY) basic client load simulation tool. It does **not** test the Frontend API - in fact, it ignores it and writes players directly to state storage on its own. It doesn't do anything but loop endlessly, writing players into state storage so you can test your backend integration, and run your custom MMFs and Evaluators (which are only triggered when there are players in the pool).

@ -1 +1,28 @@
During alpha, please do not use Open Match as-is in production. To develop against it, please see the [development guide](development.md).
# "Productionizing" a deployment
Here are some steps that should be taken to productionize your Open Match deployment before exposing it to live public traffic. Some of these overlap with best practices for [productionizing Kubernetes](https://cloud.google.com/blog/products/gcp/exploring-container-security-running-a-tight-ship-with-kubernetes-engine-1-10) or cloud infrastructure more generally. We will work to fold as many of these into the default deployment strategy for Open Match as possible going forward.
**This is not an exhaustive list and addressing the items in this document alone shouldn't be considered sufficient. Every game is different and will have different production needs.**
## Kubernetes
All the usual guidance around hardening and securing Kubernetes is applicable to running Open Match. [Here is a guide to security for Google Kubernetes Engine on GCP](https://cloud.google.com/blog/products/gcp/exploring-container-security-running-a-tight-ship-with-kubernetes-engine-1-10), and a number of other guides are available from reputable sources on the internet.
### Minimum permissions on Kubernetes
* The components of Open Match should be run in a separate Kubernetes namespace if you're also using the cluster for other services. As of 0.3.0 they run in the 'default' namespace if you follow the development guide.
* Note that the MMForc process has cluster management permissions by default. Before moving to production, you should create a role with access only to create Kubernetes Jobs and configure the MMForc to use it.
### Kubernetes Jobs (MMFOrc)
The 0.3.0 MMFOrc component runs your MMFs as Kubernetes Jobs. You should periodically delete these jobs to keep the cluster running smoothly. How often you need to delete them is dependent on how many you are running. There are a number of open source solutions to do this for you. ***Note that once you delete a job, you won't have access to that job's logs anymore unless you're sending your logs from Kubernetes to a log aggregator like Google Stackdriver. This can make it a challenge to troubleshoot issues.***
## Open Match config
Debug logging and the extra debug code paths should be disabled in the `config/matchmaker_config.json` file (as of the time of this writing, 0.3.0).
## Public APIs for Open Match
In many cases, you may choose to configure your game clients to connect to the Open Match Frontend API, and in a few select cases (such as using it for P2P non-dedicated game server hosting), the game client may also need to connect to the Backend API. In these cases, it is important to secure the API endpoints against common attacks, such as DDoS or malformed packet floods.
* Using a cloud provider's Load Balancer in front of the Kubernetes Service is a common approach to enable vendor-specific DDoS protections. Check the documentation for your cloud vendor's Load Balancer for more details ([GCP's DDoS protection](https://cloud.google.com/armor/)).
* An API framework can be used to limit endpoint access to only game clients you have authenticated using your platform's authentication service. This may be accomplished with simple authentication tokens or a more complex scheme depending on your needs.
## Testing
(as of 0.3.0) The provided test programs are just for validating that Open Match is operating correctly; they are command-line applications designed to be run from within the same cluster as Open Match and are therefore not a suitable test harness for doing production testing to make sure your matchmaker is ready to handle your live game. Instead, it is recommended that you integrate Open Match into your game client and test it using the actual game flow players will use if at all possible.
### Load testing
Ideally, you would already be making 'headless' game clients for automated QA and load testing of your game servers; it is recommended that you also code these testing clients to be able to act as a mock player connecting to Open Match. Load testing platform services is a huge topic and should reflect your actual game access patterns as closely as possible, which will be very game dependent.
**Note: It is never a good idea to do load testing against a cloud vendor without informing them first!**

docs/roadmap.md Normal file

@ -0,0 +1,20 @@
# Roadmap. [Subject to change]
Releases are scheduled for every 6 weeks. **Every release is a stable, long-term-support version**. Even for alpha releases, best-effort support is available. With a little work and input from an experienced live services developer, you can go to production with any version on the [releases page](https://github.com/GoogleCloudPlatform/open-match/releases).
Our current thinking is to wait to take Open Match out of alpha/beta (and label it 1.0) until it can be used out-of-the-box, standalone, for developers that don't have any existing platform services. Which is to say, **the majority of established game developers likely won't have any reason to wait for the 1.0 release if Open Match already handles your needs**. If you already have live platform services that you plan to integrate Open Match with (player authentication, a group invite system, dedicated game servers, metrics collection, logging aggregation, etc), then a lot of the features planned between 0.4.0 and 1.0 likely aren't of much interest to you anyway.
## Upcoming releases
* **0.4.0** &mdash; Agones Integration & MMF on [Knative](https://cloud.google.com/Knative/)
MMF instrumentation
Match object expiration / lazy deletion
API autoscaling by default
API changes after this will likely be additions or very minor
* **0.5.0** &mdash; Tracing, Metrics, and KPI Dashboard
* **0.6.0** &mdash; Load testing suite
* **1.0.0** &mdash; API Formally Stable. Breaking API changes will require a new major version number.
* **1.1.0** &mdash; Canonical MMFs
## Philosophy
* The next version (0.4.0) will focus on making MMFs run on serverless platforms - specifically Knative. These will just be the first steps, as Knative is still pretty early. We want to get a proof of concept working so we can roadmap out the future "MMF on Knative" experience. Our intention is to keep MMFs as compatible as possible with the current Kubernetes job-based way of doing them. Our hope is that by the time Knative is mature, we'll be able to provide a [Knative build](https://github.com/Knative/build) pipeline that will take existing MMFs and build them as Knative functions. In the meantime, we'll map out a relatively painless (but not yet fully automated) way to make an existing MMF into a Kubernetes Deployment that looks as similar as possible to what [Knative serving](https://github.com/knative/serving) is shaping up to be, in an effort to make the eventual switchover painless. Basically all of this is just _optimizing MMFs to make them spin up faster and use fewer resources_; **we're not planning to change what MMFs do or the interfaces they need to fulfill**. Existing MMFs will continue to run as-is, and in the future moving them to Knative should be both **optional** and **largely automated**.
* 0.4.0 represents the natural stopping point for adding new functionality until we have more community uptake and direction. We don't anticipate many API changes in 0.4.0 and beyond. Maybe new API calls for new functionality, but we're unlikely to see big shifts in existing calls through 1.0 and its point releases. We'll issue a new major release version if we decide we need those changes.
* The 0.5.0 version and beyond will be focused on operationalizing the out-of-the-box experience. Metrics and analytics and a default dashboard, additional tooling, and a load testing suite are all planned. We want it to be easy for operators to see KPI and know what's going on with Open Match.

@ -1,5 +1,5 @@
#FROM golang:1.10.3 as builder
FROM gcr.io/matchmaker-dev-201405/openmatch-devbase as builder
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/backendclient
COPY ./ ./
RUN go get -d -v

@ -1,11 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-devbase' ]
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-backendclient:dev',
'--cache-from=gcr.io/$PROJECT_ID/openmatch-devbase:latest',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-backendclient:dev']

@ -25,7 +25,6 @@ import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"log"
@ -52,10 +51,12 @@ func main() {
// Read the profile
filename := "profiles/testprofile.json"
if len(os.Args) > 1 {
filename = os.Args[1]
}
log.Println("Reading profile from ", filename)
/*
if len(os.Args) > 1 {
filename = os.Args[1]
}
log.Println("Reading profile from ", filename)
*/
jsonFile, err := os.Open(filename)
if err != nil {
panic("Failed to open file specified at command line. Did you forget to specify one?")
@ -116,9 +117,9 @@ func main() {
if err != nil {
log.Fatalf("Attempting to open stream for ListMatches(_) = _, %v", err)
}
log.Printf("Waiting for matches...")
//for i := 0; i < 2; i++ {
for {
log.Printf("Waiting for matches...")
match, err := stream.Recv()
if err == io.EOF {
break
@ -130,7 +131,7 @@ func main() {
if match.Properties == "{error: insufficient_players}" {
log.Println("Waiting for a larger player pool...")
break
//break
}
// Validate JSON before trying to parse it
@ -139,36 +140,23 @@ func main() {
}
log.Println("Received match:")
ppJSON(match.Properties)
fmt.Println(match)
//fmt.Println(match) // Debug
/*
// Get players from the json properties.roster field
log.Println("Gathering roster from received match...")
players := make([]string, 0)
result := gjson.Get(match.Properties, "properties.roster")
result.ForEach(func(teamName, teamRoster gjson.Result) bool {
teamRoster.ForEach(func(_, player gjson.Result) bool {
players = append(players, player.String())
return true // keep iterating
})
return true // keep iterating
})
//log.Printf("players = %+v\n", players)
// Assign players in this match to our server
connstring := "example.com:12345"
if len(os.Args) >= 2 {
connstring = os.Args[1]
log.Printf("Player assignment '%v' specified at commandline", connstring)
}
log.Println("Assigning players to DGS at", connstring)
// Assign players in this match to our server
log.Println("Assigning players to DGS at example.com:12345")
playerstr := strings.Join(players, " ")
roster := &backend.Roster{PlayerIds: playerstr}
ci := &backend.ConnectionInfo{ConnectionString: "example.com:12345"}
assign := &backend.Assignments{Roster: roster, ConnectionInfo: ci}
_, err = client.CreateAssignments(context.Background(), assign)
if err != nil {
panic(err)
}
*/
assign := &backend.Assignments{Rosters: match.Rosters, Assignment: connstring}
log.Printf("Waiting for matches...")
_, err = client.CreateAssignments(context.Background(), assign)
if err != nil {
log.Println(err)
}
log.Println("Success! Not deleting assignments [demo mode].")
}

@ -1,5 +1,5 @@
{
"imagename":"gcr.io/matchmaker-dev-201405/openmatch-mmf:py3",
"imagename":"gcr.io/open-match-public-images/openmatch-mmf-py3-mmlogic-simple:dev",
"name":"testprofilev1",
"id":"testprofile",
"properties":{

@ -1,10 +1,7 @@
# Golang application builder steps
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY examples/evaluators/golang/simple examples/evaluators/golang/simple
COPY config config
COPY internal internal
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/evaluators/golang/simple
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

@ -1,9 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-evaluator:dev',
'-f', 'Dockerfile.evaluator',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-evaluator:dev']

@ -48,8 +48,8 @@ func main() {
// Read config
lgr.Println("Initializing config...")
cfg, err := readConfig("matchmaker_config", map[string]interface{}{
"REDIS_SENTINEL_SERVICE_HOST": "redis-sentinel",
"REDIS_SENTINEL_SERVICE_PORT": "6379",
"REDIS_SERVICE_HOST": "redis",
"REDIS_SERVICE_PORT": "6379",
"auth": map[string]string{
// Read from k8s secret eventually
// Probably doesn't need a map, just here for reference
@ -63,7 +63,7 @@ func main() {
// Connect to redis
// As per https://www.iana.org/assignments/uri-schemes/prov/redis
// redis://user:secret@localhost:6379/0?foo=bar&qux=baz
// redis pool docs: https://godoc.org/github.com/gomodule/redigo/redis#Pool
redisURL := "redis://" + cfg.GetString("REDIS_SENTINEL_SERVICE_HOST") + ":" + cfg.GetString("REDIS_SENTINEL_SERVICE_PORT")
redisURL := "redis://" + cfg.GetString("REDIS_SERVICE_HOST") + ":" + cfg.GetString("REDIS_SERVICE_PORT")
lgr.Println("Connecting to redis at", redisURL)
pool := redis.Pool{
MaxIdle: 3,
@ -157,10 +157,10 @@ func readConfig(filename string, defaults map[string]interface{}) (*viper.Viper,
REDIS_SENTINEL_PORT_6379_TCP=tcp://10.55.253.195:6379
REDIS_SENTINEL_PORT=tcp://10.55.253.195:6379
REDIS_SENTINEL_PORT_6379_TCP_ADDR=10.55.253.195
REDIS_SENTINEL_SERVICE_PORT=6379
REDIS_SERVICE_PORT=6379
REDIS_SENTINEL_PORT_6379_TCP_PORT=6379
REDIS_SENTINEL_PORT_6379_TCP_PROTO=tcp
REDIS_SENTINEL_SERVICE_HOST=10.55.253.195
REDIS_SERVICE_HOST=10.55.253.195
*/
v := viper.New()
for key, value := range defaults {

@ -1 +0,0 @@
../../test/cmd/client/city.percent

@ -1 +0,0 @@
../../test/cmd/client/europe-west1.ping

@ -1 +0,0 @@
../../test/cmd/client/europe-west2.ping

@ -1 +0,0 @@
../../test/cmd/client/europe-west3.ping

@ -1 +0,0 @@
../../test/cmd/client/europe-west4.ping

@ -1,144 +0,0 @@
/*
Stubbed frontend api client. This should be run within a k8s cluster, and
assumes that the frontend api is up and can be accessed through a k8s service
called 'om-frontendapi'.
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"bufio"
"bytes"
"context"
"encoding/json"
"fmt"
"log"
"net"
"os"
"strconv"
"github.com/GoogleCloudPlatform/open-match/examples/frontendclient/player"
frontend "github.com/GoogleCloudPlatform/open-match/examples/frontendclient/proto"
"github.com/gobs/pretty"
"google.golang.org/grpc"
)
func bytesToString(data []byte) string {
return string(data[:])
}
func ppJSON(s string) {
buf := new(bytes.Buffer)
json.Indent(buf, []byte(s), "", " ")
log.Println(buf)
return
}
func main() {
// determine number of players to generate per group
numPlayers := 4 // default if nothing provided
var err error
if len(os.Args) > 1 {
numPlayers, err = strconv.Atoi(os.Args[1])
if err != nil {
panic(err)
}
}
player.New()
log.Printf("Generating %d players", numPlayers)
// Connect gRPC client
ip, err := net.LookupHost("om-frontendapi")
if err != nil {
panic(err)
}
_ = ip
conn, err := grpc.Dial(ip[0]+":50504", grpc.WithInsecure())
if err != nil {
log.Fatalf("failed to connect: %s", err.Error())
}
client := frontend.NewAPIClient(conn)
log.Println("API client connected!")
log.Printf("Establishing HTTPv2 stream...")
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Empty group to fill and then run through the CreateRequest gRPC endpoint
g := &frontend.Group{
Id: "",
Properties: "",
}
// Generate players for the group and put them in
for i := 0; i < numPlayers; i++ {
playerID, playerData, debug := player.Generate()
groupPlayer(g, playerID, playerData)
_ = debug // TODO. For now you could copy this into playerdata before creating player if you want it available in redis
pretty.PrettyPrint(playerID)
pretty.PrettyPrint(playerData)
}
g.Id = g.Id[:len(g.Id)-1] // Remove trailing whitespace
log.Printf("Finished grouping players")
// Test CreateRequest
log.Println("Testing CreateRequest")
results, err := client.CreateRequest(ctx, g)
if err != nil {
panic(err)
}
pretty.PrettyPrint(g.Id)
pretty.PrettyPrint(g.Properties)
pretty.PrettyPrint(results.Success)
// wait for a value to be inserted that will be returned by GetAssignment
test := "bitters"
fmt.Println("Pausing: go put a value to return in Redis using HSET", test, "connstring <YOUR_TEST_STRING>")
fmt.Println("Hit Enter to test GetAssignment...")
reader := bufio.NewReader(os.Stdin)
_, _ = reader.ReadString('\n')
connstring, err := client.GetAssignment(ctx, &frontend.PlayerId{Id: test})
pretty.PrettyPrint(connstring.ConnectionString)
// Test DeleteRequest
fmt.Println("Deleting Request")
results, err = client.DeleteRequest(ctx, g)
pretty.PrettyPrint(results.Success)
// Remove assignments key
fmt.Println("deleting the key", test)
results, err = client.DeleteAssignment(ctx, &frontend.PlayerId{Id: test})
pretty.PrettyPrint(results.Success)
return
}
func groupPlayer(g *frontend.Group, playerID string, playerData map[string]int) error {
//g.Properties = playerData
pdJSON, _ := json.Marshal(playerData)
buffer := new(bytes.Buffer) // convert byte array to buffer to send to json.Compact()
if err := json.Compact(buffer, pdJSON); err != nil {
log.Println(err)
}
g.Id = g.Id + playerID + " "
// TODO: actually aggregate group stats
g.Properties = buffer.String()
return nil
}

@ -1,2 +0,0 @@
// package frontend should be a copy of the compiled gRPC protobuf file used by the frontend API.
package frontend

@ -1,335 +0,0 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: frontend.proto
/*
Package frontend is a generated protocol buffer package.
It is generated from these files:
frontend.proto
It has these top-level messages:
Group
PlayerId
ConnectionInfo
Result
*/
package frontend
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import (
context "golang.org/x/net/context"
grpc "google.golang.org/grpc"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Data structure for a group of players to pass to the matchmaking function.
// Obviously, the group can be a group of one!
type Group struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *Group) Reset() { *m = Group{} }
func (m *Group) String() string { return proto.CompactTextString(m) }
func (*Group) ProtoMessage() {}
func (*Group) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Group) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Group) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
type PlayerId struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
}
func (m *PlayerId) Reset() { *m = PlayerId{} }
func (m *PlayerId) String() string { return proto.CompactTextString(m) }
func (*PlayerId) ProtoMessage() {}
func (*PlayerId) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *PlayerId) GetId() string {
if m != nil {
return m.Id
}
return ""
}
// Simple message used to pass the connection string for the DGS to the player.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
// Simple message to return success/failure and error status.
type Result struct {
Success bool `protobuf:"varint,1,opt,name=success" json:"success,omitempty"`
Error string `protobuf:"bytes,2,opt,name=error" json:"error,omitempty"`
}
func (m *Result) Reset() { *m = Result{} }
func (m *Result) String() string { return proto.CompactTextString(m) }
func (*Result) ProtoMessage() {}
func (*Result) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Result) GetSuccess() bool {
if m != nil {
return m.Success
}
return false
}
func (m *Result) GetError() string {
if m != nil {
return m.Error
}
return ""
}
func init() {
proto.RegisterType((*Group)(nil), "Group")
proto.RegisterType((*PlayerId)(nil), "PlayerId")
proto.RegisterType((*ConnectionInfo)(nil), "ConnectionInfo")
proto.RegisterType((*Result)(nil), "Result")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for API service
type APIClient interface {
CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error)
DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error)
}
type aPIClient struct {
cc *grpc.ClientConn
}
func NewAPIClient(cc *grpc.ClientConn) APIClient {
return &aPIClient{cc}
}
func (c *aPIClient) CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/CreateRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteRequest", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error) {
out := new(ConnectionInfo)
err := grpc.Invoke(ctx, "/API/GetAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *aPIClient) DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/API/DeleteAssignment", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for API service
type APIServer interface {
CreateRequest(context.Context, *Group) (*Result, error)
DeleteRequest(context.Context, *Group) (*Result, error)
GetAssignment(context.Context, *PlayerId) (*ConnectionInfo, error)
DeleteAssignment(context.Context, *PlayerId) (*Result, error)
}
func RegisterAPIServer(s *grpc.Server, srv APIServer) {
s.RegisterService(&_API_serviceDesc, srv)
}
func _API_CreateRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).CreateRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/CreateRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).CreateRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteRequest(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteRequest",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteRequest(ctx, req.(*Group))
}
return interceptor(ctx, in, info, handler)
}
func _API_GetAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).GetAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/GetAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).GetAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
func _API_DeleteAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(APIServer).DeleteAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/API/DeleteAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(APIServer).DeleteAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
}
var _API_serviceDesc = grpc.ServiceDesc{
ServiceName: "API",
HandlerType: (*APIServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateRequest",
Handler: _API_CreateRequest_Handler,
},
{
MethodName: "DeleteRequest",
Handler: _API_DeleteRequest_Handler,
},
{
MethodName: "GetAssignment",
Handler: _API_GetAssignment_Handler,
},
{
MethodName: "DeleteAssignment",
Handler: _API_DeleteAssignment_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "frontend.proto",
}
func init() { proto.RegisterFile("frontend.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 260 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x90, 0x41, 0x4b, 0xfb, 0x40,
0x10, 0xc5, 0x9b, 0xfc, 0x69, 0xda, 0x0e, 0x34, 0xff, 0xba, 0x78, 0x08, 0x39, 0x88, 0xec, 0xa9,
0x20, 0xee, 0x41, 0x0f, 0x7a, 0xf1, 0x50, 0x2a, 0x94, 0xdc, 0x4a, 0xfc, 0x00, 0x52, 0x93, 0x69,
0x59, 0x88, 0xbb, 0x71, 0x66, 0x72, 0xf0, 0x0b, 0xf9, 0x39, 0xc5, 0x4d, 0x6b, 0x55, 0xc4, 0xe3,
0xfb, 0xed, 0x7b, 0x8f, 0x7d, 0x03, 0xe9, 0x96, 0xbc, 0x13, 0x74, 0xb5, 0x69, 0xc9, 0x8b, 0xd7,
0x37, 0x30, 0x5c, 0x91, 0xef, 0x5a, 0x95, 0x42, 0x6c, 0xeb, 0x2c, 0x3a, 0x8f, 0xe6, 0x93, 0x32,
0xb6, 0xb5, 0x3a, 0x03, 0x68, 0xc9, 0xb7, 0x48, 0x62, 0x91, 0xb3, 0x38, 0xf0, 0x2f, 0x44, 0xe7,
0x30, 0x5e, 0x37, 0x9b, 0x57, 0xa4, 0xa2, 0xfe, 0x99, 0xd5, 0x77, 0x90, 0x2e, 0xbd, 0x73, 0x58,
0x89, 0xf5, 0xae, 0x70, 0x5b, 0xaf, 0x2e, 0xe0, 0xa4, 0xfa, 0x24, 0x8f, 0x2c, 0x64, 0xdd, 0x6e,
0x1f, 0x98, 0x1d, 0x1f, 0x1e, 0x02, 0xd7, 0xb7, 0x90, 0x94, 0xc8, 0x5d, 0x23, 0x2a, 0x83, 0x11,
0x77, 0x55, 0x85, 0xcc, 0xc1, 0x3c, 0x2e, 0x0f, 0x52, 0x9d, 0xc2, 0x10, 0x89, 0x3c, 0xed, 0x7f,
0xd6, 0x8b, 0xab, 0xb7, 0x08, 0xfe, 0x2d, 0xd6, 0x85, 0xd2, 0x30, 0x5d, 0x12, 0x6e, 0x04, 0x4b,
0x7c, 0xe9, 0x90, 0x45, 0x25, 0x26, 0xac, 0xcc, 0x47, 0xa6, 0x6f, 0xd6, 0x83, 0x0f, 0xcf, 0x3d,
0x36, 0xf8, 0xa7, 0xe7, 0x12, 0xa6, 0x2b, 0x94, 0x05, 0xb3, 0xdd, 0xb9, 0x67, 0x74, 0xa2, 0x26,
0xe6, 0x30, 0x3a, 0xff, 0x6f, 0xbe, 0x6f, 0xd4, 0x03, 0x35, 0x87, 0x59, 0x5f, 0xf9, 0x7b, 0xe2,
0x58, 0xfc, 0x94, 0x84, 0xeb, 0x5f, 0xbf, 0x07, 0x00, 0x00, 0xff, 0xff, 0x2b, 0xde, 0x2c, 0x5b,
0x8f, 0x01, 0x00, 0x00,
}

@ -18,8 +18,8 @@ namespace mmfdotnet
{
static void Main(string[] args)
{
string host = Environment.GetEnvironmentVariable("REDIS_SENTINEL_SERVICE_HOST");
string port = Environment.GetEnvironmentVariable("REDIS_SENTINEL_SERVICE_PORT");
string host = Environment.GetEnvironmentVariable("REDIS_SERVICE_HOST");
string port = Environment.GetEnvironmentVariable("REDIS_SERVICE_PORT");
// Single connection to the open match redis cluster
Console.WriteLine($"Connecting to redis...{host}:{port}");

@ -1,9 +1,7 @@
# Golang application builder steps
# FROM golang:1.10.3 as builder
FROM gcr.io/matchmaker-dev-201405/openmatch-devbase as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY examples/functions/golang/manual-simple examples/functions/golang/manual-simple
FROM gcr.io/open-match-public-images/openmatch-base:dev as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/functions/golang/manual-simple
COPY . .
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o mmf .

@ -1,11 +1,10 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['pull', 'gcr.io/$PROJECT_ID/openmatch-mmf:latest']
args: [ 'pull', 'gcr.io/$PROJECT_ID/openmatch-base:dev' ]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf:$TAG_NAME',
'--cache-from', 'gcr.io/$PROJECT_ID/openmatch-mmf:latest',
'--tag=gcr.io/$PROJECT_ID/openmatch-mmf-golang-manual-simple',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-mmf:$TAG_NAME']
images: ['gcr.io/$PROJECT_ID/openmatch-mmf-golang-manual-simple']

@ -54,7 +54,7 @@ func main() {
// As per https://www.iana.org/assignments/uri-schemes/prov/redis
// redis://user:secret@localhost:6379/0?foo=bar&qux=baz
redisURL := "redis://" + os.Getenv("REDIS_SENTINEL_SERVICE_HOST") + ":" + os.Getenv("REDIS_SENTINEL_SERVICE_PORT")
redisURL := "redis://" + os.Getenv("REDIS_SERVICE_HOST") + ":" + os.Getenv("REDIS_SERVICE_PORT")
fmt.Println("Connecting to Redis at", redisURL)
redisConn, err := redis.DialURL(redisURL)
if err != nil {

@ -14,7 +14,7 @@ use Google\Protobuf\Internal\GPBUtil;
class PlayerId extends \Google\Protobuf\Internal\Message
{
/**
* By convention, a UUID
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
*/
@ -27,7 +27,7 @@ class PlayerId extends \Google\Protobuf\Internal\Message
* Optional. Data for populating the Message object.
*
* @type string $id
* By convention, a UUID
* By convention, an Xid
* }
*/
public function __construct($data = NULL) {
@ -36,7 +36,7 @@ class PlayerId extends \Google\Protobuf\Internal\Message
}
/**
* By convention, a UUID
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @return string
@ -47,7 +47,7 @@ class PlayerId extends \Google\Protobuf\Internal\Message
}
/**
* By convention, a UUID
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @param string $var

@ -26,7 +26,7 @@ use Google\Protobuf\Internal\GPBUtil;
class MatchObject extends \Google\Protobuf\Internal\Message
{
/**
* By convention, a UUID
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
*/
@ -63,7 +63,7 @@ class MatchObject extends \Google\Protobuf\Internal\Message
* Optional. Data for populating the Message object.
*
* @type string $id
* By convention, a UUID
* By convention, an Xid
* @type string $properties
* By convention, a JSON-encoded string
* @type string $error
@ -80,7 +80,7 @@ class MatchObject extends \Google\Protobuf\Internal\Message
}
/**
* By convention, a UUID
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @return string
@ -91,7 +91,7 @@ class MatchObject extends \Google\Protobuf\Internal\Message
}
/**
* By convention, a UUID
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @param string $var

@ -16,7 +16,7 @@ use Google\Protobuf\Internal\GPBUtil;
class Player extends \Google\Protobuf\Internal\Message
{
/**
* By convention, a UUID
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
*/
@ -47,7 +47,7 @@ class Player extends \Google\Protobuf\Internal\Message
* Optional. Data for populating the Message object.
*
* @type string $id
* By convention, a UUID
* By convention, an Xid
* @type string $properties
* By convention, a JSON-encoded string
* @type string $pool
@ -62,7 +62,7 @@ class Player extends \Google\Protobuf\Internal\Message
}
/**
* By convention, a UUID
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @return string
@ -73,7 +73,7 @@ class Player extends \Google\Protobuf\Internal\Message
}
/**
* By convention, a UUID
* By convention, an Xid
*
* Generated from protobuf field <code>string id = 1;</code>
* @param string $var
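
These protobuf comments now describe IDs as Xids rather than UUIDs. A minimal sketch of generating such an ID on the client, assuming the github.com/rs/xid package (a common Go implementation of the Xid format):

package main

import (
	"fmt"

	"github.com/rs/xid" // assumed Xid implementation
)

func main() {
	// Xids are short, URL-safe, and roughly sortable by creation time,
	// which makes them convenient keys for the Redis indices Open Match uses.
	playerID := xid.New().String()
	fmt.Println("player id:", playerID)
}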

@ -32,6 +32,7 @@ def makeMatches(profile_dict, player_pools):
for player in roster['players']:
if 'pool' in player:
player['id'] = random.choice(list(player_pools[player['pool']]))
del player_pools[player['pool']][player['id']]
print("Selected player %s from pool %s (strategy: RANDOM)" % (player['id'], player['pool']))
else:
print(player)

@ -0,0 +1,152 @@
apiVersion: storage.spotahome.com/v1alpha2
kind: RedisFailover
metadata:
name: redisfailover
labels:
tier: storage
spec:
hardAntiAffinity: true # Optional. Value by default. If true, the pods will not be scheduled on the same node.
sentinel:
replicas: 3 # Optional. 3 by default, can be set higher.
resources: # Optional. If not set, it won't be defined on created resources.
requests:
cpu: 100m
limits:
memory: 100Mi
customConfig: [] # Optional. Empty by default.
redis:
replicas: 3 # Optional. 3 by default, can be set higher.
image: redis # Optional. "redis" by default.
version: 4.0.11-alpine # Optional. "3.2-alpine" by default.
resources: # Optional. If not set, it won't be defined on created resources
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 400m
memory: 500Mi
exporter: false # Optional. False by default. Adds a redis-exporter container to export metrics.
exporterImage: oliver006/redis_exporter # Optional. oliver006/redis_exporter by default.
exporterVersion: v0.11.3 # Optional. v0.11.3 by default.
disableExporterProbes: false # Optional. False by default. Disables the readiness and liveness probes for the exporter.
storage:
emptyDir: {} # Optional. emptyDir by default.
customConfig: [] # Optional. Empty by default.
---
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-master-proxy-configmap
data:
haproxy.cfg: |
defaults REDIS
mode tcp
timeout connect 5s
timeout client 61s # should respect 'redis.pool.idleTimeout' from open-match config
timeout server 61s
global
stats socket ipv4@127.0.0.1:9999 level admin
stats timeout 2m
frontend fe_redis
bind *:17000 name redis
default_backend be_redis
backend be_redis
server redis-master-serv 127.0.0.1:6379
redis-master-finder.sh: |
#!/bin/sh
set -e
set -u
SENTINEL_HOST="rfs-redisfailover" # change this if RedisFailover name changes
LAST_MASTER_IP=""
LAST_MASTER_PORT=""
update_master_addr() {
# lookup current master address
local r="SENTINEL get-master-addr-by-name mymaster"
local r_out=$(echo $r | nc -q1 $SENTINEL_HOST 26379)
# parse output
local master_ip=$(echo "${r_out}" | tail -n+3 | head -n1 | tr -d '\r') # IP is on 3rd line
local master_port=$(echo "${r_out}" | tail -n+5 | head -n1 | tr -d '\r') # 5th line is port number
# update HAProxy cfg if needed
if [ "$master_ip" != "$LAST_MASTER_IP" ] || [ "$master_port" != "$LAST_MASTER_PORT" ]; then
local s="set server be_redis/redis-master-serv addr ${master_ip} port ${master_port}"
echo $s | nc 127.0.0.1 9999 # haproxy is in the same pod
LAST_MASTER_IP=$master_ip
LAST_MASTER_PORT=$master_port
echo "New master address is ${LAST_MASTER_IP}:${LAST_MASTER_PORT}"
fi
}
while :; do update_master_addr; sleep 1; done
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: redis-master-proxy
labels:
app: openmatch
component: redis
tier: storage
spec:
replicas: 1
selector:
matchLabels:
app: openmatch
component: redis
tier: storage
template:
metadata:
labels:
app: openmatch
component: redis
tier: storage
spec:
volumes:
- name: configmap
configMap:
name: redis-master-proxy-configmap
defaultMode: 0700
containers:
- name: redis-master-haproxy
image: haproxy:1.8-alpine
ports:
- name: haproxy
containerPort: 17000
- name: haproxy-stats
containerPort: 9999
volumeMounts:
- name: configmap
mountPath: /usr/local/etc/haproxy/haproxy.cfg
subPath: haproxy.cfg
- name: redis-master-finder
image: subfuzion/netcat # alpine image with only netcat-openbsd installed
imagePullPolicy: Always
command: ["redis-master-finder.sh"]
volumeMounts:
- name: configmap
mountPath: /usr/local/bin/redis-master-finder.sh
subPath: redis-master-finder.sh
resources:
requests:
memory: 20Mi
cpu: 100m
---
kind: Service
apiVersion: v1
metadata:
name: redis
spec:
selector:
app: openmatch
component: redis
tier: storage
ports:
- protocol: TCP
port: 6379
targetPort: haproxy
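
HAProxy's client/server timeouts above are set to 61s so the proxy only drops a connection after the Open Match side has already considered it idle ('redis.pool.idleTimeout'). A minimal redigo pool sketch, assuming an idle timeout of 60 seconds in the matchmaker config and the 'redis' Service name defined above:

package main

import (
	"time"

	"github.com/gomodule/redigo/redis"
)

func newPool() *redis.Pool {
	return &redis.Pool{
		MaxIdle:   3,
		MaxActive: 0, // unlimited
		// Keep the client-side idle timeout below HAProxy's 'timeout client 61s'
		// so the pool retires idle connections before the proxy does.
		IdleTimeout: 60 * time.Second,
		Dial: func() (redis.Conn, error) {
			return redis.DialURL("redis://redis:6379")
		},
	}
}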

@ -2,7 +2,7 @@
kind: Service
apiVersion: v1
metadata:
name: redis-sentinel
name: redis
spec:
selector:
app: mm

@ -33,7 +33,7 @@ spec:
spec:
containers:
- name: om-backend
image: gcr.io/matchmaker-dev-201405/openmatch-backendapi:dev
image: gcr.io/open-match-public-images/openmatch-backendapi:dev
imagePullPolicy: Always
ports:
- name: grpc
@ -79,7 +79,7 @@ spec:
spec:
containers:
- name: om-frontendapi
image: gcr.io/matchmaker-dev-201405/openmatch-frontendapi:dev
image: gcr.io/open-match-public-images/openmatch-frontendapi:dev
imagePullPolicy: Always
ports:
- name: grpc
@ -125,7 +125,7 @@ spec:
spec:
containers:
- name: om-mmforc
image: gcr.io/matchmaker-dev-201405/openmatch-mmforc:dev
image: gcr.io/open-match-public-images/openmatch-mmforc:dev
imagePullPolicy: Always
ports:
- name: metrics
@ -161,11 +161,8 @@ spec:
spec:
containers:
- name: om-mmlogic
image: gcr.io/matchmaker-dev-201405/openmatch-mmlogicapi:dev
image: gcr.io/open-match-public-images/openmatch-mmlogicapi:dev
imagePullPolicy: Always
command:
- sleep
- '30000'
ports:
- name: grpc
containerPort: 50503

@ -34,7 +34,7 @@ service/om-mmforc-metrics ClusterIP 10.59.240.59 <none> 39555/TC
service/om-mmlogicapi ClusterIP 10.59.248.3 <none> 50503/TCP 9m
service/prometheus NodePort 10.59.252.212 <none> 9090:30900/TCP 9m
service/prometheus-operated ClusterIP None <none> 9090/TCP 9m
service/redis-sentinel ClusterIP 10.59.249.197 <none> 6379/TCP 9m
service/redis ClusterIP 10.59.249.197 <none> 6379/TCP 9m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/om-backendapi 1 1 1 1 9m

@ -0,0 +1,43 @@
package logging
import (
"github.com/sirupsen/logrus"
"github.com/spf13/viper"
)
// ConfigureLogging sets up the Open Match logrus instance using the logging section of matchmaker_config.json
// - log line format (text[default] or json)
// - min log level to include (debug, info [default], warn, error, fatal, panic)
// - include source file and line number for every event (false [default], true)
func ConfigureLogging(cfg *viper.Viper) {
switch cfg.GetString("logging.format") {
case "json":
logrus.SetFormatter(&logrus.JSONFormatter{})
case "text":
default:
logrus.SetFormatter(&logrus.TextFormatter{})
}
switch cfg.GetString("logging.level") {
case "debug":
logrus.SetLevel(logrus.DebugLevel)
logrus.Warn("Debug logging level configured. Not recommended for production!")
case "warn":
logrus.SetLevel(logrus.WarnLevel)
case "error":
logrus.SetLevel(logrus.ErrorLevel)
case "fatal":
logrus.SetLevel(logrus.FatalLevel)
case "panic":
logrus.SetLevel(logrus.PanicLevel)
case "info":
default:
logrus.SetLevel(logrus.InfoLevel)
}
switch cfg.GetBool("logging.source") {
case true:
logrus.SetReportCaller(true)
}
}
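
A minimal usage sketch for ConfigureLogging, assuming the config is read from config/matchmaker_config.(json|yaml) and that the package lives under internal/logging (import path assumed):

package main

import (
	"github.com/spf13/viper"

	// assumed import path for the logging package above
	"github.com/GoogleCloudPlatform/open-match/internal/logging"
)

func main() {
	cfg := viper.New()
	cfg.SetConfigName("matchmaker_config") // matchmaker_config.json or .yaml
	cfg.AddConfigPath("config")
	if err := cfg.ReadInConfig(); err != nil {
		panic(err)
	}
	// Applies logging.format, logging.level and logging.source to logrus.
	logging.ConfigureLogging(cfg)
}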

@ -37,7 +37,6 @@ var (
metricsLogFields = log.Fields{
"app": "openmatch",
"component": "metrics",
"caller": "metrics/helper.go",
}
mhLog = log.WithFields(metricsLogFields)
)

@ -11,8 +11,6 @@ It is generated from these files:
api/protobuf-spec/messages.proto
It has these top-level messages:
Group
PlayerId
MatchObject
Roster
Filter
@ -21,7 +19,6 @@ It has these top-level messages:
Player
Result
IlInput
ConnectionInfo
Assignments
*/
package pb
@ -70,31 +67,31 @@ type BackendClient interface {
// - rosters, if you choose to fill them in your MMF. (Recommended)
// - pools, if you used the MMLogicAPI in your MMF. (Recommended, and provides stats)
CreateMatch(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (*MatchObject, error)
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection. Same inputs/outputs as CreateMatch.
// Continually run MMF and stream MatchObjects that fit this profile until
// the backend client closes the connection. Same inputs/outputs as CreateMatch.
ListMatches(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (Backend_ListMatchesClient, error)
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
// Delete a MatchObject from state storage manually. (MatchObjects in state
// storage will also automatically expire after a while, defined in the config)
// INPUT: MatchObject message with the 'id' field populated.
// (All other fields are ignored.)
DeleteMatch(ctx context.Context, in *MatchObject, opts ...grpc.CallOption) (*Result, error)
// Write the connection info for the list of players in the
// Assignments.messages.Rosters to state storage. The FrontendAPI is
// Assignments.messages.Rosters to state storage. The Frontend API is
// responsible for sending anything sent here to the game clients.
// Sending a player to this function kicks off a process that removes
// the player from future matchmaking functions by adding them to the
// 'deindexed' player list and then deleting their player ID from state storage
// indexes.
// INPUT: Assignments message with these fields populated:
// - connection_info, anything you write to this string is sent to Frontend API
// - assignment, anything you write to this string is sent to Frontend API
// - rosters. You can send any number of rosters, containing any number of
// player messages. All players from all rosters will be sent the connection_info.
// The only field in the Player object that is used by CreateAssignments is
// the id field. All others are silently ignored.
// player messages. All players from all rosters will be sent the assignment.
// The only field in the Roster's Player messages used by CreateAssignments is
// the id field. All other fields in the Player messages are silently ignored.
CreateAssignments(ctx context.Context, in *Assignments, opts ...grpc.CallOption) (*Result, error)
// Remove DGS connection info from state storage for players.
// INPUT: Roster message with the 'players' field populated.
// The only field in the Player object that is used by
// The only field in the Roster's Player messages used by
// DeleteAssignments is the 'id' field. All others are silently ignored. If
// you need to delete multiple rosters, make multiple calls.
DeleteAssignments(ctx context.Context, in *Roster, opts ...grpc.CallOption) (*Result, error)
@ -192,31 +189,31 @@ type BackendServer interface {
// - rosters, if you choose to fill them in your MMF. (Recommended)
// - pools, if you used the MMLogicAPI in your MMF. (Recommended, and provides stats)
CreateMatch(context.Context, *MatchObject) (*MatchObject, error)
// Continually run MMF and stream matchobjects that fit this profile until
// client closes the connection. Same inputs/outputs as CreateMatch.
// Continually run MMF and stream MatchObjects that fit this profile until
// the backend client closes the connection. Same inputs/outputs as CreateMatch.
ListMatches(*MatchObject, Backend_ListMatchesServer) error
// Delete a matchobject from state storage manually. (Matchobjects in state
// storage will also automatically expire after a while)
// Delete a MatchObject from state storage manually. (MatchObjects in state
// storage will also automatically expire after a while, defined in the config)
// INPUT: MatchObject message with the 'id' field populated.
// (All other fields are ignored.)
DeleteMatch(context.Context, *MatchObject) (*Result, error)
// Write the connection info for the list of players in the
// Assignments.messages.Rosters to state storage. The FrontendAPI is
// Assignments.messages.Rosters to state storage. The Frontend API is
// responsible for sending anything sent here to the game clients.
// Sending a player to this function kicks off a process that removes
// the player from future matchmaking functions by adding them to the
// 'deindexed' player list and then deleting their player ID from state storage
// indexes.
// INPUT: Assignments message with these fields populated:
// - connection_info, anything you write to this string is sent to Frontend API
// - assignment, anything you write to this string is sent to Frontend API
// - rosters. You can send any number of rosters, containing any number of
// player messages. All players from all rosters will be sent the connection_info.
// The only field in the Player object that is used by CreateAssignments is
// the id field. All others are silently ignored.
// player messages. All players from all rosters will be sent the assignment.
// The only field in the Roster's Player messages used by CreateAssignments is
// the id field. All other fields in the Player messages are silently ignored.
CreateAssignments(context.Context, *Assignments) (*Result, error)
// Remove DGS connection info from state storage for players.
// INPUT: Roster message with the 'players' field populated.
// The only field in the Player object that is used by
// The only field in the Roster's Player messages used by
// DeleteAssignments is the 'id' field. All others are silently ignored. If
// you need to delete multiple rosters, make multiple calls.
DeleteAssignments(context.Context, *Roster) (*Result, error)
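
The CreateAssignments comments above boil down to: send one Assignments message containing the assignment string plus one or more rosters, and every player ID found in those rosters receives that string. A hedged sketch of a backend caller, assuming the generated package import path used elsewhere in this change and that the Roster 'players' field maps to a Players slice in the generated Go code:

package main

import (
	"context"

	pb "github.com/GoogleCloudPlatform/open-match/internal/pb"
)

// assignServer writes one assignment string for every listed player.
// Per the comments above, only Player.Id is read by CreateAssignments;
// all other Player fields are silently ignored.
func assignServer(ctx context.Context, be pb.BackendClient, playerIDs []string) error {
	roster := &pb.Roster{}
	for _, id := range playerIDs {
		roster.Players = append(roster.Players, &pb.Player{Id: id})
	}
	_, err := be.CreateAssignments(ctx, &pb.Assignments{
		Rosters:    []*pb.Roster{roster},
		Assignment: "10.0.0.2:7777", // e.g. a DGS host:port passed on to game clients
	})
	return err
}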

@ -17,53 +17,6 @@ var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// Data structure for a group of players to pass to the matchmaking function.
// Obviously, the group can be a group of one!
type Group struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
}
func (m *Group) Reset() { *m = Group{} }
func (m *Group) String() string { return proto.CompactTextString(m) }
func (*Group) ProtoMessage() {}
func (*Group) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{0} }
func (m *Group) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *Group) GetProperties() string {
if m != nil {
return m.Properties
}
return ""
}
type PlayerId struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
}
func (m *PlayerId) Reset() { *m = PlayerId{} }
func (m *PlayerId) String() string { return proto.CompactTextString(m) }
func (*PlayerId) ProtoMessage() {}
func (*PlayerId) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{1} }
func (m *PlayerId) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func init() {
proto.RegisterType((*Group)(nil), "api.Group")
proto.RegisterType((*PlayerId)(nil), "api.PlayerId")
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
@ -75,10 +28,56 @@ const _ = grpc.SupportPackageIsVersion4
// Client API for Frontend service
type FrontendClient interface {
CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error)
GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error)
DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error)
// CreatePlayer will put the player in state storage, and then look
// through the 'properties' field for the attributes you have defined as
// indices in your matchmaker config. If the attributes exist and are valid
// integers, they will be indexed.
// INPUT: Player message with these fields populated:
// - id
// - properties
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
CreatePlayer(ctx context.Context, in *Player, opts ...grpc.CallOption) (*Result, error)
// DeletePlayer removes the player from state storage by doing the
// following:
// 1) Delete player from configured indices. This effectively removes the
// player from matchmaking when using recommended MMF patterns.
// Everything after this is just cleanup to save state storage space.
// 2) 'Lazily' delete the player's state storage record. This is kicked
// off in the background and may take some time to complete.
// 3) 'Lazily' delete the player's metadata indices (such as the timestamp when
// they called CreatePlayer, and the last time the record was accessed). This
// is also kicked off in the background and may take some time to complete.
// INPUT: Player message with the 'id' field populated.
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
DeletePlayer(ctx context.Context, in *Player, opts ...grpc.CallOption) (*Result, error)
// GetUpdates streams matchmaking results from Open Match for the
// provided player ID.
// INPUT: Player message with the 'id' field populated.
// OUTPUT: a stream of player objects with one or more of the following
// fields populated, if an update to that field is seen in state storage:
// - 'assignment': string that usually contains game server connection information.
// - 'status': string to communicate current matchmaking status to the client.
// - 'error': string to pass along error information to the client.
//
// During normal operation, the expectation is that the 'assignment' field
// will be updated by a Backend process calling the 'CreateAssignments' Backend API
// endpoint. 'Status' and 'Error' are free for developers to use as they see fit.
// Even if you had multiple players enter a matchmaking request as a group, the
// Backend API 'CreateAssignments' call will write the results to state
// storage separately under each player's ID. OM expects you to make all game
// clients 'GetUpdates' with their own ID from the Frontend API to get
// their results.
//
// NOTE: This call generates a small amount of load on the Frontend API and state
// storage while watching the player record for updates. You are expected
// to close the stream from your client after receiving your matchmaking
// results (or a reasonable timeout), or you will continue to
// generate load on OM until you do!
// NOTE: Just bear in mind that every update will send egress traffic from
// Open Match to game clients! Frugality is recommended.
GetUpdates(ctx context.Context, in *Player, opts ...grpc.CallOption) (Frontend_GetUpdatesClient, error)
}
type frontendClient struct {
@ -89,125 +88,170 @@ func NewFrontendClient(cc *grpc.ClientConn) FrontendClient {
return &frontendClient{cc}
}
func (c *frontendClient) CreateRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
func (c *frontendClient) CreatePlayer(ctx context.Context, in *Player, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/api.Frontend/CreateRequest", in, out, c.cc, opts...)
err := grpc.Invoke(ctx, "/api.Frontend/CreatePlayer", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *frontendClient) DeleteRequest(ctx context.Context, in *Group, opts ...grpc.CallOption) (*Result, error) {
func (c *frontendClient) DeletePlayer(ctx context.Context, in *Player, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/api.Frontend/DeleteRequest", in, out, c.cc, opts...)
err := grpc.Invoke(ctx, "/api.Frontend/DeletePlayer", in, out, c.cc, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *frontendClient) GetAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*ConnectionInfo, error) {
out := new(ConnectionInfo)
err := grpc.Invoke(ctx, "/api.Frontend/GetAssignment", in, out, c.cc, opts...)
func (c *frontendClient) GetUpdates(ctx context.Context, in *Player, opts ...grpc.CallOption) (Frontend_GetUpdatesClient, error) {
stream, err := grpc.NewClientStream(ctx, &_Frontend_serviceDesc.Streams[0], c.cc, "/api.Frontend/GetUpdates", opts...)
if err != nil {
return nil, err
}
return out, nil
x := &frontendGetUpdatesClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
func (c *frontendClient) DeleteAssignment(ctx context.Context, in *PlayerId, opts ...grpc.CallOption) (*Result, error) {
out := new(Result)
err := grpc.Invoke(ctx, "/api.Frontend/DeleteAssignment", in, out, c.cc, opts...)
if err != nil {
type Frontend_GetUpdatesClient interface {
Recv() (*Player, error)
grpc.ClientStream
}
type frontendGetUpdatesClient struct {
grpc.ClientStream
}
func (x *frontendGetUpdatesClient) Recv() (*Player, error) {
m := new(Player)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return out, nil
return m, nil
}
// Server API for Frontend service
type FrontendServer interface {
CreateRequest(context.Context, *Group) (*Result, error)
DeleteRequest(context.Context, *Group) (*Result, error)
GetAssignment(context.Context, *PlayerId) (*ConnectionInfo, error)
DeleteAssignment(context.Context, *PlayerId) (*Result, error)
// CreatePlayer will put the player in state storage, and then look
// through the 'properties' field for the attributes you have defined as
// indices in your matchmaker config. If the attributes exist and are valid
// integers, they will be indexed.
// INPUT: Player message with these fields populated:
// - id
// - properties
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
CreatePlayer(context.Context, *Player) (*Result, error)
// DeletePlayer removes the player from state storage by doing the
// following:
// 1) Delete player from configured indices. This effectively removes the
// player from matchmaking when using recommended MMF patterns.
// Everything after this is just cleanup to save state storage space.
// 2) 'Lazily' delete the player's state storage record. This is kicked
// off in the background and may take some time to complete.
// 3) 'Lazily' delete the player's metadata indices (such as the timestamp when
// they called CreatePlayer, and the last time the record was accessed). This
// is also kicked off in the background and may take some time to complete.
// INPUT: Player message with the 'id' field populated.
// OUTPUT: Result message denoting success or failure (and an error if
// necessary)
DeletePlayer(context.Context, *Player) (*Result, error)
// GetUpdates streams matchmaking results from Open Match for the
// provided player ID.
// INPUT: Player message with the 'id' field populated.
// OUTPUT: a stream of player objects with one or more of the following
// fields populated, if an update to that field is seen in state storage:
// - 'assignment': string that usually contains game server connection information.
// - 'status': string to communicate current matchmaking status to the client.
// - 'error': string to pass along error information to the client.
//
// During normal operation, the expectation is that the 'assignment' field
// will be updated by a Backend process calling the 'CreateAssignments' Backend API
// endpoint. 'Status' and 'Error' are free for developers to use as they see fit.
// Even if you had multiple players enter a matchmaking request as a group, the
// Backend API 'CreateAssignments' call will write the results to state
// storage separately under each player's ID. OM expects you to make all game
// clients 'GetUpdates' with their own ID from the Frontend API to get
// their results.
//
// NOTE: This call generates a small amount of load on the Frontend API and state
// storage while watching the player record for updates. You are expected
// to close the stream from your client after receiving your matchmaking
// results (or a reasonable timeout), or you will continue to
// generate load on OM until you do!
// NOTE: Just bear in mind that every update will send egress traffic from
// Open Match to game clients! Frugality is recommended.
GetUpdates(*Player, Frontend_GetUpdatesServer) error
}
func RegisterFrontendServer(s *grpc.Server, srv FrontendServer) {
s.RegisterService(&_Frontend_serviceDesc, srv)
}
func _Frontend_CreateRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
func _Frontend_CreatePlayer_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Player)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(FrontendServer).CreateRequest(ctx, in)
return srv.(FrontendServer).CreatePlayer(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/api.Frontend/CreateRequest",
FullMethod: "/api.Frontend/CreatePlayer",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(FrontendServer).CreateRequest(ctx, req.(*Group))
return srv.(FrontendServer).CreatePlayer(ctx, req.(*Player))
}
return interceptor(ctx, in, info, handler)
}
func _Frontend_DeleteRequest_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Group)
func _Frontend_DeletePlayer_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Player)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(FrontendServer).DeleteRequest(ctx, in)
return srv.(FrontendServer).DeletePlayer(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/api.Frontend/DeleteRequest",
FullMethod: "/api.Frontend/DeletePlayer",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(FrontendServer).DeleteRequest(ctx, req.(*Group))
return srv.(FrontendServer).DeletePlayer(ctx, req.(*Player))
}
return interceptor(ctx, in, info, handler)
}
func _Frontend_GetAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
func _Frontend_GetUpdates_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(Player)
if err := stream.RecvMsg(m); err != nil {
return err
}
if interceptor == nil {
return srv.(FrontendServer).GetAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/api.Frontend/GetAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(FrontendServer).GetAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
return srv.(FrontendServer).GetUpdates(m, &frontendGetUpdatesServer{stream})
}
func _Frontend_DeleteAssignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(PlayerId)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(FrontendServer).DeleteAssignment(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: "/api.Frontend/DeleteAssignment",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(FrontendServer).DeleteAssignment(ctx, req.(*PlayerId))
}
return interceptor(ctx, in, info, handler)
type Frontend_GetUpdatesServer interface {
Send(*Player) error
grpc.ServerStream
}
type frontendGetUpdatesServer struct {
grpc.ServerStream
}
func (x *frontendGetUpdatesServer) Send(m *Player) error {
return x.ServerStream.SendMsg(m)
}
var _Frontend_serviceDesc = grpc.ServiceDesc{
@ -215,46 +259,39 @@ var _Frontend_serviceDesc = grpc.ServiceDesc{
HandlerType: (*FrontendServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "CreateRequest",
Handler: _Frontend_CreateRequest_Handler,
MethodName: "CreatePlayer",
Handler: _Frontend_CreatePlayer_Handler,
},
{
MethodName: "DeleteRequest",
Handler: _Frontend_DeleteRequest_Handler,
},
{
MethodName: "GetAssignment",
Handler: _Frontend_GetAssignment_Handler,
},
{
MethodName: "DeleteAssignment",
Handler: _Frontend_DeleteAssignment_Handler,
MethodName: "DeletePlayer",
Handler: _Frontend_DeletePlayer_Handler,
},
},
Streams: []grpc.StreamDesc{
{
StreamName: "GetUpdates",
Handler: _Frontend_GetUpdates_Handler,
ServerStreams: true,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "api/protobuf-spec/frontend.proto",
}
func init() { proto.RegisterFile("api/protobuf-spec/frontend.proto", fileDescriptor1) }
var fileDescriptor1 = []byte{
// 278 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0xd0, 0x4f, 0x4b, 0xc3, 0x40,
0x10, 0x05, 0xf0, 0x26, 0xa2, 0xd4, 0x85, 0x48, 0xd9, 0x53, 0xc8, 0x41, 0x4a, 0x4e, 0x5e, 0x9a,
0x05, 0xa5, 0x14, 0xbc, 0x69, 0xc4, 0xd0, 0x5b, 0xc9, 0xd1, 0xdb, 0x26, 0x99, 0xa4, 0x0b, 0x9b,
0x9d, 0x75, 0x77, 0xf6, 0xe0, 0xa7, 0xf5, 0xab, 0x88, 0xa9, 0xff, 0x90, 0x0a, 0x5e, 0x1f, 0xf3,
0xe3, 0x3d, 0x86, 0x2d, 0xa5, 0x55, 0xc2, 0x3a, 0x24, 0x6c, 0x42, 0xbf, 0xf2, 0x16, 0x5a, 0xd1,
0x3b, 0x34, 0x04, 0xa6, 0x2b, 0xa6, 0x98, 0x9f, 0x48, 0xab, 0xb2, 0x23, 0x67, 0x23, 0x78, 0x2f,
0x07, 0xf0, 0x87, 0xb3, 0x7c, 0xc3, 0x4e, 0x2b, 0x87, 0xc1, 0xf2, 0x0b, 0x16, 0xab, 0x2e, 0x8d,
0x96, 0xd1, 0xd5, 0x79, 0x1d, 0xab, 0x8e, 0x5f, 0x32, 0x66, 0x1d, 0x5a, 0x70, 0xa4, 0xc0, 0xa7,
0xf1, 0x94, 0xff, 0x48, 0xf2, 0x8c, 0xcd, 0x77, 0x5a, 0xbe, 0x80, 0xdb, 0x76, 0xbf, 0xed, 0xf5,
0x6b, 0xc4, 0xe6, 0x8f, 0x1f, 0x73, 0xb8, 0x60, 0x49, 0xe9, 0x40, 0x12, 0xd4, 0xf0, 0x1c, 0xc0,
0x13, 0x67, 0x85, 0xb4, 0xaa, 0x98, 0x5a, 0xb3, 0x45, 0xf1, 0xb5, 0xa7, 0x06, 0x1f, 0x34, 0xe5,
0xb3, 0x77, 0xf0, 0x00, 0x1a, 0xfe, 0x0f, 0x6e, 0x59, 0x52, 0x01, 0xdd, 0x79, 0xaf, 0x06, 0x33,
0x82, 0x21, 0x9e, 0x4c, 0xe0, 0x73, 0x5e, 0x96, 0x7e, 0x9b, 0x12, 0x8d, 0x81, 0x96, 0x14, 0x9a,
0xad, 0xe9, 0x31, 0x9f, 0xf1, 0x35, 0x5b, 0x1c, 0xca, 0xfe, 0xe6, 0x47, 0x2a, 0xef, 0x37, 0x4f,
0xeb, 0x41, 0xd1, 0x3e, 0x34, 0x45, 0x8b, 0xa3, 0xa8, 0x10, 0x07, 0x0d, 0xa5, 0xc6, 0xd0, 0xed,
0xb4, 0xa4, 0x1e, 0xdd, 0x28, 0xd0, 0x82, 0x59, 0x8d, 0x92, 0xda, 0xbd, 0x50, 0x86, 0xc0, 0x19,
0xa9, 0x85, 0x6d, 0x9a, 0xb3, 0xe9, 0xed, 0x37, 0x6f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x97, 0x2e,
0x6a, 0x58, 0xc1, 0x01, 0x00, 0x00,
// 201 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0xcf, 0x3d, 0x6b, 0xc3, 0x30,
0x10, 0xc6, 0x71, 0x9b, 0x42, 0x29, 0xa2, 0x43, 0xf1, 0xe8, 0xa9, 0x78, 0xb7, 0x55, 0xfa, 0x42,
0xf7, 0xba, 0xd4, 0xab, 0x29, 0x74, 0xe9, 0x76, 0xb2, 0xcf, 0xb6, 0x40, 0xd2, 0x09, 0xe9, 0x34,
0xf4, 0x3b, 0xf5, 0x43, 0x86, 0xd8, 0x21, 0x64, 0x08, 0x81, 0xac, 0x7f, 0x9e, 0xdf, 0xf0, 0x88,
0x47, 0xf0, 0x5a, 0xfa, 0x40, 0x4c, 0x2a, 0x4d, 0x75, 0xf4, 0x38, 0xc8, 0x29, 0x90, 0x63, 0x74,
0x63, 0xb3, 0xe6, 0xe2, 0x06, 0xbc, 0x2e, 0xcf, 0xcc, 0x2c, 0xc6, 0x08, 0x33, 0xc6, 0x6d, 0xf6,
0xfc, 0x9f, 0x8b, 0xbb, 0xaf, 0x83, 0x2c, 0x5e, 0xc5, 0x7d, 0x1b, 0x10, 0x18, 0x7b, 0x03, 0x7f,
0x18, 0x8a, 0x87, 0xe6, 0xb8, 0xde, 0x4a, 0x79, 0x52, 0xbe, 0x31, 0x26, 0xc3, 0x55, 0xb6, 0x57,
0x9f, 0x68, 0xf0, 0x6a, 0x25, 0x3a, 0xe4, 0x1f, 0x3f, 0x02, 0x63, 0xbc, 0x6c, 0xb6, 0x52, 0x65,
0x4f, 0xf9, 0xc7, 0xfb, 0xef, 0xdb, 0xac, 0x79, 0x49, 0xaa, 0x19, 0xc8, 0xca, 0x8e, 0x68, 0x36,
0xd8, 0x1a, 0x4a, 0x63, 0x6f, 0x80, 0x27, 0x0a, 0x56, 0x92, 0x47, 0x57, 0x5b, 0xe0, 0x61, 0x91,
0xda, 0x31, 0x06, 0x07, 0x46, 0x7a, 0xa5, 0x6e, 0xd7, 0xbb, 0x2f, 0xbb, 0x00, 0x00, 0x00, 0xff,
0xff, 0xe8, 0x9b, 0x69, 0x06, 0x39, 0x01, 0x00, 0x00,
}
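
The Frontend comments above imply this client flow: CreatePlayer with an id and properties, then stream GetUpdates with the same id and stop watching once an assignment arrives. A hedged end-to-end sketch (service address/port and import path are assumptions; error handling trimmed):

package main

import (
	"context"
	"fmt"
	"time"

	pb "github.com/GoogleCloudPlatform/open-match/internal/pb"
	"google.golang.org/grpc"
)

func main() {
	// Address of the Frontend API service; adjust for your deployment.
	conn, err := grpc.Dial("om-frontendapi:50504", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fe := pb.NewFrontendClient(conn)

	// Cancelling the context closes the GetUpdates stream, which keeps
	// load on the Frontend API bounded as the NOTE above recommends.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Enter matchmaking: only 'id' and 'properties' are read by CreatePlayer.
	player := &pb.Player{Id: "player-xid-here", Properties: `{"mmr":{"rating":1200}}`}
	if _, err := fe.CreatePlayer(ctx, player); err != nil {
		panic(err)
	}

	stream, err := fe.GetUpdates(ctx, &pb.Player{Id: player.Id})
	if err != nil {
		panic(err)
	}
	for {
		update, err := stream.Recv()
		if err != nil {
			break
		}
		if update.Assignment != "" {
			fmt.Println("connect to:", update.Assignment)
			break
		}
	}
}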

@ -216,12 +216,22 @@ func (m *PlayerPool) GetStats() *Stats {
return nil
}
// Data structure to hold details about a player
// Open Match's internal representation and wire protocol format for "Players".
// In order to enter matchmaking using the Frontend API, your client code should generate
// a consistent Player message (same result for each client every time they launch) with an ID and
// properties filled in (for more details about valid values for these fields,
// see the documentation).
// Players contain a number of fields, but the gRPC calls that take a
// Player as input only require a few of them to be filled in. Check the
// gRPC function in question for more details.
type Player struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
Properties string `protobuf:"bytes,2,opt,name=properties" json:"properties,omitempty"`
Pool string `protobuf:"bytes,3,opt,name=pool" json:"pool,omitempty"`
Attributes []*Player_Attribute `protobuf:"bytes,4,rep,name=attributes" json:"attributes,omitempty"`
Assignment string `protobuf:"bytes,5,opt,name=assignment" json:"assignment,omitempty"`
Status string `protobuf:"bytes,6,opt,name=status" json:"status,omitempty"`
Error string `protobuf:"bytes,7,opt,name=error" json:"error,omitempty"`
}
func (m *Player) Reset() { *m = Player{} }
@ -257,6 +267,27 @@ func (m *Player) GetAttributes() []*Player_Attribute {
return nil
}
func (m *Player) GetAssignment() string {
if m != nil {
return m.Assignment
}
return ""
}
func (m *Player) GetStatus() string {
if m != nil {
return m.Status
}
return ""
}
func (m *Player) GetError() string {
if m != nil {
return m.Error
}
return ""
}
type Player_Attribute struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Value int64 `protobuf:"varint,2,opt,name=value" json:"value,omitempty"`
@ -315,33 +346,15 @@ func (m *IlInput) String() string { return proto.CompactTextString(m)
func (*IlInput) ProtoMessage() {}
func (*IlInput) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{7} }
// Simple message used to pass the connection string for the DGS to the player.
// DEPRECATED: Likely to be integrated into another protobuf message in a future version.
type ConnectionInfo struct {
ConnectionString string `protobuf:"bytes,1,opt,name=connection_string,json=connectionString" json:"connection_string,omitempty"`
}
func (m *ConnectionInfo) Reset() { *m = ConnectionInfo{} }
func (m *ConnectionInfo) String() string { return proto.CompactTextString(m) }
func (*ConnectionInfo) ProtoMessage() {}
func (*ConnectionInfo) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{8} }
func (m *ConnectionInfo) GetConnectionString() string {
if m != nil {
return m.ConnectionString
}
return ""
}
type Assignments struct {
Rosters []*Roster `protobuf:"bytes,1,rep,name=rosters" json:"rosters,omitempty"`
ConnectionInfo *ConnectionInfo `protobuf:"bytes,2,opt,name=connection_info,json=connectionInfo" json:"connection_info,omitempty"`
Rosters []*Roster `protobuf:"bytes,1,rep,name=rosters" json:"rosters,omitempty"`
Assignment string `protobuf:"bytes,10,opt,name=assignment" json:"assignment,omitempty"`
}
func (m *Assignments) Reset() { *m = Assignments{} }
func (m *Assignments) String() string { return proto.CompactTextString(m) }
func (*Assignments) ProtoMessage() {}
func (*Assignments) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{9} }
func (*Assignments) Descriptor() ([]byte, []int) { return fileDescriptor3, []int{8} }
func (m *Assignments) GetRosters() []*Roster {
if m != nil {
@ -350,11 +363,11 @@ func (m *Assignments) GetRosters() []*Roster {
return nil
}
func (m *Assignments) GetConnectionInfo() *ConnectionInfo {
func (m *Assignments) GetAssignment() string {
if m != nil {
return m.ConnectionInfo
return m.Assignment
}
return nil
return ""
}
func init() {
@ -367,47 +380,45 @@ func init() {
proto.RegisterType((*Player_Attribute)(nil), "messages.Player.Attribute")
proto.RegisterType((*Result)(nil), "messages.Result")
proto.RegisterType((*IlInput)(nil), "messages.IlInput")
proto.RegisterType((*ConnectionInfo)(nil), "messages.ConnectionInfo")
proto.RegisterType((*Assignments)(nil), "messages.Assignments")
}
func init() { proto.RegisterFile("api/protobuf-spec/messages.proto", fileDescriptor3) }
var fileDescriptor3 = []byte{
// 556 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x54, 0xd1, 0x6a, 0xdb, 0x30,
0x14, 0xc5, 0x49, 0xec, 0x36, 0x37, 0xd0, 0x76, 0x22, 0x0f, 0xa6, 0x8c, 0x11, 0x0c, 0x83, 0xd0,
0xd1, 0x18, 0x32, 0x4a, 0xc7, 0x60, 0x0f, 0x59, 0x61, 0x5b, 0x1e, 0xc6, 0x82, 0xfa, 0xb6, 0x97,
0x21, 0x3b, 0x4a, 0xaa, 0x21, 0x4b, 0x42, 0x92, 0xc3, 0x06, 0xfb, 0x81, 0x7d, 0xc4, 0xbe, 0x60,
0x1f, 0xb1, 0x5f, 0x1b, 0x96, 0xec, 0xd8, 0xed, 0xda, 0x8d, 0xbd, 0xe9, 0x9e, 0x7b, 0xaf, 0x75,
0xce, 0x3d, 0xba, 0x86, 0x09, 0x51, 0x2c, 0x55, 0x5a, 0x5a, 0x99, 0x95, 0x9b, 0x73, 0xa3, 0x68,
0x9e, 0x16, 0xd4, 0x18, 0xb2, 0xa5, 0x66, 0xe6, 0x60, 0x74, 0xd8, 0xc4, 0xc9, 0xcf, 0x00, 0x46,
0xef, 0x89, 0xcd, 0x6f, 0x3e, 0x64, 0x9f, 0x69, 0x6e, 0xd1, 0x11, 0xf4, 0xd8, 0x3a, 0x0e, 0x26,
0xc1, 0x74, 0x88, 0x7b, 0x6c, 0x8d, 0x9e, 0x00, 0x28, 0x2d, 0x15, 0xd5, 0x96, 0x51, 0x13, 0xf7,
0x1c, 0xde, 0x41, 0xd0, 0x18, 0x42, 0xaa, 0xb5, 0xd4, 0x71, 0xdf, 0xa5, 0x7c, 0x80, 0xce, 0xe0,
0x40, 0x4b, 0x63, 0xa9, 0x36, 0xf1, 0x60, 0xd2, 0x9f, 0x8e, 0xe6, 0x27, 0xb3, 0x3d, 0x03, 0xec,
0x12, 0xb8, 0x29, 0x40, 0x67, 0x10, 0x2a, 0x29, 0xb9, 0x89, 0x43, 0x57, 0x39, 0x6e, 0x2b, 0x57,
0x9c, 0x7c, 0xa5, 0x7a, 0x25, 0x25, 0xc7, 0xbe, 0x24, 0x79, 0x07, 0x91, 0x6f, 0x47, 0x08, 0x06,
0x82, 0x14, 0xb4, 0x66, 0xea, 0xce, 0xd5, 0xad, 0xca, 0xb5, 0x54, 0x44, 0xef, 0xdc, 0xea, 0xbf,
0x85, 0x9b, 0x82, 0xe4, 0x7b, 0x00, 0xd1, 0x1b, 0xc6, 0x1f, 0xfa, 0xd4, 0x63, 0x18, 0x12, 0x6b,
0x35, 0xcb, 0x4a, 0x4b, 0x6b, 0xd5, 0x2d, 0x50, 0x75, 0x14, 0xe4, 0xcb, 0xce, 0x69, 0xee, 0x63,
0x77, 0x76, 0x18, 0x13, 0xbb, 0x78, 0x50, 0x63, 0x4c, 0xec, 0xd0, 0x53, 0x08, 0x8d, 0x25, 0xb6,
0x92, 0x16, 0x4c, 0x47, 0xf3, 0xe3, 0x96, 0xce, 0x75, 0x05, 0x63, 0x9f, 0x4d, 0x2e, 0x21, 0x74,
0x71, 0x35, 0xcc, 0x5c, 0x96, 0xc2, 0x3a, 0x2a, 0x7d, 0xec, 0x03, 0x14, 0xc3, 0x01, 0xe5, 0x44,
0x19, 0xba, 0x76, 0x4c, 0x02, 0xdc, 0x84, 0xc9, 0x8f, 0x00, 0xa0, 0x1d, 0xd2, 0x43, 0x33, 0xd9,
0x38, 0x99, 0xf7, 0xcc, 0xc4, 0xeb, 0xc7, 0x4d, 0x01, 0x9a, 0x42, 0xe4, 0x4d, 0x71, 0xc2, 0xee,
0x33, 0xad, 0xce, 0xb7, 0xc2, 0x06, 0x7f, 0x15, 0xf6, 0x2b, 0x80, 0xc8, 0xf3, 0xfb, 0xef, 0x77,
0x85, 0x60, 0x50, 0x59, 0x5e, 0x3f, 0x2b, 0x77, 0x46, 0x2f, 0x01, 0xf6, 0x1e, 0x34, 0x0f, 0xeb,
0xf4, 0xae, 0xc5, 0xb3, 0x45, 0x53, 0x82, 0x3b, 0xd5, 0xa7, 0x17, 0x30, 0x5c, 0x74, 0xfd, 0xfb,
0x63, 0x50, 0x63, 0x08, 0x77, 0x84, 0x97, 0xde, 0xed, 0x3e, 0xf6, 0x41, 0xf2, 0x02, 0x22, 0x4c,
0x4d, 0xc9, 0x9d, 0x0b, 0xa6, 0xcc, 0x73, 0x6a, 0x8c, 0x6b, 0x3b, 0xc4, 0x4d, 0xd8, 0xae, 0x40,
0xaf, 0xb3, 0x02, 0xc9, 0x10, 0x0e, 0x96, 0x7c, 0x29, 0x54, 0x69, 0x93, 0x57, 0x70, 0x74, 0x25,
0x85, 0xa0, 0xb9, 0x65, 0x52, 0x2c, 0xc5, 0x46, 0xa2, 0x67, 0xf0, 0x28, 0xdf, 0x23, 0x9f, 0x8c,
0xd5, 0x4c, 0x6c, 0x6b, 0x36, 0x27, 0x6d, 0xe2, 0xda, 0xe1, 0xc9, 0x37, 0x18, 0x2d, 0x8c, 0x61,
0x5b, 0x51, 0x50, 0x61, 0x4d, 0x77, 0xb7, 0x82, 0x7f, 0xed, 0xd6, 0x02, 0x8e, 0x3b, 0xf7, 0x30,
0xb1, 0x91, 0x8e, 0xe4, 0x68, 0x1e, 0xb7, 0x3d, 0xb7, 0xa9, 0xe1, 0xa3, 0xfc, 0x56, 0xfc, 0xfa,
0xf2, 0xe3, 0xc5, 0x96, 0xd9, 0x9b, 0x32, 0x9b, 0xe5, 0xb2, 0x48, 0xdf, 0x4a, 0xb9, 0xe5, 0xf4,
0x8a, 0xcb, 0x72, 0xbd, 0xe2, 0xc4, 0x6e, 0xa4, 0x2e, 0x52, 0xa9, 0xa8, 0x38, 0x2f, 0xaa, 0x7f,
0x48, 0xca, 0x84, 0xa5, 0x5a, 0x10, 0x9e, 0xaa, 0x2c, 0x8b, 0xdc, 0xaf, 0xe6, 0xf9, 0xef, 0x00,
0x00, 0x00, 0xff, 0xff, 0x29, 0x1e, 0x07, 0x0d, 0x8e, 0x04, 0x00, 0x00,
// 532 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x54, 0x51, 0x8b, 0xd3, 0x40,
0x10, 0x26, 0x6d, 0x93, 0x5e, 0xa7, 0xa0, 0xb2, 0x14, 0x09, 0x87, 0x48, 0x09, 0x08, 0xe5, 0xe0,
0x1a, 0x38, 0x39, 0x4e, 0x7c, 0xab, 0x82, 0x7a, 0x0f, 0x62, 0x59, 0x9f, 0xf4, 0x6d, 0x93, 0x6e,
0x7b, 0x2b, 0x9b, 0xec, 0xb2, 0xbb, 0x29, 0xfa, 0x13, 0x7c, 0xf0, 0x27, 0xf8, 0x0b, 0xfc, 0x93,
0x92, 0xd9, 0xa4, 0x89, 0xf5, 0x4e, 0xb9, 0xb7, 0x9d, 0x6f, 0xbe, 0xcd, 0x7c, 0xdf, 0xcc, 0x64,
0x61, 0xce, 0xb4, 0x48, 0xb5, 0x51, 0x4e, 0x65, 0xd5, 0xf6, 0xdc, 0x6a, 0x9e, 0xa7, 0x05, 0xb7,
0x96, 0xed, 0xb8, 0x5d, 0x22, 0x4c, 0x4e, 0xda, 0x38, 0xf9, 0x15, 0xc0, 0xf4, 0x3d, 0x73, 0xf9,
0xcd, 0x87, 0xec, 0x0b, 0xcf, 0x1d, 0x79, 0x00, 0x03, 0xb1, 0x89, 0x83, 0x79, 0xb0, 0x98, 0xd0,
0x81, 0xd8, 0x90, 0xa7, 0x00, 0xda, 0x28, 0xcd, 0x8d, 0x13, 0xdc, 0xc6, 0x03, 0xc4, 0x7b, 0x08,
0x99, 0x41, 0xc8, 0x8d, 0x51, 0x26, 0x1e, 0x62, 0xca, 0x07, 0xe4, 0x0c, 0xc6, 0x46, 0x59, 0xc7,
0x8d, 0x8d, 0x47, 0xf3, 0xe1, 0x62, 0x7a, 0xf1, 0x68, 0x79, 0x50, 0x40, 0x31, 0x41, 0x5b, 0x02,
0x39, 0x83, 0x50, 0x2b, 0x25, 0x6d, 0x1c, 0x22, 0x73, 0xd6, 0x31, 0xd7, 0x92, 0x7d, 0xe3, 0x66,
0xad, 0x94, 0xa4, 0x9e, 0x92, 0xbc, 0x83, 0xc8, 0x5f, 0x27, 0x04, 0x46, 0x25, 0x2b, 0x78, 0xa3,
0x14, 0xcf, 0x75, 0x55, 0x8d, 0x57, 0x6a, 0xa1, 0x47, 0x55, 0xfd, 0xb7, 0x68, 0x4b, 0x48, 0xbe,
0x07, 0x10, 0xbd, 0x11, 0xf2, 0xae, 0x4f, 0x3d, 0x81, 0x09, 0x73, 0xce, 0x88, 0xac, 0x72, 0xbc,
0x71, 0xdd, 0x01, 0xf5, 0x8d, 0x82, 0x7d, 0xdd, 0xa3, 0xe7, 0x21, 0xc5, 0x33, 0x62, 0xa2, 0xdc,
0xc7, 0xa3, 0x06, 0x13, 0xe5, 0x9e, 0x3c, 0x83, 0xd0, 0x3a, 0xe6, 0x6a, 0x6b, 0xc1, 0x62, 0x7a,
0xf1, 0xb0, 0x93, 0xf3, 0xb1, 0x86, 0xa9, 0xcf, 0x26, 0x57, 0x10, 0x62, 0x5c, 0x37, 0x33, 0x57,
0x55, 0xe9, 0x50, 0xca, 0x90, 0xfa, 0x80, 0xc4, 0x30, 0xe6, 0x92, 0x69, 0xcb, 0x37, 0xa8, 0x24,
0xa0, 0x6d, 0x98, 0xfc, 0x0c, 0x00, 0xba, 0x26, 0xdd, 0xd5, 0x93, 0x2d, 0xda, 0xbc, 0xa5, 0x27,
0xde, 0x3f, 0x6d, 0x09, 0x64, 0x01, 0x91, 0x1f, 0x0a, 0x1a, 0xbb, 0x6d, 0x68, 0x4d, 0xbe, 0x33,
0x36, 0xfa, 0xa7, 0xb1, 0x1f, 0x03, 0x88, 0xbc, 0xbe, 0x7b, 0xef, 0x15, 0x81, 0x51, 0x3d, 0xf2,
0x66, 0xad, 0xf0, 0x4c, 0x5e, 0x02, 0x1c, 0x66, 0xd0, 0x2e, 0xd6, 0xe9, 0xf1, 0x88, 0x97, 0xab,
0x96, 0x42, 0x7b, 0xec, 0xba, 0x1e, 0xb3, 0x56, 0xec, 0xca, 0x82, 0x97, 0x0e, 0xe7, 0x31, 0xa1,
0x3d, 0x84, 0x3c, 0x86, 0xa8, 0xd6, 0x5c, 0xd9, 0x38, 0xc2, 0x5c, 0x13, 0x75, 0xfb, 0x3d, 0xee,
0xed, 0xf7, 0xe9, 0x25, 0x4c, 0x56, 0xfd, 0x6d, 0xf8, 0xab, 0xed, 0x33, 0x08, 0xf7, 0x4c, 0x56,
0x7e, 0x77, 0x86, 0xd4, 0x07, 0xc9, 0x0b, 0x88, 0x28, 0xb7, 0x95, 0xc4, 0x99, 0xda, 0x2a, 0xcf,
0xb9, 0xb5, 0x78, 0xed, 0x84, 0xb6, 0x61, 0x57, 0x70, 0xd0, 0x2b, 0x98, 0x4c, 0x60, 0x7c, 0x2d,
0xaf, 0x4b, 0x5d, 0xb9, 0xe4, 0x13, 0x4c, 0x57, 0x07, 0xdd, 0xb6, 0xff, 0xab, 0x05, 0xff, 0xfb,
0xd5, 0xfe, 0x6c, 0x02, 0x1c, 0x37, 0xe1, 0xd5, 0xd5, 0xe7, 0xcb, 0x9d, 0x70, 0x37, 0x55, 0xb6,
0xcc, 0x55, 0x91, 0xbe, 0x55, 0x6a, 0x27, 0xf9, 0x6b, 0xa9, 0xaa, 0xcd, 0x5a, 0x32, 0xb7, 0x55,
0xa6, 0x48, 0x95, 0xe6, 0xe5, 0x79, 0x51, 0xbf, 0x17, 0xa9, 0x28, 0x1d, 0x37, 0x25, 0x93, 0xa9,
0xce, 0xb2, 0x08, 0x9f, 0x95, 0xe7, 0xbf, 0x03, 0x00, 0x00, 0xff, 0xff, 0xf2, 0xef, 0x26, 0x51,
0x7a, 0x04, 0x00, 0x00,
}

@ -1,20 +1,5 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: api/protobuf-spec/mmlogic.proto
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package pb
@ -85,15 +70,15 @@ type MmLogicClient interface {
// Player listing and filtering functions
//
// RetrievePlayerPool gets the list of players that match every Filter in the
// PlayerPool, and then removes all players it finds in the ignore list. It
// PlayerPool, excluding players in any configured ignore lists. It
// combines the results, and returns the resulting player pool.
GetPlayerPool(ctx context.Context, in *PlayerPool, opts ...grpc.CallOption) (MmLogic_GetPlayerPoolClient, error)
// Ignore List functions
//
// IlInput is an empty message reserved for future use.
GetAllIgnoredPlayers(ctx context.Context, in *IlInput, opts ...grpc.CallOption) (*Roster, error)
// RetrieveIgnoreList retrieves players from the ignore list specified in the
// config file under 'ignoreLists.proposedPlayers.key'.
// ListIgnoredPlayers retrieves players from the ignore list specified in the
// config file under 'ignoreLists.proposed.name'.
ListIgnoredPlayers(ctx context.Context, in *IlInput, opts ...grpc.CallOption) (*Roster, error)
}
@ -218,15 +203,15 @@ type MmLogicServer interface {
// Player listing and filtering functions
//
// RetrievePlayerPool gets the list of players that match every Filter in the
// PlayerPool, and then removes all players it finds in the ignore list. It
// PlayerPool, excluding players in any configured ignore lists. It
// combines the results, and returns the resulting player pool.
GetPlayerPool(*PlayerPool, MmLogic_GetPlayerPoolServer) error
// Ignore List functions
//
// IlInput is an empty message reserved for future use.
GetAllIgnoredPlayers(context.Context, *IlInput) (*Roster, error)
// RetrieveIgnoreList retrieves players from the ignore list specified in the
// config file under 'ignoreLists.proposedPlayers.key'.
// ListIgnoredPlayers retrieves players from the ignore list specified in the
// config file under 'ignoreLists.proposed.name'.
ListIgnoredPlayers(context.Context, *IlInput) (*Roster, error)
}

@ -21,6 +21,7 @@ was added to the list.
package ignorelist
import (
"context"
"strconv"
"time"
@ -34,7 +35,6 @@ var (
ilLogFields = log.Fields{
"app": "openmatch",
"component": "statestorage",
"caller": "statestorage/redis/ignorelist/ignorelist.go",
}
ilLog = log.WithFields(ilLogFields)
)
@ -58,6 +58,28 @@ func Create(redisConn redis.Conn, ignorelistID string, playerIDs []string) error
return err
}
// Move moves a list of players from one ignorelist to another.
// TODO: Make cancellable with context
func Move(ctx context.Context, pool *redis.Pool, playerIDs []string, src string, dest string) error {
// Get redis connection
redisConn := pool.Get()
defer redisConn.Close()
// Setup default logging
ilLog.WithFields(log.Fields{
"src": src,
"dest": dest,
"numPlayers": len(playerIDs),
}).Debug("moving players to a different ignorelist")
redisConn.Send("MULTI")
SendAdd(redisConn, dest, playerIDs)
SendRemove(redisConn, src, playerIDs)
_, err := redisConn.Do("EXEC")
return err
}
// SendAdd is identical to Add, only it does a redigo 'Send' as part of a MULTI command.
func SendAdd(redisConn redis.Conn, ignorelistID string, playerIDs []string) {

@ -0,0 +1,273 @@
// Package playerindices indexes player attributes in Redis for faster
// filtering of player pools.
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package playerindices
import (
"context"
"errors"
"fmt"
"strings"
"time"
om_messages "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/gomodule/redigo/redis"
log "github.com/sirupsen/logrus"
"github.com/spf13/viper"
"github.com/tidwall/gjson"
)
var (
// Logrus structured logging setup
piLogFields = log.Fields{
"app": "openmatch",
"component": "statestorage",
}
piLog = log.WithFields(piLogFields)
// OM Internal metadata indices
MetaIndices = []string{
"OM_METADATA.created",
"OM_METADATA.accessed",
}
)
// Indexing is all done using Sorted Sets in Redis, which require integer
// 'scores' for each attribute in order to index players.
//
// Here are the guidelines if you want to index a player attribute in your
// Properties JSON blob when the player's request comes in the Frontend API
// so that you can filter on that attribute in the Profiles you pass to the
// Backend API.
// - Fields you want to index in your JSON should always be a key with an
// integer value, so use dictionaries/maps/hashes instead of lists/arrays for
// your data. (see examples below)
// - When indexing fields in a player's JSON object, the key to index should
// be compatible with dot notation. This means no keys with meta characters
// (anything escaped by regexp.QuoteMeta() shouldn't be used in your key
// name!)
// - If you're trying to index a flag, just use the epoch timestamp of the
// request as the value unless you have a compelling reason to do otherwise.
//
// For example, if you want to index the following:
// - Player's ping value to us-east
// - Bool flag 'true' denoting player's choice to play CTF mode
// - Bool flag 'true' denoting player's choice to play TeamDM mode
// - Bool flag 'true' denoting player's choice to play SunsetValley map
// - Players' matchmaking ranking value
//
// DON'T structure your JSON like this:
// player {
// "pings": {"us-east": 70, "eu-central": 120 },
// "maps": ["sunsetvalley", "bigskymountain"] ,
// "modes": "ctf"
// }
// Instead, use dictionaries with key/value pairs instead of lists (use epoch
// timestamp as the value if your attribute should act as a boolean flag):
// player {
// "pings": {"us-east": 70, "eu-central": 120 },
// "maps": {"sunsetvalley": 1234567890, "bigskymountain": 1234567890 } ,
// "modes": {"ctf": 1234567890}
// }
// Then, configure your list of indices for OM to look like this:
// "indices": [
// "pings.us-east",
// "modes.ctf",
// "modes.teamdm",
// "maps.sunsetvalley",
// "mmr.rating",
// ]
//
// For now, OM reads your 'config/matchmaker_config.(json|yaml)' file for a
// list of indices, which it monitors using the golang module Viper
// (https://github.com/spf13/viper).
// In a full deployment, it is expected that you don't manage the config file
// directly, but instead put the contents of that file into a Kubernetes
// ConfigMap. Kubernetes will write those contents to a file inside your
// running container for you. You can see where and how this is happening by
// looking at the kubernetes deployment resource definitions in the
// 'deployments/k8s/' directory.
//
// https://github.com/GoogleCloudPlatform/open-match/issues/42 discusses more
// about how configs are managed in Open Match.
//
// You can update the list of indices at run-time if you need to add or remove
// an index. Changes will affect all indexed players that come in the Frontend
// API from that point on.
// NOTE: there are potential edge cases here; see Retrieve() for details.
// Create indices for given player attributes in Redis.
// TODO: make this quit and not index the player if the context is cancelled.
func Create(ctx context.Context, rPool *redis.Pool, cfg *viper.Viper, player om_messages.Player) error {
// Connect to redis
redisConn := rPool.Get()
defer redisConn.Close()
iLog := piLog.WithFields(log.Fields{"playerId": player.Id})
// Get the indices from viper
indices, err := Retrieve(cfg)
// Get metadata indices
indices = append(indices, MetaIndices...)
if err != nil {
iLog.Error(err.Error())
return err
}
// Start putting this player into the indices in Redis.
redisConn.Send("MULTI")
// Loop through all attributes we want to index.
for _, attribute := range indices {
// Default value for all attributes if missing or malformed is the current epoch timestamp.
value := time.Now().Unix()
// If this is a user-defined index, look for it in the input player properties JSON
if !strings.HasPrefix(attribute, "OM_METADATA") {
// NOTE: This gjson call has issues with JSON keys containing meta characters (dot, slash, etc).
// The regexp.QuoteMeta below gets around those issues, but won't pick up dot-notation keys!
// End result is that you shouldn't use meta characters in your JSON property keys!
//v := gjson.Get(player.Properties, regexp.QuoteMeta(attribute))
v := gjson.Get(player.Properties, attribute)
// If this attribute wasn't provided in the JSON, continue to the
// next attribute to index.
if !v.Exists() {
iLog.WithFields(log.Fields{"attribute": attribute, "value": v.Raw}).Debug("Couldn't find index in JSON: ", player.Properties)
continue
} else if -9223372036854775808 <= v.Int() && v.Int() <= 9223372036854775807 {
// value contains a valid 64-bit integer
value = v.Int()
} else {
iLog.WithFields(log.Fields{"attribute": attribute}).Debug("No valid value for attribute, not indexing")
}
}
// Index the attribute by value.
iLog.Debug(fmt.Sprintf("%v %v %v %v", "ZADD", attribute, player.Id, value))
redisConn.Send("ZADD", attribute, value, player.Id)
}
// Run pipelined Redis commands.
_, err = redisConn.Do("EXEC")
return err
}
// Delete a player's indices without deleting their JSON object representation from
// state storage.
// Note: In Open Match, it is best practice to 'lazily' remove indices
// by running this as a goroutine.
// TODO: make this quit cleanly if the context is cancelled.
func Delete(ctx context.Context, rPool *redis.Pool, cfg *viper.Viper, playerID string) error {
diLog := piLog.WithFields(log.Fields{"playerID": playerID})
// Connect to redis
redisConn := rPool.Get()
defer redisConn.Close()
// Get the list of indices to delete
indices, err := Retrieve(cfg)
// Look for previously configured indices
indices = append(indices, RetrievePrevious(cfg)...)
if err != nil {
diLog.Error(err.Error())
return err
}
// Remove playerID from indices
redisConn.Send("MULTI")
for _, attribute := range indices {
diLog.WithFields(log.Fields{"attribute": attribute}).Debug("De-indexing")
redisConn.Send("ZREM", attribute, playerID)
}
_, err = redisConn.Do("EXEC")
return err
}
// DeleteMeta removes a player's internal Open Match metadata indices, and should only be used
// after deleting their JSON object representation from state storage.
// Note: In Open Match, it is best practice to 'lazily' remove indices
// by running this as a goroutine.
// TODO: make this quit cleanly if the context is cancelled.
func DeleteMeta(ctx context.Context, rPool *redis.Pool, playerID string) {
dmLog := piLog.WithFields(log.Fields{"playerID": playerID})
// Connect to redis
redisConn := rPool.Get()
defer redisConn.Close()
// Remove playerID from metaindices
redisConn.Send("MULTI")
for _, attribute := range MetaIndices {
dmLog.WithFields(log.Fields{"attribute": attribute}).Debug("De-indexing from metadata")
redisConn.Send("ZREM", attribute, playerID)
}
_, err := redisConn.Do("EXEC")
if err != nil {
dmLog.WithFields(log.Fields{"error": err.Error}).Error("Error de-indexing from metadata")
}
}
// Touch is analogous to the Unix touch command. It updates the accessed time of the player
// in the OM_METADATA.accessed index to the current epoch timestamp.
func Touch(ctx context.Context, rPool *redis.Pool, playerID string) error {
// Connect to redis
redisConn := rPool.Get()
defer redisConn.Close()
_, err := redisConn.Do("ZADD", "OM_METADATA.accessed", time.Now().Unix(), playerID)
return err
}
// Retrieve pulls the player indices from the Viper config
func Retrieve(cfg *viper.Viper) (indices []string, err error) {
// In addition to the user-defined indices from the config file, Open Match
// forces the metadata indices in MetaIndices to exist for all players:
// 'created' is used to calculate how long a player has been waiting for a
// match, and 'accessed' is used to determine when a player needs to be
// expired out of state storage. Those metadata indices are appended by
// callers such as Create(), so only user-defined indices are returned here.
if cfg.IsSet("playerIndices") {
indices = append(indices, cfg.GetStringSlice("playerIndices")...)
} else {
err = errors.New("Failure to get list of indices")
return nil, err
}
return
}
// RetrievePrevious attempts to handle an edge case when the user has removed an
// index from the list of player indices but players still exist who are
// indexed using the (now no longer used) index. The user should put the
// index they are no longer using into this config parameter so that
// deleting players with previous indexes doesn't result in a Redis memory
// leak. In a future version, Open Match should track previous indices
// itself and handle this for the user.
func RetrievePrevious(cfg *viper.Viper) []string {
if cfg.IsSet("previousPlayerIndices") {
return cfg.GetStringSlice("previousPlayerIndices")
}
return nil
}
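// retrieveExample is a minimal sketch of the Viper keys Retrieve and
// RetrievePrevious read, not a definitive config; the index names are
// hypothetical and would normally come from the matchmaker config file.
func retrieveExample() ([]string, []string, error) {
	cfg := viper.New()
	cfg.Set("playerIndices", []string{"ping.us-east", "mode.ctf"}) // attributes currently indexed
	cfg.Set("previousPlayerIndices", []string{"ping.us-west"})     // attributes no longer indexed, but still on old players
	indices, err := Retrieve(cfg)
	return indices, RetrievePrevious(cfg), err
}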

@ -1,161 +0,0 @@
// Package playerq is a player queue specific redis implementation and will be removed in a future version.
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package playerq
import (
"encoding/json"
"strings"
"github.com/gomodule/redigo/redis"
log "github.com/sirupsen/logrus"
)
// Logrus structured logging setup
var (
pqLogFields = log.Fields{
"app": "openmatch",
"component": "statestorage",
"caller": "statestorage/redis/playerq/playerq.go",
}
pqLog = log.WithFields(pqLogFields)
)
func indicesMap(results []string) interface{} {
indices := make(map[string][]string)
for _, iName := range results {
field := strings.Split(iName, ":")
indices[field[0]] = append(indices[field[0]], field[1])
}
return indices
}
// PlayerIndices retrieves available indices for player parameters.
func playerIndices(redisConn redis.Conn) (results []string, err error) {
results, err = redis.Strings(redisConn.Do("SMEMBERS", "indices"))
return
}
// Create adds a player's JSON representation to the current matchmaker state storage,
// and indexes all fields in that player's JSON object. All values in the JSON should be integers.
// If you're trying to index a boolean, just use the epoch timestamp of the
// request as the value; the existence of that value for this group/player can
// be considered a 'true' value.
// Example:
// player {
// "ping.us-east": 70,
// "ping.eu-central": 120,
// "map.sunsetvalley": "123456782", // TRUE flag key, epoch timestamp value
// "mode.ctf" // TRUE flag key, epoch timestamp value
// }
func Create(redisConn redis.Conn, playerID string, playerData string) error {
//pdJSON, err := json.Marshal(playerData)
pdMap := redisValuetoMap(playerData)
redisConn.Send("MULTI")
redisConn.Send("HSET", playerID, "properties", playerData)
for key, value := range pdMap {
// TODO: walk the JSON and flatten it
// Index this property
redisConn.Send("ZADD", key, value, playerID)
// Add this index to the list of indices
redisConn.Send("SADD", "indices", key)
}
_, err := redisConn.Do("EXEC")
check(err, "")
return err
}
// Update is an alias for Create() in this implementation
func Update(redisConn redis.Conn, playerID string, playerData string) (err error) {
Create(redisConn, playerID, playerData)
return
}
// Retrieve a player's JSON object representation from state storage.
func Retrieve(redisConn redis.Conn, playerID string) (results map[string]interface{}, err error) {
r, err := redis.String(redisConn.Do("HGET", playerID, "properties"))
if err != nil {
log.Println("Failed to get properties from playerID using HGET", err)
}
results = redisValuetoMap(r)
return
}
// Convert redis result (JSON blob in a string) to golang map
func redisValuetoMap(result string) map[string]interface{} {
jsonPD := make(map[string]interface{})
byt := []byte(result)
err := json.Unmarshal(byt, &jsonPD)
check(err, "")
return jsonPD
}
// Delete a player's JSON object representation from state storage,
// and attempt to remove the player's presence in any indexes.
func Delete(redisConn redis.Conn, playerID string) (err error) {
results, err := Retrieve(redisConn, playerID)
redisConn.Send("MULTI")
redisConn.Send("DEL", playerID)
// Remove playerID from indices
for iName := range results {
log.WithFields(log.Fields{
"field": iName,
"key": playerID}).Debug("De-Indexing field")
redisConn.Send("ZREM", iName, playerID)
}
_, err = redisConn.Do("EXEC")
check(err, "")
return
}
// Deindex a player without deleting their JSON object representation from
// state storage. Unindexing is done in two stages: first the player is added
// to an ignore list, which 'atomically' removes them from consideration. A
// goroutine is then kicked off to 'lazily' remove them from any field indices
// that contain them.
func Deindex(redisConn redis.Conn, playerID string) (err error) {
//TODO: remove deindexing from delete and call this instead
results, err := Retrieve(redisConn, playerID)
if err != nil {
log.Println("couldn't retreive player properties for ", playerID)
}
redisConn.Send("MULTI")
// Remove playerID from indices
for iName := range results {
log.WithFields(log.Fields{
"field": iName,
"key": playerID}).Debug("Un-indexing field")
redisConn.Send("ZREM", iName, playerID)
}
_, err = redisConn.Do("EXEC")
check(err, "")
return
}
func check(err error, action string) {
if err != nil {
if action == "QUIT" {
log.Fatal(err)
} else {
log.Print(err)
}
}
}

@ -35,14 +35,12 @@ var (
rhLogFields = log.Fields{
"app": "openmatch",
"component": "redishelpers",
"caller": "statestorage/redis/redishelpers.go",
}
rhLog = log.WithFields(rhLogFields)
)
// ConnectionPool reads the configuration and attempts to instantiate a redis connection
// pool based on the configured hostname and port.
// TODO: needs to be reworked to use redis sentinel when we're ready to support it.
func ConnectionPool(cfg *viper.Viper) *redis.Pool {
// As per https://www.iana.org/assignments/uri-schemes/prov/redis
// redis://user:secret@localhost:6379/0?foo=bar&qux=baz
@ -277,14 +275,16 @@ func Update(ctx context.Context, pool *redis.Pool, key string, value string) (st
return "", err
}
// Delete is a concurrent-safe, context-aware redis DEL on the input key
func Delete(ctx context.Context, pool *redis.Pool, key string) (string, error) {
// UpdateMultiFields is a concurrent-safe, context-aware Redis HSET of the input field
// Keys to update and the values to set the field to are passed in the 'kv' map.
// Example usage is to set multiple players' "assignment" field to various game server connection strings:
//   field := "assignment"
//   kv := map[string]string{"player1": "servername:10000", "player2": "otherservername:10002"}
func UpdateMultiFields(ctx context.Context, pool *redis.Pool, kv map[string]string, field string) error {
// Add the key as a field to all logs for the execution of this function.
rhLog = rhLog.WithFields(log.Fields{"key": key})
cmd := "DEL"
rhLog.WithFields(log.Fields{"query": cmd}).Debug("state storage operation")
// Add the cmd & field to all logs for the execution of this function.
cmd := "HSET"
dfLog := rhLog.WithFields(log.Fields{"field": field, "query": cmd})
// Get a connection to redis
redisConn, err := pool.GetContext(ctx)
@ -292,15 +292,75 @@ func Delete(ctx context.Context, pool *redis.Pool, key string) (string, error) {
// Encountered an issue getting a connection from the pool.
if err != nil {
rhLog.WithFields(log.Fields{
dfLog.WithFields(log.Fields{
"error": err.Error(),
"query": cmd}).Error("state storage connection error")
return "", err
}).Error("state storage connection error")
return err
}
// Run redis query and return
_, err = redisConn.Do("DEL", key)
return "", err
redisConn.Send("MULTI")
for key, value := range kv {
dfLog.WithFields(log.Fields{"key": key, "value": value}).Debug("state storage operation")
redisConn.Send(cmd, key, field, value)
}
_, err = redisConn.Do("EXEC")
return err
}
// Delete is a concurrent-safe, context-aware redis DEL on the input key
func Delete(ctx context.Context, pool *redis.Pool, key string) error {
// Add the key as a field to all logs for the execution of this function.
cmd := "DEL"
dLog := rhLog.WithFields(log.Fields{"key": key, "query": cmd})
dLog.Debug("state storage operation")
// Get a connection to redis
redisConn, err := pool.GetContext(ctx)
defer redisConn.Close()
// Encountered an issue getting a connection from the pool.
if err != nil {
dLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("state storage connection error")
return err
}
// Run redis query and return
_, err = redisConn.Do(cmd, key)
return err
}
// DeleteMultiFields is a concurrent-safe, context-aware Redis HDEL of the input field
// from the input keys.
func DeleteMultiFields(ctx context.Context, pool *redis.Pool, keys []string, field string) error {
// Add the cmd & field to all logs for the execution of this function.
cmd := "HDEL"
dfLog := rhLog.WithFields(log.Fields{"field": field, "query": cmd})
// Get a connection to redis
redisConn, err := pool.GetContext(ctx)
defer redisConn.Close()
// Encountered an issue getting a connection from the pool.
if err != nil {
dfLog.WithFields(log.Fields{
"error": err.Error(),
}).Error("state storage connection error")
return err
}
// Run redis query and return
redisConn.Send("MULTI")
for _, key := range keys {
dfLog.WithFields(log.Fields{"key": key}).Debug("state storage operation")
redisConn.Send(cmd, key, field)
}
_, err = redisConn.Do("EXEC")
return err
}
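// multiFieldExample is a minimal usage sketch of the two multi-field helpers
// above, not a definitive pattern; the player IDs and connection strings are
// hypothetical, and 'pool' is assumed to be an already-configured *redis.Pool.
func multiFieldExample(ctx context.Context, pool *redis.Pool) error {
	// Assign two (hypothetical) players to game servers in one pipelined HSET.
	assignments := map[string]string{
		"player1": "servername:10000",
		"player2": "otherservername:10002",
	}
	if err := UpdateMultiFields(ctx, pool, assignments, "assignment"); err != nil {
		return err
	}
	// Later, clear the same field from both player records with a pipelined HDEL.
	return DeleteMultiFields(ctx, pool, []string{"player1", "player2"}, "assignment")
}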
// Count is a concurrent-safe, context-aware redis SCARD on the input key

@ -0,0 +1,6 @@
# Redis State Storage Protobuffer Modules
These are modules used to read ('unmarshal'), write ('marshal'), and monitor Open Match protobuffer formats directly to/from Redis.
## FAQs
1. Why are there separate implementations for the Frontend objects (Players/Groups, participants in matchmaking) and Backend objects (MatchObjects, which hold profiles and match results)?
We'd like to unify these at some point, but to make a more generic version of this library, we'd really like to depend on golang reflection, which is kind of messy right now for protobuffers. For now, we'll just focus on having separate implementations to carry us until the situation improves.

@ -0,0 +1,143 @@
// Package redispb marshals and unmarshals Open Match Backend protobuf messages
// ('MatchObject') for redis state storage.
// More details about the protobuf messages used in Open Match can be found in
// the api/protobuf-spec/om_messages.proto file.
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
All of this can probably be done more succinctly with some more interface and
reflection, this is a hack but works for now.
*/
package redispb
import (
"context"
"errors"
"fmt"
"time"
om_messages "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/gogo/protobuf/jsonpb"
"github.com/gomodule/redigo/redis"
log "github.com/sirupsen/logrus"
)
// Logrus structured logging setup
var (
moLogFields = log.Fields{
"app": "openmatch",
"component": "statestorage",
}
moLog = log.WithFields(moLogFields)
)
// UnmarshalFromRedis unmarshals a MatchObject from a redis hash.
// This can probably be made generic to work with other pb messages in the future.
// In every case where we don't get an update, we return an error.
func UnmarshalFromRedis(ctx context.Context, pool *redis.Pool, pb *om_messages.MatchObject) error {
// Get the Redis connection.
redisConn, err := pool.GetContext(context.Background())
defer redisConn.Close()
if err != nil {
moLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("failed to connect to redis")
return err
}
// Prepare redis command.
cmd := "HGETALL"
key := pb.Id
resultLog := moLog.WithFields(log.Fields{
"component": "statestorage",
"cmd": cmd,
"key": key,
})
pbMap, err := redis.StringMap(redisConn.Do(cmd, key))
if err != nil {
resultLog.WithFields(log.Fields{"error": err.Error()}).Error("state storage error")
return err
}
if len(pbMap) == 0 {
return errors.New("matchobject key does not exist")
}
// Put values from redis into the MatchObject message
pb.Error = pbMap["error"]
pb.Properties = pbMap["properties"]
// TODO: Room for improvement here.
if j := pbMap["pools"]; j != "" {
poolsJSON := fmt.Sprintf("{\"pools\": %v}", j)
err = jsonpb.UnmarshalString(poolsJSON, pb)
if err != nil {
resultLog.Error("failure on pool")
resultLog.Error(j)
resultLog.Error(err)
}
}
if j := pbMap["rosters"]; j != "" {
rostersJSON := fmt.Sprintf("{\"rosters\": %v}", j)
err = jsonpb.UnmarshalString(rostersJSON, pb)
if err != nil {
resultLog.Error("failure on roster")
resultLog.Error(j)
resultLog.Error(err)
}
}
moLog.Debug("Final pb:")
moLog.Debug(pb)
return err
}
// Watcher makes a channel and returns it immediately. It also launches an
// asynchronous goroutine that watches a redis key and returns updates to
// that key on the channel.
//
// The pattern for this function is from 'Go Concurrency Patterns', it is a function
// that wraps a closure goroutine, and returns a channel.
// reference: https://talks.golang.org/2012/concurrency.slide#25
func Watcher(ctx context.Context, pool *redis.Pool, pb om_messages.MatchObject) <-chan om_messages.MatchObject {
watchChan := make(chan om_messages.MatchObject)
results := om_messages.MatchObject{Id: pb.Id}
go func() {
// var declaration
var err = errors.New("haven't queried Redis yet")
// Loop, querying redis until this key has a value
for err != nil {
select {
case <-ctx.Done():
// Cleanup
close(watchChan)
return
default:
//results, err = Retrieve(ctx, pool, key)
results = om_messages.MatchObject{Id: pb.Id}
err = UnmarshalFromRedis(ctx, pool, &results)
if err != nil {
moLog.Debug("No new results")
time.Sleep(2 * time.Second) // TODO: exp bo + jitter
}
}
}
// Return value retrieved from Redis asynchronously and tell calling function we're done
moLog.Debug("state storage watched record update detected")
watchChan <- results
}()
return watchChan
}
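// watcherExample is a minimal consumption sketch for Watcher, not a definitive
// pattern: Watcher sends a single MatchObject once the key has a value, and only
// closes the channel if ctx is cancelled first, so we select rather than range.
// The 'id' parameter is a hypothetical MatchObject ID.
func watcherExample(ctx context.Context, pool *redis.Pool, id string) {
	updates := Watcher(ctx, pool, om_messages.MatchObject{Id: id})
	select {
	case mo, ok := <-updates:
		if !ok {
			// Channel was closed because ctx was cancelled before a result arrived.
			moLog.Debug("watcher cancelled before a result arrived")
			return
		}
		moLog.WithFields(log.Fields{"id": mo.Id}).Debug("received watched MatchObject update")
	case <-ctx.Done():
		moLog.Debug("context cancelled while waiting for a MatchObject update")
	}
}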

@ -0,0 +1,179 @@
// Package redispb marshals and unmarshals Open Match Frontend protobuf
// messages ('Players' or groups) for redis state storage.
// More details about the protobuf messages used in Open Match can be found in
// the api/protobuf-spec/om_messages.proto file.
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
All of this can probably be done more succinctly with some more interface and
reflection, this is a hack but works for now.
*/
package redispb
import (
"context"
"fmt"
"time"
om_messages "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/GoogleCloudPlatform/open-match/internal/statestorage/redis/playerindices"
"github.com/gogo/protobuf/jsonpb"
"github.com/gomodule/redigo/redis"
log "github.com/sirupsen/logrus"
)
// Logrus structured logging setup
var (
pLogFields = log.Fields{
"app": "openmatch",
"component": "statestorage",
}
pLog = log.WithFields(pLogFields)
)
// UnmarshalPlayerFromRedis unmarshals a Player from a redis hash.
// This can probably be deprecated if we work on getting the above generic enough.
// The problem is that protobuf message reflection is pretty messy.
func UnmarshalPlayerFromRedis(ctx context.Context, pool *redis.Pool, player *om_messages.Player) error {
// Get the Redis connection.
redisConn, err := pool.GetContext(context.Background())
defer redisConn.Close()
if err != nil {
pLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("failed to connect to redis")
return err
}
// Prepare redis command.
cmd := "HGETALL"
key := player.Id
resultLog := pLog.WithFields(log.Fields{
"component": "statestorage",
"cmd": cmd,
"key": key,
})
// Run redis command
playerMap, err := redis.StringMap(redisConn.Do(cmd, key))
// Put values from redis into the Player message
player.Properties = playerMap["properties"]
player.Pool = playerMap["pool"]
player.Assignment = playerMap["assignment"]
player.Status = playerMap["status"]
player.Error = playerMap["error"]
// TODO: Room for improvement here.
if a := playerMap["attributes"]; a != "" {
attrsJSON := fmt.Sprintf("{\"attributes\": %v}", a)
err = jsonpb.UnmarshalString(attrsJSON, player)
if err != nil {
resultLog.Error("failure on attributes")
resultLog.Error(a)
}
}
resultLog.Debug("state storage operation: player unmarshalled")
return err
}
// PlayerWatcher makes a channel and returns it immediately. It also launches an
// asynchronous goroutine that watches a redis key and returns updates to
// that key on the channel.
//
// The pattern for this function is from 'Go Concurrency Patterns', it is a function
// that wraps a closure goroutine, and returns a channel.
// reference: https://talks.golang.org/2012/concurrency.slide#25
//
// NOTE: this function will never stop querying Redis during normal operation! You need to
// disconnect the client from the frontend API (which closes the context) once
// you've received the results you were waiting for to stop doing work!
func PlayerWatcher(ctx context.Context, pool *redis.Pool, pb om_messages.Player) <-chan om_messages.Player {
pwLog := pLog.WithFields(log.Fields{"playerId": pb.Id})
// Establish channel to return results on.
watchChan := make(chan om_messages.Player)
go func() {
// var declaration
var prevResults = ""
// Loop, querying redis until this key has a value or the Redis query fails.
for {
select {
case <-ctx.Done():
// Player stopped asking for updates; clean up
close(watchChan)
return
default:
// Update the player's 'accessed' timestamp to denote they haven't disappeared
err := playerindices.Touch(ctx, pool, pb.Id)
if err != nil {
// Not fatal, but this error should be addressed. This could
// cause the player to expire while still actively connected!
pwLog.WithFields(log.Fields{"error": err.Error()}).Error("Unable to update accessed metadata timestamp")
}
// Get player from redis.
results := om_messages.Player{Id: pb.Id}
err = UnmarshalPlayerFromRedis(ctx, pool, &results)
if err != nil {
// Return error and quit.
pwLog.Debug("State storage error:", err.Error())
results.Error = err.Error()
watchChan <- results
close(watchChan)
return
}
// Check for new results and send them. Store a copy of the
// latest version in string form so we can compare it easily in
// future loops.
//
// If we decide to watch other message fields for updates,
// they will need to be added here.
//
// This can be made much cleaner if protobuffer reflection improves.
curResults := fmt.Sprintf("%v%v%v", results.Assignment, results.Status, results.Error)
if prevResults == curResults {
pwLog.Debug("No new watcher results")
// TODO: change the debug message once exp bo + jitter is implemented
//pwLog.Debug("No new results, backing off")
time.Sleep(2 * time.Second) // TODO: exp bo + jitter
} else {
// Return value retrieved from Redis
pwLog.Debug("state storage watched player record changed")
watchedFields := om_messages.Player{
// Return only the watched fields to minimize traffic
Id: results.Id,
Assignment: results.Assignment,
Status: results.Status,
Error: results.Error,
}
watchChan <- watchedFields
prevResults = curResults
time.Sleep(2 * time.Second) // TODO: reset exp bo + jitter
}
}
}
}()
return watchChan
}
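// playerWatcherExample is a minimal consumption sketch for PlayerWatcher, not a
// definitive pattern; per the NOTE above, the caller must cancel ctx (for example
// when the frontend client disconnects) or the watcher keeps polling Redis.
// The 'playerID' parameter is a hypothetical player ID.
func playerWatcherExample(ctx context.Context, pool *redis.Pool, playerID string) {
	// The channel closes when ctx is cancelled or when a state storage error occurs.
	for update := range PlayerWatcher(ctx, pool, om_messages.Player{Id: playerID}) {
		if update.Error != "" {
			pLog.WithFields(log.Fields{"error": update.Error}).Error("player watcher returned an error result")
			return
		}
		pLog.WithFields(log.Fields{
			"assignment": update.Assignment,
			"status":     update.Status,
		}).Debug("watched player fields changed")
	}
}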

@ -1,5 +1,7 @@
// Package redispb marshals and unmarshals protobuf messages for redis state storage.
// More details about the protobuf messages used in Open Match can be found in the api/protobuf-spec/om_messages.proto file.
// Package redispb marshals and unmarshals Open Match Backend protobuf messages
// ('MatchObject') for redis state storage.
// More details about the protobuf messages used in Open Match can be found in
// the api/protobuf-spec/om_messages.proto file.
/*
Copyright 2018 Google LLC
@ -23,12 +25,9 @@ package redispb
import (
"context"
"errors"
"fmt"
"reflect"
"strings"
"time"
om_messages "github.com/GoogleCloudPlatform/open-match/internal/pb"
"github.com/gogo/protobuf/jsonpb"
"github.com/gogo/protobuf/proto"
"github.com/gomodule/redigo/redis"
@ -38,29 +37,29 @@ import (
// Logrus structured logging setup
var (
rpLogFields = log.Fields{
sLogFields = log.Fields{
"app": "openmatch",
"component": "statestorage",
"caller": "internal/statestorage/redis/redispb/redispb.go",
}
rpLog = log.WithFields(rpLogFields)
sLog = log.WithFields(sLogFields)
)
// MarshalToRedis marshals a protobuf message to a redis hash.
// The protobuf message in question must have an 'id' field.
func MarshalToRedis(ctx context.Context, pb proto.Message, pool *redis.Pool) (err error) {
// If a positive integer TTL is provided, it will also be set.
func MarshalToRedis(ctx context.Context, pool *redis.Pool, pb proto.Message, ttl int) error {
// We want to serialize to redis as JSON, not the typical protobuf string
// serializer, so start by marshalling to json.
this := jsonpb.Marshaler{}
jsonMsg, err := this.MarshalToString(pb)
if err != nil {
rpLog.WithFields(log.Fields{
sLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
"protobuf": pb,
}).Error("failure marshaling protobuf message to JSON")
return
return err
}
// Get redis key
@ -69,17 +68,17 @@ func MarshalToRedis(ctx context.Context, pb proto.Message, pool *redis.Pool) (er
// Return error if the provided protobuf message doesn't have an ID field
if !keyResult.Exists() {
err = errors.New("cannot unmarshal protobuf messages without an id field")
rpLog.WithFields(log.Fields{
sLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("failed to retrieve from redis")
return
return err
}
key := keyResult.String()
// Prepare redis command.
cmd := "HSET"
resultLog := rpLog.WithFields(log.Fields{
resultLog := sLog.WithFields(log.Fields{
"key": key,
"cmd": cmd,
})
@ -88,11 +87,11 @@ func MarshalToRedis(ctx context.Context, pb proto.Message, pool *redis.Pool) (er
redisConn, err := pool.GetContext(context.Background())
defer redisConn.Close()
if err != nil {
rpLog.WithFields(log.Fields{
sLog.WithFields(log.Fields{
"error": err.Error(),
"component": "statestorage",
}).Error("failed to connect to redis")
return
return err
}
redisConn.Send("MULTI")
@ -104,7 +103,12 @@ func MarshalToRedis(ctx context.Context, pb proto.Message, pool *redis.Pool) (er
// something like parseTag() in src/encoding/json/tags.go
//field := strings.ToLower(pbInfo.Type().Field(i).Tag.Get("json"))
field := strings.ToLower(pbInfo.Type().Field(i).Name)
value := gjson.Get(jsonMsg, field)
value := ""
//value, err = strconv.Unquote(gjson.Get(jsonMsg, field).String())
value = gjson.Get(jsonMsg, field).String()
if err != nil {
resultLog.Error("Issue with Unquoting string", err)
}
if field != "id" {
// This isn't the ID field, so write it to the redis hash.
redisConn.Send(cmd, key, field, value)
@ -114,7 +118,7 @@ func MarshalToRedis(ctx context.Context, pb proto.Message, pool *redis.Pool) (er
"component": "statestorage",
"field": field,
}).Error("State storage error")
return
return err
}
resultLog.WithFields(log.Fields{
"component": "statestorage",
@ -124,94 +128,19 @@ func MarshalToRedis(ctx context.Context, pb proto.Message, pool *redis.Pool) (er
}
}
_, err = redisConn.Do("EXEC")
return
}
// UnmarshalFromRedis unmarshals a MatchObject from a redis hash.
// This can probably be made generic to work with other pb messages in the future.
// In every case where we don't get an update, we return an error.
func UnmarshalFromRedis(ctx context.Context, pool *redis.Pool, pb *om_messages.MatchObject) error {
// Get the Redis connection.
redisConn, err := pool.GetContext(context.Background())
defer redisConn.Close()
if err != nil {
rpLog.WithFields(log.Fields{
"error": err.Error(),
if ttl > 0 {
redisConn.Send("EXPIRE", key, ttl)
resultLog.WithFields(log.Fields{
"component": "statestorage",
}).Error("failed to connect to redis")
return err
"ttl": ttl,
}).Info("State storage expiration set")
} else {
resultLog.WithFields(log.Fields{
"component": "statestorage",
"ttl": ttl,
}).Debug("State storage expiration not set")
}
// Prepare redis command.
cmd := "HGETALL"
key := pb.Id
resultLog := rpLog.WithFields(log.Fields{
"component": "statestorage",
"cmd": cmd,
"key": key,
})
pbMap, err := redis.StringMap(redisConn.Do(cmd, key))
pb.Error = pbMap["error"]
pb.Properties = pbMap["properties"]
poolsJSON := fmt.Sprintf("{\"pools\": %v}", pbMap["pools"])
err = jsonpb.UnmarshalString(poolsJSON, pb)
if err != nil {
resultLog.Error("failure on pool")
resultLog.Error(pbMap["pools"])
resultLog.Error(err)
}
rostersJSON := fmt.Sprintf("{\"rosters\": %v}", pbMap["rosters"])
err = jsonpb.UnmarshalString(rostersJSON, pb)
if err != nil {
resultLog.Error("failure on roster")
resultLog.Error(pbMap["rosters"])
log.Error(err)
}
rpLog.Debug("Final pb:")
rpLog.Debug(pb)
_, err = redisConn.Do("EXEC")
return err
}
// Watcher makes a channel and returns it immediately. It also launches an
// asynchronous goroutine that watches a redis key and returns updates to
// that key on the channel.
//
// The pattern for this function is from 'Go Concurrency Patterns', it is a function
// that wraps a closure goroutine, and returns a channel.
// reference: https://talks.golang.org/2012/concurrency.slide#25
func Watcher(ctx context.Context, pool *redis.Pool, pb om_messages.MatchObject) <-chan om_messages.MatchObject {
watchChan := make(chan om_messages.MatchObject)
results := om_messages.MatchObject{Id: pb.Id}
go func() {
// var declaration
var err = errors.New("haven't queried Redis yet")
// Loop, querying redis until this key has a value
for err != nil {
select {
case <-ctx.Done():
// Cleanup
close(watchChan)
return
default:
//results, err = Retrieve(ctx, pool, key)
results = om_messages.MatchObject{Id: pb.Id}
err = UnmarshalFromRedis(ctx, pool, &results)
if err != nil {
rpLog.Debug("No new results")
time.Sleep(2 * time.Second) // TODO: exp bo + jitter
}
}
}
// Return value retreived from Redis asynchonously and tell calling function we're done
rpLog.Debug("state storage watched record update detected")
watchChan <- results
}()
return watchChan
}

@ -1,8 +0,0 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-clientloadgen:$TAG_NAME',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-clientloadgen:$TAG_NAME']

@ -1,7 +1,7 @@
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/test/cmd/client/
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/test/cmd/clientloadgen/
COPY ./ ./
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o loadgen .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o clientloadgen .
CMD ["./loadgen"]
CMD ["./clientloadgen"]

@ -0,0 +1,8 @@
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'--tag=gcr.io/$PROJECT_ID/openmatch-clientloadgen:dev',
'.'
]
images: ['gcr.io/$PROJECT_ID/openmatch-clientloadgen:dev']

@ -21,8 +21,8 @@ import (
"log"
"time"
"github.com/GoogleCloudPlatform/open-match/test/cmd/client/player"
"github.com/GoogleCloudPlatform/open-match/test/cmd/client/redis/playerq"
"github.com/GoogleCloudPlatform/open-match/test/cmd/clientloadgen/player"
"github.com/GoogleCloudPlatform/open-match/test/cmd/clientloadgen/redis/playerq"
"github.com/gomodule/redigo/redis"
"github.com/spf13/viper"
@ -31,14 +31,14 @@ import (
func main() {
conf, err := readConfig("", map[string]interface{}{
"REDIS_SENTINEL_SERVICE_HOST": "127.0.0.1",
"REDIS_SENTINEL_SERVICE_PORT": "6379",
"REDIS_SERVICE_HOST": "127.0.0.1",
"REDIS_SERVICE_PORT": "6379",
})
check(err, "QUIT")
// As per https://www.iana.org/assignments/uri-schemes/prov/redis
// redis://user:secret@localhost:6379/0?foo=bar&qux=baz
redisURL := "redis://" + conf.GetString("REDIS_SENTINEL_SERVICE_HOST") + ":" + conf.GetString("REDIS_SENTINEL_SERVICE_PORT")
redisURL := "redis://" + conf.GetString("REDIS_SERVICE_HOST") + ":" + conf.GetString("REDIS_SERVICE_PORT")
pool := redis.Pool{
MaxIdle: 3,
@ -65,7 +65,7 @@ func main() {
elapsed := time.Since(start)
check(err, "")
fmt.Printf("Redis queries and UUID generation took %s\n", elapsed)
fmt.Printf("Redis queries and Xid generation took %s\n", elapsed)
fmt.Println("Sleeping")
time.Sleep(5 * time.Second)
}

@ -28,7 +28,7 @@ import (
"strings"
"time"
"github.com/google/uuid"
"github.com/rs/xid"
)
var (
@ -133,10 +133,9 @@ func New() {
// For PoC, we're flattening the JSON so it can be easily indexed in Redis.
// Flattened keys are joined using periods.
// That should be abstracted out of this level and into the db storage module
func Generate() (UUID string, properties map[string]int, debug map[string]string) {
//return UUID, properties, debug
// https://stackoverflow.com/a/37944520/3113674
UUID = strings.Replace(uuid.New().String(), "-", "", -1)
func Generate() (Xid string, properties map[string]int, debug map[string]string) {
//return Xid, properties, debug
Xid = xid.New().String()
properties = make(map[string]int)
debug = make(map[string]string)

@ -1,5 +1,5 @@
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/examples/frontendclient
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/test/cmd/frontendclient
COPY ./ ./
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o frontendclient .

@ -1,5 +1,5 @@
# Frontend API Client Stub
`frontendclient` is a fake client for the Frontend API. It pretends to be a real game client connecting to Open Match and requests a game, then dumps out the connection string it receives. Note that it doesn't actually test the return path by looking for arbitrary results from your matchmaking function; it pauses and tells you the name of a key to set a connection string in directly using a redis-cli client.
`frontendclient` is a fake client for the Frontend API. It pretends to be a number of real game clients connecting to Open Match and requests a match, as a group. It then waits for results to come back from the Frontend API, and prints them to your screen. You can generate these results using the entire Open Match end-to-end workflow - querying the backend, running an MMF, and assigning players to a match - or you can manually test the results pathway by directly putting the results you want into redis.
Only to be used for testing, and only in isolated environments (not in production!)

@ -0,0 +1,245 @@
0.0010714152016974474 Palermo
0.002142830403394895 Stockholm
0.0032142456050923422 Atlanta
0.00428566080678979 La Ceiba
0.0053570760084872375 Zurich
0.011447000014935527 Bursa
0.01944190025000188 Medellin
0.020513315451699324 Joao Pessoa
0.02158473065339677 Bristol
0.022656145855094217 Redding
0.023727561056791663 Liege
0.02479897625848911 Ljubljana
0.025870391460186555 San Jose
0.026941806661884 Des Moines
0.028013221863581447 Basel
0.03765595867885847 Guatemala
0.042646610688365186 San Antonio
0.043718025890062635 Washington
0.044789441091760085 Saskatoon
0.045860856293457535 The Hague
0.046932271495154984 Ktis
0.048003686696852434 Wellington
0.04907510189854988 Roseburg
0.058554983603168895 Guadalajara
0.06805843644222526 Johannesburg
0.0691298516439227 Strasbourg
0.07020126684562014 Stamford
0.07127268204731758 Dubai
0.07234409724901503 Louisville
0.07341551245071247 Halifax
0.12425202094085296 New York
0.1253234361425504 Canberra
0.12639485134424785 Dhaka
0.1274662665459453 Las Vegas
0.12853768174764277 Buffalo
0.12960909694934022 Moscow
0.13068051215103768 Algiers
0.13175192735273514 Austin
0.1328233425544326 Pittsburgh
0.13389475775613005 Tbilisi
0.20675099147155646 Shanghai
0.2078224066732539 Gothenburg
0.20889382187495137 Sacramento
0.20996523707664883 Brunswick
0.26482169540355815 Seoul
0.2658931106052556 Dar es Salaam
0.26696452580695307 Siauliai
0.2680359410086505 Colorado Springs
0.2779808169108062 Melbourne
0.2790522321125037 Cape Town
0.28012364731420114 Santiago
0.2921384973860363 Singapore
0.29320991258773377 Cardiff
0.2942813277894312 Dublin
0.2953527429911287 Dronten
0.29642415819282614 Vilnius
0.2974955733945236 Vancouver
0.34301571965384137 Sao Paulo
0.3440871348555388 Orlando
0.3451585500572363 Denver
0.34622996525893374 Manila
0.3473013804606312 Fremont
0.34837279566232865 Belfast
0.3494442108640261 Miami
0.35051562606572356 Baltimore
0.37020395181211585 Hangzhou
0.3712753670138133 Boston
0.380058828837329 Montreal
0.38113024403902646 Copenhagen
0.3822016592407239 Anchorage
0.38327307444242137 Paris
0.3843444896441188 Toledo
0.3854159048458163 Brussels
0.38648732004751374 Budapest
0.3875587352492112 Cleveland
0.38863015045090865 Berlin
0.39597791590414977 Hyderabad
0.3970493311058472 Vladivostok
0.3981207463075447 Charlotte
0.39919216150924214 Brasilia
0.4002635767109396 Panama
0.40133499191263705 Albany
0.4024064071143345 Geneva
0.43054391314131285 Los Angeles
0.4316153283430103 Prague
0.43268674354470776 Richmond
0.4337581587464052 Berkeley Springs
0.4348295739481027 Phnom Penh
0.43590098914980013 Portland
0.4369724043514976 Thessaloniki
0.43804381955319505 Lyon
0.4391152347548925 Dusseldorf
0.45782000134612655 Bangalore
0.458891416547824 Secaucus
0.45996283174952146 New Delhi
0.4610342469512189 Scranton
0.4621056621529164 Gdansk
0.46317707735461383 Jakarta
0.4642484925563113 Fez
0.46531990775800874 Edmonton
0.4663913229597062 Quito
0.46746273816140366 Venice
0.4685341533631011 Lima
0.5002566146549592 Istanbul
0.5013280298566566 Ottawa
0.502399445058354 Hanoi
0.5034708602600514 Asheville
0.5045422754617488 Oklahoma City
0.5045423118898656 Lahore
0.505613727091563 Paramaribo
0.5066851422932604 Knoxville
0.5077565574949578 Savannah
0.5088279726966553 Memphis
0.5098993878983527 Auckland
0.51097080310005 Bucharest
0.5120422183017475 Luxembourg
0.5131136335034449 Cromwell
0.5141850487051423 Oslo
0.5152564639068397 London
0.5163278791085371 Lugano
0.5173992943102345 Piscataway
0.5184707095119319 Maidstone
0.5195421247136293 Milwaukee
0.5206135399153267 Antwerp
0.5216849551170241 Heredia
0.534387653748349 Toronto
0.5354590689500464 Jerusalem
0.551530296975508 St Petersburg
0.5526017121772054 Monticello
0.5633265783461969 Sydney
0.5643979935478943 Rotterdam
0.5654694087495917 Lausanne
0.5665408239512891 Manhattan
0.570783628150011 Perth
0.5718550433517084 Seattle
0.5729264585534058 Indianapolis
0.5739978737551032 San Francisco
0.5750692889568007 Bern
0.5774306880613418 Mexico
0.5785021032630392 Roubaix
0.5813434963779408 Adelaide
0.5824149115796382 Arezzo
0.5834863267813356 Chisinau
0.584557741983033 Westpoort
0.5856291571847304 Cincinnati
0.6011561062877299 Dallas
0.6022275214894273 Columbus
0.6032989366911247 Bruges
0.6043703518928221 Honolulu
0.6054417670945195 Detroit
0.6065131822962169 Karaganda
0.6075845974979143 Bratislava
0.6211122858345463 Houston
0.6221837010362437 Coventry
0.623255116237941 Eindhoven
0.6243265314396385 Kampala
0.6253979466413359 Dagupan
0.6264693618430333 Riga
0.6275407770447307 Kansas City
0.6286121922464281 Brno
0.6296836074481255 Tokyo
0.6307550226498229 Newcastle
0.6318264378515203 Ankara
0.6328978530532177 Tempe
0.6339692682549151 Montevideo
0.6543540138824107 Chicago
0.6554254290841081 Lincoln
0.6564968442858055 Minneapolis
0.6575682594875029 Salt Lake City
0.6586396746892003 Sofia
0.7000863003516643 Osaka
0.7011577155533617 Varna
0.7022291307550591 Madrid
0.7033005459567565 Green Bay
0.7043719611584539 New Orleans
0.7054433763601513 San Juan
0.7169663075704311 Sapporo
0.7180377227721285 St Louis
0.7191091379738259 Amsterdam
0.7201805531755233 Raleigh
0.7212519683772207 Alblasserdam
0.7268233274260475 Vienna
0.73073827857305 Valencia
0.749280190053626 Chennai
0.7503516052553234 Marseille
0.7514230204570208 Buenos Aires
0.7524944356587182 Warsaw
0.7535658508604156 Lisbon
0.754637266062113 Christchurch
0.7557086812638104 Athens
0.7625785955370944 Milan
0.7700785019489765 Brisbane
0.7788919633981397 Izmir
0.7799633785998371 Rome
0.8143772348783591 Lagos
0.8154486500800565 Frosinone
0.8165200652817539 Calgary
0.8421911735144247 Shenzhen
0.8432625887161221 Caracas
0.8443340039178195 Riyadh
0.8551702972677875 Pune
0.8705986761722307 Malaysia
0.8716700913739281 Tel Aviv
0.8846749290921317 Philadelphia
0.8857463442938291 Missoula
0.8868177594955265 Albuquerque
0.8901755747376463 Novosibirsk
0.8957597907688934 Munich
0.8968312059705909 Bogota
0.8979026211722883 Winnipeg
0.8989740363739857 Valletta
0.898974117801541 Hong Kong
0.9000455330032384 Zagreb
0.9011169482049358 Kiev
0.9021883634066332 Edinburgh
0.9032597786083306 Jackson
0.9089232793645033 Tunis
0.930458724918622 Ho Chi Minh City
0.9315301401203194 Koto
0.9423428623358501 Hamburg
0.9434142775375475 Nairobi
0.9444856927392449 Bergen
0.9491356347146118 Indore
0.9596847887905249 San Diego
0.9607562039922223 Taipei
0.9618276191939197 Zhangjiakou
0.9719846353060115 Barcelona
0.9730560505077089 Syracuse
0.9741274657094063 Frankfurt
0.9751988809111037 Tampa
0.9762702961128011 Carlow
0.9773417113144985 South Bend
0.9784131265161959 Bangkok
0.9794845417178933 Nis
0.9805559569195907 Limassol
0.9816273721212881 Tallinn
0.9826987873229855 Jacksonville
0.983770202524683 Nuremberg
0.9935715087898112 Phoenix
0.9946429239915086 Reykjavik
0.995714339193206 Tirana
0.9967857543949034 Salem
0.9978571695966008 Helsinki
0.9989285847982982 Quebec City
1.0000000000000000 Manchester

@ -0,0 +1,245 @@
Adelaide 15974km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Albany 5733km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Alblasserdam 114km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Albuquerque 8325km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Algiers 1571km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Amsterdam 163km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Anchorage 7346km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Ankara 2514km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Antwerp 41km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Arezzo 997km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Asheville 6825km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Athens 2090km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Atlanta 7089km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Auckland 18285km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Austin 8226km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Baltimore 6162km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Bangalore 7721km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Bangkok 9251km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Barcelona 1065km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Basel 435km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Belfast 807km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 00:19:45
Bergen 1061km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Berkeley Springs 6242km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Berlin 561km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Bern 489km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Bogota 8802km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Boston 5584km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Brasilia 8979km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Bratislava 968km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Brisbane 16322km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Bristol 488km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Brno 894km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Bruges 88km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Brunswick 5409km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Bucharest 1771km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Budapest 1129km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Buenos Aires 11308km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Buffalo 6049km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Bursa 2244km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Calgary 7288km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Canberra 16723km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Cape Town 9535km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Caracas 7795km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Cardiff 529km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Carlow 805km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Charlotte 6743km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Chennai 7900km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Chicago 6666km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Chisinau 1836km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Christchurch 18816km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Cincinnati 6682km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Cleveland 6289km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Colorado Springs 7898km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Columbus 6521km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Copenhagen 765km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Coventry 440km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Cromwell 5739km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Dagupan 10339km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Dallas 7953km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Dar es Salaam 7233km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Denver 7833km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Des Moines 7040km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Detroit 6349km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Dhaka 7719km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Dronten 207km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Dubai 5179km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Dublin 775km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Dusseldorf 175km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Edinburgh 755km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:22
Edmonton 7043km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Eindhoven 102km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Fez 2016km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Frankfurt 317km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Fremont 8882km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Frosinone 1245km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Gdansk 1038km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Geneva 532km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Gothenburg 907km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Green Bay 6487km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Groningen 303km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Guadalajara 9409km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Guatemala 8804km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Halifax 4946km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Hamburg 489km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Hangzhou 9030km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Hanoi 8976km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Helsinki 1637km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Heredia 9042km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Ho Chi Minh City 9922km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Hong Kong 9396km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Honolulu 11807km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Houston 8118km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Hyderabad 7412km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Indianapolis 6733km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Indore 6789km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Istanbul 2181km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Izmir 2256km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Jackson 7580km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Jacksonville 7180km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Jakarta 11408km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Jerusalem 3298km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Joao Pessoa 7457km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Johannesburg 8881km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Kampala 6218km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Kansas City 7296km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Karaganda 4703km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Kiev 1836km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Knoxville 6886km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Koto 9457km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Ktis 732km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
La Ceiba 8735km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Lagos 4930km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Lahore 5990km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Las Vegas 8691km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Lausanne 509km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Liege 89km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Lima 10452km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Limassol 2922km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Lincoln 7279km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Lisbon 1717km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Ljubljana 918km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
London 319km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Los Angeles 9034km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Louisville 6826km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Lugano 637km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Luxembourg 180km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Lyon 567km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Madrid 1317km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Maidstone 271km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Malaysia 10253km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Manchester 536km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Manhattan 5881km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Manila 10511km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Marseille 843km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Medellin 8765km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Melbourne 16617km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Memphis 7340km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Mexico 9248km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Miami 7449km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Milan 698km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Milwaukee 6591km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Minneapolis 6759km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Missoula 7657km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Montevideo 11220km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Monticello 6847km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Montreal 5535km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Moscow 2253km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Munich 602km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Nairobi 6561km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
New Delhi 6416km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
New Orleans 7762km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
New York 5865km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Newcastle 607km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Nis 1566km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Novosibirsk 5005km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Nuremberg 504km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Oklahoma City 7770km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:23
Orlando 7305km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Osaka 9373km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Oslo 1088km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Ottawa 5675km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Palermo 1584km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Panama 9089km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Paramaribo 7406km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Paris 262km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Perth 14165km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Philadelphia 6019km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Phnom Penh 9743km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Phoenix 8775km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Piscataway 5931km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Pittsburgh 6295km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Portland 8155km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Prague 717km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Pune 6992km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Quebec City 5303km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Quito 9522km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Raleigh 6566km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Redding 8611km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Reykjavik 2127km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Richmond 6344km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Riga 1454km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Riyadh 4632km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Rome 1174km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Roseburg 8402km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Rotterdam 119km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Roubaix 84km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Sacramento 8762km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Salem 5564km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Salt Lake City 8103km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
San Antonio 8342km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
San Diego 9105km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
San Francisco 8880km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
San Jose 8899km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
San Juan 7063km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Santiago 11910km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Sao Paulo 9662km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Sapporo 8777km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Saskatoon 6869km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Savannah 7004km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Scranton 5948km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Seattle 7944km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Secaucus 5889km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Seoul 8707km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Shanghai 9016km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Shenzhen 9367km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Siauliai 1373km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Singapore 10550km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Sofia 1699km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
South Bend 6593km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
St Louis 7071km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
St Petersburg 1907km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Stamford 5835km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Stockholm 1280km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Strasbourg 352km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Sydney 16749km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Syracuse 5865km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Taipei 9584km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Tallinn 1599km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Tampa 7426km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Tbilisi 3226km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Tel Aviv 3251km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Tempe 8767km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
The Hague 137km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Thessaloniki 1824km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Tirana 1589km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Tokyo 9454km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Toledo 6430km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Toronto 6028km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Tunis 1933km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Valencia 1317km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Valletta 1850km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Vancouver 7821km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Varna 1966km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Venice 835km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Vienna 915km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Vilnius 1465km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Vladivostok 8404km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Warsaw 1160km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Washington 6235km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Wellington 18727km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Westpoort 176km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Winnipeg 6583km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Zagreb 1024km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Zhangjiakou 7808km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24
Zurich 492km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24

@@ -0,0 +1,245 @@
Adelaide 16266km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:03
Albany 5417km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:03
Alblasserdam 330km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:04
Albuquerque 8031km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:04
Algiers 1660km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:04
Amsterdam 341km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:04
Anchorage 7197km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:04
Ankara 2833km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:04
Antwerp 314km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:04
Arezzo 1266km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:04
Asheville 6509km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Athens 2392km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Atlanta 6772km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Auckland 18338km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Austin 7915km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Baltimore 5845km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Bangalore 8035km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Bangkok 9532km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Barcelona 1140km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Basel 708km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Belfast 518km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 00:19:49
Bergen 1041km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Berkeley Springs 5926km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Berlin 875km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Bern 747km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Bogota 8504km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Boston 5266km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Brasilia 8796km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Bratislava 1287km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Brisbane 16529km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Bristol 172km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Brno 1211km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Bruges 234km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Brunswick 5093km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Brussels 319km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Bucharest 2091km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Budapest 1448km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Buenos Aires 11132km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Buffalo 5736km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Bursa 2562km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Calgary 7036km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Canberra 16986km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Cape Town 9680km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Caracas 7502km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Cardiff 213km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Carlow 488km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Charlotte 6425km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Chennai 8212km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Chicago 6358km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Chisinau 2150km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Christchurch 18975km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Cincinnati 6369km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Cleveland 5977km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Colorado Springs 7606km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Columbus 6208km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Copenhagen 953km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Coventry 138km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Cromwell 5422km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Dagupan 10564km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:05
Dallas 7644km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Dar es Salaam 7492km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Denver 7542km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Des Moines 6738km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Detroit 6038km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Dhaka 8002km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Dronten 413km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Dubai 5498km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Dublin 464km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Dusseldorf 479km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Edinburgh 533km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Edmonton 6797km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Eindhoven 386km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Fez 1982km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Frankfurt 636km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Fremont 8619km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Frosinone 1513km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Gdansk 1290km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Geneva 745km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Gothenburg 1035km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Green Bay 6183km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Groningen 490km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Guadalajara 9096km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Guatemala 8494km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Halifax 4627km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Hamburg 719km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Hangzhou 9222km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Hanoi 9231km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Helsinki 1806km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Heredia 8727km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Ho Chi Minh City 10195km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Hong Kong 9625km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Honolulu 11634km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Houston 7805km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:06
Hyderabad 7722km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:07
Indianapolis 6422km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:07
Indore 7096km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:07
Istanbul 2499km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:07
Izmir 2568km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:07
Jackson 7265km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:07
Jacksonville 6861km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:07
Jakarta 11706km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Jerusalem 3609km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Joao Pessoa 7311km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Johannesburg 9072km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Kampala 6465km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Kansas City 6992km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Karaganda 4945km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Kiev 2132km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Knoxville 6570km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Koto 9565km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Ktis 1051km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
La Ceiba 8417km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Lagos 5014km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Lahore 6285km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Las Vegas 8412km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Lausanne 741km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Liege 409km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Lima 10173km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Limassol 3236km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Lincoln 6979km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Lisbon 1590km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Ljubljana 1228km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Los Angeles 8758km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Louisville 6513km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Lugano 902km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Luxembourg 484km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Lyon 733km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Madrid 1266km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Maidstone 51km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Malaysia 10547km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:08
Manchester 261km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Manhattan 5564km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Manila 10736km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Marseille 1002km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Medellin 8463km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Melbourne 16904km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Memphis 7027km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Mexico 8932km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Miami 7130km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Milan 959km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Milwaukee 6286km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Minneapolis 6465km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Missoula 7396km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Montevideo 11052km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Monticello 6543km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Montreal 5222km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Moscow 2499km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Munich 916km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Nairobi 6821km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
New Delhi 6713km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
New Orleans 7446km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
New York 5548km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Newcastle 397km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Nis 1881km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Novosibirsk 5204km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Nuremberg 823km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Oklahoma City 7465km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Orlando 6986km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Osaka 9500km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Oslo 1155km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Ottawa 5363km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Palermo 1824km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Panama 8770km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 01:19:52
Paramaribo 7149km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Paris 341km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Perth 14481km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Philadelphia 5702km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Phnom Penh 10018km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Phoenix 8487km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Piscataway 5614km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Pittsburgh 5980km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:09
Portland 7908km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Prague 1031km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Pune 7305km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Quebec City 4990km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Quito 9224km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Raleigh 6248km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Redding 8354km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Reykjavik 1888km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Richmond 6026km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Riga 1675km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Riyadh 4948km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Rome 1434km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Roseburg 8152km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Rotterdam 320km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Roubaix 247km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Sacramento 8499km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Salem 5247km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Salt Lake City 7826km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
San Antonio 8031km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
San Diego 8824km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
San Francisco 8619km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
San Jose 8636km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
San Juan 6760km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Santiago 11690km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Sao Paulo 9501km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Sapporo 8863km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Saskatoon 6607km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Savannah 6685km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Scranton 5632km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Seattle 7701km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Secaucus 5572km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Seoul 8858km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Shanghai 9201km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Shenzhen 9596km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Siauliai 1608km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Singapore 10843km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:10
Sofia 2014km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
South Bend 6285km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
St Louis 6762km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
St Petersburg 2097km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Stamford 5518km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Stockholm 1431km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Strasbourg 649km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Sydney 16996km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Syracuse 5550km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Taipei 9784km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Tallinn 1782km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Tampa 7106km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Tbilisi 3538km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Tel Aviv 3563km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Tempe 8478km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
The Hague 310km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Thessaloniki 2132km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Tirana 1891km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Tokyo 9563km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Toledo 6119km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Toronto 5716km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Tunis 2109km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Valencia 1339km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Valletta 2088km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Vancouver 7582km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Varna 2285km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Venice 1132km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Vienna 1234km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Vilnius 1721km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Vladivostok 8523km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Warsaw 1447km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Washington 5918km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Wellington 18817km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Westpoort 351km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Winnipeg 6301km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Zagreb 1337km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Zhangjiakou 7988km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
Zurich 776km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:20:11
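
The rows added in these hunks look like per-city ping statistics (used here as example/test latency data); judging from the visible values, each line appears to carry a city name, a distance, an average latency, an uptime percentage, further latency figures, and a last-checked timestamp, though those column meanings are an interpretation, not something stated in the diff. As a minimal sketch under that assumption, the hypothetical `parsePingRow` helper below shows one way such whitespace-separated rows could be read in Go, taking the last eight fields as data so that multi-word city names like "Salt Lake City" stay intact.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// pingRow models one line of the latency data above. The field meanings
// (distance, average latency, uptime, min/max latency, jitter, last-checked
// time) are assumptions based on the visible values, not a documented schema.
type pingRow struct {
	City     string
	Distance string
	Avg      string
	Uptime   string
	Min      string
	Max      string
	Jitter   string
	Checked  string
}

// parsePingRow splits one whitespace-separated row. City names may contain
// spaces, so the last eight fields are treated as data and everything before
// them is joined back into the city name.
func parsePingRow(line string) (pingRow, error) {
	f := strings.Fields(line)
	if len(f) < 9 {
		return pingRow{}, fmt.Errorf("unexpected field count %d: %q", len(f), line)
	}
	n := len(f)
	return pingRow{
		City:     strings.Join(f[:n-8], " "),
		Distance: f[n-8],
		Avg:      f[n-7],
		Uptime:   f[n-6],
		Min:      f[n-5],
		Max:      f[n-4],
		Jitter:   f[n-3],
		Checked:  f[n-2] + " " + f[n-1],
	}, nil
}

func main() {
	// Sample row copied from the diff above.
	sample := "Salt Lake City 8103km 75.00ms 100.00% 75.00ms 75.00ms 1.00ms 2018-05-14 02:19:24"
	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		if row, err := parsePingRow(sc.Text()); err == nil {
			fmt.Printf("%s: %s away, avg latency %s\n", row.City, row.Distance, row.Avg)
		}
	}
}
```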

Some files were not shown because too many files have changed in this diff.