Mirror of https://github.com/googleforgames/open-match.git (synced 2025-03-22 19:08:31 +00:00)

Compare commits: release-1.… → 0.5.0-rc.2 (3 commits)

Commits: 50db76d1ff, 50b52dc2e9, 82226b1be1
Changed files: Makefile, README.md, cloudbuild.yaml, docs/ (building.md, concepts.md, development.md, faq.md, gcloud.md, governance/templates, integrations.md, production.md, references.md, roadmap.md), examples/backendclient, install/helm, site
Makefile (2 changes)
@@ -46,7 +46,7 @@
##
# http://makefiletutorial.com/

BASE_VERSION = 0.5.0-rc1
BASE_VERSION = 0.5.0-rc.2
VERSION_SUFFIX = $(shell git rev-parse --short=7 HEAD | tr -d [:punct:])
BRANCH_NAME = $(shell git rev-parse --abbrev-ref HEAD | tr -d [:punct:])
VERSION = $(BASE_VERSION)-$(VERSION_SUFFIX)
README.md (168 changes)
@@ -14,124 +14,82 @@ Under the covers matchmaking approaches touch on significant areas of computer s

This project attempts to solve the networking and plumbing problems, so game developers can focus on the logic to match players into great games.

## Running Open Match
Open Match framework is a collection of servers that run within Kubernetes (the [puppet master](https://en.wikipedia.org/wiki/Puppet_Master_(gaming)) for your server cluster.)
## Open Match Demo

This section lists the steps to set up a demo for the basic functionality of Open Match. If you just want to see an E2E Open Match setup in action, please continue with this section. If you want to build Open Match from source, or modify the match functions, please follow the [Development Guide](docs/development.md).

## Deploy to Kubernetes
### Create a Kubernetes Cluster

If you have an [existing Kubernetes cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster) you can run these commands to install Open Match.
The Open Match framework is a collection of servers that run within a Kubernetes cluster, so having a Kubernetes cluster is a prerequisite to deploying Open Match. If you want to deploy Open Match to an existing Kubernetes cluster, skip this step and proceed to Deploying Open Match; otherwise, create a Kubernetes cluster with one of the options listed below:

* [Set up a Google Cloud Kubernetes Cluster](docs/gcloud.md) (*this may involve extra charges unless you are on free tier*)
* [Set up a Local Minikube cluster](https://kubernetes.io/docs/setup/minikube/)

### Deploying Open Match

Run the following steps to deploy the core Open Match components and the monitoring services in the Kubernetes cluster.

```bash
# Grant yourself cluster-admin permissions so that you can deploy service accounts.
kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$YOUR_KUBERNETES_USER_NAME
# Place all Open Match components in their own namespace.
# Create a cluster role binding (if using gcloud)
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user `gcloud config get-value account`

# Create a cluster role binding (if using minikube)
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --serviceaccount=kube-system:default

# Create a namespace to place all the Open Match components in.
kubectl create namespace open-match
# Install Open Match and monitoring services.
kubectl apply -f https://storage.googleapis.com/open-match-chart/install/yaml/master-latest/install.yaml --namespace open-match
# Install the example MMF and Evaluator.
kubectl apply -f https://storage.googleapis.com/open-match-chart/install/yaml/master-latest/install-example.yaml --namespace open-match

# Install the core Open Match and monitoring services.
kubectl apply -f https://github.com/GoogleCloudPlatform/open-match/releases/download/0.5.0-rc.2/install.yaml --namespace open-match
```

To delete Open Match
### Deploy demo components

Open Match requires the user to author a custom match function and an evaluator that are invoked to create matches. For demo purposes, we will use an example MMF and Evaluator. The following command deploys these in the Kubernetes cluster:

```bash
kubectl apply -f https://github.com/GoogleCloudPlatform/open-match/releases/download/0.5.0-rc.2/install-example.yaml --namespace open-match
```

This command also deploys a component that continuously generates players with different properties and adds them to Open Match state storage. This is needed because a populated player pool is required to generate matches.

### Generate Matches!

In a real setup, a game backend (Director / DGS etc.) requests matches from Open Match. For demo purposes, this is simulated by a backend client that asks Open Match to continuously list matches until it runs out of players.

```bash
kubectl run om-backendclient --rm --restart=Never --image-pull-policy=Always -i --tty --image=gcr.io/open-match-public-images/openmatch-backendclient:0.5.0-rc.2 --namespace=open-match
```

If successful, the backend client should generate matches, displaying the players populated in Rosters.

### Cleanup

To delete Open Match from this cluster, simply run:

```bash
# Delete the open-match namespace that holds all the Open Match configuration.
kubectl delete namespace open-match
```

## Development
Open Match can be deployed locally or in the cloud for development. Below are the steps to build, push, and deploy the binaries to Kubernetes.
## Documentation

### Deploy to Minikube (Locally)
[Minikube](https://kubernetes.io/docs/setup/minikube/) is Kubernetes in a VM. It's mainly used for development.
Here are some useful links to additional documentation:

```bash
# Create a Minikube Cluster and install Helm
make create-mini-cluster push-helm
# Deploy Open Match with example functions
make REGISTRY=gcr.io/open-match-public-images TAG=latest install-chart install-example-chart
```
* [Future Roadmap](docs/roadmap.md)
* [Open Match Concepts](docs/concepts.md)
* [Development Guide](docs/development.md)
* [Open Match Integrations](docs/integrations.md)
* [References](docs/references.md)

### Deploy to Google Cloud Platform (Cloud)
For more information on the technical underpinnings of Open Match you can refer to the [docs/](docs/) directory.

Create a GCP project via [Google Cloud Console](https://console.cloud.google.com/). Billing must be enabled, but if you're a new customer you can get some [free credits](https://cloud.google.com/free/). When you create a project you'll need to set a Project ID; if you forget it you can see it here: https://console.cloud.google.com/iam-admin/settings/project.
## Contributing

Now install [Google Cloud SDK](https://cloud.google.com/sdk/), which is the command line tool to work against your project. The following commands log you into your GCP Project.
Please read the [contributing](CONTRIBUTING.md) guide for directions on submitting Pull Requests to Open Match.

```bash
# Login to your Google Account for GCP.
gcloud auth login
gcloud config set project $YOUR_GCP_PROJECT_ID
# Enable GCP services
gcloud services enable containerregistry.googleapis.com
gcloud services enable container.googleapis.com
# Test that everything is good, this command should work.
gcloud compute zones list
```
See the [Development Guide](docs/development.md) for documentation on developing and building Open Match from source.

Please follow the instructions to [Setup Local Open Match Repository](#local-repository-setup). Once everything is set up you can deploy Open Match by creating a cluster in Google Kubernetes Engine (GKE).

```bash
# Create a GKE Cluster and install Helm
make create-gke-cluster push-helm
# Deploy Open Match with example functions
make REGISTRY=gcr.io/open-match-build TAG=0.4.0-e98e1b6 install-chart install-example-chart
```

To generate matches using a test client, run the following command:

```bash
make REGISTRY=gcr.io/open-match-build TAG=0.4.0-e98e1b6 run-backendclient
```

Once deployed you can view the jobs in [Cloud Console](https://console.cloud.google.com/kubernetes/workload).

### Local Repository Setup

Here are the instructions to set up a local repository for Open Match.

```bash
# Install Open Match Toolchain Dependencies (for Debian; other OSes including Mac OS X have similar dependencies)
sudo apt-get update; sudo apt-get install -y -q python3 python3-virtualenv virtualenv make google-cloud-sdk git unzip tar
# Set up your repository like a Go workspace, https://golang.org/doc/code.html#Workspaces
# This requirement will go away soon.
mkdir -p $HOME/workspace/src/github.com/GoogleCloudPlatform/
cd $HOME/workspace/src/github.com/GoogleCloudPlatform/
export GOPATH=$HOME/workspace
export GO111MODULE=on
git clone https://github.com/GoogleCloudPlatform/open-match.git
cd open-match
```

### Compiling From Source

The easiest way to build Open Match is to use the [Makefile](Makefile). Please follow the instructions to [Setup Local Open Match Repository](#local-repository-setup).

[Docker](https://docs.docker.com/install/) and [Go 1.12+](https://golang.org/dl/) are also required.

To build all the artifacts of Open Match you can simply run the following commands.

```bash
# Downloads all the tools needed to build Open Match
make install-toolchain
# Generates protocol buffer code files
make all-protos
# Builds all the binaries
make all
# Builds all the images.
make build-images
```

Once built, you can use a command like `docker images` to see all the images that were built.

Before creating a pull request you can run `make local-cloud-build` to simulate a Cloud Build run to check for regressions.

The directory structure is a typical Go structure, so if you do the following you should be able to work on this project within your IDE.

Lastly, this project uses Go modules, so you'll want to set `export GO111MODULE=on` in your `~/.bashrc`.

The [Build Queue](https://console.cloud.google.com/cloud-build/builds?project=open-match-build) runs against all PRs and requires membership in [open-match-discuss@googlegroups.com](https://groups.google.com/forum/#!forum/open-match-discuss).
Open Match is in active development - we would love your help in shaping its future!

## Support

@@ -140,20 +98,6 @@ The [Build Queue](https://console.cloud.google.com/cloud-build/builds?project=op

* [Mailing list](https://groups.google.com/forum/#!forum/open-match-discuss)
* [Managed Service Survey](https://goo.gl/forms/cbrFTNCmy9rItSv72)

## Contributing

Please read the [contributing](CONTRIBUTING.md) guide for directions on submitting Pull Requests to Open Match.

See the [Development Guide](docs/development.md) for documentation on developing and building Open Match from source.

The [Release Process](docs/governance/release_process.md) documentation describes the project's upcoming release calendar and release process.

Open Match is in active development - we would love your help in shaping its future!

## Documentation

For more information on the technical underpinnings of Open Match you can refer to the [docs/](docs/) directory.

## Code of Conduct

Participation in this project comes under the [Contributor Covenant Code of Conduct](code-of-conduct.md).
cloudbuild.yaml

@@ -189,7 +189,7 @@ images:
- 'gcr.io/$PROJECT_ID/openmatch-clientloadgen:${_OM_VERSION}-${SHORT_SHA}'
- 'gcr.io/$PROJECT_ID/openmatch-frontendclient:${_OM_VERSION}-${SHORT_SHA}'
substitutions:
  _OM_VERSION: "0.5.0-rc1"
  _OM_VERSION: "0.5.0-rc.2"
  _GCB_POST_SUBMIT: "0"
logsBucket: 'gs://open-match-build-logs/'
options:
docs/building.md (deleted)

@@ -1,84 +0,0 @@
## Building

Documentation and usage guides on how to set up and customize Open Match.

### Precompiled container images

Once we reach a 1.0 release, we plan to produce publicly available (Linux) Docker container images of major releases in a public image registry. Until then, refer to the 'Compiling from source' section below.

### Compiling from source

The easiest way to build Open Match is to use the Makefile. Before you can use the Makefile, make sure you have the following dependencies:

```bash
# Install Open Match Toolchain Dependencies (for Debian; other OSes including Mac OS X have similar dependencies)
sudo apt-get update; sudo apt-get install -y -q python3 python3-virtualenv virtualenv make google-cloud-sdk git unzip tar
# Set up your repository like a Go workspace, https://golang.org/doc/code.html#Workspaces
# This requirement will go away soon.
mkdir -p workspace/src/github.com/GoogleCloudPlatform/
cd workspace/src/github.com/GoogleCloudPlatform/
export GOPATH=$HOME/workspace
export GO111MODULE=on
git clone https://github.com/GoogleCloudPlatform/open-match.git
cd open-match
```

[Docker](https://docs.docker.com/install/) and [Go 1.11+](https://golang.org/dl/) are also required. If your distro is new enough you can probably run `sudo apt-get install -y golang` or download the newest version from https://golang.org/.

To build all the artifacts of Open Match you can simply run the following commands.

```bash
# Downloads all the tools needed to build Open Match
make install-toolchain
# Generates protocol buffer code files
make all-protos
# Builds all the binaries
make all
# Builds all the images.
make build-images
```

Once built, you can use a command like `docker images` to see all the images that were built.

Before creating a pull request you can run `make local-cloud-build` to simulate a Cloud Build run to check for regressions.

The directory structure is a typical Go structure, so if you do the following you should be able to work on this project within your IDE.

```bash
cd $GOPATH
mkdir -p src/github.com/GoogleCloudPlatform/
cd src/github.com/GoogleCloudPlatform/
# If you're going to contribute you'll want to fork open-match, see CONTRIBUTING.md for details.
git clone https://github.com/GoogleCloudPlatform/open-match.git
cd open-match
# Open IDE in this directory.
```

Lastly, this project uses Go modules, so you'll want to set `export GO111MODULE=on` before building.

## Zero to Open Match
To deploy Open Match quickly to a Kubernetes cluster run these commands.

```bash
# Downloads all the tools.
make install-toolchain
# Create a GKE Cluster
make create-gke-cluster
# OR Create a Minikube Cluster
make create-mini-cluster
# Install Helm
make push-helm
# Build and push images
make push-images -j4
# Deploy Open Match with example functions
make install-chart install-example-chart
```

## Docker Image Builds

All the core components for Open Match are written in Golang and use the [Dockerfile multistage builder pattern](https://docs.docker.com/develop/develop-images/multistage-build/). This pattern uses intermediate Docker containers as a Golang build environment while producing lightweight, minimized container images as final build artifacts. When the project is ready for production, we will modify the `Dockerfile`s to uncomment the last build stage. Although this pattern is great for production container images, it removes most of the utilities required to troubleshoot issues during development.

## Configuration
Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration. To this end, there is a single centralized config file located in `<REPO_ROOT>/config/` which is symlinked to each component's subdirectory for convenience when building locally. When `docker build`ing the component container images, the Dockerfile copies the centralized config file into the component directory.

We plan to replace this with a Kubernetes-managed config with dynamic reloading; please join the discussion in [Issue #42](issues/42).
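To make the flow concrete, here is a minimal, hypothetical Go sketch of a component loading such a local JSON config at startup; the field names are made up for illustration and do not reflect the real `matchmaker_config.json` schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Config holds a couple of illustrative settings; the real
// matchmaker_config.json has its own (different) schema.
type Config struct {
	RedisAddress string `json:"redisAddress"`
	LogLevel     string `json:"logLevel"`
}

func loadConfig(path string) (Config, error) {
	var cfg Config
	data, err := os.ReadFile(path)
	if err != nil {
		return cfg, fmt.Errorf("reading %s: %w", path, err)
	}
	if err := json.Unmarshal(data, &cfg); err != nil {
		return cfg, fmt.Errorf("parsing %s: %w", path, err)
	}
	return cfg, nil
}

func main() {
	// Each component reads the same symlinked file from its own directory.
	cfg, err := loadConfig("matchmaker_config.json")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("loaded config: %+v\n", cfg)
}
```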
docs/concepts.md

@@ -1,4 +1,3 @@

# Core Concepts

[Watch the introduction of Open Match at Unite Berlin 2018 on YouTube](https://youtu.be/qasAmy_ko2o)

@@ -8,19 +7,23 @@ Open Match is designed to support massively concurrent matchmaking, and to be sc

## Glossary

### General

* **DGS** — Dedicated game server
* **Client** — The game client program the player uses when playing the game
* **Session** — In Open Match, players are matched together, then assigned to a server which hosts the game _session_. Depending on context, this may be referred to as a _match_, _map_, or just _game_ elsewhere in the industry.

### Open Match

* **Component** — One of the discrete processes in an Open Match deployment. Open Match is composed of multiple scalable microservices called _components_.
* **State Storage** — The storage software used by Open Match to hold all the matchmaking state. Open Match ships with [Redis](https://redis.io/) as the default state storage.
* **MMFOrc** — Matchmaker function orchestrator. This Open Match core component is in charge of kicking off custom matchmaking functions (MMFs) and evaluator processes.
* **MMF** — Matchmaking function. This is the customizable matchmaking logic.
* **MMLogic API** — An API that provides MMF SDK functionality. It is optional - you can also do all the state storage read and write operations yourself if you have a good reason to do so.
* **Function Harness** — A GRPC serving harness that triggers the Match function.
* **Evaluator** — Customizable evaluation logic that analyzes match proposals and approves / rejects matches.
* **MMLogic API** — An API that provides MMF SDK functionality.
* **Director** — The software you (as a developer) write against the Open Match Backend API. The _Director_ decides which MMFs to run, and is responsible for sending MMF results to a DGS to host the session.

### Data Model

* **Player** — An ID and list of attributes with values for a player who wants to participate in matchmaking.
* **Roster** — A list of player objects. Used to hold all the players on a single team.
* **Filter** — A _filter_ is used to narrow down the players to only those who have an attribute value within a certain integer range. All attributes are integer values in Open Match because [that is how indices are implemented](internal/statestorage/redis/playerindices/playerindices.go). A _filter_ is defined in a _player pool_.

@@ -31,6 +34,7 @@ Open Match is designed to support massively concurrent matchmaking, and to be sc

* **Ignore List** — Removing players from matchmaking consideration is accomplished using _ignore lists_. They contain lists of player IDs that your MMF should not include when making matches.

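To make this data model concrete, here is a minimal, hypothetical Go sketch of these concepts; the type and field names are illustrative only and are not the actual Open Match protobuf messages, but attribute values are integers, matching how the indices are implemented:

```go
package main

import "fmt"

// Player is an ID plus integer-valued attributes (illustrative only).
type Player struct {
	ID         string
	Attributes map[string]int64
}

// Roster holds all the players on a single team.
type Roster struct {
	Name    string
	Players []Player
}

// Filter narrows a pool to players whose attribute lies in an integer range.
type Filter struct {
	Attribute string
	Min, Max  int64
}

// Matches reports whether a player passes the filter.
func (f Filter) Matches(p Player) bool {
	v, ok := p.Attributes[f.Attribute]
	return ok && v >= f.Min && v <= f.Max
}

func main() {
	p := Player{ID: "player-1", Attributes: map[string]int64{"mmr": 1200, "ping.us-west": 35}}
	f := Filter{Attribute: "mmr", Min: 1000, Max: 1500}
	fmt.Println(f.Matches(p)) // true: 1200 is within [1000, 1500]
}
```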
## Requirements

* [Kubernetes](https://kubernetes.io/) cluster — tested with version 1.11.7.
* [Redis 4+](https://redis.io/) — tested with 4.0.11.
* Open Match is compiled against the latest release of [Golang](https://golang.org/) — tested with 1.11.5.

@@ -39,17 +43,14 @@ Open Match is designed to support massively concurrent matchmaking, and to be sc

Open Match is a set of processes designed to run on Kubernetes. It contains these **core** components:

1. Frontend API
1. Backend API
1. Matchmaker Function Orchestrator (MMFOrc) (may be deprecated in future versions)
* Frontend API
* Backend API
* Matchmaking Logic (MMLogic) API

It includes these **optional** (but recommended) components:
1. Matchmaking Logic (MMLogic) API
It also depends on these two **customizable** components.

It also explicitly depends on these two **customizable** components.

1. Matchmaking "Function" (MMF)
1. Evaluator (may be optional in future versions)
* Match Function (MMF)
* Evaluator

While **core** components are fully open source and _can_ be modified, they are designed to support the majority of matchmaking scenarios *without needing to change the source code*. The Open Match repository ships with simple **customizable** MMF and Evaluator examples, but it is expected that most users will want full control over the logic in these, so they have been designed to be as easy to modify or replace as possible.

@@ -71,16 +72,9 @@ The Backend API is a server application that implements the [gRPC](https://grpc.

* A **unique ID** for a matchmaking profile.
* A **json blob** containing all the matching-related data and filters you want to use in your matchmaking function.
* An optional list of **roster**s to hold the resulting teams chosen by your matchmaking function.
* An optional set of **filters** that define player pools your matchmaking function will choose players from.

Your game backend is expected to maintain a connection, waiting for 'filled' match objects containing a roster of players. The Backend API also provides a return path for your game backend to return dedicated game server connection details (an 'assignment') to the game client, and to delete these 'assignments'.

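For illustration, here is a hedged Go sketch of what such a match profile could look like when assembled as JSON; the `Profile` type and its field names below are hypothetical and only mirror the bullet points above, not the actual protobuf messages in api/protobuf-spec/messages.proto:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Profile is an illustrative stand-in for a matchmaking profile:
// a unique ID, a free-form properties blob, optional rosters, and
// optional player pools built from filters. Field names are hypothetical.
type Profile struct {
	ID         string            `json:"id"`
	Properties json.RawMessage   `json:"properties"` // game-specific matching data
	Rosters    []string          `json:"rosters"`    // names of the teams to fill
	Pools      map[string]string `json:"pools"`      // pool name -> filter description
}

func main() {
	p := Profile{
		ID:         "profile-deathmatch-4v4",
		Properties: json.RawMessage(`{"mode":"deathmatch","teamSize":4}`),
		Rosters:    []string{"red", "blue"},
		Pools:      map[string]string{"everyone": "mmr between 1000 and 1500"},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}
```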
### Matchmaking Function Orchestrator (MMFOrc)

The MMFOrc kicks off your custom matchmaking function (MMF) for every unique profile submitted to the Backend API in a match object. It also runs the Evaluator to resolve conflicts in case more than one of your profiles matched the same players.

The MMFOrc exists to orchestrate/schedule your **custom components**, running them as often as required to meet the demands of your game. MMFOrc runs in an endless loop, submitting MMFs and Evaluator jobs to Kubernetes.

### Matchmaking Logic (MMLogic) API

@@ -98,39 +92,23 @@ More details about the available gRPC calls can be found in the [API Specificati

### Evaluator

The Evaluator resolves conflicts when multiple MMFs select the same player(s).
The Evaluator resolves conflicts when multiple MMFs select the same player(s). The Evaluator is provided by the developer (a sample is included in Open Match).

The Evaluator is a component run by the Matchmaker Function Orchestrator (MMFOrc) after the matchmaker functions have been run, and some proposed results are available. The Evaluator looks at all the proposals, and if multiple proposals contain the same player(s), it breaks the tie. In many simple matchmaking setups with only a few game modes and well-tuned matchmaking functions, the Evaluator may functionally be a no-op or first-in-first-out algorithm. In complex matchmaking setups where, for example, a player can queue for multiple types of matches, the Evaluator provides the critical customizability to evaluate all available proposals and approve those that will be passed to your game servers.
The Evaluator runs forever, looping over a configured interval, checking if MMFs have completed execution or if a certain time interval has passed. Upon reaching those conditions, the Evaluator calls the Evaluation Function (to be modified by the user) with the proposals to choose from. The sample Evaluation Function looks at all the proposals, and if multiple proposals contain the same player(s), it breaks the tie. In many simple matchmaking setups with only a few game modes and well-tuned matchmaking functions, the Evaluator may functionally be a no-op or first-in-first-out algorithm. In complex matchmaking setups where, for example, a player can queue for multiple types of matches, the Evaluator provides the critical customizability to evaluate all available proposals and approve those that will be passed to your game servers.

Large-scale concurrent matchmaking is a complex topic, and users who wish to do this are encouraged to engage with the [Open Match community](https://github.com/GoogleCloudPlatform/open-match#get-involved) about patterns and best practices.

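As a rough illustration of the kind of tie-breaking described above, here is a minimal Go sketch of a first-come-first-served evaluation pass; the `Proposal` type and the overall shape are hypothetical and do not reflect the actual harness interface:

```go
package main

import "fmt"

// Proposal is a hypothetical stand-in for a proposed match: an ID plus
// the IDs of the players it wants to claim.
type Proposal struct {
	ID      string
	Players []string
}

// approveFIFO approves proposals in arrival order and rejects any later
// proposal that reuses a player already claimed by an approved one.
func approveFIFO(proposals []Proposal) (approved []Proposal) {
	claimed := map[string]bool{}
	for _, p := range proposals {
		conflict := false
		for _, id := range p.Players {
			if claimed[id] {
				conflict = true
				break
			}
		}
		if conflict {
			continue // overlapping player(s): reject this proposal
		}
		for _, id := range p.Players {
			claimed[id] = true
		}
		approved = append(approved, p)
	}
	return approved
}

func main() {
	out := approveFIFO([]Proposal{
		{ID: "a", Players: []string{"p1", "p2"}},
		{ID: "b", Players: []string{"p2", "p3"}}, // conflicts with "a" on p2
		{ID: "c", Players: []string{"p4", "p5"}},
	})
	fmt.Println(out) // proposals "a" and "c" are approved
}
```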
### Matchmaking Functions (MMFs)

Matchmaking Functions (MMFs) are run by the Matchmaker Function Orchestrator (MMFOrc) — once per profile it sees in state storage. The MMF is run as a Job in Kubernetes, and has full access to read and write from state storage. At a high level, the encouraged pattern is to write a MMF in whatever language you are comfortable in that can do the following things:
Matchmaking Functions (MMFs) are implemented by the developer and are hosted as a gRPC service. Open Match provides a harness (currently for golang) that handles the boilerplate Open Match communication, gRPC server setup, etc., so that the user only has to write a function that accepts a set of player pools and a match profile and returns a proposal based on some core matchmaking logic. An MMF is called each time a request to generate a match is received. At a high level, an MMF needs to generate a proposal using the given players, match profile and its custom matchmaking logic, and return the proposal to the calling harness.
**Note**: Currently Open Match only has a golang harness. To add an MMF in any other language, a harness needs to be implemented in that language.

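To give a feel for the shape of such a function, here is a heavily simplified Go sketch; the `Profile`, `Pool`, and `Proposal` types and the function signature are hypothetical placeholders rather than the real harness API, and the "logic" just fills roster slots from the first pool:

```go
package main

import "fmt"

// Hypothetical inputs and output for a match function.
type Pool struct {
	Name    string
	Players []string // player IDs that passed the pool's filters
}

type Profile struct {
	ID       string
	TeamSize int
	Teams    []string
}

type Proposal struct {
	ProfileID string
	Rosters   map[string][]string // team name -> player IDs
}

// makeMatch fills each team's roster from the first pool, in order.
// Real matchmaking logic (skill balancing, latency, etc.) would go here.
func makeMatch(profile Profile, pools []Pool) (Proposal, error) {
	need := profile.TeamSize * len(profile.Teams)
	if len(pools) == 0 || len(pools[0].Players) < need {
		return Proposal{}, fmt.Errorf("not enough players for profile %s", profile.ID)
	}
	p := Proposal{ProfileID: profile.ID, Rosters: map[string][]string{}}
	next := 0
	for _, team := range profile.Teams {
		p.Rosters[team] = pools[0].Players[next : next+profile.TeamSize]
		next += profile.TeamSize
	}
	return p, nil
}

func main() {
	prop, err := makeMatch(
		Profile{ID: "1v1", TeamSize: 1, Teams: []string{"red", "blue"}},
		[]Pool{{Name: "everyone", Players: []string{"p1", "p2", "p3"}}},
	)
	fmt.Println(prop, err)
}
```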
- [x] Be packaged in a (Linux) Docker container.
- [x] Read/write from the Open Match state storage — Open Match ships with Redis as the default state storage.
- [x] Read a profile you wrote to state storage using the Backend API.
- [x] Select from the player data you wrote to state storage using the Frontend API. It must respect all the ignore lists defined in the matchmaker config.
- [ ] Run your custom logic to try to find a match.
- [x] Write the match object it creates to state storage at a specified key.
- [x] Remove the players it selected from consideration by other MMFs by adding them to the appropriate ignore list.
- [x] Notify the MMFOrc of completion.
- [x] (Optional, but recommended) Export stats for metrics collection.
## Example Tooling

**Open Match offers [matchmaking logic API](#matchmaking-logic-mmlogic-api) calls for handling the checked items, as long as you are able to format your input and output in the data schema Open Match expects (defined in the [protobuf messages](api/protobuf-spec/messages.proto)).** You can do this work yourself if you don't want to or can't use the data schema Open Match is looking for. However, the data formats expected by Open Match are pretty generalized and will work with most common matchmaking scenarios and game types. If you have questions about how to fit your data into the formats specified, feel free to ask us in the [Slack or mailing group](#get-involved).
To see Open Match in action, here are some basic tools that are provided as samples:

Example MMFs are provided in these languages:
- [C#](examples/functions/csharp/simple) (doesn't use the MMLogic API)
- [Python3](examples/functions/python3/mmlogic-simple) (MMLogic API enabled)
- [PHP](examples/functions/php/mmlogic-simple) (MMLogic API enabled)
- [golang](examples/functions/golang/manual-simple) (doesn't use the MMLogic API)
* `test/cmd/clientloadgen/` is a (VERY) basic client load simulation tool. It endlessly writes players into state storage so you can test your backend integration, and run your custom MMFs and Evaluators (which are only triggered when there are players in the pool).

## Additional examples
* `examples/backendclient` is a fake client for the Backend API. It pretends to be a dedicated game server backend connecting to Open Match, sending in a match profile to fill, and receiving completed matches. It can call Create / List matches.

**Note:** These examples will be expanded on in future releases.

The following examples of how to call the APIs are provided in the repository. Both have `Dockerfile` and `cloudbuild.yaml` files in their respective directories:

* `test/cmd/frontendclient/main.go` acts as a client to the Frontend API, putting a player into the queue with simulated latencies from major metropolitan cities and a couple of other matchmaking attributes. It then waits for you to manually put a value in Redis to simulate a server connection string being written using the backend API 'CreateAssignments' call, and displays that value on stdout for you to verify.
* `examples/backendclient/main.go` calls the Backend API and passes in the profile found in `backendstub/profiles/testprofile.json` to the `ListMatches` API endpoint, then continually prints the results until you exit, or there are insufficient players to make a match based on the profile.
* `test/cmd/frontendclient/` is a fake client for the Frontend API. It pretends to be a group of real game clients connecting to Open Match. It requests a game, then dumps out the results each player receives to the screen.

docs/development.md

@@ -1,188 +1,175 @@
# Development Guide

This doc explains how to set up a development environment so you can get started contributing to Open Match. If you instead want to write a matchmaker that _uses_ Open Match, you probably want to read the [User Guide](user_guide.md).

# Compiling from source

All components of Open Match produce (Linux) Docker container images as artifacts, and there are included `Dockerfile`s for each. [Google Cloud Platform Cloud Build](https://cloud.google.com/cloud-build/docs/) users will also find `cloudbuild.yaml` files for each component in their respective directories. Note that most of them build from a 'base' image called `openmatch-devbase`. You can find a `Dockerfile` and `cloudbuild_base.yaml` file for this in the repository root. Build it first!

Note: Although Google Cloud Platform includes some free usage, you may incur charges following this guide if you use GCP products.
This doc explains how to set up a development environment, compile from source, and deploy your changes to a test cluster. This document is targeted at developers contributing to Open Match.

## Security Disclaimer
**This project has not completed a first-line security audit, and there are definitely going to be some service accounts that are too permissive. This should be fine for testing/development in a local environment, but absolutely should not be used as-is in a production environment without your team/organization evaluating its permissions.**
**This project has not completed a first-line security audit. This should be fine for testing/development in a local environment, but absolutely should not be used as-is in a production environment.**

## Before getting started
**NOTE**: Before starting with this guide, you'll need to update all the URIs from the tutorial's gcr.io container image registry with the URI for your own image registry. If you are using the gcr.io registry on GCP, the default URI is `gcr.io/<PROJECT_NAME>`. Here's an example command in Linux to do the replacement for you (replace YOUR_REGISTRY_URI with your URI; this should be run from the repository root directory):
```
# Linux
egrep -lR 'matchmaker-dev-201405' . | xargs sed -i -e 's|matchmaker-dev-201405|<PROJECT_NAME>|g'
```
```
# Mac OS, you can delete the .backup files after if all looks good
egrep -lR 'matchmaker-dev-201405' . | xargs sed -i'.backup' -e 's|matchmaker-dev-201405|<PROJECT_NAME>|g'
```
## Setting up a local Open Match Repository

Here are the instructions to set up a local repository for Open Match.

```bash
# Install Open Match Toolchain Dependencies (for Debian, other OSes including Mac OS X have similar dependencies)
sudo apt-get update; sudo apt-get install -y -q python3 python3-virtualenv virtualenv make google-cloud-sdk git unzip tar
mkdir -p $HOME/<workspace>
cd $HOME/<workspace>
git clone https://github.com/GoogleCloudPlatform/open-match.git
cd open-match
```

## Example of building using Google Cloud Builder
## Compiling From Source

The [Quickstart for Docker](https://cloud.google.com/cloud-build/docs/quickstart-docker) guide explains how to set up a project, enable billing, enable Cloud Build, and install the Cloud SDK if you haven't done these things before. Once you get to 'Preparing source files' you are ready to continue with the steps below.
The easiest way to build Open Match is to use the [Makefile](Makefile). This section assumes that you have followed the steps to [Setup Local Open Match Repository](#local-repository-setup).

* Clone this repo to a local machine or Google Cloud Shell session, and cd into it.
* In Linux, you can run the following one-line bash script to compile all the images for the first time, and push them to your gcr.io registry. You must enable the [Container Registry API](https://console.cloud.google.com/flows/enableapi?apiid=containerregistry.googleapis.com) first.
```
# First, build the 'base' image. Some other images depend on this so it must complete first.
gcloud builds submit --config cloudbuild_base.yaml
# Build all other images.
for dfile in $(find . -name "Dockerfile" -iregex "./\(cmd\|test\|examples\)/.*"); do cd $(dirname ${dfile}); gcloud builds submit --config cloudbuild.yaml & cd -; done
```
Note: as of v0.3.0 alpha, the Python and PHP MMF examples still depend on the previous way of building until [issue #42, introducing new config management](https://github.com/GoogleCloudPlatform/open-match/issues/42) is resolved (apologies for the inconvenience):
```
gcloud builds submit --config cloudbuild_mmf_py3.yaml
gcloud builds submit --config cloudbuild_mmf_php.yaml
```
* Once the cloud builds have completed, you can verify that all the builds succeeded in the cloud console or by checking the list of images in your **gcr.io** registry:
```
gcloud container images list
```
(your registry name will be different)
```
NAME
gcr.io/matchmaker-dev-201405/openmatch-backendapi
gcr.io/matchmaker-dev-201405/openmatch-devbase
gcr.io/matchmaker-dev-201405/openmatch-evaluator
gcr.io/matchmaker-dev-201405/openmatch-frontendapi
gcr.io/matchmaker-dev-201405/openmatch-mmf-golang-manual-simple
gcr.io/matchmaker-dev-201405/openmatch-mmf-php-mmlogic-simple
gcr.io/matchmaker-dev-201405/openmatch-mmf-py3-mmlogic-simple
gcr.io/matchmaker-dev-201405/openmatch-mmforc
gcr.io/matchmaker-dev-201405/openmatch-mmlogicapi
```
## Example of starting a GKE cluster
You will also need [Docker](https://docs.docker.com/install/) and [Go 1.12+](https://golang.org/dl/) installed.

A cluster with mostly default settings will work for this development guide. In the Cloud SDK command below we start it with machines that have 4 vCPUs. Alternatively, you can use the 'Create Cluster' button in [Google Cloud Console](https://console.cloud.google.com/kubernetes).
To build all the artifacts of Open Match, please run the following commands:

```
gcloud container clusters create --machine-type n1-standard-4 open-match-dev-cluster --zone <ZONE>
```bash
# Downloads all the tools needed to build Open Match
make install-toolchain
# Generates protocol buffer code files
make all-protos
# Builds all the binaries
make all
# Builds all the images.
make build-images
```

If you don't know which zone to launch the cluster in (`<ZONE>`), you can list all available zones by running the following command.
After successfully building, run `docker images` to see all the images that were built.

Before creating a pull request you can run `make local-cloud-build` to simulate a Cloud Build run to check for regressions.

The [Build Queue](https://console.cloud.google.com/cloud-build/builds?project=open-match-build) runs against all PRs and requires membership in [open-match-discuss@googlegroups.com](https://groups.google.com/forum/#!forum/open-match-discuss).

## Deploy Open Match to Google Cloud Platform

Create a GCP project via [Google Cloud Console](https://console.cloud.google.com/). Billing must be enabled, but if you're a new customer you can get some [free credits](https://cloud.google.com/free/). When you create a project you'll need to set a Project ID; if you forget it you can see it here: https://console.cloud.google.com/iam-admin/settings/project.

Now install [Google Cloud SDK](https://cloud.google.com/sdk/), which is the command line tool to work against your project. The following commands log you into your GCP Project.

```bash
# Login to your Google Account for GCP.
gcloud auth login
gcloud config set project $YOUR_GCP_PROJECT_ID
# Enable GCP services
gcloud services enable containerregistry.googleapis.com
gcloud services enable container.googleapis.com
# Test that everything is good, this command should work.
gcloud compute zones list
```

## Configuration
This section assumes that you have followed the steps to [Setup Local Open Match Repository](#local-repository-setup). Once everything is set up you can deploy Open Match by creating a cluster in Google Kubernetes Engine (GKE).

Currently, each component reads a local config file `matchmaker_config.json`, and all components assume they have the same configuration (if you would like to help us design the replacement config solution, please join the [discussion](https://github.com/GoogleCloudPlatform/open-match/issues/42)). To this end, there is a single centralized config file located in `<REPO_ROOT>/config/` which is symlinked to each component's subdirectory for convenience when building locally. Note: [there is an issue with symlinks on Windows](../issues/57).

## Running Open Match in a development environment

The rest of this guide assumes you have a cluster (the example uses GKE, but it works on any cluster with a little tweaking), kubectl configured to administer that cluster, and that you've built all the Docker container images described by the `Dockerfiles` in the repository root directory and given them the docker tag 'dev'. It assumes you are in the `<REPO_ROOT>/deployments/k8s/` directory.

* Start a copy of redis and a service in front of it:
```
kubectl apply -f redis_deployment.yaml
kubectl apply -f redis_service.yaml
```
* Run the **core components**: the frontend API, the backend API, the matchmaker function orchestrator (MMFOrc), and the matchmaking logic API.
**NOTE** In order to kick off jobs, the matchmaker function orchestrator needs a service account with permission to administer the cluster. This should be updated to have the minimum required permissions before launch; it is pretty permissive, but acceptable for closed testing:
```
kubectl apply -f backendapi_deployment.yaml
kubectl apply -f backendapi_service.yaml
kubectl apply -f frontendapi_deployment.yaml
kubectl apply -f frontendapi_service.yaml
kubectl apply -f mmforc_deployment.yaml
kubectl apply -f mmforc_serviceaccount.yaml
kubectl apply -f mmlogicapi_deployment.yaml
kubectl apply -f mmlogicapi_service.yaml
```
* [optional, but recommended] Configure the OpenCensus metrics services:
```
kubectl apply -f metrics_services.yaml
```
* [optional] Trying to apply the Kubernetes Prometheus Operator resource definition files without a cluster-admin rolebinding on GKE doesn't work without running the following command first. See https://github.com/coreos/prometheus-operator/issues/357
```
kubectl create clusterrolebinding projectowner-cluster-admin-binding --clusterrole=cluster-admin --user=<GCP_ACCOUNT>
```
* [optional, uses beta software] If using Prometheus as your metrics gathering backend, configure the [Prometheus Kubernetes Operator](https://github.com/coreos/prometheus-operator):

```
kubectl apply -f prometheus_operator.yaml
kubectl apply -f prometheus.yaml
kubectl apply -f prometheus_service.yaml
kubectl apply -f metrics_servicemonitor.yaml
```
You should now be able to see the core component pods running using `kubectl get pods`, and the core component metrics in the Prometheus Web UI by running `kubectl proxy <PROMETHEUS_POD_NAME> 9090:9090` in your local shell, then opening http://localhost:9090/targets in your browser to see which services Prometheus is collecting from.

Here's an example output from `kubectl get all` if everything started correctly, and you included all the optional components (note: this could become out-of-date with upcoming versions; apologies if that happens):
```
NAME READY STATUS RESTARTS AGE
pod/om-backendapi-84bc9d8fff-q89kr 1/1 Running 0 9m
pod/om-frontendapi-55d5bb7946-c5ccb 1/1 Running 0 9m
pod/om-mmforc-85bfd7f4f6-wmwhc 1/1 Running 0 9m
pod/om-mmlogicapi-6488bc7fc6-g74dm 1/1 Running 0 9m
pod/prometheus-operator-5c8774cdd8-7c5qm 1/1 Running 0 9m
pod/prometheus-prometheus-0 2/2 Running 0 9m
pod/redis-master-9b6b86c46-b7ggn 1/1 Running 0 9m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.59.240.1 <none> 443/TCP 19m
service/om-backend-metrics ClusterIP 10.59.254.43 <none> 29555/TCP 9m
service/om-backendapi ClusterIP 10.59.240.211 <none> 50505/TCP 9m
service/om-frontend-metrics ClusterIP 10.59.246.228 <none> 19555/TCP 9m
service/om-frontendapi ClusterIP 10.59.250.59 <none> 50504/TCP 9m
service/om-mmforc-metrics ClusterIP 10.59.240.59 <none> 39555/TCP 9m
service/om-mmlogicapi ClusterIP 10.59.248.3 <none> 50503/TCP 9m
service/prometheus NodePort 10.59.252.212 <none> 9090:30900/TCP 9m
service/prometheus-operated ClusterIP None <none> 9090/TCP 9m
service/redis ClusterIP 10.59.249.197 <none> 6379/TCP 9m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/om-backendapi 1 1 1 1 9m
deployment.extensions/om-frontendapi 1 1 1 1 9m
deployment.extensions/om-mmforc 1 1 1 1 9m
deployment.extensions/om-mmlogicapi 1 1 1 1 9m
deployment.extensions/prometheus-operator 1 1 1 1 9m
deployment.extensions/redis-master 1 1 1 1 9m

NAME DESIRED CURRENT READY AGE
replicaset.extensions/om-backendapi-84bc9d8fff 1 1 1 9m
replicaset.extensions/om-frontendapi-55d5bb7946 1 1 1 9m
replicaset.extensions/om-mmforc-85bfd7f4f6 1 1 1 9m
replicaset.extensions/om-mmlogicapi-6488bc7fc6 1 1 1 9m
replicaset.extensions/prometheus-operator-5c8774cdd8 1 1 1 9m
replicaset.extensions/redis-master-9b6b86c46 1 1 1 9m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/om-backendapi 1 1 1 1 9m
deployment.apps/om-frontendapi 1 1 1 1 9m
deployment.apps/om-mmforc 1 1 1 1 9m
deployment.apps/om-mmlogicapi 1 1 1 1 9m
deployment.apps/prometheus-operator 1 1 1 1 9m
deployment.apps/redis-master 1 1 1 1 9m

NAME DESIRED CURRENT READY AGE
replicaset.apps/om-backendapi-84bc9d8fff 1 1 1 9m
replicaset.apps/om-frontendapi-55d5bb7946 1 1 1 9m
replicaset.apps/om-mmforc-85bfd7f4f6 1 1 1 9m
replicaset.apps/om-mmlogicapi-6488bc7fc6 1 1 1 9m
replicaset.apps/prometheus-operator-5c8774cdd8 1 1 1 9m
replicaset.apps/redis-master-9b6b86c46 1 1 1 9m

NAME DESIRED CURRENT AGE
statefulset.apps/prometheus-prometheus 1 1 9m
```bash
# Create a GKE Cluster and install Helm
make create-gke-cluster push-helm
# Push images to Registry
make push-images
# Deploy Open Match
make install-chart
```

### End-to-End testing
This will install all Open Match core components to the Kubernetes cluster. Once deployed you can view the jobs in [Cloud Console](https://console.cloud.google.com/kubernetes/workload).

Run `kubectl --namespace open-match get pods,svc` to verify that the deployment succeeded. If everything started correctly, the output should look like:

```
$ kubectl --namespace open-match get pods,svc

NAME READY STATUS RESTARTS AGE
pod/om-backendapi-6f8f9796f7-ncfgf 1/1 Running 0 10m
pod/om-frontendapi-868f7df859-5dbcd 1/1 Running 0 10m
pod/om-mmlogicapi-5998dcdc9c-vmjhn 1/1 Running 0 10m
pod/om-redis-master-0 1/1 Running 0 10m
pod/om-redis-metrics-66c8fbfbc-vnmls 1/1 Running 0 10m
pod/om-redis-slave-8477c666fc-kb2gv 1/1 Running 1 10m
pod/open-match-grafana-6769f969f-t76zz 2/2 Running 0 10m
pod/open-match-prometheus-alertmanager-58c9f6ffc7-7f7fq 2/2 Running 0 10m
pod/open-match-prometheus-kube-state-metrics-79c8d85c55-q69qf 1/1 Running 0 10m
pod/open-match-prometheus-node-exporter-88pjh 1/1 Running 0 10m
pod/open-match-prometheus-node-exporter-qq9h7 1/1 Running 0 10m
pod/open-match-prometheus-node-exporter-rcmdq 1/1 Running 0 10m
pod/open-match-prometheus-pushgateway-6c67d47f48-8bhgk 1/1 Running 0 10m
pod/open-match-prometheus-server-86c459ddc4-gk5m7 2/2 Running 0 10m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/om-backendapi ClusterIP 10.0.2.206 <none> 50505/TCP,51505/TCP 10m
service/om-frontendapi ClusterIP 10.0.14.157 <none> 50504/TCP,51504/TCP 10m
service/om-mmlogicapi ClusterIP 10.0.10.71 <none> 50503/TCP,51503/TCP 10m
service/om-redis-master ClusterIP 10.0.9.110 <none> 6379/TCP 10m
service/om-redis-metrics ClusterIP 10.0.9.114 <none> 9121/TCP 10m
service/om-redis-slave ClusterIP 10.0.3.46 <none> 6379/TCP 10m
service/open-match-grafana ClusterIP 10.0.0.213 <none> 3000/TCP 10m
service/open-match-prometheus-alertmanager ClusterIP 10.0.6.126 <none> 80/TCP 10m
service/open-match-prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 10m
service/open-match-prometheus-node-exporter ClusterIP None <none> 9100/TCP 10m
service/open-match-prometheus-pushgateway ClusterIP 10.0.15.222 <none> 9091/TCP 10m
service/open-match-prometheus-server ClusterIP 10.0.6.7 <none> 80/TCP 10m
```

## End-to-End testing

### Example MMF, Evaluator

When Open Match is set up, it requires a Match Function and an Evaluator that it will call into at runtime when requests to generate matches are received. Open Match itself provides harness code (currently only for golang) that abstracts the complexity of setting up the Match Function and Evaluator as GRPC services. You do not need to modify the harness code, only the actual Match Function and Evaluation Function, to suit your game's needs. Open Match includes a sample Match Function and Evaluation Function as described below:

* `examples/functions/golang/grpc-serving` is a sample Match Function that is built using the GRPC harness. The function scans a simple profile, populating a player into each Roster slot that matches the requested player pool. This function is over-simplified, simply matching player pools to roster slots. You will need to modify this function to add your matchmaking logic.

* `examples/evaluators/golang/serving` is a sample Evaluation Function that is called by an evaluator harness that runs as a forever-running Kubernetes job. The function is triggered each time there are results to evaluate. The current sample simply approves matches with unique players, identifies the ones with overlap, and approves the first overlapping-player match while rejecting the rest. You would need to build your own evaluation logic with this sample as a reference.

### Example Tooling

Once the Open Match core components are set up and your Match Function and Evaluator GRPC services are running, Open Match functionality is triggered when new players request assignments and when the game backend requests matches. To see Open Match in action, here are some basic tools that are provided as samples. Note that these tools are meant to exercise Open Match functionality and should only be used as a reference point when building similar abilities into your components using Open Match.

* `test/cmd/clientloadgen/` is a (VERY) basic client load simulation tool. It does **not** test the Frontend API - in fact, it ignores it and writes players directly to state storage on its own. It doesn't do anything but loop endlessly, writing players into state storage so you can test your backend integration, and run your custom MMFs and Evaluators (which are only triggered when there are players in the pool).

* `examples/backendclient` is a fake client for the Backend API. It pretends to be a dedicated game server backend connecting to Open Match and sending in a match profile to fill. Once it receives a match object with a roster, it will also issue a call to assign the player IDs, and gives an example connection string. If it never seems to get a match, make sure you're adding players to the pool using the other two tools. **Note**: If you run this by itself, expect it to wait about 30 seconds, then return a result of 'insufficient players' and exit - this is working as intended. Use the client load simulation tool below to add players to the pool or you'll never be able to make a successful match.

* `test/cmd/frontendclient/` is a fake client for the Frontend API. It pretends to be a group of real game clients connecting to Open Match. It requests a game, then dumps out the results each player receives to the screen until you press the enter key. **Note**: If you're using the rest of these test programs, you're probably using the Backend Client below. The default profiles that command sends to the backend look for many more than one player, so if you want to see meaningful results from running this Frontend Client, you're going to need to generate a bunch of fake players using the client load simulation tool at the same time. Otherwise, expect to wait until it times out as your matchmaker never has enough players to make a successful match. Also, if the simulator has generated significant load, the player injected by the Frontend Client may still not find a match by the timeout duration and exit.

### Setting up an E2E scenario
|
||||
|
||||
These steps assume that you already have [deployed core Open Match to a cluster](deploy-open-match-to-google-cloud-platform). Once Open Match is deployed, run the below command to deploy the Match Function Harness, the Evaluator and the Client Load Generator to the Open Match cluster.
|
||||
|
||||
```bash
|
||||
# Deploy Open Match
|
||||
make install-example-chart
|
||||
```
|
||||
|
||||
Once this succeeds, run the below command to validate that these components are up and running as expected (in addition to the Open Match core components):
|
||||
|
||||
```bash
|
||||
$kubectl --namespace open-match get pods,svc
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
pod/om-clientloadgen-b6cf884cd-nslrl 1/1 Running 0 19s
|
||||
pod/om-evaluator-7795968f9-729qs 1/1 Running 0 19s
|
||||
pod/om-function-697db9cd6-xh89j 1/1 Running 0 19s
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
service/om-function ClusterIP 10.0.4.81 <none> 50502/TCP,51502/TCP 20s
|
||||
```
|
||||
|
||||
At this point, the Evaluator and Match Function are both running and a client load generator is continuously adding players to the state storage. To see match generation in action, run the following command:
|
||||
|
||||
```bash
|
||||
make run-backendclient
|
||||
```
|
||||
|
||||
Some other handy commands:
|
||||
|
||||
```bash
|
||||
# Cleanup the installation
|
||||
make delete-chart delete-example-chart
|
||||
|
||||
# To install a pre-built image without building again:
|
||||
make REGISTRY=$REGISTRY TAG=$TAG install-chart install-example-chart
|
||||
make REGISTRY=$REGISTRY TAG=$TAG install-example-chart
|
||||
make REGISTRY=$REGISTRY TAG=$TAG run-backendclient
|
||||
```
|
||||
|
||||
**Note**: The programs provided below are just bare-bones manual testing programs with no automation and no claim of code coverage. This sparseness of this part of the documentation is because we expect to discard all of these tools and write a fully automated end-to-end test suite and a collection of load testing tools, with extensive stats output and tracing capabilities before 1.0 release. Tracing has to be integrated first, which will be in an upcoming release.
|
||||
|
||||
In the end: *caveat emptor*. These tools all work and are quite small, and as such are fairly easy for developers to understand by looking at the code and logging output. They are provided as-is just as a reference point of how to begin experimenting with Open Match integrations.

* `test/cmd/frontendclient/` is a fake client for the Frontend API. It pretends to be a group of real game clients connecting to Open Match. It requests a game, then dumps the results each player receives to the screen until you press the enter key. **Note**: If you're using the rest of these test programs, you're probably using the Backend Client below. The default profiles that command sends to the backend look for many more than one player, so if you want to see meaningful results from running this Frontend Client, you'll need to generate a bunch of fake players with the client load simulation tool at the same time. Otherwise, expect to wait until it times out, as your matchmaker never has enough players to make a successful match.
* `examples/backendclient` is a fake client for the Backend API. It pretends to be a dedicated game server backend connecting to Open Match and sending in a match profile to fill. Once it receives a match object with a roster, it also issues a call to assign the player IDs, and gives an example connection string. If it never seems to get a match, make sure you're adding players to the pool using the other two tools. Note: building this image requires that you first build the 'base' dev image (look for `cloudbuild_base.yaml` and `Dockerfile.base` in the root directory) and then update the first step to point to that image in your registry. This will be simplified in a future release. **Note**: If you run this by itself, expect it to wait about 30 seconds, then return a result of 'insufficient players' and exit - this is working as intended. Use the client load simulation tool below to add players to the pool or you'll never be able to make a successful match.
* `test/cmd/clientloadgen/` is a (VERY) basic client load simulation tool. It does **not** test the Frontend API - in fact, it ignores it and writes players directly to state storage on its own. It doesn't do anything but loop endlessly, writing players into state storage so you can test your backend integration and run your custom MMFs and Evaluators (which are only triggered when there are players in the pool). A sketch of commands for watching these components at work follows this list.
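
If you deployed the demo with the Helm charts described above, a quick way to watch these pieces interact is to tail the component logs. This is only a sketch: the deployment names and the `open-match` namespace are assumptions based on the pod names shown earlier, so adjust them to your install.

```bash
# Watch the load generator writing players into state storage.
# Deployment names and namespace are assumptions - adjust to your install.
kubectl logs -n open-match deployment/om-clientloadgen --tail=20 -f

# Watch the match function and the evaluator process those players.
kubectl logs -n open-match deployment/om-function --tail=20
kubectl logs -n open-match deployment/om-evaluator --tail=20
```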
### Resources

* [Prometheus Operator spec](https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md)
@ -1 +0,0 @@
**"I notice that all the APIs use gRPC. What if I want to make my calls using REST, or via a Websocket?"** (gateway/proxy OSS projects are available)
26
docs/gcloud.md
Normal file
26
docs/gcloud.md
Normal file
@ -0,0 +1,26 @@
# Create a GKE Cluster

Below are the steps to create a GKE cluster in Google Cloud Platform.

* Create a GCP project via the [Google Cloud Console](https://console.cloud.google.com/).
* Billing must be enabled. If you're a new customer you can get some [free credits](https://cloud.google.com/free/).
* When you create a project you'll need to set a Project ID; if you forget it, you can find it at https://console.cloud.google.com/iam-admin/settings/project.
* Install the [Google Cloud SDK](https://cloud.google.com/sdk/), the command-line tool for working with your project.

Here are the next steps using the gcloud tool.

```bash
# Login to your Google Account for GCP
gcloud auth login
gcloud config set project $YOUR_GCP_PROJECT_ID

# Enable necessary GCP services
gcloud services enable containerregistry.googleapis.com
gcloud services enable container.googleapis.com

# Test that everything is good; this command should work.
gcloud compute zones list

# Create a GKE Cluster in this project
gcloud container clusters create --machine-type n1-standard-4 open-match-dev-cluster --zone us-west1-a --tags open-match
```
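
If `kubectl` is not already pointing at the new cluster, fetch credentials for it before deploying anything. A minimal sketch, assuming the cluster name and zone used above:

```bash
# Fetch kubectl credentials for the cluster created above.
gcloud container clusters get-credentials open-match-dev-cluster --zone us-west1-a

# Confirm that kubectl can reach the cluster.
kubectl get nodes
```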
@ -12,16 +12,24 @@ SOURCE_VERSION=$1
DEST_VERSION=$2
SOURCE_PROJECT_ID=open-match-build
DEST_PROJECT_ID=open-match-public-images
IMAGE_NAMES="openmatch-backendapi openmatch-frontendapi openmatch-mmforc openmatch-mmlogicapi openmatch-evaluator-simple openmatch-mmf-cs-mmlogic-simple openmatch-mmf-go-mmlogic-simple openmatch-mmf-go-grpc-serving-simple openmatch-mmf-py3-mmlogic-simple openmatch-backendclient openmatch-clientloadgen openmatch-frontendclient"
IMAGE_NAMES="openmatch-backendapi openmatch-frontendapi openmatch-mmforc openmatch-mmlogicapi openmatch-evaluator-serving openmatch-mmf-go-grpc-serving-simple openmatch-backendclient openmatch-clientloadgen openmatch-frontendclient"

for name in $IMAGE_NAMES
do
  source_image=gcr.io/$SOURCE_PROJECT_ID/$name:$SOURCE_VERSION
  dest_image=gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION
  dest_image_latest=gcr.io/$DEST_PROJECT_ID/$name:latest
  docker pull $source_image
  docker tag $source_image $dest_image
  docker tag $source_image $dest_image_latest
  docker push $dest_image
  docker push $dest_image_latest
done

echo "=============================================================="
echo "=============================================================="
echo "=============================================================="
echo "=============================================================="

echo "Add these lines to your release notes:"
for name in $IMAGE_NAMES
do
  echo "docker pull gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION"
done
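
For reference, the script above takes the source and destination version tags as its two positional arguments. A hypothetical invocation (the script file name here is a placeholder, not the actual path in this repo):

```bash
# Placeholder script name and versions - substitute the real values.
./push_images.sh $SOURCE_VERSION $DEST_VERSION
```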
@ -1,16 +1,16 @@
## Open Source Software integrations

### Structured logging
### Structured Logging - Logrus

Logging for Open Match uses the [Golang logrus module](https://github.com/sirupsen/logrus) to provide structured logs. Logs are output to `stdout` in each component, as expected by Docker and Kubernetes. Level and format are configurable via config/matchmaker_config.json. If you have a specific log aggregator as your final destination, we recommend you have a look at the logrus documentation as there is probably a log formatter that plays nicely with your stack.

### Instrumentation for metrics
### Instrumentation - OpenCensus

Open Match uses [OpenCensus](https://opencensus.io/) for metrics instrumentation. The [gRPC](https://grpc.io/) integrations are built-in, and Golang redigo module integrations are incoming, but [haven't been merged into the official repo](https://github.com/opencensus-integrations/redigo/pull/1). All of the core components expose HTTP `/metrics` endpoints on the port defined in `config/matchmaker_config.json` (default: 9555) for Prometheus to scrape. If you would like to export to a different metrics aggregation platform, we suggest you have a look at the OpenCensus documentation; there may be an exporter written for you already, and switching to it may be as simple as changing a few lines of code.

**Note:** A standard for instrumentation of MMFs is planned.
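
As a quick way to verify the instrumentation is live, you can scrape one of these endpoints by hand. This is only a sketch: the `om-frontendapi` deployment name is an assumption, the port is the default 9555 mentioned above, and you should run it against whichever namespace Open Match is deployed in.

```bash
# Forward the metrics port of one core component to localhost.
# The deployment name is an assumption - adjust it to your deployment.
kubectl port-forward deployment/om-frontendapi 9555:9555 &

# Fetch the Prometheus-format metrics exposed via OpenCensus.
curl -s http://localhost:9555/metrics | head -n 20
```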

### Redis setup
### State Storage - Redis

By default, Open Match expects you to run Redis *somewhere*. Connection information can be put in the config file (`matchmaker_config.json`) for any Redis instance reachable from the [Kubernetes namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). By default, Open Match sensibly runs in the Kubernetes `default` namespace. In most instances, we expect users will run a copy of Redis in a pod in Kubernetes, with a service pointing to it.
@ -1,33 +0,0 @@
During alpha, please do not use Open Match as-is in production. To develop against it, please see the [development guide](development.md).

# "Productionizing" a deployment
Here are some steps that should be taken to productionize your Open Match deployment before exposing it to live public traffic. Some of these overlap with best practices for [productionizing Kubernetes](https://cloud.google.com/blog/products/gcp/exploring-container-security-running-a-tight-ship-with-kubernetes-engine-1-10) or cloud infrastructure more generally. We will work to make as many of these into the default deployment strategy for Open Match as possible, going forward.
**This is not an exhaustive list, and addressing the items in this document alone shouldn't be considered sufficient. Every game is different and will have different production needs.**

## Kubernetes
All the usual guidance around hardening and securing Kubernetes is applicable to running Open Match. [Here is a guide on security for Google Kubernetes Engine on GCP](https://cloud.google.com/blog/products/gcp/exploring-container-security-running-a-tight-ship-with-kubernetes-engine-1-10), and a number of other guides are available from reputable sources on the internet.
### Minimum permissions on Kubernetes
* The components of Open Match should be run in a separate Kubernetes namespace if you're also using the cluster for other services. As of 0.3.0 they run in the 'default' namespace if you follow the development guide.
* Note that the MMFOrc process has cluster management permissions by default. Before moving to production, you should create a role with access only to create Kubernetes Jobs and configure the MMFOrc to use it (a sketch of such a role follows this list).
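
A minimal sketch of what that could look like with plain `kubectl` commands; the service account, role, and namespace names are hypothetical, and you would still need to configure the MMFOrc deployment to use the new service account:

```bash
# Hypothetical names throughout - adjust namespace and account names to your deployment.
kubectl create serviceaccount mmforc --namespace open-match

# Allow only Job management instead of cluster-wide permissions.
kubectl create role job-runner --namespace open-match \
  --verb=create,get,list,watch,delete --resource=jobs

kubectl create rolebinding mmforc-job-runner --namespace open-match \
  --role=job-runner --serviceaccount=open-match:mmforc
```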
### Kubernetes Jobs (MMFOrc)
The 0.3.0 MMFOrc component runs your MMFs as Kubernetes Jobs. You should periodically delete these jobs to keep the cluster running smoothly. How often you need to delete them depends on how many you are running. There are a number of open source solutions to do this for you. ***Note that once you delete a job, you won't have access to that job's logs anymore unless you're sending your logs from Kubernetes to a log aggregator like Google Stackdriver. This can make it a challenge to troubleshoot issues.***
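
As one illustrative option (a sketch only; support for this field selector depends on your Kubernetes version), completed MMF Jobs can be cleared out with a periodically scheduled command such as:

```bash
# Delete Jobs that have already completed successfully.
# Remember that a Job's logs disappear along with it.
kubectl delete jobs --field-selector status.successful=1
```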
### CPU and Memory limits
For any production Kubernetes deployment, it is good practice to profile your processes and select [resource limits and requests](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) according to your results. For example, you'll likely want to set adequate resource requests for the Redis state storage pods, based on your expected player base and some load testing. This will help Kubernetes avoid scheduling other intensive processes on the same underlying node and keep you from running into resource contention issues. Another example might be an MMF with a particularly large memory or CPU footprint - maybe you have one that searches a lot of players for a potential match. This would be a good candidate for resource limits and requests in Kubernetes, both to ensure it gets the CPU and RAM it needs to complete quickly and to make sure it's not scheduled alongside another intensive Kubernetes pod. See the sketch below for one way to set these from the command line.
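
Below is a minimal sketch of setting requests and limits imperatively; the deployment name and the values are purely illustrative, and in practice you would bake the numbers you measured into your resource definitions:

```bash
# Illustrative deployment name and values - profile your own workload first.
kubectl set resources deployment redis-master \
  --requests=cpu=500m,memory=1Gi \
  --limits=cpu=1,memory=2Gi
```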
### State storage
The default state storage for Open Match is a _single instance_ of Redis. Although it _is_ possible to go to production with this as the solution if you're willing to accept the potential downsides, for most deployments an HA Redis configuration would better fit your needs. An example YAML file for creating a [self-healing HA Redis deployment on Kubernetes](../install/yaml/01-redis-failover.yaml) is available. Regardless of which configuration you use, it is probably a good idea to put some [resource requests](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) in your Kubernetes resource definition, as mentioned above.
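
Applying that example is a single command; the sketch below assumes you run it from the `docs/` directory so that the relative path above resolves:

```bash
# Deploy the example self-healing HA Redis configuration
# (path relative to the docs/ directory, as linked above).
kubectl apply -f ../install/yaml/01-redis-failover.yaml
```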

You can find more discussion in the [state storage readme doc](../internal/statestorage/redis/README.md).
## Open Match config
Debug logging and the extra debug code paths should be disabled in the `config/matchmaker_config.json` file (as of the time of this writing, 0.3.0).

## Public APIs for Open Match
In many cases, you may choose to configure your game clients to connect to the Open Match Frontend API, and in a few select cases (such as using it for P2P non-dedicated game server hosting), the game client may also need to connect to the Backend API. In these cases, it is important to secure the API endpoints against common attacks, such as DDoS or malformed packet floods.
* Using a cloud provider's Load Balancer in front of the Kubernetes Service is a common approach to enable vendor-specific DDoS protections. Check the documentation for your cloud vendor's Load Balancer for more details ([GCP's DDoS protection](https://cloud.google.com/armor/)).
* An API framework can be used to limit endpoint access to only the game clients you have authenticated using your platform's authentication service. This may be accomplished with simple authentication tokens or a more complex scheme, depending on your needs.

## Testing
(as of 0.3.0) The provided test programs are just for validating that Open Match is operating correctly; they are command-line applications designed to be run from within the same cluster as Open Match and are therefore not a suitable test harness for doing production testing to make sure your matchmaker is ready to handle your live game. Instead, it is recommended that you integrate Open Match into your game client and test it using the actual game flow players will use, if at all possible.

### Load testing
Ideally, you would already be making 'headless' game clients for automated QA and load testing of your game servers; it is recommended that you also code these testing clients to be able to act as a mock player connecting to Open Match. Load testing platform services is a huge topic and should reflect your actual game access patterns as closely as possible, which will be very game dependent.
**Note: It is never a good idea to do load testing against a cloud vendor without informing them first!**
@ -1,10 +1,4 @@


### Guides
* [Production guide](./docs/production.md) Lots of best practices to be written here before the 1.0 release; right now it's a scattered collection of notes. **WIP**
* [Development guide](./docs/development.md)

## This all sounds great, but can you explain Docker and/or Kubernetes to me?
## Additional References

### Docker
- [Docker's official "Getting Started" guide](https://docs.docker.com/get-started/)
@ -13,3 +7,6 @@
### Kubernetes
- [You should totally read this comic, and interactive tutorial](https://cloud.google.com/kubernetes-engine/kubernetes-comic/)
- [Katacoda's free, interactive Kubernetes course](https://www.katacoda.com/courses/kubernetes)

### Prometheus
- [Prometheus Operator spec](https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md)

100
docs/roadmap.md
100
docs/roadmap.md
@ -1,60 +1,68 @@
# Roadmap. [Subject to change]
Releases are scheduled for every 6 weeks. **Every release is a stable, long-term-support version**. Even for alpha releases, best-effort support is available. With a little work and input from an experienced live services developer, you can go to production with any version on the [releases page](https://github.com/GoogleCloudPlatform/open-match/releases).
# Open Match Roadmap

Our current thinking is to wait to take Open Match out of alpha/beta (and label it 1.0) until it can be used out-of-the-box, standalone, for developers that don't have any existing platform services. Which is to say, the majority of **established game developers likely won't have any reason to wait for the 1.0 release if Open Match already handles your needs**. If you already have live platform services that you plan to integrate Open Match with (player authentication, a group invite system, dedicated game servers, metrics collection, logging aggregation, etc), then a lot of the features planned between 0.4.0 and 1.0 likely aren't of much interest to you anyway.
Open Match is currently at release 0.4.0. Open Match 0.5.0 currently has a Release Candidate and we are targeting to cut the release on 04/25/2019.

## Upcoming releases
* **0.4.0** — Agones Integration & MMF on [Knative](https://cloud.google.com/knative/)
  MMF instrumentation
  Match object expiration / lazy deletion
  API autoscaling by default
  API changes after this will likely be additions or very minor
* **0.5.0** — Tracing, Metrics, and KPI Dashboard
* **0.6.0** — Load testing suite
* **1.0.0** — API Formally Stable. Breaking API changes will require a new major version number.
* **1.1.0** — Canonical MMFs
Releases can be found on the [releases page](https://github.com/GoogleCloudPlatform/open-match/releases).

## Philosophy
* The next version (0.4.0) will focus on making MMFs run on serverless platforms - specifically Knative. This will just be first steps, as Knative is still pretty early. We want to get a proof of concept working so we can roadmap out the future "MMF on Knative" experience. Our intention is to keep MMFs as compatible as possible with the current Kubernetes job-based way of doing them. Our hope is that by the time Knative is mature, we'll be able to provide a [Knative build](https://github.com/Knative/build) pipeline that will take existing MMFs and build them as Knative functions. In the meantime, we'll map out a relatively painless (but not yet fully automated) way to make an existing MMF into a Kubernetes Deployment that looks as similar to what [Knative serving](https://github.com/knative/serving) is shaping up to be, in an effort to make the eventual switchover painless. Basically all of this is just _optimizing MMFs to make them spin up faster and take less resources_, **we're not planning to change what MMFs do or the interfaces they need to fulfill**. Existing MMFs will continue to run as-is, and in the future moving them to Knative should be both **optional** and **largely automated**.
* 0.4.0 represents the natural stopping point for adding new functionality until we have more community uptake and direction. We don't anticipate many API changes in 0.4.0 and beyond. Maybe new API calls for new functionality, but we're unlikely to see big shifts in existing calls through 1.0 and its point releases. We'll issue a new major release version if we decide we need those changes.
* The 0.5.0 version and beyond will be focused on operationalizing the out-of-the-box experience. Metrics and analytics and a default dashboard, additional tooling, and a load testing suite are all planned. We want it to be easy for operators to see KPI and know what's going on with Open Match.
The sections below detail the themes and the roadmap for future releases. The tasks listed for the 0.6.0 release have been finalized and are well understood. For 0.7.0 and beyond, the tasks currently identified are listed; these are subject to change as we make our way through the 0.6.0 release and get more feedback from the community.

# Planned improvements
See the [provisional roadmap](docs/roadmap.md) for more information on upcoming releases.
## 0.5.0 - Usability

## Documentation
- [ ] “Writing your first matchmaker” getting started guide will be included in an upcoming version.
- [ ] Documentation for using the example customizable components and the `backendstub` and `frontendstub` applications to do an end-to-end (e2e) test will be written. This all works now, but needs to be written up.
- [ ] Documentation on the release process and release calendar.
The primary focus of the 0.5 release is usability. The goal for this release is to make Open Match easy to build and deploy and have solid supporting documentation. Users should be able to try Open Match 0.5.0 functionality and experiment with its features, MMFs, etc. Here are some planned features for this release:

## State storage
- [X] All state storage operations should be isolated from core components into the `statestorage/` modules. This is necessary precursor work to enabling Open Match state storage to use software other than Redis.
- [X] [The Redis deployment should have an example HA configuration](https://github.com/GoogleCloudPlatform/open-match/issues/41)
- [X] Redis watch should be unified to watch a hash and stream updates. The code for this is written and validated but not committed yet.
- [ ] We don't want to support two redis watcher code paths, but we will until golang protobuf reflection is a bit more usable. [Design doc](https://docs.google.com/document/d/19kfhro7-CnBdFqFk7l4_HmwaH2JT_Rhw5-2FLWLEGGk/edit#heading=h.q3iwtwhfujjx), [github issue](https://github.com/golang/protobuf/issues/364)
- [X] Player/Group records generated when a client enters the matchmaking pool need to be removed after a certain amount of time with no activity. When using Redis, this will be implemented as an expiration on the player record.
- [X] Add support to invoke MMFs as a gRPC function call.
- [X] Provide a gRPC serving harness and an example MMF built using this harness (golang based).
- [X] Provide an evaluation harness and a sample evaluator using this harness (golang based).
- [X] Deprecate the k8s-based job scheduling mechanism for MMFs and the Evaluator in favor of hosted MMFs and Evaluator.
- [X] Switch all core Open Match services to use gRPC style request / response protos.
- [X] Documentation: Add basic user and developer documentation and set up the Open Match website.
- [X] Create and document a formal release process.
- [X] Improve the developer experience (simplify compiling, deploying and validating).

## Instrumentation / Metrics / Analytics
- [ ] Instrumentation of MMFs is in the planning stages. Since MMFs are by design meant to be completely customizable (to the point of allowing any process that can be packaged in a Docker container), metrics/stats will need to have an expected format and formalized outgoing pathway. Currently the thought is that the metrics should be written to a particular key in statestorage in a format compatible with opencensus, and will be collected, aggregated, and exported to Prometheus using another process.
- [ ] [OpenCensus tracing](https://opencensus.io/core-concepts/tracing/) will be implemented in an upcoming version. This is likely going to require knative.
- [X] Read logrus logging configuration from matchmaker_config.json.
## 0.6.0 - API changes, Maturity

## Security
- [ ] The Kubernetes service account used by the MMFOrc should be updated to have the minimum required permissions. [Issue 52](issues/52)
In the 0.6.0 release, we are revisiting the Data Model and the API surface exposed by Open Match. The goal of this release is to front-load a major API refactoring that will facilitate achieving scale and other productionizing goals in forthcoming releases. Although breaking changes can happen at any time until 1.0, the goal is to implement any major breaking changes in 0.6.0 so that future changes, if any, are relatively minor. Customers should be able to start building their Match Makers using the 0.6.0 API surface.

## Kubernetes
- [ ] Autoscaling isn't turned on for the Frontend or Backend API Kubernetes deployments by default.
- [X] A [Helm](https://helm.sh/) chart to stand up Open Match may be provided in an upcoming version. For now just use the [installation YAMLs](./install/yaml).
- [ ] A knative-based implementation of MMFs is in the planning stages.
Here are the tasks planned for the 0.6.0 release:

## CI / CD / Build
- [X] We plan to host 'official' docker images for all release versions of the core components in publicly available docker registries soon. This is tracked in [Issue #45](issues/45) and is blocked by [Issue 42](issues/42).
- [X] CI/CD for this repo and the associated status tags are planned.
- [ ] Golang unit tests will be shipped in an upcoming version.
- [ ] A full load-testing and e2e testing suite will be included in an upcoming version.
- [ ] Implement the new Data Model and the API changes for the Frontend, Backend and MMLogic API [Change Proposal](https://github.com/GoogleCloudPlatform/open-match/issues/279)
- [ ] Accept multiple proposals per MMF execution.
- [ ] Remove persistence of matches and proposals from Open Match state storage.
- [ ] Implement synchronized evaluation to eliminate use of state storage during evaluation.
- [ ] Introduce a test framework for unit testing, component testing and E2E testing.
- [ ] Add unit tests, component tests and integration tests for Open Match core components and examples.
- [ ] Update the harness, evaluator, MMF samples, etc., to reflect the API changes.
- [ ] Update the documentation and website to reflect the 0.6.0 API changes.

## Will not Implement
- [X] Defining multiple images inside a profile for the purposes of experimentation adds another layer of complexity into profiles that can instead be handled outside of Open Match with custom match functions in collaboration with a director (the thing that calls the backend to schedule matchmaking).
## 0.7.0 - Scale, Operationalizing

Features for 0.7.0 are targeted at enabling Open Match to be productionized. Note that as we identify more feature work past 0.6.0, these tasks may get pushed to future releases. However, these are core tasks that need to be addressed before Open Match reaches 1.0.

- [ ] Introduce a test framework for load and performance testing
- [ ] Automated load / performance / scale tests
- [ ] Test results dashboard
- [ ] Add support for instrumentation, monitoring, dashboards
- [ ] Add support for metrics collection, analytics, dashboards
- [ ] Identify autoscaling patterns for each component and configure them.

## Other Features

Below are additional features that are not tied to a specific release but will be added incrementally across releases:

- [ ] Harness support for Python, PHP, C#, C++
- [ ] User Guide for Open Match, Tutorials
- [ ] Developer Guide for Open Match
- [ ] APIs & Reference
- [ ] Concept Documentation
- [ ] Website Improvements

## 1.1.0

Below are features that have been identified but are not considered critical for Open Match itself (as a matchmaking framework) to reach 1.0. Any other features that are related to the Open Match ecosystem but are not part of the framework itself can also be listed here. These features do not necessarily have to wait for Open Match 1.0 and can be implemented earlier, but the currently identified 1.0 tasks are higher priority, since they make Open Match production ready.

- [ ] Canonical usable examples out of the box.
- [ ] KNative support to run MMFs
- [ ] OSS Director to integrate with Agones and other DGS backends

### Special Thanks
- Thanks to https://jbt.github.io/markdown-editor/ for help in marking this document down.
@ -29,12 +29,12 @@ import (
	"io/ioutil"
	"log"
	"os"
	"time"

	"github.com/GoogleCloudPlatform/open-match/internal/pb"
	"github.com/gobs/pretty"
	"github.com/tidwall/gjson"
	"google.golang.org/grpc"
	"google.golang.org/grpc/status"
)

var (
@ -44,6 +44,8 @@ var (
	assignment     = flag.String("assignment", "example.server.dgs:12345", "Assignment to send to matched players")
	delAssignments = flag.Bool("rm", false, "Delete assignments. Leave off to be able to manually validate assignments in state storage")
	verbose        = flag.Bool("verbose", false, "Print out as much as possible")
	runForever     = flag.Bool("loop", true, "Make the desired call in a loop till process terminates")
	runInterval    = flag.Int("interval", 5, "seconds to wait between consecutive calls")
)

func bytesToString(data []byte) string {
@ -67,6 +69,8 @@ func main() {
	log.Printf(" [flags] Using OM Backend %v call", *beCall)
	log.Printf(" [flags] Assigning players to %v", *assignment)
	log.Printf(" [flags] Deleting assignments? %v", *delAssignments)
	log.Printf(" [flags] Run forever? %v", *runForever)
	log.Printf(" [flags] Interval between consecutive runs - %v", *runInterval)
	if !(*beCall == "CreateMatch" || *beCall == "ListMatches") {
		log.Printf(" [flags] Unknown OM Backend call %v! Exiting...", *beCall)
		return
@ -184,40 +188,47 @@ func main() {

	// Make the requested backend call: CreateMatch calls once, ListMatches continually calls.
	log.Printf("Attempting %v() call", *beCall)
	switch *beCall {
	case "CreateMatch":
		resp, err := client.CreateMatch(ctx, req)
		if err != nil {
			panic(err)
		}
		log.Printf("CreateMatch returned; processing match")

		matchChan <- resp.Match
		<-doneChan
	case "ListMatches":
		stream, err := client.ListMatches(ctx, &pb.ListMatchesRequest{
			Mmfcfg: req.Mmfcfg,
			Match:  req.Match,
		})
		if err != nil {
			log.Fatalf("Attempting to open stream for ListMatches(_) = _, %v", err)
		}
		for {
			log.Printf("Waiting for matches...")
			resp, err := stream.Recv()
			if err == io.EOF {
				break
			}
	for {
		switch *beCall {
		case "CreateMatch":
			resp, err := client.CreateMatch(ctx, req)
			if err != nil {
				stat, ok := status.FromError(err)
				if ok {
					log.Printf("Error reading stream for ListMatches() returned status: %s %s", stat.Code().String(), stat.Message())
				} else {
					log.Printf("Error reading stream for ListMatches() returned status: %s", err)
				}
				log.Printf("Failed CreateMatch, %v", err)
				break
			}
			log.Printf("CreateMatch returned; processing match")

			matchChan <- resp.Match
			<-doneChan
		case "ListMatches":
			stream, err := client.ListMatches(ctx, &pb.ListMatchesRequest{
				Mmfcfg: req.Mmfcfg,
				Match:  req.Match,
			})
			if err != nil {
				log.Printf("Failed ListMatches, %v", err)
				break
			}

			for {
				log.Printf("Waiting for matches...")
				resp, err := stream.Recv()
				if err == io.EOF {
					break
				}
				if err != nil {
					log.Printf("Error reading stream for ListMatches, %v", err)
					break
				}
				matchChan <- resp.Match
			}
		}

		if !*runForever {
			break
		}

		// Wait for the retry interval before calling again.
		time.Sleep(time.Duration(*runInterval) * time.Second)
	}
}
@ -13,8 +13,8 @@
# limitations under the License.

apiVersion: v1
appVersion: "0.5.0-rc1"
version: 0.5.0-rc1
appVersion: "0.5.0-rc.2"
version: 0.5.0-rc.2
name: open-match-example
description: Flexible, extensible, and scalable video game matchmaking.
keywords:
@ -39,7 +39,7 @@ openmatch:
  testprofile: /profiles
  image:
    registry: gcr.io/open-match-public-images
    tag: 0.5.0-rc1
    tag: 0.5.0-rc.2
    backendclient:
      name: openmatch-backendclient
      pullPolicy: Always
@ -13,8 +13,8 @@
# limitations under the License.

apiVersion: v1
appVersion: "0.5.0-rc1"
version: 0.5.0-rc1
appVersion: "0.5.0-rc.2"
version: 0.5.0-rc.2
name: open-match
description: Flexible, extensible, and scalable video game matchmaking.
keywords:
@ -47,7 +47,7 @@ openmatch:
  # You can refer to other chart values using the Helm templates syntax here.
  image:
    registry: gcr.io/open-match-public-images
    tag: 0.5.0-rc1
    tag: 0.5.0-rc.2
    backendapi:
      name: openmatch-backendapi
      pullPolicy: Always
@ -72,8 +72,8 @@ github_repo = "https://github.com/GoogleCloudPlatform/open-match"
# Google Custom Search Engine ID. Remove or comment out to disable search.
gcs_engine_id = "008748710159674449076:sqoelpnrdoe"

release_branch = "release-0.5.0-rc1"
release_version = "0.5.0-rc1"
release_branch = "release-0.5.0-rc.2"
release_version = "0.5.0-rc.2"

# User interface configuration
[params.ui]