Compare commits

20 Commits

Author SHA1 Message Date
02ce5e26b7 Release 1.1 fix (#1316)
* Update repo location

* Update repo location

* Update chart repo location in Makefile
2020-12-17 16:01:44 -05:00
32611444d6 Release 1.1 (#1306)
* Updated versions in various files; ran the `make release` and `make api/api.md` targets as per the release steps

* updated versions to 1.1.0-rc.1

* Updated Makefile BASE_VERSION

* Updated GKE_VERSION in create-gke-cluster target

* Updated appVersion and version tags in Chart.yaml

* Updated tag in values.yaml

* updated _OM_VERSION in cloudbuild.yaml

* make release and make api/api.md execution
2020-12-16 15:58:45 -05:00
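The version bump at the heart of the release steps above can be sketched as follows; the `Makefile` variable and the `make release` / `make api/api.md` targets come from the commit message, while the `sed` invocation and the temp file are illustrative assumptions, not the project's actual release script:

```shell
# Illustrative sketch only: bump BASE_VERSION the way the release steps
# describe. A real release would edit the repo Makefile in place and then
# run `make release` and `make api/api.md`.
tmp=$(mktemp)
printf 'BASE_VERSION = 0.0.0-dev\nSHORT_SHA = abc1234\n' > "$tmp"
sed -i 's/^BASE_VERSION = .*/BASE_VERSION = 1.1.0/' "$tmp"
grep '^BASE_VERSION' "$tmp"   # BASE_VERSION = 1.1.0
```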
c0b355da51 release-1.1.0-rc.1 (#1286) 2020-11-18 13:42:34 -08:00
6f05e526fb Improved tests for statestore - redis (#1264) 2020-10-12 19:21:51 -07:00
496d156faa Added unary interceptor and removed extra logs (#1255)
* added unary interceptor and removed logs from frontend service

* removed extra logs from backend service

* updated evaluator logging

* updated query logging


linter fix

* fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-21 15:02:29 -07:00
3a3d618c43 Replaced GS bucket links with substitution variables (#1262) 2020-09-21 12:22:03 -07:00
e1cbd855f5 Added time to assignment metrics to backend (#1241)
* Added time to assignment metrics to backend

- The time to match for tickets is now recorded as a metric

* Fixed formatting errors

* Fixed minor review changes

- Renamed function to calculate time to assignment
- Moved from callback to returning tickets from UpdateAssignments

* Return only successfully assigned tickets

* Fixed linting errors
2020-09-15 11:18:17 -07:00
10b36705f0 Tests update: use require assertion (#1257)
* use require in filter package


fix

* use require in rpc package

* use require in tools/certgen package

* use require in mmf package

* use require in telemetry and logging


fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-09 14:24:18 -07:00
a6fc4724bc Fix spelling in Proto files (#1256)
Regenerated dependent Swagger and Golang files.
2020-09-09 12:20:29 -07:00
511337088a Reduce logging in statestore - redis (#1248)
* reduce logging in statestore - redis  #1228


fix

* added grpc interceptors to log errors

lint fix

Co-authored-by: Scott Redig <sredig@google.com>
2020-09-02 12:50:39 -07:00
5f67bb36a6 Use require in app tests and improve error messages (#1253) 2020-08-31 13:17:29 -07:00
94d2105809 Use require in tests to avoid nil pointer exceptions (#1249)
* use require in tests to avoid nil pointer exceptions

* statestore tests: replaced assert with require
2020-08-28 12:19:53 -07:00
d85f1f4bc7 Added a PR template (#1250) 2020-08-25 14:16:36 -07:00
79e9afeca7 Use Helm release to name resources (#1246)
* Fix indent of TLS certificate annotations

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Small whitespace fixes

Picked up the VSCode Yaml auto-formatter.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Don't pass 'query' config to open-match-customize

It's not used.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Don't pass frontend/backend to open-match-scale

They're not used.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Allow redis to derive resource names from the release

This ensures that multiple OpenMatch installs in a single namespace do
not attempt to install Redis stacks with the same resource names.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Include release names in PodSecurityPolicies

This avoids conflicts between multiple Open Match installations in the
same namespace.

Uses the `openmatch.fullname` named template, per the Helm default chart.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the Service Account name release-dependent

This makes the existing global.kubernetes.serviceAccount value an
override if specified, but if left unspecified, an appropriate name will
be chosen.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the RBAC resource names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the TLS Secret names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make the CI-test resource names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make all Pod/Service names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make Grafana dashboard names release-dependent

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make open-match-scale slightly more standalone

This makes the hostname templates more standard in their case, because
there is no need to coordinate the hostname with the superchart.

This chart still uses a lot of templates from the open-match chart
though, so it's not yet standalone-installable.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Make ConfigMap default names release-dependent

A specific ConfigMap can be applied in the same way it was previously,
by overriding configs.default.configName and
configs.override.configName, in which case it is up to the person doing
the deployment to manage name conflicts.

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Use correct Jaeger service names for subcharts

This fixes an existing issue where the Jaeger connection URLs in
the configuration would be incorrect if your Helm chart was not
installed as a release named "open-match".

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>

* Populate Grafana Datasource using a ConfigMap

This allows us to access the Prometheus subchart's named template to get
the correct Service name for the datasource.

This fixes an existing issue where the Prometheus data source URL in
Grafana would be incorrect if your Helm chart was not installed as
a release named "open-match".

Signed-off-by: Paul "Hampy" Hampson <p_hampson@wargaming.net>
2020-08-17 12:04:26 -07:00
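The naming rule this commit adopts can be sketched in shell. It is modeled on the Helm default chart's `fullname` convention referenced in the commit (release name prefixed onto the resource name, truncated to Kubernetes' 63-character limit); the exact template text in the repo may differ:

```shell
# Sketch of release-derived resource naming: two Open Match releases in
# the same namespace now produce distinct resource names instead of
# colliding on fixed names like "om-redis".
name_for() {
  release="$1"; suffix="$2"
  # Truncate to 63 chars, the Kubernetes resource-name limit.
  printf '%s-open-match-%s' "$release" "$suffix" | cut -c1-63
}
name_for om-a redis   # om-a-open-match-redis
name_for om-b redis   # om-b-open-match-redis
```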
3334f7f74a Make: fix create-gke-cluster, create clusterRole (#1234)
If `gcloud auth list` returns multiple accounts, the command would fail;
adding a grep for the active account fixes this.
2020-07-10 10:57:16 -07:00
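The failure mode above can be demonstrated with canned output: with several authenticated accounts, `gcloud auth list --format yaml` emits one record per account, and the old pipeline matched every `account:` line. Grepping for the ACTIVE status first (with context, as in the Makefile change) keeps only the current account. The sample records below are a stand-in; real gcloud output may differ slightly:

```shell
# Filter a multi-account listing down to the active account's email.
# `grep ACTIVE -a2` keeps the matching line plus 2 lines of context,
# which includes the adjacent "account:" line; `cut -c 10-` strips the
# 9-character "account: " prefix.
printf 'account: alice@example.com\nstatus: null\n---\naccount: bob@example.com\nstatus: ACTIVE\n' \
  | grep ACTIVE -a2 | grep account: | cut -c 10-   # bob@example.com
```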
85ce954eb9 Update backend_service.go (#1233)
Fixed typo
2020-07-09 11:45:33 -07:00
679cfb5839 Rename Ignore list to Pending Release (#1230)
Fix naming across all code; Swagger changes remain.

Co-authored-by: Scott Redig <sredig@google.com>
2020-07-08 13:56:30 -07:00
c53a5b7c88 Update Swagger JSONs as well as go proto files (#1231)
Output of running `make presubmit` on master.

Co-authored-by: Scott Redig <sredig@google.com>
2020-07-08 12:52:51 -07:00
cfb316169a Use supported GKE cluster version (#1232)
Update Makefile.
2020-07-08 12:25:53 -07:00
a9365b5333 fix release.sh not knowing the right images (#1219) 2020-06-01 11:05:27 -07:00
116 changed files with 1706 additions and 957 deletions

.github/pull_request_template.md

@ -0,0 +1,16 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
If this is your first time, please read our contributor guidelines: https://github.com/googleforgames/open-match/blob/master/CONTRIBUTING.md and developer guide https://github.com/googleforgames/open-match/blob/master/docs/development.md
-->
**What this PR does / Why we need it**:
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Closes #<issue number>`, or `Closes (paste link of issue)`.
-->
Closes #
**Special notes for your reviewer**:

@ -52,7 +52,7 @@
# If you want information on how to edit this file checkout,
# http://makefiletutorial.com/
BASE_VERSION = 0.0.0-dev
BASE_VERSION = 1.1.0
SHORT_SHA = $(shell git rev-parse --short=7 HEAD | tr -d [:punct:])
BRANCH_NAME = $(shell git rev-parse --abbrev-ref HEAD | tr -d [:punct:])
VERSION = $(BASE_VERSION)-$(SHORT_SHA)
@ -123,7 +123,7 @@ GCLOUD = gcloud --quiet
OPEN_MATCH_HELM_NAME = open-match
OPEN_MATCH_KUBERNETES_NAMESPACE = open-match
OPEN_MATCH_SECRETS_DIR = $(REPOSITORY_ROOT)/install/helm/open-match/secrets
GCLOUD_ACCOUNT_EMAIL = $(shell gcloud auth list --format yaml | grep account: | cut -c 10-)
GCLOUD_ACCOUNT_EMAIL = $(shell gcloud auth list --format yaml | grep ACTIVE -a2 | grep account: | cut -c 10-)
_GCB_POST_SUBMIT ?= 0
# Latest version triggers builds of :latest images.
_GCB_LATEST_VERSION ?= undefined
@ -215,6 +215,9 @@ local-cloud-build: gcloud
## "openmatch-" prefix on the image name and tags.
##
list-images:
@echo $(IMAGES)
#######################################
## build-images / build-<image name>-image: builds images locally
##
@ -282,7 +285,7 @@ $(foreach IMAGE,$(IMAGES),clean-$(IMAGE)-image): clean-%-image:
#####################################################################################################################
update-chart-deps: build/toolchain/bin/helm$(EXE_EXTENSION)
(cd $(REPOSITORY_ROOT)/install/helm/open-match; $(HELM) repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com; $(HELM) dependency update)
(cd $(REPOSITORY_ROOT)/install/helm/open-match; $(HELM) repo add incubator https://charts.helm.sh/stable; $(HELM) dependency update)
lint-chart: build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/ct$(EXE_EXTENSION)
(cd $(REPOSITORY_ROOT)/install/helm; $(HELM) lint $(OPEN_MATCH_HELM_NAME))
@ -360,7 +363,7 @@ install-scale-chart: install-chart-prerequisite build/toolchain/bin/helm$(EXE_EX
install-ci-chart: install-chart-prerequisite build/toolchain/bin/helm$(EXE_EXTENSION) install/helm/open-match/secrets/
$(HELM) upgrade $(OPEN_MATCH_HELM_NAME) $(HELM_UPGRADE_FLAGS) --atomic install/helm/open-match $(HELM_IMAGE_FLAGS) \
--set query.replicas=1,frontend.replicas=1,backend.replicas=1 \
--set evaluator.hostName=test \
--set evaluator.hostName=open-match-test \
--set evaluator.grpcPort=50509 \
--set evaluator.httpPort=51509 \
--set open-match-core.registrationInterval=200ms \
@ -386,9 +389,12 @@ install/yaml/: TAG = $(BASE_VERSION)
endif
install/yaml/: update-chart-deps install/yaml/install.yaml install/yaml/01-open-match-core.yaml install/yaml/02-open-match-demo.yaml install/yaml/03-prometheus-chart.yaml install/yaml/04-grafana-chart.yaml install/yaml/05-jaeger-chart.yaml install/yaml/06-open-match-override-configmap.yaml install/yaml/07-open-match-default-evaluator.yaml
# We have to hard-code the Jaeger endpoints as we are excluding Jaeger, so Helm cannot determine the endpoints from the Jaeger subchart
install/yaml/01-open-match-core.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
--set-string global.telemetry.jaeger.agentEndpoint="$(OPEN_MATCH_HELM_NAME)-jaeger-agent:6831" \
--set-string global.telemetry.jaeger.collectorEndpoint="http://$(OPEN_MATCH_HELM_NAME)-jaeger-collector:14268/api/traces" \
install/helm/open-match > install/yaml/01-open-match-core.yaml
install/yaml/02-open-match-demo.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
@ -406,6 +412,7 @@ install/yaml/03-prometheus-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
--set global.telemetry.prometheus.enabled=true \
install/helm/open-match > install/yaml/03-prometheus-chart.yaml
# We have to hard-code the Prometheus Server URL as we are excluding Prometheus, so Helm cannot determine the URL from the Prometheus subchart
install/yaml/04-grafana-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
mkdir -p install/yaml/
$(HELM) template $(OPEN_MATCH_HELM_NAME) $(HELM_TEMPLATE_FLAGS) $(HELM_IMAGE_FLAGS) \
@ -413,6 +420,7 @@ install/yaml/04-grafana-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
--set open-match-core.redis.enabled=false \
--set open-match-telemetry.enabled=true \
--set global.telemetry.grafana.enabled=true \
--set-string global.telemetry.grafana.prometheusServer="http://$(OPEN_MATCH_HELM_NAME)-prometheus-server.$(OPEN_MATCH_KUBERNETES_NAMESPACE).svc.cluster.local:80/" \
install/helm/open-match > install/yaml/04-grafana-chart.yaml
install/yaml/05-jaeger-chart.yaml: build/toolchain/bin/helm$(EXE_EXTENSION)
@ -459,7 +467,7 @@ set-redis-password:
read REDIS_PASSWORD; \
stty echo; \
printf "\n"; \
$(KUBECTL) create secret generic om-redis -n $(OPEN_MATCH_KUBERNETES_NAMESPACE) --from-literal=redis-password=$$REDIS_PASSWORD --dry-run -o yaml | $(KUBECTL) replace -f - --force
$(KUBECTL) create secret generic open-match-redis -n $(OPEN_MATCH_KUBERNETES_NAMESPACE) --from-literal=redis-password=$$REDIS_PASSWORD --dry-run -o yaml | $(KUBECTL) replace -f - --force
install-toolchain: install-kubernetes-tools install-protoc-tools install-openmatch-tools
install-kubernetes-tools: build/toolchain/bin/kubectl$(EXE_EXTENSION) build/toolchain/bin/helm$(EXE_EXTENSION) build/toolchain/bin/minikube$(EXE_EXTENSION) build/toolchain/bin/terraform$(EXE_EXTENSION)
@ -597,7 +605,10 @@ get-kind-kubeconfig: build/toolchain/bin/kind$(EXE_EXTENSION)
delete-kind-cluster: build/toolchain/bin/kind$(EXE_EXTENSION) build/toolchain/bin/kubectl$(EXE_EXTENSION)
-$(KIND) delete cluster
create-gke-cluster: GKE_VERSION = 1.14.10-gke.32 # gcloud beta container get-server-config --zone us-west1-a
create-cluster-role-binding:
$(KUBECTL) create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$(GCLOUD_ACCOUNT_EMAIL)
create-gke-cluster: GKE_VERSION = 1.15.12-gke.20 # gcloud beta container get-server-config --zone us-west1-a
create-gke-cluster: GKE_CLUSTER_SHAPE_FLAGS = --machine-type n1-standard-4 --enable-autoscaling --min-nodes 1 --num-nodes 2 --max-nodes 10 --disk-size 50
create-gke-cluster: GKE_FUTURE_COMPAT_FLAGS = --no-enable-basic-auth --no-issue-client-certificate --enable-ip-alias --metadata disable-legacy-endpoints=true --enable-autoupgrade
create-gke-cluster: build/toolchain/bin/kubectl$(EXE_EXTENSION) gcloud
@ -606,7 +617,8 @@ create-gke-cluster: build/toolchain/bin/kubectl$(EXE_EXTENSION) gcloud
--cluster-version $(GKE_VERSION) \
--image-type cos_containerd \
--tags open-match
$(KUBECTL) create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=$(GCLOUD_ACCOUNT_EMAIL)
$(MAKE) create-cluster-role-binding
delete-gke-cluster: gcloud
-$(GCLOUD) $(GCP_PROJECT_FLAG) container clusters delete $(GKE_CLUSTER_NAME) $(GCP_LOCATION_FLAG) $(GCLOUD_EXTRA_FLAGS)
@ -656,16 +668,11 @@ api/api.md: third_party/ build/toolchain/bin/protoc-gen-doc$(EXE_EXTENSION)
$(PROTOC) api/*.proto \
-I $(REPOSITORY_ROOT) -I $(PROTOC_INCLUDES) \
--doc_out=. \
--doc_opt=markdown,api.md
--doc_opt=markdown,api_temp.md
# Crazy hack that insert hugo link reference to this API doc -)
$(SED_REPLACE) '1 i\---\
title: "Open Match API References" \
linkTitle: "Open Match API References" \
weight: 2 \
description: \
This document provides API references for Open Match services. \
--- \
' ./api.md && mv ./api.md $(REPOSITORY_ROOT)/../open-match-docs/site/content/en/docs/Reference/
cat ./docs/hugo_apiheader.txt ./api_temp.md >> api.md
mv ./api.md $(REPOSITORY_ROOT)/../open-match-docs/site/content/en/docs/Reference/
rm ./api_temp.md
# Include structure of the protos needs to be called out do the dependency chain is run through properly.
pkg/pb/backend.pb.go: pkg/pb/messages.pb.go

@ -26,7 +26,7 @@
"paths": {
"/v1/backendservice/matches:fetch": {
"post": {
"summary": "FetchMatches triggers a MatchFunction with the specified MatchProfile and returns a set of match proposals that \nmatch the description of that MatchProfile.\nFetchMatches immediately returns an error if it encounters any execution failures.",
"summary": "FetchMatches triggers a MatchFunction with the specified MatchProfile and\nreturns a set of matches generated by the Match Making Function, and\naccepted by the evaluator.\nTickets in matches returned by FetchMatches are moved from active to\npending, and will not be returned by query.",
"operationId": "FetchMatches",
"responses": {
"200": {
@ -94,7 +94,7 @@
},
"/v1/backendservice/tickets:release": {
"post": {
"summary": "ReleaseTickets removes the submitted tickets from the list that prevents tickets \nthat are awaiting assignment from appearing in MMF queries, effectively putting them back into\nthe matchmaking pool",
"summary": "ReleaseTickets moves tickets from the pending state, to the active state.\nThis enables them to be returned by query, and find different matches.",
"description": "BETA FEATURE WARNING: This call and the associated Request and Response\nmessages are not finalized and still subject to possible change or removal.",
"operationId": "ReleaseTickets",
"responses": {
@ -211,7 +211,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchAssignmentFailure": {
"type": "object",
@ -483,7 +483,7 @@
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on Assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
@ -499,10 +499,10 @@
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time represents the time at which this Ticket was created. It is\npopulated by Open Match at the time of Ticket creation."
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof SearchFields. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",

@ -76,7 +76,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchEvaluateRequest": {
"type": "object",
@ -165,7 +165,7 @@
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on Assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
@ -181,10 +181,10 @@
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time represents the time at which this Ticket was created. It is\npopulated by Open Match at the time of Ticket creation."
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof SearchFields. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",

@ -91,7 +91,7 @@
]
},
"delete": {
"summary": "DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.\nThe client must delete the Ticket when finished matchmaking with it. \n - If SearchFields exist in a Ticket, DeleteTicket will deindex the fields lazily.\nUsers may still be able to assign/get a ticket after calling DeleteTicket on it.",
"summary": "DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.\nThe client should delete the Ticket when finished matchmaking with it.",
"operationId": "DeleteTicket",
"responses": {
"200": {
@ -172,7 +172,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchCreateTicketRequest": {
"type": "object",
@ -220,7 +220,7 @@
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on Assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
@ -236,10 +236,10 @@
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time represents the time at which this Ticket was created. It is\npopulated by Open Match at the time of Ticket creation."
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof SearchFields. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"openmatchWatchAssignmentsResponse": {
"type": "object",

@ -69,7 +69,7 @@ message RunResponse {
// The MatchFunction service implements APIs to run user-defined matchmaking logics.
service MatchFunction {
// DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.
// Run pulls Tickets that satisify Profile constraints from QueryService, runs matchmaking logics against them, then
// Run pulls Tickets that satisfy Profile constraints from QueryService, runs matchmaking logics against them, then
// constructs and streams back match candidates to the Backend service.
rpc Run(RunRequest) returns (stream RunResponse) {
option (google.api.http) = {

@ -26,7 +26,7 @@
"paths": {
"/v1/matchfunction:run": {
"post": {
"summary": "DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.\nRun pulls Tickets that satisify Profile constraints from QueryService, runs matchmaking logics against them, then\nconstructs and streams back match candidates to the Backend service.",
"summary": "DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.\nRun pulls Tickets that satisfy Profile constraints from QueryService, runs matchmaking logics against them, then\nconstructs and streams back match candidates to the Backend service.",
"operationId": "Run",
"responses": {
"200": {
@ -75,7 +75,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchDoubleRangeFilter": {
"type": "object",
@ -269,7 +269,7 @@
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on Assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
@ -285,10 +285,10 @@
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time represents the time at which this Ticket was created. It is\npopulated by Open Match at the time of Ticket creation."
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof SearchFields. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",

@ -80,7 +80,7 @@ service QueryService {
// QueryTickets gets a list of Tickets that match all Filters of the input Pool.
// - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.
// QueryTickets pages the Tickets by `queryPageSize` and stream back responses.
// - queryPageSize is default to 1000 if not set, and has a mininum of 10 and maximum of 10000.
// - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.
rpc QueryTickets(QueryTicketsRequest) returns (stream QueryTicketsResponse) {
option (google.api.http) = {
post: "/v1/queryservice/tickets:query"
@ -91,7 +91,7 @@ service QueryService {
// QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.
// - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.
// QueryTicketIds pages the TicketIDs by `queryPageSize` and stream back responses.
// - queryPageSize is default to 1000 if not set, and has a mininum of 10 and maximum of 10000.
// - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.
rpc QueryTicketIds(QueryTicketIdsRequest) returns (stream QueryTicketIdsResponse) {
option (google.api.http) = {
post: "/v1/queryservice/ticketids:query"

@ -26,7 +26,7 @@
"paths": {
"/v1/queryservice/ticketids:query": {
"post": {
"summary": "QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.\n - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.\nQueryTicketIds pages the TicketIDs by `storage.pool.size` and stream back responses.\n - storage.pool.size is default to 1000 if not set, and has a mininum of 10 and maximum of 10000.",
"summary": "QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.\n - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.\nQueryTicketIds pages the TicketIDs by `queryPageSize` and stream back responses.\n - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.",
"operationId": "QueryTicketIds",
"responses": {
"200": {
@ -60,7 +60,7 @@
},
"/v1/queryservice/tickets:query": {
"post": {
"summary": "QueryTickets gets a list of Tickets that match all Filters of the input Pool.\n - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.\nQueryTickets pages the Tickets by `storage.pool.size` and stream back responses.\n - storage.pool.size is default to 1000 if not set, and has a mininum of 10 and maximum of 10000.",
"summary": "QueryTickets gets a list of Tickets that match all Filters of the input Pool.\n - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.\nQueryTickets pages the Tickets by `queryPageSize` and stream back responses.\n - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.",
"operationId": "QueryTickets",
"responses": {
"200": {
@ -109,7 +109,7 @@
"description": "Customized information not inspected by Open Match, to be used by the match\nmaking function, evaluator, and components making calls to Open Match.\nOptional, depending on the requirements of the connected systems."
}
},
"description": "An Assignment represents a game server assignment associated with a Ticket. Open\nmatch does not require or inspect any fields on assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on assignment."
},
"openmatchDoubleRangeFilter": {
"type": "object",
@ -271,7 +271,7 @@
},
"assignment": {
"$ref": "#/definitions/openmatchAssignment",
"description": "An Assignment represents a game server assignment associated with a Ticket.\nOpen Match does not require or inspect any fields on Assignment."
"description": "An Assignment represents a game server assignment associated with a Ticket,\nor whatever finalized matched state means for your use case.\nOpen Match does not require or inspect any fields on Assignment."
},
"search_fields": {
"$ref": "#/definitions/openmatchSearchFields",
@ -287,10 +287,10 @@
"create_time": {
"type": "string",
"format": "date-time",
"description": "Create time represents the time at which this Ticket was created. It is\npopulated by Open Match at the time of Ticket creation."
"description": "Create time is the time the Ticket was created. It is populated by Open\nMatch at the time of Ticket creation."
}
},
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an\nindividual 'Player' or a 'Group' of players. Open Match will not interpret\nwhat the Ticket represents but just treat it as a matchmaking unit with a set\nof SearchFields. Open Match stores the Ticket in state storage and enables an\nAssignment to be associated with this Ticket."
"description": "A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent\nan individual 'Player', a 'Group' of players, or any other concepts unique to\nyour use case. Open Match will not interpret what the Ticket represents but\njust treat it as a matchmaking unit with a set of SearchFields. Open Match\nstores the Ticket in state storage and enables an Assignment to be set on the\nTicket."
},
"protobufAny": {
"type": "object",


@ -153,7 +153,7 @@ steps:
artifacts:
objects:
location: gs://open-match-build-artifacts/output/
location: '${_ARTIFACTS_BUCKET}'
paths:
- install/yaml/install.yaml
- install/yaml/01-open-match-core.yaml
@ -164,10 +164,12 @@ artifacts:
- install/yaml/06-open-match-override-configmap.yaml
substitutions:
_OM_VERSION: "0.0.0-dev"
_OM_VERSION: "1.1.0"
_GCB_POST_SUBMIT: "0"
_GCB_LATEST_VERSION: "undefined"
logsBucket: 'gs://open-match-build-logs/'
_ARTIFACTS_BUCKET: "gs://open-match-build-artifacts/output/"
_LOGS_BUCKET: "gs://open-match-build-logs/"
logsBucket: '${_LOGS_BUCKET}'
options:
sourceProvenanceHash: ['SHA256']
machineType: 'N1_HIGHCPU_32'


@ -111,8 +111,8 @@ While iterating on the project, you may need to:
## Accessing logs
To look at Open Match core services' logs, run:
```bash
# Replace om-frontend with the service name that you would like to access
kubectl logs -n open-match svc/om-frontend
# Replace open-match-frontend with the service name that you would like to access
kubectl logs -n open-match svc/open-match-frontend
```
## API References


@ -12,24 +12,13 @@ SOURCE_VERSION=$1
DEST_VERSION=$2
SOURCE_PROJECT_ID=open-match-build
DEST_PROJECT_ID=open-match-public-images
IMAGE_NAMES="openmatch-backend openmatch-frontend openmatch-query openmatch-synchronizer openmatch-minimatch openmatch-demo-first-match openmatch-mmf-go-soloduel openmatch-mmf-go-pool openmatch-evaluator-go-simple openmatch-swaggerui openmatch-reaper"
IMAGE_NAMES=$(make list-images)
for name in $IMAGE_NAMES
do
source_image=gcr.io/$SOURCE_PROJECT_ID/$name:$SOURCE_VERSION
dest_image=gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION
source_image=gcr.io/$SOURCE_PROJECT_ID/openmatch-$name:$SOURCE_VERSION
dest_image=gcr.io/$DEST_PROJECT_ID/openmatch-$name:$DEST_VERSION
docker pull $source_image
docker tag $source_image $dest_image
docker push $dest_image
done
echo "=============================================================="
echo "=============================================================="
echo "=============================================================="
echo "=============================================================="
echo "Add these lines to your release notes:"
for name in $IMAGE_NAMES
do
echo "docker pull gcr.io/$DEST_PROJECT_ID/$name:$DEST_VERSION"
done
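The rewritten loop above relies on `make list-images` emitting bare image names (without the `openmatch-` prefix), which is why the prefix is re-added when the registry paths are built. A minimal sketch of that path construction, using hypothetical stand-in values since the real list comes from the Makefile and the real versions come from the script's arguments:

```shell
# Stand-ins: real values come from $1/$2 and `make list-images`.
SOURCE_PROJECT_ID=open-match-build
DEST_PROJECT_ID=open-match-public-images
SOURCE_VERSION=1.1.0
DEST_VERSION=1.1.0
IMAGE_NAMES="backend frontend"

for name in $IMAGE_NAMES
do
  # The openmatch- prefix is re-added here because list-images omits it.
  source_image=gcr.io/$SOURCE_PROJECT_ID/openmatch-$name:$SOURCE_VERSION
  dest_image=gcr.io/$DEST_PROJECT_ID/openmatch-$name:$DEST_VERSION
  echo "$source_image -> $dest_image"
done
```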

docs/hugo_apiheader.txt Normal file

@ -0,0 +1,7 @@
---
title: "Open Match API References"
linkTitle: "Open Match API References"
weight: 2
description:
This document provides API references for Open Match services.
---


@ -81,7 +81,7 @@ func runScenario(ctx context.Context, name string, update updater.SetFunc) {
update(s)
// See https://open-match.dev/site/docs/guides/api/
conn, err := grpc.Dial("om-frontend.open-match.svc.cluster.local:50504", grpc.WithInsecure())
conn, err := grpc.Dial("open-match-frontend.open-match.svc.cluster.local:50504", grpc.WithInsecure())
if err != nil {
panic(err)
}


@ -68,7 +68,7 @@ func run(ds *components.DemoShared) {
ds.Update(s)
// See https://open-match.dev/site/docs/guides/api/
conn, err := grpc.Dial("om-backend.open-match.svc.cluster.local:50505", grpc.WithInsecure())
conn, err := grpc.Dial("open-match-backend.open-match.svc.cluster.local:50505", grpc.WithInsecure())
if err != nil {
panic(err)
}


@ -24,8 +24,8 @@ import (
)
const (
queryServiceAddr = "om-query.open-match.svc.cluster.local:50503" // Address of the QueryService endpoint.
serverPort = 50502 // The port for hosting the Match Function.
queryServiceAddr = "open-match-query.open-match.svc.cluster.local:50503" // Address of the QueryService endpoint.
serverPort = 50502 // The port for hosting the Match Function.
)
func main() {


@ -19,11 +19,11 @@ import (
"open-match.dev/open-match/pkg/pb"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestMakeMatchesDeduplicate(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
poolNameToTickets := map[string][]*pb.Ticket{
"pool1": {{Id: "1"}},
@ -31,12 +31,12 @@ func TestMakeMatchesDeduplicate(t *testing.T) {
}
matches, err := makeMatches(poolNameToTickets)
assert.Nil(err)
assert.Equal(len(matches), 0)
require.Nil(err)
require.Equal(len(matches), 0)
}
func TestMakeMatches(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
poolNameToTickets := map[string][]*pb.Ticket{
"pool1": {{Id: "1"}, {Id: "2"}, {Id: "3"}},
@ -45,11 +45,11 @@ func TestMakeMatches(t *testing.T) {
}
matches, err := makeMatches(poolNameToTickets)
assert.Nil(err)
assert.Equal(len(matches), 3)
require.Nil(err)
require.Equal(len(matches), 3)
for _, match := range matches {
assert.Equal(2, len(match.Tickets))
assert.Equal(matchName, match.MatchFunction)
require.Equal(2, len(match.Tickets))
require.Equal(matchName, match.MatchFunction)
}
}


@ -39,7 +39,7 @@ var (
func Run() {
activeScenario := scenarios.ActiveScenario
conn, err := grpc.Dial("om-query.open-match.svc.cluster.local:50503", utilTesting.NewGRPCDialOptions(logger)...)
conn, err := grpc.Dial("open-match-query.open-match.svc.cluster.local:50503", utilTesting.NewGRPCDialOptions(logger)...)
if err != nil {
logger.Fatalf("Failed to connect to Open Match, got %v", err)
}


@ -28,7 +28,7 @@ import (
)
var (
queryServiceAddress = "om-query.open-match.svc.cluster.local:50503" // Address of the QueryService Endpoint.
queryServiceAddress = "open-match-query.open-match.svc.cluster.local:50503" // Address of the QueryService Endpoint.
logger = logrus.WithFields(logrus.Fields{
"app": "scale",

go.mod

@ -45,6 +45,7 @@ require (
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pkg/errors v0.8.1
github.com/prometheus/client_golang v1.2.1
github.com/pseudomuto/protoc-gen-doc v1.3.2 // indirect
github.com/rs/xid v1.2.1
github.com/sirupsen/logrus v1.4.2
github.com/spf13/afero v1.2.1 // indirect

go.sum

@ -27,6 +27,10 @@ github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/FZambia/sentinel v1.0.0 h1:KJ0ryjKTZk5WMp0dXvSdNqp3lFaW1fNFuEYfrkLOYIc=
github.com/FZambia/sentinel v1.0.0/go.mod h1:ytL1Am/RLlAoAXG6Kj5LNuw/TRRQrv2rt2FT26vP5gI=
github.com/Masterminds/semver v1.4.2 h1:WBLTQ37jOCzSLtXNdoo8bNM8876KhNqOKvrlGITgsTc=
github.com/Masterminds/semver v1.4.2/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Masterminds/sprig v2.15.0+incompatible h1:0gSxPGWS9PAr7U2NsQ2YQg6juRDINkUyuvbb4b2Xm8w=
github.com/Masterminds/sprig v2.15.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
@ -41,6 +45,8 @@ github.com/alicebob/gopher-json v0.0.0-20180125190556-5a6b3ba71ee6/go.mod h1:SGn
github.com/alicebob/miniredis/v2 v2.11.0 h1:Dz6uJ4w3Llb1ZiFoqyzF9aLuzbsEWCeKwstu9MzmSAk=
github.com/alicebob/miniredis/v2 v2.11.0/go.mod h1:UA48pmi7aSazcGAvcdKcBB49z521IC9VjTTRz2nIaJE=
github.com/antihax/optional v0.0.0-20180407024304-ca021399b1a6/go.mod h1:V8iCPQYkqmusNa815XgQio277wI47sdRh1dUOLdyC6Q=
github.com/aokoli/goutils v1.0.1 h1:7fpzNGoJ3VA8qcrm++XEE1QUe0mIwNeLa02Nwq7RDkg=
github.com/aokoli/goutils v1.0.1/go.mod h1:SijmP0QR8LtwsmDs8Yii5Z/S4trXFGFC2oO5g9DP+DQ=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0 h1:5hryIiq9gtn+MiLVn0wP37kb/uTeRZgN08WoCsAhIhI=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
@ -69,6 +75,7 @@ github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/davecgh/go-spew v0.0.0-20161028175848-04cdfd42973b/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@ -79,6 +86,8 @@ github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5m
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.0.14/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v0.1.0 h1:EQciDnbrYxy13PgWoY8AqoxGiPrpgBZ1R8UNe3ddc+A=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
@ -106,6 +115,7 @@ github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9/go.mod h1:cIg4er
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
@ -128,6 +138,7 @@ github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXi
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v0.0.0-20161128191214-064e2069ce9c/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
@ -153,6 +164,9 @@ github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huandu/xstrings v1.0.0 h1:pO2K/gKgKaat5LdpAhxhluX2GPQMaI3W5FUz/I/UnWk=
github.com/huandu/xstrings v1.0.0/go.mod h1:4qWG/gcEcfX4z/mBDHJ++3ReCw9ibxbsNJbcucJdbSo=
github.com/imdario/mergo v0.3.4/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.8 h1:CGgOkSJeqMRmt0D9XLWExdT4m4F1vd3FV3VPt+0VxkQ=
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM=
@ -191,6 +205,8 @@ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lN
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-proto-validators v0.0.0-20180403085117-0950a7990007 h1:28i1IjGcx8AofiB4N3q5Yls55VEaitzuEPkFJEVgGkA=
github.com/mwitkow/go-proto-validators v0.0.0-20180403085117-0950a7990007/go.mod h1:m2XC9Qq0AlmmVksL6FktJCdTYyLk7V3fKyp0sl1yWQo=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
@ -207,6 +223,7 @@ github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
@ -236,6 +253,10 @@ github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsT
github.com/prometheus/procfs v0.0.5 h1:3+auTFlqw+ZaQYJARz6ArODtkaIwtvBTx3N2NehQlL8=
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/pseudomuto/protoc-gen-doc v1.3.2 h1:61vWZuxYa8D7Rn4h+2dgoTNqnluBmJya2MgbqO32z6g=
github.com/pseudomuto/protoc-gen-doc v1.3.2/go.mod h1:y5+P6n3iGrbKG+9O04V5ld71in3v/bX88wUwgt+U8EA=
github.com/pseudomuto/protokit v0.2.0 h1:hlnBDcy3YEDXH7kc9gV+NLaN0cDzhDvD1s7Y6FZ8RpM=
github.com/pseudomuto/protokit v0.2.0/go.mod h1:2PdH30hxVHsup8KpBTOXTBeMVhJZVio3Q8ViKSAXT0Q=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
@ -262,6 +283,7 @@ github.com/spf13/viper v1.5.0 h1:GpsTwfsQ27oS/Aha/6d1oD7tpKIqWnOA6tgOX9HHkt4=
github.com/spf13/viper v1.5.0/go.mod h1:AkYRkVJF8TkSG/xet6PzXX+l39KhhXa2pdqVSxnTcn4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v0.0.0-20170130113145-4d4bfba8f1d1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
@ -284,6 +306,7 @@ go.opencensus.io v0.22.1/go.mod h1:Ap50jQcDJrx6rB6VgeeFPtuPIf3wMRvRfrfYDO6+BmA=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180501155221-613d6eafa307/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
@ -338,6 +361,7 @@ golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190412183630-56d357773e84/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -406,6 +430,7 @@ google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww
google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20181107211654-5fc9ac540362/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=


@ -13,13 +13,13 @@
# limitations under the License.
apiVersion: v2
appVersion: "0.0.0-dev"
version: 0.0.0-dev
appVersion: "1.1.0"
version: 1.1.0
name: open-match
dependencies:
- name: redis
version: 9.5.0
repository: https://kubernetes-charts.storage.googleapis.com/
repository: https://charts.helm.sh/stable
condition: open-match-core.redis.enabled
- name: open-match-telemetry
version: 0.0.0-dev


@ -0,0 +1,20 @@
{*
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*}
{{/* vim: set filetype=mustache: */}}
{{- define "openmatchcustomize.function.hostName" -}}
{{- .Values.function.hostName | default (printf "%s-function" (include "openmatch.fullname" . ) ) -}}
{{- end -}}


@ -18,7 +18,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.evaluator.hostName }}
name: {{ include "openmatch.evaluator.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -46,20 +46,20 @@ spec:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: {{ .Values.evaluator.hostName }}
name: {{ include "openmatch.evaluator.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .Values.evaluator.hostName }}
name: {{ include "openmatch.evaluator.hostName" . }}
{{- include "openmatch.HorizontalPodAutoscaler.spec.common" . | nindent 2 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.evaluator.hostName }}
name: {{ include "openmatch.evaluator.hostName" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "openmatch.name" . }}
@ -83,11 +83,11 @@ spec:
release: {{ .Release.Name }}
spec:
volumes:
{{- include "openmatch.volumes.configs" (dict "configs" .Values.evaluatorConfigs) | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.evaluatorConfigs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.evaluator.hostName }}
- name: {{ include "openmatch.evaluator.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.evaluatorConfigs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}


@ -18,7 +18,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.function.hostName }}
name: {{ include "openmatchcustomize.function.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -46,20 +46,20 @@ spec:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: {{ .Values.function.hostName }}
name: {{ include "openmatchcustomize.function.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .Values.function.hostName }}
name: {{ include "openmatchcustomize.function.hostName" . }}
{{- include "openmatch.HorizontalPodAutoscaler.spec.common" . | nindent 2 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.function.hostName }}
name: {{ include "openmatchcustomize.function.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -84,11 +84,11 @@ spec:
release: {{ .Release.Name }}
spec:
volumes:
{{- include "openmatch.volumes.configs" (dict "configs" .Values.mmfConfigs) | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.mmfConfigs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.function.hostName }}
- name: {{ include "openmatchcustomize.function.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.mmfConfigs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}


@ -35,11 +35,13 @@ evaluatorConfigs:
default:
volumeName: om-config-volume-default
mountPath: /app/config/default
configName: om-configmap-default
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.default" . }}'
customize:
volumeName: om-config-volume-override
mountPath: /app/config/override
configName: om-configmap-override
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.override" . }}'
mmfConfigs:
# We use harness to implement the MMFs. MMF itself only requires one configmap but harness expects two,
@ -48,8 +50,10 @@ mmfConfigs:
default:
volumeName: om-config-volume-default
mountPath: /app/config/default
configName: om-configmap-default
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.default" . }}'
customize:
volumeName: om-config-volume-override
mountPath: /app/config/override
configName: om-configmap-override
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.override" . }}'


@ -0,0 +1,42 @@
{*
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*}
{{/* vim: set filetype=mustache: */}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "openmatchscale.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{- define "openmatchscale.scaleBackend.hostName" -}}
{{- .Values.scaleBackend.hostName | default (printf "%s-backend" (include "openmatchscale.fullname" . ) ) -}}
{{- end -}}
{{- define "openmatchscale.scaleFrontend.hostName" -}}
{{- .Values.scaleFrontend.hostName | default (printf "%s-frontend" (include "openmatchscale.fullname" . ) ) -}}
{{- end -}}
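The `trunc 63 | trimSuffix "-"` pipeline in the helpers above exists because Kubernetes resource names are DNS labels capped at 63 characters, and a hard truncation can leave a dangling hyphen. A rough shell equivalent of the same rule, with a made-up over-long name standing in for the release/chart name:

```shell
# Hypothetical over-long name; the real input is .Release.Name plus the chart name.
fullname="my-very-long-release-name-open-match-scale-with-extra-padding-abc"
truncated=$(printf '%s' "$fullname" | cut -c1-63)  # keep at most 63 chars, like `trunc 63`
truncated="${truncated%-}"                          # drop a trailing hyphen, like `trimSuffix "-"`
echo "$truncated"
```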


@ -15,7 +15,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.scaleBackend.hostName }}
name: {{ include "openmatchscale.scaleBackend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -34,7 +34,7 @@ spec:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.scaleBackend.hostName }}
name: {{ include "openmatchscale.scaleBackend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -59,11 +59,11 @@ spec:
release: {{ .Release.Name }}
spec:
volumes:
{{- include "openmatch.volumes.configs" (dict "configs" .Values.configs) | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.configs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.scaleBackend.hostName }}
- name: {{ include "openmatchscale.scaleBackend.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.configs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}


@ -15,7 +15,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.scaleFrontend.hostName }}
name: {{ include "openmatchscale.scaleFrontend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -34,7 +34,7 @@ spec:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ .Values.scaleFrontend.hostName }}
name: {{ include "openmatchscale.scaleFrontend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -59,11 +59,11 @@ spec:
release: {{ .Release.Name }}
spec:
volumes:
{{- include "openmatch.volumes.configs" (dict "configs" .Values.configs) | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.configs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.scaleFrontend.hostName }}
- name: {{ include "openmatchscale.scaleFrontend.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.configs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}


@ -16,7 +16,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: open-match-scale-dashboard
name: {{ include "openmatchscale.fullname" . }}-dashboard
namespace: {{ .Release.Namespace }}
labels:
grafana_dashboard: "1"


@ -13,13 +13,13 @@
# limitations under the License.
scaleFrontend:
hostName: om-scale-frontend
hostName:
httpPort: 51509
replicas: 1
image: openmatch-scale-frontend
scaleBackend:
hostName: om-scale-backend
hostName:
httpPort: 51509
replicas: 1
image: openmatch-scale-backend
@ -28,8 +28,10 @@ configs:
default:
volumeName: om-config-volume-default
mountPath: /app/config/default
configName: om-configmap-default
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.default" . }}'
override:
volumeName: om-config-volume-override
mountPath: /app/config/override
configName: om-configmap-override
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.override" . }}'


@ -20,14 +20,14 @@ version: 0.0.0-dev
dependencies:
- name: prometheus
version: 9.2.0
repository: https://kubernetes-charts.storage.googleapis.com/
repository: https://charts.helm.sh/stable
condition: global.telemetry.prometheus.enabled,prometheus.enabled
- name: grafana
version: 4.0.1
repository: https://kubernetes-charts.storage.googleapis.com/
repository: https://charts.helm.sh/stable
condition: global.telemetry.grafana.enabled,grafana.enabled
- name: jaeger
version: 0.13.3
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
repository: https://charts.helm.sh/stable
condition: global.telemetry.jaeger.enabled,jaeger.enabled


@ -62,7 +62,7 @@
"steppedLine": false,
"targets": [
{
"expr": "avg by (pod_name) (\n sum(\n rate(container_cpu_usage_seconds_total{pod_name=~\"om-.*\", container_name!=\"POD\"}[5m])\n ) by (pod_name, container_name) \n \n /\n \n sum(\n container_spec_cpu_quota{pod_name=~\"om-.*\", container_name!=\"POD\"} / container_spec_cpu_period{pod_name=~\"om-.*\", container_name!=\"POD\"}\n ) by (pod_name, container_name) \n \n * \n \n 100\n)",
"expr": "avg by (pod_name) (\n\nsum(\n rate(container_cpu_usage_seconds_total{container_name!=\"POD\"}[5m]) * on (pod_name) group_left(label_app) max by (pod_name, label_app) (label_replace(kube_pod_labels{label_app=\"open-match\"}, \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n) by (pod_name, container_name)\n\n/\n\nsum(\n (container_spec_cpu_quota{container_name!=\"POD\"} * on (pod_name) group_left(label_app) max by (pod_name, label_app) (label_replace(kube_pod_labels{label_app=\"open-match\"}, \"pod_name\", \"$1\", \"pod\", \"(.*)\")))\n /\n (container_spec_cpu_period{container_name!=\"POD\"} * on (pod_name) group_left(label_app) max by (pod_name, label_app) (label_replace(kube_pod_labels{label_app=\"open-match\"}, \"pod_name\", \"$1\", \"pod\", \"(.*)\")))\n) by (pod_name, container_name)\n\n*\n\n100\n)\n",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{pod_name}}",
@ -155,7 +155,7 @@
"steppedLine": false,
"targets": [
{
"expr": "avg by (component) (go_goroutines{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"})",
"expr": "avg by (component) (go_goroutines{app=~\"open-match\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}}",
@ -256,7 +256,7 @@
"steppedLine": false,
"targets": [
{
"expr": "avg by (component,app) (process_resident_memory_bytes{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"})",
"expr": "avg by (component,app) (process_resident_memory_bytes{app=~\"open-match\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}} - resident",
@ -265,7 +265,7 @@
"step": 4
},
{
"expr": "avg by (component,app) (process_virtual_memory_bytes{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"})",
"expr": "avg by (component,app) (process_virtual_memory_bytes{app=~\"open-match\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}} - virtual",
@ -365,7 +365,7 @@
"steppedLine": false,
"targets": [
{
"expr": "avg by (component) (deriv(process_resident_memory_bytes{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"}[$interval]))",
"expr": "avg by (component) (deriv(process_resident_memory_bytes{app=~\"open-match\"}[$interval]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}} - resident",
@ -374,7 +374,7 @@
"step": 4
},
{
"expr": "avg by (component) (deriv(process_virtual_memory_bytes{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"}[$interval]))",
"expr": "avg by (component) (deriv(process_virtual_memory_bytes{app=~\"open-match\"}[$interval]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}} - virtual",
@ -475,7 +475,7 @@
"steppedLine": false,
"targets": [
{
"expr": "avg by (component) (go_memstats_alloc_bytes{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"})",
"expr": "avg by (component) (go_memstats_alloc_bytes{app=~\"open-match\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}} - bytes allocated",
@ -484,7 +484,7 @@
"step": 4
},
{
"expr": "avg by (component) (rate(go_memstats_alloc_bytes_total{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"}[$interval]))",
"expr": "avg by (component) (rate(go_memstats_alloc_bytes_total{app=~\"open-match\"}[$interval]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}} - alloc rate",
@ -493,7 +493,7 @@
"step": 4
},
{
"expr": "avg by (component) (go_memstats_stack_inuse_bytes{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"})",
"expr": "avg by (component) (go_memstats_stack_inuse_bytes{app=~\"open-match\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}} - stack inuse",
@ -502,7 +502,7 @@
"step": 4
},
{
"expr": "avg by (component) (go_memstats_heap_inuse_bytes{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"})",
"expr": "avg by (component) (go_memstats_heap_inuse_bytes{app=~\"open-match\"})",
"format": "time_series",
"hide": false,
"intervalFactor": 2,
@ -604,7 +604,7 @@
"steppedLine": false,
"targets": [
{
"expr": "avg by (component) (deriv(go_memstats_alloc_bytes{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"}[$interval]))",
"expr": "avg by (component) (deriv(go_memstats_alloc_bytes{app=~\"open-match\"}[$interval]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}} - bytes allocated",
@ -613,7 +613,7 @@
"step": 4
},
{
"expr": "avg by (component) (deriv(go_memstats_stack_inuse_bytes{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"}[$interval]))",
"expr": "avg by (component) (deriv(go_memstats_stack_inuse_bytes{app=~\"open-match\"}[$interval]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}} - stack inuse",
@ -622,7 +622,7 @@
"step": 4
},
{
"expr": "avg by (component) (deriv(go_memstats_heap_inuse_bytes{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"}[$interval]))",
"expr": "avg by (component) (deriv(go_memstats_heap_inuse_bytes{app=~\"open-match\"}[$interval]))",
"format": "time_series",
"hide": false,
"intervalFactor": 2,
@ -719,7 +719,7 @@
"steppedLine": false,
"targets": [
{
"expr": "avg by (component) (process_open_fds{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"})",
"expr": "avg by (component) (process_open_fds{app=~\"open-match\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}}",
@ -815,7 +815,7 @@
"steppedLine": false,
"targets": [
{
"expr": "avg by (component) (deriv(process_open_fds{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"}[$interval]))",
"expr": "avg by (component) (deriv(process_open_fds{app=~\"open-match\"}[$interval]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}}",
@ -911,7 +911,7 @@
"steppedLine": false,
"targets": [
{
"expr": "avg by (component, quantile) (go_gc_duration_seconds{app=~\"open-match\", kubernetes_pod_name=~\"om-.*\"})",
"expr": "avg by (component, quantile) (go_gc_duration_seconds{app=~\"open-match\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{component}}: {{quantile}}",

View File

@ -348,14 +348,14 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(container_cpu_usage_seconds_total{pod_name=~\"om-redis.*\", name!~\".*prometheus.*\", image!=\"\", container_name!=\"POD\"}[5m])) by (pod_name)",
"expr": "sum(rate(container_cpu_usage_seconds_total{name!~\".*prometheus.*\", image!=\"\", container_name!=\"POD\"}[5m]) * on (pod_name) group_left(label_app) max by (pod_name, label_app) (label_replace(kube_pod_labels{label_app=\"redis\"}, \"pod_name\", \"$1\", \"pod\", \"(.*)\"))) by (pod_name)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{pod_name}} usage",
"refId": "A"
},
{
"expr": "sum(kube_pod_container_resource_limits_cpu_cores{pod=~\"om-redis.*\"}) by (pod)",
"expr": "sum(kube_pod_container_resource_limits_cpu_cores * on (pod) group_left(label_app) max by (pod, label_app) (kube_pod_labels{label_app=\"redis\"})) by (pod)",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
@ -363,7 +363,7 @@
"refId": "B"
},
{
"expr": "sum(kube_pod_container_resource_requests_cpu_cores{pod=~\"om-redis.*\"}) by (pod)",
"expr": "sum(kube_pod_container_resource_requests_cpu_cores * on (pod) group_left(label_app) max by (pod, label_app) (kube_pod_labels{label_app=\"redis\"})) by (pod)",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "request",

View File

@ -16,7 +16,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: open-match-dashboards
name: {{ include "openmatch.fullname" . }}-dashboards
labels:
grafana_dashboard: "1"
data:

View File

@ -0,0 +1,31 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{{- if .Values.global.telemetry.grafana.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "openmatch.fullname" . }}-datasource
labels:
grafana_datasource: "1"
data:
datasource.yaml: |-
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: {{ tpl .Values.global.telemetry.grafana.prometheusServer . }}
access: proxy
isDefault: true
{{- end }}

View File

@ -142,17 +142,10 @@ grafana:
notifiers: {}
sidecar:
dashboards:
enabled: true
enabled: true
datasources:
enabled: true
plugins: grafana-piechart-panel
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: http://open-match-prometheus-server.{{ .Release.Namespace }}.svc.cluster.local:80/
access: proxy
isDefault: true
jaeger:
enabled: true

View File

@ -22,6 +22,26 @@ Expand the name of the chart.
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
Instead of .Chart.Name, we hard-code "open-match" as we need to call this from subcharts, but get the
same result as if called from this chart.
*/}}
{{- define "openmatch.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default "open-match" .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
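To illustrate the naming logic of the `openmatch.fullname` helper above, here is a sketch of what it renders for a few hypothetical release names (illustrative only, not part of this diff):

```yaml
# Hypothetical renderings of {{ include "openmatch.fullname" . }}:
#
#   Release name   Override set            Result
#   my-om          (none)                  my-om-open-match   # printf "%s-%s"
#   open-match     (none)                  open-match         # release already contains "open-match"
#   anything       fullnameOverride: om    om                 # override wins
#
# All results are truncated to 63 characters to satisfy Kubernetes DNS name limits.
```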
{{/*
Render chart metadata labels: "chart", "heritage" unless "openmatch.noChartMeta" is set.
*/}}
@ -57,7 +77,7 @@ resources:
{{- range $configIndex, $configValues := .configs }}
- name: {{ $configValues.volumeName }}
configMap:
name: {{ $configValues.configName }}
name: {{ tpl $configValues.configName $ }}
{{- end }}
{{- end -}}
@ -74,10 +94,10 @@ resources:
{{- if .Values.global.tls.enabled }}
- name: tls-server-volume
secret:
secretName: om-tls-server
secretName: {{ include "openmatch.fullname" . }}-tls-server
- name: root-ca-volume
secret:
secretName: om-tls-rootca
secretName: {{ include "openmatch.fullname" . }}-tls-rootca
{{- end -}}
{{- end -}}
@ -92,7 +112,7 @@ resources:
{{- if .Values.redis.usePassword }}
- name: redis-password
secret:
secretName: {{ .Values.redis.fullnameOverride }}
secretName: {{ include "call-nested" (list . "redis" "redis.fullname") }}
{{- end -}}
{{- end -}}
@ -135,3 +155,72 @@ minReplicas: {{ .Values.global.kubernetes.horizontalPodAutoScaler.minReplicas }}
maxReplicas: {{ .Values.global.kubernetes.horizontalPodAutoScaler.maxReplicas }}
targetCPUUtilizationPercentage: {{ .Values.global.kubernetes.horizontalPodAutoScaler.targetCPUUtilizationPercentage }}
{{- end -}}
{{- define "openmatch.serviceAccount.name" -}}
{{- .Values.global.kubernetes.serviceAccount | default (printf "%s-unprivileged-service" (include "openmatch.fullname" . ) ) -}}
{{- end -}}
{{- define "openmatch.swaggerui.hostName" -}}
{{- .Values.swaggerui.hostName | default (printf "%s-swaggerui" (include "openmatch.fullname" . ) ) -}}
{{- end -}}
{{- define "openmatch.query.hostName" -}}
{{- .Values.query.hostName | default (printf "%s-query" (include "openmatch.fullname" . ) ) -}}
{{- end -}}
{{- define "openmatch.frontend.hostName" -}}
{{- .Values.frontend.hostName | default (printf "%s-frontend" (include "openmatch.fullname" . ) ) -}}
{{- end -}}
{{- define "openmatch.backend.hostName" -}}
{{- .Values.backend.hostName | default (printf "%s-backend" (include "openmatch.fullname" . ) ) -}}
{{- end -}}
{{- define "openmatch.synchronizer.hostName" -}}
{{- .Values.synchronizer.hostName | default (printf "%s-synchronizer" (include "openmatch.fullname" . ) ) -}}
{{- end -}}
{{- define "openmatch.evaluator.hostName" -}}
{{- .Values.evaluator.hostName | default (printf "%s-evaluator" (include "openmatch.fullname" . ) ) -}}
{{- end -}}
{{- define "openmatch.configmap.default" -}}
{{- printf "%s-configmap-default" (include "openmatch.fullname" . ) -}}
{{- end -}}
{{- define "openmatch.configmap.override" -}}
{{- printf "%s-configmap-override" (include "openmatch.fullname" . ) -}}
{{- end -}}
{{- define "openmatch.jaeger.agent" -}}
{{- if index .Values "open-match-telemetry" "enabled" -}}
{{- if index .Values "open-match-telemetry" "jaeger" "enabled" -}}
{{ include "call-nested" (list . "open-match-telemetry.jaeger" "jaeger.agent.name") }}:6831
{{- end -}}
{{- end -}}
{{- end -}}
{{- define "openmatch.jaeger.collector" -}}
{{- if index .Values "open-match-telemetry" "enabled" -}}
{{- if index .Values "open-match-telemetry" "jaeger" "enabled" -}}
http://{{ include "call-nested" (list . "open-match-telemetry.jaeger" "jaeger.collector.name") }}:14268/api/traces
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Call templates from sub-charts in a synthesized context, workaround for https://github.com/helm/helm/issues/3920
Mainly useful for things like `{{ include "call-nested" (list . "redis" "redis.fullname") }}`
https://github.com/helm/helm/issues/4535#issuecomment-416022809
https://github.com/helm/helm/issues/4535#issuecomment-477778391
*/}}
{{- define "call-nested" }}
{{- $dot := index . 0 }}
{{- $subchart := index . 1 | splitList "." }}
{{- $template := index . 2 }}
{{- $values := $dot.Values }}
{{- range $subchart }}
{{- $values = index $values . }}
{{- end }}
{{- include $template (dict "Chart" (dict "Name" (last $subchart)) "Values" $values "Release" $dot.Release "Capabilities" $dot.Capabilities) }}
{{- end }}
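The `call-nested` helper above invokes a subchart's named template with a synthesized context. As a rough sketch (illustrative only), the canonical call from the comment expands like this:

```yaml
# {{ include "call-nested" (list . "redis" "redis.fullname") }}
# renders the subchart's "redis.fullname" template against a context
# approximately equivalent to:
#   Chart:        { Name: "redis" }   # last element of the dotted subchart path
#   Values:       .Values.redis       # the parent's values subtree for the subchart
#   Release:      .Release            # shared with the parent chart
#   Capabilities: .Capabilities
# so the rendered name matches what the Redis subchart itself would produce,
# without hard-coding fullnameOverride in the parent chart.
```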

View File

@ -16,7 +16,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.backend.hostName }}
name: {{ include "openmatch.backend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -44,19 +44,19 @@ spec:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: {{ .Values.backend.hostName }}
name: {{ include "openmatch.backend.hostName" . }}
namespace: {{ .Release.Namespace }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .Values.backend.hostName }}
name: {{ include "openmatch.backend.hostName" . }}
{{- include "openmatch.HorizontalPodAutoscaler.spec.common" . | nindent 2 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.backend.hostName }}
name: {{ include "openmatch.backend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -82,12 +82,12 @@ spec:
spec:
{{- include "openmatch.labels.nodegrouping" . | nindent 6 }}
volumes:
{{- include "openmatch.volumes.configs" (dict "configs" .Values.configs) | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.configs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
{{- include "openmatch.volumes.withredis" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.backend.hostName }}
- name: {{ include "openmatch.backend.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.configs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}

View File

@ -16,7 +16,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.frontend.hostName }}
name: {{ include "openmatch.frontend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -44,19 +44,19 @@ spec:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: {{ .Values.frontend.hostName }}
name: {{ include "openmatch.frontend.hostName" . }}
namespace: {{ .Release.Namespace }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .Values.frontend.hostName }}
name: {{ include "openmatch.frontend.hostName" . }}
{{- include "openmatch.HorizontalPodAutoscaler.spec.common" . | nindent 2 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.frontend.hostName }}
name: {{ include "openmatch.frontend.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -82,12 +82,12 @@ spec:
spec:
{{- include "openmatch.labels.nodegrouping" . | nindent 6 }}
volumes:
{{- include "openmatch.volumes.configs" (dict "configs" .Values.configs) | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.configs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
{{- include "openmatch.volumes.withredis" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.frontend.hostName }}
- name: {{ include "openmatch.frontend.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.configs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}

View File

@ -16,7 +16,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: om-configmap-default
name: {{ include "openmatch.configmap.default" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -50,28 +50,28 @@ data:
api:
backend:
hostname: "{{ .Values.backend.hostName }}"
hostname: "{{ include "openmatch.backend.hostName" . }}"
grpcport: "{{ .Values.backend.grpcPort }}"
httpport: "{{ .Values.backend.httpPort }}"
frontend:
hostname: "{{ .Values.frontend.hostName }}"
hostname: "{{ include "openmatch.frontend.hostName" . }}"
grpcport: "{{ .Values.frontend.grpcPort }}"
httpport: "{{ .Values.frontend.httpPort }}"
query:
hostname: "{{ .Values.query.hostName }}"
hostname: "{{ include "openmatch.query.hostName" . }}"
grpcport: "{{ .Values.query.grpcPort }}"
httpport: "{{ .Values.query.httpPort }}"
synchronizer:
hostname: "{{ .Values.synchronizer.hostName }}"
hostname: "{{ include "openmatch.synchronizer.hostName" . }}"
grpcport: "{{ .Values.synchronizer.grpcPort }}"
httpport: "{{ .Values.synchronizer.httpPort }}"
swaggerui:
hostname: "{{ .Values.swaggerui.hostName }}"
hostname: "{{ include "openmatch.swaggerui.hostName" . }}"
httpport: "{{ .Values.swaggerui.httpPort }}"
# Configurations for api.test and api.scale are used for testing.
test:
hostname: "test"
hostname: "{{ include "openmatch.fullname" . }}-test"
grpcport: "50509"
httpport: "51509"
scale:
@ -90,11 +90,11 @@ data:
{{- if index .Values "redis" "sentinel" "enabled"}}
sentinelPort: {{ .Values.redis.sentinel.port }}
sentinelMaster: {{ .Values.redis.sentinel.masterSet }}
sentinelHostname: {{ .Values.redis.fullnameOverride }}
sentinelHostname: {{ include "call-nested" (list . "redis" "redis.fullname") }}
sentinelUsePassword: {{ .Values.redis.sentinel.usePassword }}
{{- else}}
# Open Match's default Redis setups
hostname: {{ .Values.redis.fullnameOverride }}-master.{{ .Release.Namespace }}.svc.cluster.local
hostname: {{ include "call-nested" (list . "redis" "redis.fullname") }}-master.{{ .Release.Namespace }}.svc.cluster.local
port: {{ .Values.redis.redisPort }}
user: {{ .Values.redis.user }}
{{- end}}
@ -119,8 +119,8 @@ data:
enable: "{{ .Values.global.telemetry.zpages.enabled }}"
jaeger:
enable: "{{ .Values.global.telemetry.jaeger.enabled }}"
agentEndpoint: "{{ .Values.global.telemetry.jaeger.agentEndpoint }}"
collectorEndpoint: "{{ .Values.global.telemetry.jaeger.collectorEndpoint }}"
agentEndpoint: "{{ tpl .Values.global.telemetry.jaeger.agentEndpoint . }}"
collectorEndpoint: "{{ tpl .Values.global.telemetry.jaeger.collectorEndpoint . }}"
prometheus:
enable: "{{ .Values.global.telemetry.prometheus.enabled }}"
endpoint: "{{ .Values.global.telemetry.prometheus.endpoint }}"

View File

@ -16,7 +16,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: om-configmap-override
name: {{ include "openmatch.configmap.override" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -42,7 +42,7 @@ data:
queryPageSize: {{ index .Values "open-match-core" "queryPageSize" }}
api:
evaluator:
hostname: "{{ .Values.evaluator.hostName }}"
hostname: "{{ include "openmatch.evaluator.hostName" . }}"
grpcport: "{{ .Values.evaluator.grpcPort }}"
httpport: "{{ .Values.evaluator.httpPort }}"
{{- end }}

View File

@ -14,11 +14,11 @@
{{- if index .Values "open-match-core" "enabled" }}
{{- if empty .Values.ci }}
# om-redis-podsecuritypolicy is the least restricted PSP used to create privileged pods to disable THP in host kernel.
# This is the least restricted PSP used to create privileged pods to disable THP in host kernel.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: om-redis-podsecuritypolicy
name: {{ include "openmatch.fullname" . }}-redis-podsecuritypolicy
namespace: {{ .Release.Namespace }}
annotations:
{{- include "openmatch.chartmeta" . | nindent 4 }}
@ -51,11 +51,11 @@ spec:
fsGroup:
rule: 'RunAsAny'
---
# om-core-podsecuritypolicy does not allow creating privileged pods and restrict binded pods to use the specified port ranges.
# This does not allow creating privileged pods and restricts bound pods to the specified port ranges.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: om-core-podsecuritypolicy
name: {{ include "openmatch.fullname" . }}-core-podsecuritypolicy
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:

View File

@ -16,7 +16,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.query.hostName }}
name: {{ include "openmatch.query.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -44,19 +44,19 @@ spec:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: {{ .Values.query.hostName }}
name: {{ include "openmatch.query.hostName" . }}
namespace: {{ .Release.Namespace }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .Values.query.hostName }}
name: {{ include "openmatch.query.hostName" . }}
{{- include "openmatch.HorizontalPodAutoscaler.spec.common" . | nindent 2 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.query.hostName }}
name: {{ include "openmatch.query.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -82,12 +82,12 @@ spec:
spec:
{{- include "openmatch.labels.nodegrouping" . | nindent 6 }}
volumes:
{{- include "openmatch.volumes.configs" (dict "configs" .Values.configs) | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.configs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
{{- include "openmatch.volumes.withredis" . | nindent 8 }}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.query.hostName }}
- name: {{ include "openmatch.query.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.configs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}

View File

@ -29,7 +29,7 @@ metadata:
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.global.kubernetes.serviceAccount }}
name: {{ include "openmatch.serviceAccount.name" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -40,28 +40,26 @@ automountServiceAccountToken: true
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: om-service-role
name: {{ include "openmatch.fullname" . }}-service-role
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
app: {{ template "openmatch.name" . }}
release: {{ .Release.Name }}
rules:
# Define om-service-role to use om-core-podsecuritypolicy
- apiGroups:
- extensions
resources:
- podsecuritypolicies
resourceNames:
- om-core-podsecuritypolicy
- {{ include "openmatch.fullname" . }}-core-podsecuritypolicy
verbs:
- use
---
# This applies om-service-role to the open-match unprivileged service account under the release namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: om-service-role-binding
name: {{ include "openmatch.fullname" . }}-service-role-binding
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -73,34 +71,32 @@ subjects:
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: om-service-role
name: {{ include "openmatch.fullname" . }}-service-role
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: om-redis-role
name: {{ include "openmatch.fullname" . }}-redis-role
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
app: {{ template "openmatch.name" . }}
release: {{ .Release.Name }}
rules:
# Define om-redis-role to use om-redis-podsecuritypolicy
- apiGroups:
- extensions
resources:
- podsecuritypolicies
resourceNames:
- om-redis-podsecuritypolicy
- {{ include "openmatch.fullname" . }}-redis-podsecuritypolicy
verbs:
- use
---
# This applies om-redis role to the om-redis privileged service account under the release namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: om-redis-role-binding
name: {{ include "openmatch.fullname" . }}-redis-role-binding
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -108,10 +104,10 @@ metadata:
release: {{ .Release.Name }}
subjects:
- kind: ServiceAccount
name: {{ .Values.redis.serviceAccount.name }} # Redis service account
name: {{ include "call-nested" (list . "redis" "redis.serviceAccountName") }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: Role
name: om-redis-role
name: {{ include "openmatch.fullname" . }}-redis-role
apiGroup: rbac.authorization.k8s.io
{{- end }}

View File

@ -16,7 +16,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.swaggerui.hostName }}
name: {{ include "openmatch.swaggerui.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -36,7 +36,7 @@ spec:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.swaggerui.hostName }}
name: {{ include "openmatch.swaggerui.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -61,11 +61,11 @@ spec:
spec:
{{- include "openmatch.labels.nodegrouping" . | nindent 6 }}
volumes:
{{- include "openmatch.volumes.configs" (dict "configs" .Values.configs) | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.configs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.swaggerui.hostName }}
- name: {{ include "openmatch.swaggerui.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.configs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}

View File

@ -16,7 +16,7 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Values.synchronizer.hostName }}
name: {{ include "openmatch.synchronizer.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -40,7 +40,7 @@ spec:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.synchronizer.hostName }}
name: {{ include "openmatch.synchronizer.hostName" . }}
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -66,12 +66,12 @@ spec:
spec:
{{- include "openmatch.labels.nodegrouping" . | nindent 6 }}
volumes:
{{- include "openmatch.volumes.configs" (dict "configs" .Values.configs) | nindent 8}}
{{- include "openmatch.volumes.configs" (. | merge (dict "configs" .Values.configs)) | nindent 8}}
{{- include "openmatch.volumes.tls" . | nindent 8}}
{{- include "openmatch.volumes.withredis" . | nindent 8 }}
serviceAccountName: {{ .Values.global.kubernetes.serviceAccount }}
serviceAccountName: {{ include "openmatch.serviceAccount.name" . }}
containers:
- name: {{ .Values.synchronizer.hostName }}
- name: {{ include "openmatch.synchronizer.hostName" . }}
volumeMounts:
{{- include "openmatch.volumemounts.configs" (dict "configs" .Values.configs) | nindent 10 }}
{{- include "openmatch.volumemounts.tls" . | nindent 10 }}

View File

@ -14,11 +14,10 @@
{{- if .Values.ci }}
# This applies om-test-role to the open-match-test-service account under the release namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: om-test-role-binding
name: {{ include "openmatch.fullname" . }}-test-role-binding
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -26,11 +25,11 @@ metadata:
release: {{ .Release.Name }}
subjects:
- kind: ServiceAccount
name: open-match-test-service
name: {{ include "openmatch.fullname" . }}-test-service
namespace: {{ .Release.Namespace }}
roleRef:
kind: Role
name: om-test-role
name: {{ include "openmatch.fullname" . }}-test-role
apiGroup: rbac.authorization.k8s.io
{{- end }}

View File

@ -17,23 +17,22 @@
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: om-test-role
name: {{ include "openmatch.fullname" . }}-test-role
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
app: {{ template "openmatch.name" . }}
release: {{ .Release.Name }}
rules:
# Define om-test-role to use om-core-podsecuritypolicy
- apiGroups:
- extensions
resources:
- podsecuritypolicies
resourceNames:
- om-core-podsecuritypolicy
- {{ include "openmatch.fullname" . }}-core-podsecuritypolicy
verbs:
- use
# Grant om-test-role get & list permission for k8s endpoints and pods resources
# Grant this role get & list permission for k8s endpoints and pods resources
# Required for e2e in-cluster testing.
- apiGroups:
- ""

View File

@ -14,11 +14,11 @@
{{- if .Values.ci }}
# Create a service account for open-match-test services.
# Create a service account for test services.
apiVersion: v1
kind: ServiceAccount
metadata:
name: open-match-test-service
name: {{ include "openmatch.fullname" . }}-test-service
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:

View File

@ -17,7 +17,7 @@
kind: Service
apiVersion: v1
metadata:
name: test
name: {{ include "openmatch.fullname" . }}-test
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -40,7 +40,7 @@ spec:
apiVersion: v1
kind: Pod
metadata:
name: test
name: {{ include "openmatch.fullname" . }}-test
namespace: {{ .Release.Namespace }}
annotations:
{{- include "openmatch.chartmeta" . | nindent 4 }}
@ -52,19 +52,19 @@ metadata:
spec:
# Specifies the duration in seconds relative to the startTime that the job may be active before the system tries to terminate it.
activeDeadlineSeconds: 900
serviceAccountName: open-match-test-service
serviceAccountName: {{ include "openmatch.fullname" . }}-test-service
automountServiceAccountToken: true
volumes:
- configMap:
defaultMode: 420
name: om-configmap-default
name: {{ include "openmatch.configmap.default" . }}
name: om-config-volume-default
- configMap:
defaultMode: 420
name: om-configmap-override
name: {{ include "openmatch.configmap.override" . }}
name: om-config-volume-override
containers:
- name: "test"
- name: {{ include "openmatch.fullname" . }}-test
volumeMounts:
- mountPath: /app/config/default
name: om-config-volume-default

View File

@ -17,7 +17,7 @@
apiVersion: v1
kind: Secret
metadata:
name: om-tls-rootca
name: {{ include "openmatch.fullname" . }}-tls-rootca
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
@ -31,9 +31,9 @@ data:
apiVersion: v1
kind: Secret
metadata:
name: om-tls-server
name: {{ include "openmatch.fullname" . }}-tls-server
namespace: {{ .Release.Namespace }}
annotations: {{- include "openmatch.chartmeta" . | nindent 2 }}
annotations: {{- include "openmatch.chartmeta" . | nindent 4 }}
labels:
app: {{ template "openmatch.name" . }}
component: tls

View File

@ -23,7 +23,7 @@
# Begins the configuration section for `query` component in Open Match.
# query:
#
# # Specifies om-query as the in-cluster domain name for the `query` service.
# # Override the default in-cluster domain name for the `query` service to om-query.
# hostName: om-query
#
# # Specifies the port for receiving RESTful HTTP requests in the `query` service.
@ -44,67 +44,68 @@
# # Specifies the image name to be used in a Kubernetes pod for `query` component.
# image: openmatch-query
swaggerui: &swaggerui
hostName: om-swaggerui
hostName:
httpPort: 51500
portType: ClusterIP
replicas: 1
image: openmatch-swaggerui
query: &query
hostName: om-query
hostName:
grpcPort: 50503
httpPort: 51503
portType: ClusterIP
replicas: 3
image: openmatch-query
frontend: &frontend
hostName: om-frontend
hostName:
grpcPort: 50504
httpPort: 51504
portType: ClusterIP
replicas: 3
image: openmatch-frontend
backend: &backend
hostName: om-backend
hostName:
grpcPort: 50505
httpPort: 51505
portType: ClusterIP
replicas: 3
image: openmatch-backend
synchronizer: &synchronizer
hostName: om-synchronizer
hostName:
grpcPort: 50506
httpPort: 51506
portType: ClusterIP
replicas: 1
image: openmatch-synchronizer
evaluator: &evaluator
hostName: om-evaluator
hostName:
grpcPort: 50508
httpPort: 51508
replicas: 3
function: &function
hostName: om-function
hostName:
grpcPort: 50502
httpPort: 51502
replicas: 3
# Specifies the location and name of the Open Match application-level config volumes.
# Used in template: `openmatch.volumemounts.configs` and `openmatch.volumes.configs` under `templates/_helpers.tpl` file.
# Used in template: `openmatch.volumemounts.configs` and `openmatch.volumes.configs` under `templates/_helpers.tpl` file.
configs:
default:
volumeName: om-config-volume-default
mountPath: /app/config/default
configName: om-configmap-default
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.default" . }}'
override:
volumeName: om-config-volume-override
mountPath: /app/config/override
configName: om-configmap-override
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.override" . }}'
# Override Redis settings
# https://hub.helm.sh/charts/stable/redis
# https://github.com/helm/charts/tree/master/stable/redis
redis:
fullnameOverride: om-redis
redisPort: 6379
usePassword: false
usePasswordFile: false
@ -133,7 +134,6 @@ redis:
slaveCount: 3
serviceAccount:
create: true
name: open-match-redis-service
slave:
persistence:
enabled: false
@ -174,7 +174,7 @@ open-match-core:
enabled: true
# Length of time between first fetch matches call, and when no further fetch
# matches calls will join the current evaluation/synchronization cycle,
# matches calls will join the current evaluation/synchronization cycle,
# instead waiting for the next cycle.
registrationInterval: 250ms
# Length of time after match function has started before it will be canceled,
@ -195,7 +195,7 @@ open-match-core:
# Otherwise the default is set to the om-redis instance.
hostname: # Your redis server address
port: 6379
user:
user:
pool:
maxIdle: 500
maxActive: 500
@ -208,8 +208,6 @@ open-match-core:
open-match-scale:
# Switch the value between true/false to turn on/off this subchart
enabled: false
frontend: *frontend
backend: *backend
# Controls if users need to install the monitoring tools in Open Match.
open-match-telemetry:
@ -222,7 +220,6 @@ open-match-customize:
enabled: false
evaluator: *evaluator
function: *function
query: *query
# You can override the evaluator/mmf image
# evaluator:
# image: [YOUR_EVALUATOR_IMAGE]
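The `&evaluator`/`*evaluator` pairs in this values file are plain YAML anchors and aliases: each service definition is declared once and re-used by reference in the subchart sections. A minimal illustration of the pattern (ports and names abbreviated from the file above):

```yaml
evaluator: &evaluator     # anchor: define the mapping once
  grpcPort: 50508
  httpPort: 51508
  replicas: 3
open-match-customize:
  evaluator: *evaluator   # alias: expands to the same mapping as above
```

This is why removing `query: *query` from `open-match-scale` changes only which subcharts receive the definition, not the definition itself.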
@ -249,8 +246,8 @@ global:
limits:
memory: 3Gi
cpu: 2
# Defines a service account which provides an identity for processes that run in a Pod in Open Match.
serviceAccount: open-match-unprivileged-service
# Overrides the name of the service account which provides an identity for processes that run in a Pod in Open Match.
serviceAccount:
# Use this field if you need to override the port type for all services defined in this chart
service:
portType:
@ -275,7 +272,6 @@ global:
tag: 0.0.0-dev
pullPolicy: Always
# Expose the telemetry configurations to all subcharts because prometheus, for example,
# requires pod-level annotation to customize its scrape path.
# See definitions in templates/_helpers.tpl - "prometheus.annotations" section for details
@ -286,8 +282,8 @@ global:
enabled: true
jaeger:
enabled: false
agentEndpoint: "open-match-jaeger-agent:6831"
collectorEndpoint: "http://open-match-jaeger-collector:14268/api/traces"
agentEndpoint: '{{ include "openmatch.jaeger.agent" . }}'
collectorEndpoint: '{{ include "openmatch.jaeger.collector" . }}'
prometheus:
enabled: false
endpoint: "/metrics"


@ -23,7 +23,7 @@
# Begins the configuration section for `query` component in Open Match.
# query:
#
# # Specifies om-query as the in-cluster domain name for the `query` service.
# # Override the default in-cluster domain name for the `query` service to om-query.
# hostName: om-query
#
# # Specifies the port for receiving RESTful HTTP requests in the `query` service.
@ -44,46 +44,46 @@
# # Specifies the image name to be used in a Kubernetes pod for `query` component.
# image: openmatch-query
swaggerui: &swaggerui
hostName: om-swaggerui
hostName:
httpPort: 51500
portType: ClusterIP
replicas: 1
image: openmatch-swaggerui
query: &query
hostName: om-query
hostName:
grpcPort: 50503
httpPort: 51503
portType: ClusterIP
replicas: 3
image: openmatch-query
frontend: &frontend
hostName: om-frontend
hostName:
grpcPort: 50504
httpPort: 51504
portType: ClusterIP
replicas: 3
image: openmatch-frontend
backend: &backend
hostName: om-backend
hostName:
grpcPort: 50505
httpPort: 51505
portType: ClusterIP
replicas: 3
image: openmatch-backend
synchronizer: &synchronizer
hostName: om-synchronizer
hostName:
grpcPort: 50506
httpPort: 51506
portType: ClusterIP
replicas: 1
image: openmatch-synchronizer
evaluator: &evaluator
hostName: om-evaluator
hostName:
grpcPort: 50508
httpPort: 51508
replicas: 3
function: &function
hostName: om-function
hostName:
grpcPort: 50502
httpPort: 51502
replicas: 3
@ -94,17 +94,18 @@ configs:
default:
volumeName: om-config-volume-default
mountPath: /app/config/default
configName: om-configmap-default
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.default" . }}'
override:
volumeName: om-config-volume-override
mountPath: /app/config/override
configName: om-configmap-override
# This will be parsed through the `tpl` function.
configName: '{{ include "openmatch.configmap.override" . }}'
# Override Redis settings
# https://hub.helm.sh/charts/stable/redis
# https://github.com/helm/charts/tree/master/stable/redis
redis:
fullnameOverride: om-redis
redisPort: 6379
usePassword: false
usePasswordFile: false
@ -128,7 +129,6 @@ redis:
slaveCount: 2
serviceAccount:
create: true
name: open-match-redis-service
sysctlImage:
# Enable this setting in production if you are running Open Match under Linux environment
enabled: false
@ -159,7 +159,7 @@ open-match-core:
enabled: true
# Length of time between first fetch matches call, and when no further fetch
# matches calls will join the current evaluation/synchronization cycle,
# matches calls will join the current evaluation/synchronization cycle,
# instead waiting for the next cycle.
registrationInterval: 250ms
# Length of time after match function has started before it will be canceled,
@ -180,7 +180,7 @@ open-match-core:
# Otherwise the default is set to the om-redis instance.
hostname: # Your redis server address
port: 6379
user:
user:
pool:
maxIdle: 200
maxActive: 0
@ -193,8 +193,6 @@ open-match-core:
open-match-scale:
# Switch the value between true/false to turn on/off this subchart
enabled: false
frontend: *frontend
backend: *backend
# Controls if users need to install the monitoring tools in Open Match.
open-match-telemetry:
@ -207,7 +205,6 @@ open-match-customize:
enabled: false
evaluator: *evaluator
function: *function
query: *query
# You can override the evaluator/mmf image
# evaluator:
# image: [YOUR_EVALUATOR_IMAGE]
@ -234,8 +231,8 @@ global:
limits:
memory: 100Mi
cpu: 100m
# Defines a service account which provides an identity for processes that run in a Pod in Open Match.
serviceAccount: open-match-unprivileged-service
# Overrides the name of the service account which provides an identity for processes that run in a Pod in Open Match.
serviceAccount:
# Use this field if you need to override the port type for all services defined in this chart
service:
portType:
@ -257,10 +254,9 @@ global:
# Use this field if you need to override the image registry and image tag for all services defined in this chart
image:
registry: gcr.io/open-match-public-images
tag: 0.0.0-dev
tag: 1.1.0
pullPolicy: Always
# Expose the telemetry configurations to all subcharts because prometheus, for example,
# requires pod-level annotation to customize its scrape path.
# See definitions in templates/_helpers.tpl - "prometheus.annotations" section for details
@ -271,8 +267,8 @@ global:
enabled: true
jaeger:
enabled: false
agentEndpoint: "open-match-jaeger-agent:6831"
collectorEndpoint: "http://open-match-jaeger-collector:14268/api/traces"
agentEndpoint: '{{ include "openmatch.jaeger.agent" . }}'
collectorEndpoint: '{{ include "openmatch.jaeger.collector" . }}'
prometheus:
enabled: false
endpoint: "/metrics"
@ -282,3 +278,5 @@ global:
prefix: "open_match"
grafana:
enabled: false
# This will be called with `tpl` in the open-match-telemetry subchart namespace.
prometheusServer: 'http://{{ include "call-nested" (list . "prometheus" "prometheus.server.fullname") }}.{{ .Release.Namespace }}.svc.cluster.local:80/'


@ -26,10 +26,11 @@ import (
)
var (
totalBytesPerMatch = stats.Int64("open-match.dev/backend/total_bytes_per_match", "Total bytes per match", stats.UnitBytes)
ticketsPerMatch = stats.Int64("open-match.dev/backend/tickets_per_match", "Number of tickets per match", stats.UnitDimensionless)
ticketsReleased = stats.Int64("open-match.dev/backend/tickets_released", "Number of tickets released per request", stats.UnitDimensionless)
ticketsAssigned = stats.Int64("open-match.dev/backend/tickets_assigned", "Number of tickets assigned per request", stats.UnitDimensionless)
totalBytesPerMatch = stats.Int64("open-match.dev/backend/total_bytes_per_match", "Total bytes per match", stats.UnitBytes)
ticketsPerMatch = stats.Int64("open-match.dev/backend/tickets_per_match", "Number of tickets per match", stats.UnitDimensionless)
ticketsReleased = stats.Int64("open-match.dev/backend/tickets_released", "Number of tickets released per request", stats.UnitDimensionless)
ticketsAssigned = stats.Int64("open-match.dev/backend/tickets_assigned", "Number of tickets assigned per request", stats.UnitDimensionless)
ticketsTimeToAssignment = stats.Int64("open-match.dev/backend/ticket_time_to_assignment", "Time to assignment for tickets", stats.UnitMilliseconds)
totalMatchesView = &view.View{
Measure: totalBytesPerMatch,
@ -61,6 +62,13 @@ var (
Description: "Number of tickets released per request",
Aggregation: view.Sum(),
}
ticketsTimeToAssignmentView = &view.View{
Measure: ticketsTimeToAssignment,
Name: "open-match.dev/backend/ticket_time_to_assignment",
Description: "Time to assignment for tickets",
Aggregation: telemetry.DefaultMillisecondsDistribution,
}
)
// BindService creates the backend service and binds it to the serving harness.
@ -81,6 +89,7 @@ func BindService(p *appmain.Params, b *appmain.Bindings) error {
ticketsPerMatchView,
ticketsAssignedView,
ticketsReleasedView,
ticketsTimeToAssignmentView,
)
return nil
}


@ -22,11 +22,13 @@ import (
"net/http"
"strings"
"sync"
"time"
"go.opencensus.io/stats"
"github.com/golang/protobuf/jsonpb"
"github.com/golang/protobuf/proto"
"github.com/golang/protobuf/ptypes"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
@ -102,13 +104,8 @@ func (s *backendService) FetchMatches(req *pb.FetchMatchesRequest, stream pb.Bac
// TODO: Send mmf error in FetchSummary instead of erroring call.
if syncErr != nil || mmfErr != nil {
logger.WithFields(logrus.Fields{
"syncErr": syncErr,
"mmfErr": mmfErr,
}).Error("error(s) in FetchMatches call.")
return fmt.Errorf(
"error(s) in FetchMatches call. syncErr=[%s], mmfErr=[%s]",
"error(s) in FetchMatches call. syncErr=[%v], mmfErr=[%v]",
syncErr,
mmfErr,
)
@ -201,17 +198,13 @@ func callGrpcMmf(ctx context.Context, cc *rpc.ClientCache, profile *pb.MatchProf
var conn *grpc.ClientConn
conn, err := cc.GetGRPC(address)
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
"function": address,
}).Error("failed to establish grpc client connection to match function")
return status.Error(codes.InvalidArgument, "failed to connect to match function")
return status.Error(codes.InvalidArgument, "failed to establish grpc client connection to match function")
}
client := pb.NewMatchFunctionClient(conn)
stream, err := client.Run(ctx, &pb.RunRequest{Profile: profile})
if err != nil {
logger.WithError(err).Error("failed to run match function for profile")
err = errors.Wrap(err, "failed to run match function for profile")
if ctx.Err() != nil {
// gRPC likes to suppress the context's error, so stop that.
return ctx.Err()
@ -225,7 +218,7 @@ func callGrpcMmf(ctx context.Context, cc *rpc.ClientCache, profile *pb.MatchProf
break
}
if err != nil {
logger.Errorf("%v.Run() error, %v\n", client, err)
err = errors.Wrapf(err, "%v.Run() error, %v", client, err)
if ctx.Err() != nil {
// gRPC likes to suppress the context's error, so stop that.
return ctx.Err()
@ -245,11 +238,8 @@ func callGrpcMmf(ctx context.Context, cc *rpc.ClientCache, profile *pb.MatchProf
func callHTTPMmf(ctx context.Context, cc *rpc.ClientCache, profile *pb.MatchProfile, address string, proposals chan<- *pb.Match) error {
client, baseURL, err := cc.GetHTTP(address)
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
"function": address,
}).Error("failed to establish rest client connection to match function")
return status.Error(codes.InvalidArgument, "failed to connect to match function")
err = errors.Wrapf(err, "failed to establish rest client connection to match function: %s", address)
return status.Error(codes.InvalidArgument, err.Error())
}
var m jsonpb.Marshaler
@ -265,7 +255,7 @@ func callHTTPMmf(ctx context.Context, cc *rpc.ClientCache, profile *pb.MatchProf
resp, err := client.Do(req.WithContext(ctx))
if err != nil {
return status.Errorf(codes.Internal, "failed to get response from mmf run for proile %s: %s", profile.Name, err.Error())
return status.Errorf(codes.Internal, "failed to get response from mmf run for profile %s: %s", profile.Name, err.Error())
}
defer func() {
err = resp.Body.Close()
@ -306,9 +296,9 @@ func callHTTPMmf(ctx context.Context, cc *rpc.ClientCache, profile *pb.MatchProf
}
func (s *backendService) ReleaseTickets(ctx context.Context, req *pb.ReleaseTicketsRequest) (*pb.ReleaseTicketsResponse, error) {
err := doReleasetickets(ctx, req, s.store)
err := s.store.DeleteTicketsFromPendingRelease(ctx, req.GetTicketIds())
if err != nil {
logger.WithError(err).Error("failed to remove the awaiting tickets from the ignore list for requested tickets")
err = errors.Wrap(err, "failed to remove the awaiting tickets from the pending release for requested tickets")
return nil, err
}
@ -328,7 +318,6 @@ func (s *backendService) ReleaseAllTickets(ctx context.Context, req *pb.ReleaseA
func (s *backendService) AssignTickets(ctx context.Context, req *pb.AssignTicketsRequest) (*pb.AssignTicketsResponse, error) {
resp, err := doAssignTickets(ctx, req, s.store)
if err != nil {
logger.WithError(err).Error("failed to update assignments for requested tickets")
return nil, err
}
@ -342,12 +331,18 @@ func (s *backendService) AssignTickets(ctx context.Context, req *pb.AssignTicket
}
func doAssignTickets(ctx context.Context, req *pb.AssignTicketsRequest, store statestore.Service) (*pb.AssignTicketsResponse, error) {
resp, err := store.UpdateAssignments(ctx, req)
resp, tickets, err := store.UpdateAssignments(ctx, req)
if err != nil {
logger.WithError(err).Error("failed to update assignments")
return nil, err
}
for _, ticket := range tickets {
err = recordTimeToAssignment(ctx, ticket)
if err != nil {
logger.WithError(err).Errorf("failed to record time to assignment for ticket %s", ticket.Id)
}
}
ids := []string{}
for _, ag := range req.Assignments {
@ -363,7 +358,7 @@ func doAssignTickets(ctx context.Context, req *pb.AssignTicketsRequest, store st
}
}
if err = store.DeleteTicketsFromIgnoreList(ctx, ids); err != nil {
if err = store.DeleteTicketsFromPendingRelease(ctx, ids); err != nil {
logger.WithFields(logrus.Fields{
"ticket_ids": ids,
}).Error(err)
@ -372,14 +367,18 @@ func doAssignTickets(ctx context.Context, req *pb.AssignTicketsRequest, store st
return resp, nil
}
func doReleasetickets(ctx context.Context, req *pb.ReleaseTicketsRequest, store statestore.Service) error {
err := store.DeleteTicketsFromIgnoreList(ctx, req.GetTicketIds())
func recordTimeToAssignment(ctx context.Context, ticket *pb.Ticket) error {
if ticket.Assignment == nil {
return fmt.Errorf("assignment for ticket %s is nil", ticket.Id)
}
now := time.Now()
created, err := ptypes.Timestamp(ticket.CreateTime)
if err != nil {
logger.WithFields(logrus.Fields{
"ticket_ids": req.GetTicketIds(),
}).WithError(err).Error("failed to delete the tickets from the ignore list")
return err
}
stats.Record(ctx, ticketsTimeToAssignment.M(now.Sub(created).Milliseconds()))
return nil
}


@ -21,7 +21,7 @@ import (
"github.com/golang/protobuf/proto"
"github.com/golang/protobuf/ptypes"
"github.com/golang/protobuf/ptypes/any"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"open-match.dev/open-match/pkg/pb"
)
@ -122,17 +122,17 @@ func TestEvaluate(t *testing.T) {
close(in)
err := evaluate(context.Background(), in, out)
assert.Nil(t, err)
require.Nil(t, err)
gotMatchIDs := []string{}
close(out)
for id := range out {
gotMatchIDs = append(gotMatchIDs, id)
}
assert.Equal(t, len(test.wantMatchIDs), len(gotMatchIDs))
require.Equal(t, len(test.wantMatchIDs), len(gotMatchIDs))
for _, mID := range gotMatchIDs {
assert.Contains(t, test.wantMatchIDs, mID)
require.Contains(t, test.wantMatchIDs, mID)
}
})
}
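The assert-to-require swaps above change failure behavior: testify's `assert` records a failure and lets the test keep running, while `require` stops the test at the first failed check. A toy stdlib sketch of that difference (no testify involved; all names here are illustrative):

```go
package main

import "fmt"

// demo runs two checks. With useRequire=false (assert-style) a failed check
// is logged and later checks still run; with useRequire=true (require-style)
// the first failure aborts immediately.
func demo(useRequire bool) []string {
	var log []string
	check := func(ok bool, msg string) bool {
		if !ok {
			log = append(log, "FAIL: "+msg)
			return !useRequire // require aborts, assert continues
		}
		return true
	}
	if !check(1+1 == 3, "arithmetic") {
		return log // require-style early exit
	}
	log = append(log, "reached later checks")
	return log
}

func main() {
	fmt.Println(demo(false)) // assert-style: later checks still run
	fmt.Println(demo(true))  // require-style: stops at first failure
}
```

Using `require` in these tests avoids cascading failures, such as dereferencing a result that an earlier failed check already proved is nil.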


@ -19,19 +19,12 @@ import (
"context"
"io"
"github.com/sirupsen/logrus"
"github.com/pkg/errors"
"go.opencensus.io/stats"
"golang.org/x/sync/errgroup"
"open-match.dev/open-match/pkg/pb"
)
var (
logger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": "evaluator.harness.golang",
})
)
// Evaluator is the function signature for the Evaluator to be implemented by
// the user. The harness will pass the Matches to evaluate to the Evaluator
// and the Evaluator will return an accepted list of Matches.
@ -95,8 +88,5 @@ func (s *evaluatorService) Evaluate(stream pb.Evaluator_EvaluateServer) error {
})
err := g.Wait()
if err != nil {
logger.WithError(err).Error("Error in evaluator.Evaluate")
}
return err
return errors.Wrap(err, "Error in evaluator.Evaluate")
}


@ -83,19 +83,11 @@ func doCreateTicket(ctx context.Context, req *pb.CreateTicketRequest, store stat
err := store.CreateTicket(ctx, ticket)
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
"ticket": ticket,
}).Error("failed to create the ticket")
return nil, err
}
err = store.IndexTicket(ctx, ticket)
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
"ticket": ticket,
}).Error("failed to index the ticket")
return nil, err
}
@ -118,10 +110,6 @@ func doDeleteTicket(ctx context.Context, id string, store statestore.Service) er
// Deindex this Ticket to remove it from matchmaking pool.
err := store.DeindexTicket(ctx, id)
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
"id": id,
}).Error("failed to deindex the ticket")
return err
}
@ -137,12 +125,12 @@ func doDeleteTicket(ctx context.Context, id string, store statestore.Service) er
"id": id,
}).Error("failed to delete the ticket")
}
err = store.DeleteTicketsFromIgnoreList(ctx, []string{id})
err = store.DeleteTicketsFromPendingRelease(ctx, []string{id})
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
"id": id,
}).Error("failed to delete the ticket from ignorelist")
}).Error("failed to delete the ticket from pendingRelease")
}
// TODO: If other redis queues are implemented or we have custom index fields
// created by Open Match, those need to be cleaned up here.
@ -152,20 +140,7 @@ func doDeleteTicket(ctx context.Context, id string, store statestore.Service) er
// GetTicket get the Ticket associated with the specified TicketId.
func (s *frontendService) GetTicket(ctx context.Context, req *pb.GetTicketRequest) (*pb.Ticket, error) {
return doGetTickets(ctx, req.GetTicketId(), s.store)
}
func doGetTickets(ctx context.Context, id string, store statestore.Service) (*pb.Ticket, error) {
ticket, err := store.GetTicket(ctx, id)
if err != nil {
logger.WithFields(logrus.Fields{
"error": err.Error(),
"id": id,
}).Error("failed to get the ticket")
return nil, err
}
return ticket, nil
return s.store.GetTicket(ctx, req.GetTicketId())
}
// WatchAssignments stream back Assignment of the specified TicketId if it is updated.
@ -197,7 +172,6 @@ func doWatchAssignments(ctx context.Context, id string, sender func(*pb.Assignme
err := sender(currAssignment)
if err != nil {
logger.WithError(err).Error("failed to send Redis response to grpc server")
return status.Errorf(codes.Aborted, err.Error())
}
}


@ -23,7 +23,7 @@ import (
"time"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"open-match.dev/open-match/internal/statestore"
@ -77,12 +77,12 @@ func TestDoCreateTickets(t *testing.T) {
test.preAction(cancel)
res, err := doCreateTicket(ctx, &pb.CreateTicketRequest{Ticket: test.ticket}, store)
assert.Equal(t, test.wantCode, status.Convert(err).Code())
require.Equal(t, test.wantCode.String(), status.Convert(err).Code().String())
if err == nil {
matched, err := regexp.MatchString(`[0-9a-v]{20}`, res.GetId())
assert.True(t, matched)
assert.Nil(t, err)
assert.Equal(t, test.ticket.SearchFields.DoubleArgs["test-arg"], res.SearchFields.DoubleArgs["test-arg"])
require.True(t, matched)
require.Nil(t, err)
require.Equal(t, test.ticket.SearchFields.DoubleArgs["test-arg"], res.SearchFields.DoubleArgs["test-arg"])
}
})
}
@ -118,12 +118,12 @@ func TestDoWatchAssignments(t *testing.T) {
{
description: "expect two assignment reads from preAction writes and fail in grpc aborted code",
preAction: func(ctx context.Context, t *testing.T, store statestore.Service, wantAssignments []*pb.Assignment, wg *sync.WaitGroup) {
assert.Nil(t, store.CreateTicket(ctx, testTicket))
require.Nil(t, store.CreateTicket(ctx, testTicket))
go func(wg *sync.WaitGroup) {
for i := 0; i < len(wantAssignments); i++ {
time.Sleep(50 * time.Millisecond)
_, err := store.UpdateAssignments(ctx, &pb.AssignTicketsRequest{
_, _, err := store.UpdateAssignments(ctx, &pb.AssignTicketsRequest{
Assignments: []*pb.AssignmentGroup{
{
TicketIds: []string{testTicket.GetId()},
@ -131,7 +131,7 @@ func TestDoWatchAssignments(t *testing.T) {
},
},
})
assert.Nil(t, err)
require.Nil(t, err)
wg.Done()
}
}(wg)
@ -155,11 +155,11 @@ func TestDoWatchAssignments(t *testing.T) {
test.preAction(ctx, t, store, test.wantAssignments, &wg)
err := doWatchAssignments(ctx, testTicket.GetId(), senderGenerator(gotAssignments, len(test.wantAssignments)), store)
assert.Equal(t, test.wantCode, status.Convert(err).Code())
require.Equal(t, test.wantCode.String(), status.Convert(err).Code().String())
wg.Wait()
for i := 0; i < len(gotAssignments); i++ {
assert.Equal(t, gotAssignments[i], test.wantAssignments[i])
require.Equal(t, gotAssignments[i], test.wantAssignments[i])
}
})
}
@ -211,7 +211,7 @@ func TestDoDeleteTicket(t *testing.T) {
test.preAction(ctx, cancel, store)
err := doDeleteTicket(ctx, fakeTicket.GetId(), store)
assert.Equal(t, test.wantCode, status.Convert(err).Code())
require.Equal(t, test.wantCode.String(), status.Convert(err).Code().String())
})
}
}
@ -264,12 +264,12 @@ func TestDoGetTicket(t *testing.T) {
test.preAction(ctx, cancel, store)
ticket, err := doGetTickets(ctx, fakeTicket.GetId(), store)
assert.Equal(t, test.wantCode, status.Convert(err).Code())
ticket, err := store.GetTicket(ctx, fakeTicket.GetId())
require.Equal(t, test.wantCode.String(), status.Convert(err).Code().String())
if err == nil {
assert.Equal(t, test.wantTicket.GetId(), ticket.GetId())
assert.Equal(t, test.wantTicket.SearchFields.DoubleArgs, ticket.SearchFields.DoubleArgs)
require.Equal(t, test.wantTicket.GetId(), ticket.GetId())
require.Equal(t, test.wantTicket.SearchFields.DoubleArgs, ticket.SearchFields.DoubleArgs)
}
})
}


@ -67,7 +67,7 @@ func (s *queryService) QueryTickets(req *pb.QueryTicketsRequest, responseServer
}
})
if err != nil {
logger.WithError(err).Error("Failed to run request.")
err = errors.Wrap(err, "QueryTickets: failed to run request")
return err
}
stats.Record(ctx, ticketsPerQuery.M(int64(len(results))))
@ -111,7 +111,7 @@ func (s *queryService) QueryTicketIds(req *pb.QueryTicketIdsRequest, responseSer
}
})
if err != nil {
logger.WithError(err).Error("Failed to run request.")
err = errors.Wrap(err, "QueryTicketIds: failed to run request")
return err
}
stats.Record(ctx, ticketsPerQuery.M(int64(len(results))))


@ -18,6 +18,7 @@ import (
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/require"
"open-match.dev/open-match/internal/config"
)
@ -61,9 +62,7 @@ func TestGetPageSize(t *testing.T) {
cfg := viper.New()
tt.configure(cfg)
actual := getPageSize(cfg)
if actual != tt.expected {
t.Errorf("got %d, want %d", actual, tt.expected)
}
require.Equal(t, tt.expected, actual)
})
}
}


@ -43,7 +43,7 @@ var (
// Streams from multiple GRPC calls of matches are combined on a single channel.
// These matches are sent to the evaluator, then the tickets are added to the
// ignore list. Finally the matches are returned to the calling stream.
// pending release list. Finally the matches are returned to the calling stream.
// receive from backend | Synchronize
// -> m1c ->
@ -55,7 +55,7 @@ var (
// -> m4c -> (buffered)
// send to evaluator | wrapEvaluator
// -> m5c -> (buffered)
// add tickets to ignore list | addMatchesToIgnoreList
// add tickets to pending release | addMatchesToPendingRelease
// -> m6c ->
// fan out to origin synchronize call | fanInFanOut
// -> (Synchronize call specific ) m7c -> (buffered)
@ -240,8 +240,8 @@ func (s *synchronizerService) runCycle() {
go s.cacheMatchIDToTicketIDs(matchTickets, m3c, m4c)
go s.wrapEvaluator(ctx, cancel, bufferMatchChannel(m4c), m5c)
go func() {
s.addMatchesToIgnoreList(ctx, matchTickets, cancel, bufferStringChannel(m5c), m6c)
// Wait for ignore list, but not all matches returned, the next cycle
s.addMatchesToPendingRelease(ctx, matchTickets, cancel, bufferStringChannel(m5c), m6c)
// Wait for pending release, but not all matches returned, the next cycle
// can start now.
close(closedOnCycleEnd)
}()
@ -435,10 +435,10 @@ func getTicketIds(tickets []*pb.Ticket) []string {
///////////////////////////////////////
// Calls statestore to add all of the tickets returned by the evaluator to the
// ignorelist. If it partially fails for whatever reason (not all tickets will
// pendingRelease list. If it partially fails for whatever reason (not all tickets will
// necessarily be in the same call), only the matches which can be safely
// returned to the Synchronize calls are.
func (s *synchronizerService) addMatchesToIgnoreList(ctx context.Context, m *sync.Map, cancel contextcause.CancelErrFunc, m5c <-chan []string, m6c chan<- string) {
func (s *synchronizerService) addMatchesToPendingRelease(ctx context.Context, m *sync.Map, cancel contextcause.CancelErrFunc, m5c <-chan []string, m6c chan<- string) {
totalMatches := 0
successfulMatches := 0
var lastErr error
@ -453,7 +453,7 @@ func (s *synchronizerService) addMatchesToIgnoreList(ctx context.Context, m *syn
}
}
err := s.store.AddTicketsToIgnoreList(ctx, ids)
err := s.store.AddTicketsToPendingRelease(ctx, ids)
totalMatches += len(mIDs)
if err == nil {
@ -472,10 +472,10 @@ func (s *synchronizerService) addMatchesToIgnoreList(ctx context.Context, m *syn
"error": lastErr.Error(),
"totalMatches": totalMatches,
"successfulMatches": successfulMatches,
}).Error("some or all matches were not successfully added to the ignore list, failed matches dropped")
}).Error("some or all matches were not successfully added to the pending release, failed matches dropped")
if successfulMatches == 0 {
cancel(fmt.Errorf("no matches successfully added to the ignore list. Last error: %w", lastErr))
cancel(fmt.Errorf("no matches successfully added to the pending release. Last error: %w", lastErr))
}
}
close(m6c)
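The renamed `addMatchesToPendingRelease` is one stage in the m1c..m7c channel chain diagrammed above. A toy version of two adjacent stages shows the shape of that pipeline: each stage consumes its input channel, forwards what survives, and closes its output so downstream stages terminate. All names here are illustrative, not the real service's API.

```go
package main

import "fmt"

// runPipeline fans match ids through a two-stage channel chain in the
// spirit of the synchronizer's m1c..m6c wiring: one goroutine receives
// from backends, the next stands in for "add to pending release" and
// forwards the ids that succeed.
func runPipeline(ids []string) []string {
	m1c := make(chan string)
	m2c := make(chan string)

	go func() { // receive from backend
		for _, id := range ids {
			m1c <- id
		}
		close(m1c) // closing lets the next stage's range loop finish
	}()

	go func() { // add tickets to pending release, forward successes
		for id := range m1c {
			m2c <- id
		}
		close(m2c)
	}()

	out := []string{}
	for id := range m2c {
		out = append(out, id)
	}
	return out
}

func main() {
	fmt.Println(runPipeline([]string{"match-a", "match-b"}))
}
```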


@ -19,7 +19,7 @@ import (
"github.com/golang/protobuf/ptypes"
"github.com/golang/protobuf/ptypes/timestamp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"open-match.dev/open-match/internal/filter/testcases"
@ -31,9 +31,10 @@ func TestMeetsCriteria(t *testing.T) {
tc := tc
t.Run(tc.Name, func(t *testing.T) {
pf, err := NewPoolFilter(tc.Pool)
if err != nil {
t.Error("pool should be valid")
}
require.NoError(t, err)
require.NotNil(t, pf)
tc.Ticket.CreateTime = ptypes.TimestampNow()
if !pf.In(tc.Ticket) {
t.Error("ticket should be included in the pool")
@ -45,9 +46,10 @@ func TestMeetsCriteria(t *testing.T) {
tc := tc
t.Run(tc.Name, func(t *testing.T) {
pf, err := NewPoolFilter(tc.Pool)
if err != nil {
t.Error("pool should be valid")
}
require.NoError(t, err)
require.NotNil(t, pf)
tc.Ticket.CreateTime = ptypes.TimestampNow()
if pf.In(tc.Ticket) {
t.Error("ticket should be excluded from the pool")
@ -83,10 +85,13 @@ func TestValidPoolFilter(t *testing.T) {
tc := tc
t.Run(tc.name, func(t *testing.T) {
pf, err := NewPoolFilter(tc.pool)
assert.Nil(t, pf)
require.Error(t, err)
require.Nil(t, pf)
s := status.Convert(err)
assert.Equal(t, tc.code, s.Code())
assert.Equal(t, tc.msg, s.Message())
require.Equal(t, tc.code, s.Code())
require.Equal(t, tc.msg, s.Message())
})
}
}


@ -21,7 +21,7 @@ import (
stackdriver "github.com/TV4/logrus-stackdriver-formatter"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewFormatter(t *testing.T) {
@ -37,9 +37,9 @@ func TestNewFormatter(t *testing.T) {
for _, tc := range testCases {
tc := tc
t.Run(fmt.Sprintf("newFormatter(%s) => %s", tc.in, tc.expected), func(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
actual := newFormatter(tc.in)
assert.Equal(reflect.TypeOf(tc.expected), reflect.TypeOf(actual))
require.Equal(reflect.TypeOf(tc.expected), reflect.TypeOf(actual))
})
}
}
@ -60,9 +60,9 @@ func TestIsDebugLevel(t *testing.T) {
for _, tc := range testCases {
tc := tc
t.Run(fmt.Sprintf("isDebugLevel(%s) => %t", tc.in, tc.expected), func(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
actual := isDebugLevel(tc.in)
assert.Equal(tc.expected, actual)
require.Equal(tc.expected, actual)
})
}
}
@ -87,9 +87,9 @@ func TestToLevel(t *testing.T) {
for _, tc := range testCases {
tc := tc
t.Run(fmt.Sprintf("toLevel(%s) => %s", tc.in, tc.expected), func(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
actual := toLevel(tc.in)
assert.Equal(tc.expected, actual)
require.Equal(tc.expected, actual)
})
}
}


@ -18,7 +18,7 @@ import (
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const (
@ -27,31 +27,31 @@ const (
)
func TestGetGRPC(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
cc := NewClientCache(viper.New())
client, err := cc.GetGRPC(fakeGRPCAddress)
assert.Nil(err)
require.Nil(err)
cachedClient, err := cc.GetGRPC(fakeGRPCAddress)
assert.Nil(err)
require.Nil(err)
// Test caching by comparing pointer value
assert.EqualValues(client, cachedClient)
require.EqualValues(client, cachedClient)
}
func TestGetHTTP(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
cc := NewClientCache(viper.New())
client, address, err := cc.GetHTTP(fakeHTTPAddress)
assert.Nil(err)
assert.Equal(fakeHTTPAddress, address)
require.Nil(err)
require.Equal(fakeHTTPAddress, address)
cachedClient, address, err := cc.GetHTTP(fakeHTTPAddress)
assert.Nil(err)
assert.Equal(fakeHTTPAddress, address)
require.Nil(err)
require.Equal(fakeHTTPAddress, address)
// Test caching by comparing pointer value
assert.EqualValues(client, cachedClient)
require.EqualValues(client, cachedClient)
}

View File

@ -23,7 +23,7 @@ import (
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/telemetry"
@ -34,39 +34,39 @@ import (
)
func TestSecureGRPCFromConfig(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
cfg, rpcParams, closer := configureConfigAndKeysForTesting(assert, true)
cfg, rpcParams, closer := configureConfigAndKeysForTesting(t, require, true)
defer closer()
runGrpcClientTests(t, assert, cfg, rpcParams)
runGrpcClientTests(t, require, cfg, rpcParams)
}
func TestInsecureGRPCFromConfig(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
cfg, rpcParams, closer := configureConfigAndKeysForTesting(assert, false)
cfg, rpcParams, closer := configureConfigAndKeysForTesting(t, require, false)
defer closer()
runGrpcClientTests(t, assert, cfg, rpcParams)
runGrpcClientTests(t, require, cfg, rpcParams)
}
func TestHTTPSFromConfig(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
cfg, rpcParams, closer := configureConfigAndKeysForTesting(assert, true)
cfg, rpcParams, closer := configureConfigAndKeysForTesting(t, require, true)
defer closer()
runHTTPClientTests(assert, cfg, rpcParams)
runHTTPClientTests(require, cfg, rpcParams)
}
func TestInsecureHTTPFromConfig(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
cfg, rpcParams, closer := configureConfigAndKeysForTesting(assert, false)
cfg, rpcParams, closer := configureConfigAndKeysForTesting(t, require, false)
defer closer()
runHTTPClientTests(assert, cfg, rpcParams)
runHTTPClientTests(require, cfg, rpcParams)
}
func TestSanitizeHTTPAddress(t *testing.T) {
@ -88,15 +88,15 @@ func TestSanitizeHTTPAddress(t *testing.T) {
tc := testCase
description := fmt.Sprintf("sanitizeHTTPAddress(%s, %t) => (%s, %v)", tc.address, tc.preferHTTPS, tc.expected, tc.err)
t.Run(description, func(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
actual, err := sanitizeHTTPAddress(tc.address, tc.preferHTTPS)
assert.Equal(tc.expected, actual)
assert.Equal(tc.err, err)
require.Equal(tc.expected, actual)
require.Equal(tc.err, err)
})
}
}
func runGrpcClientTests(t *testing.T, assert *assert.Assertions, cfg config.View, rpcParams *ServerParams) {
func runGrpcClientTests(t *testing.T, require *require.Assertions, cfg config.View, rpcParams *ServerParams) {
// Serve a fake frontend server and wait for its full start up
ff := &shellTesting.FakeFrontend{}
rpcParams.AddHandleFunc(func(s *grpc.Server) {
@ -106,22 +106,22 @@ func runGrpcClientTests(t *testing.T, assert *assert.Assertions, cfg config.View
s := &Server{}
defer s.Stop()
err := s.Start(rpcParams)
assert.Nil(err)
require.Nil(err)
// Acquire grpc client
grpcConn, err := GRPCClientFromConfig(cfg, "test")
assert.Nil(err)
assert.NotNil(grpcConn)
require.Nil(err)
require.NotNil(grpcConn)
// Confirm the client works as expected
ctx := utilTesting.NewContext(t)
feClient := pb.NewFrontendServiceClient(grpcConn)
grpcResp, err := feClient.CreateTicket(ctx, &pb.CreateTicketRequest{})
assert.Nil(err)
assert.NotNil(grpcResp)
require.Nil(err)
require.NotNil(grpcResp)
}
func runHTTPClientTests(assert *assert.Assertions, cfg config.View, rpcParams *ServerParams) {
func runHTTPClientTests(require *require.Assertions, cfg config.View, rpcParams *ServerParams) {
// Serve a fake frontend server and wait for its full start up
ff := &shellTesting.FakeFrontend{}
rpcParams.AddHandleFunc(func(s *grpc.Server) {
@ -130,20 +130,20 @@ func runHTTPClientTests(assert *assert.Assertions, cfg config.View, rpcParams *S
s := &Server{}
defer s.Stop()
err := s.Start(rpcParams)
assert.Nil(err)
require.Nil(err)
// Acquire http client
httpClient, baseURL, err := HTTPClientFromConfig(cfg, "test")
assert.Nil(err)
require.Nil(err)
// Confirm the client works as expected
httpReq, err := http.NewRequest(http.MethodGet, baseURL+telemetry.HealthCheckEndpoint, nil)
assert.Nil(err)
assert.NotNil(httpReq)
require.Nil(err)
require.NotNil(httpReq)
httpResp, err := httpClient.Do(httpReq)
assert.Nil(err)
assert.NotNil(httpResp)
require.Nil(err)
require.NotNil(httpResp)
defer func() {
if httpResp != nil {
httpResp.Body.Close()
@ -151,13 +151,13 @@ func runHTTPClientTests(assert *assert.Assertions, cfg config.View, rpcParams *S
}()
body, err := ioutil.ReadAll(httpResp.Body)
assert.Nil(err)
assert.Equal(200, httpResp.StatusCode)
assert.Equal("ok", string(body))
require.Nil(err)
require.Equal(200, httpResp.StatusCode)
require.Equal("ok", string(body))
}
// Generate a config view and optional TLS key manifests for testing
func configureConfigAndKeysForTesting(assert *assert.Assertions, tlsEnabled bool) (config.View, *ServerParams, func()) {
func configureConfigAndKeysForTesting(t *testing.T, require *require.Assertions, tlsEnabled bool) (config.View, *ServerParams, func()) {
// Create netlisteners on random ports used for rpc serving
grpcL := MustListen()
httpL := MustListen()
@ -171,7 +171,7 @@ func configureConfigAndKeysForTesting(assert *assert.Assertions, tlsEnabled bool
// Create temporary TLS key files for testing
pubFile, err := ioutil.TempFile("", "pub*")
assert.Nil(err)
require.Nil(err)
if tlsEnabled {
// Generate public and private key bytes
@ -179,11 +179,11 @@ func configureConfigAndKeysForTesting(assert *assert.Assertions, tlsEnabled bool
fmt.Sprintf("localhost:%s", MustGetPortNumber(grpcL)),
fmt.Sprintf("localhost:%s", MustGetPortNumber(httpL)),
})
assert.Nil(err)
require.Nil(err)
// Write certgen key bytes to the temp files
err = ioutil.WriteFile(pubFile.Name(), pubBytes, 0400)
assert.Nil(err)
require.Nil(err)
// Generate a config view with paths to the manifests
cfg.Set(configNameClientTrustedCertificatePath, pubFile.Name())
@ -191,7 +191,7 @@ func configureConfigAndKeysForTesting(assert *assert.Assertions, tlsEnabled bool
rpcParams.SetTLSConfiguration(pubBytes, pubBytes, priBytes)
}
return cfg, rpcParams, func() { removeTempFile(assert, pubFile.Name()) }
return cfg, rpcParams, func() { removeTempFile(t, pubFile.Name()) }
}
func MustListen() net.Listener {
@ -210,9 +210,11 @@ func MustGetPortNumber(l net.Listener) string {
return port
}
func removeTempFile(assert *assert.Assertions, paths ...string) {
func removeTempFile(t *testing.T, paths ...string) {
for _, path := range paths {
err := os.Remove(path)
assert.Nil(err)
if err != nil {
t.Errorf("cannot remove the temporary file: %s, err: %s", path, err.Error())
}
}
}

View File

@ -22,13 +22,13 @@ import (
"open-match.dev/open-match/pkg/pb"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
shellTesting "open-match.dev/open-match/internal/testing"
)
func TestInsecureStartStop(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
grpcL := MustListen()
httpL := MustListen()
ff := &shellTesting.FakeFrontend{}
@ -40,15 +40,15 @@ func TestInsecureStartStop(t *testing.T) {
s := newInsecureServer(grpcL, httpL)
defer s.stop()
err := s.start(params)
assert.Nil(err)
require.Nil(err)
conn, err := grpc.Dial(fmt.Sprintf(":%s", MustGetPortNumber(grpcL)), grpc.WithInsecure())
assert.Nil(err)
require.Nil(err)
defer conn.Close()
endpoint := fmt.Sprintf("http://localhost:%s", MustGetPortNumber(httpL))
httpClient := &http.Client{
Timeout: time.Second,
}
runGrpcWithProxyTests(t, assert, s, conn, httpClient, endpoint)
runGrpcWithProxyTests(t, require, s, conn, httpClient, endpoint)
}

View File

@ -87,6 +87,11 @@ type ServerParams struct {
// NewServerParamsFromConfig returns server Params initialized from the configuration file.
func NewServerParamsFromConfig(cfg config.View, prefix string, listen func(network, address string) (net.Listener, error)) (*ServerParams, error) {
serverLogger = logrus.WithFields(logrus.Fields{
"app": "openmatch",
"component": prefix,
})
grpcL, err := listen("tcp", fmt.Sprintf(":%d", cfg.GetInt(prefix+".grpcport")))
if err != nil {
return nil, errors.Wrap(err, "can't start listener for grpc")
@ -283,6 +288,9 @@ func newGRPCServerOptions(params *ServerParams) []grpc.ServerOption {
}
}
ui = append(ui, serverUnaryInterceptor)
si = append(si, serverStreamInterceptor)
if params.enableMetrics {
opts = append(opts, grpc.StatsHandler(&ocgrpc.ServerHandler{}))
}
@ -297,3 +305,25 @@ func newGRPCServerOptions(params *ServerParams) []grpc.ServerOption {
},
))
}
func serverStreamInterceptor(srv interface{},
stream grpc.ServerStream,
info *grpc.StreamServerInfo,
handler grpc.StreamHandler) error {
err := handler(srv, stream)
if err != nil {
serverLogger.Error(err)
}
return err
}
func serverUnaryInterceptor(ctx context.Context,
req interface{},
info *grpc.UnaryServerInfo,
handler grpc.UnaryHandler) (interface{}, error) {
h, err := handler(ctx, req)
if err != nil {
serverLogger.Error(err)
}
return h, err
}

View File

@ -22,7 +22,7 @@ import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
"open-match.dev/open-match/internal/telemetry"
shellTesting "open-match.dev/open-match/internal/testing"
@ -31,7 +31,7 @@ import (
)
func TestStartStopServer(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
grpcL := MustListen()
httpL := MustListen()
ff := &shellTesting.FakeFrontend{}
@ -44,57 +44,57 @@ func TestStartStopServer(t *testing.T) {
defer s.Stop()
err := s.Start(params)
assert.Nil(err)
require.Nil(err)
conn, err := grpc.Dial(fmt.Sprintf(":%s", MustGetPortNumber(grpcL)), grpc.WithInsecure())
assert.Nil(err)
require.Nil(err)
endpoint := fmt.Sprintf("http://localhost:%s", MustGetPortNumber(httpL))
httpClient := &http.Client{
Timeout: time.Second,
}
runGrpcWithProxyTests(t, assert, s.serverWithProxy, conn, httpClient, endpoint)
runGrpcWithProxyTests(t, require, s.serverWithProxy, conn, httpClient, endpoint)
}
func runGrpcWithProxyTests(t *testing.T, assert *assert.Assertions, s grpcServerWithProxy, conn *grpc.ClientConn, httpClient *http.Client, endpoint string) {
func runGrpcWithProxyTests(t *testing.T, require *require.Assertions, s grpcServerWithProxy, conn *grpc.ClientConn, httpClient *http.Client, endpoint string) {
ctx := utilTesting.NewContext(t)
feClient := pb.NewFrontendServiceClient(conn)
grpcResp, err := feClient.CreateTicket(ctx, &pb.CreateTicketRequest{})
assert.Nil(err)
assert.NotNil(grpcResp)
require.Nil(err)
require.NotNil(grpcResp)
httpReq, err := http.NewRequest(http.MethodPost, endpoint+"/v1/frontendservice/tickets", strings.NewReader("{}"))
assert.Nil(err)
assert.NotNil(httpReq)
require.Nil(err)
require.NotNil(httpReq)
httpResp, err := httpClient.Do(httpReq)
assert.Nil(err)
assert.NotNil(httpResp)
require.Nil(err)
require.NotNil(httpResp)
defer func() {
if httpResp != nil {
httpResp.Body.Close()
}
}()
body, err := ioutil.ReadAll(httpResp.Body)
assert.Nil(err)
assert.Equal(200, httpResp.StatusCode)
assert.Equal("{}", string(body))
require.Nil(err)
require.Equal(200, httpResp.StatusCode)
require.Equal("{}", string(body))
httpReq, err = http.NewRequest(http.MethodGet, endpoint+telemetry.HealthCheckEndpoint, nil)
assert.Nil(err)
require.Nil(err)
httpResp, err = httpClient.Do(httpReq)
assert.Nil(err)
assert.NotNil(httpResp)
require.Nil(err)
require.NotNil(httpResp)
defer func() {
if httpResp != nil {
httpResp.Body.Close()
}
}()
body, err = ioutil.ReadAll(httpResp.Body)
assert.Nil(err)
assert.Equal(200, httpResp.StatusCode)
assert.Equal("ok", string(body))
require.Nil(err)
require.Equal(200, httpResp.StatusCode)
require.Equal("ok", string(body))
s.stop()
}

View File

@ -22,7 +22,7 @@ import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
shellTesting "open-match.dev/open-match/internal/testing"
@ -32,14 +32,14 @@ import (
// TestStartStopTlsServerWithSingleCertificate verifies that we can have a gRPC+TLS+HTTPS server/client work with a single self-signed certificate.
func TestStartStopTlsServerWithSingleCertificate(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
grpcL := MustListen()
proxyL := MustListen()
grpcAddress := fmt.Sprintf("localhost:%s", MustGetPortNumber(grpcL))
proxyAddress := fmt.Sprintf("localhost:%s", MustGetPortNumber(proxyL))
allHostnames := []string{grpcAddress, proxyAddress}
pub, priv, err := certgenTesting.CreateCertificateAndPrivateKeyForTesting(allHostnames)
assert.Nil(err)
require.Nil(err)
runTestStartStopTLSServer(t, &tlsServerTestParams{
rootPublicCertificateFileData: pub,
rootPrivateKeyFileData: priv,
@ -54,17 +54,17 @@ func TestStartStopTlsServerWithSingleCertificate(t *testing.T) {
// TestStartStopTlsServerWithCARootedCertificate verifies that we can have a gRPC+TLS+HTTPS server/client work with a self-signed CA-rooted certificate.
func TestStartStopTlsServerWithCARootedCertificate(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
grpcL := MustListen()
proxyL := MustListen()
grpcAddress := fmt.Sprintf("localhost:%s", MustGetPortNumber(grpcL))
proxyAddress := fmt.Sprintf("localhost:%s", MustGetPortNumber(proxyL))
allHostnames := []string{grpcAddress, proxyAddress}
rootPub, rootPriv, err := certgenTesting.CreateRootCertificateAndPrivateKeyForTesting(allHostnames)
assert.Nil(err)
require.Nil(err)
pub, priv, err := certgenTesting.CreateDerivedCertificateAndPrivateKeyForTesting(rootPub, rootPriv, allHostnames)
assert.Nil(err)
require.Nil(err)
runTestStartStopTLSServer(t, &tlsServerTestParams{
rootPublicCertificateFileData: rootPub,
@ -90,7 +90,7 @@ type tlsServerTestParams struct {
}
func runTestStartStopTLSServer(t *testing.T, tp *tlsServerTestParams) {
assert := assert.New(t)
require := require.New(t)
ff := &shellTesting.FakeFrontend{}
@ -104,16 +104,16 @@ func runTestStartStopTLSServer(t *testing.T, tp *tlsServerTestParams) {
defer s.stop()
err := s.start(serverParams)
assert.Nil(err)
require.Nil(err)
pool, err := trustedCertificateFromFileData(tp.rootPublicCertificateFileData)
assert.Nil(err)
require.Nil(err)
conn, err := grpc.Dial(tp.grpcAddress, grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(pool, tp.grpcAddress)))
assert.Nil(err)
require.Nil(err)
tlsCert, err := certificateFromFileData(tp.publicCertificateFileData, tp.privateKeyFileData)
assert.Nil(err)
require.Nil(err)
tlsTransport := &http.Transport{
TLSClientConfig: &tls.Config{
ServerName: tp.proxyAddress,
@ -126,5 +126,5 @@ func runTestStartStopTLSServer(t *testing.T, tp *tlsServerTestParams) {
Timeout: time.Second * 10,
Transport: tlsTransport,
}
runGrpcWithProxyTests(t, assert, s, conn, httpClient, httpsEndpoint)
runGrpcWithProxyTests(t, require, s, conn, httpClient, httpsEndpoint)
}

View File

@ -77,7 +77,7 @@ func (is *instrumentedService) GetIndexedIDSet(ctx context.Context) (map[string]
return is.s.GetIndexedIDSet(ctx)
}
func (is *instrumentedService) UpdateAssignments(ctx context.Context, req *pb.AssignTicketsRequest) (*pb.AssignTicketsResponse, error) {
func (is *instrumentedService) UpdateAssignments(ctx context.Context, req *pb.AssignTicketsRequest) (*pb.AssignTicketsResponse, []*pb.Ticket, error) {
ctx, span := trace.StartSpan(ctx, "statestore/instrumented.UpdateAssignments")
defer span.End()
return is.s.UpdateAssignments(ctx, req)
@ -89,16 +89,16 @@ func (is *instrumentedService) GetAssignments(ctx context.Context, id string, ca
return is.s.GetAssignments(ctx, id, callback)
}
func (is *instrumentedService) AddTicketsToIgnoreList(ctx context.Context, ids []string) error {
ctx, span := trace.StartSpan(ctx, "statestore/instrumented.AddTicketsToIgnoreList")
func (is *instrumentedService) AddTicketsToPendingRelease(ctx context.Context, ids []string) error {
ctx, span := trace.StartSpan(ctx, "statestore/instrumented.AddTicketsToPendingRelease")
defer span.End()
return is.s.AddTicketsToIgnoreList(ctx, ids)
return is.s.AddTicketsToPendingRelease(ctx, ids)
}
func (is *instrumentedService) DeleteTicketsFromIgnoreList(ctx context.Context, ids []string) error {
ctx, span := trace.StartSpan(ctx, "statestore/instrumented.DeleteTicketsFromIgnoreList")
func (is *instrumentedService) DeleteTicketsFromPendingRelease(ctx context.Context, ids []string) error {
ctx, span := trace.StartSpan(ctx, "statestore/instrumented.DeleteTicketsFromPendingRelease")
defer span.End()
return is.s.DeleteTicketsFromIgnoreList(ctx, ids)
return is.s.DeleteTicketsFromPendingRelease(ctx, ids)
}
func (is *instrumentedService) ReleaseAllTickets(ctx context.Context) error {

View File

@ -50,16 +50,15 @@ type Service interface {
GetTickets(ctx context.Context, ids []string) ([]*pb.Ticket, error)
// UpdateAssignments updates the request's specified tickets with assignments.
UpdateAssignments(ctx context.Context, req *pb.AssignTicketsRequest) (*pb.AssignTicketsResponse, error)
UpdateAssignments(ctx context.Context, req *pb.AssignTicketsRequest) (*pb.AssignTicketsResponse, []*pb.Ticket, error)
// GetAssignments returns the assignment associated with the input ticket id
GetAssignments(ctx context.Context, id string, callback func(*pb.Assignment) error) error
// AddTicketsToPendingRelease appends new proposed tickets to the proposed sorted set with the current timestamp
AddTicketsToIgnoreList(ctx context.Context, ids []string) error
AddTicketsToPendingRelease(ctx context.Context, ids []string) error
// DeleteTicketsFromIgnoreList deletes tickets from the proposed sorted set
DeleteTicketsFromIgnoreList(ctx context.Context, ids []string) error
// DeleteTicketsFromPendingRelease deletes tickets from the proposed sorted set
DeleteTicketsFromPendingRelease(ctx context.Context, ids []string) error
// ReleaseAllTickets releases all pending tickets back to active
ReleaseAllTickets(ctx context.Context) error

View File

@ -31,7 +31,10 @@ import (
"open-match.dev/open-match/pkg/pb"
)
const allTickets = "allTickets"
const (
allTickets = "allTickets"
proposedTicketIDs = "proposed_ticket_ids"
)
var (
redisLogger = logrus.WithFields(logrus.Fields{
@ -220,41 +223,23 @@ func redisURLFromAddr(addr string, cfg config.View, usePassword bool) string {
return redisURL + addr
}
func (rb *redisBackend) connect(ctx context.Context) (redis.Conn, error) {
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
redisLogger.WithFields(logrus.Fields{
"error": err.Error(),
}).Error("failed to connect to redis")
return nil, status.Errorf(codes.Unavailable, "%v", err)
}
return redisConn, nil
}
// CreateTicket creates a new Ticket in the state storage. If the id already exists, it will be overwritten.
func (rb *redisBackend) CreateTicket(ctx context.Context, ticket *pb.Ticket) error {
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return err
return status.Errorf(codes.Unavailable, "CreateTicket, id: %s, failed to connect to redis: %v", ticket.GetId(), err)
}
defer handleConnectionClose(&redisConn)
value, err := proto.Marshal(ticket)
if err != nil {
redisLogger.WithFields(logrus.Fields{
"key": ticket.GetId(),
"error": err.Error(),
}).Error("failed to marshal the ticket proto")
err = errors.Wrapf(err, "failed to marshal the ticket proto, id: %s", ticket.GetId())
return status.Errorf(codes.Internal, "%v", err)
}
_, err = redisConn.Do("SET", ticket.GetId(), value)
if err != nil {
redisLogger.WithFields(logrus.Fields{
"cmd": "SET",
"key": ticket.GetId(),
"error": err.Error(),
}).Error("failed to set the value for ticket")
err = errors.Wrapf(err, "failed to set the value for ticket, id: %s", ticket.GetId())
return status.Errorf(codes.Internal, "%v", err)
}
@ -263,49 +248,33 @@ func (rb *redisBackend) CreateTicket(ctx context.Context, ticket *pb.Ticket) err
// GetTicket gets the Ticket with the specified id from state storage. This method fails if the Ticket does not exist.
func (rb *redisBackend) GetTicket(ctx context.Context, id string) (*pb.Ticket, error) {
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return nil, err
return nil, status.Errorf(codes.Unavailable, "GetTicket, id: %s, failed to connect to redis: %v", id, err)
}
defer handleConnectionClose(&redisConn)
value, err := redis.Bytes(redisConn.Do("GET", id))
if err != nil {
redisLogger.WithFields(logrus.Fields{
"cmd": "GET",
"key": id,
"error": err.Error(),
}).Error("failed to get the ticket from state storage")
// Return NotFound if redigo did not find the ticket in storage.
if err == redis.ErrNil {
msg := fmt.Sprintf("Ticket id:%s not found", id)
redisLogger.WithFields(logrus.Fields{
"key": id,
"cmd": "GET",
}).Error(msg)
msg := fmt.Sprintf("Ticket id: %s not found", id)
return nil, status.Error(codes.NotFound, msg)
}
err = errors.Wrapf(err, "failed to get the ticket from state storage, id: %s", id)
return nil, status.Errorf(codes.Internal, "%v", err)
}
if value == nil {
msg := fmt.Sprintf("Ticket id:%s not found", id)
redisLogger.WithFields(logrus.Fields{
"key": id,
"cmd": "GET",
}).Error(msg)
msg := fmt.Sprintf("Ticket id: %s not found", id)
return nil, status.Error(codes.NotFound, msg)
}
ticket := &pb.Ticket{}
err = proto.Unmarshal(value, ticket)
if err != nil {
redisLogger.WithFields(logrus.Fields{
"key": id,
"error": err.Error(),
}).Error("failed to unmarshal the ticket proto")
err = errors.Wrapf(err, "failed to unmarshal the ticket proto, id: %s", id)
return nil, status.Errorf(codes.Internal, "%v", err)
}
@ -314,19 +283,15 @@ func (rb *redisBackend) GetTicket(ctx context.Context, id string) (*pb.Ticket, e
// DeleteTicket removes the Ticket with the specified id from state storage.
func (rb *redisBackend) DeleteTicket(ctx context.Context, id string) error {
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return err
return status.Errorf(codes.Unavailable, "DeleteTicket, id: %s, failed to connect to redis: %v", id, err)
}
defer handleConnectionClose(&redisConn)
_, err = redisConn.Do("DEL", id)
if err != nil {
redisLogger.WithFields(logrus.Fields{
"cmd": "DEL",
"key": id,
"error": err.Error(),
}).Error("failed to delete the ticket from state storage")
err = errors.Wrapf(err, "failed to delete the ticket from state storage, id: %s", id)
return status.Errorf(codes.Internal, "%v", err)
}
@ -335,20 +300,15 @@ func (rb *redisBackend) DeleteTicket(ctx context.Context, id string) error {
// IndexTicket indexes the Ticket id for the configured index fields.
func (rb *redisBackend) IndexTicket(ctx context.Context, ticket *pb.Ticket) error {
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return err
return status.Errorf(codes.Unavailable, "IndexTicket, id: %s, failed to connect to redis: %v", ticket.GetId(), err)
}
defer handleConnectionClose(&redisConn)
err = redisConn.Send("SADD", allTickets, ticket.Id)
if err != nil {
redisLogger.WithFields(logrus.Fields{
"cmd": "SADD",
"ticket": ticket.GetId(),
"error": err.Error(),
"key": allTickets,
}).Error("failed to add ticket to all tickets")
err = errors.Wrapf(err, "failed to add ticket to all tickets, id: %s", ticket.Id)
return status.Errorf(codes.Internal, "%v", err)
}
@ -357,20 +317,15 @@ func (rb *redisBackend) IndexTicket(ctx context.Context, ticket *pb.Ticket) erro
// DeindexTicket removes the indexing for the specified Ticket. Only the indexes are removed but the Ticket continues to exist.
func (rb *redisBackend) DeindexTicket(ctx context.Context, id string) error {
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return err
return status.Errorf(codes.Unavailable, "DeindexTicket, id: %s, failed to connect to redis: %v", id, err)
}
defer handleConnectionClose(&redisConn)
err = redisConn.Send("SREM", allTickets, id)
if err != nil {
redisLogger.WithFields(logrus.Fields{
"cmd": "SREM",
"key": allTickets,
"id": id,
"error": err.Error(),
}).Error("failed to remove ticket from all tickets")
err = errors.Wrapf(err, "failed to remove ticket from all tickets, id: %s", id)
return status.Errorf(codes.Internal, "%v", err)
}
@ -379,9 +334,9 @@ func (rb *redisBackend) DeindexTicket(ctx context.Context, id string) error {
// GetIndexedIDSet returns the ids of all tickets currently indexed.
func (rb *redisBackend) GetIndexedIDSet(ctx context.Context) (map[string]struct{}, error) {
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return nil, err
return nil, status.Errorf(codes.Unavailable, "GetIndexedIDSet, failed to connect to redis: %v", err)
}
defer handleConnectionClose(&redisConn)
@ -391,17 +346,13 @@ func (rb *redisBackend) GetIndexedIDSet(ctx context.Context) (map[string]struct{
startTimeInt := curTime.Add(-ttl).UnixNano()
// Filter out tickets that are fetched but not assigned within ttl time (ms).
idsInIgnoreLists, err := redis.Strings(redisConn.Do("ZRANGEBYSCORE", "proposed_ticket_ids", startTimeInt, endTimeInt))
idsInPendingReleases, err := redis.Strings(redisConn.Do("ZRANGEBYSCORE", proposedTicketIDs, startTimeInt, endTimeInt))
if err != nil {
redisLogger.WithError(err).Error("failed to get proposed tickets")
return nil, status.Errorf(codes.Internal, "error getting ignore list %v", err)
return nil, status.Errorf(codes.Internal, "error getting pending release %v", err)
}
idsIndexed, err := redis.Strings(redisConn.Do("SMEMBERS", allTickets))
if err != nil {
redisLogger.WithFields(logrus.Fields{
"Command": "SMEMBER allTickets",
}).WithError(err).Error("Failed to lookup all tickets.")
return nil, status.Errorf(codes.Internal, "error getting all indexed ticket ids %v", err)
}
@ -409,7 +360,7 @@ func (rb *redisBackend) GetIndexedIDSet(ctx context.Context) (map[string]struct{
for _, id := range idsIndexed {
r[id] = struct{}{}
}
for _, id := range idsInIgnoreLists {
for _, id := range idsInPendingReleases {
delete(r, id)
}
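The loop above computes the set of active ticket ids by subtracting the pending-release ids (the `ZRANGEBYSCORE` result) from all indexed ids (the `SMEMBERS` result). The same set subtraction in isolation, as a small sketch:

```go
package main

import "fmt"

// activeIDs mirrors the tail of GetIndexedIDSet: start from every indexed
// ticket id, then delete the ids that are currently pending release.
func activeIDs(indexed, pendingRelease []string) map[string]struct{} {
	r := make(map[string]struct{}, len(indexed))
	for _, id := range indexed {
		r[id] = struct{}{}
	}
	for _, id := range pendingRelease {
		delete(r, id) // pending-release tickets are excluded from queries
	}
	return r
}

func main() {
	r := activeIDs([]string{"a", "b", "c"}, []string{"b"})
	fmt.Println(len(r)) // 2
}
```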
@ -423,9 +374,9 @@ func (rb *redisBackend) GetTickets(ctx context.Context, ids []string) ([]*pb.Tic
return nil, nil
}
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return nil, err
return nil, status.Errorf(codes.Unavailable, "GetTickets, failed to connect to redis: %v", err)
}
defer handleConnectionClose(&redisConn)
@ -436,9 +387,7 @@ func (rb *redisBackend) GetTickets(ctx context.Context, ids []string) ([]*pb.Tic
ticketBytes, err := redis.ByteSlices(redisConn.Do("MGET", queryParams...))
if err != nil {
redisLogger.WithFields(logrus.Fields{
"Command": fmt.Sprintf("MGET %v", ids),
}).WithError(err).Error("Failed to lookup tickets.")
err = errors.Wrapf(err, "failed to lookup tickets %v", ids)
return nil, status.Errorf(codes.Internal, "%v", err)
}
@ -450,9 +399,7 @@ func (rb *redisBackend) GetTickets(ctx context.Context, ids []string) ([]*pb.Tic
t := &pb.Ticket{}
err = proto.Unmarshal(b, t)
if err != nil {
redisLogger.WithFields(logrus.Fields{
"key": ids[i],
}).WithError(err).Error("Failed to unmarshal ticket from redis.")
err = errors.Wrapf(err, "failed to unmarshal ticket from redis, key %s", ids[i])
return nil, status.Errorf(codes.Internal, "%v", err)
}
r = append(r, t)
@ -463,23 +410,29 @@ func (rb *redisBackend) GetTickets(ctx context.Context, ids []string) ([]*pb.Tic
}
// UpdateAssignments updates the request's specified tickets with assignments.
func (rb *redisBackend) UpdateAssignments(ctx context.Context, req *pb.AssignTicketsRequest) (*pb.AssignTicketsResponse, error) {
func (rb *redisBackend) UpdateAssignments(ctx context.Context, req *pb.AssignTicketsRequest) (*pb.AssignTicketsResponse, []*pb.Ticket, error) {
resp := &pb.AssignTicketsResponse{}
if len(req.Assignments) == 0 {
return resp, nil
return resp, []*pb.Ticket{}, nil
}
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return nil, nil, status.Errorf(codes.Unavailable, "UpdateAssignments, failed to connect to redis: %v", err)
}
defer handleConnectionClose(&redisConn)
idToA := make(map[string]*pb.Assignment)
ids := make([]string, 0)
idsI := make([]interface{}, 0)
for _, a := range req.Assignments {
if a.Assignment == nil {
return nil, status.Error(codes.InvalidArgument, "AssignmentGroup.Assignment is required")
return nil, nil, status.Error(codes.InvalidArgument, "AssignmentGroup.Assignment is required")
}
for _, id := range a.TicketIds {
if _, ok := idToA[id]; ok {
return nil, status.Errorf(codes.InvalidArgument, "Ticket id %s is assigned multiple times in one assign tickets call.", id)
return nil, nil, status.Errorf(codes.InvalidArgument, "Ticket id %s is assigned multiple times in one assign tickets call", id)
}
idToA[id] = a.Assignment
@ -488,15 +441,9 @@ func (rb *redisBackend) UpdateAssignments(ctx context.Context, req *pb.AssignTic
}
}
redisConn, err := rb.connect(ctx)
if err != nil {
return nil, err
}
defer handleConnectionClose(&redisConn)
ticketBytes, err := redis.ByteSlices(redisConn.Do("MGET", idsI...))
if err != nil {
return nil, err
return nil, nil, err
}
tickets := make([]*pb.Ticket, 0, len(ticketBytes))
@ -511,10 +458,8 @@ func (rb *redisBackend) UpdateAssignments(ctx context.Context, req *pb.AssignTic
t := &pb.Ticket{}
err = proto.Unmarshal(ticketByte, t)
if err != nil {
redisLogger.WithFields(logrus.Fields{
"key": ids[i],
}).WithError(err).Error("failed to unmarshal ticket from redis.")
return nil, status.Errorf(codes.Internal, "%v", err)
err = errors.Wrapf(err, "failed to unmarshal ticket from redis %s", ids[i])
return nil, nil, status.Errorf(codes.Internal, "%v", err)
}
tickets = append(tickets, t)
}
@ -522,7 +467,7 @@ func (rb *redisBackend) UpdateAssignments(ctx context.Context, req *pb.AssignTic
assignmentTimeout := rb.cfg.GetDuration("assignedDeleteTimeout") / time.Millisecond
err = redisConn.Send("MULTI")
if err != nil {
return nil, errors.Wrap(err, "error starting redis multi")
return nil, nil, errors.Wrap(err, "error starting redis multi")
}
for _, ticket := range tickets {
@ -531,24 +476,25 @@ func (rb *redisBackend) UpdateAssignments(ctx context.Context, req *pb.AssignTic
var ticketByte []byte
ticketByte, err = proto.Marshal(ticket)
if err != nil {
return nil, status.Errorf(codes.Internal, "failed to marshal ticket %s", ticket.GetId())
return nil, nil, status.Errorf(codes.Internal, "failed to marshal ticket %s", ticket.GetId())
}
err = redisConn.Send("SET", ticket.Id, ticketByte, "PX", int64(assignmentTimeout), "XX")
if err != nil {
return nil, errors.Wrap(err, "error sending ticket assignment set")
return nil, nil, errors.Wrap(err, "error sending ticket assignment set")
}
}
wasSet, err := redis.Values(redisConn.Do("EXEC"))
if err != nil {
return nil, errors.Wrap(err, "error executing assignment set")
return nil, nil, errors.Wrap(err, "error executing assignment set")
}
if len(wasSet) != len(tickets) {
return nil, status.Errorf(codes.Internal, "sent %d tickets to redis, but received %d back", len(tickets), len(wasSet))
return nil, nil, status.Errorf(codes.Internal, "sent %d tickets to redis, but received %d back", len(tickets), len(wasSet))
}
assignedTickets := make([]*pb.Ticket, 0, len(tickets))
for i, ticket := range tickets {
v, err := redis.String(wasSet[i], nil)
if err == redis.ErrNil {
@ -559,21 +505,22 @@ func (rb *redisBackend) UpdateAssignments(ctx context.Context, req *pb.AssignTic
continue
}
if err != nil {
return nil, errors.Wrap(err, "unexpected error from redis multi set")
return nil, nil, errors.Wrap(err, "unexpected error from redis multi set")
}
if v != "OK" {
return nil, status.Errorf(codes.Internal, "unexpected response from redis: %s", v)
return nil, nil, status.Errorf(codes.Internal, "unexpected response from redis: %s", v)
}
assignedTickets = append(assignedTickets, ticket)
}
return resp, nil
return resp, assignedTickets, nil
}
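The EXEC-reply loop above maps each MULTI'd `SET ... PX ... XX` reply to a ticket: `"OK"` means the key still existed and was assigned, a nil reply means the ticket expired or was deleted, and anything else is an internal error. A minimal standalone sketch of that bookkeeping (hypothetical stand-in, not the redigo-backed implementation — a nil Redis reply is modeled here as an empty string):

```go
package main

import "fmt"

// collectAssigned mirrors the EXEC-reply loop above: each reply is "OK" when
// SET ... XX touched an existing key, or "" (standing in for a nil reply)
// when the ticket key had already expired or been deleted.
func collectAssigned(ids []string, replies []string) ([]string, error) {
	if len(replies) != len(ids) {
		return nil, fmt.Errorf("sent %d tickets to redis, but received %d back", len(ids), len(replies))
	}
	assigned := make([]string, 0, len(ids))
	for i, r := range replies {
		switch r {
		case "OK":
			assigned = append(assigned, ids[i])
		case "":
			// nil reply: key missing, ticket silently skipped
		default:
			return nil, fmt.Errorf("unexpected response from redis: %s", r)
		}
	}
	return assigned, nil
}

func main() {
	assigned, err := collectAssigned([]string{"1", "2"}, []string{"OK", ""})
	fmt.Println(assigned, err) // [1] <nil>
}
```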
// GetAssignments returns the assignment associated with the input ticket id
func (rb *redisBackend) GetAssignments(ctx context.Context, id string, callback func(*pb.Assignment) error) error {
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return err
return status.Errorf(codes.Unavailable, "GetAssignments, id: %s, failed to connect to redis: %v", id, err)
}
defer handleConnectionClose(&redisConn)
@ -581,7 +528,6 @@ func (rb *redisBackend) GetAssignments(ctx context.Context, id string, callback
var ticket *pb.Ticket
ticket, err = rb.GetTicket(ctx, id)
if err != nil {
redisLogger.WithError(err).Errorf("failed to get ticket %s when executing get assignments", id)
return backoff.Permanent(err)
}
@ -600,55 +546,55 @@ func (rb *redisBackend) GetAssignments(ctx context.Context, id string, callback
return nil
}
// AddProposedTickets appends new proposed tickets to the proposed sorted set with current timestamp
func (rb *redisBackend) AddTicketsToIgnoreList(ctx context.Context, ids []string) error {
// AddTicketsToPendingRelease appends new proposed tickets to the proposed sorted set with current timestamp
func (rb *redisBackend) AddTicketsToPendingRelease(ctx context.Context, ids []string) error {
if len(ids) == 0 {
return nil
}
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return err
return status.Errorf(codes.Unavailable, "AddTicketsToPendingRelease, failed to connect to redis: %v", err)
}
defer handleConnectionClose(&redisConn)
currentTime := time.Now().UnixNano()
cmds := make([]interface{}, 0, 2*len(ids)+1)
cmds = append(cmds, "proposed_ticket_ids")
cmds = append(cmds, proposedTicketIDs)
for _, id := range ids {
cmds = append(cmds, currentTime, id)
}
_, err = redisConn.Do("ZADD", cmds...)
if err != nil {
redisLogger.WithError(err).Error("failed to append proposed tickets to ignore list")
err = errors.Wrap(err, "failed to append proposed tickets to pending release")
return status.Error(codes.Internal, err.Error())
}
return nil
}
// DeleteTicketsFromIgnoreList deletes tickets from the proposed sorted set
func (rb *redisBackend) DeleteTicketsFromIgnoreList(ctx context.Context, ids []string) error {
// DeleteTicketsFromPendingRelease deletes tickets from the proposed sorted set
func (rb *redisBackend) DeleteTicketsFromPendingRelease(ctx context.Context, ids []string) error {
if len(ids) == 0 {
return nil
}
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return err
return status.Errorf(codes.Unavailable, "DeleteTicketsFromPendingRelease, failed to connect to redis: %v", err)
}
defer handleConnectionClose(&redisConn)
cmds := make([]interface{}, 0, len(ids)+1)
cmds = append(cmds, "proposed_ticket_ids")
cmds = append(cmds, proposedTicketIDs)
for _, id := range ids {
cmds = append(cmds, id)
}
_, err = redisConn.Do("ZREM", cmds...)
if err != nil {
redisLogger.WithError(err).Error("failed to delete proposed tickets from ignore list")
err = errors.Wrap(err, "failed to delete proposed tickets from pending release")
return status.Error(codes.Internal, err.Error())
}
@ -656,13 +602,13 @@ func (rb *redisBackend) DeleteTicketsFromIgnoreList(ctx context.Context, ids []s
}
func (rb *redisBackend) ReleaseAllTickets(ctx context.Context) error {
redisConn, err := rb.connect(ctx)
redisConn, err := rb.redisPool.GetContext(ctx)
if err != nil {
return err
return status.Errorf(codes.Unavailable, "ReleaseAllTickets, failed to connect to redis: %v", err)
}
defer handleConnectionClose(&redisConn)
_, err = redisConn.Do("DEL", "proposed_ticket_ids")
_, err = redisConn.Do("DEL", proposedTicketIDs)
return err
}


@ -17,6 +17,7 @@ package statestore
import (
"context"
"errors"
"fmt"
"io/ioutil"
"os"
"testing"
@ -27,32 +28,28 @@ import (
"github.com/gomodule/redigo/redis"
"github.com/rs/xid"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"open-match.dev/open-match/internal/config"
"open-match.dev/open-match/internal/telemetry"
internalTesting "open-match.dev/open-match/internal/testing"
utilTesting "open-match.dev/open-match/internal/util/testing"
"open-match.dev/open-match/pkg/pb"
)
func TestStatestoreSetup(t *testing.T) {
assert := assert.New(t)
cfg, closer := createRedis(t, true, "")
defer closer()
service := New(cfg)
assert.NotNil(service)
require.NotNil(t, service)
defer service.Close()
}
func TestTicketLifecycle(t *testing.T) {
// Create State Store
assert := assert.New(t)
cfg, closer := createRedis(t, true, "")
defer closer()
service := New(cfg)
assert.NotNil(service)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
@ -73,123 +70,43 @@ func TestTicketLifecycle(t *testing.T) {
// Validate that GetTicket fails for a Ticket that does not exist.
_, err := service.GetTicket(ctx, id)
assert.NotNil(err)
assert.Equal(status.Code(err), codes.NotFound)
require.NotNil(t, err)
require.Equal(t, status.Code(err), codes.NotFound)
// Validate nonexistent Ticket deletion
err = service.DeleteTicket(ctx, id)
assert.Nil(err)
require.Nil(t, err)
// Validate nonexistent Ticket deindexing
err = service.DeindexTicket(ctx, id)
assert.Nil(err)
require.Nil(t, err)
// Validate Ticket creation
err = service.CreateTicket(ctx, ticket)
assert.Nil(err)
require.Nil(t, err)
// Validate Ticket retrieval
result, err := service.GetTicket(ctx, ticket.Id)
assert.Nil(err)
assert.NotNil(result)
assert.Equal(ticket.Id, result.Id)
assert.Equal(ticket.SearchFields.DoubleArgs["testindex1"], result.SearchFields.DoubleArgs["testindex1"])
assert.Equal(ticket.Assignment.Connection, result.Assignment.Connection)
require.NoError(t, err)
require.NotNil(t, result)
require.Equal(t, ticket.Id, result.Id)
require.Equal(t, ticket.SearchFields.DoubleArgs["testindex1"], result.SearchFields.DoubleArgs["testindex1"])
require.NotNil(t, result.Assignment)
require.Equal(t, ticket.Assignment.Connection, result.Assignment.Connection)
// Validate Ticket deletion
err = service.DeleteTicket(ctx, id)
assert.Nil(err)
require.Nil(t, err)
_, err = service.GetTicket(ctx, id)
assert.NotNil(err)
}
func TestIgnoreLists(t *testing.T) {
// Create State Store
assert := assert.New(t)
cfg, closer := createRedis(t, true, "")
defer closer()
service := New(cfg)
assert.NotNil(service)
defer service.Close()
ctx := utilTesting.NewContext(t)
tickets := internalTesting.GenerateFloatRangeTickets(
internalTesting.Property{Name: "testindex1", Min: 0, Max: 10, Interval: 2},
internalTesting.Property{Name: "testindex2", Min: 0, Max: 10, Interval: 2},
)
ticketIds := []string{}
for _, ticket := range tickets {
assert.Nil(service.CreateTicket(ctx, ticket))
assert.Nil(service.IndexTicket(ctx, ticket))
ticketIds = append(ticketIds, ticket.GetId())
}
verifyTickets := func(service Service, expectLen int) {
ids, err := service.GetIndexedIDSet(ctx)
assert.Nil(err)
assert.Equal(expectLen, len(ids))
}
// Verify all tickets are created and returned
verifyTickets(service, len(tickets))
// Add the first three tickets to the ignore list and verify changes are reflected in the result
assert.Nil(service.AddTicketsToIgnoreList(ctx, ticketIds[:3]))
verifyTickets(service, len(tickets)-3)
// Sleep until the ignore list expired and verify we still have all the tickets
time.Sleep(cfg.GetDuration("pendingReleaseTimeout"))
verifyTickets(service, len(tickets))
}
func TestDeleteTicketsFromIgnoreList(t *testing.T) {
// Create State Store
assert := assert.New(t)
cfg, closer := createRedis(t, true, "")
defer closer()
service := New(cfg)
assert.NotNil(service)
defer service.Close()
ctx := utilTesting.NewContext(t)
tickets := internalTesting.GenerateFloatRangeTickets(
internalTesting.Property{Name: "testindex1", Min: 0, Max: 10, Interval: 2},
internalTesting.Property{Name: "testindex2", Min: 0, Max: 10, Interval: 2},
)
ticketIds := []string{}
for _, ticket := range tickets {
assert.Nil(service.CreateTicket(ctx, ticket))
assert.Nil(service.IndexTicket(ctx, ticket))
ticketIds = append(ticketIds, ticket.GetId())
}
verifyTickets := func(service Service, expectLen int) {
ids, err := service.GetIndexedIDSet(ctx)
assert.Nil(err)
assert.Equal(expectLen, len(ids))
}
// Verify all tickets are created and returned
verifyTickets(service, len(tickets))
// Add the first three tickets to the ignore list and verify changes are reflected in the result
assert.Nil(service.AddTicketsToIgnoreList(ctx, ticketIds[:3]))
verifyTickets(service, len(tickets)-3)
assert.Nil(service.DeleteTicketsFromIgnoreList(ctx, ticketIds[:3]))
verifyTickets(service, len(tickets))
require.NotNil(t, err)
}
func TestGetAssignmentBeforeSet(t *testing.T) {
// Create State Store
assert := assert.New(t)
cfg, closer := createRedis(t, true, "")
defer closer()
service := New(cfg)
assert.NotNil(service)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
@ -200,17 +117,15 @@ func TestGetAssignmentBeforeSet(t *testing.T) {
return nil
})
// GetAssignment failed because the ticket does not exist
assert.Equal(status.Convert(err).Code(), codes.NotFound)
assert.Nil(assignmentResp)
require.Equal(t, status.Convert(err).Code(), codes.NotFound)
require.Nil(t, assignmentResp)
}
func TestGetAssignmentNormal(t *testing.T) {
// Create State Store
assert := assert.New(t)
cfg, closer := createRedis(t, true, "")
defer closer()
service := New(cfg)
assert.NotNil(service)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
@ -218,7 +133,7 @@ func TestGetAssignmentNormal(t *testing.T) {
Id: "1",
Assignment: &pb.Assignment{Connection: "2"},
})
assert.Nil(err)
require.Nil(t, err)
var assignmentResp *pb.Assignment
ctx, cancel := context.WithCancel(ctx)
@ -233,7 +148,7 @@ func TestGetAssignmentNormal(t *testing.T) {
return returnedErr
} else if callbackCount > 0 {
// Test the assignment returned was successfully passed in to the callback function
assert.Equal(assignmentResp.Connection, "2")
require.Equal(t, assignmentResp.Connection, "2")
}
callbackCount++
@ -241,8 +156,198 @@ func TestGetAssignmentNormal(t *testing.T) {
})
// Test GetAssignments was retried 5 times and returned the expected error
assert.Equal(5, callbackCount)
assert.Equal(returnedErr, err)
require.Equal(t, 5, callbackCount)
require.Equal(t, returnedErr, err)
// Pass an expired context, err expected
ctx, cancel = context.WithCancel(context.Background())
cancel()
service = New(cfg)
err = service.GetAssignments(ctx, "1", func(assignment *pb.Assignment) error { return nil })
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "GetAssignments, id: 1, failed to connect to redis:")
}
func TestUpdateAssignments(t *testing.T) {
cfg, closer := createRedis(t, false, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
err := service.CreateTicket(ctx, &pb.Ticket{
Id: "1",
Assignment: &pb.Assignment{Connection: "2"},
})
require.Nil(t, err)
c, err := redis.Dial("tcp", fmt.Sprintf("%s:%s", cfg.GetString("redis.hostname"), cfg.GetString("redis.port")))
require.NoError(t, err)
_, err = c.Do("SET", "wrong-type-key", "wrong-type-value")
require.NoError(t, err)
type expected struct {
resp *pb.AssignTicketsResponse
errCode codes.Code
errMessage string
assignedTicketsIDs []string
}
var testCases = []struct {
description string
request *pb.AssignTicketsRequest
expected
}{
{
description: "no assignments, empty response is returned",
request: &pb.AssignTicketsRequest{},
expected: expected{
resp: &pb.AssignTicketsResponse{},
errCode: codes.OK,
errMessage: "",
assignedTicketsIDs: []string{},
},
},
{
description: "updated assignments, no errors",
request: &pb.AssignTicketsRequest{
Assignments: []*pb.AssignmentGroup{
{
TicketIds: []string{"1"},
Assignment: &pb.Assignment{Connection: "2"},
},
},
},
expected: expected{
resp: &pb.AssignTicketsResponse{},
errCode: codes.OK,
errMessage: "",
assignedTicketsIDs: []string{"1"},
},
},
{
description: "nil assignment, error expected",
request: &pb.AssignTicketsRequest{
Assignments: []*pb.AssignmentGroup{
{
TicketIds: []string{"1"},
Assignment: nil,
},
},
},
expected: expected{
resp: nil,
errCode: codes.InvalidArgument,
errMessage: "AssignmentGroup.Assignment is required",
assignedTicketsIDs: []string{},
},
},
{
description: "ticket is assigned multiple times, error expected",
request: &pb.AssignTicketsRequest{
Assignments: []*pb.AssignmentGroup{
{
TicketIds: []string{"1"},
Assignment: &pb.Assignment{Connection: "2"},
},
{
TicketIds: []string{"1"},
Assignment: &pb.Assignment{Connection: "2"},
},
},
},
expected: expected{
resp: nil,
errCode: codes.InvalidArgument,
errMessage: "Ticket id 1 is assigned multiple times in one assign tickets call",
assignedTicketsIDs: []string{},
},
},
{
description: "ticket doesn't exist, no error, response failure expected",
request: &pb.AssignTicketsRequest{
Assignments: []*pb.AssignmentGroup{
{
TicketIds: []string{"11111"},
Assignment: &pb.Assignment{Connection: "2"},
},
},
},
expected: expected{
resp: &pb.AssignTicketsResponse{
Failures: []*pb.AssignmentFailure{{
TicketId: "11111",
Cause: pb.AssignmentFailure_TICKET_NOT_FOUND,
}},
},
errCode: codes.OK,
errMessage: "",
assignedTicketsIDs: []string{},
},
},
{
description: "wrong value, error expected",
request: &pb.AssignTicketsRequest{
Assignments: []*pb.AssignmentGroup{
{
TicketIds: []string{"wrong-type-key"},
Assignment: &pb.Assignment{Connection: "2"},
},
},
},
expected: expected{
resp: nil,
errCode: codes.Internal,
errMessage: "failed to unmarshal ticket from redis wrong-type-key",
assignedTicketsIDs: []string{},
},
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.description, func(t *testing.T) {
resp, ticketsAssignedActual, errActual := service.UpdateAssignments(ctx, tc.request)
if tc.expected.errCode != codes.OK {
require.Error(t, errActual)
require.Equal(t, tc.expected.errCode.String(), status.Convert(errActual).Code().String())
require.Contains(t, status.Convert(errActual).Message(), tc.expected.errMessage)
} else {
require.NoError(t, errActual)
require.Equal(t, tc.expected.resp, resp)
require.Equal(t, len(tc.expected.assignedTicketsIDs), len(ticketsAssignedActual))
for _, ticket := range ticketsAssignedActual {
found := false
for _, id := range tc.expected.assignedTicketsIDs {
if ticket.GetId() == id {
found = true
break
}
}
require.Truef(t, found, "assigned ticket ID %s is not found in an expected slice", ticket.GetId())
}
}
})
}
// Pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
_, _, err = service.UpdateAssignments(ctx, &pb.AssignTicketsRequest{
Assignments: []*pb.AssignmentGroup{
{
TicketIds: []string{"11111"},
Assignment: &pb.Assignment{Connection: "2"},
},
},
})
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "UpdateAssignments, failed to connect to redis: context canceled")
}
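The "assigned multiple times" case exercised above rejects a request when the same ticket ID appears in more than one `AssignmentGroup`. That validation can be illustrated by a standalone helper; this is a hypothetical sketch, not the Open Match implementation, and `AssignmentGroup` here is a trimmed stand-in for the proto message:

```go
package main

import "fmt"

// AssignmentGroup is a trimmed stand-in for pb.AssignmentGroup,
// carrying only the ticket IDs relevant to duplicate detection.
type AssignmentGroup struct {
	TicketIds []string
}

// firstDuplicateTicketID scans all groups and returns the first ticket ID
// that appears more than once, mirroring the InvalidArgument check above.
func firstDuplicateTicketID(groups []AssignmentGroup) (string, bool) {
	seen := make(map[string]struct{})
	for _, g := range groups {
		for _, id := range g.TicketIds {
			if _, ok := seen[id]; ok {
				return id, true
			}
			seen[id] = struct{}{}
		}
	}
	return "", false
}

func main() {
	groups := []AssignmentGroup{
		{TicketIds: []string{"1"}},
		{TicketIds: []string{"1"}},
	}
	if id, dup := firstDuplicateTicketID(groups); dup {
		fmt.Printf("Ticket id %s is assigned multiple times in one assign tickets call\n", id)
	}
}
```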
func TestConnect(t *testing.T) {
@ -252,8 +357,516 @@ func TestConnect(t *testing.T) {
testConnect(t, true, "redispassword")
}
func TestHealthCheck(t *testing.T) {
cfg, closer := createRedis(t, true, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
// OK
ctx := utilTesting.NewContext(t)
err := service.HealthCheck(ctx)
require.NoError(t, err)
// Error expected
closer()
err = service.HealthCheck(ctx)
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
}
func TestCreateTicket(t *testing.T) {
cfg, closer := createRedis(t, true, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
var testCases = []struct {
description string
ticket *pb.Ticket
expectedCode codes.Code
expectedMessage string
}{
{
description: "ok",
ticket: &pb.Ticket{
Id: "1",
Assignment: &pb.Assignment{Connection: "2"},
},
expectedCode: codes.OK,
expectedMessage: "",
},
{
description: "nil ticket passed, err expected",
ticket: nil,
expectedCode: codes.Internal,
expectedMessage: "failed to marshal the ticket proto, id: : proto: Marshal called with nil",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.description, func(t *testing.T) {
err := service.CreateTicket(ctx, tc.ticket)
if tc.expectedCode == codes.OK {
require.NoError(t, err)
} else {
require.Error(t, err)
require.Equal(t, tc.expectedCode.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), tc.expectedMessage)
}
})
}
// pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
err := service.CreateTicket(ctx, &pb.Ticket{
Id: "222",
Assignment: &pb.Assignment{Connection: "2"},
})
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "CreateTicket, id: 222, failed to connect to redis:")
}
func TestGetTicket(t *testing.T) {
cfg, closer := createRedis(t, false, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
err := service.CreateTicket(ctx, &pb.Ticket{
Id: "mockTicketID",
Assignment: &pb.Assignment{Connection: "2"},
})
require.NoError(t, err)
c, err := redis.Dial("tcp", fmt.Sprintf("%s:%s", cfg.GetString("redis.hostname"), cfg.GetString("redis.port")))
require.NoError(t, err)
_, err = c.Do("SET", "wrong-type-key", "wrong-type-value")
require.NoError(t, err)
var testCases = []struct {
description string
ticketID string
expectedCode codes.Code
expectedMessage string
}{
{
description: "ticket is found",
ticketID: "mockTicketID",
expectedCode: codes.OK,
expectedMessage: "",
},
{
description: "empty id passed, err expected",
ticketID: "",
expectedCode: codes.NotFound,
expectedMessage: "Ticket id: not found",
},
{
description: "wrong id passed, err expected",
ticketID: "123456",
expectedCode: codes.NotFound,
expectedMessage: "Ticket id: 123456 not found",
},
{
description: "item of a wrong type is requested, err expected",
ticketID: "wrong-type-key",
expectedCode: codes.Internal,
expectedMessage: "failed to unmarshal the ticket proto, id: wrong-type-key: proto: can't skip unknown wire type",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.description, func(t *testing.T) {
ticketActual, errActual := service.GetTicket(ctx, tc.ticketID)
if tc.expectedCode == codes.OK {
require.NoError(t, errActual)
require.NotNil(t, ticketActual)
} else {
require.Error(t, errActual)
require.Equal(t, tc.expectedCode.String(), status.Convert(errActual).Code().String())
require.Contains(t, status.Convert(errActual).Message(), tc.expectedMessage)
}
})
}
// pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
res, err := service.GetTicket(ctx, "12345")
require.Error(t, err)
require.Nil(t, res)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "GetTicket, id: 12345, failed to connect to redis:")
}
func TestDeleteTicket(t *testing.T) {
cfg, closer := createRedis(t, false, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
err := service.CreateTicket(ctx, &pb.Ticket{
Id: "mockTicketID",
Assignment: &pb.Assignment{Connection: "2"},
})
require.NoError(t, err)
var testCases = []struct {
description string
ticketID string
expectedCode codes.Code
expectedMessage string
}{
{
description: "ticket is found and deleted",
ticketID: "mockTicketID",
expectedCode: codes.OK,
expectedMessage: "",
},
{
description: "empty id passed, no err expected",
ticketID: "",
expectedCode: codes.OK,
expectedMessage: "",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.description, func(t *testing.T) {
errActual := service.DeleteTicket(ctx, tc.ticketID)
if tc.expectedCode == codes.OK {
require.NoError(t, errActual)
if tc.ticketID != "" {
_, errGetTicket := service.GetTicket(ctx, tc.ticketID)
require.Error(t, errGetTicket)
require.Equal(t, codes.NotFound.String(), status.Convert(errGetTicket).Code().String())
}
} else {
require.Error(t, errActual)
require.Equal(t, tc.expectedCode.String(), status.Convert(errActual).Code().String())
require.Contains(t, status.Convert(errActual).Message(), tc.expectedMessage)
}
})
}
// pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
err = service.DeleteTicket(ctx, "12345")
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "DeleteTicket, id: 12345, failed to connect to redis:")
}
func TestIndexTicket(t *testing.T) {
cfg, closer := createRedis(t, false, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
generateTickets(ctx, t, service, 2)
c, err := redis.Dial("tcp", fmt.Sprintf("%s:%s", cfg.GetString("redis.hostname"), cfg.GetString("redis.port")))
require.NoError(t, err)
idsIndexed, err := redis.Strings(c.Do("SMEMBERS", "allTickets"))
require.NoError(t, err)
require.Len(t, idsIndexed, 2)
require.Equal(t, "mockTicketID-0", idsIndexed[0])
require.Equal(t, "mockTicketID-1", idsIndexed[1])
// pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
err = service.IndexTicket(ctx, &pb.Ticket{
Id: "12345",
Assignment: &pb.Assignment{Connection: "2"},
})
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "IndexTicket, id: 12345, failed to connect to redis:")
}
func TestDeindexTicket(t *testing.T) {
cfg, closer := createRedis(t, false, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
generateTickets(ctx, t, service, 2)
c, err := redis.Dial("tcp", fmt.Sprintf("%s:%s", cfg.GetString("redis.hostname"), cfg.GetString("redis.port")))
require.NoError(t, err)
idsIndexed, err := redis.Strings(c.Do("SMEMBERS", "allTickets"))
require.NoError(t, err)
require.Len(t, idsIndexed, 2)
require.Equal(t, "mockTicketID-0", idsIndexed[0])
require.Equal(t, "mockTicketID-1", idsIndexed[1])
// deindex and check that there is only 1 ticket in the returned slice
err = service.DeindexTicket(ctx, "mockTicketID-1")
require.NoError(t, err)
idsIndexed, err = redis.Strings(c.Do("SMEMBERS", "allTickets"))
require.NoError(t, err)
require.Len(t, idsIndexed, 1)
require.Equal(t, "mockTicketID-0", idsIndexed[0])
// pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
err = service.DeindexTicket(ctx, "12345")
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "DeindexTicket, id: 12345, failed to connect to redis:")
}
func TestGetIndexedIDSet(t *testing.T) {
cfg, closer := createRedis(t, false, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
tickets, _ := generateTickets(ctx, t, service, 2)
verifyTickets := func(service Service, tickets []*pb.Ticket) {
ids, err := service.GetIndexedIDSet(ctx)
require.Nil(t, err)
require.Equal(t, len(tickets), len(ids))
for _, tt := range tickets {
_, ok := ids[tt.GetId()]
require.True(t, ok)
}
}
// Verify all tickets are created and returned
verifyTickets(service, tickets)
c, err := redis.Dial("tcp", fmt.Sprintf("%s:%s", cfg.GetString("redis.hostname"), cfg.GetString("redis.port")))
require.NoError(t, err)
// Add the first ticket to the pending release and verify changes are reflected in the result
redis.Strings(c.Do("ZADD", "proposed_ticket_ids", time.Now().UnixNano(), "mockTicketID-0"))
verifyTickets(service, tickets[1:2])
// Sleep until the pending release expires and verify we still have all the tickets
time.Sleep(cfg.GetDuration("pendingReleaseTimeout"))
verifyTickets(service, tickets)
// Pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
_, err = service.GetIndexedIDSet(ctx)
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "GetIndexedIDSet, failed to connect to redis:")
}
func TestGetTickets(t *testing.T) {
cfg, closer := createRedis(t, false, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
tickets, ids := generateTickets(ctx, t, service, 2)
res, err := service.GetTickets(ctx, ids)
require.NoError(t, err)
for i, tc := range tickets {
require.Equal(t, tc.GetId(), res[i].GetId())
}
// pass empty ids slice
empty := []string{}
res, err = service.GetTickets(ctx, empty)
require.NoError(t, err)
require.Nil(t, res)
// pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
_, err = service.GetTickets(ctx, ids)
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "GetTickets, failed to connect to redis:")
}
func TestDeleteTicketsFromPendingRelease(t *testing.T) {
cfg, closer := createRedis(t, false, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
tickets, ids := generateTickets(ctx, t, service, 2)
verifyTickets := func(service Service, tickets []*pb.Ticket) {
ids, err := service.GetIndexedIDSet(ctx)
require.Nil(t, err)
require.Equal(t, len(tickets), len(ids))
for _, tt := range tickets {
_, ok := ids[tt.GetId()]
require.True(t, ok)
}
}
// Verify all tickets are created and returned
verifyTickets(service, tickets)
c, err := redis.Dial("tcp", fmt.Sprintf("%s:%s", cfg.GetString("redis.hostname"), cfg.GetString("redis.port")))
require.NoError(t, err)
// Add the first ticket to the pending release and verify changes are reflected in the result
redis.Strings(c.Do("ZADD", "proposed_ticket_ids", time.Now().UnixNano(), ids[0]))
// Verify 1 ticket is indexed
verifyTickets(service, tickets[1:2])
require.NoError(t, service.DeleteTicketsFromPendingRelease(ctx, ids[:1]))
// Verify that the ticket is back in the indexed set
verifyTickets(service, tickets)
// Pass an empty ids slice
empty := []string{}
require.NoError(t, service.DeleteTicketsFromPendingRelease(ctx, empty))
// Pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
err = service.DeleteTicketsFromPendingRelease(ctx, ids)
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "DeleteTicketsFromPendingRelease, failed to connect to redis:")
}
func TestReleaseAllTickets(t *testing.T) {
cfg, closer := createRedis(t, false, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
tickets, ids := generateTickets(ctx, t, service, 2)
verifyTickets := func(service Service, tickets []*pb.Ticket) {
ids, err := service.GetIndexedIDSet(ctx)
require.Nil(t, err)
require.Equal(t, len(tickets), len(ids))
for _, tt := range tickets {
_, ok := ids[tt.GetId()]
require.True(t, ok)
}
}
// Verify all tickets are created and returned
verifyTickets(service, tickets)
c, err := redis.Dial("tcp", fmt.Sprintf("%s:%s", cfg.GetString("redis.hostname"), cfg.GetString("redis.port")))
require.NoError(t, err)
// Add the first ticket to the pending release and verify changes are reflected in the result
redis.Strings(c.Do("ZADD", "proposed_ticket_ids", time.Now().UnixNano(), ids[0]))
// Verify 1 ticket is indexed
verifyTickets(service, tickets[1:2])
require.NoError(t, service.ReleaseAllTickets(ctx))
// Verify that the ticket is back in the indexed set
verifyTickets(service, tickets)
// Pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
err = service.ReleaseAllTickets(ctx)
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "ReleaseAllTickets, failed to connect to redis:")
}
func TestAddTicketsToPendingRelease(t *testing.T) {
cfg, closer := createRedis(t, false, "")
defer closer()
service := New(cfg)
require.NotNil(t, service)
defer service.Close()
ctx := utilTesting.NewContext(t)
tickets, ids := generateTickets(ctx, t, service, 2)
verifyTickets := func(service Service, tickets []*pb.Ticket) {
ids, err := service.GetIndexedIDSet(ctx)
require.Nil(t, err)
require.Equal(t, len(tickets), len(ids))
for _, tt := range tickets {
_, ok := ids[tt.GetId()]
require.True(t, ok)
}
}
// Verify all tickets are created and returned
verifyTickets(service, tickets)
// Add 1st ticket to pending release state
require.NoError(t, service.AddTicketsToPendingRelease(ctx, ids[:1]))
// Verify 1 ticket is indexed
verifyTickets(service, tickets[1:2])
// Pass an empty ids slice
empty := []string{}
require.NoError(t, service.AddTicketsToPendingRelease(ctx, empty))
// Pass an expired context, err expected
ctx, cancel := context.WithCancel(context.Background())
cancel()
service = New(cfg)
err := service.AddTicketsToPendingRelease(ctx, ids)
require.Error(t, err)
require.Equal(t, codes.Unavailable.String(), status.Convert(err).Code().String())
require.Contains(t, status.Convert(err).Message(), "AddTicketsToPendingRelease, failed to connect to redis:")
}
func testConnect(t *testing.T, withSentinel bool, withPassword string) {
assert := assert.New(t)
cfg, closer := createRedis(t, withSentinel, withPassword)
defer closer()
store := New(cfg)
@ -261,17 +874,17 @@ func testConnect(t *testing.T, withSentinel bool, withPassword string) {
ctx := utilTesting.NewContext(t)
is, ok := store.(*instrumentedService)
assert.True(ok)
require.True(t, ok)
rb, ok := is.s.(*redisBackend)
assert.True(ok)
require.True(t, ok)
conn, err := rb.connect(ctx)
assert.NotNil(conn)
assert.Nil(err)
conn, err := rb.redisPool.GetContext(ctx)
require.NoError(t, err)
require.NotNil(t, conn)
rply, err := redis.String(conn.Do("PING"))
assert.Nil(err)
assert.Equal("PONG", rply)
require.Nil(t, err)
require.Equal(t, "PONG", rply)
}
func createRedis(t *testing.T, withSentinel bool, withPassword string) (config.View, func()) {
@ -295,6 +908,7 @@ func createRedis(t *testing.T, withSentinel bool, withPassword string) (config.V
cfg.Set("backoff.maxInterval", 300*time.Millisecond)
cfg.Set("backoff.maxElapsedTime", 100*time.Millisecond)
cfg.Set(telemetry.ConfigNameEnableMetrics, true)
cfg.Set("assignedDeleteTimeout", 1000*time.Millisecond)
if withSentinel {
s := minisentinel.NewSentinel(mredis)
@ -336,3 +950,22 @@ func createRedis(t *testing.T, withSentinel bool, withPassword string) (config.V
}
}
}
//nolint: unparam
// generateTickets creates the requested number of tickets and returns a slice of tickets and a slice of ticket IDs
func generateTickets(ctx context.Context, t *testing.T, service Service, amount int) ([]*pb.Ticket, []string) {
tickets := make([]*pb.Ticket, 0, amount)
ids := make([]string, 0, amount)
for i := 0; i < amount; i++ {
tmp := &pb.Ticket{
Id: fmt.Sprintf("mockTicketID-%d", i),
Assignment: &pb.Assignment{Connection: "2"},
}
require.NoError(t, service.CreateTicket(ctx, tmp))
require.NoError(t, service.IndexTicket(ctx, tmp))
tickets = append(tickets, tmp)
ids = append(ids, tmp.GetId())
}
return tickets, ids
}


@ -25,7 +25,7 @@ const (
PoolIdleTimeout = 10 * time.Second
// PoolHealthCheckTimeout is the read/write timeout of a healthcheck HTTP request
PoolHealthCheckTimeout = 100 * time.Millisecond
// pendingReleaseTimeout is the time to live duration of Open Match ignore list settings
// pendingReleaseTimeout is the time to live duration of Open Match pending release settings
pendingReleaseTimeout = 500 * time.Millisecond
// InitialInterval is the initial backoff time of a backoff strategy
InitialInterval = 30 * time.Millisecond


@ -18,14 +18,13 @@ import (
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"open-match.dev/open-match/internal/statestore"
utilTesting "open-match.dev/open-match/internal/util/testing"
"open-match.dev/open-match/pkg/pb"
)
func TestFakeStatestore(t *testing.T) {
assert := assert.New(t)
cfg := viper.New()
closer := New(t, cfg)
defer closer()
@ -35,8 +34,8 @@ func TestFakeStatestore(t *testing.T) {
ticket := &pb.Ticket{
Id: "abc",
}
assert.Nil(s.CreateTicket(ctx, ticket))
require.Nil(t, s.CreateTicket(ctx, ticket))
retrievedTicket, err := s.GetTicket(ctx, "abc")
assert.Nil(err)
assert.Equal(ticket.Id, retrievedTicket.Id)
require.Nil(t, err)
require.Equal(t, ticket.Id, retrievedTicket.Id)
}


@ -20,11 +20,11 @@ import (
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestConfigz(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
cfg := viper.New()
cfg.Set("char-val", "b")
cfg.Set("int-val", 1)
@ -33,8 +33,8 @@ func TestConfigz(t *testing.T) {
czFunc := func(w http.ResponseWriter, r *http.Request) {
cz.ServeHTTP(w, r)
}
assert.HTTPSuccess(czFunc, http.MethodGet, "/", url.Values{}, "")
assert.HTTPBodyContains(czFunc, http.MethodGet, "/", url.Values{}, `<!DOCTYPE html>
require.HTTPSuccess(czFunc, http.MethodGet, "/", url.Values{}, "")
require.HTTPBodyContains(czFunc, http.MethodGet, "/", url.Values{}, `<!DOCTYPE html>
<head>
<title>Open Match Configuration</title>
</head>


@ -19,12 +19,12 @@ import (
"net/url"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestHelp(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
h := newHelp()
assert.HTTPSuccess(h, http.MethodGet, "/", url.Values{}, "")
assert.HTTPBodyContains(h, http.MethodGet, "/", url.Values{}, `Open Match Server Help`)
require.HTTPSuccess(h, http.MethodGet, "/", url.Values{}, "")
require.HTTPBodyContains(h, http.MethodGet, "/", url.Values{}, `Open Match Server Help`)
}


@ -22,7 +22,7 @@ import (
"sync/atomic"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func angryHealthCheck(context.Context) error {
@ -56,22 +56,22 @@ func TestHealthCheck(t *testing.T) {
}
func assertHealthCheck(t *testing.T, hc http.Handler, errorString string) {
assert := assert.New(t)
require := require.New(t)
sp, ok := hc.(*statefulProbe)
assert.True(ok)
assert.Equal(healthStateFirstProbe, atomic.LoadInt32(sp.healthState))
require.True(ok)
require.Equal(healthStateFirstProbe, atomic.LoadInt32(sp.healthState))
hcFunc := func(w http.ResponseWriter, r *http.Request) {
hc.ServeHTTP(w, r)
}
assert.HTTPSuccess(hcFunc, http.MethodGet, "/", url.Values{}, "ok")
require.HTTPSuccess(hcFunc, http.MethodGet, "/", url.Values{}, "ok")
// A readiness probe has not happened yet so it's still in "first state"
assert.Equal(healthStateFirstProbe, atomic.LoadInt32(sp.healthState))
require.Equal(healthStateFirstProbe, atomic.LoadInt32(sp.healthState))
if errorString == "" {
assert.HTTPSuccess(hcFunc, http.MethodGet, "/", url.Values{"readiness": []string{"true"}}, "ok")
assert.Equal(healthStateHealthy, atomic.LoadInt32(sp.healthState))
require.HTTPSuccess(hcFunc, http.MethodGet, "/", url.Values{"readiness": []string{"true"}}, "ok")
require.Equal(healthStateHealthy, atomic.LoadInt32(sp.healthState))
} else {
assert.HTTPError(hcFunc, http.MethodGet, "/", url.Values{"readiness": []string{"true"}}, errorString)
assert.Equal(healthStateUnhealthy, atomic.LoadInt32(sp.healthState))
require.HTTPError(hcFunc, http.MethodGet, "/", url.Values{"readiness": []string{"true"}}, errorString)
require.Equal(healthStateUnhealthy, atomic.LoadInt32(sp.healthState))
}
}


@ -188,39 +188,39 @@ backoff:
api:
backend:
hostname: "om-backend"
hostname: "open-match-backend"
grpcport: "50505"
httpport: "51505"
frontend:
hostname: "om-frontend"
hostname: "open-match-frontend"
grpcport: "50504"
httpport: "51504"
query:
hostname: "om-query"
hostname: "open-match-query"
grpcport: "50503"
httpport: "51503"
synchronizer:
hostname: "om-synchronizer"
hostname: "open-match-synchronizer"
grpcport: "50506"
httpport: "51506"
swaggerui:
hostname: "om-swaggerui"
hostname: "open-match-swaggerui"
httpport: "51500"
scale:
httpport: "51509"
evaluator:
hostname: "test"
hostname: "open-match-test"
grpcport: "50509"
httpport: "51509"
test:
hostname: "test"
hostname: "open-match-test"
grpcport: "50509"
httpport: "51509"
redis:
sentinelPort: 26379
sentinelMaster: om-redis-master
sentinelHostname: om-redis
sentinelHostname: open-match-redis
sentinelUsePassword:
usePassword: false
passwordPath: /opt/bitnami/redis/secrets/redis-password
@ -237,8 +237,8 @@ telemetry:
enable: "true"
jaeger:
enable: "false"
agentEndpoint: "open-match-jaeger-agent:6831"
collectorEndpoint: "http://open-match-jaeger-collector:14268/api/traces"
agentEndpoint: ""
collectorEndpoint: ""
prometheus:
enable: "false"
endpoint: "/metrics"


@ -103,7 +103,7 @@ func TestAssignTicketsInvalidArgument(t *testing.T) {
},
},
},
"Ticket id " + ctResp.Id + " is assigned multiple times in one assign tickets call.",
"Ticket id " + ctResp.Id + " is assigned multiple times in one assign tickets call",
},
{
"ticket used twice two groups",
@ -119,13 +119,13 @@ func TestAssignTicketsInvalidArgument(t *testing.T) {
},
},
},
"Ticket id " + ctResp.Id + " is assigned multiple times in one assign tickets call.",
"Ticket id " + ctResp.Id + " is assigned multiple times in one assign tickets call",
},
} {
tt := tt
t.Run(tt.name, func(t *testing.T) {
_, err := om.Backend().AssignTickets(ctx, tt.req)
require.Equal(t, codes.InvalidArgument, status.Convert(err).Code())
require.Equal(t, codes.InvalidArgument.String(), status.Convert(err).Code().String())
require.Equal(t, tt.msg, status.Convert(err).Message())
})
}
@ -183,7 +183,7 @@ func TestTicketDelete(t *testing.T) {
resp, err := om.Frontend().GetTicket(ctx, &pb.GetTicketRequest{TicketId: t1.Id})
require.Nil(t, resp)
require.Equal(t, "Ticket id:"+t1.Id+" not found", status.Convert(err).Message())
require.Equal(t, "Ticket id: "+t1.Id+" not found", status.Convert(err).Message())
require.Equal(t, codes.NotFound, status.Convert(err).Code())
}
@ -524,7 +524,7 @@ func TestCreateTicketErrors(t *testing.T) {
resp, err := om.Frontend().CreateTicket(ctx, tt.req)
require.Nil(t, resp)
s := status.Convert(err)
require.Equal(t, codes.InvalidArgument, s.Code())
require.Equal(t, codes.InvalidArgument.String(), s.Code().String())
require.Equal(t, s.Message(), tt.msg)
})
}


@ -1,51 +0,0 @@
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package testing provides testing primitives for the codebase.
package testing
import (
"fmt"
"open-match.dev/open-match/pkg/pb"
)
// Property defines the required fields that we need to generate tickets for testing.
type Property struct {
Name string
Min float64
Max float64
Interval float64
}
// GenerateFloatRangeTickets takes in two property manifests to generate tickets with two fake properties for testing.
func GenerateFloatRangeTickets(manifest1, manifest2 Property) []*pb.Ticket {
testTickets := make([]*pb.Ticket, 0)
for i := manifest1.Min; i < manifest1.Max; i += manifest1.Interval {
for j := manifest2.Min; j < manifest2.Max; j += manifest2.Interval {
testTickets = append(testTickets, &pb.Ticket{
Id: fmt.Sprintf("%s%f-%s%f", manifest1.Name, i, manifest2.Name, j),
SearchFields: &pb.SearchFields{
DoubleArgs: map[string]float64{
manifest1.Name: i,
manifest2.Name: j,
},
},
})
}
}
return testTickets
}


@ -627,15 +627,16 @@ const _ = grpc.SupportPackageIsVersion4
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type BackendServiceClient interface {
// FetchMatches triggers a MatchFunction with the specified MatchProfile and returns a set of match proposals that
// match the description of that MatchProfile.
// FetchMatches immediately returns an error if it encounters any execution failures.
// FetchMatches triggers a MatchFunction with the specified MatchProfile and
// returns a set of matches generated by the Match Making Function, and
// accepted by the evaluator.
// Tickets in matches returned by FetchMatches are moved from active to
// pending, and will not be returned by query.
FetchMatches(ctx context.Context, in *FetchMatchesRequest, opts ...grpc.CallOption) (BackendService_FetchMatchesClient, error)
// AssignTickets overwrites the Assignment field of the input TicketIds.
AssignTickets(ctx context.Context, in *AssignTicketsRequest, opts ...grpc.CallOption) (*AssignTicketsResponse, error)
// ReleaseTickets removes the submitted tickets from the list that prevents tickets
// that are awaiting assignment from appearing in MMF queries, effectively putting them back into
// the matchmaking pool
// ReleaseTickets moves tickets from the pending state, to the active state.
// This enables them to be returned by query, and find different matches.
//
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.
@ -718,15 +719,16 @@ func (c *backendServiceClient) ReleaseAllTickets(ctx context.Context, in *Releas
// BackendServiceServer is the server API for BackendService service.
type BackendServiceServer interface {
// FetchMatches triggers a MatchFunction with the specified MatchProfile and returns a set of match proposals that
// match the description of that MatchProfile.
// FetchMatches immediately returns an error if it encounters any execution failures.
// FetchMatches triggers a MatchFunction with the specified MatchProfile and
// returns a set of matches generated by the Match Making Function, and
// accepted by the evaluator.
// Tickets in matches returned by FetchMatches are moved from active to
// pending, and will not be returned by query.
FetchMatches(*FetchMatchesRequest, BackendService_FetchMatchesServer) error
// AssignTickets overwrites the Assignment field of the input TicketIds.
AssignTickets(context.Context, *AssignTicketsRequest) (*AssignTicketsResponse, error)
// ReleaseTickets removes the submitted tickets from the list that prevents tickets
// that are awaiting assignment from appearing in MMF queries, effectively putting them back into
// the matchmaking pool
// ReleaseTickets moves tickets from the pending state, to the active state.
// This enables them to be returned by query, and find different matches.
//
// BETA FEATURE WARNING: This call and the associated Request and Response
// messages are not finalized and still subject to possible change or removal.


@ -300,9 +300,7 @@ type FrontendServiceClient interface {
// - If SearchFields exist in a Ticket, CreateTicket will also index these fields such that one can query the ticket with query.QueryTickets function.
CreateTicket(ctx context.Context, in *CreateTicketRequest, opts ...grpc.CallOption) (*Ticket, error)
// DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.
// The client must delete the Ticket when finished matchmaking with it.
// - If SearchFields exist in a Ticket, DeleteTicket will deindex the fields lazily.
// Users may still be able to assign/get a ticket after calling DeleteTicket on it.
// The client should delete the Ticket when finished matchmaking with it.
DeleteTicket(ctx context.Context, in *DeleteTicketRequest, opts ...grpc.CallOption) (*empty.Empty, error)
// GetTicket get the Ticket associated with the specified TicketId.
GetTicket(ctx context.Context, in *GetTicketRequest, opts ...grpc.CallOption) (*Ticket, error)
@ -386,9 +384,7 @@ type FrontendServiceServer interface {
// - If SearchFields exist in a Ticket, CreateTicket will also index these fields such that one can query the ticket with query.QueryTickets function.
CreateTicket(context.Context, *CreateTicketRequest) (*Ticket, error)
// DeleteTicket immediately stops Open Match from using the Ticket for matchmaking and removes the Ticket from state storage.
// The client must delete the Ticket when finished matchmaking with it.
// - If SearchFields exist in a Ticket, DeleteTicket will deindex the fields lazily.
// Users may still be able to assign/get a ticket after calling DeleteTicket on it.
// The client should delete the Ticket when finished matchmaking with it.
DeleteTicket(context.Context, *DeleteTicketRequest) (*empty.Empty, error)
// GetTicket get the Ticket associated with the specified TicketId.
GetTicket(context.Context, *GetTicketRequest) (*Ticket, error)


@ -162,7 +162,7 @@ const _ = grpc.SupportPackageIsVersion4
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type MatchFunctionClient interface {
// DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.
// Run pulls Tickets that satisify Profile constraints from QueryService, runs matchmaking logics against them, then
// Run pulls Tickets that satisfy Profile constraints from QueryService, runs matchmaking logics against them, then
// constructs and streams back match candidates to the Backend service.
Run(ctx context.Context, in *RunRequest, opts ...grpc.CallOption) (MatchFunction_RunClient, error)
}
@ -210,7 +210,7 @@ func (x *matchFunctionRunClient) Recv() (*RunResponse, error) {
// MatchFunctionServer is the server API for MatchFunction service.
type MatchFunctionServer interface {
// DO NOT CALL THIS FUNCTION MANUALLY. USE backend.FetchMatches INSTEAD.
// Run pulls Tickets that satisify Profile constraints from QueryService, runs matchmaking logics against them, then
// Run pulls Tickets that satisfy Profile constraints from QueryService, runs matchmaking logics against them, then
// constructs and streams back match candidates to the Backend service.
Run(*RunRequest, MatchFunction_RunServer) error
}


@ -23,15 +23,17 @@ var _ = math.Inf
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
// A Ticket is a basic matchmaking entity in Open Match. A Ticket represents either an
// individual 'Player' or a 'Group' of players. Open Match will not interpret
// what the Ticket represents but just treat it as a matchmaking unit with a set
// of SearchFields. Open Match stores the Ticket in state storage and enables an
// Assignment to be associated with this Ticket.
// A Ticket is a basic matchmaking entity in Open Match. A Ticket may represent
// an individual 'Player', a 'Group' of players, or any other concepts unique to
// your use case. Open Match will not interpret what the Ticket represents but
// just treat it as a matchmaking unit with a set of SearchFields. Open Match
// stores the Ticket in state storage and enables an Assignment to be set on the
// Ticket.
type Ticket struct {
// Id represents an auto-generated Id issued by Open Match.
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
// An Assignment represents a game server assignment associated with a Ticket.
// An Assignment represents a game server assignment associated with a Ticket,
// or whatever finalized matched state means for your use case.
// Open Match does not require or inspect any fields on Assignment.
Assignment *Assignment `protobuf:"bytes,3,opt,name=assignment,proto3" json:"assignment,omitempty"`
// Search fields are the fields which Open Match is aware of, and can be used
@ -41,8 +43,8 @@ type Ticket struct {
// making function, evaluator, and components making calls to Open Match.
// Optional, depending on the requirements of the connected systems.
Extensions map[string]*any.Any `protobuf:"bytes,5,rep,name=extensions,proto3" json:"extensions,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
// Create time represents the time at which this Ticket was created. It is
// populated by Open Match at the time of Ticket creation.
// Create time is the time the Ticket was created. It is populated by Open
// Match at the time of Ticket creation.
CreateTime *timestamp.Timestamp `protobuf:"bytes,6,opt,name=create_time,json=createTime,proto3" json:"create_time,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
@ -169,8 +171,8 @@ func (m *SearchFields) GetTags() []string {
return nil
}
// An Assignment represents a game server assignment associated with a Ticket. Open
// match does not require or inspect any fields on assignment.
// An Assignment represents a game server assignment associated with a Ticket.
// Open Match does not require or inspect any fields on assignment.
type Assignment struct {
// Connection information for this Assignment.
Connection string `protobuf:"bytes,1,opt,name=connection,proto3" json:"connection,omitempty"`


@ -250,13 +250,13 @@ const _ = grpc.SupportPackageIsVersion4
type QueryServiceClient interface {
// QueryTickets gets a list of Tickets that match all Filters of the input Pool.
// - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.
// QueryTickets pages the Tickets by `storage.pool.size` and stream back responses.
// - storage.pool.size is default to 1000 if not set, and has a mininum of 10 and maximum of 10000.
// QueryTickets pages the Tickets by `queryPageSize` and stream back responses.
// - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.
QueryTickets(ctx context.Context, in *QueryTicketsRequest, opts ...grpc.CallOption) (QueryService_QueryTicketsClient, error)
// QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.
// - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.
// QueryTicketIds pages the TicketIDs by `storage.pool.size` and stream back responses.
// - storage.pool.size is default to 1000 if not set, and has a mininum of 10 and maximum of 10000.
// QueryTicketIds pages the TicketIDs by `queryPageSize` and stream back responses.
// - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.
QueryTicketIds(ctx context.Context, in *QueryTicketIdsRequest, opts ...grpc.CallOption) (QueryService_QueryTicketIdsClient, error)
}
@ -336,13 +336,13 @@ func (x *queryServiceQueryTicketIdsClient) Recv() (*QueryTicketIdsResponse, erro
type QueryServiceServer interface {
// QueryTickets gets a list of Tickets that match all Filters of the input Pool.
// - If the Pool contains no Filters, QueryTickets will return all Tickets in the state storage.
// QueryTickets pages the Tickets by `storage.pool.size` and stream back responses.
// - storage.pool.size is default to 1000 if not set, and has a mininum of 10 and maximum of 10000.
// QueryTickets pages the Tickets by `queryPageSize` and stream back responses.
// - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.
QueryTickets(*QueryTicketsRequest, QueryService_QueryTicketsServer) error
// QueryTicketIds gets the list of TicketIDs that meet all the filtering criteria requested by the pool.
// - If the Pool contains no Filters, QueryTicketIds will return all TicketIDs in the state storage.
// QueryTicketIds pages the TicketIDs by `storage.pool.size` and stream back responses.
// - storage.pool.size is default to 1000 if not set, and has a mininum of 10 and maximum of 10000.
// QueryTicketIds pages the TicketIDs by `queryPageSize` and stream back responses.
// - queryPageSize is default to 1000 if not set, and has a minimum of 10 and maximum of 10000.
QueryTicketIds(*QueryTicketIdsRequest, QueryService_QueryTicketIdsServer) error
}


@ -1,10 +1,10 @@
{
"urls": [
{"name": "Frontend", "url": "https://open-match.dev/api/v0.0.0-dev/frontend.swagger.json"},
{"name": "Backend", "url": "https://open-match.dev/api/v0.0.0-dev/backend.swagger.json"},
{"name": "Query", "url": "https://open-match.dev/api/v0.0.0-dev/query.swagger.json"},
{"name": "MatchFunction", "url": "https://open-match.dev/api/v0.0.0-dev/matchfunction.swagger.json"},
{"name": "Synchronizer", "url": "https://open-match.dev/api/v0.0.0-dev/synchronizer.swagger.json"},
{"name": "Evaluator", "url": "https://open-match.dev/api/v0.0.0-dev/evaluator.swagger.json"}
{"name": "Frontend", "url": "https://open-match.dev/api/v1.1.0/frontend.swagger.json"},
{"name": "Backend", "url": "https://open-match.dev/api/v1.1.0/backend.swagger.json"},
{"name": "Query", "url": "https://open-match.dev/api/v1.1.0/query.swagger.json"},
{"name": "MatchFunction", "url": "https://open-match.dev/api/v1.1.0/matchfunction.swagger.json"},
{"name": "Synchronizer", "url": "https://open-match.dev/api/v1.1.0/synchronizer.swagger.json"},
{"name": "Evaluator", "url": "https://open-match.dev/api/v1.1.0/evaluator.swagger.json"}
]
}


@ -30,16 +30,16 @@ var (
// serviceAddresses is a list of all the HTTP hostname:port combinations for generating a TLS certificate.
// It appears that gRPC does not care about validating the port number so we only add the HTTP addresses here.
serviceAddressList = []string{
"om-backend:51505",
"open-match-backend:51505",
"om-demo:51507",
"om-demoevaluator:51508",
"om-demofunction:51502",
"om-e2eevaluator:51518",
"om-e2ematchfunction:51512",
"om-frontend:51504",
"om-query:51503",
"om-swaggerui:51500",
"om-synchronizer:51506",
"open-match-frontend:51504",
"open-match-query:51503",
"open-match-swaggerui:51500",
"open-match-synchronizer:51506",
}
caFlag = flag.Bool("ca", false, "Create a root certificate. Use if you want a chain of trust with other certificates.")
rootPublicCertificateFlag = flag.String("rootpubliccertificate", "", "(optional) Path to root certificate file. If set the output certificate is rooted from this certificate.")


@ -25,7 +25,7 @@ import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const (
@ -33,13 +33,13 @@ const (
)
func TestCreateCertificate(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
tmpDir, err := ioutil.TempDir("", "certtest")
defer func() {
assert.Nil(os.RemoveAll(tmpDir))
require.Nil(os.RemoveAll(tmpDir))
}()
assert.Nil(err)
require.Nil(err)
publicCertPath := filepath.Join(tmpDir, "public.cert")
privateKeyPath := filepath.Join(tmpDir, "private.key")
err = CreateCertificateAndPrivateKeyFiles(
@ -50,31 +50,31 @@ func TestCreateCertificate(t *testing.T) {
Hostnames: []string{"a.com", "b.com", "127.0.0.1"},
RSAKeyLength: 2048,
})
assert.Nil(err)
require.Nil(err)
assert.FileExists(publicCertPath)
assert.FileExists(privateKeyPath)
require.FileExists(publicCertPath)
require.FileExists(privateKeyPath)
publicCertFileData, err := ioutil.ReadFile(publicCertPath)
assert.Nil(err)
require.Nil(err)
privateKeyFileData, err := ioutil.ReadFile(privateKeyPath)
assert.Nil(err)
require.Nil(err)
// Verify that we can load the public/private key pair.
pub, pk, err := ReadKeyPair(publicCertFileData, privateKeyFileData)
assert.Nil(err)
assert.NotNil(pub)
assert.NotNil(pk)
require.Nil(err)
require.NotNil(pub)
require.NotNil(pk)
// Verify that the public/private key pair can RSA encrypt/decrypt.
pubKey, ok := pub.PublicKey.(*rsa.PublicKey)
assert.True(ok, "pub.PublicKey is not of type, *rsa.PublicKey, %v", pub.PublicKey)
require.True(ok, "pub.PublicKey is not of type, *rsa.PublicKey, %v", pub.PublicKey)
ciphertext, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, pubKey, []byte(secretMessage), []byte{})
assert.Nil(err)
assert.NotEqual(string(ciphertext), secretMessage)
require.Nil(err)
require.NotEqual(string(ciphertext), secretMessage)
cleartext, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, pk, ciphertext, []byte{})
assert.Nil(err)
assert.Equal(string(cleartext), string(secretMessage))
require.Nil(err)
require.Equal(string(cleartext), string(secretMessage))
}
func TestBadValues(t *testing.T) {
@ -132,9 +132,8 @@ func TestExpandHostnames(t *testing.T) {
for _, testCase := range testCases {
testCase := testCase
t.Run(fmt.Sprintf("expandHostnames(%s) => %s", testCase.input, testCase.expected), func(t *testing.T) {
assert := assert.New(t)
actual := expandHostnames(testCase.input)
assert.Equal(testCase.expected, actual)
require.Equal(t, testCase.expected, actual)
})
}
}


@ -17,7 +17,7 @@ package testing
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
certgenInternal "open-match.dev/open-match/tools/certgen/internal"
)
@ -26,44 +26,44 @@ const (
)
func TestCreateCertificateAndPrivateKeyForTestingAreValid(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
pubData, privData, err := CreateCertificateAndPrivateKeyForTesting([]string{fakeAddress})
assert.Nil(err)
require.Nil(err)
// Verify that we can load the public/private key pair.
pub, pk, err := certgenInternal.ReadKeyPair(pubData, privData)
assert.Nil(err)
assert.NotNil(pub)
assert.NotNil(pk)
require.Nil(err)
require.NotNil(pub)
require.NotNil(pk)
}
func TestCreateRootedCertificateAndPrivateKeyForTestingAreValid(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
rootPubData, rootPrivData, err := CreateRootCertificateAndPrivateKeyForTesting([]string{fakeAddress})
assert.Nil(err)
require.Nil(err)
pubData, privData, err := CreateDerivedCertificateAndPrivateKeyForTesting(rootPubData, rootPrivData, []string{fakeAddress})
assert.Nil(err)
require.Nil(err)
rootPub, rootPk, err := certgenInternal.ReadKeyPair(rootPubData, rootPrivData)
assert.Nil(err)
assert.NotNil(rootPk)
require.Nil(err)
require.NotNil(rootPk)
pub, pk, err := certgenInternal.ReadKeyPair(pubData, privData)
assert.Nil(err)
assert.NotNil(pk)
require.Nil(err)
require.NotNil(pk)
assert.Nil(pub.CheckSignatureFrom(rootPub))
require.Nil(pub.CheckSignatureFrom(rootPub))
}
func TestCreateCertificateAndPrivateKeyForTestingAreDifferent(t *testing.T) {
assert := assert.New(t)
require := require.New(t)
pubData1, privData1, err := CreateCertificateAndPrivateKeyForTesting([]string{fakeAddress})
assert.Nil(err)
require.Nil(err)
pubData2, privData2, err := CreateCertificateAndPrivateKeyForTesting([]string{fakeAddress})
assert.Nil(err)
require.Nil(err)
assert.NotEqual(string(pubData1), string(pubData2))
assert.NotEqual(string(privData1), string(privData2))
require.NotEqual(string(pubData1), string(pubData2))
require.NotEqual(string(privData1), string(privData2))
}


@ -32,7 +32,7 @@ import (
const (
// The endpoint for the Open Match Backend service.
omBackendEndpoint = "om-backend.open-match.svc.cluster.local:50505"
omBackendEndpoint = "open-match-backend.open-match.svc.cluster.local:50505"
// The Host and Port for the Match Function service endpoint.
functionHostName = "custom-eval-tutorial-matchfunction.custom-eval-tutorial.svc.cluster.local"
functionPort int32 = 50502


@ -27,7 +27,7 @@ import (
const (
// The endpoint for the Open Match Frontend service.
omFrontendEndpoint = "om-frontend.open-match.svc.cluster.local:50504"
omFrontendEndpoint = "open-match-frontend.open-match.svc.cluster.local:50504"
// Number of tickets created per iteration
ticketsPerIter = 20
)


@ -28,8 +28,8 @@ import (
// with which the Match Function communicates to query the Tickets.
const (
queryServiceAddress = "om-query.open-match.svc.cluster.local:50503" // Address of the QueryService endpoint.
serverPort = 50502 // The port for hosting the Match Function.
queryServiceAddress = "open-match-query.open-match.svc.cluster.local:50503" // Address of the QueryService endpoint.
serverPort = 50502 // The port for hosting the Match Function.
)
func main() {


@ -32,7 +32,7 @@ import (
const (
// The endpoint for the Open Match Backend service.
omBackendEndpoint = "om-backend.open-match.svc.cluster.local:50505"
omBackendEndpoint = "open-match-backend.open-match.svc.cluster.local:50505"
// The Host and Port for the Match Function service endpoint.
functionHostName = "custom-eval-tutorial-matchfunction.custom-eval-tutorial.svc.cluster.local"
functionPort int32 = 50502


@ -27,7 +27,7 @@ import (
const (
// The endpoint for the Open Match Frontend service.
omFrontendEndpoint = "om-frontend.open-match.svc.cluster.local:50504"
omFrontendEndpoint = "open-match-frontend.open-match.svc.cluster.local:50504"
// Number of tickets created per iteration
ticketsPerIter = 20
)


@ -28,8 +28,8 @@ import (
// with which the Match Function communicates to query the Tickets.
const (
queryServiceAddress = "om-query.open-match.svc.cluster.local:50503" // Address of the QueryService endpoint.
serverPort = 50502 // The port for hosting the Match Function.
queryServiceAddress = "open-match-query.open-match.svc.cluster.local:50503" // Address of the QueryService endpoint.
serverPort = 50502 // The port for hosting the Match Function.
)
func main() {


@ -32,7 +32,7 @@ import (
const (
// The endpoint for the Open Match Backend service.
omBackendEndpoint = "om-backend.open-match.svc.cluster.local:50505"
omBackendEndpoint = "open-match-backend.open-match.svc.cluster.local:50505"
// The Host and Port for the Match Function service endpoint.
functionHostName = "default-eval-tutorial-matchfunction.default-eval-tutorial.svc.cluster.local"
functionPort int32 = 50502


@ -27,7 +27,7 @@ import (
const (
// The endpoint for the Open Match Frontend service.
omFrontendEndpoint = "om-frontend.open-match.svc.cluster.local:50504"
omFrontendEndpoint = "open-match-frontend.open-match.svc.cluster.local:50504"
// Number of tickets created per iteration
ticketsPerIter = 20
)
